Software companies operating in the EU face five major regulatory changes between 2026 and 2027: the AI Act, the Cyber Resilience Act, the Data Act, the Product Liability Directive, and ViDA.
Each introduces product-level requirements that affect how software is designed, built, and maintained.
For software companies, compliance moves from legal to engineering: it must be built into the product, not added later.
The upside is real. When compliance becomes a design discipline rather than a last-minute checklist, products get smarter, safer, and easier to plug into complex ecosystems.
The downside is just as real. Tracking what changes, when, and what they mean for your codebase is a full-time job. And the deadlines won't move because you're busy.
This is your cheat sheet to the key 2026 and 2027 compliance milestones.
For each regulation, we cover:
| Regulation | Deadline | Who it affects | Key requirement |
| --- | --- | --- | --- |
| AI Act | Aug 2026 | AI systems | Transparency, risk classification |
| CRA | 2026–2027 | Software products | 24h incident reporting |
| Data Act | Sept 2025–2027 | Data holders | Data portability |
| PLD | 2026 | Software providers | Expanded liability |
| ViDA | 2026–2030 | Fintech/SaaS | Real-time VAT reporting |
What: EU AI Act (Regulation (EU) 2024/1689)
When: In force: August 1, 2024 | Main compliance deadline: August 2026
Who: Software manufacturers embedding AI into products and services sold or used in the EU
The European Union’s comprehensive legal AI framework entered into force on August 1st, 2024, but 2026 is the real operational milestone for (EU) 2024/1689. Most provisions take effect then, including those for high-risk systems. Transparency obligations also begin, alongside national AI regulatory sandboxes.
In essence, the Act aims to do to AI use what the GDPR did to data protection – make its use visible, controlled, and accountable. In practice, users must be informed when interacting with AI (e.g., chatbots or deepfakes).
Automated decision-making must be labeled, while high-risk healthcare or education systems require human oversight, auditability, and fairness checks. Some uses, such as social scoring or certain forms of biometric surveillance, are banned outright.
Product-level impact
For software companies embedding artificial intelligence into their products, that’s a major inflection point. For example, a recruitment SaaS platform using AI to screen CVs must now log every scoring decision, provide explainability for rejected candidates, and allow human recruiters to override automated rankings.
In response, most companies are already updating their systems to meet these requirements. But the window between clear guidelines and compliance deadlines is narrowing.
Delays, time squeeze, and evolving deadlines require agile expertise to translate legal requirements into product decisions, implement traceability and monitoring at scale, and retrofit existing systems without breaking core functionality or delivery timelines.
Financial stakes
The financial stakes sharpen the urgency, with penalties for non-compliance potentially exceeding GDPR levels: up to €35 million or 7% of global annual turnover for violations of the banned AI applications, €15 million or 3% for violations of the AI Act's obligations, and €7.5 million or 1.5% for the supply of incorrect information (Council of the EU).
Team augmentation is among the fastest levers available when deadlines are close. It allows companies to plug compliance-aware engineers, ML specialists, or QA experts directly into existing teams without slowing delivery or overloading core teams.
For multinational organizations, European-based Global Capability Centers (GCCs) can be a longer-term extension, providing embedded capability to build, maintain, and evolve compliant systems over time.
The EU AI Act (2026) impacts software development by introducing risk-based rules for AI systems. From 2026, high-risk AI used in areas like healthcare, hiring, or education must include human oversight, transparency, documentation, and ongoing monitoring. Companies must also clearly inform users when they interact with AI systems such as chatbots or automated decision tools. Some uses, like social scoring and certain biometric surveillance, are prohibited. For software teams, this means compliance must be built directly into product design rather than added later.
What: Cyber Resilience Act (Regulation (EU) 2024/2847)
When: Entered into force: December 2024 | Key obligations phase-in: 2026–2027 | Full compliance deadline: End of 2027
Who: Manufacturers of software and connected digital products sold in the EU
Another crucial regulation that has already entered into force, but whose obligations will tighten over the course of 2026–27, is the Cyber Resilience Act (Regulation (EU) 2024/2847). Its goal is to ensure that digital products for enterprises and consumers are resilient against cyber threats.
Even though 2024 marked the start of official preparations, the Cyber Resilience Act's reporting obligations will now require manufacturers to report any actively exploited vulnerabilities and severe cybersecurity incidents to national cybersecurity reporting bodies, as well as to the EU cybersecurity agency ENISA, within a strict 24-hour window that serves as an early warning signal.
Product lifecycle impact
While some provisions of the act apply only to new software products, reporting will also be mandatory for manufacturers of existing software, and the end of 2027 marks the line beyond which all digital products will have to comply with the CRA requirements across the board.
This means that all new digital products will need to be secure by design and by default, and meticulously monitored and reported on. So when, for instance, a cloud CRM provider discovers a critical authentication bypass that is being actively exploited in production systems, it must notify ENISA within 24 hours. Furthermore, the responsibilities won't cease once the product is released, but carry over into lifetime security obligations for SaaS and embedded systems.
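To make the 24-hour arithmetic concrete, here is a hypothetical helper that assembles an early-warning record and its reporting deadline. The field names are invented for illustration; they are not an official ENISA or national-CSIRT schema.

```python
from datetime import datetime, timedelta

# CRA early-warning window: 24 hours from becoming aware of the issue.
REPORTING_WINDOW = timedelta(hours=24)

def build_early_warning(vuln_id: str, product: str,
                        discovered_at: datetime,
                        actively_exploited: bool) -> dict:
    """Assemble a minimal early-warning record with its reporting deadline.

    Illustrative only: real submissions go through the channels and
    formats defined by the national authority and ENISA.
    """
    return {
        "vulnerability_id": vuln_id,
        "product": product,
        "discovered_at": discovered_at.isoformat(),
        "actively_exploited": actively_exploited,
        "report_due_by": (discovered_at + REPORTING_WINDOW).isoformat(),
        "recipients": ["national CSIRT", "ENISA"],
    }
```

In practice, a helper like this would sit at the end of an internal triage pipeline, so that the clock-facing fields are computed automatically the moment an incident is confirmed rather than reconstructed under deadline pressure.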
Industry constraint
For software companies, the heightened requirements may necessitate strengthening internal cybersecurity capabilities, and that’s where many hit a wall. According to the World Economic Forum, 56% of organizations struggling with cybersecurity say skills shortages are a serious barrier to improving resilience. In that context, team strategies such as augmentation or selective outsourcing become less about cost and more about speed of access to scarce, regulation-ready expertise.
One important caveat: external support doesn’t immediately transfer accountability. This and similar regulations increase the pressure to assess providers not just for their technical capabilities but also for their alignment with evolving EU regulations, their implementation of secure development practices, and their compliance across the delivery lifecycle.
Under the Cyber Resilience Act, software and connected product manufacturers must report actively exploited vulnerabilities and serious security incidents within 24 hours of their discovery. Reports must go to national cybersecurity authorities and ENISA. The requirement applies throughout the product lifecycle, meaning companies must continuously monitor, document, and respond to security issues even after release. This makes security reporting a core operational obligation, not a one-time compliance task.
What: Data Act (Regulation (EU) 2023/2854)
When: Entered into force: January 2024 | Applies from: September 2025 | Connected products: September 2026 | Unfair terms: 2027
Who: Software firms handling user data in SaaS or analytics, data holders, users, and providers of connected products/services in the EU
With data, especially data collected via IoT devices, control used to sit firmly with the vendor. Even if users generated it, data remained in the provider's infrastructure, shaped by its logic, and accessed on its terms. Contracts rarely clarified ownership in depth, and portability was technically possible but operationally painful. As a result, switching providers was difficult, and data tended to stay where it was created.
From September 2025, the EU Data Act resets that balance.
Regulatory reset
Data Act (Regulation (EU) 2023/2854) sets rules for data access, sharing, and use, requiring that users can obtain their data in full – raw, structured, and ready to reuse – and share it with third parties. The emphasis is on interoperability and frictionless movement between systems, not controlled exposure within a single platform.
Take a SaaS analytics platform. Previously, users might rely on dashboards or partial exports, with full datasets locked behind APIs, limits, or commercial terms. Moving that data into another tool was possible, but rarely straightforward. Under the new rules, that same data must be accessible on demand and in a form that can be used elsewhere without rework.
Engineering implications
For software providers, the Data Act data portability requirements mark a double shift. It’s no longer enough to store and process data efficiently; it also has to be made accessible, traceable, and secure at a much finer level.
That means building robust data-sharing interfaces, extending GDPR-aligned controls to access and transfer flows, and supporting near-real-time availability where relevant. Beneath that, data pipelines and governance models often need to be reworked to ensure consistent, auditable, and reliable access. As users move beyond dashboards to raw data, the responsibility for its accuracy, security, and controlled exposure increases accordingly.
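A minimal sketch of what "raw, structured, and ready to reuse" could mean at the interface level, with a JSON schema of our own invention rather than anything mandated by the Data Act:

```python
import json

def export_user_data(records: list, user_id: str) -> str:
    """Return a user's raw records as a structured, machine-readable
    export suitable for transfer to a third-party service.

    Illustrative sketch: a production endpoint would add authentication,
    pagination, and an agreed interchange schema.
    """
    payload = {
        "user_id": user_id,
        "format": "application/json",
        # Full raw records, not a dashboard-style summary.
        "records": [r for r in records if r["user_id"] == user_id],
    }
    return json.dumps(payload, indent=2)
```

The design point is that the export is complete and self-describing: a receiving system can consume it without rework, which is the behaviour the regulation is pushing towards.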
Connected products deadline
By September 2026, for connected products, this moves firmly into the product layer. Data access cannot be retrofitted; it must be built in. For IoT providers that haven't started, this is the last call: delaying further means redesigning the core architecture under deadline pressure. For others, the work doesn't end at implementation.
Maintaining compliant data access and evolving pipelines now becomes an ongoing effort, often exposing gaps in DevOps, CI/CD, or data engineering capabilities. In that context, bringing in external support, whether through team augmentation for speed and integration or selective IT outsourcing for well-scoped components, can provide faster access to the required skills, as long as providers are vetted for adherence to secure development standards and their ability to operate within EU regulatory frameworks.
The EU Data Act requires software and SaaS providers to give users full access to their data in structured, usable formats. Users must be able to retrieve and transfer data easily to other services, including third-party platforms. For companies, this means redesigning data architecture to support interoperability, real-time access, and secure sharing. From 2026, connected products must also embed data access by design, making portability a built-in product requirement rather than a manual export feature.
It’s been several years since the General Data Protection Regulation came into force, yet its enforcement has remained uneven. For companies operating across multiple EU jurisdictions, differences in interpretation between national regulators, prolonged investigation timelines, and unclear procedural steps have created friction and uncertainty, rather than clarity.
To address this, the European Commission proposed a dedicated procedural framework in 2023 to streamline the enforcement of the GDPR in cross-border cases. Formally adopted in 2025, the GDPR Procedural Regulation (Regulation (EU) 2025/2518) took effect in January 2026. However, its practical impact hinges on April 2nd, 2027, when all new cross-border GDPR cases must follow the updated procedures.
Substantively, nothing changes. The obligations under GDPR remain exactly the same. The difference lies in how they are enforced. The regulation introduces clearer timelines, more structured cooperation between lead and concerned Data Protection Authorities, and more defined procedural rights for companies under investigation.
In effect, it removes much of the ambiguity that previously slowed enforcement down. For software companies, this doesn’t introduce new requirements, but it does reduce the room for delay, interpretation gaps, or procedural inconsistencies. Enforcement becomes more coordinated, more predictable, and harder to avoid.
What: Revised Product Liability Directive (Directive (EU) 2024/2853)
When: Signed and approved: October 2024 | Entered into force: December 2024 | Transposition by: December 9th, 2026
Who: Manufacturers, importers, distributors of defective products (including software/AI) on the EU market
By the end of 2024, the new Product Liability Directive 2024/2853 had replaced legislation that had stood largely unchanged for four decades. Its most consequential shift was redefining "product" to encompass digital goods such as software, AI-integrated platforms and systems, and in many cases, SaaS. 2026 is a milestone year, with each EU member state now mandated to transpose the directive into national law.
Liability expansion
The revised PLD law fundamentally changes the exposure of software manufacturers, importers, and distributors. Liability no longer stops at bugs or obvious security breaches, but extends to a much broader category of defects, including security vulnerabilities, inadequate safeguards around user identity or data, and unsafe AI behavior. Imagine an AI-powered medical triage tool that incorrectly deprioritizes urgent patient cases. It could trigger liability claims even if no traditional “software bug” exists, if the system behavior is deemed unsafe in context.
Legal ambiguity
Unlike GDPR, the 2024/2853 directive does not prescribe specific fines or revenue-based penalties. Instead, it is enforced through claims for damages. This is where things get much more elusive: there is no closed, precise definition of what constitutes a "defect."
Rather than fitness for use, defectiveness is assessed against "the lack of the safety that a person is entitled to expect, or that is required under Union or national law" (Eur-Lex Europa), with judicial evaluation grounded in "an objective analysis of the safety that the public at large is entitled to expect." For legal teams hoping for bright lines, this will offer limited comfort.
Burden of proof shift
The burden of proof has also tilted – and not in software companies' favor. If a claim is brought, a company may be required to explain in detail how its system works and demonstrate that complexity itself is not a liability shield. Failure to do so could result in liability being established. Critically, that exposure doesn't freeze at launch, but persists across updates, ongoing maintenance, security patches, and AI behavior as it evolves over time.
Now, the trap is not a single compliance failure; it's the accumulating technical debt that turns defensible software into indefensible software. Under this directive, every unreviewed update, undocumented security decision, or unmonitored AI behavior is a potential exhibit in a future claim. Companies that have scaled quickly, leaned on small core teams, or deprioritized documentation and security hygiene now face a legal landscape in which those shortcuts carry real downstream risk.
For businesses looking to close those gaps without the overhead of permanent hires, specialist team augmentation can be a practical starting point, often faster and more targeted than scaling internal teams.
Bringing in vetted engineers and security experts on a flexible basis helps teams review code and architecture, improve vulnerability handling, and put clearer documentation in place. Over time, this builds a track record that demonstrates due diligence. It also tends to surface issues that internal teams, close to the codebase for a long time, may have gradually stopped seeing.
Under the revised Product Liability Directive 2024/2853, software and AI systems can be considered defective if they fail to provide the level of safety a user is entitled to expect. This includes vulnerabilities, unsafe AI behavior, or inadequate security safeguards. Liability is no longer limited to physical defects or obvious bugs. Courts assess safety based on expectations set by law and usage context, and companies may be required to explain system design in detail if a claim is made. Liability also continues after release, including updates and maintenance.
What: ViDA (VAT in the Digital Age) Package (Including E-Invoicing Mandates)
When: Proposed: December 2022 | Adopted: March 11, 2025 | Entry into force: March 12, 2025 | National e-invoicing rollouts: from 2026 (country-specific) | Full intra-EU digital reporting: July 1, 2030
Who: VAT-registered businesses operating in the EU, including software platforms and fintechs handling invoicing, payments, or transaction data, with extraterritorial scope for cross-border transactions
Rounding things out, ViDA represents one of the most significant overhauls of VAT administration in the EU's history. Adopted in early 2025, it sets a firm deadline for full compliance with intra-EU digital reporting and e-invoicing by July 1, 2030, with staged national rollouts beginning in 2026.
Product-level shift
For fintechs and financial or accounting software providers, the burden goes well beyond updating an invoice template. ViDA effectively embeds VAT reporting within the transaction layer, transforming invoicing and tax compliance from a periodic back-office task into a continuous, real-time process. That modification has to be supported at the architecture level.
Some of the mandatory updates will involve adopting EN16931-compliant invoice formats, integrating with the Pan-European Public Procurement Online (PEPPOL) framework, and, in many cases, building or overhauling real-time reporting APIs that can communicate directly with tax authorities. For platforms operating across multiple EU jurisdictions, each country's local transposition of the directive adds another layer of variation to navigate: different timelines, different technical specifications, and different submission protocols. For example, a subscription billing SaaS serving EU customers must automatically generate structured e-invoices at the point of transaction and transmit VAT-relevant data to national tax systems without manual intervention.
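As a rough illustration of "structured e-invoice at the point of transaction": the element names below echo the UBL vocabulary commonly used with EN16931, but this is a deliberately simplified sketch, nowhere near a compliant invoice document.

```python
import xml.etree.ElementTree as ET

def build_invoice_xml(invoice_id: str, issue_date: str,
                      net: float, vat_rate: float) -> str:
    """Build a minimal structured e-invoice as XML.

    Simplified for illustration: a real EN16931/UBL invoice carries
    namespaces, party details, line items, and many mandatory fields.
    """
    inv = ET.Element("Invoice")
    ET.SubElement(inv, "ID").text = invoice_id
    ET.SubElement(inv, "IssueDate").text = issue_date
    # Amounts are derived at generation time, not filled in manually later.
    ET.SubElement(inv, "TaxExclusiveAmount").text = f"{net:.2f}"
    ET.SubElement(inv, "TaxAmount").text = f"{net * vat_rate:.2f}"
    ET.SubElement(inv, "TaxInclusiveAmount").text = f"{net * (1 + vat_rate):.2f}"
    return ET.tostring(inv, encoding="unicode")
```

The takeaway is architectural: invoice generation becomes a deterministic function of transaction data, emitted in a machine-validated format at the moment of sale, rather than a document assembled in a back-office tool at month end.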
Systemic complexity
The challenge for product teams is that this is not a single integration project with a clear finish line. It requires sustained capability across tax law interpretation, financial data engineering, API compliance, and ongoing monitoring as national implementations evolve. If your product touches billing, subscriptions, marketplaces, or procurement in any EU market, e-invoicing turns from a feature into a core compliance requirement at the intersection of finance, engineering, and regulatory reporting.
Meeting the regulation will likely require a specific combination of skills: engineers familiar with structured data standards and electronic data interchange formats, developers experienced in building and maintaining compliance APIs, and specialists who understand VAT rules across jurisdictions. For many teams, sourcing all of that internally and keeping it current as requirements evolve is one of the more underestimated challenges ViDA will surface over the next few years.
ViDA requires EU businesses to move VAT reporting into real-time digital systems. Companies must issue structured e-invoices in line with EU standards such as EN16931 and integrate reporting directly with tax authorities via digital systems such as PEPPOL. For software and fintech platforms, this means embedding tax compliance into transaction workflows. Instead of periodic reporting, VAT becomes continuous, requiring systems that can generate, validate, and transmit invoice data automatically across jurisdictions.
Five EU regulations land between 2025 and 2030. The AI Act, Cyber Resilience Act, Data Act, Product Liability Directive, and ViDA share one thing: they all reach into the product layer. Risk classification, incident reporting, data portability, liability for AI behavior, and real-time VAT reporting are engineering problems now, not legal ones.
That's the shift worth paying attention to. Compliance used to be something you handled after the product was built. These regulations assume it was considered when it was built, and if it wasn't, the burden of proof is yours.
Most engineering teams weren't built for five simultaneous regulatory overhauls. We bring vetted, EU-regulation-ready engineers to your team without disrupting your work, putting the right people in place quickly.
We work with UK companies from candidate sourcing through onboarding and ongoing management to build dedicated tech teams that slot into your workflows, ramp up without friction, and start shipping within weeks.
Start with the data: download our 2026 CEE Tech Salary Guide for country-by-country breakdowns of pay ranges, role availability, and regulatory considerations.
Five major regulations affect EU software companies between 2025 and 2030: the AI Act, the Cyber Resilience Act, the Data Act, the revised Product Liability Directive, and ViDA.
Each one reaches into the product layer, making compliance an engineering problem as much as a legal one. For most software companies, 2026 is the year multiple deadlines land simultaneously – the AI Act in August, CRA incident reporting in September, and the PLD transposition in December.
Which AI systems are classified as high-risk under the AI Act?
The AI Act classifies systems as high-risk based on use case, not technical architecture. AI used in hiring, healthcare, education, credit scoring, law enforcement, and critical infrastructure falls into this category.
High-risk systems must meet strict requirements for human oversight, documentation, and transparency – a CV screening tool or a medical triage assistant would likely qualify.
Does the Cyber Resilience Act apply to SaaS and cloud products?
Yes, in most cases. The CRA applies to software products with digital elements sold on the EU market, and most SaaS and cloud products fall within scope.
Incident reporting obligations have been active since September 2026, and full compliance, including secure-by-design requirements, is required by December 2027.
What does "secure by design" mean under the CRA?
Secure by design means cybersecurity is built into a product from the start, not added after release.
In practice, this means minimizing the attack surface, enforcing secure default configurations, and ensuring the product can receive security updates throughout its lifecycle. It is a product-level engineering requirement, not an organizational policy.
Does the Data Act apply to companies outside the EU?
Yes. Like the GDPR, the Data Act applies extraterritorially. If your product collects or processes data from EU users, the regulation applies regardless of where you are headquartered.
Non-EU SaaS providers and IoT manufacturers serving EU customers must comply with data portability requirements, with the deadline for connected products set for September 2026.
How does liability under the PLD differ from GDPR fines?
GDPR imposes fixed fines for data protection failures. The PLD creates civil liability for damage caused by defective software or AI, enforced through damage claims with no fixed cap.
Liability also continues after release, covering updates, patches, and evolving AI behavior – meaning a product that was compliant at launch can still generate exposure later.
Can one product fall under multiple regulations at once?
Yes, frequently. A single AI-powered SaaS product could fall under the AI Act, the CRA, the Data Act, and the PLD simultaneously.
Each regulation applies based on what your product does and who it affects, not by company size or sector. Mapping exposure across all applicable frameworks, rather than treating each as a separate workstream, is the only practical approach.
Does outsourcing development transfer compliance responsibility?
No. Outsourcing development does not transfer compliance obligations. Under both the CRA and the AI Act, accountability stays with whoever places the product on the EU market.
You remain responsible for your supply chain, open-source dependencies, and external partners, making vendor compliance posture a core part of due diligence, not an optional consideration.