AI Act and CRA: Leveraging Synergies for Cybersecurity Compliance

Illustration: EU AI Act and Cyber Resilience Act - Synergies in Cybersecurity Compliance

With the AI Act (Regulation (EU) 2024/1689) and the Cyber Resilience Act (CRA) (Regulation (EU) 2024/2847), the EU has introduced two new and far-reaching legal frameworks. Instead of treating them in isolation, companies should leverage their interaction strategically. An integrated implementation can reduce duplication and save costs.

How can overlapping compliance requirements between the AI Act and the Cyber Resilience Act be streamlined?

Focus on Security and Robustness

The AI Act targets the quality, transparency, and robustness of AI systems. For high-risk AI systems, Article 15 explicitly requires providers to design and develop their systems so that they achieve an appropriate level of accuracy, robustness, and cybersecurity and remain resilient against errors, faults, and attempts at manipulation or attack.

The Cyber Resilience Act establishes, for the first time, an EU-wide cybersecurity baseline for all products with digital elements - ranging from IoT devices to industrial control systems. It requires that such products be developed, maintained, and updated securely throughout their entire life cycle.

Although the two regulations may appear to pursue different objectives at first glance, the legislator has created several points of alignment that are particularly relevant for implementation and compliance teams.

Conformity Assessment: Synergies for High-Risk AI Systems

The overlap between the two acts becomes particularly visible in the conformity assessment process for high-risk AI systems that also qualify as products with digital elements.

Streamlining Through Automatic Conformity

Article 12 CRA provides that where a high-risk AI system meets the essential cybersecurity requirements of the CRA - including the conditions set out in Article 12(1)(a) to (c) - and this is demonstrated in the EU Declaration of Conformity, it is deemed compliant with the cybersecurity requirements under Article 15 AI Act (without prejudice to the requirements on accuracy and robustness).

In practice, this means that manufacturers may not need to go through two separate cybersecurity assessments. The verification of AI-related and cybersecurity requirements can be carried out within one coordinated process.

The Stricter Assessment Framework Takes Precedence

This simplification does not apply to products with particularly high security relevance, namely those listed in Annexes III and IV of the CRA (important and critical products, respectively). If such a product also qualifies as a high-risk AI system but would only be subject to the internal control procedure under Annex VI AI Act, the more comprehensive conformity assessment procedure under the CRA must be applied at least for the cybersecurity aspects. Products that play a key role in the digital infrastructure are therefore always subject to the stricter review regime.

Practical Implications for Manufacturers and Software Providers

The close interlinkage between both regulations calls for a new, integrated compliance strategy.

  • Process integration: Obligations under the AI Act and CRA should be addressed jointly within the product development process (e.g., in a single CE conformity procedure). Documentation and risk management should be aligned accordingly.
  • Lifecycle security: The CRA requires continuous vulnerability and patch management throughout the product lifecycle. This also applies to AI systems, as changes to models or training data can affect the overall security architecture.
  • AI-specific threats: According to Recital 51 CRA, cybersecurity assessments must also consider AI-specific attack vectors - such as adversarial attacks, manipulation of training data (data poisoning), or exploitation of algorithmic weaknesses.

Conclusion

The interaction between the AI Act and the CRA demonstrates that the EU has established an integrated security and trust framework for AI-based products. Companies that make strategic use of this mutual recognition can streamline their compliance processes while enhancing the resilience and integrity of their AI solutions. As both regulations are gradually implemented, the ability to demonstrate security, data protection, and AI compliance within a unified framework will become a key competitive factor in the European market.

However, particular diligence is required during implementation: the correct delineation between the requirements of the AI Act and the CRA can be complex in individual cases, above all when determining which conformity assessment procedure takes precedence for important or critical products. An incorrect classification can have significant consequences for the CE marking. To avoid legal disadvantages or risks, the applicability and classification should therefore be reviewed legally before the procedure is initiated.

Note: This article is intended for general informational purposes only and does not constitute legal advice.

Contact

If you have legal questions or would like to arrange an initial consultation, please feel free to get in touch.

Direct contact

Email: info@kanzlei-happel.de
Tel.: +49 (6106) 639 24 25
Consultation via email, phone, video conference, or by appointment.