05 March 2026

The European Internet Forum convened policymakers, regulators and industry representatives to examine the relationship between artificial intelligence and cybersecurity. The discussion explored two directions at once: how AI can strengthen cyber defences, and how cybersecurity must become a baseline condition for trustworthy AI. Participants converged on the view that the two cannot be addressed in isolation, and that the current legislative moment offers a critical opportunity to get the foundations right.

Securing AI from itself: how to address cybersecurity for AI and AI for cybersecurity

Opening Remarks

As rapporteur of the AI Act and co-chair of the European Parliament's AI working group, Brando Benifei MEP stressed that AI and cybersecurity can no longer be treated as separate policy domains. He outlined the dual nature of the challenge: AI enables defenders to detect vulnerabilities and automate incident response, while simultaneously lowering the barrier for sophisticated attacks. Agentic AI systems received particular attention, with MEP Benifei warning that their capacity to act autonomously creates risks that are not yet receiving sufficient regulatory scrutiny. He recalled that the AI Act makes cybersecurity a legal requirement for high-risk systems, and called for the ongoing simplification debate to clarify responsibilities rather than weaken safeguards.

The Commission's view

Luca Mazzarelli of the EU AI Office outlined why AI is changing the cybersecurity equation: weaknesses in AI systems can propagate at scale, AI amplifies capability on both sides of the security divide, and it introduces attack surfaces that extend well beyond classical IT infrastructure to training data, model weights and supply chains. He confirmed that the AI Act addresses these realities directly, making cybersecurity by design a legal requirement for high-risk systems and obliging providers of general-purpose models with systemic risk to protect against large-scale cyber offence under Article 55.

Stakeholder perspectives

Matteo Quattrocchi of Cisco described the transition to the agentic era as a significant shift in the threat landscape, noting that only 16% of companies globally consider themselves fully ready for AI adoption from a cybersecurity standpoint. AI is industrialising offensive capabilities, making sophisticated attacks more accessible, while new vulnerabilities such as data poisoning, prompt injection and supply chain weaknesses are expanding the attack surface. He called for the forthcoming Cloud and AI Development Act to embed security into European AI infrastructure from the ground up, and urged delivery of the AI Skills Academy as a parallel priority.

Martin Chatel of ETSI argued that robust, market-driven standards are essential to translating regulation into interoperable and secure systems. ETSI published a European standard on baseline security requirements for AI systems in January 2025, introducing a lifecycle-based framework that defines responsibilities from developers through to end users and addresses AI-specific threats including data poisoning, model obfuscation and prompt-based attacks. He noted that the work is already extending to cover agentic and embedded AI systems.

Yana Humen of IBM stressed that AI governance deserves treatment as a standalone priority, not a subset of technical security. Data from IBM's threat intelligence research showed that 97% of organisations reporting AI-related breaches also lacked basic access controls, illustrating that foundational practices remain the most critical line of defence. She cautioned against reactive and fragmented policy responses driven by short-term trends, and argued that the core principles of the AI Act provide a more durable foundation for secure AI adoption.

Professor Mariarosaria Taddeo of Oxford University highlighted the structural fragility of AI as a technology: very minor manipulations can radically alter system behaviour, making AI-specific attacks fundamentally different from conventional cyber incidents in that they seek to acquire control rather than disrupt. She argued that state-sponsored cyber activity, documented across multiple jurisdictions, makes cybersecurity an international relations challenge for which adequate frameworks are still lacking. Prof. Taddeo called for stronger duties on open source AI providers and the creation of a European security incident registry, and warned that stronger cybersecurity must not become a justification for expanded surveillance of individuals working on digital infrastructures.
