
SHASAI Project: Strengthening AI Systems Against Cybersecurity Threats

As artificial intelligence (AI) systems become increasingly complex and data-driven, their vulnerability to cyber-attacks has grown. In response, the European Union has funded a new initiative, SHASAI, aimed at enhancing the cybersecurity, resilience, and trustworthiness of AI-based systems from design to real-world deployment.

A Lifecycle Approach to AI Cybersecurity

Funded under the Horizon Europe programme, SHASAI seeks to address AI cybersecurity as a lifecycle challenge, moving beyond fragmented or ad-hoc security measures.

Leticia Montalvillo Mendizabal, Cybersecurity Researcher at IKERLAN and SHASAI Project Coordinator, explained:
“With SHASAI, we aim to move beyond fragmented security solutions and address AI cybersecurity as a lifecycle challenge. By combining secure hardware and software, risk-driven engineering, and real-world validation, the project will help organisations deploy AI systems that are not only innovative, but also resilient, trustworthy, and compliant with European regulations.”

Real-World Scenarios for Testing AI Security

SHASAI will demonstrate and validate its tools and methods in three real-world use cases:

  1. AI-enabled cutting machines in the agrifood sector – ensuring precision and safety while mitigating cyber risks.
  2. Eye-tracking systems in assistive healthcare technologies – protecting sensitive patient data and maintaining reliability.
  3. Tele-operated last-mile delivery vehicles in the mobility sector – securing autonomous operations and connected transport systems.

These diverse scenarios allow the project team to test its approach across different industries while ensuring that the results can be transferred to other AI applications.

Building Trustworthy and Compliant AI

The expected outcome of SHASAI is a robust, adaptive, and trustworthy security architecture. By integrating secure hardware, software, and risk-driven engineering principles, the project aims to ensure that AI systems remain resilient, traceable, and compliant with evolving cybersecurity standards, even in high-risk environments.

Supporting Europe’s Vision for Trustworthy AI

SHASAI also aligns with Europe’s broader ambitions to foster trustworthy AI. The project supports key EU initiatives, including:

  • the EU AI Act
  • the Cyber Resilience Act (CRA)
  • the NIS2 Directive
  • the EU Cybersecurity Strategy

By translating high-level cybersecurity and AI safety principles into practical, deployable solutions, SHASAI contributes to a safer, more secure AI ecosystem across the continent.

Collaboration and Timeline

The SHASAI consortium combines expertise from research organisations, universities, industry partners, and technology providers. The project officially started on 1 November 2025 and is expected to run until April 2029, driving innovation in AI cybersecurity while ensuring compliance with European regulations.