NO.253 Cutting-edge AI and Hardware Security
September 14 - 17, 2026 (Check-in: September 13, 2026)
Organizers
- Shivam Bhasin (NTU Singapore, Singapore)
- Stjepan Picek (University of Zagreb, Croatia and Radboud University, The Netherlands)
- Kota Yoshida (Ritsumeikan University, Japan)
Overview
Description of the Meeting
Hardware security is a critical field that underpins the reliability and safety of digital society. Hardware-assisted security features, such as a root of trust (RoT) and trusted execution environments (TEEs), have become foundational in establishing secure computing platforms. Secure elements, which provide core RoT functionality, are vital for managing cryptographic keys and executing cryptographic algorithms, ensuring data confidentiality and authentication. Protecting these components is paramount because they are attractive targets for attackers seeking to extract sensitive information. Recognizing the importance of hardware security, government agencies such as NIST in the US and the BSI in Germany, along with other national and international institutes, have been actively developing standards, regulations, and requirements to strengthen hardware security and guide industry.
Recently, rapid advances in Artificial Intelligence (AI), especially Machine Learning (ML) and Deep Learning (DL), have opened up new possibilities for addressing these challenges. The application of AI to hardware security has significantly evolved attack strategies, and these advances in turn help us develop defense techniques and perform security assessments of hardware. On the standardization side, NIST IR 8517 describes security failure scenarios arising from hardware vulnerabilities, while ISO/IEC 15408 and 18045 define the Common Criteria (CC) for security assessment and the Common Evaluation Methodology (CEM), respectively.
Physical Unclonable Functions (PUFs), which are expected to serve as key-storage elements in the RoT, now face AI-based modeling attacks that enable adversaries to predict their responses and, ultimately, the keys they protect. Similarly, AI-assisted side-channel attacks have shown the potential to overcome conventional countermeasures, such as masking, shuffling, and jittering, in AES and other cryptographic circuits. In active implementation attacks (fault injection), AI is used to find better parameter values for injecting faults or to predict a target's responses in the presence of faults. Microarchitectural attacks also benefit from AI, which makes them faster and easier to mount. Security fuzzing commonly uses diverse AI techniques to achieve better code coverage and find more security bugs.
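To illustrate the PUF modeling threat mentioned above, the following is a minimal, self-contained sketch, assuming the standard additive-delay model of an arbiter PUF (response = sign of a weighted sum over parity features of the challenge). A simple perceptron stands in for the logistic-regression or deep-learning models used in published attacks; the stage count, learning rate, and number of collected challenge-response pairs (CRPs) are illustrative choices, not values from the source.

```python
import random

random.seed(0)
N_STAGES = 16  # illustrative; real arbiter PUFs often have 64+ stages

# Hypothetical "device": under the linear additive-delay model, the
# response is the sign of a weighted sum over parity features.
true_w = [random.gauss(0, 1) for _ in range(N_STAGES + 1)]

def features(challenge):
    # Parity transform: phi[i] = product of (1 - 2*c[j]) for j >= i,
    # with a constant bias feature phi[N_STAGES] = 1.
    phi = [1.0] * (N_STAGES + 1)
    for i in range(N_STAGES - 1, -1, -1):
        phi[i] = phi[i + 1] * (1 - 2 * challenge[i])
    return phi

def respond(w, challenge):
    phi = features(challenge)
    return 1 if sum(wi * pi for wi, pi in zip(w, phi)) > 0 else 0

# Adversary collects CRPs by querying the device.
crps = []
for _ in range(2000):
    c = [random.randint(0, 1) for _ in range(N_STAGES)]
    crps.append((c, respond(true_w, c)))
train, test = crps[:1500], crps[1500:]

# Perceptron training on the parity features (stand-in for the
# ML attack model); misclassified CRPs nudge the weight estimate.
w = [0.0] * (N_STAGES + 1)
for _ in range(50):
    for c, r in train:
        phi = features(c)
        pred = 1 if sum(wi * pi for wi, pi in zip(w, phi)) > 0 else 0
        if pred != r:
            sign = 1 if r == 1 else -1
            w = [wi + 0.1 * sign * pi for wi, pi in zip(w, phi)]

# The learned model predicts responses to challenges it never saw,
# which is exactly what defeats the PUF's unclonability assumption.
correct = sum(respond(w, c) == r for c, r in test)
accuracy = correct / len(test)
print(f"prediction accuracy on unseen challenges: {accuracy:.2%}")
```

Because the arbiter PUF is linear in the parity-feature space, even this simple learner reaches high prediction accuracy from a modest number of CRPs, which is why nonlinear PUF compositions (e.g., XOR arbiter PUFs) were proposed, and why DL-based attacks that break those in turn are a core topic of this meeting.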
The role of AI is not limited to enhancing attacks. By analyzing trained AI models to obtain insights into side-channel leakage, it is possible to identify vulnerabilities that conventional methods often overlook and to point the way toward more robust hardware designs. At the same time, while there is a growing expectation that AI could provide a quantitative framework for evaluating the robustness and weaknesses of defense techniques, such a framework remains elusive. Its realization is a significant research objective: achieving it would enable more precise hardware security assessments and improve system robustness. More recently, progress in large language models (LLMs) has also become relevant for the security community, as LLMs enable code generation, detection of code anomalies, zero-shot vulnerability examination, and more. Finally, hardware-enabled mechanisms (HEMs) can support responsible AI development by enabling verifiable reporting of key properties of AI training activities, such as the quantity of compute used, training cluster configuration, or location, as well as policy enforcement.
This Shonan meeting focuses on the application of AI in hardware security. It brings together experts to discuss how AI can challenge and enhance the security of hardware systems. We aim to explore the latest research, share insights on threats and defenses, and chart a course for future directions in the field.
Around three months before the event, we plan to send participants a survey to remind them of the workshop goals and to gather the ideas, topics, and challenges they would like to discuss and receive feedback on during the meeting. Based on the responses, we will group the proposed topics and, if needed, send additional short surveys with specific, topical questions.
On the first day, every participant will introduce themselves with a single slide, followed by a poster session in the afternoon. We will also hold a session where participants can briefly present the challenges they are interested in (prepared based on the survey results).
For the program, we plan a mix of talk sessions and brainstorming activities. For each brainstorming session, we will appoint a leader and decide at the end of the session whether it continues on subsequent days. During the brainstorming sessions, we will also discuss the challenges presented on the first day. To allow more time between brainstorming and reporting, each day will start with brainstorming sessions, continue after lunch with talks, and finish with reports from the brainstorming sessions. The results of the brainstorming sessions will be gathered into a report.
Topics
- AI-assisted side-channel analysis
- AI-based attacks and evaluation on PUFs
- AI-assisted fault injection analysis
- Trojan analysis and attacks on AI implementations
- Security-aware design flow and design methodology
- Circuit design with AI
- Microarchitectural attacks and AI
- Hardware vulnerabilities and attacks in AI accelerators
- Hardware-based authentication and encryption
- Hardware root of trust for secure AI applications
- Hardware-accelerated privacy-preserving machine learning
- Security-aware EDA algorithms
- Security fuzzing