Workshop Security AI

Security and Trustworthiness of AI - an ACCSS workshop

The Security and Trustworthiness of AI workshop in the Netherlands (supported by ACCSS) aims to foster collaboration between Dutch research institutes working at the intersection of security and artificial intelligence. Join us at this workshop to share new ideas, experiences and research opportunities within the field of security and AI.

The first edition of this workshop will be held on 27 September at the Centrum voor Veiligheid en Digitalisering (CVD) building, Wapenrustlaan 11, Apeldoorn.

We welcome contributions in the form of short talks from researchers working in the field, and we especially encourage young academics to submit an abstract and share their research results.

You can sign up for the workshop and optionally submit an abstract for a 10-minute talk here: https://forms.office.com/e/cBEf8sfm3t

Important Dates

Workshop: 27 September, Apeldoorn

Participation

Participation is free of charge; however, registration is required. Register here: https://forms.office.com/e/cBEf8sfm3t.

Program

The program is subject to change.

Timeslot        What?

10:00 - 10:15   Opening
10:15 - 11:00   Keynote - Vera Rimmer - The Ambivalence of Deep Learning in Cybersecurity: Balancing Promises and Pitfalls
11:00 - 11:15   Coffee break
11:15 - 11:30   Short Talk - Harald Vranken - Applying AI for detecting security vulnerabilities and anomalies
11:30 - 11:45   Short Talk - Marc Damie - AI-based spam detection in decentralized social media: challenges and opportunities
11:45 - 12:00   Short Talk - Stefano Simonetto - From CVE to MITRE: Understanding the actions hackers can take
12:00 - 12:15   Short Talk - Koen Teeuwen - Ruling the Unruly: Network Intrusion Detection Rule Design Principles for Specificity and Coverage to Decrease Unnecessary Workload in SOCs
12:15 - 13:00   Lunch
13:00 - 13:40   Keynote - Azqa Nadeem - XAI4Security: What is holding us back?
13:40 - 13:50   Break
13:50 - 14:05   Short Talk - Emanuele Mezzi - Applying LLMs to extract Cyber Threat Intelligence (CTI) from unstructured reports
14:05 - 14:20   Short Talk - Aditya Shankar - SiloFuse: Cross Silo Synthetic Data Generation with Latent Tabular Diffusion Models
14:20 - 14:35   Short Talk - Rob van der Veer - AI security standardization from the trenches
14:35 - 14:45   Break
14:45 - 15:30   Panel discussion
15:30 - 16:30   Drinks

Keynotes

Vera Rimmer - The Ambivalence of Deep Learning in Cybersecurity: Balancing Promises and Pitfalls

Vera Rimmer is a Research Expert at the DistriNet research group at KU Leuven, where she conducts and leads research activities at the intersection of security, privacy and AI. She completed her PhD at KU Leuven in 2022, focusing on applying deep learning to anonymity networks and network defense systems. Currently, Vera and her team explore data analytics for intrusion and malware detection, and the trustworthiness of data-driven AI in the wider ICT context. Vera is interested in developing a comprehensive understanding of, reasonable expectations for, and mitigations of the risks of data-driven AI in the age of uncontrolled data collection and inference.

The transformative potential of deep learning in enhancing computer science solutions has not gone unnoticed in the fields of security and privacy. However, the sheer volume of related scientific literature, and the significant gap between lab contexts and real-world environments, make it extremely challenging to assess the current progress in the area. In this talk, we will review the underlying mechanisms, main principles and common pitfalls of deep learning as applied to offensive and defensive cybersecurity. The focus will be on two use cases: traffic analysis attacks on a popular anonymity system (Tor), and defenses against network intrusions. The discussion will challenge the common (mis)perception that applied deep learning in cybersecurity works purely end to end. The keynote is meant to equip cybersecurity researchers and practitioners with the necessary insights to begin incorporating deep learning into their toolbox while maintaining a critical and holistic perspective.
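To make the first use case concrete, below is a minimal sketch (our illustration, not material from the talk) of how deep-learning traffic analysis on Tor is commonly framed in the literature: a small 1D convolutional network classifying traces of packet directions, in the spirit of website-fingerprinting attacks. It assumes PyTorch; the dataset, architecture and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (illustrative only): a 1D-CNN website-fingerprinting
# classifier over Tor packet-direction sequences. Shapes, layer sizes
# and the random "dataset" are assumptions, not the speaker's setup.
import torch
import torch.nn as nn

class TrafficCNN(nn.Module):
    def __init__(self, num_sites: int):
        super().__init__()
        # Input: one channel holding a +1/-1 packet direction per position.
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=8, padding=4),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=8, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time dimension
        )
        self.classifier = nn.Linear(64, num_sites)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).squeeze(-1))

# Toy usage: 16 random traces of +1/-1 directions, 50 candidate sites.
traces = torch.randint(0, 2, (16, 1, 5000)).float() * 2 - 1
model = TrafficCNN(num_sites=50)
logits = model(traces)  # shape: (16, 50)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 50, (16,)))
loss.backward()  # one standard supervised training step
```

The point of such a sketch is that the raw trace goes in and a class label comes out; the keynote's argument is precisely that treating this pipeline as a purely end-to-end black box hides the pitfalls that matter in real deployments.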

Azqa Nadeem - XAI4Security: What is holding us back?

Azqa Nadeem is an Assistant Professor at the University of Twente in the Semantics, Cybersecurity, and Services (SCS) group. Her research focuses on developing explainable machine learning solutions for cybersecurity tasks such as incident response, malware analysis, and intrusion detection. Before joining UT, she held research positions at Eurecom, RIT, EPFL, and CERN. She obtained her PhD in 2024 at TU Delft, where she developed explainable sequential machine learning toolchains for automating cyber threat intelligence. Azqa’s mission is to go beyond prediction probabilities and extract semantically meaningful insights from ML models to create AI-assisted practitioners.

Explainable Artificial Intelligence (XAI) has seen an uptick in usage within cybersecurity literature to improve the understandability of ML models deployed in safety-critical domains. At the same time, security practitioners are hesitant to use XAI, or much of ML, because of trust issues. As it turns out, the explanations produced by current literature are still quite rudimentary and might even be useless given practitioner workflows. In fact, the community has not even arrived at a consensus on what an explanation is. In this talk, we go over the major themes addressed by the XAI literature within cybersecurity and identify pitfalls that are limiting developments in this important area.
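To ground the discussion, here is a minimal sketch (our illustration, not material from the talk) of the kind of rudimentary post-hoc explanation the abstract alludes to: permutation feature importance over a toy intrusion-detection classifier, assuming scikit-learn. The feature names and synthetic data are made up for the example.

```python
# Minimal sketch (illustrative only): post-hoc explanation of an
# intrusion-detection classifier via permutation feature importance.
# The synthetic "flow features" are assumptions, not a real dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["duration", "bytes_sent", "bytes_recv", "dst_port_entropy"]

# Synthetic flows: the label depends mostly on bytes_sent, so the
# explainer has something meaningful to recover.
X = rng.normal(size=(2000, 4))
y = (X[:, 1] + 0.1 * rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# "Global explanation": how much does shuffling each feature hurt accuracy?
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:>18}: {score:.3f}")
```

Whether a ranked list of feature importances like this actually helps an analyst triage an alert, given real practitioner workflows, is exactly the kind of question the talk raises.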