Workshop on AI for Security and Security of AI

Call for Papers


Recent years have seen a dramatic increase in applications of Machine Learning (ML) and Artificial Intelligence (AI) to security and privacy problems. Such widespread adoption fosters new lines of research, where ML and AI techniques help fight threats side by side with already-deployed security solutions. However, the same abundance of AI and ML techniques also raises doubts about their intrinsic security, as they might become the target of the next cyber-attacks that exploit their weak spots, turning them de facto into the weakest link of the security chain.

To address these fundamental issues, this workshop focuses on inspiring and leading technical discussions around the entanglement between cyber security and ML/AI. In particular, our interests are twofold: (i) AI for security, referring to the analysis and study of all the cyber-security applications that can be improved and automated by ML/AI techniques (e.g., malware, spam, phishing, and botnet detection); (ii) security of AI, since we are interested in understanding how ML/AI technologies can be subverted by skilled attackers (e.g., data poisoning, adversarial examples, and privacy-related threats), and how to develop novel techniques to harden existing ML/AI solutions (e.g., robust training, data augmentation, domain knowledge).

By investigating these aspects, we believe we will witness the development of the next generation of ML/AI applications. These will not only improve the security of other technologies, by making them more effective in detecting threats, but will also stand as trustworthy techniques that can face skilled adversaries in the wild without being subdued.

Topics of Interest

Topics of interest include (but are not limited to):

AI for Security

  • Spam and malware detection
  • Phishing detection and prevention
  • Botnet detection
  • Intrusion detection and response
  • Security in social networks
  • Biometric identification/verification
  • User authentication

Security of AI

  • Adversarial machine learning
  • Security of deep learning systems
  • Attacks and defenses on machine learning and AI (e.g., adversarial examples, data poisoning, privacy-related attacks)
  • Robust statistics
  • Privacy-preserving machine learning

Submission Guidelines

Papers must be in English, formatted in PDF according to the ITASEC conference template (EasyChair style), and no longer than 10 pages, excluding bibliography. This workshop has no official proceedings, so we will also accept submissions that have been published elsewhere, provided that this is clearly acknowledged in the submission (e.g., with a footnote on the first page reporting the full reference), and that the submission is adapted to the given template and page limits.

Submission Site

Submission link:

Important Dates
  • May 1, 2022 (extended to May 15, 2022): Workshop submission deadline
  • May 31, 2022: Workshop paper acceptance results
  • June 10, 2022: Workshop camera-ready version
  • June 20, 2022: Workshop day

Workshop Chairs

  • Battista Biggio, Assistant Professor, University of Cagliari, Italy; Pluribus One
  • Maura Pintor, Postdoctoral Researcher, University of Cagliari, Italy; Pluribus One
  • Luca Demetrio, Postdoctoral Researcher, University of Cagliari, Italy; Pluribus One
  • Fabio Roli, Professor, University of Genova, Italy; Pluribus One