AI for Security and Security of AI
Panel, invited and open submissions
Chairs: Battista BIGGIO, Kathrin GROSSE, Fabio ROLI
Recent years have seen a dramatic increase in applications of Machine Learning (ML) and Artificial Intelligence (AI) to security and privacy problems. The use of AI and ML algorithms to extract actionable knowledge and automate decisions in security-sensitive domains, in which adversaries may attempt to mislead or evade intelligent machines, creates not only novel opportunities but also novel challenges for security research. The recent widespread adoption of deep learning techniques, whose security properties are difficult to reason about directly, has only added to the importance of this research.
This workshop aims to discuss new developments at the intersection of security and privacy with ML and AI. In particular, we focus on (i) the use of AI for security, including automated approaches for spam and malware detection, social network analysis, biometric identification, network traffic analysis, user authentication, and (ii) the security of AI, including attacks on ML/AI algorithms (e.g., data poisoning, adversarial examples and privacy-related threats) and defense strategies.
We argue that, as conceptually depicted on the left, AI enables security and security enables AI. This is fundamental to developing AI products that are the strongest link in the security chain, not the weakest one. Along with work on privacy-preserving machine learning and explainable AI, we firmly believe that AI security will pave the way towards the next generation of trustworthy AI systems.
Important dates (Papers):
- March 12, 2021 (extended from Feb 28, 2021): Workshop submission deadline
- March 22, 2021: Workshop paper acceptance results
- March 31, 2021: Workshop camera-ready version
- April 7, 2021: Workshop
Topics of interest include (but are not limited to):
- AI for Security
  - Spam and malware detection
  - Phishing detection and prevention
  - Botnet detection
  - Intrusion detection and response
  - Security in social networks
  - Biometric identification/verification
  - User authentication
- Security of AI
  - Adversarial machine learning
  - Security of deep learning systems
  - Attacks and defenses on machine learning and AI (e.g., adversarial examples, data poisoning, privacy-related attacks)
  - Robust statistics
  - Learning in games
  - Economics of security
  - Privacy-preserving machine learning
Papers must be in English, formatted as PDF according to the ITASEC conference template (EasyChair style), and no longer than 10 pages, excluding the bibliography. This workshop has no official proceedings, so we also accept submissions that have been published elsewhere, provided that this is clearly acknowledged in the submission (e.g., with a footnote on the first page reporting the full reference) and that the submission is adapted to the given template and page limits.