TAIC25: Trustworthy AI for Cybersecurity
ITASEC 25, Bologna, Italy, February 3-8, 2025

Conference website: https://taicworkshop.github.io/
Submission link: https://easychair.org/conferences/?conf=taic25
Submission deadline: December 9, 2024
Workshop Abstract
Artificial Intelligence (AI) has become a well-established technology in cybersecurity applications. AI and Machine Learning (ML) techniques strengthen existing tools by operating as core or additional mechanisms to prevent and detect threats, revolutionizing key areas such as vulnerability and malware detection. As cyber threats grow increasingly sophisticated and complex, the cybersecurity landscape demands innovative solutions: AI-driven approaches offer the automation and intelligence necessary to stay ahead of evolving attacks and novel threats, providing a crucial line of defense in a rapidly changing digital ecosystem.

However, alongside the growing number of cyberthreats, an alarming number of vulnerabilities is associated with AI techniques themselves, raising concerns about their use. In addition, such techniques are often conceived as a "black box", producing decisions whose rationale remains unclear and sometimes even incorporating undesired biases. Given the wide use of AI techniques to support decision-making in high-stakes scenarios, such as cybersecurity applications, these issues have led the research community to focus on the trustworthiness of AI techniques, with the unified goal of validating their use by increasing security, transparency, and fairness.

In light of these issues and considerations, this workshop focuses on the trustworthiness of AI techniques for cybersecurity systems.
We are therefore interested in two specific aspects:

(i) Trustworthy AI, which focuses on the trustworthiness of AI systems: advancing the discussion on the security of models and algorithms by analyzing attack and defense techniques (e.g., evasion attacks and adversarial training, respectively), on explainability techniques that increase transparency, and on methods for analyzing the fairness of models and algorithms;

(ii) AI for Cybersecurity, which refers to the study and analysis of cybersecurity tasks where the use of AI can improve the overall level of security, such as spam, malware, and botnet detection, as well as automatically localizing and fixing security vulnerabilities in software applications.

Through these two separate yet related aspects, our goal is to foster a unified discussion on trustworthy AI in cybersecurity. By doing so, we aim to help mitigate these issues and prevent them from hindering the development and adoption of AI techniques.
Topics of Interest:
The topics of interest include (but are not limited to):
Trustworthy AI:
- Adversarial machine learning
- Attacks and defenses on machine learning and AI
- Explainability techniques
- Explainability-based attacks and defenses
- Fairness techniques
- Cybersecurity for AI
AI for Cybersecurity:
- Spam/Phishing detection
- Botnet and Malware detection
- Intrusion detection and response systems
- Biometric identification/verification
- Automated software vulnerability detection and repair
- Automated generation of security tests
- Automated exploit generation
Submission Guidelines:
Papers must be in English, formatted as PDF according to the ITASEC conference template (EasyChair style: https://easychair.org/publications/for_authors), and between 5 and 7 pages long, excluding the bibliography. This workshop has no official proceedings, so we will also accept submissions that have been published elsewhere, provided that this is clearly acknowledged in the submission (e.g., with a footnote on the first page reporting the full reference) and that the submission is adapted to the given template and page limits.
Important dates:
- December 9, 2024: Deadline for Workshop Paper Submission
- January 3, 2025: Author Notification
- February 3-8, 2025: TAIC Workshop
- February 20, 2025: Camera-ready deadline