Call for Papers

The static nature of current computing systems has made them easy to attack and hard to defend. Adversaries have an asymmetric advantage in that they have the time to study a system, identify its vulnerabilities, and choose the time and place of attack to gain the maximum benefit. The idea of AACD is to impose the same asymmetric disadvantage on attackers by making systems random, diverse, and dynamic and therefore harder to explore and predict. With a constantly changing system and its ever-adapting attack surface, attackers will have to deal with significant uncertainty just like defenders do today. The ultimate goal of AACD is to increase the attackers' workload so as to level the cybersecurity playing field for defenders and attackers – ultimately tilting it in favor of the defender.

The workshop seeks to bring together researchers from academia, government, and industry to report on the latest research efforts on moving-target defense, and to have productive discussion and constructive debate on this topic. We solicit submissions (both short and long) on original research in the broad area of AACD, with possible topics such as those listed below. We also solicit Systematization of Knowledge (SoK) submissions, which will be reviewed in the same way as regular submissions. All contributions that fall within the broad scope of AACD are welcome, including research that reports negative results.

List of Broad Topics:

  • System randomization
  • Artificial diversity
  • Cyber maneuver and agility
  • Software diversity
  • Dynamic network configuration
  • Moving target in the cloud
  • System diversification techniques
  • Dynamic compilation techniques
  • Adaptive/proactive defenses
  • Intelligent countermeasure selection
  • AACD strategies and planning
  • Deep learning for AACD
  • AACD quantification methods and models
  • AACD evaluation and assessment frameworks
  • Large-scale AACD (using multiple techniques)
  • Moving target in software coding, application API virtualization
  • Autonomous technologies for AACD

    Paper Submissions

    Submitted papers must not substantially overlap with papers that have been published or are simultaneously submitted to a journal or a conference with proceedings. Short submissions should be at most 4 pages in the ACM double-column format. Regular submissions should be at most 10 pages in the ACM double-column format, excluding well-marked appendices, and at most 12 pages in total. SoK submissions may be at most 15 pages long, excluding well-marked appendices, and at most 17 pages in total. Submissions are not required to be anonymized. Submissions should be made at https://aacd24.hotcrp.com/.

    Only PDF files will be accepted. Submissions not meeting these guidelines risk rejection without consideration of their merits. Papers must be received by the deadline of July 7, 2024 (extended from June 20, 2024) to be considered. Notification of acceptance or rejection will be sent to authors by August 24, 2024. Camera-ready papers must be submitted by September 05, 2024. Authors of accepted papers must guarantee that one of the authors will register and present the paper at the workshop. Proceedings of the workshop will become part of the ACM Digital Library.

    Important Dates

  • Paper submission due: July 7, 2024 (extended from June 20, 2024)
  • Notification of acceptance: August 24, 2024 (extended from August 08, 2024)
  • Camera ready due: September 05, 2024

    Keynote Speakers

    Jun Xu

    Title: Autonomous Software Security with 2024's AI: Some Observations as a DARPA AIxCC Player

    Abstract: In this talk, I would like to share some observations about the community's progress in autonomous software security, from the perspective of a DARPA AIxCC (AI Cyber Challenge) player. The talk will consist of two parts. In the first part, I will introduce the definition and setup of autonomous software security adopted by AIxCC, explain the participation and process of the challenge, and summarize the collective results. In the second part, I will elaborate on which techniques work better for autonomous software security and where and how today's AI may help.

    Bio: Jun Xu is an Assistant Professor in the Kahlert School of Computing at The University of Utah. Before joining Utah, he worked as an Assistant Professor at Stevens Institute of Technology. Jun's research focuses on software security and system security. His research has led to many papers published in top-tier computer security conferences, including IEEE S&P, ACM CCS, USENIX Security, and NDSS. Jun is a recipient of the NSF CAREER Award, a CCS Outstanding Paper Award, a SecureComm Best Paper Award, the Penn State Alumni Dissertation Award, and an RSAC Security Scholarship. He is also a core member of Team 42-b3yond-6ug, one of the seven winning teams in the DARPA AIxCC semi-finals.


    Amrita Roy Chowdhury

    Title: Robustness against Poisoning under Local Differential Privacy

    Abstract: Today, data is generated on billions of smart devices at the edge, leading to a decentralized data ecosystem comprising multiple data owners (clients) and a service provider (server). The clients interact with the server with their personal data for specific services, while the server performs analysis on the joint dataset. However, as an untrusted entity, the server is often incentivized to extract as much information as possible, potentially compromising the clients' privacy. Local Differential Privacy (LDP) has emerged as a leading solution for privacy in decentralized data analytics. Yet, as its adoption grows, it is essential to examine its vulnerabilities. The decentralized nature of LDP makes it vulnerable to poisoning attacks, where adversaries can inject fake clients that provide poisoned or malformed data. In this talk, we will explore solutions that provide provable robustness against such attacks. Specifically, we will analyze a unique characteristic of LDP protocols that distinguishes them from non-private ones: the clear separation between the input and the final response (obtained after randomization). This separation gives adversaries two distinct opportunities to tamper with the data. We will discuss strategies to mitigate both types of tampering, their application in real-world settings, and the associated challenges.
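    For readers unfamiliar with LDP, the input/response separation the abstract refers to can be seen in the classic randomized-response mechanism, a textbook building block of many LDP protocols. The sketch below is purely illustrative (the function names and parameters are ours, not from the talk):

```python
import math
import random

def randomize(bit: int, epsilon: float, rng: random.Random) -> int:
    """Client-side randomizer: report the true bit with probability
    e^eps / (e^eps + 1), otherwise flip it."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if rng.random() < p else 1 - bit

def estimate_frequency(reports, epsilon: float) -> float:
    """Server-side aggregation: debias the mean of the noisy reports.
    E[report] = p*f + (1-p)*(1-f), so f = (mean + p - 1) / (2p - 1)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    mean = sum(reports) / len(reports)
    return (mean + p - 1) / (2 * p - 1)

rng = random.Random(0)
true_bits = [1] * 3000 + [0] * 7000          # true frequency of 1s: 0.30
reports = [randomize(b, epsilon=1.0, rng=rng) for b in true_bits]
est = estimate_frequency(reports, epsilon=1.0)  # close to 0.30
```

    The two tampering opportunities are visible in the sketch: a fake client can feed a false bit into the randomizer (input poisoning), or skip the randomizer entirely and submit an arbitrary report to the server (response poisoning).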

    Bio: Amrita Roy Chowdhury is an Assistant Professor at the University of Michigan, Ann Arbor. Her work explores the synergy between differential privacy and cryptography through novel algorithms that expose the rich interconnections between the two areas, both in theory and in practice. She was recognized as a Rising Star in EECS in 2020 and 2021, and as a UChicago Rising Star in Data Science in 2021.


    XiaoFeng Wang

    Title: Security Of AI, By AI and For AI: AI-Centered Cybersecurity Research and Innovations

    Abstract: The rapid advancements in artificial intelligence (AI) technologies and the unyielding demand for their transformative applications have ushered in significant opportunities for security and privacy research and innovations. There is an urgent need for innovative and practical solutions to protect data and other assets to support the training and utilization of large, complicated machine learning (ML) models in a scalable and cost-effective manner ("Security For AI"). In the meantime, substantial research efforts are focused on understanding the security and privacy implications of AI systems, particularly the identification of vulnerabilities in ML models and the mitigation of associated risks ("Security Of AI"). Furthermore, cutting-edge AI technologies are increasingly being deployed to enhance the security of computing systems, offering intelligent protection and more effective defenses against real-world threats ("Security By AI").

    In this presentation, I will use our research in these areas to demonstrate how AI innovations have expanded the horizons of security and privacy research. For instance, under the theme "Security For AI," I will provide an overview of ongoing research at the Center for Distributed Confidential Computing (CDCC), one of the largest initiatives funded by the US National Science Foundation, aimed at advancing practical, scalable data-in-use protection. This initiative is poised to have a transformative impact on AI research. Regarding "Security Of AI," I will discuss our investigations into Trojan threats to ML models, exploring the fundamental nature of this emerging security risk and, in particular, its defensibility. In the context of "Security By AI," I will showcase how AI and ML technologies are revolutionizing the detection and prediction of security threats within carrier networks, a vital infrastructure, by automating the analysis of their documentation. Lastly, I will discuss potential future directions in the vast space of AI-centered cybersecurity research and innovations.

    Bio: XiaoFeng Wang is the Associate Dean for Research and a James H. Rudy Professor in the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington. His research focuses on systems security and data privacy, with a specialization in security and privacy issues in mobile and cloud computing, cellular networks, and intelligent systems, as well as privacy issues in the dissemination and computation of human genomic data. He is a Fellow of ACM, IEEE, and AAAS.


    Frank Li

    Title: Considerations for Designing and Deploying Adaptive and Autonomous Defenses

    Abstract: Adaptive and autonomous security systems promise significant security gains, leveling the playing field between attackers and defenders. However, beyond their security properties, these defenses can have broader impacts on various stakeholders, sometimes negatively affecting practical deployment. In this talk, I will reflect on considerations for designing and deploying such randomized or dynamic approaches, including how they can impact end users, operators/administrators, developers, and other stakeholders (such as network or security analysts). By highlighting these considerations, I hope to provide some suggestions for future research in these directions.

    Bio: Frank Li is an Assistant Professor at Georgia Tech in the School of Cybersecurity and Privacy and the School of Electrical and Computer Engineering. His research focuses on empirically driven methods for improving Internet and web measurements (including consideration of human factors). He has received best paper awards at ACM IMC and USENIX SOUPS, and his PhD dissertation won the ACM SIGSAC Doctoral Dissertation Runner-Up Award.

    Program Chairs

  • Neil Gong, Duke University, USA
  • Qi Li, Tsinghua University, China

    Steering Committee

  • Kun Sun, Chair, George Mason University, USA
  • Sushil Jajodia, George Mason University, USA
  • Cliff Wang, National Science Foundation, USA
  • Dijiang Huang, Arizona State University, USA
  • Hamed Okhravi, MIT Lincoln Laboratory, USA
  • Xinming Ou, University of South Florida, USA

    Publicity Chair

  • Xiaoli Zhang, University of Science and Technology Beijing, China

    Web Chair

  • Wenxiang Sun, Zhejiang University of Technology, China

    PC Members

  • Yang Xiao, University of Kentucky, USA
  • Dan Dongseong Kim, The University of Queensland, Australia
  • Kun Sun, George Mason University, USA
  • Chengyu Song, UC Riverside, USA
  • Alex Bardas, University of Kansas, USA
  • Ziming Zhao, University at Buffalo, USA
  • Zhen Zeng, University of Wisconsin Milwaukee, USA
  • Ning Wang, University of South Florida, USA
  • Minghong Fang, Duke University, USA
  • Yuchen Yang, Johns Hopkins University, USA
  • Hongbin Liu, Duke University, USA

    Program

    All times are in Mountain Daylight Time (MDT)
    09:05 – 09:10
    Opening remarks
    09:10 – 10:00
    Keynote talk 1: Autonomous Software Security with 2024's AI: Some Observations as a DARPA AIxCC Player
    Jun Xu
    10:00 – 10:15
    Paper 1: Act as a Honeytoken Generator! An Investigation into Honeytoken Generation with Large Language Models
    Daniel Reti, Norman Becker, Tillmann Angeli, Anasuya Chattopadhyay, Daniel Schneider, Sebastian Vollmer, Hans Dieter Schotten (German Research Center for Artificial Intelligence (DFKI))
    10:15 – 10:30
    Paper 2: RESONANT: Reinforcement Learning-based Moving Target Defense for Credit Card Fraud Detection
    George Abdel Messih, Tyler Cody, Peter Beling, Jin-Hee Cho (Virginia Tech)
    10:30 – 11:00
    Coffee break
    11:00 – 11:50
    Keynote talk 2: Robustness against Poisoning under Local Differential Privacy
    Amrita Roy Chowdhury
    11:50 – 13:30
    Lunch
    13:30 – 14:30
    Keynote talk 3: Security Of AI, By AI and For AI: AI-Centered Cybersecurity Research and Innovations
    Xiaofeng Wang
    14:30 – 14:45
    Paper 3: Adaptive Network Intrusion Detection Systems Against Performance Degradation via Model Agnostic Meta-Learning
    Goktug Ekinci, Alexandre Broggi, Lance Fiondella (University of Massachusetts Dartmouth), Nathaniel Bastian (USMA), Gokhan Kul (University of Massachusetts Dartmouth)
    14:45 – 15:00
    Paper 4: Adaptive Input Sanitization in Medical Systems with eBPF
    Sinyin Chang (National Taiwan University), Ao Li, Evin Jaff, Yuanhaur Chang, Jinwen Wang, Ning Zhang (Washington University in St. Louis), Hsu-Chun Hsiao (National Taiwan University)
    15:00 – 15:30
    Coffee break
    15:30 – 16:20
    Keynote talk 4: Considerations for Designing and Deploying Adaptive and Autonomous Defenses
    Frank Li