Submission Rules

All submissions must include complete, self‑contained code that allows the organizers to reproduce the reported results on the official evaluation setup. Submissions must be provided as a single compressed archive containing a repository with a clear and well‑organized structure. The repository must include all code necessary to run the attack and generate predictions for the evaluation data.

To ensure reproducibility, each submission must specify its execution environment in one of the following ways. Submissions may either include a Dockerfile that builds a runnable container capable of executing the full evaluation pipeline, or provide a requirements.txt (or equivalent, such as pyproject.toml) that lists all required Python dependencies with explicit version constraints, enabling the code to be run in a clean environment using standard tooling (e.g., pip). Submissions must not rely on undocumented system‑level dependencies.
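As a sketch, a Dockerfile for such a submission could look like the following. The base image, file names, and entry-point script are illustrative assumptions only, not mandated by the rules:

```dockerfile
# Illustrative example; base image, file names, and entry point are assumptions.
FROM python:3.11-slim

WORKDIR /app

# Install pinned Python dependencies first so this layer is cached across builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the repository.
COPY . .

# Hypothetical entry point that runs the full evaluation pipeline.
ENTRYPOINT ["python", "run_attack.py"]
```

The same pinned dependencies would appear in requirements.txt with explicit version constraints (e.g., `torch==2.3.1` rather than `torch`), so that a clean `pip install -r requirements.txt` reproduces the environment.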

Each submission must include a README that clearly documents how to set up the environment, run the code, and produce the final outputs used for evaluation. The repository must define a clear entry point (for example, a script or command) that accepts the provided model and dataset as input and produces outputs in the required format. Any configuration files, scripts, or auxiliary assets needed to reproduce the results must be included in the archive.
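As a minimal sketch of such an entry point, the script below parses the model, dataset, and output paths and writes a prediction file. The script name, flag names, and JSON output format are hypothetical assumptions; each team defines its own interface and documents it in the README:

```python
import argparse
import json
from pathlib import Path


def main(argv=None):
    """Parse CLI arguments, run the (placeholder) attack, and write predictions."""
    parser = argparse.ArgumentParser(
        description="Run the attack and produce predictions for evaluation."
    )
    parser.add_argument("--model", required=True, help="Path to the provided model.")
    parser.add_argument("--data", required=True, help="Path to the evaluation dataset.")
    parser.add_argument("--output", required=True, help="Path for the prediction file.")
    args = parser.parse_args(argv)

    # Placeholder pipeline: a real submission would load the model and dataset
    # here, run the attack, and collect the resulting predictions.
    predictions = {"model": args.model, "data": args.data, "predictions": []}

    # Write outputs in the required format (JSON is an assumption here).
    Path(args.output).write_text(json.dumps(predictions, indent=2))
```

The organizers would then run something like `python run_attack.py --model model.pt --data test.jsonl --output predictions.json`.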

Submissions will be executed under fixed resource constraints. Each submission must run on a single NVIDIA A100 GPU with 80 GB of memory, 128 GB of system RAM, and 16 CPU cores (Intel Xeon Gold 6448Y), within a maximum wall-clock runtime of 12 hours. Submissions that exceed these limits, fail to terminate within the allotted time, or require additional computational resources will be disqualified. Participants are responsible for ensuring that their methods fit within these constraints.


Submissions that cannot be executed using the provided instructions, that fail to reproduce the reported results, or that violate the competition rules may be disqualified at the organizers’ discretion.

Submission Format

👉Submission Link👈

Each submission must include a report in a short-paper format of two to four pages. The report must outline the proposed approach, describe its implementation and rationale, and present the results on the open test set.

In addition to the report, each submission must be accompanied by a comprehensive replication package that enables the organizing committee to validate the reported results and execute the proposed approach on the held-out test set and model.

Submissions must be written in English and provided as PDF files, adhering to the page limits specified above. Authors are required to prepare their manuscripts using the official ACM Primary Article Template, available from the ACM Proceedings Template page. For LaTeX submissions, the sigconf format should be used together with the review option to enable line numbering for reviewer reference.
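For LaTeX, this corresponds to a preamble like the one below, using the acmart document class with the sigconf and review options; the title, author, and affiliation fields are placeholders:

```latex
% ACM Primary Article Template: sigconf format, with the review option
% enabled so that line numbers appear for reviewer reference.
\documentclass[sigconf,review]{acmart}

\begin{document}

\title{Your Report Title}

\author{First Author}
\affiliation{%
  \institution{Your Institution}
  \city{City}
  \country{Country}}

\maketitle

% Report body (two to four pages).

\end{document}
```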

Review Procedure

All submissions will undergo a single-blind peer review process, with three reviewers evaluating each contribution. Based on the reviewers' assessments and constructive feedback, participants may be offered an opportunity to revise their report or replication package to address reviewer concerns and strengthen their submission for acceptance. The replication package must be made public after acceptance.

Proceedings Inclusion

Participants will have the option to include their reports in the ACM Digital Library Proceedings, allowing teams to gain additional visibility and academic recognition for their work. To maximize accessibility and inclusivity, competitors are welcome to participate and present their solutions without formally registering for the broader conference and without including their submission in the proceedings. We also allow submissions that apply techniques currently under review or consideration at other venues.

Following the submission deadline and prior to the conference, the organizing committee will prepare a report analyzing all accepted submissions. This report will summarize the approaches proposed by the participants and compare their strategies and results. It will also document unexplored directions that emerged from the competition, providing open challenges that warrant further investigation. The report will be included in the proceedings.