Designing and Developing Ethically Aligned Military AI
May 6, 9:00 - May 7, 17:00
Mondai | House of AI is pleased to host the
Designing and Developing Ethically Aligned Military AI Conference
organised by the ELSA Defense Lab, in collaboration with the TU Delft Digital Ethics Centre
Advances in artificial intelligence (AI) are enabling military systems to operate in environments where uncertainty, adversarial dynamics, and time-critical decision-making are the norm rather than the exception. In such contexts, ethical design cannot rely solely on predictable scenarios, assumptions of human oversight, or static rule-based constraints; rather, it requires deliberate and substantial ethical design and engineering to ensure that AI-enabled systems behave in alignment with moral and legal principles throughout their operational lifecycle.
This conference explores how ethically aligned military AI can be conceived, designed, and developed for deployment in uncertain, adversarial, and time-critical environments. Across two days, contributors examine normative and methodological foundations related to the embedding of moral and ethical constraints during the early stages of the lifecycle of military AI systems.
Call for Abstracts
This conference focuses specifically on the challenge of designing and developing ethically aligned military AI technologies. We invite contributions that address, among others, the following questions:
- What methods should be used to embed robust moral constraints into particular AI-enabled systems that must act adaptively under uncertainty?
- What technical architectures (e.g., constraint learning, formal verification, runtime monitoring) best support ethical and moral guardrails under environmental uncertainty for different military AI capabilities?
- What fail-safe behaviors and override mechanisms should be incorporated, depending on the risk level?
- How should human–machine decision authority be allocated and dynamically recalibrated as operations evolve in real time?
- How do data pipeline choices (dataset curation, adversarial robustness, bias mitigation) influence downstream ethical reliability for particular military AI decision-support systems?
- What forms of explainability are meaningful and practical in time-critical command settings for particular military AI decision-support systems?
- How can testing, validation, and verification frameworks account for emergent behaviors in deployment environments?
We particularly welcome contributions that explore the relationship between normative ethical principles, applied ethical solutions, and concrete engineering methods, ensuring that ethically relevant constraints are embedded from the design phase through deployment.
How to submit
Abstracts of no more than 500 words should be submitted to Perica Jovchevski by February 1, 2026, via email (p.j.jovchevski@tudelft.nl).
Notifications of acceptance will be communicated by March 1, 2026.
Accepted abstracts will be allocated 25 minutes for presentation and 15 minutes for discussion.
Revised conference papers (8,000-10,000 words), to be considered for publication, are due by June 30, 2026.
For further inquiries, please contact Perica Jovchevski via the button below.
Papers presented at the conference will be considered for publication in an edited volume on the conference theme. Please note that acceptance to present at the conference does not guarantee publication.
Conference Programme
Please keep an eye on this page for updates!
Organisation
- Perica Jovchevski, Post-doctoral Researcher in the section of Ethics and Philosophy of Technology at TU Delft.
- Stefan Buijsman, Associate Professor Responsible AI at TU Delft.
