Symposium and Opening of the Centre for Meaningful Human Control

October 1, 9:00 – 18:00

Mondai | House of AI is glad to host the Symposium and Opening of the new Centre for Meaningful Human Control of TU Delft!

Hosted in collaboration with the Centre for Meaningful Human Control and the TU Delft AI Initiative.

As AI algorithms rapidly revolutionize various sectors – from healthcare through public services to transportation – they also raise concerns over their potential to spiral beyond human control and responsibility. In 2018, TU Delft launched the AiTech initiative to address the transdisciplinary challenges related to designing systems under Meaningful Human Control. Concurrently, the project “Meaningful Human Control over Automated Driving Systems” (2017-2021) further developed and operationalised this concept in the context of driving automation. These and other initiatives have produced impactful research, fostered community building – both national and international, and influenced key policy documents by Dutch and EU authorities.

In this event we invite you to celebrate with us two recent milestones from our MHC community: the release of the first Research Handbook on Meaningful Human Control and the launch of the Centre for Meaningful Human Control.

Register here!

This event is a unique opportunity to engage with leading academics and practitioners in the field! Exchange perspectives – philosophical, legal, and technical – on the challenges and approaches towards keeping meaningful human control over technology.

Programme
08.30 – 09.00 Walk-in and Registration
09.00 – 10.15 Welcome by David Abbink (Scientific Director CMHC) and Luciano Cavalcante Siebert (Co-Director CMHC), interactive session by Nazli Cila and Deborah Forster (Research Team CMHC), keynote by Carles Sierra (Artificial Intelligence Research Institute, Spanish National Research Council)
10.15 – 10.40 Break and Handbook Gallery
10.40 – 12.30 Keynotes by Johannes Himmelreich (Syracuse University), and Tanya Krupiy (Newcastle University), and a Panel Discussion with Authors of the Handbook
12.30 – 13.30 Lunch
13.30 – 14.25 Welcome by Arkady Zgonnikov (Co-Director CMHC). Keynotes by David Abbink (Scientific Director CMHC) and Kim van Sparrentak (European Parliament)
14.25 – 14.50 Break
14.50 – 16.40 Keynotes by Barbara Holder (Embry-Riddle Aeronautical University) and Ilse Harms (Dutch Vehicle Authority), and an interactive session on bringing MHC research to practice
16.40 – 17.00 Official Launch of the Centre for Meaningful Human Control
17.00 Celebratory Drinks!

Morning Programme: Academic Challenges of Meaningful Human Control

Exciting keynotes provide an integrated overview from various academic perspectives – ethical, legal, design, and engineering – and an interactive session with the authors and editors provides a chance to deep-dive into the Handbook on Meaningful Human Control.

On the Engineering of Social Values
By: Carles Sierra (Artificial Intelligence Research Institute)

Ethics in Artificial Intelligence is a wide-ranging field encompassing many open questions regarding the moral, legal and technical issues of using and designing ethically-compliant autonomous agents. Under this umbrella, the computational ethics area is concerned with formulating and codifying ethical principles into software components. In this talk, I will look at a particular problem in computational ethics: engineering moral values into autonomous agents. I will focus on the essential role of human communities in defining social values and their associated norm-based social contracts.

Carles Sierra is the Director of the Artificial Intelligence Research Institute (IIIA) of the Spanish National Research Council (CSIC) located in Barcelona. He is the President of EurAI, the European Association of Artificial Intelligence. He has been contributing to Artificial Intelligence research since 1985 in the areas of Knowledge Representation, Auctions, Electronic Institutions, Autonomous Agents, Multiagent Systems and Agreement Technologies. He is a Fellow of the European Association of AI, EurAI, and recipient of the ACM/SIGAI Autonomous Agents Research Award 2019.

For Knowledge and Commitment — Or: What’s the Point of Meaningful Human Control?
By: Johannes Himmelreich (Syracuse University)

Johannes suggests that Meaningful Human Control (MHC) may be missing the point. Theories of MHC typically concentrate on intention, intervention, and action; and the theories seek to warrant moral responsibility and avoid harms. That, he takes it, is the point of MHC. But this typical focus on intention, action, and intervention misses this point, or so he argues. To ensure responsibility, knowledge matters more than intention or intervention. And to avoid certain outcomes, a commitment to refrain from acting may matter more than maintaining human control. In fact, with partially superhuman Artificial Intelligence we need both. We need to *know* when AI outperforms humans to then *commit* to defer to AI. This often avoids harmful outcomes without undermining moral responsibility.

Johannes Himmelreich is a philosopher who teaches and works in a policy school. He is an Assistant Professor in Public Administration and International Affairs in the Maxwell School at Syracuse University. He works in the areas of political philosophy, applied ethics, and philosophy of science. Currently, he researches the ethical quandaries that data scientists face, how the government should use AI, and how to check for algorithmic fairness under uncertainty. He published papers on “Responsibility for Killer Robots,” the trolley problem and the ethics of self-driving cars, as well as on the role of embodiment in virtual reality. He holds a PhD in Philosophy from the London School of Economics (LSE). Prior to joining Syracuse, he was a post-doctoral fellow at Humboldt University in Berlin and at the McCoy Family Center for Ethics in Society at Stanford University. During his time in Silicon Valley, he consulted on tech ethics for Fortune 500 companies, and taught ethics at Apple.

Governance of the Human-AI Coupling
By: Tanya Krupiy (Newcastle University)

Juliane Beck and Thomas Burri discuss the fact that the debate over what constitutes meaningful human control over artificial intelligence systems has been largely confined to the context of military applications of artificial intelligence. Scholars such as Jonathan Kwik and Frank Flemisch et al. have proposed various definitions of meaningful human control in the context of the use of artificial intelligence systems. Article 14 of the Artificial Intelligence Act 2024 gives effect to the aspiration to have human oversight over the operation of high-risk artificial intelligence systems. This presentation will examine what duties the Netherlands has under the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW) with regard to governing human-artificial intelligence coupling in organisations that employ artificial intelligence as part of their decision-making processes. It will conclude that the Netherlands needs to enact legislation in order to comply with its international human rights law obligations under CEDAW. This stems from the fact that there is a tension between obligations flowing from the CEDAW and the Artificial Intelligence Act 2024.

Tanya Krupiy is a lecturer in digital law, policy and society at Newcastle University. She has expertise in international human rights law, international humanitarian law and international criminal law. Tanya holds a Master of Laws with distinction in public international law from the London School of Economics and Political Science. She gained a Doctor of Philosophy in law from the University of Essex. She received funding from the Social Sciences and Humanities Research Council of Canada to carry out a postdoctoral fellowship at McGill University in Canada. Thereafter, she was a postdoctoral fellow at Tilburg University. Tanya’s research appears in Oxford University Press, University of Melbourne and Brill publications among others.


Afternoon Programme: Pragmatic Challenges of Meaningful Human Control

In the afternoon we focus on the practical challenges of meaningful human control. Via talks and interactive sessions around real-world challenges from international experts, we take a closer look at sectors like the aviation and automotive industries, and Dutch and European policy making. We also take the time to explore how you might interact with the Centre's network and expertise.

At the end of the day we also officially launch and celebrate our Centre for Meaningful Human Control!

Tech Regulation: the way towards Ethical AI
By: Kim van Sparrentak (European Parliament)

Kim van Sparrentak is a Dutch politician who has been serving as a Member of the European Parliament for the GroenLinks political party since 2019. She co-wrote legislation that limited the influence of major tech companies and that granted municipalities greater discretion in regulating which properties can be rented out for short-term homestays through platforms such as Airbnb. Van Sparrentak was re-elected in June 2024 as the fourth candidate on the shared GroenLinks–PvdA list, which won a plurality in the Netherlands with eight seats.


Who’s flying this thing?! Considerations for a shared Human-Automation Future
By: Barbara Holder (Embry-Riddle Aeronautical University)

Barbara Holder is Associate Professor and Presidential Fellow in the College of Aviation at Embry-Riddle Aeronautical University (ERAU). Before joining ERAU in November 2021, Holder had worked since 2015 as a fellow in Advanced Technology at Honeywell Aerospace, where she studied human-machine issues across a wide range of aircraft. Earlier, she spent 15 years with The Boeing Company. There, she was an associate technical fellow and lead scientist of the Flight Deck Concept Center. Holder was a post-doctoral research fellow at NASA Ames Research Center where she investigated how pilots come to understand the auto-flight system of the Airbus A320 while flying the line.

Holder is chair of the Human Factors Subcommittee to the U.S. Federal Aviation Administration’s (FAA) Research, Engineering and Development Advisory Committee.  She is also a member of the FAA’s Air Carrier Training Aviation Rulemaking Committee’s Flight Path Management Working Group. She is a fellow of the Royal Aeronautical Society. She has nine patents and multiple scholarly publications.

The Practical Challenges of Interacting with Automated Cars
By: Ilse Harms (Dutch Vehicle Authority)

Driving a car is a complex and dynamic task. It entails tasks ranging from motor-executive ones, such as keeping the car within its lane, to more cognitive ones, such as understanding the driving environment to decide whether it is safe to overtake, or keeping track of which exit to take. These days, in-vehicle systems are increasingly assisting with, or taking over, parts of the driving task, up to the point that humans feel the car is actually driving itself. Under specific conditions, some cars can indeed take full control of the driving task. This interplay between the human driver and the machine driver has design implications for the vehicle, which need to be assessed in vehicle type approval. Drawing on her work at the Dutch Vehicle Authority, Ilse will share some of the practical challenges related to human control in the context of assisted and automated driving.

“Human Factors is an integral part of mobility.” This combination is also the recurring theme in Ilse Harms’s career. Ilse is a traffic psychologist who enjoys working at the intersection of theory and practice. She conducted her PhD research at the University of Groningen while working for the Dutch government.

Currently she works at RDW – the Dutch Vehicle Authority – where she is a leading figure in the field of human factors and vehicle automation. Furthermore, Ilse has successfully worked to get the topic of human factors into Euro NCAP's Vision 2030. At Euro NCAP she is both the alternate director for the Netherlands and the Chair of the HMI & Human Factors Working Group.

A Moment of Celebration!

We are very excited to celebrate the official launch of the Centre for Meaningful Human Control over systems with autonomous capabilities with you. The mission of the Centre is to connect academics and practitioners to better conceptualise, design, implement, and assess systems under meaningful human control. We strive to be a lighthouse for collaboration among multiple stakeholders: to leverage interdisciplinary expertise, existing initiatives at TU Delft, and an international network of collaborators at the forefront of research and practice on meaningful human control.

Register here!

