
Both, Between, Beyond: Ethics and Epistemology of AI

September 24, 12.30 – September 26, 14.30

Registrations are closed for this event

Mondai | House of AI is happy to host the upcoming workshop on the Ethics and Epistemology of AI: Both, Between, Beyond.

(This event will be held in English.)

General Programme
Wednesday September 24th
12.30 Welcome
13.00 – 17.30 Keynote and Talks
17.30 Networking Drinks

Thursday September 25th
10.15 Welcome
10.30 – 16.00 Keynotes and Talks
18.00 Dinner

Friday September 26th
09.45 Welcome
10.00 Keynote and Talks
14.30 End

This workshop focuses on the interplay between epistemological and ethical questions arising from the use of AI systems. So far, the central epistemologically and ethically relevant aspects of these technologies have largely been analyzed in isolation. For example, epistemic limitations of these systems, such as their opacity, have been at the center of the epistemological debate but have only been marginally addressed in ethical studies. Conversely, issues of responsibility, fairness, and privacy, among others, have received considerable attention in discussions on the ethics of AI. Although some efforts have been made in the literature to bring these two dimensions together (Russo et al., 2023; Pozzi and Durán, 2024), more needs to be said to tackle the relevant and philosophically interesting issues that fall at their intersection.

Against this background, this workshop aims to bring together scholars working on topics at the intersection between the ethics and epistemology of AI, focusing on different philosophical traditions and perspectives.

Programme (updated!)

Wednesday September 24th
12.30 – 13.00 Welcome and walk-in
13.00 – 14.00 Keynote by Claus Beisbart: “AI and Non-Epistemic Values. Insights from the Debate on Science and Values”
14.00 – 14.20 Break
14.20 – 15.00 Talk by Tuğba Yoldaş: “Virtue Epistemology and Responsible Knowing with Generative AI”
15.00 – 15.40 Talk by Anna Smajdor and Yael Friedman: “Epistemological and Ethical Dimensions of Synthetic Data for Trustworthy AI”
15.40 – 16.00 Break
16.00 – 16.40 Talk by Giacomo Figà Talamanca and Niel Conradie: “The Significance of Vulnerability for being Trustworthy about AI”
From 17.30 Informal gathering and drinks!

Keynote abstract (Claus Beisbart)

To what extent do AI systems incorporate non-epistemic values, such as moral values? And to what degree must they do so? To answer these questions, I propose examining the debate on values in science. In this debate, proponents of value-free science have tried to show that science can be kept free of non-epistemic values. Detractors have argued that this is neither possible nor desirable, mainly drawing on Rudner’s argument. In this talk, I will first summarize key insights and arguments from the debate on values in science. I will then draw consequences for the question of whether AI systems do, or must, incorporate tradeoffs between non-epistemic values. A key conclusion is that the answer depends on how AI is used. Some moves made in the debate on values in science can be used to propose uses of AI that minimize the impact of non-epistemic values. However, it is another question whether such uses are feasible in practice.

Thursday September 25th
10.15 – 10.30 Walk-in
10.30 – 11.10 Talk by Shaoyu Han: “Accidental Hate Speech and the Extended Mind: Rethinking Epistemic Responsibility in AI-Generated Content”
11.10 – 11.50 Talk by Johan Largo: “Human-AI Interaction, Epistemic Credit and Moral Responsibility”
12.00 – 13.00 Lunch Break
13.00 – 13.40 Talk by Hatice Tülün: “The Invisible Third: The Ethical and Epistemic Role of Recommender Systems in Mediated Social Interactions”
13.40 – 14.20 Talk by Karim Barakat: “Algorithmic Censorship and the Public Sphere”
14.20 – 14.40 Break
14.40 – 15.40 Keynote by Eva Schmidt: “Engineering a Concept of AI Trustworthiness as Competence and Character”
15.40 – 16.00 Coffee
From 18.30 Dinner

Keynote abstract (Eva Schmidt)

This keynote critiques two prominent views on AI trustworthiness: the skeptical view, which holds that the concept of trustworthiness cannot be applied to AI systems, and the reductive view, which maintains that AI trustworthiness can be reduced to reliability, competence, or well-functioning. Both views are contested by pointing out that something like good character or goodwill is highly relevant when interacting with AI systems. The keynote then proposes that AI systems are trustworthy for a stakeholder just in case they meet two conditions: (1) the system’s goals align with the stakeholder’s goals, and (2) the system is competent in pursuing these goals. Conceptualized in this way, the concept of AI trustworthiness can fulfill several theoretical and practical functions that cannot be fulfilled by the concept of reliability alone. This becomes apparent as soon as one takes the issue to be a conceptual engineering problem.

Friday September 26th
09.45 – 10.00 Walk-in
10.00 – 10.40 Talk by Joshua Hatherley: “Federation Opacity and the Promise of Federated Learning in Healthcare”
10.40 – 11.20 Talk by Omkar Chattar: “What Is Phantom Trust? Ontology and Epistemology in AI Reliance”
11.20 – 11.40 Break
11.40 – 12.20 Talk by Eric Owens: “On Machines and Medical Schools: Machine Learning, Epistemic Institutions and the Physician”
12.20 – 13.30 Lunch Break
13.30 – 14.30 Keynote by Karin Jongsma and Megan Milota: “Ethics, Epistemology and Praxis: Making AI’s Impact on the Diagnostic Workflow Visible”
14.30 Goodbye

Keynote abstract (Karin Jongsma and Megan Milota)

Machine learning and deep learning have proven to be particularly useful for pattern recognition. This may explain why the majority of current AI applications in medicine are used to aid image-based diagnostics in fields like radiology and pathology. In their interactions with these new technologies, medical professionals will have to renegotiate their position and role in the digital transition; they will also have to consider critically which tasks they are willing to outsource to AI tools and which (new) competencies and expertise they need to use these technologies responsibly.

We conducted ethnographic work and produced an ethnographic film to study the ways in which AI influences the daily work of pathologists. The film shows the skills and knowledge pathologists and lab technicians require when conducting their work and provides a clearer image of the responsibilities they bear. In this session, we will screen the film and facilitate an interactive discussion of questions related to the ethics and epistemology of AI in image-driven care.

Workshop Organisers

Giorgia Pozzi (TU Delft)
Chirag Arora (TU Delft)
Juan M. Durán (TU Delft)
Emma-Jane Spencer (Erasmus MC & TU Delft)
