"Stereotyping and Medical AI" : Sowerby Philosophy & Medicine Project Online Summer Colloquium
- Jun 9, 2021
- 3 min read
The Sowerby Philosophy & Medicine Project are very pleased to announce our Summer Colloquium Series on Stereotyping and Medical AI.
The series commences on the 17th of June at 5:00 pm GMT+1 with a talk by Professor Erin Beeghly (Utah) on “Stereotyping and Prejudice: The Problem of Statistical Stereotyping”!
You can register for the event on our Eventbrite site here.
Stereotyping and Medical AI
Summer Colloquium Series by the Sowerby Philosophy and Medicine Project
The aim of this fortnightly colloquium series on Stereotyping and Medical AI is to explore philosophical and in particular ethical and epistemological issues around stereotyping in medicine, with a specific focus on the use of artificial intelligence in health contexts. We are particularly interested in whether medical AI that uses statistical data to generate predictions about individual patients can be said to “stereotype” patients, and whether we should draw the same ethical and epistemic conclusions about stereotyping by artificial agents as we do about stereotyping by human agents, i.e., medical professionals.
Other questions we are interested in exploring as part of this series include but are not limited to the following:
How should we understand “stereotyping” in medical contexts?
What is the relationship between stereotyping and bias, including algorithmic bias? And how should we understand “bias” in different contexts?
Why does stereotyping in medicine often seem less morally or epistemically problematic than stereotyping in other domains, such as in legal, criminal, financial, educational, etc., domains? Might beliefs about biological racial realism in the medical context explain this asymmetry?
When and why might it be wrong for medical professionals to stereotype their patients? And when and why might it be wrong for medical AI, i.e. artificial agents, to stereotype patients?
How do (medical) AI beliefs relate to the beliefs of human agents, particularly with respect to agents’ moral responsibility for their beliefs?
Can non-evidential or non-truth-related considerations be relevant with respect to what beliefs medical professionals or medical AI ought to hold? Is there moral or pragmatic encroachment on AI beliefs or on the beliefs of medical professionals?
What are the potential consequences of patients or doctors being stereotyped by medical professionals or by medical AI? Can, for example, patients be doxastically wronged by doctors or AI in virtue of being stereotyped by them?
We will be tackling these topics through a series of online colloquia hosted by the Sowerby Philosophy and Medicine Project at King's College London. The colloquium series will feature a variety of contributors from across the disciplinary spectrum. We hope to ensure a discursive format with time set aside for discussion and Q&A from the audience. This event is open to the public and all are very welcome.
And we are again very pleased that our first speaker in this series of colloquia will be:
Professor Erin Beeghly (Utah) - "Stereotyping and Prejudice: The Problem of Statistical Stereotyping"
Time: 17th of June, 17:00–18:30 GMT+1
Register for the event here via Eventbrite.
And you can find out more about this series and the Philosophy & Medicine Project here.
All best wishes, and we very much hope you can join us!
The Organizers (Professor Elselijn Kingma, Winnie Ma, Harriet Fagerberg and Eveliina Ilola)
--
Winnie Ma
Research Associate and Project Manager at the Sowerby Philosophy & Medicine Project
PhD Student in Philosophy at King's College London