Machine Learning in Medicine: Opening the New Data Protection Black Box

Open Access


DOI https://doi.org/10.21552/edpl/2018/3/10

Agata Ferretti, Manuel Schneider, Alessandro Blasimme


Artificial intelligence (AI) systems, especially those employing machine learning methods, are often considered black boxes, that is, systems whose inner workings and decision logic remain fundamentally opaque to human understanding. In this article, we set out to clarify what the new General Data Protection Regulation (GDPR) says about profiling and automated decision-making employing opaque systems. More specifically, we focus on the application of such systems in the domain of healthcare. We conducted a conceptual analysis of the notion of opacity (black box) using concrete examples of existing or envisaged medical applications. Our analysis distinguishes among three forms of opacity: (i) lack of disclosure, (ii) epistemic opacity, and (iii) explanatory opacity. For each type of opacity, we discuss where it originates and how it can be dealt with under the GDPR in the context of healthcare. This analysis offers insights into the contested issue of the explainability of AI systems in medicine, and its potential effects on the patient-doctor relationship.
Keywords: Artificial Intelligence, Machine Learning, Black Box, Medicine, GDPR, Transparency

Agata Ferretti, equal contributor, PhD candidate at the Health Ethics and Policy Lab, ETH Zurich. Manuel Schneider, equal contributor, PhD candidate at the Health Ethics and Policy Lab, ETH Zurich. Dr Alessandro Blasimme, Senior Researcher in Bioethics at the Health Ethics and Policy Lab, ETH Zurich. For correspondence: <mailto:alessandro.blasimme@hest.ethz.ch>.
