
The search returned 6 results.

Two Necessary Approaches to a Privacy-Friendly 2033 (Journal Article)

Alexander Dix

European Data Protection Law Review, Volume 9 (2023), Issue 3, pp. 305–310

Niels Bohr, the Danish Nobel prize winner, is known to have said, ‘Prediction is very difficult, especially if it’s about the future!’ It is therefore rather futile to try to predict the future of data and privacy protection. Instead, this article puts forward two conditions (inter alia) which should be met to ensure that privacy and data protection have the same or an even better standing than at present. They are of a regulatory and a non-regulatory nature. On the regulatory level, it is suggested that the responsibility for implementing existing legal rules should no longer be restricted to controllers. On the non-regulatory level, the key importance of improving media literacy and raising public awareness as a condition for individual autonomy in the digital age is stressed. Keywords: artificial intelligence, informational self-determination, media literacy, privacy by design, product liability




Forgetful AI: AI and the Right to Erasure under the GDPR (Journal Article)

Tiago Sérgio Cabral

European Data Protection Law Review, Volume 6 (2020), Issue 3, pp. 378–389

Artificial Intelligence, and specifically Machine Learning, depends on data for its development and continuous evolution. Frequently, the information used to train Machine Learning algorithms is personal data and thereby subject to the rules contained within the GDPR. If the necessary requirements are fulfilled, Article 17 of the GDPR grants the data subject the right to request from the controller the erasure of personal data concerning him or her. In this paper we study the impact of the right to erasure under the GDPR on the development of Artificial Intelligence in the European Union. We assess whether datasets, mathematical models and the results of applying such models to new data need to be erased pursuant to a valid request from the data subject. We also analyse the challenges created by such erasure, how they can be minimised, and the most adequate legal interpretations to ensure seamless AI development that remains compatible with the principles of privacy and data protection currently in force within the European Union. Keywords: Artificial Intelligence, GDPR, Right to Erasure


Machine Learning in Medicine: Opening the New Data Protection Black Box (Journal Article, open access)

Agata Ferretti, Manuel Schneider, Alessandro Blasimme

European Data Protection Law Review, Volume 4 (2018), Issue 3, pp. 320–332

Artificial intelligence (AI) systems, especially those employing machine learning methods, are often considered black boxes, that is, systems whose inner workings and decisional logics remain fundamentally opaque to human understanding. In this article, we set out to clarify what the new General Data Protection Regulation (GDPR) says on profiling and automated decision-making employing opaque systems. More specifically, we focus on the application of such systems in the domain of healthcare. We conducted a conceptual analysis of the notion of opacity (black box) using concrete examples of existing or envisaged medical applications. Our analysis distinguishes among three forms of opacity: (i) lack of disclosure, (ii) epistemic opacity, and (iii) explanatory opacity. For each type of opacity, we discuss where it originates from, and how it can be dealt with according to the GDPR in the context of healthcare. This analysis can offer insights regarding the contested issue of the explainability of AI systems in medicine, and its potential effects on the patient-doctor relationship. Keywords: Artificial Intelligence, Machine Learning, Black Box, Medicine, GDPR, Transparency


Machine Learning for Diagnosis and Treatment: Gymnastics for the GDPR (Journal Article)

Robin Pierce

European Data Protection Law Review, Volume 4 (2018), Issue 3, pp. 333–343

Machine Learning (ML), a form of artificial intelligence (AI) that produces iterative refinement of outputs without human intervention, is gaining traction in healthcare as a promising way of streamlining diagnosis and treatment, and is even being explored as a more efficient alternative to clinical trials. ML is increasingly identified as an essential tool in the arsenal of Big Data for medicine: it can process and analyse data to produce outputs that inform treatment and diagnosis. Consequently, ML is likely to occupy a central role in precision medicine, an approach that tailors treatment to the characteristics of individual patients instead of traditional ‘average’ or one-size-fits-all medicine, potentially optimising outcomes as well as resource allocation. ML falls into a category of data-reliant technologies that have the potential to enhance healthcare in significant ways. However, as ML assumes a growing role in healthcare, concerns about data protection and the GDPR arise, prompting questions about the extent to which the GDPR and related legislation will be able to provide adequate data protection for data subjects. Focusing on issues of transparency, fairness, storage limitation, purpose limitation and data minimisation, as well as specific provisions supporting these principles, this article examines the interaction between ML and data protection law. Keywords: Machine Learning, GDPR, Data Protection, Artificial Intelligence in Medicine, Health Data, Automated Processing, Data Minimisation

