
The search returned 5 results.

The Future AI Act and Facial Recognition Technologies in Public Spaces: Nice to Have or Strictly Necessary? (journal article)

Catherine Jasserand

European Data Protection Law Review, Volume 9 (2023), Issue 4, Page 430 - 443

This article discusses whether the deployment of facial recognition technologies for surveillance and policing purposes is necessary in democratic public spaces, taking into account the tests of necessity established by the European Court of Human Rights and the European Court of Justice in interpreting the limits to the fundamental rights to privacy and data protection. The article focuses, in particular, on the rules proposed by the European Commission in the future AI Act aimed at regulating the use of Remote Biometric Identification systems (RBIs). The generic term ‘RBI’ covers multiple biometric applications, including facial recognition technologies. In its legislative proposal, the European Commission sought to ban the live use of the technologies, except in three situations. The Council agreed with the proposal, while the European Parliament proposed to ban all uses by all actors, except the retrospective use by law enforcement authorities to prosecute specific serious crimes after judicial authorisation. Using the tests developed by the two Courts and the guidance of the European Data Protection Supervisor on necessity, the article assesses whether the proposed rules on RBIs withstand the legal test of necessity. Keywords: FRTs, AI Act, law enforcement, necessity, proportionality


Two Necessary Approaches to a Privacy-Friendly 2033 (journal article)

Alexander Dix

European Data Protection Law Review, Volume 9 (2023), Issue 3, Page 305 - 310

Niels Bohr, the Danish Nobel prize winner, is known to have said ‘Prediction is very difficult, especially if it’s about the future!’ It is therefore rather futile to try to predict the future of data and privacy protection. Instead, this article puts forward two conditions (inter alia) which should be met to ensure that privacy and data protection have the same or an even better standing than at present. They are of a regulatory and a non-regulatory nature. On the regulatory level, it is suggested that the responsibility for implementing existing legal rules should no longer be restricted to controllers. On the non-regulatory level, the key importance of improving media literacy and raising public awareness as a condition for individual autonomy in the digital age is stressed. Keywords: artificial intelligence, informational self-determination, media literacy, privacy by design, product liability



Artificial Intelligence in Medical Diagnoses and the Right to Explanation (journal article)

Thomas Hoeren, Maurice Niehoff

European Data Protection Law Review, Volume 4 (2018), Issue 3, Page 308 - 319

Artificial intelligence and automation are also finding their way into the healthcare sector, with some systems even claiming to deliver better results than human physicians. However, the increasing automation of medical decision-making is accompanied by problems, such as the question of how the relationship of trust between physicians and patients can be maintained and how decisions can be verified. This is where the right to explanation comes into play, which is enshrined in the General Data Protection Regulation (GDPR). This article explains how the right is derived from the GDPR and how it should be established. Keywords: Data Protection, Privacy, AI, Artificial Intelligence, Algorithm


Contesting Automated Decisions: A View of Transparency Implications (journal article)

Emre Bayamlioglu

European Data Protection Law Review, Volume 4 (2018), Issue 4, Page 433 - 446

This paper identifies the essentials of a ‘transparency model’ which aims to scrutinise automated data-driven decision-making systems not by the mechanisms of their operation but rather by the normativity embedded in their behaviour/action. First, transparency-related concerns and challenges inherent in machine learning are conceptualised as ‘informational asymmetries’, concluding that the transparency requirements for the effective contestation of automated decisions go far beyond the mere disclosure of algorithms. Next, essential components of a rule-based ‘transparency model’ are described as: i) the data as ‘decisional input’, ii) the ‘normativities’ contained by the system both at the inference and decision (rule-making) level, iii) the context and further implications of the decision, and iv) the accountable actors. Keywords: Algorithmic Transparency, Automated Decisions, GDPR Article 22
