Legal Risks of Artificial Intelligence (Author: Anar Mahmudov)

Artificial intelligence (AI) is everywhere, and its development, deployment and use are moving forward rapidly and contributing to the global economy. AI brings many benefits, such as improvements in creativity, services, safety and lifestyles and help in solving problems, yet at the same time it raises many concerns, such as adverse impacts on human autonomy, privacy, and fundamental rights and freedoms (OECD 2019).

The legal discourse on the legal and human rights issues of artificial intelligence (AI) is well established, with many detailed legal analyses of specific individual issues. But this field is a regulatory moving target, and there is a need for an exploratory treatment that looks at the breadth of issues, curated in a single place. Also critically missing is a greater discussion and mapping of vulnerability to such issues.

The essay’s main research question is: What are the main legal risks related to AI?

Discrimination and biased AI systems

Artificial intelligence systems have the potential to reinforce pre-existing human biases. A machine has no predetermined concept of right and wrong, only those programmed into it, and a system that can learn for itself and act in ways unforeseen by its creators may act contrary to its original intentions. While the big data on which AI is based is extensive, it is neither complete nor perfect. This imperfect data feeds algorithms and AI, and can ‘bake discrimination into algorithms’. As a result, human biases may be accentuated rather than resolved.

The truth is that many AI devices are better than human beings at identifying small differences. However, algorithms and machine learning may also develop false correlations between appearance, origin or other human attributes that replicate and extend discriminatory practices. Several recent controversies have illustrated this type of bias in a particularly shocking way. In 2015, Google Photos, a face recognition software, caused an uproar when two young African Americans realized that one of their photos had been tagged as ‘Gorillas’. An algorithm's or model's results may be incorrect or discriminatory if the training data paints a biased picture of reality, or if it has no relevance to the area in question. Such use of personal data would contravene the fairness principle.
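To make this mechanism concrete, the short Python sketch below (an illustration of my own, not drawn from any system discussed in this essay; all data and names are hypothetical) shows how a model trained on historically biased decisions reproduces that bias, and how a simple disparate-impact audit can surface it:

```python
# Minimal sketch: a model learns from labels shaped by past biased decisions,
# and a disparate-impact audit reveals the skew. Everything here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n = 2000
group = rng.integers(0, 2, size=n)       # 0 = group A, 1 = group B (protected attribute)
merit = rng.normal(0.0, 1.0, size=n)     # a legitimate predictor
# Past human decisions penalized group B, so the training labels are skewed.
label = (merit - 0.8 * group + rng.normal(0.0, 0.5, size=n) > 0).astype(int)

X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, label)
approved = model.predict(X)

# Selection rate per group; a ratio well below 0.8 (the "four-fifths rule")
# is a common red flag for indirect discrimination.
rate_a = approved[group == 0].mean()
rate_b = approved[group == 1].mean()
print(f"approval rate A={rate_a:.2f}, B={rate_b:.2f}, ratio={rate_b / rate_a:.2f}")
```

The point of the sketch is that the model faithfully learns the discrimination already baked into its training labels; nothing in the learning procedure itself corrects for it.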

Unfairness, bias and discrimination repeatedly surface as issues and have been identified as a major challenge in the use of algorithms and automated decision-making systems, e.g., to make decisions related to health, employment, credit, criminal justice, and insurance. For instance, in August 2020, the controversial exam algorithm used to assign grades to GCSE students in England prompted protests and the expectation of legal challenges.

A focused paper from the EU Agency for Fundamental Rights outlines the potential for discrimination against individuals via algorithms and states that “the principle of non-discrimination, as enshrined in Article 21 of the Charter of Fundamental Rights of the European Union, needs to be taken into account when applying algorithms to everyday life” (FRA 2018). It cites examples with potential for discrimination: the automated selection of candidates for job interviews and the use of risk scores in creditworthiness assessments or in trials. In its report on the fundamental rights implications of big data (privacy, data protection, non-discrimination, security and law enforcement), the European Parliament stressed that “because of the data sets and algorithmic systems used when making assessments and predictions at the different stages of data processing, big data may result not only in infringements of the fundamental rights of individuals but also in differential treatment of and indirect discrimination against groups of people with similar characteristics, particularly with regard to fairness and equality of opportunities for access to education and employment, when recruiting or assessing individuals or when determining the new consumer habits of social media users”.

It is, therefore, clear that AI systems may create inequalities and discrimination, thus posing a legal risk.

Liability for harms

The deployment and use of AI technologies pose risks that can cause damage to persons and property: a driverless car running over a pedestrian, a partially operated drone crashing and causing damage, or an AI software programme delivering a wrongful medical diagnosis. As many parties are involved in an AI system (the data provider, designer, manufacturer, programmer, developer, user and the AI system itself), liability is difficult to establish when something goes wrong, and many factors must be taken into consideration.

As outlined by the Assessment List for Trustworthy AI (ALTAI), accountability calls for mechanisms to be put in place to ensure responsibility for the development, deployment and use of AI systems: risk management, and identifying and mitigating risks in a transparent way that can be explained to and audited by third parties. Accountability in AI requires both the function of guiding action (by forming beliefs and making decisions) and the function of explanation (by placing decisions in a broader context and by classifying them along with moral values). The accountability gap is a worse problem than it might first seem, causing problems in three areas: causality, justice, and compensation. According to the Privacy International and Article 19 report, even when potential harm is found, it can be difficult to hold those responsible accountable for violations.
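As one illustration of the kind of auditability mechanism ALTAI envisages (a sketch of my own, not an ALTAI artefact; every name, path and value below is hypothetical), an AI service can record each automated decision together with its inputs, model version and rationale, so that a third party can later reconstruct and audit it:

```python
# Minimal sketch (hypothetical): an append-only decision log capturing what a
# third-party auditor would need to trace a decision back to its inputs.
import json
import hashlib
from datetime import datetime, timezone

LOG_PATH = "decision_audit.log"  # hypothetical location

def log_decision(model_version: str, inputs: dict, decision: str, rationale: str) -> str:
    """Append one auditable record and return its content hash."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
        "rationale": rationale,  # the 'function of explanation' mentioned above
    }
    payload = json.dumps(record, sort_keys=True)
    record_hash = hashlib.sha256(payload.encode()).hexdigest()
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps({"hash": record_hash, **record}) + "\n")
    return record_hash

# Example usage with made-up values:
log_decision(
    model_version="credit-scorer-1.3",
    inputs={"applicant_id": "A-1024", "income": 42000, "region": "X"},
    decision="declined",
    rationale="score 0.41 below approval threshold 0.50",
)
```

Hashing each record makes after-the-fact tampering detectable, which is what gives third-party audits, and ultimately claims for compensation, something reliable to work from.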

Affected human rights principles

International human rights treaties lay down obligations that their signatories are bound to respect and fulfil. States must refrain from interfering with rights and take positive actions to fulfil their enjoyment. While none of them currently explicitly applies to or mentions ‘artificial intelligence’ or ‘machine learning’, their broad and general scope would cover most of the risks and challenges identified.

AI legal risks and the human rights principles that might be affected:

  • Cybersecurity vulnerabilities: the right to privacy; freedom of expression and the free flow of information.
  • Unfairness, bias and discrimination: elimination of all forms of discrimination against women; equal rights of men and women; enjoyment of children’s rights without discrimination; equality before the law and equal protection of the law without discrimination; enjoyment of prescribed rights without discrimination; non-discrimination and the right to life of migrant workers; right to liberty and security of the person; prohibition of discrimination on the basis of disability; right to a fair trial; right to freedom from discrimination.
  • Privacy and data protection issues: migrants’ right to privacy; respect for the privacy of persons with disabilities; right to respect for private and family life; right to privacy and data protection; children’s privacy; protection of the integrity, privacy and intimacy of older persons.
  • Liability issues related to damage caused and lack of accountability for harms: right to life; right to effective remedies.

Vulnerability

It is not enough to simply outline the legal issues, gaps and challenges and the human rights principles AI implicates. The frame of ‘vulnerability’ helps consolidate the identification of critical areas of concern and guides legal AI risk and impact mitigation efforts to better protect human and societal well-being.

More specifically, in human rights discourse the term vulnerability is used to indicate a heightened susceptibility of certain individuals or groups to being harmed or wronged by others or by the state. Populations that are particularly prone to being harmed, exploited or discriminated against include, among others, children, women, older people, people with disabilities, and members of ethnic or religious minority groups. This does not mean that these groups are being elevated above others. Characterizing them as ‘vulnerable’ simply reflects the hard reality that these groups are more likely than others to encounter discrimination or other human rights violations, and all of these categories are implicated in some form or manner in the legal issues and human rights principles at stake.

The use and deployment of AI technologies disproportionately affect vulnerable groups. For example, the UNESCO COMEST Preliminary Study on the Ethics of Artificial Intelligence discusses the Allegheny Family Screening Tool (AFST), a predictive model used to forecast child neglect and abuse. It states that the tool “exacerbates existing structural discrimination against the poor and has a disproportionately adverse impact on vulnerable communities” by oversampling the poor and using proxies to understand and predict child abuse in a way that inherently disadvantages poor working families. There are also concerns about “increasingly relying on technology to collect personal data of vulnerable people such as migrants and refugees,” which may “create additional bureaucratic processes that could lead to exclusion from protection.” Children, for instance, are particularly vulnerable: as the UK Information Commissioner’s Office (ICO) explains, “they may be less able to understand how their data is being used, anticipate how this might affect them, and protect themselves against any unwanted consequences”. Finally, AI-powered, data-driven and data-intensive economies might be more lucrative or attractive targets for cyber attacks, given their expansive use of, and dependence on, AI and big data.

List of References

  • Accuracy and fairness for juvenile justice risk assessments. Journal of Empirical Legal Studies (2019).
  • Artificial intelligence in cybersecurity is vulnerable. SC Magazine (2019). https://www.scmagazine.com/home/opinion/artifical-intelligence-in-cybersecurity-is-vulnerable/
  • Artificial intelligence: privacy and legal issues. CPO Magazine (2018). https://www.cpomagazine.com/data-privacy/artificial-intelligence-privacy-and-legal-issues/
  • Council of Europe (2018). Discrimination, artificial intelligence, and algorithmic decision-making. Study by Prof. Frederik Zuiderveen Borgesius, Professor of Law, Institute for Computing and Information Sciences (iCIS), Radboud University Nijmegen, and Researcher at the Institute for Information Law, University of Amsterdam (the Netherlands). https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73
  • European Parliament (2017). Resolution of 14 March 2017 on fundamental rights implications of big data: privacy, data protection, non-discrimination, security and law-enforcement.
  • Human dignity of the vulnerable in the age of rights. Springer, Cham (2016), pp. 257-272.
  • Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, Volume 4, December 2020, 100005. https://www.sciencedirect.com/science/article/pii/S2666659620300056
  • Liability for Artificial Intelligence and other emerging digital technologies.
  • On the legal responsibility of autonomous machines. Artificial Intelligence and Law, 25(3) (2017), p. 293.
  • Policing by machine: Predictive policing and the threats to our rights. H. Couchman (2019).
  • Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 55(4) (2018), pp. 1143-1185.
