Ethics in AI

Guide for Artificial Intelligence Ethical Requirements Elicitation


Principles

The rapid spread of software that makes use of AI techniques, mostly ML, has amplified both the occurrence of accidents and the awareness of the associated ethical issues [1]. In the literature, ethics in AI has generally been addressed at the theoretical level, through ethical guidelines [2]. In the last three years, there has been a veritable proliferation of organisations publishing guidelines seeking to provide normative guides to AI ethics [3,4]. As of November 2019, at least 84 of these initiatives had published reports describing ethical principles, values or other high-level abstract requirements for the development and deployment of AI [2]. Given this high number of publications, the terms sometimes appear interchangeably in the papers, as in the Asilomar AI Principles, which present principles composed of guidelines. We assume throughout this guide that the guidelines -- the guides -- contain the principles of AI ethics.

Whilst the existence of guidelines and principles is necessary, little practical direction exists for developers -- those responsible for implementing ethics in AI-based systems -- to apply them in real-world contexts, especially under the pressure of market deliverables [2], where ethical considerations are often treated as a software quality to be addressed only after deployment [5]. Furthermore, developers do not receive adequate training in AI ethics, either within development projects or during their education. There are also no legal consequences for failing to implement AI ethics: the guidelines found in the literature and proposed by organisations are typically non-binding, and AI development is not a formally regulated profession. To clarify: "Reports and guidance documents for AI ethics are examples of what is called policy instruments of a non-binding character or soft law" [6]. Thus, there is neither incentive nor sanction for developers in the area of AI ethics. In this sense, binding laws are paramount to effectively align public interests with development practice in the context of AI.

Legally binding documents, backed by legislation, provide the actors involved in the process with real binding responsibilities and rights. These types of documents are called binding or hard law. We present the two most prominent examples. The first is the European Union's (EU) General Data Protection Regulation (GDPR), which came into force on 25 May 2018 and has been hugely influential in establishing safeguards for personal data protection in today's technological environment. Several countries outside the EU, Brazil among them, have adopted data protection rules analogous to or inspired by the GDPR, which is increasingly recognised for its high standard of data protection. Aimed at empowering EU citizens to control their data and protecting them from data and privacy breaches, the GDPR applies to all relevant actors within the EU and to those who process, monitor or store EU citizens' data outside the EU [7]. The second is Brazil's General Law on Personal Data Protection (LGPD), which came into force on 18 September 2020 with the sanction of Law 14.058/2020, originating from Provisional Measure (MP) 959/20. The LGPD defines "rights of individuals in relation to their personal information and rules for those who collect and handle these records with the aim of protecting the fundamental rights of freedom and privacy and the free development of the personality of citizens". An effort towards harmonisation between AI ethics guidelines (non-binding) and legislation (legally binding) is an important next step for the global community [6].

AI ethics guidelines contain ethical principles, and each published guideline defines its own set of principles. In the literature, most studies focus on the conceptual side of AI ethics; one strand of this work is the compilation, presentation and evaluation of ethics guidelines and their principles. Several authors have used different methodologies to explore sets of documents and extract the most recurrent principles and their definitions, usually concluding that the principles are too general, highly abstract and difficult to apply in real contexts, and that they overlap with one another.


Ryan and Stahl [8] conducted a rigorous study, with a robust methodology, that reviewed a set of guidelines and compiled the detailed guidance available, presenting a list of principles aimed at developers and users. To the best of our knowledge, it is the study whose methodology encompasses the largest number of guidelines and definitions, and it also presents a comprehensive taxonomy.

We have included below the authors' original text for readability.


1. Transparency

Transparency has quickly become one of the most widely discussed principles within the AI ethics debate, with Floridi (2019) and the High-Level Expert Group on AI (2019) viewing it as a defining characteristic within the debate. Transparency can typically be understood in two ways: the transparency of the AI technology itself and the transparency of the AI organisations developing and using it. Throughout our analysis, transparency was regularly discussed directly, or in relation to processes required to ensure it, such as explainability, understandability and communication.


1.1 Transparency.

AI developers need to ensure transparency because it protects many other requirements – such as the fundamental human rights, privacy, dignity, autonomy and well-being (UNI Global Union, 2017). Organisations using AI should be transparent about their aim for using AI, benefits and harms and potential outcomes that may occur (IBM, 2017). AI developers should ensure transparency because it allows consumers to make informed choices about sharing their data and using AI (ADMA, 2013). 


1.2 Explainability.

AI must be subject to active monitoring to ensure that they are producing accurate results (Algo.Rules, 2019). AI organisations should document how their AI makes certain decisions and be able to reproduce them for audits (SIIA, 2017). AI should be explainable to external algorithmic auditing bodies to ensure the technical and ethical functionality of their AI. If there is a tension between performance and explainability, this should be clearly identified (Cerna Collectif, 2018). 


1.3 Explicability.

AI organisations (i.e. organisations using or developing AI) should be able to intelligibly explain the data that goes in, the data coming out, what their algorithms do, and their objective for doing so (Demiaux and Abdallah, 2017, p. 51). AI organisations should ensure traceability and explicability to guarantee safety (OECD 2019). AI needs to have a strong degree of traceability to ensure that if harms arise, they can be traced back to the cause (IEEE, 2017). Data should be traceable back to where, how and when it was captured, retrieved, cleaned and analysed (Cerna Collectif, 2018). Decisions made by AI should be reproducible by external auditors (AMA, 2018).
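As an illustration of how a development team might operationalise this traceability guidance, the minimal Python sketch below appends a provenance record each time a dataset is captured, cleaned or analysed, so the data can later be traced back for audits. The file paths, field names and logging format are illustrative assumptions, not part of any guideline.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_provenance(log_path, step, dataset_path, description):
    """Append one traceability record: what was done to which data, and when."""
    with open(dataset_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,                 # e.g. "captured", "cleaned", "analysed"
        "dataset": dataset_path,
        "sha256": digest,             # lets auditors verify the exact file that was used
        "description": description,
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")

# Example (illustrative paths): record the cleaning step of a training set.
# log_provenance("provenance.jsonl", "cleaned", "data/train.csv",
#                "Removed rows with a missing consent flag")
```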


1.4 Understandability.

AI organisations need to implement appropriate methods to monitor the data, algorithms and the decisions that will be arrived at by those processes, and for actions taken by AI to be comprehensible by human beings (European Parliament, 2017). AI organisations should understand how their AI works and explain the technical functioning and decisions reached by those technologies, whenever possible (Floridi et al., 2018).


1.5 Interpretability.

While there is a degree of opaqueness in some machine-learning technologies, AI organisations should be able to understand how a decision was reached and how human oversight ensures that harms caused by algorithmic black-boxing are addressed and prevented (IEEE, 2019). High-stake domains (such as health care, criminal justice and welfare) should reconsider using black-box AI altogether (AI Now Institute, 2017). Algorithmic reviews should be done on a regular basis to determine if they are fit-for-purpose and interpretable (Algo.Rules, 2019). Organisations should be able to clearly interpret and demonstrate how their AI is abiding by current legislation, such as the general data protection regulation (GDPR), and be able to demonstrate what measures are being taken to ensure compliance (UK Government, 2018). 


1.6 Communication.

End users should be provided with accurate information to ensure that they are not manipulated, deceived, or coerced by AI (High-Level Expert Group on AI, 2019, p. 16). End users should be informed about the intent and outcomes of the technology (IBM, 2018). AI companies should be explicitly clear and discuss in a jargon-free manner, the potential flaws or harm that may arise from their AI (Algo.Rules, 2019). Communication methods may have to change for different industries, expertise and context of use (Floridi et al., 2018). AI organisations should communicate their progress and likelihood to hit particular milestones to governments, so that they can plan for these outcomes (NSTC, 2016a).


1.7 Disclosure.

AI should be designed and used to retrieve little to no personal data, or if required, that any data retrieved is anonymised, encrypted and securely processed, while being able to demonstrate this to a third-party auditor (High-Level Expert Group on AI, 2019). AI should go through internal and external auditing to ensure they are fit for purpose, but the organisation also needs to be able to explain and justify the use of their AI. Organisations should allow for independent analysis and review of their systems (Amnesty International/Access Now, 2018). 


1.8 Showing.

Data should be accurate, up-to-date and fit-for-purpose, and companies should be able to demonstrate this (ICO, 2017). Data quality should be transparent, available for periodic assessment and there should be regular and continued anomaly detection set in place [United Nations Development Group (UNDG), 2017]. Developers of AI should also be able to provide their ethics codes to public authorities, organisational users and where possible, the public (University of Montreal, 2017). This can be achieved through periodic review sessions, appropriate oversight mechanisms and collective responsibility approaches within the organisation (ICDPPC, 2018). It should also be clear to the end user that they are interacting with an AI system, rather than a human (EPSRC, 2011).
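To make periodic data-quality assessment and anomaly detection more concrete, the following minimal Python sketch produces a simple report of missing and out-of-range values for one column before the data is used. The column name, valid range and threshold are illustrative assumptions; real pipelines would check many more properties.

```python
import pandas as pd

def data_quality_report(df, column, valid_min, valid_max, max_missing_ratio=0.05):
    """Minimal periodic data-quality check: missingness and out-of-range values."""
    missing_ratio = float(df[column].isna().mean())
    values = df[column].dropna()
    out_of_range = int(((values < valid_min) | (values > valid_max)).sum())
    return {
        "rows": len(df),
        "missing_ratio": round(missing_ratio, 3),
        "missing_ok": missing_ratio <= max_missing_ratio,
        "out_of_range": out_of_range,   # candidates for review before training
    }

# Toy example (column name and bounds are illustrative):
df = pd.DataFrame({"age": [34, 29, 41, None, 37, 250]})
print(data_quality_report(df, "age", valid_min=0, valid_max=120))
# {'rows': 6, 'missing_ratio': 0.167, 'missing_ok': False, 'out_of_range': 1}
```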


2. Justice and fairness

Discrimination and unfair outcomes stemming from algorithms have become a hot topic within the media and academic circles (O’Neil, 2016). It is not surprising that issues of fairness, equality and equity were repeatedly discussed throughout the ethics guidelines. In addition to simply addressing issues of harm and injustice themselves, many of the guidelines provided recommendations on how to implement steps to minimise these harms. Furthermore, some documents also highlighted how different organisations should implement methods to reverse, remedy and allow fair redress, in instances where harms have occurred.


2.1 Justice.

AI practitioners should identify what levels of justice and fairness can be implemented into the AI system during the design process (NSTC, 2016b). For example, if AI is used within the judicial system in any way, accountability should still lie with the human user, e.g. the judge (Rathenau Institute, 2017, p. 43). In addition, AI will replace many human jobs in the future, so it is important that there are effective and just ways to retrain and retool the human workforce (COMEST/UNESCO, 2017, pp. 52-53). 


2.2 Fairness.

While AI developers may have their own values, they should not develop algorithms with historically unfair prejudices (Latonero, 2018). There should be steps in place to ensure that data being used by AI is not unfair, or contains errors and inaccuracies, that will corrupt the response and decisions taken by the AI (ICO, 2017). To ensure the fairness of AI, their design should be fit for purpose, identify impacts on different aspects of society and should be designed to promote human welfare, rather than endanger it (ICDPPC, 2018). Organisations should consider using fairness-aware data mining algorithms (FATML, 2016). 


2.3 Consistency.

To prevent harmful actions in the decision-making process, organisations should ensure that accurate and representative sample data is collected, analysed and used [IPC Ontario (Information and Privacy Commissioner of Ontario), 2017]. Organisations need to establish procedures to ensure the identification, prevention and the minimisation of inaccuracies in their AI. To achieve this, data should be of the highest quality (UNDG, 2017), external algorithmic auditing should be carried out (Intel, 2017), and there should be consistent, repeated and regular discussions with end users and stakeholders that may be affected (PwC, 2019).


2.4 Inclusion.

AI should not become another tool for exclusion within society (AI for Humanity, 2018). Particular attention should be given to under-represented and vulnerable groups and communities, such as those with disabilities, ethnic minorities, children and those in the developing world (High-Level Expert Group on AI, 2019). Data that is being used should be representative of the target population and should be as inclusive as possible (High-Level Expert Group on AI, 2019). AI organisations should not only reduce exclusion issues but should promote active inclusion of women and minority groups into the development and design of AI (Gilburt, 2019; WEF, 2018). 


2.5 Equality.

AI should not harm, and wherever possible, should promote, the equality of individuals in respect to their rights, dignity and freedom to flourish (The Future Society, 2018; Tieto, 2018). One way equality can be enabled is through greater diversity in AI teams and data sets and designs (Sage, 2017). More steps need to be taken to address sexist, misogynistic and gender-biased harms resulting from some AI (World Wide Web Foundation, 2018).


2.6 Equity.

The aims of AI, generally, should be to empower and benefit individuals, provide equal opportunities while distributing the rewards from its use in a fair and equitable manner (EGE, 2018; IEEE, 2019; SIIA, 2017). AI should be developed so that it can be used within society in a fair and equal way (Japanese Society for Artificial Intelligence, 2017).


2.7 Non-bias.

AI organisations should invest in ways to identify, address and mitigate unfair biases (ICDPPC, 2018). Developers should examine unfair biases at every stage of the development process and should eliminate those found (The Public Voice, 2018). There should be close attention paid to the training data used, potential human biases and bias derived from the results of algorithmic processes (Cerna Collectif, 2018). Developers and organisational users of AI should conduct analysis to identify unfair bias, and there should be explicit attempts to avoid individual and societal bias, continual mechanisms in place and dialogue with stakeholders to raise awareness and reverse any biases detected (IBM, 2018). If there is any indication of unfair bias, the AI organisations should demonstrate the elimination of such bias before a competent authority (Council of Europe, 2017). 
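One concrete way to conduct such an analysis is to compare favourable-outcome rates across groups. The minimal pandas sketch below computes a disparate impact ratio; the data, column names and the 0.8 rule of thumb are illustrative assumptions, and dedicated fairness toolkits offer far more complete metrics.

```python
import pandas as pd

def disparate_impact(df, group_col, outcome_col, privileged, unprivileged):
    """Ratio of favourable-outcome rates between the unprivileged and privileged groups.
    A common rule of thumb treats values below 0.8 as a signal of possible unfair bias."""
    rate_privileged = df.loc[df[group_col] == privileged, outcome_col].mean()
    rate_unprivileged = df.loc[df[group_col] == unprivileged, outcome_col].mean()
    return rate_unprivileged / rate_privileged

# Toy loan-approval data (group and column names are illustrative):
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})
ratio = disparate_impact(df, "group", "approved", privileged="A", unprivileged="B")
print(f"Disparate impact ratio: {ratio:.2f}")   # 0.33 here, well below the 0.8 threshold
```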


2.8 Non-discrimination.

AI should be designed for universal usage and not discriminate against people, or groups of people, based on gender, race, culture, religion, age or ethnicity (Cerna Collectif, 2018). There should be mechanisms in place to effectively prevent, remedy and reverse discriminatory outcomes resulting from AI use (Amnesty International/Access Now, 2018). AI use should not lead to discrimination against individuals or groups of individuals in accordance with the Equality Act 2010, and organisations should create “discrimination impact assessments” to identify issues before their AI are used (AI for Humanity, 2018).


2.9 Diversity.

To promote diversity, AI organisations should instil an inclusionary working environment (Cerna Collectif, 2018), hire teams from a range of backgrounds (IBM, 2018) and disciplines (SAP, 2018), conduct regular diversity sessions and incorporate the viewpoints from a wide range of stakeholders (Amnesty International/Access Now, 2018). Organisations implementing and using AI should encourage a diversity of opinions throughout every stage of its use (Smart Dubai, 2019). 


2.10 Plurality.

AI developers should consider the range of social and cultural viewpoints within society and should attempt to prevent societal homogenization of behaviour and practices (University of Montreal, 2017). Organisations should not only be focused on “pipeline model” changes in their organisation but should ensure that the plurality of individuals within their organisation have a voice and they create a culture of inclusion, which should be reflected in the AI technology (AI Now Institute, 2018). Create a multi-stakeholder dialogue and incorporate the viewpoints of women, underrepresented groups and marginalised individuals at every stage of AI applications (Leaders of the G7, 2018).


2.11 Accessibility.

Organisations should protect the rights of data subjects, such as the right of information access about them (Datatilsynet, 2018). Individuals have a right to access data that is being stored and used about them, and subsequently, to request that this is rectified or deleted (Datatilsynet, 2018). When decisions are made about individuals, explanations should be available that are easily accessed, free of charge and user-friendly (Smart Dubai, 2019). 


2.12 Reversibility.

It is important to clearly articulate if the outcomes of AI decisions are reversible, e.g. if individuals are refused a loan because of an AI algorithm, can such a decision be reversed if the customer can demonstrate their credit-worthiness (Personal Data Protection Commission Singapore, 2019, p. 16)? Organisations using AI need to ensure that the autonomy of AI is restricted and the outcomes are reversible when there is a harm caused (Floridi et al., 2018). AI should be programmed with a condition of reversibility, which ensures controllability and safety of the system: "The ability to undo the last action or a sequence of actions allows users to undo undesired actions and get back to the ‘good’ stage of their work" (Clark, 2019).


2.13 Remedy.

When AI holds the possibility of creating harm, there needs to be preemptive steps in place to trace these issues and deal with them in a prompt and responsible manner. Organisations should abide by the “termination obligation”, which states that when a system is no longer under human control, then it must be terminated (Telefónica, 2018). There needs to be specific “red lines” drawn, that when breached, appropriate steps are taken to override the system, terminate it temporarily or indefinitely and remedy any potential issues that may have occurred (PwC, 2019).


2.14 Redress.

In situations where harmful and/or unjust events occur as a result of using AI, those affected should have appropriate and visible measures of redress in a timely manner (FATML, 2016). When decisions made by algorithms create harmful or questionable results, individuals should have the possibility to lodge a complaint and request a justification of the decision (Algo.Rules, 2019). This should be done in a manner that is understandable by those affected and should allow them the opportunity to challenge these decisions (B Debate, 2017). Accountability strategies should be created within companies, with appropriate measures for redress if these internal and external standards are not met (Dawson et al., 2019).


2.15 Challenge.

AI companies should allow for “conscientious objectors, employee organizing and ethical whistleblowers” (AI Now Institute, 2018). There should be clear policies to protect conscientious objectors, employees to voice their concerns and whistle-blowers to feel protected, when it is in the public interest and safety (AI Now Institute, 2018).


2.16 Access and distribution. 

AI organisations should ensure that their technologies are fair and accessible among a diversity of user groups within society (Smart Dubai, 2019). Organisations should especially concentrate on “populations that currently lack such access” (AI Now Institute, 2016, p. 3). AI should be accessible to those that are often socially disadvantaged (such as those with vision problems, dyslexia or mobility issues) (Sage, 2017). Wherever possible, organisations should use open data for their AI to ensure access and transparency (NSTC, 2016b).


3. Non-maleficence

The principle of nonmaleficence gained attention, resulting from Beauchamp and Childress (1979) ground-breaking Principles of Biomedical Ethics and its subsequent editions. In its most basic form, it means to do no harm or avoid doing harm to others. In AI ethics, the avoidance of harm to human beings has been one of the greatest concerns, with some of the most high-profile examples coming from killer robots, autonomous cars and drone technology. It is no surprise that most of the ethics guidelines had a strong emphasis on ensuring no harm comes to citizens, through security and safety of the AI, and precautionary and remedial steps to be taken, if harm occurs. 


3.1 Non-maleficence.

AI should be designed with the intent of not doing foreseeable harm to human beings (Personal Data Protection Commission Singapore, 2018). Developers and organisations using AI should receive and incorporate the advice of legal authorities and research ethics boards to ensure that data is retrieved, analysed and used in a manner that does not harm individuals [IPC Ontario (Information and Privacy Commissioner of Ontario), 2017]. Organisations should regularly test their algorithms to determine that no harm results from them (ACM, 2017; American College of Radiology, 2019).


3.2 Security.

AI should be robust, secure and safe throughout their life cycle and must function appropriately and not pose unreasonable safety risks (OECD 2019). Organisations must ensure effective cybersecurity so that their AI is protected against attacks (Allistene, 2014). Security must be built into the architecture of the AI (Public Voice 2018) and must be tested before implementation (Algo.Rules, 2019). When security researchers find vulnerabilities or design flaws, they should disclose these findings to be resolved (Internet Society, 2017). 


3.3 Safety.

Developers and organisational users should ensure that AI does not infringe on human rights by ensuring their technology’s safety (EGE 2018). They must assess the public safety risks that arise from their AI and implement effective safety controls (Public Voice 2018). Organisations should enforce strict safety measures, ensuring their AI’s manageability and control and that adequate procedures are in place for security breaches (Algo.Rules, 2019). AI should pass quality assurance processes and be tested in real-world scenarios before, during and after deployment (SAP 2018). 


3.4 Harm.

The objectives and expected impact of AI must be assessed and documented in the development stage (Algo.Rules, 2019). The effects of these systems must be reviewed on an ongoing basis (Algo.Rules, 2019). Organisations should encourage a form of “algorithmic accountability” and should exercise caution when developing AI that may have negative impacts (ICO, 2017). AI technology that replaces human activity should produce at least a diminution of harm before it is allowed on the market (Federal Ministry of Transport and Digital Infrastructure, 2017). AI should not “cause bodily injury or severe emotional distress to any person” (IIIM, 2015).


3.5 Protection.

Developers should implement mechanisms and safeguards to protect user safety (OECD 2019), and AI must be safe and secure throughout their life cycle (IEEE, 2019). AI systems should prioritize the protection of human life (Federal Ministry of Transport and Digital Infrastructure, 2017). External auditors should be allowed to conduct examinations and report negative impacts of the AI without fear of harm or threat by the AI organisations. In addition, the protection of whistle-blowers within AI organisations should also be ensured to allow for effective and legitimate reporting of harms (High-Level Expert Group on AI, 2019, p. 20).


3.6 Precaution.

Those who develop AI must have the necessary skills to understand how they function and their potential impacts (Algo.Rules, 2019), and security precautions must be well documented (Public Voice 2018). AI organisations may receive advice from trained legal professionals, ethicists working in the area and policy analysts. If no consensus can be agreed upon, development of the AI “should not proceed in that form” (High-Level Expert Group on AI, 2019, p. 20). AI systems need to allow for human interruption, or their shutdown, when there is potential harm (Internet Society, 2017). 


3.7 Prevention.

An AI system must be manageable throughout the lifetime and its control must be made possible (Algo.Rules, 2019). The reliability and robustness of AI and its reliability with respect to attacks, access and manipulation must be guaranteed (Public Voice 2018). Great effort should be put into ensuring reliability and safety (IEEE, 2019). AI systems should prevent accidents from occurring, whenever possible, and avoid critical situations from occurring in the first place (Federal Ministry of Transport and Digital Infrastructure, 2017). 


3.8 Integrity.

Attacks against AI should not compromise the bodily and mental integrity of people by ensuring the reliability and internal robustness of the systems (EGE 2018). AI should “fail gracefully” (e.g. shutdown safely or go into safe mode) (IEEE, 2019). 


3.9 Non-subversion.

AI systems should be used to respect and improve the lives of citizens, rather than “subvert, the social and civic processes on which the health of society depends” (Future of Life Institute, 2017).


4. Responsibility

Moral responsibility is a very important issue within AI ethics, with a fear that companies will try to obfuscate blame and responsibility onto the autonomous or semi-autonomous system. There may also be incidences where because of this relative autonomy, AI creates a “responsibility gap”, whereby it is unclear who is responsible. Issues of responsibility, accountability, liability and acting with integrity appeared in many of the ethics guidelines that we analysed.


4.1 Responsibility.

Developers are primarily responsible for the design and functionality of the AI, and when there is an error or harm, then the onus of responsibility often lies with them. When the issue is caused by the use and implementation of the technology, the onus is with the organisational user of the AI. There needs to be clear and concise allocation of responsibilities within the organisation using AI, and the creation of potential scenarios and ways to deal with harms when they occur (EGE 2018; FATML, 2016).


4.2 Accountability.

AI organisations need to be aware of the issues involved with using poor data and be held accountable if there are harmful consequences as a result of this. Developers need to be aware that they are accountable for these systems’ impact on the world (IBM, 2018). They need to be open and accountable by means of auditing, monitoring and conducting impact assessments of AI (ICDPPC, 2018). A legal person must always be held accountable for harms caused by AI and this blame cannot be placed on the tools that cause the damage (Algo.Rules, 2019). 


4.3 Liability.

There is a need to distinguish between the designer and organisational users of those systems for legal reasons (Cerna Collectif, 2018). To attribute liability in situations of malfunction, error and harms, there needs to be clear attributions of responsibility. Definitive liability should be established for when autonomous systems cause undesired effects (EGE, 2018). This can be achieved through adequate record-keeping, systems for registration, and documentation (IEEE, 2019). 


4.4 Acting with integrity.

AI organisations must ensure that their data meets quality and integrity standards at every stage of use (ITI, 2017). If those working with AI discover errors, security breaches or data leaks, then they must report these issues to the relevant authorities, stakeholders, and if relevant, the wider public (University of Montreal, 2017). Ethics training should be implemented to ensure responsible development and deployment of AI (AI for Humanity 2018). AI companies should respect and support the academic and professional integrity of their partners and researchers (Deepmind, 2017).


5. Privacy

Since the GDPR came into force in 2018, privacy has been a hot topic for anyone working in fields where personal data is being used. Particularly, there is a great concern in the development and use of AI, with many of the ethics guidelines strongly featuring privacy and data protection as key tenets in their recommendations. Because of the large abundance of data that is required for AI to work, it is important that individuals’ privacy is not jeopardised as a result. 


5.1 Privacy.

Some of the steps that AI organisations should take to ensure privacy are the security of databases, storage and AI systems through de-identification, anomaly detection and effective cybersecurity (IPC of Ontario, 2017); ensuring informed consent is retrieved (EGE, 2018); users should have control and access to data stored about them (IEEE, 2019); follow current data protection regulations (UK Government, 2018) and non-regulatory privacy-by-design frameworks (ICDPPC, 2018) and ensuring that the data retrieved is of a high standard. Organisations purchasing off-the-shelf AI can cultivate a privacy culture by demanding privacy-by-design AI (Datatilsynet, 2018).
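As a small illustration of one of the de-identification steps mentioned above, the sketch below replaces a direct identifier with a keyed hash, so records can still be linked internally without storing the raw identifier. The key handling and field names are illustrative assumptions; pseudonymised data remains personal data under the GDPR and must still be protected.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-from-a-secrets-manager"   # illustrative placeholder

def pseudonymise(identifier: str) -> str:
    """Keyed hash of a direct identifier; the same input always maps to the same token."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"user_id": "maria.silva@example.com", "age_band": "30-39", "score": 0.72}
safe_record = {**record, "user_id": pseudonymise(record["user_id"])}
print(safe_record)   # the raw e-mail address is no longer present in the stored record
```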


5.2 Personal or private information. 

The development and use of AI should ensure a strong adherence to the privacy and data protection standards outlined in the General Data Protection Regulation (2018), in addition to non-regulatory frameworks, such as privacy-by-design and privacy impact assessment frameworks (IEEE, 2019; Intel, 2018). Developers and organisational users of AI must place the end user’s privacy and personal data at the forefront of the design process, viewing privacy as a human right (Latonero, 2018). The end user’s personal data, and data derived or created about them, should be processed in a fair, lawful and legitimate way (UNDG, 2017). Whenever possible, the collection and use of personal data should be kept to a minimum, unless completely necessary and relevant.


6. Beneficence

The principle of beneficence also gained greater acknowledgement and adoption after Beauchamp and Childress (1979) Principles of Biomedical Ethics. Beneficence essentially means to do good, to carry out an activity with the intention of benefitting someone or society as a whole. Often, beneficence is overlooked in the AI ethics literature, often being seen as a given that AI will bring benefits. The ethics guidelines we analysed highlighted beneficence to promote the flourishing of individual well-being, ensuring people receive benefits from AI use, or that it should promote peace and the social and common good.


6.1 Benefits.

AI organisations should ensure that their AI is designed to benefit humans (IEEE, 2019). They should clearly map out those benefits and the parties benefiting from them (The Information Accountability Foundation, 2015). AI systems must create greater benefits than their costs for people (Dawson et al., 2019, p. 6) and should benefit as many people as possible (Future of Life Institute, 2017; The Partnership on AI, 2016). AI organisations should “advance scientific understanding of the world, and to enable the application of this knowledge for the benefit and betterment of humankind” (IIIM, 2015).


6.2 Beneficence.

AI organisations should find solutions to some of the world’s greatest problems, such as curing diseases, ensuring food security and preventing environmental damage (Intel, 2017). AI organisations should use data retrieved for the benefit of their customers and society (OP, 2019). Ultimately, AI should “compliment the human experience in a positive way” (Unity Blog, 2018). 


6.3 Well-being.

AI organisations should ensure individual well-being and flourishing (IEEE, 2019). They should ensure that their AI is fit-for-purpose and that it does not prohibit individual development and access to primary goods, it ensures human welfare, and allows for the empowerment of individuals around the world (EGE, 2018). AI should be used to complement those working in the health care sector to provide better care and support the well-being of patients (RCP London, 2018).


6.4 Peace.

AI organisations should aim to avoid an “arms race in lethal autonomous weapons” (Future of Life Institute, 2017; see also Smart Dubai, 2019). If AI threatens peace, organisations should collaborate with governments to reduce potential conflicts (OpenAI, 2018).


6.5 Social good.

AI should bring an improvement in beneficial opportunities for society (The Information Accountability Foundation, 2015, p. 10). AI organisations should cultivate a healthy AI industry ecosystem, built on cooperation and healthy competition (Government of the Republic of Korea, 2017, p. 62). The use of AI should not come at a cost of causing a conflict with non-users of these technologies (Ministry of State for Science and Technology Policy, 2019, p. 22).


6.6 Common good.

AI should be developed to support the common good (Future of Life Institute, 2017) and the service of people (AGID, 2018). AI organisations should weigh up the benefits and harms resulting from AI and should take careful consideration to develop ways to mitigate any harms to ensure an overall common good for society (The Information Accountability Foundation, 2015, p. 8). Appropriate steps should be considered to ensure that AI is used for good and that humanity is protected from potentially harmful impacts resulting from it (OpenAI, 2018).


7. Freedom and autonomy

Democratic societies place value in freedom and autonomy, and it is important that AI use does not encumber or harm these for us. The ethics guidelines addressed ways to ensure autonomy-promoting and liberty-protecting AI. For example, the AI organisation should ensure that individuals consent to how their data is being used, AI should not harm individuals’ abilities to make choices, or manipulate their self-determination. 


7.1 Freedom.

Developers should acknowledge, identify and ameliorate circumstances where AI may create harm against human freedoms. Organisations should ensure that the end users’ freedoms are not infringed upon during the use of AI (High-Level Expert Group on AI, 2019). Developers should ensure that AI does not harm end users through tracking (freedom of movement), censorship (freedom of expression) or surveillance (freedom of association). 


7.2 Autonomy.

AI organisations should ensure that end users are informed, not deceived or manipulated by AI and should be allowed to exercise their autonomy (EGE, 2018). AI organisations need to ensure that the “principle of user autonomy must be central to the system’s functionality” (High-Level Expert Group on AI, 2019, p. 16). Users should be informed actors and have control over their decisions when interacting with AI (Council of Europe, 2019). 


7.3 Consent.

The use of personal data must be clearly articulated and agreed upon before its use (UNDG, 2017). If personal data is repurposed, developers should ensure that it is compatible with the original fair processing requirements when consent is given (ICO, 2017), in those cases where consent is the legal basis of data processing. Personal data should not be processed in a way that the data subject considers inappropriate or objectionable (Council of Europe, 2017). The use of personal data should also be done within reasonable expectations and consent of the individuals but must also be used for legitimate purposes (Future Advocacy, 2019). 
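A minimal sketch of how consent and purpose limitation might be enforced in code: personal data is processed only for purposes covered by the recorded consent, so any repurposing forces an explicit decision. The structure and names are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    subject_id: str
    purposes: set = field(default_factory=set)   # purposes the data subject agreed to

def may_process(consent: ConsentRecord, purpose: str) -> bool:
    """Allow processing only for purposes covered by the recorded consent."""
    return purpose in consent.purposes

consent = ConsentRecord("subj-001", {"credit_scoring"})
print(may_process(consent, "credit_scoring"))  # True
print(may_process(consent, "marketing"))       # False: repurposing needs new consent or another legal basis
```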


7.4 Choice.

AI should protect users’ power to decide about decisions in their lives (Floridi et al., 2018). AI should not “compromise human freedom and autonomy by illegitimately and surreptitiously reducing options for and knowledge of citizens” (European Group on Ethics in Science and New Technologies, 2018, p. 17).


7.5 Self-determination.

There needs to be a balance between decision-making power that is freely given by the user to the autonomous systems and when this option is taken away or undermined by the system (Floridi et al., 2018). AI organisations should not manipulate individual’s self-determination, particularly those who may be vulnerable to abuse (Rathenau Institute, 2017, p. 26).


7.6 Liberty.

AI organisations need to ensure that their AI protects individuals’ liberties, as outlined in many human rights legislations, such as the EU’s Charter of Fundamental Human Rights (2000) and the Universal Declaration of Human Rights (1948). Liberty refers to rights such as freedom of speech, freedom of assembly and freedom of movement. During the development of AI, there should be strong adherence to the protection of liberties, outlined in these fundamental human rights documents. 


AI should be used to empower and strengthen our human rights, rather than curtailing or infringing upon them (ICDPPC, 2018). If decisions are made about individuals that may harm their liberties, they should be empowered with the right to challenge such decisions (ICO, 2017).


8. Trust

Trust is such a fundamental principle for interpersonal interactions and is a foundational precept for society to function. Similarly, trust is being acknowledged as a key requirement for the ethical deployment and use of AI. The HLEG (2019) even use it as their defining paradigm for their ethics guidelines, referring to it throughout the entire document. It appears to be a relatively new phenomenon however, with most of the guidelines that make reference to trust coming after 2017. 


8.1 Trustworthiness.

AI organisations should prove they are trustworthy and that their technologies are reliable (Digital Decisions, 2019; MI Garage, 2019). End users should be able to justly trust AI organisations to fulfil their promises and to ensure that their systems function as intended (Deutsche Telekom, 2018; Institute of Business Ethics, 2018; Microsoft, 2018; Sony, 2018; NITI Aayog, 2018; Microsoft, 2017). Building trust should be encouraged by ensuring accountability, transparency and safety of AI (Royal Society). Organisations can cultivate trust by demonstrating the security of their AI (Intel, 2017) and by guarding the data retrieved from these systems in a responsible way (Unity Blog, 2018).


9. Sustainability

Sustainability is a key principle in global discussions at present, and its importance is only set to rapidly increase as a result of climate change predictions and ongoing environmental destruction. All fields and disciplines are affected and need to incorporate sustainability agendas, and AI is no exception. Despite this, it did not appear as an overly pressing concern in the majority of guidelines, demonstrating a greater need to identify how it can be incorporated more effectively. 


9.1 Sustainability.

AI organisations need to ensure that they are environmentally sustainable and incorporate environmental outcomes within their decision-making (Special Interest Group on Artificial Intelligence, 2018). The AI must adhere to resource-efficient and sustainable energy-promoting practices and to the protection of biodiversity.


9.2 Environment (nature).

Organisations should use AI that has been developed in an environmentally conscious manner (SIIA, 2018). In situations where there is ecological harm caused by AI beyond acceptable levels, steps should be taken to either immediately halt it (temporarily or permanently), identify ways to use it in a non-harmful way or consult the designers for potential solutions and responses. AI should not be used to harm biodiversity (UNI Global 2017). 


9.3 Energy.

The use of AI should be respectful of energy efficiency, mitigate greenhouse gas emissions and protect biodiversity (University of Montreal, 2017). Those responsible for AI should ensure that its ecological footprint is minimal and all efforts are taken to reduce emission levels (Green Digital Working Group, 2016, p. 7).


9.4 Resources (energy).

AI should be created in a way that ensures effective energy and resource consumption, promotes resource efficiency, the use of renewable materials, and reduction of use of scarce materials and minimal waste (European Parliament, 2017). Resource use and environmental impact should be held in importance in the life cycle impact assessment of AI (COMEST/UNESCO, 2017, p. 55).


10. Dignity

Human dignity is the recognition that individuals have inherent worth and that their rights should be respected. It is important that AI does not infringe or harm the dignity of end users or other members within society. Respecting individuals’ dignity is a vital principle that should be taken into account within AI ethics guidelines. 


10.1 Dignity.

Human beings have intrinsic value and developers/organisational users should ensure that this is respected in the design and use of AI (The Conference toward AI Network Society, 2017). AI should be developed and used in a way that “respects, serves and protects humans’ physical and mental integrity, personal and cultural sense of identity, and satisfaction of their essential needs” (High-Level Expert Group on AI, 2019, p. 10). AI needs to be developed and used in a way that makes it clear to the user that they are interacting with AI and not another human being (EGE, 2018). Efforts need to be made to ensure that AI is not confused with human beings, as dignity is a value inherent to human beings (COMEST/UNESCO, 2017, p. 50). Organisations should ensure that their AI does not violate the end-user’s dignity and should closely follow the principle of dignity outlined in the first chapter of the EU Charter (Latonero, 2018).


11. Solidarity

With the widespread use of AI to disseminate fake news and its potential to surveil and invade individuals’ privacy, there is a growing concern that AI may be used to undermine and jeopardise societal relationships and solidarity. It is important to consider, within the design and development process, whether the AI supports rich and meaningful social interaction, both professionally and in private life, and does not support segregation and division. AI should promote social security and cohesion and should not jeopardise societal bonds and relationships.


11.1 Solidarity.

AI should be developed to promote, or avoid harm to, societal bonds and relationships between people and generations (University of Montreal, 2017). AI should facilitate and promote human development, rather than being designed to obstruct or endanger it (ICDPPC, 2018). There should be consideration towards preserving and promoting solidarity, and AI should not undermine existing social structures (Floridi et al., 2018). AI should not create “social dislocation”, whereby it adversely harms cultural and social identity, and those organisations that cause it should be held responsible (Accenture, 2019).


11.2 Social security.

Democratic values should not be jeopardised as a result of AI use and citizens should receive accurate and impartial information without interference or manipulation for political purposes (EGE, 2018). AI should not be developed or used to undermine electoral and political decision-making (High-Level Expert Group on AI, 2019). This can be done by ensuring that democratic values are promoted in AI development and implementation (EGE, 2018). 


11.3 Cohesion.

AI organisations should promote fair distribution of benefits from AI to ensure social cohesion is not harmed (Koski and Husso, 2018, p. 51). The use of AI should contribute to global justice, in the aim to promote social cohesion and solidarity (European Group on Ethics in Science and New Technologies, 2018, p. 17). AI teams should not develop or use these technologies in a way that knowingly undermines “functioning democratic systems of government” (Unity Blog, 2018). AI organisations should actively develop strategies with academia, civil society and industry partners, to promote social cohesion and knowledge-exchange collaborations (Privacy International/Article 19, 2018, p. 29).


Additional material:

1. https://inventory.algorithmwatch.org/


References


[1] MORLEY, Jessica et al. Ethics as a service: a pragmatic operationalisation of AI Ethics. arXiv preprint arXiv:2102.09364, 2021.


[2] MITTELSTADT, Brent. Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, v. 1, n. 11, p. 501-507, 2019.


[3] BENJAMINS, Richard. Towards organizational guidelines for the responsible use of AI. arXiv preprint arXiv:2001.09758, 2020.


[4] FJELD, Jessica et al. Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication, n. 2020-1, 2020.


[5] VAKKURI, Ville; KEMELL, Kai-Kristian; ABRAHAMSSON, Pekka. ECCOLA-a method for implementing ethically aligned AI systems. In: 2020 46th Euromicro Conference on Software Engineering and Advanced Applications (SEAA). IEEE, 2020. p. 195-204.


[6] JOBIN, Anna; IENCA, Marcello; VAYENA, Effy. The global landscape of AI ethics guidelines. Nature Machine Intelligence, v. 1, n. 9, p. 389-399, 2019.


[7] STIX, Charlotte. A survey of the European Union's artificial intelligence ecosystem. Available at SSRN 3756416, 2019.


[8] RYAN, Mark; STAHL, Bernd Carsten. Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications. Journal of Information, Communication and Ethics in Society, 2020.