Ethics in AI

Guide for Artificial Intelligence Ethical Requirements Elicitation


Tools

List of tools on the cards

1. DALEX. Principle: Transparency. Ethical issues: Explainability; Explicability; Understandability; Interpretability. The DALEX package takes an X-ray of any model, helping to explore and explain its behaviour and to understand how complex models work (a minimal usage sketch follows this list). Link: https://github.com/ModelOriented/DALEX

2. InterpretML. Principle: Transparency. Ethical issues: Explainability; Explicability; Understandability; Interpretability. An open-source Microsoft package that lets you train interpretable models and explain black-box systems, supporting both a global understanding of a model and explanations of the reasons behind individual predictions (see the sketch after this list). Link: https://github.com/interpretml/interpret

3. CALIMOCHO. Principle: Transparency. Ethical issues: Explainability; Explicability; Understandability; Interpretability. An implementation of Explanatory Active Learning (XAL) based on Self-explanatory Neural Networks. Link: https://github.com/stefanoteso/calimocho

4. ABOD3. Principle: Transparency. Ethical issues: Explainability; Explicability; Understandability; Interpretability. ABOD3 is an integrated development environment (IDE) for Behavior Oriented Design (BOD) that allows you to visualize, develop and debug AI in real time. Link: https://github.com/RecklessCoding/ABOD3

5. TransparentAI. Principle: Transparency / Justice and fairness / Non-maleficence / Responsibility / Sustainability. Ethical issues: Explainability; Explicability; Understandability; Interpretability; Showing / Non-bias; Redress / Security, Safety; Harm / Responsibility, Accountability; Acting with integrity / Energy, Resources (energy). A Python toolbox for assessing whether an AI-based system is ethical, based on the guidelines of the European Commission's High-Level Expert Group on AI (AI HLEG). Link: https://github.com/Nathanlauga/transparentai

6. Multi Accuracy Boost. Principle: Transparency / Justice and fairness. Ethical issues: Explainability; Explicability; Understandability; Interpretability / Non-bias. A tool for auditing and post-processing black-box classifiers so that predictions remain accurate across subgroups in datasets with protected attributes. Link: https://github.com/amiratag/MultiAccuracyBoost

7. Variational Fair Autoencoders (VFAE). Principle: Justice and fairness. Ethical issue: Non-bias. A tool for training models on a pre-defined dataset and obtaining predictions that are less influenced by people's sensitive attributes. Link: https://github.com/yevgeni-integrate-ai/VFAE

8. The Impartial Machines Project. Principle: Justice and fairness. Ethical issue: Non-bias. A tool that attempts to eliminate potential influences and biases in news. Link: https://github.com/abhayrjoshi/The-Impartial-Machines-Project

9. Fairness-Aware-Ranking in Search & Recommendation Systems. Principle: Justice and fairness. Ethical issue: Non-bias. A tool that attempts to eliminate potential influences and biases in ranked lists generated by recommender systems. Link: https://github.com/saikumarkorada20/Fairness-Aware-Ranking

10. Fair-ML-4-Ethical-AI. Principle: Justice and fairness. Ethical issue: Non-bias. Pedagogical resources (in French) for detecting and eliminating bias in datasets using R. Link: https://github.com/wikistat/Fair-ML-4-Ethical-AI

11. Fair-Forest. Principle: Justice and fairness. Ethical issue: Non-bias. A Java library that attempts to eliminate potential influences and biases in decision trees and random forests. Link: https://github.com/pjlake98/Fair-Forest
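
The first two transparency tools above, DALEX and InterpretML, are Python-friendly libraries. The sketches below are minimal illustrations only, assuming a fitted scikit-learn classifier and the publicly released dalex and interpret packages; adapt the model and data to your own project.

    # Minimal sketch, assuming the dalex Python port of DALEX and a
    # scikit-learn model; an illustration, not a prescribed workflow.
    import dalex as dx
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Wrap the model in an Explainer, then "X-ray" it.
    explainer = dx.Explainer(model, X, y, label="rf")
    explainer.model_parts()                # permutation-based variable importance
    explainer.predict_parts(X.iloc[[0]])   # break-down of a single prediction

InterpretML supports both views mentioned above: training an interpretable (glass-box) model and explaining its behaviour globally or per prediction. A hedged sketch of the glass-box workflow:

    # Minimal sketch, assuming the interpret package from InterpretML.
    from interpret import show
    from interpret.glassbox import ExplainableBoostingClassifier
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    ebm = ExplainableBoostingClassifier()            # interpretable by design
    ebm.fit(X_train, y_train)
    show(ebm.explain_global())                       # global model behaviour
    show(ebm.explain_local(X_test[:5], y_test[:5]))  # reasons behind predictions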

More comprehensive tools with no direct relationship to a single principle:

1. Deon. A command line tool that allows you to add an ethical checklist to data science projects. Link: https://github.com/drivendataorg/deon

2. Interpretable AI. Principle: Transparency. Ethical issues: Explainability; Explicability; Understandability; Interpretability. A curated list of interpretability techniques for building robust AI applications, together with examples of AI propagating biases. Link: https://github.com/thampiman/interpretability

3. Melusine. A high-level Python library for email classification and feature extraction with a focus on the French language. Contains Ethical Guidelines for evaluating AI design based on the AI HLEG. Link: https://github.com/MAIF/melusine

4. SWED. An educational argument diagramming tool for the domain of Software Engineering ethics, with a specific version to discuss AI ethics. Link: https://github.com/JoshuaCrotts/Software-Engineering-Ethics-Debater

5. AI Collaboratory. A project for the analysis, evaluation, comparison and classification of AI systems, using pre-defined datasets. Link: https://github.com/nandomp/AICollaboratory

6. Fooling LIME and SHAP. Principle: Transparency. Ethical issues: Explainability; Explicability; Understandability; Interpretability. Code from a paper in which the authors demonstrate adversarial attacks that fool LIME and SHAP (two post-hoc XAI tools); a baseline SHAP sketch follows this list for context. Link: https://github.com/dylan-slack/Fooling-LIME-SHAP

7. social and Ethics in ML. Principle: Justice and fairness / Privacy. Ethical issues: Fairness, Non-bias, Consistency / Personal or Private information. Shows how privacy and equity were addressed in a machine learning project. Link: https://github.com/belsonna/social_and_Ethics_in_ML
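
For context on the Fooling LIME and SHAP entry above, the sketch below shows the kind of ordinary post-hoc SHAP explanation that such attacks target. It is standard usage of the shap package on an assumed tree-based scikit-learn model, not the attack code from the repository.

    # Minimal sketch, assuming the shap package and a tree-based model;
    # this is the baseline explanation workflow that the adversarial
    # classifiers in Fooling-LIME-SHAP are designed to mislead.
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)   # model-specific, post-hoc explainer
    shap_values = explainer.shap_values(X)  # per-feature attributions per row
    shap.summary_plot(shap_values, X)       # global view of feature influence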


Additional material:

1. https://www.exploreaiethics.com/category/tools/

2. https://docs.google.com/document/d/1h6nK9K7qspG74_HyVlT0Lx97URM0dRoGbJ3ivPxMhaE/edit

3. http://aequitas.dssg.io/

4. http://ethicstoolkit.ai/

5. https://github.com/marcotcr/lime

6. https://github.com/slundberg/shap

7. https://github.com/linkedin/LiFT