Ethical Problem-Solving (EPS) is a framework to promote the development of safe and ethical artificial intelligence. EPS is divided into an evaluation stage (performed via Algorithmic Impact Assessment tools) and a recommendation stage (the WHY-SHOULD-HOW method). Both these stages represent distinct steps in a human-centered EaaS (Ethics as a Service) framework developed by Nicholas Kluge Corrêa, James William Santos, Camila Galvão, Marcelo Pasetti, Dieine Schiavon, Faizah Naqvi, Robayet Hossain, and Nythamar de Oliveira.
This repository contains a simple demo of our framework, and it should not be considered a working EaaS platform. The full implementation of our method as an EaaS is currently under development.
The following steps can summarize the flow of the EPS methodology:
The flow of the ethical problem-solving framework begins with a pre-algorithmic impact assessment (Pre-AIA). This pre-assessment preemptively gauges the realm of impact of a particular system, leading to the actual impact assessment tools: it informs the user which algorithmic impact assessment surveys (AIAs) are required to fulfill the evaluation stage. For example, if the intended application utilizes personally identifiable information, the user must perform the privacy and data protection AIA.
After this brief assessment, the user is directed to the next stage.
Our evaluation stage consists of questionnaires with pre-defined questions and answers that can be single-choice or multiple-choice. Our current implementation of these AIAs covers the following themes: data protection and privacy, protection of children and adolescents, antidiscrimination, and consumer rights. These AIAs use legally binding standards to deduce the implications of AI systems through an objective lens.
The questions of our AIAs assess the system's compliance with at least three ethical principles identified in one of our previous studies (WAIE). Hence, each AIA generates impact scores relative to these assessed principles.
Ultimately, these assessments generate a standardized impact level for each ethical principle evaluated by each AIA, i.e., we divide the attained score by the maximum attainable score for each principle. At the same time, the cumulative impact across all assessed principles represents the general impact of a system against a specific AIA.
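As an illustration, this normalization can be sketched as follows (the principle names and scores here are hypothetical placeholders, not values from the actual AIAs):

```python
def normalized_impact(attained: float, maximum: float) -> float:
    """Standardized impact level: attained score over maximum attainable score."""
    return attained / maximum

# Hypothetical per-principle scores from one AIA: principle -> (attained, maximum)
scores = {"privacy": (12, 20), "fairness": (6, 10), "transparency": (9, 15)}

# Impact level for each assessed principle
per_principle = {p: normalized_impact(a, m) for p, (a, m) in scores.items()}

# General impact of the system against this AIA: cumulative attained over cumulative maximum
overall = sum(a for a, _ in scores.values()) / sum(m for _, m in scores.values())
```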
Our AIAs provide an impact score that cannot capture all of the nuances attached to the ethical issues that AI systems present. Hence, in our current implementation of the EPS framework, we developed a more qualitative survey to accompany the evaluation stage, entitled Ethical Troubleshoot, aimed at going beyond an objective evaluation. In short, this troubleshooting survey allows the respondent to divulge how a given AI system or application has been developed in a human-centric way. It utilizes a combination of multiple-choice, single-choice, and open-ended questions to gauge the system's scope, its intended and unintended uses, and its target audience.
After the evaluation stage, the EPS framework requires that human evaluators classify the system under consideration in an impact matrix. The matrix is constituted by three levels of recommendation tailored to each impact level - high, intermediate, and low - and six different ethical principles gathered from the WAIE review, i.e., fairness, privacy, transparency, reliability, truthfulness, and sustainability.
Hence, each principle has three distinct possible recommendations tailored to specific impact levels, e.g., Sustainability-low, Sustainability-intermediate, and Sustainability-high.
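This principle-by-impact-level matrix can be pictured as a simple lookup table; a minimal sketch (the recommendation labels are placeholders, not the framework's actual recommendation content):

```python
PRINCIPLES = ["fairness", "privacy", "transparency",
              "reliability", "truthfulness", "sustainability"]
LEVELS = ["low", "intermediate", "high"]

# 6 principles x 3 impact levels = 18 tailored recommendation slots
impact_matrix = {
    (principle, level): f"{principle.capitalize()}-{level}"
    for principle in PRINCIPLES
    for level in LEVELS
}

def recommend(principle: str, level: str) -> str:
    """Look up the recommendation tailored to a principle at a given impact level."""
    return impact_matrix[(principle, level)]
```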
The WHY-SHOULD-HOW methodology is the format in which the evaluation outcome is presented.
The WHY step is structured to demonstrate the relevance of each principle, providing its conceptualization and highlighting paradigmatic cases of deficient implementation, in a structure that answers the questions "What is said principle?" and "Why should you care about it?". The SHOULD and HOW steps are attached to streamline the normative guidance and the practical tools to address it.
The SHOULD provides the metric utilized to gauge the level of recommendation regarding the corresponding principle, the level of recommendations indicated for the specific case, and the set of recommendations in a summarized form. Finally, the HOW component offers the practical tools and strategies required to implement the recommendations made in the SHOULD stage.
The HOW step of the WHY-SHOULD-HOW methodology pragmatizes the normative recommendations of our method. Hence, throughout the HOW stage, for every principle evaluated, the developer is presented with ready-to-use tools paired with demonstrations in the form of an open repository of tutorials. The repository has many examples of tools and techniques developed to deal with the potential issues of an AI application (e.g., algorithmic discrimination, model opacity, brittleness, etc.), all worked through with some of the most common contemporary AI applications (e.g., computer vision, natural language processing, forecasting, etc.).
By following the EPS framework, evaluators and developers working together can identify ethical concerns and take proactive steps to address them. This ultimately helps narrow the principle-practice gap, i.e., moving from AI to beneficial AI.
The demo is powered by the `abstra` library, which runs a `flask` app under the hood. To run the app locally:

- Clone this repository:

```bash
git clone https://github.com/Nkluge-correa/ethical-problem-solving.git
```

- Install the requirements:

```bash
pip install -r requirements.txt
```

- Launch the `abstra` app:

```bash
abstra serve
```
This demo is hosted in the Abstra Cloud.
The main folder in this repo contains the following:
- The `Home.py` file is simply an HTML version of this README file. It only serves as the "cover" of our demo, explaining the EPS methodology. The `abstra.json` file orders and structures the paths of the files used to construct the abstra page and the sidebar in our demo. If the name or path of any file in this repo is changed, the respective path in the `abstra.json` file should also be updated.
- The scripts for all the AIAs (`AIA-anti-discrimination.py`, `AIA-consumer-rights.py`, `AIA-privacy-and-data-protection.py`, and `AIA-protection-children.py`), the Pre-AIA survey (`AIA-pre-survey.py`), and the Ethical Troubleshoot (`Ethical-troubleshoot.py`) assessment. The AIA scripts implement the score calculation, while the Pre-AIA survey merely prescribes which assessments should be performed. Lastly, the Ethical Troubleshoot script collects all the user answers and saves them as a CSV file. In principle, all the information gathered in these surveys should be used to inform an ethics board, which is then responsible for performing the ethical framing stage.
- The scripts for all the recommendations tied to the EPS (`EPS-fairness.py`, `EPS-privacy.py`, `EPS-reliability.py`, `EPS-sustainability.py`, `EPS-transparency.py`, and `EPS-truthfulness.py`). Each of these scripts allows the controller to choose a given level of risk (low, intermediate, or high), which then directs them to an informative set of HTML pages that follow the WHY-SHOULD-HOW method. In principle, the ethical framing aspect of the recommendation stage is meant to be done by an ethics board. Meanwhile, all subsequent WHY-SHOULD-HOW information is sent back to the user (i.e., the developer of the system under evaluation), along with the scores from the evaluation stage and a written report produced by the ethics board.
- The AIA folder contains four subfolders (anti-discrimination, consumer-rights, privacy-and-data-protection, and protection-children). Each of these subfolders contains the questions related to its respective AIA.
- The EPS folder contains six subfolders (fairness, privacy, reliability, sustainability, transparency, truthfulness). Each folder contains the WHY-SHOULD-HOW informative pages for its respective principle. Inside each of these subfolders, one can find a readable version and an HTML version. The HTML files are the ones rendered by the abstra app (to convert markdown to HTML, you can use the `markdown-html-converter.py` utility script). While the WHY is the same for each level of impact, all the SHOULD-HOW files have their respective impact level as a suffix (e.g., SHOULD-HOW-INTERMEDIATE).
- The img folder contains the images used throughout this demo.
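As a rough sketch of what the markdown-to-HTML conversion step looks like (using the common third-party `markdown` package; the actual `markdown-html-converter.py` utility may differ):

```python
import markdown  # third-party: pip install markdown

# A minimal WHY-style markdown snippet (hypothetical content)
md_text = "# WHY\n\nSome principle explanation."

# Render markdown source to an HTML fragment
html = markdown.markdown(md_text)
```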
- The tools and assessments in this demo are intended to be an academic display of our research.
- The results obtained by our tools are not legal/technical advice. We suggest consulting a specialist in all cases.
- Completing the assessments yields guidelines for mitigating algorithmic impacts and suggests measures for solving ethical problems. However, no warranties, express or implied, are made regarding the future performance of the application or the organization.
- It is up to the stakeholders who use the tools and assessments to prevent, mitigate, and resolve the possible impacts identified.
- The authors disclaim responsibility for how these tools are used; the results presented do not serve as a certificate or attestation of ethical practices.
- The Algorithmic Impact Assessments (AIAs) are based on certain legislative texts (Consumer Defense Code, Criminal Code, Statute of the Child and Adolescent, General Data Protection Law). However, they do not exhaust the content of these texts and should not be taken as a guarantee of legal compliance.
- The information, opinions, estimates, and guidelines contained in the platform refer to the present date and may need to be updated due to the passage of time or possible changes.
@article{correa2024crossing,
title={Crossing the principle--practice gap in AI ethics with ethical problem-solving},
author={Corr{\^e}a, Nicholas Kluge and Santos, James William and Galv{\~a}o, Camila and Pasetti, Marcelo and Schiavon, Dieine and Naqvi, Faizah and Hossain, Robayet and Oliveira, Nythamar De},
journal={AI and Ethics},
pages={1--18},
year={2024},
publisher={Springer}
}
This research was funded by RAIES (Rede de Inteligência Artificial Ética e Segura). RAIES is a project supported by FAPERGS (Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul) and CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico).
Ethical Problem-Solving © 2024 by Nicholas Kluge Corrêa, James William Santos, Camila Galvão, Marcelo Pasetti, Dieine Schiavon, Faizah Naqvi, Robayet Hossain, and Nythamar de Oliveira is licensed under CC BY-SA 4.0.