The Need for an Ethical Approach to AI

Heka.ai
9 min read · Dec 19, 2023

You don’t know it yet, but you’re in the middle of the biggest technological revolution of the 21st century. From everyday content recommendations to decision support systems in sensitive situations, AI is deeply integrated into our lives. Exciting advancements have given rise to foundation models like OpenAI’s GPT for text generation, Midjourney for image generation, and Meta’s Segment Anything Model (SAM) for image segmentation. These models hold tremendous potential to revolutionize various aspects of our society.

Figure 1 — Some Foundation Models and Their Creators

As the power of AI applications continues to grow, it becomes crucial to collectively define the boundaries we want to set for these tools. The goal is not to hinder AI’s development but to ensure its sustainable and responsible progress. AI systems have already proven that they can cause significant harm, and concerns about superintelligence have been voiced by leading computer scientists and by tech CEOs such as Elon Musk and OpenAI’s Sam Altman.

In October 2019, researchers discovered that an algorithm used in hospitals to identify patients needing extra care favored white patients to the detriment of black patients. Had this bias not been noticed and mitigated, black patients would have continued to receive lower-quality care, harming their health [1]. More recently, Italy temporarily banned OpenAI’s chatbot, ChatGPT, over concerns about its compliance with the GDPR (General Data Protection Regulation), the regulation that sets European standards for protecting citizens’ personal data [2]. The service was reauthorized at the end of April 2023, after OpenAI revised its handling of user data [3]. As AI systems become more influential and their capabilities improve, the associated risks multiply.

In the face of these emerging challenges, developing an ethical framework for the use of AI becomes imperative. Such a framework aims to protect individuals, groups, and society while fostering the continued development of techniques that can benefit humanity. Recognizing this, numerous stakeholders are actively engaging in discussions about AI ethics, with the objective of identifying the values that must be protected and how they can be enforced.

Figure 2 — Recent trend in the number of scientific articles on ethical AI frameworks (source: Dimensions.ai)

General Considerations on Ethics

Ethics delves into the realm of understanding what is morally right or wrong, good or evil, and encompasses concepts like justice, well-being, and fairness. In the context of artificial intelligence (AI), ethical considerations become crucial. Ethical AI is a field of applied ethics that focuses on how AI developers, hardware manufacturers, authorities, and operators should conduct themselves to mitigate the ethical risks associated with AI. These risks can arise from the design of AI applications or their inappropriate and malicious utilization.

At the core of ethics lie values and norms. Values represent the principles and ideals that individuals or society acknowledge. They serve as guiding lights, offering direction rather than rigid objectives. In the ethical AI discourse, key values include respect for privacy, equality, societal stability, environmental protection, human autonomy, and safety, among others. Building on these values, society can establish norms that steer us toward their realization. Norms prescribe how the world should be. For instance, a normative imperative related to the value of equality would state: “AI applications must be fair and not discriminatory.” [4]

Figure 3 — From Values to Law

The rapid developments in AI can threaten these values. Identifying the values to uphold and defining appropriate standards is a crucial first step in developing strategies to govern the use of AI. This reflective process should involve representative bodies from all facets of society, ensuring a collaborative and inclusive approach. Academia, national and international organizations, tech companies, and the rest of the private sector both shape the guidelines and are shaped by them [5].

Figure 4 — Representation of the influences between AI framework stakeholders

AI Ethics Framework

Several institutions have been designing AI ethics frameworks. At Sia Partners, we have structured ours around five pillars:

  1. Transparency
  2. Diversity
  3. Privacy and Data Governance
  4. Responsibility
  5. Technical Robustness and Safety

Topics such as fairness, transparency, and privacy will be addressed in more detail in subsequent articles.

Figure 5 — AI Ethics Challenges
  • “Transparency” is the most universally shared principle across the frameworks produced by different working groups. Algorithmic systems that are difficult to inspect and audit are the least transparent, and transparency aims to limit the proliferation of such black-box systems. Its goal is to minimize potential harm from unexpected system behavior, increase user trust, and foster communication between developers and users. To achieve this, algorithms must be traceable and explainable: each algorithm should be carefully documented and explainable both at a general operational level and at the level of individual predictions. This latter point is the focus of the growing research field known as XAI (eXplainable AI); a minimal sketch of one common XAI technique follows this list.
  • “Diversity”, non-discrimination, and equity are also widely shared values among AI stakeholders, often referred to in the literature as AI fairness. The focus is on ensuring the fair distribution of AI’s benefits and drawbacks, as well as fair treatment for all individuals. The fight against discrimination has received significant attention, particularly after the discovery of racial bias against African-American defendants in systems such as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which predicted defendants’ risk of recidivism. Racial discrimination is not the only form, however: systems are expected to avoid discrimination based on a wide range of sensitive attributes, such as gender, age, religion, or origin, the exact list depending on national legal frameworks. To prevent discrimination, various reports recommend technical solutions to detect and reduce biases in algorithmic data and predictions; a simple fairness check is sketched after this list. These technical measures should be accompanied by non-technical ones, such as transparent design processes, ease of auditability, and the promotion of diversity within design teams.
  • “Respect for privacy and data governance” protects fundamental rights. Data protection rules are strengthening worldwide, driven by the European GDPR, which stipulates that users must be informed about how their data is collected, stored, used, and shared. Users should also retain control over their own data, including the right to access, modify, and delete their personal information. Additionally, data should be anonymized to the greatest extent possible to reduce the risks of re-identification and privacy breaches. It is important to set up a robust data governance framework and to use technical tools that either protect the data from attacks or remove the need for personal data when developing ML (Machine Learning) algorithms; a small privacy sketch follows this list. We have published an article on this subject that explores the generation of synthetic data to meet GDPR requirements.
  • “Responsibility” deals with the allocation and assessment of responsibilities. Assigning responsibility for harm caused by an algorithm is a complex task that involves moral judgment. A common example is accidents involving autonomous vehicles: on one hand, these systems involve multiple actors, making it hard to isolate responsibility; on the other, certain situations may have no harm-free resolution, making it extremely difficult to assign responsibility at all. To manage AI application projects effectively, it is recommended to clearly distribute legal responsibilities and, once again, to encourage transparency so that any non-compliance with standards can be detected. Organizations handling data are strongly encouraged to adopt rigorous codes of conduct, raise awareness among their teams, and deploy appropriate technical methods.
  • The principle of “technical robustness and safety” aims to minimize the risks associated with the use of AI by ensuring that systems are robust and reliable. AI systems can cause harm in many ways: they can create or reinforce discrimination, violate privacy, erode user trust and skills, promote radical individualism, or outpace regulatory frameworks by evolving faster than regulation can adapt. It is therefore crucial that algorithms behave as intended, which requires evaluating them in a variety of operating environments. In particular, their resilience against attacks designed to divert them from their intended purpose should be tested, since algorithms can be deliberately misled. As a precaution, systems should fall back to a safe mode when a failure is detected. To meet these requirements, several solutions can be considered: from a technical perspective, systematic data quality evaluation and security by design are essential, and cooperation among industry, users, and regulators is needed to develop standardized control processes. A basic robustness check is sketched after this list.
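
To make the transparency point concrete, here is a minimal sketch of one widely used post-hoc explainability technique, permutation feature importance. The dataset, model, and parameters are illustrative assumptions chosen for a self-contained example, not tied to any system discussed above.

```python
# A minimal sketch of post-hoc explainability via permutation feature
# importance. Dataset, model, and parameters are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy:
# features whose shuffling hurts the most drive the model's predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, importance in top[:5]:
    print(f"{name}: {importance:.3f}")
```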
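
For fairness, the sketch below computes demographic parity on synthetic decisions. The groups, data, and the 0.8 threshold (echoing the common “four-fifths rule”) are illustrative assumptions; a real audit would use actual model outputs and legally defined sensitive attributes.

```python
# A minimal sketch of a demographic parity check on synthetic decisions.
# Groups, data, and the 0.8 threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1000)     # binary decisions, e.g. loan granted
group = rng.choice(["A", "B"], size=1000)  # hypothetical sensitive attribute

rates = {g: y_pred[group == g].mean() for g in ("A", "B")}
dp_difference = abs(rates["A"] - rates["B"])           # demographic parity gap
impact_ratio = min(rates.values()) / max(rates.values())  # disparate impact

print(f"selection rates: {rates}")
print(f"demographic parity difference: {dp_difference:.3f}")
print(f"disparate impact ratio: {impact_ratio:.3f} (flag if below 0.8)")
```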
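
The privacy sketch below illustrates two of the techniques mentioned above: pseudonymizing a direct identifier with a keyed hash, and releasing an aggregate through Laplace noise in the spirit of differential privacy. The records, salt, epsilon, and the assumption that ages are bounded by 100 are all illustrative.

```python
# A minimal sketch of pseudonymization plus a differentially private
# aggregate. Records, salt, epsilon, and the age bound are assumptions.
import hashlib
import numpy as np

records = [{"name": "Alice", "age": 34}, {"name": "Bob", "age": 41}]

SALT = b"rotate-me-regularly"  # secret salt; hard-coded only for the demo
for record in records:
    digest = hashlib.sha256(SALT + record["name"].encode()).hexdigest()
    record["name"] = digest[:12]  # pseudonym replaces the identifier

ages = np.array([r["age"] for r in records], dtype=float)
epsilon = 1.0
sensitivity = 100.0 / len(ages)  # max change one person can cause to the mean
noisy_mean = ages.mean() + np.random.default_rng(0).laplace(
    scale=sensitivity / epsilon)

print(records)
print(f"differentially private mean age: {noisy_mean:.1f}")
```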
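
Finally, a basic robustness stress test: re-evaluating a trained classifier under increasing random input perturbations. This is a simple sanity check, not a standardized evaluation; the dataset, model, and noise levels are illustrative assumptions.

```python
# A minimal robustness stress test under random input perturbations.
# Dataset, model, and noise levels are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

rng = np.random.default_rng(0)
for noise in (0.0, 0.1, 0.5):
    # Gaussian perturbation of test inputs; a robust model should
    # degrade gracefully rather than collapse.
    X_noisy = X_test + rng.normal(scale=noise, size=X_test.shape)
    print(f"noise={noise}: accuracy={model.score(X_noisy, y_test):.3f}")
```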

The foundational principles of ethical AI undoubtedly need further refinement, particularly as the legal environment around them is still taking shape. Given the rapid evolution of these systems, vigilant monitoring is essential to address emerging concerns promptly. Furthermore, a purely theoretical ethical framework holds limited value unless it is accompanied by a standardized method for its practical application.

One of the major challenges of AI ethics lies in operationalizing it in practice. How can we quantify concepts such as fairness, transparency, and data protection? How can we compare the qualitative and quantitative significance of an AI algorithm’s precision, fairness, and level of transparency? The complexity of these questions presents fertile ground for research, leading to the proposal of various partial solutions aimed at addressing specific aspects of the problem at different stages of AI application development.

Throughout the lifecycle of an AI application, each stage offers an opportunity to address specific ethical concerns. This lifecycle typically includes understanding the business case, system design, database creation, database analysis, data preprocessing, model training, testing and evaluation, deployment, and performance monitoring. In an article published in 2023 in the journal AI and Ethics [9], E. Prem presents a comprehensive list of practical solutions that can be implemented depending on the phase of the project. Prem’s insights offer valuable guidance and are elaborated in the table below, providing a comprehensive resource for tackling ethical considerations in AI development; a hypothetical sketch of how such stage-by-stage checks might be wired into a project follows.
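
As a purely hypothetical illustration of this idea, the sketch below registers checks per lifecycle phase and gates progression on their completion. The stage names and checks are invented for the example; they are not the article’s (or Prem’s) actual list.

```python
# A hypothetical sketch of stage-by-stage ethics checks, in the spirit of
# Prem's catalogue [9]. Stage names and checks are invented for the demo.
LIFECYCLE_CHECKS = {
    "business_case": ["document intended use and potential harms"],
    "data_collection": ["record provenance and consent basis (GDPR)"],
    "preprocessing": ["audit class balance across sensitive attributes"],
    "training": ["log hyperparameters and data versions for traceability"],
    "evaluation": ["report fairness metrics alongside accuracy"],
    "deployment": ["enable a safe fallback mode and human override"],
    "monitoring": ["alert on data drift and metric degradation"],
}

def stage_gate(stage: str, completed: set) -> bool:
    """Allow a project to leave `stage` only if all its checks are done."""
    return all(check in completed for check in LIFECYCLE_CHECKS[stage])

print(stage_gate("evaluation", {"report fairness metrics alongside accuracy"}))
```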

Figure 6 — Good Practices for Operationalizing AI Ethics

Conclusion

The rapid integration of artificial intelligence (AI) into various domains of society raises intricate ethical concerns around transparency, justice, non-maleficence, responsibility, and privacy. Multiple stakeholders, including tech companies, research institutions, and national and international organizations, are actively engaged in formulating ethical frameworks to govern the development and deployment of AI.

The full complexity of the ethical problem, and of putting ethics into practice, extends beyond the framework presented in this article. Numerous technical and non-technical methods are available to address the different facets of these ethical challenges.

Upcoming articles will delve into specific technical aspects of the ethical AI framework. Focusing on transparency, fairness, and respect for privacy, each article will examine the inherent issues, existing technical solutions and their limitations, the availability of relevant algorithms, and case studies illustrating the concepts discussed. These articles aim to provide a deeper technical understanding of these crucial ethical dimensions of AI.

References

[1] “Real-life Examples of Discriminating Artificial Intelligence,” Datatron, [Online]. Available: https://datatron.com/real-life-examples-of-discriminating-artificial-intelligence/. [Accessed 12 07 2023].

[2] “ChatGPT banned in Italy over privacy concerns,” BBC, 01 04 2023. [Online]. Available: https://www.bbc.com/news/technology-65139406. [Accessed 20 06 2023].

[3] “ChatGPT accessible again in Italy,” BBC, 28 04 2023. [Online]. Available: https://www.bbc.com/news/technology-65431914. [Accessed 12 07 2023].

[4] “Chapter 1: What is AI ethics?,” University of Helsinki, [Online]. Available: https://ethics-of-ai.mooc.fi/chapter-1/3-values-and-norms. [Accessed 12 05 2023].

[5] T. Hagendorff, “The Ethics of AI Ethics: An Evaluation of Guidelines,” 01 02 2020. [Online]. Available: https://link.springer.com/article/10.1007/s11023-020-09517-8. [Accessed 20 05 2023].

[6] Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission, “Ethics guidelines for trustworthy AI,” 08 04 2019. [Online]. Available: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai. [Accessed 12 05 2023].

[7] OECD’s Council on Artificial Intelligence, “OECD AI Principles overview,” 2022. [Online]. Available: https://oecd.ai/en/ai-principles. [Accessed 12 05 2023].

[8] A. Jobin, M. Ienca and E. Vayena, “The global landscape of AI ethics guidelines,” 02 09 2019. [Online]. Available: https://www.nature.com/articles/s42256-019-0088-2. [Accessed 20 05 2023].

[9] E. Prem, “From ethical AI frameworks to tools: a review of approaches,” 01 06 2023. [Online]. Available: https://link.springer.com/content/pdf/10.1007/s43681-023-00258-9.pdf. [Accessed 09 07 2023].
