This page is an introduction to responsible AI, also referred to as trustworthy AI or ethical AI.

Ethics in AI

Ethics in AI refers to the study and application of moral principles and values in the development, deployment, and use of artificial intelligence systems. It involves considering the ethical implications, potential risks, and societal impact of AI technologies, and it addresses questions and concerns about the responsible and fair use of AI, including issues such as:

  1. Transparency and explainability: Ensuring that AI systems are understandable and can provide explanations for their decisions and actions. This principle is closely related to RAI Causality.
  2. Fairness and bias: Mitigating biases in AI algorithms and ensuring fairness in the treatment of individuals and groups across different demographics. Fairness can also be improved through RAI Causality (see the sketch after this list).
  3. Privacy and data protection: Safeguarding personal information and ensuring compliance with privacy regulations in the collection, storage, and use of data.
  4. Accountability and responsibility: Determining who is accountable for the actions and consequences of AI systems, especially in cases of harm or unintended consequences.
  5. Human control and autonomy: Striving to maintain human control over AI systems and preventing them from unduly influencing or replacing human decision-making.
  6. Social impact and inclusivity: Assessing the impact of AI on society, employment, and marginalized communities, and striving for inclusivity and benefit for all.
  7. Safety and risk management: Addressing the potential risks and challenges associated with AI technologies, including cybersecurity, algorithmic failures, and unintended consequences.
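
As a concrete illustration of the fairness principle, the sketch below computes the demographic parity difference, a common group fairness metric, from a classifier's binary predictions. It is a minimal sketch in plain Python; the predictions and group labels are hypothetical.

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates
    across demographic groups (0.0 means parity)."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical predictions (1 = positive outcome) and group membership.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5: group A is favored
```

A value near zero suggests similar positive rates across groups; a large gap flags a potential disparity worth investigating before deployment.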

Ethics in AI aims to ensure that AI is developed and deployed in a responsible and ethical manner, aligning with societal values and protecting human rights. It involves interdisciplinary collaboration between technologists, ethicists, policymakers, and other stakeholders to establish guidelines, standards, and regulations for the ethical use of AI.

Trust is another important principle, although there is no widely agreed definition of it in the AI context.

AI Governance

Scouting

Training

Model Governance

Model Reporting

The main instruments for reporting machine learning “artifacts”:

For more specialized cases:
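
One widely used reporting instrument is the model card. As an illustration, the sketch below defines a minimal model-card structure in Python; the structure follows the spirit of model cards, but the field names and the example values are hypothetical, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal model card capturing the information a reviewer
    typically needs before approving a model for deployment."""
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data: str = ""
    evaluation_metrics: dict[str, float] = field(default_factory=dict)
    fairness_notes: str = ""
    limitations: list[str] = field(default_factory=list)

# Hypothetical example card for a credit-scoring model.
card = ModelCard(
    model_name="credit-scoring-gbm",
    version="1.2.0",
    intended_use="Rank loan applications for human review.",
    out_of_scope_uses=["Fully automated loan denial"],
    training_data="Internal applications, 2018-2023, EU only.",
    evaluation_metrics={"auc": 0.87, "demographic_parity_diff": 0.04},
    fairness_notes="Evaluated across age and gender groups.",
    limitations=["Not validated outside the EU market"],
)
print(card.model_name, card.evaluation_metrics)
```

In practice, teams often serialize such cards to markdown or YAML and version them alongside the model itself.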

Responsible AI for Generative AI

Risk

Classification of AI Solutions (AI Scenarios)

Tools & Frameworks

Libraries

Microsoft

Resources on Responsible AI

The work of an RAI practitioner

AI Ethicist

Regulations

EU AI Act

You can find more information at AI law and regulations.
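
The EU AI Act takes a risk-based approach, sorting AI systems into tiers (unacceptable, high, limited, and minimal risk) with obligations that scale with the tier. The sketch below encodes that tiering as a small Python lookup; the tier names reflect the Act's structure, but the example use cases and the mapping are simplified, hypothetical illustrations rather than legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers from the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "prohibited"           # e.g. social scoring by public authorities
    HIGH = "strict obligations"           # e.g. CV screening for hiring
    LIMITED = "transparency obligations"  # e.g. chatbots must disclose they are AI
    MINIMAL = "no extra obligations"      # e.g. spam filters

# Hypothetical, simplified mapping from use case to tier;
# real classification requires legal analysis of the Act's annexes.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    # Default conservatively to HIGH for unknown use cases.
    tier = EXAMPLE_TIERS.get(use_case, RiskTier.HIGH)
    return f"{use_case}: {tier.name} -> {tier.value}"

print(obligations("cv_screening"))  # cv_screening: HIGH -> strict obligations
```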

AI Risk

AI Risk repository

Resources

This section gathers sources for further information.

Handbooks

Institutions

Articles