This page is an introduction to responsible AI, also referred to as trustworthy AI or ethical AI.
Ethics in AI
Ethics in AI refers to the study and application of moral principles and values in the development, deployment, and use of artificial intelligence systems. It involves considering the ethical implications, potential risks, and societal impact of AI technologies.
Ethics in AI addresses questions and concerns related to the responsible and fair use of AI, including issues such as:
- Transparency and explainability: Ensuring that AI systems are understandable and can provide explanations for their decisions and actions. This principle is closely related to RAI Causality.
- Fairness and bias: Mitigating biases in AI algorithms and ensuring fairness in the treatment of individuals and groups across different demographics. Fairness can also be improved through RAI Causality (see the metric sketch after this list).
- Privacy and data protection: Safeguarding personal information and ensuring compliance with privacy regulations in the collection, storage, and use of data.
- Accountability and responsibility: Determining who is accountable for the actions and consequences of AI systems, especially in cases of harm or unintended consequences.
- Human control and autonomy: Striving to maintain human control over AI systems and preventing them from unduly influencing or replacing human decision-making.
- Social impact and inclusivity: Assessing the impact of AI on society, employment, and marginalized communities, and striving for inclusivity and benefit for all.
- Safety and risk management: Addressing the potential risks and challenges associated with AI technologies, including cybersecurity, algorithmic failures, and unintended consequences.
Ethics in AI aims to ensure that AI is developed and deployed in a responsible and ethical manner, aligning with societal values and protecting human rights. It involves interdisciplinary collaboration between technologists, ethicists, policymakers, and other stakeholders to establish guidelines, standards, and regulations for the ethical use of AI.
Another important principle is trust, although there is no single, widely agreed definition of it.
AI Governance
- Standards Database. Find information on AI-related standards using its search and filtering capabilities. This database currently covers nearly 300 relevant standards that are being developed or have been published by a range of prominent Standards Development Organisations.
Scouting
- The FY2024 Map: The Responsible AI Ecosystem, tracking 300 startups that enable responsible AI.
- fiddler.ai. Enterprise AI Observability: build trust into AI for MLOps.
- Splunk
Training
Model Governance
Model Reporting
The main instruments for reporting machine learning “artifacts” are:
- Use Model Cards to report on your model. Used by HuggingFace and many others, they are already quite established. Model cards allow others to understand and re-use your model (see the sketch after this list). [HuggingFace Model card] [arXiv paper Model Cards for Model Reporting]
- Datasheets for Datasets is a reporting approach for datasets. Datasheets are a checklist that guides you through all relevant questions, such as the motivation for creating the data, its composition, the collection process, and so on.
- Use the REFORMS checklist when you report on research based on machine learning, typically when writing a paper. REFORMS is agnostic to the field of application, so it is a rather broad checklist.
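As a sketch of how a Model Card can be produced programmatically, the snippet below uses the huggingface_hub library's ModelCard helpers; the model id, description, and metadata values are hypothetical placeholders, and the default template offers many more fields than shown here.

```python
from huggingface_hub import ModelCard, ModelCardData

# Structured metadata that becomes the card's YAML header.
card_data = ModelCardData(
    language="en",
    license="apache-2.0",
    tags=["text-classification"],
)

# Render the default model card template with (hypothetical) details.
card = ModelCard.from_template(
    card_data,
    model_id="my-org/my-sentiment-model",  # hypothetical model id
    model_description="A sentiment classifier trained on product reviews.",
)

card.save("README.md")  # model cards live in the repo's README on the Hub
```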
For more “special cases”:
- Writing a paper on a clinical prediction model? Use the TRIPOD+AI checklist.
- For writing a paper on medical imaging, you can use CLAIM, a checklist for artificial intelligence in medical imaging.
Responsible AI for Generative AI
Risk
- ATLAS. ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a globally accessible, living knowledge base of adversary tactics and techniques against AI-enabled systems, based on real-world attack observations and realistic demonstrations from AI red teams and security groups.
Classification of AI Solutions (AI Scenarios)
Libraries
- HAX Toolkit from Microsoft. The Human-AI Experience (HAX) Toolkit is a set of hands-on tools that help AI builders from any discipline to work together as they plan and create AI systems that people interact with.
- OWASP AI Exchange. The OWASP AI Exchange is an open-source collaborative document to advance the development of global AI security standards and regulations. It provides a comprehensive overview of AI threats, vulnerabilities, and controls to foster alignment among different standardization initiatives, including the EU AI Act, ISO/IEC 27090 (AI security), the OWASP ML Top 10, the OWASP LLM Top 10, and OpenCRE, which the project aims to use to provide the AI Exchange content through the security chatbot OpenCRE-Chat.
- The Assessment List for Trustworthy Artificial Intelligence (ALTAI)
- Best Practice AI
- Giskard. An open-source testing framework for ML models.
- XAIoGraphs. XAIoGraphs (eXplainability Artificial Intelligence over Graphs) is an explainability and fairness Python library for classification problems with tabular and discretized data. The explainability methods in this library make no hypotheses about the data, so they do not require access to the AI model itself: only the data and the predictions (or decisions) are needed, which makes it possible to explain AI models, rule-based models, and real-world decision processes.
- DoWhy. DoWhy provides a wide variety of algorithms for effect estimation, prediction, quantification of causal influences, diagnosis of causal structures, root cause analysis, interventions and counterfactuals. A key feature of DoWhy is its refutation and falsification API that can test causal assumptions for any estimation method, thus making inference more robust and accessible to non-experts.
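As an illustration of DoWhy's four-step workflow (model, identify, estimate, refute), here is a minimal sketch on a synthetic dataset generated by DoWhy itself; the chosen estimator and refuter are just examples among the many the library offers.

```python
import dowhy.datasets
from dowhy import CausalModel

# Synthetic data with a known true causal effect (beta=10).
data = dowhy.datasets.linear_dataset(
    beta=10, num_common_causes=3, num_samples=5000, treatment_is_binary=True
)

# 1. Model: encode causal assumptions as a graph.
model = CausalModel(
    data=data["df"],
    treatment=data["treatment_name"],
    outcome=data["outcome_name"],
    graph=data["gml_graph"],
)

# 2. Identify: derive an estimand (e.g., backdoor adjustment) from the graph.
estimand = model.identify_effect()

# 3. Estimate: compute the effect with a chosen estimator.
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print("Estimated effect:", estimate.value)  # should be close to 10

# 4. Refute: stress-test the estimate, e.g., by adding a random common cause.
refutation = model.refute_estimate(estimand, estimate, method_name="random_common_cause")
print(refutation)
```

The refutation step is what the description above highlights: if the estimate changes substantially under a perturbation that should not matter, the causal assumptions deserve another look.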
Microsoft
Resources on Responsible AI
The work of RAI practitioner
AI Ethicist
Regulations
EU AI Act
You can find more information here: AI law and regulations.
AI Risk
AI Risk repository
- AI Risk Repository. The AI Risk Repository has three parts:
  - The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
  - The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
  - The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”).
"The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence" [paper]
Resources
This section gathers all the sources for further information.
Handbooks
Institutions
- The Responsible AI Institute. The Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. The RAI Institute’s conformity assessments and certifications for AI systems support practitioners as they navigate the complex landscape of creating, selling, or buying AI products. Through its global network of responsible AI experts, the RAI Institute offers valuable insights to practitioners, policymakers, and regulators to enable technologies that improve the social and economic well-being of society.
- European AI Alliance - ALTAI portal. The Assessment List for Trustworthy Artificial Intelligence (ALTAI) is a practical tool that helps businesses and organisations self-assess the trustworthiness of their AI systems under development.
Articles