This repository aims to map the ecosystem of artificial intelligence guidelines, principles, codes of ethics, standards, regulation and beyond.
Updated Mar 4, 2026
Aligning AI With Shared Human Values (ICLR 2021)
FIBO is a state-of-the-art and the first open-source JSON-native text-to-image model, built for controllable, predictable, and legally safe image generation.
LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments.
📚 A curated list of papers & technical articles on AI Quality & Safety
A curated list of awesome academic research, books, code of ethics, courses, databases, data sets, frameworks, institutes, maturity models, newsletters, principles, podcasts, regulations, reports, responsible scale policies, tools and standards related to Responsible, Trustworthy, and Human-Centered AI.
Documentation for dynamic machine learning systems.
A systems-thinking essay that explains why failure rarely happens suddenly. It shows how slow drift, accumulating pressure, and weakening buffers push systems toward collapse long before outcomes change, and why prediction-focused analytics miss the most important phase of failure.
Experimental interface environment for open-source LLMs, designed to democratize the use of AI. Powered by llama-cpp, llama-cpp-python, and Gradio.
An interpretable early-warning engine that detects academic instability before grades collapse. Instead of predicting performance, it models pressure accumulation, buffer strength, and transition risk using attendance, engagement, and study load to explain fragility and identify high-leverage interventions.
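The fragility framing described above (modeling pressure accumulation and buffer strength rather than predicting grades) can be illustrated with a toy calculation. The function name, inputs, and risk formula below are illustrative assumptions for this sketch, not the repository's actual model.

```python
def transition_risk(attendance: float, engagement: float, study_load: float) -> float:
    """Toy fragility score: accumulated pressure divided by remaining buffer.

    attendance and engagement are fractions in [0, 1]; study_load is in
    arbitrary load units. A higher score means the student is closer to an
    instability transition, even if current grades look unchanged.
    """
    pressure = study_load * (1.0 - engagement)   # load not absorbed by engagement
    buffer = max(attendance * engagement, 1e-6)  # protective capacity, floored to avoid div-by-zero
    return pressure / buffer
```

Under this toy formula, a student with the same study load but collapsing engagement scores far higher risk than one with strong buffers, which is the kind of signal a prediction-only model would miss until grades actually drop.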
AI security framework: deterministic input filtering, adaptive rule learning (389K pre-trained attacks), optional LLM veto verification. Zero dependencies. Works without an LLM. Patent Pending.
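The layered design named above (deterministic input filtering first, learned rules second, an optional LLM veto last) can be sketched generically. The function names, patterns, and return values below are assumptions for illustration, not the framework's actual API.

```python
import re

# Stage 1: hand-written, deterministic attack patterns (illustrative examples).
DETERMINISTIC_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),  # prompt-injection phrase
    re.compile(r"<script\b", re.I),                            # embedded script tag
]

def deterministic_filter(text: str) -> bool:
    """Stage 1: reject inputs matching known attack patterns."""
    return any(p.search(text) for p in DETERMINISTIC_PATTERNS)

def learned_rules_filter(text: str, learned_rules: list[str]) -> bool:
    """Stage 2: reject inputs matching substrings learned from past attacks."""
    lowered = text.lower()
    return any(rule in lowered for rule in learned_rules)

def screen_input(text: str, learned_rules: list[str], llm_veto=None) -> str:
    """Run the three stages in order; the LLM veto callback is optional,
    so the pipeline still works with no LLM available."""
    if deterministic_filter(text):
        return "blocked:deterministic"
    if learned_rules_filter(text, learned_rules):
        return "blocked:learned"
    if llm_veto is not None and llm_veto(text):
        return "blocked:llm_veto"
    return "allowed"
```

The key design point this sketch shows is ordering: cheap deterministic checks run before learned rules, and the expensive (and optional) LLM veto runs only on inputs the first two stages let through.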
A systems-thinking essay that reframes failure as a gradual transition rather than a discrete outcome. It explains how pressure accumulation, weakening buffers, and hidden instability precede visible collapse, and why prediction-based models arrive too late to prevent failure in human-centered systems.
Master's thesis: exploring bias in German NLG (GPT-3 & GerPT-2). Applies regard classification and bias-mitigation triggers.
Explore the intersection of ethics and technology with 'Trustworthy AI: From Theory to Practice.' This comprehensive guide delves into creating AI models that prioritize privacy, security, and robustness. Featuring practical examples in Python, it covers topics including uncertainty quantification and adversarial ML.
Experimental AI cognitive architecture exploring how agents develop memory, self-reflection, and evolving internal identity over time.
HyperCortex Mesh Protocol (HMP): decentralized cognitive mesh for AI agents
The left hemisphere. Frameworks, logic, and certainty architecture. Home of FSVE, AION, LAV, ASL, GENESIS, TOPOS, and 60+ epistemically validated frameworks built to make AI systems reliable, not just capable.
I've always believed that the hardest problems in AI aren't technical; they're architectural. We keep building systems that can't explain themselves, can't prove their own integrity, can't handle uncertainty without either freezing or lying. And then we act surprised when we don't trust them.
🦄 Ethical Entrepreneurship Startup: Project Mindful Emotional AI is a startup developing ethical and scalable Emotion AI solutions. It uses advanced technologies and the InferenceOps paradigm to capture and analyze emotional data in real time, enhancing human–machine interaction and ensuring regulatory alignment.
A Python implementation of Dave Shap's ACE Model.