simaba/ai-prism
PRISM — AI Governance Resource Hub

Practical Resources for Intelligent Systems Management — a curated collection of frameworks, tools, regulations, papers, and open-source projects for responsible and trustworthy AI deployment in regulated industries.

Maintained by Sima Bagheri · LinkedIn · Medium

Focus areas: Enterprise AI governance · LLM deployment safety · Risk management · Regulatory compliance (NIST AI RMF, EU AI Act, ISO 42001) · Release readiness · Incident response


Contents


Regulatory Frameworks

United States

European Union

  • EU AI Act — The world's first comprehensive legal framework for AI, using a risk-based tiered approach (unacceptable, high, limited, minimal risk).
  • EU AI Act Summary — Plain-language guide to the EU AI Act provisions and timelines.
  • GDPR & AI — European Data Protection Board guidance on AI and GDPR intersection.

International Standards

  • ISO/IEC 42001:2023 — International standard for AI management systems. Provides requirements and guidance for establishing, implementing, maintaining, and improving an AI management system.
  • ISO/IEC 23894:2023 — Guidance on risk management for AI systems.
  • IEEE 7000 Series — IEEE standards for ethically aligned AI design.
  • OECD AI Principles — International principles on trustworthy AI adopted by 46 countries.

Risk Management Frameworks

  • NIST AI RMF Core — Interactive version of the AI RMF with searchable categories and subcategories.
  • Microsoft Responsible AI Standard — Microsoft's internal responsible AI framework, publicly shared.
  • Google PAIR Guidebook — People + AI Research guidebook for designing human-centered AI.
  • IBM AI Fairness 360 — Open-source toolkit for examining, reporting, and mitigating discrimination in ML models.
  • MITRE ATLAS — Adversarial Threat Landscape for AI Systems — knowledge base of AI-specific adversarial tactics.
  • OWASP Top 10 for LLMs — The 10 most critical security risks for LLM applications.

Governance Tools & Platforms

  • Microsoft Responsible AI Toolbox — Integrated suite for responsible AI assessment, including error analysis, fairness, causal inference, and counterfactual analysis.
  • Giskard — Open-source AI quality testing platform for detecting biases, vulnerabilities, and performance issues.
  • verifywise — AI compliance platform with direct NIST AI RMF and EU AI Act mappings.
  • Evidently AI — Evaluate, test, and monitor ML and LLM models in production.
  • WhyLabs — AI observability platform for model monitoring and drift detection.
  • Fiddler AI — Explainable AI and model performance monitoring for enterprises.
  • Microsoft PyRIT — Python Risk Identification Toolkit for generative AI red teaming.
  • LangFuse — Open-source LLM observability and analytics.

AI Testing & Evaluation


Incident Management


Model Cards & Documentation


Academic Papers


Datasets & Benchmarks

  • BigBench — Collaborative benchmark for evaluating large language models on tasks beyond current model capabilities.
  • TruthfulQA — Benchmark measuring whether LLMs generate truthful answers.
  • HarmBench — Standardized evaluation framework for automated red teaming.
  • MMLU — Massive Multitask Language Understanding benchmark across 57 subjects.

Communities & Organizations


Courses & Learning


My Open-Source Frameworks

Frameworks I have built for AI governance and release readiness in regulated industries:

| Repository | Description |
| --- | --- |
| governance-playbook | End-to-end AI governance playbook aligned with NIST AI RMF |
| release-checklist | Risk-tiered release gate checklist for LLM/ML deployments with airc CLI |
| nist-rmf-guide | Practitioner guide to implementing NIST AI RMF in regulated industries |
| release-governance | 5-stage release lifecycle framework with governance gates |
| accountability-patterns | Design patterns for human accountability in AI systems |
| regulated-ai | Starter kit for deploying AI in regulated industries (healthcare, finance, insurance) |
| multi-agent-governance | Governance framework for multi-agent AI systems |
| agent-eval | Evaluation framework for AI agents across correctness, safety, and reliability |

Contributing

Contributions are welcome! Please read the Contributing Guidelines and open an issue before submitting a PR.

How to add a resource:

  1. Verify the resource is publicly accessible and actively maintained
  2. Add it to the appropriate section with a one-line description
  3. For GitHub repos: add a stars badge using ![stars](https://img.shields.io/github/stars/owner/repo?style=social)
  4. Open a PR with the title Add: [Resource Name]
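
Putting the steps together, a new entry might look like the sketch below (the tool name, repo path `example-org/example-tool`, and description are placeholders, not a real resource):

```markdown
  • Example Tool ![stars](https://img.shields.io/github/stars/example-org/example-tool?style=social) — One-line description of what the tool does and why it belongs in this section.
```

The `?style=social` parameter renders the shields.io badge in GitHub's social style so it matches the other entries.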

License

CC0

To the extent possible under law, Sima Bagheri has waived all copyright and related or neighboring rights to this work.
