Three forces collided in 2026. The EU AI Act's enforcement deadlines started biting. ISO 42001 quietly became the SOC 2 of AI. US states started passing their own AI compliance laws. CISOs and GRC teams who'd been deferring AI governance now have hard deadlines and procurement teams asking for documentation that didn't exist a year ago.
This is the curation we wish existed. Compliance automation platforms that actually work for AI, not retrofitted SOC 2 tools. Governance platforms used by Fortune 500 risk officers. Regulatory trackers that publish original analysis instead of restating press releases. Updated quarterly as the regulatory landscape shifts.
Newsletter from Princeton researchers Arvind Narayanan and Sayash Kapoor on AI hype, real risks, and policy. Successor to their AI Snake Oil book and Substack.
Gary Marcus's Substack covering AI risk, regulation, and the limits of current AI systems.
Weekly newsletter from Anthropic's head of policy covering AI research, capability shifts, and policy implications.
Lawfare's AI vertical covering legal, regulatory, and national security implications. Original analysis from policy experts.
Stanford Institute for Human-Centered AI's research and policy briefs. The annual AI Index is the definitive industry benchmark.
Brookings Institution's AI policy research with a focus on governance, regulation, and democratic implications.
Research nonprofit publishing original work on AI safety, risk evaluation, and policy recommendations.
Mozilla Foundation's Trustworthy AI program with research, advocacy, and open-source tooling.
International Association of Privacy Professionals' AI hub. Definitive resource for compliance, AIGP certification, and practitioner training.
Anthropic's research and policy publications on AI safety, evaluation, and regulation.
AI governance, risk, and compliance platform used by Fortune 500 enterprises. Strong on EU AI Act readiness and model documentation.
AI governance platform with risk assessment, audit, and assurance modules. Active in NYC bias audit compliance.
European AI governance platform focused on EU AI Act compliance with model registry and risk classification tools.
AI governance platform for tracking AI use, managing compliance with NIST AI RMF and EU AI Act, and operationalizing policies.
AI governance platform with roots at ETH Zurich. Strong on ISO 42001 alignment and continuous compliance.
Compliance operations platform with AI governance modules covering NIST AI RMF, ISO 42001, and EU AI Act controls.
Real-time LLM security platform protecting against prompt injection, data leakage, and adversarial attacks.
MLSecOps platform for AI/ML model security, vulnerability scanning, and ML supply chain protection.
ML security platform protecting AI models from theft, evasion, and adversarial inputs at runtime.
AI observability and evaluation platform turning offline LLM evals into production guardrails. Detects hallucinations and unsafe outputs at scale.
Enterprise GenAI security platform for monitoring, controlling, and auditing employee LLM use across both sanctioned tools and shadow AI. Acquired by F5.
Compliance automation platform with AI module for ISO 42001, NIST AI RMF, and EU AI Act readiness alongside SOC 2 and HIPAA.
Compliance automation platform expanding into AI governance frameworks, including ISO 42001 and NIST AI RMF mapping.
Compliance automation platform with AI compliance modules covering ISO 42001 and EU AI Act controls.
GRC automation platform with growing AI compliance support for fast-moving frameworks.
Compliance automation platform for cloud-native companies, with growing AI governance and ISO 42001 features.
Free voluntary framework from NIST for managing risks across the AI lifecycle. The de facto US standard for AI governance programs.
International standard for AI management systems. Becoming the SOC 2 equivalent for AI compliance certification.
Independent reference site for the EU AI Act with article-by-article navigation, deadlines, and compliance guidance.
Open-source security framework identifying the most critical LLM application vulnerabilities. Widely adopted by AppSec teams.
Open database of real-world AI failures and harms. Essential reference for risk assessment and red-team scoping.
International Association of Privacy Professionals. Largest global community for privacy and AI governance practitioners with the AIGP certification.
Nonprofit advancing responsible AI practices with assessments, certifications, and a global practitioner community.
Long-running ML podcast with frequent episodes on AI safety, governance, evaluation, and regulatory affairs.
Lawfare's weekly podcast covering national security and tech policy, with deep AI regulation episodes.
Three criteria. First, does this resource teach you something you can't learn from a Google search? Second, is it actively maintained and producing new content? Third, do working practitioners actually recommend it to peers? We don't accept payment for listings. We review and update this page quarterly.