A practical overview of the US regulatory landscape for AI vendors and enterprise buyers, covering federal frameworks, executive orders, and state-level legislation.
The NIST AI RMF provides a voluntary framework for managing AI risk throughout the AI lifecycle. It organizes risk management into four core functions: Govern, Map, Measure, and Manage. While not legally binding, the framework has become the de facto standard for US-based AI governance.
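The four core functions can be thought of as a coverage checklist an organization works through for each AI system. As a minimal sketch, the hypothetical `RmfAlignment` tracker below (not part of NIST's framework, purely illustrative) records which functions have documented coverage:

```python
from dataclasses import dataclass, field

# The four core functions of the NIST AI RMF.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RmfAlignment:
    """Hypothetical tracker recording which RMF core functions an AI
    system's documentation covers. Illustrative only."""
    covered: dict[str, bool] = field(
        default_factory=lambda: {fn: False for fn in RMF_FUNCTIONS}
    )

    def mark(self, function: str) -> None:
        """Mark one core function as having documented coverage."""
        if function not in self.covered:
            raise ValueError(f"Unknown RMF function: {function}")
        self.covered[function] = True

    def coverage(self) -> float:
        """Fraction of the four core functions covered so far."""
        return sum(self.covered.values()) / len(self.covered)

alignment = RmfAlignment()
alignment.mark("Govern")
alignment.mark("Map")
print(alignment.coverage())  # 0.5
```

Because the framework is voluntary, such a tracker measures self-assessed alignment rather than legal compliance.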
The AI Vendor Risk Index evaluates vendor alignment with NIST AI RMF principles as part of our compliance scoring dimension.
Signed in October 2023, Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence establishes requirements for AI safety testing, standards development, and federal agency AI procurement. Key provisions include mandatory safety reporting for dual-use foundation models and new guidance for AI use in critical infrastructure.
Multiple US states have enacted or proposed AI-specific legislation. Notable examples include:

- Colorado's AI Act (SB 24-205), a comprehensive law imposing duties on developers and deployers of high-risk AI systems
- New York City's Local Law 144, which requires bias audits of automated employment decision tools
- Illinois's Artificial Intelligence Video Interview Act, governing AI analysis of recorded job interviews
- Utah's Artificial Intelligence Policy Act, which imposes disclosure obligations on generative AI use
Federal agencies increasingly require AI vendors to demonstrate risk management practices, provide algorithmic impact assessments, and ensure human oversight for high-stakes decisions. OMB Memorandum M-24-10 establishes minimum AI governance practices for federal agencies.
Our scoring methodology evaluates vendors on NIST AI RMF alignment, state regulatory preparedness, and federal procurement readiness. Vendors serving US government clients receive additional scrutiny on FedRAMP authorization and FISMA compliance.
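To make the combination of these dimensions concrete, here is a minimal sketch of a weighted compliance score. The weights, sub-score scale, and FedRAMP gating rule are all assumptions for illustration, not the published methodology:

```python
# Hypothetical weights for the three scoring dimensions named above;
# the actual index weights are an assumption here.
WEIGHTS = {
    "nist_rmf_alignment": 0.5,
    "state_preparedness": 0.3,
    "federal_readiness": 0.2,
}

def compliance_score(subscores: dict[str, float],
                     serves_government: bool = False,
                     fedramp_authorized: bool = False) -> float:
    """Weighted 0-100 compliance score. Vendors serving US government
    clients without FedRAMP authorization are capped at 50 -- an
    illustrative rule standing in for 'additional scrutiny'."""
    score = sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)
    if serves_government and not fedramp_authorized:
        score = min(score, 50.0)  # cap pending FedRAMP authorization
    return round(score, 1)

print(compliance_score(
    {"nist_rmf_alignment": 80, "state_preparedness": 70, "federal_readiness": 60}
))  # 73.0
```

A linear weighted sum keeps the score easy to explain to buyers; a real methodology might instead use hard gates for non-negotiable requirements such as FISMA compliance.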