AI Governance

Building an AI System Inventory and Risk Classification Framework

Joshua Garza

Key Takeaways

  • An AI system inventory is the foundational control surface for every downstream governance activity — risk assessment, bias auditing, incident response, and regulatory reporting all depend on it.
  • Risk classification should use the EU AI Act's four-tier structure (Unacceptable, High, Limited, Minimal) as operational vocabulary, even outside the EU.
  • No single discovery method finds everything — procurement reviews, cloud audits, business unit interviews, and shadow AI detection must all be used together.
  • Inventory maintenance requires four active mechanisms: intake gates, periodic reassessment cycles, triggered reviews, and explicit ownership transitions.
  • Start with 80% coverage in 30 days targeting your highest-consequence systems; imperfect coverage beats indefinite delay.

The foundational prerequisite for any AI governance program is a comprehensive, maintained inventory of all AI systems across the organization, paired with a risk classification framework that enables proportionate oversight and regulatory compliance. Without knowing what AI systems exist, who owns them, what data they consume, and what decisions they influence, every downstream governance activity — risk assessment, bias auditing, incident response, regulatory reporting — operates on incomplete information. The inventory is not a deliverable. It is the control surface. Everything else depends on it.

This post walks through the structural components of a defensible AI system inventory and risk classification framework. Each section builds sequentially: why inventory is a prerequisite, what to capture, how to classify risk, how to discover what already exists, and how to keep the registry current. The approach draws primarily on the EU AI Act¹, NIST AI RMF 1.0², and ISO/IEC 42001³ as reference frameworks, but the operational guidance applies regardless of jurisdiction or regulatory exposure.

You Cannot Govern What You Have Not Inventoried

AI systems enter organizations through multiple channels — procurement, shadow IT, embedded vendor features, and data science experimentation — and most of them arrive undocumented. Before you can assess risk, assign accountability, or demonstrate compliance, you need to know what exists.

This is not optional. The NIST AI RMF² begins with its "Map" function — categorizing AI systems before measuring or managing them. ISO/IEC 42001³ requires determining which AI systems fall under management scope. The EU AI Act¹ mandates registration and documentation for high-risk systems.

Without inventory, you get duplicated effort, invisible risk exposure, and no capacity to respond to regulatory inquiries or incidents.

What Belongs in an AI Inventory

A governance-grade inventory is not a vendor spreadsheet. Each entry should capture structured fields: system name, business purpose, data sources, risk tier, named system owner, deployment date and status, whether the system is vendor-supplied or internally built, and deployment scope (internal, customer-facing, or third-party-facing).

Every field serves a governance function. Data sources feed downstream privacy and bias assessments. Named ownership converts abstract accountability into operational responsibility. Deployment scope determines scrutiny level regardless of technical complexity — a simple model making customer-facing credit decisions demands more oversight than a sophisticated internal analytics tool.
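
As a concrete starting point, here is a minimal sketch of how those fields might be represented in code. The field names and enum values are illustrative choices, not a prescribed schema; adapt them to whatever registry tooling you use.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class DeploymentScope(Enum):
    INTERNAL = "internal"
    CUSTOMER_FACING = "customer_facing"
    THIRD_PARTY_FACING = "third_party_facing"


@dataclass
class AISystemEntry:
    """One row in the AI inventory; each field maps to a governance function."""
    system_name: str
    business_purpose: str
    data_sources: list[str]        # feeds downstream privacy and bias assessments
    risk_tier: str                 # e.g. "high"; see the classification section below
    system_owner: str              # a named person, not a team alias
    deployment_date: date
    deployment_status: str         # e.g. "pilot", "production", "retired"
    vendor_or_internal: str        # "vendor" or "internal"
    deployment_scope: DeploymentScope
```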

Risk Classification Tiers: EU AI Act as a Reference Model

Once you know what to capture, the next question is how to classify each system's risk. Even outside the EU, the AI Act's four-tier structure¹ provides operationally useful vocabulary. Unacceptable Risk (social scoring, manipulative subliminal techniques) triggers immediate prohibition. High Risk (employment, credit, clinical, law enforcement, critical infrastructure) demands conformity assessments, human oversight, and documentation. Limited Risk (chatbots, synthetic content generators) requires transparency disclosures. Minimal Risk (spam filters, recommendation engines) needs inventory presence only.
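
To make the tiers machine-checkable inside the inventory, a simple enum with an obligations lookup is enough. This is a sketch: the obligation lists below paraphrase the tier descriptions above and are not a substitute for legal review.

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Illustrative tier-to-obligation mapping, paraphrasing the descriptions above.
TIER_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["halt deployment; prohibited use"],
    RiskTier.HIGH: ["conformity assessment", "human oversight", "documentation"],
    RiskTier.LIMITED: ["transparency disclosure to users"],
    RiskTier.MINIMAL: ["inventory entry only"],
}
```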

For ambiguous cases, the NIST AI RMF's² harm dimensions — physical, psychological, financial, reputational, societal — help adjudicate where a system falls when tier boundaries blur.
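
One way to operationalize that adjudication, reusing the RiskTier enum above, is to score each harm dimension on a small scale and escalate when any single dimension or the aggregate crosses a threshold. The 0-3 scale and the thresholds here are assumptions for illustration, not values taken from the RMF.

```python
HARM_DIMENSIONS = ("physical", "psychological", "financial", "reputational", "societal")


def suggest_tier(scores: dict[str, int]) -> RiskTier:
    """Map 0-3 harm scores per dimension to a suggested (not final) tier.

    Unacceptable is determined by prohibited-use category, not by harm
    scores, so it is never suggested here.
    """
    missing = set(HARM_DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"missing harm scores for: {sorted(missing)}")
    if max(scores.values()) >= 3 or sum(scores.values()) >= 6:
        return RiskTier.HIGH    # severe harm on any dimension, or broad moderate harm
    if sum(scores.values()) >= 3:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

A system scoring 3 on the financial dimension alone (say, automated credit limits) would be suggested High even if every other dimension scores 0, matching the intuition that a single severe harm channel dominates.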

Conducting an AI Discovery Sweep

With fields defined and risk tiers established, the practical challenge becomes finding the systems that need to be cataloged. No single discovery method surfaces everything. Four complementary channels are necessary:

  1. Procurement and vendor contract review — flag agreements containing AI, ML, or automation terms.
  2. Cloud and IT asset audits — scan for active model endpoints across SageMaker, Azure ML, Vertex AI, and similar platforms.
  3. Structured business unit interviews — ask what decisions rely on automated or AI-assisted tools, what data they consume, and who owns them.
  4. Shadow AI detection — review expense reports and browser extensions for unauthorized consumer AI subscriptions.

A questionnaire sent only to IT misses business-led procurement. A cloud audit misses SaaS-embedded AI. Accept that the first sweep will be incomplete — the goal is coverage breadth, not perfection.
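
For channel 1, a first pass can be as simple as pattern-matching contract text for AI-related terms. The keyword list below is a starting assumption; extend it with vendor product names as you learn what your agreements actually contain.

```python
import re

# Seed terms for procurement review (channel 1); deliberately broad, since
# false positives are cheaper than missed systems at this stage.
AI_TERMS = re.compile(
    r"\b(artificial intelligence|machine learning|neural network|"
    r"predictive model|automated decision|natural language processing|"
    r"computer vision|AI|ML)\b",
    re.IGNORECASE,
)


def flag_contract(text: str) -> list[str]:
    """Return the distinct AI-related terms found in one contract's text."""
    return sorted({m.group(0).lower() for m in AI_TERMS.finditer(text)})
```

Any contract for which flag_contract returns a nonempty list goes into the interview queue for channel 3.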

Maintaining the Inventory as a Living Document

Discovery is only the beginning. An inventory completed once and never updated is an audit artifact, not a governance tool. Four mechanisms prevent this decay:

  • Intake gates requiring an inventory entry and risk tier before any AI system reaches production
  • Periodic reassessment cycles — annual for high-risk, 18–24 months for lower tiers
  • Triggered reviews when training data, deployment scope, or regulations change
  • Explicit ownership transitions ensuring accountability survives personnel moves

This aligns with ISO/IEC 42001's³ continuous improvement requirements — the inventory is your AI management system's living control surface, not a filed document.
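
The first two mechanisms are straightforward to encode against the inventory schema sketched earlier, reusing AISystemEntry and RiskTier from above. The cadence values take the figures from the list, using the conservative 18-month end of the lower-tier range; triggered reviews and ownership transitions are process controls and are not shown.

```python
from datetime import date, timedelta

# Reassessment cadences from the list above; 548 days is roughly 18 months,
# the conservative end of the 18-24 month range for lower tiers.
REASSESSMENT_CADENCE_DAYS = {
    RiskTier.HIGH: 365,
    RiskTier.LIMITED: 548,
    RiskTier.MINIMAL: 548,
}


def intake_gate(entry: AISystemEntry) -> None:
    """Block promotion to production until governance fields are populated."""
    if not entry.risk_tier or not entry.system_owner:
        raise PermissionError(f"{entry.system_name}: inventory entry incomplete")


def next_review_due(entry: AISystemEntry, last_review: date) -> date:
    """Compute when an entry's next periodic reassessment falls due."""
    tier = RiskTier(entry.risk_tier)
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError("prohibited systems are decommissioned, not reviewed")
    return last_review + timedelta(days=REASSESSMENT_CADENCE_DAYS[tier])
```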

Starting Point

If your organization has no AI inventory today, do not wait for a perfect template. Target 80% coverage within 30 days. Start with the highest-consequence systems: anything influencing employment decisions, creditworthiness, clinical care, or public-facing automated determinations. A rough inventory of your riskiest systems delivers more governance value than a polished registry that takes six months to build while undocumented systems accumulate exposure unchecked.
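
If it helps to make the triage concrete, here is one way to order a rough first-pass inventory so the highest-consequence systems surface first. The purpose keywords are assumptions; substitute your own business vocabulary.

```python
# Purpose keywords flagging the high-consequence domains named above.
HIGH_CONSEQUENCE_TERMS = ("employment", "hiring", "credit", "clinical", "public-facing")


def first_sweep_order(entries: list[AISystemEntry]) -> list[AISystemEntry]:
    """Sort inventory entries so high-consequence purposes come first."""
    def is_high_consequence(entry: AISystemEntry) -> bool:
        purpose = entry.business_purpose.lower()
        return any(term in purpose for term in HIGH_CONSEQUENCE_TERMS)
    return sorted(entries, key=is_high_consequence, reverse=True)
```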

Governance follows the inventory. The inventory does not wait for governance to be perfect. Start now, iterate continuously, and treat completeness as a direction rather than a prerequisite.

An AI system inventory is not a bureaucratic exercise. It is the operational foundation without which risk classification, accountability, compliance, and incident response all fail. Every ungoverned AI system accumulates risk with each day it remains undocumented — risk that compounds silently until a regulator asks a question you cannot answer or an incident reveals a system nobody knew existed.

Start now. Target your highest-consequence systems first. Accept imperfect coverage over indefinite delay. Then maintain the registry as a living document, not a filed artifact. Governance begins here.

References

  1. European Parliament and Council. Regulation (EU) 2024/1689 (EU AI Act). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
  2. National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
  3. International Organization for Standardization. ISO/IEC 42001: Information technology — Artificial intelligence — Management system. https://www.iso.org/standard/81230.html