Custodia, LLC · Why We Exist

IS YOUR AI WORKING FOR YOUR BUSINESS? OR AGAINST IT?

Most organizations cannot answer that question with documentation. Custodia exists because someone has to answer it — and because no single discipline was asking it across governance, identity, and security at the same time.

// HOW WE GOT HERE
001


AI arrived as a hundred small decisions.

No organization decided one morning to deploy ungoverned AI. It happened incrementally. A vendor added an AI feature to a product already in use. An engineer connected a foundation model API to speed up a workflow. A department subscribed to an AI tool without telling IT. A chatbot went into production without a risk review. By the time leadership asked what AI the organization was running, the answer was already complicated — and nobody had mapped it.

This is not negligence. This is what rapid AI adoption looks like from the inside. The governance program was not built for this. The identity infrastructure was not extended to cover it. The security program did not include it. The documentation did not exist. And then someone — a customer, an auditor, a regulator — asked.

// THE BLIND SPOT
002


Three disciplines. One blind spot.
Nobody was looking at all three.

GOVERNANCE

AI governance frameworks tell organizations what controls to build and what policies to write. They are essential. But they do not tell you who has access to your AI systems right now, or whether that access is governed. A policy without access controls is a document, not a program.

IDENTITY

Identity and access management tells organizations who can do what across their systems. But most IAM programs were built before AI tools existed at scale. Service accounts running AI models, API keys with no owner, engineers with admin access to production models — these gaps exist in almost every organization we assess.

SECURITY

Security programs protect systems from external threats. But AI introduces threat vectors that traditional security controls were not designed to address — adversarial inputs, data poisoning, model manipulation, AI supply chain risk. Most security programs have not been extended to cover the AI layer explicitly.

Nobody was asking all three questions about the same AI systems at the same time.

That is the gap Custodia was built to close.

// WHY IT BECAME URGENT
003


The regulations arrived before the governance did.

THE REGULATORY MOMENT

The EU AI Act entered enforcement in 2026. Colorado's AI Act made annual impact assessments a legal requirement. Thirty-eight US states passed AI-related legislation in 2025 alone. Organizations that had deployed AI without governance suddenly had compliance obligations attached to systems they had never formally inventoried.

THE COMMERCIAL MOMENT

Enterprise customers started adding AI governance questions to vendor security questionnaires. Investors started asking about AI risk in technical due diligence. Board members started asking questions that compliance teams did not have documented answers to. AI governance stopped being a best practice and became a business requirement.

THE AUDIT MOMENT

Auditors testing SOC 2, ISO 27001, and HIPAA controls started asking specifically about AI systems. Access controls. Data governance. Incident response. The same frameworks organizations had been complying with for years now had AI-shaped gaps that existing compliance programs had not addressed. The evidence did not exist because nobody had built it yet.

// THE TRIFECTA
004


ONE FIRM. THREE DISCIPLINES. ONE ANSWER.

AI RISK

We inventory every AI system in your organization — including the ones nobody approved. We classify each one against applicable regulations, map your compliance gaps to specific obligations and penalty exposure, and tell you what your regulatory risk is in plain English. Not framework citations. Dollar amounts and deadlines.

NIST AI RMF · EU AI Act · ISO 42001 · State AI Laws
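To make the classification step above concrete, here is a minimal sketch of how an inventoried AI system might be bucketed into EU AI Act risk tiers. The field names, use-case lists, and tier rules are illustrative assumptions for this page, not Custodia's actual methodology or a complete reading of the Act.

```python
# Sketch: mapping an AI system inventory to coarse EU AI Act risk tiers.
# The use-case sets below are a few well-known examples only, not an
# exhaustive reading of Article 5 or Annex III.

PROHIBITED_USES = {"social_scoring", "realtime_biometric_id"}    # Art. 5 examples
HIGH_RISK_USES = {"hiring", "credit_scoring", "medical_triage"}  # Annex III examples

def classify(system: dict) -> str:
    """Return a coarse EU AI Act risk tier for one inventoried system."""
    use = system["use_case"]
    if use in PROHIBITED_USES:
        return "prohibited"    # Art. 5 — up to €35M / 7% revenue exposure
    if use in HIGH_RISK_USES:
        return "high-risk"     # Annex III — conformity assessment required
    if system.get("interacts_with_users"):
        return "limited-risk"  # transparency obligations apply
    return "minimal-risk"

inventory = [
    {"name": "resume-screener", "use_case": "hiring", "interacts_with_users": False},
    {"name": "support-chatbot", "use_case": "customer_support", "interacts_with_users": True},
]

for s in inventory:
    print(s["name"], "->", classify(s))
```

A real assessment also has to classify by deployment context, not just use case — the same model can be minimal-risk in one workflow and high-risk in another.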

AI IDENTITY

We assess how access to your AI systems is provisioned, managed, and revoked — using the same depth of IAM knowledge that comes from implementing identity governance inside enterprise organizations. We find the ungoverned service accounts, the former employees still active, the API keys with no owner. This is the domain no other AI governance firm delivers with real technical depth.

ISO 27001 Annex A 5.15 · 5.16 · 5.18 · 8.2 · 8.5 · SailPoint Implementation Standards
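The credential findings described above can be sketched as a simple cross-reference between the credential inventory and the identity source of truth. The data shapes and thresholds here are hypothetical; a real assessment pulls this from the IdP or IGA platform and the key-management system.

```python
# Sketch: flagging ungoverned AI-system credentials against an employee roster.
from datetime import date

active_employees = {"akim", "blee"}  # from the HR / identity source of truth

credentials = [
    {"id": "svc-llm-prod", "owner": None,   "last_rotated": date(2023, 1, 5)},
    {"id": "key-rag-api",  "owner": "cdoe", "last_rotated": date(2025, 6, 1)},
    {"id": "key-eval",     "owner": "akim", "last_rotated": date(2025, 9, 9)},
]

def findings(creds, employees, max_age_days=90, today=date(2025, 10, 1)):
    """Return (credential_id, issue) pairs for ungoverned access paths."""
    out = []
    for c in creds:
        if c["owner"] is None:
            out.append((c["id"], "no accountable owner"))
        elif c["owner"] not in employees:
            out.append((c["id"], "owner no longer active"))
        if (today - c["last_rotated"]).days > max_age_days:
            out.append((c["id"], "rotation overdue"))
    return out

for cred_id, issue in findings(credentials, active_employees):
    print(f"{cred_id}: {issue}")
```

The point of the sketch: every finding type here (orphaned owner, departed owner, stale rotation) maps directly to an ISO 27001 Annex A access-management control.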

AI SECURITY

We assess whether your security program has been extended to cover AI-specific threat vectors — adversarial inputs, model manipulation, data poisoning, AI supply chain risk. We evaluate whether your incident response plan covers AI failures and whether your AI vendors have been security-assessed. Most security programs have a gap the size of your AI stack.

NIST AI RMF · NIST AI 600-1 · NIST 800-53 · ISO 27001 · AI Threat Modeling

The ARIS Report tells you whether your AI is working for your business or against it — across governance, identity, and security — before a regulator, auditor, or breach does it for you.

// THE NUMBERS
005


Documented evidence has a dollar value.
So does the absence of it.

Every metric below comes from published research, regulatory frameworks, and enforcement records. The ARIS Report is timestamped, retained, and framework-mapped — which means every number in this section represents a risk your documentation either mitigates or leaves open.

EU AI Act · Max Penalty

€35M

or 7% of global annual revenue, whichever is higher

Maximum fine for prohibited AI system deployment under EU AI Act Article 5. High-risk system violations carry up to €15M or 3% of revenue. Third-party due diligence documentation is an explicit mitigating factor under the Act's conformity assessment provisions.

EU AI Act 2024/1689, Arts. 99–101
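As a rough illustration of how that ceiling scales with revenue (a simplified sketch assuming the higher-of-the-two rule in Article 99; not legal advice):

```python
def max_art5_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound for prohibited-practice violations under Art. 99:
    the greater of €35M or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

print(max_art5_fine_eur(200_000_000))    # smaller firm: the €35M floor applies
print(max_art5_fine_eur(1_000_000_000))  # larger firm: the 7% term dominates
```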

Average Cost of a Data Breach · 2024

$4.88M

IBM / Ponemon Institute

Identity-related breaches — unauthorized access via ungoverned credentials, service accounts, or orphaned API keys — consistently rank as the leading breach vector. The ARIS IAM gap analysis specifically surfaces these access control failures before they become incidents.

IBM Cost of a Data Breach Report 2024

B2B Contract at Risk

$50K–$500K

per failed vendor AI governance review

Enterprise customers are now embedding AI governance questions in vendor security questionnaires (Gartner, 2025). A single undocumented answer puts the entire contract at risk. The ARIS Report produces the evidence package that answers those questions before the questionnaire arrives.

Gartner AI Governance Survey 2025

Colorado AI Act Requirement

Annual

impact assessments — not optional

Colorado SB 205 requires developers and deployers of high-risk AI systems to complete a documented impact assessment every year. The obligation does not expire. The ARIS Report satisfies this requirement and Custodia retains the timestamped workpapers to maintain your annual compliance trail.

Colorado SB 205 (2024), §§ 6-1-1702–1704

Equivalent Big 4 Engagement

$25K–$75K

for the same framework coverage

A comparable AI governance assessment from a Big 4 or major consulting firm runs $25,000 to $75,000. An in-house AI Governance Officer costs $120,000 to $180,000 in annual salary plus benefits. ARIS Professional delivers the same board-ready, framework-mapped output for $6,500.

Consulting market benchmarks, 2024–2025

Breaches Involve Human / Identity Element

74%

Verizon DBIR 2023

74% of all breaches involve a human element — credentials, privilege abuse, or misuse of access. AI systems running on ungoverned service accounts and unaudited API keys are exactly this attack surface. The ARIS IAM assessment maps every AI system access path against this exposure.

Verizon Data Breach Investigations Report 2023

ANNUAL COMPLIANCE REQUIREMENTS SATISFIED BY A SINGLE ARIS ENGAGEMENT

Obligation | Frequency | Penalty / Consequence | ARIS Evidence Produced
Colorado AI Act § 6-1-1702 | Annual | AG enforcement · civil penalties | Documented AI impact assessment · retained by Custodia
EU AI Act Art. 9 · 10 · 17 · 72 | Ongoing / per deployment | Up to €15M or 3% revenue | Risk management, data governance & QMS documentation
SOC 2 CC6.1–6.3 | Annual audit cycle | Audit qualification · lost certifications | AI system access governance & provisioning controls
ISO 27001 Annex A 5.15–5.18 | Annual surveillance audit | Certification suspension | Identity management & access rights for AI systems
HIPAA 164.308(a)(4) | Ongoing / audit on demand | $100–$50K per violation | Workforce AI access controls & PHI-adjacent AI governance
NIST AI RMF (federal contracts) | Per contract cycle | Contract disqualification | Govern · Map · Measure · Manage framework alignment

The annual ARIS reassessment costs $2,500–$6,500. The annual cost of not having current documentation is measured in the table above.

READY TO KNOW
THE ANSWER?

Book a free 30-minute discovery call. We confirm your AI footprint, identify your regulatory exposure, and recommend the right ARIS tier. No commitment required.

Book Your Discovery Call

ARIS Essentials $3,500 · Professional $6,500 · Enterprise $9,500 · Pittsburgh, PA · Remote-friendly

Custodia, LLC · aiprivacyandgovernance.com · Pittsburgh, PA