If your institution uses AI or machine learning to make credit decisions, June 30, 2026 is a deadline you cannot afford to miss. Colorado’s SB 24-205, the first comprehensive US state law governing high-risk AI systems, takes full effect on that date, with direct application to financial services. It creates real compliance requirements for lenders, including impact assessments, bias audits, vendor accountability, and consumer disclosures. Non-compliance may lead to enforcement by the Colorado Attorney General.
The business case for getting ahead of it is straightforward. Institutions that build the right AI governance infrastructure now will be better positioned not just for Colorado, but for the wave of state-level AI legislation that is already moving through legislatures across the country.
Is Your Bank or Financial Institution in Scope?
The law applies to any lending institution or bank that deploys an AI system to make or substantially factor into a consequential decision about a consumer’s access to, or the cost or terms of, financial or lending services. In lending, that covers a wide range of models, from credit scoring, automated underwriting, risk-based pricing, and pre-approval tools to account management models that change credit lines or terms.
There is a conditional exemption for federally regulated banks and credit unions. If your institution is subject to examination by a federal or state prudential regulator and that regulator’s guidance is substantially equivalent to SB 24-205, including requirements to audit AI systems for bias, you may qualify.
But this is not a blanket pass.
An SR 11-7 program that was built around traditional statistical models and has not been extended to address algorithmic fairness, explainability, and disparate impact will not be sufficient. For a closer look at where banks typically carry gaps, see our post on evolving model risks and strategies for banks.
Non-bank lenders, fintechs, and smaller state-chartered institutions without substantive prudential AI oversight are squarely in scope with no exemption pathway.
The Four Obligations That Matter Most
SB 24-205 creates four core obligations that financial institutions must implement when using AI in credit decisions.
1. Annual Impact Assessments
Deployers must conduct a formal impact assessment for each high-risk AI system before deployment, annually thereafter, and after any significant modification. The assessment must document the system’s purpose, data inputs, known limitations, foreseeable risks of algorithmic discrimination, and mitigation measures. Records must be retained for at least three years. This is the most operationally intensive requirement of the law — and the one most institutions are least prepared for. Our post on 5 key strategies to audit model and AI risk in finance outlines a practical approach to building this process.
2. Bias and Fairness Testing
Institutions must demonstrate that each in-scope AI system does not produce outcomes that discriminate against consumers on the basis of protected characteristics. That requires running disparate impact analyses across race, gender, age, national origin, and related proxies and documenting the results, including any remediation steps taken. Our strategic roadmap for mitigating algorithmic risk in AI credit scoring covers the testing methodology in detail, and our whitepaper on Model Risk Management in Financial Services lays out the governance framework that supports it.
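The core of a disparate impact analysis can be sketched in a few lines. The example below computes approval rates by group and flags any group whose rate falls below four-fifths of the most-favored group's rate, a common benchmark from fair lending practice rather than a threshold named in the statute itself; the group labels and data shape are illustrative.

```python
from collections import defaultdict

FOUR_FIFTHS_THRESHOLD = 0.8  # common fair-lending benchmark, not a statutory threshold

def adverse_impact_ratios(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs.

    Returns, per group, the approval rate, the impact ratio relative to the
    highest-approval group, and a flag when that ratio falls below 0.8.
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    reference = max(rates.values())  # most-favored group's approval rate
    return {
        g: {
            "approval_rate": rates[g],
            "impact_ratio": rates[g] / reference,
            "flagged": rates[g] / reference < FOUR_FIFTHS_THRESHOLD,
        }
        for g in rates
    }

# Hypothetical outcomes: group A approved 80/100, group B approved 55/100
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)
report = adverse_impact_ratios(sample)
```

In practice this analysis is run per model and per protected characteristic (and proxy), and the results, including any flagged disparities and the remediation taken, are filed with the impact assessment.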
3. Consumer Disclosures and the Right to Appeal
When an AI system drives or substantially contributes to an adverse credit decision, the affected consumer must be notified — including that AI was used, what data was relied upon, and how it contributed to the outcome. Consumers must also have the opportunity to correct inaccurate data and, where technically feasible, to request human review. This requires coordination between model risk, compliance, legal, and customer service before the deadline.
4. Public Disclosure Statement
Deployers must publish a statement on their website summarizing the types of high-risk AI systems they use and describing how algorithmic discrimination risk is managed. This is table stakes for reputational credibility as AI governance scrutiny grows.
Getting Your Vendors in Order
For institutions using third-party credit scoring or underwriting models, the vendor is the developer under SB 24-205 — and you are the deployer. That means you are legally responsible for ensuring your vendor can support a compliant impact assessment. Vendors must provide documentation of training data, intended uses, known limitations, and discrimination risk. If they cannot, your deployment is not defensible. Now is the time to issue vendor questionnaires and update contract language. For a real-world example of how structured third-party model governance delivers results, see our success story on 40% faster validation of third-party credit risk models.
How SR 11-7 Helps — and Where It Falls Short
Banks operating under SR 11-7 have a head start. Documentation of model purpose and limitations, independent validation, ongoing performance monitoring, and governance around model changes all transfer directly to SB 24-205 obligations. But SR 11-7 does not explicitly require disparate impact testing across protected classes, consumer-facing disclosures, or annual impact assessments framed around fairness. Institutions that want to rely on the prudential regulator exemption need to close those gaps — not just assert that SR 11-7 is in place. See our overview of SR 11-7 best practices for model governance and our post on how leading banks validate AI and ML models differently for guidance on where the bar is moving.
Five Steps to Take Before June 30
These five steps provide a practical roadmap for lenders to align AI-driven credit decisions with SB 24-205 requirements.
1. Build Your AI System Inventory
Catalog every model used in credit origination, underwriting, pricing, and account management. Identify which meet the SB 24-205 definition of a high-risk AI system. Many institutions will find systems in production that have never been formally governed.
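A lightweight inventory record makes the scoping decision auditable. The sketch below is one possible shape, assuming Python 3.10+; the field names are illustrative, not statutory language, and the in-scope test simply mirrors the "make or substantially factor into a consequential decision" standard described above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    """One inventory entry per model; field names are illustrative."""
    name: str
    business_use: str          # e.g. "underwriting", "risk-based pricing"
    developer: str             # internal team or third-party vendor
    substantial_factor: bool   # makes or substantially factors into a credit decision?
    last_impact_assessment: Optional[str] = None  # ISO date, None if never assessed

    @property
    def in_scope(self) -> bool:
        # High-risk under SB 24-205 when it substantially factors
        # into a consequential decision about a consumer
        return self.substantial_factor

# Hypothetical entries
inventory = [
    AISystemRecord("auto_underwrite_v3", "underwriting", "in-house", True),
    AISystemRecord("branch_footfall_model", "marketing", "in-house", False),
]
in_scope = [r for r in inventory if r.in_scope]
needs_first_assessment = [r for r in in_scope if r.last_impact_assessment is None]
```

Filtering on `last_impact_assessment` immediately surfaces the production systems that have never been formally governed.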
2. Run Bias and Fairness Audits
Test each in-scope model for disparate impact across protected class proxies and document the results. Where significant disparities exist, document mitigation steps in the impact assessment.
3. Complete Impact Assessments
Develop a standardized template covering purpose, data inputs, known limitations, discrimination risk, mitigation, and monitoring. Conduct the first assessment before June 30. Establish a calendar for annual re-assessments.
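The template can be enforced programmatically so no assessment ships incomplete. A minimal sketch, with section labels that mirror the contents listed above rather than any prescribed statutory format:

```python
REQUIRED_SECTIONS = [  # mirrors the assessment contents described above; labels are illustrative
    "purpose", "data_inputs", "known_limitations",
    "discrimination_risk", "mitigation", "monitoring",
]

def missing_sections(assessment: dict) -> list:
    """Return required sections that are absent or left empty."""
    return [s for s in REQUIRED_SECTIONS if not assessment.get(s)]

# Hypothetical draft assessment, still in progress
draft = {
    "purpose": "Risk-based pricing for unsecured personal loans",
    "data_inputs": ["bureau attributes", "application data"],
    "known_limitations": "Thin-file applicants score less reliably",
    "discrimination_risk": "",  # not yet completed
}
gaps = missing_sections(draft)
```

A check like this can gate deployment in the model governance workflow and feed the annual re-assessment calendar.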
4. Update Vendor Contracts
Send SB 24-205 documentation requests to third-party model vendors. For new contracts, require explicit representations on impact assessment support, 90-day notification of discovered discrimination risks, and ongoing disclosure maintenance.
5. Build Disclosure and Appeal Workflows
Align adverse action notices to satisfy both Reg B and SB 24-205. Establish a human review process for AI-assisted decisions. Publish the required public disclosure statement before the deadline.
Conclusion
Colorado is the first mover, not the last. Multiple US states have active AI accountability legislation in progress, and the CFPB has made explainability in credit decisions an ongoing examination focus. The governance infrastructure you build now — model inventory, bias testing, impact assessments, vendor accountability — is the same infrastructure that will serve you as this regulatory landscape consolidates. Institutions that treat SB 24-205 as a one-state compliance exercise will be rebuilding from scratch every time a new state law takes effect. Those that treat it as the foundation of a durable AI governance practice will have a structural advantage. For a broader view of where AI is reshaping credit risk governance across US banking, see our post on how AI is reshaping credit risk analytics and compliance in US banks.
Also, explore our Model Risk Management capabilities, see how we delivered 40% faster third-party credit model validation, and learn how we drove $400K in annual savings through ML-based credit scoring improvements for a US lender.
Ready to assess your readiness?
Anaptyss helps banks and lending institutions design and execute model risk governance programs that meet SR 11-7, SB 24-205, and emerging state AI requirements. We combine deep credit model expertise with regulatory compliance knowledge to build impact assessment frameworks, conduct bias audits, and structure vendor accountability programs. For more information, reach us at info@anaptyss.com.