
State Spotlight: Colorado's AI Legislation

Data coverage: 2021–2026 (Q1 session). A deep dive into how Colorado went from an early mover to one of the most active AI-regulating states in the country - and how a governor's workgroup proposal to replace the landmark Colorado AI Act is reshaping the state's regulatory trajectory.

When researchers and advocates talk about which states are leading on AI policy, Colorado belongs in the first sentence. Since 2021, Colorado has introduced 108 AI-related bills, enacted thirteen core AI laws, convened a special legislative session in part to address AI governance, and established itself as an early laboratory for some of the most consequential technology policy questions of our time: How do you regulate high-risk automated systems? Who owns the image of your face? Can an algorithm set your rent?

This report draws on data from the CAID State AI Legislation Tracker to offer a systematic look at Colorado's AI legislative record from 2021 through early 2026. We trace three distinct waves of lawmaking, identify the five dominant policy themes that have shaped Colorado's agenda, and highlight the bills that made it (and those that didn't). The goal is to offer lawmakers and practitioners a picture of what Colorado's experience reveals about the opportunities and limits of state-level AI governance.

Numbers at a Glance

108 Total AI-related bills in the CAID tracker
38 Core AI bills (AI as primary subject)
13 Core AI laws enacted
9 Core AI bills introduced in 2026 so far

Colorado's AI legislative activity was essentially nonexistent before 2021. From 2017 to 2020, the state introduced a handful of bills that touched on AI-adjacent technologies (including automated vehicles, solar energy systems, and workforce platforms), but none that addressed AI as a primary policy subject. That changed in 2021 with a single targeted bill on insurance data, and the pace has not slowed since. By 2025, Colorado introduced more core AI legislation in a single year than it had in the preceding four years combined. The 2026 session is on pace to match that intensity, with nine core AI bills already introduced and major consumer protection legislation in active development.

Three Waves of Colorado AI Legislation

Wave 1 (2021 - 2022): Targeted, First-Mover Bills

Colorado's first wave of core AI legislation was narrow and intentional. In 2021, the legislature passed SB 21-169, restricting how insurance companies could use external consumer data, particularly in machine learning models, when making underwriting decisions. It was a focused intervention in a specific industry, not a sweeping AI framework, but it signaled that Colorado legislators were paying attention to how AI was already being deployed in consequential decisions affecting consumers.

The following year, the legislature passed SB 22-113, regulating law enforcement and government use of facial recognition technology. Colorado joined a small group of states addressing biometric surveillance directly, and did so before the national conversation had fully crystallized. Both laws were signed by the governor. The message was clear: Colorado was willing to move on AI in domains where the harms were concrete and the politics were manageable.

Wave 2 (2023 - 2024): Comprehensive Frameworks Emerge

By 2024, Colorado's AI legislative activity shifted from targeted interventions to more comprehensive regulation. The state passed four core AI laws in a single session - its most productive year to that point - and introduced twelve more bills that touched on AI-related themes.

The signature achievement of this wave was SB 24-205, the Consumer Protections for Artificial Intelligence Act, or the Colorado AI Act. Drawing on frameworks similar to those in the EU AI Act, SB 24-205 established obligations for developers and deployers of "high-risk" AI systems, requiring risk assessments, transparency disclosures, and anti-discrimination protections. Though the bill was significantly narrowed before passage under pressure from the technology industry, it remained one of the most ambitious state-level AI consumer protection laws in the country at the time of enactment.

The 2024 wave also produced important legislation on synthetic media. HB 24-1147 required disclosures when AI-generated content depicted candidates in political advertising. SB 24-011 addressed deepfakes used to facilitate stalking, harassment, and online misconduct. Both passed with broad support. Rounding out the year, HB 24-1468 created a comprehensive framework governing AI and biometric technologies, addressing data collection, retention, and use in commercial and government contexts.

Wave 3 (2025 + the Special Session): Proliferation and Evaluation

The 2025 legislative cycle saw a sharp acceleration in AI policy. Ten core AI bills were introduced in the regular session, with four more added during the August 2025 special session - fourteen total across the year - spanning consumer protection, public safety, algorithmic pricing, wildfire mitigation, government language access, and criminal procedure. The sheer volume reflected both national momentum (2025 was a record year for AI legislation across the country) and Colorado's accumulated policy sophistication: legislators were now working from experience, not just concepts.

Several high-profile bills failed or were vetoed, revealing the limits of the legislature's appetite on the subject. But the session also produced three new signed AI laws: HB 25-1153 (government language access with AI-powered assessment), SB 25-240 (a task force on AI-generated evidence in criminal proceedings), and SB 25-288 (criminal and civil liability for AI-generated intimate images, one of the strongest such laws in the country).

Notably, Colorado convened a special legislative session in August 2025 in which AI governance was among the subjects on the agenda. The legislature passed SB 25B-004, the Algorithmic Transparency Act, which required disclosure and impact assessments for automated systems used in high-stakes decisions. The special session underscores how seriously Colorado's legislature has come to treat AI regulation as an ongoing institutional responsibility, not a one-time policy exercise.

The 2026 regular session opened with nine core AI bills already introduced, touching healthcare, conversational AI, algorithmic pricing, data center infrastructure, and youth social media protections. The defining question of the session, however, is not any single bill but rather the fate of the governor's proposed ADMT framework - a proposal that, if enacted, would repeal and replace the Colorado AI Act entirely (see the Consumer Protection section below).

Five Policy Themes

1. Synthetic Media and Deepfakes: A Win for Colorado

No policy area illustrates Colorado's AI legislative success more consistently than synthetic media. The state has now signed three laws specifically targeting AI-generated or deepfake content:

  • SB 24-011 (signed, 2024): Criminalizes deepfakes used for online harassment and stalking
  • HB 24-1147 (signed, 2024): Requires disclosure of AI-generated content in candidate political advertising
  • SB 25-288 (signed, 2025): Criminal and civil liability for AI-generated non-consensual intimate images

These bills share several features that explain their success: they address harms that are visceral and easy to articulate (election manipulation, harassment, sexual exploitation), they draw clear lines between permissible and impermissible conduct, and they attract broad bipartisan support. For practitioners, the deepfake bills are also instructive for what they do not do - they regulate specific harmful applications of AI rather than the technology itself, making them more durable against industry objections.

2. Consumer Protection and High-Risk AI: Ambitious, Debated, and Still Emerging

Colorado's consumer protection arc is arguably the most interesting story in the data. It follows a pattern of legislative ambition, industry resistance, scaling back, and eventual (though partial) success.

SB 24-205 (2024) was the opening bid: a comprehensive high-risk AI framework covering risk assessments, algorithmic impact disclosures, and anti-discrimination obligations for developers and deployers. It passed, but only after significant amendments that considerably narrowed the scope.

In 2025, legislators tried again with SB 25-318, which would have extended those protections further. It was postponed indefinitely under pressure from the technology industry. Rather than accept defeat, legislators carried the work into the special session, where a more targeted and bipartisan version (SB 25B-004, the Algorithmic Transparency Act) was signed into law in August 2025. Where the earlier bills attempted comprehensive governance, SB 25B-004 specifically focused on transparency and disclosure requirements for high-stakes automated decision systems.

The consumer protection trajectory suggests a dynamic that may be common to many states: a legislature that is willing to keep pushing even when it fails, eventually producing incremental pieces of AI policy that can clear both chambers and win the governor's signature.

For practitioners, the lesson is that the overall direction is clear even when individual bills fall short. Organizations deploying AI in consequential decisions, such as hiring, housing, lending, and benefits, should treat Colorado's consumer protection trajectory as a signal of what is coming, not just what has already arrived.

The ADMT Framework: A Proposed Reset

The most significant development in Colorado's AI consumer protection story is not a bill that passed, but rather a proposal that has not yet been formally introduced. On March 17, 2026, Governor Polis's AI Policy Work Group, convened in October 2025 with representatives from consumer groups, hospitals, school districts, small businesses, large technology companies, and venture capitalists, released a draft framework that would repeal and replace SB 24-205 entirely. The workgroup's unanimous support for the proposal is notable given the diversity of interests represented.

The proposed framework, titled "Concerning the Use of Automated Decision Making Technology in Consequential Decisions" (ADMT), differs from SB 24-205 in both scope and approach. Where SB 24-205 focused on "high-risk AI systems" and required risk assessments, algorithmic impact disclosures, and anti-discrimination policies for developers and deployers, the ADMT framework narrows the target to "Covered Automated Decision-Making Technologies" and shifts the emphasis from internal risk management to consumer-facing transparency rights. Key provisions include:

  • Pre-use notice: deployers must inform consumers when an ADMT is being used in a consequential decision (hiring, housing, lending, healthcare, education, or government services)
  • Adverse outcome notice: if a decision is adverse, consumers must be notified within 30 days with information about the decision and the data relied upon
  • Correction and human review rights: consumers may correct inaccurate data used in the decision and request that a human review the outcome
  • Three-year recordkeeping: deployers must retain compliance documentation
  • Attorney General enforcement: no private right of action; the AG enforces with a 90-day cure period before penalties apply

The proposal also applies a higher applicability threshold than SB 24-205: rather than requiring that AI be a "substantial factor" in a decision, coverage is triggered only when the ADMT "materially influences" the outcome - a narrower standard. Legal services decisions, which were included under SB 24-205, are removed from the covered domains.

As of this writing, SB 24-205 remains on the books with an effective date of June 30, 2026 (delayed from its original February 1, 2026 date by the special session's SB 25B-004). Senate Majority Leader Robert Rodriguez, the primary architect of SB 24-205, has indicated he is reviewing the draft and is expected to introduce legislation to give the ADMT framework statutory force before the 2026 session closes in May. Whether the framework passes, is further amended, or stalls while SB 24-205 takes effect will define the next chapter of Colorado's consumer protection story.

For practitioners, the ADMT proposal signals a meaningful shift in Colorado's theory of AI regulation: away from upstream risk management obligations on developers and toward downstream transparency and consumer rights at the point of deployment. That shift may be more durable politically, even if it leaves harder questions about algorithmic discrimination and system design to other mechanisms.

3. Algorithmic Pricing: A Persistent Dead End

If deepfakes represent Colorado's clearest policy success, algorithmic pricing represents its most consistent failure. The legislature has tried three times to regulate the use of pricing algorithms, particularly in housing markets, and has failed each time; a fourth attempt is now pending:

  • HB 24-1057 (failed, 2024): Prohibit algorithmic tools used for coordinated rent-setting among landlords
  • HB 25-1004 (vetoed, 2025): Prohibit pricing coordination between landlords using algorithmic software
  • HB 25-1264 (failed, 2025): Prohibit use of surveillance data to set prices and wages
  • HB 26-1210 (in progress, 2026): Prohibit surveillance data used to set prices and wages (renewed attempt)

HB 25-1004 is notable because it passed both chambers with substantial support, only to be vetoed by Governor Polis, who argued it was overly broad and would create legal uncertainty for legitimate business practices. The veto reflects a tension at the heart of algorithmic pricing legislation: the behavior it targets may closely resemble what courts have treated as normal market information sharing, making the legal theory difficult to sustain.

For practitioners in housing, real estate technology, and retail, Colorado's repeated attempts are a signal that this policy area is not going away even if no single bill has succeeded. The 2026 reintroduction of the surveillance pricing ban (HB 26-1210), now framing the issue as a surveillance data problem rather than purely an algorithmic coordination problem, suggests legislators are looking for a legal theory with more staying power. Those whose business models involve algorithmic pricing should anticipate continued regulatory attention across multiple states.

4. Government Use of AI: An Emerging Agenda

A quieter but growing theme in Colorado's AI legislation concerns how government itself uses AI. Several recent bills have addressed AI in public-sector operations:

  • HB 25-1153 (signed, 2025): Required a statewide assessment of whether government agencies were using AI tools to support language access for non-English speakers, framing AI as a tool for equity rather than only a threat to it.
  • SB 25-240 (signed, 2025): Created a task force to study AI-generated evidence and its implications for criminal discovery and due process.
  • HB 25-1212 (stalled, 2025): Would have established public safety protections and accountability mechanisms for AI used by law enforcement; a more politically contested version of the government AI question.
  • SB 25B-004 (signed, 2025 special session): Applied algorithmic transparency requirements broadly, including to government-operated systems.

These bills reflect a legislature beginning to grapple with AI not just as an industry to regulate, but as a tool its own agencies are deploying. That shift in perspective - from external regulator to internal steward - tends to produce different, and often longer-lasting, policy outcomes.

5. Healthcare AI: A New Theme in 2026

A new and notable theme has emerged in the 2026 session: healthcare-specific AI regulation. Colorado has now introduced two bills that focus explicitly on AI in clinical and therapeutic contexts:

  • HB 26-1139 (Use of Artificial Intelligence in Health Care): Passed the House in March 2026 and heads to the Senate. The bill addresses AI use in clinical decision support, requiring transparency and human oversight when AI is used in consequential healthcare decisions.
  • HB 26-1195 (Psychotherapy Artificial Intelligence Restrictions): In progress in the House. The bill targets AI tools that simulate or facilitate therapeutic relationships, addressing concerns about unregulated AI mental health applications.

Healthcare AI is a domain where the harms are concrete and the regulatory hook (patient safety, professional licensing, clinical standards) is already established, features that have historically made Colorado bills more likely to succeed. If either bill passes, it would represent the state's first domain-specific AI law outside of insurance, synthetic media, and biometrics and could signal a new vector for state AI regulation nationally.

What Passed, What Failed, and Why

Looking across the full dataset, Colorado's core AI laws cluster around a few shared attributes. Bills that succeeded tended to address well-defined harms with identifiable victims; draw on existing legal frameworks (discrimination law, criminal codes, consumer protection statutes) rather than creating entirely new regulatory categories; attract bipartisan sponsors; and make their way through committees without triggering intense industry mobilization.

Bills that failed tended to be broader in scope, more novel in legal theory, or directly threatening to organized business interests. The algorithmic pricing bills faced real estate and software industry opposition. SB 25-318 faced a broad technology sector coalition. And Colorado's wildfire AI bills (SB 23-032, SB 25-011, and SB 25-022) represent a specific and recurring pattern: good ideas that cost money die in appropriations, not on the merits. Three separate proposals to use AI for wildfire detection and mitigation have all failed to survive the appropriations process, despite strong public support for the underlying goal.

A Note on Key Legislators

Any analysis of Colorado's AI legislative record has to mention Rep. Brianna Titone, who has emerged as the most prolific and consistent AI legislator in the state. An engineer by training, Titone has been a primary or co-sponsor on five core AI bills: HB 24-1147, HB 24-1468, SB 24-205, SB 25-318, and SB 25-288. Her sustained engagement across multiple sessions and regulatory approaches has given Colorado a degree of legislative continuity that many states lack.

Other notable sponsors include Sen. Robert Rodriguez (SB 24-205, SB 25-318, SB 25B-004), Sen. Chris Hansen (SB 22-113, HB 24-1147, HB 24-1468), and Rep. Lisa Cutter (SB 24-011, HB 25-1212, SB 25-011, SB 25-288). The recurring appearance of the same legislators across different bills and years reflects a small, but experienced, caucus that has developed genuine subject-matter expertise on AI governance.

Implications for Lawmakers and Practitioners

For lawmakers, Colorado's record offers a few practical lessons. Bills that work tend to be anchored to specific, demonstrable harms rather than to comprehensive governance frameworks, even when the latter is the ultimate goal. Starting narrower and building incrementally may be more effective than attempting omnibus AI legislation, even in a state with significant political appetite for regulation.

The special session precedent is also worth noting: Colorado demonstrated that AI governance is serious enough to justify extraordinary legislative measures when the regular session falls short. Other states may see this as a model, particularly where the annual legislative calendar is short and AI bills routinely die for lack of floor time and expertise.

For practitioners, Colorado's trajectory signals that the regulatory environment for AI is not static and is not going to recede. Even when individual bills fail, the legislative pressure accumulates and eventually produces law. Organizations operating in Colorado (or in states watching Colorado) should be preparing now for high-risk AI disclosure requirements, algorithmic transparency obligations, and expanded liability for AI-generated harmful content. The question is not whether these requirements will arrive, but when and in what form.

Colorado's experience also reveals which domains are highest on the regulatory priority list: automated decision systems affecting consumers in housing, employment, lending, and benefits; synthetic media used to deceive or exploit individuals; and government-operated AI systems that touch civil rights and due process. Practitioners in these domains face the most immediate regulatory exposure and should be treating Colorado's existing laws as a floor, not a ceiling.

What to Watch in 2026

The 2026 session, which runs through May 13, is already the most consequential in Colorado's AI legislative history not because of any single bill that has passed, but because of what is in motion simultaneously. There are four storylines worth tracking closely.

The ADMT framework and the fate of SB 24-205. This is the dominant question. The governor's AI Policy Work Group released its unanimously supported ADMT proposal on March 17, 2026. If Senate Majority Leader Rodriguez introduces legislation and it passes before the session ends, SB 24-205 would effectively be repealed and replaced before its June 30, 2026 effective date. If the bill is introduced but stalls, or if no bill is introduced at all, SB 24-205 takes effect in June with its current requirements, imposing risk assessment and algorithmic discrimination obligations that industry has spent two years lobbying against. The outcome will set the template for how other states approach consumer protection: the SB 24-205 model (risk management upstream) versus the ADMT model (transparency and rights downstream).

Healthcare AI. HB 26-1139 has passed the House and heads to the Senate. If it becomes law, Colorado would join a small but growing number of states with sector-specific AI rules for healthcare, a domain where enforcement infrastructure (medical licensing, patient safety boards) already exists. Watch for whether the Senate narrows the bill's scope, as has happened with most of Colorado's broader AI proposals.

Algorithmic pricing, again. HB 26-1210 has passed the House. After three prior failures (two committee deaths and one gubernatorial veto), this iteration frames the issue as surveillance-data misuse rather than pricing coordination, which may give it a stronger legal foundation. Governor Polis's veto of HB 25-1004 in 2025 is still the most recent precedent; whether the reframing is enough to change the outcome remains to be seen.

Conversational AI and chatbot regulation. HB 26-1263 (Conversational Artificial Intelligence Service Operator Requirements) has cleared committee and heads to the full House. This is a new regulatory domain for Colorado (targeting AI chatbot operators specifically), and the law could be among the first of its kind in the country if it passes. Its scope and ultimate text will be closely watched by the technology industry nationally.

Colorado's AI legislative record is, in the end, a story about a state that decided early that AI governance was a legitimate and necessary function of state government. It has spent five years working out, in the real-world friction of a legislature, what that actually means. The 2026 session is testing whether that accumulated experience is enough to resolve the state's biggest open question: what kind of consumer protection framework can actually survive the political process, get signed into law, and hold up in practice.

Explore Colorado's full AI legislation record in the CAID tracker


Data note: All bill counts and status data are drawn from the CAID State AI Legislation Tracker as of April 2026 (v2 data refresh), using Plural Policy (OpenStates) legislative data and the CAID two-tier AI classification system. Bill status reflects the most recent action recorded in the tracker. The ADMT framework discussion is based on the governor's AI Policy Work Group proposal released March 17, 2026; the framework had not yet been formally introduced as legislation at time of publication. This report will be updated as the 2026 session concludes.