
For our first blog of 2025, we continue our focus on an important player in the privacy and artificial intelligence (“AI”) space: Colorado. In a relatively short time, Colorado has become one of the U.S. leaders in both privacy and AI, beginning with its passage of the Colorado Privacy Act in 2021 (at the time, only the third omnibus privacy law in the country, behind California and Virginia). Now Colorado leads the charge in the AI space with its passage of the Colorado Artificial Intelligence Act (“CAIA” or the “Act”), the first comprehensive piece of AI legislation in the United States.
The CAIA was signed into law on May 17, 2024, and will take effect on February 1, 2026. The purpose of the Act is to “comprehensively regulate the development and use of high-risk artificial intelligence systems.”[1] The CAIA also introduces several terms that are essential to appreciating the scope of the Act, including[2]:
- “Developer” – an individual, corporation, or other legal or commercial entity doing business in Colorado that develops an AI system, or intentionally and substantially modifies one in a way that creates a new reasonably foreseeable risk of algorithmic discrimination.
- “Deployer” – a person doing business in Colorado who deploys a high-risk artificial intelligence system.
- “High-risk artificial intelligence system” – any AI system that, when deployed, makes, or is a substantial factor in making, a consequential decision.
- “Consequential decision” – a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, education, employment, financial services, health care, housing, or insurance (to name a few).
As previously mentioned, the Act regulates high-risk AI systems primarily to prevent, or at least mitigate, the risk of “algorithmic discrimination.” Algorithmic discrimination is exactly what it sounds like: the use of an AI system that results in unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of a protected classification. Interestingly, the CAIA does not prohibit algorithmic discrimination outright (which would amount to strict liability).
Instead, the Act imposes on both developers and deployers a duty of reasonable care (a negligence standard) to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of a high-risk AI system. The Act provides a rebuttable presumption that this duty has been met, but only if the developer or deployer satisfies its obligations under the CAIA. Those obligations, however, are somewhat burdensome and extensive.
Developers, for instance, owe multiple transparency obligations to deployers, the public, and even (upon request) the Attorney General.[3] At a glance, these include a general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI system, along with high-level summaries of its purpose, intended benefits, limitations, and the types of data used to train it. Developers must also provide documentation of how the system was evaluated for performance and for mitigation of algorithmic discrimination, the measures taken to mitigate known or reasonably foreseeable risks, and the data governance measures used to that end. If a developer discovers that its high-risk AI system has caused or is reasonably likely to cause algorithmic discrimination, it must disclose that fact to the Attorney General and to known deployers within ninety days. For the public, developers must post on their websites a statement summarizing the types of high-risk AI systems they have made available and how they manage the attendant known or reasonably foreseeable risks.
Deployers, on the other hand, have their own, largely distinct, set of transparency requirements.[4] Deployers can, however, become subject to the same requirements as developers if they themselves develop or intentionally and substantially modify an AI system. In most cases, though, deployers must implement a reasonable risk management policy and program, guided by the latest version of the “Artificial Intelligence Risk Management Framework” published by the National Institute of Standards and Technology (NIST). Deployers must also complete impact assessments annually and within ninety days after any intentional and substantial modification to a high-risk AI system. Like developers, deployers must post on their websites a statement summarizing the types of high-risk AI systems they use, how they manage the attendant risks, and the information they collect and use. Finally, deployers must conduct an annual review to ensure that algorithmic discrimination is not occurring.
Regarding consumer rights, if a high-risk AI system has made, or been a substantial factor in making, a consequential decision concerning a consumer, the deployer must notify the consumer and provide a statement describing the decision. If the decision is adverse to the consumer, the deployer must also disclose the principal reasons for it. There is, however, no private right of action.[5] Only the Attorney General can enforce the CAIA. That said, how the Attorney General will determine whether a given decision qualifies as a “consequential decision” remains to be seen. Even more puzzling is how the office will police this activity.
I anticipate that many states will follow suit in the coming year. With the AI craze in full effect, many states may want to follow Colorado’s lead and develop their own comprehensive frameworks, or at the very least a patchwork of AI-focused regulations similar to California’s approach, as discussed in our last blog. In the interim, developers and deployers alike must continue preparing for this law to take effect early next year. Thus, 2025 will be a busy compliance year for developers and deployers with AI offerings.
The information you obtain at this site, or this blog is not, nor is it intended to be, legal advice. You should consult an attorney for advice regarding your individual situation. We invite you to contact us through the website, email, phone, or through LinkedIn. Contacting us does not create an attorney-client relationship. Please do not send any confidential information to us until such time as an attorney-client relationship has been established.
[1] https://leg.colorado.gov/sites/default/files/images/fpf_legislation_policy_brief_the_colorado_ai_act_final.pdf
[2] https://casetext.com/statute/colorado-revised-statutes/title-6-consumer-and-commercial-affairs/fair-trade-and-restraint-of-trade/article-1-colorado-consumer-protection-act/part-17-artificial-intelligence/section-6-1-1701-definitions
[3] https://casetext.com/statute/colorado-revised-statutes/title-6-consumer-and-commercial-affairs/fair-trade-and-restraint-of-trade/article-1-colorado-consumer-protection-act/part-17-artificial-intelligence/section-6-1-1702-developer-duty-to-avoid-algorithmic-discrimination-required-documentation
[4] https://casetext.com/statute/colorado-revised-statutes/title-6-consumer-and-commercial-affairs/fair-trade-and-restraint-of-trade/article-1-colorado-consumer-protection-act/part-17-artificial-intelligence/section-6-1-1703-deployer-duty-to-avoid-algorithmic-discrimination-risk-management-policy-and-program
[5] Sec. 6-1-1706(6)