Compliance with the Colorado Artificial Intelligence Act: High Risk

Earlier this year, we briefly discussed multiple Artificial Intelligence (AI) Acts that are currently in effect or in the process of being rolled out in the United States. See, e.g., U.S. AI Acts Summaries – Colorado – Isaza Law, PC. This edition will focus on the Colorado Artificial Intelligence Act (CAIA) and how companies can comply when operating so-called “high-risk” AI systems.

A company’s obligations depend on how the entity is classified under the law: as a “developer” or as a “deployer” of AI systems. Under the CAIA, a “developer” is an individual or entity doing business in Colorado that develops, or intentionally and substantially modifies, a high-risk AI system. A “deployer” is an individual or entity doing business in Colorado that deploys a high-risk AI system. A high-risk system is any AI system that, when deployed, makes, or is a substantial factor in making, a “consequential decision.”

So, what is a consequential decision? The CAIA defines it as a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: (a) education enrollment or an education opportunity; (b) employment or an employment opportunity; (c) a financial or lending service; (d) an essential government service; (e) health-care services; (f) housing; (g) insurance; or (h) a legal service. The landscape of who is affected is thus broad, spanning a diverse set of essential sectors. For instance, using an AI tool to vet resumes could be considered high-risk, because employment falls within the list of consequential decisions.

Developer Obligations

Developers must use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of their high-risk AI systems. They are also subject to documentation requirements, public disclosures, and disclosures to the attorney general.

  1. Documentation Requirements

Developers must disclose high-level summaries of the type of data used to train the high-risk system, including the following (a structural sketch of these items appears after the list):

  • any known or reasonably foreseeable limitations of the system, including risks of algorithmic discrimination arising from its intended uses;
  • the purpose of the high-risk system;
  • the intended benefits and uses; and,
  • all other information necessary to allow the deployer to comply with the deployer’s duties.
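
For companies that track these disclosures in software, the following is a minimal sketch of how the items above might be recorded, assuming a hypothetical internal compliance tool; every class and field name is illustrative rather than statutory:

```python
from dataclasses import dataclass

# Hypothetical record of the CAIA developer documentation items.
# All class and field names are illustrative, not statutory terms.
@dataclass
class DeveloperDocumentation:
    system_name: str
    training_data_summary: str             # high-level summary of training data types
    known_limitations: list[str]           # known or reasonably foreseeable limitations
    purpose: str                           # purpose of the high-risk system
    intended_benefits_and_uses: list[str]
    deployer_compliance_notes: str         # information the deployer needs for its own duties
```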
  2. Documentation of Descriptions to Deployers

Developers must also describe how the high-risk system was evaluated for performance and for mitigation of algorithmic discrimination before it was offered, sold, leased, licensed, given, or made available to the deployer. The descriptions must also cover the measures taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination, the system’s intended outputs, how the system should (and should not) be used and monitored, the data governance measures applied to the training datasets, and the suitability of those data.

These documents must be made available to any deployer or a third party contracted by a deployer that utilizes the developer’s high-risk system so that the deployers can complete their own impact assessment.

  3. Public Disclosures

A developer must also publicly disclose, on its website or in a public use-case inventory, a statement covering the types of high-risk systems that the developer has created, or intentionally and substantially modified, and makes available to a deployer or another developer. The statement must also describe how the developer manages known or reasonably foreseeable risks of algorithmic discrimination arising from those high-risk systems.

  4. Disclosures to the Attorney General

Without unreasonable delay, and no later than 90 days after discovering any known or reasonably foreseeable risks of algorithmic discrimination, a developer must disclose those risks to the state attorney general and to all known deployers (and other developers) of the high-risk system. The discovery may come from the developer’s ongoing testing and analysis of the deployed high-risk system or from a credible report from a deployer alleging that the system has caused algorithmic discrimination. The attorney general may also request any of the information and documentation described above.

Deployer Obligations

A deployer’s obligations focus more on the consumers affected by the high-risk system the deployer is using. Deployers are held to the same reasonable-care standard as developers, evidenced by a risk-management policy, an impact assessment, and consumer-notification requirements.

  1. Risk-Management Policy

The risk-management policy governs the deployer’s deployment of the high-risk system. It must include the principles, planned processes (and their implementation), and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The policy’s reasonableness will be judged against nationally or internationally recognized risk-management frameworks for AI systems, such as the NIST AI Risk Management Framework, or against a framework designated by the state attorney general.
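
For illustration only, here is one way the skeleton of such a policy might be captured as a structured record in an internal compliance tool; every key and value below is a hypothetical example, not statutory language:

```python
# Hypothetical skeleton of a deployer's risk-management policy expressed as a
# structured record. All keys and values are illustrative examples.
risk_management_policy = {
    "governing_principles": ["fairness", "transparency", "accountability"],
    "processes": {
        "identify_risks": "periodic bias testing of model outputs",
        "document_risks": "findings logged in an internal risk register",
        "mitigate_risks": "retrain or suspend the system on confirmed discrimination",
    },
    "responsible_personnel": ["AI governance officer", "compliance counsel"],
    # One recognized benchmark; other recognized or designated frameworks may qualify.
    "reference_framework": "NIST AI Risk Management Framework",
}
```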

  2. Impact Assessment

An annual impact assessment may be completed by the deployer or by a third party contracted by the deployer. At a minimum, it must include the following (a sketch of such a record appears after the list):

  • a statement by the deployer disclosing the purpose, intended use, benefits, and deployment context of the high-risk system;
  • an analysis of the likelihood of any known or foreseeable risk of algorithmic discrimination, the nature of the discrimination, and the steps taken to mitigate the risk; and,
  • the categories of data the system uses as inputs and the outputs it produces, the categories of data used to customize the high-risk system (if applicable), the metrics used to evaluate performance or limitations, the consumer-transparency measures taken, and the post-deployment monitoring and safeguards in place.
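
As a minimal sketch, the minimum contents listed above might be captured in a record like the one below, assuming the same hypothetical compliance tool; all names are illustrative:

```python
from dataclasses import dataclass

# Hypothetical record mirroring the minimum contents of an annual impact
# assessment. Names are illustrative, not statutory terms.
@dataclass
class ImpactAssessment:
    purpose_and_intended_use: str
    benefits: str
    deployment_context: str
    discrimination_risk_analysis: str          # likelihood, nature, and mitigation steps
    input_data_categories: list[str]
    output_categories: list[str]
    customization_data_categories: list[str]   # if applicable
    performance_metrics: str
    transparency_measures: str
    post_deployment_monitoring: str
```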
  3. Consumer Notification

If a deployer uses a high-risk system to make, or as a substantial factor in making, a consequential decision concerning a consumer, the deployer must notify the consumer before the decision is made. The notification must provide information about the high-risk system, the deployer’s contact information, and the consumer’s right to opt out of the processing of personal data for purposes of profiling.

If the decision is adverse to the consumer, the deployer must disclose the principal reason or reasons for the decision, the manner and degree of the high-risk AI system’s involvement, and the type and sources of the data processed in making the decision, and must give the consumer an opportunity to correct any inaccurate personal data and to appeal the adverse decision.
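
For illustration, here is a minimal sketch of how a deployer’s system might assemble the two notices described above; the function and field names are hypothetical, and the actual content of any notice should be drafted with counsel:

```python
# Hypothetical helpers a deployer's system might use to assemble the two
# notices described above. All names are illustrative.
def build_pre_decision_notice(system_description: str, contact_info: str) -> dict:
    """Notice owed to the consumer BEFORE the consequential decision is made."""
    return {
        "system_description": system_description,
        "deployer_contact": contact_info,
        "opt_out_right": "You may opt out of the processing of your personal "
                         "data for purposes of profiling.",
    }

def build_adverse_decision_notice(principal_reason: str, ai_involvement: str,
                                  data_types: list[str], data_sources: list[str]) -> dict:
    """Additional disclosures owed when the decision was adverse to the consumer."""
    return {
        "principal_reason": principal_reason,
        "ai_involvement": ai_involvement,      # manner and degree of AI involvement
        "data_processed": data_types,
        "data_sources": data_sources,
        "correction_opportunity": True,        # right to correct inaccurate personal data
        "appeal_opportunity": True,            # right to appeal the adverse decision
    }
```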

Closing Thoughts

As we have seen from privacy laws around the world, transparency and data security are typically the name of the game. These new and upcoming AI laws are similar in that regard, especially given that consumers’ right to access is explicitly stated. In essence, having proper documentation, and a system in place to produce it, will help developers and deployers of high-risk artificial intelligence systems demonstrate compliance and mitigate shortcomings. Indeed, following the documentation rules and guidelines prescribed by the CAIA should establish a rebuttable presumption of compliance for both developers and deployers.

Of course, inherent in all AI compliance practices is the initial determination of whether the entity is considered a “developer” or a “deployer.” This step alone may require careful thought and legal analysis of the offering.

The information you obtain at this site or blog is not, nor is it intended to be, legal advice. You should consult an attorney for advice regarding your individual situation. We invite you to contact us through the website, email, phone, or LinkedIn. Contacting us does not create an attorney-client relationship. Please do not send any confidential information to us until such time as an attorney-client relationship has been established.
