131 changes: 131 additions & 0 deletions the-pulse/posts/2026/02/02/colarado-AI.qmd
---
title: "Colorado's AI Law Pause: What It Means for People Working in Data Science"
description: |
  Colorado’s decision to delay and potentially rewrite its first-in-the-nation Artificial Intelligence Act, alongside President Trump’s recent executive order discouraging state-level AI regulation, highlights a growing tension between ambitious AI governance goals and the operational realities of working in data science. Drawing on lessons from Colorado’s policy experiment and the European Union’s AI Act, this piece translates regulatory debates into concrete implications for practitioners who design, deploy, and maintain data products. Rather than treating governance as a purely legal matter, the article shows how accountability requirements surface directly in technical workflows such as documentation, data lineage, monitoring, reproducibility, and human oversight. It offers practical guidance for data scientists, analysts, and engineers on designing flexible, observable systems that reduce regulatory risk while improving technical quality and organizational resilience.
categories:
- AI governance
- Applied data science
- Data engineering and MLOps
- Technology policy
- Operational risk
author: "Dr. Stefani Langehennig, University of Denver Daniels College of Business"
date: last-modified
date-format: long
toc: true
format:
html:
theme: [lux, rwds.scss]
css: rwds.css
toc: true
grid:
sidebar-width: 0px
body-width: 1000px
margin-width: 250px
code-annotations: below
mermaid:
theme: neutral
#bibliography: references.bib
csl: chicago.csl
execute:
eval: false
echo: true
  message: false
error: false
warning: false
nocite: '@*'
page-layout: article
title-block-banner: "#ffffff"
# citation: true
image: images/thumb.png
---

In 2024, Colorado became the first U.S. state to pass a [comprehensive law](https://leg.colorado.gov/bills/sb24-205) aimed at regulating "high-risk" artificial intelligence systems: models used in areas such as hiring, housing, credit, and healthcare. The law adopted a risk-based approach, placing additional obligations on systems that shape consequential decisions, including requirements around documentation, monitoring, and human oversight. Less than a year later, lawmakers delayed its implementation and began reconsidering key provisions, citing uncertainty about feasibility, cost, and enforcement.

Colorado's approach drew explicitly on international models, most notably the [European Union’s AI Act](https://artificialintelligenceact.eu/), which similarly classifies AI systems by risk and ties higher-risk uses to stronger accountability requirements. Colorado's experience is not only a story about state politics. It serves as a useful case study for a more practical question: what happens when ambitious AI governance principles meet the realities of building and maintaining production data systems?

For data scientists, analysts, machine learning engineers, and others responsible for real-world data products, this moment signals that AI governance is no longer a peripheral policy concern. It is becoming an operational constraint.


## From Governance Principles to Technical Work

Colorado's law followed a pattern increasingly visible in global AI governance, particularly the European Union's AI Act. These frameworks share a risk-based logic: systems that influence consequential decisions face higher expectations for transparency, oversight, and accountability.

At a high level, these expectations (fairness, consumer protection, responsible use) sound abstract. In practice, they translate directly into technical work:

- Clear documentation of model purpose, training data, and limitations
- Records showing where data comes from and how it changes over time
- Reproducible experiments and versioned artifacts
- Ongoing monitoring for performance drift and unintended impacts
- Defined processes for human review and intervention

> None of this lives in legislation. It lives in scripts, workflows, dashboards, deployment systems, and operational infrastructure.
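
To make that concrete, here is a minimal sketch of documentation that travels with a model artifact. Everything in it is an illustrative assumption (the field names, the hypothetical tenant-screening model, the JSON file layout), not a format required by Colorado's law or the EU AI Act; the point is simply that purpose, training data, limitations, and oversight can live in a small, versioned file next to the model.

```python
# Illustrative sketch only: a lightweight "model card" saved next to the model
# artifact so that purpose, training data, and known limitations travel with it.
import json
from dataclasses import asdict, dataclass, field
from datetime import date

@dataclass
class ModelCard:
    name: str
    version: str
    purpose: str                  # what decision the model informs
    training_data: str            # where the data came from and its date range
    known_limitations: list[str]  # inputs or populations the model handles poorly
    human_oversight: str          # who reviews or can override model outputs
    last_reviewed: str = field(default_factory=lambda: date.today().isoformat())

# Hypothetical example values for a high-risk use case (housing decisions).
card = ModelCard(
    name="tenant-screening-score",
    version="1.4.0",
    purpose="Ranks rental applications for manual review; never auto-rejects.",
    training_data="Internal applications, 2019-2024; rows with missing income excluded.",
    known_limitations=["Sparse data for applicants with no credit history"],
    human_oversight="Leasing staff review every flagged application before a decision.",
)

# Writing the card to a versioned file keeps the documentation with the artifact.
with open(f"model_card_{card.version}.json", "w") as f:
    json.dump(asdict(card), f, indent=2)
```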

Colorado's stalled implementation of AI policy surfaced a familiar pattern: many organizations are well equipped to optimize model performance but far less prepared to operationalize accountability at scale. The friction emerged not because governance goals were controversial, but because the supporting technical infrastructure was uneven.

![](images/thumb.png){width=80% fig-align="center"}

## Why Uncertainty Becomes a Design Risk

One challenge Colorado encountered was definitional ambiguity. For example, what qualifies as "high risk", what safeguards are sufficient, and how should harms be assessed? These questions are not merely legal; they are technical and context-dependent.

Different data sources, deployment approaches, and users lead to different answers. For teams building data systems today, that uncertainty creates risk. When teams cannot easily see how data moves through a system, how models change over time, or how decisions are produced, adapting later becomes costly and disruptive.
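
One inexpensive way to keep "how data moves through a system" visible is to append a small provenance record every time a dataset is transformed. The sketch below is a hypothetical, plain-Python illustration of that idea; teams with existing lineage or orchestration tooling would capture the same information there instead.

```python
# Illustrative sketch only: append one provenance record per transformation so a
# later audit can answer "where did this data come from and how did it change?"
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(path: str) -> str:
    """Content hash of a file, tying downstream artifacts to exact inputs."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def log_step(log_path: str, step: str, inputs: list[str], output: str, notes: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "inputs": {p: fingerprint(p) for p in inputs},
        "output": {output: fingerprint(output)},
        "notes": notes,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")  # one JSON line per transformation

# Hypothetical usage after a cleaning step (file names are made up):
# log_step("lineage.jsonl", "drop_incomplete_rows",
#          inputs=["raw_applications.csv"], output="clean_applications.csv",
#          notes="Removed rows missing income; 3.2% of records dropped.")
```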

Recent federal signals add another layer of complexity. President Trump's executive order discouraging state-level AI regulation aims to reduce fragmented policy on AI, but it does not replace state experimentation with a concrete national policy. Teams now operate in a moving landscape shaped by state initiatives, evolving federal priorities, and international regimes like the EU AI Act. In this environment, aiming for minimal compliance is risky. Teams are better served by designing systems that are flexible and easy to observe from the start.

## Responsibility Does Not End at Deployment

A lesson emerging from both policy debates and practice is that accountability does not stop when a model goes live. Responsibility shifts across teams over time, from data scientists to engineers, product owners, operators, and decision-makers.

This challenge is the focus of the [Responsible Handover of AI framework](https://senseaboutscience.org/responsible-handover-of-ai/) developed by [Sense about Science](https://senseaboutscience.org/), which emphasizes the need for clear transitions of responsibility as AI systems move from development into real-world use. Rather than treating deployment as a handoff to "the business", the framework highlights the risks that arise when assumptions, limitations, and responsibilities are not carried forward with the system.

For practitioners, this framing maps governance concerns onto familiar operational questions, such as who monitors systems after deployment, which development assumptions still matter in production, how limitations are communicated to users, and what happens when systems are updated or handed over to new teams.

Without explicit handover practices, accountability gaps emerge because responsibility becomes diffuse as systems evolve. From this perspective, many regulatory requirements do not add entirely new work; rather, they formalize practices teams already rely on: documentation that travels with systems, monitoring in production, and clear escalation paths when something goes wrong.
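
As one hedged illustration of what "documentation that travels with systems" and "clear escalation paths" might look like at handover (this is not a rendering of the Sense about Science framework itself, and every value below is hypothetical), a team could check a small handover record into version control alongside the system:

```python
# Illustrative sketch only: a handover record committed to version control when a
# system moves from the development team to the team operating it in production.
handover = {
    "system": "tenant-screening-score v1.4.0",
    "handed_over_on": "2026-02-02",
    "receiving_owner": "platform-ops@example.org",  # hypothetical contact
    "monitoring": {
        "owner": "platform-ops@example.org",
        "checks": ["weekly input drift report", "score distribution vs. training baseline"],
    },
    "assumptions_carried_forward": [
        "Scores are advisory; staff make the final decision.",
        "Model was not validated on applicants with no credit history.",
    ],
    "escalation": "Page the on-call data scientist if any drift check exceeds its threshold.",
}
```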

## Practical Steps Teams Can Take Now

Regardless of how U.S. and international regulation ultimately settles, many investments pay off immediately while reducing future risk, including:

- _Standardize documentation_. Ensure model summaries and data descriptions travel with systems as they move between teams
- _Build end-to-end visibility_. Version datasets, features, models, and configurations so results can be reproduced
- _Instrument monitoring early_. Track input drift, unstable predictions, performance decay, and downstream impacts once systems are in production (a minimal sketch follows below)
- _Clarify governance workflows_. Define who approves releases, who monitors systems, and how responsibility shifts over time
- _Translate risk for leadership_. Gaps in documentation and visibility tend to come back later as messy, expensive fixes; addressing them early saves time and pain

> These practices are not limited to machine learning. Any system that informs decisions can create similar accountability challenges.
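
For the "instrument monitoring early" item above, one simple and widely used drift signal is the population stability index (PSI), which compares a feature's distribution in production against its distribution at training time. The sketch below is illustrative: the feature, thresholds, and simulated data are assumptions, and many teams prefer other drift metrics or off-the-shelf monitoring tools.

```python
# Illustrative sketch only: population stability index (PSI) for one numeric
# feature, comparing production inputs against the training-time baseline.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, n_bins: int = 10) -> float:
    # Bin edges come from the baseline so both samples are bucketed identically.
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip away empty bins to avoid division by zero or log of zero.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Rule-of-thumb thresholds (conventions vary): <0.1 stable, 0.1-0.25 watch, >0.25 investigate.
rng = np.random.default_rng(0)
training_income = rng.lognormal(mean=10.8, sigma=0.4, size=5_000)
production_income = rng.lognormal(mean=11.0, sigma=0.5, size=5_000)  # simulated drift
print(f"PSI for income: {psi(training_income, production_income):.3f}")
```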

## Governance Lives in the Data Stack

There's still no settled agreement on how AI should be governed. But for people building real-world data systems, the implications are already concrete. Accountability increasingly lives in the data stack: in how workflows are instrumented, how models are monitored, and how decisions can be examined after the fact.

This is not simply about regulatory compliance. It is about building systems that are transparent, resilient, and trustworthy at scale. Organizations that treat governance as a core technical problem (rather than an external policy constraint imposed later) will be best positioned to navigate whatever regulatory balance ultimately emerges.


::: {.article-btn}
[Explore more data science ideas](/the-pulse/index.qmd)
:::

::: {.further-info}
::: grid

::: {.g-col-12 .g-col-md-12}
About the author:
: [Dr. Stefani Langehennig](https://www.linkedin.com/in/stefani-langehennig-phd-418820144/) is an Assistant Professor of the Practice in the Business Information & Analytics Department at the University of Denver's Daniels College of Business. She is also the lead director for the Center for Analytics and Innovation with Data (CAID). As a former data scientist, she has worked with both academic and industry partners in the U.S. and abroad, helping organizations evaluate and implement data analytics and AI solutions. Her research focuses on computational social science methods, the impact of data transparency on political behavior, and legislative policy capacity.

::: {.g-col-12 .g-col-md-6}
**Copyright and licence** : © 2026 Stefani Langehennig
<a href="http://creativecommons.org/licenses/by/4.0/?ref=chooser-v1" target="_blank" rel="license noopener noreferrer" style="display:inline-block;">
<img style="height:22px!important;vertical-align:text-bottom;" src="https://mirrors.creativecommons.org/presskit/icons/cc.svg?ref=chooser-v1">
<img style="height:22px!important;margin-left:3px;vertical-align:text-bottom;" src="https://mirrors.creativecommons.org/presskit/icons/by.svg?ref=chooser-v1">
</a>
This article is licensed under a Creative Commons Attribution 4.0 (CC BY 4.0)
<a href="http://creativecommons.org/licenses/by/4.0/?ref=chooser-v1" target="_blank" rel="license noopener noreferrer" style="display:inline-block;">International licence</a>.
:::

::: {.g-col-12 .g-col-md-6}
**How to cite** :
Langehennig, Stefani. 2026. “**Colorado’s AI Law Pause: What It Means for People Working in Data Science**.” *Real World Data Science*. [URL](https://realworlddatascience.net/the-pulse/posts/2026/02/colarado-AI.html)
:::

:::
:::
Binary file added the-pulse/posts/2026/02/02/images/thumb.png