Connecticut's Cautious Approach to AI Law
For the second consecutive year, Connecticut's legislature has taken a measured approach to artificial intelligence, opting against broad regulations for businesses while passing targeted laws to criminalize deepfake revenge porn and fund AI education.
This session saw a significant bill aimed at AI business regulation pass the Senate, only to be shelved in the House. Governor Ned Lamont signaled a potential veto, expressing concern that strict regulations could hinder Connecticut's technology sector. This has left many wondering about the future of AI governance in the state.
Here’s a breakdown of the AI-related bills from this session, what passed, what didn't, and how Connecticut's actions compare to the national landscape.
What AI Legislation Passed in Connecticut?
Most of the successful AI legislation was included in the state's budget bill. Key provisions include:
- Education Funding: The budget allocated $500,000 for the Connecticut Online AI Academy, $25,000 for AI training at the Boys and Girls Club of Milford, and $75,000 for AI training pilot programs at three additional Boys and Girls Clubs.
- Criminalizing Deepfake Abuse: A new law, effective October 1, 2025, makes it a crime to disseminate "synthetically created" intimate images without the depicted person's consent. While not explicitly using the term "AI," this measure directly targets the growing issue of generative-AI revenge porn deepfakes.
- Enhanced Data Privacy: New privacy rules within Senate Bill 1295 require collectors of sensitive data to inform consumers when their personal information is used to train large language models. The law also grants consumers the right to opt out of automated systems using their data for major life decisions related to housing, insurance, healthcare, education, criminal justice, and employment, along with the right to question those automated decisions and correct inaccurate data.
Which AI Regulations Were Rejected?
Senate Bill 2, a significant measure that would have compelled companies to publicly disclose their use of AI to consumers, passed the Senate but was never brought to a vote in the House. In an effort to win the governor's support, lawmakers made last-minute amendments that weakened the bill's original requirements for annual impact assessments and mitigation of algorithmic discrimination. This marks the second consecutive year that a similar AI business regulation bill has passed the Senate only to stall under a veto threat from Governor Lamont.
Another piece of legislation, Senate Bill 1484, aimed at preventing algorithmic discrimination against employees and requiring disclosure of AI's role in worker assessments, passed the Judiciary Committee but did not advance further.
Why the Governor Opposed Broader AI Rules
Governor Lamont has consistently voiced apprehension that overly restrictive AI regulations could make Connecticut less attractive to technology businesses. The effort, led for the third year by Sen. James Maroney, D-Milford, gained support from key Senate leaders but failed to sway the governor.
In 2024, Lamont's office stated that AI is a "fast-moving space and that we need to make sure we do this right and don’t stymie innovation." This year, his chief innovation officer, Dan O’Keefe, argued that S.B. 2 sent a message that companies "can’t innovate here." O’Keefe also suggested it was "too early" for a state of Connecticut's size to be a pioneer in AI regulation.
How Connecticut Compares to Other States on AI
Several states are moving forward with AI legislation. In 2024, Colorado became the first state to require companies to disclose their use of AI systems and to outlaw AI-driven discrimination, accomplishing what Connecticut's S.B. 2 aimed to do. Utah passed a law requiring proactive disclosure of AI use in regulated fields, while California and Texas have also enacted private-sector AI regulations.
Connecticut is not alone in criminalizing deepfake revenge porn, joining states like New Jersey, New Hampshire, and Massachusetts. New Hampshire's law goes even further, prohibiting any deepfake that causes reputational harm, including those of political figures. Meanwhile, a bill similar to Connecticut's S.B. 2 was passed in Virginia but was ultimately vetoed by Gov. Glenn Youngkin.
The National AI Legal Landscape
At the federal level, the Take It Down Act, signed by President Donald Trump in May, criminalizes nonconsensual deepfake pornography nationwide. A significant development occurred recently when a federal budget bill, dubbed the "big beautiful bill," initially included a provision that would have barred states from enacting their own AI laws for ten years. However, that language was removed by a near-unanimous Senate vote, leaving states free to chart their own regulatory paths for now.
A Note on AI in Journalism
As a point of transparency, the original reporting by CT Mirror utilized AI tools to analyze bill text, transcribe hearings, and search for quotes. For instance, AI helped search for specific language in a 9-hour video, accelerating the reporting process. All AI-assisted findings were fact-checked against original sources.