Connecticut Passes Law Significantly Regulating Use of AI in Employment
On May 11, 2026, the Connecticut General Assembly passed Senate Bill 5, and Governor Lamont is expected to sign it into law. The law is a comprehensive online safety law with significant requirements relating to Automated Employment-related Decision Technology (AEDT). These AEDT requirements combine concepts from the current AI regulations in California and the European Union, taking a disclosure-focused approach that encourages, but does not mandate, substantive pre-use design reviews or audits. It also innovates by creating a program for third-party risk assessments as a means to vet and certify AI models, but falls short of making such evidence broadly admissible to defend against AI model-specific claims.
Broad Definition of Covered Technologies: The law defines the covered technology broadly. An AEDT is any system that processes personal data and produces outputs (e.g., predictions, scores, rankings, classifications, or recommendations) that are a “substantial factor” in making or materially influencing employment decisions (e.g., hiring, promotion, discipline, termination, and similar decisions tied to terms of employment). It expressly excludes generic software tools (e.g., spreadsheets, word processors) and tools used only incidentally or for descriptive/statistical purposes. This definition focuses on predictive AI technologies and the resulting potential of algorithmic bias (rather than generative AI large language models and their attendant primary risks of hallucination/inaccuracy). It will likely be read broadly to include GenAI if it is applied in a manner that might cause cognizable discriminatory harm or other covered injury.
Evidence of Bias Testing: The law amends Connecticut’s existing anti-discrimination framework effective October 1, 2026. This amendment explicitly provides that use of an automated system is not a defense to a discrimination claim under state law. That said, evidence of bias-testing and similar efforts, including the quality, efficacy, recency, and scope of these efforts, the results of these efforts, and the response to the findings, “may be consider[ed]” by courts or agencies in deciding liability. This aligns Connecticut with similar aspects of California’s Fair Employment and Housing rules, effectively endorsing the heightened value of (and need for) this sort of pre-use and potentially in-use testing of AI tools.
Developer-Deployer Division of Labor: The law allocates obligations between developers and deployers, with the primary compliance burden falling on deployers (i.e., employers or entities using the technology). Starting on October 1, 2026, developers must provide deployers with sufficient information to enable compliance with the law’s requirements (but only where the technology is marketed or intended to materially influence employment decisions). The law does not, however, list or categorize the types of information that developers are required to create in the first place, leaving open whether the information deployers need or request will, in practice, be available (and, if not, whether developers face any imperative to create that information upon request).
Notice and Disclosure: The law imposes a real-time interaction disclosure requirement. A deployer must inform any employee or applicant, in plain language, when they are interacting with AEDT. This disclosure is not required if it would be obvious to a reasonable person that the interaction involves an AEDT.
The law also requires a pre-decision notice when AEDT is used to generate outputs for, or as a substantial factor in, an employment decision. Before the decision is made, the deployer must provide a written notice to the affected individual that discloses:
- the fact that the technology is being used,
- the purpose of the technology and the type of employment decision involved,
- the trade name of the system,
- the categories of personal data processed and how those data are assessed,
- the sources of the personal data, and
- contact information for the deployer.
Notably, the law allows developers to contractually assume these deployer notice and disclosure obligations.
Trade Secrets Safe Harbor: The law includes a trade secret safe harbor: neither developers nor deployers are required to disclose trade secrets or other legally protected information, and they need only affirmatively notify the recipient that information is being withheld on that basis.
Independent Verification Organizations: The law establishes a pilot program beginning July 1, 2027, for “independent verification organizations,” i.e., third-party entities approved by the Connecticut Department of Consumer Protection to assess whether AI systems meet defined risk mitigation and safety standards (e.g., preventing personal injury, property damage, or data privacy harms). Approved organizations (capped at five) must apply and enter into a state-supervised memorandum of understanding that defines the scope of their verification activities, required methodologies, reporting obligations, and governance standards. These organizations do not grant formal certification or confer regulatory approval; rather, they issue verification assessments whose legal effect is limited. The assessments may be used as evidence in certain private civil cases but do not create a safe harbor, presumption of compliance, or defense in enforcement actions. The program is temporary (it sunsets in 2030) and designed as a testbed for potential future AI auditing or certification regimes, with required state evaluation and recommendations for expansion or modification.
Enforcement: Violations of this law are deemed unfair or deceptive trade practices pursuant to the Connecticut Unfair Trade Practices Act (CUTPA), enforceable exclusively by the Connecticut attorney general. There’s no private right of action, and a temporary cure period through December 31, 2027, may be available at the AG’s discretion.
The impact of this law is not to be underestimated. Employers should recognize that, although the statute stops short of mandating formal audits or certification, its structure signals a regulatory trajectory toward more formalized, third-party validation of AI systems. While we await implementing regulations, the law by itself reinforces that transparency of use and risk assessments are commonsense expectations when employers implement AI tools, and that independent, third-party audits may be the gold standard for defensibility. As a practical matter, employers deploying AEDT should begin building internal governance frameworks now, including documentation of system purpose and data inputs, comprehensive AI/AEDT use-case vetting guidelines and processes, regular bias testing, and vendor diligence protocols, to ensure they can satisfy both current disclosure requirements and likely future expectations around independent review and risk mitigation.