Update on California’s Efforts to Regulate the Use of AI in Employment Decision-Making

Updated May 19, 2023: AB 331 has died in committee and will not become law this session.

  • California’s Civil Rights Council has revised proposed regulations governing the use of automated-decision systems.
  • A proposed bill, AB 331, would impose obligations on employers to evaluate the impact of an automated decision tool (ADT), prohibit use of an ADT that would contribute to algorithmic discrimination, add a new notice requirement, and create a governance program.
  • A separate bill, SB 721, would create a temporary Working Group to deliver a report to the legislature regarding artificial intelligence.

California continues to take steps to regulate the burgeoning use of artificial intelligence, machine learning, and other data-driven statistical processes in making consequential decisions, including those related to employment. The California Civil Rights Council (CRC)1 recently issued updated proposed regulations governing automated-decision systems. The agency had issued a working draft in March 2022.  In addition to these regulatory efforts, California lawmakers have introduced two bills designed to further regulate AI in employment.  California’s efforts at oversight now consist of the following:

  • The CRC’s Proposed Modifications to Employment Regulations Regarding Automated-Decision Systems;
  • Assembly Bill No. 331, to add Chapter 25 (commencing with Section 22756) to Division 8 of the Business and Professions Code, relating to artificial intelligence; and
  • Senate Bill No. 721, “California Interagency AI Working Group,” to add and repeal Section 11546.47 of the Government Code, relating to artificial intelligence.2

While these approaches share the common goal of minimizing the potential negative consequences of artificial intelligence when deployed (in relevant part) in the employment and personnel management contexts, they also represent simultaneous efforts by disparate bodies in California racing to be the first to regulate new technology.  Adding to the confusion in an increasingly crowded space, these entities are proposing their own unique definitions of similar terminology (e.g., “adverse impact” by the CRC versus “algorithmic discrimination” by the California Assembly, and “automated-decision system” by the CRC versus “automated decision tool” by the California Assembly), when it remains unanswered who should even be defining these terms as a threshold matter.  Taken together, these efforts suggest that the CRC, whose mission is to promulgate regulations that implement California’s civil rights laws, appears to be leapfrogging the legislative process.

The following summarizes the latest primary updates regarding California’s three-pronged approach.

Civil Rights Council’s Proposed Rules

The latest iteration of the CRC’s Proposed Modifications to Employment Regulations Regarding Automated-Decision Systems was released on February 10, 2023.  Since first publishing draft modifications to its antidiscrimination regulations in March 2022, the CRC has continued to refine its definitions of key terms without altering the primary substance of the proposed regulations.  The CRC’s most recent proposal includes the following primary updates:

Key Updates to Definitions

  • Introduces a definition of adverse impact, which includes, but is not limited to, “the use of a facially neutral practice that negatively limits, screens out, tends to limit or screen out, ranks, or prioritizes applicants or employees on a basis protected by the Act.  ‘Adverse impact’ is synonymous with ‘disparate impact.’”
  • Introduces a definition of artificial intelligence to mean a “machine-learning system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions.”
  • Introduces a definition of machine learning to mean the “ability for a computer to use and learn from its own analysis of data or experience and apply this learning automatically in future calculations or tasks.”
  • Broadens the definition of agent from a person acting on behalf of an employer to provide services “related to … the administration of automated-decision systems for an employer’s use in recruitment, hiring, performance, evaluation, or other assessments that could result in the denial of employment or otherwise adversely affect the terms, conditions, benefits, or privileges of employment” to “the administration of automated-decision systems for an employer’s use in making hiring or employment decisions” (emphasis added).
  • Introduces a definition of proxy to mean a “technically neutral characteristic or category correlated with a basis protected by the Act.”
  • Provides a more comprehensive list of examples of tasks that constitute automated-decision systems, and clarifies that automated-decision systems exclude word-processing software, spreadsheet software, and map navigation systems.
  • Provides an example of the capability of algorithms to “detect patterns in datasets and automate decisions [sic] making based on those patterns and datasets.”
  • Renames the defined term Machine-Learning Data as “Automated-Decision System Data” and refines its definition.

Key Updates to Defense to Unlawful Employment Practice and Recordkeeping Obligations

  • Clarifies how an employer or covered entity can defend against a showing that it engaged in an unlawful use of selection criteria that resulted in an adverse impact or disparate treatment on an applicant, employee, or class of applicants or employees on a protected basis: the employer or covered entity can show that the selection criteria, as used, is job-related for the position in question and consistent with business necessity, and that there is no less-discriminatory policy or practice that serves the employer’s goals as effectively as the challenged policy or practice.3 
  • Extends recordkeeping obligations not just to any person who sells or provides an automated-decision system or other selection criteria to an employer or covered entity, but also to any person “who uses an automated-decision system or other selection criteria on behalf of an employer or other covered entity.”
  • Clarifies the scope of records to be preserved, rather than simply referring to “records of the assessment criteria used by the automated-decision system.”4 

Assembly Bill No. 331

Introduced by Assembly Member Bauer-Kahan on January 30, 2023, Assembly Bill No. 331 (AB 331) would, similar to NYC Local Law 144, impose obligations on employers to evaluate the impact of an automated decision tool,5 to provide notice regarding its use, and to establish a governance program.  AB 331 would also prohibit a deployer6 from using an ADT in a way that contributes to algorithmic discrimination.7

Impact Assessment

AB 331 would require a deployer and a developer8 of an ADT to perform an impact assessment9 on or before January 1, 2025, and annually thereafter, for any ADT used.  The impact assessment must include:

  • a statement of the purpose of the ADT and its intended benefits, uses, and deployment contexts;
  • a description of the ADT’s outputs and how the outputs are used to make, or are a controlling factor in making, a consequential decision;
  • a summary of the type of data collected from natural persons and processed by the ADT when it is used to make, or is a controlling factor in making, a consequential decision;
  • a statement of the extent to which the deployer’s use of the ADT is consistent with or varies with the statement required of the developer;10
  • an analysis of the potential adverse impacts on the basis of sex, race, color, ethnicity, religion, age, national origin, limited English proficiency, disability, veteran status, or genetic information;
  • a description of the safeguards that are or will be implemented by the deployer to address any reasonably foreseeable risks of algorithmic discrimination arising from the use of the ADT known to the deployer at the time of the impact assessment;11
  • a description of how the ADT will be used by a natural person, or monitored when it is used, to make, or be a controlling factor in making, a consequential decision; and
  • a description of how the ADT has or will be evaluated for validity or relevance.12

Notice Requirements

AB 331 would also require a deployer, at or before the time an ADT is used to make a consequential decision,13 to notify any natural person who is the subject of the consequential decision that an ADT is being used to make, or is a controlling factor in making, the consequential decision, and to provide that person with:

  • a statement of the purpose of the ADT;
  • contact information for the developer; and
  • a plain-language description of the ADT that includes a description of any human components and how any automated component is used to inform a consequential decision.

Furthermore, if a consequential decision is made solely based on the output of an ADT, a deployer must accommodate a natural person’s request not to be subject to the ADT and to be subject to an alternative selection process or accommodation, if technically feasible.

Governance Program

AB 331 would also require a deployer or developer to establish and maintain a governance program to map, measure, manage, and govern the reasonably foreseeable risks of algorithmic discrimination associated with the use of the ADT.  In relevant part, the governance program must provide for an annual and comprehensive review of policies, practices, and procedures to ensure compliance with the new chapter, and for reasonable adjustments to administrative and technical safeguards in light of material changes in technology, the risks associated with the ADT, the state of technical standards, and changes in the business arrangements or operations of the deployer or developer.

Senate Bill No. 721

Senate Bill No. 721 (SB 721), titled “California Interagency AI Working Group,” was introduced on February 16, 2023, and would create a Working Group to deliver a report to the legislature regarding artificial intelligence; the Working Group would be disbanded by January 1, 2030.

The Working Group would consist of 10 members:  two appointed by the governor, two by the president pro tempore of the Senate, two by the speaker of the Assembly, two by the attorney general, one by the California Privacy Protection Agency, and one by the Department of Technology.  The Working Group would be chaired by the director of technology, and the members must be Californians with expertise in at least two of the following areas:  computer science, artificial intelligence, the technology industry, workforce development, and data privacy.

The Working Group would be required to accept input from a broad range of stakeholders, including academia, consumer advocacy groups, and small, medium, and large businesses affected by artificial intelligence policies.  The Working Group would be required to:

  • Recommend a definition of artificial intelligence as it pertains to its use in technology for use in legislation;
  • Study the implications of using artificial intelligence for data collection to inform the testing, evaluation, verification, and validation of artificial intelligence, to ensure that artificial intelligence will perform as intended, and to minimize performance problems and unanticipated outcomes;
  • Determine proactive steps to prevent artificial intelligence-assisted misinformation campaigns and to protect children from unnecessary exposure to the potentially harmful effects of artificial intelligence;
  • Determine the relevant agencies to develop and oversee artificial intelligence policy and implementation of that policy; and
  • Determine how the Working Group and the Department of Justice can leverage the substantial and growing expertise of the California Privacy Protection Agency in the long-term development of data privacy policies that affect privacy, rights, and the use of artificial intelligence online.

SB 721 would also require the Working Group to submit a report to the legislature regarding the foregoing on or before January 1, 2025, and every two years thereafter.  SB 721, if enacted, would remain in effect until January 1, 2030.

With the proliferation of new regulations and laws, it is more important than ever for employers to stay abreast of developments in this area, especially given the potential for a resulting patchwork of obligations for those who choose to incorporate qualifying artificial intelligence into their personnel management processes.  Littler will continue to monitor and report on significant developments.


See Footnotes

1 Formerly the Fair Employment and Housing Council.

2 The California Senate has also introduced Senate Bill No. 313 (SB 313), which would establish the Office of Artificial Intelligence within the Department of Technology.  The office would be empowered with the authority necessary to guide the design, use, and deployment of automated systems by state agencies, to ensure that all artificial intelligence systems are designed and deployed in a manner that is consistent with state and federal laws and regulations regarding privacy and civil liberties, minimizes bias, and promotes equitable outcomes for all Californians.  SB 313 would also require any state agency utilizing generative artificial intelligence to communicate directly with a natural person to provide notice that the interaction with the state agency is being conducted through artificial intelligence, and to explain how the natural person can communicate directly with a natural person from the state agency.

3 The CRC also clarifies that, “[r]elevant to this inquiry [of whether the selection criteria is job-related, consistent with business necessity, and there being no less-discriminatory alternative] is evidence of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, recency, and scope of such effort, the results of such testing or other effort, and the response to the results.”

4 The scope of records to be preserved includes, but is not limited to, “automated-decision system data used or resulting from the application of the automated-decision system for each such employer or other covered entity to whom the automated-decision system is sold or provided or on whose behalf it is used.  Relevant records also include training set, modeling, assessment criteria, and outputs from the automated-decision system.”  As before, these records must be maintained for at least four years following the last date on which the automated-decision system was used by the employer or other covered entity.

5 Automated decision tool is defined as “a system or service that uses artificial intelligence and has been specifically developed and marketed to, or specifically modified to, make, or be a controlling factor in making, consequential decisions.”

6 Deployer is defined as “a person, partnership, state or local government agency, or corporation that uses an automated decision tool to make a consequential decision.”

7 Algorithmic discrimination is defined as “the condition in which an automated decision tool contributes to unjustified differential treatment or impacts disfavoring people based on their actual or perceived race, color, ethnicity, sex, religion, age, national origin, limited English proficiency, disability, veteran status, genetic information, reproductive health, or any other classification protected by state law.”

8 Developer is defined as “a person, partnership, state or local government agency, or corporation that designs, codes, or produces an automated decision tool, or substantially modifies an artificial intelligence system or service for the intended purpose of making, or being a controlling factor in making, consequential decisions, whether for its own use or for use by a third party.”

9 Impact assessment is defined as a “documented risk-based evaluation of an automated decision tool that meets the criteria of Section 22756.1.”

10 This is not required of a developer.  A developer must provide a deployer with a statement regarding the intended uses of the ADT and documentation concerning limitations of the tool, the type of data used to train the ADT, and how the ADT was evaluated for validity and explainability.

11 In lieu of this, a developer would be required to provide a description of the measures taken by the developer to mitigate the risk known to the developer of algorithmic discrimination arising from the use of the ADT.

12 This is not required of a developer.

13 Consequential decision is defined as a “decision or judgment that has a legal, material, or similarly significant effect on an individual’s life relating to the impact of, access to, or cost, terms or availability of [as is relevant here] … [e]mployment, worker management, or self-employment, including but not limited to all of the following:  (A) Pay or promotion; (B) Hiring or termination; (C) Automated task allocation.”

Information contained in this publication is intended for informational purposes only and does not constitute legal advice or opinion, nor is it a substitute for the professional judgment of an attorney.