Information contained in this publication is intended for informational purposes only and does not constitute legal advice or opinion, nor is it a substitute for the professional judgment of an attorney.
On August 24, 2023, the Office of Management and Budget approved a request from the U.S. Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP) to revise the “Itemized Listing” that OFCCP uses to collect information from federal contractors that are selected for supply or service audits. Among the changes that have been approved is a new requirement that audited contractors:
Item 21: Identify and provide information and documentation of policies, practices, or systems used to recruit, screen, and hire, including the use of artificial intelligence, algorithms, automated systems or other technology-based selection procedures.
OFCCP originally proposed adding this new requirement in November 2022. Several commenters voiced concern about this new requirement. In a document OFCCP issued on August 22, 2023 in support of its proposal, the agency responded to these concerns by arguing that:
use of these technology-based selection procedures may lead to instances of screening or selection bias such as assigning lower ratings to minority or women candidates in a screening process. Individuals with disabilities are also at risk of exclusion due to these tools. . . . The proposed language will ensure that OFCCP and contractors are evaluating whether their selection procedures, including the use of AI tools, are creating barriers to equal employment opportunity.
OFCCP’s inclusion of this inquiry into the use of AI by federal contractors is largely consistent with ongoing efforts by the OFCCP and the Equal Employment Opportunity Commission (EEOC) to scrutinize the use of AI in the workplace. In 2019, OFCCP published an FAQ stating that the use of screening devices like games, challenges, and video submissions that use “artificial intelligence algorithms” to assess qualifications triggers obligations under the Uniform Guidelines on Employee Selection Procedures. In other words, if an AI-based selection procedure results in an adverse impact on a particular race, sex, or ethnic group, the tool will be subject to OFCCP scrutiny.
The broad language of OFCCP’s inquiry into contractors’ use of AI raises several noteworthy concerns.
First and foremost, the Itemized Listing not only makes no effort to define “artificial intelligence” but actually exacerbates the term’s uncertain meaning by expanding the request for information and documentation to include “algorithms, automated systems,” and “other technology-based selection procedures.” This means that just about anything is potentially within the scope of the request. Accordingly, contractors will have to use their judgment and discretion in responding. For example, common tools such as online intake forms that automatically convert candidate inputs into a structured format would appear to fall within the literal language of this new request. The same is true for basic search functions that permit human users to search a candidate database using keywords or other criteria. However, it cannot be the case that OFCCP really expects (or would even want) contractors to provide information and documentation on such types of process-oriented tools.1
Another concern is the potential burden of responding to this request. Artificial intelligence, algorithms, automated systems, and technological tools may be incorporated into, or directly or indirectly support, hiring processes in a variety of ways that may not be apparent to the users of those systems. Identifying and describing such systems may be very difficult. Indeed, some of this information, such as the algorithms themselves, may not be available to the end user and may, in fact, be proprietary intellectual property to which the contractor has no access or which the contractor has no right to disclose.
Nor is it clear that this effort will generate any value. OFCCP has not provided any information as to how it intends to use the information that is provided. In situations where the data produced by a contractor does not indicate any potential adverse impact, OFCCP will have no basis for inquiring into the contractor’s use of artificial intelligence, algorithms, automated systems or other technology-based selection procedures. And, of course, when a contractor’s data does indicate potential adverse impact, OFCCP already has authority to inquire into the causes of the potential adverse impact, including any use of artificial intelligence or algorithms in making selections.
OFCCP began applying the scheduling letter to supply and service compliance evaluations scheduled on or after August 24, 2023, so now is a critical time for contractors to become familiar with the new requirements and ensure they are prepared for a potential audit. The updated scheduling letter emphasizes the need for contractors—and employers more generally—to exercise caution when deploying new employee selection tools or using them to make employment decisions. There are certain steps that contractors should consider moving forward. First, contractors should fully evaluate the potential risks of using AI as part of the selection process and understand the legal landscape. Relatedly, contractors should ensure that their AI-based algorithms are compliant with all applicable federal and state laws and regulations. Second, contractors should explore ways to minimize the potential legal and business risks associated with AI, such as implementing an AI usage policy or establishing internal best practices. Third, contractors should reach out to their AI vendors to gain more situational awareness of how the AI tools are being used. For instance, contractors should ask their vendors what data is being gathered and how it is being used for hiring purposes. Other useful questions for vendors may include asking about the type of statistical analyses they performed to test for disparate impact.
1 An ongoing concern in AI regulation is that definitional breadth may inadvertently sweep up uses and contexts far removed from the types of complex algorithmic assessments fairly considered to be subject to algorithmic (as opposed to human) bias. This is why existing regulations, such as New York City Local Law 144 of 2021, include a detailed and narrow definition of their technological scope. See RCNY § 5-300, which requires that the technology generate a prediction or classification, and that a computer, rather than a human, to some extent autonomously determine what characteristics of a data set to use within its model, the weights to give those characteristics, and, if applicable, what other parameters of the model to apply and adjust for improved accuracy.