Deepfakes in the Workplace: The Emerging Legal Risks of AI-Driven Harassment
A California appellate court recently affirmed a $4 million jury verdict in favor of a police captain who was subjected to a hostile work environment after a sexually explicit, AI-generated image resembling her was widely circulated in the workplace; the court held that disseminating such fabricated content constituted unlawful harassment under California law. In a separate case, a Washington State trooper has sued his employer for discrimination, retaliation, and invasion of privacy, alleging that a supervisor used AI to create and circulate a deepfake video of him intimately kissing a coworker. These high-profile incidents highlight a disturbing trend: AI-generated content, especially deepfakes, is emerging as a powerful new form of workplace harassment.
As AI tools become more accessible and ubiquitous in the workplace, employers should prepare for the possibility that deepfake content could be weaponized to humiliate, retaliate against, or intimidate colleagues, creating hostile environments that challenge current harassment policies and legal frameworks. Recent reports show deepfake-related fraud attempts surged by over 3,000% in 2023, with the number of deepfake files skyrocketing from 500,000 in 2023 to an estimated 8 million by 2025.1 In fact, the first quarter of 2025 alone saw 179 major deepfake incidents, already surpassing the total for all of 2024 and underscoring the accelerating risk of AI misuse.2
The U.S. Equal Employment Opportunity Commission (EEOC)—the federal agency charged with enforcing the nation’s core workplace anti‑discrimination laws—has expressly acknowledged this emerging risk. In its guidance document, “Summary of Key Provisions: EEOC Enforcement Guidance on Harassment in the Workplace,” the EEOC identifies examples of harassing conduct based on legally protected characteristics and explicitly includes the “sharing [of] pornography or sexually demeaning depictions of people, including AI‑generated and deepfake images and videos,” underscoring the agency’s recognition that AI‑enabled misconduct can constitute actionable workplace harassment.
The misuse of AI in the workplace extends far beyond deepfake pornography. AI tools can also be exploited to perpetrate other forms of harassment by generating manipulated or fictitious images that target an individual’s protected characteristics, such as race, disability, religion, or national origin. For example, using AI to create an altered image that depicts a colleague with a visible disability they do not have, or that changes their skin tone or ethnic features in a mocking or demeaning way, may constitute harassment under Title VII or the Americans with Disabilities Act. Title VII prohibits harassment based on protected characteristics regardless of whether the conduct occurs in person or through digital means, and deepfakes that target race, gender, or other protected traits fall squarely within its scope and can foster a hostile work environment.
The use of AI-generated deepfake pornography in the workplace may give rise to a host of legal consequences beyond traditional harassment claims under Title VII and analogous state anti-discrimination statutes. Depending on the circumstances, such conduct could trigger criminal liability, including charges related to cyber harassment, distribution of obscene material, or nonconsensual pornography under federal or state laws.3
Additionally, privacy laws may be implicated, particularly where individuals have a reasonable expectation of privacy that is violated by the unauthorized creation, manipulation, or dissemination of explicit images. Civil claims for intentional infliction of emotional distress may also arise, especially where the conduct is extreme and outrageous and causes severe psychological harm. Moreover, new statutes such as the 2025 federal TAKE IT DOWN Act and Florida’s Brooke’s Law require covered platforms to remove nonconsensual intimate deepfake content within 48 hours of a valid request, signaling growing legislative momentum.
As deepfake technology becomes increasingly sophisticated and accessible, its misuse in the workplace poses serious legal and reputational risks for both individuals and organizations. The potential consequences span a wide array of legal domains—including employment discrimination, privacy law violations, intentional infliction of emotional distress, and even criminal liability.
To help mitigate these risks, employers can adopt a proactive, multidimensional approach. Concrete steps include:
- Adopting specific policy language (e.g., “Prohibit the creation or distribution of AI-generated content that demeans or harasses employees based on protected characteristics”).
- Deploying technical safeguards (e.g., monitoring tools, watermark detection).
- Establishing incident response protocols that go beyond internal reporting (e.g., forensic investigation, cooperation with law enforcement, and coordination with legal and crisis management teams).
This begins with implementing a clear and comprehensive anti-harassment policy that identifies prohibited conduct, including the creation or distribution of sexually demeaning material such as AI-generated or deepfake images and videos. The policy should cover all forms of harassment based on protected characteristics and be paired with multiple, accessible avenues for reporting misconduct. Regular, mandatory training for all employees, including supervisors and managers, is also important to ensure the workforce can recognize, report, and appropriately respond to emerging forms of digital harassment. Immersive training and simulated exercises, such as realistic deepfake phishing or voice-clone drills, can help sharpen employees’ ability to detect manipulated media. Technical tools like watermark detection and authentication checks, paired with secondary confirmation protocols for sensitive communications, can also provide crucial layers of defense.

In cases where the conduct may rise to the level of criminal behavior, employers can establish protocols for timely reporting to law enforcement. In accordance with EEOC guidance, any report of harassment must be met with a prompt, impartial, and thorough investigation, followed by corrective action that is effective in stopping the behavior and preventing recurrence. Employers can also revise investigation protocols to treat digital content, such as deepfake images or synthetic audio, with the same rigor as physical evidence, including forensic analysis, metadata verification, and fair credibility assessments for both alleged victims and accused parties.

Taken together, these measures not only align with EEOC best practices but also help build a workplace culture that is resilient to the evolving threats posed by AI-enabled misconduct. As AI reshapes workplace dynamics, employers have an opportunity to set clear boundaries that protect employees and uphold trust.