The artificial intelligence (AI) revolution presents organizations with promising possibilities and massive risks.

While some companies are focused intensely on business applications, others are unsure where to begin. Either way, one legal liability issue to address is AI-related employment concerns, particularly those involving generative AI, including how AI may surface in harassment cases.

Deepfakes are a real risk. Using AI to create false images or videos of a person engaged in invented activities, often pornographic or hate-filled, can have devastating consequences. Casual viewers often don't know the images were altered or generated by a computer.

Unlawful workplace harassment is a type of discrimination that demeans or threatens one or more employees based on certain characteristics that are protected by the law or by a company’s internal policy. When harassment occurs through electronic devices like smartphones or computers, including the creation or dissemination of deepfakes, it’s often referred to as “cyber harassment.”

Cyber harassment is more common than you may realize.

A 2023 Anti-Defamation League survey found 52% of American adults reported experiencing hate or harassment online at some point in their lives (up from 40% in 2022). The survey also found 37% experienced severe harassment, which includes physical threats, sustained harassment, stalking, sexual harassment, swatting and doxing (publishing someone's sensitive information online).

Meanwhile, a recent Salesforce study of over 14,000 workers found that 28% of workers are currently using generative AI, and over half of them are doing so without their employer's formal approval.

Given the risks involved and the speed with which AI is changing, it's a business imperative to create and regularly update your policies, including policies that address cyber harassment, and to take steps to help protect staff.

States are tackling the issue at a staggering pace, introducing 50 AI-related bills per week. As of April 2024, 14 states had enacted laws addressing nonconsensual sexual deepfakes, and 10 states had enacted laws limiting the use of deepfakes in political campaigns.

Some bills under consideration would restrict technologies deemed high risk. Others would require purveyors of AI systems to evaluate the technology and/or would impose transparency, notice, and labeling requirements.

Take Specific Actions to Address Cyber Harassment

A comprehensive anti-harassment policy should cover cyber harassment and, more specifically, the improper use of AI.

In addition to having a policy, consider these strategies to help limit risks:

  • Provide an expansive definition of cyber harassment that includes cyberbullying, sexting and deepfakes. Because technology is rapidly evolving, don’t limit your prohibition to specific platforms or AI tools.
  • Educate staff about cyber harassment identification, prevention and response. Be clear that your company won’t tolerate any harassment, including using AI to commit cyber harassment. Ensure employees know how to report cyber harassment they experience or witness.
  • Regularly monitor and adapt to your state’s cyber harassment and AI-related laws. These laws vary by jurisdiction and can include those related to revenge porn, online impersonation, and doxing.
  • Don’t rush to judgment if harassing videos come to light. Continue following your policies about responding to incidents of harassment. Your company could face legal liability if it acts rashly against a person who appears in what turns out to be a deepfake video.
  • As always, consult legal counsel for jurisdiction-specific guidance and to monitor any new federal legislation. In late April, for example, the Senate Judiciary Committee's Subcommittee on Intellectual Property held a hearing titled "The NO FAKES Act: Protecting Americans from Unauthorized Digital Replicas," which examined the legal and ethical issues presented by deepfake videos and voice-cloning tools.

Remind Employees of Your Acceptable Technology Use Policy

Consider sending annual reminders about your technology use policy. The policy should define misconduct and state the consequences without limiting its scope to existing technologies.

At a minimum, your policy should:

  • State that users have no legitimate expectation of privacy in any material they create, receive, or view using your company’s equipment or servers.
  • Provide that your company reserves the right to monitor user activity at any time and for any reason, without committing the company to conduct such monitoring.
  • Establish zero tolerance for unlawful conduct online using your equipment or servers.

Investigate any employee complaints. Work with counsel to determine whether actions constitute misconduct when they occur offsite or involve equipment your company doesn't own.

Even if you can't identify the perpetrators or discipline those who allegedly committed cyber harassment, use the complaint as an opportunity to educate staff about your policy.

Generative AI will, by definition, continue to change. Matching the excitement of exploring generative AI business applications with clear technology use boundaries is one way to help prevent liabilities from taking root in your workplace.
