Sophisticated new technology can cause serious trouble for your employees and your organization. Don’t let it take you by surprise.
Deepfakes, which use artificial intelligence (AI) and machine learning to alter video, audio, and photos so that people appear to be doing or saying things they never did, may not affect your workplace now. But experts warn that they are coming and can create new security headaches for employers.
Deepfake technology, which is becoming more readily available, does have some positive uses. For instance, it can make a speaker appear to deliver a video in three or four different languages, or create a digital voice for someone who has lost the ability to speak.
The trouble starts when the technology is used to create false, embarrassing, or crude videos or audio that put your organization or its leadership in a negative light. You could be forced to spend time, money, and energy publicly demonstrating that the material was faked, and how. In the meantime, it could take months or years to repair the damage to your reputation.
At the same time, this technology could be used to fake identities in attacks on a company, stealing data and money. Attorneys warn that it is already being used to harass women. For instance, a coworker's face may be superimposed on a performer's body in an adult film and the result shared online. Particularly if company property is used in these attacks, there are numerous ethical, morale, compliance, and legal issues that must be addressed.
Here are a few steps you can take to prevent deepfake debacles:
1. Review company policies. Ensure that all existing policies governing the use of technology, as well as your anti-harassment, anti-retaliation, and anti-discrimination policies, address new technologies, including deepfakes. Communicate any policy changes to all employees.
2. Have a protocol in place for responding to a deepfake incident. Be prepared to react to any threat, including knowing what to do and say if faked audio or video is made public.
3. Watch for government action. Earlier this year, the House Intelligence Committee held a hearing on concerns about deepfakes. Specifically, they examined the national security threats posed by this technology and what can be done to detect and combat it. They also addressed what roles the public and private sectors can play in deepfake detection and prevention.
4. Stay alert, and train everyone to identify and report a deepfake video or similar threat. Red flags suggesting a video is fake include unnatural blinking or no blinking at all, jerky facial or body movements, shifts in skin tone and lighting, and blurred images or double exposures.
5. Talk to your HIT team and outside technology vendors. Alert them to your concerns about deepfakes, and ask for their suggestions on ways to prevent and identify them.