Exclusive Interview with Technologent’s Principal Solutions Architect Nathan Hull
Safeguarding the Workplace From Deepfakes
Posted on 12-29-2020, Read Time: 5 Min

“I would say AI technology-driven attacks are a leading emerging threat. Whether that technology is used to create a deepfake, mimic human behavior to avoid detection or to automate a campaign, it is something everyone should be aware of,” notes Nathan Hull, Principal Solutions Architect, Technologent.
With over 15 years of industry experience, Nathan Hull works with clients as a transformational IT consultant. In an exclusive interaction with HR.com, he talks about why AI technology-driven attacks, especially deepfakes, could be the biggest emerging threat facing companies as remote work becomes the new norm.
Excerpts from an interview:
Q: What are the newer varieties of cyber threats that have emerged during the Covid-19 period?
Nathan: The majority of attacks being utilized are fairly “traditional” in terms of what we have seen in the cybersecurity space over the past several years.
However, Covid-19 has provided a unique opportunity to create themed attacks such as phishing campaigns with content that has intense global interest. Couple that with the large percentage of users who are now working remotely or have shifted outside of their normal daily work routine, and you have somewhat of a perfect storm in regard to a cyber threat opportunity.
In my daily conversations, both personally and professionally, Covid-19 as a topic has a high frequency of occurrence. Having a subject so top of mind creates a level of familiarity or comfort, as some would say the “new normal.”
So, if someone was scrolling through their inbox and came across an e-mail allegedly from the CDC or WHO with the subject line “Covid-19 Cases on the Rise,” it could be fairly tempting to open, which could then open the door for an intruder.
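To make that scenario slightly more concrete, here is a minimal sketch (an illustration added for this write-up, not something Nathan describes) of the kind of sender check a mail filter or awareness exercise might flag: the display name claims to be the CDC, but the actual sending domain is not one the agency uses. The sample message and the allowed-domain list are assumptions for demonstration only.

```python
from email import message_from_string
from email.utils import parseaddr

# Hypothetical allowlist of domains the claimed senders would actually use.
TRUSTED_DOMAINS = {"cdc.gov", "who.int"}

raw_message = """\
From: "CDC Alerts" <alerts@cdc-notify-update.com>
Subject: Covid-19 Cases on the Rise

Click here for the latest case numbers...
"""

msg = message_from_string(raw_message)
display_name, address = parseaddr(msg["From"])
domain = address.rsplit("@", 1)[-1].lower()

if domain not in TRUSTED_DOMAINS:
    # The display name suggests a health authority, but the sending domain does not match.
    print(f"Warning: '{display_name}' claims authority but sends from {domain}")
```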
Q: What sort of improvement should organizations do to strengthen their security systems?
Nathan: The number one thing many organizations could benefit from is increasing the frequency of cybersecurity awareness training. In the case where no cybersecurity awareness program exists, create and implement one.
By far the biggest security vulnerability in any organization is people. That statement has been made many times by many people and validated in multiple studies, including one by Webroot.
Common gaps in many organizations involve web and mail content filtering solutions, advanced endpoint and DNS protections, two-factor authentication (2FA), and so on. Most organizations have some form of these technology protections in place, but many are not deployed in a best-practice configuration. I would strongly recommend auditing those systems to identify gaps or areas for improvement and remediating them. If a company has a remote workforce, it is vital that the security controls and protections for those users are as strict as, if not stricter than, those for on-prem users.
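As one small, hedged example of what such an audit might check (not Nathan's methodology or any particular product), the sketch below looks up whether a domain publishes SPF and DMARC records, two settings that commonly sit behind weak mail-filtering configurations. It assumes the third-party dnspython package, and the domain is a placeholder.

```python
import dns.resolver  # third-party: dnspython (assumed available)


def check_mail_auth(domain: str) -> None:
    """Report whether SPF and DMARC TXT records are published for a domain."""
    try:
        spf = [r.to_text() for r in dns.resolver.resolve(domain, "TXT")
               if "v=spf1" in r.to_text()]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        spf = []
    try:
        dmarc = [r.to_text() for r in dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
                 if "v=DMARC1" in r.to_text()]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        dmarc = []

    print(f"{domain}: SPF {'present' if spf else 'MISSING'}, "
          f"DMARC {'present' if dmarc else 'MISSING'}")


check_mail_auth("example.com")  # placeholder domain
```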
Q: What is Deepfake technology and why is it a threat to businesses?
Nathan: Deepfake is a term associated with the use of technology to create or alter some form of digital media, for example a video, with the intention of making it appear authentic when in reality the event never occurred.
The threat to businesses is vast in terms of potential. Like many other technologies, the intended application can be positive or negative. For example, a movie studio using AI to generate a scene that would otherwise be unrealistic to film is a valuable capability. However, creating a deepfake that appears to be a “leaked” cell phone video of an executive talking very negatively about the company’s customer base could be devastating.
Q: Deepfakes seem to mostly harm politicians and celebrities. How can it harm employers?
Nathan: Currently, the majority of deepfakes online are videos targeting celebrities; the technology is relatively new and evolving. Exactly where the threats will evolve within businesses is still somewhat speculative.
However, with the prevalence of phishing and social engineering there is a real possibility we could see an uptick in utilizing deepfake technology to strengthen those campaigns. Keep in mind a deepfake can be multiple forms of media. Commonly we hear about video, but the technology also includes pictures and images, audio, etc.
Over the summer, The Wall Street Journal published an article about a U.K. CEO who transferred $243,000 to a fraudster after a phone call in which the fraudster used a deepfake of his boss’s voice to authorize the transfer. It is plausible that the same type of campaign could be used to target any individual in any organization. For instance, instead of a hacker calling into an organization’s help desk mimicking an employee in an attempt to gather credentials, they could utilize a deepfake of the helpdesk manager’s voice to simply instruct the helpdesk employee to reset an account.
Or imagine a deepfake of your boss’s voice over the phone instructing you to take some action. As you can see, the opportunities to exploit organizations with such technology are extremely diverse.
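The interview does not prescribe a specific technical control here, but one mitigation often paired with awareness training is to refuse to act on a voice instruction alone. The sketch below is purely hypothetical (the helper names and workflow are assumptions, not Nathan's process): the helpdesk sends a one-time code to the account owner's registered device and only proceeds if the caller can read it back.

```python
import secrets


def send_confirmation_code(registered_device: str, code: str) -> None:
    """Hypothetical helper: deliver a one-time code out of band (SMS, authenticator app, etc.)."""
    print(f"[out-of-band] confirmation code sent to {registered_device}")


def handle_reset_request(account: str, registered_device: str) -> bool:
    """Never reset an account on a voice instruction alone; require the caller
    to echo back a code delivered to the account owner's registered device."""
    expected = secrets.token_hex(3)                      # short one-time code
    send_confirmation_code(registered_device, expected)
    supplied = input("Code read back by caller: ").strip()
    if secrets.compare_digest(supplied, expected):
        print(f"Reset approved for {account}")
        return True
    print("Verification failed; escalate per policy")
    return False
```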
Q: How can companies regulate and prevent deepfakes?
Nathan: As threats evolve, so will countermeasures, just as with any other technology. The means to prevent, regulate and identify deepfakes are going to vary by technology, sophistication and implementation. For example, digitally watermarking or signing a video can offer protection in that it gives the owner the capability to prove whether the video was altered. If that video were being edited by additional legitimate entities, say for advertising purposes, a distributed ledger such as blockchain would offer protection.
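To make the signing idea concrete, here is a minimal sketch, assuming the third-party cryptography package, of signing a video file's raw bytes with an Ed25519 key so that any later alteration causes verification to fail. It illustrates one possible approach rather than a specific product or recommendation; the filename is a placeholder.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher generates a key pair and signs the exact bytes of the video.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

with open("announcement.mp4", "rb") as f:   # placeholder filename
    video_bytes = f.read()

signature = private_key.sign(video_bytes)

# Anyone holding the public key can later prove the file was not altered.
try:
    public_key.verify(signature, video_bytes)
    print("Video verified: content matches what was signed")
except InvalidSignature:
    print("Video has been altered since signing")
```

One way the blockchain idea could fit alongside this is to anchor each signed version's hash in a distributed ledger, so that multiple legitimate editors each leave a verifiable trail of approved versions.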
Q: What is your methodology to deal with such threats in your organization?
Nathan: As I noted above, I am a heavy advocate of regular, consistent security awareness training. Within that training curriculum, having enhanced content around social engineering and developing methodologies in a context relatable to the employee is highly valuable.
Q: What are the emerging threats that businesses should be aware of?
Nathan: I would say AI technology-driven attacks are a leading emerging threat. Whether that technology is used to create a deepfake, mimic human behavior to avoid detection or to automate a campaign, it is something everyone should be aware of. I would recommend organizations investigate these technologies to further understand the potential impact and how they can mitigate the risk.