Attempted Audio Deepfake on LastPass Is “The New Normal” for Voice Phishing

Apr 17, 2024

Any organization that still believes audio deepfakes are too unrealistic to pose a genuine threat should be keeping tabs on stories like the recent voice phishing attempt on LastPass. Though the attempt was unsuccessful, it is indicative of a trend that began before the Covid-19 pandemic and is now best described as “established” rather than “emerging.”

Several minutes of public audio is now more than sufficient to craft a convincing deepfake of an executive, or even of a rank-and-file employee in a sensitive position. What is most noteworthy about the LastPass case is not the voice phishing attempt itself, which is now fairly common, but that it was detected primarily because the attackers used odd methods of communication. The incident offers several important lessons that should be folded into employee training.

AI, other tools make voice phishing trivial for non-technical criminal actors

Audio deepfakes started appearing in voice phishing attempts shortly after the phenomenon “went public” in 2018, and there is now a small body of research suggesting that people struggle to identify fraudulent phone calls or voice messages of this nature. At least one study, published less than a year ago, found a 27% failure rate among participants. That is much higher than the percentage of employees that can be expected to fall for more conventional phishing links in emails.

At this point, a voice phishing attempt may well need to be thwarted before it ever reaches an employee’s ears if a breach is to be prevented. That was the case in the recent LastPass incident, which raised red flags because the attacker communicated via WhatsApp and outside of normal business hours. A more business-savvy attacker might have been able to deliver a convincing deepfake in a way that did not raise alarms; the approach could be particularly devastating if a network device is compromised first, so that the messages appear to come from legitimate numbers and accounts inside the organization.
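
To make those red flags concrete, here is a minimal sketch in Python of the two heuristics that tripped up this attempt: an unusual channel and unusual hours. The channel names and business hours are hypothetical placeholders, and nothing here reflects LastPass’s actual tooling.

```python
from datetime import time

# Hypothetical policy values for illustration only; a real organization
# would pull these from its own communications policy.
APPROVED_CHANNELS = {"corporate_email", "slack", "teams"}
BUSINESS_HOURS = (time(9, 0), time(17, 30))

def out_of_band_flags(channel: str, sent_at_local: time) -> list[str]:
    """Return red flags for a message: unapproved channel, odd hours."""
    flags = []
    if channel not in APPROVED_CHANNELS:
        flags.append(f"unapproved channel: {channel}")
    start, end = BUSINESS_HOURS
    if not start <= sent_at_local <= end:
        flags.append(f"outside business hours: {sent_at_local}")
    return flags

# The LastPass attempt would have tripped both checks:
print(out_of_band_flags("whatsapp", time(21, 15)))
# ['unapproved channel: whatsapp', 'outside business hours: 21:15:00']
```

Of course, the real lesson is procedural rather than technical: employees should treat any request arriving outside sanctioned channels as presumptively suspicious.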

LastPass audio deepfake spotted thanks to attacker’s apparent ignorance of business protocols

LastPass was fortunate in that its attacker seemed to have no knowledge of standard business communications: it would be exceedingly rare for anyone at the company to communicate internally via WhatsApp, let alone for the CEO to suddenly begin peppering an employee with messages after business hours.

The employee who was targeted in the voice phishing attack received several call attempts and at least one voicemail message, but did not respond to any of them due to the strangeness of the approach, and instead immediately reported the communications. LastPass did not post samples of the deepfake audio or comment on its quality, but given the tools currently available, it is entirely possible that the audio was convincing and that the key to stopping this attack was simply the hacker’s sloppy approach.

That raises the question of what happens when an attacker is more sophisticated about how they present themselves. This was answered in February, when an unnamed Hong Kong company made the news after losing $25 million to a voice phishing incident. In that case, the attacker appears to have used recycled or constructed video as part of a fake Zoom call that was bolstered by deepfake audio.

Though these approaches now have sophisticated new wrinkles thanks to AI and other new tools, some recurring commonalities can render them transparent to a security-conscious target. Voice phishing always retains one feature common to business email compromise schemes: the attacker insists that the target act immediately and without consulting anyone else, applying extreme pressure to head off any further review of the request.
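
Security awareness programs can even turn that tell into a rough screening heuristic. The sketch below, which assumes a voicemail has already been transcribed to text and uses an illustrative rather than vetted phrase list, simply counts high-pressure language.

```python
# Illustrative phrase list; a production detector would be far more robust.
URGENCY_PHRASES = [
    "right now", "immediately", "don't tell anyone",
    "keep this between us", "before end of day", "urgent wire",
]

def urgency_score(transcript: str) -> int:
    """Count pressure phrases in a transcribed voice message."""
    text = transcript.lower()
    return sum(phrase in text for phrase in URGENCY_PHRASES)

message = "I need this wire sent immediately. Keep this between us."
print(urgency_score(message))  # 2
```

A high score is not proof of fraud, but combined with an out-of-band channel it is more than enough reason to pause and verify the request through a known-good contact method.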
