Crimes involving manipulated video and audio to impersonate customers cause concern
by Matthew Vincent
Banks and fintech groups are setting up partnerships to combat the use of doctored video and audio content by fraudsters, as research shows such ‘deepfake’ crimes are the biggest worry among customers.
HSBC this week became the latest bank to sign up to a biometric identification system, developed by technology firm Mitek and offered through a partnership with Adobe.
It will be introduced by HSBC’s US retail banking operation to check the identities of new customers, using live images and electronic signatures.
Chase, ABN Amro, CaixaBank, Mastercard and Anna Money are among those that have incorporated Mitek’s biometric checks.
Bank authentication systems are also being monitored by a new security centre opened on Thursday by UK fintech iProov.
The centre, which is based in Singapore, aims to detect and block identity-based cyber attacks, including deepfake videos that are used to impersonate clients. It will do this via the ID verification systems it has supplied to several bank customers. Rabobank, ING and Aegon are some of those using iProov’s technology to ensure they are dealing with real people and not manipulated recordings.
News of these initiatives comes as research shows growing concern about deepfake fraud. According to a University College London report published last month, fake audio and video content now ranks top of 20 ways artificial intelligence can be used for crime — based on the harm it can cause, the potential for profit, ease of use and how difficult it is to stop.
Bank customers are aware of the dangers. In a survey of 2,000 consumers in the US and UK carried out for iProov, 85 per cent said deepfakes would make it harder to trust what they see online, and nearly three-quarters said that made ID verification more important.
BlackBerry, the smartphone maker-turned-software group, believes the pandemic has exposed more people to impersonation frauds. In August, it warned that “with less face-to-face contact due to the lockdown, workers are more susceptible than ever to clever, yet criminal, deepfakes asking them to authorise payments or send data to seemingly legitimate senior people”.
Eric Milam, BlackBerry’s vice-president of research operations, said criminals were already going after “the easy low-hanging fruit”, recording real customer voices and then synthesising new audio to attempt telephone banking fraud.
However, the greatest increase in deepfake fraud may be aimed directly at individuals, rather than at their banks. Due diligence consultancy Silent Eight has suggested that “phishing” scams used to extract personal data could be rendered entirely convincing by fake audio and video.
It recently estimated that phishing attempts have a 60 to 70 per cent success rate. The use of AI to personalise the message — with names or references that only friends or family would know — risked taking the success rate to 100 per cent, it said.
“If an older person gets a video purporting to be from their grandchild, referring to them as granny or pop pop, [it is] enough to be instantly credible,” said Silent Eight’s chief revenue officer Matthew Leaney.
“If you have grown up believing what you see, you are just going to trust it. The societal impact is horrific.”