
Fortifying Digital Customer Onboarding Against Deepfakes

Bill Fish
April 16, 2024

Just as death and taxes are certainties of life, so too is the evolving ingenuity of technology used for fraud. Deepfakes are the latest addition to the mix, and while the technology itself is not entirely new, its growing sophistication presents new challenges for businesses seeking to deliver frictionless digital onboarding experiences to their customers. While companies prioritize delivering a smooth and fast process, they’re now realizing the need to build deepfake recognition into that process. The key, however, is to do so without sacrificing the user experience or jeopardizing the path to digital engagement.

The best-in-class digital customer onboarding experiences involve rapid and easy interactions, allowing users to swiftly onboard and access new services within minutes or even seconds of officially becoming customers. Many companies believe they are being highly secure by relying on selfies or other forms of liveness detection and applying layers of friction, but these methods are precisely what deepfakes were built to circumvent. So while many organizations spend considerable time, effort, and resources trying to stay ahead of deepfake fraud innovation, the simplest and most effective way to avoid deepfakes is to eliminate the onboarding methods they target.

To do that, let’s understand how deepfakes operate, and look at how user authentication through methods like pre-filled applications and forms can remove the possibility of deepfakes entering the onboarding equation completely.

Understanding the Deepfake Issue


It’s important to understand both how deepfakes work and how pervasive and insidious they are.

The numbers on the impact of deepfakes paint a picture of a deeply rooted and growing problem. In 2022, 66% of cybersecurity professionals encountered deepfake attacks within their organizations, an alarmingly high number that illustrates just how widespread these attacks are. Fraudsters are becoming incredibly adept at fabricating counterfeit audio and video messages purportedly from CEOs or other high-value targets. These manipulated messages typically contain urgent requests for recipients to transfer funds or divulge sensitive information.

The banking sector, in particular, has been a target of deepfake attacks, with 92% of banking security leaders expressing concern about the technology’s potential for fraudulent exploitation. Among banking services, personal banking and payments are especially frequent targets; in one example that received widespread attention, a bank manager in 2021 fell victim to a deepfake scheme and erroneously transferred $35 million to a fraudulent account.

The ramifications of deepfake attacks extend beyond the banking industry, impacting various sectors. In the past year alone, 26% of smaller companies and 38% of larger enterprises fell victim to deepfake fraud, resulting in losses of up to $480,000.

In recent instances, deepfakes have taken on various creative forms, such as mimicking the voice of one's employer in a phone conversation, depicting Meta’s Mark Zuckerberg in an altered video discussing the magnitude of data access, or portraying Belgium's prime minister attributing the coronavirus pandemic to climate change in a manipulated speech recording.

Understandably, the term “deepfake” carries a negative connotation due to the potential for deepfake images and videos to convincingly represent real individuals or synthetic identities fabricated by malicious actors. Consequently, they are increasingly used to exploit vulnerabilities in identity verification processes within digital onboarding procedures.

Because deepfakes are built to mimic selfies, which are used in many onboarding processes in the form of “liveness” detection, the most effective way to eliminate the issue is to eliminate selfies.

How Deepfakes Enter the Identity Verification Process


Since deepfakes are built to impersonate people, either real or synthetic, the rapid evolution of fraud-related technologies has brought deepfakes into the identity verification process. In many ways, it’s the perfect fraud type: it turns the very tools intended to establish identity into an avenue for tricking the legitimate systems built to verify it.

What’s happening is that fraudsters are making use of available data to enable AI to do their work for them. The proliferation of deepfakes and generative AI has empowered fraudsters to fabricate synthetic biometric data, including facial features and voice manipulation, in order to deceive identity verification systems. This poses a significant threat to the integrity of identity verification, potentially enabling unauthorized access to devices, secure resources, or sensitive information.

Deepfake tactics aimed at circumventing identity authentication share some common strategies, including:

  • Manipulating facial recognition systems: While facial recognition systems are commonly used for identity authentication, they are susceptible to deepfake attacks. Fraudsters leverage AI-generated deepfake images or videos to deceive facial recognition algorithms, tricking them into authenticating fraudulent individuals. This can lead to unauthorized access to accounts, circumvention of security measures, or entry into secure premises.
  • Exploiting voice cloning technology: Voice cloning, facilitated by AI, enables fraudsters to replicate someone's voice with remarkable precision. By combining deepfake technology with voice cloning, fraudsters can exploit voice authentication systems to gain access to user accounts, facilitating fraudulent transactions and other malicious activities.
  • Advances in social engineering and phishing attacks: Social engineering has long been utilized by fraudsters to manipulate users into divulging sensitive information for identity theft. However, with the advent of ChatGPT and other language models capable of generating human-like text, as well as audio and video AI tools enabling the creation of deepfakes without technical expertise, the risk associated with social engineering and phishing attacks has escalated significantly. Fraudsters can now craft highly convincing messages and media to deceive unsuspecting individuals, further exacerbating the threat landscape.

Consequently, even inexperienced fraudsters can orchestrate sophisticated social engineering attacks leveraging AI, employing techniques such as:

  • Realistic chatbots: AI-driven chatbots or virtual assistants are capable of emulating human interactions, rendering them potent tools for fraudsters. These chatbots can execute social engineering schemes by convincingly impersonating trusted individuals or customer service representatives. Through such deception, fraudsters manipulate victims into divulging personal information, passwords, or financial details, allowing the fraudster to gain access as a trusted customer and precipitating incidents of identity theft or financial fraud.
  • Audio and video deepfakes: An emerging tactic in social engineering involves voice cloning or deepfake videos that closely resemble a user’s trusted contacts. By mimicking the voices or appearances of family members, friends, supervisors, or financial advisors, fraudsters orchestrate highly convincing scams, deceiving individuals into surrendering sensitive information or executing transactions on the fraudster’s behalf.

There is some good news for teams that safeguard identity authentication processes. Algorithms are emerging to aid video creators in verifying the authenticity of their content. For instance, cryptographic techniques can be employed to embed hashes at predefined intervals throughout a video; if the video is later altered, the recomputed hashes will no longer match.
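To make the idea concrete, here is a minimal sketch in Python of interval-based hashing. The file names, the 1 MB segment size, and the plain SHA-256 comparison are illustrative assumptions; a production scheme would also cryptographically sign the digests and embed them in the video container’s metadata.

```python
import hashlib

SEGMENT_BYTES = 1024 * 1024  # illustrative: one digest per 1 MB of video data

def segment_hashes(path: str) -> list[str]:
    """Compute a SHA-256 digest for each fixed-size segment of the file."""
    digests = []
    with open(path, "rb") as f:
        while chunk := f.read(SEGMENT_BYTES):
            digests.append(hashlib.sha256(chunk).hexdigest())
    return digests

def verify(path: str, recorded: list[str]) -> bool:
    """Recompute the digests and compare them to those recorded at creation time."""
    return segment_hashes(path) == recorded

# Usage: record the hashes when the video is published...
# recorded = segment_hashes("original.mp4")
# ...then later, any edit to a segment makes verify() return False:
# print(verify("possibly_altered.mp4", recorded))
```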

In videos, for example, artificial intelligence still struggles to replicate natural eye movement and blinking. A closer examination of a deepfaked individual may reveal facial glitches and prolonged passivity. Deepfake videos also often exhibit discrepancies between audio and image synchronization, so the speaker's pronunciation and any unnatural pauses warrant careful scrutiny. Imperfections in colors and shadows are common as well, with anomalies such as misplaced shadows or color fluctuations indicating potential manipulation.
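One widely cited way to quantify the blinking cue is the eye aspect ratio (EAR) from Soukupová and Čech's work on real-time blink detection: the ratio of vertical to horizontal distances between eye landmarks collapses when the eye closes. The Python sketch below assumes per-frame eye landmarks from some facial-landmark model; the landmark ordering and the 0.2 threshold are illustrative assumptions.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks ordered p1..p6 around one eye,
    where p1/p4 are the corners, p2/p3 the upper lid, p6/p5 the lower lid."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])   # p2 - p6
    vertical_2 = np.linalg.norm(eye[2] - eye[4])   # p3 - p5
    horizontal = np.linalg.norm(eye[0] - eye[3])   # p1 - p4
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def blink_count(ear_per_frame: list[float], threshold: float = 0.2) -> int:
    """Count closed-to-open transitions; a live face blinks every few seconds,
    while early deepfakes often showed long stretches with no blink at all."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold:
            closed = True
        elif closed:          # eye has reopened after being closed
            blinks += 1
            closed = False
    return blinks
```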

Using a Phone-Centric Approach to Identity Verification and Authentication to Eliminate the Risk of Deepfakes 


Deepfakes, in any form, have the potential to disrupt access to online services, deceive individuals, and tarnish company reputations. Therefore, businesses must adopt a proactive approach to mitigating this threat while finding the right balance of friction: seamless customer registration and service access, with malicious actors kept at bay.

Using methods like selfies and videos for identity verification during digital onboarding can introduce vulnerabilities to deepfake fraud. Deepfake technology has advanced to the point where it can convincingly manipulate images and videos, creating fake representations of individuals. This poses a significant risk to the integrity of identity verification processes. Fraudsters could exploit these methods to bypass security measures by submitting manipulated visual content that appears genuine to human eyes but is, in fact, fraudulent.

Instead of relying on potentially compromised visual identification methods, companies should adopt a phone-centric approach to identity verification during onboarding. This method focuses on utilizing the mobile device itself as a central component of verification, leveraging factors such as possession, reputation, and ownership of the device.

A phone-centric approach offers critical advantages, including:

  • Accuracy: By verifying the physical possession of the legitimate user’s mobile device, companies can ensure a more accurate identification of the user. This reduces the risk of impersonation and fraudulent activity.
  • Reliability: Phone-centric identity verification relies on network signaling, device attributes, and user signals, which are very difficult for fraudsters to manipulate when used together. This enhances the reliability of the verification process and reduces the likelihood of false positives or false negatives.
  • Elimination of Deepfake Vulnerabilities: Since phone-centric verification does not rely on visual content like selfies or videos, it eliminates the possibility of deepfake fraud. Fraudsters cannot manipulate network signals or device attributes to create fake representations, making it a more secure method of verification.

A phone-centric approach to identity verification offers a more robust and secure solution for digital onboarding processes. It provides accurate and reliable verification while mitigating the risks associated with deepfake fraud, ensuring the integrity of the onboarding process and protecting both the company and its customers from potential security threats.

How Organizations Can Apply Phone-Centric Identity and the PRO Model of Identity Verification & Authentication


It starts with Prove’s PRO Model of Identity Verification and Authentication, which is based on these precepts for establishing identity:

  • Possession: This addresses a fundamental element of identity by determining: is this customer in physical possession of their phone? Phone-Centric Identity leverages the mobile device as a decisive "what you have" factor for companies to definitively confirm their interaction with the customer. This verification, known as a "possession" check, yields a binary outcome rather than a probabilistic score. By discerning if a consumer has physical possession of their mobile device, Phone-Centric Identity technology promptly determines whether the company is engaging with their customer or another party.
  • Reputation: This determines: are there risky changes or suspicious behaviors associated with this phone number? Reputation provides insights into whether a phone number is associated with risky changes or suspicious behaviors. Unlike personal phones that remain consistent over time, burner phones, those subjected to SIM swaps, or recently registered numbers are often linked with lower reputations. This enables companies to flag such phones independently of customer activity.

This is all done through Prove's Trust Score™, a dynamic, real-time assessment of phone number reputation that serves as a valuable tool for identity verification and authentication. By analyzing behavioral patterns and Phone-Centric Identity™ signals from trusted sources during transactions, Trust Score effectively combats fraud, including SIM swap fraud and account takeover attempts. This versatile metric enhances security across customer touchpoints, from digital onboarding to existing customer authentication and digital servicing.

  • Ownership: The ownership check answers the question: is the customer associated with the given phone number? It does this by verifying the association between an individual and a phone number. Customers supply certain personally identifiable information (PII) to confirm this connection, and the outcome is a binary result: True or False. (A sketch of how the three checks can combine appears below.)
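Taken together, the three checks lend themselves to a simple decision flow. The sketch below is an illustrative assumption of how the binary possession and ownership checks could gate onboarding while a reputation score adds nuance; the names, score range, and threshold are hypothetical, not Prove's actual API.

```python
from dataclasses import dataclass

@dataclass
class ProResult:
    possession: bool   # binary: is the customer holding their phone right now?
    reputation: int    # illustrative 0-1000 score for the phone number's history
    ownership: bool    # binary: is this person associated with this number?

def decide_onboarding(result: ProResult, min_reputation: int = 700) -> str:
    """Gate on the two binary checks, then use reputation to add nuance."""
    if not result.possession:
        return "deny: customer is not in possession of the device"
    if not result.ownership:
        return "deny: phone number is not associated with this identity"
    if result.reputation < min_reputation:
        return "step-up: risky number history, request additional verification"
    return "approve: identity verified with no selfie or liveness check involved"

print(decide_onboarding(ProResult(possession=True, reputation=880, ownership=True)))
```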

Then, methods like Prove Auth (including silent methods) provide ongoing confidence that the real user is still in possession of the mobile device. PRO allows an identity to be strongly bound to a phone number and device, so our customers know the device in hand is the user's actual phone.

Prove's key management capabilities, together with device and user trust signals (including behavioral signals), maintain confidence in the device as the legitimate user's phone.

Prove's authentication methods do not rely on "perception" by our customers or other users, and perception is exactly what deepfakes attack, whether by digital means, AI, or otherwise. This is an important distinction: deepfakes are all about fooling perception, and our approach takes that imprecision out of the equation.

Instead, Prove relies on the trustworthiness of network signaling and device and user signals, which means a fraudster would need not only a convincing "fake" but also the user's legitimate device.

The Critical Correlation Between Phone-Centric Identity Signals and Digital Trust


Phone-Centric Identity harnesses real-time data from authoritative sources, generating a robust framework for digital identity and trust. With the widespread adoption and prolonged usage of mobile phones, Phone-Centric Identity signals offer a strong correlation with identity and trustworthiness.

According to a McKinsey report, profiles with extensive data history and consistent patterns exhibit lower fraud risk. Phone-Centric Identity signals, encompassing factors like phone line tenure, usage behaviors, and account activity, offer both high depth and consistency.

Unlike deepfakes, which lack the historical data depth to mimic these signals effectively, Phone-Centric Identity stands resilient against fraudulent attempts. Additionally, the "Possession" factor verifies user presence through mobile device confirmation, enhancing identification accuracy.

Get Started With a Phone-Centric Approach to Eliminate Deepfakes


By following the phone-centric blueprint and integrating proven methods of identity verification and authentication, businesses can effectively eliminate the risk of deepfake attacks during the customer digital onboarding process and safeguard their operations and reputation.

Let us show you how to create a trusted and customer-friendly digital onboarding process that eliminates the threat of deepfakes. Talk with the Prove team and let’s get started.
