
How FinTechs & Banks Can Battle Synthetic Identity Fraud, Bot Attacks, and Fake Accounts

Yuka Yoneda
February 8, 2022

Forbes and Payments NEXT recently reached out to our fraud and identity expert, Mary Ann Miller, to comment on another case of FinTech’s continuing fraud problem. But this instance is just one of many, part of a systemic issue affecting not only FinTechs but also banks, merchants, and other financial service providers. So I sat down with Mary Ann to learn more about how synthetic identity fraud, fake accounts, bot attacks, and “framing fraud” are impacting banks and FinTechs, and what they can do about it. Read on for the full conversation.

 

YY: Mary Ann, there’s been a lot of news about FinTech’s growing fraud problem, particularly with synthetic identity fraud and bots being used to create fake accounts. Can you describe the issue for us?

 

Mary Ann Miller: This is a systemic issue that we see at many banks and FinTechs, and it’s been happening for quite some time. It may be attracting more media attention now, but it’s the manifestation of a continuing problem tied directly to the identity theft and synthetic fraud we saw during the pandemic. What’s happening is that bad actors are weaponizing the personal information they’ve stolen in data breaches and then using bots to launch attacks. Much of today’s news coverage focuses on specific companies, but many banks and FinTechs are being attacked – not just the ones making headlines.

YY: Can you tell us more about the bot aspect of these attacks? After fraudsters first collect the stolen data from breaches, how exactly does that work?

Mary Ann Miller: These are sophisticated bot attacks that, again, are hitting many banks and FinTechs. Fraudsters use automated, scripted bot attacks – you can think of bots as software applications that run automated tasks. Once the fraudsters have access to the stolen information from data breaches, they program bots to complete account opening processes at FinTechs and banks using the stolen data. This part usually happens with no human interaction. Since the bad actors already have everything needed to complete the account opening process, including consumers’ personal information, this is very easy for them to do.
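To make the “no human interaction” point concrete from the defender’s side, here is a minimal Python sketch of the kind of heuristic a fraud team might use to flag submissions that look scripted rather than typed by a person. The field names, user-agent markers, and timing threshold are illustrative assumptions, not part of any specific product discussed here.

```python
# Minimal sketch: flag account-opening submissions that look scripted.
# Field names (form_opened_at, submitted_at, user_agent) and the timing
# threshold are hypothetical assumptions about what the onboarding front
# end records; they are not a specific vendor's schema.
from dataclasses import dataclass
from datetime import datetime

HEADLESS_MARKERS = ("headlesschrome", "phantomjs", "python-requests", "curl")
MIN_HUMAN_FILL_SECONDS = 8  # assumption: a person rarely finishes a full application faster

@dataclass
class AccountApplication:
    form_opened_at: datetime
    submitted_at: datetime
    user_agent: str

def looks_scripted(app: AccountApplication) -> bool:
    """Return True when the submission pattern suggests automation rather than a person."""
    fill_time = (app.submitted_at - app.form_opened_at).total_seconds()
    too_fast = fill_time < MIN_HUMAN_FILL_SECONDS
    automated_client = any(marker in app.user_agent.lower() for marker in HEADLESS_MARKERS)
    return too_fast or automated_client
```

Real bot defenses combine many more signals (device fingerprints, behavioral biometrics, IP reputation), but even a simple timing check illustrates why fully automated submissions stand out.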

 

YY: What happens after the fake accounts are created? What are the fraudsters trying to achieve? 

 

Mary Ann Miller: The bad actors wreak all kinds of havoc once they create a fake account with someone else’s identity, or with a fabricated identity built around a real person’s SSN – what is sometimes referred to as synthetic fraud. This includes check deposit fraud, ACH fraud, dispute abuse fraud, exploitation of race conditions, collection of account incentives, money laundering, and, sadly, even crimes like human trafficking. In the case of bots, the bad actors are trying to conduct fraud at massive scale and speed, and we see that across the industry. If a financial institution sees a spike in account openings, it’s almost guaranteed that attacks on its payment services will follow.
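To illustrate the account-opening spike Mary Ann describes, here is a minimal sketch of rolling-window velocity monitoring. The window size, baseline, and spike factor are illustrative assumptions; in practice an institution would calibrate them against its own traffic.

```python
# Minimal sketch: detect a spike in account openings over a rolling time window.
# The baseline and spike factor below are illustrative, not production values.
from collections import deque
from datetime import datetime, timedelta

class OpeningSpikeMonitor:
    """Tracks account-opening events and flags abnormal bursts."""

    def __init__(self, window_minutes: int = 10, baseline_per_window: int = 50, spike_factor: float = 3.0):
        self.window = timedelta(minutes=window_minutes)
        self.threshold = baseline_per_window * spike_factor
        self.events = deque()  # timestamps of openings inside the current window

    def record_opening(self, at: datetime) -> bool:
        """Record one opening; return True if the rolling count exceeds the spike threshold."""
        self.events.append(at)
        # Drop events that have aged out of the rolling window.
        while self.events and at - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.threshold
```

An alert from a monitor like this is exactly the moment to tighten scrutiny on downstream payment services, since the accounts created during the spike are the likely source of the follow-on fraud.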

 

YY: Some of this fraud is related to incentive campaigns where FinTechs or banks offer a monetary reward to encourage new customers to open accounts. How do these incentive programs tie into the issue?

Mary Ann Miller: Banks and FinTechs might underestimate the effort a bad actor will go through just for a $10 incentive, but there is more to the story. The fraud occurs over and over, and the cash-out for the bad actor does add up. In addition, bad actors have a database of identities they have stolen, so they reuse them across the industry.

 

YY: The creation of these fake accounts is obviously problematic for FinTechs and banks, but how does this affect the consumers whose information was stolen? Is this related to the problem known as “framing fraud”?

 

Mary Ann Miller: Consumers are really at risk, and in many cases, they are unaware that their identity information was used to conduct fraud. This has downstream consequences for many processes like SAR filing, IRS reporting, and privacy. There’s even reputational damage in some cases. We’ve been calling this issue “framing fraud” because the real consumer is, in effect, framed for the crime – law enforcement often goes after the actual victim.

 

YY: What can FinTechs and banks do to mitigate and get ahead of these issues?

 

Mary Ann Miller: Banks and FinTechs can take several steps to mitigate these issues. The first is to improve the security layers around endpoints such as account opening with better bot protection. Technology such as phone-centric identity signals can determine, at the time of onboarding, whether the applicant is a bot or a real person. Implementing digital identity-proofing signals that raise red flags that can be acted on in real time to prevent the creation of fake accounts cuts off this kind of attack at the source. This not only helps protect the business but also keeps consumers from becoming victims.
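As a rough illustration of acting on those red flags in real time, the sketch below shows how an onboarding flow might branch on phone-centric identity signals. The function get_phone_identity_signals, the signal fields, and the thresholds are hypothetical placeholders standing in for whichever identity-signal provider an institution integrates; this is not a description of any specific vendor’s API.

```python
# Illustrative sketch: branch an onboarding decision on real-time identity signals.
# get_phone_identity_signals() is a hypothetical stand-in for an identity-signal
# provider integration; the fields and thresholds are assumptions for illustration.
from typing import TypedDict

class IdentitySignals(TypedDict):
    trust_score: int        # 0-100, higher means more likely a genuine person
    sim_swap_recent: bool   # red flag: the number was recently ported or swapped
    line_type: str          # e.g. "mobile", "voip", "landline"

def get_phone_identity_signals(phone_number: str) -> IdentitySignals:
    """Hypothetical lookup against a phone-centric identity service."""
    raise NotImplementedError("integrate your identity-signal provider here")

def decide_onboarding(phone_number: str) -> str:
    """Return 'approve', 'step_up', or 'deny' based on real-time red flags."""
    signals = get_phone_identity_signals(phone_number)
    if signals["sim_swap_recent"] or signals["line_type"] == "voip":
        return "deny"      # hard red flags: stop the fake account before it exists
    if signals["trust_score"] < 60:
        return "step_up"   # escalate to additional identity proofing
    return "approve"       # low-friction path for likely-genuine applicants
```

The point of the branch structure is that genuine applicants hit the low-friction path by default, which is how stronger identity proofing and a better onboarding experience can coexist.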

 

What’s interesting is that many of these phone-centric identity signals for onboarding can be implemented in a way that improves the onboarding experience for real customers while blocking bots. In addition to fortified identity-proofing, these enhanced onboarding solutions can reduce onboarding time by 79%, decrease abandonment by 35%, and include integrated KYC checks.

 

Contact us for more information about how you can mitigate synthetic identity fraud, bot attacks, and fake accounts while improving the customer experience.
