42.5% of Fraud Attempts Detected Use AI, Finds Report

Signicat has announced a new report on the growing threat of AI-driven identity fraud in partnership with independent consultancy Consult Hyperion.

The research reveals that fraud prevention decision-makers across Europe are experiencing more AI-driven identity fraud and expect it to grow, but remain unprepared to tackle it and have yet to implement preventive measures.

The Battle against AI-driven Identity Fraud is the first study into how organisations across Europe are battling the growing threat of AI-driven identity fraud.

It asks banks, insurance providers, payment providers and fintechs about their experience, how AI is changing fraud, and whether they are prepared to fight it.

Over a thousand fraud decision-makers across Belgium, Germany, the Netherlands, Norway, Spain, Sweden, and the UK took part in the research.

Key findings include:

Rise of deepfakes: Three years ago, AI was being used to create new or synthetic identities and forgeries of documents. Today AI is being used more extensively and at scale for deepfakes and social engineering attacks.

Nearly a third of AI-driven fraud attempts succeed: respondents estimate that 42.5% of detected fraud attempts use AI, and that 29% of those attempts are successful. One in nine said that estimated AI usage in fraud attempts is as high as 70% for their organisation. An estimated 38% of revenue lost to fraud is attributed to AI-driven attacks.

Account takeovers in B2B: Despite account takeover generally being seen as a consumer issue, it is actually the most common fraud type for B2B organisations.

Confusion on how to combat: Fraud decision-makers recognise that AI will drive nearly all future identity fraud. However, there is confusion and limited understanding about its exact nature, impact, and the best prevention technologies.

Plans but little action: Over three-quarters of businesses have teams dedicated to the issue of AI-driven identity fraud, are upgrading their fraud prevention technology, and expect increased budgets. However, less than a quarter have begun implementing measures.

At an inflection point

AI is not yet making fraud materially more successful. Success rates for fraud attempts, AI-driven or not, have remained steady over the last three years. That is why we are at an inflection point.

AI is enabling more sophisticated fraud, at a greater scale than ever seen before. Fraud is likely to be more successful, but even if success rates stay steady, the sheer volume of attempts means that fraud levels are set to explode.

There has been a shift in the last three years, from creating new accounts using forged credentials to compromising accounts that already exist. Signicat’s research reveals that account takeover attacks are the most common type of fraud, often exploiting weak or reused passwords. Deepfakes, typically used to impersonate the holder of an existing account rather than to create a new or synthetic identity, are far more common than three years ago, now accounting for one in 15 fraud attempts. Fraudsters are happy to evolve and attack wherever they see vulnerabilities.

Understood yet unprepared

There is very high awareness of the problem of AI-driven identity fraud. Most fraud decision-makers agreed that AI is a major driver of identity fraud (73%), that AI will enable almost all identity fraud in the future (74%), and that AI will mean more people fall victim to fraud than ever before (74%). Organisations do, at least, understand the threat that AI poses: it makes identity fraud easier, more accessible, and scalable. They can detect AI in the attacks they face, and they understand that the problem is only going to get worse.

However, organisations are unprepared for the threat. They do not know which techniques and technologies will help them most, and their plans to fight back are just that: plans, with implementation timescales mostly within the next twelve months. More worrying still, organisations report that the deck is stacked against them: they lack budget, expertise, and time.

Asger Hattel

“Fraud has always been one of our customers’ biggest concerns, and AI-driven fraud is now becoming a new threat to them. It now represents the same number of successful attempts as general fraud, and it is more successful if we look at revenue loss,” said Asger Hattel, CEO at Signicat.

“AI is only going to get more sophisticated from now on. While our research shows that fraud prevention decision-makers understand the threat, they need the expertise and resources necessary to stop it from becoming an even greater one. A key part of this will be the use of layered, AI-enabled fraud prevention tools, to combat these threats with the best that technology offers.”

“It is essential that financial firms have a robust strategy for AI-driven identity fraud. Identity is the first line of defence,” said David Birch, Director at Consult Hyperion. “Identity systems must be able to resist and adapt to ever-changing fraud tactics, to protect legitimate customers and safeguard the reputation of the service.”

Recognising organisations’ need for layered defences, Signicat offers digital identity solutions that are securely orchestrated for both companies and end-users. From identity verification through methods such as automated user identity verification or national digital identity schemes, to authentication and legally binding electronic signatures, Signicat covers the complete digital identity lifecycle. Because a layered approach has been key to staying ahead of AI-driven fraud, Signicat also offers data enrichment and verification solutions, as well as ongoing identity monitoring to ensure that no fraud is committed after the customer sign-up process.