February 07, 2024
Welcome to Part 2 of a two-part series observing trends in the financial fraud landscape. In Part 1, we examined the greatest fraud schemes and emerging trends to look out for in 2024. In Part 2, we will be exploring the greatest challenges fraud teams will face, as well as technologies that are predicted to change the game in our fraud defense strategies.
Fraud has, without a doubt, remained a persistent challenge for fraud teams over the years, as illicit actors in the financial services landscape continue to innovate and adopt new technologies. Last year, around 60% of banks, fintechs and credit unions each suffered at least $500k in direct fraud losses. For many small businesses, that is a substantial amount, and in many cases the reported figure understates reality because fraud cases are miscategorized as “Bad Debt”. That lack of maturity in fraud reporting can create significant problems down the road.
Additionally, we continue to uncover new services used by fraudsters every day, from creating fake IDs with generative AI to using deepfake and deepvoice technology to create synthetic avatars. There has been a noticeable acceleration in the development of new technologies with fraud applications, which highlights the dire need for fraud teams to stay two steps ahead of evolving fraud tactics and for organizations to invest in fraud prevention today, so they are ready for tomorrow’s next big attack.
By identifying the most difficult challenges faced by fraud teams and understanding which technologies to have in the fraud toolkit, we will be better equipped to prevent and detect sophisticated forms of fraud. I have reached out to seven industry-leading fraud experts who all experience unique fraud challenges on a daily basis. They shared their insights on what the next greatest challenges for fraud teams will be, as well as technologies and practices that will change the game in fraud prevention.
What will be the greatest challenges fraud teams face this year?
Staying ahead of evolving fraud tactics
Like most organizations, fraudsters are continuously evolving and finding new ways to improve what they do best, which, in their case, is bypassing complex fraud prevention measures to achieve their goals. The evolution of Fraud-as-a-Service (FaaS), powered by emerging technologies such as generative AI and large language models (LLMs), has presented many challenges to fraud teams. On the operations side, the challenge is moving rapidly, despite organizational red tape, to adapt, modify and evolve our tools and processes. On the technology side, it is a change management problem: new tools must be adopted quickly by teams that are often agile but not always technically skilled.
Understanding new attack vectors and the technology behind them requires more skills and greater visibility across the organization’s processes and operations. Teams relying on rule-based detection strategies may quickly find themselves overwhelmed by AI-powered attacks, since rules require significant manual effort to maintain, while an AI model can retrain itself weekly, daily or even hourly.
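To make that maintenance gap concrete, here is a minimal sketch in Python contrasting a hand-written rule with a model that can be refit on a schedule. The thresholds, feature names and retrain cadence are illustrative assumptions, not recommendations from any particular vendor or tool.

import numpy as np
from sklearn.linear_model import LogisticRegression

def static_rule(txn: dict) -> bool:
    # Hand-written rule: changing the threshold requires a manual edit and a release.
    return txn["amount"] > 5_000 and txn["new_device"]

class RetrainedScorer:
    # Model-based scorer: adapts every time retrain() runs on freshly labelled data.
    def __init__(self) -> None:
        self.model = LogisticRegression(max_iter=1000)

    def retrain(self, X: np.ndarray, y: np.ndarray) -> None:
        # Could be scheduled weekly, daily or hourly as new labels arrive.
        self.model.fit(X, y)

    def score(self, features: np.ndarray) -> float:
        # Probability that a single transaction (1-D feature array) is fraudulent.
        return float(self.model.predict_proba(features.reshape(1, -1))[0, 1])

The rule stays fixed until someone edits it; the scorer shifts its decision boundary every time the retraining job runs on new data.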
“I think one of the biggest challenges for fraud teams is something that has plagued them for years, but is getting even harder. Balancing speed with security.
– PJ Rohall, Co-Founder, About-fraud.com
For instance, LLMs can be used to weaponize chatbots and virtual chat agents. Multiple scripts and scenarios can be tested to discover the easiest ways to open new accounts, modify existing accounts or exploit return/refund policies at a larger scale than ever. Not to mention, many of these models have real-time learning capabilities that allow them to continuously outsmart anti-fraud mechanisms and better manipulate the agents on the other side of chat sessions.
We can easily imagine all the challenges a fraud team would encounter upon discovering a new type of attack: How do I acquire the data I need? Which fraud detection tools do I have or must acquire? What actions can I take today to either raise alerts or stop these sessions in a scalable way, given the resources at hand? How can I effectively train frontline agents to recognize these attacks? And, how can I accomplish this rapidly, without putting the organization at risk, and while adhering to established safeguards to prevent the introduction of new vulnerabilities? Finding the right balance is crucial, yet speed is of the essence in all aspects of change management.
“The challenge for organizations will be rapidly adapting to evolving cyber threats, maintaining robust cybersecurity measures and employee training.
– Alexandre Girard, Senior Manager – Artificial Intelligence & Ian St-Cyr, Senior Manager – Fraud Control & Detection, Bell Canada
In addition, staying ahead means obtaining funding for new projects, new platforms and, sometimes, new hires with the expertise to guide the organization toward building a stronger defense against emerging threats. Funding anti-fraud projects has always been a significant challenge for fraud teams. But our experts do at least foresee a chance for fraud teams to receive the resources they need:
“[…] FIs notably might respond to those trends by shifting their priority between (compliance) to fraud partially; over the past few years, they have managed their investments to focus on other (compliance) risks and regulatory pressure, but it will certainly change as Fraud becomes more and more painful for the organizations […]
– Dominic Hurtubise, Partner, Regulatory Compliance and Financial Crimes & Éric Lachapelle, Partner, Regulatory Compliance and National Leader Financial Crimes, KPMG Canada
But from my perspective, I don’t think we have to see this as a choice between compliance and fraud. The right technology can often serve both, and fraud teams will greatly benefit from working with AML or CX teams to select technology that has more than one application, allowing for broader adoption of whatever tools are selected. For example, geolocation can be used both to block account openings from OFAC-sanctioned jurisdictions and to prevent fraudulent accounts from being opened from known-risky locations, as sketched below.
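A minimal sketch of that dual use might look like the check below, which screens a single geolocation signal against both a sanctions list and a fraud team’s own list of risky locations. It assumes an upstream service has already resolved the applicant’s IP or GPS signal to a country code; the lists are placeholders, not authoritative OFAC or risk data.

SANCTIONED_JURISDICTIONS = {"IR", "KP", "SY"}   # compliance screening (example values only)
KNOWN_RISKY_LOCATIONS = {"XX", "ZZ"}            # fraud team's own risky-geo list (placeholders)

def screen_account_opening(geo_country: str) -> str:
    # One geolocation signal, two use cases: sanctions compliance and fraud prevention.
    if geo_country in SANCTIONED_JURISDICTIONS:
        return "block_compliance"   # sanctions concern, route to compliance
    if geo_country in KNOWN_RISKY_LOCATIONS:
        return "review_fraud"       # route to the fraud review queue
    return "allow"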
Managing a substantial uptick in fraud attacks
We saw in our previous post that a combination of factors, such as new technologies, the current economic landscape and the greater accessibility of Fraud-as-a-Service (FaaS), will drive a massive uptick in fraud attacks. As a result, fraud teams will be overwhelmed with a large volume of both high- and low-value investigations, making it difficult to manage and prioritize cases.
This will be especially true for companies that depend heavily on manual rule-based systems, where sudden, disruptive events resulting in unusual behaviors could lead to a significant increase in fraud alerts from legitimate customers. Such an increase is challenging to manage, reduces the accuracy of fraud detection systems, allows fraudsters to operate successfully for longer periods and leads to analysts losing trust in the tools they use. The absence of sophisticated AI models working alongside our rule engines, or the lack of rich and diverse data, will undoubtedly pose a major operational challenge for fraud teams, especially in a year that is expected to see fraudsters using more adaptable tools.
Balancing day-to-day operations with innovation in fraud detection methods has always been a task fraud teams must tackle. However, the pressure will increase moving forward as the general public becomes more aware of the fraud landscape. It is crucial for the success of fraud teams to allocate sufficient time to envision the future, plan ahead and research new solutions.
Lacking support when seeking to develop cohesive fraud strategies
Fraud teams often grapple with the lack of internal support. They are commonly viewed as a cost center and an obstacle to the business. While some executives may acknowledge fraud as a significant challenge to address this year, the costs and consequences of tackling certain issues will continue to be problematic. This is where good fraud leaders can truly make a difference: by simplifying the issue (and its solution), crafting a compelling narrative, and knowing how to rally internal stakeholders around a common goal. From a technological standpoint, this goal can sometimes translate into reduced losses and an increase in revenue.
“Many fraud leaders have reported their insistence to initiate platform-wide initiatives to proactively monitor and detect occurrences of fraud but have been met with opposition. This leads to an ongoing cycle of ‘whack-a-mole’ wherein the organizations piece together fraud responses as specific fraud methods gain traction within their platform. This is significantly more expensive, time-consuming, and less effective than implementing a well-developed fraud strategy.
– Alexander Hall, Fraud Strategy Consultant, Dispute Defense Consulting
Are there any disruptive technologies or practices that may change the fraud game in 2024?
Leveraging AI and machine learning
While AI will inevitably present challenges to fraud teams, the same technology can also be used for good. For instance, the same prediction capabilities used by fraudsters to analyze the behaviors of their victims can also be used by fraud teams to stay ahead of illicit actors. In fact, 90% of businesses that use machine learning for fraud prevention and detection reported high levels of confidence in their effectiveness.
AI and machine learning are particularly effective in analyzing vast quantities of data and identifying irregular trends, making it easier for fraud teams to identify suspicious patterns and recognize which cases to escalate. Furthermore, a whitepaper released by Nets and KPMG on harnessing artificial intelligence and machine learning to combat card fraud stated that combining both human and machine-generated information in a single framework will produce the most accurate fraud scores possible. This will subsequently combat criminal activity, reduce financial losses and improve the customer experience.
It is, however, important to note that, like any other tool or technology, AI should not be relied on in isolation to fight fraud. Instead, AI and machine learning should remain one component of a fraud fighter’s broader arsenal of fraud-fighting tools, each surfacing different insights, such as where an identity comes from, where a device is located, and so on.
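As a minimal sketch of that “many signals, one framework” idea, the snippet below blends analyst-written rule hits, a machine-generated probability and a contextual device/geo insight into a single score. The weights, fields and threshold behaviour are purely illustrative assumptions, not values from the Nets/KPMG whitepaper.

from dataclasses import dataclass

@dataclass
class Signals:
    rule_hits: int            # how many analyst-written rules fired
    model_probability: float  # machine-generated fraud probability in [0, 1]
    risky_geo: bool           # e.g. device located in a known-risky region

def blended_score(s: Signals) -> float:
    # Weighted blend of human- and machine-generated information (weights are assumptions).
    score = 0.6 * s.model_probability
    score += 0.3 * min(s.rule_hits, 3) / 3
    score += 0.1 * (1.0 if s.risky_geo else 0.0)
    return round(score, 3)

# Example: two rules fired, the model says 0.7, and the device sits in a risky region.
print(blended_score(Signals(rule_hits=2, model_probability=0.7, risky_geo=True)))  # 0.72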
Incorporating more datasets into our tools
From many of my conversations, it has become apparent that the new challenges facing fraud teams, ranging from increased speed requirements to emerging attack vectors and workload surges, can be alleviated by incorporating a broader array of datasets into existing fraud detection tools. From geolocation and device fingerprinting to phone forensics data and biometrics, 2024 is the year to broaden our scope. We need to collect more information so that the AI models previously discussed can utilize this data to make more accurate predictions.
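As a minimal illustration of what broadening the scope can look like in practice, the sketch below flattens several passive signals into one numeric feature vector that a downstream model can consume. The field names are assumptions about what such data providers might return, not any specific vendor’s schema.

from typing import Any, Dict

def build_feature_vector(geo: Dict[str, Any], device: Dict[str, Any],
                         phone: Dict[str, Any], biometric: Dict[str, Any]) -> Dict[str, float]:
    # Each passive dataset contributes one or more numeric features for the model.
    return {
        "geo_country_risk": float(geo.get("country_risk_score", 0.0)),
        "geo_vpn_detected": 1.0 if geo.get("vpn") else 0.0,
        "device_seen_before": 1.0 if device.get("known_device") else 0.0,
        "device_is_emulator": 1.0 if device.get("emulator") else 0.0,
        "phone_recent_sim_swap": 1.0 if phone.get("recent_sim_swap") else 0.0,
        "biometric_match_score": float(biometric.get("match_score", 0.0)),
    }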
Historically, institutions have engaged with users by requiring them to provide information as a security measure. Yet, over the past few years, there has been a consistent rise in compromised information finding its way into criminal marketplaces. Fraudsters now have access to accurate information that can pass through validation processes successfully.
Effective fraudsters have gained the upper hand because of the dated practice of relying on user-provided information to drive these determinations. However, there is a lot of data available that could, and indeed should, be incorporated into the datasets fraud teams use.
“Passive datasets such as geolocation, device fingerprinting and biometrics allow anti-fraud teams to see more than what is shown and provide the optimal way to close the gap between fraudsters and fraud teams.
– Alexander Hall, Fraud Strategy Consultant, Dispute Defense Consulting
Biometric ID verification tools are particularly worth highlighting, since stolen identity information is actively used by criminals to commit various forms of identity fraud:
“The adoption of biometric ID verification could be a game changer. […] Continuing to resist biometric ID verification tools means that people will remain at a higher risk of falling victim to identity criminals using stolen personal information.
– Eva Velasquez, CEO, Identity Theft Resource Center
But as always, there is no silver bullet in fraud prevention, and a good biometrics system by itself isn’t sufficient, especially with the rise of deepfakes. By leveraging passive datasets in their determinations, fraud teams can build models that help identify the intent of users and begin to take the power back from fraudsters, even when they hide behind the latest technologies to circumvent well-established biometrics systems.
Interested in discovering how our geolocation and device fingerprinting technology can protect your business from the evolving threats of fraud?
Did you miss out on Part 1? If you want to know what the next biggest fraud schemes will be in 2024, read Part 1 here.