Signals
Q2 2025
1. Global cybercrime estimated cost 2029 | Statista
2. Global cybercrime estimated cost 2029 | Statista
Accelerating threats, higher stakes
Emergent responses, evolving defenses
Sources
Cybercrime: New frontiers
This edition of Signals examines the key drivers of cybercrime in the years ahead. These include a growing web of connected tools and platforms that is expanding the digital attack surface vulnerable to cybercriminals. At the same time, geopolitical turbulence is blurring the lines between cyber warfare and criminal activity — and impeding the coordination of global cybersecurity strategies. In addition, AI is making it easier for criminals to scale their operations, and a growing black market is putting powerful cybercrime tools into more hands.

We also explore the challenges posed by three emerging threats. First is the advent of autonomous, agentic AI tools capable of conceiving, designing and executing criminal strategies at scale. Second is the relentless exploitation of people through the industrialization of scams. And third, criminals are using emerging technologies to effectively launder their vast proceeds.

We conclude with analysis of innovations that promise the most effective response to this challenge. Adaptive, intelligent and increasingly autonomous tools offer more proactive protection of people and systems. Continuous risk-based authentication, transaction monitoring and the tokenization of data will ensure greater security for digital interactions. And collaborative insight-sharing platforms will provide more accurate and real-time threat intelligence to a wider network of partners.
By 2029, cybercrime is set to cost the world an estimated $15.6 trillion.1 Measured in terms of GDP, only the economies of the United States and China are bigger. But that explosion in criminal activity is being met by a wave of innovation in cybersecurity, fraud mitigation and anti-money laundering technologies, all aimed at securing the digital ecosystem.
Securing trust: Insights from the frontlines of cybercrime
Cost of cybercrime to the world by 2029: $15.6 trillion2
3. Cost of a data breach 2024 | IBM
4. WEF Global Cybersecurity Outlook 2025
5. 2024 Bad Bot Report | Resource Library
6. National Vulnerability Database
7. WEF Global Cybersecurity Outlook 2025
8. 2025 Data Breach Investigations Report | Verizon
9. Kaspersky - Advanced persistent threats target one in four companies in 2024
10. Check Point Research
11. OpenAI: Influence and cyber operations: an update, October 2024
12. 2025 Data Breach Investigations Report | Verizon
13. Firm hacked after accidentally hiring North Korean cyber criminal - BBC News
14. www.bbc.co.uk
15. WEF Global Cybersecurity Outlook 2025
16. Darktrace's 2024 Annual Threat Report
17. UK authorities warn of retail-sector risks following cyberattack spree | Cybersecurity Dive
18. WEF Global Cybersecurity Outlook 2025
Cybercrime is one of the great threats of our time. It ruins lives, destroys businesses, erodes trust in the digital ecosystem and undermines global stability. It's bigger than the economies of India, Germany and Japan combined, displays double-digit growth year on year and is fueled by rapid technological change, the increasing availability of sophisticated, scalable tools and the sponsorship of organized criminal gangs and nation states.

Average cost of a data breach3
72% of organizations report an increase in cyber risk4
1/3 of internet traffic is bad bots5

Four drivers of cybercrime

An expanded attack surface
As the world has become more reliant on technology, and systems more interconnected and interdependent, the attack surface available to bad actors has expanded and the opportunities to exploit vulnerabilities have multiplied. The internet of things, 5G technology, supply chain complexity and increased digital consumer engagement have all contributed. In industry, the shift from siloed control systems to web-based networks is exposing critical infrastructure to new cyber threats, making it increasingly difficult for security teams to effectively prioritize and address the most critical of them.

280,000 cyber vulnerabilities documented in the U.S. National Vulnerability Database6
Geopolitical turbulence
Cybercrime is a powerful geopolitical tool. Nation states increasingly leverage cyber technology to disrupt economies and undermine political stability in rival countries. Ransomware attacks are used to target hospitals, energy grids and financial institutions, creating widespread disruption. The anonymity of cyberspace complicates attribution and sows mistrust. Meanwhile, geo-economic fragmentation and growing nationalism are increasing multipolarity, which inhibits the coordination of global cybersecurity solutions. Disinformation and propaganda are also used to manipulate public opinion and destabilize societies. Unlike conventional warfare, cyber operations are often deniable and cost-effective, making them an attractive option for countries seeking to achieve their geopolitical objectives in the digital age.

60% of organizations say that geopolitical tensions have altered their cybersecurity strategies7
3x increase in the proportion of data breaches thought to be motivated by espionage between 2023 and 20248
Increase in advanced persistent threats (APT), typically state-sponsored9
Most state-sponsored cyber warfare is geopolitically motivated, but one country — North Korea — uses it almost entirely for financial gain, to circumvent international sanctions. After beginning with coordinated ATM cash-out attacks 10 years ago, the regime-affiliated Lazarus Group now specializes in high-profile cryptocurrency thefts. Its operatives have even infiltrated corporations by posing as IT freelancers in remote job interviews.12
$1.5 billion stolen from Bybit exchange by Lazarus Group in the world's largest crypto heist, February 202513
In 2024 OpenAI said it had disrupted more than 20 organized attempts by nation-state actors to use ChatGPT to generate code for cyberattacks.11
AI scalability
AI is rapidly scaling cybercrime by automating and enhancing attacks. As the necessary technology becomes more readily available, a burgeoning black marketplace in fraud tools and cybercrime-as-a-service (CaaS) offerings is emerging. These include automated password hacking, data scraping and automated vulnerability detection and exploitation through large-scale botnet attacks.
What will most affect cybersecurity in the next 12 months?15
Based on the World Economic Forum's 2024 Global Security Outlook of 409 respondents in 57 countries.
AI/ML tech: 66%
IT and operational technology convergence: 13%
Cloud tech: 11%
Quantum tech: 4%
Decentralized tech (e.g., blockchain)
Cybercrime democratization
Cybercrime offers lucrative returns with often minimal investment. The anonymity of cyberspace and geopolitical fragmentation make it hard for law enforcement to trace and prosecute cybercriminals, especially when they operate across international borders. Tools can target victims anywhere in the world and easily be scaled to increase returns. An enormous money laundering industry has emerged to hide the proceeds of crime.

57% of detected threats are due to malware as a service (MaaS), a rise of 17% year on year.16

In 2025 a spate of cyberattacks on U.K. retailers including M&S, the Co-op and Harrods was attributed to hackers using the ransomware-as-a-service platform DragonForce. The attacks involved data theft, ransomware and operational shutdowns.17
Regional variation: confidence in cyber resilience18
Africa: 28%
Middle East: 36%
Asia: 42%
Europe: 50%
North America: 65%
Latin America
19. Darktrace's 2024 Annual Threat Report
20. 2024 Global Bot Security Report | DataDome
21. Imperva 2024 Bad Bot Report
22. SOCRadar Annual Dark Web Report 2024
23. Scam Platform Shut Down by UK Authorities After 1.8 Million Fraudulent Calls - Infosecurity Magazine
24. WEF Global Cybersecurity Outlook 2025
25. Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations | CSRC
26. Gartner Predicts 40% of AI Data Breaches Will Arise from Cross-Border GenAI Misuse by 2027
27. Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History
28. Cybersecurity's Biggest Blind Spot: Third-Party Risk, New Resilience Analysis Finds - Resilience
29. WEF Global Cybersecurity Outlook 2025
30. 2025 Data Breach Investigations Report | Verizon
31. RiskRecon - The state of third party risk management report 2024
32. www.mastercard.com
33. www.mastercard.com
34. The cost of quantum computing: how expensive is it to run a quantum system
35. Quantum cyber security and a post quantum cryptography world | Mastercard Newsroom
36. www.cellcrypt.com
37. smarter.energynetworks.org
38. Fraud - National Crime Agency
39. Global Anti-Scam Alliance
40. www.bbc.com
41. www.bbc.com
42. www.khaosodenglish.com
43. United Nations Office on Drugs and Crime
44. United Nations Office on Drugs and Crime
45. Recorded Future 2024 Payment Fraud Report: Trends, Insights, and Predictions for 2025
46. www.socradar.io
47. CUHK Data Breach: Hackers Target University In Hong Kong
48. Cyber Security Report 2025 | Check Point Software
49. Recorded Future 2024 Payment Fraud Report: Trends, Insights, and Predictions for 2025
50. www.transunion.com
51. www.transunion.com
52. AuthenticID Report Shows Deepening Financial Sector Threat | FinTech Magazine
53. www.recordedfuture.com
54. Deepfake-Eval-2024: A Multi-Modal In-the-Wild Benchmark of Deepfakes Circulated in 2024
55. Entrust 2025 Identity Fraud Report
56. www.abc.net.au
57. www.verizon.com
58. www.abc.net.au
59. Bots Now Make Up Nearly Half of All Internet Traffic Globally | Thales Group
60. www.recordedfuture.com
61. www.pwc.com
62. www.fraud.net
63. www.pwc.com
64. Sex, drugs and bitcoin
65. www.arxiv.org
66. www.dlnews.com
67. www.research.kaiko.com
68. www.tronweekly.com
69. www.financialcrimeacademy.org
70. www.financialcrimeacademy.org
71. www.financialcrimeacademy.org
72. www.financialcrimeacademy.org
73. www.thetimes.com
74. www.wired.com
75. www.europol.europa.eu
Autonomous threats
AI doesn't sleep. Agentic engines can relentlessly probe for vulnerable points in targeted systems, make their own decisions about strategy and then carry out attacks on their own initiative — shrinking the window between the discovery of a weak spot and its exploitation. We are entering a new era of autonomous, machine-designed cybercrime.

Number of organizations that lack sufficient protection against even basic bots20
Agentic bots: Smart and proactive
78% of chief information security officers have witnessed "a significant impact" from AI-driven criminal phenomena19

Cybercrime is undergoing a significant transformation. It once relied on automated bots that performed simple actions over and over — such as clicking on advertisements to drive up ad rates, or ceaselessly sending requests to a certain server with the goal of overwhelming and incapacitating it, in what's known as a denial of service (DoS) attack. Such set-and-forget bots remain in the cybercriminal arsenal, but scammers are now also embracing autonomous AI-driven threats.

While traditional bots relied on predefined scripts, autonomous AI introduces a new level of sophistication. Take brute-forcing and credential stuffing attacks. The former puts bots to work throwing permutations of usernames and passwords at an account in the hope that one will work; the latter involves applying stolen sign-in credentials to huge numbers of accounts. Assuming these tasks from primitive bots, autonomous AI systems will adapt in real time. They'll learn from their mistakes, perhaps modifying the passwords they generate in brute-forcing attacks to be more effective. In credential stuffing attacks they'll reject less likely credentials and refine better bets. Or, in what's known as "smart targeting," they'll choose to attack accounts that they deem easier to break.

Meanwhile, bots visible to security solutions are giving way to shapeshifting and "invisible" bots that can give security solutions the slip by regularly changing their IP addresses, appropriating legitimate users' session cookies and using other evasive methods.
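How might a defender spot such a campaign? Below is a minimal sketch, in Python, of one common heuristic: credential stuffing tends to show up as a single source trying many distinct usernames with a very low success rate. All thresholds and data here are invented for illustration, not a production detection rule.

```python
from collections import defaultdict

# Illustrative heuristic: credential stuffing shows up as one source
# fanning out across many distinct usernames with a very low success
# rate, unlike a human who retries a handful of accounts.
FANOUT_THRESHOLD = 50      # distinct usernames per source (hypothetical tuning value)
MAX_SUCCESS_RATE = 0.02    # stuffing campaigns succeed rarely

def flag_stuffing_sources(login_events):
    """login_events: iterable of (source_ip, username, succeeded) tuples."""
    usernames = defaultdict(set)
    attempts = defaultdict(int)
    successes = defaultdict(int)
    for ip, user, ok in login_events:
        usernames[ip].add(user)
        attempts[ip] += 1
        successes[ip] += ok
    return [
        ip for ip in attempts
        if len(usernames[ip]) >= FANOUT_THRESHOLD
        and successes[ip] / attempts[ip] <= MAX_SUCCESS_RATE
    ]

# Example: 60 different usernames from one IP, none successful -> flagged.
events = [("203.0.113.9", f"user{i}", False) for i in range(60)]
print(flag_stuffing_sources(events))  # ['203.0.113.9']
```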
Polymorphic malware, which mutates its encryption or code to get around security systems, is nothing new. But AI is creating a new generation of viruses, worms, trojans and ransomware that can:
- Learn from experience to refine and improve malware code mutations.
- Crunch data on the performance of malware variants in different security environments.
- Design the best attack for each context: encrypting data to "lock" it and hold it hostage, stealing it so that the attackers can use it for their own purposes, or both.

"Bad bots" drive almost 1/3 of all internet traffic21

From automation to autonomy
Then: Set-and-forget automated bots → Next: Autonomous, opportunistic bots
Then: Template-based phishing messaging → Next: AI-sustained phishing conversations
Then: Brute force and credential stuffing attacks → Next: Smart targeting and credential determination
Then: Visible bots → Next: Shapeshifting and invisible bots
Then: Complex tech in the hands of a few → Next: Gen AI within reach of many
The AI-powered bot explosion (% of internet traffic)
2019: bad bots 24.1, good bots 13.1, human 62.8
2020: bad bots 25.6, good bots 15.2, human 59.2
2021: bad bots 27.7, good bots 14.6, human 57.7
2022: bad bots 30.2, good bots 17.3, human 52.6
2023: bad bots 32.0, good bots 17.6, human 50.4
Cybercrime as a service: Toolkits for beginners
CaaS is another trend that is scaling cybercrime. Bad actors have embraced the as-a-service business model which, in the cloud era, has transformed the enterprise tech economy. Criminals who lack the technical knowledge and financial or infrastructural means to carry out their own phishing, ransomware and other attacks can contract the software and expertise they need from shady specialists. The CaaS industry is enabling a new generation of malicious amateurs.
Among the services on offer are:
Initial access: Help with forcing the first break past a victim's defenses.
Easy-to-use kits: Allow those of less tech proficiency to launch a variety of attacks.
Trojan attacks: Trojans are a type of malware that presents itself as a legitimate file or application to fool users into clicking on it.
Infostealer attacks: Use malware to secretly steal crucial information, from credentials to bank information and beyond, from a user's computer.
Keylogger attacks: A type of infostealer attack in which a criminal spies on what a victim is typing, to ascertain log-in credentials and other key info.
Ready-to-go botnets: For running denial-of-service and other attacks.
Spyware: Giving criminals remote control of victims' devices.
Money laundering: Crypto "mixing" and other tools and methods for when it's time to cash out.
A CaaS price list22
Infostealer subscription: $1,024
Mailing/spam: $100
DDoS subscription: $20
Phishing kit
Scam template: $25
Zero-day exploit: $2,000-$200,000
In 2024 the U.K. National Crime Agency (NCA) shut down the CaaS group Russian Coms, which sold subscriptions to an advanced spoofing service used to impersonate financial institutions, telcos and law enforcement agencies in over a million scamming attempts. The average loss to each victim, across 107 countries, was $12,000.23
AI: A triple-edged sword
AI is emerging both as a powerful resource for cybercriminals and an effective tool for those seeking to defeat them. But there is a third dimension — the inherent security risk posed by the rush to introduce AI tools to daily life. Two-thirds of organizations recognize AI's potential to significantly impact cybersecurity, but only a minority have taken steps to assess the security of AI tools themselves before deployment.
Security screening for AI tools is lacking24
Does your organization have a process in place to assess the security of AI tools before deploying them?
No: 63%
Yes: 37%
As AI fast becomes critical infrastructure, further risks need to be considered:
Adversarial attacks and model inversion
AI systems rely on vast datasets from disparate sources, which can leave them vulnerable. Bad actors may use model inversion attacks, for example, in which they reverse-engineer AI models to reveal sensitive underlying data such as protected health information (PHI).25 Or they may engage in data poisoning, in which the input data for AI is manipulated to generate nefarious outcomes. (A toy illustration of an adversarial input appears after this list.)
Autonomous decision-making risks
Agentic AI tools, which operate with minimal human oversight, can make incorrect or irreversible decisions. This autonomy introduces vulnerabilities that bad actors may exploit.
Compliance challenges
AI systems processing sensitive data must adhere to stringent regulations like the EU’s General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA). Because regulations vary from country to country, when prompts are sent to gen AI tools and APIs abroad, they may pose security risks.
40% of AI-related data breaches by 2027 may be caused by improper use of gen AI across borders26
In January 2025 researchers reported that an exposed database discovered at Chinese AI startup DeepSeek revealed over a million logs of user data, API keys and chat history.27
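To make the adversarial-input risk described above concrete, here is a toy, self-contained illustration of a fast-gradient-style evasion against a linear "malicious/benign" scorer. The weights, sample and epsilon are all invented; real attacks and models are far more complex, but the principle is the same: perturb input features along the model's gradient until the detector changes its verdict.

```python
import numpy as np

# Toy adversarial-input demonstration on a linear "fraud classifier".
# All weights and the sample are invented for illustration.
w = np.array([0.9, -0.4, 1.3])   # hypothetical trained weights
b = -0.2

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))  # probability of "malicious"

x = np.array([0.8, 0.1, 0.9])
print(f"original score: {predict(x):.2f}")        # ~0.84, flagged as malicious

# Fast-gradient-style perturbation: nudge each feature against the
# gradient of the score (which, for a linear model, is just w) so the
# input crosses the decision boundary with minimal change per feature.
epsilon = 0.8
x_adv = x - epsilon * np.sign(w)
print(f"adversarial score: {predict(x_adv):.2f}")  # ~0.39, now looks benign
```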
Third-party risk: A new level of complexity
The rapid, and in some cases unchecked, insertion of AI into every sector of the economy will add further complexity to supply chain and third-party risk. Already, a third of cyber insurance claims are a consequence of third-party risk,28 and this is likely to increase with the proliferation of AI tools that can become entry points for cyberattacks.

The cybersecurity of supply chains is cited by most companies as their most complex challenge29
Vulnerabilities in complex supply chain interdependencies
Increasing sophistication of cybercrime: 26%
Uncertainty arising from geopolitical tensions: 22%
Rapid adoption of emerging technologies: 12%
Cyber skills gap: 7%
Expanding regulatory requirements with insufficient harmonization: 6%
IT-OT convergence
The problem is exacerbated by the fact that while AI is fast and dynamic, third-party risk management (TPRM) procedures rely on snapshots of risk and self-certification — providing inadequate oversight.
The proportion of data breaches due to third parties is increasing30
Only 4% of organizations are confident in their vendors’ security questionnaire responses31
Quantum: Code-breaker?
Emergent quantum computing (QC) could eventually solve puzzles exponentially more complex than those classical computing can. The question is: Would those include the cryptographic ones that underpin today's security systems? Most encryption methods, like RSA and ECC (Elliptic Curve Cryptography), rely on mathematical problems that are extremely difficult for classical computers to solve. "Cryptographic relevance" in QC is expected to arrive with 1,000-qubit quantum computers32 — the qubit being QC's basic unit, equivalent to classical computing's bit.33 (IBM's current state-of-the-art quantum computer has 127 qubits.)34

The massive cost of QC, coupled with the physical and infrastructural problems of deploying it, makes it an unlikely weapon for cybercriminals in the near future. Nonetheless, organizations are preparing quantum-ready encryption.35 One fear is the possibility of "store now, decrypt later" gambits, where criminal — or nation-state — actors steal information now in the expectation that they will have the tools to access it years later.36
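The asymmetry behind these fears can be stated compactly. For an RSA modulus n, the best known classical factoring algorithm (the general number field sieve) runs in sub-exponential time, while Shor's algorithm on a sufficiently large, error-corrected quantum computer would run in polynomial time:

```latex
% Standard complexity estimates for factoring an RSA modulus n
T_{\text{GNFS}}(n) \approx \exp\!\left( \left(\tfrac{64}{9}\right)^{1/3} (\ln n)^{1/3} (\ln \ln n)^{2/3} \right),
\qquad
T_{\text{Shor}}(n) = O\!\left( (\log n)^{3} \right)
```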
Quantum computing research by nation states is rising37 (announced investment, $bn)
China: 15.3
Germany: 5.2
U.K.: 4.3
U.S.: 3.8
South Korea: 2.4
France: 2.2
Japan: 1.8
India: 1.7
Scams: A global problem
The scam is as old as time, but the web has transformed it into a global industry. In the U.K. alone, fraud accounts for 40% of crime, four-fifths of which is cyber-enabled.38 For scammers, the cheapest ploy is one that is automated and scalable — the email phishing attack still tops the list. But in more elaborate scams the victim is manipulated, often via social media, into sending large sums or investing in fraudulent schemes.
Global losses attributed to scams39
Countries worst hit economically (% of GDP)
Countries where scams are most prevalent (% of citizens who encounter scams daily)
96% of victims don't recover their losses
Pig butchering
"Pig butchering" — a grim term to describe scammers building trust with their victims over time before financially exploiting them, analogous to how a pig is fattened up before it's slaughtered.
1. Initial contact: Often through social media, dating apps or unsolicited messages, posing as friendly individuals or potential romantic interests.
2. Building trust: Scammers engage in conversations over weeks or months, sometimes establishing a fake relationship.
3. Investment pitch: Once trust is established, a seemingly lucrative investment opportunity is introduced, often involving cryptocurrency or get-rich-quick schemes.
4. Fake profits: Victims are shown fabricated profits to encourage them to invest more.
5. The scam: When victims try to withdraw their funds or stop investing, the scammers disappear with their money.
Scam farms
In some parts of the world, so-called scam farms have emerged to industrialize this time-consuming and labor-intensive exercise. In 2025 they made headlines as epicenters of vast, well-organized criminal enterprises. Whole cities are dominated by armies of online scammers, often recruited against their will.40

220,000 people forcibly trafficked into online scam farms in Southeast Asia43

Myanmar's 'scam city': A cybercrime company town
Shwe Kokko, a city on Myanmar's border with Thailand, was built in 2017 ostensibly as a resort destination. But it was soon dominated by cyberscam entrepreneurs taking advantage of the region's lawlessness. Online scams generated a multi-billion-dollar economy, serviced in part by trafficked labor put to work in barred-window facilities.41 A recent crackdown by the authorities has created a new problem: housing and repatriating up to 7,000 now redundant scam workers.42 On top of this, a devastating earthquake in March 2025 has left thousands of people homeless across the country.

Trafficking routes to Myanmar44
Black markets, dark web
"Know your enemy," advised the legendary military strategist Sun Tzu. That's difficult in cyberspace, where bad actors operate in digital anonymity. To trade the spoils of crime, however, they need a marketplace, and the black market in stolen personal data and financial information offers a window through which to view the activities of cybercriminals.

270 million stolen payment credentials posted for sale in 202445
What most people consider the internet is actually the tip of the iceberg. The surface web is readily indexable by search engines and accessible to anyone with a browser. The next layer, the deep web, accounts for most of the internet and includes legitimate private databases, password-protected sites and resources that require authentication to access — the very entities that hackers and cybercriminals target. Finally, the dark web is hidden from search engines and is only accessible through specialized software. It has become a hub for cybercriminal activities, including the sale of stolen data, hacking tools and ransomware services.
Going rates for stolen PII and assets on the dark web46
Infostealers: Harvesting personal data
Identity fraud begins with identity data. Infostealers are malicious programs designed to infiltrate systems and devices to extract valuable personally identifiable information (PII). They are cheap, easy to deploy and highly scalable, capable of automated mass credential harvesting. Educational institutions are prime targets for infostealers because they frequently store personal data with inadequate security precautions.

In 2024 the personal data of more than 20,000 students were stolen from the Chinese University of Hong Kong and offered for sale on the dark web. It was the third Hong Kong educational institution to fall victim to a cyberattack in the space of six months.47

Weekly number of cyberattacks per organization, by industry, globally48
Education: 3,574
Government: 2,286
Healthcare & medical: 2,210
Telecommunications: 2,084
Construction & engineering: 1,579
Energy & utilities: 1,577
Aerospace & defence: 1,572
Consumer goods & services: 1,554
Automotive: 1,553
Media & entertainment: 1,520
Associations & nonprofits: 1,510
Financial services
Synthetic identity fraud: Fabricating personas
Synthetic identity fraud is the creation of entirely fabricated identities or the mixing of genuine information with fake data to create convincing profiles with which to access systems or funds. Fraudsters typically use stolen social security numbers, fabricated names or fictional addresses to build these identities. Once established, synthetic identities are used to apply for credit cards, open bank accounts or even commit large-scale financial fraud. The growing availability of stolen PII makes it easier than ever for cybercriminals to scale these schemes.

PII (personally identifiable information) elements offered for sale with stolen card data in 202449
Date of birth: 1bn
Email: 99m
National identification number: 4.8m
Phone number: 7.4m
Mother's maiden name: 4m
PIN: 5.2k
New account fraud — by industry50
Video gaming: 18.3%
Retail: 16.2%
Communities: 14%
Travel & leisure: 8.8%
Gaming: 8.7%
Insurance: 1.8%
Logistics: 1.1%

Percentage of new U.S. lender accounts opened using synthetic identities51 (H1 2021 through H1 2024)
Deepfake automation: Attack of the simulacra
Technology has supercharged the scam. First, graphics processing units, originally intended for gaming, gave fraudsters the computational muscle to create better deepfakes. Then came machine learning (ML), processing massive data sets to lend criminals greater capabilities in digital image and audio synthesis. Generative AI is now turbocharging the process, facilitating the low-cost creation of seamless deepfakes in different media — fast and at scale. Deepfakes are also playing a role in a global environment already made anxious by "deepfake news" and the disinformation campaigns associated with the hybrid war efforts of hostile nation states.

Share of financial institutions experiencing an increase in deepfake fraud attacks52
82 deepfakes of public figures in 38 countries between 2023-2453

AI bots relentlessly scour social media profiles for images, video and voices. They autonomously generate deepfake versions of real people as well as create new synthetic identities from users' digital footprints. They then devise and execute multiple scams in real time — a type of identity fraud that is smart, proactive, automated and at scale.

4x growth in the number of deepfakes in fraud incidents in 2023-2454
244% annual increase in digital document forgeries55
Spray and pray: Strike force
These are the cybercrime equivalents of "spray and pray" — indiscriminate and executed en masse. They require simple computing tools and little or no hacking skill and can turn a profit even with a low hit rate.
Credential stuffing: Applying stolen sign-in credentials to huge numbers of web apps on the chance that some might work.
Brute force attack: Plying an account with an effectively endless series of usernames and passwords in the hope that one will work.
Password spraying: Throwing a smaller set of passwords against a vast number of accounts — the mirror image of the brute force attack.
Phishing: An email or message in which attackers impersonate trustworthy entities like friends, family, banks or utility companies to trick victims into sharing sensitive information.
The perils of passwords and PINs58
Analysis of 29 million exposed PINs by Australia's ABC News found that consecutive numbers, dates and years of birth were the most common PINs chosen by users. In the accompanying graph, popular PINs are represented by brighter pixels; the brightest areas correspond to combinations representing dates, such as 2512 (or 1225 in the U.S.), repeated digits, and PINs beginning with 19 and 20, the first two digits of years of birth.
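The pattern analysis behind findings like these is simple enough to sketch. Here is a rough Python re-creation of the classification logic, run on a handful of made-up PINs rather than the real 29 million:

```python
from collections import Counter

# Illustrative re-creation of the PIN-pattern analysis described above,
# run on an invented sample. Real analyses also check MMDD ordering
# (e.g. 1225 in the U.S.) and consecutive sequences like 1234.
def classify(pin):
    if len(set(pin)) == 1:
        return "repeated digits"        # e.g. 7777
    if pin[:2] in ("19", "20"):
        return "year of birth"          # e.g. 1987, 2004
    day, month = int(pin[:2]), int(pin[2:])
    if 1 <= day <= 31 and 1 <= month <= 12:
        return "date (DDMM)"            # e.g. 2512 = 25 December
    return "other"

sample = ["1234", "7777", "1987", "2512", "0683", "4936"]
print(Counter(classify(p) for p in sample))
# Counter({'other': 3, 'repeated digits': 1, 'year of birth': 1, 'date (DDMM)': 1})
```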
Account takeover: Identity hijack
Downstream of these methods lurks account takeover (ATO) fraud — what you get when one of them is successful. Once ensconced in a breached account, the criminal can harvest information, transfer out money or move laterally into connected accounts to victimize others.

Account takeover attempts by industry targets59
Travel: 11.5%
Business services: 8%
Computing & IT: 5.5%
Community & social: 4.7%
Telecom & ISPs: 3.7%
Law & government: 3.4%
Entertainment: 2.4%
Lifestyle: 2.2%
Other: 7.5%
E-skimming, also known as digital skimming or a Magecart attack, is a type of cybercrime where hackers steal payment card information during online transactions. This is done by injecting malicious code into e-commerce websites or payment processing pages. When customers enter their card details, the malware captures the information and sends it to the attackers. The availability of improved e-skimming kits and discovery of new vulnerabilities are expected to further boost this type of fraud in coming years.
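On the defensive side, one simple countermeasure is auditing exactly which domains a checkout page loads scripts from. Below is a hedged sketch using only the Python standard library; the domains are hypothetical, and real deployments would pair a check like this with Content Security Policy headers and subresource integrity attributes.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Illustrative defensive check: list the domains a checkout page loads
# scripts from and flag anything outside an expected allowlist. Magecart-
# style e-skimmers typically appear as an unexpected third-party script.
ALLOWED = {"shop.example.com", "js.payment-provider.example"}  # hypothetical

class ScriptAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.suspect = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src", "")
            host = urlparse(src).netloc
            if host and host not in ALLOWED:
                self.suspect.append(src)

page = """
<script src="https://shop.example.com/cart.js"></script>
<script src="https://cdn.evil-skimmer.example/loader.js"></script>
"""
audit = ScriptAudit()
audit.feed(page)
print(audit.suspect)  # ['https://cdn.evil-skimmer.example/loader.js']
```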
Money laundering
Money laundering is essential to cybercrime. It enables illicitly obtained funds to be disguised as income from legitimate sources. By cleaning the money through various financial transactions, criminals can integrate it into the legal economy without raising suspicion.

Illicit funds flowing through the global financial system61
% of money laundering that passes under the radar62
% of global GDP laundered annually63
AI/ML: Speed and scale
AI is revolutionizing money laundering by enabling criminals to scale their operations with unprecedented efficiency and sophistication. Using advanced algorithms, they can automate complex financial transactions, making it easier to disguise the origins of illicit funds across multiple accounts and jurisdictions. AI-generated synthetic identities and deepfake technologies allow them to bypass stringent verification processes, while machine learning helps them identify vulnerabilities in anti-money laundering systems, adapting their strategies in real time. This industrialized approach to money laundering presents a significant challenge.

Five ways AI is turbocharging money laundering
1. Automated transactions: AI bots can execute rapid, complex transactions across multiple accounts to confuse investigators.
2. Adaptive techniques: Criminals use AI to analyze anti-money laundering systems and develop methods to evade detection.
3. Cryptocurrency manipulation: Criminals use AI to exploit decentralized finance (DeFi) platforms and mixing services to obscure money trails.
4. Synthetic identities: AI-generated fake identities are used to open fraudulent accounts and bypass verification processes.
5. Deepfake technology: AI-generated deepfakes are used to impersonate individuals and authorize illicit transactions.
Crypto: The launderer's currency of choice
Cryptocurrency appeals to money launderers because of its pseudonymity, which allows transactions to occur without revealing personal identities. Its decentralized nature and lack of oversight make enforcement challenging. The ability to conduct seamless, borderless transactions is also a great advantage, while tools like mixers and tumblers can obscure transaction histories.

Estimated proportion of cryptocurrency transactions that are associated with criminal activity64

A laundering exercise using cryptocurrency typically moves through three steps:
1. Placement: Criminals introduce illicit funds to the financial system, perhaps by purchasing crypto with stolen credit cards or through peer-to-peer platforms or crypto ATMs. Alternatively, they use the stolen funds at legitimate businesses, like online shops and gambling sites, which can convert it to crypto.
2. Layering: The crypto funds are moved through various transactions to break the connection to the original source. This can include the use of chain hopping — moving funds across multiple blockchains to create a complex transaction trail — privacy coins with built-in anonymity features, and mixers and tumblers that blend cryptocurrencies from multiple users and redistribute them to new addresses.
3. Integration: Criminals convert funds back into usable assets, fiat currency or other legal assets. This can involve opening crypto exchange accounts with synthetic identities or stolen personal data, paying for fake services through shell companies, purchasing real estate and luxury goods that can be resold, or spending the funds via prepaid crypto-backed debit cards.

AI-assisted cryptocurrency money laundering
AI automates this work. Stolen money is deposited into compromised or synthetic bank accounts, automated scripts distribute it across multiple accounts to avoid detection, and bots execute real-time purchases of cryptocurrency.
Money mules: Individuals (sometimes unwitting) used to receive and move illicit funds through their accounts.
Consumer platforms: Use of retail tools like gift cards, e-commerce and prepaid services to disguise illicit money as legitimate.
Mixers & tumblers: Services that obfuscate crypto transactions by mixing funds with those of others to break traceability.
Privacy coins: Cryptocurrencies designed to provide transaction anonymity.
DEX chain hopping: Rapidly swapping assets across multiple blockchains via decentralized exchanges to obscure origins.
AI trading bots: Automated algorithms that execute rapid, randomized trades to complicate tracking.
NFT wash trading: Buying and selling self-owned NFTs to create fake transaction history and legitimize illicit crypto.
Flash loan laundering: Use of instant, collateral-free loans in DeFi to shuffle funds and mask laundering activity.
Crypto to fiat: Converting cleaned cryptocurrency back into real-world currency through exchanges or brokers.
Approximately 38% of trades and 60% of traded value on three major NFT exchanges could be compromised by wash-sale manipulation65
Laundering the world’s biggest crypto heist
In February 2025, North Korean hackers stole nearly $1.5 billion in cryptoassets from Bybit, a Dubai-based exchange. It was the largest crypto heist in history and the proceeds were laundered with unmatched speed.66 The laundering process involved thousands of separate transactions, rapid transfers through hundreds of wallets,67 conversion into different cryptocurrencies, and the moving of funds between blockchains.68
Trade-based money laundering: Hiding in plain sight
Trade-based money laundering (TBML)69 manipulates transactions along the supply chain — such as by over- or under-invoicing goods, falsifying trade documents or misrepresenting shipments — to move illicit funds across borders. Several challenges hinder entities trying to detect and prevent TBML: the complexity of international trade finance operations, the lack of harmonized standards and data compatibility across jurisdictions, and the need for comprehensive international cooperation and collaboration among governments, businesses and the financial sector.70

TBML accounts for as much as 80% of illicit financial flows worldwide71
TBML can be 10 times more efficient than other money laundering processes, which makes it especially compelling for criminal and terrorist organizations moving large sums internationally.72

In early 2025, investigations revealed that the Irish Kinahan cartel established extensive TBML networks across Southeast Asia, particularly in Singapore and Indonesia. By exploiting regional business connections and weak enforcement measures, the cartel invested millions in various ventures, including commodities trading and real estate, to launder illicit funds.73
Money laundering as a service: Outsourcing to the experts
Money laundering as a service (MLaaS) is on the rise74 as professional facilitators leverage their financial expertise and networks to launder illicit funds for a criminal clientele. It combines traditional laundering methods with automation, AI and the use of dark web marketplaces and global financial networks to move and hide dirty money efficiently. MLaaS is scalable, difficult to detect and operates across jurisdictions.

In January 2025, Spanish and Portuguese authorities arrested 14 people involved in a Russian-run money laundering network that processed over €1 million daily, providing laundering services to other criminal groups in the EU.75

How it works
1. Client acquisition and onboarding: MLaaS providers market their services to cybercriminals, fraudsters and traffickers through dark web forums, encrypted messaging apps and cybercrime marketplaces. Some even offer onboarding guides, tiered pricing models and service-level agreements based on laundering volume, turnaround time or risk appetite.
2. Modular service packages: MLaaS operations may offer a menu of services, allowing clients to select tactics tailored to their needs — crypto mixing, chain hopping, mule networks, shell companies and fake accounts, and trade-based money laundering.
3. Integration and placement of funds: The final step is to introduce the money into the legitimate financial ecosystem through crypto-to-fiat conversion, purchase of luxury goods and assets, and use of shell companies to pay fake salaries or issue phony invoices that give illicit funds the appearance of legitimate income.
76. Recorded Future - Uncover Your Unique Threat Landscape with Collective Insights
77. Advanced Cyber Threat Intelligence | Recorded Future
78. Mastercard Invests in Continued Defense of Global Digital Economy With Acquisition of Recorded Future
79. Federated Learning for Collaborative Financial Crimes Detection | SpringerLink
81. Fighting friendly fraud: How to identify risky customers | Mastercard Newsroom
82. Trudenty Consumer Trust Network
83. Tradeverifyd | Supply Chain Risk Management Software
84. WEF Global Cybersecurity Outlook 2025
85. www.cnbc.com
86. New Survey: Half of People Use Passkeys as Frustrations with Passwords Continue - FIDO Alliance
87. One-click payments by 2030 with tokenization | Mastercard Newsroom
88. Global Fraud Detection and Prevention in Banking Market: 2024-29 | Juniper
89. APP scams performance report (July 2024)
90. Reality Defender Announces Real-Time Video Deepfake Detection for Web Conferencing Platforms
91. Ensign InfoSecurity launches real-time deepfake detection tool
92. www.silenteight.com
93. www.biocatch.com
94. Mastercard Trace Financial Crime
95. Digital identity made simple | Onfido
96. Identity builds trust in our digital world
97. www.apnews.com
98. www.paycompliance.com
99. AI-Native Fraud & Financial Crime Prevention Platform | Feedzai
100. www.coinlaw.io
Proactive security
Challenging tomorrow's cybercriminals requires intelligent, adaptive and collaborative tools. AI will provide the foot soldiers for battles no longer fought with static rules and blacklists, but through dynamic, real-time analysis and intervention. Large-scale insight sharing acts as a global early-warning system for cybercrime, detecting ripples of suspicious activity before they become waves of fraud.
The focus of businesses, institutions and governments will be on the effective management of increasingly autonomous technologies. Organizations will form alliances to exchange insights, best practices and defensive techniques, strengthening collective resilience against cyberattacks. Trust networks using privacy-enhancing technologies and federated learning systems will ensure that intelligence sharing is configured for specific activities, such as fraud or identity theft, without sharing underlying data.
Threat intelligence: Early warning systems
Intelligence and insights are the sword and shield of tomorrow's cybersecurity — intelligence to analyze vast datasets in real time, insights to drive effective and strategic responses. Organizations will require real-time data feeds that provide instant insights into emerging threats, attack methodologies and vulnerabilities. Threat intelligence platforms enhance decision-making by correlating vast amounts of data from diverse sources, while predictive analytics will allow organizations to understand attacker motivations, preferred tools and behavioral patterns and maintain a proactive security posture.

Threat intelligence encompasses malware analysis, vulnerability assessment and monitoring of malicious actors in both the physical and digital realms. As the landscape of cyber threats grows more sophisticated, the demand for real-time, automated threat intelligence has surged, with companies leveraging AI and machine learning to predict and mitigate risks. Collaboration within the industry — such as sharing threat data across platforms and regions — also plays a crucial role in enhancing global cybersecurity resilience.

% of security teams dissatisfied with their ability to correlate security data across all products and services76
Four main types of threat intelligence77
1. Operational
Focuses on the mechanics of specific campaigns, providing insight into an attacker's motivation and capabilities.
2. Strategic
A wider view, analyzing macro-level dynamics and geopolitical and industry trends that may create cyber risks.
3. Technical
Zeroes in on access routes, malware signatures, IP addresses and other details of an attack.
4. Tactical
Keeps tabs on criminals' shifting techniques and procedures.
In December 2024 Mastercard acquired Recorded Future, the world's largest threat intelligence company, with over 1,900 clients across more than 80 countries, including the governments of 45 countries and over 50% of the Fortune 100.78 Recorded Future helps companies stay ahead of cyberattacks by connecting the dots across millions of sources, powered by Intelligence Graph®, a uniquely powerful AI-driven data collection and analysis tool. Think of it as an early warning system that informs organizations about which threats they need to worry about right now, so they can protect themselves before an attack happens.
AI at scale: Autonomous defense
Cybersecurity is evolving from reactive threat mitigation to autonomous, proactive defense systems. These systems leverage AI to predict, detect and neutralize threats before they cause damage. Automated security tools continuously analyze network traffic, endpoint activities and system behavior to identify anomalies indicative of cyberattacks. The integration of self-healing systems ensures that vulnerabilities are automatically patched, minimizing downtime and reducing human intervention. Dynamic and intelligent fraud detection systems are a leap forward in the payments industry, a principal target of fraudsters and cyberattacks. Here's how they make a difference:

Real-time analysis: AI-powered systems can analyze vast amounts of transactional data in real time, identifying anomalies and suspicious patterns as they occur.
Adaptive learning: Unlike traditional rule-based systems, they continually learn from new data, attack vectors and criminal strategies.
Predictive analytics: Leveraging historical data makes it possible to anticipate potential fraud scenarios and implement preventative measures.
Scalability and efficiency: The growing volume and complexity of digital transactions are a perfect fit for AI analysis, reducing the false positives produced when traditional systems erroneously flag legitimate transactions as fraudulent.
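As a flavor of how anomaly-based screening works, here is a minimal sketch using scikit-learn's IsolationForest on synthetic transaction features. The features, data and thresholds are invented; production systems use far richer signals and models.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Minimal sketch of anomaly-based transaction screening, with synthetic
# data standing in for a real payment feed. Features here are invented:
# amount (USD) and hour of day.
rng = np.random.default_rng(0)
normal = np.column_stack([rng.normal(60, 20, 1000),    # typical amounts
                          rng.normal(14, 3, 1000)])    # daytime activity
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

candidates = np.array([
    [55.0, 13.0],    # ordinary purchase
    [4900.0, 3.2],   # large amount at 3 a.m.
])
print(model.predict(candidates))  # [ 1 -1 ]  (-1 = flagged as anomalous)
```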
The Mastercard network resists attack 200 times each minute.79 Its Safety Net global fraud monitoring tool prevented $50 billion in fraud between 2022 and 2024. Another AI tool, Decision Intelligence, assesses fraud risk on 159 billion transactions each year, issuing a score for each one within 50 milliseconds.
Mastercard's Safety Net detects organized card testing by fraudsters, indicative of a BIN (bank identification number) attack.
Federated learning: Collaborative fraud modelling
Federated learning is a way of training AI cybersecurity models that turns training's usual logic on its head. In traditional methods, data flows away from individual nodes, or "clients," towards the model in the cloud — potentially exposing that data and violating data-governance regulations. In federated learning, the model itself is downloaded to the clients and trained there, locally, using local data. At the end of the process the various local versions are reassembled in the cloud into a single integral model, without risking the exposure or sharing of the underlying data.

Key benefits
No data movement or centralization: Data remains within each organization's secure environment, which reduces the risk of data breaches, compliance violations or leaks of proprietary information.
Improved fraud detection accuracy: By training on insights from multiple institutions, the model gains a more comprehensive understanding of fraud behaviors, making it better at identifying suspicious activity.
Privacy protection: Federated learning uses privacy-enhancing technology (PET). Since raw data is never shared, organizations maintain control over their sensitive information, aligning with privacy regulations and other industry-specific standards.
Preserving competitive intelligence: Organizations are often reluctant to share data because it can reveal business strategies or customer behaviors. Federated learning enables collaboration without compromising competitive advantages.
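Below is a compact sketch of the federated averaging idea (FedAvg) described above, with three synthetic "banks" training a toy logistic-regression fraud model locally and a server that only ever sees weight vectors. Data, model and hyperparameters are all illustrative.

```python
import numpy as np

# Minimal federated-averaging sketch: each bank trains a local fraud model
# on its own data; only weight vectors, never transactions, leave the banks.
rng = np.random.default_rng(1)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """A few steps of logistic-regression gradient descent on local data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

n_features = 4
global_w = np.zeros(n_features)
banks = [(rng.normal(size=(100, n_features)), rng.integers(0, 2, 100))
         for _ in range(3)]  # synthetic stand-ins for three institutions

for round_ in range(5):
    # Each client trains locally from the current global model...
    local_ws = [local_update(global_w, X, y) for X, y in banks]
    # ...and the server aggregates by simple averaging (FedAvg).
    global_w = np.mean(local_ws, axis=0)

print(global_w.round(3))  # shared model; no raw data ever left a bank
```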
Improvement in fraud detection accuracy by financial institutions as a result of a federated learning platform79
Anticipated rise in federated learning market80
Trust networks: Blind insights
Parallel to the growth in federated learning, trust networks are emerging as a powerful tool for sharing fraud insights between organizations without exposing personal data. These networks are particularly effective in combating first-party and other fraud schemes. First-party fraud is committed by people directly engaging in transactions. A payer might falsely claim never to have authorized a transaction, for example. In a type of first-party fraud known alternatively as friendly or chargeback fraud, the buyer of an expensive smartphone online might falsely claim never to have received it.

1/4 of consumers admit to having committed friendly fraud in 202481
Ethoca, a Mastercard company, lets merchants and card issuers share data in real time to prevent fraud and resolve transaction disputes faster. The service notifies merchants about potentially fraudulent transactions so they can act before issues escalate to chargebacks.
Trudenty's Consumer Trust Network82 was set up by an online seller defrauded by one of her buyers. The network preserves consumer data privacy while allowing companies to share risk signals indicating fraud. Retailers, banks and card issuers can make informed decisions, in real time, about how they interact with customers to reduce losses and time-consuming investigations, while improving experiences for trusted consumers.

Tradeverifyd83 uses AI agents to proactively identify, monitor and mitigate supplier risks by harnessing predictive intelligence and n-tier supplier discovery. The company offers real-time third-party monitoring and automated alerts.
N-tier risk assessment: Deep mapping threat relationships
AI is ushering in an era of continuous real-time n-tier risk assessment, which involves evaluating threats across multiple levels (the nth degree) of a supply chain or system, beyond just the immediate vendors, suppliers or components. It focuses on identifying vulnerabilities and dependencies that exist deeper in the network, such as tier 2, tier 3 and beyond. This approach is crucial for understanding the interconnectedness of systems and mitigating risks that might not be apparent at the surface level.
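At its core, n-tier discovery is a graph traversal. Here is a toy sketch, with invented supplier relationships, that computes the tier at which each dependency first appears:

```python
from collections import deque

# Toy n-tier mapping: given direct supplier relationships, walk the graph
# to see how deep each dependency sits. All names are invented.
supplies = {
    "us":        ["vendorA", "vendorB"],
    "vendorA":   ["chipmaker"],
    "vendorB":   ["chipmaker", "logisticsCo"],
    "chipmaker": ["rare-earth-mine"],
}

def tier_map(root):
    tiers, queue = {}, deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        for nxt in supplies.get(node, []):
            if nxt not in tiers:        # record the shallowest tier
                tiers[nxt] = depth + 1
                queue.append((nxt, depth + 1))
    return tiers

print(tier_map("us"))
# {'vendorA': 1, 'vendorB': 1, 'chipmaker': 2, 'logisticsCo': 2, 'rare-earth-mine': 3}
```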
Know Your Agent: Onboarding AI
The rush to deploy AI to organizational systems creates a whole new set of often unpredictable risks. For example, despite the immense power of AI tools, around two-thirds of organizations have no processes in place to vet their security risks before deploying them.84 Effective AI governance will be crucial to prevent privacy violations, data breaches and nefarious manipulations. A focus on explainable AI (XAI) and the advent of Know Your Agent (KYA) processes will enhance transparency, ensuring that cybersecurity decisions made by AI-driven systems are interpretable and accountable. Additionally, continuous audits and adversarial testing will be necessary to prevent cybercriminals from exploiting AI-driven security measures. Among the steps inherent in the KYA process are verification of an AI solution's or other digital agent's data sources and algorithms, bias testing, and the drafting of permission and access management protocols.

Organizations could also consider:
Conducting regular adversarial testing: Simulate attacks to identify weaknesses in AI systems. This includes testing for adversarial inputs, data poisoning and model vulnerabilities.
Implementing robust data security measures: Encrypt sensitive datasets used for training AI models and restrict access to authorized personnel. Regularly audit data sources to ensure they are free from malicious manipulation.
Adopting explainable AI (XAI) solutions: Invest in AI models that are interpretable and transparent, allowing organizations to understand the reasoning behind decisions. This builds trust and makes it easier to detect anomalies.
Human-in-the-loop mechanisms: Incorporate human oversight into critical AI processes, especially for high-stakes decision-making. Human operators can review and verify AI outputs to minimize errors or exploitation.
Monitoring and updating AI models: Continuously monitor AI systems for unusual behavior and deploy updates to improve resilience against emerging threats. This includes patching vulnerabilities and improving algorithm robustness.
Compliance and governance: Develop comprehensive policies to ensure AI systems align with data protection regulations (e.g., GDPR, CCPA). Clear governance frameworks can guide ethical and secure AI implementation.
Governments worldwide are introducing new regulations around AI. The European Union AI Act85 categorizes AI applications according to risk and has strict requirements on AI systems dealing with biometric identification, security and critical infrastructure, for example. This introduces compliance challenges for multinational organizations with complex global operations.
Fighting the scammers
The commoditization of personal data, fueled by dark web marketplaces and AI-driven fraud tactics, is being met by innovation in identity verification and authentication. The next five years will see a decisive shift toward continuous, real-time, risk-based authentication, building on behavioral analytics, biometrics and identity networks.
Continuous authentication: Always-on security
As tokens and passkeys substitute for data and passwords, continuous authentication will evolve to leverage a wider range of biometrics, behavioral analytics, device intelligence and contextual solutions that ensure persistent and dynamic verification of individuals and entities. This approach significantly reduces identity fraud.

Continuous, or always-on, authentication offers dynamic security in a protean threat environment, assessing on a rolling basis whether devices and users should be trusted. It uses context-aware methods like geolocation assessment and transaction data and picks up on anomalies like atypical typing rhythms or mouse movements. AI analysis of multiple datapoints enables a smarter, more reasoned assessment of risk than passwords and PINs. Global markets for password-less authentication, risk-based authentication and passive (silent) authentication are forecast to grow rapidly over the next several years, driven by tech advances, demands for enhanced security and rising adoption across industries like finance, health care, retail and workplace security.

Continuous authentication is related to zero-trust architecture, an approach to cybersecurity that eschews single logins and simple perimeter protection in favor of more holistic security. Every access request, whether inside or outside the network, is treated as untrusted until proven otherwise. Zero-trust architecture incorporates multiple strategies, such as least-privilege access (limiting access to only what is needed), micro-segmentation and real-time monitoring.
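Below is a highly simplified sketch of a risk-based authentication decision, combining a few contextual signals into a score and mapping it to an action. Every signal, weight and threshold here is invented for illustration; real engines weigh hundreds of signals with learned models rather than hand-set rules.

```python
# Hedged sketch of a risk-based authentication decision: combine a few
# contextual signals into a score and pick an action. All weights and
# thresholds are invented for illustration.
def risk_score(signals):
    score = 0.0
    if signals["new_device"]:            score += 0.35
    if signals["geo_mismatch"]:          score += 0.30   # login far from usual location
    if signals["typing_anomaly"] > 0.7:  score += 0.25   # behavioral biometrics
    if signals["odd_hour"]:              score += 0.10
    return score

def decide(score):
    if score < 0.30:  return "allow"
    if score < 0.60:  return "step-up"   # e.g. ask for a passkey or biometric
    return "block"

session = {"new_device": True, "geo_mismatch": True,
           "typing_anomaly": 0.2, "odd_hour": False}
s = risk_score(session)
print(round(s, 2), decide(s))  # 0.65 block
```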
The Mastercard portfolio of identity solutions adds protection across all money movement flows. By combining identity, device, behavioral biometrics and transaction data with sophisticated AI technology, it can prevent bad actors from striking across the lifecycle of a scam.85
Tokenization and passkeys: The twin turbo of fraud prevention
Tokenization and passkeys are both critical tools in modern security and fraud prevention, though they serve distinct purposes. Tokenization protects sensitive data, such as credit card numbers or personal information, by replacing it with unique tokens that are meaningless outside of the secure system, minimizing exposure and reducing the risk of data breaches. Passkeys, on the other hand, are a password-less authentication method that uses cryptographic key pairs — one private and one public — ensuring secure account access while eliminating vulnerabilities associated with traditional passwords, like reuse or phishing. While tokenization focuses on securing data in storage and transactions, passkeys emphasize secure authentication.

% of the world's top 100 services and websites that have already adopted passkeys86
Mastercard target for tokenization of e-commerce transactions by 203087
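The data-protection half of that pairing can be sketched in a few lines: a toy token vault that hands merchants a random token while the primary account number (PAN) never leaves the vault. Real schemes add format preservation, cryptographic controls and domain restrictions omitted here.

```python
import secrets

# Minimal token-vault sketch: the merchant stores only a random token;
# the real card number stays inside the secure vault.
class TokenVault:
    def __init__(self):
        self._vault = {}                      # token -> primary account number

    def tokenize(self, pan):
        token = secrets.token_hex(8)          # random, meaningless outside the vault
        self._vault[token] = pan
        return token

    def detokenize(self, token):
        return self._vault.get(token)         # only the vault can map back

vault = TokenVault()
t = vault.tokenize("5555123412341234")
print(t)                    # e.g. '9f1c2ab34d5e6f70', safe to store with the order
print(vault.detokenize(t))  # original PAN, available only inside the vault
```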
AI scam protection: Patterns and anomalies
Tomorrow's scam protection tools will also use context-aware methods to assess risk. The defining feature of the scam is that the victims themselves play an active role in their misfortune, making it far harder to police using traditional authentication tools. Authorized push payment (APP) fraud happens when the victim is deceived into transferring funds directly to accounts under the control of the fraudster.

Anticipated rise in number of fraudulent push payment transactions88
2026: 190.1M
2027: 230.1M
2028: 276.9M
2029: 334.3M
New technologies are increasingly able to determine whether manipulation and deception are clouding victims’ judgment. In the U.K., Mastercard’s Consumer Fraud Risk solution, for example, uses AI to analyze the parties to a transaction and map their relationships to constellations of high risk or suspect accounts across the real-time payments network. Within a fraction of a second a risk score is generated that enables a bank to block a payment before it leaves the account. The U.K. is bucking the global trend for APP fraud, seeing a 12% reduction across 2023.89
Real-time deepfake detection: A cat and mouse game
Advances in deepfake detection are focused on developing more robust and efficient methods to identify AI-generated content across various modalities, including video, audio, images and documentation. Detection systems apply sophisticated machine learning, computer vision and audio analysis techniques to differentiate between real and manipulated media. AI-powered deepfake detection tools can analyze pixel-level differences between authentic and manipulated images with high precision. However, very few solutions can detect deepfakes in real time, which limits their ability to detect a scam in progress. In time, we can expect real-time capability to become a standard feature of all conferencing apps, and not just identity verification platforms.

U.S. deepfake detection platform Reality Defender recently announced its platform was capable of real-time detection, enabling the verification of individuals on web conferencing platforms.90 Meanwhile, Asia-Pacific's largest cybersecurity firm, Ensign InfoSecurity, says its Aletheia solution can identify AI-driven manipulations in audio and video content inside a second and with 90% accuracy.91
Innovation in anti-money laundering
As illicit financial networks become faster, smarter and more decentralized, financial institutions and regulators are adopting next-gen anti-money laundering (AML) solutions powered by AI, blockchain intelligence, advanced analytics and cross-industry collaboration. These solutions aim to detect suspicious activity at machine speed, streamline compliance and future-proof financial ecosystems against evolving criminal tactics.
Advanced transaction monitoring: Spotting patterns, mapping relationships
[Statistic: number of mule accounts found at 257 financial institutions in 2024]92
[Statistic: share of financial institutions expected to use AI for AML activities in 2025]93
AI-driven platforms detect anomalies and suspicious activity that might indicate money laundering across vast payment networks in real time. These systems leverage machine learning algorithms to analyze millions of financial transactions per second, benchmark them against expected behaviors and instantly flag deviations, such as rapid crypto conversion, cross-border layering activity or signs of large transactions being divided into smaller, less conspicuous amounts (structuring). Real-time detection enables compliance teams or automated systems to respond immediately: halting transactions, requesting further verification or launching deeper investigations. This can disrupt money laundering schemes as they unfold, rather than relying solely on post-transaction audits or static rule checks.
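As one concrete example of the real-time checks described above, the following minimal sketch flags potential structuring: several sub-threshold deposits into one account within a rolling window. All thresholds and the alert rule are illustrative assumptions, not regulatory values.

```python
# Minimal structuring check: a large sum split into many sub-threshold
# deposits inside a short rolling window raises an alert.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)
REPORT_THRESHOLD = 10_000      # single-transaction reporting line (illustrative)
STRUCTURING_TOTAL = 9_000      # aggregate of small deposits that looks engineered

recent = defaultdict(deque)    # account -> deque[(timestamp, amount)]

def on_deposit(account: str, ts: datetime, amount: float) -> bool:
    """Return True if this deposit should raise a structuring alert."""
    q = recent[account]
    q.append((ts, amount))
    while q and ts - q[0][0] > WINDOW:
        q.popleft()            # drop deposits outside the rolling window
    small = [a for _, a in q if a < REPORT_THRESHOLD]
    return len(small) >= 3 and sum(small) >= STRUCTURING_TOTAL

now = datetime.now()
for i, amt in enumerate([3_000, 3_200, 3_400]):
    if on_deposit("acct_42", now + timedelta(minutes=10 * i), amt):
        print("structuring alert on acct_42")
```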
Mastercard’s Trace Financial Crime uses machine learning and advanced graph analytics to detect complex, hidden financial activity in money mule accounts, often revealing indirect relationships across disparate financial entities that identify criminal networks. This goes beyond traditional rule-based detection to identify sophisticated laundering techniques. Machine learning enables Trace to analyze vast volumes of transactional data in real time and learn to identify subtle anomalies that may suggest involvement in a criminal scheme, such as irregular fund flows, unusual transaction frequencies or atypical recipient behavior.

In addition, AI tracing tools leverage graph analytics, which map the complex web of connections between accounts, individuals and financial institutions, allowing the tool to uncover indirect relationships. Imagine, for example, an account that wasn’t previously involved in fraud but shares links with known mule accounts through common intermediaries or shared transaction paths. These insights help identify hidden networks of criminal activity that might otherwise remain undetected when accounts are analyzed in isolation.
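To illustrate the shared-intermediary pattern just described, here is a hypothetical sketch (not the Trace algorithm) that flags an account with no fraud history when multiple known mule accounts sit two hops away via common counterparties. All names and the threshold are invented.

```python
# Surface accounts that share multiple intermediaries with known mules,
# even though the accounts themselves have no fraud history.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("new_acct", "intermediary_1"), ("new_acct", "intermediary_2"),
    ("mule_a", "intermediary_1"), ("mule_b", "intermediary_2"),
])
KNOWN_MULES = {"mule_a", "mule_b"}

def suspicious_via_intermediaries(node: str, min_links: int = 2) -> bool:
    linked_mules = set()
    for middle in G.neighbors(node):                  # one hop out
        linked_mules |= KNOWN_MULES & set(G.neighbors(middle))
    return len(linked_mules) >= min_links

print(suspicious_via_intermediaries("new_acct"))      # True: two indirect mule links
```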
[Statistic: money mule accounts being used for laundering activity in the U.K.]94
Blockchain forensics: Joining the dots
Specialized analytics providers are leading the evolving field of blockchain forensics, a discipline focused on uncovering illicit activity within blockchain networks. They use AI/ML models to trace crypto wallets and identify the origin and destination of digital assets, establishing connections between addresses even when efforts have been made to obfuscate ownership and the intent of transactions. These providers employ mapping techniques to follow assets across chains, ensuring continuity of an investigation even when assets are moved into entirely different ecosystems. This level of blockchain forensics is crucial when responding to large-scale hacks, such as the Bybit incident in February 2025, in which vast sums were stolen in a short period.

Blockchain forensics focuses on detecting money laundering techniques such as:
- Chain hopping: rapid movement of funds across multiple blockchains to confuse forensic trails.
- Tumbling (mixing): pooling together multiple users’ coins and redistributing them to break the traceability of individual transactions.
- Privacy coins: cryptocurrencies designed to protect anonymity by hiding or eliminating data that can be used to identify the parties to transactions.
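A simplified way to picture cross-chain tracing is taint propagation over a transfer graph. The sketch below, with made-up addresses, follows funds breadth-first from a known hack address; real forensics must also handle mixers, bridges and partial amounts.

```python
# Simplified "follow the money" over a transaction graph: taint propagates
# along transfers from a known hack address to every downstream address.
from collections import deque

# (sender, receiver) transfer edges; illustrative addresses only.
transfers = [
    ("hack_addr", "hop_1"), ("hop_1", "hop_2"),
    ("hop_2", "exchange_deposit"), ("clean_a", "clean_b"),
]

def tainted_addresses(source: str) -> set:
    out = {}
    for s, r in transfers:
        out.setdefault(s, []).append(r)
    seen, queue = {source}, deque([source])
    while queue:                      # breadth-first taint propagation
        for nxt in out.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(tainted_addresses("hack_addr"))  # flags every downstream address
```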
Know Your Customer: Identity verification in AML
KYC technology has become the first line of defense in preventing illicit actors from gaining access to financial systems. Traditional Know Your Customer (KYC) methods are costly and often rely on static data that may be vulnerable to fraud and breaches. Businesses can mitigate risk by layering passive assessments of identity attributes, such as device, data and behavior, before more expensive checks like document verification. New approaches embrace dynamic risk assessments that use AI to passively and continually evaluate these attributes; leaders in this space are adopting a perpetual KYC (pKYC) approach, in which risk assessments span the complete lifecycle of an account to improve overall compliance and reduce the risk of mule accounts, account takeover, scams and more.

Solutions like Onfido95 and Mastercard Identity Insights for Accounts96 use a combination of biometric authentication (such as facial recognition or fingerprint scanning), document analysis (examination of ID cards, passports or driver’s licenses for signs of forgery or tampering) and database and sanctions screening to meet AML compliance requirements. This technology is especially critical in combating synthetic identities: as deepfake technology and AI-generated identification documents become more convincing, KYC tools infused with machine learning and neural networks can detect subtle irregularities and behavioral red flags that human reviewers might miss, enabling organizations to meet evolving regulatory expectations under AML frameworks.
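The risk-based layering described above can be pictured as a tiered decision function: cheap passive signals run first, and document verification is triggered only when they raise the score. The signal names, weights and cut-offs below are purely illustrative, not any vendor’s API.

```python
# Tiered, risk-based KYC sketch: passive signals gate the expensive step.
PASSIVE_WEIGHTS = {
    "new_device": 0.3,
    "geo_mismatch": 0.4,
    "bot_like_typing": 0.5,
}

def passive_risk(signals: dict) -> float:
    score = sum(w for name, w in PASSIVE_WEIGHTS.items() if signals.get(name))
    return min(score, 1.0)

def kyc_decision(signals: dict) -> str:
    risk = passive_risk(signals)
    if risk < 0.3:
        return "approve"                          # low risk: no friction added
    if risk < 0.7:
        return "step_up: document verification"   # spend money only when needed
    return "refer to compliance review"

print(kyc_decision({"new_device": True}))
# 'step_up: document verification'
print(kyc_decision({"geo_mismatch": True, "bot_like_typing": True}))
# 'refer to compliance review'
```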
Global authorities are enforcing stricter AML compliance, with fines for noncompliance reaching billions of dollars, creating a strong incentive for companies to adopt KYC technology. In 2024 the U.S. Department of Justice fined TD Bank more than $3 billion because lax AML practices had allowed significant money laundering over multiple years.97 Looking ahead, regulators are expected to:98
- Increase collaboration across jurisdictions to track illicit funds
- Institute more frequent compliance audits to assess risk management frameworks
- Impose tougher penalties for AML violations, including higher fines and operational restrictions
Behavioral analytics for customer risk profiling: Dynamic monitoring for AML
Advanced fraud detection platforms such as Feedzai99 leverage machine learning and behavioral analytics to monitor customer activity across devices and communication channels. A mule account might pass KYC checks at onboarding but later demonstrate unusual login locations, rapid crypto transactions or interaction patterns indicative of bot control, all of which can trigger AML alerts under behavioral profiling.
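As a schematic of behavioral profiling (not Feedzai’s product logic), the sketch below compares a session against a per-customer baseline and raises the kinds of alerts described above. All signals and thresholds are invented for illustration.

```python
# Compare each session against a per-customer behavioral baseline and
# flag the post-onboarding drift typical of mule accounts.
profile = {
    "usual_countries": {"GB"},
    "max_tx_per_hour": 4,
}

def session_alerts(session: dict) -> list:
    alerts = []
    if session["country"] not in profile["usual_countries"]:
        alerts.append("unusual login location")
    if session["tx_per_hour"] > 3 * profile["max_tx_per_hour"]:
        alerts.append("abnormal transaction velocity")
    if session["avg_ms_between_clicks"] < 40:   # superhuman timing suggests a bot
        alerts.append("possible bot control")
    return alerts

print(session_alerts(
    {"country": "RU", "tx_per_hour": 20, "avg_ms_between_clicks": 25}
))  # all three alerts fire
```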
AI-enhanced behavioral analytics modules have been integrated into approximately 40% of new AML software solutions.100
RegTech innovation: Automated compliance
Modern regulatory technology (RegTech) tools leverage advanced AI to automate AML workflows. These tools are designed to handle vast amounts of financial data, detect anomalies and flag suspicious activities with a high degree of accuracy. By automating repetitive and time-consuming tasks such as transaction monitoring, customer due diligence and suspicious activity reporting, RegTech platforms reduce operational costs and human error while improving the speed and consistency of compliance processes.

One of the key advantages of these tools is their ability to adapt to rapidly changing regulatory environments. Using machine learning, natural language processing and real-time data analysis, RegTech solutions can update compliance rules dynamically, ensuring institutions remain compliant with evolving local and international regulations without constant manual intervention, as illustrated in the sketch after the list below.

Adoption use cases:
- Automated KYC/AML checks
- Continuous transaction monitoring
- Real-time suspicious activity reporting
- Risk scoring and customer segmentation
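To show what "updating compliance rules dynamically" can mean in practice, here is a minimal sketch in which rules live in data rather than code, so a policy change is a data update, not a redeploy. The rule schema and values are assumptions for illustration.

```python
# Compliance rules expressed as data: thresholds and lists can be updated
# (e.g. fetched from a policy service) without touching monitoring code.
rules = [
    {"name": "large_cash", "field": "amount", "op": "gt", "value": 10_000},
    {"name": "sanctioned_country", "field": "country", "op": "in", "value": ["KP", "IR"]},
]

OPS = {
    "gt": lambda field_val, rule_val: field_val > rule_val,
    "in": lambda field_val, rule_val: field_val in rule_val,
}

def evaluate(tx: dict) -> list:
    """Return the names of every rule the transaction trips."""
    return [r["name"] for r in rules
            if OPS[r["op"]](tx[r["field"]], r["value"])]

print(evaluate({"amount": 25_000, "country": "KP"}))
# ['large_cash', 'sanctioned_country']; editing `rules` changes behavior instantly
```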
APP fraud publication, July 2024 | Payment Systems Regulator: https://www.psr.org.uk/media/uaag25pp/app-fraud-publication-jul-2024-v6.pdf
In a few short decades, the online world has become the foundation of global communication, financial flows and information exchange, and the catalyst for cultural, political and economic transformation. It’s where innovation flourishes, ideas are shared and communities come together. What happens online increasingly determines what happens offline.
Now, it’s at a critical juncture. Cybercrime is evolving into a highly autonomous, scalable and sophisticated industry that destroys livelihoods, erodes confidence, creates instability and undermines growth, all while extracting huge amounts of value from the global economy. The features of this new and dangerous landscape are in constant flux: a proliferation of autonomous bots, polymorphic malware, deepfake scams and geopolitical antagonists, and a burgeoning cybercrime industry with evolving tactics and increasingly sophisticated tools.

Yet the technology that bad actors deploy to threaten, steal and subvert is being met by unprecedented levels of innovation in cybersecurity and fraud prevention. This is an arms race in which threat and response evolve in tandem and in tension. Even as AI creates new opportunities for cyber criminals, it offers more effective ways of thwarting them. Looking further ahead, quantum computing will present a challenge to the cryptography that underpins the global digital infrastructure, but also promises to be a powerful tool for securing it.

Agentic AI, real-time data, behavioral intelligence and advanced identity technologies will together form the foundation of a cybersecurity that’s smarter, faster and more resilient. It will counter autonomous threats with more vigilant autonomous defenses, identity fraud with smarter verification, and money laundering with more discerning intelligence. Cybersecurity stakeholders also have an advantage that criminals lack: the will to work together. Global collaborations, trust networks and insight-sharing are key to this.

The payments sector has always been a target for criminals, and consequently a trailblazer in the development of security technologies. Mastercard will continue to drive this innovation, to ensure the resilience of a financial ecosystem being rapidly redefined through technological change.
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re building a resilient economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.