AI in Financial Crime: New Scams and How to Protect Yourself


AI is a powerful tool worth approaching with curiosity rather than fear. In the right hands, AI tools can make work more efficient, spur creativity, and solve minor problems. But as with any innovative technology, it can also be used to commit crimes. In the wrong hands, AI makes scamming and financial crime easier, faster, and more effective.

Modern Scam Tactics

It’s both impressive and mildly upsetting to consider the creativity of scammers, past and present. AI has given them additional leverage in their financial crime exploits.

Deepfakes

Deepfakes are AI-generated videos, audio clips, or images that persuasively impersonate real people. They use generative AI models to manipulate facial expressions, voices, and body movements to make someone appear to say or do things they never did.

How can they be used to scam you? Imagine receiving a conference call from your higher-ups in which they urgently request that you make a series of wire transfers to various bank accounts. They are your bosses, after all, so you make the transfers as instructed. But the people on the call were deepfakes. It may sound far-fetched, but in early 2024, it’s exactly what happened to an employee at the British engineering company Arup.

Chatbots

Chatbots are, of course, already widely used as customer service representatives. In a similar capacity, they can be used to execute scams or commit fraud.

Chatbots can conduct realistic conversations that slowly but surely coax users into revealing sensitive information. Modern chatbots have an uncanny ability to convincingly mimic tone, style, and even slang. While legitimate companies may be transparent about using AI customer service agents, scam campaigns are usually designed to make the bot seem like a real person.

Synthetic Identities

Synthetic identities are part real, part AI. Fraudsters create them by stitching real data together with fabricated details to build an identity that can perform real-person tasks, such as opening bank accounts, applying for loans, or conducting transactions that fly under the radar of fraud detection systems.

How AI Is Incorporated

Cybercrime is neither legal nor respectable, and it is never fair to its victims. Still, there is a certain elegance to the way modern technology is employed to make scams more effective. For example, creators use generative AI to produce compelling deepfakes: AI-generated videos that can depict real people with hyper-realistic mannerisms and vocal inflections.

Natural language processing (NLP) makes it possible for chatbots to understand, respond in, and improvise realistic human language. A scammer’s AI can carry on a 30-minute conversation that feels surprisingly natural, carefully calibrated to extract information and personalized to a specific target.

Voice synthesis is the technology used to mimic the sound of real voices in vishing attacks. Vishing, or voice phishing, is a type of scam in which fraudsters use phone calls or voice messages to trick victims into giving personal or financial information.

Are You Susceptible to Scams?

The answer is yes. As deepfakes become harder to spot, everyone is vulnerable. It’s not a reflection on your intelligence or understanding of the world. Scammers are very good at what they do, and technology makes it easier for doctored information to fly under the radar. Busy professionals may be particularly susceptible: while you juggle multiple tasks, your brain naturally puts small details on the back burner. With AI making it nearly impossible to tell real from fake, it’s a wonder there haven’t been more victims already.

How to Protect Yourself

The best way to protect yourself from increasingly sophisticated and hard-to-spot scam tactics is to arm yourself on multiple fronts. That means using technology to fight technology, adopting a skeptical mindset, and staying up to date with the latest news in the field.

  1. Don’t rely on email or a call alone. Verify unusual requests directly.
  2. Always use strong passwords and multi-factor authentication (MFA).
  3. Keep software updated to avoid leaving unpatched vulnerabilities exposed.
  4. Pay close attention to unnatural speech and slight incongruities in written, audio, and video messages. Apply the same vigilance to live calls and even video conferences.
  5. Double-check wallet addresses and never take unsolicited offers at face value.
  6. Investigate the registration status of investment platforms as a first step, before extending any trust.
  7. Use a VPN to encrypt your traffic and only use secure networks for work-related tasks. Ideally, choose a VPN you can use on multiple devices.
  8. Invest in AI-based fraud detection tools.

Scammers Are Going to Scam

Scammers have been around forever, and they will continue to exist. For every tool developed to stop scams, scammers find a way to adapt. The intent behind any scam or fraud campaign remains the same: trick, deceive, and profit.

It may feel as if deepfakes, chatbots, and synthetic identities have doomed honest interactions. But just as travellers have learned to avoid pickpockets, professionals can learn to dodge AI‑driven financial scams.

Your Role in Fighting Financial Crime

Everyone has a role to play in keeping AI-driven scams at bay. Corporate leaders, managers, governments, and individuals can all do their part to help protect companies and their people from becoming victims. What can you do, starting today?

No matter your stance on AI, whether you see it as a dangerous or a useful tool, it’s important to take information about the latest scams seriously. Protecting yourself and your team requires a neutral stance on the technology itself, with a focus on understanding how to be careful rather than on promoting fear. After all, in the case of AI-driven scams, AI is simply amplifying the intentions of bad actors.

Good to know

This article was written by a third party as part of a commercial collaboration. The views and opinions expressed are solely those of the author and do not necessarily reflect those of HelloSafe. HelloSafe assumes no responsibility for the content or accuracy of this publication.