How to Spot AI-Powered Scams: Voice Cloning, Deepfakes & More

AI is making scams more convincing than ever. Learn how voice cloning, deepfakes, and AI phishing work — and exactly how to protect yourself.

TechFor60s Team

Artificial intelligence is changing the world in many wonderful ways. It helps doctors diagnose diseases, powers the voice assistants in our homes, and even helps us find old friends online. But unfortunately, criminals are also using AI to create scams that are more convincing than anything we have seen before.

The good news? Once you understand how these new scams work, you can spot them and protect yourself. This guide explains everything in plain, simple English. No technical background needed.

If you have read our guide on phone scams targeting seniors, you already have a strong foundation. Think of this article as the next chapter, because scammers have upgraded their tools, and now it is time for us to upgrade our defences.

What Are AI-Powered Scams?

Let us start with the basics. AI stands for artificial intelligence. It is software that can learn patterns and create things that look, sound, or read like they were made by a real person.

When criminals use AI, they can:

  • Clone someone's voice so it sounds exactly like your grandchild, your son, or your daughter on the phone
  • Create fake videos (called deepfakes) that look like a real person saying things they never actually said
  • Write scam emails that are polished, personal, and free of the spelling mistakes that used to give scammers away
  • Build fake websites that look identical to your bank, a government agency, or a well-known shop

In the past, many scams were easy to spot because of poor grammar, strange accents, or obviously fake pictures. AI has removed those warning signs. That is why we all need to learn new ways to protect ourselves.

Voice Cloning Scams: When Your "Grandchild" Calls

This is one of the most frightening scams out there, and it is growing fast.

How It Works

A scammer finds a short clip of someone's voice. This could come from a social media video, a voicemail greeting, or even a short phone call where they tricked someone into speaking for a few seconds. AI software then analyses that voice and can create brand-new sentences that sound exactly like that person.

The scammer calls you, and you hear what sounds like your grandchild, your son, or your daughter saying something like:

"Grandma, I am in trouble. I have been in a car accident and I need money for the hospital right now. Please do not tell Mum and Dad, they will be so upset."

The voice sounds real. The emotion sounds real. Your heart starts racing and you want to help immediately. That is exactly what the scammer is counting on.

Why It Is So Dangerous

The traditional grandparent scam relied on a stranger doing a rough impression. You might have noticed the voice was not quite right. With AI voice cloning, the voice can be nearly perfect. Even people who know their family members very well have been fooled.

In 2025, the Federal Trade Commission reported that Americans lost over 12 million dollars to voice cloning scams alone. In the UK, Action Fraud has seen a sharp rise in similar reports.

How to Spot It

Even though the voice sounds real, there are still clues:

  • The call comes from an unknown number. Your real grandchild's name would normally appear on your screen.
  • They ask for money urgently. Real emergencies rarely require you to send money within minutes.
  • They want gift cards, wire transfers, or cryptocurrency. These are untraceable payment methods that scammers love. A hospital or solicitor would never ask for payment by gift card.
  • They beg you not to tell anyone. This is designed to stop you from checking the story with other family members.
  • The conversation feels scripted. If you ask unexpected questions, the "voice" may struggle to answer naturally.

Deepfake Video Scams: Seeing Is No Longer Believing

You have probably heard the old saying, "I will believe it when I see it." Unfortunately, that no longer applies.

How It Works

Deepfake technology uses AI to create realistic-looking videos of people saying or doing things they never actually did. Criminals use this to:

  • Impersonate authority figures. You might see a video of a "bank manager" or "police officer" telling you that your account has been compromised and you need to move your money.
  • Create fake celebrity endorsements. A video appears to show a famous person recommending a "guaranteed" investment. The celebrity never said any of it.
  • Pretend to be family members on a video call, asking for money.

How to Spot It

Deepfake videos are getting better, but they are not perfect yet. Look for:

  • Odd lip movements. The words and mouth may not quite match up, like a badly dubbed film.
  • Strange blinking. The person may blink too little, too much, or in an unnatural pattern.
  • Blurry edges. Look at the hairline, ears, and jawline. These areas often look slightly blurred or distorted.
  • Lighting that does not match. The face may be lit differently from the background.
  • A sense that something is "off." Trust your instincts. If a video feels strange, even if you cannot explain why, be cautious.

The most important rule: never make a financial decision based on a video you received unexpectedly. If your bank needs to contact you, they will send a letter or you can visit your local branch.

AI-Generated Phishing Emails: The New Threat in Your Inbox

You may already know how to spot scam emails. The classic signs have always been poor spelling, odd formatting, and generic greetings like "Dear Customer." AI has changed the game.

How It Works

Criminals now use AI to write emails that are:

  • Perfectly written. No spelling mistakes, no grammar errors, no awkward phrasing.
  • Personalised. The email may include your real name, your address, or details about a recent purchase. Scammers scrape this information from data breaches and social media.
  • Urgent and emotional. The email creates a sense of panic. "Your account will be closed in 24 hours" or "Unusual activity detected on your card."
  • Convincing in design. The email looks exactly like it came from your bank, the NHS, HMRC, the IRS, or a company like Amazon.

How to Spot It

Even the best AI-written emails still have weaknesses:

  • Check the sender's email address carefully. It might say "Amazon" in the name, but the actual address could be something like support@amaz0n-security-alerts.com. Look for misspellings, extra numbers, or unusual domains.
  • Hover over links before clicking. On a computer, move your mouse over any link (without clicking) to see where it actually goes. If the address looks strange, do not click.
  • No legitimate company will ask for your password by email. Ever. If an email asks you to "confirm" your password, it is a scam.
  • Be suspicious of attachments. Do not open attachments you were not expecting, even if the email looks official.
  • When in doubt, go directly to the website. Instead of clicking a link in an email, open your web browser and type the company's address yourself. Or call them using the number on your statement or their official website.

Using two-factor authentication adds an extra layer of protection to your accounts, making it much harder for scammers to break in even if they do trick you into revealing a password.

The Grandparent Scam Evolved: AI Makes It Worse

We touched on this earlier, but it deserves its own section because it is the single most common AI scam targeting seniors.

The traditional grandparent scam has been around for years. A stranger calls, pretends to be your grandchild, and asks for money. It was often easy to spot because the voice did not sound quite right.

Now, with AI voice cloning, the scam has evolved:

  1. The scammer finds your grandchild's voice online. A short video on Facebook, Instagram, or TikTok is all they need. Even a 10-second clip is enough for AI to clone the voice.
  2. They research your family. Social media profiles often reveal names, relationships, birthdays, and locations.
  3. They call you with the cloned voice. The voice sounds exactly like your grandchild. They tell a convincing story about an emergency.
  4. They create urgency and secrecy. "I need help right now, and please do not tell anyone."
  5. They ask for untraceable payment. Gift cards, wire transfers, or cryptocurrency.

This updated version of the scam is incredibly effective because it bypasses the one defence most people relied on: recognising the voice of someone they love.

How to Protect Yourself: Your AI Scam Defence Plan

Here are practical, simple steps you can take today to protect yourself and your family.

1. Create a Family Safe Word

This is the single best defence against voice cloning scams. Sit down with your family and agree on a secret word or phrase that only your family knows. It could be anything: a favourite holiday destination, a pet's name from years ago, or a made-up word.

If someone calls claiming to be a family member and asks for money, ask them for the safe word. A scammer using a cloned voice will not know it.

Important: Do not share your safe word on social media, by text, or by email. Only discuss it in person or on a phone call you initiated yourself.

2. Always Call Back on a Known Number

If you receive a distressing call from someone claiming to be a family member, hang up and call them back using the number you already have saved in your phone. Do not call the number that just rang you, as it may be the scammer's number.

If you cannot reach the person, call another family member to check on them.

3. Never Send Money Urgently

No real emergency requires you to buy gift cards, send a wire transfer, or pay in cryptocurrency within minutes. Real hospitals, real solicitors, and real police officers do not work that way.

If someone is pressuring you to send money immediately and telling you not to speak to anyone else first, that is almost certainly a scam.

4. Limit What You Share on Social Media

The less information scammers can find about you and your family online, the harder it is for them to target you. Consider:

  • Setting your social media profiles to private
  • Not posting videos where your voice can be clearly heard (or ask family members to be mindful of this)
  • Avoiding posts that reveal family relationships, travel plans, or daily routines

5. Keep Your Devices and Accounts Secure

Scammers often get in through weak account security rather than clever tricks. A few simple habits make a big difference:

  • Keep your phone and computer updated. Software updates fix the security holes that criminals exploit.
  • Use a different password for each important account, especially your email and banking, and make each one long.
  • Turn on two-factor authentication wherever it is offered. As we mentioned earlier, even if a scammer tricks you into revealing a password, this extra step can stop them from getting in.

6. Talk About It With Friends and Family

Scammers rely on shame and silence. The more openly we discuss these scams, the fewer people will fall victim. Share this article with friends. Bring it up at your next family gathering. Make sure everyone, especially younger family members whose voices might be cloned, is aware of the risk.

Real Examples of AI Scams

These stories are based on real reports. They show how AI scams play out in everyday life.

The Cloned Grandson

Margaret, 73, from Birmingham, received a call that sounded exactly like her grandson, Jack. He said he had been in a car accident in London and needed 3,000 pounds for medical treatment. He was crying and begged her not to tell his parents. Margaret was about to go to the bank when she remembered reading about voice cloning scams. She hung up and called Jack's mobile number. He answered immediately, perfectly safe at home. The call had been completely fake.

The Fake Investment Video

Robert, 68, from Manchester, saw a video on social media of a well-known television presenter recommending a cryptocurrency investment. The video looked completely real. Robert clicked the link and invested 5,000 pounds. The investment platform was fake, and the video was a deepfake. He never got his money back.

The Perfect Phishing Email

Susan, 71, from Bristol, received an email that appeared to be from her bank. It was perfectly written, included her full name, and warned of suspicious activity on her account. It asked her to click a link to verify her identity. Susan noticed the email address was slightly different from her bank's real address. She called her bank directly and confirmed it was a scam.

What to Do If You Have Been Scammed

If you think you have fallen victim to an AI scam, do not feel ashamed. These scams are designed by criminals to fool anyone, and very clever people fall for them every day. Here is what to do:

Act Quickly

  1. Contact your bank immediately. If you have sent money, your bank may be able to stop or reverse the transaction. Call the number on the back of your bank card.
  2. Do not send any more money. Scammers often come back pretending to be the police or a fraud investigator, saying they need more money to "recover" your funds. This is another scam.
  3. Save all evidence. Keep any emails, text messages, phone numbers, or screenshots related to the scam. Do not delete anything.

Report It

In the United States:

  • Federal Trade Commission (FTC): Report at reportfraud.ftc.gov or call 1-877-382-4357
  • FBI Internet Crime Complaint Center: Report at ic3.gov
  • Your local police: File a report, especially if you have lost money

In the United Kingdom:

  • Action Fraud: Report at actionfraud.police.uk or call 0300 123 2040
  • Your bank's fraud department: They can help secure your accounts
  • The National Cyber Security Centre: Forward suspicious emails to report@phishing.gov.uk

Get Support

Being scammed can be emotionally devastating. You may feel embarrassed, angry, or anxious. Please remember:

  • It is not your fault. These scams are designed by professional criminals.
  • Talk to someone you trust about what happened.
  • In the US, the AARP Fraud Watch Network helpline is 877-908-3360.
  • In the UK, Age UK offers free advice at 0800 678 1602.

Frequently Asked Questions

Can AI really clone someone's voice from a short video?

Yes. Modern AI voice cloning tools can create a convincing copy of someone's voice from as little as 10 to 15 seconds of audio. This is why it is worth being careful about what voice recordings are publicly available on social media. The technology is improving rapidly, which makes the family safe word strategy even more important.

How can I tell if a video call is a deepfake?

Look for subtle signs like unnatural blinking, lips that do not quite match the words, blurry edges around the face, or lighting that seems different on the face compared to the background. You can also ask the person to turn their head to the side or hold up a specific number of fingers. Current deepfake technology sometimes struggles with unexpected movements. If you have any doubt at all, end the call and contact the person through a different method.

Are younger people also targeted by AI scams?

While seniors are disproportionately targeted, AI scams affect people of all ages. However, older adults tend to lose larger amounts of money on average, which is why scammers focus their efforts on this age group. Sharing knowledge across generations is one of the best defences. Encourage your children and grandchildren to be careful about what they post online, and make sure everyone in your family knows about the safe word strategy.

Stay One Step Ahead

AI technology is advancing quickly, and scammers will keep finding new ways to misuse it. But knowledge is your strongest shield. By reading this article, you have already taken an important step.

Remember the key rules:

  • Set up a family safe word and use it any time someone asks for money
  • Always call back on a known number before taking any action
  • Never send money urgently, no matter how convincing the caller sounds
  • Check email addresses and links carefully before clicking anything
  • Talk openly with family and friends about these scams

You do not need to be afraid of technology. You just need to be informed. And now, you are.

If you found this guide helpful, share it with a friend or family member. The more people who know about AI scams, the harder it is for criminals to succeed.

