Fraud just got personal

June 30, 2025

In recent years, artificial intelligence (AI) has transformed entire industries, from healthcare to customer service. But as companies seek ethical ways to apply this technology, cybercriminals have found in it a powerful ally.

The result: online fraud is no longer just a concern for financial institutions or businesses — it now directly affects individuals in their everyday lives.

From mass attacks to hyper-personalised scams

Since the start of the pandemic in 2020, most digital attacks targeting consumers have been mass-scale: generic phishing campaigns, SMS scams, or robocalls. Anyone reading this blog has likely received at least one phone call saying: “Hi, I’m from HR at company X and we saw your résumé online.”


Today, thanks to generative AI tools, criminals are changing tactics. They now aim directly at the psychology of the individual, designing highly personalised scams based on real data, familiar voices, and digital behaviour patterns profiled by algorithms.

Real cases we’re already seeing

  • Voice deepfakes: People receiving calls from a “relative” asking for urgent help — especially the “grandchild” calling the “grandmother” claiming to have been in a car accident after school. In reality, it’s an AI-generated voice, trained on audio from social media.
  • Automated romance scams: Bots capable of holding coherent conversations for weeks, building fake relationships before asking for money.
  • Context-aware phishing: Emails and messages that perfectly mimic the tone, language, and context of a brand, with details so specific that they immediately inspire trust.
  • Fake tech support: “Support” calls where a human-sounding bot pretends to be from your telco or bank, using real data to gain credibility.


Why is this especially dangerous for consumers?

Most consumers are not equipped to deal with hostile AI. Older adults, minors, and less tech-savvy users are especially vulnerable. But the biggest concern is that most consumers believe they can spot fraud and don’t feel at risk, unaware that modern AI erases the usual warning signs that once helped us detect scams: poor grammar, implausible accents, and vague details.

Moreover, the channels where these scams occur (messaging, social media, email, and even phone calls) are already embedded in users’ daily lives. The emotional impact, therefore, is much greater.

What can we do as an industry?

  • Educate with urgency and realism: It’s not enough to just say “watch out for scams.” We need to show concrete, real and up-to-date examples.
  • Embed active protection in products: Consumer Cyber Security apps must integrate protection against AI-driven scams. At Telefónica Tech, we’re embedding this type of detection service to protect end users.
  • Foster cross-industry collaboration: Telcos, banks, insurers, social platforms… all must be part of a joint strategy for defence and early warning.
  • Pay special attention to the most vulnerable: People over 65, for instance, need products that are not just secure, but also understandable, simple, and supported by human assistance.


AI: a key asset in tackling new threats

AI has reshaped the digital landscape. Cybercriminals have rapidly adapted, using methods that both attack systems and exploit human emotions. But Cyber Security is also advancing — with comprehensive responses that include proactive measures across the industry, improved digital services, and early education to identify and mitigate risks.

Faced with these challenges, AI stands out as a key tool in Cyber Security. It can analyse vast amounts of data in real time, detect anomalies, and take action before threats cause harm. With advanced algorithms, it can flag malicious emails, track suspicious behaviours, simulate attacks, and strengthen defence systems.
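To make "detect anomalies" concrete: at its simplest, anomaly detection means flagging data points that deviate sharply from an established baseline. The following is a minimal toy sketch of that idea using a z-score check over a series of daily login counts; the data, function name, and threshold are illustrative assumptions, not a production detector or any specific Telefónica Tech service.

```python
# Toy sketch of statistical anomaly detection: flag values that sit far
# from the mean of the series, measured in standard deviations (z-score).
# Illustrative only; real detectors use far richer features and models.

from statistics import mean, stdev

def flag_anomalies(values, threshold=2.5):
    """Return indices of values more than `threshold` standard
    deviations away from the mean of the series."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Example: daily login attempts for one account. The spike on the last
# day is the kind of outlier a monitoring system would flag for review.
logins = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 120]
print(flag_anomalies(logins))  # → [10]
```

Production systems replace this simple statistic with learned models over many behavioural signals, but the principle is the same: establish what normal looks like, then act on deviations before they cause harm.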

A proactive approach based on automation and AI allows us to anticipate and neutralise threats, minimising the impact of digital fraud and positioning AI as a vital tool for user protection.

At Telefónica Tech, we believe that Consumer Cyber Security is not optional — it is, ultimately, the new personal line of defence.
