Many Australians overestimate ability to spot deepfakes
Research commissioned by CommBank has found that most Australians feel confident they can spot an AI-generated scam, but fewer than half can correctly identify deepfake images when tested.
The survey of 1,988 respondents found nearly nine in ten Australians, 89%, said they were confident to some extent that they could recognise an AI-generated scam. When shown images and asked to distinguish between real and AI-generated content, respondents did so correctly 42% of the time.
CommBank said the results showed a disconnect between perceived and actual ability to detect deepfakes. The bank also pointed to rising exposure: around 27% of respondents said they had witnessed a deepfake scam in the past year.
Confidence gap
CommBank's General Manager of Group Fraud James Roberts linked the confidence gap to scammer behaviour. "The findings reveal a growing gap between confidence and reality - and that gap is exactly what scammers are looking to exploit as they increasingly turn to AI to target everyday Australians and small businesses," said Roberts.
The data suggested deepfake scams affect a broad range of age groups. Respondents aged over 65 performed only 6% less accurately than younger participants in the image test, according to the research summary.
The survey also pointed to limited public awareness. Fewer than half of respondents, 42%, said they were familiar with AI-enhanced scams. CommBank said deepfakes now appear across social media platforms, websites, messaging apps, and telecommunications channels.
Roberts said people should focus on established precautions rather than trying to track every technical development. "The good news is that the steps that keep people safe don't need to evolve at the same speed as the technology does. Deepfakes might be new, but the same tried-and-tested habits - slowing down, checking details and speaking with someone you know and trust, such as a family member, remains your best defence - even against AI-powered scams," said Roberts.
Why it works
Monash University Professor of Human Factors in Cyber Security Monica Whitty said deepfakes exploit trust in familiar cues such as faces and voices. "Humans tend to trust faces, voices and familiar people. Deepfakes take advantage of that instinct," said Whitty.
The survey found many Australians do not discuss AI-generated scams with friends or relatives: 67% of respondents said they had not done so.
Whitty linked that pattern to risk. "The data shows that many Australians don't talk openly about deepfake scams - with only a third discussing AI-generated scams with their relatives or friends. That means fewer opportunities to share warning signs or learn from others' experiences," said Whitty.
Safe words
The research suggested many Australians see value in simple verification steps, but fewer follow through. Nearly three-quarters of respondents, 74%, agreed that they should set up a safe word with loved ones to confirm identity, yet only 20% said they had done so.
Roberts said scammers have moved into voice imitation. "Scammers can fake voices now, so it's okay to double-check. In fact, it's smart," said Roberts.
CommBank pointed to its CallerCheck feature as one response. The bank said the service allows customers to verify whether a caller claiming to be from the bank is legitimate by triggering a security message in the CommBank app.
Scam types
Among those who said they had witnessed a deepfake scam in the past year, the most commonly cited categories were investment scams at 59%, business email compromise scams at 40%, and relationship scams at 38%.
CommBank also reported findings related to small businesses. Around 41% of small business owners said they were familiar with deepfake scams, and respondents reported that half of deepfake scam attempts arrived by email. In the past six months, 55% said they had cross-checked supplier payment details.
Roberts said AI-driven impersonation now targets both households and workplaces. "Scammers are using AI to create fake investment videos, deepfake celebrities, and even voice and text clones of loved ones, senior executives and government officials. Talking openly about this technology is one of the easiest ways to help stay ahead of it," said Roberts.
Policy response
CommBank said deepfakes require action across multiple industries where scams spread, including banks, telecommunications providers, and digital platforms. Roberts referred to an Australian Government initiative. "We recognise the impact of scams on Australians and support the Australian Government's Scam Prevention Framework to introduce obligations initially across banks, telcos and digital platforms. Deepfakes are showing up on social media, messaging platforms, websites and even through phone calls - and we welcome stronger protections across those industries, as well as banking," said Roberts.
"Deepfakes are new, but protecting yourself hasn't changed - and with stronger protections across all channels, we can help keep more Australians safe," said Roberts.
Whitty urged vigilance and discussion. "Be vigilant. Educate yourself. And if things look suspicious, talk with others about it," said Whitty.