
The Commonwealth Bank of Australia (CBA) is under fire for its handling of digital transactions, with growing concerns over its partnership with cyber-intelligence firm Apate.ai — a spin-out from Macquarie University.
When transactions are declined, merchants are reportedly being alerted that the customer may be using a fraudulent card, raising serious questions about the bank’s AI-driven fraud detection systems.
This comes from a bank that frequently touts its leadership in artificial intelligence.
Just last week, CBA told journalists it had “harnessed near real-time, AI-powered intelligence to outsmart the scammers.”
Yet my personal experience suggests the reality is far less impressive.
After two weeks of persistent issues with my CBA debit cards — and despite repeated assurances that the problems, particularly with Google Wallet, had been resolved — I was once again left embarrassed.
During a recent visit to a local Italian restaurant, my payment was declined.
The merchant terminal displayed a message reading “Suspected Spam,” implying I was attempting to use a fraudulent card.
The transaction had been initiated via Google Wallet on a Samsung S25 Ultra, linked to a CBA debit card and an account with a healthy balance. When I retried the payment using the physical card, it was accepted without issue.
This public embarrassment appears to stem from flawed AI implementation at CBA, particularly in how it interacts with Google’s Android OS and Wallet services.
The bank’s systems seem unable to reliably distinguish between legitimate and suspicious activity.
CBA has also taken an aggressive stance on overseas transactions.
Routine payments for services like Tidal and SmartPDF — used for managing press releases and ASX announcements — have been repeatedly flagged or blocked.
Despite years of consistent use, these subscriptions are still treated as potential threats.
If CBA’s AI were truly intelligent, it would recognize recurring, legitimate charges and adapt accordingly.
Instead, customers face unnecessary disruptions, with monthly debits halted despite a clear payment history.
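CBA has not published how its fraud models actually work, but the behaviour described above — allowing a charge that matches an established recurring pattern — is straightforward to sketch. The following is a minimal, illustrative heuristic only; the merchant names, amounts, and thresholds are assumptions, not anything drawn from CBA's or Apate.ai's systems:

```python
from datetime import date

def is_recurring(history, merchant, amount, tolerance=0.1, min_occurrences=3):
    """Return True if a new charge matches an established recurring pattern:
    same merchant, similar amount, and roughly monthly spacing."""
    past = [t for t in history if t["merchant"] == merchant]
    if len(past) < min_occurrences:
        return False
    # Amount must stay within ±tolerance of the historical average.
    avg = sum(t["amount"] for t in past) / len(past)
    if abs(amount - avg) > tolerance * avg:
        return False
    # Gaps between past charges should be roughly monthly (25-35 days).
    dates = sorted(t["date"] for t in past)
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    return all(25 <= g <= 35 for g in gaps)

# Hypothetical history: three identical monthly Tidal subscription debits.
history = [
    {"merchant": "Tidal", "amount": 11.99, "date": date(2025, 1, 3)},
    {"merchant": "Tidal", "amount": 11.99, "date": date(2025, 2, 3)},
    {"merchant": "Tidal", "amount": 11.99, "date": date(2025, 3, 3)},
]
print(is_recurring(history, "Tidal", 11.99))  # True: matches the pattern
print(is_recurring(history, "Tidal", 89.00))  # False: amount is anomalous
```

Even a rule this crude would let a fourth identical monthly debit through while still flagging an out-of-pattern amount — which is precisely what customers with years of consistent payment history would expect from a system marketed as intelligent.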
Meanwhile, Apate.ai claims to be deploying thousands of conversational bots to intercept scam calls and prevent fraudulent card use. CEO and founder Professor Dali Kaafar describes the system as a “honeypot” designed to lure scammers and gather intelligence. “We’ve designed our bots to be difficult to detect by scammers, making them incredibly effective at gathering intelligence and disrupting scam operations,” he said.
What Kaafar hasn’t addressed is the apparent misclassification of legitimate CBA customers as scammers — leading to blocked cards and reputational damage.
According to Scalefocus, the use of AI in banking raises significant privacy and security concerns. “Implementing robust cybersecurity measures and compliance frameworks is imperative… however, if financial organizations want to keep their customers’ trust, they must adhere to security best practices. This includes obtaining appropriate consent for data collection and ensuring data anonymization whenever possible.”
In CBA’s case, the current approach seems to be causing more harm than good. Customers are frequently bombarded with questionable payment alerts, and the default response is to block the card until the user manually verifies the transaction via the app.
For a bank that claims to be leading the AI revolution, CBA’s execution leaves much to be desired.

