
Ahmed Fessi of Medius says criminals now see deepfake scams as easy and effective. The FBI has warned of 'vishing' campaigns that use AI-generated voices to gain a target's trust, and Bitdefender reports that consumers, too, are being pressured with AI voice clips. What once seemed like science fiction is now a real and fast-growing crime.
Here are some cybersecurity tips for businesses:
Verify Unusual Requests: Never act on a single call or message. If a manager urgently asks for funds, confirm through a trusted channel.
Use Official Sources: Call the requester back on their official number or send a separate email. The FBI advises using a second verification method. Create a secret phrase executives can use to prove their identity.
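The callback-plus-secret-phrase check described above can be sketched in a few lines of Python. This is a minimal illustration, not a production system: the directory of official numbers, the role names, and the plain-text phrases are all hypothetical, and a real deployment would store phrases hashed and manage them securely.

```python
import hmac

# Hypothetical internal directory of official callback numbers (illustrative only).
OFFICIAL_NUMBERS = {"cfo": "+1-555-0100", "ceo": "+1-555-0101"}

# Pre-agreed secret phrases; a real system would store these hashed, not in code.
SECRET_PHRASES = {"cfo": "blue heron at dawn"}

def verify_request(role: str, callback_number: str, spoken_phrase: str) -> bool:
    """Return True only if the callback used the official number AND the
    caller produced the pre-agreed secret phrase for that role."""
    official = OFFICIAL_NUMBERS.get(role)
    expected = SECRET_PHRASES.get(role)
    if official is None or expected is None:
        return False
    # hmac.compare_digest avoids leaking information through timing differences.
    return callback_number == official and hmac.compare_digest(
        spoken_phrase.strip().lower(), expected
    )

print(verify_request("cfo", "+1-555-0100", "Blue heron at dawn"))  # True
print(verify_request("cfo", "+1-555-9999", "blue heron at dawn"))  # False: wrong number
```

The key point is that both factors must pass: a scammer who clones a voice still fails the callback to the official number, and one who spoofs caller ID still fails the phrase.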
Use Multi-person Approvals: No single person should be able to transfer large sums alone. Set a rule requiring sign-off from at least two managers for high-value transfers.
Audit: Log and review all approvals; a clear audit trail deters fraud.
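The two-manager rule with an audit trail can be sketched as a small approval object. The threshold value and manager names here are illustrative assumptions, not figures from the article.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

HIGH_VALUE_THRESHOLD = 10_000  # assumed policy threshold, illustrative only

@dataclass
class Transfer:
    amount: float
    payee: str
    approvers: set = field(default_factory=set)   # distinct managers who signed off
    audit_log: list = field(default_factory=list)  # timestamped approval records

    def approve(self, manager: str) -> None:
        """Record an approval; the set ensures one manager can't count twice."""
        self.approvers.add(manager)
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), manager))

    def can_execute(self) -> bool:
        """High-value transfers require at least two distinct approvers."""
        required = 2 if self.amount >= HIGH_VALUE_THRESHOLD else 1
        return len(self.approvers) >= required

t = Transfer(amount=50_000, payee="Vendor X")
t.approve("alice")
print(t.can_execute())  # False: one approver is not enough at this amount
t.approve("bob")
print(t.can_execute())  # True: two distinct managers have signed off
```

Using a set of approvers is the important design choice: approving twice as the same person, or as a deepfaked copy of the same person, still leaves only one distinct approver in the record.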
Cybersecurity Training: Train staff and test responses. Teach teams, especially in finance, HR, and ops, about deepfakes.
Awareness: Teach staff to watch for warning signs such as audio glitches or robotic movement in video. Conduct drills using simulated scenarios. Remind employees that real executives won't rush payments or demand secrecy.
Implement Tech Policies: Use multi-factor authentication and encrypted channels. Monitor for unusual activity with AI detection tools.
Use Tools: Some tools can now flag deepfakes in real time. Update vendor onboarding protocols to verify new accounts. KPMG says proper controls are essential.
Limit Public Exposure: Executives should avoid posting clear videos or voice clips.
Avoid interviews or speeches that provide clean data for AI training. Limit personal content on social platforms. The less material available, the harder it is to clone.
Read more on Analytics Insight

