AI, Deepfakes and Financial Fraud: Can you spot a Deepfake?
Artificial intelligence (AI) is transforming the accounting and finance sector, helping organisations automate bookkeeping, detect anomalies and improve financial forecasting. However, the same technology is increasingly being exploited by cybercriminals to commit sophisticated fraud against businesses, particularly small and medium-sized enterprises (SMEs).
One emerging threat is the use of AI-generated “deepfakes” to create convincing payment requests, invoices and even voice messages from supposed colleagues or senior executives. Deepfakes use machine-learning technology to produce highly realistic audio, video or documents that imitate real people or organisations.
For finance teams and accountants, the risks are particularly significant. Criminals know that accounting departments control payments, supplier relationships and sensitive financial data, making them prime targets.
Fake invoices and payment instructions
One increasingly common tactic is the creation of AI-generated invoices or altered payment instructions. Using generative AI tools, criminals can produce highly convincing documents that look identical to genuine supplier invoices, often sent through compromised email accounts or phishing messages.
These invoices may include subtle changes such as a new bank account number, directing payments to fraudsters instead of legitimate suppliers. Because AI can replicate formatting, branding and writing style, such requests can be difficult for busy finance teams to spot.
Deepfake voices and “CEO fraud”
Another fast-growing threat is voice cloning and deepfake video calls. With only a few seconds of audio from social media or recorded meetings, criminals can use AI to mimic the voice of a CEO or senior manager.
An employee in the finance team might receive an urgent call, apparently from their managing director, asking them to authorise an immediate transfer. In one example highlighted by ICAEW, fraudsters used AI voice cloning to impersonate a company executive and persuade a UK subsidiary to transfer €220,000.
More recently, incidents have involved deepfake video calls where employees believed they were speaking to senior leadership during what appeared to be legitimate meetings.
Why SMEs are particularly vulnerable
While large corporations invest heavily in cybersecurity, SMEs often operate with smaller finance teams and fewer security controls. At the same time, AI tools have become inexpensive and widely available, making it easier for criminals to create sophisticated scams at scale.
The result is a new generation of AI-powered social engineering attacks: convincing, targeted and designed to bypass traditional controls.
Staying vigilant
Technology alone cannot eliminate this risk. Businesses should ensure that payment processes include independent verification procedures, especially for new supplier bank details or urgent transfer requests. Staff training is also critical, encouraging teams to question unexpected instructions and verify requests through known contacts rather than relying on email or phone calls alone.
Businesses should also review whether they have adequate cyber insurance in place to help manage the financial and operational impact of cyber incidents and fraud. Cyber policies often cover the cost of dedicated, highly experienced response teams to professionally manage the impact on your business. If you’re looking for expert advice on cyber insurance for SMEs, you may wish to speak with Adrian Gyde at Cotswold Broking Services or Lucy Thornhill at Thornhills Insurance, both of whom specialise in advising businesses on cyber risk protection.
In an age where emails, voices and even video calls can be convincingly fabricated, one principle matters more than ever: trust, but verify.
Author: Justin Moore, Partner

