Smarter, faster, and more sophisticated scams are coming. Thanks to AI, scammers are more efficient than ever, stealing money at record rates. Every day, AI tools such as ChatGPT are used as part of the scam arsenal, contributing to around 13 million people in the UK losing around £1.4bn each year.
Global scam protection leader F-Secure stays one step ahead of cyber criminals, defending people from scams before they happen. F-Secure's team of cybersecurity experts share the new threats the country will face in 2025:
New regulations for banks, telcos and social media companies that fail to prevent scams
Calvin Gan, Senior Manager, Scam Protection Strategy, says: "Right now, lawmakers around the world are targeting telecom providers, banks, and social media companies, arguing they should be held responsible when their customers fall victim to fraud. Australian lawmakers are pushing through a bill that would fine companies up to $50 million for failing to protect their customers from scams. Here in the UK, in a world first, bank refunds for fraud victims became mandatory, after the Payment Systems Regulator (PSR) reduced the maximum compensation from a previously proposed £415,000 to £85,000 – a cap that still covers more than 99% of claims.
"Passing new laws that push businesses to beef up protection against scams is a welcome move. Fighting scams is not a top-down effort alone; it involves everyone, from governments to organisations and even individuals. Just as GDPR forced companies in Europe to take data privacy more seriously, legislation like this would create an extra layer of protection for consumers.
"Still, there's no 100% guaranteed way to prevent scams from happening in the first place. People need to take precautions daily, especially on scam-prone channels like social media and messaging apps."
Cheap, easy AI tools will be deployed in sophisticated cyber attacks
Laura Kankaala, Head of Threat Intelligence, says: "Using AI tools for malicious purposes, such as generating malicious and manipulative content, has already been evident throughout this past year. As we head into 2025, we are bound to see more sophisticated attacks that leverage everyday AI tools – ChatGPT, ElevenLabs, or basically any AI tool that is cheap and easy to access online. The reality is that cyber criminals are abusing this readily available technology to fine-tune their scams, and consumers must be better informed – whether by their bank, their mobile phone or other service provider, or by the cybersecurity industry. We all play a part."
"While AI companies do put restrictions on malicious usage, most are not very successful at enforcing them. They need to do more to stop the use of their platforms for nefarious purposes – it cannot be left to legislation alone to enforce boundaries on what kind of content can be generated. Bottom line: the companies developing these tools should also be held to a higher moral standard."
Multi-stage scams will become more prevalent
Joel Latto, Threat Advisor, says: "Cybercriminals have long relied on social engineering, and multi-stage scams represent some of their most deceptive tactics. These schemes often involve direct interaction with victims, enhancing their believability. For instance, a scammer might call a victim claiming they've applied for a loan. When the victim denies it, they are 'transferred' to a supposed bank representative – another scammer, likely sitting right next to the first – who proceeds to extract sensitive banking details. Malware further elevates these schemes, rerouting legitimate customer service calls to fraudsters or tricking victims into contacting fake numbers embedded in phishing emails.
"Such scams are effective because victims believe they are speaking with genuine, helpful representatives, which makes them more susceptible under pressure. We've seen this dramatised in TV series such as Cold Call, which has recently rocketed up the Netflix charts five years after its release – perhaps because scams are now so much more commonplace that viewers are far more likely to relate.
"Until now, the scalability of these scams was limited by the human capacity of fraudsters, who could only handle a limited number of interactions in specific languages and time zones. AI is changing this equation. With the rise of sophisticated conversational AI chatbots, scammers can now mimic real human interactions at scale, conducting conversations 24/7 across multiple languages. Coupled with realistic deepfake audio, these new call-based scams blur the line between human and machine interaction, making them far more dangerous than traditional robocalls.
"To counter these evolving threats, defences must adapt, and mobile phone service providers must act. Blocking call-forwarding malware, detecting suspicious numbers, and developing sophisticated audio analysis tools to spot deepfakes are essential. Equally critical is educating users about the signs of scams and potential red flags. Defensive strategies must evolve as fast as attackers' capabilities, leveraging AI-driven solutions and strong collaboration between cybersecurity experts, telecom providers, and regulatory bodies."
High-yield, high-risk: the rise of Bitcoin investment scams on a new playing field
Sarogini Muniyandi, Senior Manager, Scam Protection Engineering, says: "Decentralised Finance (DeFi) is a blockchain-based approach to financial services that has been gaining traction and acceptance over the last year. DeFi refers to financial services provided by algorithms on a blockchain, without a financial services company – an alternative approach that largely operates outside the traditional centralised financial infrastructure.
"As DeFi becomes mainstream, scammers will take advantage of anyone interested in Bitcoin investment and other digital assets, especially those that are unfamiliar with the risks of blockchain-based finance. By 2025, DeFi is expected to attract even more users seeking alternatives to traditional finance. The DeFi market provides loans, interest-bearing accounts, and high-yield investments that promise substantial returns, which can entice investors of all experience levels. With the rising popularity of DeFi, the total value locked (TVL) in these projects is projected to grow, making it a prime target for fraudsters who can steal funds on a larger scale.
"DeFi platforms operate on decentralised blockchain networks, allowing users to participate without traditional identification or regulatory oversight. This open environment enables scammers to steal victims' funds and vanish into thin air, all while remaining anonymous. By manipulating the smart contracts and tools used to automate DeFi functions, fraudsters can siphon off investor funds. Some DeFi platforms offer investors unsustainable, extremely high-yield rates for farming Bitcoin derivatives, only for investors to later discover they can't withdraw their Bitcoin or that the platform has disappeared with their funds.
"While DeFi offers financial freedom and potential profits, its open, unregulated, and anonymous nature also creates a ripe environment for scams – something every Bitcoin investor needs to be aware of in 2025."
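The unsustainable yield rates described above can be made concrete with a quick back-of-the-envelope calculation (a hypothetical sketch, not part of F-Secure's commentary; the figures are illustrative): a platform advertising a seemingly modest "1% per day" is implicitly promising a compounded annual return of roughly 3,700% – far beyond anything a legitimate platform can sustain.

```python
# Illustrative only: why a "1% per day" DeFi yield is a red flag.
# Daily compounding turns a small-sounding rate into an implausible
# annual return, a common hallmark of Ponzi-style schemes.

def compound(principal: float, daily_rate: float, days: int) -> float:
    """Value of `principal` after `days` of daily compounding."""
    return principal * (1 + daily_rate) ** days

stake = 1_000.0                           # hypothetical deposit in GBP
promised = compound(stake, 0.01, 365)     # "1% a day" held for one year
print(f"£{stake:,.0f} at 1%/day for a year -> £{promised:,.0f}")
```

A £1,000 stake would have to grow to roughly £37,800 in a year for the promise to hold; in practice, returns far above market norms are usually paid out of new deposits until withdrawals are frozen.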