A financial officer at a Singaporean multinational company transferred $500,000 to scammers in 2025 during a deepfake video call with what appeared to be company executives, according to cybersecurity firm Tookitaki. Singapore police reported it as one of the most convincing cases of AI-powered impersonation seen to date.
The incident followed a similar attack in February 2024, when a finance worker at engineering firm Arup transferred $25.6 million after participating in a video call with AI-generated deepfakes of the chief financial officer and other executives, according to the World Economic Forum. The Arup employee made 15 transfers totaling that amount before discovering the fraud, company chief information officer Rob Greig told the forum.
These cases underscore how artificial intelligence has become a tool for cybercrime as criminals weaponize the technology while everyday users inadvertently expose sensitive data through AI applications, according to cybersecurity company Check Point Software Technologies.
One in every 27 prompts submitted to generative AI tools from corporate networks in December 2025 posed a high risk of leaking sensitive information, Check Point said. An additional 25% of prompts contained potentially sensitive data.
Corporate employees now use an average of 11 different generative AI tools and generate 56 AI prompts per user monthly, according to Check Point's December statistics. The company said 91% of organizations using generative AI tools experienced high-risk prompts during that period.
Cyberattacks globally reached an average of 1,968 attempts per organization weekly in 2025, representing an 18% year-over-year increase and a 70% jump since 2023, according to Check Point's Cyber Security Report 2026.
Major Corporate Breaches
Tata Technologies disclosed a ransomware incident on January 31, 2025, affecting IT assets, according to the company's filing with Indian stock exchanges. The company suspended some IT services temporarily.
In early March, the Hunters International ransomware group claimed responsibility and alleged 1.4 terabytes of data theft consisting of 730,000 files, according to TechCrunch. The leaked data included personal details about employees, purchase orders, and contracts with customers in India and the United States.
Between March and May 2025, more than 44,000 Windows systems in India were compromised by Lumma Stealer malware, according to Check Point's State of Cyber Security in India 2025 report.
Financial losses from cyber fraud reported through India's National Cyber Crime Reporting Portal reached approximately ₹36,450 crore by February 2025, the report stated.
Ransomware attacks surged 60% in December 2025 compared to the previous year, with 945 incidents publicly reported that month alone, according to Check Point's research.
Deepfake Technology Scales Rapidly
Deepfake fraud cases surged 1,740% in North America between 2022 and 2023, with financial losses exceeding $200 million in the first quarter of 2025 alone, according to the World Economic Forum.
Voice cloning now requires just 20-30 seconds of audio, while convincing video deepfakes can be created in 45 minutes using freely available software, Greig told the forum.
Fraudsters attempted to impersonate Ferrari CEO Benedetto Vigna through AI-cloned voice calls that perfectly replicated his southern Italian accent, according to the World Economic Forum. The call was terminated only after an executive asked a question that only Vigna could have answered.
Scammers created a fake WhatsApp account and set up a Microsoft Teams meeting targeting WPP, a global communications firm, using voice cloning and edited YouTube footage of a senior executive, according to the National Counterterrorism Innovation, Technology, and Education Center. The attempt failed due to employee suspicion.
The Deloitte Center for Financial Services projects that fraud losses in the United States facilitated by generative AI will climb from $12.3 billion in 2023 to $40 billion by 2027, according to research firm Deepstrike.
The cryptocurrency sector accounted for 88% of all detected deepfake fraud cases in 2023, Deepstrike stated. Romance scam losses topped $1.3 billion in 2024, with 40% of current online daters targeted by scams in 2025, according to Norton's Online Dating Report.
Security Gaps Persist
Email remained the primary channel for malicious content, accounting for 82% of malicious file delivery, Check Point's report stated. AI enables attackers to generate multilingual, culturally adapted messages that mimic trusted contacts or institutions, the company said.
A review of approximately 10,000 Model Context Protocol servers found security flaws in 40% of them, demonstrating that AI infrastructure has become part of the attack surface requiring protection, Check Point said.
Around 90% of organizations encountered risky AI prompts within a three-month period, suggesting governance frameworks have not kept pace with adoption rates, the company said.
Check Point recommended users question AI outputs rather than accepting them as authoritative, minimize data sharing with AI applications, and verify content that requests money or credentials even if it appears authentic.
"AI is rapidly becoming a co-pilot in how we learn, work, and connect online—but trust in technology must be earned, not assumed," Sundar Balasubramanian, managing director for Check Point Software Technologies in India and South Asia, said.