Voice Phishing: The Sound of Deception

2025-05-01     Shin Ha-young (신하영)

On March 18th, members of a voice phishing ring were arrested for defrauding victims of approximately ₩55 million by impersonating card delivery agents and installing malicious apps on their phones. Voice phishing cases like these have increased by 10% compared to last year. These crimes not only cause severe economic losses but also threaten the public safety of South Korean citizens. Thus, the Sungkyun Times (SKT) aims to examine the key issues surrounding voice phishing scams and identify ways to prevent further fraud.

 

Voice Phishing Targets Everyone

-Decoding Voice Phishing 

Phishing is the fraudulent practice of baiting victims into revealing personal information in order to steal money from them. Voice phishing, in particular, utilizes communication channels such as phone calls to interact directly with victims and extract their sensitive financial information. A common voice phishing method involves fraudsters impersonating public agencies, such as the Financial Supervisory Service (FSS), or authorities like prosecutors and the police. They deceive people by fabricating a seemingly urgent situation and using professional jargon. Another common tactic is the loan-fraud scam, which accounts for 65.7% of all voice phishing schemes. This method involves the false promotion of low-interest loan products, luring victims into sending their money. These frauds are particularly concerning because the highly systematic phishing techniques involve bank accounts opened under false names and overseas voice phishing organizations, both of which limit victim protection and compensation measures. In fact, according to a survey conducted by Gyeonggi Province, 67.5% of voice phishing victims recovered less than 25% of their scammed money. This low recovery rate shows that addressing these schemes remains an urgent matter.

Fraudsters Disguised as Government Agency Employees

 

-Unraveling the Past and Future

The first voice phishing case reported to the police can be traced back to 2006, when scammers in Incheon City impersonated a National Tax Service employee by promising a tax refund. Since then, the extent of these schemes has rapidly expanded, and their methods have become increasingly sophisticated. These fraudsters typically operate in highly controlled and organized environments, with segmented teams specializing in areas such as loan fraud, personal information collection, and impersonation. Recently, a new voice phishing method involving the impersonation of military officers has emerged, resulting in a total of 315 reported cases from 2024 until the present. The growing number of reported cases highlights the sophistication of voice phishing schemes in Korea, as well as the financial damage that results from them. According to the Korean National Police Agency (KNPA), the total number of reported voice phishing cases rose to approximately 28,000 in 2024, resulting in losses totaling ₩854.5 billion. These evolving fraudulent crimes only continue to grow, targeting the most digitally vulnerable populations.

 

Voice Phishing Continues to Intensify

-Beyond Control with AI

Recently, a growing number of voice phishing crimes have exploited artificial intelligence (AI) technologies. There are two major types of such voice phishing. First, schemes using deepvoice technology trick victims with cloned audio that mimics the voices of their family members and acquaintances. In fact, last year, a woman in her sixties was deceived by a call that used a manipulated clone of her daughter's voice, claiming that she had been kidnapped and that ₩20 million was needed. Similarly, deepfake technology uses photos and videos of celebrities to fabricate not only convincing audio but also realistic-looking video content in which they demand large sums of money. In one notable case, a fraudster used deepfake technology to impersonate Tesla CEO Elon Musk, luring a victim into transferring ₩70 million under the guise of an investment opportunity. This case highlights how AI technologies are expanding what were once small-scale scams into multifaceted crimes. In an interview with SKT, Woo Sung-il, a professor at the College of Computing and Informatics, noted that although deepfake and deepvoice detection technologies are entering practical application, their performance drops when they encounter datasets they have not been trained on, underscoring the severity of the issue at hand.

Deepfake and Deepvoice Technologies

 

-Targeting the Vulnerable

Although voice phishing targets everyone regardless of age, elderly groups tend to be the most vulnerable victims, especially those in their sixties and older. The FSS has noted that the number of voice phishing incidents has risen alongside the increase in card delivery agent impersonation frauds against the elderly. Voice phishing schemers tend to exploit digital technologies and financial systems with which these victims are unfamiliar, placing them under intense psychological pressure to send money. In a case reported in March, a woman in her sixties received a call from a scammer posing as a prosecutor, who induced her to install malicious applications that hacked her phone to access her financial accounts. Although she was able to prevent her information from being leaked through Citizen Conan, a voice phishing detection app, she came close to losing her money. In addition, the elderly are especially at risk because of their psychological state. According to a research paper by authors from Kwangwoon University and Halla University, the elderly often feel a sense of emptiness after retirement and loneliness after losing family members. Fraudsters exploit this emotional vulnerability, interacting with victims in a sympathetic manner and building a comfortable rapport before ultimately demanding money from them.

Impersonating Card Delivery Agents to Target the Elderly

 

Efforts to Terminate Crimes

-Improvements to Detection Technologies

To effectively counter voice phishing fraud, detection technologies must be developed and improved. First, Korea should adopt, on a wide scale, technologies that block voice phishing calls in the first place. The STIR/SHAKEN framework, implemented in the United States, authenticates caller identification to curb automated phone calls, or robocalls, and spoofing, the disguising of a call's origin. Such measures should be implemented in Korea as well to block unauthorized phone calls and protect potential victims. In addition, Prof. Woo emphasized that AI-driven detection technologies could be integrated into social media platforms. Although these platforms are reluctant to implement strict content monitoring, such technologies, when integrated into personal electronic devices, can help mitigate the spread of AI scams at the individual level. For instance, the app Citizen Conan, which is currently limited to detecting malicious applications installed on phones, should be supplemented with deepvoice- and deepfake-detection technologies. Another recent development is an AI detection tool by the mobile network company LG U+, which distinguishes human voices from deepvoice-generated audio. Such technologies should be widely deployed and automatically installed on electronic devices so that they are accessible to all individuals, enhancing digital security.


-Further Resolutions

Protective measures should also be strengthened to ensure appropriate compensation for funds lost to voice phishing crimes. Singapore's Anti-Scam Centre (ASC), established in 2019, assists victims by quickly securing their financial information and blocking fraudulent accounts. In a country where victims aged 65 or older suffered the highest financial losses, this institution has helped numerous seniors prevent further losses and recover their funds. Drawing on such successful countermeasures, Korea could likewise establish a nationwide scam response center for rapid intervention in phishing scams and reinforce frameworks for temporarily suspending money transfers. Although such measures are important, ultimately reducing the number of elderly voice phishing victims requires preventing victimization in advance. This can be achieved through active education and campaign programs for citizens, particularly the elderly. For instance, the Voice Phishing Zero Campaign in Uijeongbu City gives elderly workers opportunities to learn the latest tactics of these crimes and how to protect themselves from scams. If implemented as mandatory education throughout Korea, such programs could help prevent further victimization. Through widespread, easily accessible protective and educational measures, more people across generations will become aware of voice phishing scams and be protected from their risks.

 

As voice phishing fraud becomes increasingly sophisticated through the use of AI technologies, many individuals, particularly the elderly, are falling victim to its traps and suffering significant economic losses. If society turns a blind eye to these crimes, the number of victims will only continue to increase. It is time for society to acknowledge the severity of voice phishing schemes, moving beyond mere attention to making genuine efforts to solve them.