Digital sex crimes enabled by advances in artificial intelligence (AI) technology have reached a serious level.
In the first 2025 edition of its recently published "Korea Communications Review Trends" report, the Korea Communications Standards Commission said it had issued correction requests for 346,686 items of illegal and harmful online information over the past year. Of these, 94,185 corrective actions, or 27.2% of the total, targeted digital sex crimes. The figure shows that digital sex crimes account for a growing share of all illegal and harmful information, heightening public concern.
Concerns are mounting over the rapid rise of deepfake sex crimes that exploit AI technology. According to Korea Communications Standards Commission statistics, correction requests were filed last year for 23,107 items of deepfake sex-crime video targeting celebrities and ordinary people, up 221.5% from the previous year. Over the same period, total digital sex crime information rose 40.7% year on year. The commission explained, "This is the result of a surge in correction requests driven by several concurrent issues, including deepfake sex-crime videos." Some observers note that the sharp rise in deepfake sex crimes lays bare the dark side of technological progress.
Experts warn that as deepfake videos become sophisticated enough that real and fake are hard to tell apart, they can lead to secondary and tertiary harm without the victim even knowing. An official from the Korea Communications Standards Commission said, "Illegal footage filmed or distributed against victims' will still accounts for the majority, about 75% of the total, but the rapid increase in deepfake videos is a worrisome trend." A further problem is that deepfake images are technically difficult to detect and block, significantly limiting regulators' ability to respond.
Notably, many deepfake sex crimes are committed by teenagers, prompting growing calls for countermeasures. According to the National Police Agency's National Investigation Headquarters, an intensive crackdown on deepfake video crimes from August 28 last year to March 31 this year led to the arrest of 963 people, of whom 669 (69.5%) were teenagers and 228 (23.7%) were in their 20s. Since 72 of the offenders were juveniles exempt from criminal punishment, observers point out that legal penalties alone cannot stop the spread of deepfake crime. An official from the Korea Communications Commission said, "Reports of digital sex crimes are steadily increasing, and Internet operators are actively taking measures to prevent distribution," but stressed, "Given that many of the perpetrators are in their teens and 20s, more effective education and prevention policies are urgently needed." Concern is growing that deepfake crime is no longer confined to adults but is spreading rapidly among youth.
Domestic portal companies such as Naver and Kakao are also stepping up their response by deleting and blocking illegally filmed content. According to the "2024 Transparency Report on the Handling of Illegal Films" released by the Korea Communications Commission, 181,204 items of illegally filmed content were deleted or blocked last year, more than double the previous year's 81,578. Lee Jin-sook, chairperson of the Korea Communications Commission, said, "As the rapid development of AI services amplifies the harm of digital sex crimes such as deepfakes, Internet operators' efforts to prevent distribution will become ever more important in preventing secondary damage." The KCC plans to pursue policy alternatives as well as technical responses.
[Reporter Kim Kyu-sik]