Global Digital Policy Roundup: June 2025
The roundup is produced by Digital Policy Alert, an independent repository of policy changes affecting the digital economy. If you have feedback or questions, please contact Maria Buza.
Overview. The roundup serves as a guide for navigating global digital policy based on the work of the Digital Policy Alert. To ensure trust, every finding links to the Digital Policy Alert entry with the official government source. The full Digital Policy Alert dataset is available for you to access, filter, and download. To stay updated, Digital Policy Alert also offers a customizable notification service that provides free updates on your areas of interest. Digital Policy Alert’s tools further allow you to navigate, compare, and chat with the legal text of AI rules across the globe.
Drawing from the Digital Policy Alert’s daily monitoring of developments in the G20 countries, the roundup summarizes the highlights of June 2025 in four core areas of digital policy.
- Content moderation, including the European Commission’s guidelines on very large online platforms’ obligations under the Digital Services Act, the UK’s amended binding codes under the Online Safety Act, and Australia’s binding codes on online minor protection.
- AI regulation, including the European Commission’s guidelines for classifying high-risk AI systems, South Korea’s legislative proposals to regulate and support AI development, and China’s new standards on AI.
- Competition policy, including the UK’s digital markets regulation, Australia and China’s platform competition reforms, Canada’s inquiry into algorithmic pricing, and France and Germany’s scrutiny of cloud and e-commerce practices.
- Data governance, including the G7’s statement on children’s online privacy, China’s data localization and facial recognition rules, the Royal Assent for the UK’s Data (Use and Access) Act, and Australia’s children’s privacy code.
Content moderation
Europe
The European Parliament adopted its position on the proposed Directive addressing the sexual abuse and exploitation of children (CSAM), introducing amendments that would explicitly criminalize the use of AI developed for such offenses. The amendments also strengthen provisions concerning the online dissemination and livestreaming of CSAM.
The European Commission issued several guidance and enforcement updates under the Digital Services Act (DSA) and its interaction with related legislation. The first draft guidelines clarify the obligations of Very Large Online Platforms (VLOPs) under the European Media Freedom Act (EMFA). Starting August 2025, VLOPs must notify media service providers before removing their content and provide reasons for removal, giving providers 24 hours to respond. The second draft guidance clarifies how the DSA and the Medical Device Regulation apply to platforms distributing medical device software. VLOPs in this sector must conduct risk assessments, apply mitigation measures, implement notice-and-action systems, ensure transparency around app classification and manufacturers, and meet additional obligations if acting as importers or distributors. Further, the Commission closed its consultation on draft DSA guidelines aimed at protecting minors online.
Regarding enforcement, the Commission issued preliminary findings indicating that AliExpress failed to meet its systemic risk obligations under the DSA. The company has submitted enforceable commitments to enhance its content moderation systems, complaint procedures, advertising transparency, and trader traceability. An independent Monitoring Trustee will oversee implementation, with the possibility of formal non-compliance proceedings and financial penalties if the breaches are confirmed.
The French Administrative Court ordered the suspension of the application of the decree designating adult websites subject to age verification rules under the Law on Securing and Regulating the Digital Space. The decision was appealed by the French government and is now pending a final decision. Before the Court’s ruling, the Regulatory Authority for Audiovisual and Digital Communication (ARCOM) issued formal warnings to five pornographic websites based in Cyprus and the Czech Republic for failing to comply with legally mandated age verification rules. The warnings are the first step toward potential blocking or delisting if non-compliance continues. ARCOM also acknowledged Aylo Group’s decision to suspend access to its services in France following enforcement actions over non-compliance with age verification obligations.
In Germany, the Berlin Commissioner for Data Protection and Freedom of Information, together with other data protection authorities, issued a notice to Apple and Google under the Digital Services Act, specifying that the DeepSeek AI application constitutes illegal content due to privacy violations. The authorities specified that DeepSeek unlawfully transfers large volumes of personal data from German users to servers in China without the safeguards required under the General Data Protection Regulation.
Russia fined Apple RUB 12 million in two separate cases for alleged promotion of “non-traditional sexual relations” through its App Store. Since 2022, platforms and apps in Russia have been banned from displaying what the government labels as “propaganda” of “non-traditional sexual relations.”
The United Kingdom Office of Communications (Ofcom) opened consultations on amendments to the binding codes adopted under the Online Safety Act. The additional measures include improving livestreaming protections, employing proactive technologies for content detection, and implementing stricter age assurance mechanisms. Concerning the code for user-to-user services, the amendments set out requirements for mandatory review and swift removal of suspected illegal content. Larger or high-risk services must implement additional measures such as internal policies, training, and detection tools for CSAM. The amendments also revise provisions on reporting, user controls, recommender systems, and service settings. The amendments to the codes for search services propose mandatory perceptual hash matching to detect abusive content. Large search providers must use verified hash databases, ensure regular updates, conduct human reviews, and implement safeguards for privacy and free expression. Ofcom is also consulting on amended guidance regarding illegal content judgements, proactive technology measures, highly effective age assurance, and age verification requirements for online adult content services.
Regarding enforcement, Ofcom opened investigations into nine services for alleged failures to comply with illegal content duties, including the discussion board 4chan and the file-sharing platforms Im.ge, Krakenfiles, Nippybox, Nippydrive, Nippyshare, Nippyspace, and Yolobit. Ofcom also opened an investigation into First Time Videos for alleged failure to implement highly effective age assurance measures to prevent children from accessing pornography.
Asia and Australia
Australia’s eSafety Commissioner approved three of the nine industry-submitted codes under the Online Safety Act. The codes introduce safeguards to protect children from exposure to pornography, violence, and content related to suicide, self-harm, and disordered eating. The approved codes apply to search engine services, enterprise hosting services, and internet carriage services, while the remaining six were deemed insufficient and will be replaced by binding standards developed by the eSafety Commissioner. Additionally, the eSafety Commissioner issued advice to the Minister for Communications on the draft age-restricted social media platforms rules, which require platforms to prevent access by users under 16 years of age. It recommends keeping YouTube subject to the rules and establishing clearer enforcement criteria, safety standards for harmful design features such as infinite scroll, and exemptions for low-risk, age-appropriate services.
The Chinese State Administration for Market Regulation (SAMR) opened a consultation on draft regulations for supervising live e-commerce. The draft regulations establish content moderation obligations for platforms hosting livestream shopping, such as maintaining real-time oversight of content through reviews and dynamic monitoring, and also include user rights protections and organizational requirements. Additionally, SAMR is consulting on regulations governing online trading platform rules, which require platforms to regulate conduct on their services and implement measures to take down illegal products and information.
The Cyberspace Administration (CAC) opened a consultation on draft measures for classifying online information that may affect the physical and mental health of minors. These classification standards would require platforms to categorize content based on age appropriateness and potential harm to children, implementing graduated access controls based on user age verification. Concerning enforcement, CAC reported the removal of millions of pieces of content and the suspension of numerous accounts as part of its campaign addressing the abuse of AI for generating misleading content and deepfakes.
Americas
The Brazilian Supreme Court ruled that Article 19 of the Internet Civil Rights Framework is partially unconstitutional, as it insufficiently safeguards constitutional rights and democratic values. Previously, platforms could only be held liable for third-party content on the basis of a specific court order. Now, they may be held civilly liable without prior notice in serious cases involving criminal content, paid ads, artificial amplification, or failure to remove widely circulated harmful content, such as terrorism, child sexual abuse, or discrimination. For crimes against honor, the original rules remain, though extrajudicial notice may suffice for removal. Identical reuploads of content already ruled unlawful must be taken down without new rulings. Article 19 still fully protects services such as email and private messaging due to confidentiality. Additionally, the Ministry of Justice and Public Security revised Instagram’s age rating to unsuitable for children under 16. Finally, the National Consumer Secretariat required all advertising on social media to be identifiable, aiming to address concerns over hidden ads and a lack of transparency.
Artificial Intelligence
International
The G7 nations adopted a declaration on AI for prosperity, recognizing the potential of AI to drive economic growth while emphasizing the need for responsible development. The declaration establishes shared principles for trustworthy AI systems and commits member nations to collaborative approaches in addressing AI governance challenges. The Council of Europe launched the HUDERIA methodology for risk and impact assessment of AI systems under its Framework Convention on AI, Human Rights, Democracy and the Rule of Law.
Europe
The European Commission advanced several AI regulatory and policy initiatives. It proposed a Council Decision for the EU to formally join the Council of Europe Framework Convention on AI, reinforcing its commitment to international AI governance. The Commission also launched a consultation on guidelines for classifying high-risk AI systems under the AI Act, which imposes strict obligations on systems affecting health, safety, or fundamental rights. In parallel, it opened a call to select 60 independent experts for a scientific panel to support the Act’s implementation.
Together with the High Representative, the Commission released the International Digital Strategy, highlighting digital transformation, secure infrastructure, and global AI cooperation as core elements of EU foreign policy. The Joint Research Centre published a generative AI outlook report, exploring societal and policy implications. Enhanced AI tools were added to the AI-on-Demand platform, and consultations closed on the Apply AI Strategy and Cloud and AI Development Act.
The United Kingdom enacted the Data (Use and Access) Act, which mandates an inquiry into the use of copyrighted works in AI development. The Secretary of State must report to Parliament within six months on progress toward publishing both the economic impact assessment and the AI copyright report. Ofcom outlined its approach to AI, detailing how it will promote safe and responsible innovation across the telecommunications and media sectors. Meanwhile, the Financial Conduct Authority closed its consultation on the AI Live Testing proposal, which aims to establish a regulatory sandbox allowing financial services firms to test AI applications in controlled, real-world settings.
Asia and Australia
The Chinese National People’s Congress filed a motion for the development of an AI Law. It calls for the consolidation of various sectoral regulations into a unified framework addressing AI development, deployment, and accountability across all applications.
Several standards also entered into force. The State Administration for Market Regulation’s new national standard sets design requirements for cloud and storage infrastructure supporting AI. Joint guidelines from the Central Cyberspace Affairs Commission and Market Regulation Authority introduced standardization for AI governance across sectors. The Cyberspace Administration adopted an order establishing rules for AI use in meteorology and climate services. Meanwhile, the National Cybersecurity Standardization Committee opened a consultation on a national standard on threat information interoperability formats for network security technology products, establishing requirements for AI systems to share and process cybersecurity threat intelligence.
Japan’s law on the promotion of research and development and the application of AI-related technologies entered into force. The law establishes a governance framework for AI advancement, including provisions for research funding, ethical guidelines, and institutional oversight of AI applications.
South Korea introduced several legislative proposals to regulate and support AI development. Two partial amendments to the Basic Act on the Advancement of AI and the Establishment of Trust-Based Framework (Bill 2210903 and Bill 2210815) were introduced in the National Assembly, both incorporating copyright protection measures. These amendments address the use of copyrighted works in AI training, outlining fair use conditions, licensing requirements, and creator compensation. In addition, the Special Law for the Promotion of the AI Industry was introduced to establish financial grants, industrial policy support, and public-private partnerships aimed at accelerating the country’s AI capabilities.
Competition
Europe
The European Commission advanced enforcement action in digital markets, imposing fines for anticompetitive practices. The Commission fined Delivery Hero and Glovo EUR 329 million for engaging in market allocation arrangements that restricted competition and harmed consumers through reduced choice and potentially higher prices. The Commission also closed a consultation on Microsoft’s proposed commitments addressing anticompetitive practices related to Teams, following concerns about bundling the collaboration software with Office products to the detriment of competing communication platforms. Finally, the Advocate General issued an opinion recommending that the Court of Justice dismiss Google’s appeal and uphold the EUR 4.124 billion fine for abusing its dominance through Android-related restrictions. The opinion supports the General Court’s finding of anticompetitive bundling and network effects.
France’s Competition Authority advanced its examination of digital market practices through an inquiry into self-preferencing practices in cloud computing. The consultation examined how cloud providers potentially favor their own services over those of competitors, raising concerns about fair competition in the growing cloud infrastructure market.
Germany’s competition authority issued a preliminary legal opinion, finding that Amazon’s price control practices may constitute an abuse of a dominant position under the Act Against Restraints of Competition. The authority’s investigation focused on Amazon’s use of algorithmic price controls and restrictions on third-party sellers’ pricing autonomy on its marketplace platform.
The United Kingdom’s Competition and Markets Authority (CMA) advanced the implementation of the Digital Markets, Competition and Consumers Act. The CMA opened a consultation on draft rules for the strategic market status levy, establishing the funding mechanism for enhanced digital market oversight under the new regulatory framework. The levy will be imposed on firms designated with strategic market status to fund the CMA’s expanded enforcement activities in digital markets. The CMA also opened a consultation on amended merger guidance on jurisdiction and procedure, focused on streamlining procedures and clarifying jurisdictional tests. The changes include new performance indicators and refined approaches to material influence and global mergers.
Regarding enforcement, the CMA also opened a consultation on its proposed decision to designate Google as having strategic market status in general search services, which would subject the company to enhanced regulatory obligations under the Digital Markets, Competition and Consumers Act. Additionally, the CMA opened a consultation on its notice of intention to release commitments previously accepted regarding Google’s Privacy Sandbox proposals, following revised plans on third-party cookies. Finally, the CMA accepted Amazon’s commitments to tackle fake and misleading online reviews, addressing concerns on review manipulation undermining consumer trust and fair competition in e-commerce.
Asia and Australia
The Australian Competition and Consumer Commission (ACCC) published the final report of its Digital Platform Services Inquiry, marking the conclusion of a five-year examination of digital markets. The report provides findings on platform market power, data advantages, and recommendations for regulatory reforms to address competition concerns in search, social media, and digital advertising markets.
China’s State Administration for Market Regulation (SAMR) consulted on multiple competition policies. SAMR opened a consultation on draft provisions prohibiting monopoly agreements, clarifying market share and turnover thresholds for the safe harbor system under the Anti-Monopoly Law. Simultaneously, SAMR opened a consultation on regulations for the supervision and administration of online trading platform rules, with provisions on transparency, fairness, and user protections, including limits on unilateral conduct and fees. Additionally, SAMR closed a consultation on compliance guidelines for the charging practices of online trading platforms, addressing concerns about excessive fees, discriminatory pricing, and lack of transparency in platform commission structures.
The Indonesian Competition Commission (KPPU) issued rulings in multiple cases. The Commercial Court at the Central Jakarta District Court upheld the IDR 202 billion (approximately USD 12.37 million) fine imposed on Google for abuse of dominant market position in its app store practices. The ruling affirmed KPPU’s findings that Google imposed unfair terms on application developers and restricted competition in application distribution markets. KPPU also issued a conditional approval for the acquisition of Tokopedia by TikTok, addressing concerns on increased market concentration in e-commerce and social commerce markets. The approval includes behavioral remedies to maintain competition and prevent leveraging of TikTok’s social media dominance into e-commerce markets.
Japan advanced the implementation of platform competition regulations through multiple initiatives. The Japan Fair Trade Commission (JFTC) and the Ministry of Economy, Trade and Industry closed consultations on implementation orders and guidelines for the Act on Promotion of Competition Related to Specified Software Used on Smartphones. The measures establish specific obligations for smartphone operating system providers and application stores to ensure fair competition, including requirements for alternative application distribution and payment processing options. The JFTC also released a guide on the development and operation of effective Antimonopoly Act compliance programs, guiding digital platforms on establishing internal controls to prevent anticompetitive conduct.
The Turkish Competition Authority opened an investigation into Google for abuse of dominance in online advertising (ad) markets. The investigation focuses on Google’s practices in ad technology markets, including potential self-preferencing in its ad exchange and restrictions on interoperability with competing ad platforms.
Americas
The Canadian Competition Bureau opened an inquiry into algorithmic pricing, examining the prevalence and competitive effects of algorithm-driven pricing strategies. The inquiry investigates AI’s influence on pricing strategies and the risks of anti-competitive conduct and deceptive practices.
The Mexican Federal Economic Competition Commission (COFECE) closed its investigation into Google’s practices in Mexico’s digital advertising market. The investigation found no violation, concluding that advertisers were not compelled to buy bundled services.
Data governance
International
The G7 Data Protection and Privacy Authorities issued a communiqué on championing privacy in a digital age following their fifth roundtable meeting. The authorities emphasized the need for collaborative approaches to address emerging privacy challenges in AI, cross-border data flows, and children’s online protection. The G7 authorities also adopted a joint statement on promoting responsible innovation and protecting children by prioritizing privacy, establishing shared principles for age-appropriate design and data minimization in digital services targeting minors.
The Global Cross-Border Privacy Rules (CBPR) Forum launched the Global CBPR and Privacy Recognition for Processors systems to facilitate trusted data flows across participating economies.
Europe
The Council of the European Union and the European Parliament reached an agreement on a regulation laying down additional procedural rules for the enforcement of the General Data Protection Regulation (GDPR) in cross-border cases. The regulation harmonizes complaint admissibility, clarifies the rights of parties, and introduces binding deadlines to avoid delays. The European Commission opened a consultation on the Digital Networks Act and closed a consultation on the European Business Wallet initiative, which would create a digital identity framework for businesses. The European Data Protection Board (EDPB) published final guidelines on data transfers to third-country authorities under Article 48 GDPR and closed a consultation on guidelines for processing personal data through blockchain technologies.
Regarding cybersecurity governance, EU Member States adopted the Commission’s proposal for an EU Blueprint for cybersecurity crisis management, establishing coordination mechanisms for responding to large-scale cyber incidents affecting multiple member states. In parallel, the NIS Cooperation Group published a roadmap on post-quantum cryptography, providing guidance for member states on transitioning to quantum-resistant encryption standards. The Commission also presented a roadmap for effective and lawful access to data for law enforcement, outlining plans across six areas, including updating data retention rules, enhancing cross-border cooperation on interception, and developing digital forensics tools.
Additionally, the Commission closed a consultation on the Cloud and AI Development Act, which aims to triple EU data center capacity by 2035, ease permitting for resource-efficient projects, and ensure secure EU-based cloud infrastructure for critical uses, including AI. The Commission also closed a consultation on revising the Cybersecurity Act, which would strengthen certification requirements for digital products and simplify reporting obligations. Finally, the Commission closed a consultation on the impact assessment for data retention by electronic communication service providers.
The French Data Protection Authority (CNIL) adopted guidelines on measures to be taken when collecting data through web scraping. The guidelines mandate safeguards including data minimization, exclusion of sensitive or private data, adherence to anti-scraping signals, and transparency. The authority also adopted recommendations on using legitimate interest as a legal basis for AI system development. The recommendations apply to private entities and certain public bodies, requiring that processing be lawful, necessary, and balanced against individuals’ rights. CNIL also closed a consultation on draft recommendations on multi-device consent collection, addressing GDPR-compliant consent across devices linked to user accounts, and consulted on guidelines for tracking pixels in emails.
Moreover, CNIL adopted guidance clarifying how organizations should identify roles as data controllers, joint controllers, or processors under the GDPR. It stresses role determination based on decision-making, requiring documented justifications, clear contracts, and accountability beyond contractual terms.
The German Conference of Independent Data Protection Supervisory Authorities adopted guidelines for procedures on imposing fines under GDPR. The rules cover procedural principles, corporate liability, case handling, cross-border cooperation, evidence collection, and coordination with prosecutors to ensure consistent enforcement nationwide. The Conference also adopted guidance on recommended technical and organizational measures for developing and operating AI systems. It emphasizes data protection by design with measures covering design, development, deployment, secure updates, and ongoing monitoring.
Additionally, the authorities issued a resolution on confidential cloud computing, highlighting that confidentiality demands strong threat models, secure key management, and transparency. Another resolution addressed the interplay between data protection and security in proposed reforms to German security laws, emphasizing data protection as key to democracy and urging proportionality, accountability, and oversight. The Conference also adopted a resolution on data protection requirements for outsourcing appointment management in healthcare.
Russia’s Ministry of Digital Development opened a consultation on a resolution requiring aggregator platforms to share data on goods and services through a unified system upon request by authorized security or law enforcement bodies.
The United Kingdom’s Data (Use and Access) Act received Royal Assent. The Act establishes rules for data access and sharing and for data subject rights across sectors. It also reforms the governance of the data protection authority, which will enforce the data-sharing rules with penalties for non-compliance, and sets out data transfer provisions covering regulatory approval, safeguards, and public interest derogations.
The Information Commissioner’s Office (ICO) and the Office of the Privacy Commissioner of Canada concluded their joint investigation into 23andMe, with the ICO imposing a GBP 2.31 million fine for cybersecurity failures. The findings identified weak password standards, a lack of multi-factor authentication, delayed response, and insufficient protection for sensitive data. The ICO also published its AI and biometrics strategy, outlining regulatory priorities for facial recognition, emotion detection, and other biometric technologies. Moreover, the ICO closed a consultation on updated guidance on encryption as a data protection measure.
The Home Office also implemented multiple codes of practice under the Investigatory Powers Act, including codes for bulk personal datasets, intelligence services’ use of third-party data, bulk acquisition of communications data, interception of communications, and equipment interference. The notices regime also entered into force, setting statutory guidance on data retention, technical capabilities, and security notices.
Asia and Australia
The Office of the Australian Information Commissioner closed a consultation on the Children’s Online Privacy Code under the Privacy and Other Legislation Amendment Act, aiming to regulate how online services, including social media, messaging applications, and cloud providers, handle children’s personal data. Provisions of the Privacy Amendment Act also entered into force, allowing action to be taken for serious privacy breaches without needing proof of harm, covering intrusions, surveillance, recording, and misuse of personal data. Australia and the European Union announced the opening of negotiations on a Security and Defense Partnership to boost cooperation in the defense industry and cybersecurity.
China implemented data governance measures across multiple sectors. The Cyberspace Administration’s regulations on facial recognition technology entered into force, mandating explicit consent, transparency, security safeguards, and impact assessments for entities processing facial data. The CAC also announced registration of the 19th batch of domestic blockchain service providers, including 42 new domestic blockchain services in copyright, finance, digital collectibles, and data provenance.
The People’s Bank of China’s Measures for the Administration of Data Security in the Business Field entered into force, establishing cybersecurity requirements, cross-border data transfer restrictions, and data localization requirements for financial institutions and payment providers.
The Ministry of Industry and Information Technology consulted on multiple technical standards, including guidelines for automobile data cross-border export security, outlining compliance paths, exemptions, and technical safeguards. The State Administration for Market Regulation also opened consultations on data governance requirements in e-commerce platform regulations and live e-commerce regulations, establishing data protection and cybersecurity obligations. The regulations also include user identification requirements for live e-commerce.
Moreover, multiple cybersecurity standards were advanced through the consultation process, including the National Cybersecurity Standardization Technical Committee’s standards on binary sequence randomness detection, secret sharing mechanisms, SM9 cryptographic message format, and quantum key distribution. The Committee also consulted on a standard on AI-generated content identification, covering the identification, securing, and detection of AI-generated content using metadata and technical safeguards, and a standard on mobile terminal security, setting security and testing requirements for mobile terminals. Additionally, the Management Regulations for Terminal Device Direct Satellite Service entered into force, including cybersecurity requirements and local operations requirements for satellite communication providers.
The Reserve Bank of India’s Digital Lending Directions 2025 entered into force, establishing data protection provisions and data localization requirements for digital lending platforms. These regulations require lenders to store payment data within India and implement robust security measures for borrower information. The National e-Governance Division issued guidelines for consent management under the Digital Personal Data Protection Act, detailing requirements for a compliant consent management system.
Japan’s Ministry of Internal Affairs and Communications closed a consultation on amended enforcement regulations of the Act on Identity Verification by Mobile Voice Communication Providers, allowing mobile carriers to verify identities using My Number Card electronic records stored on mobile devices.
South Korea’s Personal Information Protection Commission fined Merck, DR Plus, and OnFlat for data breaches and security failures, ordering corrective measures and public disclosures. Additionally, the Commission opened an investigation into Yes24 over a ransomware-related data breach and into Salesforce over potential data protection vulnerabilities following phishing and malware incidents targeting its users. The Commission also published results of security assessments of major cloud providers AWS, Azure, and Naver Cloud Platform, finding compliance gaps in default security settings, log retention, and access controls. The Commission also adopted its 2025 Privacy Policy Evaluation Plan, expanding evaluations to 50 services in new technology fields, including connected cars and AI.
The Commission also opened a consultation on amendments to the Enforcement Decree of the Personal Information Protection Act, to expand data portability rights and secure transfer methods for large data controllers.
Americas
Brazil’s National Data Protection Authority (ANPD) adopted guidance on neurotechnologies and data protection, highlighting data privacy risks and sensitive data classification. The ANPD also opened a consultation on its inquiry into the processing of biometric data to assess the need for regulation, focusing on privacy risks and governance.
Canada introduced the Critical Cyber Systems Protection Act to strengthen cybersecurity for critical systems, imposing security obligations on essential service operators and amending the Telecommunications Act. In enforcement, the Supreme Court accepted Meta (Facebook)’s appeal of a decision finding it in violation of the Personal Information Protection and Electronic Documents Act for failing to obtain meaningful user consent and adequately protect data in the Cambridge Analytica case.
This Magnificent Artificial Intelligence (AI) Stock Is Down 26%. Buy the Dip, Or Run for the Hills?
Duolingo (DUOL) operates the world’s most popular digital language education platform, and the company continues to deliver stellar financial results. Duolingo is elevating the learning experience with artificial intelligence (AI), which is also unlocking new revenue streams that could fuel its next phase of growth.
Duolingo stock set a new record high in May, but it has since declined by 26%. It’s trading at a sky-high valuation, so investors might be wondering whether the company’s rapid growth warrants paying a premium. With that in mind, is the dip a buying opportunity, or should investors completely avoid the stock?
AI is creating new opportunities for Duolingo
Duolingo’s mobile-first, gamified approach to language education is attracting hordes of eager learners. During the first quarter of 2025 (ended March 31), the platform had 130.2 million monthly active users, which was a 33% jump from the year-ago period. However, the number of users paying a monthly subscription grew at an even faster pace, thanks partly to AI.
Duolingo makes money in two ways. It sells advertising slots to businesses and then shows those ads to its free users, and it also offers a monthly subscription option for users who want access to additional features to accelerate their learning experience. The number of users paying a subscription soared by 40% to a record 10.3 million during the first quarter.
Duolingo’s Max subscription plan continues to be a big driver of new paying users. It includes three AI-powered features: Roleplay, Explain My Answer, and Videocall. Roleplay uses an AI chatbot interface to help users practice their conversational skills, whereas Explain My Answer offers personalized feedback to users based on their mistakes in each lesson. Videocall, which is the newest addition to the Max plan, features a digital avatar named Lily, which helps users practice their speaking skills.
Duolingo Max was launched just two years ago in 2023, and it’s the company’s most expensive plan, yet it already accounts for 7% of the platform’s total subscriber base. It brings Duolingo a step closer to achieving its long-term goal of delivering a digital learning experience that rivals that of a human tutor.
Duolingo’s revenue and earnings are soaring
Duolingo delivered $230.7 million in revenue during the first quarter of 2025, which represented 38% growth from the year-ago period. It was above the high end of the company’s forecast ($223.5 million), which drove management to increase its full-year guidance for 2025. Duolingo is now expected to deliver as much as $996 million in revenue, compared to $978.5 million as of the last forecast. But there is another positive story unfolding at the bottom line.
Duolingo generated $35.1 million in GAAP (generally accepted accounting principles) net income during the first quarter, which was a 30% increase year over year. However, the company’s adjusted earnings before interest, tax, depreciation, and amortization (EBITDA) soared by 43% to $62.8 million. This is management’s preferred measure of profitability because it excludes one-off and non-cash expenses, so it’s a better indicator of how much actual money the business is generating.
A combination of Duolingo’s rapid revenue growth and prudent expense management is driving the company’s surging profits, and this trend might be key to further upside in its stock from here.
Duolingo stock is trading at a sky-high valuation
Based on Duolingo’s trailing 12-month earnings per share (EPS), its stock is trading at a price-to-earnings (P/E) ratio of 193.1. That is an eye-popping valuation considering the S&P 500 is sitting at a P/E ratio of 24.1 as of this writing. In other words, Duolingo stock is a whopping eight times more expensive than the benchmark index.
The stock looks more attractive if we value it based on the company’s future potential earnings, though. If we look ahead to 2026, the stock is trading at a forward P/E ratio of 48.8 based on Wall Street’s consensus EPS estimate (provided by Yahoo! Finance) for that year. It’s still expensive, but slightly more reasonable.
Even if we set Duolingo’s earnings aside and value its stock based on its revenue, it still looks quite expensive. It’s trading at a price-to-sales (P/S) ratio of 22.9, which is a 40% premium to its average of 16.3 dating back to when it went public in 2021.
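For readers who want to check those multiples themselves, the arithmetic is simple ratio work. Below is a minimal sketch in Python, using only the figures quoted in this article; the variable names are ours and purely illustrative.

```python
# Quick check of the valuation multiples quoted above.
# All figures come from this article; variable names are illustrative only.

duolingo_pe = 193.1   # Duolingo's trailing price-to-earnings ratio
sp500_pe = 24.1       # S&P 500 price-to-earnings ratio at the time of writing

# Ratio of the two P/E multiples: roughly 8, matching the "eight times
# more expensive" comparison made earlier.
print(f"P/E relative to the S&P 500: {duolingo_pe / sp500_pe:.1f}x")

duolingo_ps = 22.9    # current price-to-sales ratio
average_ps = 16.3     # average P/S since the 2021 IPO

# Premium of the current P/S over its own historical average: roughly 40%.
premium = (duolingo_ps - average_ps) / average_ps
print(f"P/S premium to its historical average: {premium:.0%}")
```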
With all of that in mind, Duolingo stock probably isn’t a great buy for investors who are looking for positive returns in the next 12 months or so. However, the company will grow into its valuation over time if its revenue and earnings continue to increase at around the current pace, so the stock could be a solid buy for investors who are willing to hold onto it for the long term. A time horizon of five years (or more) will maximize the chances of earning a positive return.
Human Replatforming! Artificial Intelligence Threatens Half of Jobs
RHC Editorial Staff: 8 July 2025, 09:11
Jim Farley, the chief executive of the American carmaker Ford, has made a stark statement about the future of the job market in the age of artificial intelligence. According to him, new technologies could deprive half of white-collar workers of their jobs, that is, employees who work in offices and perform knowledge-based tasks.
Speaking at the Aspen Ideas Festival, Farley noted that artificial intelligence has an asymmetric impact on the economy. He emphasized that, on the one hand, new technologies help and facilitate many processes, but on the other hand, they deal a severe blow to some professions. This is especially true for those who work in information processing, paperwork, and other office tasks.
Farley noted that advances in artificial intelligence will inevitably leave behind many workers who have been the backbone of the corporate world for decades. He added that technology is improving the lives of many, but it also raises a serious question for society: what will happen to those left behind? He said the global community still does not have a clear plan for how to support these people.
The conversation also touched on the future of manufacturing workers. Farley acknowledged that automation and robotics are gradually replacing people, but so far this is in a limited number of operations. He said that about 10% of processes in Ford plants are already performed by machines, and with the advent of humanoid robots, this percentage could rise to 20%. However, it will not be possible to completely replace people in production in the near future: according to Farley, human work remains a unique and in-demand activity.
However, the prediction that half of these jobs could be cut sounds particularly alarming in light of other forecasts. Previously, Anthropic CEO Dario Amodei accused companies and politicians of downplaying the consequences of the introduction of artificial intelligence. He is convinced that the real picture is much bleaker and that unemployment in the United States could reach 20%. Amodei stressed that technology makers must be honest and transparent about the consequences ahead.
There is no doubt about the severity of the changes taking place. Even Amazon CEO Andy Jassy admitted that the company is already preparing to reduce staff due to the widespread implementation of artificial intelligence. Amazon has already laid off around 30,000 employees this year, and Jassy said these measures will continue as new technologies deliver greater efficiency.
Fiverr CEO Micha Kaufman noted in his speech to employees that artificial intelligence threatens jobs in almost every category, from programmers to lawyers to support specialists. Kaufman called what’s happening a warning sign for everyone, regardless of profession.
The largest U.S. bank, JPMorgan Chase, has not stood aside either. Marianne Lake, chief executive of the bank’s consumer and community banking division, said that over the next few years the company plans to cut up to 10% of staff, replacing them with artificial intelligence algorithms. Shopify changed its hiring approach in the spring: management now requires managers to prove that tasks cannot be performed using AI before agreeing to expand the team.
Microsoft is also confirming the trend: the company announced the reduction of 9,000 employees, equivalent to 4% of the total staff. At the same time, the company continues to actively invest tens of billions of dollars in the development of artificial intelligence technologies. The threat of mass layoffs does not only concern the private sector. The Australian government, for example, is already implementing a policy on the responsible use of AI in government agencies. Australian Finance Minister Katy Gallagher has noted that it is important to consider people’s rights, interests and well-being when using AI in public services.
All events confirm a growing trend: AI is increasingly influencing the labor market, reducing the need for people and forcing companies and governments to look for new ways to adapt to inevitable changes.