AI Insights
South Korean regulator to adopt AI in competition enforcement | MLex
By Wooyoung Lee (September 15, 2025, 05:43 GMT | Insight) — South Korea’s competition watchdog has set up a taskforce dedicated to adopting artificial intelligence in its enforcement and administrative work, aiming to expedite case handling, detect unreported mergers and strengthen oversight of unfair practices.
Answering the question of which AI tools deliver measurable value

Silicon Valley kingmakers
Meanwhile, the investor lineup reads like a who’s who of Silicon Valley’s kingmakers. Sequoia’s Roelof Botha and “solo GP” Elad Gil represent the kind of money that moves markets and shapes entire industries. Dramatic as it may sound, their funding decisions often preview which technologies will dominate enterprise conversations within two years, making their perspectives essential intelligence for anyone planning technology strategy.
The programming extends well beyond AI and public markets. The CEO of Waymo will showcase how autonomous systems are reshaping transportation, while Netflix’s CTO will provide a rare glimpse into the streaming infrastructure that powers global entertainment. Perhaps most intriguingly, Kevin Rose—who founded Digg, sold it, then recently rescued it from corporate ownership—will discuss the art of platform resurrection in an era of constant digital disruption.
Disrupt takes place as both TechCrunch and San Francisco reassert their respective primacies — the publication as tech journalism’s defining voice, the city as technology’s undisputed capital. It also promises to be entertaining, as these events always are.
AI used for taking notes at UVM Medical Center

BURLINGTON, Vt. (WCAX) – Artificial intelligence might be taking notes at your next doctor’s appointment.
Last year, we told you how the University of Vermont Health Network was tapping into AI to streamline doctor-patient conversations.
Over a year later, network officials say it’s making a mark.
Staff say they used to spend an hour or more reviewing notes from their shift, often eating into family or downtime.
There was no scribbling pen or clacking keyboard in sight as emergency physician Dan Peters walked through a mock appointment at the University of Vermont Medical Center.
That’s because an app is taking notes for him.
“I think my initial thoughts were this is going to be game-changing in terms of time savings for documentation,” said Peters.
The app, called Abridge, summarizes doctor-patient conversations.
Justin Stinnett-Donnelly, the network’s chief health information officer, says it boosts providers’ mental health and bedside manner.
“It changes the interaction. You’re able to be much more focused on the conversation with the patient. And it actually reduces what we call cognitive burden,” said Stinnett-Donnelly.
A pilot study at UVMMC last spring found that Abridge halved clinicians’ cognitive load while more than doubling their professional fulfillment.
Peters is proof. He says Abridge cut his routine hour-long note evaluations in half.
“To have the assistance has been significantly helpful for reducing that burden of writing all the notes and just the cognitive load of needing to remember all the details,” he said.
The note from our mock appointment with Peters was spot on, and staff say physicians always double-check the record.
“Our providers go through, they read it and they edit it for clarity, and that ultimately, it is a human reviewing that note to make sure that it is accurate for that encounter,” said Stinnett-Donnelly.
Of the 1,200 physicians and hundreds of other staff throughout the network, 500 use Abridge.
Officials say some are wary or prefer taking notes on their own. The network encourages them to give it a try.
Patients, on the other hand, don’t need much encouragement.
Dan Peters: “In the hundreds of patient encounters where I’ve used this technology, only a single patient has said no to me.”
Reporter Sophia Thomas: “Wow. One person?”
Peters: “Only one person. I don’t think it’s specific to me. I think patients expect that we’re using this type of technology to stay on the cutting edge.”
They’re getting some of that time back thanks to AI.
Network officials say there haven’t been any breaches of private information through Abridge.
They’re currently collecting data on its benefits and plan to roll out an impact study after the two-year anniversary of adopting the app.
Copyright 2025 WCAX. All rights reserved.
AI accurately identifies questionable open-access journals by analysing websites and content, matching expert human assessment

Artificial intelligence (AI) could be a useful tool to find ‘questionable’ open-access journals, by analysing features such as website design and content, new research has found.
The researchers set out to evaluate the extent to which AI techniques could replicate the expertise of human reviewers in identifying questionable journals and determining key predictive factors. ‘Questionable’ journals were defined as journals violating the best practices outlined in the Directory of Open Access Journals (DOAJ) – an index of open-access journals managed by the DOAJ Foundation, based in Denmark – and showing indicators of low editorial standards. Legitimate journals were those that followed DOAJ best-practice standards and were classed as ‘whitelisted’.
The AI model was designed to transform journal websites into machine-readable information keyed to DOAJ criteria, such as editorial board expertise and publication ethics. To train the questionable-journal classifier, the researchers compiled a list of around 12,800 whitelisted journals and 2,500 unwhitelisted ones, then extracted three kinds of features to help distinguish them: website content, website design and bibliometric indicators.
The model was then used to predict questionable journals from a list of just over 15,000 open-access journals housed by the open database Unpaywall. Overall, it flagged 1,437 suspect journals, of which about 1,092 were expected to be genuinely questionable. The researchers said these journals had hundreds of thousands of articles, millions of citations, acknowledged funding from major agencies and attracted authors from developing countries.
There were around 345 false positives among those flagged, which the researchers said shared a few patterns: for example, their sites were unreachable or had been formally discontinued, or they referred to a book series or conference with a title similar to a journal’s. The researchers also estimated that around 1,780 problematic journals remained undetected.
Overall, they concluded that AI could accurately discern questionable journals with high agreement with expert human assessments, although they pointed out that existing AI models would need to be continuously updated to track evolving trends.
‘Future work should explore ways to incorporate real-time web crawling and community feedback into AI-driven screening tools to create a dynamic and adaptable system for monitoring research integrity,’ they said.
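The study's pipeline — extract website-derived features per journal, then train a binary classifier on whitelisted versus unwhitelisted examples — can be sketched in miniature. This is a toy illustration, not the authors' actual model: the feature names and training data below are hypothetical, and a simple logistic regression stands in for whatever classifier the researchers used.

```python
# Toy sketch of a questionable-journal classifier: logistic regression over
# hypothetical binary website features. Feature names and data are invented
# for illustration; the real study used richer content, design and
# bibliometric features extracted from journal websites.
import math

FEATURES = ["editorial_board_listed", "ethics_policy_present", "consistent_design"]

def predict(weights, bias, x):
    """Probability that a journal is questionable, given its feature vector."""
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, labels, lr=0.5, epochs=2000):
    """Fit logistic regression by per-sample gradient descent on log-loss."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            g = predict(w, b, x) - y  # gradient of log-loss w.r.t. z
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

# Toy training set: label 1 = questionable, 0 = whitelisted.
# Journals missing good-practice signals tend to be questionable.
X = [[1, 1, 1], [1, 1, 0], [0, 0, 0], [0, 1, 0], [1, 0, 1], [0, 0, 1]]
y = [0, 0, 1, 1, 0, 1]
w, b = train(X, y)

# A journal with no good-practice signals is flagged as questionable.
print(predict(w, b, [0, 0, 0]) > 0.5)  # → True
```

At production scale the same idea applies: each of the roughly 15,300 features-per-journal vectors is scored, and journals above a probability threshold are flagged for human review — which is where the study's false-positive analysis comes in.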