Tools & Platforms
Korea’s game studios rebrand as AI tech firms, with forays into fashion, robotics, media
What was once a world of elves, dragons and power-ups is now giving rise to one of South Korea’s most unexpected tech revolutions, with game studios taking their place alongside Big Tech in the race for AI dominance.
The country’s gaming heavyweights are increasingly shedding their image as pure entertainment companies and positioning themselves as AI-first tech firms, expanding far beyond the virtual battlegrounds into sectors such as fashion, media and even robotics.
Facing a slowing gaming market and rising development costs, game developers and publishers such as NCSOFT Corp., Nexon Co. and Krafton Inc. are leveraging their proprietary AI tools and massive gameplay data troves to build new growth engines, applying gaming-derived machine intelligence to real-world industries.
“We’re no longer just competing for players’ time, but for a stake in the future of applied AI,” said an executive at a domestic game firm.
FROM MMORPGs TO 3D MODELS, FASHION AI
Few illustrate this transition better than NCSOFT, which in February spun off its AI division into a standalone subsidiary, NC AI.
The unit is set to launch Varco 3D at the end of July – a software tool that can generate high-quality 3D characters using nothing more than text or image prompts.
The product will be offered via a software-as-a-service (SaaS) model and targets users far beyond traditional game development, from virtual influencers to digital fashion brands, according to company officials.
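NCSOFT has not published technical details of the Varco 3D interface, but prompt-driven generation services of this kind are typically exposed as simple web APIs. The sketch below is purely illustrative: the endpoint URL, request fields and response shape are hypothetical assumptions, not NCSOFT’s actual specification.

```python
import requests

# Hypothetical illustration only: the endpoint, payload fields and response
# shape below are assumptions, not NCSOFT's published Varco 3D API.
API_URL = "https://api.example.com/varco3d/generate"  # placeholder URL
API_KEY = "your-api-key"                               # placeholder credential

payload = {
    "prompt": "a knight in weathered silver armor, stylized for mobile games",
    "output_format": "glb",   # assumed option for a portable 3D mesh format
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
resp.raise_for_status()

# Assume the service returns a URL to the generated 3D asset.
asset_url = resp.json().get("asset_url")
print("Generated character available at:", asset_url)
```

Whatever the real interface looks like, the SaaS framing means customers in fashion or media could generate assets without running any 3D tooling of their own.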
The move follows NCSOFT’s development in 2023 of Varco, Korea’s first large language model (LLM) developed by a game company.
The company now provides Varco Art Fashion, an AI-powered tool that generates apparel designs and visual prototypes. The tool has already been adopted by 10 leading fashion firms, halving new product development times, according to NCSOFT.
“We see an opportunity to disrupt the fashion and content production pipelines using tools originally built for game development,” said an NC AI official.
The company also provides generative engines to media firms, allowing for automatic content production and editing.
PREDICTING THE NEXT BIG HIT, OR MISS
Nexon, which owns game-developing studio Nexon Games Co., is taking a different path: using AI to forecast the commercial success of upcoming games.
At the Nexon Developers Conference (NDC25) last month, the firm unveiled its Game Success Prediction AI, designed to sift through early gameplay patterns and metadata to identify breakout potential.
“Sometimes, high-quality games are overlooked,” said Oh Jin-wook, head of Nexon’s Intelligence Labs Group. “AI can help uncover hidden gems, allowing us to take more creative risks.”
His argument is backed by data.
According to global gaming platform Steam, 84% of titles released on its platform last year failed to even register meaningful sales.
Nexon said AI can help de-risk game development by offering early signals from pre-launch user testing.
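Nexon has not disclosed how its prediction model works. As a rough illustration of the general approach, the sketch below trains an off-the-shelf classifier on hypothetical pre-launch playtest signals; the feature set, synthetic data and model choice are assumptions for illustration, not Nexon’s method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500  # hypothetical historical playtests

# Assumed pre-launch signals per title: day-7 retention, median session
# minutes, wishlist count (log scale) and tester sentiment score.
X = np.column_stack([
    rng.uniform(0.05, 0.6, n),     # day-7 retention
    rng.uniform(5, 90, n),         # median session length (minutes)
    rng.uniform(2, 6, n),          # log10 wishlist count
    rng.uniform(-1, 1, n),         # tester sentiment
])
# Synthetic "hit" label loosely driven by retention and sentiment.
y = ((X[:, 0] * 2 + X[:, 3]) + rng.normal(0, 0.3, n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

print("held-out accuracy:", round(model.score(X_test, y_test), 2))
# Score an unreleased title's playtest metrics (hypothetical values).
print("hit probability:", model.predict_proba([[0.35, 40, 4.2, 0.5]])[0, 1])
```

In practice, the usefulness of such a model depends far more on the breadth and honesty of the historical playtest data than on the choice of classifier.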
TAKING AI INTO THE PHYSICAL REALM
Krafton, best known for PlayerUnknown’s Battlegrounds (PUBG), is taking AI into the physical realm.
In April, Krafton Chief Executive Kim Changhan met with Nvidia CEO Jensen Huang to discuss collaboration on humanoid robotics, building on their previous partnership to co-develop non-player character AI.
Krafton recently launched a Physical AI team, tasked with adapting in-game character AI for robotic applications. The goal: to use virtual intelligence as the foundation for real-world robotic “brains.”
Unlike software AI such as ChatGPT, physical AI focuses on decision-making for physical tasks such as picking up or moving objects.
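In practice, the distinction comes down to the control loop: a physical AI system repeatedly observes the state of the world, decides an action and acts, rather than only producing text. The toy sketch below shows that sense-decide-act pattern for a simplified one-dimensional pick-and-place task; it is a generic illustration, not Krafton’s system.

```python
# Minimal illustration of a "physical AI" control loop: observe state, decide
# an action, act on a (here, toy) environment. Entirely hypothetical.

def policy(state):
    """Decide the next action from the observed state."""
    gripper, obj, target, holding = state
    if not holding:
        if abs(gripper - obj) > 0.05:
            return "move", 0.1 if obj > gripper else -0.1
        return "grasp", 0.0
    if abs(gripper - target) > 0.05:
        return "move", 0.1 if target > gripper else -0.1
    return "release", 0.0

def step(state, action):
    """Apply the action to the toy 1D world and return the new state."""
    gripper, obj, target, holding = state
    kind, delta = action
    if kind == "move":
        gripper += delta
        if holding:
            obj = gripper          # the held object moves with the gripper
    elif kind == "grasp":
        holding = True
    elif kind == "release":
        holding = False
    return (gripper, obj, target, holding)

state = (0.0, 0.6, 1.2, False)     # gripper at 0, object at 0.6, target at 1.2
for _ in range(40):
    action = policy(state)
    state = step(state, action)
    if action[0] == "release":
        break
print("object delivered near target:", abs(state[1] - state[2]) < 0.1)
```

Krafton’s actual work involves far richer simulation and learned policies; the point of the sketch is only the loop structure that separates physical AI from purely conversational systems.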
ESCAPING THE GAMING RUT
Analysts said at the heart of this AI pivot is a strategic response to a cooling domestic gaming market.
Rising development costs and a lack of global blockbusters have dragged down growth.
According to the Korea Creative Content Agency, the nation’s gaming user rate fell to a record low of 59.9% in 2024.
The threat isn’t just rival games – it’s YouTube, TikTok and other attention-gobbling apps.
Nexon Games CEO Park Yong-hyun named non-gaming platforms as the biggest threat to the gaming industry.
According to mobile analytics firm Mobile Index, Koreans spent over 140 minutes a day on YouTube as of March, outpacing daily game playtime by a wide margin.
Experts say Korean game developers are uniquely positioned to scale into the broader AI economy.
The industry has accumulated years of player behavior data and developed highly advanced simulation environments – ideal conditions for training AI.
“Games are structured, interactive ecosystems with clear rules and goals, perfect for developing and testing AI models,” said Wi Jong-hyun, president of the Korea Game Society and a professor at Chung-Ang University. “It’s only natural that these companies are now leading Korea’s AI transition.”
Write to Young-Chong Choi at youngchoi@hankyung.com
In-Soo Nam edited this article.
Tools & Platforms
U.S. State Courts Cautiously Approach AI Despite Efficiency Promises and Staffing Crises
A new survey of state courts reveals a striking paradox in the American judicial system: Even though courts face severe staffing shortages and operational strain, they remain reluctant to adopt generative artificial intelligence technologies that could provide significant relief.
The Thomson Reuters Institute’s third annual survey of state courts, conducted in partnership with the National Center for State Courts AI Policy Consortium, found that 68% of courts reported staff shortages and 48% of court professionals say they do not have enough time to get their work done.
Despite these pressures, however, just 17% say their court is using gen AI today.
Courts Under Strain
The survey, which gathered responses from 443 state, county, and municipal court judges and professionals between March and April 2025, paints a picture of courts under significant strain.
Seventy-one percent of state courts and 56% of county/municipal courts experienced staff shortages in the past year, with 61% anticipating continued shortages in the next 12 months.
This staffing crisis translates into demanding work schedules, with 53% of respondents saying they work between 40 and 45 hours a week on average, and an additional 38% working over 46 hours a week.
Perhaps most telling, only half of court professionals said they had enough time to get their work done.
These workload pressures are only getting worse. Nearly half of respondents (45%) reported an increase in their caseloads compared to last year and 39% said the issues they are dealing with have become more complex.
Meanwhile, 24% of respondents reported increases in court delays, compared to 18% who reported decreases.
AI Adoption Remains Limited
Against this backdrop of operational strain, the survey reveals a cautious approach to AI adoption that seems at odds with the technology’s potential benefits.
Currently, only 17% of respondents said their court was using gen AI, and an additional 17% said their court was planning to adopt gen AI technology over the next year.
This slow adoption occurs despite widespread recognition of AI’s transformative potential: AI and gen AI was the highest-ranked trend in the survey, rated as having a transformational or high impact on courts over the next five years by 55% of respondents.
Court professionals clearly see the efficiency benefits AI could provide. Respondents estimate that gen AI will save them an average of nearly three hours a week in the next year, growing to nearly six hours a week within three years and 8.8 hours a week within five years.
Barriers to AI Implementation
So what is keeping courts back? The survey identifies several factors contributing to courts’ cautious AI adoption.
Seventy percent of respondents said their courts are currently not allowing employees to use AI-based tools for court business, and 75% of respondents said their court has not yet provided any AI training.
There are also varied but significant concerns about AI implementation.
More than a third (35%) are worried that AI will lead to an overreliance on technology rather than skill, while a quarter have concerns about malicious use of AI, such as counterfeit orders and evidence. Interestingly, only 9% were worried about widespread job loss resulting from AI.
Budget constraints may also play a role in limiting technology adoption: 22% of respondents said their budget for the next year increased, 30% said it decreased and 30% said it stayed the same.
Current Technology Landscape
While AI adoption lags, courts have made progress implementing other technologies. Most courts have adopted key technologies, including case management (86%), e-filing (85%), calendar management (83%), and document management (82%).
Video conferencing has reached near-universal adoption at 88%.
However, some technology gaps remain. Beyond gen AI, the most common technologies set to be adopted next are legal self-help portals, online dispute resolution and document automation.
Virtual Hearings Widely Adopted
The survey shows significant adoption of virtual hearings, with 80% of respondents saying their court conducts or participates in virtual hearings.
In more than 40% of all jurisdictions, virtual hearings are available for first/initial appearances, preliminary/status hearings and/or motion hearings.
Virtual hearings appear to improve court efficiency in some areas. Fifty-eight percent of respondents reported that virtual courts decrease failure-to-appear rates, and 84% reported that virtual courts increase access to justice.
However, the digital divide presents ongoing challenges. Nearly one in five respondents (19%) feel that the majority of litigants are experiencing decreased access to justice because they lack strong technology skills.
Access for people with lower digital literacy and a shortage of technical support resources were ranked as the top challenges for litigants involved in virtual hearings.
Cybersecurity Concerns
As courts increasingly rely on technology, cybersecurity emerges as a critical concern. The survey reveals significant variation in confidence levels regarding IT security.
While 57% of respondents feel highly confident in their IT systems’ security, an alarming 22% of respondents say they are “not at all confident” in the security of their IT systems.
Generational Workforce Changes
The survey identifies generational workforce shifts as another major factor affecting courts. Baby Boomers and Gen Xers exiting the workplace, along with Gen Zers entering the workforce and Millennials moving into leadership positions, are trends frequently ranked as transformational or high impact.
These demographic changes have important implications for technology adoption. As the report notes, Gen Zers are digital natives who are comfortable with technology and may find it easier to manage automated workflows, but they may resist jobs and tasks that still rely heavily on manual processes.
Reducing Operational Errors
The survey provides insights about task efficiency and error rates in court operations.
Entering and updating data in court management systems was rated as both the most error-prone task by a wide margin and also as the second-most inefficient task. This finding suggests that greater use of automation in CMS entry could yield major improvements in both efficiency and error rates.
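The report does not prescribe a specific automation approach, but even lightweight validation at the point of entry can catch many of the data errors the survey describes. The sketch below is a hypothetical example of such a check before a record reaches a case management system; the field names and rules are assumptions, not any court’s actual schema.

```python
from datetime import datetime

# Hypothetical validation layer for case-management-system (CMS) data entry.
# Field names and rules are illustrative assumptions, not any court's schema.

REQUIRED_FIELDS = {"case_number", "party_name", "filing_date", "case_type"}
VALID_CASE_TYPES = {"civil", "criminal", "family", "traffic", "probate"}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found in a proposed CMS record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    case_type = record.get("case_type", "").strip().lower()
    if case_type and case_type not in VALID_CASE_TYPES:
        errors.append(f"unknown case type: {case_type!r}")
    try:
        datetime.strptime(record.get("filing_date", ""), "%Y-%m-%d")
    except ValueError:
        errors.append("filing_date must be YYYY-MM-DD")
    return errors

record = {"case_number": "2025-CV-0142", "party_name": "Doe, Jane",
          "filing_date": "2025-13-01", "case_type": "Civil "}
problems = validate_record(record)
print(problems or "record OK")   # flags the impossible month before entry
```

A real deployment would sit behind the court’s existing intake forms, but the principle is the same: flag or reject malformed records before they ever reach the CMS.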
The survey also found correlations between different operational challenges. Tasks that are more stressful are also correlated with causing inconvenience for court users, suggesting that addressing workflow inefficiencies could simultaneously improve both staff satisfaction and user experience.
A Critical Juncture for Courts
The survey suggests that courts face a strategic choice: embrace AI technologies that could significantly alleviate operational pressures, or risk falling further behind as staffing challenges intensify and workloads continue to grow.
“We’re facing challenges — staff don’t think they have enough time to meet their demands, and they’re working more hours to get the work done, and that’s leading to burnout,” said David Slayton, executive officer and clerk of court for the Superior Court of Los Angeles County.
“It’s incumbent on court leaders to really think about how technology can help us with this problem.”
Mike Abbott, head of Thomson Reuters Institute, underscored the urgency of the situation.
“Courts are facing an unprecedented convergence of change, driven by generative AI and generational shifts in their workforce, at the same time as they continue to deal with staff shortages, backlogs and delays,” Abbott said.
“AI literacy can empower the courts to understand both the risks and the opportunities associated with the technology, enabling them to identify the best use cases which help them focus on higher value work.”
Tools & Platforms
State AI leaders gather at Princeton to consider how the technology can improve public services
Much of the news about artificial intelligence has focused on how it will change the private sector. But all around the country, public officials are experimenting with how AI can also transform the way governments provide essential services to citizens while avoiding pitfalls.
State AI leaders, including Gov. Phil Murphy of New Jersey, gathered at Princeton University in June to discuss how AI offers ways for government to be more efficient, effective, and transparent, especially at a time when budgets are strapped and economic uncertainty has slowed down hiring.
Hosted by Princeton’s Center for Information Technology (CITP), the NJ AI Hub, the State of New Jersey, the National Governors Association, the Center for Public Sector AI, GovLab, and InnovateUS, the conference brought together more than 100 AI leaders from 25 states to share ideas and collaborate. The meeting was conducted under an agreement of confidentiality to allow participants to discuss progress and concerns openly. Quotations in this story are used by permission.
What emerged was enthusiasm about AI’s potential to reduce the time government employees spend on manual tasks and improve their ability to engage citizens, as well as concerns about how best to use public data to innovate and increase equity rather than undermine it.
The gathering is just one of the ways that CITP – which is a joint center of the Princeton School of Public and International Affairs and Princeton Engineering – is leading on AI. The center also holds policy precepts to engage policymakers in AI governance at the SPIA in DC Center, and several affiliated faculty teach courses on AI policy at Princeton SPIA.
At the conference, CITP Director Arvind Narayanan noted that attendees were focused on practical implementation of AI tools rather than the “polarizing conversations around AI that dominate the media.” He also explained why public-facing deployments of AI by state governments have been slower than internal ones.
“There’s a clear recognition of the need for thinking about public accountability and equity. At the same time, I think there’s also recognition of the potential for governments if we get this right,” said Narayanan, who is also a professor of computer science and co-author of “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.”
Speakers shared big and small ways that AI is improving government. Some noted saving an hour or two a week per employee by leveraging AI to help draft grant applications, assess legislation, or review procurement policies while ensuring oversight and accuracy. One city automated the summarization of council oral votes, a task that was previously completed by a city clerk, creating summaries of 20 years of council books in a short period of time at nearly zero cost. As a result, voters have a simpler way to access information and hold elected officials accountable.
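The conference did not detail how that city’s pipeline was built, but the general pattern, batch-feeding archived transcripts to a language model with a fixed summarization prompt, is straightforward. The sketch below assumes an OpenAI-compatible API; the model name, prompt and file layout are illustrative assumptions rather than a description of the city’s actual system.

```python
from pathlib import Path
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = ("Summarize the council votes in this transcript. For each motion, "
          "list the motion, the outcome, and how each member voted.")

def summarize_transcript(text: str) -> str:
    """Ask the model for a structured vote summary of one meeting transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, an assumption
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Hypothetical archive layout: one plain-text transcript per meeting.
Path("summaries").mkdir(exist_ok=True)
for transcript in sorted(Path("council_books").glob("*.txt")):
    summary = summarize_transcript(transcript.read_text())
    Path("summaries", transcript.stem + ".md").write_text(summary)
```

Human review of the generated summaries would still be needed before publication, consistent with the oversight and accuracy checks the speakers emphasized.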
In his remarks, Gov. Phil Murphy laid out how New Jersey is approaching the technology, including its partnership with Princeton on the NJ AI hub.
“We held hands and jumped into the AI space,” Murphy said of the state’s partnership with the University. Together with Microsoft and New Jersey-based AI company CoreWeave, the state and University launched the NJ AI Hub earlier this year to foster AI innovation. “I don’t think we’d be all in if we didn’t think that the probabilities were very high that a lot of good things could go right with AI, but I think we also have to acknowledge some of the tensions that are still playing themselves out.”
Murphy highlighted concerns about AI’s potential to empower bad actors, as well as its impact on human creativity, jobs, and equity.
“Is this going to be something that is a huge wealth generator for the few, or are we going to be able to give access to this realm to everybody?” he said.
One of the ideas attendees considered at the conference was building a public AI infrastructure that would ensure it remains an open-source technology, rather than becoming privately controlled by a few companies. Bringing AI into the public domain would also present an opportunity to build in controls and mechanisms for accountability, speakers noted. They argued that AI is foundational infrastructure, not unlike roads, bridges, and broadband.
At the end of the two-day gathering, Anne-Marie Slaughter, chief executive of the New America Foundation and former Princeton SPIA dean, reflected on the conference. She emphasized what others had said about needing to be transparent in how AI is used and ensuring that public trust in government is strengthened.
“[AI] doesn’t just transform how government does things better, faster, cheaper. It can transform what government does and, even more importantly, what government in a democracy is,” Slaughter said. “You can start to co-create and you can start to co-govern.”
Posing with Gov. Phil Murphy at the conference are (left to right) Cassandra Madison of the Center for Public Sector AI, CITP Director Arvind Narayanan, New Jersey Chief AI Strategist Beth Simone Noveck, Timothy Blute of the National Governors Association and Jeffrey Oakman, senior strategic AI Hub project manager at Princeton.
Tools & Platforms
How Trump’s megabill could slow AI progress in US
The elimination of federal renewable energy tax credits in Trump’s One Big Beautiful Bill Act has major implications for the global AI race.
Ultimately, the shift means slowing down US progress on new energy production, which is key to winning the technology Cold War with China. There is no possible way tech companies can power the massive rollout of AI factories without solar, and now it will be that much more expensive.
But the attempt to throw a lifeline to the fossil fuel industry could be too little, too late, as detailed in this New Yorker article by Bill McKibben. The rate of solar adoption is now about a gigawatt every 15 hours. A gigawatt is the output of a typical nuclear power plant.
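To put that pace in perspective, a quick back-of-the-envelope conversion turns it into annual capacity; the capacity factors used in the nuclear comparison below are assumed round numbers, not figures from the article.

```python
# Back-of-the-envelope check on the adoption rate cited above.
HOURS_PER_YEAR = 365 * 24            # 8,760
HOURS_PER_GIGAWATT = 15              # the article's rate: ~1 GW every 15 hours

gw_added_per_year = HOURS_PER_YEAR / HOURS_PER_GIGAWATT
print(f"Nameplate solar added per year: ~{gw_added_per_year:.0f} GW")

# Rough energy-equivalent comparison, using assumed capacity factors
# (solar ~25%, nuclear ~90%); these factors are illustrative, not from the text.
nuclear_equivalents = gw_added_per_year * 0.25 / 0.90
print(f"Roughly equivalent, in annual energy, to ~{nuclear_equivalents:.0f} "
      "typical nuclear plants")
```

Even with conservative capacity-factor assumptions, that works out to well over a hundred nuclear plants’ worth of new energy output every year.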
Solar isn’t just cheaper than fossil fuels. It’s also faster to deploy, which is crucial in the AI race. The expansion of AI data centers is creating new economic incentives for innovation in renewables, from geothermal to fusion to new battery chemistries, which can store all that new solar power. It’s a topic I expect we’ll be covering more and more here in the coming months.