Tools & Platforms
Industry Leaders Chart the Future of Mobile Innovation at Galaxy Tech Forum
At Galaxy Unpacked 2025 on July 9, Samsung Electronics unveiled its latest Galaxy Z series devices and wearables — pushing the boundaries of foldable design and connected wellness experiences. These innovations mark the next step in the company’s mission to deliver meaningful, user-centered technology, with Galaxy AI and digital health emerging as key pillars of the journey ahead.
To explore these themes further, Samsung hosted two panels at the Galaxy Tech Forum on July 10 in Brooklyn. Samsung Newsroom joined industry leaders and executives to examine how ambient intelligence and advanced health technologies are shaping the future of mobile innovation.
(Panel One) The Next Vision of AI: Ambient Intelligence
(From left) Moderator Sabrina Ortiz, Jisun Park, Mindy Brooks and Dr. Vinesh Sukumar
The first panel, “The Next Vision of AI: Ambient Intelligence,” explored how multimodal capabilities are enabling the continued evolution of AI in everyday life — blending into user interactions in ways that feel intuitive, proactive and nearly invisible. Panelists discussed the smartphone’s evolving role, the importance of platform integration and the power of cross-industry collaboration to deliver secure, personalized intelligence at scale.
Jisun Park, Corporate Executive Vice President and Head of Language AI Team, Mobile eXperience (MX) Business at Samsung Electronics, opened the conversation by reflecting on Galaxy AI’s rapid adoption. Since the launch of the Galaxy S25 series in January, more than 70% of users have engaged with Galaxy AI features. He then turned the discussion to the next frontier, ambient intelligence — AI that is deeply personal, predictive and ever-present.
Jisun Park from Samsung Electronics
Samsung sees ambient intelligence as AI that is so seamlessly integrated into daily life it becomes second nature. The company is committed to democratizing Galaxy AI, bringing it to 400 million devices by the end of 2025.
This vision builds on insights from a yearlong collaboration with London-based research firm Symmetry, which found that 60% of users want their phones to anticipate their needs based on daily habits, without being prompted.
“Some see AI as the start of a ‘post-smartphone’ era, but we see it differently,” said Park. “We’re building a future where your devices don’t just respond — they become smarter to anticipate, see and work quietly in the background to make life feel a little more effortless.”
Mindy Brooks, Vice President of Android Consumer Product and Experience at Google, discussed how multimodal AI is moving beyond reactive response to deeper understanding of user intent across inputs like text, vision and voice. Google’s Gemini is designed to be intelligently aware and anticipatory — tuned to individual preferences and routines for assistance that feels natural.
Mindy Brooks from Google
“Through close collaboration with Samsung, Gemini works seamlessly across its devices and connects with first-party apps to provide helpful and personalized responses,” she said.
Dr. Vinesh Sukumar, Vice President of Product Management at Qualcomm Technologies, emphasized that as AI becomes more personalized, there is more information than ever that needs to be protected.
“For us, privacy, performance and personalization go hand in hand — they’re not competing priorities but co-equal standards,” he said.
Dr. Vinesh Sukumar from Qualcomm Technologies
Both Brooks and Dr. Sukumar reinforced the importance of tight integration across platforms and hardware.
“Our work with Samsung prioritizes secure, on-device intelligence so that users know where their data is and who controls it,” said Dr. Sukumar.
The AI panel at Galaxy Tech Forum
Moderator Sabrina Ortiz, senior editor at ZDNET, closed the session with a discussion on AI privacy. Panelists agreed that trust, transparency and user control must underpin the entire AI experience.
“When it comes to building more agentic AI, our priority is to ensure we’re fostering smarter, more personalized and more meaningful assistance across our device ecosystem,” said Brooks.
(Panel Two) The Next Chapter of Health: Scaling Prevention and Connected Care
The second panel, “The Next Chapter of Health: Scaling Prevention and Connected Care,” focused on how technology can bridge the gap between wellness and clinical care — making health insights more connected, proactive and usable for individuals, healthcare providers and digital health solution partners. Panelists explored how the convergence of clinical data, at-home monitoring and AI is reshaping the modern healthcare experience.
(From left) Moderator Dr. Hon Pak, Mike McSherry, Dr. Rasu Shrestha and Jim Pursley
Health data is often siloed across systems, resulting in inefficiencies and gaps in care. Combined with rising rates of chronic illness, an aging population and ongoing clinician shortages, the result is a system under pressure to deliver timely, effective care.
Dr. Hon Pak from Samsung Electronics
“Patients and consumers around the world are asking us to hear them, to know them, to truly understand them,” said moderator Dr. Hon Pak, Senior Vice President and Head of Digital Health Team at Samsung Electronics. “And I believe this is the opportunity we have with Samsung, Xealth and partners like Hinge and Advocate. Together, we are creating a connected ecosystem where healthcare can truly make a difference — not just in the life of a patient, but in the life of a person.”
Samsung is addressing this challenge through technological innovation and its recent acquisition of Xealth, a leading digital health platform with a network of more than 500 hospitals and 70 digital health solution providers. Through Xealth, Samsung plans to connect wearable data and insights from Samsung Health into clinical workflows — delivering a more unified and seamless healthcare experience.
Mike McSherry from Xealth
“This, plus your devices — the watch, the ring — are going to replace the standalone blood pressure monitor, the pulse oximeter, a variety of different devices,” said Mike McSherry, founder and CEO of Xealth. “It’s going to be one packaged solution, and that’s going to simplify care.”
This collaboration is designed to empower hospitals with real-time insights and help prevent chronic conditions through early detection and continuous monitoring with wearable devices.
Dr. Rasu Shrestha from Advocate Health
“The reality is that with all of the challenges that exist in healthcare, it is not any one entity that can heroically go in and save healthcare. It really takes an ecosystem,” said Dr. Rasu Shrestha, Executive Vice President and Chief Innovation & Commercialization Officer at Advocate Health. “That’s part of the reason why I’m so excited about Xealth and Samsung — and partners like us — really coming together to solve for this challenge. Because it is about Samsung enabling it. It’s more of an open ecosystem, a curated ecosystem.”
The panel spotlighted the growing shift from hospital-based care to care at home — and the opportunities enabled by Samsung’s expanding ecosystem of connected devices. Data from wearables, including those equipped with Samsung’s BioActive Sensor technology, can provide high-quality input for AI-driven insights.
Paired with Samsung’s SmartThings connectivity and wide portfolio of smart home devices, the company is uniquely positioned to support remote health monitoring and treatment from home.
AI is expected to play a role in reducing clinician workload by streamlining administrative tasks and surfacing the most relevant insights at the right time. Platforms like Xealth offer users a personalized, friendly interface to access necessary information from one place for a more connected healthcare experience.
Tools & Platforms
Noninvasive brain technology allows control of robotic hands with thought
Noninvasive brain tech is transforming how people interact with robotic devices. Instead of relying on muscle movement, this technology allows a person to control a robotic hand by simply thinking about moving their fingers.
No surgery is required.
Instead, a set of sensors is placed on the scalp to detect brain signals. These signals are then sent to a computer. As a result, this approach is safe and accessible. It opens new possibilities for people with motor impairments or those recovering from injuries.
A woman wearing non-invasive brain technology (Carnegie Mellon University)
How noninvasive brain tech turns thought into action
Researchers at Carnegie Mellon University have made significant progress with noninvasive brain technology. They use electroencephalography (EEG) to detect the brain’s electrical activity when someone thinks about moving a finger. Artificial intelligence, specifically deep learning algorithms, then decodes these signals and translates them into commands for a robotic hand. In their study, participants managed to move two or even three robotic fingers at once, just by imagining the motion. The system achieved over 80% accuracy for two-finger tasks. For three-finger tasks, accuracy was over 60%. All of this happened in real time.
Meeting the challenge of finger-level control
Achieving separate movement for each robotic finger is a real challenge. The brain areas responsible for finger movement are small. Their signals often overlap, which makes it hard to distinguish between them. However, advances in noninvasive brain technology and deep learning have made it possible to pick up on these subtle differences.
The research team used a neural network called EEGNet. They fine-tuned it for each participant. Because of this, the system allowed for smooth, natural control of the robotic fingers. The movements closely matched how a real hand works.
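To make the approach concrete, here is a minimal sketch of an EEGNet-style decoder and a per-participant fine-tuning loop, written in PyTorch. The channel count, epoch length, class labels and calibration data below are illustrative assumptions, not the Carnegie Mellon team’s actual configuration or published code.

```python
# Minimal sketch of an EEGNet-style decoder for imagined finger movement.
# All sizes (channels, samples, classes) are illustrative assumptions.
import torch
import torch.nn as nn

class EEGNetSketch(nn.Module):
    def __init__(self, n_channels=64, n_samples=256, n_classes=3,
                 f1=8, depth=2, f2=16, dropout=0.5):
        super().__init__()
        self.features = nn.Sequential(
            # Block 1: temporal filters, then a depthwise spatial filter over electrodes
            nn.Conv2d(1, f1, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(f1),
            nn.Conv2d(f1, f1 * depth, (n_channels, 1), groups=f1, bias=False),
            nn.BatchNorm2d(f1 * depth),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(dropout),
            # Block 2: separable temporal convolution (depthwise + pointwise)
            nn.Conv2d(f1 * depth, f1 * depth, (1, 16), padding=(0, 8),
                      groups=f1 * depth, bias=False),
            nn.Conv2d(f1 * depth, f2, 1, bias=False),
            nn.BatchNorm2d(f2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(dropout),
        )
        # Infer the flattened feature size with a dummy pass, then classify
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classify = nn.Linear(n_flat, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.classify(self.features(x).flatten(1))

# Per-participant fine-tuning (illustrative): adapt the model on a small
# calibration set of that user's EEG epochs labeled by the imagined finger.
model = EEGNetSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
calib_x = torch.randn(32, 1, 64, 256)   # placeholder calibration epochs
calib_y = torch.randint(0, 3, (32,))    # placeholder imagined-finger labels
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(calib_x), calib_y)
    loss.backward()
    optimizer.step()
```

In a working system, the class predicted for each incoming EEG epoch would be mapped to a command for the corresponding robotic finger in real time.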
A robotic finger being controlled by non-invasive brain technology (Kurt “CyberGuy” Knutsson)
Why noninvasive brain tech matters for everyday life
For people with limited hand function, even small improvements can make a huge difference. Noninvasive brain technology eliminates the need for surgery because the system is external and easy to use. In addition, this technology provides natural and intuitive control. It enables a person to move a robotic hand by simply thinking about the corresponding finger movements.
The accessibility of noninvasive brain technology means it can be used in clinics and homes and by a wide range of people. For example, it enables participation in everyday tasks, such as typing or picking up small objects that might otherwise be difficult or impossible to perform. This approach can benefit stroke survivors and people with spinal cord injuries. It can also help anyone interested in enhancing their abilities.
What’s next for noninvasive brain tech?
While the progress is exciting, there are still challenges ahead. Noninvasive brain technology needs to improve even further at filtering out noise and adapting to individual differences. However, with ongoing advances in deep learning and sensor technology, these systems are becoming more reliable and easier to use. Researchers are already working to expand the technology for more complex tasks.
As a result, assistive robotics could soon become a part of more homes and workplaces.
Illustration of how the noninvasive brain technology works (Carnegie Mellon University)
Kurt’s key takeaways
Noninvasive brain technology is opening up possibilities that once seemed out of reach. The idea of moving a robotic hand just by thinking about it could make daily life easier and more independent for many people. As researchers continue to improve these systems, it will be interesting to see how this technology shapes the way we interact with the world around us.
If you had the chance to control a robotic hand with your thoughts, what would you want to try first? Let us know by writing us at Cyberguy.com/Contact
Copyright 2025 CyberGuy.com. All rights reserved.
Tools & Platforms
Korea’s game studios rebrand as AI tech firms, with forays into fashion, robotics and media
What was once a world of elves, dragons and power-ups is now giving rise to one of South Korea’s most unexpected tech revolutions, with game studios taking their place alongside Big Tech in the race for AI dominance.
The country’s gaming heavyweights are increasingly shedding their image as pure entertainment companies and positioning themselves as AI-first tech firms, expanding far beyond the virtual battlegrounds into sectors such as fashion, media and even robotics.
Facing a slowing gaming market and rising development costs, game developers and publishers such as NCSOFT Corp., Nexon Co. and Krafton Inc. are leveraging their proprietary AI tools and massive gameplay data troves to build new growth engines, applying gaming-derived machine intelligence to real-world industries.
“We’re no longer just competing for players’ time, but for a stake in the future of applied AI,” said an executive at a domestic game firm.
FROM MMORPGs TO 3D MODELS, FASHION AI
Few illustrate this transition better than NCSOFT, which in February spun off its AI division into a standalone subsidiary, NC AI.
The unit is set to launch Varco 3D at the end of July – a software tool that can generate high-quality 3D characters using nothing more than text or image prompts.
The product will be offered via a software-as-a-service (SaaS) model and targets users far beyond traditional game development, from virtual influencers to digital fashion brands, according to company officials.
The move follows NCSOFT’s development in 2023 of Varco, Korea’s first large language model (LLM) developed by a game company.
The company now provides Varco Art Fashion, an AI-powered tool that generates apparel designs and visual prototypes. The tool has already been adopted by 10 leading fashion firms, halving new product development times, according to NCSOFT.
“We see an opportunity to disrupt the fashion and content production pipelines using tools originally built for game development,” said an NC AI official.
The company also provides generative engines to media firms, allowing for automatic content production and editing.
PREDICTING THE NEXT BIG HIT, OR MISS
Nexon, which owns game-developing studio Nexon Games Co., is taking a different path: using AI to forecast the commercial success of upcoming games.
At the Nexon Developers Conference (NDC25) last month, the firm unveiled its Game Success Prediction AI, designed to sift through early gameplay patterns and metadata to identify breakout potential.
“Sometimes, high-quality games are overlooked,” said Oh Jin-wook, head of Nexon’s Intelligence Labs Group. “AI can help uncover hidden gems, allowing us to take more creative risks.”
His argument is backed by data.
According to global gaming platform Steam, 84% of titles released on its platform last year failed to even register meaningful sales.
Nexon said AI can help de-risk game development by offering early signals from pre-launch user testing.
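Nexon has not disclosed how its Game Success Prediction AI is built, so the sketch below is purely illustrative: a simple classifier, assumed to be trained on early playtest metrics from past titles, that scores a new game’s breakout potential. All feature names, data and thresholds are hypothetical.

```python
# Hypothetical example of scoring "breakout potential" from pre-launch signals.
# Features and labels are synthetic; this is not Nexon's actual system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Assumed features per title: day-7 retention, median session minutes,
# pre-launch wishlists (thousands), average playtest rating
X = rng.random((500, 4)) * [1.0, 120.0, 200.0, 5.0]
# Synthetic "commercial hit" labels loosely tied to retention and wishlists
y = (0.6 * X[:, 0] + 0.002 * X[:, 2] + rng.normal(0, 0.1, 500) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
# Score a new title's early signals for breakout potential:
print("hit probability:", model.predict_proba(X_test[:1])[0, 1])
```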
TAKING AI INTO THE PHYSICAL REALM
Krafton, best known for PlayerUnknown’s Battlegrounds (PUBG), is taking AI into the physical realm.
In April, Krafton Chief Executive Kim Changhan met with Nvidia CEO Jensen Huang to discuss collaboration on humanoid robotics, building on their previous partnership to co-develop non-player character AI.
Krafton recently launched a Physical AI team, tasked with adapting in-game character AI for robotic applications. The goal: to use virtual intelligence as the foundation for real-world robotic “brains.”
Unlike software AI such as ChatGPT, physical AI focuses on decision-making for physical tasks such as picking up or moving objects.
ESCAPING THE GAMING RUT
Analysts said at the heart of this AI pivot is a strategic response to a cooling domestic gaming market.
Rising development costs and a lack of global blockbusters have dragged down growth.
According to the Korea Creative Content Agency, the nation’s gaming user rate fell to a record low of 59.9% in 2024.
The threat isn’t just rival games – it’s YouTube, TikTok and other attention-gobbling apps.
Nexon Games CEO Park Yong-hyun named non-gaming platforms as the biggest threat to the gaming industry.
According to mobile analytics firm Mobile Index, Koreans spent over 140 minutes a day on YouTube as of March, outpacing daily game playtime by a wide margin.
Experts say Korean game developers are uniquely positioned to scale into the broader AI economy.
The industry has accumulated years of player behavior data and developed highly advanced simulation environments – ideal conditions for training AI.
“Games are structured, interactive ecosystems with clear rules and goals, perfect for developing and testing AI models,” said Wi Jong-hyun, president of the Korea Game Society and a professor at Chung-Ang University. “It’s only natural that these companies are now leading Korea’s AI transition.”
Write to Young-Chong Choi at youngchoi@hankyung.com
In-Soo Nam edited this article.
Tools & Platforms
New MGA Augmented targets smart-follow market with AI-powered underwriting
“The London Market is not broken,” Prince explained. “It needs a steady hand to implement new technologies, such as AI, to enhance the way insurance operates. Efficiency and accuracy can replace manual processes and human error. Brokers and carriers will, as a consequence, have a much smoother experience when doing business in the London insurance market.”