Tools & Platforms
Industry Leaders Chart the Future of Mobile Innovation at Galaxy Tech Forum
At Galaxy Unpacked 2025 on July 9, Samsung Electronics unveiled its latest Galaxy Z series devices and wearables — pushing the boundaries of foldable design and connected wellness experiences. These innovations mark the next step in the company’s mission to deliver meaningful, user-centered technology, with Galaxy AI and digital health emerging as key pillars of the journey ahead.
To explore these themes further, Samsung hosted two panels at the Galaxy Tech Forum on July 10 in Brooklyn. Samsung Newsroom joined industry leaders and executives to examine how ambient intelligence and advanced health technologies are shaping the future of mobile innovation.
(Panel One) The Next Vision of AI: Ambient Intelligence
▲ (From left) Moderator Sabrina Ortiz, Jisun Park, Mindy Brooks and Dr. Vinesh Sukumar
The first panel, “The Next Vision of AI: Ambient Intelligence,” explored how multimodal capabilities are enabling the continued evolution of AI in everyday life — blending into user interactions in ways that feel intuitive, proactive and nearly invisible. Panelists discussed the smartphone’s evolving role, the importance of platform integration and the power of cross-industry collaboration to deliver secure, personalized intelligence at scale.
Jisun Park, Corporate Executive Vice President and Head of Language AI Team, Mobile eXperience (MX) Business at Samsung Electronics, opened the conversation by reflecting on Galaxy AI’s rapid adoption. Since the launch of the Galaxy S25 series in January, more than 70% of users have engaged with Galaxy AI features. He then turned the discussion to the next frontier, ambient intelligence — AI that is deeply personal, predictive and ever-present.
▲ Jisun Park from Samsung Electronics
Samsung sees ambient intelligence as AI that is so seamlessly integrated into daily life it becomes second nature. The company is committed to democratizing Galaxy AI, bringing it to 400 million devices by the end of 2025.
This vision builds on insights from a yearlong collaboration with London-based research firm Symmetry, which revealed that 60% of users want their phones to anticipate their needs from daily habits, without explicit prompts.
“Some see AI as the start of a ‘post-smartphone’ era, but we see it differently,” said Park. “We’re building a future where your devices don’t just respond — they become smarter to anticipate, see and work quietly in the background to make life feel a little more effortless.”
Mindy Brooks, Vice President of Android Consumer Product and Experience at Google, discussed how multimodal AI is moving beyond reactive response to deeper understanding of user intent across inputs like text, vision and voice. Google’s Gemini is designed to be intelligently aware and anticipatory — tuned to individual preferences and routines for assistance that feels natural.
▲ Mindy Brooks from Google
“Through close collaboration with Samsung, Gemini works seamlessly across its devices and connects with first-party apps to provide helpful and personalized responses,” she said.
Dr. Vinesh Sukumar, Vice President of Product Management at Qualcomm Technologies, emphasized that as AI becomes more personalized, more information than ever needs to be protected.
“For us, privacy, performance and personalization go hand in hand — they’re not competing priorities but co-equal standards,” he said.
▲ Dr. Vinesh Sukumar from Qualcomm Technologies
Both Brooks and Dr. Sukumar reinforced the importance of tight integration across platforms and hardware.
“Our work with Samsung prioritizes secure, on-device intelligence so that users know where their data is and who controls it,” said Dr. Sukumar.
▲ The AI panel at Galaxy Tech Forum
Moderator Sabrina Ortiz, senior editor at ZDNET, closed the session with a discussion on AI privacy. Panelists agreed that trust, transparency and user control must underpin the entire AI experience.
“When it comes to building more agentic AI, our priority is to ensure we’re fostering smarter, more personalized and more meaningful assistance across our device ecosystem,” said Brooks.
(Panel Two) The Next Chapter of Health: Scaling Prevention and Connected Care
The second panel, “The Next Chapter of Health: Scaling Prevention and Connected Care,” focused on how technology can bridge the gap between wellness and clinical care — making health insights more connected, proactive and usable for individuals, healthcare providers and digital health solution partners. Panelists explored how the convergence of clinical data, at-home monitoring and AI is reshaping the modern healthcare experience.
▲ (From left) Moderator Dr. Hon Pak, Mike McSherry, Dr. Rasu Shrestha and Jim Pursley
Health data is often siloed across systems, resulting in inefficiencies and gaps in care. Combined with rising rates of chronic illness, an aging population and ongoing clinician shortages, the result is a system under pressure to deliver timely, effective care.
▲ Dr. Hon Pak from Samsung Electronics
“Patients and consumers around the world are asking us to hear them, to know them, to truly understand them,” said moderator Dr. Hon Pak, Senior Vice President and Head of Digital Health Team at Samsung Electronics. “And I believe this is the opportunity we have with Samsung, Xealth and partners like Hinge and Advocate. Together, we are creating a connected ecosystem where healthcare can truly make a difference — not just in the life of a patient, but in the life of a person.”
Samsung is addressing this challenge through technological innovation and its recent acquisition of Xealth, a leading digital health platform with a network of more than 500 hospitals and 70 digital health solution providers. Through Xealth, Samsung plans to connect wearable data and insights from Samsung Health into clinical workflows — delivering a more unified and seamless healthcare experience.
▲ Mike McSherry from Xealth
“This [phone], plus your devices — the watch, the ring — are going to replace the standalone blood pressure monitor, the pulse oximeter, a variety of different devices,” said Mike McSherry, founder and CEO of Xealth. “It’s going to be one packaged solution, and that’s going to simplify care.”
This collaboration is designed to empower hospitals with real-time insights and help prevent chronic conditions through early detection and continuous monitoring with wearable devices.
▲ Dr. Rasu Shrestha from Advocate Health
“The reality is that with all of the challenges that exist in healthcare, it is not any one entity that can heroically go in and save healthcare. It really takes an ecosystem,” said Dr. Rasu Shrestha, Executive Vice President and Chief Innovation & Commercialization Officer at Advocate Health. “That’s part of the reason why I’m so excited about Xealth and Samsung — and partners like us — really coming together to solve for this challenge. Because it is about Samsung enabling it. It’s more of an open ecosystem, a curated ecosystem.”
The panel spotlighted the growing shift from hospital-based care to care at home — and the opportunities enabled by Samsung’s expanding ecosystem of connected devices. Data from wearables, including those equipped with Samsung’s BioActive Sensor technology, can provide high-quality input for AI-driven insights.
Paired with Samsung’s SmartThings connectivity and wide portfolio of smart home devices, the company is uniquely positioned to support remote health monitoring and treatment from home.
AI is expected to play a role in reducing clinician workload by streamlining administrative tasks and surfacing the most relevant insights at the right time. Platforms like Xealth offer users a personalized, friendly interface to access necessary information from one place for a more connected healthcare experience.
▲ The health panel at Galaxy Tech Forum
Across both sessions, one theme was clear — realizing the potential of ambient intelligence and scaling prevention and connected care requires deep, cross-industry collaboration.
From on-device privacy solutions like Knox Matrix to expanded integration across Galaxy devices, Samsung and its partners are building an ecosystem that’s not only intelligent but simple, secure and future-ready.
Tools & Platforms
AI technology drives sharp rise in synthetic abuse material
New data reveals over 1,200 AI-generated abuse videos have been discovered so far in 2025, a significant rise from just two during the same period last year.
AI is increasingly being used to produce highly realistic synthetic abuse videos, raising alarm among regulators and industry bodies.
According to new data published by the Internet Watch Foundation (IWF), 1,286 individual AI-generated abuse videos were identified during the first half of 2025, compared to just two in the same period last year.
This material is no longer crude or glitch-filled: it now appears so lifelike that, under UK law, it must be treated like authentic recordings.
More than 1,000 of the videos fell into Category A, the most serious classification involving depictions of extreme harm. The number of webpages hosting this type of content has also risen sharply.
Derek Ray-Hill, interim chief executive of the IWF, expressed concern that longer-form synthetic abuse films are now inevitable unless binding safeguards around AI development are introduced.
Safeguarding minister Jess Phillips described the figures as ‘utterly horrific’ and confirmed two new laws are being introduced to address both those creating this material and those providing tools or guidance on how to do so.
IWF analysts say video quality has advanced significantly. What once involved clumsy, easily detected manipulation is now alarmingly convincing, complicating efforts to monitor and remove such content.
The IWF encourages the public to report concerning material and share the exact web page where it is located.
Tools & Platforms
AI on the line: How AI is transforming vision inspection technologies
In an era of tightening global regulations and rising consumer expectations, the F&B industry is increasingly turning to advanced vision inspection technologies. From spotting defects to ensuring compliance, these automated inspection tools are reshaping quality control, enhancing efficiency, reducing waste and boosting safety. FoodBev’s Siân Yates explores how cutting-edge technology is reshaping the industry, one perfectly inspected product at a time.
In the food and beverage industry, traditional quality inspection methods have always relied on human observation – an inherently inconsistent and flawed process. Automated vision inspection systems offer a transformative alternative. By detecting foreign objects, assessing product uniformity and ensuring that only items meeting strict quality criteria reach consumers, these systems significantly enhance operational efficiency and minimise errors.
“As the food industry moves towards more automation, applications are becoming increasingly complex, largely due to the variability in food products,” said Anthony Romeo, product manager at US-based vision solutions company Oxipital AI. This complexity stems from the need for automated systems to adapt to the wide range of textures, sizes and ingredients in food, making precise automation a key challenge.
Stephan Pottel, director of strategy at Zebra Technologies, highlighted the rising demand for intelligent automation: “There’s a growing need for machine vision and 3D solutions, powered by deep learning, to address more complex food and packaging use cases, along with vision-guided robotics for tasks like inspection, conveyor belt picking and sortation workflows”.
Key features of vision inspection
1. Defect detection
Vision inspection systems excel in identifying defects that may go unnoticed by human inspectors. These systems utilise high-resolution cameras and advanced algorithms to detect foreign objects, surface defects, and inconsistencies in size and shape. For example, in the fruit packing industry, vision systems can identify bruised or rotten fruit, ensuring only high-quality products are packaged and shipped.
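To make the pipeline concrete, here is a minimal sketch of this kind of check, assuming OpenCV is available and that defects show up as dark blemishes on a lighter product (for example, bruises on fruit). The thresholds and file name are illustrative, not any vendor's settings.

```python
import cv2

MIN_DEFECT_AREA_PX = 50   # ignore specks below this size (assumed tolerance)
DARK_THRESHOLD = 90       # grey level below which a pixel counts as a blemish

def find_defects(image_path: str) -> list[tuple[int, int, int, int]]:
    """Return bounding boxes (x, y, w, h) of candidate defects."""
    img = cv2.imread(image_path)
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    grey = cv2.GaussianBlur(grey, (5, 5), 0)          # smooth sensor noise
    # Dark regions become foreground in the binary mask.
    _, mask = cv2.threshold(grey, DARK_THRESHOLD, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= MIN_DEFECT_AREA_PX]

if __name__ == "__main__":
    boxes = find_defects("apple_0421.png")            # hypothetical frame
    print(f"{len(boxes)} candidate defect(s): {boxes}")
```

Production systems replace the fixed threshold with learned models, but the overall shape of the pipeline (capture, segment, filter, decide) is the same.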
2. Label verification
These technologies are increasingly used for label verification, ensuring compliance with regulatory standards. Systems can check for correct placement, legibility and adherence to labelling requirements, such as allergen information and expiration dates. In practice, vision is deployed for label verification more often than for food surface defects, enhancing compliance and reducing the risk of costly recalls.
3. Product uniformity assessment
Maintaining product uniformity is crucial in the food and beverage sector. Vision inspection systems can assess visual aspects such as size, shape and colour. For instance, a snack manufacturer might use vision inspection to ensure that chips are uniformly shaped and coloured, meeting consumer expectations for quality and appearance.
4. Adaptive manufacturing
Advanced vision systems, particularly those incorporating AI and 3D technology, enable adaptive manufacturing processes. These systems can adjust production parameters in real time based on the visual data they collect. For example, in a bakery, vision systems can monitor the size and shape of pastries as they are produced, allowing adjustments to baking times or temperatures to ensure consistent quality.
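As a rough illustration of that feedback loop, the sketch below adjusts a bake time in proportion to the size error a vision system reports. The setpoint, gain and the idea of driving an oven from a Python function are all assumptions made for clarity; a real line would act through a PLC.

```python
TARGET_DIAMETER_MM = 80.0
GAIN_S_PER_MM = 1.5   # assumed proportional gain: seconds of bake time per mm of error

def adjusted_bake_time(base_time_s: float, measured_diameter_mm: float) -> float:
    """Extend or shorten bake time in proportion to the measured size error."""
    error_mm = TARGET_DIAMETER_MM - measured_diameter_mm
    # Under-sized product -> positive error -> longer bake (illustrative rule).
    return base_time_s + GAIN_S_PER_MM * error_mm

# Example: the camera reports a 77.5 mm pastry against a 300 s base cycle.
print(adjusted_bake_time(300.0, 77.5))   # -> 303.75
```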
Advancements in AI
Recent advancements in AI, automation and 3D technology have greatly enhanced machine vision systems, increasing accuracy and providing realistic visual sensing capabilities. 3D imaging technologies are being used to assess the shape and size of products, ensuring they meet packaging specifications. For instance, in the seafood industry, 3D scanners can evaluate the dimensions of fish fillets, ensuring they are cut to the correct size before packaging. This not only reduces waste but also ensures consistency in product offerings.
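A toy version of such a dimension check might look like the following, assuming a top-down depth sensor that reports height in millimetres per pixel and a known lateral scale. The resolution, belt height and tolerance band are invented for the example.

```python
import numpy as np

MM_PER_PIXEL = 0.5                        # assumed lateral resolution
MIN_LENGTH_MM, MAX_LENGTH_MM = 180.0, 220.0

def product_length_mm(depth_mm: np.ndarray, belt_height_mm: float = 2.0) -> float:
    """Estimate product length along the belt axis from a depth map."""
    product = depth_mm > belt_height_mm          # pixels standing above the belt
    rows = np.flatnonzero(product.any(axis=1))   # image rows containing product
    return float(rows.size * MM_PER_PIXEL) if rows.size else 0.0

depth = np.zeros((600, 400))
depth[100:500, 50:350] = 14.0                    # synthetic 400-row-long fillet
length = product_length_mm(depth)
print(length, MIN_LENGTH_MM <= length <= MAX_LENGTH_MM)   # 200.0 True
```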
What is more, 3D profile sensors improve depth perception and refine quality control, making them indispensable tools in industrial automation. Oxipital AI’s Romeo highlighted the potential of these technologies: “Removing defects before they reach customers is a key first step where vision inspection technology plays a role, but there’s even more data to be leveraged”. By preventing defects from the outset, manufacturers can boost yield and reduce waste.
AI-powered vision inspection systems can also facilitate real-time monitoring of production lines, identifying potential issues before they escalate. This capability allows manufacturers to implement predictive maintenance, reducing downtime and improving overall efficiency.
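One simple way to surface such issues early, sketched below under the assumption that the inspection system emits a pass/fail result per item, is to watch the rolling reject rate: a sustained rise can flag a developing fault (a drifting camera, a worn seal) before it escalates. The window size and alert threshold here are illustrative.

```python
from collections import deque

class RejectRateMonitor:
    """Flag the line when the recent reject rate exceeds a threshold."""

    def __init__(self, window: int = 500, alert_rate: float = 0.05):
        self.results = deque(maxlen=window)   # recent pass(0)/fail(1) flags
        self.alert_rate = alert_rate

    def record(self, failed: bool) -> bool:
        """Record one inspection; return True when attention is needed."""
        self.results.append(1 if failed else 0)
        window_full = len(self.results) == self.results.maxlen
        return window_full and sum(self.results) / len(self.results) > self.alert_rate
```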
AI and food safety
Consumer safety remains a top priority in the food and beverage industry. AI plays a crucial role in monitoring and analysing processes in real time, helping manufacturers navigate the complexities of compliance with legal requirements and certification pressures from major retailers.
As Zebra Technologies’ Pottel explained: “AI is ideal for food and beverage products where classification, segmentation, and object and anomaly detection are essential. It is also enhancing asset and inventory visibility, which is crucial for predicting contamination risks and maintaining high safety standards throughout the supply chain.”
“Vision technologies can help check the presentation of food products…offering a quick, repeatable and reliable way to assess the visual aspects of food products like size, shape and colour,” added Neil Gruettner, market manager at Mettler-Toledo Product Inspection.
He continued: “Deployment of this type of AI provides context to support rule-based machine learning and improve human decision-making. It also gives inspection equipment the tools to extract and interpret as much data as possible out of a product, facilitating the evolution and refinement of production processes through the continuous exposure to vast datasets.”
AI-enhanced vision systems also guide robots in handling food products, particularly those that are delicate or irregularly shaped. “AI has proved to be a great method for tackling applications with a high frequency of naturally occurring organic variability, such as food,” Oxipital AI’s Romeo explained, adding that this adaptability ensures gentle and precise handling, particularly important when sorting fresh produce or packaging baked goods.
Fortress Technology uses AI to reduce contamination risks and identify defects. The company’s commercial manager, Jodie Curry, told FoodBev: “Streamlining processes reduces the risk of contamination and ensures consistent quality. Implementing automated technology and digital tools helps identify inefficiencies and boosts responsiveness.”
The role of combination inspection systems
The integration of multiple inspection technologies into single systems is another key trend in this space. These systems integrate various inspection technologies, such as X-ray, checkweighing and vision inspection, to provide a comprehensive assessment of food products. By combining these technologies, manufacturers can ensure higher quality control, better detection of defects and more efficient production lines. This trend allows for more accurate and reliable monitoring, helping to reduce waste, improve safety standards and enhance overall product quality.
For its part, Fortress offers combination systems that enable comprehensive and multi-layered inspection. The company is already leveraging its proprietary data software package, Contact 4.0, across its metal detection, X-ray and checkweighing technologies. Contact 4.0 allows processors to review and collect data, securely monitor and oversee the performance of multiple Fortress metal detectors, checkweighers or combination inspection machines connected to the same network.
Deep learning and quality control
Deep learning is revolutionising visual inspection by enabling machines to learn from data and recognise previously unseen variations of defects. As Zebra Technologies’ Pottel explained: “Deep learning machine vision excels at complex visual inspections, especially where the range of anomalies, defects and spoilage can vary, as is often the case with food.”
This technology is vital for automating inspections and ensuring quality. Deep learning optical character recognition (OCR) also improves packaging inspection by ensuring label quality, regulatory compliance and brand protection. It can verify label presence, confirm allergen accuracy and prevent mislabelling.
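As a hedged sketch of that kind of check, the snippet below verifies required label text using the open-source pytesseract OCR wrapper, a stand-in for the proprietary deep-learning OCR the vendors describe; the required phrases and file name are assumptions.

```python
import pytesseract
from PIL import Image

REQUIRED_PHRASES = ["contains: milk", "best before"]   # assumed label rules

def label_passes(image_path: str) -> bool:
    """Fail the pack if any required phrase is missing from the OCR'd label."""
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    return all(phrase in text for phrase in REQUIRED_PHRASES)

print(label_passes("pack_173.png"))   # hypothetical captured label image
```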
“The goal is to strengthen quality control by capturing an image and processing it against set quality control parameters,” Mettler-Toledo’s Gruettner pointed out.
Vision systems are increasingly deployed for label verification, ensuring compliance with legislative food labelling requirements. The Mettler-Toledo label inspection portfolio features Smart Camera systems (V11, V13, V15) for basic label inspections, including barcodes, alphanumeric text and label quality. For more advanced applications, the PC-based V31 and V33 systems offer a larger field of view, faster throughput and enhanced inspection capabilities.
Oxipital AI uses 3D product scans and synthetic data generation to eliminate the need for hand-labelling images. “All training is done at Oxipital AI, enabling food and beverage customers to deploy AI without needing a team of experts,” said Romeo. “Our solutions are designed for immediate impact, requiring no coding, DIY or machine-learning expertise to implement and maintain.”
Real-world applications and future prospects
According to Zebra’s Global Manufacturing Vision Study, which surveyed leaders across various manufacturing sectors, including F&B, 66% of respondents plan to implement machine vision within the next five years, while 54% expect AI to drive growth by 2029.
These figures, coupled with the expanding market for vision inspection systems, suggest that the majority of manufacturing leaders are prioritising the integration of these advanced technologies, seeing them as crucial tools for both immediate improvements and long-term growth.
This shift is partly driven by increasingly stringent government regulations, which demand more accurate labelling and packaging. Many companies are already successfully leveraging AI to enhance their operations, particularly in labelling processes.
Despite its clear advantages, the uptake of AI has been slow. The main barrier appears to be cost. While the initial integration can be expensive, AI has demonstrated significant long-term cost savings, making it a worthwhile investment over time.
Zebra’s studies have shown that the pressure to maintain quality while managing fewer resources is intensifying for manufacturers. As a result, cost remains a significant consideration when implementing AI solutions.
Fortress recommends consolidating AI systems into a single interface, which helps reduce costs in the long term. Curry told FoodBev: “The future of our food supply chain depends on advanced inspection systems that enhance food safety, reduce product waste and require minimal factory floor space”.
She continued: “Combination systems offer the benefit of space efficiency, as all sales, services, parts and technical support are handled by one provider. A single interface simplifies training, improves operational safety and drives cost savings through faster installation and reduced training time.”
As AI continues to evolve, its role in vision and inspection is set to expand. Advancements in machine learning, sensor technology and robotics will lead to even more sophisticated and efficient inspection systems, raising quality and safety standards for consumers worldwide.
Tools & Platforms
AI Flow by TeleAI Recognized as a Breakthrough Framework for AI Deployment and Distribution by Omdia
SHANGHAI, July 11, 2025 /PRNewswire/ — AI Flow, the innovative framework developed by TeleAI, the Institute of Artificial Intelligence of China Telecom, has been recognized as playing a key role in the intelligent transformation of telecom infrastructure and services in the latest report by Omdia, a premier technology research and advisory firm. The report highlights AI Flow’s exceptional capabilities in addressing edge GenAI implementation challenges, showcasing its device-edge-cloud computing architecture that optimizes both performance and efficiency, as well as its groundbreaking combination of information and communication technologies.
According to the report, AI Flow facilitates seamless intelligence flow, allowing device-level agents to overcome the limitations of a single device and achieve enhanced functionality. The same communication network can connect advanced LLMs, VLMs, and diffusion models across heterogeneous nodes. By facilitating real-time, synergistic integration and dynamic interaction among these models, the approach achieves emergent intelligence that exceeds the capabilities of any individual model.
Lian Jye Su, Chief Analyst at Omdia, remarked that AI Flow has demonstrated sophisticated approaches to facilitate efficient collaboration across device-edge-cloud tiers and to achieve emergent intelligence through connective and interactive model operations.
The unveiling of AI Flow has also drawn great attention from the AI community on global social media. AI industry observer EyeingAI said on X: “It’s a grounded, realistic take on where AI could be headed.” AI tech influencer Parul Gautam said on X that AI Flow is pushing AI boundaries and is ready to shape the future of intelligent connectivity.
Fulfill the Vision of Ubiquitous Intelligence in Future Communication Networks
AI Flow, led by Professor Xuelong Li, the CTO and Chief Scientist of China Telecom and Director of TeleAI, was introduced to address the significant deployment challenges that emerging AI applications face from hardware resource limitations and communication network constraints, enhancing the scalability, responsiveness and sustainability of real-world AI systems. It is a multidisciplinary framework designed to enable seamless transmission and emergence of intelligence across hierarchical network architectures by leveraging inter-agent connections and human-agent interactions. At its core, AI Flow emphasizes three key points:
Device-Edge-Cloud Collaboration: AI Flow leverages a unified device-edge-cloud architecture, integrating end devices, edge servers and cloud clusters, to dynamically optimize scalability and enable low-latency inference of AI models. By developing efficient collaboration paradigms tailored for the hierarchical network architecture, the system minimizes communication bottlenecks and streamlines inference execution (a minimal routing sketch follows this list).
Familial Models: Familial models refer to a set of multi-scale architectures designed to address diverse tasks and resource constraints within the AI Flow framework. These models facilitate seamless knowledge transfer and collaborative intelligence across the system through their interconnected capabilities. Notably, the familial models are feature-aligned, which allows efficient information sharing without the need for additional middleware. Furthermore, through well-structured collaborative design, deploying familial models over the hierarchical network can achieve enhanced inference efficiency under constrained communication bandwidth and computational resources.
Connectivity- and Interaction-based Intelligence Emergence: AI Flow introduces a paradigm shift to facilitate collaborations among advanced AI models, e.g., LLMs, vision-language models (VLMs), and diffusion models, thereby stimulating emergent intelligence surpassing the capability of any single model. In this framework, the synergistic integration of efficient collaboration and dynamic interaction among models becomes a key boost to the capabilities of AI models.
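To make the tiering idea concrete, here is a minimal, hypothetical sketch of complexity-based routing across the three tiers. The token-count heuristic and tier names are assumptions; the report does not describe AI Flow's actual scheduler.

```python
def route_request(prompt: str) -> str:
    """Pick an inference tier from a crude complexity proxy (token count)."""
    tokens = len(prompt.split())
    if tokens < 20:
        return "device"   # small on-device model, lowest latency
    if tokens < 200:
        return "edge"     # mid-size model on a nearby edge server
    return "cloud"        # full-scale model in the cloud cluster

for p in ["translate 'hello'", "summarise this contract " + "clause " * 150]:
    print(route_request(p))   # -> device, then edge
```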
See AI Flow’s tech articles here:
https://www.arxiv.org/abs/2506.12479
https://ieeexplore.ieee.org/document/10884554
AI Flow’s First Move: AI-Flow-Ruyi Familial Model
Notably, TeleAI open-sourced the first of AI Flow’s familial models, AI-Flow-Ruyi-7B-Preview, on GitHub last week.
The model is designed for the next-generation device-edge-cloud model service architecture. Its core innovation lies in shared intermediate features across models of varying scales, enabling the system to generate responses with a subset of parameters, chosen according to problem complexity, through an early-exit mechanism. Each branch can operate independently while leveraging the shared stem network for reduced computation and seamless switching. Combined with distributed device-edge-cloud deployment, this achieves collaborative inference among the large and small models within the family, enhancing the efficiency of distributed model inference.
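The early-exit mechanism can be illustrated with a toy PyTorch module, shown below under clearly labelled assumptions: the layer sizes, confidence threshold and two-exit structure are invented for the example and do not reflect the AI-Flow-Ruyi architecture itself.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy familial model: two exits of different depth share one stem."""

    def __init__(self, dim: int = 64, num_classes: int = 10):
        super().__init__()
        self.stem = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())   # shared stem
        self.block_small = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.block_large = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.exit_small = nn.Linear(dim, num_classes)   # cheap branch head
        self.exit_large = nn.Linear(dim, num_classes)   # deep branch head

    def forward(self, x: torch.Tensor, confidence: float = 0.9):
        h = self.stem(x)                 # computed once, reused by every exit
        h = self.block_small(h)
        logits = self.exit_small(h)
        if logits.softmax(dim=-1).max() >= confidence:
            return logits, "small-exit"  # easy input: answer early and cheaply
        h = self.block_large(h)          # hard input: continue down the branch
        return self.exit_large(h), "large-exit"

net = EarlyExitNet()
_, path = net(torch.randn(1, 64))
print(path)
```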
Open-source address:
https://github.com/TeleAI-AI-Flow/AI-Flow-Ruyi
About TeleAI
TeleAI, the Institute of Artificial Intelligence of China Telecom, is a pioneering team of AI scientists and enthusiasts, working to create breakthrough AI technologies that could build up the next generation of ubiquitous intelligence and improve people’s wellbeing. Under the leadership of Professor Xuelong Li, the CTO and Chief Scientist of China Telecom, TeleAI aims to continuously expand the limits of human cognition and activities, by expediting research on AI governance, AI Flow, Intelligent Optoelectronics (with an emphasis on embodied AI), and AI Agents.
For more information:
https://www.teleai.com.cn/product/AboutTeleAI
Photo – https://mma.prnewswire.com/media/2729356/AI_Flow.jpg