
Tools & Platforms

Powerful AI technologies and AMD Ryzen AI processors



I took a look at the latest information on AMD Ryzen AI processors, a new class of Windows notebook chips developed specifically for next-generation AI applications. With this series, AMD aims to elevate notebooks to a new performance category in which artificial intelligence runs directly on the device and is no longer necessarily dependent on external cloud services. For you, this means that AI-supported functions such as language models, image processing and assistants are available locally, and can run both more securely and more energy-efficiently.

The Strix Point APUs from the Ryzen AI 300 series form the technical basis. They combine modern Zen 5 and Zen 5c cores, integrate an RDNA 3.5 graphics unit and include a Neural Processing Unit that delivers up to 55 TOPS. This matters because the NPU is designed specifically for AI workloads such as language processing and generative models. While the Zen 5 cores deliver high performance in demanding applications, the Zen 5c cores are tuned for efficiency and optimize power consumption.
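To put a TOPS figure in perspective, a rough throughput calculation helps. The model size, sustained-utilization factor, and latency estimate below are illustrative assumptions for a back-of-the-envelope sketch, not AMD benchmark data.

```python
# Back-of-the-envelope: what a TOPS rating implies for a single inference pass.
# Real NPU throughput depends on precision, memory bandwidth, and model
# structure; the utilization factor here is an assumption.

def inference_time_ms(model_ops: float, tops: float, utilization: float = 0.5) -> float:
    """Rough lower bound on inference latency in milliseconds.

    model_ops:   total operations per forward pass
    tops:        peak throughput in tera-operations per second
    utilization: fraction of peak actually sustained (typically well below 1.0)
    """
    effective_ops_per_s = tops * 1e12 * utilization
    return model_ops / effective_ops_per_s * 1000

# Example: a vision model needing ~1.5 billion ops per frame on a 55 TOPS NPU
# finishes each frame in a small fraction of a millisecond at 50% utilization.
print(round(inference_time_ms(1.5e9, 55), 4))
```

Even with conservative utilization, workloads of this size leave ample headroom for real-time, on-device processing, which is the point of pairing an NPU with the CPU cores.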

An important field of application is the use of Copilot+ PCs, a new PC category defined by Microsoft in which AI functions are seamlessly integrated into Windows. With Ryzen AI, the processors meet precisely these requirements. This means you can use devices that can, for example, run a voice assistant in real time or perform image processing with local AI, without the need for a constant internet connection. This increases security, since sensitive data is not automatically transferred to servers, and also improves response times.

Manufacturers are already using this technology in various devices. Dell is launching notebooks equipped with Ryzen AI chips that offer a battery life of up to 22 hours. Asus is integrating the Ryzen AI 9 HX 370 into its new ROG Zephyrus G14, which is aimed at gamers who want high computing power alongside AI-supported features. The OneXPlayer Super X, a hybrid device equipped with the Ryzen AI Max 395, is also particularly exciting: this version combines powerful graphics performance with an NPU that can process large language models such as Llama 3.2, with billions of parameters, locally. The Ryzen AI 5 330, a chip that combines quad-core performance with a 50 TOPS NPU, is also already available in the lower price range, making AI experiences accessible even in entry-level devices.
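To see why billion-parameter models can now fit on a notebook at all, a simple weight-size calculation is instructive. The precisions below are common choices in local inference runtimes; the figures are rough estimates, not vendor specifications, and real deployments add overhead for the KV cache and activations.

```python
# Rough memory-footprint estimate for running a language model locally.
# Illustrative arithmetic only; actual runtimes need additional memory
# beyond the raw weights.

def model_weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate size of a model's weights in gigabytes."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 3-billion-parameter model (e.g. Llama 3.2 3B) at common precisions:
print(model_weight_gb(3, 16))  # float16: 6.0 GB
print(model_weight_gb(3, 4))   # 4-bit quantized: 1.5 GB
```

At 4-bit quantization, a 3B-parameter model needs only about 1.5 GB for weights, which is why such models are plausible on notebook-class hardware with unified memory.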

Conclusion

I see the new AMD Ryzen AI processors as the start of a new generation of PCs that will bring AI-supported applications directly to your notebook. Instead of being dependent on cloud servers, you can use computing power and models locally, which increases security and speed. The spectrum ranges from affordable entry-level solutions to high-end devices for gaming and professional applications. This opens up opportunities for you to use AI as an integral part of everyday life in notebooks, whether for creative work, productivity or entertainment.

Source: AMD

 





Half of firms lack AI expertise despite rising interest in EAM tech



A global survey of maintenance professionals has found that almost half of industrial businesses lack internal expertise to adopt advanced tools such as artificial intelligence.

The Ultimo Maintenance Trend Report, based on input from over 200 maintenance professionals worldwide, highlights how emerging technologies including AI, machine learning, and digital twins are increasingly being recognised as key enablers in enterprise asset management (EAM). The report also points to persistent workforce challenges and the important role of human skill in the successful implementation of these next-generation tools.

Shift to next-generation technologies

According to the survey, there has been a significant increase in interest in advanced technologies since the last Ultimo EAM Trend Report in 2023. When respondents were asked about which innovations they believe will have the most positive impact on their maintenance and business practices, contextual intelligence was cited by 68% of participants, markedly up from 8% one year earlier. Automation and robotics (49%) and machine learning (41%) were also highlighted as areas of strong interest. The proportion of professionals interested in digital twins has more than doubled, now reaching 40%.

Despite the interest in and potential benefits of these technologies, some significant barriers remain. The survey found that 49% of respondents lack the internal expertise necessary to implement advanced tools like AI and machine learning. For many organisations, this skills gap poses a key challenge to the wider adoption of digital technologies in industrial asset management.

Workforce challenges

The survey data also indicates that workforce issues continue to be a dominant concern. An aging workforce was identified as the most pressing trend impacting maintenance strategy by 63% of respondents, underlining the urgency of knowledge transfer and workforce planning for businesses in asset-heavy sectors. In parallel, 50% of participants stated that recruiting experienced staff was their primary source of disruption over the past year, suggesting that both immediate and long-term workforce needs are being keenly felt.

Insights from Ultimo

“From global instability to changing regulations, socio-economic and political shifts are creating uncertainty across industries. In this environment, agility is critical,” said Berend Booms, Head of EAM Insights at Ultimo, an IFS company. “EAM can also serve as a catalyst for innovation. Internet of Things (IoT), AI, ML, digital twins, and predictive analytics are rapidly transforming industrial businesses. They unlock smarter decision-making, greater efficiency, and a sharper competitive edge.”

Role of data and analytics in asset maintenance

The report further explores how the increased availability of real-time data, enabled by technologies such as IoT and predictive analytics, is contributing to the evolution of EAM systems. The perceived impact of predictive modelling has tripled in this year's findings compared with the previous survey. Yet, with 49% of businesses citing inadequate internal know-how as a limiting factor, practical uptake remains uneven across industries and markets.

Modern EAM systems have moved beyond solely serving as record-keeping tools. By integrating AI and maintenance data, these systems are now being used to produce actionable insights, helping maintenance teams anticipate needs and transition from reactive repairs to proactive strategies. This shift, according to the research, is enabling organisations to improve efficiency, reduce downtime, and derive greater value from their maintenance investments.
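The shift from reactive repairs to proactive strategies can be sketched in a few lines. This is a deliberately minimal toy rule, not how any particular EAM product works; the sensor values, rolling window, and threshold below are invented for illustration.

```python
# Minimal sketch of a predictive-maintenance rule: flag an asset when a
# rolling average of sensor readings drifts past a threshold, instead of
# waiting for an outright breakdown. Real systems use far richer models.

from collections import deque

def maintenance_alert(readings, window=3, threshold=75.0):
    """Return the index at which the rolling mean first exceeds threshold,
    or None if the asset never crosses it."""
    buf = deque(maxlen=window)
    for i, value in enumerate(readings):
        buf.append(value)
        if len(buf) == window and sum(buf) / window > threshold:
            return i
    return None

# Vibration readings trending upward: the alert fires at index 4,
# before the worst readings arrive.
print(maintenance_alert([60, 65, 70, 78, 85, 92]))
```

Even a rule this crude changes the posture from "fix it after it breaks" to "schedule work before it breaks"; the value of AI-driven EAM lies in replacing the hand-tuned threshold with learned models over much richer data.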

Ultimo has introduced AI-powered EAM features designed to be accessible and deployable without the need for in-house AI model development or significant infrastructure investments. These capabilities aim to lower adoption barriers and facilitate immediate operational improvements for asset-heavy enterprises.

The Maintenance Trend Report, which contains contributions from Verdantix, TwinThread, ABS Consulting, and MaxGrip, states that blending human expertise with intelligent systems is likely to be the most effective approach as businesses strive to enhance their asset maintenance functions. As noted in the report, technology alone does not provide a complete solution, but the combination of skilled professionals and advanced digital tools is shaping future directions in maintenance management.

The survey captured perspectives from professionals working in sectors including manufacturing, healthcare, energy, utilities, telecommunications, transportation, and logistics, across a wide range of company sizes. Respondents were drawn from Austria, Belgium, the Czech Republic, Denmark, Finland, Germany, Iceland, Luxembourg, the Netherlands, Norway, Sweden, the UK, and the USA.





AI-Driven ‘Omni Cities’ Are the Way Forward



More than 100 people lost their lives in July when flash floods ravaged Central Texas, illustrating the devastation that's possible when mounting natural disasters are met with inefficient government responses. Agencies operated in complete silos as the floods destroyed communities: county emergency systems couldn't share real-time data with state coordinators, and rescue teams worked from incompatible dispatch networks. The failure of civic communications technology amplified the toll.

Today, our cities and towns operate like 18th-century mansions rewired with modern gadgets: functional in calm weather, yet lethal in storms. And as climate disasters intensify, cyber warfare evolves and AI-powered threats emerge, the “smart city” model that promised efficiency through sensors and dashboards has proven dangerously inadequate in recent years.

The path forward is not more tech — it’s new architecture. We need cities that don’t just collect data, but process that data in real time to adapt, evolve and take action as unified living systems. They can’t just be “smart,” but instead must be “omni” cities: urban ecosystems with integrated AI nervous systems that coordinate every element of civic life with both precision and purpose.

THE FRAGMENTATION CRISIS

Today’s cities are digital archipelagos. A drone from one vendor can’t share data with a robot from another, while emergency systems speak different languages. This is not inefficiency, but a structural failure seen across virtually every U.S. metropolis.

This fragmentation has been our downfall during the greatest of tragedies. When Hurricane Helene struck in 2024, outdoor sirens stayed silent while cell networks collapsed, not because the technology failed, but because nothing was designed to work together. When California wildfires forced 200,000 evacuations earlier this year, communication breakdowns and uncoordinated shelters led to 31 preventable deaths.

Yet these communication issues are not isolated to climate disasters. In fact, they’re happening in small ways every day, from 911 dispatch failures to fractured public transport services. And as time passes, new threats are emerging faster than cities can adapt, from AI-powered cyber attacks targeting infrastructure vulnerabilities, to weaponized drone swarms exploiting communication gaps. Our fragmented civic networks don’t just lag behind these challenges — they amplify them.

BEYOND ‘SMART’: THE OMNI CITY VISION

Just as smart cities promised efficiency in the 2010s, omni cities will deliver the resilience needed to face the looming threats of the next decade.

As suggested by its name, an omni city operates as a unified organism in which every component shares a common protocol for crisis response. When a wildfire approaches an omni city, the city doesn’t just sound alarms — it automatically reroutes traffic, opens shelters and coordinates evacuation routes while emergency teams receive real-time data from every connected system.

This isn’t science fiction; cities like Houston are already testing integrated frameworks that link climate response, public safety and mobility systems. The omni city goes further, however, by treating the city as an ecosystem rather than a collection of isolated smart devices.

The key is interoperability — not just between machines, but between machines and humans. This requires a city-scale operating system that allows autonomous tools, public responders and ethical protocols to work in lockstep. Instead of retrofitting isolated apps, this OS treats cities like systems, linking drones, sensors, robotics, transport and emergency teams through a unified protocol layer.
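The unified protocol layer the author describes can be sketched as a simple publish/subscribe bus in which every subsystem consumes one shared event schema. The schema, subsystem names, and handlers below are illustrative assumptions for the sketch, not an existing city platform.

```python
# Illustrative sketch of a "unified protocol layer": drones, sensors,
# shelters, and dispatch all speak one event schema on a shared bus, so a
# single alert fans out to every subsystem in one coordinated pass.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CivicEvent:
    kind: str        # e.g. "wildfire", "flood"
    location: str
    severity: int    # 1 (low) to 5 (critical)

@dataclass
class CivicBus:
    subscribers: dict = field(default_factory=dict)

    def subscribe(self, system: str, handler: Callable[[CivicEvent], str]) -> None:
        self.subscribers[system] = handler

    def publish(self, event: CivicEvent) -> list[str]:
        # Every registered subsystem reacts to the same event.
        return [handler(event) for handler in self.subscribers.values()]

bus = CivicBus()
bus.subscribe("traffic", lambda e: f"reroute traffic away from {e.location}")
bus.subscribe("shelters", lambda e: f"open shelters near {e.location}")
bus.subscribe("dispatch", lambda e: f"send teams to {e.location} (severity {e.severity})")

actions = bus.publish(CivicEvent("wildfire", "north district", 4))
for action in actions:
    print(action)
```

The contrast with today's "digital archipelagos" is the point: with one schema and one bus, adding a new vendor's drone means registering one handler, not building a bespoke integration with every other system.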

When cities become more intelligent, they must also become more accountable, particularly with regard to equity. The smart city movement failed in part because it prioritized convenience for the wealthy (or those who could access the technology in the first place) over resilience for everyone. Omni cities must flip this model by prioritizing resilience testing in the places most vulnerable to system failures, with the communities that legacy infrastructure has consistently ignored. This isn't charity; it's engineering. Systems that can't serve everyone can't truly serve anyone.

THE MOMENT FOR ACTION

While exploring the next iteration of a “smart city” may feel daunting, local governments don’t need federal permission to begin. They can start by requiring interoperability standards for new public technology, creating transparent audit systems for automated decisions and prioritizing deployments in underserved communities. The goal isn’t to build perfect cities overnight, but to create the foundations for urban systems that can evolve with emerging challenges.

The era of omni cities begins with recognizing that in a world of cascading crises, our urban infrastructure must become our first responder. Cities that understand this won’t just survive — they’ll define what governance means in the age of artificial intelligence.

When infrastructure thinks as fast as threats emerge, resilience becomes possible. The question is not whether cities will evolve, but which ones will evolve first.

Cesar R. Hernandez is an Equity Fellow in the Center for Public Leadership at Harvard Kennedy School.







AWS, DeepBrain AI Launch AI-Generated Multimedia Content Detector — Campus Technology




Amazon Web Services (AWS) and DeepBrain AI have introduced AI Detector, an enterprise-grade solution designed to identify and manage AI-generated content across multiple media types. The collaboration targets organizations in government, finance, media, law, and education sectors that need to validate content authenticity at scale.

In its AWS Marketplace listing, AI Detector is categorized as a private offer, part of a purchasing program that enables sellers and buyers to negotiate custom pricing and end user license agreement (EULA) terms that aren't publicly available. The product therefore carries no standard public price that anyone can see and buy against immediately; terms are negotiated individually before purchase.

AI Detector Offer (source: AWS).

Architecture Built on AWS Infrastructure

AI Detector operates as a Software-as-a-Service (SaaS) solution leveraging multiple AWS services for enterprise performance and scalability. The architecture centers on Amazon Elastic Kubernetes Service (Amazon EKS) for orchestration, paired with Amazon Elastic Compute Cloud (Amazon EC2) GPU instances from the G5 family. DeepBrain AI recommends g5.8xlarge instances, with g5.2xlarge as the minimum configuration.

AI Detector Architecture (source: AWS).

The solution also incorporates Amazon Rekognition for visual content analysis, enabling “content authenticity verification and inappropriate content detection,” according to the announcement. An external MongoDB Atlas database provides additional data management capabilities.

The system integrates with several additional AWS services:

  • Amazon Simple Storage Service (Amazon S3) for data storage
  • Amazon Elastic File System (Amazon EFS) for scalable file storage
  • Amazon MemoryDB for in-memory database operations
  • Elastic Load Balancing for traffic distribution
  • Amazon Route 53 for DNS services
  • AWS WAF for application firewall protection
  • Amazon Elastic Container Registry (Amazon ECR) for container management
  • AWS Lambda for serverless computing functions
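Since the listing is a private offer, DeepBrain AI's actual client API is not public. Purely as an illustration of how an organization might sit downstream of such a detector, here is a hypothetical triage step; the score field, function names, and thresholds are all invented for the sketch.

```python
# Hypothetical downstream triage for a synthetic-media detector. The detector
# is assumed to return a confidence score per media item; the 0.8 and 0.5
# thresholds and the decision labels are illustrative assumptions, not
# DeepBrain AI's API.

def classify(score: float, threshold: float = 0.8) -> str:
    """Map a detector confidence score to a review decision."""
    if score >= threshold:
        return "flag-as-synthetic"
    if score >= 0.5:
        return "human-review"
    return "pass"

def triage(results: dict[str, float]) -> dict[str, str]:
    """Apply the decision rule to a batch of (media_id -> score) results."""
    return {media_id: classify(score) for media_id, score in results.items()}

batch = {"clip-001": 0.93, "clip-002": 0.61, "clip-003": 0.12}
print(triage(batch))
```

A middle band routed to human review reflects the blended approach reported in deployments like the one below, where automated screening narrows the workload rather than replacing investigators.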

Real-World Deployment Shows Promise

The Korean National Police Agency serves as a key customer case study, implementing AI Detector to address rising digital crimes involving manipulated videos and synthetic content. The deployment achieved “over 80% accuracy between real and synthetic media during investigations” while reducing manual verification workloads, according to the AWS Partner Network blog post.

The agency uses the tool to screen “manipulated videos featuring celebrities and inappropriate synthetic content reported by the public,” with improved early-stage content validation enabling faster response times during investigations.

Compliance and Security Features

AI Detector meets multiple enterprise compliance standards, including ISO 27001, SOC 2, GDPR, and ISO 42001. The solution emphasizes four core benefits: accuracy through “cutting-edge detection algorithms,” real-time processing speed, simplified deployment via AWS Marketplace, and comprehensive compliance coverage.


