Tools & Platforms

How On-Device AI Drives Consumer Tech Change

The recent launch of Google’s Pixel 10 series signals a new frontier in AI development and competition, moving from cloud-based large language models to AI embedded directly in devices. This transition, enabled by the Tensor G5 chip and Gemini Nano model, reflects a broader industry trend in which AI is becoming an integral, invisible layer within personal devices rather than a standalone service. The implications extend beyond smartphones to earbuds, watches, glasses, and other wearables, pointing to a future where AI operates contextually and continuously within physical and personal items. This shift is part of the development toward “ambient intelligence,” where intelligence surrounds users and understands their needs without demanding active engagement.

The Trend Toward Device-Based AI

The Pixel 10’s features demonstrate the practical advantages of on-device AI processing. Magic Cue provides contextual suggestions by analyzing activity across applications without cloud dependency, making connections between information from emails, screenshots, and notes. Camera Coach offers real-time photographic guidance through Gemini-powered scene analysis. Voice Translate maintains natural vocal characteristics during real-time call translation, all processed locally on the device. These capabilities extend throughout Google’s product ecosystem, including the Pixel Watch 4’s personal AI health coach and the Pixel Buds Pro 2’s adaptive sound control, all powered by on-device AI processing.

Smartphone makers across companies and countries are racing to build on-device AI. Apple’s iOS 26 update will incorporate similar live translation for calls and messages, along with visual intelligence features. Huawei and Xiaomi are integrating real-time AI translation, AI video, AI recording, and AI-powered gesture recognition for photo transfer into their flagship devices, supported by substantial investments in semiconductor development. Similarly, Samsung is collaborating with chip manufacturers to optimize on-device AI for transcribing and summarizing audio as well as for video and image editing, reflecting a global industry shift toward localized AI processing.

Economic and Industrial Implications

The combination of AI-enhanced phones, glasses, watches, and other devices points toward a future where intelligence becomes integrated into our environment. This model emphasizes proactive, context-aware assistance that minimizes required user interaction. AI glasses can overlay real-time translations or navigation cues onto the physical environment, while smartwatches with on-device AI can monitor health metrics and provide personalized recommendations. This transition requires specialized hardware architectures, including Neural Processing Units (NPUs) and tensor-optimized chips like Google’s Tensor G5 and Qualcomm’s Snapdragon platforms, all designed to enable efficient local AI processing.

Ray-Ban Meta Smart Glasses utilize multi-modal AI to process visual and auditory information, enabling contextual queries and hands-free content creation. With sales exceeding 2 million units and production targets of 10 million annually by 2026, these devices demonstrate growing market acceptance. The Oakley Meta HSTN variant targets athletic users with features like real-time environmental analysis, while start-ups like XReal and Viture are focusing on high-fidelity augmented reality displays for productivity and entertainment applications, creating increasingly sophisticated alternatives to traditional screen-based interfaces.

The development of AI hardware involves specialized materials, supply chains, and manufacturing processes, creating opportunities for established companies and specialized manufacturers. Robotics, another key area of on-device AI, illustrates this transformation. Companies like Boston Dynamics, 1X, and Unitree are developing robotic systems for assisting industrial inspections, monitoring manufacturing plants, supporting logistics, managing warehouses, conducting rescue operations, and helping with chores. These systems combine advanced mechanics with local processing capabilities, allowing them to operate autonomously in complex environments.

The emergence of world foundation models from Nvidia, Meta and Tencent suggests that next-generation robotics will possess unprecedented environmental understanding and adaptability. This progression could reshape labor markets, potentially displacing certain manual and cognitive tasks while creating new roles in robot maintenance, programming, and system integration. The economic impact extends beyond employment to encompass entirely new business models, such as robotics-as-service and adaptive manufacturing systems.

Historical Patterns of Technological Integration

This shift toward embedded AI follows established patterns of technological adoption. Mainframe computing decentralized into personal computers, placing processing power directly in users’ hands. Similarly, the internet evolved from a specialized resource accessed through terminals to a ubiquitous utility integrated into countless devices. Video technology transitioned from specialized equipment to a standard feature in cameras and mobile devices. AI phones, glasses, and other wearable tech, which package large language models into personal, portable devices, exemplify the same pattern of advanced technology becoming accessible through everyday tools.

Challenges and Implementation Considerations

Despite rapid advancement, several significant challenges remain for widespread on-device AI adoption. Energy consumption represents a particular constraint for battery-powered devices, as computationally intensive AI tasks can rapidly drain power resources. This limitation has spurred research into energy-efficient algorithms and low-power AI chips, but the optimal balance between capability and consumption remains elusive for many applications.

Privacy and security concerns also persist, despite the inherent advantages of local processing. While keeping data on-device reduces exposure during transmission, the devices themselves may become targets for the extraction of sensitive information. Additionally, the proliferation of connected devices expands the potential attack surface for security breaches, requiring robust encryption and access control measures.

Social acceptance and ethical considerations present further implementation challenges. The integration of AI into increasingly personal contexts, including health monitoring and home automation, raises questions about appropriate boundaries and consent mechanisms. These concerns necessitate careful design approaches that prioritize user control and transparency alongside technical capability.

Google’s launch of the Pixel 10 series marks an architectural shift in AI, from centralized cloud resources to distributed, device-level intelligence. The competition is no longer about building the largest models but about creating useful devices that equip users with tools to synthesize an increasing load of information, cope with heightened demands for multitasking, and meet a growing standard of productivity.





Navigating Geopolitical Risk and Technological Indispensability

The global AI chip market is a battleground of geopolitical strategy and technological innovation, with China’s demand for advanced semiconductors emerging as a critical focal point. For investors, the tension between U.S. export restrictions and China’s push for self-reliance creates a paradox: while geopolitical risks threaten to fragment markets, the indispensable nature of cutting-edge AI hardware ensures sustained demand. Nvidia, a leader in AI chip development, finds itself at the center of this dynamic, balancing compliance with its ambition to retain a foothold in China’s $50 billion AI opportunity [2].

Geopolitical Risk: The U.S. Export Control Conundrum

U.S. export restrictions have reshaped the AI chip landscape in China. In Q2 2025, Nvidia reported zero sales of its H20 AI chips to the region, a direct consequence of stringent export controls and the absence of finalized regulatory guidelines for its new licensing agreement [2]. This vacuum has allowed domestic competitors like Cambricon to surge, with the company’s revenue jumping 4,300% in the first half of 2025 [1]. The U.S. government’s 100% tariffs and revocation of VEU licenses have further fragmented global supply chains, compelling firms like AMD and Nvidia to develop lower-performance chips for China while TSMC shifts capital expenditures to the U.S. and Europe [1].

Yet, these restrictions have not eradicated demand for advanced AI hardware. China’s AI industry, supported by state-led investment funds and subsidized compute resources, is projected to grow into a $3–4 trillion infrastructure boom by 2030 [3]. The National Integrated Computing Network, a state-backed initiative, underscores Beijing’s commitment to building a self-sufficient ecosystem [4]. However, bottlenecks persist: limited access to EUV lithography and global supply chain integration remain significant hurdles [4].

Technological Indispensability: The Unmet Need for Performance

Despite China’s strides in self-reliance, the gap between domestic and U.S. semiconductor capabilities remains stark. Companies like Huawei and SMIC are closing this gap—Huawei’s CloudMatrix 384 and SMIC’s 7nm production expansion are notable advancements [1]. However, the performance of these chips still lags behind Nvidia’s Blackwell GPU, which offers unparalleled efficiency for large-scale AI training. This technological disparity has driven Chinese firms like Alibaba to invest in homegrown solutions, including a new AI chip, while still relying on U.S. technology for critical applications [2].

Nvidia’s recent development of the B30 chip—a China-compliant variant of the Blackwell GPU—exemplifies its strategy to navigate these challenges. By adhering to U.S. export restrictions while retaining performance, the B30 aims to secure market access in a landscape where even restricted chips are indispensable [3]. This approach mirrors the broader trend of “compliance-driven innovation,” where firms adapt to geopolitical constraints without sacrificing technological relevance.

Strategic Implications for Investors

For investors, the key lies in assessing how companies balance compliance with innovation. Nvidia’s ability to pivot to the B30 chip highlights its resilience, but the absence of H20 sales in Q2 2025 underscores the fragility of its China strategy [2]. Meanwhile, domestic players like Cambricon and SMIC offer high-growth potential but face long-term challenges in overcoming U.S. export controls and achieving parity with Western rivals [1].

The AI infrastructure boom, however, presents a universal opportunity. As global demand for advanced compute surges, firms that can navigate geopolitical risks—whether through compliance, localization, or hybrid strategies—will dominate. China’s push for self-reliance, while reducing its dependence on U.S. chips, also creates a fertile ground for innovation, with startups like DeepSeek optimizing FP8 formats for local hardware [1].

Conclusion

Nvidia’s experience in China encapsulates the dual forces shaping the AI chip sector: geopolitical risk and technological indispensability. While U.S. export controls have disrupted its access to the Chinese market, the company’s strategic adaptations—such as the B30 chip—demonstrate its commitment to maintaining relevance. For investors, the lesson is clear: the AI race is not just about hardware but about navigating a complex web of policy, innovation, and market dynamics. As China’s self-reliance drive accelerates, the winners will be those who can bridge the gap between compliance and cutting-edge performance.

Sources:
[1] China’s AI Chip Revolution: The Strategic Imperative and Investment Opportunities in Domestic Semiconductor Leaders [https://www.ainvest.com/news/china-ai-chip-revolution-strategic-imperative-investment-opportunities-domestic-semiconductor-leaders-2508/]
[2] Alibaba reportedly developing new AI chip as China’s Xi rejects AI’s ‘Cold War mentality’ [https://ca.news.yahoo.com/alibaba-reportedly-developing-ai-chip-123905455.html]
[3] Navigating Geopolitical Risk in the AI Chip Sector: Nvidia Remains a Strategic Buy Amid Chinese Restrictions [https://www.ainvest.com/news/navigating-geopolitical-risk-ai-chip-sector-nvidia-remains-strategic-buy-chinese-restrictions-2508/]
[4] Full Stack: China’s Evolving Industrial Policy for AI [https://www.rand.org/pubs/perspectives/PEA4012-1.html]





How AI Can Strengthen Your Company’s Cybersecurity – New Technology

Key Takeaways:

  • Using AI cybersecurity tools can help you detect threats
    faster, reduce attacker dwell time, and improve your
    organization’s overall risk posture.

  • Generative AI supports cybersecurity compliance by accelerating
    breach analysis, reporting, and regulatory disclosure
    readiness.

  • Automating cybersecurity tasks with AI helps your business
    optimize resources, boost efficiency, and improve security program
    ROI.

Cyber threats are evolving fast — and your organization
can’t afford to fall behind. Whether you’re in healthcare, manufacturing, entertainment, or another dynamic industry,
the need to protect sensitive data and maintain trust with
stakeholders is critical.

With attacks growing in volume and complexity, artificial
intelligence (AI) offers powerful support to help you detect
threats earlier, respond faster, and stay ahead of changing
compliance demands.

Why AI Is a Game-Changer in Cybersecurity

Your business is likely facing more alerts and threats than your
team can manually manage. Microsoft reports that companies face
over 600 million cyberattacks daily — far
beyond human capacity to monitor alone.

AI tools can help by automating key aspects of your cybersecurity strategy, including:

  • Real-time threat detection: With
    “zero-day attack detection”, machine learning identifies
    anomalies outside of known attack signatures to flag new threats
    instantly.

  • Automated incident response: From triaging
    alerts to launching containment measures without waiting on human
    intervention.

  • Security benchmarking: Measuring your defenses
    against industry standards to highlight areas for improvement.

  • Privacy compliance support: Tracking data
    handling and reporting to meet regulatory requirements with less
    manual oversight.

  • Vulnerability prioritization and patch
    management: AI can rank identified weaknesses by severity
    and automatically push policies to keep systems up to
    date.

AI doesn’t replace your team — it amplifies their
ability to act with speed, precision, and foresight.
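As a concrete illustration of the anomaly-detection idea above, the sketch below trains an Isolation Forest on synthetic "normal traffic" features and flags a point that falls outside known patterns. The feature choices, values, and contamination rate are illustrative assumptions, not a production pipeline:

```python
# Illustrative sketch of anomaly-based threat detection.
# Features and thresholds here are hypothetical, not from any real deployment.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" traffic: [requests/min, outbound KB] near typical values
normal = rng.normal(loc=[60, 500], scale=[10, 80], size=(500, 2))

# Isolation Forest learns the shape of normal activity, no attack signatures
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A burst of requests with unusually large outbound transfer
suspicious = np.array([[400, 9000]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 means normal
```

Because the model learns what "normal" looks like rather than matching known signatures, it can flag novel behavior, which is the basis of the zero-day detection claim.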

Practical AI Use Cases to Consider

Here are some ways AI is currently being used in cybersecurity
and where it’s headed next:

1. Summarize Incidents and Recommend Actions

Generative AI can instantly analyze a security event and draft
response recommendations. This saves time, supports disclosure
obligations, and helps your team update internal policies based on
real data.

2. Prioritize Security Alerts More Efficiently

AI triage tools analyze signals from across your environment to
highlight which threats require urgent human attention. This allows
your staff to focus where it matters most — reducing risk and
alert fatigue.

3. Automate Compliance and Reporting

From HIPAA to SEC rules to state-level privacy laws, the
regulatory landscape is more complex than ever. AI can help your
organization map internal controls to frameworks, generate
compliance reports, and summarize what needs to be disclosed
— quickly and accurately.

4. Monitor Behavior and Detect Threats

AI can track user behavior, spot anomalies, and escalate
suspicious actions (like phishing attempts or unauthorized access).
These tools reduce attacker dwell time and flag concerns in seconds
— not weeks or months.

5. The Next Frontier: Autonomous Security

The future of AI in cybersecurity includes agentic systems
— tools capable of acting independently when breaches occur.
For instance, if a user clicks a phishing link, AI could
automatically isolate the device or suspend access.

However, this level of automation must be used carefully. Human
oversight remains essential to prevent overreactions — such
as wiping a laptop unnecessarily. In short, AI doesn’t replace
your human cybersecurity team but augments it — automating
repetitive tasks, spotting hidden threats, and enabling faster,
smarter responses. As the technology matures, your governance
structures must evolve alongside it.
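The human-oversight point can be made concrete with a simple policy gate: act autonomously only on high-confidence detections, and queue everything else for an analyst. The threshold, event fields, and action names below are hypothetical illustrations, not any vendor's API:

```python
# Hedged sketch of a human-in-the-loop gate for automated containment.
# The confidence threshold and action strings are illustrative assumptions.
AUTO_CONTAIN_THRESHOLD = 0.9  # act autonomously only above this confidence

def respond(event, confidence):
    """Route a detection either to automatic containment or human review."""
    if confidence >= AUTO_CONTAIN_THRESHOLD:
        return f"auto: isolate device {event['device']}"
    return f"queue for analyst review: {event['id']}"

print(respond({"id": "evt-1", "device": "laptop-42"}, 0.97))
print(respond({"id": "evt-2", "device": "laptop-43"}, 0.55))
```

Tuning the threshold is itself a governance decision: set it too low and the system risks the overreactions described above, such as wiping a laptop unnecessarily.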

Building a Roadmap and Proving ROI

To unlock the benefits of AI, your business needs a strong data
and governance foundation. Move from defense to strategy by first
assessing whether your current systems can support AI —
identifying gaps in data structure, quality, and access.

Next, define clear goals and ROI metrics. For example:

  • How much time does AI save in daily operations?

  • How quickly are threats identified post-AI deployment?

  • What are the cost savings from prevented incidents?

Begin with a pilot program using an off-the-shelf AI product. If
it shows value, scale into customized prompts or embedded tooling
that fits your specific business systems.
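The ROI questions above can be reduced to a small calculation. Every figure in this sketch is a made-up assumption for illustration, not a benchmark from any real deployment:

```python
# Hypothetical ROI sketch for an AI security pilot.
# All inputs are illustrative assumptions, not real-world benchmarks.
def security_ai_roi(hours_saved_per_week, hourly_rate,
                    incidents_prevented, avg_incident_cost,
                    annual_tool_cost):
    """Return (annual benefit, ROI ratio) from labor savings and avoided losses."""
    labor_savings = hours_saved_per_week * 52 * hourly_rate
    loss_avoided = incidents_prevented * avg_incident_cost
    benefit = labor_savings + loss_avoided
    return benefit, (benefit - annual_tool_cost) / annual_tool_cost

# Example: 20 analyst-hours/week saved at $85/hr, 2 incidents prevented
benefit, roi = security_ai_roi(20, 85, 2, 120_000, 150_000)
print(f"annual benefit ${benefit:,.0f}, ROI {roi:.1%}")
```

Even a rough model like this gives a pilot program a concrete success criterion to scale against.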

Prompt Engineering to Empower Your Team

Your teams can get better results from AI by using structured
prompts. A well-designed prompt ensures your AI tools deliver
clear, useful, business-ready outputs.

Example prompt:

“Summarize the Microsoft 365 event with ID
‘1234’ to brief executive leadership. Include the event
description, threat level, correlated alerts, and mitigation steps
— in plain language suitable for a 10-minute
presentation.”

This approach supports internal decision-making, board
reporting, and team communication — all essential for
managing cyber risks effectively.
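A structured prompt like the one above can be captured as a reusable template so every briefing follows the same shape. The template fields and helper function below are hypothetical conveniences, not part of any product:

```python
# Sketch of a reusable prompt template for incident briefings.
# The function name and field names are hypothetical examples.
INCIDENT_BRIEF_PROMPT = (
    "Summarize the {platform} event with ID '{event_id}' to brief "
    "executive leadership. Include the event description, threat level, "
    "correlated alerts, and mitigation steps, in plain language "
    "suitable for a {minutes}-minute presentation."
)

def build_brief_prompt(platform, event_id, minutes=10):
    """Fill the template so every briefing request has the same structure."""
    return INCIDENT_BRIEF_PROMPT.format(
        platform=platform, event_id=event_id, minutes=minutes)

print(build_brief_prompt("Microsoft 365", "1234"))
```

Templating keeps the outputs consistent across the team, which matters when they feed board reports and disclosure workflows.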

Don’t Wait: Make AI Part of Your Cybersecurity Strategy

AI is no longer a “nice to have”; it’s a core
component of resilient, responsive cybersecurity programs.
Organizations that act now and implement AI strategically will be
better equipped to manage both today’s threats and
tomorrow’s compliance demands.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.





The ICO’s role in balancing AI development

The Innovation Platform spoke with Sophia Ignatidou, Group Manager, AI Policy at the Information Commissioner’s Office, about its role in regulating the UK’s AI sector, balancing innovation and economic growth with robust data protection measures.

Technology is evolving rapidly, and as artificial intelligence (AI) becomes more integrated into various aspects of our lives and industries, the role of regulatory bodies like the Information Commissioner’s Office (ICO) becomes crucial.

To explore the ICO’s role in the AI regulatory landscape, Sophia Ignatidou, Group Manager of AI Policy at the ICO, elaborates on the office’s comprehensive approach to managing AI development in the UK, emphasising the opportunities AI presents for economic growth, the inherent risks associated with its deployment, as well as the ethical considerations organisations must address.

What is the role of the Information Commissioner’s Office (ICO) in the UK’s AI landscape, and how does it enforce and raise awareness of AI legislation?

The ICO is the UK’s independent data protection authority and a horizontal regulator, meaning our remit spans both the public and private sectors, including government. We regulate the processing of personal data across the AI value chain: from data collection to model training and deployment. Since personal data underpins most AI systems that interact with people, our work is wide-ranging, covering everything from fraud detection in the public sector to targeted advertising on social media.

Our approach combines proactive engagement and regulatory enforcement. On the engagement side, we work closely with industry through our Enterprise and Innovation teams, and with the public sector via our Public Affairs colleagues. We also provide innovation services to support responsible AI development, with enforcement reserved for serious breaches. We also focus on public awareness, including commissioning research into public attitudes and engaging with civil society.

What opportunities for innovation and economic growth does AI present, and how can these be balanced with robust data protection?

AI offers significant potential to drive efficiency, reduce administrative burdens, and accelerate decision-making by identifying patterns and automating processes. However, these benefits will only be realised if AI addresses real-world problems rather than being a “solution in search of a problem.”

The UK is home to world-class AI talent and continues to attract leading minds. We believe that a multidisciplinary approach, combining technical expertise with insights from social sciences and economics, is essential to ensure AI development reflects the complexity of human experience.

Crucially, we do not see data protection as a barrier to innovation. On the contrary, strong data protection is fundamental to sustainable innovation and economic growth. Just as seatbelts enabled the safe expansion of the automotive industry, robust data protection builds trust and confidence in AI.

What are the potential risks associated with AI, and how does the ICO assess and mitigate them?

AI is not a single technology but an umbrella term for a range of statistical models with varying complexity, accuracy, and data requirements. The risks depend on the context and purpose of deployment.

When we identify a high-risk AI use case, we typically require the organisation, whether developer or deployer, to conduct a Data Protection Impact Assessment (DPIA). This document should outline the risks and the measures in place to mitigate them. The ICO assesses the adequacy of these DPIAs, focusing on the severity and likelihood of harm. Failure to provide an adequate DPIA can lead to regulatory action, as seen in our preliminary enforcement notice against Snap in 2023.

On a similar note, how could emerging technologies like blockchain or federated learning help resolve data protection issues?

Emerging technologies such as federated learning can help address data protection challenges by reducing the amount of personal information processed and improving security. Federated learning allows models to be trained without centralising raw data, which lowers the risk of large-scale breaches and limits exposure of personal information. When combined with other privacy-enhancing technologies, it further mitigates the risk of attackers inferring sensitive data.
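Federated learning's core mechanic, training locally so that only model parameters (never raw data) leave each client, can be sketched in a few lines. The linear model, synthetic client data, and learning rate below are illustrative assumptions, not a recommended setup:

```python
# Minimal federated-averaging sketch: clients train locally and the server
# only ever sees model weights, never the raw (X, y) data. Data is synthetic.
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a client's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three clients, each with its own private dataset
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

weights = np.zeros(2)
for _ in range(100):  # communication rounds
    # Server averages locally updated weights; raw data never leaves a client
    weights = np.mean([local_step(weights, X, y) for X, y in clients], axis=0)

print(weights)  # should approach [2, -1]
```

The privacy benefit is structural: a breach of the central server exposes model parameters, not the underlying personal records, and combining this with other privacy-enhancing technologies further limits what an attacker could infer.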

Blockchain, when implemented carefully, can strengthen integrity and accountability through tamper-evident records, though it must be designed to avoid unnecessary on-chain disclosure. Our detailed guidance on blockchain will be published soon and can be tracked via the ICO’s technology guidance pipeline.

What ethical concerns are associated with AI, and how should organisations address them? What is the ICO’s strategic approach?

Data protection law embeds ethical principles through its seven core principles: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; security; and accountability. Under the UK GDPR’s “data protection by design and by default” requirement, organisations must integrate these principles into AI systems from the outset.

Our recently announced AI and Biometrics Strategy sets out four priority areas: scrutiny of automated decision-making in government and recruitment, oversight of generative AI foundation model training, regulation of facial recognition technology in law enforcement, and development of a statutory code of practice on AI and automated decision-making. This strategy builds on our existing guidance and aims to protect individuals’ rights while providing clarity for innovators.

How can the UK keep pace with emerging AI technologies and their implications for data protection?

The UK government’s AI Opportunities Plan rightly emphasises the need to strengthen regulators’ capacity to supervise AI. Building expertise and resources across the regulatory landscape is essential to keep pace with rapid technological change.

How does the ICO engage internationally on AI regulation, and how influential are other countries’ policies on the UK’s approach?

AI supply chains are global, so international collaboration is vital. We maintain active relationships with counterparts through forums such as the G7, OECD, Global Privacy Assembly, and the European Commission. We closely monitor developments like the EU AI Act, while remaining confident in the UK’s approach of empowering sector regulators rather than creating a single AI regulator.

What is the Data (Use and Access) Act, and what impact will it have on AI policy?

The Data (Use and Access) Act requires the ICO to develop a statutory Code of Practice on AI and automated decision-making. This will build on our existing non-statutory guidance and incorporate recent positions, such as our expectations for generative AI and joint guidance on AI procurement. The code will provide greater clarity on issues such as research provisions and accountability in complex supply chains.

How can the UK position itself as a global leader in AI, and what challenges does the ICO anticipate?

The UK already plays a leading role in global AI regulation discussions. For example, the Digital Regulation Cooperation Forum, bringing together the ICO, Ofcom, CMA and FCA, has been replicated internationally. The ICO was also the first data protection authority to provide clarity on generative AI.

Looking ahead, our main challenges include recruiting and retaining AI specialists, providing regulatory clarity amid rapid technical and legislative change, and ensuring our capacity matches the scale of AI adoption.

Please note, this article will also appear in the 23rd edition of our quarterly publication.


