Nvidia CEO Jensen Huang has downplayed Washington’s concerns that the Chinese military will use advanced U.S. AI tech to improve its capabilities. Mr. Huang said in an interview with CNN that China’s People’s Liberation Army (PLA) will avoid American tech the same way that the U.S.’s armed forces avoid Chinese products.
These comments come on the heels of an open letter [PDF] from U.S. senators to the CEO, asking him to “refrain from meeting with representatives of any companies that are working with the PRC’s military or intelligence establishment…or are suspected to have engaged in activities that undermine U.S. export controls.”
“…Depriving someone of technology is not a goal, it’s a tactic — and that tactic was not in service of the goal,” said the Nvidia CEO during the interview. “Just like we want the world to be built on the American dollar, using the American dollar as the global standard, we want the American tech stack to be the global standard.” He also added, “In order for America to have AI leadership, it needs to make sure the American tech stack is available to markets all over the world, so that amazing developers, including the ones in China, are able to build on American tech stack so that AI runs best on the American tech stack.”
When CNN’s Fareed Zakaria asked him about the Chinese PLA’s use of this tech, Huang said that it’s not going to be an issue. “The Chinese military [is] no different [from] the American military: [they] will not seek each other’s technology to be built on top [of each other]. They simply can’t rely on it — it could be, of course, limited at any time,” Huang answered. “Not to mention, there’s plenty of computing capacity in China already. If you just think about the number of supercomputers that are in China, built by amazing Chinese engineers, that are already in operation — they don’t need Nvidia’s chips or American tech stacks in order to build their military.”
Chinese operators of these smuggled AI chips would have a harder time getting firmware updates and likely won’t have access to Nvidia’s advanced cloud tools and enterprise platforms. However, because Nvidia still sells export-compliant GPUs to China, the platform and cloud software can still potentially work with the banned higher-power equipment.
Aside from that, it would probably be difficult for the U.S. to disable these AI GPUs remotely, if it came to that. After all, Nvidia would have a harder time selling its chips if a remote kill switch existed, which is partly why the U.S. has a bill in the works that could instead force geo-tracking tech onto high-end hardware. And even if such a technology did exist, China could simply air gap the systems to prevent them from being remotely killed.
Businesses are racing to adopt AI. But siloed tools, fragmented efforts, and lack of trust are slowing progress.
A new Forrester study, commissioned by Tines, surveyed over 400 IT leaders across North America and Europe. The study shows that governance and privacy compliance are both the top priorities and the biggest blockers to scaling AI.
The study also found that 88% of IT leaders say AI adoption remains difficult to scale without orchestration. Orchestration connects systems, tools, and teams so AI can run securely, transparently, and efficiently at scale. Without it, AI adoption stays fragmented and organizations struggle to deliver value.
The takeaway is clear: IT is primed to orchestrate AI across the enterprise. But first, teams must overcome blockers in governance, strategic alignment, and trust.
The biggest blockers to scaling AI
When it comes to scaling AI, governance is both a top priority and a top barrier. Forrester’s study found that over half (54%) of IT leaders say ensuring AI complies with privacy, governance, and regulatory standards is the highest priority for the next 12 months. Yet more than a third (38%) cite governance and security concerns as the biggest blockers to scaling AI.
This reflects a growing tension. A compliance-first approach to AI is essential. But if it isn’t effectively embedded into AI initiatives, it can also stall innovation and competitiveness.
AI introduces risks that existing governance processes weren’t built to handle, with many traditional approaches proving inadequate for AI’s real-time demands, speed, and complexity. Gaps in governance expose organizations to liabilities including bias, ethical breaches, shadow AI, and compliance failures that can lead to regulatory penalties and reputational damage.
Beyond security and governance concerns, the other top challenges when scaling AI include lack of budget or executive sponsorship, concerns about ROI, and fragmented ownership. Siloed AI initiatives and disconnected tools also present a barrier, making it difficult to connect systems across departments for greater visibility, control, and effectiveness.
Orchestration is the missing link to alignment, trust, and scale
AI orchestration offers a way forward. It unifies people, processes, technology, and workflows into a connected system that improves efficiency, transparency, and governance, addressing many of the key blockers that stall scaling AI.
Enabling this type of oversight is a top priority. According to the research, 73% say visibility across AI workflows and systems is critical. To achieve this, nearly half (49%) of organizations are looking for partners that provide end-to-end centralized solutions to overcome siloed workflows and fragmented AI efforts.
The cost of inaction is high. Without orchestration, the study shows, organizations face difficulties like:
Ensuring AI practices are ethical and transparent (50%)
Security concerns related to data access, compliance issues, inconsistent governance, auditing, and shadow AI (44%)
Lack of employee trust in the outcomes generated by AI (40%)
These challenges don’t just slow down your AI initiatives: they risk halting progress on core business goals, damaging brand reputation, and undermining trust.
IT is primed to lead orchestration
Some 86% of respondents believe that IT is uniquely positioned to orchestrate AI across workflows, systems, and teams. But while organizations are increasingly recognizing IT as an enabler of efficiency and innovation, many still underestimate its broader strategic potential.
Today, 40% of respondents say IT’s reactive focus on troubleshooting and uptime is what holds it back from being seen as a driver of business outcomes at the board level. Similarly, 38% believe that other departments frequently or occasionally overlook or underestimate IT’s potential to improve overall organizational efficiency.
With AI orchestration, IT has the opportunity to take on a key strategic role that shapes its organization’s future success. IT leaders are ready: 38% of survey respondents believe that IT should own and lead AI orchestration, while 28% say it should act as the coordination hub between different business functions.
IT is primed to lead this charge, as it is well placed to connect strategy, teams, and data. Through AI orchestration, it can facilitate secure, compliant adoption and scaling of AI that meets robust governance requirements.
This won’t just fuel organization-wide efficiency; it will unlock tangible business value, such as enhanced collaboration between business units, accelerated digital transformation, and improved employee productivity, positioning IT as a significant driver of impact.
Key recommendations
To strengthen its strategic role, IT should:
Orchestrate AI for visibility and alignment: Lead orchestration to connect tools, improve transparency, and align teams.
Embed governance by design: Orchestration provides a framework to build compliance and security into AI workflows from the start, ensuring consistency at scale.
Frame outcomes in business value: To secure executive sponsorship, IT should frame orchestration’s impact in terms of ROI, efficiency gains, and revenue opportunities unlocked.
“AI will replace QA.” It was not the first time I had heard this claim. But when someone said it to me directly, I asked them to demonstrate how, and they simply couldn’t.
That exchange occurred shortly after my co-founder Guy and I launched our second company, BlinqIO. This time, we focused our efforts on building a fully autonomous AI Test Engineer.
We developed an advanced platform capable not only of understanding applications under test and generating and maintaining robust test suites, but also of recovering from failures independently.
I’m pleased to say that the technology worked. However, in conversations with numerous global enterprises, one concern came up constantly: not functionality, but trust and control when it comes to AI tools.
Tal Barmeir
CEO and Co-Founder, BlinqIO.
The Limitations of Shift Left
Across industries, we are seeing organizations come under intense pressure to release software faster than ever.
Methodologies like Agile, CI/CD, DevOps, and Shift Left were all introduced to accelerate delivery without compromising quality. However, as Shift Left was implemented, its original intent was often either misunderstood or misapplied.
Originally intended to embed testing earlier in the development lifecycle, Shift Left too often resulted in the marginalization or elimination of dedicated QA roles altogether.
Developers were soon being asked not only to build features but to verify their correctness without independent validation. On paper, this may appear to be an efficient way of working; however, in reality, the consequences are evident.
In my experience, developers lack incentives to test their own code, and, as a result, coverage frequently becomes deprioritized.
From my perspective, Shift Left did not fail because it was inherently flawed; rather, it failed because it was not yet complete or ready to be used as intended.
I believe successful implementation of Shift Left requires rethinking collaboration models, redefining shared accountability, and embedding quality throughout the software life cycle.
In companies where it works, teams are not simply writing tests earlier but reframing how risk is assessed, how requirements are defined, and how feedback is used to drive continuous improvement.
Simply removing QA and assuming innovation will compensate is a false economy. Trust me when I say it doesn’t work.
FOAI: Fear of AI
Today, Artificial Intelligence is poised to give Shift Left a second chance. But widespread adoption remains hampered by a new and growing barrier, something I like to call ‘FOAI’: Fear of AI.
This fear is not rooted in science fiction. It is present among even the most innovative employees: the fear of being accountable for decisions made by systems they don’t understand.
Most importantly, it’s the fear of relinquishing control to technology introduced without adequate explanation or transparency.
In theory, I think most tech founders would agree that AI should be embraced. In practice, it is often introduced as a black box: opaque, seen as unexplainable, yet mandatory.
Teams are expected to trust something they cannot interrogate. This, in turn, undermines confidence and fuels resistance. I have witnessed how quickly resistance can dissolve when people are invited into the AI adoption process.
When teams are able to fully understand how the AI actually functions, how it prioritizes tests, and why it flags certain failures, their entire perspective tends to shift.
Teams that began with skepticism are now using our platform to autonomously manage thousands of tests with confidence. This transformation was not just about the technology but the trust that developed once transparency and control were brought into the mix.
Leadership in AI – A Personal Perspective
I believe trust is key when it comes to technology adoption. In addition, I believe it’s helpful to also identify who within a team shapes these technologies and helps implement them.
Working in AI and deep tech as a female founder means navigating often subtle, persistent barriers. There is usually an unspoken expectation to prove one’s technical authority over and over. These barriers reflect deeper assumptions about who is seen as qualified to help companies build their future with AI.
What has helped me, personally and professionally, is visibility. When women are seen founding and leading AI companies, not just using AI but building it, it challenges some deeply rooted biases.
This is why I remain active not only as a speaker at events, but also in mentorship groups, panels, and one-on-one conversations, all of which help ease the transition to AI and build acceptance of it.
To me, inclusion must go beyond representation. It requires access to influence. It means being present in the rooms where decisions about technology, ethics, and impact are being made. I think that the future of AI should be co-created by everyone using it.
Decoding the Language of AI
In the current landscape, artificial intelligence is surrounded by an often overwhelming level of jargon, from LLMs, agents, neural networks, and synthetic data to autonomous systems. While AI-related terminology can be daunting, it needs to be understood.
One such term is accountability: in high-stakes domains such as healthcare, finance, and enterprise software testing, teams need to know not just what happened, but why. Another is agentic behavior: systems that operate autonomously on behalf of humans.
This kind of functionality is already present in modern AI platforms. But to use it safely and effectively, teams must be able to monitor and adjust how AI systems function in real time. Without this, building that much-needed trust is nearly impossible.
The Future of AI – Quiet, Powerful, and Integrated
I don’t believe AI will change the world through one dramatic breakthrough. I think that its most powerful effects will unfold quietly and that this will happen within infrastructure, beneath user interfaces, and behind the scenes.
Future-ready AI will not necessarily announce itself with glossy demos. Its contributions will be measured not in headlines, but in release stability, in faster recovery cycles, and in the confidence with which teams ship software.
This shift will also reshape the value we place on human capabilities. As AI increasingly automates repetitive, mechanical tasks, the skills that rise to prominence will be curiosity, strategic thinking, and the ability to frame complex problems.
In my view, these are the traits that will define effective leadership in an AI-enabled world, not technical proficiency alone.
The companies that will thrive in the future will be those that integrate AI in a thoughtful manner. Those that treat trust, quality, and explainability as essential design principles and not afterthoughts will be setting themselves up for success. I also think those that view AI not as a replacement for human insight, but as an enabler of it will perform well.
Trust me when I say that AI will not replace workers. However, I do believe that ignoring its potential or implementing it without transparency may hinder your organization’s future.
As for Shift Left? It may have fallen short the first time. But I think that with the right application of AI, we have an opportunity to try again, this time with the tools, mindset, and visibility to get it right.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc.
Equator Global is rolling out a new training platform upgrade that uses cutting-edge technology to make learning, product knowledge development, and sales support faster and easier for travel agents.
The enhanced training platform incorporates AI to put instant knowledge, ideas and support at travel agents’ fingertips.
The new ‘Advanced Intelligence’ platform blends human and artificial intelligence to deliver instant information (in more than 25 languages) on destinations, hotels, cruises, airlines, and other travel products.
Agents simply type in their travel question, and the ‘Auto-Expert’ generates quick and clear information within seconds. The new tool is also capable of creating and suggesting itineraries.
To make learning more engaging and easier to absorb, the platform automatically generates Auto-Podcasts.
Each time an agent asks a question through the AI tool, the answer is transformed into a podcast, presented as a natural, discussion-style conversation between two lifelike hosts.
The podcasts allow travel agents to revisit the platform’s responses in an easy and fun way, continuing learning whether at work, on the go, or at home.
Equator Global’s CEO, Ian Dockreay, says: “This is just the start of the next stage of travel e-learning, marketing and information technology.”
Philip Micallef, the newly appointed Marketing and Account Manager at Equator Global, said: “I’m excited to have joined Equator Global at this stage of their expansion and development. Emerging knowledge technologies are really taking off into a whole new world of innovation and delivery, most of which we couldn’t have imagined just a few short years ago.”
Drawing on Equator Global’s more than 20 years of experience supporting over 350,000 travel agents and tour operators worldwide, the new AI tool is a revolutionary upgrade to how agents learn and sell.
To ensure the answers are reliable and accurate, Equator Global is working with its travel and tourism clients to create a bespoke digital knowledge cloud, with each client feeding in the data and sources that will deliver the best information for agents.