AI Research
To scale AI and bring Zero Trust security, look to the chips

Enabling secure and scalable artificial intelligence architectures for Defense Department and public sector missions depends on deploying the right compute technologies across the entire AI lifecycle, from data sourcing and model training to deployment and real-time inferencing (that is, drawing conclusions from new data).
At the same time, the AI pipeline can be secured through hardware-based semiconductor features such as confidential computing, which provide a trusted foundation. That foundation allows Zero Trust principles to be applied across both information technology (IT) and operational technology (OT) environments, even though OT has different security needs and constraints than traditional enterprise IT systems. Recent DoD guidance on Zero Trust specifically addresses OT systems, such as industrial control systems, which have become attack vectors for adversaries.
Breaking Defense discussed the diverse roles that chips play across AI and Zero Trust implementations with Steve Orrin, Federal Security Research Director and a Senior Principal Engineer with Intel.
Breaking Defense: In this conversation we’re going to be talking about chip innovation for public sector mission impact. So what does that mean to you?
Orrin: The way to think about chip innovation for the public sector is to understand that the public sector writ large is almost a macrocosm of the broader private-sector industries. Across the federal government and public sector ecosystem, with some exceptions, you’ll find almost every kind of use case, with many of the same usages and requirements that you find across multiple industries in the private sector: logistics and supply chain management, facilities operations and manufacturing, healthcare, and finance.
When we talk about chip innovation specific for the public sector, it’s this notion of taking private sector technology solutions and capabilities and federalizing them for the specific needs of the US government. There’s a lot of interplay there and, similarly, when we develop technologies for the public sector and for federal missions, oftentimes you find opportunities for commercializing those technologies to address a broader industry requirement.
With that as the baseline, we look at agencies’ requirements and whether their IT systems and infrastructure can scale to support their goals of enabling the end user to perform their job or mission. The DoD and certain other sectors often have a higher security bar, and in the Defense Department there’s also an edge component to the mission.
Being able to take enterprise-level capabilities and move them into edge and theater operations where you don’t necessarily have large-scale cloud infrastructure or other network access means you have to be more self-contained, more mobile. It’s about innovations that address specific mission needs.
One of the benefits of being Intel is that our chips are inside the cloud, the enterprise data center, the client systems, and the edge processing nodes. We exist across that entire ecosystem, including the network and wireless domains. We can bring the best of what’s being done in those various areas and apply it to specific mission requirements.
We also see a holistic view of cloud, on-prem, end-user, and edge requirements. We can look at the problem sets agencies are having from a more expansive approach, as opposed to a stovepipe view that looks only at desktop and laptop use cases or only at cloud applications.
This holistic view of the requirements enables us to help the government adopt the right technology for their mission. That comes to the heart of what we do. What the government needs is never one-size-fits-all when it comes to solving public-sector requirements.
It’s helping them achieve the right size, weight, and power profile, the right security requirements, and the right mission-enabling and environmental requirements to meet their mission needs where they are, whether that be in the cloud or at the pointy edge of the spear.

What’s required to enable secure, scalable AI architectures that advance technology solutions for national security?
From an Intel perspective, scalable AI means being able to go both horizontally and vertically to have the right kind of computing architecture for every stage of the lifecycle, from training to tuning, deployment, and inferencing. There are going to be different requirements for both SWaP (size, weight, and power) and the horsepower of the actual AI workload being performed.
Oftentimes you’ll find that the real challenge isn’t the AI training, which everyone focuses on because it feels like the big problem; that’s just the tip of the iceberg. When you look at the challenge, sometimes it’s around data latency or ingestion speeds. How do I get all of this data into the systems?
Maybe it’s doing federated learning because there’s too much data to put it all in one place and it’s all coming from different sensors. There are actually benefits to pushing that compute closer to where the data is being generated and doing federated learning out at the edge.
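To make that pattern concrete, here is a minimal federated-averaging sketch; the node count, data shapes, and update rule are hypothetical stand-ins rather than any specific Intel or DoD implementation. Each edge node trains on its own locally generated sensor data, and only model weights, not raw data, travel back for aggregation.

```python
import numpy as np

def local_update(weights: np.ndarray, sensor_data: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """Hypothetical local training step: each edge node nudges the shared model
    toward the statistics of its own sensor data without shipping that data anywhere."""
    pseudo_gradient = sensor_data.mean(axis=0) - weights  # stand-in for a real gradient
    return weights + lr * pseudo_gradient

def federated_average(global_weights: np.ndarray, node_datasets: list) -> np.ndarray:
    """One round of federated averaging: nodes train locally, only weights travel."""
    local_models = [local_update(global_weights.copy(), data) for data in node_datasets]
    return np.mean(local_models, axis=0)

# Three edge nodes, each with its own locally generated sensor readings
nodes = [np.random.rand(100, 8) for _ in range(3)]
weights = np.zeros(8)
for _ in range(5):
    weights = federated_average(weights, nodes)
print(weights.round(2))
```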
At the heart of why Intel is a key player here is understanding that it’s not a one-size-fits-all approach from a compute perspective, but rather about providing the right compute for the needs of the various places along that horizontal scale.
At the same time there’s the vertical scale, which is the need to do massive large language model training, or inference across thousands of sensors, and fusion of data across multimodal sensors that are capturing different kinds of data such as video, audio, and RF spectrum in order to get an accurate picture of what’s being seen across logistics and supply chains, for example.
I need to pull in location data of where supplies are across vendor supply chains. I need to be able to pull in information from my project management demand signal to understand what’s needed where, and from mission platforms like planes, vehicles, weapons systems, radar stations, and sensor technologies to know where I’m deploying people. Those are different kinds of data sets and structures that have to be fused together in order to enable supply chain and logistics management at scale.
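As a rough illustration of what that fusion looks like in practice, consider joining hypothetical location, demand, and platform feeds on shared keys; the column names and values below are invented for the sketch, not a real logistics schema.

```python
import pandas as pd

# Hypothetical feeds: depot stock levels, unit demand signals, and platform status
stock = pd.DataFrame({"item": ["fuel", "parts"], "depot": ["Depot A", "Depot B"],
                      "on_hand": [1200, 340]})
demand = pd.DataFrame({"item": ["fuel", "parts"], "unit": ["Unit 1", "Unit 2"],
                       "required": [900, 500]})
platforms = pd.DataFrame({"unit": ["Unit 1", "Unit 2"], "status": ["deployed", "in transit"]})

# Fuse the structured feeds on shared keys to expose shortfalls per unit
fused = demand.merge(stock, on="item").merge(platforms, on="unit")
fused["shortfall"] = (fused["required"] - fused["on_hand"]).clip(lower=0)
print(fused[["unit", "item", "on_hand", "required", "shortfall", "status"]])
```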
Being able to scale up computing power to meet those needs is about providing the right architecture for the different parts of the ecosystem and the AI pipeline.
Intel is helping defense and intelligence agencies adopt AI in ways that are secure, scalable, and aligned with Zero Trust principles, especially in operational technology environments as opposed to IT environments. Explain.
Operational technology has been around for a long time and is distinct from what is known as information technology or enterprise systems, where you have enterprise email and your classic collaboration and document management.
OT is everything that is not that: fire suppression and alerting systems, HVAC, the robots, machines, and error-detection technologies that do quality control. Those are the operational technologies that perform the various task-specific functions supporting the operations and mission of an organization; they are not your classic IT operations.
One of the interesting transitions over the last several years is that the actual technology in those OT environments now looks and feels a lot like IT. It’s a set of servers or client systems performing a fixed function, and the vendors are still your classic laptop and PC OEMs.
That mixing of IT-style equipment into OT environments has created a tension point over the years when it comes to things like management and how you secure OT systems versus IT, because OT systems are more mission critical. They’re more fixed-function, and they often don’t have the space or the luxury of heaps of security tools running on and monitoring them, because they have real-time reliability requirements like guaranteed uptime.
The DoD is coming out with new Zero Trust guidance specifically for OT, and the reason is that IT Zero Trust principles don’t easily translate to OT environments. There are different constraints and limitations in OT, as well as some higher-level requirements, so there needs to be an understanding that the two are different when it comes to applying Zero Trust.
What do you suggest?
One of the first steps that I’ve talked about is getting the right people in the room for those initial phases of policy definition and architectural planning. Oftentimes you’ll find, and we’ve seen this a lot in the private sector, that when they start looking at OT, the IT people come up with security policy and force it on the OT systems. More often than not that fails miserably because OT just isn’t like IT. You don’t have the same flexibility and you have more stringent requirements for the actual operations side of OT.
That calls for crafting subset policies for the system, containerizing it from a segmentation or policy perspective, and monitoring against those policies. The nice thing about OT is you don’t have to worry about every possible scenario. Take the example of a laptop: users can do almost anything on it. They can browse the Internet, send email, work with documents, and collaborate on Teams calls. That means there’s a lot of security I have to worry about across the myriad usages enabled by that PC.
In an OT environment, you have a much smaller set of things the system is supposed to be doing, which means you can lock down that system, and access to it, to just key functions. That gives you a much tighter policy in OT than you would be able to apply on the IT side. You can craft very specific policies, monitoring, and access controls for that particular OT or mission platform. That is a powerful way of applying it.
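To illustrate the idea, here is a minimal deny-by-default sketch of such a fixed-function policy; the process names, hosts, and ports are hypothetical, and the format is illustrative rather than any particular product’s policy language.

```python
# Hypothetical allowlist policy for a fixed-function OT controller:
# only named processes may run, and only named destinations may be reached.
ALLOWED_PROCESSES = {"plc_runtime", "sensor_agent", "patch_client"}
ALLOWED_DESTINATIONS = {("historian.local", 443), ("scada-master.local", 502)}

def process_allowed(name: str) -> bool:
    """Deny by default: any process not on the allowlist is a policy violation."""
    return name in ALLOWED_PROCESSES

def connection_allowed(host: str, port: int) -> bool:
    """Segmentation in miniature: only the known endpoints may be contacted."""
    return (host, port) in ALLOWED_DESTINATIONS

# A monitoring loop would alert on any violation rather than trying to model
# every possible user behavior, as it would have to on a general-purpose laptop.
assert process_allowed("plc_runtime")
assert not connection_allowed("example.com", 80)
```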
If you look at some of the guidance that’s coming out, the Navy has recently published specific OT guidance, and NIST is coming out with its own. It’s about tying the policies to the environment, crafting a subset of security controls specific to the domain, and then leveraging the right technologies you need in order to achieve that goal.
Final thoughts?
Intel has technology and architectures that provide the right compute in the right place, where and when the customer needs it. We understand the vertical and horizontal scale requirements, and we provide the security, reliability, and performance you need for those environments across your mission areas.
Second, when applying Zero Trust, it’s not one-size-fits-all. You need to craft your Zero Trust policies, controls, and technologies to meet the requirements of your mission and of your enterprise IT and OT technologies.
Then, much of the technology and the security capability you need is already built into the system, whether that be network segmentation, secure boot, or confidential computing. The hardware and software that have often already been deployed give you a lot of those capabilities. You just need to take advantage of them.
To learn more about Intel and AI visit www.intel.com/usai.
AI Research
Top AI Code Generation Tools of 2025 Revealed in Info-Tech Research Group’s Emotional Footprint Report
The recently published 2025 AI Code Generation Emotional Footprint report from Info-Tech Research Group highlights the top AI code generation solutions that help organizations streamline development and support innovation. The report’s insights are based on feedback from users on the global IT research and advisory firm’s SoftwareReviews platform.
TORONTO, Sept. 3, 2025 /PRNewswire/ – Info-Tech Research Group has published its 2025 AI Code Generation Emotional Footprint report, identifying the top-performing solutions in the market. Based on data from SoftwareReviews, a division of the global IT research and advisory firm, the newly published report highlights the five champions in AI-powered code generation tools.
AI code generation tools make coding easier by taking care of repetitive tasks. Instead of starting from scratch, developers get ready-made snippets, smoother workflows, and support built right into their IDEs and version control systems. With machine learning and natural language processing behind them, these tools reduce mistakes, speed up projects, and give developers more room to focus on creative problem solving and innovation.
Info-Tech’s Emotional Footprint measures high-level user sentiment. It aggregates emotional response ratings across 25 proactive questions, creating a powerful indicator of overall user feeling toward the vendor and product. The result is the Net Emotional Footprint, or NEF, a composite score that reflects the overall emotional tone of user feedback.
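As a rough illustration of how a net composite like the NEF can be derived from individual emotional ratings, here is a small sketch; it assumes the score is the share of positive responses minus the share of negative responses, a common construction for net scores, though the exact SoftwareReviews methodology may differ.

```python
from typing import List

def net_emotional_footprint(responses: List[str]) -> float:
    """Toy net score: percentage of positive ratings minus percentage of negative ratings."""
    if not responses:
        return 0.0
    positive = sum(r == "positive" for r in responses)
    negative = sum(r == "negative" for r in responses)
    return round(100.0 * (positive - negative) / len(responses), 1)

# Example: 24 positive and 1 neutral rating across 25 questions yields +96
print(net_emotional_footprint(["positive"] * 24 + ["neutral"]))  # 96.0
```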
Data from 1,084 end-user reviews on Info-Tech’s SoftwareReviews platform was used to identify the top AI code generation tools for the 2025 Emotional Footprint report. The insights support organizations looking to streamline development, improve code quality, and scale their software delivery capabilities to drive innovation and business growth.
The 2025 AI Code Generation Tools – Champions are as follows:
- Visual Studio IntelliCode, +96 NEF, ranked high for delivering more than promised.
- ChatGPT 5, +94 NEF, ranked high for its effectiveness.
- GitHub Copilot, +94 NEF, ranked high for its transparency.
- Replit AI, +96 NEF, ranked high for its reliability.
- Amazon Q Developer, +94 NEF, ranked high for helping save time.
Analyst Insight:
“Organizations that adopt AI code generation tools gain a significant advantage in software delivery and innovation,” says Thomas Randall, a research director at Info-Tech Research Group. “These tools help developers focus on complex, high-value work, improve code quality, and reduce errors. Teams that delay adoption risk slower projects, lower-quality software, and missed opportunities to innovate and stay competitive.”
User assessments of software categories on SoftwareReviews provide an accurate and detailed view of the constantly changing market. Info-Tech’s reports are informed by the data from users and IT professionals who have intimate experience with the software throughout the procurement, implementation, and maintenance processes.
Read the full report: Best AI Code Generation Tools 2025
For more information about Info-Tech’s SoftwareReviews, the Data Quadrant, or the Emotional Footprint, or to access resources to support the software selection process, visit softwarereviews.com.
About Info-Tech Research Group
Info-Tech Research Group is one of the world’s leading research and advisory firms, proudly serving over 30,000 IT and HR professionals. The company produces unbiased, highly relevant research and provides advisory services to help leaders make strategic, timely, and well-informed decisions. For nearly 30 years, Info-Tech has partnered closely with teams to provide them with everything they need, from actionable tools to analyst guidance, ensuring they deliver measurable results for their organizations.
To learn more about Info-Tech’s divisions, visit McLean & Company for HR research and advisory services and SoftwareReviews for software buying insights.
Media professionals can register for unrestricted access to research across IT, HR, and software, and hundreds of industry analysts through the firm’s Media Insiders program. To gain access, contact [email protected].
For information about Info-Tech Research Group or to access the latest research, visit infotech.com and connect via LinkedIn and X.
About SoftwareReviews
SoftwareReviews is a division of Info-Tech Research Group, a world-class technology research and advisory firm. SoftwareReviews empowers organizations with the best data, insights, and advice to improve the software buying and selling experience.
For buyers, SoftwareReviews’ proven software selection methodologies, customer insights, and technology advisors help maximize success with technology decisions. For providers, the firm helps build more effective marketing, product, and sales processes with expert analysts, how-to research, customer-centric marketing content, and comprehensive analysis of the buyer landscape.
SOURCE Info-Tech Research Group
AI Research
Vanderbilt launches Enterprise AI and Computing Innovation Studio

Vanderbilt University has established the Enterprise AI and Computing Innovation Studio, a groundbreaking collaboration between VUIT, the Amplify Generative AI Innovation Center and the Data Science Institute. This studio aims to prototype and pilot artificial intelligence–driven innovations that enhance how we learn, teach, work and connect.
Each of the partner areas has a strong record of addressing challenges and solving problems independently. By uniting this expertise, the studio can accelerate innovation and expand the capacity of the university to harness emerging technologies to support its mission.
Through the studio, students will have immersive experiences collaborating on AI-focused projects. Staff will deepen their skills through engagement with AI research. In addition, the studio underscores Vanderbilt’s position as a destination for global talent in artificial intelligence and related fields.
Members of the university community who have specific challenges or opportunities that AI may solve or address can submit a consultation request.
AI Research
Northwestern Magazine: Riding the AI Wave

Although Hammond says he barely remembers his life before computers and coding, there was indeed a time when his world was much more analog. Hammond grew up on the East Coast and spent his high school years in Salt Lake City, where his mother was a social worker and his father was a professor of archaeology at the University of Utah. Over the course of 50 years, Philip C. Hammond excavated several sites in the Middle East and made dozens of trips to Jordan, earning him the nickname Lion of Petra. Kris joined these expeditions for three summers, working as his father’s surveyor and draftsman.
“Now, once a week, I ask ChatGPT for a biography of my father, as an experiment,” Hammond says, bemused. “Sometimes, it gives me a beautifully inaccurate bio that makes him sound like Indiana Jones. Other times, it says he is a tech entrepreneur and that I have followed in his footsteps.”
While those biographical tidbits are largely AI-generated falsehoods, Hammond and his father have both traced intelligence from different worlds: one etched in stone and another in silicon. Wanting a deeper understanding of the meaning of intelligence and thought, Hammond studied philosophy as an undergraduate at Yale University and planned to go to law school after graduation. But his trail diverged when a fellow member of a local sci-fi club suggested that Hammond, who had taken one computer science class, try working as a programmer.
“After nine months as a programmer, I decided that’s what I wanted to do for a living,” Hammond says.
That sci-fi club guy was Chris Riesbeck, who is also now a professor of computer science at McCormick. Hammond earned his doctorate in computer science from Yale in 1986. But he didn’t abandon philosophy entirely. Instead, he applied those abstract frameworks — consciousness, knowledge, creativity, logic and the nature of reason — to the pursuit of intelligent systems.
“The structure of thought always fascinated me,” Hammond says. “Looking at it from the perspective of how humans think and how machines ‘think’ — and how we can ‘think’ together — became a driver for me.”
But the word “think” is tenuous in this context, he says. There’s a fundamental and important distinction between true human cognition and what current AI can do — namely, sophisticated mimicry. AI isn’t trying to critically assess data to devise correct answers, says Hammond. Instead, it’s a probabilistic engine, sifting through language likelihoods to finish a sentence — like the predictive text you might see on your phone while composing a message. It is seeking the most likely conclusion to any given string of words.
“These are responsive systems,” he says. “They aren’t reasoning. They just hold words together. That’s why they have problems answering questions about recent events.”
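A toy sketch of the next-word prediction Hammond is describing appears below; it is a tiny bigram model built on an invented corpus, nothing like a production large language model, but it captures the same “most likely continuation” idea.

```python
from collections import Counter, defaultdict

# Invented toy corpus; a real model is trained on vastly more text
corpus = "the model predicts the next word the model predicts the most likely word".split()

# Count how often each word follows each other word (a bigram table)
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely continuation; no reasoning involved."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))    # 'model' -- the most frequent follower of 'the' in this corpus
print(predict_next("model"))  # 'predicts'
```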