
Tools & Platforms

Pentagon Elevates AI as Cornerstone of FY2026 Defense Tech Strategy

Insider Brief

  • The Pentagon’s FY2026 RDT&E budget prioritizes artificial intelligence as a core operational technology, with over $2.2 billion allocated across military domains.
  • AI and autonomy are integrated into systems ranging from drones and submarines to decision support and electronic warfare, marking a shift from experimentation to deployment.
  • Quantum technologies and space infrastructure continue to advance alongside AI, supporting secure communications, navigation, and sensor systems across classified and cross-service programs.

Artificial intelligence is no longer experimental for the U.S. Department of Defense. It’s operational, and it’s everywhere.

The Pentagon’s newly released fiscal year 2026 Research, Development, Test, and Evaluation (RDT&E) budget allocates more than $2.2 billion to artificial intelligence and machine learning initiatives, embedding the technology across every service branch and functional area—from battlefield targeting to undersea systems. The shift signals a move beyond pilot programs toward system-wide deployment of AI, as part of a broader strategy to build long-term technological dominance.

The total RDT&E request tops $179 billion, a substantial increase from the previous year’s $141 billion. Much of that growth reflects a deliberate realignment away from siloed development and toward a converging stack of deep technologies: AI, autonomy, quantum computing, cybersecurity, and advanced space infrastructure.

AI and Autonomy Now Embedded

Artificial intelligence and autonomy dominate the FY2026 budget narrative, not as standalone programs, but as integrated enablers within nearly every major initiative. AI is being applied across domains — air, land, sea, cyber, and space — often in tandem with autonomous systems like drones and robotic platforms.

Tactical Autonomy (0602022F) and Undersea AI/ML (0604797N) are two of the more visible line items in the budget, pointing to the increasing role of AI in mission-critical decision-making and control. Autonomy in this context refers to systems that can operate with minimal or no human input—whether that’s a drone flying a route, a robotic vehicle navigating terrain, or a software agent selecting targets in complex environments.

The Army’s “Artificial Intelligence and Machine Learning Technologies” program (0602180A) and the Air Force’s broader autonomy efforts show how AI is now treated as infrastructure, not innovation. In electronic warfare, battlefield management, surveillance, and logistics, AI is increasingly the core layer powering decisions.

A separate category—Software and Digital Technology Pilot Programs—received a $1.06 billion request, aimed at building out the software infrastructure needed to support scalable, secure, and adaptive AI deployment. These investments reflect an understanding that AI effectiveness depends not only on algorithms but also on data readiness, compute architecture, and integration into command systems.

Quantum Quietly Gains Ground

While AI takes center stage in terms of funding volume, quantum technology continues to move from research to operational relevance. Though still fragmented across services and often embedded in classified initiatives, quantum systems are now a consistent feature of the RDT&E budget.

The most explicit program is the Defense-Wide “Quantum Application” line item (0603330D8Z), which cuts across all service branches and supports transitioning quantum concepts into military use. While the public document doesn’t disclose funding levels for this item, its inclusion indicates growing urgency around quantum-enabling technologies—particularly in navigation, secure communications, and early warning systems.

Elsewhere in the budget, Assured Positioning, Navigation, and Timing (0604120A) includes exploration of quantum-based inertial sensors that can operate independently of GPS—a critical capability in contested or degraded satellite environments. Post-quantum cryptography, which protects communications from future quantum computers capable of breaking today’s encryption, appears under various classified and cyber modernization efforts.

Quantum sensing and secure timing are also likely to play supporting roles in the Pentagon’s broader space and AI architectures, though much of the detail remains hidden in defense-wide and classified line items.

Space Force as a Convergence Platform

Now in its fifth year, the U.S. Space Force continues to grow its role as the convergence point for deep tech. The FY2026 request for the Space Force’s RDT&E programs totals over $29 billion. Of that, $4.3 billion is allocated to advanced prototyping, while $12.5 billion goes to operational systems.

Key investments include the Resilient Missile Warning and Tracking architecture in low and medium Earth orbit (LEO and MEO), the Evolved Strategic SATCOM program for hardened space communications, and the GPS III Follow-On for navigation. These systems increasingly rely on both AI-driven data analysis and quantum-enhanced sensors.

Space is no longer treated as a static domain for observation; it is now a dynamic environment for intelligence gathering, decision superiority, and real-time command. As AI and quantum sensing capabilities mature, they will likely be integrated directly into space-based platforms.

Hypersonics and the Role of AI

Another major budget area — hypersonics — demonstrates how AI and autonomy are no longer isolated tools but part of system-level design. More than $3 billion is allocated for hypersonic platforms, including the Hypersonic Attack Cruise Missile (HACM) and other long-range strike capabilities.

These weapons, which travel at speeds above Mach 5, require precise guidance, threat detection, and maneuverability. AI is already being used to support targeting, trajectory correction, and operational integration. Quantum navigation systems, still under development, could further enhance performance by providing position data independent of GPS.

From Research to Deployment

The budget outlines a full-stack approach to defense technology. Basic research remains steady at $2.27 billion. Advanced technology development is funded at $11.99 billion, while system development and demonstration programs reach $39.68 billion.

This structure allows the Department of Defense to push ideas from early-stage science to battlefield-ready systems. Prototyping—defined in the budget as building and testing early versions of new technologies before full-scale production—serves as a key bridge between lab breakthroughs and military deployment.

The FY2026 budget includes over 200 Army line items alone, covering everything from robotics and electronic warfare to synthetic training environments and biotechnology. The common thread running through them is convergence: each technology is increasingly developed with cross-domain integration in mind.




IT Summit focuses on balancing AI challenges and opportunities — Harvard Gazette

Exploring the critical role of technology in advancing Harvard’s mission and the potential of generative AI to reshape the academic and operational landscape were the key topics discussed during the University’s 12th annual IT Summit. Hosted by the CIO Council, the June 11 event attracted more than 1,000 Harvard IT professionals.

“Technology underpins every aspect of Harvard,” said Klara Jelinkova, vice president and University chief information officer, who opened the event by praising IT staff for their impact across the University.

That sentiment was echoed by keynote speaker Michael D. Smith, the John H. Finley Jr. Professor of Engineering and Applied Sciences and Harvard University Distinguished Service Professor, who described “people, physical spaces, and digital technologies” as three of the core pillars supporting Harvard’s programs. 

In his address, “You, Me, and ChatGPT: Lessons and Predictions,” Smith explored the balance between the challenges and the opportunities of using generative AI tools. He pointed to an “explainability problem” in generative AI tools and how they can produce responses that sound convincing but lack transparent reasoning: “Is this answer correct, or does it just look good?” Smith also highlighted the challenges of user frustration due to bad prompts, “hallucinations,” and the risk of overreliance on AI for critical thinking, given its “eagerness” to answer questions. 

In showcasing innovative coursework from students, Smith highlighted the transformative potential of “tutorbots,” or AI tools trained on course content that can offer students instant, around-the-clock assistance. AI is here to stay, Smith noted, so educators must prepare students for this future by ensuring they become sophisticated, effective users of the technology. 

Asked by Jelinkova how IT staff can help students and faculty, Smith urged the audience to identify early adopters of new technologies to “understand better what it is they are trying to do” and support them through the “pain” of learning a new tool. Understanding these uses and fostering collaboration can accelerate adoption and “eventually propagate to the rest of the institution.” 

The spirit of innovation and IT’s central role at Harvard continued throughout the day’s programming, which was organized into four pillars:  

  • Teaching, Learning, and Research Technology included sessions where instructors shared how they are currently experimenting with generative AI, from the Division of Continuing Education’s “Bot Club,” where instructors collaborate on AI-enhanced pedagogy, to the deployment of custom GPTs and chatbots at Harvard Business School.
  • Innovation and the Future of Services included sessions on AI video experimentation, robotic process automation, ethical implementation of AI, and a showcase of the University’s latest AI Sandbox features.
  • Infrastructure, Applications, and Operations featured a deep dive on the extraordinary effort to bring the new David Rubenstein Treehouse conference center to life, including testing new systems in a physical “sandbox” environment and deploying thousands of feet of network cabling. 
  • And the Skills, Competencies, and Strategies breakout sessions reflected on the evolving skillsets required by modern IT — from automation design to vendor management — and explored strategies for sustaining high-functioning, collaborative teams, including workforce agility and continuous learning. 

Amid the excitement around innovation, the summit also explored the environmental impact of emerging technologies. In a session focused on Harvard’s leadership in IT sustainability — as part of its broader Sustainability Action Plan — presenters explored how even small individual actions, like crafting more effective prompts, can meaningfully reduce the processing demands of AI systems. As one panelist noted, “Harvard has embraced AI, and with that comes the responsibility to understand and thoughtfully assess its impact.” 




Tennis players criticize AI technology used by Wimbledon


Some tennis players are not happy with Wimbledon’s new AI line judges, as reported by The Telegraph. 

This is the first year the prestigious tennis tournament, which is still ongoing, has replaced the human line judges who determine whether a ball is in or out with an electronic line calling (ELC) system.

Numerous players have criticized the AI technology, mostly for making incorrect calls that cost them points. Notably, British tennis star Emma Raducanu called out the system for missing a ball that her opponent hit out; the point had to be played as if the ball were in. On a television replay, the ball indeed looked out, The Telegraph reported.

Jack Draper, the British No. 1, also felt some line calls were wrong, saying he did not think the AI technology was “100 percent accurate.”

Player Ben Shelton had to speed up his match after being told that the new AI line system was about to stop working because of the dimming sunlight. Elsewhere, players said they couldn’t hear the new automated speaker system, and one deaf player said that without the line judges’ hand signals she could not tell whether she had won a point.

The technology also faltered at a key point during a match this weekend between British player Sonay Kartal and the Russian Anastasia Pavlyuchenkova: a ball went out, but the system failed to make the call. The umpire had to stop the rally and order the point replayed. Wimbledon later apologized, attributing the failure to “human error” after the technology was accidentally shut off during the match. It has since adjusted the system so that, ideally, the mistake cannot be repeated.

Debbie Jevans, chair of the All England Club, the organization that hosts Wimbledon, hit back at Raducanu and Draper, saying, “When we did have linesmen, we were constantly asked why we didn’t have electronic line calling because it’s more accurate than the rest of the tour.” 

We’ve reached out to Wimbledon for comment.

This is not the first time the AI technology has come under fire as tennis tournaments continue to partially or fully adopt automated systems. Alexander Zverev, a German player, called out the same automated line judging technology back in April, posting a picture to Instagram that showed a ball called in despite landing well out.

The critiques reveal the friction in completely replacing humans with AI, making the case for why a human-AI balance is perhaps necessary as more organizations adopt such technology. Just recently, the company Klarna said it was looking to hire human workers after previously making a push for automated jobs. 




AI Technology-Focused Training Campaigns : Raspberry Pi Foundation



The Raspberry Pi Foundation has issued a compelling report advocating for sustained emphasis on coding education despite the rapid advancement of AI technologies. The educational charity challenges emerging arguments that AI’s growing capability to generate code diminishes the need for human programming skills, warning against potential deprioritization of computer science curricula in schools.

The Raspberry Pi Foundation’s analysis presents coding as not merely a vocational skill but a fundamental literacy that develops critical thinking, problem-solving abilities, and technological agency — competencies argued to be increasingly vital as AI systems permeate all aspects of society. The foundation emphasizes that while AI may automate certain technical tasks, human oversight remains essential for ensuring the safety, ethics, and contextual relevance of computer-generated solutions.

For educators, parents, and policymakers, this report provides timely insights into preparing younger generations for an AI-integrated future.

Image Credit: Raspberry Pi Foundation


