AI Research
AI defense startups get compute power they need to complete their software

A handful of select startups have the opportunity to work with Intel’s most advanced processors and accelerators to supercharge software development in artificial intelligence, machine learning, and analytics. Access to these Intel assets, a secure cloud, and expert mentors has helped give these startups the ability to crunch huge amounts of data that provide insights and answer questions posed by national security officials.
Through the Intel Liftoff program, companies are given access to Intel® Xeon® 6 processors with built-in AI acceleration and Intel Gaudi® 2 and 3 accelerators for large-scale AI tasks, along with the training needed to take full advantage of the chips’ computational power.
“Liftoff is the right word,” said Steve Orrin, federal security research director and a senior principal engineer with Intel. “It helps get them off the ground without having to do a significant investment in infrastructure nor an investment in having to hire talent that is already pre-trained. We give them access to people who have knowledge and experience to help their engineers and architects craft the right solution.
“Tech mentorships and one-on-one guidance from Intel AI engineers help them understand their workloads and the combination of hardware architectures available to them, and then give them access to that without a major investment on their part. We have the central processing units, graphics processing units, accelerators, and software hosted at Intel, where they can get access to it so they can bring their workload and data, try it out in an environment where they can use all the different architectures and software enablement, and actually run their model, or train their model to do interesting inferencing and to help them accelerate their adoption and deployment.”
Kamiwaza is one of the AI startups that matured their software in the Liftoff program; as a result, it is now conducting important work for the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA).
Kamiwaza’s eponymous AI orchestration engine enables massive dispersed data sets to be rationalized and fused together to achieve mission results.

“A core tenet at Kamiwaza says we want it to be what we call ‘silicon neutral’,” said Luke Norris, CEO and co-founder of Kamiwaza. “We wanted to make sure that our customers could run those outcomes on the right silicon that matched their need – not just cost but the thermal load, speed, a large spectrum of considerations. There is no reason one vendor should fit all because each of these vendors have unique capabilities and structures, and when you can match those capabilities and structures to the outcomes that the customers want then it’s a best-of-all solution.
“We started with Intel in the Liftoff program to get ourselves situated to figure out how to get our code to work with Intel Xeon 6 and Gaudi 2 and 3 GPUs because that took some technical uplift at the time. We started to get some interest in our work and I was able to go to Intel’s Industrial Solution Builders enablement group and tell them we have a large opportunity with DHS to test all of this data and if it’s positive it’s going to result in, we believe, a level of applications and services we’ll be able to take to the state and fed. They were then able to secure us a couple Gaudi 3 servers in Intel’s cyber cloud and also get us a technical team to make sure we were maximizing the capabilities of the Gaudi 3 chips and cyber cloud.”
Through Intel’s support and the production use cases Kamiwaza developed during its time in the Liftoff program, the company was able to demonstrate to CISA that its software could perform an immense amount of data processing and cross-correlation on large systems. In addition, agency leadership was reassured to see that the processing power of the Gaudi 3 accelerators met all of the core benchmarks, including overall wattage usage and environmental impact.
Intel also benefits from the partnerships developed under Liftoff, especially in ways that help it improve its own products.
“We’re helping them adopt the technology and drive it,” said Orrin. “For Intel, there’s three benefits. There’s the utilization of our technologies for these novel and startup use cases and services. It’s about how we get our technologies embedded and help them achieve their goals.
“It’s also collaborative, where we’re learning about the next wave of things that are coming so we can better engineer our products to meet the broader set of ecosystem players where they are. As we learn how they’re leveraging an AI accelerator or a startup is doing interesting graph analytics as part of their AI, are there things we could do in our future hardware to better enable those kinds of use cases. Both sides learn from this exercise.”
Orrin also noted that Intel has the ability to help these startups gain broader exposure for their work because Intel products are embedded across numerous industries, each with varied use cases and customers. “We can help them get to the right customers and the right partners to accelerate delivery of their products and services,” he said.
What Kamiwaza produced with Intel’s tools
As part of its DHS work, Kamiwaza is helping to create smart bases, cities, and venues through critical weather correlation and event planning. This is done through an AI orchestration engine that correlates data from 90 years of public and private barometric pressure-related events to predict potentially hazardous conditions that could threaten lives.
“Let me take you through a critical event planning scenario,” offered Norris. “If NOAA comes up with a forecast of a very low barometric pressure event and it’s targeted in a very small region, say a base, an area there’s a training exercise going on, or a large event like an outdoor concert, we can put that into the AI system and it will cross correlate those 90 years of other barometric pressure events that previously happened in that region and it will know all of the impacts of that similar-level event in that area.
“Airports that were shut down, routes and highways that were closed, fallen trees and debris, surge in emergency rooms, you name it. The AI system can actually build a reverse canonical plan based off of historics and based off of the future forecast that we are given.
“Now you have this massive system that says the last five times an event of this level happened, here was the total impact. Here’s what you should be looking for and here’s how you should proactively plan around that – pre-station troops, pre-station emergency response suppliers, bring in additional resources of X, Y and Z.”
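Norris’s description amounts to a retrieval-and-aggregation pattern: take a forecast of a low-pressure event, find historical events of similar severity in the same region, and roll their recorded impacts up into a proactive plan. The Python sketch below is a minimal, hypothetical illustration of that pattern; the data model, field names, matching tolerance, and five-event window are assumptions made for illustration, not Kamiwaza’s actual implementation.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class HistoricalEvent:
    """One past low-pressure event and its recorded impacts (hypothetical schema)."""
    region: str
    min_pressure_hpa: float      # lowest barometric pressure recorded during the event
    airport_closures: int
    road_closures: int
    er_visit_surge_pct: float    # percentage increase in emergency-room visits

def match_similar_events(history, region, forecast_pressure_hpa,
                         tolerance_hpa=5.0, limit=5):
    """Return up to `limit` of the most recent events in the same region whose
    minimum pressure fell within `tolerance_hpa` of the forecast value."""
    similar = [e for e in history
               if e.region == region
               and abs(e.min_pressure_hpa - forecast_pressure_hpa) <= tolerance_hpa]
    return similar[-limit:]  # assumes `history` is ordered oldest to newest

def summarize_impacts(similar):
    """Aggregate the matched events into a simple planning summary."""
    if not similar:
        return None
    return {
        "events_considered": len(similar),
        "avg_airport_closures": mean(e.airport_closures for e in similar),
        "avg_road_closures": mean(e.road_closures for e in similar),
        "avg_er_visit_surge_pct": mean(e.er_visit_surge_pct for e in similar),
    }

# Example: a forecast of 970 hPa over a hypothetical region, matched against two past events.
history = [
    HistoricalEvent("region_x", 968.0, airport_closures=2, road_closures=14, er_visit_surge_pct=35.0),
    HistoricalEvent("region_x", 972.5, airport_closures=1, road_closures=9, er_visit_surge_pct=22.0),
]
print(summarize_impacts(match_similar_events(history, "region_x", forecast_pressure_hpa=970.0)))
```

In production, such a summary would presumably feed the “reverse canonical plan” Norris describes, with the correlation running over far larger, dispersed data sets than this toy example.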
The ability to use AI to make these types of granular predictions didn’t exist as recently as two or three years ago, and it will lead to smart bases, smart cities, and smart event planning.
AI Research
UK workers wary of AI despite Starmer’s push to increase uptake, survey finds

It is the work shortcut that dare not speak its name. A third of people do not tell their bosses about their use of AI tools amid fears their ability will be questioned if they do.
Research for the Guardian has revealed that only 13% of UK adults openly discuss their use of AI with senior staff at work and close to half think of it as a tool to help people who are not very good at their jobs to get by.
Amid widespread predictions that many workers face a fight for their jobs with AI, polling by Ipsos of more than 1,500 British workers aged 16 to 75 found that 33% did not discuss their use of AI at work with bosses or other more senior colleagues. They were less coy with people at the same level, but a quarter believe “co-workers will question my ability to perform my role if I share how I use AI”.
The Guardian’s survey also uncovered deep worries about the advance of AI, with more than half of those surveyed believing it threatens the structure of society. Those who believe it has a positive effect are outnumbered by those who think it does not. It also found that 63% of people do not believe AI is a good substitute for human interaction, while 17% think it is.
Next week’s state visit to the UK by Donald Trump is expected to signal greater collaboration between the UK and Silicon Valley to make Britain an important centre of AI development.
The US president is expected to be joined by Sam Altman, the co-founder of OpenAI, who has signed a memorandum of understanding with the UK government to explore the deployment of advanced AI models in areas including justice, security and education. Jensen Huang, the chief executive of the chip maker Nvidia, is also expected to announce an investment in the UK’s biggest datacentre yet, to be built near Blyth in Northumberland.
Keir Starmer has said he wants to “mainline AI into the veins” of the UK. Silicon Valley companies are aggressively marketing their AI systems as capable of cutting grunt work and liberating creativity.
The polling appears to reflect workers’ uncertainty about how bosses want AI tools to be used, with many employers not offering clear guidance. There is also fear of stigma among colleagues if workers are seen to rely too heavily on the bots.
A separate US study circulated this week found that medical doctors who use AI in decision-making are viewed by their peers as significantly less capable. Ironically, the doctors who took part in the research by Johns Hopkins Carey Business School recognised AI as beneficial for enhancing precision, but took a negative view when others were using it.
Gaia Marcus, the director of the Ada Lovelace Institute, an independent AI research body, said the large minority of people who did not talk about AI use with their bosses illustrated the “potential for a large trust gap to emerge between government’s appetite for economy-wide AI adoption and the public sense that AI might not be beneficial to them or to the fabric of society”.
“We need more evaluation of the impact of using these tools, not just in the lab but in people’s everyday lives and workflows,” she said. “To my knowledge, we haven’t seen any compelling evidence that the spread of these generative AI tools is significantly increasing productivity yet. Everything we are seeing suggests the need for humans to remain in the driving seat with the tools we use.”
A study by the Henley Business School in May found 49% of workers reported there were no formal guidelines for AI use in their workplace and more than a quarter felt their employer did not offer enough support.
Prof Keiichi Nakata at the school said people were more comfortable about being transparent in their use of AI than 12 months earlier but “there are still some elements of AI shaming and some stigma associated with AI”.
He said: “Psychologically, if you are confident with your work and your expertise you can confidently talk about your engagement with AI, whereas if you feel it might be doing a better job than you are or you feel that you will be judged as not good enough or worse than AI, you might try to hide that or avoid talking about it.”
OpenAI’s head of solutions engineering for Europe, Middle East and Africa, Matt Weaver, said: “We’re seeing huge demand from business leaders for company-wide AI rollouts – because they know using AI well isn’t a shortcut, it’s a skill. Leaders see the gains in productivity and knowledge sharing and want to make that available to everyone.”
AI Research
What is artificial intelligence’s greatest risk? – Opinion

Risk dominates current discussions on AI governance. This July, Geoffrey Hinton, a Nobel and Turing laureate, addressed the World Artificial Intelligence Conference in Shanghai. His speech bore the title he has used almost exclusively since leaving Google in 2023: “Will Digital Intelligence Replace Biological Intelligence?” He stressed, once again, that AI might soon surpass humanity and threaten our survival.
Scientists and policymakers from China, the United States, European countries and elsewhere nodded gravely in response. Yet this apparent consensus masks a profound paradox in AI governance. Conference after conference, the world’s brightest minds have identified shared risks. They call for cooperation, sign declarations, then watch the world return to fierce competition the moment the panels end.
This paradox troubled me for years. I trust science, but if the threat is truly existential, why can’t even survival unite humanity? Only recently did I grasp a disturbing possibility: these risk warnings fail to foster international cooperation because defining AI risk has itself become a new arena for international competition.
Traditionally, technology governance follows a clear causal chain: identify specific risks, then develop governance solutions. Nuclear weapons pose stark, objective dangers: blast yield, radiation, fallout. Climate change offers measurable indicators and an increasingly solid scientific consensus. AI, by contrast, is a blank canvas. No one can definitively convince everyone whether the greatest risk is mass unemployment, algorithmic discrimination, superintelligent takeover, or something entirely different that we have not even heard of.
This uncertainty transforms AI risk assessment from scientific inquiry into strategic gamesmanship. The US emphasizes “existential risks” from “frontier models”, terminology that spotlights Silicon Valley’s advanced systems.
This framework positions American tech giants as both sources of danger and essential partners in control. Europe focuses on “ethics” and “trustworthy AI”, extending its regulatory expertise from data protection into artificial intelligence. China advocates that “AI safety is a global public good”, arguing that risk governance should not be monopolized by a few nations but serve humanity’s common interests, a narrative that challenges Western dominance while calling for multipolar governance.
Corporate actors prove equally adept at shaping risk narratives. OpenAI’s emphasis on “alignment with human goals” highlights both genuine technical challenges and the company’s particular research strengths. Anthropic promotes “constitutional AI” in domains where it claims special expertise. Other firms excel at selecting safety benchmarks that favor their approaches, while suggesting the real risks lie with competitors who fail to meet these standards. Computer scientists, philosophers, economists: each professional community shapes its own value through narrative, warning of technical catastrophe, revealing moral hazards, or predicting labor market upheaval.
The causal chain of AI safety has thus been inverted: we construct risk narratives first, then deduce technical threats; we design governance frameworks first, then define the problems requiring governance. Defining the problem creates causality. This is not epistemological failure but a new form of power, namely making your risk definition the unquestioned “scientific consensus”. How we define “artificial general intelligence”, which applications constitute “unacceptable risk”, what counts as “responsible AI”: the answers to these questions will directly shape future technological trajectories, industrial competitive advantages, international market structures, and even the world order itself.
Does this mean AI safety cooperation is doomed to empty talk? Quite the opposite. Understanding the rules of the game enables better participation.
AI risk is constructed. For policymakers, this means advancing your agenda in international negotiations while understanding the genuine concerns and legitimate interests behind others’.
Acknowledging construction doesn’t mean denying reality: regardless of how risks are defined, solid technical research, robust contingency mechanisms, and practical safeguards remain essential. For businesses, this means considering multiple stakeholders when shaping technical standards and avoiding winner-takes-all thinking.
True competitive advantage stems from unique strengths rooted in local innovation ecosystems, not opportunistic positioning. For the public, this means developing “risk immunity”, learning to discern the interest structures and power relations behind different AI risk narratives, neither paralyzed by doomsday prophecies nor seduced by technological utopias.
International cooperation remains indispensable, but we must rethink its nature and possibilities. Rather than pursuing a unified AI risk governance framework, a consensus that is neither achievable nor necessary, we should acknowledge and manage the plurality of risk perceptions. The international community needs not one comprehensive global agreement superseding all others, but “competitive governance laboratories” where different governance models prove their worth in practice. This polycentric governance may appear loose but can achieve higher-order coordination through mutual learning and checks and balances.
We habitually view AI as another technology requiring governance, without realizing it is changing the meaning of “governance” itself. The competition to define AI risk isn’t global governance’s failure but its necessary evolution: a collective learning process for confronting the uncertainties of transformative technology.
The author is an associate professor at the Center for International Security and Strategy, Tsinghua University.
The views don’t necessarily represent those of China Daily.
AI Research
Albania’s prime minister appoints an AI-generated ‘minister’ to tackle corruption

TIRANA, Albania — Albania’s prime minister on Friday tapped an AI-generated “minister” to tackle corruption and promote transparency and innovation in his new Cabinet.
Officially named Diella — the female form of the word for sun in the Albanian language — the new AI minister is a virtual entity.
Diella will be a “member of the Cabinet who is not present physically but has been created virtually,” Prime Minister Edi Rama said in a post on Facebook.
Rama said the AI-generated bot would help ensure that “public tenders will be 100% free of corruption” and will help the government work faster and with full transparency.
Diella uses up-to-date AI models and techniques to guarantee accuracy in carrying out the duties it is charged with, according to the website of Albania’s National Agency for Information Society.
Diella, depicted as a figure in a traditional Albanian folk costume, was created earlier this year, in cooperation with Microsoft, as a virtual assistant on the e-Albania public service platform, where she has helped users navigate the site and get access to about 1 million digital inquiries and documents.
Rama’s Socialist Party secured a fourth consecutive term after winning 83 of the 140 Assembly seats in the May 11 parliamentary elections. The party can govern alone and pass most legislation, but it needs a two-thirds majority, or 93 seats, to change the Constitution.
The Socialists have said they can deliver European Union membership for Albania in five years, with negotiations concluding by 2027. The pledge has been met with skepticism by the Democrats, who contend Albania is far from prepared.
The Western Balkan country opened full negotiations to join the EU a year ago. The new government also faces the challenges of fighting organized crime and corruption, which has remained a top issue in Albania since the fall of the communist regime in 1990.
Diella will also help local authorities speed up their work and adapt to the bloc’s working practices.
Albanian President Bajram Begaj has mandated Rama with the formation of the new government. Analysts say that gives the prime minister authority “for the creation and functioning” of AI-generated Diella.
Asked by journalists whether that violates the constitution, Begaj stopped short on Friday of describing Diella’s role as a ministerial post.
The conservative opposition Democratic Party-led coalition, headed by former prime minister and president Sali Berisha, won 50 seats. The party has not accepted the official election results, claiming irregularities, but its members participated in the new parliament’s inaugural session. The remaining seats went to four smaller parties.
Lawmakers will vote on the new Cabinet, but it was unclear whether Rama would ask for a vote on Diella’s virtual post. Legal experts say more work may be needed to establish Diella’s official status.
The Democrats’ parliamentary group leader Gazmend Bardhi said he considered Diella’s ministerial status unconstitutional.
“Prime minister’s buffoonery cannot be turned into legal acts of the Albanian state,” Bardhi posted on Facebook.
Parliament began the process on Friday to swear in the new lawmakers, who will later elect a new speaker and deputies and formally present Rama’s new Cabinet.