

New Black Duck Research Shows AI and Supply Chain Transparency Redefining Embedded Software Landscape


Cybersecurity solutions provider Black Duck recently unveiled its report, The State of Embedded Software Quality and Safety 2025, which shows how AI is redefining the embedded software landscape. Key findings include:

  • 89 percent of responding developers and security professionals use AI assistants, and 96 percent embed open source AI models.
  • Weak governance leaves 21 percent uncertain about their ability to stop vulnerabilities, and shadow AI (developers using tools against policy) affects 18 percent.
  • 71 percent of organizations now produce Software Bills of Materials (SBOMs), driven more by customer and partner demand than by compliance.
  • 80 percent have adopted memory-safe languages, with Python overtaking C++ in some embedded contexts.
  • 86 percent of executives call their projects successful, compared with just 56 percent of developers, highlighting an optimism gap that carries business risk.

The report’s findings on shadow AI entering at the developer’s desktop, and on the need for continuous SBOM monitoring after deployment, show that a shift-left-only strategy is no longer sufficient. Risk is introduced, discovered, and must be managed across the entire software development life cycle, so a modern strategy must shift everywhere.

A number of industry stakeholders shared their thoughts on the findings:

Diana Kelley, Chief Information Security Officer at Noma Security

“AI systems, and especially agentic tools, are fragile to certain kinds of manipulation because their behaviors and outputs can be drastically altered by malicious or poorly formed prompts. AI interprets prompts as executable commands, so a single malformed prompt can reasonably result in wiped systems. 

“Robust AI security and agentic AI governance have never been more critical for ensuring systems are not harmed through the access granted to AI agents.

“AI agents bridge the gap between LLMs, tools, and system actions. Agents can execute commands, often autonomously, or instruct tools to perform actions. If an attacker can influence the agent via a malicious prompt, they can direct the system to perform destructive operations at scale, with a much bigger blast radius than a traditional AI application.”

Nicole Carignan, Senior VP, Security & AI Strategy, and Field CISO at Darktrace

“Before organizations can think meaningfully about AI governance, they need to lay the groundwork with strong data science principles. That means understanding how data is sourced, structured, classified, and secured—because AI systems are only as reliable as the data they’re built on.

“For organizations adopting third-party AI tools, it’s also critical to recognize that this introduces a shared security responsibility model—much like what we’ve seen with cloud adoption. When visibility into vendor infrastructure, data handling, or model behavior is limited, organizations must proactively mitigate those risks. That includes putting robust guardrails in place, defining access boundaries, and applying security controls that account for external dependencies.

“As organizations increasingly embed AI tools and agentic systems into their workflows, they must develop governance structures that can keep pace with the complexity and continued innovation of these technologies. But there is no one-size-fits-all approach. 

“Each organization must tailor its AI policies based on its unique risk profile, use cases and regulatory requirements. That’s why executive leadership for AI governance is essential, whether the organization is building AI internally or adopting external solutions.

“Effective AI governance requires deep cross-functional collaboration. Security, privacy, legal, HR, compliance, data, and product leaders each bring vital perspectives. Together, they must shape policies that prioritize ethics, data privacy, and safety—while still enabling innovation. In the absence of mature regulatory frameworks, industry collaboration is equally critical. Sharing successful governance models and operational insights will help raise the bar across sectors.

“As these systems evolve, so must governance strategies. Static policies won’t be enough; AI governance must be dynamic, real-time, and embedded from the start. Organizations that treat governance and security as strategic enablers will be best positioned to harness the full potential of AI safely and responsibly.”

Guy Feinberg, Growth Product Manager at Oasis Security

“AI agents, like human employees, can be manipulated. Just as attackers use social engineering to trick people, they can prompt AI agents into taking malicious actions. The real risk isn’t AI itself, but the fact that organizations don’t manage these non-human identities (NHIs) with the same security controls as human users.

“Manipulation is inevitable. Just as we can’t prevent attackers from tricking people, we can’t stop them from manipulating AI agents. The key is limiting what these agents can do without oversight. AI agents need identity governance. They must be managed like human identities, with least-privilege access, monitoring, and clear policies to prevent abuse.

“Security teams need visibility. If these NHIs were properly governed, security teams could detect and block unauthorized actions before they escalate into a breach. Organizations should:

  • Treat AI agents like human users. Assign them only the permissions they need and continuously monitor their activity.
  • Implement strong identity governance. Track which systems and data AI agents can access, and revoke unnecessary privileges.
  • Assume AI will be manipulated. Build security controls that detect and prevent unauthorized actions, just as you would with phishing-resistant authentication for humans.

“The bottom line is that you can’t stop attackers from manipulating AI, just like you can’t stop them from phishing employees. The solution is better governance and security for all identities—human and non-human alike.”
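
As a rough illustration of the controls Feinberg describes for non-human identities (least privilege, monitoring, and clear policies), here is a minimal sketch in Python. All names are hypothetical and it is not drawn from the report or any specific product; it simply gates an agent’s tool calls against an explicit allow-list and writes every attempt to an audit log.

```python
# Minimal sketch (hypothetical names) of identity governance for an AI agent.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

@dataclass
class AgentIdentity:
    agent_id: str
    # Least privilege: an explicit allow-list of (action, resource) pairs.
    allowed: set = field(default_factory=set)

def execute_tool_call(agent: AgentIdentity, action: str, resource: str) -> bool:
    """Allow a tool call only if it is explicitly permitted, and log every attempt."""
    permitted = (action, resource) in agent.allowed
    audit_log.info("agent=%s action=%s resource=%s permitted=%s",
                   agent.agent_id, action, resource, permitted)
    if not permitted:
        # Blocked calls become detection signals rather than silent failures.
        return False
    # ... dispatch to the real tool here ...
    return True

# Usage: a support agent may read tickets but not touch the customer database.
support_agent = AgentIdentity("support-bot-01", {("read", "tickets")})
execute_tool_call(support_agent, "read", "tickets")        # permitted
execute_tool_call(support_agent, "delete", "customer-db")  # blocked and logged
```

The point of the allow-list is that even a manipulated agent can only act within the permissions it was granted, and every blocked attempt gives the security team something to detect and investigate.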

Mayuresh Dani, Security Research Manager at Qualys Threat Research Unit

“In recent times, government mandates have been forcing vendors to create and share SBOMs with their customers. Organizations should request SBOMs from their vendors; this is the easiest approach. There are other approaches, in which the firmware is dumped and actively probed, but these may lead to a breach of agreements. Such activities can also be carried out with a vendor’s approval.

“Organizations should maintain and audit an inventory of the ports exposed by their network devices. These should then be mapped to the installed software based on the vendor-provided SBOM. These are the highest priority, since they will be publicly exposed. Second, OS updates should be preceded by reading the change logs, which indicate which software is being updated or removed.

“Note that SBOMs bring visibility into which components are being used in a project. This can definitely help in a post-compromise scenario where triaging for affected systems is necessary. However, more scrutiny is needed when dealing with open-source projects. Steps such as detecting the use of open-source project code and vetting it should be made mandatory. Also, there should be a verification mechanism for everyone who contributes to open-source projects.

“Security leaders can harden their defenses against software supply chain attacks by investing in visibility and risk assessment across their complex software environment, including SBOM risk assessment and Software Composition Analysis (SCA). Part of the risk assessment should include accounting for upcoming EoS software so they can upgrade or replace it proactively.”
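
To make the mapping step Dani describes concrete, below is a minimal sketch. It assumes a CycloneDX-style JSON SBOM with a "components" list and a hypothetical inventory of exposed ports; the file name and inventory format are illustrative and not taken from the report or any Qualys tooling.

```python
# Rough sketch of mapping a vendor-provided SBOM to publicly exposed services.
# Assumptions: CycloneDX-style JSON with a "components" list; hypothetical inputs.
import json

def load_sbom_components(path):
    """Return {component name: version} from a CycloneDX-style SBOM file."""
    with open(path) as f:
        sbom = json.load(f)
    return {c.get("name", ""): c.get("version", "unknown")
            for c in sbom.get("components", [])}

def prioritize_exposed(sbom_components, exposed_services):
    """Flag SBOM components that sit behind publicly exposed ports."""
    findings = []
    for port, service in exposed_services.items():
        version = sbom_components.get(service)
        if version is not None:
            findings.append(f"port {port}: {service} {version} (in SBOM, review first)")
        else:
            findings.append(f"port {port}: {service} (not in SBOM, investigate)")
    return findings

# Usage with hypothetical inputs: a vendor SBOM and an external port-scan result.
components = load_sbom_components("vendor-device-sbom.json")
exposed = {443: "nginx", 8883: "mosquitto"}  # port -> service seen by the scan
for line in prioritize_exposed(components, exposed):
    print(line)
```

Components that sit behind publicly exposed ports would be triaged first, while exposed services missing from the SBOM point to a visibility gap worth investigating.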

Satyam Sinha, CEO and Co-founder at Acuvity

“Over the past year, a great deal of information and awareness about the risks and threats related to AI has been shared. In addition, various governments have introduced abundant regulations.

“In our discussions with customers, it is evident that they are overwhelmed by how to prioritize and tackle the issues – there is a lot that needs to be done. On the face of it, personnel seems to be the key inhibitor; however, this pain will only grow.

“The field of AI has seen massive leaps over the last two years, and it continues to evolve with new developments nearly every day. The gap in confidence and understanding of AI creates a massive opportunity for AI-native security products that can close it. In addition, enterprises must consider bridging this gap with specialized learning programs or certifications to aid their cybersecurity teams.

“GenAI has helped in multiple industries from customer support to writing code. Workflows that could not be automated are being handled by AI agents.

“Moving forward, we must consider the use of GenAI-native security products and techniques, which will help achieve a multiplier effect for personnel. This is the only way to solve this problem.”





What It Means for State and Local Projects


To lead the world in the AI race, President Donald Trump says the U.S. will need to “triple” the amount of electricity it produces. At a cabinet meeting on Aug. 26, he made it clear his administration’s policy is to favor fossil fuels and nuclear energy, while dismissing solar and wind power.

“Windmills, we’re just not going to allow them. They ruin our country,” Trump said at the meeting. “They’re ugly, they don’t work, they kill your birds, they’re bad for the environment.”

He added that he also didn’t like solar because of the space it takes up on land that could be used for farming.


“Whether we like it or not, fossil fuel is the thing that works,” said Trump. “We’re going to fire up those big monster factories.”

In the same meeting, he showcased a photo, provided by Mark Zuckerberg, of what he said was a $50 billion mega data center planned for Louisiana.


But there’s a reason coal-fired power plants have been closing at a rapid pace for years: cost. According to the think tank Energy Innovation, coal power in the U.S. tends to cost more to run than renewables. Before Trump’s second term, the U.S. Department of Energy publicized a strategy to support new energy demand for AI with renewable sources, writing that “solar energy, land-based wind energy, battery storage and energy efficiency are some of the most rapidly scalable and cost competitive ways to meet increased electricity demand from data centers.”

Further, many governments examining how to use AI also have climate pledges in place to reduce their greenhouse gas emissions — including states such as North Carolina and California.

Earlier this year, Trump signed an executive order, “Reinvigorating America’s Beautiful Clean Coal Industry and Amending Executive Order 14241,” directing the secretaries of the Interior, Commerce and Energy to identify regions where coal-powered infrastructure is available and suitable for supporting AI.

A separate executive order, “Accelerating Federal Permitting of Data Center Infrastructure,” shifts the power to the federal government to ensure that new AI infrastructure, fueled by specific energy sources, is built quickly by “easing federal regulatory burdens.”

In an interview with Government Technology, a representative of Core Natural Resources, a U.S.-based mining and mineral resource company, explained that this federal shift will be a “resurgency for the industry,” stressing that coal is “uniquely positioned” to fill the energy need AI will create.

“If you’re looking to generate large amounts of energy that these data centers are going to require, you need to focus on energy sources that are going to be able to meet that demand without sacrificing the power prices for the consumers,” said Matthew Mackowiak, director of government affairs at Core.

“It’s going to be what powers the future, especially when you look at this demand growth over the next few years,” said Mackowiak.

Yet these plans for the future, including increased reliance on fossil fuels and coal as well as the need for mega data centers, may not be what the public is willing to accept. According to the International Energy Agency, a typical AI-focused data center consumes as much electricity as 100,000 households, and larger ones currently under construction may consume 20 times as much.

A recent report from Data Center Watch suggests that local activism is threatening to derail a potential data center boom.

According to the research firm, $18 billion worth of data center projects have been blocked, while $46 billion of projects were delayed over the last two years in situations where there was opposition from residents and activist groups. Common arguments against the centers are higher utility bills, water consumption, noise, impact on property value and green space preservation.

The movement may put state and local governments in the middle of a clash between federal directives and backlash from their communities. Last month in Tucson, Ariz., City Council members voted against a proposed data center project, due in large part to public pressure from residents with fears about its water usage.

St. Charles, Mo., recently considered a one-year moratorium on proposed data centers, pausing the acceptance of zoning change applications and the issuance of building permits for such projects following a wave of opposition from residents.

This debate may hit a fever pitch as many state and local governments are also piloting or launching their own programs powered by AI, from traffic management systems to new citizen portals.

As the AI energy debate heats up, local leaders could be in for some challenging choices. As Mackowiak of Core Natural Resources noted, officials have a “tough job, listening to constituents and trying to do what’s best.” He asserted that officials should consider “resource adequacy,” adding that “access to affordable, reliable, dependable power is first and foremost when it comes to a healthy economy and national security.”

The ultimate question for government leaders is not just whether they can meet the energy demands of a private data center, but how the public’s perception of this new energy future will affect their own technology goals. If citizens begin to associate AI with contentious projects and controversial energy sources, it could create a ripple effect of distrust, disrupting the technology’s potential regardless of its benefits.

Ben Miller contributed to this story.






OpenAI Spends $10 Billion to Get Into the Chip Business


OpenAI would like to stop being so reliant on Nvidia to handle its processing needs. To address that, the artificial intelligence startup is reportedly teaming up with Broadcom to develop its own chips, set to be available starting next year, according to the Financial Times.

The Wall Street Journal reports that OpenAI’s deal with the US-based semiconductor firm will see the two work together to create custom artificial intelligence chips, which will be used internally by OpenAI to train and run its new ChatGPT models and other AI products. The deal will reportedly put $10 billion into the pockets of Broadcom, which had announced a mystery deal on Thursday that apparently didn’t stay all that mysterious for long.

The deal probably shouldn’t be too big a surprise, just given the sheer volume of demand that Nvidia is currently tasked with fulfilling. The company has been the go-to for hyperscalers in the AI space looking to build quickly, producing chips that have become the standard for Amazon Web Services, Google, Microsoft, and Oracle. In fact, Oracle just announced plans to buy more than $40 billion worth of Nvidia chips for use in a new data center that will reportedly be a part of the Stargate Project, a joint effort by AI firms to expand computing infrastructure. Plus, there were hints that OpenAI was working on an in-house chip earlier this year. It appears those plans are now coming to fruition.

OpenAI isn’t the only company trying to wean itself off Nvidia’s supply of compute. Google has reportedly been calling around to data centers and offering its own custom chips to help handle AI-related processing, according to The Information. Amazon is reportedly working on its own AI chips, and Microsoft has gotten into the chipmaking business as well.

Nvidia likely won’t be short on demand even with some of the big players attempting to go their own way. Just last week, the company reported that its sales were up 56% in the most recent quarter, suggesting that demand isn’t slowing down. There were also reports last month that the Trump administration may be loosening some of its trade tensions with China and other countries in a way that would allow Nvidia to sell its latest chips overseas, opening the company back up to some major international markets that have been complicated by the trade wars initiated by Trump and company.




‘I trust AI the way a sailor trusts the sea. It can carry you far, or it can drown you’: Poll results reveal majority do not trust AI


Everywhere we turn, we are reminded of the rapid advances that artificial intelligence (AI) is making. As the technology continues to evolve, it raises an important question: Can we really trust it?

Trusting AI can mean many things — from letting it recommend a TV show to watch to relying on it for medical advice or putting it in charge of your car. On Aug. 29 we shared a poll asking Live Science readers where they stand on AI’s trustworthiness — and 382 people responded.


