
AI is making it easier for bad actors to create biosecurity threats



Artificial intelligence is helping accelerate the pace of scientific discovery, but the technology also makes it easier than ever to create biosecurity threats and weapons, cybersecurity experts say. 

It’s an issue that currently flies under the radar for most Americans, said Lucas Hansen, cofounder of AI education nonprofit CivAI.

The COVID-19 pandemic increased awareness of biosecurity measures globally, and some instances of bioterrorism, like the 2001 anthrax attacks, are well known. But advancements in AI have made information about how to create biosecurity threats, like viruses, bacteria and toxins, so much more accessible in just the last year, Hansen said.

“Many people on the face of the planet already could create a bioweapon,” Hansen said. “But it’s just pretty technical and hard to find. Imagine AI being used to [multiply] the number of people that are capable of doing that.”

It’s an issue that OpenAI CEO Sam Altman spoke about at a Federal Reserve conference in July. 

“We continue to, like, flash the warning lights on this,” Altman said. “I think the world is not taking us seriously. I don’t know what else we can do there, but it’s like, this is a very big thing coming.”

AI increasing biosecurity threats

Hansen said there are two main ways he believes AI could be used to create biosecurity threats. The less common, he believes, would be using AI to make bioweapons more dangerous than any that have existed before, using technologies that enable the engineering of biological systems, such as creating new viruses or toxic substances.

Second, and more commonly, Hansen said, AI is making information about existing harmful viruses or toxins much more readily accessible.

Consider the polio virus, Hansen said. Plenty of scientific journals share information on the origins and growth of polio and other viruses that have been mostly eradicated, but the average person would have to do extensive research and data collection to piece together how to recreate it.

A few years ago, AI models didn’t have great metacognition, or the ability to give instructions, Hansen said. But in the last year, updated models like Claude and ChatGPT have been able to interpret more information and fill in the gaps.

Paromita Pain, an associate professor of global media at the University of Nevada, Reno and an affiliated faculty member of the university’s cybersecurity center, said she believes a third circumstance could be contributing to biosecurity threats: accidents. Increased access to information by people not properly trained to handle it could have unintended consequences.

“It’s essentially like letting loose teenagers in the lab,” Pain said. “It’s not as if people are out there to willingly do bad, like, ‘I want to create this pathogen that will wipe out mankind.’ Not necessarily. It’s just that they don’t know that if you are developing pathogens, you need to be careful.”

For those who are looking to do harm, though, it’s not hard, Hansen said. CivAI offers demos to show how AI can be used in various scenarios, with the goal of highlighting the potential harms the technology can cause if not used responsibly.

In a demo not available to the public, Hansen showed States Newsroom how someone might use a current AI model to assist in creating a biothreat. CivAI keeps the example private so as not to inspire any nefarious actions, Hansen said.

Though many AI models are trained to flag and not respond to dangerous requests, like how to build a gun or how to recreate a virus, many can be “jailbroken” easily, with a few prompts or lines of code, essentially tricking the AI into answering questions it was instructed to ignore.

Hansen walked through the polio virus example, prompting a jailbroken version of Claude Sonnet 4 to give him instructions for recreating the virus. Within a few seconds, the model provided 13 detailed steps, including directions like “order the custom plasmid online,” with links to manufacturers.

The models scrape information from a few public research papers about the polio virus, but without the step-by-step instructions, it would be very hard to find what you’re looking for, make a plan and find the materials you’d need. The models sometimes add information to supplement the scientific papers, helping non-expert users understand complex language, Hansen said.

It would still take many challenging steps, including accessing lab equipment and rare materials, to recreate the virus, Hansen said, but AI has made the core information behind these feats far more accessible.

“AI has turned bioengineering from a Ph.D.-level skill set to something that an ambitious high school student could do with some of the right tools,” said Neil Sahota, an AI advisor to the United Nations and a cofounder of its AI for Good initiative.

CivAI estimates that, because of AI, the number of people capable of recreating a virus like polio with publicly available tools and resources has grown from 30,000 globally in 2022 to 200,000 today, and it projects that 1.5 million people could be capable in 2028. An increase in the number of languages that AI models are fluent in also increases the chances of a global issue, Hansen said.
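CivAI’s methodology isn’t described in the article, but the three figures are roughly consistent with steady exponential growth. A minimal back-of-the-envelope check, where treating “today” as 2025 is an assumption:

```python
# Rough consistency check on CivAI's estimates (their methodology is not
# published; reading "today" as 2025 is an assumption, not from the source).
start, today = 30_000, 200_000      # estimated capable people: 2022 vs. 2025
years_elapsed, years_ahead = 3, 3   # 2022 -> 2025, then 2025 -> 2028

# Implied annual growth factor over the elapsed period
annual_factor = (today / start) ** (1 / years_elapsed)

# Extrapolate that same growth rate forward to 2028
extrapolated_2028 = today * annual_factor ** years_ahead

print(f"implied annual growth: {annual_factor:.2f}x")         # ~1.88x per year
print(f"extrapolated 2028 figure: {extrapolated_2028:,.0f}")  # ~1,333,333
```

Continuing the 2022-to-today growth rate yields roughly 1.3 million people by 2028, so CivAI’s 1.5 million projection implies growth holding steady or slightly accelerating.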

“I think the language thing is really, really important, because part of what we’re considering here is the number of people that are capable of doing these things, and removing a language barrier is a pretty big deal,” he said.

How is the government addressing it? 

The current Trump administration and the previous Biden administration introduced similar strategies for addressing the threats. Biden’s October 2023 executive order, “Safe, Secure, and Trustworthy Development and Use of AI,” sought to create guidelines to evaluate and audit AI capabilities “through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.”

Trump’s AI Action Plan, which rolled out in July, said AI could “unlock nearly limitless potential in biology,” but could also “create new pathways for malicious actors to synthesize harmful pathogens and other biomolecules.”

The action plan calls for requiring scientific institutions that receive federal funding to verify customers, and for creating enforcement guidelines. It also says the Office of Science and Technology Policy should develop a way for providers of nucleic acid synthesis, the process of creating DNA and RNA, to share data and screen for malicious customers.
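Neither the plan nor the article specifies how that screening would work, but synthesis-screening systems generally compare incoming orders against curated databases of sequences of concern. The sketch below is a loose illustration of the concept only; the hazard entries and the exact k-mer matching are made-up placeholders, not any real screening list or production algorithm:

```python
# Toy illustration of nucleic-acid synthesis order screening: compare an
# incoming order against a curated database of "sequences of concern."
# The database entries here are fabricated placeholders, not real sequences.

K = 12  # window size for comparison

def kmers(seq: str, k: int = K) -> set[str]:
    """All length-k substrings of a DNA sequence, uppercased."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Placeholder database: name -> fragment of a hypothetical sequence of concern.
SEQUENCES_OF_CONCERN = {
    "placeholder_agent_A": "ATGGCGTACGTTAGCCGATAGGCTT",
    "placeholder_agent_B": "TTGACCGGATACCGGTTAACGGCAT",
}

def screen_order(order_seq: str) -> list[str]:
    """Return names of database entries sharing any k-mer with the order."""
    order_kmers = kmers(order_seq)
    return [name for name, seq in SEQUENCES_OF_CONCERN.items()
            if order_kmers & kmers(seq)]

hits = screen_order("CCCATGGCGTACGTTAGCCGATAGGCTTAAA")
print("flag for human review" if hits else "clear", hits)
```

Real screening systems rely on far more robust homology searches plus human review, since exact substring matches like these are trivial to evade.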

Sahota said the potential benefits of bioengineering AI make regulating it complicated. The models can help accelerate vaccine development and research into genetic disorders, but can also be used nefariously.

“AI in itself is not good or evil, it’s just a tool,” Sahota said. “And it really depends on how people use it. I don’t think like a bad actor, and many people don’t, so we’re not thinking about how they may weaponize these tools, but someone probably is.”

California aimed to address biosecurity last year in SB 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” which sought to regulate foundational AI models and impose obligations on the companies that develop them to ensure safety and security measures.

The act outlined many potential harms, among them AI’s potential to help “create novel threats to public safety and security, including by enabling the creation and the proliferation of weapons of mass destruction, such as biological, chemical, and nuclear weapons.”

After passing both chambers, the act was vetoed by Gov. Gavin Newsom in September 2024; he said it risked “curtailing the very innovation that fuels advancement in favor of the public good.”

Pain said few international frameworks exist for how to share biological data and train AI systems around biosecurity, and it’s unclear whether AI developers, biologists, publishers or governments could be held accountable for its misuse.

“Everything that we are talking about when it comes to biosecurity and AI has already happened without the existence of AI,” she said of previous biothreats.

Sahota said he worries we may need to see a real-life example of AI being weaponized for a biological threat, “where we feel the pain on a massive scale,” before governments get serious about regulating the technology.

Hansen agrees, and he predicts those moments may be coming. While some biological attacks could come from coordinated groups aiming to pull off a terroristic incident, Hansen said he worries about the “watch the world burn” types: nihilistic individuals who have historically turned to mass shootings.

“Right now, they look for historical precedent on how to cause collateral damage, and the historical precedent that they see is public shootings,” Hansen said. “I think very easily it could start to be the case that deploying bioweapons becomes pretty normal. I think after the first time that that happens in real life, we’ll start seeing a lot of copycats. And that makes me pretty, pretty nervous.”




Pittsburgh’s AI summit: five key takeaways



The push for artificial intelligence-related investments in Western Pennsylvania continued Thursday with a second conference that brought together business leaders and elected officials. 

Not in attendance this time was President Donald Trump, who headlined a July 15 celebration of AI opportunity at Carnegie Mellon University.

This time Gov. Josh Shapiro, U.S. Sen. David McCormick and others converged in Bakery Square in Larimer to emphasize emerging public-private initiatives in anticipation of growing data center development and other artificial intelligence-related infrastructure including power plants. 

Here’s what speakers and attendees at the summit were saying.

AI is not a fad

As regional leaders and business investors consider their options, BNY Mellon CEO Robin Vince cautioned against not taking AI seriously.

“The way to get left behind in the next 10 years is to not care about AI,” Vince said.

“AI is transforming everything,” said Selin Song during Thursday’s event. As president of Google Customer Solutions, Song said the company’s recent investment of $25 million across the Pennsylvania-New Jersey-Maryland (PJM) grid region will help give AI training access to the more than 1 million small businesses in the state.

Google isn’t the only game in town 

Shapiro noted that Amazon recently announced plans to spend at least $20 billion to establish multiple high-tech cloud computing and AI innovation campuses across the state.

“This is a generational change,” Shapiro said, calling it the largest private sector investment in Pennsylvania’s history. “This is our next chapter in innovative growth. It all fits together. This new investment is beyond data center 1.0 that we saw in Virginia.”   

Fracking concerns elevated

With all of the plans for new power-hungry data centers, some are concerned that the AI push will create more environmental destruction. Outside the summit, Food & Water Watch Pennsylvania cautioned that the interest in AI development is a “Trojan horse” for more natural gas fracking. Amid President Donald Trump’s attempts to dismantle wind and solar power, alternatives to natural gas appear limited. 

People gather in the Bakery Square area of Larimer Thursday, Sept. 11, to protest the nearby AI Horizons Summit. (Photo by Eric Jankiewicz/Pittsburgh’s Public Source)

Nuclear ready for its moment

But one possible alternative was raised at the AI conference by Westinghouse Electric Company’s interim CEO Dan Summer.

The Pittsburgh-headquartered company is leading a renewed interest in nuclear energy, with plans to build a number of its AP1000 reactors to help meet growing energy needs.

Summer said that the company is partnering with Google, allowing them to leverage Google’s AI capabilities “with our nuclear operations to construct new nuclear here.” 

China vs. ‘heroes’

Underlying much of the AI activity: concerns with China’s work in this field.

“With its vast resources, enormous capital, energy, workforce, the Chinese government is leveraging its resources to beat the United States in AI development,” said Nazak Nikakhtar, a national security and international trade attorney who chaired one of the panels Thursday.

Carnegie Mellon University President Farnam Jahanian, right, speaks at the AI Horizons Summit alongside Gov. Josh Shapiro, center, and other panelists. (Photo by Eric Jankiewicz/Pittsburgh’s Public Source)

Speaking to EQT’s CEO Toby Rice and Groq executive Ian Andrews, Nikakhtar outlined some of the challenges she saw in U.S. development of AI technology compared to China. 

“We are attempting to leverage, now, our own resources, albeit in some respects much more limited vis-a-vis what China has, to accelerate AI leadership here in the United States and beat China,” she said. “But we’re somewhat constrained by the resources we have, by our population, by workforce, capital.”

Rice said in response that the natural resources his company is extracting will help power the country’s ability to compete with China. 

Rice drew a link between the 9/11 terror attacks 24 years earlier and the “urgency” of competing with China in AI.

“People are looking to take down American economies,” Rice said. “And we have heroes. Never forget. And I do believe that us winning this race against China in AI is going to be one of the most heroic things we’re going to do.”

Eric Jankiewicz is PublicSource’s economic development reporter and can be reached at ericj@publicsource.org or on Twitter @ericjankiewicz.







Commanders vs. Packers props, SportsLine Machine Learning Model AI picks, bets: Jordan Love Over 223.5 yards



The NFL Week 2 schedule gets underway with a Thursday Night Football matchup between NFC playoff teams from a year ago. The Washington Commanders battle the Green Bay Packers beginning at 8:15 p.m. ET from Lambeau Field. Second-year quarterback Jayden Daniels led the Commanders to a 21-6 opening-day win over the New York Giants, completing 19 of 30 passes for 233 yards and one touchdown. Jordan Love, meanwhile, helped propel the Packers to a dominating 27-13 win over the Detroit Lions in Week 1. He completed 16 of 22 passes for 188 yards and two touchdowns. 

NFL prop bettors will likely target the two young quarterbacks with NFL prop picks, in addition to proven playmakers like Deebo Samuel, Romeo Doubs and Zach Ertz. Green Bay’s Jayden Reed has been dealing with a foot injury, but still managed to haul in a touchdown pass in the opener, while Austin Ekeler (shoulder) does not carry an injury designation for TNF. The Packers enter as a 3-point favorite with Green Bay at -172 on the money line, while the over/under is 49 points. Before betting any Commanders vs. Packers props for Thursday Night Football, you need to see the Commanders vs. Packers prop predictions powered by SportsLine’s Machine Learning Model AI.

Built using cutting-edge artificial intelligence and machine learning techniques by SportsLine’s Data Science team, AI Predictions and AI Ratings are generated for each player prop. 

For Packers vs. Commanders NFL betting on Thursday Night Football, the Machine Learning Model has evaluated the NFL player prop odds and provided Commanders vs. Packers prop picks. You can only see the Machine Learning Model player prop predictions for Washington vs. Green Bay here.

Top NFL player prop bets for Commanders vs. Packers

After analyzing the Commanders vs. Packers props and examining the dozens of NFL player prop markets, SportsLine’s Machine Learning Model says Packers quarterback Love goes Over 223.5 passing yards (-112 at FanDuel). Love passed for 224 or more yards in eight games a year ago, despite an injury-filled season. In 15 regular-season games in 2024, he completed 63.1% of his passes for 3,389 yards and 25 touchdowns with 11 interceptions. Additionally, Washington allowed an average of 240.3 passing yards per game on the road last season.

In a 30-13 win over the Seattle Seahawks on Dec. 15, he completed 20 of 27 passes for 229 yards and two touchdowns. Love completed 21 of 28 passes for 274 yards and two scores in a 30-17 victory over the Miami Dolphins on Nov. 28. The model projects Love to pass for 259.5 yards, giving this prop bet a 4.5 rating out of 5. See more NFL props here, and new users can also target the FanDuel promo code, which offers new users $300 in bonus bets if their first $5 bet wins.
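SportsLine doesn’t disclose how a projection becomes a 4.5-out-of-5 rating, but the basic math of a prop edge is comparing the probability implied by the projection against the break-even probability implied by the price. A rough sketch, in which the normality assumption and the 65-yard standard deviation are illustrative guesses rather than anything from SportsLine’s model:

```python
from statistics import NormalDist

projection = 259.5  # model's projected passing yards (from the article)
line = 223.5        # FanDuel's Over/Under line (from the article)
sigma = 65.0        # assumed game-to-game spread in passing yards (made up)

# P(actual yards > line), assuming yards are normal around the projection
p_over = 1 - NormalDist(projection, sigma).cdf(line)

# Break-even win probability implied by -112 American odds: 112 / (112 + 100)
break_even = 112 / 212

print(f"P(over): {p_over:.1%}, break-even at -112: {break_even:.1%}")
# ~71% vs. ~53% -> a positive-edge bet under these assumptions
```

Under those assumptions the Over hits about 71% of the time, comfortably above the roughly 53% needed to break even at -112, which is consistent with a high rating.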

How to make NFL player prop bets for Washington vs. Green Bay

In addition, the SportsLine Machine Learning Model says another star sails past his total, and it lists nine additional NFL props rated four stars or better. You need to see the Machine Learning Model analysis before making any Commanders vs. Packers prop bets for Thursday Night Football.

Which Commanders vs. Packers prop bets should you target for Thursday Night Football? Visit SportsLine now to see the top Commanders vs. Packers props, all from the SportsLine Machine Learning Model.






Adobe Says Its AI Sales Are Coming in Strong. But Will It Lift the Stock?



Adobe (ADBE) just reported record quarterly revenue driven by artificial intelligence gains. Will it revive confidence in the stock?

The creative software giant late Thursday posted adjusted earnings per share of $5.31 on revenue that jumped 11% year-over-year to a record $5.99 billion in the fiscal third quarter, above analysts’ estimates compiled by Visible Alpha, as AI revenues topped company targets.

CEO Shantanu Narayen said that with the third quarter’s revenue driven by AI, Adobe has already surpassed its “AI-first” revenue goals for the year, leading the company to boost its outlook. The company now anticipates full-year adjusted earnings of $20.80 to $20.85 per share and revenue of $23.65 billion to $23.7 billion, up from adjusted earnings of $20.50 to $20.70 per share on revenue of $23.50 billion to $23.6 billion previously.

Shares of Adobe rose in late trading Thursday. But they’ve had a tough year, with the stock down more than 20% in 2025 through Thursday’s close amid worries about the company’s AI progress and growing competition.

Wall Street is optimistic. The shares finished Thursday a bit below $351, and the mean price target tracked by Visible Alpha, above $461, represents a premium of more than 30%. Most of the analysts tracking the stock have “buy” ratings.

But even that target represents a degree of caution in the context of recent highs. The shares were above $600 in February 2024.


