
Ethics & Policy

“AI Ethics” Discourse Ignores Its Deadliest Use: War



AI Ethics is a hot topic in the artificial intelligence world. It features in keynote speeches at major conferences and spawns entire dedicated safety teams at large companies, all while government, industry, and academic leaders make a point of how hard they’re working to ensure AI proceeds ethically. Ostensibly, this is a response to well-founded fears about the technology’s possible (and proven) downsides, like its threat to the job market or its potential for harm in mental health settings.

But these conversations omit an honest consideration of the scariest application of all: the use of artificial intelligence to build weapons. AI’s military use doesn’t receive as much media attention as its use in other industries, but that does not mean it isn’t happening. And if AI is really so capable that it threatens to replace our livelihoods, how much more threatening is that same capability directed deliberately at causing death and destruction?

Such capability is, as of right now, in its nascent stages. The large majority of Earth’s existing weaponry has nothing to do with artificial intelligence. But the US and, insofar as one can tell, China, are betting that the next generation of weapons will. AI could soon reshape military power in several key areas, including intelligence analysis, vehicle guidance and control, and target acquisition–all areas where machine learning has improved significantly in recent years. 

Drones are an obvious application for AI guidance and control, but before it can be deployed there, AI still needs to clear some technical hurdles. Firstly, it needs to become more functional on constrained hardware, meaning devices with limited battery life and processing power (see: drones, which only have enough energy to remain in flight for about 30 minutes, and can therefore allocate only a small percentage of their power to data processing). Secondly, it needs to be integrated into low-latency systems, meaning those with near-instant reaction times. Thirdly, it needs to work with noisy and incomplete data, because that’s the only sort of data available in real time on a battlefield. If a large, state-of-the-art computer vision model were shown a clean video of a busy city street, it could easily locate the head of every human; run the same task on a model small enough to fit on drone hardware, using low-quality footage, and the result probably wouldn’t be usable. This may soon change, though. All three hurdles are active areas of research, and automatic target acquisition systems are already starting to be tested on the battlefield—which is the only real way to find out whether a new military technology works.
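To see how tight the constrained-hardware problem is, consider a back-of-the-envelope energy budget. The sketch below is purely illustrative: every number (battery size, the 10% compute share, the per-frame energy cost of a hypothetical pruned detector) is an invented assumption, not a measurement of any real drone.

```python
# Toy energy-budget sketch: can an on-board vision model keep up with the camera?
# All figures are made-up assumptions for illustration, not real hardware specs.

def frames_processable(battery_wh: float, compute_share: float,
                       joules_per_frame: float) -> int:
    """Frames an on-board model could process in one flight, given the
    fraction of the battery budget allocated to compute."""
    total_joules = battery_wh * 3600  # Wh -> J
    return int(total_joules * compute_share / joules_per_frame)

# Hypothetical small drone: 80 Wh battery, 10% of energy spare for compute,
# and a pruned/quantized detector costing ~0.5 J per frame.
frames = frames_processable(battery_wh=80, compute_share=0.10,
                            joules_per_frame=0.5)

# A 30-minute flight at 30 fps produces this many frames to analyze:
frames_needed = 30 * 60 * 30

print(frames, frames_needed, frames >= frames_needed)  # 57600 54000 True
```

Even with these generous toy numbers, the budget barely covers the video stream, which is why model compression and efficient edge inference are such active research areas.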

Ukraine’s Operation Spiderweb, in June 2025, constituted one of these tests. Ukraine smuggled over 100 drones across the Russian border and launched them to strike Russian long-range bomber aircraft parked at military bases. Zelensky claimed the attack hit 41 aircraft worth $7 billion. Putin has not commented on numbers, and claims the operation was a terrorist attack against civilians—but the magnitude of his response suggests the operation was a military success, meaning it inflicted serious damage.

Normally, drones are piloted by streaming video feeds and sensor data back to a human pilot at base, who moves a joystick that sends its own signal back saying where to move and what to fire at. Wireless signals are notoriously unreliable, though—especially from a moving source—and can be jammed by the enemy. Operation Spiderweb still mostly used remote human pilots, but its drones made some decisions autonomously, like navigation and small adjustments in position. And if the connection was completely lost, the drones had a fallback in the form of an on-board targeting system.
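The fallback behavior described above amounts to a simple mode switch. The sketch below is a hypothetical, drastically simplified illustration of that control logic—the mode names and the 250 ms latency threshold are invented, and real flight software is far more involved.

```python
# Toy sketch of link-loss fallback logic: fly on remote commands while the
# link is up; if it degrades or drops, fall back to on-board behavior
# instead of going inert. Entirely illustrative; not real flight software.

from dataclasses import dataclass

@dataclass
class LinkStatus:
    connected: bool     # is the command link up at all?
    latency_ms: float   # round-trip delay to the remote pilot

def choose_mode(link: LinkStatus, degraded_threshold_ms: float = 250.0) -> str:
    if not link.connected:
        # Link jammed or lost: the on-board targeting system takes over.
        return "ONBOARD_AUTONOMY"
    if link.latency_ms > degraded_threshold_ms:
        # Link too laggy for fine piloting: make small autonomous adjustments.
        return "HOLD_POSITION"
    # Normal case: human pilots via the video/command link.
    return "REMOTE_PILOT"

print(choose_mode(LinkStatus(True, 40)))    # REMOTE_PILOT
print(choose_mode(LinkStatus(True, 900)))   # HOLD_POSITION
print(choose_mode(LinkStatus(False, 0)))    # ONBOARD_AUTONOMY
```

The strategic point is in the last branch: the more capable that on-board mode becomes, the less jamming matters as a defense.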

Exactly how much of the operation was automated isn’t clear; Ukraine may want to overstate the role of AI to make its military seem more advanced. But the operation is part of a series of developments toward increased drone autonomy. Drones have been used extensively by both Ukraine and Russia, and they’re clearly going to be a key part of future conflicts. At present, “jamming” them is one of the best defenses because it can take out multiple drones at a time. The tactic uses radio-frequency interference to disrupt the signals between drone and pilot: swamp the right frequency band, and everything using that band in the area stops working. If drones could operate fully autonomously—flying to the target area, finding the target, aiming accurately, and firing—then they would need essentially no external signal at all. The only defense would be to shoot them down physically, one at a time (difficult against a large swarm), using missiles that cost more than the drones themselves. This could be the sort of game-changing technology that big military powers have their eyes on.

Meanwhile, in Gaza—the site of the world’s second-highest rate of human-inflicted death—the Israel Defense Forces are spearheading a different area of AI militarization. Unlike the war in Ukraine, fought between two advanced state militaries, the war in Gaza pits a powerful state military against a small paramilitary group and a large unarmed civilian population. This means the IDF doesn’t need AI for front-line combat but for intense surveillance, so it can track and kill anyone suspected of being a threat. (The IDF may have killed more than 60,000 Palestinians in Gaza, but over 2 million people are still alive there.)

Keeping track of the movements of even 10 percent of Gaza’s population means processing a lot of data: cell phone locations, video footage, tapped phone lines, and so on. It is only possible with heavy automation, and that’s precisely what the IDF has used. One automated system, Lavender, identifies potential Hamas operatives from a database; a human can then give cursory approval before an attack is ordered. The system is trained on known examples of Gazans with some involvement in Hamas or Palestinian Islamic Jihad, and it learns general patterns in their behaviour. When it finds new people exhibiting the same rough patterns, it marks them as targets. A senior IDF officer told the Israeli-Palestinian publication +972 that he has “much more trust” in Lavender than in human soldiers, because the technology takes emotion out of the equation: “Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”
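The underlying ML pattern here—train on labeled examples, then flag new cases with similar features—is decades old. The toy sketch below illustrates only that generic supervised-classification idea, using a nearest-centroid rule on synthetic data; it reflects nothing about the actual system, its features, or its scale.

```python
# Generic sketch of decades-old supervised classification: learn from labeled
# examples, then flag new cases whose features match the learned pattern.
# All data here is synthetic and illustrative; it reflects no real system.

def centroid(rows):
    """Mean feature vector of a list of examples."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def distance_sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(x, pos_centroid, neg_centroid):
    """Nearest-centroid rule: label 1 if x is closer to the positive class."""
    return 1 if distance_sq(x, pos_centroid) < distance_sq(x, neg_centroid) else 0

# Synthetic 2-D feature vectors for two labeled groups.
positives = [[0.9, 0.8], [0.8, 0.9], [1.0, 0.7]]
negatives = [[0.1, 0.2], [0.2, 0.1], [0.0, 0.3]]
pos_c, neg_c = centroid(positives), centroid(negatives)

print(classify([0.85, 0.75], pos_c, neg_c))  # 1: matches the learned pattern
print(classify([0.15, 0.25], pos_c, neg_c))  # 0: does not
```

The technique is simple; what is new is applying it to surveillance data covering millions of people, which is what makes targeting at this scale possible at all.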

This technology can then be coupled with other automated systems, like the grotesquely named “Where’s Daddy?”, which tracks the moment a target enters a given location, normally their family home, where they can be easily killed. As the IDF source put it: “It’s much easier to bomb a family home.” It’s possible that “Where’s Daddy?” also uses machine learning to predict things like future visits or how long they’ll last. Whatever the exact machine learning used in Lavender and related systems, it’s nothing like ChatGPT (the techniques are much older), but it still enables military action at an unprecedented scale. In the past, it was impossible for a state to monitor the movements of even a fraction of a small country, which made it very difficult to defeat a population that was broadly united against it. But as machines get better at recommending targets, the balance shifts more in favor of a wealthy, tech-advanced minority, and away from a much larger, less well-equipped public.

So AI is starting to be used at various points along the “kill chain.” Lavender and Where’s Daddy? cover the first two steps, identify and find, while the drone warfare seen in Ukraine is about the latter two, dispatch and destroy. You might wonder what the big deal is, within the omnipresent atrocities of war. Whether someone is identified by a machine or a human, shot by a drone or a soldier, they die in the same way—and while the machine can make mistakes, so can the human. But even thinking in such utilitarian terms, there are a few reasons to be concerned about the current weaponization of artificial intelligence.

Firstly, military adoption of AI could outstrip frameworks for its responsible use. After the invention of the atomic bomb, it took 20 years before negotiations even started on a non-proliferation treaty, and several more decades before the Comprehensive Test Ban Treaty was concluded. We’re not likely to see a single AI weapon as significant as the nuclear bomb, but cumulatively AI may keep changing what weapons are capable of—and that process may happen faster than we can agree on new laws or treaties.

Secondly, it could make everyone both more powerful and more vulnerable: a situation known in military theory as the capability-vulnerability paradox. For example, oil made WW2 militaries far more capable: they had no choice but to adopt it—in tanks, ships, aircraft and bomb production—or be left behind. But it also meant that if they ran out of oil, everything would fall apart. For a military to use AI, it needs to ensure a steady supply of computing infrastructure, well-trained workers who understand the technology, and lots of data. If it loses even one of these, the entire system could break. When both sides trade vulnerability for increased capability, the game-theoretic calculation shifts: a first strike looks more attractive, because you can cripple the opponent before they respond.
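The first-strike shift can be made concrete with a toy payoff comparison. The numbers below are arbitrary inventions chosen only to show the shape of the argument: as mutual vulnerability grows, striking first starts to beat waiting.

```python
# Toy payoff sketch of the capability-vulnerability paradox.
# The payoff functions and coefficients are invented for illustration;
# they encode only the qualitative story, not any real analysis.

def payoff_first_strike(v: float) -> float:
    # Striking first cripples the opponent's fragile AI infrastructure;
    # the more vulnerable both sides are, the bigger the relative gain.
    return 1 + 2 * v

def payoff_wait(v: float) -> float:
    # Waiting risks being crippled yourself before you can respond.
    return 2 - 3 * v

for v in (0.1, 0.5):
    better_to_strike = payoff_first_strike(v) > payoff_wait(v)
    print(f"vulnerability={v}: first strike preferred? {better_to_strike}")
# vulnerability=0.1: first strike preferred? False
# vulnerability=0.5: first strike preferred? True
```

At low vulnerability, patience wins; past a crossover point, the incentives flip—which is exactly why mutual dependence on fragile AI infrastructure is destabilizing.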

Finally, the mere uncertainty of how AI might develop in the future creates a general instability in geopolitical relations. It makes everyone more suspicious and prone to short-term thinking, which is already something that pushes toward escalation. 

War and geopolitics are extremely complicated, and adding artificial intelligence to the equation makes them even harder to understand. We should be devoting serious effort to figuring out the right principles to operate by and forming some idea of how we want this to play out. This is why it’s so disappointing that the movement billing itself ‘AI Ethics’ makes essentially no mention of the topic. The problems it does fixate on include the idea that AI systems might learn to deceive or disobey humans, and that if we don’t work hard to ‘align’ them, they may start to intentionally harm us; the plausibility of this threat is still a matter of debate. Other AI Ethics subfields do address clear problems, but ones of comparatively minor importance—like how chatbots can reflect societal biases, for example by assuming a doctor is a man or perpetuating stereotypes. Other focuses aren’t ethical problems at all, in the sense that they don’t involve questioning what’s right or wrong—like making sure AI medical diagnoses are accurate, or that driverless cars don’t crash.

The movement does sometimes talk about preventing misuse by “bad actors,” which at least acknowledges that AI can enable humans to do bad things to each other. But this doesn’t come close to the heart of the matter. Firstly, it focuses almost entirely on chatbots, constructing semi-plausible scenarios in which they allow an individual to do terrible damage—e.g. by helping them make a bioweapon—even though autonomous vehicle control and object detection have much more direct uses in weaponry. Secondly, these so-called bad actors, while never specified in detail, clearly do not include, for example, the US Department of Defense: the group spending more money on AI weaponry than anyone else in the world is simply ignored.

A look at DoD spending offers a partial explanation for this glaring hole in AI ethics discourse. As it turns out, plenty of AI researchers and practitioners receive military funding; depending on how you count, it may be that many do. The cultural divide between Silicon Valley and the military-industrial complex doesn’t stop the former from accepting tens, if not hundreds, of billions of dollars from the latter. Some of this comes in the form of DoD contracts, the kind traditionally gobbled up by the likes of Lockheed Martin. Giants such as Amazon and Microsoft receive DoD contracts, as do Palantir and a host of military AI startups.

Money also comes in the form of research grants to universities and other institutes. Carnegie Mellon University has over 20 contracts with the DoD, several of which fund AI research: over $3 million from the US Army’s Artificial Intelligence Task Force to research reconnaissance tools to increase soldiers’ “lethality and survivability,” about $5 million vaguely pertaining to “AI fusion,” and about $13.4 million to “fortify the nation’s security and defense.” In total, the DoD spent over $3bn on basic research in 2022, and its budget has continued to grow since. Exactly which topics it’s interested in is classified, to prevent other countries from inferring its strategy, but given its prioritization of the area, AI research is likely a substantial chunk—and much of the work presented at the big AI conferences could plausibly be funded as basic research here. That figure covers pure research only; a separate $80bn goes to deploying it in military applications. China spends a third as much as the US, and keeps an even tighter lid on the details, but again one would assume a significant portion goes to AI research.

Where does this money end up? It’s certainly possible to find AI research papers, both from universities and industry labs, with explicit support from DARPA (an agency under the DoD), and there may be many more that downplay the connection. Most conferences now require authors of new AI research to comment on the potential harms of what they’re building. If the technology may be used in a weapon, the honest answers are “could be used to kill people,” “could exacerbate violent state surveillance and ethnic cleansing,” or “could contribute to the AI arms race and the destabilization of geopolitical relations.” But you’re hardly going to say anything so challenging to your funders—and if your research was specifically funded by the military, you’re hardly going to list mass surveillance, death, and destruction as potential harms. Those outcomes are the whole point.

Another key to understanding why AI Ethics focuses where it does is the broader belief system of the small number of people driving the movement. Much of the AI world is led by Silicon Valley, especially OpenAI and Anthropic, and so AI Ethics has been infused with their particular worldview. That worldview follows a few core dogmas: that science and technological progress can solve nearly all of today’s problems; that AI, and specifically LLMs, will radically transform most aspects of life in the next few years by rapidly accelerating science and technology; and that it is essential that the US lead this revolution, because it is the only country that will use this untold power benevolently.

Any problems that don’t resonate with this framework are sidelined—especially if they challenge the idea that the United States is a unilateral force for good. Under this worldview, it’s palatable to discuss a misdiagnosed medical scan, but it’s much more uncomfortable to consider the IDF using machine learning and US-provided computing services to plan bombing family homes. While one can make the former out to be very concerning, it doesn’t challenge any of the core dogmas, whereas the latter most certainly does. 

The favorite dystopian danger fantasy is that of a rogue superintelligence, one that pursues its objective so effectively that it starts to kill humans. (For example, the AI is instructed to eradicate cancer, and achieves this by exterminating the human race.) These scenarios play neatly into the belief that LLM-based products are immensely powerful—and cast their creators as the noble guardians of this great power. Thank god it is us, the good guys, who are the best in the world at building LLMs, because we will take care to do it justly, unlike those other guys. Of course, ‘other guys’ here essentially means China: the only other country able to produce world-leading LLMs.

When the Chinese company DeepSeek released its R1 model in January 2025, many in Silicon Valley reacted as if it were an act of aggression by the Chinese Communist Party. There was a slew of calls for the US to up its game in the AI arms race, and Meta even assembled emergency “war rooms” to analyze where it had been outdone. Normal market competition was reframed as Silicon Valley engineers being tasked with defending democracy itself.

This gives another perspective on the “bad actors” mentioned above. Restricting GPU exports or banning DeepSeek can now actually be sold as AI ethics: a way to ensure freedom triumphs.

This comes from the belief that whoever controls LLMs will be positioned to control everything else. The same logic is used by those hawkish enough to argue directly for building AI weapons: China is doing it, they say, and they’re going to defeat us unless we do too. Anduril CEO Palmer Luckey gave a TED talk in April 2025 titled “The AI Arsenal That Could Stop WWIII,” which described a doomsday scenario of China conducting a bolt-from-the-blue, full-scale invasion of Taiwan. Luckey argued that this could soon be a reality unless the U.S. fields a vast AI-driven arsenal to repel China’s attack. On the face of it, he makes a convincing case. While opinions differ on how likely such a doomsday scenario is, there’s broad agreement that it’s a possibility. The Chinese naval expansion under Xi Jinping, coupled with his consistent position on the need for reunification, is indeed scary; it is comforting, and may seem sensible, to beef up militarily in response. But where does this path lead? Palantir CEO Alex Karp recently told CNBC, in regard to the AI arms race: “Either we will win, or China will win.” Karp was expressing the same pot-commitment to the zero-sum game—but also implying that this race has a finish line: that one day, the US will win and the whole problem will be resolved.

Anthropic CEO Dario Amodei seems to believe something similar, saying that democracies “may be able to parlay their AI superiority into a durable advantage,” so that “our worst adversaries… give up competing with democracies in order to receive all the benefits and not fight a superior foe.” This is a very optimistic view of how arms races work, and it betrays a lack of understanding of China’s perspective. Central to Chinese identity and values is the historical ‘Century of Humiliation,’ beginning with the Opium Wars in the 1800s, when Western naval superiority forcefully imposed unfair and harmful trade agreements. China would never accept anything resembling a repeat round.

Unfortunately, this reality is lost on current policymakers. When the Biden administration restricted GPU exports to China in 2022, it primarily cited national security reasons—implying China might use the chips for game-changing weapons. But the restrictions targeted chips used for LLMs, which have little near-term military utility. The effect was to antagonize China, trigger memories of the Opium Wars, and tilt the market toward U.S. firms. Eventually, China can simply produce its own chips or design around weaker ones, as DeepSeek has already done. The strategy buys at most a few years, while ratcheting up tensions.

That short-termism is the failure of all these perspectives. Each accepts a future of arms races and antagonism, and ignores the one outcome that isn’t terrible: a peaceful resolution. Through compromise and careful diplomatic maneuvering, it remains possible for Taiwan and the PRC to resume dialogue and cool off, and for China–West relations to stabilize. That is what we should be devoting our energy to achieving. Instead, the AI ethics community sits comfortably inside the worldview of US supremacy; without getting its hands dirty by actually mentioning the military, it largely reflects the techno-hawkism of Luckey and Karp.

In the end, then, the AI Ethics movement’s silence on AI’s burgeoning military use is unsurprising. The movement doesn’t say anything controversial to Washington (including the military-industrial complex), because that’s a source of money as well as an invaluable stamp of importance. It’s fine—even encouraged—to make veiled digs at China, Russia or North Korea, the “bad actors” it sometimes refers to, but otherwise the industry avoids anything “political.” It also mostly frames the issues as centered on LLMs, because it wants to paint its leaders’ tech products as pivotally important in all respects. That makes it awkward to bring in military applications, because it’s pretty obvious that LLMs have little current military value.

I personally came to AI research nearly ten years ago, from a deep curiosity about the nature of the mind and the self. At that time it was still a somewhat fringe subject, and as the field exploded into public awareness, I’ve been horrified to watch it intertwine with the most powerful and destructive systems on the planet, including the military-industrial complex, and, potentially, the outbreak of the next major global conflicts. To find the right way forward, we need to think much more deeply about where we’re going and what our values are. We need an authentic AI Ethics movement that questions the forces and assumptions shaping current development, rather than imbibing the views passed down from a few, often misguided, leaders. 

There are many new difficult questions facing the AI world, with likely many more on the way, and so far we have barely found the courage to even ask them.








Hyderabad: Dr. Pritam Singh Foundation hosts AI and ethics round table at Tech Mahindra



The Dr. Pritam Singh Foundation and IILM University hosted a Round Table on “Human at Core: AI, Ethics, and the Future” in Hyderabad. Leaders and academics discussed leveraging AI for inclusive growth while maintaining ethics, inclusivity, and human-centric technology.

Published Date – 30 August 2025, 12:57 PM




Hyderabad: The Dr. Pritam Singh Foundation, in collaboration with IILM University, hosted a high-level Round Table Discussion on “Human at Core: AI, Ethics, and the Future” at Tech Mahindra, Cyberabad.

The event, held in memory of the late Dr. Pritam Singh, pioneering academic, visionary leader, and architect of transformative management education in India, brought together policymakers, business leaders, and academics to explore how India can harness artificial intelligence (AI) while safeguarding ethics, inclusivity, and human values.


In his keynote address, Padmanabhaiah Kantipudi, IAS (Retd.), Chairman of the Administrative Staff College of India (ASCI), paid tribute to Dr. Pritam Singh, describing him as a nation-builder who bridged academia, business, and governance.

The Round Table theme, Leadership: AI, Ethics, and the Future, underscored India’s opportunity to leverage AI for inclusive growth across healthcare, agriculture, education, and fintech—while ensuring technology remains human-centric and trustworthy.






AI ethics: Bridging the gap between public concern and global pursuit – Pennsylvania



(The Center Square) – Those who grew up in the 20th and 21st centuries have spent their lives in an environment saturated with cautionary tales about technology and human error, projections of ancient flood myths onto modern scenarios in which the hubris of our species brings our downfall.

They feature a point of no return, dubbed the “singularity” by Manhattan Project mathematician John von Neumann, who suggested that technology would advance to a stage after which life as we know it would become unrecognizable.

Some say with the advent of artificial intelligence, that moment has come. And with it, a massive gap between public perception and the goals of both government and private industry. While states court data center development and tech investments, polling from Pew Research indicates Americans outside the industry have strong misgivings about AI.

In Pennsylvania, giants like Amazon and Microsoft have pledged to spend billions building the high-powered infrastructure required to enable the technology. Fostering this progress is a rare point of agreement between the state’s Democratic and Republican leadership, even bringing Gov. Josh Shapiro to the same event – if not the same stage – as President Donald Trump.

Pittsburgh is rebranding itself as the “global capital of physical AI,” leveraging its blue-collar manufacturing reputation and its prestigious academic research institutions to depict the perfect marriage of code and machine. Three Mile Island is rebranding itself as Crane Clean Energy Center, coming back online exclusively to power Microsoft AI services. Some legislators are eager to turn the lights back on fossil fuel-burning plants and even build new ones to generate the energy required to feed both AI and the everyday consumers already on the grid.


At the federal level, Trump has revoked guardrails established under the Biden administration with an executive order entitled “Removing Barriers to American Leadership in Artificial Intelligence.” In July, the White House released its “AI Action Plan.”

The document reads, “We need to build and maintain vast AI infrastructure and the energy to power it. To do that, we will continue to reject radical climate dogma and bureaucratic red tape, as the Administration has done since Inauguration Day. Simply put, we need to ‘Build, Baby, Build!’”

To borrow an analogy from Shapiro’s favorite sport, it’s a full-court press, and there’s hardly a day that goes by that messaging from the state doesn’t tout the thrilling promise of the new AI era. Next week, Shapiro will be returning to Pittsburgh along with a wide array of luminaries to attend the AI Horizons summit in Bakery Square, a hub for established and developing tech companies.

According to leaders like Trump and Shapiro, the stakes could not be higher. It isn’t just a race for technological prowess — it’s an existential fight against China for control of the future itself. AI sits at the heart of innovation in fields like biotechnology, which promise to eradicate disease, address climate collapse, and revolutionize agriculture. It also sits at the heart of defense, an industry that thrives in Pennsylvania.

Yet, one area of overlap in which both everyday citizens and AI experts agree is that they want to see more government control and regulation of the technology. Already seeing the impacts of political deepfakes, algorithmic bias, and rogue chatbots, AI has far outpaced legislation, often to disastrous effect.

In an interview with The Center Square, Penn researcher Dr. Michael Kearns said that he’s less worried about autonomous machines becoming all-powerful than the challenges already posed by AI.


Kearns spends his time creating mathematical models and writing about how to embed ethical human principles into machine code. He believes that in some areas like chatbots, progress may have reached a point where improvements appear incremental for the average user. He cites the most recent ChatGPT update as evidence.

“I think the harms that are already being demonstrated are much more worrisome,” said Kearns. “Demographic bias, chatbots hurling racist invectives because they were trained on racist material, privacy leaks.”

Kearns says that a major barrier to effective regulatory policy is persuading experts to leave engaging research work and lucrative roles in tech in order to work on policy. Without people who understand how the algorithms operate, it’s difficult to create “auditable” regulations, meaning ones with clear tests to pass.

Kearns pointed to ISO/IEC 42001, an international standard that focuses on process rather than outcome to guide developers in creating ethical AI. He also noted that the market itself is a strong guide. When someone gets hurt or hurts someone else using AI, it’s bad for business, incentivizing companies to do their due diligence.

He also noted crossroads where two ethical issues intersect. For instance, companies are entrusted with their users’ personal data. If policing misuse of the product requires an invasion of privacy, like accessing information stored on the cloud, there’s only so much that can be done.

OpenAI recently announced that it is scanning user conversations for concerning statements and escalating them to human teams, who may contact authorities when deemed appropriate. For some, the idea of alerting the police to someone suffering from mental illness is a dangerous breach. Still, it demonstrates the calculated risks AI companies have to make when faced with reports of suicide, psychosis, and violence arising out of conversations with chatbots.

Kearns says that even with the imperative for self-regulation on AI companies, he expects there to be more stumbling blocks before real improvement is seen in the absence of regulation. He cites watchdogs like the investigative journalists at ProPublica who demonstrated machine bias against Black people in programs used to inform criminal sentencing in 2016.

Kearns noted that the “headline risk” is not the same as enforceable regulation and mainly applies to well-established companies. For the most part, a company with a household name has an investment in maintaining a positive reputation. For others just getting started or flying under the radar, however, public pressure can’t replace law.

One area of AI concern that has been widely explored in the media is the use of AI by those who make and enforce the law. Kearns said, for his part, he’s found “three-letter agencies” to be “among the most conservative of AI adopters just because of the stakes involved.”

In Pennsylvania, AI is used by the state police force.

In an email to The Center Square, PSP Communications Director Myles Snyder wrote, “The Pennsylvania State Police, like many law enforcement agencies, utilizes various technologies to enhance public safety and support our mission. Some of these tools incorporate AI-driven capabilities. The Pennsylvania State Police carefully evaluates these tools to ensure they align with legal, ethical, and operational considerations.”

PSP was unwilling to discuss the specifics of those technologies.

AI is also used by the U.S. military and other militaries around the world, including those of Israel, Ukraine, and Russia, who are demonstrating a fundamental shift in the way war is conducted through technology.

In Gaza, the Lavender AI system was used to identify and target individuals connected with Hamas, allowing human agents to approve strikes with acceptable numbers of civilian casualties, according to Israeli intelligence officials who spoke to The Guardian on the matter. Analysis of AI use in Ukraine calls for a nuanced understanding of the way the technology is being used and ways in which it should be regulated by international bodies governing warfare in the future.

Then, there are the more ephemeral concerns. Along with the long-looming “jobpocalypse,” many fear that offloading our day-to-day lives into the hands of AI may deplete our sense of meaning. Students using AI may fail to learn. Workers using AI may feel purposeless. Relationships with or grounded in AI may lead to disconnection.

Kearns acknowledged that there would be disruption in the classroom and workplace to navigate but it would also provide opportunities for people who previously may not have been able to gain entrance into challenging fields.

As for outsourcing joy, he asked “If somebody comes along with a robot that can play better tennis than you and you love playing tennis, are you going to stop playing tennis?”



Source link

Continue Reading

Ethics & Policy

Bridging the gap between public concern and global pursuit



(The Center Square) – Those who grew up in the 20th and 21st centuries have spent their lives in an environment saturated with cautionary tales about technology and human error, projections of ancient flood myths onto modern scenarios in which the hubris of our species brings our downfall.

They feature a point of no return, dubbed the “singularity” by Manhattan Project physicist John von Neumann, who suggested that technology would advance to a stage after which life as we know it would become unrecognizable.

Some say with the advent of artificial intelligence, that moment has come. And with it, a massive gap between public perception and the goals of both government and private industry. While states court data center development and tech investments, polling from Pew Research indicates Americans outside the industry have strong misgivings about AI.


In Pennsylvania, giants like Amazon and Microsoft have pledged to spend billions building the high-powered infrastructure required to enable the technology. Fostering this progress is a rare point of agreement between the state’s Democratic and Republican leadership, even bringing Gov. Josh Shapiro to the same event – if not the same stage – as President Donald Trump.

Pittsburgh is rebranding itself as the “global capital of physical AI,” leveraging its blue-collar manufacturing reputation and its prestigious academic research institutions to depict the perfect marriage of code and machine. Three Mile Island is rebranding itself as Crane Clean Energy Center, coming back online exclusively to power Microsoft AI services. Some legislators are eager to turn the lights back on fossil fuel-burning plants and even build new ones to generate the energy required to feed both AI and the everyday consumers already on the grid.

At the federal level, Trump has revoked guardrails established under the Biden administration with an executive order entitled “Removing Barriers to American Leadership in Artificial Intelligence.” In July, the White House released its “AI Action Plan.”

The document reads, “We need to build and maintain vast AI infrastructure and the energy to power it. To do that, we will continue to reject radical climate dogma and bureaucratic red tape, as the Administration has done since Inauguration Day. Simply put, we need to ‘Build, Baby, Build!’”

To borrow an analogy from Shapiro’s favorite sport, it’s a full-court press, and hardly a day goes by without messaging from the state touting the thrilling promise of the new AI era. Next week, Shapiro will return to Pittsburgh along with a wide array of luminaries to attend the AI Horizons summit in Bakery Square, a hub for established and developing tech companies.

According to leaders like Trump and Shapiro, the stakes could not be higher. It isn’t just a race for technological prowess — it’s an existential fight against China for control of the future itself. AI sits at the heart of innovation in fields like biotechnology, which promise to eradicate disease, address climate collapse, and revolutionize agriculture. It also sits at the heart of defense, an industry that thrives in Pennsylvania.

Yet, one area of overlap in which both everyday citizens and AI experts agree is that they want to see more government control and regulation of the technology. With the impacts of political deepfakes, algorithmic bias, and rogue chatbots already being felt, AI has far outpaced legislation, often to disastrous effect.

In an interview with The Center Square, Penn researcher Dr. Michael Kearns said that he’s less worried about autonomous machines becoming all-powerful than the challenges already posed by AI.

Kearns spends his time creating mathematical models and writing about how to embed ethical human principles into machine code. He believes that in some areas like chatbots, progress may have reached a point where improvements appear incremental for the average user. He cites the most recent ChatGPT update as evidence.

“I think the harms that are already being demonstrated are much more worrisome,” said Kearns. “Demographic bias, chatbots hurling racist invectives because they were trained on racist material, privacy leaks.”

Kearns says that a major barrier to getting effective regulatory policy is persuading experts to leave behind engaging research work and lucrative roles in tech in order to work on policy. Without people who understand how the algorithms operate, it’s difficult to create “auditable” regulations, meaning rules with clear tests to pass.
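What an “auditable” regulation with a clear test to pass might look like can be sketched in a few lines of code. The example below is purely illustrative and not drawn from any actual rule: the demographic-parity metric and the 0.8 threshold (the informal “four-fifths rule” sometimes cited in hiring-discrimination contexts) are assumptions chosen to show the shape of a reproducible pass/fail check, not a real regulatory standard.

```python
# Illustrative sketch of an "auditable" fairness test: a regulator-style
# pass/fail check with an explicit, reproducible threshold.
# The 0.8 ratio is an example value, not an actual legal standard.

def approval_rate(decisions):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def passes_disparate_impact_audit(decisions_by_group, min_ratio=0.8):
    """Return True only if every group's approval rate is at least
    min_ratio times the highest group's approval rate."""
    rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
    highest = max(rates.values())
    return all(rate >= min_ratio * highest for rate in rates.values())

# Example: model decisions (1 = approved) broken out by demographic group.
audit_data = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}
print(passes_disparate_impact_audit(audit_data))  # False: 0.375 < 0.8 * 0.75
```

The point of the sketch is that the test is mechanical: any auditor running the same check on the same data gets the same verdict, which is what distinguishes an enforceable rule from a vague principle.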

Kearns pointed to ISO/IEC 42001, an international standard that focuses on process rather than outcome to guide developers in creating ethical AI. He also noted that the market itself is a strong guide: when someone gets hurt or hurts someone else using AI, it’s bad for business, which incentivizes companies to do their due diligence.

Kearns also described crossroads where two ethical issues intersect. For instance, companies are entrusted with their users’ personal data. If policing misuse of the product requires an invasion of privacy, like accessing information stored on the cloud, there’s only so much that can be done.

OpenAI recently announced that it is scanning user conversations for concerning statements and escalating them to human teams, who may contact authorities when deemed appropriate. For some, the idea of alerting the police to someone suffering from mental illness is a dangerous breach. Still, it demonstrates the calculated risks AI companies must take when faced with reports of suicide, psychosis, and violence arising out of conversations with chatbots.
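The trade-off at issue, scanning everything while limiting what human reviewers actually see, can be pictured as a simple triage pipeline. Everything in the sketch below is a hypothetical illustration: the keyword list, the scores, the threshold, and the redaction step are invented for explanation and do not reflect OpenAI’s or any vendor’s real system.

```python
# Hypothetical triage sketch: escalate a conversation to human review
# only when a risk score crosses a threshold, and strip the user's
# identity before reviewers see the content. Purely illustrative.

RISK_TERMS = {"hurt myself": 0.9, "end my life": 0.9, "attack": 0.5}

def risk_score(message: str) -> float:
    """Crude keyword-based score; a real system would use a trained
    classifier rather than substring matching."""
    text = message.lower()
    return max((w for term, w in RISK_TERMS.items() if term in text),
               default=0.0)

def triage(message: str, user_id: str, threshold: float = 0.8) -> dict:
    """Escalate only high-risk messages, redacting the user identifier
    so reviewers see content without identity by default."""
    score = risk_score(message)
    if score >= threshold:
        return {"escalate": True, "score": score, "user": "REDACTED"}
    return {"escalate": False, "score": score, "user": user_id}

print(triage("I want to end my life", "user123"))
# {'escalate': True, 'score': 0.9, 'user': 'REDACTED'}
```

Even this toy version makes the privacy tension concrete: the scoring step must read every message, so the design question becomes how little of that content, and whose identity, reaches a human.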

Kearns says that even with the incentive for self-regulation, he expects more stumbling blocks before real improvement is seen in the absence of formal rules. He cites watchdogs like the investigative journalists at ProPublica, who in 2016 demonstrated machine bias against Black people in programs used to inform criminal sentencing.

Kearns noted that the “headline risk” is not the same as enforceable regulation and mainly applies to well-established companies. For the most part, a company with a household name has an investment in maintaining a positive reputation. For others just getting started or flying under the radar, however, public pressure can’t replace law.

One area of AI concern that has been widely explored in the media is the use of AI by those who make and enforce the law. Kearns said, for his part, he’s found “three-letter agencies” to be “among the most conservative of AI adopters just because of the stakes involved.”

In Pennsylvania, AI is used by the state police force.

In an email to The Center Square, PSP Communications Director Myles Snyder wrote, “The Pennsylvania State Police, like many law enforcement agencies, utilizes various technologies to enhance public safety and support our mission. Some of these tools incorporate AI-driven capabilities. The Pennsylvania State Police carefully evaluates these tools to ensure they align with legal, ethical, and operational considerations.”

PSP was unwilling to discuss the specifics of those technologies.

AI is also used by the U.S. military and other militaries around the world, including those of Israel, Ukraine, and Russia, who are demonstrating a fundamental shift in the way war is conducted through technology.

In Gaza, the Lavender AI system was used to identify and target individuals connected with Hamas, allowing human agents to approve strikes with what were deemed acceptable numbers of civilian casualties, according to Israeli intelligence officials who spoke to The Guardian. Analysts of AI use in Ukraine call for a nuanced understanding of how the technology is being used and how it should be regulated by international bodies governing warfare in the future.

Then, there are the more ephemeral concerns. Along with the long-looming “jobpocalypse,” many fear that offloading our day-to-day lives into the hands of AI may deplete our sense of meaning. Students using AI may fail to learn. Workers using AI may feel purposeless. Relationships with or grounded in AI may lead to disconnection.

Kearns acknowledged that there would be disruption to navigate in the classroom and workplace, but said AI would also create opportunities for people who previously may not have been able to gain entrance into challenging fields.

As for outsourcing joy, he asked, “If somebody comes along with a robot that can play better tennis than you and you love playing tennis, are you going to stop playing tennis?”


