
The Dangers of AI in the Courtroom

Scholars examine the dangers of difficult-to-understand AI in criminal investigations and cases.

Artificial intelligence (AI) is becoming increasingly prevalent in American society and has even found its way into the courtroom.

In an article, Brandon L. Garrett, a professor at Duke Law School, and Cynthia Rudin, a professor of computer science at Duke University, argue for national and local regulation to ensure that judges, jurors, and lawyers can fully interpret the AI used in the criminal justice system.

Garrett and Rudin note a troubling trend toward “black box” AI, which they define as AI models too complex for ordinary people to understand. They contrast this with “glass box” AI: models whose calculations are inherently interpretable, allowing people to understand both how the models work and the information they rely on.

Garrett and Rudin explain that the use of black box AI has expanded rapidly in criminal cases, including facial recognition technology, risk assessments of the likelihood that defendants will reoffend, and predictive policing. Because life and liberty are at stake in criminal trials, they argue, judges and jurors must fully understand the AI used in those cases. Absent a compelling or credible government interest, Garrett and Rudin contend, constitutional rights and safety interests require that all AI used in the criminal justice system be glass box AI.

Garrett and Rudin note a common misconception that there is a tradeoff between black box and glass box AI: that black box AI is incomprehensible but also more accurate. They cite research showing that in criminal law settings, black box AI does not perform better than simpler, easier-to-interpret models.
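To make the distinction concrete, here is a minimal, hypothetical sketch of the kind of comparison such research involves, using scikit-learn on synthetic data. It is not the authors’ study; the dataset, features, and model choices are illustrative assumptions. A shallow decision tree stands in for a glass box model whose full decision logic can be printed and audited, while a gradient-boosted ensemble stands in for a black box model whose internals cannot be summarized so simply.

```python
# Illustrative sketch only (not the authors' study): compare a small,
# human-readable decision tree ("glass box") with a gradient-boosted
# ensemble ("black box") on synthetic, risk-assessment-style tabular data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for tabular features (age, prior offenses, etc.).
X, y = make_classification(n_samples=5000, n_features=8, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

glass_box = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Print both test accuracies so the gap (if any) can be inspected directly.
print("glass box accuracy:", round(glass_box.score(X_test, y_test), 3))
print("black box accuracy:", round(black_box.score(X_test, y_test), 3))

# The glass box model's entire decision logic can be printed and audited;
# the ensemble's hundreds of trees cannot be summarized this way.
print(export_text(glass_box, feature_names=[f"feature_{i}" for i in range(8)]))
```

Whether the accuracy gap is small on any particular dataset is an empirical question; the claim that simpler models hold up in criminal law settings belongs to the research Garrett and Rudin cite, not to this toy example.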

Garrett and Rudin argue that national and local regulatory measures are needed to ensure that glass box AI, rather than black box AI, is used in the criminal justice system. Garrett and Rudin explain that in Europe, the General Data Protection Regulation (GDPR) provides a “right to explanation” for consumers of AI. A companion directive to the GDPR restricts the use of AI in investigations of possible criminal activity and requires assessments of AI’s risks to “rights and freedoms” and data privacy. Garrett and Rudin contend that U.S. criminal defendants deserve similar protections from black box AI.

Garrett and Rudin explain that in the United States, multiple groups have called for bans on facial recognition technology—technology that attempts to identify persons through matching images of a face to a database of collected faces—based on claims that it is unfair and unjust. Garrett and Rudin note that although ten U.S. states have passed restrictions on law enforcement’s use of facial recognition technology, none of those state laws mandate the use of glass box AI for facial recognition technology.

The Federal Trade Commission (FTC) has issued guidance to prevent the use of AI to engage in unfair or deceptive practices in private industry. Although the FTC acknowledges that datasets that train AI models may lead to privacy concerns, Garrett and Rudin note that the FTC has not discussed the possibility of replacing black box AI approaches with glass box AI approaches.

Garrett and Rudin propose legislation requiring law enforcement agencies that use AI in criminal investigations to use glass box AI. They also contend that statutes should require validation of the data underlying AI used by law enforcement, to ensure that the information and material used to investigate and convict people are sound. Garrett and Rudin note that no such statutes have yet been introduced in the United States.

Garrett and Rudin explain that the European Union’s Law Enforcement Directive limits AI’s use in criminal cases and emphasizes that AI use must respect accountability, fairness, and nondiscrimination. The directive calls for addressing AI risks in criminal cases through verification, transparency, careful testing, and explainability, and it treats all law enforcement use of AI as high risk and subject to enhanced oversight.

Garrett and Rudin conclude by noting that in limited circumstances the government may have a compelling case to justify the use of black box AI, such as in national security cases. Garrett and Rudin emphasize, however, that the burden should always lie with the government to establish the existence of such a state interest.




xAI Releases Grok 4 AI Models

Elon Musk’s xAI startup has unveiled the latest version of its flagship foundation artificial intelligence (AI) model, Grok 4.

In a livestream on X, Musk bragged about the model while also fretting about the impact on humanity if the AI turns evil.

“This is the smartest AI in the world,” said Musk while surrounded by members of his xAI team. “In some ways, it’s terrifying.”

He compared Grok 4 to a “super-genius child” that must be instilled with the “right values” of truthfulness and a sense of honor so that society can benefit from its advances.

Musk admitted to being “worried,” saying that “it’s somewhat unnerving to have intelligence created that is far greater than our own, and will this be bad or good for humanity?”

The xAI owner concluded that “most likely, it’ll be good.”

Musk said Grok 4 is designed to perform at the “post-graduate level” in many topics simultaneously, which no person can do. It can also handle images, generate realistic visuals and tackle complex analytical tasks.

Musk claimed that Grok 4 would score perfectly on the SAT and on graduate-level exams like the GRE, even without seeing the questions beforehand.

Alongside the model release, xAI introduced SuperGrok Heavy, a subscription tier priced at $300 per month. A standard Grok 4 tier is available for $30 monthly, and the basic tier is free.

OpenAI, Google, Anthropic and Perplexity have unveiled higher-priced tiers as well: ChatGPT Pro, at $200 a month; Gemini Ultra, at $249.99 a month; Claude Max, at $200 a month; and Perplexity Max, at $200 a month.

See also: Elon Musk Startup xAI Launches App Offering Access to Grok Chatbot

Turbulent Week for Grok and X

Grok 4’s launch follows a turbulent week marked by antisemitic content generated by Grok 3 and the resignation of Linda Yaccarino, the CEO of X.

Grok 4 is being released in two configurations: the standard Grok 4 and the premium “Heavy” version.

The Heavy model features a multi-agent architecture capable of collaborative reasoning on challenging problems.
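xAI has not published the internals of this architecture. As a generic, hypothetical illustration of the multi-agent idea only (not xAI’s implementation), the sketch below has several independent “solver” calls attempt a problem and a simple aggregation step pick the consensus answer; ask_model is a placeholder stub, not a real API.

```python
# Conceptual illustration of "multiple agents propose, an aggregator decides."
# This is NOT xAI's published design; ask_model() is a hypothetical stub.
from collections import Counter

def ask_model(prompt: str, seed: int) -> str:
    # Stand-in for a call to any LLM endpoint; returns canned answers here
    # so the aggregation pattern can be run end to end.
    return "42" if seed % 2 == 0 else "41"

def multi_agent_answer(question: str, n_agents: int = 5) -> str:
    # Each agent attempts the problem independently (e.g., with different
    # seeds or sampling temperatures), diversifying the reasoning paths.
    candidates = [ask_model(f"Solve step by step: {question}", seed=i)
                  for i in range(n_agents)]
    # A simple aggregation rule: majority vote over candidate answers.
    # Real systems may instead have agents critique and refine one another.
    answer, _count = Counter(candidates).most_common(1)[0]
    return answer

print(multi_agent_answer("What is 6 * 7?"))  # prints "42" with this stub
```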

The model demonstrates advances in multimodal processing, faster reasoning and an upgraded user interface. According to xAI, Grok 4 can solve complex math problems, interpret images — including scientific visuals such as black hole collisions — and perform predictive analytics, such as estimating a team’s odds of winning a championship. 

Benchmark data shared by xAI shows that Grok 4 Heavy outperformed previous models on tests such as Humanity’s Last Exam.

xAI outlined an aggressive roadmap for the remainder of 2025: launching a coding‑specific AI in August, a multimodal agent in September and a model capable of generating full video by October.

Grok 4’s release intensifies the competition among leading AI firms. OpenAI is expected to roll out GPT‑5 later this summer, while Google continues to develop its Gemini series.


Artificial Intelligence Coverage Under Cyber Insurance

A small but growing number of cyber insurers are incorporating language into their policies that specifically addresses risks from artificial intelligence (AI). The June 2025 issue of The Betterley Report’s Cyber/Privacy Market Survey identifies at least three insurers that are incorporating specific definitions or terms for AI. This raises an important question for policyholders: Does including specific language for AI in a coverage grant (or exclusion) change the terms of coverage offered?

To be sure, at present few cyber policies expressly address AI. Most insurers appear to be maintaining a “wait and see” approach; they are monitoring the risks posed by AI, but they have not revised their policies. Nevertheless, a few insurers have sought to reassure customers that coverage is available for AI-related events. One insurer has gone so far as to state that its policy “provides affirmative cover for cyber attacks that utilize AI, ensuring that the business is covered for any of the losses associated with such attacks.” To the extent that AI is simply one vector for a data breach or other cyber incident that would otherwise be an insured event, however, it is unclear whether adding AI-specific language expands coverage. On the other side of the coin, some insurers have sought to limit exposure by incorporating exclusions for certain AI events.   

To assess the impact of these changes, it is critical to ask: What does artificial intelligence even mean?

This is a difficult question to answer. The field of AI is vast and constantly evolving. AI can curate social media feeds, recommend shows and products to consumers, generate email auto-responses, and more. Banks use AI to detect fraud. Driving apps use it to predict traffic. Search engines use it to rank and recommend search results. AI pervades daily life and extends far beyond the chatbots and other generative AI tools that have been the focus of recent news and popular attention.

At a more technical level, AI also encompasses numerous nested and overlapping subfields. One major subfield, machine learning, covers techniques ranging from linear regression to decision trees. It also includes neural networks, which, when layered together, power the subfield of deep learning. Deep learning, in turn, underpins generative AI. And generative AI itself can take different forms, such as large language models, diffusion models, generative adversarial networks, and neural radiance fields.
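One way to see why a crisp contractual definition is elusive is to write the nesting out explicitly. The sketch below is a rough, simplified map of the subfields named above; the boundaries overlap and are contested in practice, so it is illustrative only.

```python
# Rough, simplified sketch of the nesting described above (boundaries overlap
# in practice; this is illustrative, not a definitive taxonomy).
ai_taxonomy = {
    "artificial intelligence": {
        "machine learning": {
            "linear regression": {},
            "decision trees": {},
            "neural networks": {
                "deep learning": {
                    "generative AI": {
                        "large language models": {},
                        "diffusion models": {},
                        "generative adversarial networks": {},
                        "neural radiance fields": {},
                    },
                },
            },
        },
    },
}

def list_leaves(tree, path=()):
    """Print each most-specific technique with the chain of fields above it."""
    for name, children in tree.items():
        if children:
            list_leaves(children, path + (name,))
        else:
            print(" > ".join(path + (name,)))

list_leaves(ai_taxonomy)
```

A policy definition that names only the leaves of such a tree risks being underinclusive the moment a new branch appears, which is the drafting problem the article describes.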

That may be why most insurers have been reluctant to define artificial intelligence. A policy could name certain concrete examples of AI applications, but it would likely miss many others, and it would risk falling behind as AI was adapted for other uses. The policy could provide a technical definition, but that could be similarly underinclusive and inflexible. Even referring to subsets such as “generative AI” could run into similar issues, given the complex techniques and applications for the technology.

The risk, of course, is that by not clearly defining artificial intelligence, a policy that grants or excludes coverage for AI could have coverage consequences that neither the insurer nor the insured expected. Policyholders should pay particular attention to provisions purporting to exclude loss or liability arising from AI risks, and consider which technologies already in use could give an insurer a basis to deny coverage. We will watch cyber insurers’ approach to AI with interest: will most continue to omit references to AI, or will more insurers expressly address it in their policies?


This article was co-authored by Anna Hamel




Code Green or Code Red? The Untold Climate Cost of Artificial Intelligence

As the world races to harness artificial intelligence, few are pausing to ask a critical question: What is AI doing to the planet?

AI is being heralded as a game-changer in the global fight against climate change. It is already helping scientists model rising temperatures and extreme weather, enabling decision-makers to predict and prepare for such events, and allowing energy systems to become smarter and more efficient. According to the World Economic Forum, AI could contribute up to $5.1 trillion annually to the global economy, provided it is deployed sustainably during the climate transition (WEF, 2025).

Beneath the sleek interfaces and climate dashboards lies a growing environmental cost. The widespread use of generative AI, in particular, is creating a new carbon frontier, one that we’re just beginning to untangle and understand.

Training large-scale AI models is energy-intensive, according to a 2024 MIT report. Training a single GPT-3-sized model can consume roughly 1,300 megawatt-hours of electricity, enough to power almost 120 U.S. homes for a year. And once deployed, AI systems are not static: they continue to consume energy each time a user interacts with them. An AI-generated image may require as much energy as watching a short online video, while a large language model query requires almost 10 times more energy than a typical Google search (MIT, 2024).
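As a rough sanity check of the homes figure, assume an average of roughly 10,800 kWh of electricity per U.S. household per year (approximately the EIA’s reported average; this assumption is not stated in the article). The arithmetic then works out as follows:

```python
# Back-of-envelope check of the "almost 120 homes" figure.
# Assumptions (not from the MIT report itself): ~1,300 MWh per training run,
# ~10,800 kWh per U.S. household per year (approximate EIA average).
training_energy_kwh = 1_300 * 1_000       # 1,300 MWh expressed in kWh
avg_us_home_kwh_per_year = 10_800         # assumed average annual household use

homes_powered_for_a_year = training_energy_kwh / avg_us_home_kwh_per_year
print(round(homes_powered_for_a_year))    # ~120 homes
```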

As AI becomes embedded into everything from online search to logistics and social media, this energy use is multiplying at scale. The International Energy Agency (IEA) warns that by 2026, data center electricity consumption could double globally, driven mainly by the rise of AI and cryptocurrency. Recent developments around the digital euro only add weight to this discussion. Without rapid decarbonization of energy grids, this growth could significantly increase global emissions, undermining progress on climate goals (IEA, 2024).

Chart: Sotiris Anastasopoulos, with data from the IEA’s official website.

The climate irony is real: AI is both a solution to and a multiplier of Earth’s climate challenges.

Still, when used responsibly, AI remains a powerful ally. The UNFCCC’s 2023 “AI for Climate Action” roadmap outlines dozens of promising, climate-friendly applications. AI can detect deforestation from satellite imagery, track methane leaks, help decarbonize supply chains, and forecast the spread of wildfires. In agriculture, AI systems can optimize irrigation and fertilizer use, helping reduce emissions and protect soil. In the energy sector, AI enables real-time management of grids, integrating variable sources like solar and wind while improving reliability. But to unlock this potential, the conversation around AI must evolve, from excitement about its capabilities to accountability for its impact.

This starts with transparency. Today, few AI developers publicly report the energy or emissions cost of training and running their models. That needs to change. The IEA calls for AI models to be accompanied by “energy use disclosures” and impact assessments. Governments and regulators should enforce such standards, much as they do for industrial emissions or vehicle efficiency (UNFCCC, 2023).

Second, green infrastructure must become the default. Data centers must be powered by renewable energy, not fossil fuels. AI models must be optimized for efficiency, not just performance. Instead of racing toward ever-larger models, we should ask what model inflation costs the climate and whether it is worth it (UNFCCC, 2023).

Third, we need to question the uses of AI itself. Not every application is essential. Does society actually benefit from energy-intensive image generation tools for trivial entertainment or advertising? While AI can accelerate climate solutions, it can also accelerate consumption, misinformation, and surveillance. A climate-conscious AI agenda must weigh trade-offs, not just celebrate innovation (UNFCCC, 2023).

Finally, equity matters. As the UNFCCC report emphasizes, the AI infrastructure powering the climate transition is heavily concentrated in the Global North. Meanwhile, the Global South, home to many of the world’s most climate-vulnerable populations, lacks access to these tools, data, and services. An inclusive AI-climate agenda must invest in capacity building, data access, and technological advancement to ensure no region is left behind (UNFCCC, 2023).

Artificial intelligence is not green or dirty by nature. Like all tools, its impact depends on how and why we use it. We are still early enough in the AI revolution to shape its trajectory, but not for long.

The stakes are planetary. If deployed wisely, AI could help the transition to a net-zero future. If mismanaged, it risks becoming just another accelerant of a warming world.

Technology alone will not solve the climate crisis. But climate solutions without responsible technology are bound to fail.

*Sotiris Anastasopoulos is a student researcher at the Institute of European Integration and Policy of the UoA. He is an active member of YCDF and AEIA and currently serves as a European Climate Pact Ambassador. 

This op-ed is part of To BHMA International Edition’s NextGen Corner, a platform for fresh voices on the defining issues of our time.


