AI Insights
The Dangers of AI in the Courtroom
Scholars examine the dangers of difficult-to-understand AI in criminal investigations and cases.
Artificial intelligence (AI) is becoming increasingly prevalent in American society and has even found its way into the courtroom.
In an article, Brandon L. Garrett, a professor at Duke Law School, and Cynthia Rudin, a professor of computer science at Duke University, argue for national and local regulation to ensure that judges, jurors, and lawyers can fully interpret the AI used in the criminal justice system.
Garrett and Rudin note a troubling trend toward “black box” AI, which they define as AI models that are too complex for ordinary people to understand. They contrast this with “glass box” AI: models whose calculations are inherently interpretable, allowing people to understand how the models work and what information they rely on.
Garrett and Rudin explain that the use of black box AI has increased rapidly in criminal cases, including facial recognition technology, risk assessments predicting whether defendants will reoffend, and predictive policing. They argue, however, that because people’s lives and liberty are at stake in criminal trials, judges and jurors must fully understand the AI used in criminal cases. Absent a compelling or credible government interest, Garrett and Rudin explain, substantial constitutional rights and safety interests require that all AI used in the criminal justice system be glass box AI.
Garrett and Rudin note the misconception that there is a tradeoff between black box AI and glass box AI: that black box AI is incomprehensible but also more accurate than glass box AI. They cite research showing that in criminal law settings, black box AI does not perform better than simpler, easier-to-interpret models.
Garrett and Rudin argue that national and local regulatory measures are needed to ensure that glass box AI, rather than black box AI, is used in the criminal justice system. They explain that in Europe, the General Data Protection Regulation (GDPR) provides a “right to explanation” for consumers of AI. A companion directive to the GDPR restricts the use of AI in investigations of possible criminal activity and requires assessments of AI’s risk to “rights and freedoms” and data privacy. Garrett and Rudin contend that U.S. criminal defendants deserve similar protections from black box AI.
Garrett and Rudin explain that in the United States, multiple groups have called for bans on facial recognition technology—technology that attempts to identify persons through matching images of a face to a database of collected faces—based on claims that it is unfair and unjust. Garrett and Rudin note that although ten U.S. states have passed restrictions on law enforcement’s use of facial recognition technology, none of those state laws mandate the use of glass box AI for facial recognition technology.
The Federal Trade Commission (FTC) has issued guidance to prevent the use of AI to engage in unfair or deceptive practices in private industry. Although the FTC acknowledges that datasets that train AI models may lead to privacy concerns, Garrett and Rudin note that the FTC has not discussed the possibility of replacing black box AI approaches with glass box AI approaches.
Garrett and Rudin propose legislation requiring law enforcement agencies that rely on AI in criminal investigations to use glass box AI. They also contend that statutes should require validation of the data underlying AI used by law enforcement, to ensure that the information and material used to investigate and convict people are reliable. Garrett and Rudin note that no such statutes have been introduced in the United States.
Garrett and Rudin explain that the European Union’s Law Enforcement Directive limits AI’s use in criminal cases and emphasizes the need for AI use to respect accountability, fairness, and nondiscrimination. The directive calls for addressing AI risks in criminal cases through verification, transparency, careful testing, and explainability. It also emphasizes that all law enforcement use of AI is high risk and should be subject to enhanced oversight.
Garrett and Rudin conclude by noting that in limited circumstances, such as national security cases, the government may have a compelling interest that justifies the use of black box AI. They emphasize, however, that the burden should always lie with the government to establish the existence of such an interest.
AI Insights
xAI Releases Grok 4 AI Models
Elon Musk’s xAI startup has unveiled the latest version of its flagship foundation artificial intelligence (AI) model, Grok 4.
AI Insights
Artificial Intelligence Coverage Under Cyber Insurance
A small but growing number of cyber insurers are incorporating language into their policies that specifically addresses risks from artificial intelligence (AI). The June 2025 issue of The Betterley Report’s Cyber/Privacy Market Survey identifies at least three insurers that are incorporating specific definitions or terms for AI. This raises an important question for policyholders: Does including specific language for AI in a coverage grant (or exclusion) change the terms of coverage offered?
To be sure, at present few cyber policies expressly address AI. Most insurers appear to be maintaining a “wait and see” approach; they are monitoring the risks posed by AI, but they have not revised their policies. Nevertheless, a few insurers have sought to reassure customers that coverage is available for AI-related events. One insurer has gone so far as to state that its policy “provides affirmative cover for cyber attacks that utilize AI, ensuring that the business is covered for any of the losses associated with such attacks.” To the extent that AI is simply one vector for a data breach or other cyber incident that would otherwise be an insured event, however, it is unclear whether adding AI-specific language expands coverage. On the other side of the coin, some insurers have sought to limit exposure by incorporating exclusions for certain AI events.
To assess the impact of these changes, it is critical to ask: What does artificial intelligence even mean?
This is a difficult question to answer. The field of AI is vast and constantly evolving. AI can curate social media feeds, recommend shows and products to consumers, generate email auto-responses, and more. Banks use AI to detect fraud. Driving apps use it to predict traffic. Search engines use it to rank and recommend search results. AI pervades daily life and extends far beyond the chatbots and other generative AI tools that have been the focus of recent news and popular attention.
At a more technical level, AI also encompasses numerous nesting and overlapping subfields. One major subfield, machine learning, encompasses techniques ranging from linear regression to decision trees. It also includes neural networks, which, when layered together, can be used to power the subfield of deep learning. Deep learning, in turn, is used by the subfield of generative AI. And generative AI itself can take different forms, such as large language models, diffusion models, generative adversarial networks, and neural radiance fields.
That may be why most insurers have been reluctant to define artificial intelligence. A policy could name certain concrete examples of AI applications, but it would likely miss many others, and it would risk falling behind as AI was adapted for other uses. The policy could provide a technical definition, but that could be similarly underinclusive and inflexible. Even referring to subsets such as “generative AI” could run into similar issues, given the complex techniques and applications for the technology.
The risk, of course, is that by not clearly defining artificial intelligence, a policy that grants or excludes coverage for AI could have different coverage consequences than either the insurer or insured expected. Policyholders should pay particular attention to provisions purporting to exclude loss or liability from AI risks, and consider what technologies are in use that could offer a basis to deny coverage for the loss. We will watch with interest cyber insurers’ approach to AI — will most continue to omit references to AI, or will more insurers expressly address AI in their policies?
This article was co-authored by Anna Hamel
AI Insights
Code Green or Code Red? The Untold Climate Cost of Artificial Intelligence
As the world races to harness artificial intelligence, few are pausing to ask a critical question: What is AI doing to the planet?
AI is being heralded as a game-changer in the global fight against climate change. It is already helping scientists model rising temperatures and extreme weather phenomena, enabling decision-making bodies to predict and prepare for unexpected weather, and allowing energy systems to become smarter and more efficient. According to the World Economic Forum, AI could contribute up to $5.1 trillion annually to the global economy, provided it is deployed sustainably during the climate transition (WEF, 2025).
Beneath the sleek interfaces and climate dashboards lies a growing environmental cost. The widespread use of generative AI, in particular, is creating a new carbon frontier, one that we’re just beginning to untangle and understand.
Training large-scale AI models is energy-intensive, according to a 2024 MIT report. Training a single GPT-3-sized model can consume roughly 1,300 megawatt-hours of electricity, enough to power almost 120 U.S. homes for a year. And once deployed, AI systems are not static: they continue to consume energy each time a user interacts with them. An AI-generated image may require as much energy as watching a short video on an online platform, while a large language model query requires almost 10 times more energy than a typical Google search (MIT, 2024).
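Those two figures are consistent with each other. As a rough sanity check, assuming an average U.S. household uses about 10.5 megawatt-hours of electricity per year (an assumption based on EIA estimates, not a figure from the report), the arithmetic works out as in the illustrative sketch below.

```python
# Rough sanity check of the cited figures (illustrative only).
# Assumption: an average U.S. home uses ~10.5 MWh of electricity per year (EIA estimate).
training_energy_mwh = 1_300        # reported energy to train a GPT-3-sized model
avg_home_mwh_per_year = 10.5       # assumed average annual U.S. household consumption

homes_powered_for_a_year = training_energy_mwh / avg_home_mwh_per_year
print(f"Equivalent to ~{homes_powered_for_a_year:.0f} homes for a year")  # ~124, close to "almost 120"
```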
As AI becomes embedded into everything from online search to logistics and social media, this energy use is multiplying at scale. The International Energy Agency (IEA) warns that by 2026, data center electricity consumption could double globally, driven mainly by the rise of AI and cryptocurrency. Recent developments around the Digital Euro only add weight to this discussion. Without rapid decarbonization of energy grids, this growth could significantly increase global emissions, undermining progress on climate goals (IEA, 2024).
Chart: Sotiris Anastasopoulos, with data from the IEA’s official website.
The climate irony is real: AI is both a solution to and a multiplier of Earth’s climate challenges.
Still, when used responsibly, AI remains a powerful ally. The UNFCCC’s 2023 “AI for Climate Action” roadmap outlines dozens of promising, climate-friendly applications. AI can detect deforestation from satellite imagery, track methane leaks, help decarbonize supply chains, and forecast the spread of wildfires. In agriculture, AI systems can optimize irrigation and fertilizer use, helping reduce emissions and protect soil. In the energy sector, AI enables real-time management of grids, integrating variable sources like solar and wind while improving reliability. But to unlock this potential, the conversation around AI must evolve, from excitement about its capabilities to accountability for its impact.
This starts with transparency. Today, few AI developers publicly report the energy or emissions cost of training and running their models. That needs to change. The IEA calls for AI models to be accompanied by “energy use disclosures” and impact assessments. Governments and regulators should enforce such standards, much as they do for industrial emissions or vehicle efficiency (UNFCCC, 2023).
Second, green infrastructure must become the default. Data centers must be powered by renewable energy, not fossil fuels. AI models must be optimized for efficiency, not just performance. Instead of racing toward ever-larger models, we should ask what the climate cost of model inflation is, and whether it is worth paying (UNFCCC, 2023).
Third, we need to question the uses of AI itself. Not every application is essential. Does society actually benefit from energy-intensive image generation tools for trivial entertainment or advertising? While AI can accelerate climate solutions, it can also accelerate consumption, misinformation, and surveillance. A climate-conscious AI agenda must weigh trade-offs, not just celebrate innovation (UNFCCC, 2023).
Finally, equity matters. As the UNFCCC report emphasizes, the AI infrastructure powering the climate transition is heavily concentrated in the Global North. Meanwhile, the Global South, home to many of the world’s most climate-vulnerable populations, lacks access to these tools, data, and services. An inclusive AI-climate agenda must invest in capacity-building, data access, and technological advancement to ensure no region is left behind (UNFCCC, 2023).
Artificial intelligence is not green or dirty by nature. Like all tools, its impact depends on how and why we use it. We are still early enough in the AI revolution to shape its trajectory, but not for long.
The stakes are planetary. If deployed wisely, AI could help the transition to a net-zero future. If mismanaged, it risks becoming just another accelerant of a warming world.
Technology alone will not solve the climate crisis. But climate solutions without responsible technology are bound to fail.
*Sotiris Anastasopoulos is a student researcher at the Institute of European Integration and Policy of the UoA. He is an active member of YCDF and AEIA and currently serves as a European Climate Pact Ambassador.
This op-ed is part of To BHMA International Edition’s NextGen Corner, a platform for fresh voices on the defining issues of our time.