AI Research
AI in Agriculture Symposium, hackathon set for September in Fayettevil…

FAYETTEVILLE, Ark. — The inaugural AI in Agriculture Symposium, hosted by the Center for Agricultural Data Analytics within the Arkansas Agricultural Experiment Station, will highlight the latest in AI research and real-world applications for agriculture on Sept. 15.
The free event, featuring artificial intelligence and automation experts from academia and industry, will be offered online and in-person at the Don Tyson Center for Agricultural Sciences, 1371 W. Altheimer Drive, in Fayetteville. The experiment station is the research arm of the University of Arkansas System Division of Agriculture.
“AI is present in every field, and we would like to make sure our ag students and researchers have the opportunity to interact with people at the forefront of this field to foster collaborations and awareness of the potential of AI in agriculture,” said Samuel B. Fernandes, organizer of the event and an assistant professor of agricultural statistics and quantitative genetics with the experiment station.
The AI in Agricultural Symposium begins at 8:30 a.m. with a light breakfast and opening remarks from Jean-François Meullenet, senior associate vice president for agriculture-research and director of the experiment station.
Sessions begin at 9 a.m. Lunch will be provided at noon, and the event concludes at 5 p.m., followed by a reception and poster session until 7 p.m.
Featured speakers include:
Girish Chowdhary, associate professor of agricultural and biological engineering, and computer science with the University of Illinois Urbana-Champaign.
Rohit Sanjay, automation developer with Tyson Foods.
Rich Adams, assistant professor of agricultural statistics for the Center for Agricultural Data Analytics, and the entomology and plant pathology department in the Dale Bumpers College of Agricultural, Food and Life Sciences at the University of Arkansas.
Aranyak Goswami, assistant professor and computational biologist with the Center for Agricultural Data Analytics, and the animal science and poultry science departments for the Division of Agriculture and Bumpers College.
Nicholas Ames, principal data scientist for Bayer Crop Science.
Erin Gilbert, staff data steward for Bayer Crop Science.
Alon Arad, director of artificial intelligence and analytics for Walmart Global Tech.
Ana Maria Heilman-Morales, director of Agricultural Data Analytics at North Dakota State University.
Heilman-Morales will lead a roundtable beginning at 4 p.m. on the topic “AI as a bridge for multidisciplinary collaborations in agriculture.”
The deadline to register for in-person attendance is Sept. 7. There is no deadline to register for online attendance.
AI in Ag Hackathon
In addition to the symposium, Fernandes also highlighted the inaugural AI in Ag Hackathon, which gives graduate students from the University of Arkansas in Fayetteville and the University of Arkansas at Pine Bluff a chance to address real-world scenarios commonly faced in the ag industry. The hackathon takes place Sept. 13-14 in the Mullins Library on the University of Arkansas campus in Fayetteville.
Fernandes said that while participants can win prizes for developing the best solutions, “most importantly, the top three teams will be given 5 minutes to present their solution at the Arkansas AI in Ag Symposium.”
Interested graduate students can find more details on the AI in Agriculture Symposium event page. The AI in Ag Hackathon is a collaboration between the Center for Agricultural Data Analytics, the Dale Bumpers College of Agricultural, Food and Life Sciences, and Bayer Crop Science.
The registration deadline for the AI in Ag Hackathon is Sept. 10.
To learn more about the AI in Agriculture Symposium, contact Samuel B. Fernandes at samuelbf@uark.edu or 479-575-5677. Dial 711 for Arkansas Relay.
AI Research
Guardrails for Responsible AI

Clarivate explores how responsible AI guardrails and content filtering can support safe, ethical use of generative AI in academic research — without compromising scholarly freedom. As AI becomes embedded in research workflows, this blog outlines a suggested path to shaping industry standards for academic integrity, safety, and innovation.
Generative AI has opened new possibilities for academic research, enabling faster discovery, summarization, and synthesis of knowledge, as well as supporting the scholarly discourse. Yet, as these tools become embedded in scholarly workflows, the segment faces a complex challenge: how do we balance responsible AI use and the prevention of harmful outputs with the need to preserve academic freedom and research integrity?
This is an industry-wide problem that affects every organization deploying Large Language Models (LLMs) in academic contexts. There is no simple solution, but there is a pressing need for collaboration across vendors, libraries, and researchers to address it.
There are different ways to technically address the problem. The two most important ones are guardrails and content filtering.
Guardrails
Guardrails are proactive mechanisms designed to prevent undesired behaviour from the model. They are often implemented at a deeper level in the system architecture and can, for example, include instructions in an application’s system prompt to steer the model away from risky topics or to make sure that the language is suitable for the application where it’s being used.
The goal of guardrails is to prevent the model from ever generating harmful or inappropriate content in the first place or misbehaving, with the caveat that the definition of what constitutes ‘inappropriate’ is highly subjective and often dependent on cultural differences and context.
Guardrails are critical for security and compliance, but they can also contribute to over-blocking. For instance, defences against prompt injection — where malicious instructions are hidden in user input — may reject queries that appear suspicious, even if they are legitimate academic questions. It can block certain types of outputs (e.g., hate speech, self-harm advice) or exclude the training data from the output. This tension between safety and openness is one of the hardest problems to solve.
The guardrails used in our products play a very significant role in shaping the model’s output. For example, we carefully design the prompts that guide the LLM, instructing it to rely exclusively on scholarly sources through a Retrieval-Augmented Generation (RAG) architecture or preventing the tools from answering non-scholarly questions such as “Which electric vehicle should I buy”? These techniques limit products’ reliance on the LLM broader training data, significantly minimizing the risk of problematic content impacting user results.
Content filtering
Content filtering is a reactive mechanism that evaluates both the application input as well as the model-generated output to determine whether it should be shown to the user. It uses automated classification models to detect and block (or flag) unwanted or harmful content. Essentially, content filters are processes that can block content from getting to the LLM, as well as block the LLMs responses from being delivered. The goal of content filtering is to catch and block inappropriate content that might slip through the model’s generation process.
However, content filtering is not a single switch; it is a multi-layered process designed to prevent harmful, illegal, or unsafe outputs. Here are the main steps in the pipeline where filtering occurs:
- At the LLM level (e.g. GPT, Claude, Gemini, Llama, etc.)
Most modern LLM stacks include a provider-side safety layer that evaluates both the prompt (input) and the model’s draft answer (output) before the application ever sees it. It’s designed to reduce harmful or illegal uses (e.g., violence, self-harm, sexual exploitation, hateful conduct, or instructions to commit wrongdoing), but this same functionality can unintentionally suppress legitimate, research-relevant topics — particularly in history, politics, medicine, and social sciences.
- At the LLM cloud provider level (e.g., Azure, AWS Bedrock, etc.)
Organizations, vendors and developers often use LLMs APIs via cloud providers like Azure or Bedrock when they need to control where their data is processed, meet strict compliance and privacy requirements like GDPR, and run everything within private network environments for added security.
These cloud providers implement baseline safety systems to block prompts or outputs that violate their acceptable use policies. These filters are often broad, covering sensitive topics such as violence, self-harm, or explicit content. While essential for safety, these filters can inadvertently block legitimate academic queries — such as research on war crimes or historical atrocities.
This can result in frustrating messages alerting users that the request failed – even when the underlying content is academically valid. At Clarivate, while we recognize these tools may be imperfect, we continue to believe they are essential to incorporate in our arsenal and enable us to balance the benefits with the risks when using this technology. Our commitment to building responsible AI remains steadfast as we continue to monitor and adapt our dynamic controls based on our learnings, feedback and cutting-edge research.
Finding the right safety level
When we first introduced our AI-powered tools in May 2024, the content filter settings we used were well-suited to the initial needs. However, as adoption of these tools significantly increased, we found that the filters could sometimes be over-sensitive, with users sometimes encountering errors when exploring sensitive or controversial topics, even when the intent was clearly scholarly.
In response, we have adjusted our settings, and early results are promising: Searches previously blocked (e.g., on genocide or civil rights history) now return results, while genuinely harmful queries (e.g., instructions for building weapons) remain blocked.
The central Clarivate Academic AI Platform provides a consistent framework for safety, governance, and content management across all our tools. This shared foundation ensures a uniform standard of responsible AI use. Because content filtering is applied at the model level, we validate any adjustments carefully across solutions, rolling them out gradually and testing against production-like data to maintain reliability and trust.
Our goal is to strike a better balance between responsible AI use and academic freedom.
Working together to balance safety and openness – a community effort
Researchers expect AI tools to support inquiry, not censor it. Yet every vendor using LLMs faces the same constraints: provider-level filters, regulatory requirements, and the ethical imperative to prevent harm.
There is no silver bullet. Overly strict filters undermine research integrity; overly permissive settings risk abuse. The only way forward is collaboration — between vendors, libraries, and the academic community — to define standards, share best practices, and advocate for provider-level flexibility that recognises the unique needs of scholarly environments.
At Clarivate, we are committed to transparency and dialogue. We’ve made content filtering a key topic for our Academia AI Advisory Council and are actively engaging with customers to understand their priorities. But this conversation must extend beyond any single company. If we want AI to truly serve scholarship, we need to push this topic with academic AI in mind, balancing safety and openness within the unique context of scholarly discourse. With this goal, we are creating an Academic AI working group that will help us navigate this and other challenges originating from this new technology. If you are interested in joining this group or know someone who might be, please contact us at academiaai@clarivate.com.
Discover Clarivate Academic AI solutions
AI Research
(Policy Address 2025) HK earmarks HK$3B for AI research and talent recruitment – The Standard (HK)
AI Research
[2506.08171] Worst-Case Symbolic Constraints Analysis and Generalisation with Large Language Models

View a PDF of the paper titled Worst-Case Symbolic Constraints Analysis and Generalisation with Large Language Models, by Daniel Koh and 4 other authors
Abstract:Large language models (LLMs) have demonstrated strong performance on coding tasks such as generation, completion and repair, but their ability to handle complex symbolic reasoning over code still remains underexplored. We introduce the task of worst-case symbolic constraints analysis, which requires inferring the symbolic constraints that characterise worst-case program executions; these constraints can be solved to obtain inputs that expose performance bottlenecks or denial-of-service vulnerabilities in software systems. We show that even state-of-the-art LLMs (e.g., GPT-5) struggle when applied directly on this task. To address this challenge, we propose WARP, an innovative neurosymbolic approach that computes worst-case constraints on smaller concrete input sizes using existing program analysis tools, and then leverages LLMs to generalise these constraints to larger input sizes. Concretely, WARP comprises: (1) an incremental strategy for LLM-based worst-case reasoning, (2) a solver-aligned neurosymbolic framework that integrates reinforcement learning with SMT (Satisfiability Modulo Theories) solving, and (3) a curated dataset of symbolic constraints. Experimental results show that WARP consistently improves performance on worst-case constraint reasoning. Leveraging the curated constraint dataset, we use reinforcement learning to fine-tune a model, WARP-1.0-3B, which significantly outperforms size-matched and even larger baselines. These results demonstrate that incremental constraint reasoning enhances LLMs’ ability to handle symbolic reasoning and highlight the potential for deeper integration between neural learning and formal methods in rigorous program analysis.
Submission history
From: Daniel Koh [view email]
[v1]
Mon, 9 Jun 2025 19:33:30 UTC (1,462 KB)
[v2]
Tue, 16 Sep 2025 10:35:33 UTC (1,871 KB)
-
Business3 weeks ago
The Guardian view on Trump and the Fed: independence is no substitute for accountability | Editorial
-
Tools & Platforms1 month ago
Building Trust in Military AI Starts with Opening the Black Box – War on the Rocks
-
Ethics & Policy2 months ago
SDAIA Supports Saudi Arabia’s Leadership in Shaping Global AI Ethics, Policy, and Research – وكالة الأنباء السعودية
-
Events & Conferences4 months ago
Journey to 1000 models: Scaling Instagram’s recommendation system
-
Jobs & Careers3 months ago
Mumbai-based Perplexity Alternative Has 60k+ Users Without Funding
-
Podcasts & Talks2 months ago
Happy 4th of July! 🎆 Made with Veo 3 in Gemini
-
Education2 months ago
Macron says UK and France have duty to tackle illegal migration ‘with humanity, solidarity and firmness’ – UK politics live | Politics
-
Education3 months ago
VEX Robotics launches AI-powered classroom robotics system
-
Podcasts & Talks2 months ago
OpenAI 🤝 @teamganassi
-
Funding & Business3 months ago
Kayak and Expedia race to build AI travel agents that turn social posts into itineraries