AI Research

Chatbots gloss over critical details in summaries of scientific studies, say scientists


Large language models (LLMs) are becoming less “intelligent” with each new version as they oversimplify and, in some cases, misrepresent important scientific and medical findings, a new study has found.

Scientists discovered that versions of ChatGPT, Llama and DeepSeek were five times more likely to oversimplify scientific findings than human experts in an analysis of 4,900 summaries of research papers.




AI Research

How to Scale Up AI in Government


State and local governments are experimenting with artificial intelligence but lack systematic approaches to scale these efforts effectively and integrate AI into government operations. Instead, efforts have been piecemeal and slow, leaving many practitioners struggling to keep up with the ever-evolving uses of AI for transforming governance and policy implementation.

While some state and local governments are leading in implementing the technology, AI adoption remains fragmented. Last year, some 150 state bills were considered relating to government use of AI, governors in 10 states issued executive orders supporting the study of AI for use in government operations, and 10 legislatures tasked agencies with compiling comprehensive inventories of their AI use.

Taking advantage of the opportunity presented by AI is critical as decision-makers face an increasing slate of challenging implementation problems and as technology quickly evolves and develops new capabilities. The use of AI is not without risks. Developing and adapting the necessary checks and guidance is critical but can be challenging for such dynamic technologies. Shifting from seeing AI as merely a technical capability to considering what AI technology should be asked to do can help state and local governments think more creatively and strategically. Here are some of the benefits governments are already exploring:


Administrative efficiency: Half of all states are using AI chatbots to reduce administrative burden and free staff for substantive and creative work. The Indiana General Assembly uses chatbots to answer questions about regulations and statutes. Austin, Texas, streamlines residential construction permitting with AI, while Vermont’s transportation agency inventories road signs and assesses pavement quality.

Research synthesis: AI tools help policymakers quickly access evolving best practices and evidence-based approaches. Overton’s AI platform, for example, allows policymakers to identify how existing evidence aligns with priority areas, compare policy approaches across states and nations, and match with relevant researchers and projects.

Implementation monitoring: AI fills critical gaps in program evaluation without major new investments. California’s transportation department analyzes traffic patterns to optimize highway safety and inform infrastructure investments.

Predictive modeling: AI-enabled models help test assumptions about which interventions will succeed. These models use features such as organizational characteristics, physical and contextual factors, and historical implementation data to predict the success of policy interventions, and their outputs can help tailor interventions and improve outcomes. Applications include targeting health interventions to patients with modifiable risk factors, identifying lead service lines in municipal water systems, predicting flood response needs and flagging households at eviction risk.
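
To make the predictive-modeling item above concrete, here is a minimal sketch in Python (using pandas and scikit-learn) of the kind of supervised model described: it learns from records of past interventions and scores a new one. The dataset, feature names and sector labels are invented for illustration and are not drawn from any of the programs mentioned.

# A minimal sketch of a model that predicts whether a policy intervention
# will meet its goals. All data below is hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical records of past interventions and whether they succeeded.
data = pd.DataFrame({
    "agency_size":     [12, 340, 85, 29, 410, 56, 150, 77],
    "prior_projects":  [1, 14, 6, 2, 20, 4, 9, 5],
    "median_income_k": [38, 72, 55, 41, 90, 47, 63, 52],
    "sector":          ["health", "housing", "health", "transport",
                        "housing", "transport", "health", "housing"],
    "succeeded":       [0, 1, 1, 0, 1, 0, 1, 1],
})

features = data.drop(columns="succeeded")
target = data["succeeded"]

# Scale numeric features, one-hot encode the sector, then fit a linear classifier.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["agency_size", "prior_projects", "median_income_k"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["sector"]),
])
model = make_pipeline(preprocess, LogisticRegression(max_iter=1000))
model.fit(features, target)

# Score a new, hypothetical intervention to help decide where extra support is needed.
new_case = pd.DataFrame([{"agency_size": 60, "prior_projects": 3,
                          "median_income_k": 44, "sector": "health"}])
print("Estimated probability of success:", model.predict_proba(new_case)[0, 1])

A linear model is used here only for simplicity; the same pipeline shape accommodates other learners, and in practice such outputs would be validated and audited before informing any targeting decision.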

Scaling up to wider adoption in policy and practice requires proactive steps by state and local governments and attendant guidance, monitoring and evaluation:

Adaptive policy framework: AI adoption often outpaces planning, and the definition of AI is often specific to its application. States need to define AI applications by sector (health, transportation, etc.) and develop adaptive operating strategies to guide and assess its impact. Thirty states have some guidance, but comprehensive approaches require clear definitions and inventories of current use.

Funding strategies: Policymakers must identify and leverage funding streams to cover the costs of procurement and training. Federal grants like the State and Local Cybersecurity Grant Program offer potential, though current authorization expires this Sept. 30. Massachusetts’ FutureTech Act exemplifies direct state investment, authorizing $1.23 billion for IT capital projects including AI.

Smart procurement: Effective AI procurement requires partnerships with vendors and suppliers and between chief information officers and procurement specialists. Contracts must ensure ethical use, performance monitoring and continuous improvement, but few states have procurement language related to AI. Speed matters — AI purchases risk obsolescence during lengthy procurement cycles.

Training and workforce development: Both current and future state and local government workforces need AI skills. Solutions include AI training academies and literacy programs for government workers, joint training programs between professional associations, and the General Services Administration's AI Community of Practice events and training. The Partnership for Public Service recently opened its AI Government Leadership program to state and local policymakers. Universities including Stanford and Michigan offer specialized programs for policymakers. Graduate programs in public policy, administration and law should incorporate AI governance tracks.

State AI policy development involves governors' offices, chief information offices, security offices and legislatures. But success requires moving beyond pilot projects to systematic implementation. Governments that embrace this transition will be best positioned for future challenges. The opportunity exists now to set standards for AI-enabled governance, but it requires proactive steps in policy development, funding, procurement, workforce development and safeguards.

Joie Acosta is a senior behavioral scientist and the Global Scholar in Translation at RAND, a nonprofit, nonpartisan research institute. Sara Hughes is a senior policy researcher and the Global Scholar of Implementation at RAND and a professor of policy analysis at the RAND School of Public Policy.


Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.






AI Research

AI-powered research training to begin at IPE for social science scholars


Hyderabad: The Institute of Public Enterprise (IPE), Hyderabad, has launched a pioneering 10-day Research Methodology Course (RMC) focused on the application of Artificial Intelligence (AI) tools in social science research. Sponsored by the Indian Council of Social Science Research (ICSSR), Ministry of Education, Government of India, the program commenced on October 6 and will run through October 16, 2025, at the IPE campus at Osmania University.

Designed exclusively for M.Phil., Ph.D., and Post-Doctoral researchers across social science disciplines, the course aims to equip young scholars with cutting-edge AI and Machine Learning (ML) skills to enhance research quality, ethical compliance, and interdisciplinary collaboration. The initiative is part of ICSSR’s Training and Capacity Building (TCB) programme and is offered free of cost, with travel and daily allowances reimbursed as per eligibility.

The course is being organized by IPE’s Centre for Data Science and Artificial Intelligence (CDSAI), under the academic leadership of Prof. S Sreenivasa Murthy, Director of IPE and Vice-Chairman of AIMS Telangana Chapter. Dr. Shaheen, Associate Professor of Information Technology & Analytics, serves as the Course Director, while Dr. Sagyan Sagarika Mohanty, Assistant Professor of Marketing, is the Co-Director.

Participants will undergo hands-on training in Python, R, Tableau, and Power BI, alongside modules on Natural Language Processing (NLP), supervised and unsupervised learning, and ethical frameworks such as the Digital Personal Data Protection (DPDP) Act, 2023.
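
As a flavor of what a supervised-learning module on text might involve (an illustrative sketch only, not material from the IPE syllabus), the Python example below classifies short open-ended survey responses by theme; the responses and labels are invented.

# A minimal sketch of supervised text classification (NLP) for social research.
# The survey responses and theme labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical open-ended survey responses, hand-labeled by theme.
responses = [
    "The clinic wait times are far too long",
    "Bus routes were cut and I cannot reach work",
    "Doctors here rarely explain the treatment options",
    "There is no reliable public transport after 8 pm",
    "Hospital staff were helpful but overbooked",
    "The new metro line made my commute much easier",
]
themes = ["health", "transport", "health", "transport", "health", "transport"]

# TF-IDF features plus a linear classifier: a common baseline for coding
# open-ended responses at scale once a hand-labeled sample exists.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(responses, themes)

print(classifier.predict(["Appointments keep getting rescheduled at the clinic"]))

In practice, a much larger hand-coded sample, held-out evaluation and checks for coding bias would precede any real use, and the ethical frameworks mentioned above, such as the DPDP Act, 2023, govern how such response data may be collected and handled.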

The curriculum also includes field visits to policy labs like T-Hub and NIRDPR, mentorship for research proposal refinement, and guidance on publishing in Scopus and ABDC-indexed journals.

Speaking about the program, Dr. Shaheen emphasized the need for social scientists to evolve beyond traditional methods and embrace computational tools for data-driven insights.

“This course bridges the gap between conventional research and emerging technologies, empowering scholars to produce impactful, ethical, and future-ready research,” she said.

Seats for the course are allocated on a first-come, first-served basis. The last date for nominations is September 15, 2025. With its blend of technical training, ethical grounding, and publication support, the RMC at IPE aims to take a significant step toward empowering scholars and modernizing social science research in India.

Interested candidates can contact: Dr Shaheen, Programme Director, at [email protected] or on mobile number 9866666620.




AI Research

New AI study aims to predict and prevent sinkholes in Tennessee’s vulnerable roadways


A large sinkhole that appeared on Chattanooga’s Northshore after last month’s historic flooding is just the latest example of roadway problems that are causing concern for drivers.

But a new study aims to use artificial intelligence (AI) to predict where sinkholes will appear before they do any damage.

“It’s pretty hard to go about a week without hearing somebody talking about something going wrong with the road.”

According to the American Geosciences Institute, sinkholes can have both natural and artificial causes.

However, they tend to occur in places where water can dissolve bedrock, making Tennessee one of the more sinkhole-prone states in the country.

Brett Malone, CEO of UTK’s research park, says the region’s geology is a big part of the problem: “Geological instability, the erosions, we have a lot of that in East Tennessee, and so a lot of unsteady rock formations underground just create openings that then eventually sort of cave in.”

Sinkholes like the one on Heritage Landing Drive have become a serious headache for drivers in Tennessee.

Nearby residents say it has posed safety issues for their neighborhood.

Now, UTK says it is partnering with tech company TreisD to find a statewide solution.

The company’s AI technology could help predict where a sinkhole will form before it actually happens.

“You can speed up your research. So since we’ve been able to now use AI for 3D images, it means we get to our objective and our goals much faster.”

TreisD founder Jerry Nims says the company’s AI algorithm uses those 3D images to study sinkholes in the hopes of learning ways to prevent them.

“If you can see what you’re working with, the experts, and they can gain more information, more knowledge, and it’ll help them in their decision making.”

We asked residents in our area, like Hudson Norton, how they would feel about a study like this.

“If it’s helping people and it can save people, then it sounds like a good use of AI, and responsible use of it, more importantly.”

Chattanooga officials say the sinkhole on Heritage Landing Drive could take up to 6 months to repair.


