
Draft bill from Ted Cruz would establish a federal AI sandbox in OSTP



Upcoming legislation from Sen. Ted Cruz, R-Texas, would establish a federal artificial intelligence sandbox program, overseen by the White House Office of Science and Technology Policy (OSTP), to evaluate the safety and efficacy of AI technologies.

In documents reviewed by Nextgov/FCW, the proposed regulatory sandbox program created within OSTP would allow participating AI companies to receive temporary exemption from external AI regulations.

These temporary waivers would be available for “one or more covered provisions of an applicable agency in order to test, experiment, or temporarily provide to consumers artificial intelligence products or services or artificial intelligence development methods on a limited basis without being subject to the enforcement, licensing, or authorization requirements of such covered provisions.”

Part of Cruz’s proposed program, which was first reported by Bloomberg, would create a review process to determine whether AI software applying for an exemption waiver presents “a health and safety risk, a risk of economic damage, or a risk of unfair and deceptive trade practices.”

The bill further calls for the specific assessment and evaluation criteria to be published in the Federal Register with an open comment period. 

Cruz first announced his intent to propose an AI sandbox bill back in May, with the ultimate goal being to remove barriers to AI adoption and prevent overregulation at the state level.

The first draft of his sandbox bill revisits the aim of the failed 10-year moratorium, a provision originally included in the recent budget reconciliation package that would have barred new state-level AI regulation for a decade. Rather than a blanket prohibition, the bill offers waivers on existing regulations to companies testing their AI in the sandbox program.

Federal agencies and private sector companies have participated in sandbox efforts before, such as NVIDIA and non-profit MITRE working to improve and implement AI tools tailored for government workloads.







How to Scale Up AI in Government



State and local governments are experimenting with artificial intelligence but lack systematic approaches to scale these efforts effectively and integrate AI into government operations. Instead, efforts have been piecemeal and slow, leaving many practitioners struggling to keep up with the ever-evolving uses of AI for transforming governance and policy implementation.

While some state and local governments are leading in implementing the technology, AI adoption remains fragmented. Last year, some 150 state bills were considered relating to government use of AI, governors in 10 states issued executive orders supporting the study of AI for use in government operations, and 10 legislatures tasked agencies with compiling comprehensive inventories of AI systems in use.

Taking advantage of the opportunity presented by AI is critical as decision-makers face an increasing slate of challenging implementation problems and as technology quickly evolves and develops new capabilities. The use of AI is not without risks. Developing and adapting the necessary checks and guidance is critical but can be challenging for such dynamic technologies. Shifting from seeing AI as merely a technical capability to considering what AI technology should be asked to do can help state and local governments think more creatively and strategically. Here are some of the benefits governments are already exploring:


Administrative efficiency: Half of all states are using AI chatbots to reduce administrative burden and free staff for substantive and creative work. The Indiana General Assembly uses chatbots to answer questions about regulations and statutes. Austin, Texas, streamlines residential construction permitting with AI, while Vermont’s transportation agency inventories road signs and assesses pavement quality.

Research synthesis: AI tools help policymakers quickly access evolving best practices and evidence-based approaches. Overton’s AI platform, for example, allows policymakers to identify how existing evidence aligns with priority areas, compare policy approaches across states and nations, and match with relevant researchers and projects.

Implementation monitoring: AI fills critical gaps in program evaluation without major new investments. California’s transportation department analyzes traffic patterns to optimize highway safety and inform infrastructure investments.

Predictive modeling: AI-enabled models help test assumptions about which interventions will succeed. These models use features such as organizational characteristics, physical and contextual factors, and historical implementation data to predict success of policy interventions, and their outputs can help tailor interventions and improve outcomes and success. Applications include targeting health interventions to patients with modifiable risk factors, identifying lead service lines in municipal water systems, predicting flood response needs and flagging households at eviction risk.
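The article does not name specific models, so the following is an illustrative sketch only: the kind of feature-based success prediction described above, approximated with a tiny logistic-regression classifier trained by gradient descent on invented toy data (the feature names, values, and labels are all hypothetical, not drawn from any real program).

```python
import math

def train_logistic(rows, labels, lr=0.1, epochs=2000):
    """Fit a tiny logistic-regression model with batch gradient descent."""
    n = len(rows)
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = 1.0 / (1.0 + math.exp(-z)) - y  # prediction minus label
            gw = [gi + err * xi for gi, xi in zip(gw, x)]
            gb += err
        w = [wi - lr * gi / n for wi, gi in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Probability that the intervention succeeds for feature vector x."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Invented toy data: [staffing_capacity, prior_success_rate] per site,
# label 1 = intervention succeeded, 0 = it did not.
X = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.7], [0.2, 0.3], [0.3, 0.1], [0.1, 0.2]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
print(predict(w, b, [0.85, 0.80]))  # well-resourced site: probability > 0.5
print(predict(w, b, [0.15, 0.20]))  # under-resourced site: probability < 0.5
```

Real deployments would use richer features (organizational, physical, contextual, and historical, as the article notes) and an off-the-shelf library rather than hand-rolled gradient descent, but the shape of the task is the same: features in, success probability out.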

Scaling up to wider adoption in policy and practice requires proactive steps by state and local governments and attendant guidance, monitoring and evaluation:

Adaptive policy framework: AI adoption often outpaces planning, and the definition of AI is often specific to its application. States need to define AI applications by sector (health, transportation, etc.) and develop adaptive operating strategies to guide and assess its impact. Thirty states have some guidance, but comprehensive approaches require clear definitions and inventories of current use.

Funding strategies: Policymakers must identify and leverage funding streams to cover the costs of procurement and training. Federal grants like the State and Local Cybersecurity Grant Program offer potential, though current authorization expires this Sept. 30. Massachusetts’ FutureTech Act exemplifies direct state investment, authorizing $1.23 billion for IT capital projects including AI.

Smart procurement: Effective AI procurement requires partnerships with vendors and suppliers and between chief information officers and procurement specialists. Contracts must ensure ethical use, performance monitoring and continuous improvement, but few states have procurement language related to AI. Speed matters — AI purchases risk obsolescence during lengthy procurement cycles.

Training and workforce development: Both current and future state and local government workforces need AI skills. Solutions include AI training academies and literacy programs for government workers, joint training programs between professional associations, and the events and training offered by the General Services Administration’s AI Community of Practice. The Partnership for Public Service has recently opened its AI Government Leadership program to state and local policymakers. Universities including Stanford and Michigan offer specialized programs for policymakers. Graduate programs in public policy, administration and law should incorporate AI governance tracks.

State AI policy development involves governors’ offices, chief information offices, security offices and legislatures. But success requires moving beyond pilot projects to systematic implementation. Governments that embrace this transition will be best positioned for future challenges. The opportunity exists now to set standards for AI-enabled governance, but it requires proactive steps in policy development, funding, procurement, workforce development and safeguards.

Joie Acosta is a senior behavioral scientist and the Global Scholar in Translation at RAND, a nonprofit, nonpartisan research institute. Sara Hughes is a senior policy researcher and the Global Scholar of Implementation at RAND and a professor of policy analysis at the RAND School of Public Policy.


Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.








AI-powered search engine to help Singapore lawyers with legal research



SINGAPORE – An artificial intelligence (AI)-powered search engine is expected to accelerate legal research and free up time for the more than three-quarters of lawyers in Singapore who subscribe to the legal research platform LawNet.

Developed in collaboration with the Singapore Academy of Law, this new tool allows lawyers to ask legal research questions in natural language and receive contextual, relevant responses.

It is trained on Singapore’s legal context and supported by data such as judgments, Singapore Law Reports, legislation and books.

GPT-Legal Q&A, which has been rolled out on LawNet, was launched by Justice Kwek Mean Luck on the second day of the TechLaw.Fest on Sept 11 at the Sands Expo and Convention Centre.

The earlier GPT-Legal model launched in 2024 provided summaries of unreported court judgments, and has since been used to generate more than 15,000 of them.

“This is a game-changing feature. This new function enables lawyers to ask legal research questions in natural language, and receive contextual, relevant responses, which are generated by AI grounded in LawNet’s content,” said Justice Kwek.

“It is designed to complement traditional keyword-based search by offering a more intuitive and responsive research experience.”
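Neither the academy nor LawNet has published GPT-Legal Q&A’s internals, but the grounding Justice Kwek describes matches the usual retrieval-augmented pattern: find the most relevant passages in the corpus, then have the model answer from them. A minimal sketch of the retrieval half, using toy passages and plain TF-IDF cosine scoring in place of a production search index (all passages and function names here are illustrative assumptions), might look like this:

```python
import math
from collections import Counter

def tokenize(text):
    return [t.lower().strip(".,?;:") for t in text.split()]

def build_idf(docs_tokens):
    """Smoothed inverse document frequency over a small corpus."""
    n = len(docs_tokens)
    df = Counter(t for toks in docs_tokens for t in set(toks))
    return {t: math.log(1 + n / c) for t, c in df.items()}

def vec(tokens, idf):
    tf = Counter(tokens)
    return {t: c * idf.get(t, 0.0) for t, c in tf.items()}

def cosine(a, b):
    dot = sum(v * b.get(t, 0.0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    """Return the passage most similar to the query; a grounded system
    would feed the top passages to a language model to draft the answer."""
    docs_tokens = [tokenize(d) for d in docs]
    idf = build_idf(docs_tokens)
    qv = vec(tokenize(query), idf)
    scores = [cosine(qv, vec(toks, idf)) for toks in docs_tokens]
    return docs[max(range(len(docs)), key=scores.__getitem__)]

passages = [
    "A contract is formed when there is offer, acceptance, and consideration.",
    "Sentencing for theft depends on the value of the property taken.",
    "Custody orders in family proceedings prioritise the welfare of the child.",
]
print(retrieve("When is a contract formed?", passages))
```

A production system would use semantic embeddings rather than raw term overlap, which is what lets users ask in natural language instead of guessing keywords, but the grounding step is the same: answers are generated from retrieved corpus passages, not from the model’s memory alone.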

For a start, the feature is focused on delivering insights on contract law, as it is a fundamental area of law that underpins many specialised fields.

“This is a significant undertaking. It involves extensive development and rigorous testing, to align technology to the demands of your work. As such, we will be rolling out this implementation in phases,” said Justice Kwek.

The model will be expanded to give insights into other significant areas of law, such as family law and criminal law.

The Infocomm Media Development Authority has also developed an agentic AI demonstrator for the Singapore Academy of Law to help corporate secretaries arrange annual general meetings (AGMs).

Agentic AI can carry out multi-step tasks without human intervention.

The AI agent can automate tasks such as scanning directors’ schedules to find a common time slot for an AGM.

With the AI agent handling routine corporate secretarial duties autonomously, professionals are freed to focus on higher-value advisory and strategic tasks.
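IMDA has not published how the demonstrator works internally. The core scheduling step the article describes, finding a window free across every director's calendar, reduces to a standard interval-merge scan; a sketch under that assumption (all names, hours, and numbers below are illustrative) could be:

```python
def common_slot(schedules, duration):
    """Earliest interval of `duration` hours within office hours (9-18)
    that is free for every attendee; returns (start, end) or None.

    Each schedule is a list of (start, end) busy intervals in hours.
    """
    DAY_START, DAY_END = 9, 18
    busy = sorted(iv for sched in schedules for iv in sched)
    # Merge overlapping busy intervals across all calendars.
    merged = []
    for start, end in busy:
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    # Scan the gaps between merged busy blocks for a wide-enough window.
    cursor = DAY_START
    for start, end in merged:
        if start - cursor >= duration:
            return (cursor, cursor + duration)
        cursor = max(cursor, end)
    if DAY_END - cursor >= duration:
        return (cursor, cursor + duration)
    return None

directors = [
    [(9, 11), (13, 14)],   # director A's busy intervals
    [(10, 12)],            # director B
    [(9, 10), (15, 16)],   # director C
]
print(common_slot(directors, 2))  # -> (16, 18)
```

The agentic part, fetching each director's calendar, proposing the slot, and sending invitations, would wrap a deterministic core like this; the interval arithmetic itself needs no AI at all, which is why such agents can hand this step to a plain tool call.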

Source: The Straits Times © SPH Media Limited. Permission required for reproduction







AI-powered research training to begin at IPE for social science scholars



Hyderabad: The Institute of Public Enterprise (IPE), Hyderabad, has launched a pioneering 10-day Research Methodology Course (RMC) focused on the application of Artificial Intelligence (AI) tools in social science research. Sponsored by the Indian Council of Social Science Research (ICSSR), Ministry of Education, Government of India, the program commenced on October 6 and will run through October 16, 2025, at the IPE campus in Osmania University.

Designed exclusively for M.Phil., Ph.D., and Post-Doctoral researchers across social science disciplines, the course aims to equip young scholars with cutting-edge AI and Machine Learning (ML) skills to enhance research quality, ethical compliance, and interdisciplinary collaboration. The initiative is part of ICSSR’s Training and Capacity Building (TCB) programme and is offered free of cost, with travel and daily allowances reimbursed as per eligibility.

The course is being organized by IPE’s Centre for Data Science and Artificial Intelligence (CDSAI), under the academic leadership of Prof. S Sreenivasa Murthy, Director of IPE and Vice-Chairman of AIMS Telangana Chapter. Dr. Shaheen, Associate Professor of Information Technology & Analytics, serves as the Course Director, while Dr. Sagyan Sagarika Mohanty, Assistant Professor of Marketing, is the Co-Director.

Participants will undergo hands-on training in Python, R, Tableau, and Power BI, alongside modules on Natural Language Processing (NLP), supervised and unsupervised learning, and ethical frameworks such as the Digital Personal Data Protection (DPDP) Act, 2023.

The curriculum also includes field visits to policy labs like T-Hub and NIRDPR, mentorship for research proposal refinement, and guidance on publishing in Scopus and ABDC-indexed journals.

Speaking about the program, Dr. Shaheen emphasized the need for social scientists to evolve beyond traditional methods and embrace computational tools for data-driven insights.

“This course bridges the gap between conventional research and emerging technologies, empowering scholars to produce impactful, ethical, and future-ready research,” she said.

Seats for the course are allocated on a first-come, first-served basis. The last date for nominations is September 15, 2025. With its blend of technical training, ethical grounding, and publication support, the RMC at IPE aims to be a significant step toward modernizing social science research in India.

Interested candidates can contact: Dr Shaheen, Programme Director, at [email protected] or on mobile number 9866666620.




