Americans Prioritize AI Safety and Data Security

WASHINGTON, D.C. — As artificial intelligence continues to develop and grow in capability, Americans say the government should prioritize maintaining rules for AI safety and data security. According to a new nationally representative Gallup survey conducted in partnership with the Special Competitive Studies Project (SCSP), 80% of U.S. adults believe the government should maintain rules for AI safety and data security, even if it means developing AI capabilities more slowly.
In contrast, 9% say the government should prioritize developing AI capabilities as quickly as possible, even if it means reducing rules for AI safety and data security. Eleven percent of Americans are unsure.
Majority-level support for maintaining rules for AI safety and data security is seen across all key subgroups of U.S. adults, including across political affiliations: 88% of Democrats and 79% of Republicans and independents favor maintaining rules for safety and security. The poll did not explore which specific AI rules Americans support maintaining.
This preference is notable against the backdrop of global competitiveness in AI development. Most Americans (85%) agree that global competition for the most advanced AI is already underway, and 79% say it is important for the U.S. to have more advanced AI technology than other countries.
However, there are concerns about the United States’ current standing, with more Americans saying the U.S. is falling behind other countries (22%) than moving ahead (12%) in AI development. Another 34% say the U.S. is keeping pace, while 32% are unsure. Despite ambitions for U.S. AI leadership — and doubts about achieving it — Americans still prefer maintaining rules for safety and security, even if development slows. This view aligns with their generally low levels of trust in AI, which correlates with low adoption and use.
Only 2% of U.S. adults “fully” trust AI’s capability to make fair and unbiased decisions, while 29% trust it “somewhat.” Six in 10 Americans distrust AI somewhat (40%) or fully (20%), although trust rises notably among AI users (46% trust it somewhat or fully).
Among those who favor maintaining rules for AI safety and data security, 30% trust AI either somewhat or fully, compared with 56% among those who favor developing AI capabilities as quickly as possible.
Robust Support for Shared Governance and Independent Testing
Almost all Americans (97%) agree that AI safety and security should be subject to rules and regulations, but views diverge on who should be responsible for creating them. Slightly over half say the U.S. government should create rules and regulations governing private companies developing AI (54%), in line with the percentage who think companies should work together to create a shared set of rules (53%).
Relatively few Americans (16%) say each company should be allowed to create its own rules and regulations. These findings indicate broad support for both government and industry standards.
Americans are more decisive about who should test and evaluate the safety of AI systems before they are released. A majority (72%) say independent experts should conduct safety tests and evaluations, significantly more than the shares who say the government (48%) or each company (37%) should conduct them.
Multilateral Advancement Preferred to Working Alone
The spirit of cooperation extends to how people think the U.S. should develop its AI technology. Americans favor advancing AI technology in partnership with a broad coalition of allies and friendly countries (42%) over collaborating with a smaller group of its closest allies (19%) or working independently (14%).
This preference for AI multilateralism holds across party lines. Although Democrats are nearly twice as likely as Republicans (58% vs. 30%, respectively) to favor the U.S. collaborating with a larger group of allies, Republicans still favor working with either a large or small group of allies over working independently (19%).
Bottom Line
Findings from Gallup’s research with SCSP highlight important commonalities in how Americans wish to see AI governance evolve. Americans favor U.S. advancement in developing AI while also prioritizing maintaining rules for AI safety and data security. Majorities favor government regulation of AI, company collaboration on shared rules, independent expert testing, and multilateral cooperation in development. As policymakers and companies chart the future of AI, public trust — which is closely tied to adoption and use — will play an important role in advancing AI technology and shaping which rules are maintained.
Read the full Reward, Risk, and Regulation: American Attitudes Toward Artificial Intelligence report.
Uber Adds Kenya Wildlife Safaris With Eye on $4 Billion Industry

Uber Technologies Inc. has launched Uber Safari, offering expeditions into Nairobi National Park, the world’s only wildlife park within a capital city.
Transforming cancer care with Artificial Intelligence

According to the Global Cancer Observatory, cancer remains one of Switzerland’s most pressing public health challenges, with nearly 58,000 new cases and close to 20,000 deaths recorded in 2022. Achieving fully personalized care remains difficult because of fragmented data and limited integration across institutions. Closer coordination across the national healthcare network would enable more effective and equitable treatment for patients.
NAIPO (National AI Initiative for Precision Oncology) responds to this need with an integrated, AI-powered precision oncology platform designed to transform cancer care delivery. By applying advanced AI models at every stage of the patient journey, it aims to optimize diagnostics, personalize treatments, and support data-driven clinical decision-making. “Building on lessons from previous efforts in precision oncology in Switzerland, our initiative targets the development of novel, clinically informed AI tools by seamlessly integrating a common data platform, continuously adapting robust models, and designing effective clinical interfaces and patient apps,” says Dorina Thanou, lead of the initiative at the EPFL AI Center.
Selected as a Flagship Initiative by Innosuisse, the Swiss Innovation Agency, NAIPO will unfold over four years under the leadership of the EPFL AI Center and the ETH AI Center, uniting a large transdisciplinary team from a wide array of institutions, including the Swiss Data Science Center (SDSC), the Swiss National Supercomputing Centre (CSCS), the Universities of Applied Sciences and Arts of Northwestern Switzerland, the Bern University of Applied Sciences, the Universities and University Hospitals of Basel, Bern, Geneva, and Zurich, Debiopharm, Roche, SOPHIA GENETICS, Switch, Tune Insight, as well as the regional hospitals of Aarau, Baden, Ticino, Luzern and Winterthur and the private clinics of Hirslanden and Swiss Medical Network. With an expected total cost of CHF 18.9 million, the project will receive approximately CHF 8.25 million in public funding from Innosuisse, with the remaining amount coming from the implementation partners.
Transforming cancer research
NAIPO pioneers new AI approaches in cancer research and care, from clinical decision-support agents and large language models for mining clinical records, to foundation models for treatment response prediction and privacy-preserving methods. “Combined with high-throughput experimental models and patient avatars, these technologies will allow us to capture and model each patient’s uniqueness. The program will redefine AI’s role in medicine and strengthen Switzerland’s position as a leader in medical AI innovation,” said Elisa Oricchio, director of the Swiss Institute of Experimental Cancer Research (ISREC) at EPFL.
“Tailoring predictions and recommendations to individual patients is one of the most exciting aspects of NAIPO,” said Charlotte Bunne, a professor at EPFL working on model development. “Our models will continuously learn from curated biomedical literature, as well as from individual biological and clinical data, to identify potential new targets, biomarkers, and investigational drugs. Novel AI-driven insights will be integrated with clinically validated models and translated into decision-support systems.” To place patients’ specific needs at the center of the initiative, dedicated solutions such as a mobile app will be developed to enhance communication and ensure patients remain actively informed and engaged throughout their care.
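To make that idea concrete, here is a minimal, hypothetical sketch of a decision-support step in which an AI-derived response score is only surfaced when a clinically validated rule also holds. The class, function names, thresholds, and inputs are illustrative assumptions; the article does not specify NAIPO’s actual models or interfaces.

```python
# Hypothetical sketch only: names, thresholds, and inputs are illustrative,
# not NAIPO's actual models or clinical criteria.
from dataclasses import dataclass


@dataclass
class PatientProfile:
    tumor_mutation_burden: float  # mutations per megabase (illustrative feature)
    ecog_status: int              # established clinical performance score, 0-5


def model_response_score(profile: PatientProfile) -> float:
    """Stand-in for a learned treatment-response prediction in [0, 1]."""
    return min(1.0, profile.tumor_mutation_burden / 20.0)


def validated_eligibility(profile: PatientProfile) -> bool:
    """Stand-in for a clinically validated rule, e.g. ECOG status <= 2."""
    return profile.ecog_status <= 2


def recommend(profile: PatientProfile) -> str:
    # The learned score is only acted on when the validated rule also holds,
    # keeping the AI model subordinate to established clinical criteria.
    if validated_eligibility(profile) and model_response_score(profile) > 0.6:
        return "flag for tumor-board review of investigational options"
    return "follow standard-of-care pathway"


print(recommend(PatientProfile(tumor_mutation_burden=15.0, ecog_status=1)))
```

In a real system the score would come from trained models and the rule from guideline-based criteria, but the pattern of layering learned predictions on top of validated clinical logic is what the quote describes.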
Deployment and long-term vision
The program’s roadmap foresees clinical pilots at university and cantonal hospitals and private clinics, leading to an initial rollout at participating hospitals nationwide within four years. In addition to advancing cancer care, the infrastructure is intended to serve as a model for future applications in other disease domains.
“This initiative marks a transition toward a proactive model for precision oncology,” said Olivier Michielin, Head of Precision Oncology at Geneva University Hospitals (HUG) and Clinical Co-Coordinator of the project. “It reflects a commitment to ensuring that all patients, regardless of where they are treated within this network, benefit from the latest advances in AI-supported medicine.”
Secure, privacy-conscious collaboration is central to the initiative. Using modern data governance, the infrastructure will enable collective intelligence without centralizing sensitive health data. “We’re creating a secure and federated system that allows collaboration across institutions without compromising privacy,” said Nora Toussaint, Lead Health & Biomedical at the Swiss Data Science Center (SDSC). “Trust and transparency will be built into the design.”
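As a rough illustration of such a federated setup, the sketch below has each site compute a model update on its own records while a coordinator only ever sees and averages parameters, never patient data. The Hospital class, the logistic-regression update, and the FedAvg-style averaging are assumptions for illustration, not a description of NAIPO’s actual infrastructure.

```python
# Illustrative federated-averaging sketch (assumed, not NAIPO's real system):
# raw records stay at each site; only model parameters are shared.
import numpy as np


class Hospital:
    """Keeps its records locally; only parameter updates leave the site."""

    def __init__(self, features: np.ndarray, labels: np.ndarray):
        self.features = features  # never transmitted
        self.labels = labels      # never transmitted

    def local_update(self, weights: np.ndarray, lr: float = 0.1) -> np.ndarray:
        # One gradient step of logistic regression on local data only.
        preds = 1.0 / (1.0 + np.exp(-self.features @ weights))
        grad = self.features.T @ (preds - self.labels) / len(self.labels)
        return weights - lr * grad


def federated_round(hospitals: list, weights: np.ndarray) -> np.ndarray:
    # The coordinator receives only updated weights and averages them.
    return np.mean([h.local_update(weights) for h in hospitals], axis=0)


rng = np.random.default_rng(0)
sites = [Hospital(rng.normal(size=(50, 3)), rng.integers(0, 2, size=50))
         for _ in range(3)]
w = np.zeros(3)
for _ in range(20):
    w = federated_round(sites, w)
print("aggregated model weights:", w)
```

Production systems typically add safeguards such as secure aggregation or encrypted computation on top of this basic exchange.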
“NAIPO is exactly what clinical oncology needs today. We are able to produce much more data than a couple of years ago, but we often don’t know how to integrate it into actual patient care. NAIPO is instrumental in closing this gap,” says Andreas Wicki, professor of oncology at the University of Zurich and Clinical Co-Coordinator of the project.
NAIPO’s long-term vision includes reducing disparities in access, accelerating the discovery of new biomarkers and treatments, and supporting sustainable innovation across the Swiss healthcare system. Milestones and key results will be shared as the project progresses.

Free AI, data science lecture series launched at UH Mānoa

The University of Hawaiʻi at Mānoa launched a free artificial intelligence (AI) and data science public lecture series on September 15, with a talk by Eliane Ubalijoro, chief executive officer of the Center for International Forestry Research and World Agroforestry. Ubalijoro, based in Nairobi, Kenya, spoke on AI governance policies and ethics for managing land, biodiversity and fire.

The event, hosted at the Walter Dods, Jr. RISE Center, was organized by the Department of Information and Computer Sciences (ICS) in partnership with the Pacific Asian Center for Entrepreneurship (PACE). It kicked off a four-part series designed to share industry and government perspectives on emerging issues in AI and data science.
All lectures are open to students, professionals and community members, providing another avenue for the public to engage with UH Mānoa’s new graduate certificate and professional master’s program in AI and data science. The series is tied to ICS 601, the Applied Computing Industry Seminar, which connects students to real-world applications of AI.
“This series opens the door for our students and community to learn directly from leaders shaping the future of AI and data science,” said Department of Information and Computer Sciences Chair and Professor Guylaine Poisson.
PACE Executive Director Sandra Fujiyama added, “By bringing these talks into the public sphere, we’re strengthening the bridge between UH Mānoa, industry sectors and Hawaiʻi’s innovation community.”
Three additional talks are scheduled this fall:
- September 22, 12–1:15 p.m.: Rebecca Cai, chief data officer for the State of Hawaiʻi, will discuss government data and AI use cases.
- October 13, 12–1:15 p.m.: Shovit Bhari of IBM will share industry lessons on machine learning.
- November 10, 12–1:15 p.m.: Peter Dooher, senior vice president at Digital Service Pacific Inc., will cover designing end-to-end AI systems.
Register for the events at the PACE website.
ICS is housed in UH Mānoa’s College of Natural Sciences and PACE is housed in UH Mānoa’s Shidler College of Business.