AI Insights

GSA to unveil USAi, a new tool for federal agencies to experiment with AI models 


The General Services Administration will roll out a new governmentwide tool Thursday that gives federal agencies the ability to test major artificial intelligence models, a continuation of Trump administration efforts to ramp up government use of automation. 

The AI evaluation suite, titled USAi.gov, will launch later Thursday morning and allow federal agencies to test various AI models, including those from Anthropic, OpenAI, Google and Meta to start, two senior GSA officials told FedScoop. 

The launch of USAi underscores the Trump administration’s increasing appetite for AI integration into federal government workspaces. The GSA has described these tools as a way to help federal workers with time-consuming tasks, like document summaries, and give government officials access to some of the country’s leading AI firms. 

The GSA, according to one of the officials, will act as a “curator of sorts” for determining which models will be available for testing on USAi. The official noted that additional models are being considered for the platform, with input from GSA’s industry and federal partners, and that American-made models are the primary focus. 

Grok, the chatbot made by Elon Musk’s xAI firm, is notably not included on the platform for its launch Thursday. xAI introduced a Grok for Government product last month, days after FedScoop reported on the GSA’s interest in the chatbot for government use. 

FedScoop reported last month that GSA recently registered the domain usai.gov.

How USAi.gov will work

The USAi tool builds upon GSA’s internal chatbot, GSAi, which was rolled out internally in March to give GSA employees access to different enterprise AI models. Zach Whitman, GSA’s chief AI officer and data officer, hinted last month that the GSA was exploring how it could implement its internal AI chatbot in other agencies. 

Once an agency tests a model on USAi, it has the option to procure it through the normal federal marketplace, one of the officials said. In other cases, an agency may stay on the USAi platform amid changing market dynamics and continue to access the model for testing, the official added. 

The platform appears to coincide directly with the GSA’s ongoing overhaul of the federal procurement process, which is focused on consolidating the government’s purchasing of goods and services. 

“What we don’t want is to get into this situation where we buy a few licenses for something here and a few licenses for something there, so being able to blanket our entire workforce with the same market-leading capabilities was hugely valuable to us, right off the bat,” Whitman said in an interview with FedScoop about the USAi launch. 

GSA has announced a number of new collaborations this month with firms like OpenAI, Anthropic and Box, to offer their products at a significantly discounted price to federal agencies. And FedScoop reported this week that the GSA is considering prioritizing the review of AI technologies in the procurement process. 

The USAi launch comes on the heels of the White House’s AI Action Plan, which calls on the GSA to establish an “AI procurement toolbox” to encourage “uniformity across the federal enterprise.” The plan, released last month, mandates that federal agencies guarantee any employee “whose work could benefit” from frontier language models has access to them. 

Building trust with models

Whitman said GSA is hopeful federal users will place more trust in a platform like USAi, noting that public tools on their own can prompt fears about working with sensitive materials. 

Dave Shive, GSA’s chief information officer, said in an interview with FedScoop that the agency is “not just prototyping technology.” 

“We’re also prototyping new ways to do business and it made a bunch of sense for us to build … a ‘model garden’ — a portfolio of models that our users can choose from, because they all have different strengths and weaknesses,” Shive said. Those models span a variety of vendors, he added, as the agency looks for new, creative ways to do 21st-century business at GSA. 

“And if they have that full suite of models, instead of being limited to just one vendor, it allows them to do that business level, business architecture, prototyping, the very things that we’re all expecting AI can help with,” he added. 

In addition to the chatbot and API testing features on USAi, agency administrators can also view GSA’s data-based evaluations for the models to determine which are best for their specific use cases, one of the officials said. 

“You can define ‘best’ in any number of ways, from cost implications, from speed implications, from usability implications to bias sensitivity implications,” the other official said, adding that “we have all this kind of decision criteria across a vast number of domains that go into them.” 

The GSA said it is offering USAi to all civilian federal agencies, along with the Defense Department. A person familiar with the matter said that as of late Wednesday afternoon, chief AI officers had not yet been briefed about the launch of the USAi.gov platform.

Three evaluations take place before a model becomes available for testing on USAi, one of the officials explained. The first focuses on safety, such as checking whether a model outputs hate speech; the second is based on performance at answering questions; and the third involves red-teaming, or adversarial testing of a model’s durability. 

The safety teams reviewing the report are specific to USAi, the official noted, emphasizing that this process is not intended to “overstep the role or function of a USAi platform” that welcomes agency input.

Rebecca Heilweil contributed reporting. 


Written by Miranda Nazzaro

Miranda Nazzaro is a reporter for FedScoop in Washington, D.C., covering government technology. Prior to joining FedScoop, Miranda was a reporter at The Hill, where she covered technology and politics. She was also a part of the digital team at WJAR-TV in Rhode Island, near her hometown in Connecticut. She is a graduate of the George Washington University School of Media and Public Affairs. You can reach her via email at miranda.nazzaro@fedscoop.com or on Signal at miranda.952.




LifeLong Learning and TXST expand series on Artificial Intelligence



Dr. Marianne Reese, Founder and Director of LifeLong Learning, conceived of the AI series due to AI’s exponential growth and the need for the public to understand its uses and limitations.

“AI is a relatively new tool that is being used in ways the public is often unaware of,” Reese noted. “We all need to know more about this powerful technology, understand AI’s positive and concerning applications, and learn the skills necessary to scrutinize the information it generates.

“AI will become increasingly prevalent, so we need to be informed consumers as AI impacts politics, medicine, business, finance and other areas of our lives,” Reese said.

The AI Learning Series is led by Dr. Kimberly Connor, Digital Strategy Lead for Information Technology at Texas State. Connor’s role is to help demystify innovation and make technology approachable for students, staff and faculty. With a rare combination of expertise in law, education and IT, Dr. Connor bridges the gap between complex digital tools and the people who use them.

Almost 80 lifelong learners attended the AI Series Kickoff Event on Tuesday, Aug. 19.

The Sept. 3 class covers AI use of our personal data and AI-generated misinformation and scams.

The Sept. 17 class features a comparison of different AI services (e.g., ChatGPT, Gemini).

The Oct. 1 class covers practical AI tools for daily life, with an exploration of AI applications for communication and creative projects.

The Oct. 15 class covers AI reliability and accuracy, AI limitations, and best practices for verification.

The Oct. 29 class covers AI for personal enrichment, such as enhancing hobbies and expanding personal interests.

The final class on Nov. 3 covers hands-on activities and features a closing presentation.

For more information visit their website at lllsanmarcos.org.




China Calls for Regulation of Investment in Artificial Intelligence



In a move reflecting a cautious strategic direction, China has called for curbing “excessive investment” and “random competition” in the artificial intelligence sector, despite its classification as a key driver of national economic growth and a critical competitive field with the United States.

Chang Kailin, a senior official at the National Development and Reform Commission, the country’s highest economic planning body, confirmed that Beijing will take a coordinated, integrated approach to developing artificial intelligence across its provinces, leveraging each region’s advantages and local industrial resources to avoid duplicated effort. He warned against a “herd mentality” of investing without careful planning.

These statements come amid a contraction in China’s manufacturing industries for the fifth consecutive month, reflecting the pressures faced by the world’s second-largest economy, as policymakers attempt to avoid repeating past mistakes like those in the electric vehicle sector, which led to an oversupply of production capacity and subsequent deflationary pressures.

Chinese President Xi Jinping also warned last month against the rush of local governments towards artificial intelligence without proper planning, a clear indication of the Chinese leadership’s desire to regulate the pace of growth in this vital sector.

Despite these warnings, China continues to accelerate the development, application and governance of artificial intelligence. The government revealed a new action plan last week aimed at boosting the sector, including significant support for private companies and encouragement for strong startups capable of competing globally. The commission described this as a pursuit of “dark horses” in the innovation race, an implicit reference to notable success stories like the Chinese company DeepSeek.

DeepSeek gained international fame earlier this year after launching a powerful, low-cost artificial intelligence model that competes with those of major American companies, igniting a wave of local and international interest in Chinese technologies.

Separately, a Bloomberg analysis showed that Chinese technology companies plan to install more than 115,000 artificial intelligence chips produced by the American company Nvidia in massive data centers being built in the desert regions of western China, signaling a continued effort to build out artificial intelligence infrastructure despite regulatory constraints.

These steps come at a time when Beijing seeks to balance support for technological innovation with regulating investment chaos, in an attempt to shape a more sustainable path for the growth of artificial intelligence within China’s broader economic vision.




A new research project is the first comprehensive effort to categorize all the ways AI can go wrong, and many of those behaviors resemble human psychiatric disorders.



Scientists have suggested that when artificial intelligence (AI) goes rogue and starts to act in ways counter to its intended purpose, it exhibits behaviors that resemble psychopathologies in humans. That’s why they have created a new taxonomy of 32 AI dysfunctions so people in a wide variety of fields can understand the risks of building and deploying AI.

In new research, the scientists set out to categorize the risks of AI in straying from its intended path, drawing analogies with human psychology. The result is “Psychopathia Machinalis” — a framework designed to illuminate the pathologies of AI, as well as how we can counter them. These dysfunctions range from hallucinating answers to a complete misalignment with human values and aims.


