
Want Accountable AI in Government? Start with Procurement

In 2018, the public learned that the New Orleans Police Department had been using predictive policing software from Palantir to decide where to send officers. Civil rights groups quickly raised alarms about the tool’s potential for racial bias. But the deeper issue wasn’t just how the technology worked, but the processes that shaped its adoption by the city. Who approved its use? Why was it hidden from the public?

Like New Orleans, all US cities rely on established public procurement processes to contract with private vendors. These regulations, often written into law, typically apply to every government purchase, whether it’s school buses, office supplies, or artificial intelligence systems. But this case exposed a major loophole in the city’s procurement rules: because Palantir donated the software for free, the deal sidestepped the city’s usual oversight processes. No money changed hands, so the agreement didn’t trigger standard checks such as a requirement for city council debate and approval. The city didn’t treat philanthropic gifts like traditional purchases, and as a result, key city officials and council members had no idea the partnership even existed.

Inspired by this story and several others across the US, our research team, made up of scholars from Carnegie Mellon University and the University of Pittsburgh, decided to investigate the purchasing processes that shape critical decisions about public sector AI. Through interviews with nineteen city employees in seven anonymized US cities, we found that procurement practices vary widely across localities, shaping what’s possible when it comes to governing AI in the public sector.

Procurement plays a powerful role in shaping critical decisions about AI. In the absence of federal regulation of AI vendors, procurement remains one of the few levers governments have to push for public values, such as safety, non-discrimination, privacy, and accountability. But efforts to reform governments’ procurement practices to address the novel risks of emerging AI technologies will fall short if they fail to acknowledge how purchasing decisions are actually made on the ground. The success of AI procurement reform interventions will hinge on reconciling responsible AI goals with legacy purchasing norms in the public sector.

When asked what procurement entails, many people think of a competitive solicitation process, which typically involves a structured review followed by an award decision. Once a use case for AI has been identified, a government initiates a solicitation in which it outlines its needs and invites vendors to submit proposals (a “Request for Proposal”, or RFP). City employees then follow structured review processes to score vendors’ proposed AI systems and select a winner. The city and the awarded vendor negotiate a contract that specifies obligations for each party, such as an agreed price for a specified time period. In some cities (but not others), all contracts must be approved in a public city council meeting.
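To make the scoring step concrete, here is a minimal sketch of how a review committee’s weighted rubric might be tallied across competing proposals. The criteria, weights, vendor names, and scores are hypothetical illustrations, not drawn from any city’s actual rubric.

```python
# Illustrative only: a toy weighted-scoring rubric for RFP review.
# Criteria, weights, vendors, and scores below are hypothetical.
WEIGHTS = {
    "technical_fit": 0.30,
    "cost": 0.25,
    "privacy_and_security": 0.25,
    "bias_and_accountability": 0.20,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (0-10) into a single weighted total."""
    return sum(WEIGHTS[criterion] * scores.get(criterion, 0.0) for criterion in WEIGHTS)

proposals = {
    "Vendor A": {"technical_fit": 8, "cost": 6, "privacy_and_security": 9, "bias_and_accountability": 7},
    "Vendor B": {"technical_fit": 9, "cost": 8, "privacy_and_security": 5, "bias_and_accountability": 4},
}

# Rank proposals by weighted total; the top-ranked vendor is "selected".
ranked = sorted(proposals, key=lambda v: weighted_score(proposals[v]), reverse=True)
for vendor in ranked:
    print(f"{vendor}: {weighted_score(proposals[vendor]):.2f}")
print(f"Selected: {ranked[0]}")
```

Weighting responsible AI criteria such as bias and accountability alongside cost is one way the formal review step can encode public values, and it is precisely the leverage that disappears when a purchase bypasses this process.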

Today, most efforts to improve AI procurement target steps in this conventional solicitation process. Groups like the World Economic Forum have published resources to help governments incorporate responsible AI considerations into RFPs and contract templates.

But as we’ve seen, many AI systems bypass the formal solicitation process altogether. Instead, cities often make use of alternative purchasing pathways. For example, procurement law typically allows small-dollar purchases to skip competitive bidding, so employees can buy low-cost AI tools using government-issued purchasing cards.

Other alternative purchasing pathways include AI donated by companies, acquired through university partnerships, or freely available to the public, like ChatGPT. Vendors are also increasingly rolling new AI features into products covered by existing contracts, without notifying the public or city staff. The result is that most of the available resources designed to support responsible AI procurement do not apply to the majority of AI acquisitions today.

While competitive solicitations offer several advantages for promoting responsible AI governance, many city employees view them as inefficient and cumbersome, and instead turn to alternative purchasing pathways when acquiring AI. This raises a key question: how might local governments establish consistent AI governance norms when most tools are acquired outside of the formal solicitation process? Answering this question requires looking more closely at who is involved in each type of acquisition.

How local governments organize and staff their procurements

Across interviewed cities, one of the clearest divides was in which city employees were brought in to oversee each AI acquisition. Some cities had established fully centralized oversight processes in which every software acquisition — AI included — had to pass through IT staff who could vet it for quality and risk. Others were largely decentralized, giving individual departments like police, fire, and schools free rein to manage their own IT portfolios.

These governance arrangements have real implications for oversight capacity — and suggest that a one-size-fits-all reform approach is unlikely to succeed. Some cities have started adopting centralized reviews that bring “AI experts” trained to assess AI risks into every acquisition, enabling more consistent oversight. In contrast, cities with histories of decentralized IT governance face two paths: either train individual departments to assess AI risks, or reconfigure existing procurement workflows to establish centralized reviews that ensure minimum ethical standards are met.

Open questions looking ahead

Advocates have long recognized the potential of public procurement to play a gatekeeping role in determining which technologies are acquired and deployed. The past year has been an especially exciting time for local governments that have started to integrate responsible AI considerations into their existing public procurement practices through grassroots organizations such as the Government AI Coalition. Our team’s research, published at the 2025 ACM Conference on Fairness, Accountability, and Transparency, adds a missing layer to the existing conversation on AI procurement by surfacing how AI procurement actually works in practice.

Our research raises key questions that local governments will need to grapple with to establish effective oversight for all AI acquisitions:

  1. How can local governments stand up oversight and review processes for AI proposals that may bypass the conventional solicitation process?
  2. Who within a government has the capacity and leverage to be responsible for identifying and managing the risks posed by procured AI technology?
  3. How might existing procurement workflows be restructured to ensure that the right people are brought in to conduct meaningful evaluation of proposed AI solutions?

We anticipate there’s no one-size-fits-all model for how local governments should structure their procurement processes to promote responsible procurement and governance of AI. But this moment offers a rare opportunity for policy experts, researchers, and advocates to come together to reshape AI procurement (for example, to center residents’ input and participation). Public procurement is where some of the most consequential decisions about public sector AI are made. If we want to understand why an AI system is adopted — and whose interests it serves — we must begin by looking at how it was acquired in the first place.

Acknowledgements: Co-authors Beth Schwanke, Ravit Dotan, Harrison Leon, and Motahhare Eslami contributed to this research.




Could gen AI radically change the power of the SLA?

Clorox’s lawsuit cites transcripts of help desk calls as evidence of Cognizant’s negligence, but what if those calls had been captured, transcribed, and analyzed to send real-time alerts to Clorox management? Could the problematic behavior have been discovered early enough to thwart the breach?

Here, generative AI could have a significant impact: it can capture information from a wide range of communication channels — potentially actions as well, via video — and analyze it for deviations from what a company has been contracted to deliver. The result could be near-real-time alerts about problematic behavior, which could spur a rethinking of the SLA as it is currently practiced.
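As a rough sketch of what such monitoring could look like, the snippet below scans help desk call transcripts for phrases that suggest contracted procedures are being skipped and raises alerts. The sample transcript, the flagged phrases, and the alert structure are all hypothetical, and a production system would more likely rely on an LLM or a speech-analytics service than on simple keyword matching.

```python
# Illustrative sketch: flag transcript lines that suggest deviation from
# contracted service procedures. Phrases and the sample transcript are hypothetical.
from dataclasses import dataclass

DEVIATION_PHRASES = [
    "skip the verification",
    "just reset it without checking",
    "no ticket needed",
]

@dataclass
class Alert:
    call_id: str
    line: str
    matched_phrase: str

def scan_transcript(call_id: str, transcript: str) -> list[Alert]:
    """Return an Alert for every transcript line containing a flagged phrase."""
    alerts = []
    for line in transcript.splitlines():
        lowered = line.lower()
        for phrase in DEVIATION_PHRASES:
            if phrase in lowered:
                alerts.append(Alert(call_id, line.strip(), phrase))
    return alerts

# Example run on a made-up help desk exchange.
sample = (
    "Agent: I can reset that password for you.\n"
    "Caller: I don't have my employee ID handy.\n"
    "Agent: That's fine, just reset it without checking."
)
for alert in scan_transcript("call-0001", sample):
    print(f"[ALERT] {alert.call_id}: '{alert.matched_phrase}' -> {alert.line}")
```

The notable shift is that the checks derive from the contract’s qualitative obligations rather than from uptime or ticket-closure metrics alone.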

“This is flipping the whole idea of SLA,” said Kevin Hall, CIO for the Westconsin Credit Union, which has 129,000 members throughout Wisconsin and Minnesota. “You can now have quality of service rather than just performance metrics.”




Box’s new AI features help unlock dormant data – Computerworld

AI provides a technique to extract value from this untapped resource, said Ben Kus, chief technology officer at Box. Using the widely scattered data properly requires preparation, organization, and interpretation to make sure it is applied accurately, Kus said.

Box Extract uses reasoning to dig deep and extract relevant information. The AI technology ingests the data, reasons over it to extract context, matches patterns, reorganizes the information by placing it in fields, and then draws correlations from the new structure. In effect, it restructures unstructured data through smarter AI-driven analysis.
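Box has not published the internals of Extract, so the snippet below is only a generic illustration of the underlying pattern: pulling named fields out of free-form text so they can later be queried and correlated. The field names, regular expressions, and sample document are hypothetical.

```python
# Generic illustration of structuring unstructured text into fields.
# Not Box's implementation; field names, patterns, and the document are hypothetical.
import re
from dataclasses import dataclass, asdict

@dataclass
class ContractRecord:
    vendor: str | None
    start_date: str | None
    annual_value: str | None

def extract_fields(text: str) -> ContractRecord:
    """Pull a few named fields out of free-form contract text."""
    vendor = re.search(r"between the City and ([A-Z][\w&., ]+?)(?:,|\.)", text)
    start = re.search(r"effective (\w+ \d{1,2}, \d{4})", text)
    value = re.search(r"\$[\d,]+(?:\.\d{2})? per year", text)
    return ContractRecord(
        vendor=vendor.group(1).strip() if vendor else None,
        start_date=start.group(1) if start else None,
        annual_value=value.group(0) if value else None,
    )

doc = ("This agreement is made between the City and Acme Analytics, "
       "effective March 1, 2024, for $120,000.00 per year.")
print(asdict(extract_fields(doc)))
```

Once fields like these exist, they can be indexed, joined, and analyzed like any other structured data, which is where the correlations drawn from the new structure come in.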

“Unstructured data is cool again. All of a sudden it’s not just about making it available in the cloud, securing it, or collaboration, but it’s about doing all that and AI,” Kus said.




CoreWeave scales rapidly to meet AI growth

Despite short-term stock pressure, CoreWeave remains positioned to meet overwhelming AI compute demand, supporting long-term optimism in the sector.

Nvidia-backed CoreWeave says peak AI investment is still far off, as demand for compute capacity from OpenAI, hyperscalers, enterprises, and governments continues to surge. CEO Michael Intrator said CoreWeave is rapidly scaling to meet soaring global GPU demand.

CoreWeave shares have fallen around 20% over the past month despite strong market interest. The decline follows a higher-than-expected Q2 net loss, $1 billion in capital expenditure, and a projected $500 million this quarter, raising debt concerns.

Since the IPO lockup expiry, insider stock sales have added to the downward pressure.

Intrator defended the company’s strategy, describing debt as the most efficient way to fund growth. Analysts warn CoreWeave shares could stay volatile, though strong AI infrastructure demand supports long-term optimism.



