
3 AI roadblocks—and how to overcome them

Evidence of uneven AI adoption in the private sector grows by the day, with executives worried about falling behind more tech-savvy competitors. But the stakes are different, and considerably higher, in government. For local leaders, AI isn’t about winning a race. It’s about unlocking new problem-solving capacity to deliver better services and meet pressing resident needs.

Even so, city governments face real barriers to further adoption, including persistent concerns about accuracy and privacy, procurement hurdles, and too little space for civil-servant experimentation. The good news? Innovative leaders are already showing how to overcome these obstacles. And, in doing so, they’re generating insights useful to others aiming to do the same.

Designing AI tools tailored to employees’ needs.

Boston Chief Innovation Officer Santiago Garces has no doubt that his city-hall colleagues want to push their efforts forward with AI, and he’s got the data to prove it. His team recently surveyed 600 Boston city employees and found that 78 percent of them want to further integrate the technology into their work. Asked what’s holding them back, civil servants cite security, accuracy, and intellectual property among their top concerns.

Boston’s solution: developing AI tools built around specific use cases, such as speeding up the procurement process, with employee concerns in mind from the start.

Following through on a project they began last year, Garces and his team recently deployed a tool called Bitbot that can answer employees’ questions about procurement. Because it was trained on dozens of procurement documents, as well as state law, local ordinances, and city best practices, Garces argues the tool is best described not as a chatbot (though it resembles one) but as the AI version of a handbook people know they can trust. And while the city’s randomized controlled trial of the tool’s impact is still wrapping up, Garces says the city has generally seen faster task completion and higher levels of accuracy from employees using it. At the same time, the tool is set up not to send information back to the major tech companies the way most public-facing AI tools do, which helps address employee concerns around privacy and security.

While not every city has the resources to develop products like this on its own, Garces notes that working with university partners (he works closely with Northeastern University) can be very affordable. And this sort of approach could help civil servants everywhere feel more comfortable pushing AI use forward.

“They want the city-provided tool that they know that they can trust,” Garces explains.

Rapidly prototyping to de-risk big purchases.

When not developing bespoke AI solutions, cities turn to outside vendors. And they’re increasingly doing so with great success and impact, according to Mitchell Weiss, a Harvard Business School professor and senior advisor to the Bloomberg Harvard City Leadership Initiative. Still, adoption is uneven. “Some local leaders are wary [of making a sizeable investment], given broader concerns in the private sector and worries about the return on investment,” he adds. Tight city budgets make the stakes of a misstep especially high, and private-sector caution only reinforces city leaders’ hesitation.

That’s why some cities are shaking up how they buy AI tools, both to speed up the procurement process and to make sure they stay laser-focused on boosting efficiency and effectiveness, rather than pursuing new tech for its own sake. Call it “try before you buy” for cities and AI.

Take San Antonio. Emily Royall, who until this past month worked as a senior manager for the emerging technology division in the city, helped run a rapid prototyping initiative that ensures potential AI contracts address tangible, department-level needs. The city spends up to $25,000 on three-to-six-month pilots before committing to longer-term vendor deals. The goal is to gauge impact and kick the tires first. 

Longer term, Royall and her new colleagues at the Procurement Excellence Network (she joined the team in September) believe one way cities will take their AI game to the next level is by banding together and conducting joint solicitations. And unlike traditional approaches to cooperative purchasing, cities are now determined to take a more muscular role in deciding for themselves what the most valuable AI use cases look like, and then calling on industry to develop the products that bring them to life while still meeting cities’ privacy concerns.

“This is about pooling purchasing power to deliver the outcomes that governments actually want to see from their implementation of the technology,” she says.

Leading teams toward bolder experimentation.

One of the cities leading that charge to shape the AI market is San Jose, Calif., which on Wednesday announced the first winners of its AI Incentive Program, offering grants to AI startups taking on everything from food waste to maternal health. But that’s not the only way the city is standing out. San Jose is also a model when it comes to creating a workplace where employees trust that leaders will have their backs as they experiment with the technology in new ways.

“Integrating AI into city hall isn’t just a question of expense,” explains Mai-Ling Garcia, digital practice director at the Bloomberg Center for Public Innovation at Johns Hopkins University. “It also requires that you have the political capital to spend to take risks.” 

And San Jose Mayor Matt Mahan is spending that political capital to great effect.

“He tells us it’s OK if you try something and it doesn’t work—you will not be penalized so long as there’s sufficient due diligence,” explains Stephen Caines, the city’s chief innovation officer.

But it’s not just what the mayor tells civil servants. And it’s not just the training San Jose provides through its data and AI upskilling programs, which are delivered in partnership with San Jose State University and through which the mayor wants to train 1,000 more civil servants next year. It’s the larger political climate he’s cultivated to encourage AI experimentation.

For example, the mayor presented a memo to the city council two years ago calling for the city to seize the moment, help shape (and stimulate) the emerging AI industry, and integrate the technology across city operations. When local lawmakers voted for it, it helped clarify for everyone in city hall that pushing public-sector AI use forward wasn’t just allowed, but a key part of their job.

“I am often reminding policymakers and my colleagues that we spend probably a disproportionate amount of time focused on the technology itself or the latest hot startup versus what moves the needle the most, which is the people who will use these tools,” Mayor Mahan tells Bloomberg Cities. He adds that it isn’t just him, but city leaders across the organization who encourage experimentation with the technology. 

“How you choose to react to failure matters a tremendous amount for building culture,” Mayor Mahan, who is participating in the Bloomberg Harvard City Leadership Initiative, explains.

Among San Jose’s most concrete AI successes so far is a traffic-signal initiative that has already shown the potential to reduce resident commute times by 20 percent. And if the mayor and his team have their way, that’s just the start, not only of pushing AI use forward in their own city but of encouraging other cities to experiment, too.

“The outdated vision of government is that we are merely consumers of technology,” Caines, the local innovation officer, explains. “The thesis that we’re putting forward is that government can be not only a lab where technology can be deployed, it can also be a valuable partner in co-creation, and we can actually serve as a market indicator by highlighting use cases that make a difference for residents.”

 




Trump’s AI Chip Bans Backfire, Ignite 60% China Tech Index Surge

In the escalating U.S.-China tech rivalry, President Trump’s stringent bans on exporting advanced AI chips have unexpectedly ignited a surge in China’s domestic semiconductor and technology sectors. Despite persistent economic headwinds like a protracted property crisis and ongoing trade tensions, Chinese tech stocks have experienced a remarkable rally throughout 2025. The Hang Seng Tech Index, a key barometer for the sector, has climbed more than 60%, outpacing many global benchmarks and drawing intense scrutiny from investors and analysts alike.

This boom stems directly from Trump’s policies aimed at curbing China’s access to cutting-edge U.S. technology, particularly chips from giants like Nvidia. By restricting exports of high-performance AI accelerators, the administration sought to hinder Beijing’s artificial intelligence ambitions. Instead, these measures have accelerated China’s push for self-reliance, funneling billions into homegrown alternatives and boosting companies such as Semiconductor Manufacturing International Corp. and Huawei Technologies Co.

Policy Repercussions and Market Dynamics

Critics argue that Trump’s approach, while intended to protect American dominance, has backfired by supercharging China’s innovation ecosystem. For instance, domestic chipmakers have ramped up production of alternatives to banned U.S. products, leading to skyrocketing valuations. Shares in firms like Cambricon Technologies, often dubbed China’s Nvidia equivalent, have quintupled in the past year, according to market data cited in reports from Yahoo Finance. This fervor has not only attracted domestic investment but also lured foreign hedge funds betting on Beijing’s resilience.

However, the rapid ascent has sparked warnings of overheating. Analysts at Goldman Sachs and JPMorgan have projected further gains—up to 35% for certain indices by 2026—but caution against bubble risks, echoing sentiments from Business Insider, which highlighted concerns over inflated valuations amid China’s broader economic slowdown. The rally’s intensity recalls past market manias, where policy-driven booms preceded sharp corrections.

Shifting Strategies in U.S.-China Tech Trade

Trump’s administration has shown signs of tactical flexibility, with reports of negotiations allowing limited sales of downgraded Nvidia chips to China. Nvidia CEO Jensen Huang noted in August that discussions with the White House for exporting a less advanced version of its next-gen GPU could take time, as detailed in coverage from Reuters. This comes after an unusual deal where the U.S. government would take a 15% cut of revenues from such sales, a move criticized in The New York Times as a short-term profit grab that risks eroding America’s long-term AI edge.

Senate Democrats have urged Trump to reconsider, warning in an open letter that easing restrictions could empower China’s tech sector further, per CNBC. Meanwhile, sentiment on platforms like X reflects investor frustration and irony, with users noting how bans intended to stifle China have instead propelled its chip industry forward, though such posts underscore speculative hype rather than hard evidence.

Investor Sentiment and Future Risks

The overheating debate has intensified as Chinese tech giants like Alibaba and Tencent ride the wave, with their stocks surging alongside semiconductor plays. Yet, regulatory pressures in China, including mandates for tech firms to prioritize domestic chips over foreign ones like Nvidia’s H20, signal potential volatility. As reported in The Times of India, Beijing is actively discouraging imports, favoring local options to build independence.

For industry insiders, this dynamic presents a double-edged sword: opportunities in undervalued Chinese assets amid the rally, but heightened risks from geopolitical escalations or economic downturns. Trump’s policies have undeniably reshaped global supply chains, forcing companies worldwide to navigate a fragmented tech environment. As 2025 progresses, the sustainability of this boom will hinge on whether China’s domestic innovations can match U.S. prowess without overheating into a bust.

Broader Implications for Global Tech Competition

Looking ahead, the U.S. exemption of certain chipmakers from tariffs—provided they commit to domestic manufacturing—has buoyed stocks like Nvidia’s, as noted in Yahoo Finance. This carrot-and-stick approach aims to repatriate production, but critics in The Washington Post decry it as a historic blunder, potentially handing China the tools to close the AI gap.

Ultimately, the chip ban’s unintended consequences highlight the complexities of tech nationalism. While boosting short-term gains in China’s markets, it underscores the need for balanced strategies that foster innovation without isolating key players. As tensions persist, stakeholders must weigh the allure of rapid growth against the perils of overvaluation in this high-stakes arena.




Latanya Sweeney on Time magazine’s list of 100 Most Influential People in AI — Harvard Gazette

Latanya Sweeney, Daniel Paul Professor of Government and Technology, has been named to Time magazine’s 2025 list of the 100 most influential people in the field of artificial intelligence.

Now in its third year, the Time list recognizes innovators, advocates, policymakers, and artists whose work is shaping the future of artificial intelligence. Sweeney is featured alongside figures such as Sam Altman, Elon Musk, and Mark Zuckerberg, underscoring the global impact of her contributions to privacy, data governance, and public interest technology.

Sweeney, a computer scientist who served as the chief technologist of the Federal Trade Commission and founded Harvard’s Public Interest Tech Lab, is a pioneer in data privacy. Her early research on k-anonymity and re-identification founded the field of data privacy, identified algorithmic bias for the first time, helped shape national policy, and remains foundational to multiple fields.

At Harvard, she has built a platform for training new technologists, advancing research on technology’s social impacts, and launching tools that serve the public good.

Among recent projects highlighted by Time are MyPrivacyPolls, a secure platform for whistleblowers to share information anonymously, and studies on the role of technology in voter registration and elections. Sweeney also co-authored a new framework for AI Data Communities, which enables small and medium-sized companies to share data to build AI tools without compromising privacy.

“I am deeply honored by this recognition, but it truly reflects the dedication, creativity, and vision of so many colleagues I’ve been fortunate to work alongside at Harvard,” said Sweeney. “From my time as FAS co-chair on AI to building the Public Interest Tech Lab at the Kennedy School, and engaging with the broader public interest technology community, I’ve seen firsthand how much is possible when we approach technology with democracy, equity, and accountability at the center. I am profoundly grateful for the encouragement, collaboration, and support of the remarkable community at Harvard.”

Beyond her research, Sweeney is a prominent voice in policy discussions, testifying before Congress and contributing to national debates on the ethical use of technology. Through both scholarship and advocacy, she works to amplify the public interest in technology governance. Her selection to the Time list reflects her influence at the intersection of technology and society, and her leadership in steering AI toward outcomes that benefit all.




China unveils ‘world’s first’ brain-like AI, 100x faster on local tech

Researchers at the Chinese Academy of Sciences’ Institute of Automation in Beijing have introduced a new artificial intelligence system called SpikingBrain 1.0. 

Described by the team as a “brain-like” large language model, it is designed to use less energy and operate on homegrown Chinese hardware rather than chips from industry leader Nvidia. 

“Mainstream Transformer-based large language models (LLMs) face significant efficiency bottlenecks: training computation scales quadratically with sequence length, and inference memory grows linearly,” said the researchers in a non-peer-reviewed technical paper. 
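To make that bottleneck concrete, here is a minimal, illustrative sketch in Python (not code from the paper; the dimensions are arbitrary assumptions): standard self-attention builds an n × n score matrix, so compute for that step grows quadratically with sequence length, while the key/value cache kept during autoregressive inference grows only linearly.

```python
import numpy as np

def attention_footprint(n: int, d: int = 64) -> tuple[int, int]:
    """Toy illustration of the Transformer bottleneck quoted above (not SpikingBrain code).

    Self-attention forms an n x n score matrix, so that step scales quadratically
    with sequence length n, while the key/value cache kept during autoregressive
    inference grows only linearly in n.
    """
    q = np.random.randn(n, d)
    k = np.random.randn(n, d)
    scores = q @ k.T                 # shape (n, n): the quadratic term
    kv_cache = np.zeros((2, n, d))   # keys and values per token: the linear term
    return scores.size, kv_cache.size

for n in (1_000, 2_000, 4_000):
    quad, lin = attention_footprint(n)
    print(f"n={n}: score matrix {quad:,} entries, KV cache {lin:,} entries")
```

Doubling the sequence length quadruples the score matrix but only doubles the cache, which is exactly the scaling behavior the researchers set out to avoid.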

According to the research team, SpikingBrain 1.0 performed certain tasks up to 100 times faster than some conventional models while being trained on less than 2% of the data typically required.

This project is part of a larger scientific pursuit of neuromorphic computing, which aims to replicate the remarkable efficiency of the human brain, an organ that operates on only about 20 watts of power.

“Our work draws inspiration from brain mechanisms,” added the researchers.

Replicating the efficiency of the human brain

The core technology behind SpikingBrain 1.0 is known as “spiking computation,” a method that mimics how biological neurons in the human brain function. 

Instead of activating an entire vast network to process information, as mainstream AI tools like ChatGPT do, SpikingBrain 1.0’s network remains mostly quiet. It uses an event-driven approach where neurons fire signals only when specifically triggered by input. 

This selective response is the key to reduced energy consumption and faster processing time. To demonstrate their concept, the team built and tested two versions of the model, a smaller one with 7 billion parameters and a larger one containing 76 billion parameters. Both were trained using a total of approximately 150 billion tokens of data, a comparatively small amount for models of this scale.
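As a rough illustration of that event-driven idea, here is a minimal leaky integrate-and-fire neuron in Python. It is a generic sketch of spiking computation under stated assumptions (the threshold, leak rate, and input stream are invented for the example), not SpikingBrain 1.0’s actual architecture, which the article does not detail.

```python
import numpy as np

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: a generic sketch of spiking computation
    (illustrative only; not SpikingBrain 1.0's published design).

    The neuron integrates input into a decaying membrane potential and emits a
    spike only when that potential crosses the threshold, then resets. Between
    spikes it triggers no downstream work, which is where the energy savings of
    event-driven networks come from."""
    potential = 0.0
    spike_times = []
    for t, x in enumerate(inputs):
        potential = potential * leak + x   # integrate input with leak
        if potential >= threshold:         # fire only when triggered
            spike_times.append(t)
            potential = 0.0                # reset after the spike
    return spike_times

# A mostly-quiet input stream: strong events are rare, so the neuron fires rarely
# and stays silent the rest of the time.
rng = np.random.default_rng(0)
inputs = np.where(rng.random(200) < 0.05, 1.2, 0.02)
print(simulate_lif(inputs))
```

In a full spiking network, those rare spike events, rather than dense matrix multiplications over every input, drive the downstream computation.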

The model’s efficiency is particularly notable when handling long sequences of data. In one test cited in the paper, the smaller model responded to a prompt consisting of 4 million tokens more than 100 times faster than a standard system. 

In a different test, a variant of SpikingBrain 1.0 demonstrated a 26.5-fold speed-up over conventional Transformer architectures when generating just the first token from a one-million-token context.

Stable performance

The researchers reported that their system ran stably for weeks on a setup of hundreds of MetaX chips, a platform developed by the Shanghai-based company MetaX Integrated Circuits Co. This sustained performance on domestic hardware underscores the system’s potential for real-world deployment.

These potential applications include the analysis of lengthy legal and medical documents, research in high-energy physics, and complex tasks like DNA sequencing, all of which involve making sense of vast datasets where speed and efficiency are critical.

“These results not only demonstrate the feasibility of efficient large-model training on non-NVIDIA platforms, but also outline new directions for the scalable deployment and application of brain-inspired models in future computing systems,” concluded the research paper.


