AI Insights
Is an ‘AI winter’ coming? Here’s what investors and executives can learn from past AI slumps

As summer fades into fall, many in the tech world are worried about winter. Late last month, a Bloomberg columnist asked “is the AI winter finally upon us?” British newspaper The Telegraph was more definitive. “The next AI winter is coming,” it declared. Meanwhile, social media platform X was filled with chatter about a possible AI winter.
An “AI winter” is what folks in artificial intelligence call a period in which enthusiasm for the idea of machines that can learn and think like people wanes—and investment for AI products, companies, and research dries up. There’s a reason this phrase comes so naturally to the lips of AI pundits: We’ve already lived through several AI winters over the 70-year history of artificial intelligence as a research field. If we’re about to enter another one, as some suspect, it’ll be at least the fourth.
The most recent talk of a looming winter has been triggered by growing concerns among investors that AI technology may not live up to the hype surrounding it—and that the valuations of many AI-related companies are far too high. In a worst-case scenario, this AI winter could be accompanied by the popping of an AI-inflated stock market bubble, with reverberations across the entire economy. While there have been AI hype cycles before, they’ve never involved anything close to the hundreds of billions of dollars that investors have sunk into the generative AI boom. And so if there is another AI winter, it could involve polar-vortex levels of pain.
The markets have been spooked recently by comments from OpenAI CEO Sam Altman, who told reporters he thought some venture-backed AI startups were grossly overvalued (although not OpenAI, of course, which is one of the most highly valued venture-backed startups of all time). Hot on the heels of Altman’s remarks came a study from MIT that concluded that 95% of corporate AI pilot projects fail to boost revenues.
A look at past AI winters, and what caused them, may give us some indication of whether that chill in the air is just a passing breeze or the first hints of an impending Ice Age. Sometimes those AI winters have been brought on by academic research highlighting the limitations of particular AI techniques. Sometimes they have been caused by frustrations getting AI tech to work well in real world applications. Sometimes both factors have been at play. But what previous AI winters all had in common was disillusionment among those footing the bill after promising new advances failed to deliver on the ensuing hype.
The first AI hype cycle
The U.S. and allied governments lavishly funded artificial intelligence research throughout the early days of the Cold War. Then, as now, Washington saw the technology as potentially conferring a strategic and military advantage, and much of the funding for AI research came from the Pentagon.
During this period, there were two competing approaches to AI. One was based on hard-coding logical rules for categorizing inputs into symbols and then for manipulating those symbols to arrive at outputs. This was the method that yielded the first great leaps forward in computers that could play checkers and chess, and also led to the world’s first chatbots.
The rival AI method was based on something called a perceptron, which was the forerunner of today’s neural networks, a kind of AI loosely built on a caricature of how the brain works. Rather than starting with rules and logic, a perceptron learned a rule for accomplishing some task from data. The U.S. Office of Naval Research funded much of the early work on perceptrons, which were pioneered by Cornell University neuroscientist and psychologist Frank Rosenblatt. Both the Navy and the CIA tested perceptrons to see if they could classify things like the silhouettes of enemy ships or potential targets in aerial reconnaissance photos.
The two competing camps both made hyperbolic claims that their technology would soon deliver computers that equalled or exceeded human intelligence. Rosenblatt told The New York Times in 1958 that his perceptrons would soon be able to recognize individuals and call out their names, that it was “only one more step of development” before they could instantly translate languages, and that eventually the AI systems would self-replicate and become conscious. Meanwhile Marvin Minsky, cofounder of MIT’s AI Lab and a leading figure in the symbolic AI camp, told Life magazine in 1970 that “in three to eight years we will have a machine with the general intelligence of an average human being.”
That’s the first prerequisite for an AI winter: hype. And there are clear parallels today in statements made by a number of prominent AI figures. Back in January, OpenAI CEO Sam Altman wrote on his personal blog that “we are now confident we know how to build [human-level artificial general intelligence] as we have traditionally understood it” and that OpenAI was turning increasingly towards building super-human “superintelligence.” He wrote that this year “we may see the first AI agents ‘join the workforce’ and materially change the output of companies.” Dario Amodei, the cofounder and CEO of Anthropic, has said that human-level AI could arrive in 2026. Meanwhile, Demis Hassabis, the cofounder and CEO of Google DeepMind, has said that AI matching humans across all cognitive domains would arrive in the next “five to 10 years.”
Government loses faith
But what precipitates an AI winter is some definitive evidence this hype cannot be met. For the first AI winter, that evidence came in a succession of blows. In 1966, a committee commissioned by the National Research Council issued a damning report on the state of natural language processing and machine translation. It concluded that computer-based translation was more expensive, slower and less accurate than human translation. The research council, which had provided $20 million towards this early kind of language AI (at least $200 million in today’s dollars), cut off all funding.
Then, in 1969, Minsky was responsible for a second punch. That year, he and Seymour Papert, a fellow AI researcher, published a book-length takedown of perceptrons. In the book, Minsky and Papert proved mathematically that a single-layer perceptron, like the kind Rosenblatt had shown off to great fanfare in 1958, could only learn classifications that are “linearly separable”—in other words, it could tell two categories apart only if a single straight line (or flat plane) could divide the examples. It could not master even a function as simple as “exclusive or,” which asks whether exactly one of two inputs is switched on.
It turned out there was a big problem with Minsky’s and Papert’s critique. While most interpreted the book as definitive proof that neural network-based AI would never come close to human-level intelligence, their proofs applied only to a simple perceptron that had just a single layer: an input layer consisting of several neurons that took in data, all linked to a single output neuron. They had ignored, likely deliberately, that some researchers in the 1960s had already begun experimenting with multilayer perceptrons, which had a middle “hidden” layer of neurons that sat between the input neurons and output neuron. True forerunners of today’s “deep learning,” these multilayer perceptrons could, in fact, learn the kinds of classifications that stumped their single-layer cousins. But at the time, training such a multilayer neural network was fiendishly difficult. And it didn’t matter. The damage was done. After the publication of Minsky’s and Papert’s book, U.S. government funding for neural network-based approaches to AI largely ended.
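To make that limitation concrete, here is a minimal sketch of a Rosenblatt-style single-layer perceptron, written in modern Python with NumPy purely for illustration (it is not Rosenblatt’s actual implementation, and the learning rate and epoch count are arbitrary). Trained with the classic error-correction rule, it masters a linearly separable function like AND but can never get “exclusive or” fully right, the kind of case Minsky and Papert formalized.

```python
import numpy as np

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Single-layer perceptron trained with the classic error-correction rule."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = int(w @ xi + b > 0)        # step activation
            w += lr * (target - pred) * xi    # nudge weights toward the target
            b += lr * (target - pred)
    return w, b

def accuracy(w, b, X, y):
    preds = (X @ w + b > 0).astype(int)
    return (preds == y).mean()

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # AND: linearly separable
y_xor = np.array([0, 1, 1, 0])   # XOR: not linearly separable

w, b = train_perceptron(X, y_and)
print("AND accuracy:", accuracy(w, b, X, y_and))   # reaches 1.0
w, b = train_perceptron(X, y_xor)
print("XOR accuracy:", accuracy(w, b, X, y_xor))   # never reaches 1.0
```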
Minsky’s and Papert’s attack didn’t just persuade Pentagon funding bodies. It also convinced many computer scientists that neural networks were a dead end. Some neural network researchers came to blame Minsky for setting back the field by decades. In 2006, Terry Sejnowski, a researcher who helped revive interest in neural networks, stood up at a conference and confronted Minsky, asking him if he were the devil. Minsky ignored the question and began detailing what he saw as the failings of neural networks. Sejnowski persisted, asking Minsky again if he were the devil. Eventually an angry Minsky shouted back: “Yes, I am!”
But Minsky’s symbolic AI soon faced a funding drought too. Also in 1969, Congress forced the Defense Advanced Research Projects Agency (DARPA), which had been a major funder of both AI approaches, to change its approach to issuing grants. The agency was told to fund research that had clear, applied military applications, instead of more blue-sky research. And while some symbolic AI research fit this rubric, a lot of it did not.
The final punch came in 1973, when the U.K. parliament commissioned Cambridge University mathematician James Lighthill to investigate the state of AI research in Britain. His conclusion was that AI had failed to show any promise of fulfilling its grand claims of equaling human intelligence and that many of its favored algorithms, while they might work for toy problems, could never deal with the real world’s complexity. Based on Lighthill’s conclusions, the U.K. government curtailed all funding for AI research.
Lighthill had only looked at U.K. AI efforts, but DARPA and other U.S. funders of AI research took note of its conclusions, which reinforced their own growing skepticism of AI. By 1974, U.S. funding for AI projects was a fraction of what it had been in the 1960s. Winter had set in—and it would last until the early 1980s.
Today, too, there are parallels with this first AI winter when it comes to studies suggesting AI isn’t meeting expectations. Two recent research papers from researchers at Apple and Arizona State University have cast doubt on whether cutting-edge AI models, which are supposed to use a “chain of thought” to reason about how to answer a prompt, are actually engaging in reasoning at all. Both papers conclude that rather than learning to apply generalizable logical rules and problem-solving techniques to new problems—which is what humans would consider reasoning—the models simply try to match a problem to one seen in their training data. These studies could turn out to be the equivalent of Minsky’s and Papert’s attack on perceptrons.
Meanwhile, there are also a growing number of studies on the real-world impact of today’s AI models that parallel the Lighthill and NRC reports. For instance, there’s that MIT study which concluded 95% of AI pilots are failing to boost corporate revenues. There’s a recent study from researchers at Salesforce that concluded most of today’s large language models (LLMs) cannot accurately perform customer relationship management (CRM) tasks—a particularly ironic conclusion since Salesforce itself has been pushing AI agents to automate CRM processes. Anthropic research showed that its Claude model could not successfully run a vending machine business—a relatively simple business compared to many of those that tech boosters say are poised to be “utterly transformed” by AI agents. There’s also a study from the AI research group METR that showed software developers using an AI coding assistant were actually 19% slower at completing tasks than they were without it.
But there are some key differences. Most significantly, today’s AI boom is not dependent on public funding. Although government entities, including the U.S. military, are becoming important customers for AI companies, the money fueling the current boom is almost entirely private. Venture capitalists have invested at least $250 billion into AI startups since ChatGPT debuted in November 2022. And that doesn’t include the vast amount being spent by large, publicly-traded tech companies like Microsoft, Alphabet, Amazon, and Meta on their own AI efforts. An estimated $350 billion is being spent to build out AI data centers this year alone, with even more expected next year.
What’s more, unlike in that first AI winter, when AI systems were mostly just research experiments, today AI is being widely deployed across businesses. AI has also become a massive consumer technology—ChatGPT alone is thought to have 700 million weekly users—which was never the case before. While today’s AI still seems to lack some key aspects of human intelligence, it is far more capable than earlier systems, and it is hard to argue that people are not finding the technology useful for a good number of tasks.
Winter No. 2: Business loses patience
That first AI winter thawed in the early 1980s thanks largely to increases in computing power and some improved algorithmic techniques. This time, much of the hype in AI was around “expert systems.” These were computer programs designed to encode the knowledge of human experts in a particular domain into a set of logical rules, which the software would then apply to accomplish some specific task.
Businesses were enthusiastic, believing expert systems would lead to a productivity boom. At the height of this AI hype cycle, nearly two-thirds of the Fortune 500 said they had deployed expert systems. By 1985, U.S. corporations were collectively spending more than $1 billion on expert systems, and an entire industry, much of it backed by venture capital, sprouted up around the technology. Much of it was focused on building specialized computer hardware, called LISP machines, that was optimized to run expert systems, many of which were coded in the programming language LISP. What’s more, starting in 1983, DARPA returned to funding AI research through the new Strategic Computing Initiative, eventually offering over $100 million to more than 90 different AI projects at universities throughout the U.S.
Although expert systems drew on many of the methods symbolic AI researchers had pioneered, many academic computer scientists were wary that inflated expectations would once again precipitate a boom-and-bust cycle that would hurt the field. Among them were Minsky and fellow AI researcher Roger Schank, who coined the term “AI winter” at an AI conference in 1984. The pair chose the neologism to echo the term “nuclear winter”—the devastating and bleak period without sunlight that would likely follow a major nuclear war.
Three things then happened to bring about the next winter. In 1987, a new kind of computer workstation debuted from Sun Microsystems. These workstations, as well as increasingly powerful desktop computers from IBM and Apple, obviated the need for specialized LISP machines. Within a year, the market for LISP machines evaporated. Many venture capitalists lost their shirts—and became wary of ever backing AI-related startups again. That same year, New York University computer scientist Jack Schwartz became head of DARPA’s computing research. He was no fan of AI in general or expert systems in particular, and slashed funding for both.
Meanwhile, businesses gradually discovered that expert systems were difficult and expensive to build and maintain. They were also “brittle”—while they could handle highly routinized tasks well, when they encountered slightly unusual cases, they struggled to apply the logical rules they had been given. In such cases, they often produced bizarre and inaccurate outputs, or simply broke down completely. Delineating rules that would apply to every edge case proved an impossible task. As a result, by the early 1990s, companies were starting to abandon expert systems. Unlike in the first AI boom, where scientists and government funders came to question the technology, this second winter was driven much more by business frustration.
Again there are some clear echoes in what’s happening with AI today. For instance, hundreds of billions of dollars are being invested in AI data centers being constructed by Microsoft, Alphabet, Amazon’s AWS, Elon Musk’s xAI, and Meta. OpenAI is working on its $500 billion Project Stargate data center plan with SoftBank, Oracle, and other investors. Nvidia has become the world’s most valuable company, with a $4.3 trillion market cap, largely by catering to this demand for AI chips for data centers. One of the key suppositions behind the data center boom is that the most cutting-edge AI models will be at least as large, if not larger, than the leading models that exist today. Training and running models of this size requires extremely large data centers.
But, at the same time, a number of startups have found clever ways to create much smaller models that mimic many of the capabilities of the giant models. These smaller models require far less computing power—and in some cases don’t even require the kinds of specialized AI chips that Nvidia makes. Some might be small enough to run on a smartphone. If this trend continues, it is possible that those massive data centers won’t be required—just as it turned out LISP machines weren’t necessary. That could mean that hundreds of billions of dollars in AI infrastructure investment winds up stranded.
Today’s AI systems are in many ways more capable—and flexible—than the expert systems of the 1980s. But businesses are still finding them complicated and expensive to deploy and their return on investment too often elusive. While more general purpose and less brittle than the expert systems were, today’s AI models remain unreliable, especially when it comes to addressing unusual cases that might not have been well-represented in their training data. They are prone to hallucinations, confidently spewing inaccurate information, and can sometimes make mistakes no human ever would. This means companies and governments cannot use AI to automate mission critical processes. Whether this means companies will lose patience with generative AI and large language models, just as they did with expert systems, remains to be seen. But it could happen.
Winter No. 3: The rise and fall (and rise) of neural networks
The 1980s also saw renewed interest in the other AI method, neural networks, due in part to the work of David Rumelhart, Geoffrey Hinton, and Ronald Williams, who in 1986 figured out a way to overcome a key challenge that had bedeviled multilayer perceptrons since the 1960s. Their innovation was something called backpropagation, or backprop for short: a method for tracing the network’s errors backward from the output through the hidden layer of neurons during each training pass, so that every connection’s weight could be adjusted and the network as a whole could learn efficiently.
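As a rough illustration of the idea (again a modern NumPy sketch with assumed details such as the hidden-layer size and learning rate, not the 1986 formulation), here is a tiny two-layer network trained with backpropagation on the “exclusive or” problem that stumps a single-layer perceptron:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

# One hidden layer of four neurons feeding a single output neuron
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)       # hidden-layer activations
    out = sigmoid(h @ W2 + b2)     # network output

    # Backward pass: push the output error back through the hidden layer
    err_out = (out - y) * out * (1 - out)       # error signal at the output
    err_hid = (err_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer

    # Adjust every weight in proportion to its share of the error
    W2 -= lr * h.T @ err_out; b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid; b1 -= lr * err_hid.sum(axis=0)

print(np.round(out.ravel(), 2))   # should approach [0, 1, 1, 0]
```

The two “err” lines are the backpropagation step: the output error is passed backward through the output weights to assign blame to the hidden neurons, which is precisely what researchers in the 1960s did not know how to do.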
Backprop, along with more powerful computers, helped spur a renaissance in neural networks. Soon researchers were building multilayered neural networks that could decipher handwritten letters on envelopes and checks, learn the relationships between people in a family tree, recognize typed characters and read them aloud through a voice synthesizer, and even steer an early self-driving car, keeping it between the lanes of a highway.
This led to a short-lived boom in neural networks in the late 1980s. But neural networks had some big drawbacks too. Training them required a lot of data, and for many tasks, the amount of data required just didn’t exist. They also were extremely slow to train and sometimes slow to run on the computer hardware that existed at the time.
This meant that there were many things neural networks could still not do. Businesses did not rush to adopt neural networks as they had expert systems because their uses seemed highly circumscribed. Meanwhile, there were other statistical machine learning techniques that used less data and required less computing power that seemed to be making rapid progress. Once again, many AI researchers and engineers wrote off neural networks. Another decade-long AI winter set in.
Two things thawed this third winter. First, the internet created vast amounts of digital data and made accessing it relatively easy, which helped break the data bottleneck that had held neural networks back in the 1980s. Then, starting in 2004, researchers at the University of Maryland and later at Microsoft began experimenting with a kind of computer chip that had been invented for video games, the graphics processing unit, to train and run neural networks. GPUs can perform many identical operations simultaneously, which is exactly the kind of parallel arithmetic neural networks require. Soon, Geoffrey Hinton and his graduate students began demonstrating that neural networks, trained on large datasets and run on GPUs, could do things—like classify images into a thousand different categories—that would have been impossible in the late 1980s. The modern “deep learning” revolution was taking off.
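The fit between GPUs and neural networks comes down to the fact that a network’s core computation is large matrix arithmetic made up of many independent multiply-adds. A toy sketch of one layer’s forward pass (illustrative only; the array sizes here are invented):

```python
import numpy as np

# A layer's forward pass is one large matrix multiplication plus a simple
# nonlinearity. Each output value can be computed independently, which is
# the kind of work a GPU spreads across thousands of cores at once.
batch = np.random.rand(512, 784)      # e.g., 512 inputs with 784 features each
weights = np.random.rand(784, 256)    # one fully connected layer
activations = np.maximum(0.0, batch @ weights)   # ReLU(inputs x weights)
print(activations.shape)              # (512, 256)
```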
That boom has largely continued through today. At first, neural networks were largely trained to do one particular task well—to play Go, or to recognize faces. But the AI summer deepened in 2017, when researchers at Google designed a particular kind of neural network called a Transformer that was good at figuring out language sequences. It was given another boost in 2019 when OpenAI figured out that Transformers trained on large amounts of text could not only write text well, but master many other language tasks, from translation to summarization. Three years later, an updated version of OpenAI’s transformer-based neural network, GPT-3.5, would be used to power the viral chatbot ChatGPT.
Now, three years after ChatGPT’s debut, the hype around AI has never been greater. There are certainly a few autumnal signs, a falling leaf carried on the breeze here and there, if past AI winters are any guide. But only time will tell if it is the prelude to another Arctic bomb that will freeze AI investment for a generation, or merely a momentary cold-snap before the sun appears again.
AI Insights
America’s 2025 AI Action Plan: Deregulation and Global Leadership

In July 2025, the White House released America’s AI Action Plan, a sweeping policy framework asserting that “the United States is in a race to achieve global dominance in artificial intelligence,” and that whoever controls the largest AI hub “will set global AI standards and reap broad economic and military benefits” (see Introduction). The Plan, following a January 2025 executive order, underscores the Trump administration’s vision of a deregulated, innovation-driven AI ecosystem designed and optimized to accelerate technological progress, expand workforce opportunities, and assert U.S. leadership internationally.
“America is the country that started the AI race. And as President of the United States, I’m here today to declare that America is going to win it.” –President Donald J. Trump 🇺🇸🦅
— The White House (@WhiteHouse) July 24, 2025
This article outlines the Plan’s development, key pillars, associated executive orders, and the legislative and regulatory context that frames its implementation. It also situates the Plan within ongoing legal debates about state versus federal authority in regulating AI, workforce adaptation, AI literacy, and cybersecurity.
Laying the Groundwork for AI Dominance
January 2025: Executive Order Calling for Deregulation
The first major executive action of Trump’s second term was the January 23, 2025, order titled “Removing Barriers to American Leadership in Artificial Intelligence.” This Executive Order (EO) formally rescinded policies deemed obstacles to AI innovation under the prior administration, particularly regarding AI regulation. Its stated purpose was to consolidate U.S. leadership by ensuring that AI systems are “free from ideological bias or engineered social agendas,” and that federal policies actively foster innovation.
The EO emphasized three broad goals:
- Promoting human flourishing and economic competitiveness: AI development was framed as central to national prosperity, with the federal government creating conditions for private-sector-led growth.
- National security: Leadership in AI was explicitly tied to the United States’ global strategic position.
- Deregulation: Existing federal regulations, guidance, and directives perceived as constraining AI innovation were revoked, streamlining federal involvement and eliminating bureaucratic barriers.
The January order set the stage for the July 2025 Action Plan, signaling a decisive break from the prior administration’s cautious, regulatory stance.
April 2025: Office of Management and Budget Memoranda
Prior to the release of America’s AI Action Plan, the Trump administration issued key guidance to facilitate federal adoption and procurement of AI technologies. This guidance focused on streamlining agency operations, promoting responsible innovation, and ensuring that federal AI use aligns with broader strategic objectives.
Two memoranda issued by the Office of Management and Budget (OMB) on April 3, 2025, provided a framework for this shift:
- “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust” (M-25-21): Empowers chief AI officers to serve as change agents promoting agency-wide AI adoption and to remove barriers to AI innovation. The memorandum also requires federal agencies to track AI adoption through maturity assessments and to identify high-impact use cases that warrant heightened oversight, balancing the rapid deployment of AI with privacy, civil rights, and civil liberties protections.
- “Driving Efficient Acquisition of Artificial Intelligence in Government” (M-25-22): Provides agencies with tools and concise, effective guidance on how to acquire “best-in-class” AI systems quickly and responsibly while promoting innovation across the federal government. It streamlines procurement processes, emphasizing competitive acquisition and prioritization of American AI technologies, and reduces reporting burdens while maintaining accountability for lawful and responsible AI use.
These April memoranda laid the procedural foundation for federal AI adoption, ensuring agencies could implement emerging AI technologies responsibly while aligning with strategic U.S. objectives.
July 2025: America’s AI Action Plan
Released on July 23, 2025, the AI Action Plan builds on the April memoranda by articulating clear principles for government procurement of AI systems, particularly Large Language Models (LLMs), to ensure federal adoption aligns with American values:
- Truth-seeking: LLMs must respond accurately to factual inquiries, prioritize historical accuracy and scientific inquiry, and acknowledge uncertainty.
- Ideological neutrality: LLMs should remain neutral and nonpartisan, avoiding the encoding of ideological agendas such as DEI unless explicitly prompted by users.
The Plan emphasizes that these principles are central to federal adoption, establishing expectations that agencies procure AI systems responsibly and in accordance with national priorities. OMB guidance, to be issued by November 20, 2025, will operationalize these principles by requiring federal contracts to include compliance terms and to make noncompliant vendors bear decommissioning costs. Unlike the April memoranda, which focused narrowly on agency adoption and contracting, the July Plan sets broad national objectives designed to accelerate U.S. leadership in artificial intelligence across sectors. These foundational principles inform the broader strategic vision outlined in the Plan, which is organized into three primary pillars:
- Accelerating AI Innovation
- Building American AI Infrastructure
- Leading in International AI Diplomacy and Security
📃The White House’s AI Action Plan sets a bold vision for innovation, infrastructure & global AI leadership. 🇺🇸🤖
In our episode [linked below], we unpack its 3 pillars, the mixed reactions around it, and what it means for practitioners. #AI #AIActionPlan #PracticalAI
— Practical AI 🤖 (@PracticalAIFM) August 26, 2025
Across its three pillars, the Plan identifies more than 90 federal policy actions. It highlights the Trump administration’s objective of achieving “unquestioned and unchallenged global technological dominance,” positioning AI as a driver of economic growth, job creation, and scientific advancement.
Pillar 1: Accelerating AI Innovation
The Plan emphasizes that the United States must have the “most powerful AI systems in the world” while ensuring these technologies create broad economic and scientific benefits. Not only should the U.S. have the most powerful systems, the argument goes, but also the most transformative applications.
The pillar covers topics in AI adoption, regulation, and federal investment.
- Removing bureaucratic “red tape and onerous regulation”: The administration argues that AI innovation should not be slowed by federal rules, nor by state regulations it considers “burdensome.” Funding for AI projects is directed toward states with favorable regulatory climates, potentially pressuring states to align with federal deregulatory priorities.
- Encouraging open-source and open-weight AI: Expanding access to AI systems for researchers and startups is intended to catalyze rapid innovation. In particular, the administration is looking to invest in breakthroughs in AI interpretability, control, and robustness to create an “AI evaluations ecosystem.”
- Federal adoption: Federal agencies are instructed to accelerate AI adoption, particularly in defense and national security applications.
- Workforce development: The Plan holds that the technology should ultimately create economic growth, new jobs, and scientific advancement. Policies also support workforce retraining to ensure that American workers thrive in an AI-driven economy, including pre-apprenticeship programs and high-demand occupation initiatives.
- Advancing protections: Ensuring that frontier AI protects free speech and American values. Notably, the pillar includes measures to “combat synthetic media in the legal system,” including deepfakes and fake AI-generated evidence.
Consistent with the innovation pillar, the Plan emphasizes AI literacy, recognizing that training and oversight are essential to AI accountability. This aligns with analogous principles in the EU AI Act, which requires deployers to inform users of potential AI harms. The administration proposes tax-free reimbursement for private-sector AI training and skills development programs to incentivize adoption and upskilling.
Pillar 2: Building American AI Infrastructure
AI’s computational demands require unprecedented energy and infrastructure. The Plan identifies infrastructure development as critical to sustaining global leadership, demonstrating the Administration’s pursuit of large-scale industrial plans. It contains provisions for the following:
- Data center expansion: Federal agencies are directed to expedite permitting for large-scale data centers, defined in a July 23, 2025 EO titled “Accelerating Federal Permitting of Data Center Infrastructure” as facilities “requiring 100 megawatts (MW) of new load dedicated to AI inference, training, simulation, or synthetic data generation.” These policies ease federal regulatory burdens to facilitate the rapid and efficient buildout of infrastructure. The EO revokes the Biden Administration’s January 2025 Executive Order on “Advancing United States Leadership in Artificial Intelligence Infrastructure,” but maintains an emphasis on expediting permits and leasing federal lands for AI infrastructure development.
- Energy and workforce development: To meet AI’s power requirements, the Plan calls for streamlined permitting for semiconductor manufacturing facilities and energy infrastructure, for example, strengthening and growing the electric grid. The Plan also calls for the development of covered components, defined by the July 23, 2025 EO as “materials, products, and infrastructure that are required to build Data Center Projects or otherwise upon which Data Center Projects depend.” Additionally, investments will be made in workforce training to operate these high-demand systems, in line with a new national initiative to train more workers for high-demand occupations such as electricians and HVAC technicians.
- Cybersecurity and secure-by-design AI: Recognizing AI systems as both defensive tools and potential security risks, the Administration directs the sharing of AI threat information between the public and private sectors and calls for updating incident response plans to account for AI-specific threats.
Pillar 3: Leading in International AI Diplomacy and Security
The Plan extends beyond domestic priorities to assert U.S. leadership globally. The following measures illustrate a dual focus of fostering innovation while strategically leveraging American technological dominance:
- Exporting American AI: The Plan reflects efforts to drive the adoption of American AI systems, computer hardware, and standards. The Commerce and State Departments are tasked with partnering with industry to deliver “secure full-stack AI export packages… to America’s friends and allies,” including hardware, software, and applications (see “White House Unveils America’s AI Action Plan”).
- Countering foreign influence: The Plan explicitly seeks to restrict access to advanced AI technologies by adversaries, including China, while promoting the adoption of American standards abroad.
- Global coordination: Strategic initiatives are proposed to align protection measures internationally and ensure the U.S. leads in evaluating national security risks associated with frontier AI models.
[Learn more about the pillars at ai.gov]
California’s Reception and Industry Response
The Plan addresses the interplay between federal and state authority, emphasizing that states may legislate AI provided their regulations are not “unduly restrictive to innovation.” Federal funding is explicitly conditioned on state regulatory climates, incentivizing alignment with the Plan’s deregulatory priorities. For California, this creates a favorable environment for the state’s robust tech sector, encouraging continued innovation while aligning with federal objectives. Simultaneously, the Federal Trade Commission (FTC) is directed to review its AI investigations to avoid burdening innovation, a policy reflected in the removal of prior AI guidance from the FTC website in March 2025, further supporting California’s leading role in AI development.
.@POTUS launched America’s AI Action Plan to lead in AI diplomacy and cement U.S. dominance in artificial intelligence.
AI is here now, and the USA will lead a new spirit of innovation. More on America’s action plan for AI: https://t.co/5lY6ktLDri
— Department of State (@StateDept) August 27, 2025
The White House released an article showcasing acclaim for the Plan. Among the supporters are the AI Innovation Association, Center for Data Innovation, Consumer Technology Association, and the US Chamber of Commerce. Leading tech companies—including California-based companies Meta, Anthropic, xAI, and Zoom—praised the Plan’s focus on federal adoption, infrastructure buildout, and innovation acceleration.
In a written reflection, California’s Anthropic highlighted alignment with its own policy priorities, including safety testing, AI interpretability, and secure deployment. The reflection includes commentary on how to accelerate AI infrastructure and adoption, promote secure AI development, democratize AI’s benefits, and establish a national standard by proposing a framework for frontier model transparency. The AI Action Plan’s recommendations to increase federal government adoption of AI include proposals aligned with policy priorities and recommendations Anthropic made to the White House in response to the Office of Science and Technology Policy’s “Request for Information on the Development of an AI Action Plan.” Additionally, Anthropic released a “Build AI in America” report detailing steps the Administration can take to accelerate the buildout of the nation’s AI infrastructure. The company is looking to work with the administration on measures to expand domestic energy capacity.
California’s tech industry has not only embraced the Action Plan but positioned itself as a key partner in shaping its implementation. With companies like Anthropic, Meta, and xAI already aligning their priorities to federal policy, California has an opportunity to set a national precedent for constructive collaboration between industry and government. By fostering accountability principles grounded in truth-seeking and ideological neutrality, and by maintaining a regulatory climate favorable to innovation, the state can both strengthen its relationship with Washington and serve as a model for other states seeking to balance growth, safety, and public trust in the AI era.
As America’s AI Action Plan moves from policy articulation to implementation, coordination between federal guidance and state-level innovation will be critical. California’s tech industry is already demonstrating how strategic alignment with national priorities can accelerate adoption, build infrastructure, and set standards for responsible AI development. The Plan offers an opportunity for states to serve as models of effective governance, showing how deregulation, accountability principles, and public-private collaboration can advance technological leadership while safeguarding public trust. By continuing to harmonize innovation with ethical oversight, the United States can solidify its position as the global leader in artificial intelligence.
AI Insights
Job seekers, HR professionals grapple with use of artificial intelligence

RALEIGH, N.C. (WTVD) — The conversation surrounding the use of generative artificial intelligence, such as OpenAI’s ChatGPT, Microsoft Copilot, Google Gemini, and others, is rapidly evolving and continuing to provoke difficult questions.
The debate comes as North Carolina Governor Josh Stein signed an executive order geared toward artificial intelligence.
It’s a space that is transforming at a pace much quicker than many people can adapt to, and the technology is finding its way more and more into everyday use.
One of those spaces is the job market.
“I’ll even share with my experience yesterday. So I had gotten a completely generative AI-written resume, and my first reaction was, ‘Oh, I don’t love this.’ And then my second reaction was, ‘but why?’ I’m going to want them doing this at work. So why wouldn’t I want them doing it in the application process?” said human resources executive Steve O’Brien.
O’Brien’s comments caught the attention of colleagues internally and externally.
“I think what we need to do is ask ourselves, how do we interview in a world where generative AI is involved. Not how do we exclude generative AI from the interview process,” added O’Brien.
According to the 2025 Job Seeker Nation Report by Employ, 69% of applicants say they use artificial intelligence to find or match their work history with relevant job listings. That is up one percentage point from 2024. Meanwhile, Employ found that 52% of applicants in 2025 used artificial intelligence to write or review resumes, down from 58% in 2024.
“I think recruiters are getting very good at spotting this AI-generated content. Every resume sounds the same, every line sounds the same, and the resume is missing the stories. I mean, humans love stories,” said resume and career coaching expert Mir Garvy.
Meanwhile, career website Zety found that 58% of HR managers believe it’s ethical for candidates to use AI during their job search.
“Now those applicant tracking systems are AI-informed. But when all of us have access to tools like ChatGPT, in a sense, we have now a more level playing field,” Garvy said.
“If you had asked me six months ago, I’d have said that I was disappointed that generative AI had made the resume. But I don’t think that I have that opinion anymore,” said O’Brien. “So I don’t fault the candidates who are being asked to write 75 resumes and reply to 100 jobs before they get an interview for trying to figure out an efficient way to engage in that marketplace.”
The pair, along with job seekers, agree that AI is a tool that is best used to aid and assist, but not replace.
“(Artificial intelligence) should tell your story. It should highlight the things that are most important and downplay or eliminate the things that aren’t,” said Garvy.
O’Brien added, “If you completely outsource the creative process to ChatGPT, that’s probably not great, right? You are sort of erasing yourself from the equation. But if there’s something in there that you need help articulating, you need a different perspective on how to visualize, I have found it to be an extraordinary partner.”
Copyright © 2025 WTVD-TV. All Rights Reserved.
AI Insights
North Carolina Governor Creates AI Council, State Accelerator

North Carolina Gov. Josh Stein on Tuesday signed an executive order (EO) creating the state’s Artificial Intelligence Leadership Council, tasked with advising on and supporting AI strategy, policy and training. The move comes just more than a year after the state published its AI responsible use framework.
Executive Order No. 24: Advancing Trustworthy Artificial Intelligence That Benefits All North Carolinians sets the direction for the council and creates the North Carolina AI Accelerator, which will serve as a hub within the N.C. Department of Information Technology (NCDIT). Council duties include creating a state AI road map; recommending AI policy, governance and ethics frameworks; guiding the accelerator; addressing workforce, economic and infrastructure impacts; and issuing recommendations for AI and public safety. Its first report is due June 30, 2026.
“AI has the potential to transform how we work and live, carrying with it both extraordinary opportunities and real risks,” Stein said in a news release. “Our state will be stronger if we are equipped to take on these challenges responsibly. I am looking forward to this council helping our state effectively deploy AI to enhance government operations, drive economic growth and improve North Carolinians’ lives.”
State CIO and NCDIT Secretary Teena Piccione will co-chair the council alongside state Department of Commerce Secretary Lee Lilley. The governor named 22 additional members from the public and private sectors. They include technology leaders, educators, state legislators and state agency representatives such as David Yokum, chief scientist of the Office of State Budget and Management. Vera Cubero, emerging technologies consultant for the N.C. Department of Public Instruction, and Charlotte CIO Markell Storay are among the appointees, each of whom will serve a two-year term.
“I am honored to chair this council dedicated to strategically harnessing the exponential potential of AI for the benefit of North Carolina’s people, businesses and communities,” Piccione said in the release. “The AI Accelerator, along with our other initiatives, puts us in a strong position to implement swift and transformative solutions that will not only position North Carolina at the forefront of technological innovation but also uphold the latest standards of data privacy and security.”
The AI Accelerator will serve as the hub for AI governance, research, development and training. It is housed in the NCDIT, where staff will develop an AI governance framework, risk assessment and statewide definitions for AI and generative AI, according to the EO. When it comes to AI, Piccione sees significant potential for its use in government, identifying use cases in areas including procurement, fraud detection and cybersecurity, she told Government Technology earlier this year.
The state, like others, has been accelerating its AI moves of late. NCDIT named its first AI governance and policy executive this year, the University of North Carolina has been working with faculty to address AI in classroom settings, and some state agencies are looking at ways to safely implement chat and other services. North Carolina now joins other states that have appointed councils, are working toward ethical governance, and are wrestling with data centers, AI use, and their impact on energy consumption, a topic also addressed in the EO.