AI Research
OpenAI Quietly Turns to Google to Stay Online

OpenAI, the company behind ChatGPT, has quietly added Google Cloud as one of its official service providers, meaning Google will now help power the systems that run ChatGPT and other AI products.
This development was disclosed on OpenAI’s website in a list of what are called sub-processors, or companies that handle or process user data on OpenAI’s behalf.
For everyday users, it may not seem like a big deal. But behind the scenes, it is a major shift.
OpenAI, which is backed by Microsoft, has often been seen as a direct competitor to Google in the race to build and monetize artificial intelligence. Both companies have invested billions into AI and compete on everything from chatbot performance to search engine dominance. Now, OpenAI is renting server space and computing power from the same company it is trying to beat.
Why This Is Happening
Earlier this year, OpenAI CEO Sam Altman made a series of public posts on X (formerly Twitter) admitting that the company was struggling with infrastructure. There were not enough graphics processing units—known as GPUs—to keep up with user demand. GPUs are the specialized chips that allow AI models like ChatGPT to operate at scale. They are expensive, hard to find, and mostly controlled by a few tech giants.
Altman put it bluntly in April: “We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges.”
He later added, in a post dated April 1: “working as fast we can to really get stuff humming; if anyone has GPU capacity in 100k chunks we can get asap please call!”
That was apparently not a joke.
In the months since, OpenAI has quietly moved to stabilize its systems. And now we know how. By partnering with Google Cloud, OpenAI gains access to some of the most advanced AI hardware and data center infrastructure on Earth. Google, like Amazon and Microsoft, runs massive server farms that rent out computing power to other companies. And unlike OpenAI, it has enough chips to meet demand.
What This Means for Users
If you’ve noticed ChatGPT slowing down or glitching in recent weeks, it is likely a result of the overwhelming demand on OpenAI’s servers. Millions of people now use the tool daily, and the company’s infrastructure has not scaled fast enough to handle it.
With Google now onboard, OpenAI may be able to deliver faster responses, more reliable uptime, and future feature rollouts that had previously been delayed. It also gives OpenAI breathing room to focus on its core research and product development without being held back by hardware shortages.
Big Tech Is Still the Backbone
This partnership also reveals something deeper about the future of AI. Even as companies talk about independence, decentralization, and disruption, the reality is that a handful of tech giants still control the essential tools. Whether it is through chips, data centers, or cloud infrastructure, companies like Google, Microsoft, and Amazon are still the backbone of everything online, including artificial intelligence.
So while OpenAI and Google may be rivals on the surface, they are now quietly working together behind the scenes. And for users, that means the future of AI may be more interconnected than anyone expected.
AI Research
Congress ramps up push to arm consumer product regulators with AI tools

A move to empower federal consumer product regulators with artificial intelligence tools picked up steam this week with the introduction of a bipartisan Senate bill whose companion has already passed the House.
The Consumer Safety Technology Act from Sens. John Curtis, R-Utah, and Lisa Blunt Rochester, D-Del., calls on the Consumer Product Safety Commission to create a pilot program that uses AI to track product injury trends, identify hazards, monitor recalls and pinpoint which products fall short of critical standards.
The legislation also directs the Federal Trade Commission and the Commerce secretary to deliver a report on blockchain technology and tokens.
“The world is changing fast, and consumer protection must keep pace,” Curtis said in a press release Thursday. “This bill puts the right tools in the hands of experts — employing AI to catch dangerous products before they hurt families, exploring blockchain to strengthen supply chains, and making sure digital tokens don’t become a new avenue for fraud. This is about keeping people safe while helping American innovation thrive.”
The House version of the bill, introduced in March by Rep. Darren Soto, cleared the lower chamber in July. The Florida Democrat said at the time that the legislation would “help make the CPSC more efficient.”
“The reality is, the crooks are already using AI,” Soto said. “The cops on the beat need to be able to use this, too.”
The Senate bill directs the CPSC to seek out a variety of stakeholders to consult on the agency’s AI pilot, including cybersecurity experts, technologists, data scientists, machine-learning specialists, retailers, consumer product safety groups and manufacturers.
Within a year of the pilot’s conclusion, the CPSC would be charged with submitting a report to Congress detailing its findings and data, “including the extent to which the use of artificial intelligence improved the ability of the Commission to advance the consumer product safety mission,” the bill states.
The blockchain section of the bill orders the FTC and Commerce Department to study how the technology can be leveraged to protect consumers by guarding against fraud attempts and other unfair and deceptive practices. There would also be an examination of what federal regulations could be modified to spur blockchain adoption.
A separate report would look into unfair or deceptive acts and practices tied to transactions via digital tokens. A fact sheet from Curtis said that provision is aimed at “ensuring consumers are protected without stifling responsible innovation.”
Blunt Rochester said in a statement that the government “must be able to keep up with new and emerging technologies, especially when it comes to consumer safety.”
“The Consumer Safety Technology Act would allow the Consumer Product Safety Commission to explore using artificial intelligence to further its critical goals,” she continued. “I am grateful to work alongside Senator Curtis on this legislation and look forward to getting it over the finish line.”
AI Research
3 Arguments Against AI in the Classroom

Generative artificial intelligence is here to stay, and K-12 schools need to find ways to use the technology for the benefit of teaching and learning. That’s what many educators, technology companies, and AI advocates say.
In response, more states and districts are releasing guidance and policies around AI use in the classroom. Educators are increasingly experimenting with the technology, with some saying that it has been a big time saver and has made the job more manageable.
But not everyone agrees. There are educators who are concerned that districts are buying into the AI hype too quickly and without enough skepticism.
A nationally representative EdWeek Research Center survey of 559 K-12 educators conducted during the summer found that they are split on whether AI platforms will have a negative or positive impact on teaching and learning in the next five years: 47% say AI’s impact will be negative, while 43% say it will be positive.
Education Week talked to three veteran teachers who are not using generative AI regularly in their work and are concerned about the potential negative effects the technology will have on teaching and learning.
Here’s what they think about using generative AI in K-12.
AI provides ‘shortcuts’ that are not conducive to learning
Dylan Kane, a middle school math teacher at Lake County High School in Leadville, Colo., isn’t “categorically against AI,” he said.
He has experimented with the technology personally, using it to help improve his Spanish-language skills; AI is a “half decent” Spanish tutor if you understand its limitations, he said. For his teaching job, Kane, like many other teachers, has tried AI tools for generating student materials, but he finds it takes too many iterations of prompting to produce something he would actually put in front of his classes.
“I will do a better job just doing it myself and probably take less time to do so,” said Kane, who is in his 14th year of teaching. Creating student materials himself means he can be “more intentional” about the questions he asks, how they’re sequenced, how they fit together, how they build on each other, and what students already know.
His biggest concern is how generative AI will affect educators’ and students’ critical-thinking skills. Too often, people are using these tools to take “shortcuts,” he said.
“If I want students to learn something, I need them to be thinking about it and not finding shortcuts to avoid thinking,” Kane said.
The best way to prepare students for an AI-powered future is to “give them a broad and deep collection of knowledge about the world and skills in literacy, math, history and civics, and science,” so they’ll have the knowledge they need to understand if an AI tool is providing them with a helpful answer, he said.
That’s true for teachers, too, Kane said. The reason he can evaluate whether AI-generated material is accurate and helpful is because of his years of experience in education.
“One of my hesitations about using large language models is that I won’t be developing skills as a teacher and thinking really hard about what things I put in front of students and what I want them to be learning,” Kane said. “I worry that if I start leaning heavily on large language models, that it will stunt my growth as a teacher.”
And the fact that teachers have to use generative AI tools to create student materials “points to larger issues in the teaching profession” around the curricula and classroom resources teachers are given, Kane said. AI is not “an ideal solution. That’s a Band-Aid for a larger problem.”
Kane’s open to using AI tools. For instance, he said he finds generative AI technology helpful for writing word problems. But educators should “approach these things with a ton of skepticism and really ask ourselves: ‘Is this better than what we should be doing?’”
Experts and leaders haven’t provided good justifications for AI use in K-12
Jed Williams, a high school math and science teacher in Belmont, Mass., said he hasn’t heard any good justifications for why generative AI should be implemented in schools.
The way AI is being presented to teachers tends to be “largely uncritical,” said Williams, who teaches computer science, physics, and robotics at Belmont High School. Often, professional development opportunities about AI don’t provide a “critical analysis” of the technology and just “check the box” by mentioning that AI tools have downsides, he said.
For instance, one professional development session he attended spent only “a few seconds” on the downsides of AI tools, Williams said. The session covered the issue of overreliance, but he criticized it for not addressing “labor exploitation, overuse of resources, sacrificing the privacy of students and faculty.”
“We have a responsibility to be skeptical about technologies that we bring into the classroom,” Williams said, especially because there’s a long history of ed-tech adoption failures.
Williams, who has been teaching since 2006, is also concerned that AI tools could decrease students’ cognitive abilities.
“So much of learning is being put into a situation that is cognitively challenging,” he said. “These tools, fundamentally, are built on relieving the burden of cognitive challenge.
“Especially in introductory courses, where students aren’t familiar with programming and you want them to try new things and experiment and explore, why would you give them this tool that completely removes those aspects that are fundamental to learning?” Williams said.
Williams is also worried that a rushed implementation of AI tools would sacrifice students’ and teachers’ privacy and use them as “experimental subjects in developing technologies for tech companies.”
Education leaders “have a tough job,” Williams said. He understands the pressure they feel around implementing AI, but he hopes they give it “critical thought.”
Decisionmakers need to be clear about what technology is being proposed, how they anticipate teachers and students using it, what the goal of its use is, and why they think it’s a good technology to teach students how to use, Williams said.
“If somebody has a good answer for that, I’m very happy to hear proposals on how to incorporate these things in a healthy, safe way,” he said.
Educators shouldn’t fall for the ‘fallacy’ that AI is inevitable
Elizabeth Bacon, a middle school computer science teacher in California, hasn’t found any use cases for generative AI tools that she feels would benefit her work.
“I would rather do my own lesson plan,” said Bacon, who has been teaching for more than 20 years. “I have an idea of what I want the students to learn, of what’s interesting to them, and where they are and the entry points for them to engage in it.”
Teachers face a lot of pressure to do more with less. That’s why Bacon said she doesn’t judge other teachers who want to use AI to get the job done. It’s “a systemic problem,” but teaching and learning shouldn’t be replaced by machines, she said.
Bacon believes it’s “particularly dangerous” for middle school students to be using “a machine emulating a person.” Students are still developing their character, their empathy, their ability to socialize with peers and work collectively toward a goal, she said, and a chatbot would undermine that.
She can foresee using generative AI tools to explain to her students what large language models are. It’s important for them to learn about generative AI, that it’s a statistical model predicting the next likely word based on data it’s been trained on, that there’s no meaning [or feelings] behind it, Bacon said.
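The “next likely word” framing Bacon uses for her students can be made concrete in a few lines of code. The following is a minimal, illustrative sketch (an invented example, not any vendor’s system): a toy bigram model that counts which words follow which in a training text, then generates output by repeatedly sampling a statistically likely next word. Real large language models perform the same kind of next-token prediction, but with neural networks over enormous datasets and sub-word tokens rather than a simple lookup table.

```python
import random
from collections import Counter, defaultdict

# Toy training corpus; real LLMs are trained on vastly more text and
# predict sub-word tokens with neural networks, not lookup tables.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample a next word in proportion to how often it followed `prev`."""
    options = follows.get(prev)
    if not options:  # dead end: word was never seen with a successor
        return random.choice(corpus)
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one statistically likely word at a time.
text = ["the"]
for _ in range(6):
    text.append(next_word(text[-1]))
print(" ".join(text))
```

As Bacon notes, nothing in this process involves meaning or feeling; the model only reproduces statistical patterns in its training data.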
Last school year, she asked her high school students what they wanted to know about AI. Their answers: the technology’s social and environmental impacts.
Bacon doesn’t think educators should fall for the “fallacy” that AI is the inevitable future, she said, because the technology companies promoting that idea have an incentive to say so.
“Educators have basically been told, in a lot of ways, ‘don’t trust your own instincts about what’s right for your students, because [technology companies are] going to come in and tell you what’s going to be good for your students,’” she said.
It’s discouraging to see that a lot of the AI-related professional development events she’s attended have “essentially been AI evangelism” and “product marketing,” she said. There should be more thought about why this technology is necessary in K-12, she said.
Technology experts have talked up AI’s potential to increase productivity and efficiency. But as an educator, “efficiency is not one of my values,” Bacon said.
“My value is supporting students, meeting them where they are, taking the time it takes to connect with these students, taking the time that it takes to understand their needs,” she said. “As a society, we have to take a hard look: Do we value education? Do we value doing our own thinking?”
AI Research
University Of Utah Teams With HPE, NVIDIA To Boost AI Research

The University of Utah (the U) is planning to join forces with two powerhouse tech firms to accelerate research and discovery using artificial intelligence (AI). The agreement with Hewlett Packard Enterprise (HPE) and AI chipmaker NVIDIA will amplify the U’s capacity for understanding cancer, Alzheimer’s disease, mental health, and genetics. The initiative is projected to enable medical breakthroughs, drive innovation, and spur scientific discovery across disciplines.
“The U has a proud legacy of pioneering technological breakthroughs,” said Taylor Randall, president of the University of Utah. “Our goal is to make the state awash in computing power by building a robust AI ecosystem benefiting our entire system of higher education, driving research to find new cures, and igniting Utah’s entrepreneurial spirit.”
The partnership, which includes a $50 million investment from public and philanthropic sources, is projected to increase the U’s computing capacity 3.5-fold. The flagship school’s Board of Trustees gave preliminary approval to the proposed arrangement on September 9.
The arrangement paves the way for substantial advances in the computing and storage infrastructure required for Utah-based projects in AI and innovation. The goal is to lay the foundation for a scalable AI ecosystem available to researchers, learners, and entrepreneurs across Utah. The multi-year initiative would build on the U’s existing capabilities in AI, giving it access to substantially more computing power.
Brynn and Peter Huntsman, along with the Huntsman Family Foundation, will provide a lead philanthropic gift to the U intended to initiate the project and encourage other supporters to make the investments required to move the work forward. The funds will go toward AI “supercomputer” systems designed to handle enormous processing and storage needs; the university will seek the remaining money from the state of Utah and other sources.
“This AI initiative will accelerate world-class cancer research that enhances capabilities in ways we hardly imagined just a few years ago,” said Peter Huntsman, CEO and chairman, Huntsman Cancer Foundation. “Huntsman Cancer Foundation recently announced our commitment to support the expansion of the educational, research, and clinical care capacity of the world-renowned Huntsman Cancer Institute in Vineyard, Utah, which will serve as a hub for cancer AI research. These investments will speed discoveries and enhance the state of Utah’s leadership in AI education and economic opportunity.”
Mental health will be a major focus of the AI research endeavor.
“As the Huntsman Mental Health Institute opens its new 185,000-square-foot Translational Research Building this coming year, we’re looking forward to increasing momentum around mental health research, including the impact of this technology,” said Christena Huntsman Durham, Huntsman Mental Health Foundation CEO and co-chair. “We know so many people are struggling with mental health challenges; we’re thrilled we will be able to move even faster to get help to those who need it most.”