AI Insights
OpenAI says spending to rise to $115 billion through 2029: Information

OpenAI Inc. told investors it projects spending of $115 billion through 2029, about $80 billion more than previously expected, The Information reported, without detailing how or when shareholders were informed.
OpenAI is developing its own data center server chips and facilities to power its technologies, in an effort to control cloud server rental expenses, according to the report.
The company predicted it could spend more than $8 billion this year, roughly $1.5 billion more than an earlier projection, The Information said.
Another factor influencing the increased need for capital is computing costs, on which the company expects to spend more than $150 billion from 2025 through 2030.
The cost to develop AI models is also higher than previously expected, The Information said.
AI Insights
AI can be a great equalizer, but it remains out of reach for millions of Americans; the Universal Service Fund can expand access

In an age defined by digital transformation, access to reliable, high-speed internet is not a luxury; it is the bedrock of opportunity. It impacts the school classroom, the doctor’s office, the town square and the job market.
As we stand on the cusp of a workforce revolution driven by the “arrival technology” of artificial intelligence, high-speed internet access has become the critical determinant of our nation’s economic future. Yet, for millions of Americans, this essential connection remains out of reach.
This digital divide is a persistent crisis that deepens societal inequities, and we must rally around one of the most effective tools we have to combat it: the Universal Service Fund. The USF is a long-standing national commitment built on a foundation of bipartisan support and born from the principle that every American, regardless of their location or income, deserves access to communications services.
Without this essential program, over 54 million students, 16,000 healthcare providers and 7.5 million high-need subscribers would lose the service that connects classrooms, rural communities (including their hospitals) and libraries to the internet.
The discussion about the future of the USF has reached a critical juncture: which communities will have access to the fund, how it will be funded and whether equitable access to connectivity will continue to be a priority will soon be decided.
Earlier this year, the Supreme Court found the USF’s infrastructure to be constitutional — and a backbone for access and opportunity in this country. Congress recently took a significant next step by relaunching a bicameral, bipartisan working group devoted to overhauling the fund. The working group is now actively seeking input from stakeholders on how best to modernize this vital program for the future, and it needs our input.
I’m urging everyone who cares about digital equity to make their voices heard. The window for our input in support of this vital connectivity infrastructure is open through September 15.
While Universal Service may appear as only a small fee on our monthly phone bills, its impact is monumental. The fund powers critical programs that form a lifeline for our nation’s most vital institutions and vulnerable populations. The USF helps thousands of schools and libraries obtain affordable internet — including the school I founded in downtown Brooklyn. For students in rural towns, the E-Rate program, funded by the USF, allows access to the same online educational resources as those available to students in major cities. In schools all over the country, the USF helps foster digital literacy, supports coding clubs and enables students to complete homework online.
By wiring our classrooms and libraries, we are investing in the next generation of innovators.
The coming waves of technological change — including the widespread adoption of AI — threaten to make the digital divide an unbridgeable economic chasm. Those on the wrong side of this divide experienced profound disadvantages during the pandemic. To get connected, students at my school ended up doing homework in fast-food parking lots. Entire communities lost vital connections to knowledge and opportunity when libraries closed.
But that was just a preview of the digital struggle. This time, we have to fight to protect the future of this investment in our nation’s vital infrastructure to ensure that the rising wave of AI jobs, opportunities and tools is accessible to all.
AI is rapidly becoming a fundamental tool for the American workforce and in the classroom. AI tools require robust bandwidth to process data, connect to cloud platforms and function effectively.
The student of tomorrow will rely on AI as a personalized tutor that enhances teacher-led classroom instruction, explains complex concepts and supports their homework. AI will also power the future of work for farmers, mechanics and engineers.
Without access to AI, entire communities and segments of the workforce will be locked out. We will create a new class of “AI have-nots,” unable to leverage the technology designed to propel our economy forward.
The ability to participate in this new economy, to upskill and reskill for the jobs of tomorrow, is entirely dependent on the one thing the USF is designed to provide: reliable connectivity.
The USF is also critical for rural health care by supporting providers’ internet access and making telehealth available in many communities. It makes internet service affordable for low-income households through its Lifeline program and the Connect America Fund, which promotes the construction of broadband infrastructure in rural areas.
The USF is more than a funding mechanism; it is a statement of our values and a strategic economic necessity. It reflects our collective agreement that a child’s future shouldn’t be limited by their school’s internet connection, that a patient’s health outcome shouldn’t depend on their zip code and that every American worker deserves the ability to harness new technology for their career.
With Congress actively debating the future of the fund, now is the time to rally. We must engage in this process, call on our policymakers to champion a modernized and sustainably funded USF and recognize it not as a cost, but as an essential investment in a prosperous, competitive and flourishing America.
Erin Mote is the CEO and founder of InnovateEDU, a nonprofit that aims to catalyze education transformation by bridging gaps in data, policy, practice and research.
Contact the opinion editor at opinion@hechingerreport.org.
This story about the Universal Service Fund was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.
AI Insights
Examining the Evolving Landscape of Medical AI

I. Glenn Cohen discusses the risks and rewards of using artificial intelligence in health care.
In a discussion with The Regulatory Review, I. Glenn Cohen offers his thoughts on the regulatory landscape of medical artificial intelligence (AI), the evolving ways in which patients may encounter AI in the doctor’s office, and the risks and opportunities of a rapidly evolving technological landscape.
The use of AI in the medical field poses new challenges and tremendous potential for scientific and technological advancement. Cohen highlights how AI is increasingly integrated into health care through tools such as ambient scribing and speaks to some of the ethical concerns around data bias, patient privacy, and gaps in regulatory oversight, especially for underrepresented populations and institutions lacking resources. He surveys several of the emerging approaches to liability for the use of medical AI and weighs the benefits and risks of permitting states to create their own AI regulations in the absence of federal oversight. Despite the challenges facing regulators and clinicians looking for ways to leverage these new technologies, Professor Cohen is optimistic about AI’s potential to expand access to care and improve health care quality.
A leading expert on bioethics and the law, Cohen is the James A. Attwood and Leslie Williams Professor of Law at Harvard Law School. He is an elected member of the National Academy of Medicine. He has addressed the Organisation for Economic Co-operation and Development, members of the U.S. Congress, and the National Assembly of the Republic of Korea on medical AI policy, as well as the North Atlantic Treaty Organization on biotechnology and human advancement. He has provided bioethical advising and consulting to major health care companies.
The Regulatory Review is pleased to share the following interview with I. Glenn Cohen.
The Regulatory Review: In what ways is the average patient today most likely to encounter artificial intelligence (AI) in the health care setting?
Cohen: Part of it will depend on what we mean by “AI.” In a sense, using Google Maps to get to the hospital is the most common use, but that’s probably not what you have in mind. I think one very common use we are already seeing deployed in many hospitals is ambient listening or ambient scribing. I wrote an article on that a few months ago with some colleagues. Inbox management—drafting initial responses to patient queries that physicians are meant to look over—is another way that patients may encounter AI soon. Finally, in terms of more direct usage in clinical care, AI involvement in radiology is one of the more typical use cases. I do want to highlight your use of “encounter,” which is importantly ambiguous between “knowingly” or “unknowingly” encounter. As I noted several years ago, patients may never be told about AI’s involvement in their care. That is even more true today.
TRR: Are some patient populations more likely to encounter or benefit from AI than others?
Cohen: Yes. There are a couple of ethically salient ways to press this point. First, because of contextual bias, those who are closer demographically or in other ways to the training data sets are more likely to benefit from AI. I often note that, as a middle-aged Caucasian man living in Boston, I am well-represented in most training data sets in a way that, say, a Filipino-American woman living in rural Arkansas may not be. There are many other forms of bias, but this form of missing data bias is pretty straightforward as a barrier to receiving the benefits from AI.
Second, we have to follow the money. Absent charitable investment, what gets built depends on what gets paid for. That may mean, to use the locution of my friend and co-author W. Nicholson Price II, that AI is directed primarily toward “pushing frontiers”—making excellent clinicians in the United States even better—rather than “democratizing expertise”—taking pretty mediocre physician skills and scaling access to them via AI to improve access across the world and in parts of the United States without good access to healthcare.
Third, ethically and safely implementing AI requires significant evaluation, which requires expertise and imposes costs. Unless there are good clearinghouses for expertise or other interventions, this evaluation is something that leading academic medical centers can do, but many other kinds of facilities cannot.
TRR: What risks does the use of AI in the medical context pose to patient privacy? How should regulators address such challenges?
Cohen: Privacy definitely can be put at risk by AI. There are a couple of ways that come to mind. One is just the propensity to share information that AI invites. Take, for example, large language models such as ChatGPT. If you are a hospital system getting access for your clinicians, you are going to want to get a sandboxed instance that does not share queries back to OpenAI. Otherwise, there is a concern you may have transmitted protected information in violation of the Health Insurance Portability and Accountability Act (HIPAA), as well as your ethical obligations of confidentiality. But if the hospital system makes it too cumbersome to access the LLM, your clinicians are going to start using their phones to access it, and there goes your HIPAA protections. I do not want to make it sound like this is a problem unique to medical AI. In one of my favorite studies—now a bit dated—someone rode in elevators at a hospital and recorded the number of privacy and other violations.
A different problem posed by AI in general is that it worsens a problem I sometimes call data triangulation: the ability to reidentify users by stitching together our presence in multiple data sets, even if we are not directly identified in some of the sensitive data sets. I have discussed this issue in an article, where I include a good illustrative real-life example involving Netflix.
As for solutions, although I think there is space for improving HIPAA—a topic I have discussed along with the sharing of data with hospitals—I have not written specifically about AI privacy legislation in any great depth.
TRR: What are some emerging best practices for mitigating the negative effects of bias in the development and use of medical AI?
Cohen: I think the key starting point is to be able to identify biases. Missing data bias is a pretty obvious one to spot, though it is often hard to fix if you do not have resources to try to diversify the population represented in your data set. Even if you can diversify, some communities might be understandably wary of sharing information. But there are also many harder-to-spot biases.
For example, measurement or classification bias occurs when practitioner bias is translated into what is in the data set. In practice, this may look like women being less likely than men to receive lipid-lowering medications and procedures in the hospital, despite being more likely to present with hypertension and heart failure. Label bias is particularly easy to overlook, and it occurs when the outcome variable is differentially ascertained or has a different meaning across groups. A paper published in Science by Ziad Obermeyer and several coauthors has justifiably become the locus classicus example.
A lot of the problem is in thinking very hard at the front end about design and what is feasible given the data and expertise you have. But that is no substitute for auditing on the back end because even very well-intentioned designs may prove to lead to biased results on the back end. I often recommend a paper by Lama H. Nazer and several coauthors, published in PLOS Digital Health, to folks as a summary of the different kinds of bias.
All that said, I often finish talks by saying, “If you have listened carefully, you have learned that medical AI often makes errors, is bad at explaining how it is reaching its conclusion and is a little bit racist. The same is true of your physician, though. The real question is what combination of the two might improve on those dimensions we care about and how to evaluate it.”
TRR: You have written about the limited scope of the U.S. Food and Drug Administration (FDA) in regulating AI in the medical context. What health-related uses of AI currently fall outside of the FDA’s regulatory authority?
Cohen: Most is the short answer. I would recommend a paper written by my former post-doc and frequent coauthor, Sara Gerke, which does a nice job of walking through it. But the punchline is: if you are expecting medical AI to have been FDA-reviewed, your expectations are almost always going to be disappointed.
TRR: What risks, if any, are associated with the current gaps in FDA oversight of AI?
Cohen: The FDA framework for drugs is aimed at showing safety and efficacy. With devices, the way that review is graded by device classes means that some devices skirt by because they can show a predicate device—in an AI context, sometimes quite unrelated—or because they are classified as general wellness products rather than devices. Then there is the stuff that FDA never sees—most of it. For all these products, there are open questions about safety and efficacy. All that said, some would argue that the FDA premarket approval process is a bad fit for medical AI. These critics may defend FDA’s lack of review by comparing it to areas such as innovation in surgical techniques or medical practices, where FDA largely does not regulate the practice of medicine. Instead, we rely on licensure of physicians and tort law to do a lot of the work, as well as on in-house review processes. My own instinct as to when to be worried—to give a lawyerly answer—is it depends. Among other things, it depends on what non-FDA indicia of quality we have, what is understood by the relevant adopters about how the AI works, what populations it does or does not work for, what is tracked or audited, what the risk level in the worst-case scenario looks like, and who, if anyone, is doing the reviewing.
TRR: You have written in the past about medical liability for harms caused to patients by faulty AI. In the current technological and legal landscape, who should be liable for these injuries?
Cohen: Another lawyerly answer: it’s complicated, and the answer will be different for different kinds of AI. Physicians are ultimately responsible for a medical decision, and there is a school of thought that treats AI as just another tool, such as an MRI machine, and suggests that physicians are responsible even if the AI is faulty.
The reality is that few reported cases have succeeded against physicians for a myriad of reasons detailed in a paper published last year by Michelle M. Mello and Neel Guha. W. Nicholson Price II and I have focused on two other legs of the stool in the paper you asked about: hospital systems and developers. In general, and this may be more understandable given that tort liability for hospital systems is not all that common, it seems to me that most policy analyses place too little emphasis on the hospital system as a potential locus of responsibility. We suggest “the application of enterprise liability to hospitals—making them broadly liable for negligent injuries occurring within the hospital system—with an important caveat: hospitals must have access to the information needed for adaptation and monitoring. If that information is unavailable, we suggest that liability should shift from hospitals to the developers keeping information secret.”
Elsewhere, I have also mused as to whether this is a good space for traditional tort law at all and whether instead we ought to have something more like the compensation schemes we see for vaccine injuries or workers’ compensation. In those schemes, we would have makers of AI pay into a fund that could pay for injuries without showing fault. Given the cost and complexity of proving negligence and causation in cases involving medical AI, this might be desirable.
TRR: The U.S. Senate rejected adding a provision to the recently passed “megalaw” that would have set a 10-year moratorium on any state enforcing a law or regulation affecting “artificial intelligence models,” “artificial intelligence systems,” or “automated decision systems.” What are some of the pros and cons of permitting states to develop their own AI regulations?
Cohen: This is something I have not written about, so I am shooting from the hip here. Please take it with an even larger grain of salt than what I have said already. The biggest con to state regulation is that it is much harder for an AI maker to develop something subject to differential standards or rules in different states. One can imagine the equivalent of impossibility-preemption type effects: state X says do this, state Y says do the opposite. But even short of that, it will be difficult to design a product to be used nationally if there are substantial variations in the standards of liability.
On the flip side, this is a feature of tort law and choice of law rules for all products, so why should AI be so different? And unlike physical goods that ship in interstate commerce, it is much easier to geolocate and either alter or disable AI running in states with different rules if you want to avoid liability.
On the pro side for state legislation, if you are skeptical that the federal government is going to be able to do anything in this space—or anything you like, at least—due to the usual pathologies of Congress, plus lobbying from AI firms, action by individual states might be attractive. States have innovated in the privacy space. The California Consumer Privacy Act is a good example. For state-based AI regulation, maybe there is a world where states fulfill the Brandeisian ideal of laboratories of experimentation that can be used to develop federal law.
Of course, a lot of this will depend on your prior beliefs about federalism. People often speak about the “Brussels Effect,” referring to the effects of the General Data Protection Regulation on non-European privacy practices. If a state the size of California were to pass legislation with very clear rules that differ from what companies do now, we might see a similar California effect, with companies conforming nationwide to these standards. This is particularly true given that much of U.S. AI development is centered in California. One’s views about whether that is good or bad depend not only on the content of those rules but also on one’s views of what American federalism should look like.
TRR: Overall, what worries you most about the use of AI in the medical context? And what excites you the most?
Cohen: There is a lot that worries me, but the incentives are number one. What gets built is a function of what gets paid for. We may be giving up on some of what has the highest ethical value, the democratization of expertise and improving access, for lack of a business model that supports it. Government may be able to step in to some extent as a funder or for reimbursement, but I am not that optimistic.
Although your questions have led me to the worry side of the house, I am actually pretty excited. Much of what is done in medicine is unanalyzed, or at least not rigorously so. Even the very best clinicians have limited experience, and even if they read the leading journals, go to conferences, and complete other standard means of continuing education for physicians, the amount of information they can synthesize is orders of magnitude smaller than that of AI. AI may also allow scaling of the delivery of some services in a way that can serve underrepresented people in places where providers are scarce.
AI Insights
AI and machine learning for engineering design | MIT News

Artificial intelligence optimization offers a host of benefits for mechanical engineers, including faster and more accurate designs and simulations, improved efficiency, reduced development costs through process automation, and enhanced predictive maintenance and quality control.
“When people think about mechanical engineering, they’re thinking about basic mechanical tools like hammers and … hardware like cars, robots, cranes, but mechanical engineering is very broad,” says Faez Ahmed, the Doherty Chair in Ocean Utilization and associate professor of mechanical engineering at MIT. “Within mechanical engineering, machine learning, AI, and optimization are playing a big role.”
In Ahmed’s course, 2.155/156 (AI and Machine Learning for Engineering Design), students use tools and techniques from artificial intelligence and machine learning for mechanical engineering design, focusing on the creation of new products and addressing engineering design challenges.
Video: “Cat Trees to Motion Capture: AI and ML for Engineering Design” (MIT Department of Mechanical Engineering)
“There’s a lot of reason for mechanical engineers to think about machine learning and AI to essentially expedite the design process,” says Lyle Regenwetter, a teaching assistant for the course and a PhD candidate in Ahmed’s Design Computation and Digital Engineering Lab (DeCoDE), where research focuses on developing new machine learning and optimization methods to study complex engineering design problems.
First offered in 2021, the class has quickly become one of the Department of Mechanical Engineering (MechE)’s most popular non-core offerings, attracting students from departments across the Institute, including mechanical and civil and environmental engineering, aeronautics and astronautics, the MIT Sloan School of Management, and nuclear and computer science, along with cross-registered students from Harvard University and other schools.
The course, which is open to both undergraduate and graduate students, focuses on the implementation of advanced machine learning and optimization strategies in the context of real-world mechanical design problems. From designing bike frames to city grids, students participate in contests related to AI for physical systems and tackle optimization challenges in a class environment fueled by friendly competition.
Students are given challenge problems and starter code that “gave a solution, but [not] the best solution …” explains Ilan Moyer, a graduate student in MechE. “Our task was to [determine], how can we do better?” Live leaderboards encourage students to continually refine their methods.
Em Lauber, a system design and management graduate student, says the process gave space to explore the application of what students were learning and to practice the skill of “literally how to code it.”
The curriculum incorporates discussions on research papers, and students also pursue hands-on exercises in machine learning tailored to specific engineering issues including robotics, aircraft, structures, and metamaterials. For their final project, students work together on a team project that employs AI techniques for design on a complex problem of their choice.
“It is wonderful to see the diverse breadth and high quality of class projects,” says Ahmed. “Student projects from this course often lead to research publications, and have even led to awards.” He cites the example of a recent paper, titled “GenCAD-Self-Repairing,” that went on to win the American Society of Mechanical Engineers Systems Engineering, Information and Knowledge Management 2025 Best Paper Award.
“The best part about the final project was that it gave every student the opportunity to apply what they’ve learned in the class to an area that interests them a lot,” says Malia Smith, a graduate student in MechE. Her project used “markered motion captured data” to predict ground force for runners, an effort she called “really gratifying” because it worked so much better than expected.
Lauber took the framework of a “cat tree” design with different modules of poles, platforms, and ramps to create customized solutions for individual cat households, while Moyer created software that designs a new type of 3D printer architecture.
“When you see machine learning in popular culture, it’s very abstracted, and you have the sense that there’s something very complicated going on,” says Moyer. “This class has opened the curtains.”