AI Research

Artificial intelligence is here. Will it replace teachers? – ABC News

Three Reasons Why Universities are Crucial for Understanding AI



Artificial intelligence is already transforming almost every aspect of human work and life: It can perform surgery, write code, and even make art. While it is a powerful tool, no one fully understands how AI learns or reasons—not even the companies developing it.

This is where the academic mission to conduct open, scientific research can make a real difference, says Surya Ganguli. The Stanford physicist is leading “The Physics of Learning and Neural Computation,” a collaborative project recently launched by the Simons Foundation that brings together physicists, computer scientists, mathematicians, and neuroscientists to help break AI out of its proverbial “black box.” 

Surya Ganguli will oversee a collaboration called The Physics of Learning and Neural Computation.

“We need to bring the power of our best theoretical ideas from many fields to confront the challenge of scientifically understanding one of the most important technologies to have appeared in decades,” said Ganguli, associate professor of applied physics in Stanford’s School of Humanities and Sciences. “For something that’s of such societal importance, we have got to do it in academia, where we can share what we learn openly with the world.”

There are many compelling reasons why this work needs to be done by universities, says Ganguli, who is also a senior fellow at the Stanford Institute for Human-Centered AI. Here are three: 

Improving Scientific Understanding

The companies on the frontier of AI technology are focused on improving performance, not necessarily on gaining a complete scientific understanding of how the technology works, Ganguli contends.

“It’s imperative that the science catches up with the engineering,” he said. “The engineering of AI is way ahead, so we need a concerted, all-hands-on-deck approach to advance the science.”

AI systems are developed very differently from something like a car, which has physical parts that are explicitly designed and rigorously tested. AI neural networks are inspired by the human brain, with a multitude of connections that are then implicitly trained using data.

Ganguli likens that training to human learning: We educate children by giving them information and correcting them when they are wrong. We know when a child learns a word like cat or a concept like generosity, but we do not know explicitly what happens in the brain to acquire that knowledge.

The same is true of AI, yet AI makes strange mistakes that a human would never make. Researchers believe it is critical to understand why, for both practical and ethical reasons.

“AI systems are derived in a very implicit way, but it’s not clear that we’re baking in the same empathy and caring for humanity that we do in our children,” Ganguli said. “We try a lot of ad hoc stuff to bake human values into these large language models, but it’s not clear that we’ve figured out the best way to do it.”

Physics Can Tackle AI’s Complexity

Traditionally, the field of physics has focused on studying complex natural systems. While AI has "artificial" in its very name, its complexity lends itself well to physics, which has increasingly expanded beyond its historical boundaries to branch into many other fields, including biology and neuroscience.

Physicists have a lot of experience working with high-dimensional systems, Ganguli pointed out. For example, some physicists study materials with many billions of interacting particles governed by complex, dynamic laws that influence their collective behavior and give rise to surprising, "emergent" properties—new characteristics that arise from the interactions but are not present in the individual particles themselves.

AI is similar, with many billions of weights that change constantly during training, and the project's main goal is to better understand this process. Specifically, the researchers want to know how learning dynamics, training data, and the architecture of an AI system interact to produce emergent computations such as AI creativity and reasoning, the origins of which are not currently understood. Once this interaction is uncovered, it will likely be easier to control the process by choosing the right data for a given problem.

It might also be possible to create smaller, more efficient networks that can do more with fewer connections, said project member Eva Silverstein, professor of physics in H&S.  

“It’s not that the extra connections necessarily cause a problem. It’s more that they’re expensive,” she said. “Sometimes they can be pruned after training, but you have to understand a lot about the system—learning and reasoning dynamics, structure of data, and architecture—in order to be able to predict in advance how it’s going to work.”

Ganguli and Silverstein are two of the 17 principal investigators representing 12 universities on the Simons Foundation project. Ganguli hopes to expand participation further, ultimately bringing a new generation of physicists into the AI field. The collaboration will be holding workshops and summer school sessions to build the scientific community. 

Academic Findings Are Shared

Everything that comes out of this collaboration will be shared, with findings vetted and published in peer-reviewed journals. In contrast, companies that need to develop their AI products with the goal of delivering economic returns have little incentive, and no obligation, to share information with others. 

“We need to do open science because walls of secrecy are being erected around these frontier AI companies,” Ganguli said. “I really love being at the university, where our very mission is to share what we learn with the world.”

This story was first published by the Stanford School of Humanities and Sciences.




How colleges are preparing students for AI in the workplace – MPR News





Attorneys general warn OpenAI ‘harm to children will not be tolerated’



California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings met with and sent an open letter to OpenAI to express their concerns over the safety of ChatGPT, particularly for children and teens. 

The warning comes a week after Bonta and 44 other attorneys general sent a letter to 12 of the top AI companies, following reports of sexually inappropriate interactions between AI chatbots and children. 

“Since the issuance of that letter, we learned of the heartbreaking death by suicide of one young Californian after he had prolonged interactions with an OpenAI chatbot, as well as a similarly disturbing murder-suicide in Connecticut,” Bonta and Jennings write. “Whatever safeguards were in place did not work.”

The two state officials are currently investigating OpenAI’s proposed restructuring into a for-profit entity to ensure that the mission of the nonprofit remains intact. That mission “includes ensuring that artificial intelligence is deployed safely” and building artificial general intelligence (AGI) to benefit all humanity, “including children,” per the letter. 

“Before we get to benefiting, we need to ensure that adequate safety measures are in place to not harm,” the letter continues. “It is our shared view that OpenAI and the industry at large are not where they need to be in ensuring safety in AI products’ development and deployment. As Attorneys General, public safety is one of our core missions. As we continue our dialogue related to OpenAI’s recapitalization plan, we must work to accelerate and amplify safety as a governing force in the future of this powerful technology.”

Bonta and Jennings have asked for more information about OpenAI’s current safety precautions and governance, and said they expect the company to take immediate remedial measures where appropriate.

TechCrunch has reached out to OpenAI for comment.



