

Three Reasons Why Universities are Crucial for Understanding AI

Artificial intelligence is already transforming almost every aspect of human work and life: It can help perform surgery, write code, and even make art. While it is a powerful tool, no one fully understands how AI learns or reasons—not even the companies developing it.

This is where the academic mission to conduct open, scientific research can make a real difference, says Surya Ganguli. The Stanford physicist is leading “The Physics of Learning and Neural Computation,” a collaborative project recently launched by the Simons Foundation that brings together physicists, computer scientists, mathematicians, and neuroscientists to help break AI out of its proverbial “black box.” 

Surya Ganguli will oversee a collaboration called The Physics of Learning and Neural Computation.

“We need to bring the power of our best theoretical ideas from many fields to confront the challenge of scientifically understanding one of the most important technologies to have appeared in decades,” said Ganguli, associate professor of applied physics in Stanford’s School of Humanities and Sciences. “For something that’s of such societal importance, we have got to do it in academia, where we can share what we learn openly with the world.”

There are many compelling reasons why this work needs to be done by universities, says Ganguli, who is also a senior fellow at the Stanford Institute for Human-Centered AI. Here are three: 

Improving Scientific Understanding

The companies at the frontier of AI technology are focused on improving performance rather than on building a complete scientific understanding of how the technology works, Ganguli contends.

“It’s imperative that the science catches up with the engineering,” he said. “The engineering of AI is way ahead, so we need a concerted, all-hands-on-deck approach to advance the science.”

AI systems are developed very differently from something like a car, whose physical parts are explicitly designed and rigorously tested. AI neural networks are inspired by the human brain, with a multitude of connections, and those connections are trained implicitly using data.
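
To make “implicit training” concrete, here is a toy sketch (purely illustrative, not code from the collaboration): a tiny network whose randomly initialized connection weights are repeatedly nudged against examples until the desired behavior emerges, with no one ever specifying what any individual weight should mean.

```python
# Toy illustration of "implicit training" (not the collaboration's code):
# a tiny network learns the XOR function purely from data.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # target outputs

# Connection strengths start random; training shapes them implicitly.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)       # forward pass: the network's current guess
    p = sigmoid(h @ W2 + b2)
    grad_out = p - y               # how wrong each guess is
    grad_h = (grad_out @ W2.T) * (1 - h**2)
    # Nudge every weight slightly to reduce the error (gradient descent).
    W2 -= 0.1 * h.T @ grad_out
    b2 -= 0.1 * grad_out.sum(axis=0)
    W1 -= 0.1 * X.T @ grad_h
    b1 -= 0.1 * grad_h.sum(axis=0)

print(np.round(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2), 2))  # ≈ 0, 1, 1, 0
```

Even in this four-example problem, the final weights have no individual interpretation; scaled up to billions of weights, that opacity is the “black box” the collaboration aims to open.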

Ganguli likens that training to human learning: We educate children by giving them information and correcting them when they are wrong. We know when a child learns a word like cat or a concept like generosity, but we do not know explicitly what happens in the brain as it acquires that knowledge.

The same is true of AI, yet it makes strange mistakes that a human would never make. Researchers believe it is critical to understand why, for both practical and ethical reasons.

“AI systems are derived in a very implicit way, but it’s not clear that we’re baking in the same empathy and caring for humanity that we do in our children,” Ganguli said. “We try a lot of ad hoc stuff to bake human values into these large language models, but it’s not clear that we’ve figured out the best way to do it.”

Physics Can Tackle AI’s Complexity

Traditionally, the field of physics has focused on studying complex natural systems. While AI has “artificial” in its very name, its complexity lends itself well to physics, which has increasingly expanded beyond its historical boundaries into many other fields, including biology and neuroscience.

Physicists have a lot of experience working with high-dimensional systems, Ganguli pointed out. For example, some physicists study materials with many billions of interacting particles, whose complex, dynamic laws influence their collective behavior and give rise to surprising “emergent” properties—new characteristics that arise from the interactions but are not present in the individual particles themselves.

AI is similar, with many billions of weights that constantly change during training, and the project’s main goal is to better understand this process. Specifically, the researchers want to know how learning dynamics, training data, and the architecture of an AI system interact to produce emergent computations such as AI creativity and reasoning, whose origins are not currently understood. Once this interaction is uncovered, it should be easier to control the process by choosing the right data for a given problem.

It might also be possible to create smaller, more efficient networks that can do more with fewer connections, said project member Eva Silverstein, professor of physics in the School of Humanities and Sciences.

“It’s not that the extra connections necessarily cause a problem. It’s more that they’re expensive,” she said. “Sometimes they can be pruned after training, but you have to understand a lot about the system—learning and reasoning dynamics, structure of data, and architecture—in order to be able to predict in advance how it’s going to work.”
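
The post-training pruning Silverstein mentions can be sketched in a few lines. Below is a minimal example of magnitude pruning, one standard technique (an illustrative assumption, not necessarily the collaboration’s approach): connections whose learned weights sit closest to zero are simply removed. Her point is that predicting in advance which connections will matter requires deeper theory.

```python
# Minimal sketch of magnitude pruning (an illustrative assumption, not the
# collaboration's method): after training, drop the weakest connections.
import numpy as np

def prune_smallest(weights: np.ndarray, fraction: float) -> np.ndarray:
    """Zero out the given fraction of weights with the smallest magnitudes."""
    cutoff = np.quantile(np.abs(weights), fraction)
    return np.where(np.abs(weights) >= cutoff, weights, 0.0)

# Stand-in for a trained weight matrix.
trained = np.random.default_rng(1).normal(size=(512, 512))
sparse = prune_smallest(trained, fraction=0.9)  # keep the strongest 10%
print(f"{(sparse == 0).mean():.0%} of connections removed")
```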

Ganguli and Silverstein are two of the 17 principal investigators representing 12 universities on the Simons Foundation project. Ganguli hopes to expand participation further, ultimately bringing a new generation of physicists into the AI field. The collaboration will be holding workshops and summer school sessions to build the scientific community. 

Academic Findings Are Shared

Everything that comes out of this collaboration will be shared, with findings vetted and published in peer-reviewed journals. In contrast, companies developing AI products to deliver economic returns have little incentive, and no obligation, to share what they learn with others.

“We need to do open science because walls of secrecy are being erected around these frontier AI companies,” Ganguli said. “I really love being at the university, where our very mission is to share what we learn with the world.”

This story was first published by the Stanford School of Humanities and Sciences.





Anthropic’s $1.5-billion settlement signals new era for AI and artists

Chatbot builder Anthropic agreed to pay $1.5 billion to authors in a landmark copyright settlement that could redefine how artificial intelligence companies compensate creators.

The San Francisco-based startup is ready to pay authors and publishers to settle a lawsuit that accused the company of illegally using their work to train its chatbot.

Anthropic developed an AI assistant named Claude that can generate text, code and more. Writers, artists and other creative professionals have raised concerns that Anthropic and other tech companies are using their work to train AI systems without permission and without fair compensation.

As part of the settlement, which the judge still needs to approve, Anthropic agreed to pay authors $3,000 per work for an estimated 500,000 books. It is the largest known settlement in a copyright case, signaling to other tech companies facing copyright infringement allegations that they, too, may eventually have to pay rights holders.
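
The headline figures are consistent: at $3,000 per work across an estimated 500,000 books,

$$500{,}000 \times \$3{,}000 = \$1{,}500{,}000{,}000 = \$1.5\ \text{billion}.$$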

Meta and OpenAI, the maker of ChatGPT, have also been sued over alleged copyright infringement. Walt Disney Co. and Universal Pictures have sued AI company Midjourney, which the studios allege trained its image generation models on their copyrighted materials.

“It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners,” said Justin Nelson, a lawyer for the authors, in a statement. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”

Last year, authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson sued Anthropic, alleging that the company committed “large-scale theft” and trained its chatbot on pirated copies of copyrighted books.

U.S. District Judge William Alsup of San Francisco ruled in June that Anthropic’s use of the books to train the AI models constituted “fair use,” so it wasn’t illegal. But the judge also ruled that the startup had improperly downloaded millions of books through online libraries.

Fair use is a legal doctrine in U.S. copyright law that allows for the limited use of copyrighted materials without permission in certain cases, such as teaching, criticism and news reporting. AI companies have pointed to that doctrine as a defense when sued over alleged copyright violations.

Anthropic, founded by former OpenAI employees and backed by Amazon, pirated at least 7 million books from Books3, Library Genesis and Pirate Library Mirror, online libraries containing unauthorized copies of copyrighted books, to train its software, according to the judge.

Anthropic also bought millions of print copies in bulk, stripped the books’ bindings, cut their pages and scanned them into digital, machine-readable forms, a practice that Alsup’s ruling found to be within the bounds of fair use.

In a subsequent order, Alsup pointed to potential damages for the copyright owners of books downloaded from the shadow libraries LibGen and PiLiMi by Anthropic.

Although the award is massive and unprecedented, it could have been much worse. Some calculations suggest that if Anthropic had been charged the maximum penalty for each of the millions of works it used to train its AI, the bill could have exceeded $1 trillion.
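
One way to arrive at that figure: U.S. statutory damages for willful copyright infringement can reach $150,000 per work, and applied to the roughly 7 million pirated books cited by the judge,

$$7{,}000{,}000 \times \$150{,}000 = \$1.05\ \text{trillion}.$$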

Anthropic disagreed with the ruling and didn’t admit wrongdoing.

“Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims,” said Aparna Sridhar, deputy general counsel for Anthropic, in a statement. “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.”

The Anthropic dispute is one of many cases in which authors, artists and other content creators are challenging the companies behind generative AI to compensate them for the use of their content to train AI systems.

Training involves feeding enormous quantities of data — including social media posts, photos, music, computer code, video and more — to AI models so they can discern patterns of language, images, sound and conversation that they can then mimic.

Some tech companies have prevailed in copyright lawsuits filed against them.

In June, a judge dismissed a lawsuit authors filed against Facebook parent company Meta, which also developed an AI assistant, alleging that the company stole their work to train its AI systems. U.S. District Judge Vince Chhabria noted that the lawsuit was tossed because the plaintiffs “made the wrong arguments,” but the ruling didn’t “stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”

Trade groups representing publishers praised the Anthropic settlement on Friday, noting that it sends a strong signal to tech companies developing powerful artificial intelligence tools.

“Beyond the monetary terms, the proposed settlement provides enormous value in sending the message that Artificial Intelligence companies cannot unlawfully acquire content from shadow libraries or other pirate sources as the building blocks for their models,” said Maria Pallante, president and chief executive of the Association of American Publishers, in a statement.

The Associated Press contributed to this report.





Palantir CEO Alex Karp says U.S. labor workers won’t lose their jobs to AI—‘it’s not true’

As fears swirl that American manufacturing workers and skilled laborers may soon be replaced by artificial intelligence and robots, Alex Karp, CEO of the AI and data analytics software company Palantir Technologies, hopes to change the narrative. 

“It’s not true, and in fact, it’s kind of the opposite,” Karp said in an interview with Fortune on Thursday at AIPCon, the company’s commercial customer conference held at George Lucas’ Skywalker Ranch in Marin County, Calif., where Palantir customers showcased how they were using the company’s software platform and generative AI within their own businesses.

The primary danger of AI in this country, says Karp, is that workers don’t understand that AI will actually help them in their roles rather than replace them. “Silicon Valley’s done an immensely crappy job of explaining that,” he said. “If you’re in manufacturing, in any capacity: You’re on the assembly line, you maintain a complicated machine—you have any kind of skilled labor job—the way we do AI will actually make your job more valuable and make you more valuable. But currently you would think—just roaming around the country, and if you listen to the AI narratives coming out of Silicon Valley—that all these people are going to lose their jobs tomorrow.”

Karp made these comments the day before the Bureau of Labor Statistics released its August jobs report, which showed a climbing unemployment rate and stagnating hiring figures, reigniting debate over whether AI bears any responsibility for the broader slowdown. Data so far offers limited evidence that generative AI is to blame for the slowing jobs market, or for job cuts for that matter, though a recent ADP hiring report offered a rare suggestion that AI may be one of several factors influencing hiring sentiment. Some executives, including Salesforce’s Marc Benioff, have cited the efficiency gains of AI in explaining layoffs at their companies, and others, like Ford CEO Jim Farley and Amazon CEO Andy Jassy, have made lofty predictions about how AI is on track to replace jobs in the future. Most of these projections have centered on white-collar roles rather than manufacturing or skilled labor positions.

Karp, who has a PhD in neoclassical social theory and a reputation for being outspoken and contrarian on many issues, argues that fears of AI eliminating skilled labor jobs are unfounded—and he’s committed to “correcting” the public perception. 

Earlier this week, Palantir launched “Working Intelligence: The AI Optimism Project,” a quasi-public information and marketing campaign centered around artificial intelligence in the workplace. The project has begun with a series of short blog posts featuring Palantir’s customers and their opinions on AI, as well as a “manifesto” that takes aim at both the “doomers” and “pacifiers” of AI. “Doomers fear, and pacifiers welcome, a future of conformity: a world in which AI flattens human difference. Silicon Valley is already selling such bland, dumbed-down slop,” the manifesto declares, arguing that the true power of AI is not to standardize but to “supercharge” workers.

Jordan Hirsch, who is spearheading the new project at Palantir, said that there are approximately 20 people working on it and that they plan to launch a corresponding podcast.

While Palantir has an obvious commercial interest in dispelling public fears about AI, Karp framed his commitment to the project as something important for society. Fears about job replacement will “feed a kind of weird populism based on a notion that’s not true—that’s going to make the factions on the right and left much, much, much more powerful based on something that’s not true,” he said. “I think correcting that—but not just by saying platitudes, but actually showing how this works, is one of the most important things we have to get on top of.”

Karp said he planned to invest “lots of energy and money” into the AI Optimism Project. When asked how much money, he said he didn’t know yet, but that “we have a lot of money, and it’s one of my biggest priorities.” 

Palantir has seen enormous growth within the commercial side of its business in the last two years, largely due to the artificial intelligence product it released in 2023, called “AIP.” Palantir’s revenue surpassed $1 billion for the first time last quarter. And while Palantir only joined the S&P 500 last year, it now ranks as one of the most valuable companies in the world thanks to its soaring stock price.






Delaware Partnership to Build AI Skills in Students, Workers

Delaware has announced a partnership with OpenAI on its certification program, which aims to build AI skills in the state among students and workers alike.

The Diamond State’s officials have been exploring how to move forward responsibly with AI, establishing a generative AI policy this year to help inform safe use among public-sector employees; one official called the policy a “first step” toward informing employees about acceptable AI use. The Delaware Artificial Intelligence Commission also took action this year to advance a “sandbox” environment for testing new AI technologies, including agentic AI; the sandbox model has proven valuable for governments across the U.S., from San Jose to Utah.

The OpenAI Certification Program aims to address a common challenge for states: fostering AI literacy among workers and students. It builds on the OpenAI Academy, an open-to-all initiative launched to democratize knowledge about AI. The expansion will enable the company to offer certifications based on levels of AI fluency, from the basics to prompt engineering. The company has committed to certifying 10 million Americans by 2030.


“As a former teacher, I know how important it is to give our students every advantage,” Gov. Matt Meyer said in a statement. “As Governor, I know our economy depends on workers being ready for the jobs of the future, no matter their zip code.”

The partnership will start with early-stage programming across schools and workforce training programs in Delaware in an effort led by the state’s new Office of Workforce Development, which was created earlier this year. The office will work with schools, colleges and employers in coming months to identify pilot opportunities for this programming, to ensure that every community in the state has access.

Because the program is in its early stages and Delaware is among the first states to join, the state will help shape how certifications are rolled out at the community level, per the state’s announcement.

“We’ll obviously use AI to teach AI: anyone will be able to prepare for the certification in ChatGPT’s Study mode and become certified without leaving the app,” OpenAI’s CEO of Applications Fidji Simo said in an article.

This announcement comes on the heels of the federal AI Action Plan’s release. The plan, which among other provisions could limit states’ regulatory authority, aims to invest in skills training and AI literacy.

“By boosting AI literacy and investing in skills training, we’re equipping hardworking Americans with the tools they need to lead and succeed in this new era,” U.S. Secretary of Labor Lori Chavez-DeRemer said in a statement about the federal plan.

Delaware’s partnership with OpenAI for its certification program mirrors this goal, equipping Delawareans with the knowledge to use these tools — in the classroom, in their careers and beyond.

AI skills are a critical part of broader digital literacy efforts; today, “even basic digital skills include AI,” National Digital Inclusion Alliance Director Angela Siefer said earlier this summer.




