
AI Research

Indian American Penn students use machine learning to track and protect whales


As whales face harm from ship strikes, fishing net entanglements, and redistribution of prey due to changes in ocean temperature, it’s increasingly important to track their locations and populations across the globe. 

 

To help accelerate these efforts, two Indian American students at the University of Pennsylvania, Chinmay Govind and Nihar Ballamudi, dedicated their summer to a Penn Undergraduate Research Mentoring Program (PURM) project that combines mathematics, signal processing, animal behavior, and machine learning.

 

Their goal: Leverage whale sound data and artificial intelligence to map the locations of whales and determine how many live in any given target area. For this work, Govind and Ballamudi have used National Oceanic and Atmospheric Administration (NOAA) data from sound receivers north of Cape Cod Bay, though their research applies to any location, according to Penn Today.

 

Results from this endeavor could help the duo obtain “better data on how many whales are in an area or the distribution of whales in an area, which can inform policymakers and environmental groups on policies involving whales,” says Govind, a double major in artificial intelligence and computer engineering in the School of Engineering and Applied Science and originally from Mechanicsburg, Pennsylvania. “The findings of our research can extend not just to whales, but [also] other sea animals.”

 

PURM, offered by the Center for Undergraduate Research & Fellowships, immerses students finishing their first or second year at Penn in a 10-week summer research experience under the expert guidance of a faculty mentor.

 

“Math research isn’t really used that often outside of, you know, just math,” says Ballamudi, a mathematics major in the College of Arts & Sciences and computer science minor from Madison, Wisconsin. “It’s really cool for me to be able to work on a project that [uses math to] help influence what policy will look like if we can census whales.”

 

Each student led a portion of the PURM project: Govind focused on locating whales, and Ballamudi worked on censusing them. In this context, locating entails tracking and counting individual whales. Censusing, on the other hand, involves approximating the size and distribution of whale populations to more effectively monitor their movements.

 

To locate whales, Govind leveraged acoustic data from NOAA receivers—essentially underwater microphones—to estimate the origin points of whale calls. Each receiver detects the sound waves from a unique whale call at different times. Govind feeds the recorded audio data into a machine learning model to estimate “time difference of arrival,” which is then used to calculate the whale’s coordinates—similar to how mobile phones derive their locations using GPS.

 

“Time difference isolates sound to a specific curve,” Govind explains. “If you have more receivers—we use five—then you have enough data to pinpoint the whale’s position or generate a confidence interval for where the whale could be.”
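The article does not include the students’ code, but the underlying idea, time-difference-of-arrival (TDOA) multilateration, can be sketched briefly. In the sketch below, the receiver coordinates, the speed of sound, and the least-squares solver are illustrative assumptions rather than details from the Penn project.

```python
# Minimal TDOA multilateration sketch (not the students' actual code).
# Assumes 2-D receiver coordinates in meters and a constant speed of sound.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 1500.0  # m/s, a rough value for seawater

# Hypothetical positions of five receivers (x, y) in meters
receivers = np.array([
    [0.0, 0.0], [4000.0, 0.0], [0.0, 4000.0],
    [4000.0, 4000.0], [2000.0, 6000.0],
])

def tdoa_residuals(source_xy, receivers, tdoas):
    """Difference between measured TDOAs and those implied by a candidate source.

    TDOAs are taken relative to the first receiver: tdoas[i] is the arrival time
    at receiver i+1 minus the arrival time at receiver 0.
    """
    dists = np.linalg.norm(receivers - source_xy, axis=1)
    predicted = (dists[1:] - dists[0]) / SPEED_OF_SOUND
    return predicted - tdoas

def locate(receivers, tdoas, initial_guess=(2000.0, 2000.0)):
    """Least-squares estimate of a call's origin from TDOA measurements."""
    result = least_squares(tdoa_residuals, initial_guess, args=(receivers, tdoas))
    return result.x

# Self-check: synthesize TDOAs for a known source and recover its position
true_source = np.array([1200.0, 3100.0])
true_dists = np.linalg.norm(receivers - true_source, axis=1)
tdoas = (true_dists[1:] - true_dists[0]) / SPEED_OF_SOUND
print(locate(receivers, tdoas))  # approximately [1200., 3100.]
```

With five receivers the problem is overdetermined, which is what allows a fit like this not only to pinpoint a position but also to characterize the uncertainty Govind describes as a confidence interval.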

 

Using AI to optimize and refine acoustic data, Govind has been able to estimate the arrival times of whale calls with a “median error of 20 milliseconds.” This small timing uncertainty, he says, is more than sufficient for estimating whale locations.

 

Concurrently, Ballamudi used machine learning models and NOAA sound data to simulate sea environments and census whale populations. This AI-driven approach can be more effective than relying on data from physical receivers given the obstacles posed by ocean noise and multipath.

 

“We’ve sampled real ocean noise and generated signals according to literature regarding what whale signals will usually look like,” Ballamudi says. “Using that information, we’re able to generate as much data as we want.”
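As a rough illustration of what Ballamudi describes, the sketch below mixes a modeled call (a simple frequency-modulated upsweep, standing in for the literature-based signal shapes he mentions) into a clip of noise at a chosen signal-to-noise ratio. The sampling rate, call parameters, and stand-in noise are assumptions for illustration, not the project’s actual pipeline.

```python
# Sketch of synthesizing labeled training clips by mixing a modeled whale-call
# signal into sampled ocean noise. The upsweep parameters, sampling rate, and
# stand-in noise are illustrative assumptions, not the project's actual pipeline.
import numpy as np

SAMPLE_RATE = 2000  # Hz, assumed; low-frequency whale calls need only modest rates

def synthetic_upcall(duration=1.0, f_start=100.0, f_end=200.0):
    """A linear frequency-modulated upsweep standing in for a whale call."""
    t = np.arange(int(duration * SAMPLE_RATE)) / SAMPLE_RATE
    freq = f_start + (f_end - f_start) * t / duration
    phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE
    return np.sin(phase) * np.hanning(t.size)  # taper the edges

def make_clip(noise, snr_db=0.0, clip_len=4.0, rng=np.random.default_rng()):
    """Embed one synthetic call at a random offset inside a noise clip."""
    n = int(clip_len * SAMPLE_RATE)
    start = rng.integers(0, noise.size - n)
    clip = noise[start:start + n].copy()
    call = synthetic_upcall()
    offset = rng.integers(0, n - call.size)
    # Scale the call to the requested signal-to-noise ratio
    gain = np.sqrt(np.mean(clip ** 2) / np.mean(call ** 2) * 10 ** (snr_db / 10))
    clip[offset:offset + call.size] += gain * call
    return clip, offset  # waveform plus the ground-truth call position

# Usage with stand-in noise; a real pipeline would load recorded ocean noise
ocean_noise = np.random.default_rng(0).normal(size=60 * SAMPLE_RATE)
clip, label = make_clip(ocean_noise, snr_db=-5.0)
```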

 

This strategy allows Govind and Ballamudi to innovate as they learn about individual and collective whale behavior. During the PURM project, Ballamudi has accurately predicted the number and distribution of whales 90 to 95% of the time—a significant feat given the challenges involved in censusing whales.

 

The AI models used for this PURM project also continually optimize and improve their precision, showing promise for future steps the students could take with this research.

 

“It would be very nice if we could get a model to recognize multiple sources at the same time and be able to look at all of them in one shot,” Govind says, noting how current limitations allow them to locate only one whale at a time.

 

Once the pair can record the exact number of whales in a target range, Ballamudi says, they could leverage that data to retroactively pinpoint the precise location of each whale.

 

“We want to see if this approach will work no matter what—not just in our well-controlled software, but also in a world that has way more confounding variables than our software could ever account for,” Ballamudi says.

 

Toward the end of their PURM experience, Govind and Ballamudi were invited to present their results to U.S. Navy sponsors, highlight the implications for policymakers working to protect whales, and share opportunities for expanding upon this research.




AI Research

What It Means for State and Local Projects


To lead the world in the AI race, President Donald Trump says the U.S. will need to “triple” the amount of electricity it produces. At a cabinet meeting on Aug. 26, he made it clear his administration’s policy is to favor fossil fuels and nuclear energy, while dismissing solar and wind power.

“Windmills, we’re just not going to allow them. They ruin our country,” Trump said at the meeting. “They’re ugly, they don’t work, they kill your birds, they’re bad for the environment.”

He added that he also didn’t like solar because of the space it takes up on land that could be used for farming.


“Whether we like it or not, fossil fuel is the thing that works,” said Trump. “We’re going to fire up those big monster factories.”

In the same meeting, he showcased a photo of what he said was a $50 billion mega data center planned for Louisiana, provided by Mark Zuckerberg.


But there’s a reason coal-fired power plants have been closing at a rapid pace for years: cost. According to the think tank Energy Innovation, coal power in the U.S. tends to cost more to run than renewables. Before Trump’s second term, the U.S. Department of Energy publicized a strategy to support new energy demand for AI with renewable sources, writing that “solar energy, land-based wind energy, battery storage and energy efficiency are some of the most rapidly scalable and cost competitive ways to meet increased electricity demand from data centers.”

Further, many governments examining how to use AI also have climate pledges in place to reduce their greenhouse gas emissions — including states such as North Carolina and California.

Earlier this year, Trump signed an executive order, “Reinvigorating America’s Beautiful Clean Coal Industry and Amending Executive Order 14241,” directing the secretaries of the Interior, Commerce and Energy to identify regions where coal-powered infrastructure is available and suitable for supporting AI.

A separate executive order, “Accelerating Federal Permitting of Data Center Infrastructure,” shifts the power to the federal government to ensure that new AI infrastructure, fueled by specific energy sources, is built quickly by “easing federal regulatory burdens.”

In an interview with Government Technology, a representative of Core Natural Resources, a U.S.-based mining and mineral resource company, said this federal shift will be a “resurgency for the industry,” pressing that coal is “uniquely positioned” to fill the energy need AI will create.

“If you’re looking to generate large amounts of energy that these data centers are going to require, you need to focus on energy sources that are going to be able to meet that demand without sacrificing the power prices for the consumers,” said Matthew Mackowiak, director of government affairs at Core.

“It’s going to be what powers the future, especially when you look at this demand growth over the next few years,” said Mackowiak.

Yet these plans for the future, including increased reliance on fossil fuels and coal as well as the construction of mega data centers, may not be what the public is willing to accept. According to the International Energy Agency, a typical AI-focused data center consumes as much electricity as 100,000 households, but larger ones currently under construction may consume 20 times as much.

A recent report from Data Center Watch suggests that local activism is threatening to derail a potential data center boom.

According to the research firm, $18 billion worth of data center projects have been blocked, while $46 billion of projects were delayed over the last two years in situations where there was opposition from residents and activist groups. Common arguments against the centers are higher utility bills, water consumption, noise, impact on property value and green space preservation.

The movement may put state and local governments in the middle of a clash between federal directives and backlash from their communities. Last month in Tucson, Ariz., City Council members voted against a proposed data center project, due in large part to public pressure from residents with fears about its water usage.

St. Charles, Mo., recently considered a one-year moratorium on proposed data centers, pausing the acceptance of zoning change applications and the issuing of building permits for such projects following a wave of opposition from residents.

This debate may hit a fever pitch as many state and local governments are also piloting or launching their own programs powered by AI, from traffic management systems to new citizen portals.

As the AI energy debate heats up, local leaders could be in for some challenging choices. As Mackowiak of Core Natural Resources noted, officials have a “tough job, listening to constituents and trying to do what’s best.” He asserted that officials should consider “resource adequacy,” adding that “access to affordable, reliable, dependable power is first and foremost when it comes to a healthy economy and national security.”

The ultimate question for government leaders is not just whether they can meet the energy demands of a private data center, but how the public’s perception of this new energy future will affect their own technology goals. If citizens begin to associate AI with contentious projects and controversial energy sources, it could create a ripple effect of distrust, disrupting the potential of the technology regardless of the benefits.

Ben Miller contributed to this story.






AI Research

‘I trust AI the way a sailor trusts the sea. It can carry you far, or it can drown you’: Poll results reveal majority do not trust AI


Everywhere we turn, we are reminded of the rapid advances that artificial intelligence (AI) is making. As the technology continues to evolve, it raises an important question: Can we really trust it?

Trusting AI can mean many things — from letting it recommend a TV show to watch to relying on it for medical advice or putting it in charge of your car. On Aug. 29 we shared a poll asking Live Science readers where they stand on AI’s trustworthiness — and 382 people responded.




AI Research

Three Reasons Why Universities are Crucial for Understanding AI


Artificial intelligence is already transforming almost every aspect of human work and life: It can perform surgery, write code, and even make art. While it is a powerful tool, no one fully understands how AI learns or reasons—not even the companies developing it.

This is where the academic mission to conduct open, scientific research can make a real difference, says Surya Ganguli. The Stanford physicist is leading “The Physics of Learning and Neural Computation,” a collaborative project recently launched by the Simons Foundation that brings together physicists, computer scientists, mathematicians, and neuroscientists to help break AI out of its proverbial “black box.” 

Surya Ganguli will oversee a collaboration called The Physics of Learning and Neural Computation.

“We need to bring the power of our best theoretical ideas from many fields to confront the challenge of scientifically understanding one of the most important technologies to have appeared in decades,” said Ganguli, associate professor of applied physics in Stanford’s School of Humanities and Sciences. “For something that’s of such societal importance, we have got to do it in academia, where we can share what we learn openly with the world.”

There are many compelling reasons why this work needs to be done by universities, says Ganguli, who is also a senior fellow at the Stanford Institute for Human-Centered AI. Here are three: 

Improving Scientific Understanding

The companies on the frontier of AI technology are more focused on improving performance, without necessarily having a complete scientific understanding of how the technology works, Ganguli contends. 

“It’s imperative that the science catches up with the engineering,” he said. “The engineering of AI is way ahead, so we need a concerted, all-hands-on-deck approach to advance the science.”

AI systems are developed very differently than something like a car, with physical parts that are explicitly designed and rigorously tested. AI neural networks are inspired by the human brain, with a multitude of connections. These connections are then implicitly trained using data. 

Ganguli likens that training to human learning: We educate children by giving them information and correcting them when they are wrong. We know when a child learns a word like cat or a concept like generosity, but we do not know explicitly what happens in the brain to acquire that knowledge.

The same is true of AI, but it makes strange mistakes that a human would never make. Researchers believe it is critical to understand why for both practical and ethical reasons. 

“AI systems are derived in a very implicit way, but it’s not clear that we’re baking in the same empathy and caring for humanity that we do in our children,” Ganguli said. “We try a lot of ad hoc stuff to bake human values into these large language models, but it’s not clear that we’ve figured out the best way to do it.”

Physics Can Tackle AI’s Complexity

Traditionally, the field of physics has focused on studying complex natural systems. While AI has “artificial” in its very name, its complexity lends itself well to physics, which has increasingly expanded beyond its historical boundaries to branch into many other fields, including biology and neuroscience.

Physicists have a lot of experience working with high-dimensional systems, Ganguli pointed out. For example, some physicists study materials with many billions of interacting particles with complex, dynamic laws that influence their collective behavior and give rise to surprising, “emergent” properties—new characteristics that arise from the interaction but are not present in the individual particles themselves.

AI is similar, with many billions of weights that constantly change during training, and the project’s main goals are to better understand this process. Specifically, the researchers want to know how learning dynamics, training data, and the architecture of an AI system interact to produce emergent computations such as AI creativity and reasoning, the origins of which are not currently understood. Once this interaction is uncovered, it will likely be easier to control the process by choosing the right data for a given problem. 

It might also be possible to create smaller, more efficient networks that can do more with fewer connections, said project member Eva Silverstein, professor of physics in H&S.  

“It’s not that the extra connections necessarily cause a problem. It’s more that they’re expensive,” she said. “Sometimes they can be pruned after training, but you have to understand a lot about the system—learning and reasoning dynamics, structure of data, and architecture—in order to be able to predict in advance how it’s going to work.”
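Silverstein’s point about pruning can be illustrated with the simplest version of the idea, magnitude pruning: after training, drop the smallest-magnitude weights and keep a mask so the removed connections stay removed. The sketch below is a generic illustration under that assumption, not the collaboration’s method.

```python
# Generic magnitude-pruning sketch (an assumption for illustration, not the
# collaboration's method): zero out the smallest-magnitude weights after
# training and keep a mask so the removed connections stay removed.
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Return pruned weights and a binary mask keeping the largest-|w| entries."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)              # number of weights to remove
    threshold = np.partition(flat, k)[k]       # k-th smallest magnitude
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

# Usage: prune 90% of a trained layer's weights, then fine-tune with the mask applied
rng = np.random.default_rng(0)
layer = rng.normal(size=(512, 512))            # stand-in for trained weights
pruned, mask = magnitude_prune(layer, sparsity=0.9)
print(1.0 - mask.mean())                       # fraction removed, roughly 0.9
```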

Ganguli and Silverstein are two of the 17 principal investigators representing 12 universities on the Simons Foundation project. Ganguli hopes to expand participation further, ultimately bringing a new generation of physicists into the AI field. The collaboration will be holding workshops and summer school sessions to build the scientific community. 

Academic Findings Are Shared

Everything that comes out of this collaboration will be shared, with findings vetted and published in peer-reviewed journals. In contrast, companies that need to develop their AI products with the goal of delivering economic returns have little incentive, and no obligation, to share information with others. 

“We need to do open science because walls of secrecy are being erected around these frontier AI companies,” Ganguli said. “I really love being at the university, where our very mission is to share what we learn with the world.”

This story was first published by the Stanford School of Humanities and Sciences.



