Tools & Platforms

How NASA Is Testing AI to Make Earth-Observing Satellites Smarter

A technology called Dynamic Targeting could enable spacecraft to decide, autonomously and within seconds, where to best make science observations from orbit.

In a recent test, NASA showed how artificial intelligence-based technology could help orbiting spacecraft provide more targeted and valuable science data. The technology enabled an Earth-observing satellite for the first time to look ahead along its orbital path, rapidly process and analyze imagery with onboard AI, and determine where to point an instrument. The whole process took less than 90 seconds, without any human involvement.

Called Dynamic Targeting, the concept has been in development for more than a decade at NASA’s Jet Propulsion Laboratory in Southern California. The first of a series of flight tests occurred aboard a commercial satellite in mid-July. The goal: to show the potential of Dynamic Targeting to enable orbiters to improve ground imaging by avoiding clouds and also to autonomously hunt for specific, short-lived phenomena like wildfires, volcanic eruptions, and rare storms.

“The idea is to make the spacecraft act more like a human: Instead of just seeing data, it’s thinking about what the data shows and how to respond,” said Steve Chien, a technical fellow in AI at JPL and principal investigator for the Dynamic Targeting project. “When a human sees a picture of trees burning, they understand it may indicate a forest fire, not just a collection of red and orange pixels. We’re trying to make the spacecraft have the ability to say, ‘That’s a fire,’ and then focus its sensors on the fire.”

This first flight test for Dynamic Targeting wasn’t hunting specific phenomena like fires — that will come later. Instead, the point was avoiding an omnipresent phenomenon: clouds.

Most science instruments on orbiting spacecraft look down at whatever is beneath them. However, for Earth-observing satellites with optical sensors, clouds can get in the way as much as two-thirds of the time, blocking views of the surface. To overcome this, Dynamic Targeting looks 300 miles (500 kilometers) ahead along the orbital path and distinguishes clouds from clear sky. If the upcoming scene is clear, the spacecraft images the surface when passing overhead; if it’s cloudy, the spacecraft cancels the imaging activity to save data storage for another target.
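As a rough illustration of that go/no-go logic, the sketch below shows how a look-ahead cloud mask might drive the decision to image or skip a scene. The function names, the cloud-fraction threshold, and the mask format are hypothetical assumptions for illustration, not details of the actual flight software.

```python
import numpy as np

def cloud_fraction(cloud_mask: np.ndarray) -> float:
    """Fraction of look-ahead pixels flagged as cloud (1 = cloud, 0 = clear)."""
    return float(cloud_mask.mean())

def should_image(cloud_mask: np.ndarray, max_cloud_fraction: float = 0.3) -> bool:
    """Image the upcoming scene only if it is mostly clear; otherwise skip
    the acquisition and keep onboard storage free for the next target."""
    return cloud_fraction(cloud_mask) <= max_cloud_fraction

# Hypothetical usage with a binary mask produced by an onboard classifier.
lookahead_mask = np.random.randint(0, 2, size=(128, 128))
if should_image(lookahead_mask):
    print("Scene mostly clear: schedule nadir imaging.")
else:
    print("Scene cloudy: cancel imaging and save storage for another target.")
```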

“If you can be smart about what you’re taking pictures of, then you only image the ground and skip the clouds. That way, you’re not storing, processing, and downloading all this imagery researchers really can’t use,” said Ben Smith of JPL, an associate with NASA’s Earth Science Technology Office, which funds the Dynamic Targeting work. “This technology will help scientists get a much higher proportion of usable data.”

The testing is taking place on CogniSAT-6, a briefcase-size CubeSat that launched in March 2024. The satellite — designed, built, and operated by Open Cosmos — hosts a payload designed and developed by Ubotica featuring a commercially available AI processor. While working with Ubotica in 2022, Chien’s team conducted tests aboard the International Space Station running algorithms similar to those in Dynamic Targeting on the same type of processor. The results showed the combination could work for space-based remote sensing.

Since CogniSAT-6 lacks an imager dedicated to looking ahead, the spacecraft tilts forward 40 to 50 degrees to point its optical sensor, a camera that sees both visible and near-infrared light. Once look-ahead imagery has been acquired, Dynamic Targeting’s advanced algorithm, trained to identify clouds, analyzes it. Based on that analysis, the Dynamic Targeting planning software determines where to point the sensor for cloud-free views. Meanwhile, the satellite tilts back toward nadir (looking directly below the spacecraft) and snaps the planned imagery, capturing only the ground.
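To make that sequence concrete, here is a minimal sketch of how the look-ahead, analysis, and planning steps could fit together in a single pass. The tile layout, the stand-in classifier, and the plan format are illustrative assumptions, not the CogniSAT-6 flight code.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ImagingPlan:
    tile_index: int        # along-track tile within the look-ahead strip
    clear_fraction: float  # estimated cloud-free fraction for that tile

def estimate_clear_fraction(tile: np.ndarray) -> float:
    """Stand-in for the onboard AI cloud classifier. Bright pixels are
    treated as cloud here, which is only a placeholder for a trained model."""
    return float((tile < 0.5).mean())

def plan_nadir_captures(lookahead_tiles, min_clear: float = 0.7):
    """1) Tilt forward and acquire the look-ahead strip (the tiles).
    2) Analyze each tile with the cloud classifier.
    3) Keep only tiles expected to be cloud-free when the satellite
       passes over them at nadir."""
    plans = []
    for i, tile in enumerate(lookahead_tiles):
        clear = estimate_clear_fraction(tile)
        if clear >= min_clear:
            plans.append(ImagingPlan(tile_index=i, clear_fraction=clear))
    return plans

# Hypothetical look-ahead strip: four tiles of normalized brightness values.
strip = [np.random.rand(64, 64) for _ in range(4)]
for plan in plan_nadir_captures(strip, min_clear=0.4):
    print(f"Image tile {plan.tile_index} at nadir "
          f"({plan.clear_fraction:.0%} estimated clear sky)")
```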

This all takes place in 60 to 90 seconds, depending on the original look-ahead angle, as the spacecraft speeds in low Earth orbit at nearly 17,000 mph (7.5 kilometers per second).

With the cloud-avoidance capability now proven, the next test will be hunting for storms and severe weather — essentially targeting clouds instead of avoiding them. Another test will be to search for thermal anomalies like wildfires and volcanic eruptions. The JPL team developed unique algorithms for each application.

“This initial deployment of Dynamic Targeting is a hugely important step,” Chien said. “The end goal is operational use on a science mission, making for a very agile instrument taking novel measurements.”

There are multiple visions for how that could happen — possibly even on spacecraft exploring the solar system. In fact, Chien and his JPL colleagues drew some inspiration for their Dynamic Targeting work from another project they had also worked on: using data from ESA’s (the European Space Agency’s) Rosetta orbiter to demonstrate the feasibility of autonomously detecting and imaging plumes emitted by comet 67P/Churyumov-Gerasimenko.

On Earth, adapting Dynamic Targeting for use with radar could allow scientists to study dangerous extreme winter weather events called deep convective ice storms, which are too rare and short-lived to closely observe with existing technologies. Specialized algorithms would identify these dense storm formations with a satellite’s look-ahead instrument. Then a powerful, focused radar would pivot to keep the ice clouds in view, “staring” at them as the spacecraft speeds by overhead and gathers a bounty of data over six to eight minutes.

Some ideas involve using Dynamic Targeting on multiple spacecraft: The results of onboard image analysis from a leading satellite could be rapidly communicated to a trailing satellite, which could be tasked with targeting specific phenomena. The data could even be fed to a constellation of dozens of orbiting spacecraft. Chien is leading a test of that concept, called Federated Autonomous MEasurement, beginning later this year.
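One way to picture that leader-and-trailer handoff is as a small detection message passed over a cross-link. The schema and confidence threshold below are purely illustrative and do not describe the actual Federated Autonomous MEasurement design.

```python
from dataclasses import dataclass

@dataclass
class DetectionMessage:
    """Hypothetical cross-link message from the leading satellite."""
    phenomenon: str    # e.g. "wildfire", "volcanic_eruption", "storm"
    latitude: float    # degrees north
    longitude: float   # degrees east
    confidence: float  # onboard classifier confidence, 0.0 to 1.0

def trailing_satellite_task(msg: DetectionMessage, min_confidence: float = 0.8) -> str:
    """The trailing satellite retargets only on high-confidence detections."""
    if msg.confidence >= min_confidence:
        return (f"Point instrument at ({msg.latitude:.2f}, {msg.longitude:.2f}) "
                f"to observe {msg.phenomenon}")
    return "Continue nominal nadir observations"

# Example: the leading satellite has flagged a likely wildfire.
print(trailing_satellite_task(DetectionMessage("wildfire", 34.20, -118.17, 0.93)))
```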

Melissa Pamer
Jet Propulsion Laboratory, Pasadena, Calif.
626-314-4928
melissa.pamer@jpl.nasa.gov

2025-094





Apple’s AI and search executive Robby Walker to leave: Report


Robby Walker, one of Apple’s most senior artificial intelligence executives, is leaving the company, Bloomberg News reported on Friday, citing people with knowledge of the matter.

Walker’s exit comes as Apple’s cautious approach to AI has fueled concerns it is sitting out what could be the industry’s biggest growth wave in decades.

The company was slow to roll out its Apple Intelligence suite, including a ChatGPT integration, while a long-awaited AI upgrade to Siri has been delayed until next year.

Walker has been the senior director of the iPhone maker’s Answers, Information and Knowledge team since April this year. He has been with Apple since 2013, according to his LinkedIn profile.

He is planning to leave Apple next month, the report said. Walker was in charge of Siri until earlier this year, before management of the voice assistant was shifted to software chief Craig Federighi.

Apple did not immediately respond to a Reuters request for comment.

Recently, Apple has seen a slew of its AI executives leave to join Meta Platforms. The list includes Ruoming Pang, Apple’s top executive in charge of AI models, according to a Bloomberg report from July.

Meta has also hired two other Apple AI researchers, Mark Lee and Tom Gunter — who worked closely with Pang — for its Superintelligence Labs team.

Bloomberg reported in March that Mike Rockwell, the vice president in charge of the Vision Products Group, would take over the Siri virtual assistant after CEO Tim Cook lost confidence in AI head John Giannandrea’s ability to execute on product development.

At its annual product launch event last week, Apple introduced an upgraded line of iPhones, alongside a slimmer iPhone Air, and held prices steady amid U.S. President Donald Trump’s tariffs that have hurt the company’s profit.

The event, though, was light on evidence of how Apple, a laggard in the AI race, aims to close the gap with the likes of Google, which showcased the capabilities of its Gemini AI model in its latest flagship phones.





Nano Banana AI: ChatGPT vs Qwen vs Grok vs Gemini; the top alternatives to try in 2025 – The Times of India




Judges call for joint oversight of AI expansion

Beijing judges have called for stronger regulatory collaboration focused on artificial intelligence developers and service providers, with the aim of supporting innovation in the industry while enhancing the protection of individual rights.

Zhao Changxin, vice-president of the Beijing Internet Court, emphasized the need to supervise AI development and application across sectors. He suggested judicial bodies promptly communicate issues encountered in handling AI-related cases to departments such as cyberspace management, public security, market regulation and intellectual property.

“This joint approach aims to strengthen the regulation and guidance of AI use, and to clearly delineate the responsibilities and obligations of the technology developers, providers and users,” Zhao said on Wednesday.

Since the court’s establishment in September 2018, it has concluded more than 245,000 cases.

“Among them, those involving AI have been rapidly growing, primarily focusing on issues such as the ownership of copyright for AI-generated works and whether AI-powered products or services constitute online infringement,” he said.

As AI expands into more areas, disputes are no longer limited to the internet sector but are emerging in the culture, entertainment, finance and advertising sectors, Zhao said.

“While introducing new products and services, the fast development of the technology has also brought new legal risks such as AI hallucinations and algorithmic problems,” he said, adding that judicial decisions should balance encouraging technological innovation with upholding social ethics.

In handling AI-related disputes, Zhao said priority should be given to safeguarding people’s dignity and rights. He cited a landmark ruling by the court as an example.

In 2024, the court heard a lawsuit in which a voice-over artist surnamed Yin claimed her voice had been used without her consent in audiobooks circulating online. The voice was processed by AI, according to Sun Mingxi, another vice-president of the court.

Yin sued five companies, including a cultural media corporation that provided recordings of her voice for unauthorized use, an AI software developer and a voice-dubbing app operator.

The court found the cultural media company had sent Yin’s recordings to the software developer without her permission. The software firm then used AI to mimic Yin’s voice and offered the AI-generated products for sale.

Sun said the AI-powered voice mimicked Yin’s vocal characteristics, intonation and pronunciation style to a high degree.

“This level of similarity allowed for the identification of Yin’s voice,” Sun said.

The court ruled that the actions of the cultural media company and the AI software developer infringed on Yin’s voice rights and ordered them to pay her 250,000 yuan ($35,111) in compensation. The other defendants were not held liable as they unknowingly used the AI-generated voice products.

It was China’s first case concerning rights to voices generated by AI.

“The ruling has set boundaries for how AI should be applied and helped regulate the technology to better serve the public,” Sun said.


