Tools & Platforms
Secure AI for America’s future and humanity’s too – Marin Independent Journal
A technological revolution is unfolding — one that will transform our world in ways we can barely comprehend. As artificial intelligence rapidly evolves and corporate America’s investment in AI continues to explode, we stand at a crossroads that will determine not just America’s future but humanity’s as well.
Many leading experts agree that artificial general intelligence (AGI) is within sight. There is a growing consensus that it could be here within the next two to five years. This is a fundamental shift that will lead to scientific and technological advances beyond our imagination. Some have referred to the development of advanced AI as the Second Industrial Revolution, but the truth is that it will be more significant than that — perhaps incomprehensibly so — and we are not prepared.
The potential benefits of AGI are extraordinary. It could discover cures for diseases we have battled for generations, find solutions to the most difficult mathematical and physics problems, and create trillions of dollars in new wealth.
However, there is real cause for concern that we are racing toward an unprecedented technological breakthrough without considering the many dangers it poses. This includes dangers to our labor force, U.S. national security, and even humanity’s very existence. As Anthropic CEO Dario Amodei recently suggested, AI could lead to a “bloodbath” for job-seekers trying to find meaningful work, and that is just one threat.
The same technology that could eradicate cancer may also create bioweapons of unprecedented lethality. Systems designed to optimize energy distribution could be weaponized to destroy critical infrastructure. As countries sprint to develop advanced AI, the one conversation we are not having is about the possibility that the same tools that might solve our greatest challenges could create catastrophic and even existential risks.
Back in 2014, Stephen Hawking warned, “The development of full artificial intelligence could spell the end of the human race.” More recently, OpenAI CEO Sam Altman claimed, “AI will probably most likely lead to the end of the world, but in the meantime, there will be great companies.” According to Bill Gates, not even doctors and lawyers are safe from AI replacement.
AI is advancing at warp speed without any brakes, and we are unprepared to deal with those risks.
For this reason, we are launching The Alliance for Secure AI, with a mission to ensure advanced AI innovation continues with security and safety as top priorities. We have no interest in stifling critical technological advancement. America can continue to lead the world in AI development while also establishing the necessary safeguards to protect humanity from catastrophe.
Safeguards begin with effective communication across political lines. We will host strategy meetings with coalition partners across the technology, policy, and national security sectors, ensuring those conversations are grounded in a clear understanding of the dangers of AGI.
Beyond the halls of Congress, this will require a public education push. Most Americans are unaware of the unprecedented threats that AI may pose. Our educational efforts will make complex AI concepts accessible to everyday Americans, who must understand that their livelihoods are at risk.
By convening AI experts, policymakers, journalists, and other key stakeholders, we can connect leaders who must work together to get this right for America, and humanity. We have no choice but to build a community committed to responsible AI advancement.
I am profoundly optimistic about AI’s potential to improve our lives. And yet, alongside its potential benefits, AGI will introduce serious and dangerous problems that we will all need to work together to solve.
The advanced AI revolution will be far more consequential than anything in history. Daily activities for everyday Americans will be forever changed. AGI will impact the economy, national security, and our understanding of consciousness itself. Google is already hiring for a “post-AGI” world in which AI is smarter than the smartest human being at all cognitive tasks.
It is critical that the U.S. maintains its technological leadership while ensuring AI systems align with human values and American principles. Without safeguards, we risk a future in which the most powerful technology ever created could threaten human liberty and prosperity.
This is about asking fundamental questions: What role should AI play in society? What are the trade-offs we need to consider? What limits should we place on autonomous systems?
Finding the answers to these questions requires broad public engagement — not just from Big Tech, but from every single American.
Brendan Steinhauser is the CEO of The Alliance for Secure AI, a nonprofit organization dedicated to educating the public about the implications of advanced artificial intelligence. ©2025 New York Daily News. Distributed by Tribune Content Agency, LLC.
Tools & Platforms
He Lost Half His Vision. Now He’s Using AI to Spot Diseases Early.
At 26, Kevin Choi got a diagnosis that changed his life: glaucoma.
It’s a progressive eye disease that damages the optic nerve, often without symptoms until it’s too late. By the time doctors caught it, Choi had lost half his vision.
An engineer by training — and a former rifleman in South Korea’s Marine Corps — Choi thought he had a solid handle on his health.
“I was really frustrated I didn’t notice that,” he said.
The 2016 diagnosis still gives him “panic.” But it also sparked something big.
That year, Choi teamed up with his doctor, a vitreoretinal surgeon, to cofound Mediwhale, a South Korea-based healthtech startup.
Their mission is to use AI to catch diseases before symptoms show up and cause irreversible harm.
“I’m the person who feels the value of that the most,” Choi said.
The tech can screen for cardiovascular, kidney, and eye diseases through non-invasive retinal scans.
Mediwhale’s technology is used primarily in South Korea, but hospitals in Dubai, Italy, and Malaysia have also adopted it.
Mediwhale said in September that it had raised $12 million in its Series A2 funding round, led by Korea Development Bank.
AI can help with fast, early screening
Choi believes AI is most powerful in the earliest stage of care: screening.
AI, he said, can help healthcare providers make faster, smarter decisions — the kind that can mean the difference between early intervention and irreversible harm.
In some conditions, “speed is the most important,” Choi said. That’s true for “silent killers” like heart and kidney disease, and progressive conditions like glaucoma — all of which often show no early symptoms but, unchecked, can lead to permanent damage.
For patients with chronic conditions like diabetes or obesity, the stakes are even higher. Early complications can lead to dementia, liver disease, heart problems, or kidney failure.
The earlier these risks are spotted, the more options doctors — and patients — have.
Choi said Mediwhale’s AI makes it easier to triage by flagging who’s low-risk, who needs monitoring, and who should see a doctor immediately.
Screening patients at the first point of contact doesn’t require “very deep knowledge,” Choi said. That kind of quick, low-friction risk assessment is where AI shines.
Mediwhale’s tool lets patients bypass traditional procedures — including blood tests, CT scans, and ultrasounds — when screening for cardiovascular and kidney risks.
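To make that triage idea concrete, here is a minimal sketch of how a three-tier screening flow could be expressed in code. Mediwhale has not published its model or its cutoffs, so the risk-score scale, thresholds, and names below are hypothetical illustrations, not the company's actual system.

```python
# Minimal illustrative sketch of three-tier triage from a screening risk score.
# The 0-to-1 score, the thresholds, and the function names are hypothetical;
# Mediwhale's actual model, inputs, and cutoffs are not public.

from dataclasses import dataclass


@dataclass
class ScreeningResult:
    patient_id: str
    risk_score: float  # assumed scale: 0.0 (lowest risk) to 1.0 (highest risk)


def triage(result: ScreeningResult,
           monitor_threshold: float = 0.3,
           referral_threshold: float = 0.7) -> str:
    """Map a retinal-scan risk score to one of three triage tiers."""
    if result.risk_score >= referral_threshold:
        return "refer"      # should see a doctor immediately
    if result.risk_score >= monitor_threshold:
        return "monitor"    # needs periodic follow-up
    return "low-risk"       # routine screening interval


# Example: sort a batch of screenings so the most urgent cases surface first.
batch = [
    ScreeningResult("patient-001", 0.12),
    ScreeningResult("patient-002", 0.81),
    ScreeningResult("patient-003", 0.45),
]
for r in sorted(batch, key=lambda s: s.risk_score, reverse=True):
    print(r.patient_id, triage(r))
```

The point of the sketch is the shape of the workflow: a single score from the scan is enough to sort a waiting room into who can wait, who needs follow-up, and who should be seen now.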
Choi also said that when patients see their risks visualized through retinal scans, they tend to take them more seriously.
AI won’t replace doctors
Despite his belief in AI’s power, Choi is clear: It’s not a replacement for doctors.
Patients want to hear a human doctor’s opinion and reassurance.
Choi also said that medicine is often messier than a clean dataset. While AI is “brilliant at solving defined problems,” it lacks the ability to navigate nuance.
“Medicine often requires a different dimension of decision-making,” he said.
For example: How will a specific treatment affect someone’s life? Will they follow through? How is their emotional state affecting their condition? These are all variables that algorithms still struggle to read, but doctors can pick up. These insights “go beyond simple data points,” Choi said.
And when patients push back — say, hesitating to start a new medication — doctors are trained to both understand why and guide them.
They are able to “navigate patients’ irrational behaviours while still grounding decisions in quantitative data,” he said.
“These are complex decision-making processes that extend far beyond simply processing information.”
Tools & Platforms
First AI-powered self-monitoring satellite launched into space
A satellite the size of a mini fridge is about to make big changes in space technology—and it’s happening fast.
Researchers from UC Davis have created a new kind of satellite system that can monitor and predict its own condition in real time using AI. This marks the first time a digital brain has been built into a spacecraft that will operate independently in orbit. And the most impressive part? The entire project, from planning to launch, will be completed in just 13 months—an almost unheard-of pace in space missions.
A Faster Path to Space
Most satellite projects take years to develop and launch. But this mission, set to take off in October 2025 from a base in California, has broken records by cutting the timeline to just over a year. That’s due in part to a partnership between university scientists and engineers and Proteus Space. Together, they’ve built what’s being called the first “rapid design-to-deployment” satellite system of its kind.
A Smart Brain for the Satellite
The most exciting feature of this mission is the custom payload—a special package inside the satellite built by researchers. This package holds a digital twin, which is a computer model that acts like a mirror of the satellite’s power system. But unlike earlier versions of digital twins that stay on Earth and get updates sent from space, this one lives and works inside the satellite itself.
That means the satellite doesn’t need to “phone home” to understand how it’s doing. Instead, it uses built-in sensors and software to constantly check the health of its own batteries, monitor power levels, and predict what might happen next.
“The spacecraft itself can let us know how it’s doing, which is all done by humans now,” explained Adam Zufall, a graduate researcher helping to lead the project.
By using artificial intelligence, the satellite’s brain doesn’t just collect data. It also learns from it. Over time, the system should get better at guessing how its batteries and systems will behave next. That helps the satellite adjust its operations on its own, even before problems arise.
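As a rough illustration of that idea, here is a minimal Python sketch of an onboard digital twin that predicts battery state of charge and adjusts its own parameters as telemetry arrives. The UC Davis and Proteus Space team has not published its design, so the battery model, telemetry fields, and numbers below are simplified assumptions, not the actual payload.

```python
# A deliberately simple onboard "digital twin" sketch: predict the battery's
# state of charge, compare against the measured value, and learn from the gap.
# Capacity, efficiency, and learning rate are assumed, not mission values.


class BatteryTwin:
    def __init__(self, capacity_wh: float, learning_rate: float = 0.05):
        self.capacity_wh = capacity_wh  # assumed usable capacity in watt-hours
        self.soc = 1.0                  # state of charge, 0.0 to 1.0
        self.efficiency = 0.95          # learned charge/discharge efficiency
        self.lr = learning_rate

    def step(self, net_power_w: float, dt_s: float, measured_soc: float) -> float:
        """Advance the model one telemetry step and learn from the residual."""
        delta_wh = net_power_w * dt_s / 3600.0
        gain = delta_wh / self.capacity_wh
        predicted = min(max(self.soc + self.efficiency * gain, 0.0), 1.0)

        # Nudge the learned efficiency to shrink the prediction error.
        error = measured_soc - predicted
        self.efficiency += self.lr * error * gain

        self.soc = measured_soc
        return predicted

    def forecast(self, net_power_w: float, horizon_s: float) -> float:
        """Project state of charge ahead without waiting for new measurements."""
        gain = net_power_w * horizon_s / 3600.0 / self.capacity_wh
        return min(max(self.soc + self.efficiency * gain, 0.0), 1.0)


# Example telemetry step: one minute at a net draw of 40 W on a 100 Wh pack,
# followed by a one-hour look-ahead at the same load.
twin = BatteryTwin(capacity_wh=100.0)
twin.step(net_power_w=-40.0, dt_s=60.0, measured_soc=0.992)
print(round(twin.forecast(net_power_w=-40.0, horizon_s=3600.0), 3))
```

The loop captures the basic pattern the researchers describe: each pass through the telemetry both updates the model’s picture of the battery and sharpens its next prediction.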
“It should get smarter as it goes,” said Professor Stephen Robinson, who directs the lab that built the payload. “And be able to predict how it’s going to perform in the near future. Current satellites do not have this capability.”
Working Together Across Disciplines
Building this kind of technology takes teamwork. The project brings together experts in robotics, space systems, computer science, and battery research. In addition to Robinson and Zufall, the team includes another mechanical engineering professor who focuses on battery management. His lab studies how batteries behave under different conditions, including in space.
Graduate students in engineering and computer science also play major roles. One student helps design the spacecraft’s software, while others work on how the AI makes predictions and responds to changes in power levels.
Together, they’ve built a model that can monitor voltage and other readings to understand how much energy the satellite can store and use.
The satellite will carry several other payloads, both commercial and scientific. But the real highlight is this AI-powered system that watches itself and adjusts on the fly.
What Happens After Launch
Once launched from Vandenberg Space Force Base, the satellite will move into low Earth orbit. It’s designed to stay active for up to 12 months, gathering data and testing its smart brain in the harsh conditions of space. This type of orbit sits a few hundred miles above the Earth’s surface—far enough to test the systems, but close enough for short communication times.
After its mission ends, the satellite will continue to orbit for another two years. By the end of its life, gravity and drag will pull it back toward Earth, where it will burn up safely in the atmosphere. This kind of planned decay helps keep space clean and reduces the risk of debris collisions.
The whole mission shows how fast and flexible future space projects might become. Instead of waiting years to build and test systems, researchers could design, launch, and operate smart satellites in a matter of months. That could open the door to more frequent missions, more advanced designs, and smarter satellites across the board.
Changing the Future of Spacecraft
Satellites that can take care of themselves offer big advantages. Right now, spacecraft rely on ground teams to tell them what to do, run checks, and respond to problems. This creates delays, increases costs, and adds risk.
By placing real-time digital twins on board, future satellites could adjust to problems on their own. They could shut down failing parts, save power when needed, or warn engineers of upcoming issues days in advance.
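One hypothetical way to picture those onboard responses is as simple rules driven by the twin’s forecast, along the lines of the BatteryTwin sketch above. The thresholds and action names here are illustrative only, not the team’s actual logic.

```python
# Hypothetical rule set mapping a forecast state of charge (0.0 to 1.0) to
# onboard actions; thresholds and action names are illustrative assumptions.


def plan_response(forecast_soc: float) -> list[str]:
    """Turn a look-ahead state-of-charge estimate into simple onboard actions."""
    actions = []
    if forecast_soc < 0.2:
        actions.append("shed non-critical loads")   # save power when needed
    if forecast_soc < 0.4:
        actions.append("downlink early warning")    # alert engineers in advance
    if not actions:
        actions.append("continue nominal operations")
    return actions


print(plan_response(0.35))  # ['downlink early warning']
```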
This would reduce the workload for ground teams and improve the life and safety of space missions.
The team behind this project believes their work is just the beginning. With more advanced AI tools and faster build times, space technology could move at a much quicker pace. More importantly, it could become smarter, more reliable, and more responsive to change. This satellite might be small, but it could help start a big shift in how space systems are built and run.
Tools & Platforms
Femtech technology enhances women's health with AI and robotics in Korea – Chosunbiz