

Sam Altman’s AI Empire Relies on Brutal Labor Exploitation



Artificial intelligence (AI) is quite possibly the most hyped technology in history. For well over half a century, the potential for AI to replace most or all human skills has oscillated in the public imagination between sci-fi fantasy and scientific mission.

From the predictive AI of the 2000s that brought us search engines and apps, to the generative AI of the 2020s that is bringing us chatbots and deepfakes, every iteration of AI is apparently one more leap toward the summit of human-comparable AI, or what is now widely termed Artificial General Intelligence (AGI).

The strength of Karen Hao’s detailed analysis of America’s AI industry, Empire of AI, is that her relentlessly grounded approach refuses to play the game of the AI hype merchants. Hao makes a convincing case that it is wrong to focus on hypotheticals about the future of AI when its present incarnation is fraught with so many problems. She also stresses that exaggerated “doomer” and “boomer” perspectives on what is coming down the line both end up helping the titans of the industry to build a present and future for AI that best serves their interests.

Moreover, AI is a process, not a destination. The AI we have today is itself the product of path dependencies based on the ideologies, infrastructure, and IPs that dominate in Silicon Valley. As such, AI is being routed down a highly oligopolistic developmental path, one that is designed deliberately to minimize market competition and concentrate power in the hands of a very small number of American corporate executives.

However, the future of AI remains contested territory. In what has come as a shock to the Silicon Valley bubble, China has emerged as a serious rival to US AI dominance. As such, AI has now moved to the front and center of great-power politics in a way comparable to the nuclear and space races of the past. To understand where AI is and where it is going, we must situate analysis of the technology within the wider economic and geopolitical context in which the United States finds itself.

Hao’s story revolves around OpenAI, the San Francisco company most famous for ChatGPT, the AI chatbot that brought generative AI to the world’s attention. Through the trials and tribulations of its CEO Sam Altman, we are brought into a world of Machiavellian deceit and manipulation, where highfalutin moral ambition collides constantly with the brutal realities of corporate power. Altman survives the various storms that come his way, but only by junking everything he once claimed to believe in.

OpenAI began with the mission of “building AGI that benefits humanity” as a nonprofit that would collaborate with others through openly sharing its research, without developing any commercial products. This objective stemmed from the convictions of Altman and OpenAI’s first major patron, Elon Musk, who believed that AI posed major risks to the world if it was developed in the wrong way. AI therefore required cautious development and tight government regulation to keep it under control.

OpenAI was thus a product of AI’s “doomer” faction. The idea was to be the first to develop AGI in order to be best positioned to rein it in. The fact that Altman would end up flipping OpenAI 180 degrees — creating a for-profit company that produces proprietary software, based on extreme levels of corporate secrecy and shark-like determination to outcompete its rivals in the speed of AI commercialization, regardless of the risks — testifies to his capacity to mutate into whatever he needs to be in the pursuit of wealth and power.

The motivation for the first shift toward what OpenAI would eventually become came from strategic considerations in relation to its doctrine of AI development, called “scaling.” The idea behind scaling was that AI could advance by leaps and bounds simply through the brute force of massive data power. This reflected a devout belief in “connectionism,” a school of AI development that was much easier to commercialize than its rival (“symbolism”).

The connectionists believed that the key to AI was to create “neural networks,” digital approximations of real neurons in the human brain. OpenAI’s big thinkers, most importantly its first chief scientist Ilya Sutskever, believed that if the firm had more data-processing nodes (“neurons”) available to it than anyone else, it would position itself at the cutting edge of AI development. The problem was that scaling, an intrinsically data-intensive strategy, required a huge amount of capital — much more than a nonprofit was capable of attracting.
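Hao doesn’t spell out the math, but the bet can be illustrated with the kind of empirical “scaling law” AI researchers later formalized: measured error tends to fall as a smooth power law as compute grows, which is what makes “just add more” look like a winning strategy. A rough sketch with purely illustrative constants:

```python
# Purely illustrative sketch of a power-law "scaling law". The constants are invented
# for demonstration; they are not taken from Hao's book or any specific paper.
def projected_loss(compute_flops: float, reference: float = 1e18, exponent: float = 0.05) -> float:
    """Toy curve: loss shrinks as a power law once compute grows past a reference point."""
    return (reference / compute_flops) ** exponent

for flops in (1e18, 1e20, 1e22, 1e24):
    print(f"{flops:.0e} FLOPs -> projected loss {projected_loss(flops):.3f}")
# Each 100x jump in compute buys a modest but predictable improvement: the logic of scaling.
```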

Driven by the need to scale, OpenAI created a for-profit arm in 2019 to raise capital and build commercial products. Amid this shift toward commercialization, a power struggle broke out between Altman and Musk over who would lead the company. Altman won out, and Musk, having been sidelined, turned from ally to enemy overnight, accusing Altman of raising funds as a nonprofit under false pretenses. This criticism would later develop into litigation against OpenAI.

But Musk’s ideological justification for the split was an afterthought. If he had won the power struggle, the world’s richest man planned to lash OpenAI to his electric car company, Tesla. Whoever became CEO, OpenAI was on an irreversible path toward becoming just like any other Big Tech giant.

Yet because of the company’s origins, it was left with a strange governance structure that gave board-level control to an almost irrelevant nonprofit arm, based on the ludicrous pretense that, despite its newly embraced profit motive, OpenAI’s mission was still to build AGI for humanity. The Effective Altruism (EA) movement gave a veneer of coherence to the Orwellian ideological precepts of OpenAI. EA promotes the idea that the best way of doing good is to become as rich as possible and then give your money to philanthropic causes.

This junk philosophy found massive support in Silicon Valley, where the idea of pursuing maximum wealth accumulation and justifying it in moral terms was highly convenient. Altman, who in 2025 glad-handed Saudi Crown Prince Mohammed bin Salman alongside Trump just after the despotic ruler announced his own AI venture, epitomizes the endgame of EA posturing: power inevitably becomes its own purpose.

Just four months after the for-profit launched, Altman secured a $1 billion investment from Microsoft. With Musk out of the picture, OpenAI found an alternative Big Tech benefactor to fund its scaling. More willing to trample on data protection rules than big competitors like Google, OpenAI began to extract data from anywhere and everywhere, with little care for its quality or content — a classic tech start-up “disruptor” mentality, akin to Uber or Airbnb.

This data bounty was the raw material that fueled OpenAI’s scaling. Driven by a desire to impress Microsoft co-founder and former CEO Bill Gates, who wanted to see OpenAI create a chatbot that would be useful for research, the company developed ChatGPT, expecting it to be moderately successful. To everyone’s surprise, within two months ChatGPT became the fastest-growing consumer app in history. The generative AI era was born.

From that point onward, OpenAI became relentlessly focused on commercialization. But the shockwaves of ChatGPT were felt well beyond the company. Scaling became the standard-bearer for AI development: observers deemed whichever company could marshal the greatest amount of “compute” (data power) to be the likely winner of the AI tech race. Alphabet and Meta started to spend sums on AI development that dwarfed those marshaled by the US government and the European Commission.

As Big Tech raced to get ahead on generative AI, the funding rush swept up almost all of the talent in the field. This transformed the nature of AI research, including in universities, with leading professors increasingly tied to one of the Big Tech players. As the stakes grew higher, research from within companies became increasingly secretive and dissent frowned upon. Corporate proprietary walls were dividing up the field of AI development.

We have to place this heavily commodified form of AI development within its overall conjuncture. If generative AI had been developed in the United States in the 1950s, it would have spent years or even decades backed largely by US military R&D budgets. Even after commercialization, the state would have remained the main purchaser of the technology for decades. This was the developmental path of semiconductors.

However, in the 2020s, at the tail end of the neoliberal era, it is the corporate–state nexus that drives and frames technological development in the United States, reducing incentives for long-term thinking, and stunting any open, pedagogical process of scientific inquiry. That will have long-term consequences for how AI is developed that are unlikely to be positive, whether for society in general or for American global leadership in particular.

One of the myths of AI is that it is a technology that does not rely on workers. There are essentially three parts to the generative AI production process: extracting the data, crunching the data, and testing or fixing the data. The extraction part relies on dead, rather than living, labor. For example, OpenAI scraped data from Library Genesis, an online repository of books and scholarly articles, making use of centuries of intellectual labor for free.

The data-crunching part of generative AI is all about computing power, which relies on labor only to the extent that the infrastructure required for “compute,” most importantly data centers, sits atop a long digital value chain that includes Taiwanese chip manufacturers and Chilean copper miners. The testing and fixing of the data is the part of generative AI production that is most often forgotten, yet it is also the part most directly dependent on workers.

There are two types of digital workers required for testing and fixing the enormous data requirements of generative AI. The first are click workers, also known as data annotators (or data labelers). These are gig workers who earn piece rates for completing short digital tasks, such as categorizing what is contained in an image.

Click workers are vital because without them, AI systems like ChatGPT would be riddled with errors, especially when it comes to “edge cases”: rare or unusual situations that sit at the boundaries of AI’s categorization parameters. Click workers turn the data of generative AI systems from low grade to high quality. This is especially important for OpenAI, since so much of the company’s data has been extracted from the gutters of the internet.
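To make the workflow concrete, a click-work item is typically a small record (an image or text snippet plus a label from each worker), and platforms keep only items on which enough workers agree; disagreement is what flags an edge case for another pass. A hypothetical sketch, with invented field names rather than any real platform’s schema:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical shape of a click-work task. Field names and the pay rate are
# invented for illustration; they are not drawn from any specific platform.
@dataclass
class AnnotationTask:
    item_id: str
    image_url: str
    labels: list[str]        # one label per click worker
    piece_rate_usd: float    # what each completed judgement pays

def consensus_label(task: AnnotationTask, min_agreement: float = 0.7) -> str | None:
    """Keep an item only if enough workers agree; otherwise flag it as an edge case."""
    label, votes = Counter(task.labels).most_common(1)[0]
    return label if votes / len(task.labels) >= min_agreement else None

task = AnnotationTask("img-001", "https://example.com/cat.jpg",
                      labels=["cat", "cat", "cat", "dog"], piece_rate_usd=0.02)
print(consensus_label(task))  # "cat"; items below the agreement threshold get re-queued
```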

The barriers to entry for click work are extremely low, because anyone who can access the internet can perform the most basic tasks. Click workers are operating in a global labor market with little connection to their fellow workers, meaning they have very limited leverage over their digital bosses. As such, the pay rates are rock-bottom and the conditions as precarious as it gets.

Hao finds that Venezuela became the global hotbed of click work for a period, thanks to its high education levels, good internet access, and massive economic crisis. Tough US sanctions on Venezuela didn’t stop American AI companies from exploiting the South American country’s desperate and impoverished workforce. Once click-work outsourcing firms like Remotasks feel they have maximized the labor exploitation of one crisis-hit country, or start to face resistance over working conditions, they simply “robo-fire” workers from that location and bring workers on board from somewhere else.

The second type of worker in the AI industry is the content moderator. Because OpenAI and other AI companies are scraping the detritus of the internet for data, a substantial portion is saturated with racism, sexism, child pornography, fascist views, and every other ugly thing one can think of. A version of AI that doesn’t have the horrors of the internet filtered out will develop these characteristics in its responses; indeed, earlier versions of what would become ChatGPT did produce neo-Nazi propaganda, alarming OpenAI’s compliance team.

The solution has been to turn to human content moderators to filter the filth out of the AI’s system, in the same way content moderators have been tasked for years with policing social media content. Unlike click workers, the content-moderation workforce tends to be subject to a regime of digital Taylorism rather than piece work. This takes the form of a call-center-style setup where workers are motivated by bonuses in target-driven environments, all the time under the watchful eyes of human supervisors and digital surveillance.

Like the click workers, they are completing small digital tasks by annotating data, but the data they are annotating consists of the vilest content humans can produce. Because they are training the AI, it’s necessary for content moderators to look closely at all the gory details that flash up on their screen in order to label each part correctly. Being exposed to this repeatedly and exhaustively is a mental health nightmare.

Hao follows the story of Mophat Okinyi, a Kenyan content moderator working for the outsourcing firm Sama, moderating Meta and OpenAI content. The longer Okinyi worked for Sama, the more erratic his behavior became and the more his personality changed, destroying his relationship and leading to spiraling costs for mental health support.

Having reported on content moderation myself, I know that Okinyi’s case is by no means exceptional. It is the norm for content moderators to have their minds systematically broken down by the relentless brutality they must witness repeatedly just to do their job.

While most click work and content moderation is done in the Global South, there are signs that as AI becomes more complex, it will increasingly need data workers in the Global North as well. The main reason for this is the increasing importance of Reinforcement Learning from Human Feedback (RLHF) to AI development.

RLHF is a more complex form of data annotation, because click workers need to compare two responses from an AI and be able to explain why one is better than the other. As AI tools are developed for specific industries, the need for specialist expertise as well as an understanding of culturally specific cues means that RLHF increasingly requires high-skill workers to enter the AI industry.
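In practice, those pairwise judgements become training data for a reward model: it is trained so that the response annotators preferred scores higher than the one they rejected, and the chatbot is then tuned against that reward signal. A minimal sketch of the standard pairwise loss, with toy numbers rather than anything from OpenAI’s actual pipeline:

```python
import torch
import torch.nn.functional as F

# Illustrative only: in a real pipeline the scores come from a reward model
# evaluating (prompt, response) pairs that human annotators have ranked.
def preference_loss(score_chosen: torch.Tensor, score_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise RLHF reward-model loss: push the preferred response above the rejected one."""
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Toy batch of reward-model outputs for three annotator comparisons.
chosen = torch.tensor([1.8, 0.3, 2.1])
rejected = torch.tensor([0.9, 0.5, 1.0])
print(preference_loss(chosen, rejected))  # loss falls as chosen responses outscore rejected ones
```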

In keeping with the style of the book, Hao does not speculate on where RLHF might lead, but it is worth briefly considering its potential impact on the future of work. If generative AI tools can produce content which is as good as or better than material from a human, then it is not inconceivable that such tools could replace the worker in any content-producing industry.

However, that would not mean that the skills of those workers would disappear entirely: there would still be a need, for example, for paralegals, but their job would be to test and fix the paralegal AI. At that point, these professional service-sector jobs would be exposed to the Uberized model of work that click workers in the Global South have now experienced for years. It’s not for nothing that Altman has said “there will be some change required to the social contract.”

Of course, there remain significant question marks about generative AI’s true capacities in a wide range of content production. But wherever you sit on the scale between skeptic and true believer, there’s little doubt that AI will increasingly be relevant not only to the jobs of the most impoverished sections of the working class, but also to workers who are used to having some level of financial security due to their position higher up the labor-market ladder. The drawing of a much larger pool of workers into the precariat could have explosive social consequences.

AI’s effect on the environment is likely to be just as dramatic as its impact on labor, if not more so. Generative AI’s enormous data usage requires gigantic data centers rammed with energy-hungry GPU chips to service it. These data centers need vast amounts of land to build on and huge quantities of water to cool them down. As generative AI products become more widely used, the ecological footprint of the industry expands relentlessly.

Hao highlights some stunning statistics and projections. Generating a single AI image consumes roughly as much energy as charging a smartphone to 25 percent. By 2027, AI’s water usage could match half of all the water used in the UK. By 2030, AI could be using more energy than all of India, the world’s third-largest consumer of electricity.
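To put the per-image figure in rough perspective (the battery capacity below is an assumption for the sake of arithmetic, not a number from the book), a typical smartphone battery stores somewhere around 15 to 20 watt-hours, so a quarter-charge works out to a few watt-hours per image:

```python
# Back-of-the-envelope check on the per-image claim. The battery capacity is an
# assumed, typical value; it is not a figure from Hao's book.
battery_wh = 17.0                          # assumed smartphone battery capacity, in Wh
energy_per_image_wh = 0.25 * battery_wh    # a quarter-charge per generated image
images_per_kwh = 1000 / energy_per_image_wh
print(f"~{energy_per_image_wh:.1f} Wh per image, roughly {images_per_kwh:.0f} images per kWh")
# ~4.2 Wh per image, roughly 235 images per kWh
```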

The environmental consequences have already been significant. In Iowa, two years into a drought, Microsoft guzzled 11.5 million gallons of the state’s potable water. Uruguay, a country that has experienced repeated droughts, saw mass protests after the courts forced its government to reveal how much drinking water Google’s planned data center in the country would consume. “This is not drought, it’s pillage,” reads graffiti in Montevideo.

What makes the arrival of data centers en masse especially hard to stomach for local populations is the fact that they provide hardly any upsides. Data centers generate very few jobs in the places they are located, while draining local areas of their land and water, thus actively damaging more labor-intensive industries.

In spite of this, following the logic of the scaling doctrine, we should expect data centers to grow ever bigger as “compute” expands to keep AI moving forward. Altman has invested heavily in a nuclear fusion start-up as the golden ticket to abundant, cheap energy, but like AGI itself, it is a bet on a miracle cure tomorrow that distracts from the real problems AI scaling is causing today.

However, in a rare bit of good news for the world’s ecology, the scaling doctrine received a hammer blow from the Far East in January 2025. DeepSeek, a Chinese generative AI chatbot, launched and quickly surpassed ChatGPT as the most downloaded app in the United States.

The remarkable thing about DeepSeek is not what it did, but how it did it. The chatbot cost just $6 million to train, about one-fiftieth of the cost of ChatGPT, while scoring higher on some benchmarks. DeepSeek was trained on older GPU chips that Nvidia had deliberately downgraded to comply with US export restrictions on chip sales to China. Because of its efficiency, DeepSeek’s energy consumption is 90 percent lower than ChatGPT’s. And the technical workings behind the model were published open-source, so anyone could see how it was done.

DeepSeek was a technological marvel and a geopolitical earthquake rolled into one. Not only did it mark China’s arrival as a tech superpower, but it also demonstrated that the scaling doctrine, embraced by the whole of Silicon Valley as the go-to methodology for generative AI, was behind the curve at best. The shock that a Chinese company could embarrass Silicon Valley was so great that it triggered panic on Wall Street about whether the trillions already invested in American AI constituted a bet gone badly wrong.

In one day, the fall in the market capitalization of tech stocks was equivalent to the entire financial value of Mexico. Even Donald Trump weighed in to say that DeepSeek’s emergence was “a wake-up call” for US Big Tech. On X, Altman struck a positive tone in response, but OpenAI quickly started to brief the press that DeepSeek might have “distilled” OpenAI’s models in creating its chatbot, though little has been heard about this claim since. In any case, distillation can’t explain the enormous efficiency gains of DeepSeek compared to OpenAI.

It’s unfortunate that DeepSeek doesn’t appear in Empire of AI. Hao writes that she finished the book in January 2025, the month of DeepSeek’s launch. It would have been wise for the publisher to have given Hao a six-month extension to write a chapter on DeepSeek and the fallout in the US, especially considering how much of the book is a critique of the dogma that scaling is the only way to seriously develop AI.

However, she has commented elsewhere on DeepSeek’s dramatic arrival and the flaws it reveals about the US AI industry: “DeepSeek has demonstrated that scaling up AI models relentlessly, a paradigm OpenAI introduced and champions, is not the only, and far from the best, way to develop AI.”

DeepSeek also raised more profoundly ideological questions about AI development. If a Chinese company could develop cutting-edge tech on an open-source basis, giving everyone else the opportunity to test the underlying assumptions of their innovation and build on them, why were American companies busy constructing giant proprietary software cages around their tech — a form of enclosure that was bound to inhibit the speed of scientific progress? Some have started asking whether Chinese communism offers a better ecosystem for AI development than American capitalism.

In fact, the question of open-source versus proprietary approaches just scratches the surface of the debates that society should be having about artificial intelligence. Ultimately, both DeepSeek and ChatGPT operate based on capitalist business models, just with different principles of technical development. While the Android open-source software operating system differentiates Google from Apple, no one today invests any hopes in Google as a model for socially just tech development. The bigger question we should be asking is this: if we can’t trust oligopolistic capitalist enterprises with a technology as powerful as this, how should AI be governed?

Hao only really gets her teeth into this point in the book’s epilogue, “How the Empire Falls.” She takes inspiration from Te Hiku, a Māori AI speech recognition project. Te Hiku seeks to revitalize the te reo language through putting archived audio tapes of te reo speakers into an AI speech recognition model, teaching new generations of Māori who have few human teachers left.

The tech has been developed on the basis of consent and active participation from the Māori community, and it is only licensed to organizations that respect Māori values. Hao believes Te Hiku shows there is “another way” of doing AI:

Models can be small and task specific, their training data contained and knowable, ridding the incentives for widespread exploitative and psychologically harmful labor practices and the all-consuming extractivism of producing and running massive supercomputers. The creation of AI can be community driven, consensual, respectful of local context and history; its application can uplift and strengthen marginalised communities; its governance can be inclusive and democratic.

More broadly, Hao says we should be aiming for “redistributing power” in AI along three axes: knowledge, resources, and influence. There should be greater funding for organizations pursuing new directions of AI research, holding Big Tech to account, or developing community-based AI tools like Te Hiku. There should also be transparency over AI training data, its environmental impact, supply chains, and land leases.

Labor unions should be supported to develop power among data workers and workers whose jobs are under threat from automation. Finally, “broad-based education” is required to bust the myths surrounding AI, so that the public can come to a more grounded understanding of how AI tools are built, what their constraints are, and whose interests they serve.

Although these are important ideas, in and of themselves they wouldn’t threaten the power of companies like OpenAI. The state is notably absent from Hao’s vision for bringing down the AI tech giants. The questions of how AI should be regulated and what ownership structure it should have go unexplored in Empire of AI.

Perhaps in the age of Trump there is a feeling of skepticism among progressives that the state can be anything other than a tool for entrenching the power of corporate elites. It is certainly hard not to be cynical when confronted with projects like Stargate, an OpenAI-backed private-sector collaboration to invest $500 billion in AI infrastructure. Stargate is underpinned by a commitment from the Trump administration that it will bend and break regulations as necessary to ensure the project gets the energy supply it needs — a clear case of the state–corporate nexus working seamlessly, with little care for the consequences for society at large.

Yet the left can’t sidestep the question of state power and AI. While projects like Te Hiku are no doubt valuable, by definition they cannot be scaled-up alternatives to the collective power of American AI capital, which commands resources far greater than many of the world’s states. If it becomes normal for AI tools like ChatGPT to be governed by and for Silicon Valley, we risk seeing the primary means of content production concentrated in the hands of a tiny number of tech barons.

We therefore need to put big solutions on the table. Firstly, regulation: there must be a set of rules that place strict limits on where AI companies get their data from, how their models are trained, and how their algorithms are managed. In addition, all AI systems should be forced to operate within tightly regulated environmental limits: energy usage for generative AI cannot be a free-for-all on a planet under immense ecological stress. AI-powered automated weapons systems should be prohibited. All of this should be subject to stringent, independent audits to ensure compliance.

Secondly, although the concentration of market power in the AI industry took a blow from DeepSeek’s arrival, there remain strong tendencies within AI — and indeed in digital tech as a whole — towards monopolization. Breaking up the tech oligarchy would mean eliminating gatekeepers that concentrate power and control data flows.

Finally, the question of ownership should be a serious part of the debate. Te Hiku shows that when AI tools are built by organizations with entirely different incentive structures in place, they can produce wildly different results. As long as artificial intelligence is designed for the purposes of the competitive accumulation of capital, firms will continue to find ways to exploit labor, degrade the environment, take short cuts in data extraction, and compromise on safety, because if they don’t, one of their competitors will.

It is possible to imagine a world where a socialized AI serves as a genuine aid to humanity. It would be one where, instead of displacing jobs, AI would be designed to help workers cut the time they spend on technical and bureaucratic tasks, freeing human energies for problem-solving and reducing the length of the working week. Rather than gobbling up water supplies, AI would function as a resource-planning tool to help identify waste and duplication within and across energy systems.

These possibilities are far removed from the fantasies of AGI, whereby Artificial Intelligence will supposedly become so powerful that it will resolve problems deeply embedded in the social relations of capitalism. Instead, this is a vision for AI that presupposes structural change.

On May 29, the US Department of Energy tweeted the following message: “AI is the next Manhattan Project, and THE UNITED STATES WILL WIN.” US government agencies are not the only ones to have compared AI favorably to the Manhattan Project.

From the days when OpenAI was just an idea in Altman’s head, he was proposing a “Manhattan Project for AI” to Musk. When he watched Oppenheimer, the Oscar-winning biopic of the physicist who led the Manhattan Project’s Los Alamos laboratory, Altman’s conclusion was that the mushroom cloud over Japan was a bad look for the atomic age — it’s important to get the PR right when it comes to emerging technologies. The obvious moral lesson of the story — the idea that scientists with good intentions can cause monstrous damage by naively assuming they are (and will always be) on the side of the good guys — never seemed to cross his mind.

The Manhattan Project is an imperfect analogy for the AI tech race. While geopolitical tension is undoubtedly growing between the United States and China, with technology at the heart of it, we are thankfully not yet in the midst of a world war.

The point at which the comparison holds best is this: in both cases, the scientists at the technological vanguard were and are the ones most loudly warning about the risks. As with the Manhattan Project, the interest of US politicians in seeing their scientists develop the technology faster than anyone else is drowning out warnings about the risks for humanity as a whole.

“A nation which sets the precedent of using these newly liberated forces of nature for purposes of destruction,” Leo Szilard, who first developed the concept of the nuclear chain reaction, wrote, “may have to bear the responsibility of opening the door to an era of devastation on an unimaginable scale.” Today, it is the likes of Geoffrey Hinton, known as the “godfather of AI,” who are playing the role of Szilard. Hinton resigned from Google over the “existential risk” posed by the way the technology is being developed.

Moreover, we don’t need to speculate about future risks — we can see them in the here and now. The Israeli military has used an AI system called “Lavender” to identify “targets” in its genocide in Gaza. Lavender’s kill list comprised thirty-seven thousand Palestinians, with no requirement to check why those individuals were on it. Human input was limited to a rubber-stamp process, even though its overseers knew that Lavender makes errors in at least 10 percent of cases. As has long been clear, Palestine serves as a laboratory for emerging military technologies, which are refined and then exported to the world.

The Left should actively oppose a Manhattan Project for AI. A frenzied geopolitical competition to develop highly militarized and propagandistic use cases for AI is not in the interests of the United States, China, or the rest of the world. Whether it is Gaza today or Hiroshima and Nagasaki in the 1940s, we should recognize what the United States “winning” the tech race in this sense looks like. Loosening the grip of the US corporate–state nexus over AI should be a key priority for all those interested in a world of peace and social justice.





He Lost Half His Vision. Now He’s Using AI to Spot Diseases Early.



At 26, Kevin Choi got a diagnosis that changed his life: glaucoma.

It’s a progressive eye disease that damages the optic nerve, often without symptoms until it’s too late. By the time doctors caught it, Choi had lost half his vision.

An engineer by training — and a former rifleman in South Korea’s Marine Corps — Choi thought he had a solid handle on his health.

“I was really frustrated I didn’t notice that,” he said.

The 2016 diagnosis still gives him “panic.” But it also sparked something big.

That year, Choi teamed up with his doctor, a vitreoretinal surgeon, to cofound Mediwhale, a South Korea-based healthtech startup.

Their mission is to use AI to catch diseases before symptoms show up and cause irreversible harm.

“I’m the person who feels the value of that the most,” Choi said.

The tech can screen for cardiovascular, kidney, and eye diseases through non-invasive retinal scans.

Mediwhale’s technology is used primarily in South Korea, though hospitals in Dubai, Italy, and Malaysia have also adopted it.

Mediwhale said in September that it had raised $12 million in its Series A2 funding round, led by Korea Development Bank.


[Photo: Kevin Choi. Credit: Antoine Mutin for BI]



AI can help with fast, early screening

Choi believes AI is most powerful in the earliest stage of care: screening.

AI, he said, can help healthcare providers make faster, smarter decisions — the kind that can mean the difference between early intervention and irreversible harm.

In some conditions, “speed is the most important,” Choi said. That’s true for “silent killers” like heart and kidney disease, and progressive conditions like glaucoma — all of which often show no early symptoms but, unchecked, can lead to permanent damage.

For patients with chronic conditions like diabetes or obesity, the stakes are even higher. Early complications can lead to dementia, liver disease, heart problems, or kidney failure.

The earlier these risks are spotted, the more options doctors — and patients — have.

Choi said Mediwhale’s AI makes it easier to triage by flagging who’s low-risk, who needs monitoring, and who should see a doctor immediately.

Screening patients at the first point of contact doesn’t require “very deep knowledge,” Choi said. That kind of quick, low-friction risk assessment is where AI shines.

Mediwhale’s tool lets patients bypass traditional procedures — including blood tests, CT scans, and ultrasounds — when screening for cardiovascular and kidney risks.
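The article doesn’t describe how those categories are assigned, but the three-way split Choi outlines amounts to banding a risk score. A hypothetical sketch, with invented thresholds that are not Mediwhale’s:

```python
# Hypothetical illustration of a three-way screening triage. The score, field name,
# and thresholds are invented for this sketch; they are not Mediwhale's.
def triage(cardio_risk_score: float) -> str:
    """Map a 0-1 risk score derived from a retinal scan to a screening recommendation."""
    if cardio_risk_score < 0.2:
        return "low risk: routine follow-up"
    if cardio_risk_score < 0.6:
        return "monitor: repeat retinal scan at next visit"
    return "refer: see a doctor immediately"

for score in (0.1, 0.4, 0.8):
    print(score, "->", triage(score))
```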

Choi also said that when patients see their risks visualized through retinal scans, they tend to take it more seriously.


[Photo: Kevin Choi on the street in Seoul. Credit: Antoine Mutin for BI]



AI won’t replace doctors

Despite his belief in AI’s power, Choi is clear: It’s not a replacement for doctors.

Patients want to hear a human doctor’s opinion and reassurance.

Choi also said that medicine is often messier than a clean dataset. While AI is “brilliant at solving defined problems,” it lacks the ability to navigate nuance.

“Medicine often requires a different dimension of decision-making,” he said.

For example: How will a specific treatment affect someone’s life? Will they follow through? How is their emotional state affecting their condition? These are all variables that algorithms still struggle to read, but doctors can pick up. These insights “go beyond simple data points,” Choi said.

And when patients push back — say, hesitating to start a new medication — doctors are trained to both understand why and guide them.

They are able to “navigate patients’ irrational behaviours while still grounding decisions in quantitative data,” he said.

“These are complex decision-making processes that extend far beyond simply processing information.”







First AI-powered self-monitoring satellite launched into space



A satellite the size of a mini fridge is about to make big changes in space technology—and it’s happening fast.

Researchers from UC Davis have created a new kind of satellite system that can monitor and predict its own condition in real time using AI. This marks the first time a digital brain has been built into a spacecraft that will operate independently in orbit. And the most impressive part? The entire project, from planning to launch, will be completed in just 13 months—an almost unheard-of pace in space missions.

A Faster Path to Space

Most satellite projects take years to develop and launch. But this mission, set to take off in October 2025 from a base in California, has broken records by cutting the timeline to just over a year. That’s due in part to a partnership between university scientists and engineers and Proteus Space. Together, they’ve built what’s being called the first “rapid design-to-deployment” satellite system of its kind.

UC Davis graduate students Ayush Patnaik (left) and Adam Zufall (right) working on a payload that will travel into space this fall. The payload is a digital twin that will use AI software to measure the activity and predict the future state of the battery. (CREDIT: Mario Rodriguez)

A Smart Brain for the Satellite

The most exciting feature of this mission is the custom payload—a special package inside the satellite built by researchers. This package holds a digital twin, which is a computer model that acts like a mirror of the satellite’s power system. But unlike earlier versions of digital twins that stay on Earth and get updates sent from space, this one lives and works inside the satellite itself.

That means the satellite doesn’t need to “phone home” to understand how it’s doing. Instead, it uses built-in sensors and software to constantly check the health of its own batteries, monitor power levels, and decide what might happen next.

“The spacecraft itself can let us know how it’s doing, which is all done by humans now,” explained Adam Zufall, a graduate researcher helping to lead the project.

By using artificial intelligence, the satellite’s brain doesn’t just collect data. It also learns from it. Over time, the system should get better at guessing how its batteries and systems will behave next. That helps the satellite adjust its operations on its own, even before problems arise.

“It should get smarter as it goes,” said Professor Stephen Robinson, who directs the lab that built the payload. “And be able to predict how it’s going to perform in the near future. Current satellites do not have this capability.”
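To make the idea concrete: in its simplest form, an onboard digital twin is a battery model that is propagated forward between measurements and nudged back toward telemetry whenever a new reading arrives, so its forecasts track the real hardware. A minimal sketch under those assumptions; the UC Davis payload’s actual model and parameters are not described here:

```python
# Minimal sketch of an onboard battery digital twin: propagate a state-of-charge model
# forward, then correct it against telemetry. All parameters are illustrative; the
# UC Davis payload's actual model is not described in this article.
class BatteryTwin:
    def __init__(self, capacity_wh: float, soc: float = 1.0, blend: float = 0.3):
        self.capacity_wh = capacity_wh
        self.soc = soc        # state of charge, 0..1
        self.blend = blend    # how strongly measurements correct the model

    def predict(self, net_power_w: float, dt_hours: float) -> float:
        """Step the model forward: positive net power charges, negative discharges."""
        self.soc = min(1.0, max(0.0, self.soc + net_power_w * dt_hours / self.capacity_wh))
        return self.soc

    def correct(self, measured_soc: float) -> None:
        """Blend in an onboard measurement so the twin tracks the real battery."""
        self.soc += self.blend * (measured_soc - self.soc)

twin = BatteryTwin(capacity_wh=84.0)
twin.predict(net_power_w=-20.0, dt_hours=0.5)    # eclipse pass: discharging
twin.correct(measured_soc=0.86)                  # telemetry reads slightly below the model
print(round(twin.predict(net_power_w=15.0, dt_hours=0.5), 3))  # forecast for the next half hour
```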

Working Together Across Disciplines

Building this kind of technology takes teamwork. The project brings together experts in robotics, space systems, computer science, and battery research. In addition to Robinson and Zufall, the team includes another mechanical engineering professor who focuses on battery management. His lab studies how batteries behave under different conditions, including in space.

A satellite made by Proteus Space, with payload help from UC Davis researchers, is planned for an October launch that the aerospace company says would make it the fastest-ever launch-qualified satellite. (CREDIT: Proteus Space)

Graduate students in engineering and computer science also play major roles. One student helps design the spacecraft’s software, while others work on how the AI makes predictions and responds to changes in power levels.

Together, they’ve built a model that can monitor voltage and other readings to understand how much energy the satellite can store and use.

The satellite will carry several other payloads, both commercial and scientific. But the real highlight is this AI-powered system that watches itself and adjusts on the fly.

What Happens After Launch

Once launched from Vandenberg Space Force Base, the satellite will move into low Earth orbit. It’s designed to stay active for up to 12 months, gathering data and testing its smart brain in the harsh conditions of space. This type of orbit sits a few hundred miles above the Earth’s surface—far enough to test the systems, but close enough for short communication times.

The Proteus team just broke the record for the fastest launch-qualified custom-bus satellite ever. (CREDIT: Proteus Space)

After its mission ends, the satellite will continue to orbit for another two years. By the end of its life, gravity and drag will pull it back toward Earth, where it will burn up safely in the atmosphere. This kind of planned decay helps keep space clean and reduces the risk of debris collisions.

The whole mission shows how fast and flexible future space projects might become. Instead of waiting years to build and test systems, researchers could design, launch, and operate smart satellites in a matter of months. That could open the door to more frequent missions, more advanced designs, and smarter satellites across the board.

Changing the Future of Spacecraft

Satellites that can take care of themselves offer big advantages. Right now, spacecraft rely on ground teams to tell them what to do, run checks, and respond to problems. This creates delays, increases costs, and adds risk.


By placing real-time digital twins on board, future satellites could adjust to problems on their own. They could shut down failing parts, save power when needed, or warn engineers of upcoming issues days in advance.

This would reduce the workload for ground teams and improve the life and safety of space missions.

The team behind this project believes their work is just the beginning. With more advanced AI tools and faster build times, space technology could move at a much quicker pace. More importantly, it could become smarter, more reliable, and more responsive to change. This satellite might be small, but it could help start a big shift in how space systems are built and run.







Femtech technology enhances women’s health with AI and robotics in Korea (Chosunbiz)



