
Sam Altman Says AI Will Speed up Job Turnover, Hit Service Roles First


Sam Altman thinks artificial intelligence is about to hit the job market, and customer service workers will be the first to feel it.

“I’m confident that a lot of current customer support that happens over a phone or computer, those people will lose their jobs, and that’ll be better done by an AI,” the OpenAI CEO said in an interview on “The Tucker Carlson Show” on Wednesday.

But Altman doesn’t see the shift as unprecedented. He placed it within the long arc of employment change, noting that societies have historically cycled through large waves of job turnover every few generations.

“Someone told me recently that the historical average is about 50% of jobs significantly change. Maybe they don’t totally go away, but significantly change every 75 years on average,” he said.

Where past disruptions stretched over decades, Altman believes AI could compress that timeline.

“My controversial take would be that this is going to be like a punctuated equilibria moment where a lot of that will happen in a short period of time,” he said. “But if we zoom out, it’s not going to be dramatically different than the historical rate.”

He suggested that the result could be a wave of turnover that looks familiar in scale but unfolds faster than previous industrial shifts.

Still, he said, professions like nursing are likely safe because “people really want the deep human connection with a person,” though he was less certain about programming.

Developers are more productive than ever with AI, Altman said, but “if we fast forward another five or ten years, what does that look like? Is it more jobs or less? That one I’m uncertain on.”

Other experts see AI as a break with history, not just a faster version of it

Tech leaders and AI researchers have also drawn their own historical parallels — some dire, others more optimistic — to try to predict how far AI could reshape or eliminate jobs.

Adam Dorr, director of research at the RethinkX think tank, told The Guardian in July that AI and robotics could make jobs obsolete by 2045, comparing workers to horses in the age of cars, or traditional cameras in the age of digital photography.

Other experts are more cautious in drawing straight lines from the past.

Ethan Mollick, an entrepreneurship and innovation professor at the University of Pennsylvania’s Wharton School, told NPR in 2023 that technology often boosts productivity and, over time, creates better jobs. “It’s possible that in the end, we get better jobs, but in the short term, there’s a lot of disruption,” he said.

There are also cases where automation actually expanded employment.

In 2023, Lindsey Raymond, then a Ph.D. candidate at the MIT Sloan School of Management, pointed to the invention of the cotton gin in the late 1700s, which made production so much cheaper that demand soared and overall employment grew. She warned, however, that a similar dynamic in customer service could mean more competition and lower wages.

Historians warned that disruption doesn’t always balance out. In 2023, Brian Merchant, author of “Blood in the Machine: The Origins of the Rebellion Against Big Tech,” compared today’s moment to the Luddite rebellions of the early 1800s, when clothworkers smashed the weaving machines that devalued their craft.

The Industrial Revolution didn’t erase jobs outright, he said, but often replaced skilled, well-paid roles with dangerous, low-wage factory work. “Just about everyone loses except the factory owners,” he said.






Could AI nursing robots help healthcare staffing shortages?


Around the world, health care workers are in short supply, with a shortage of 4.5 million nurses expected by 2030, according to the World Health Organization (WHO).

Nurses are already feeling the pressure: around one-third of nurses globally are experiencing burnout symptoms, like emotional exhaustion, and the profession has a high turnover rate.

That’s where Nurabot comes in. The autonomous, AI-powered nursing robot is designed to help nurses with repetitive or physically demanding tasks, such as delivering medication or guiding patients around the ward.

According to Foxconn, the Taiwanese multinational behind Nurabot, the humanoid can reduce nurses’ workload by up to 30%.

“This is not a replacement of nurses, but more like accomplishing a mission together,” says Alice Lin, director of user design at Foxconn, also known as Hon Hai Technology Group in Taiwan.

By taking on repetitive tasks, Nurabot frees up nurses for “tasks that really need them, such as taking care of the patients and making judgment calls on the patient’s conditions, based on their professional experience,” Lin told CNN in a video call.

Nurabot, which took just 10 months to develop, has been undergoing testing at a hospital in Taiwan since April 2025 — and now, the company is readying the robot for commercial launch early next year. Foxconn does not currently have an estimate for its retail price.

Foxconn partnered with Japanese robotics company Kawasaki Heavy Industries to build Nurabot’s hardware.

The firm adapted Kawasaki’s “Nyokkey” service robot model, which moves around autonomously on wheels, uses its two robotic arms to lift and hold items, and has multiple cameras and sensors to help it recognize its surroundings.

Based on its initial research on nurses’ daily routines and pain points — such as walking long distances across the ward to deliver samples — Foxconn added features, like a space to safely store bottles and vials.

The robot uses Foxconn’s Chinese-language large language model for communication, while US tech giant NVIDIA provided Nurabot’s core AI and robotics infrastructure. NVIDIA says it combined multiple proprietary AI platforms to create Nurabot’s programming, which enables the bot to navigate the hospital independently, schedule tasks, and react to verbal and physical cues.

AI was also used to train and test the robot in a virtual version of the hospital, which Foxconn says helped its speedy development.

AI allows Nurabot to “perceive, reason, and act in a more human-like way” and adapt its behavior “based on the specific patient, context, and situation,” David Niewolny, director of business development for health care and medical at NVIDIA, told CNN in an email.

Staffing shortages aren’t the only issue facing the health care sector.

The world’s elderly population is growing rapidly: the number of people aged 60 and over is expected to increase by 40% by 2030, compared to 2019, according to the WHO. By the mid-2030s, the UN predicts, people aged 80 and older will outnumber infants.

Over the past decade, the number of health care workers has steadily increased, but not fast enough to keep pace with population growth and aging. Southeast Asia is expected to be among the regions worst affected by health care workforce shortages.

With these impending stressors on the health care system, AI-enhanced systems can provide huge time and cost savings, says nursing and public health professor Rick Kwan, associate dean at Tung Wah College in Hong Kong.

“AI-assisted robots can really replace some repetitive work, and save lots of manpower,” says Kwan.


There will be challenges, though: Kwan highlights patient preference for human interaction and the need for infrastructure changes in hospitals.

“You can look at the hospitals in Hong Kong: very crowded and everywhere is very narrow, so it doesn’t really allow robots to travel around,” says Kwan. Hospitals are designed around human needs and systems; if robots are to become central to the workflow, hospital design will need to be reimagined, he adds.

Safety is also paramount, says Kwan — not just in terms of mitigating physical risks, but the development of ethical and data protection protocols, too — and he encourages a slow and cautious approach that allows for rigorous testing and assessment.

Robots are not entirely new to health care: surgical robots, like da Vinci, have been around for decades and help improve accuracy during operations.

But increasingly, free-moving humanoids are assisting hospital staff and patients.

In Singapore, Changi General Hospital currently has more than 80 robots helping doctors and nurses with everything from administrative work to medicine delivery.


And in the US, nearly 100 “Moxi” autonomous health care bots, built by Texas-based Diligent Robotics with NVIDIA’s AI platforms, carry medications, samples, and supplies across hospital wards, according to NVIDIA.

But the jury is still out on how helpful nursing robots are to staff. A recent review of robots in nursing found that, while there was a perception among nurses of increased efficiency and reduced workload, there is a lack of experiential evidence to confirm this — and technical malfunctions, communication difficulties and the need for ongoing training all presented challenges.

Tech companies are investing heavily in health care: in addition to NVIDIA, the likes of Amazon and Google are exploring new opportunities in the $9.8 trillion health care market.

The smart hospital sector is a small but rapidly expanding component of this: it was valued at an estimated $72.24 billion in 2025, according to market research company Mordor Intelligence, with the Asia-Pacific region the fastest-growing market.

Nurabot is currently being piloted in Taichung Veterans General Hospital in Taiwan, on a ward that treats diseases associated with the lungs, face and neck, including lung cancer and asthma.

During this experimental phase, the robot has limited access to the hospital’s data system, and Foxconn is “stress testing” its functionality on the ward. This includes tracking metrics like the reduction in nurses’ walking distance and delivery accuracy, as well as gathering qualitative feedback from patients and nurses. Early results indicate that Nurabot is reducing the daily nursing workload by around 20–30%, according to Foxconn.

Taichung Veterans General Hospital declined to comment on Nurabot for this story.

According to Lin, Nurabot will be formally integrated into daily nursing operations later this year, including connecting to the hospital information system and running tasks autonomously, ahead of its commercial debut in early 2026.

While Nurabot won’t solve the lack of nurses, Lin says it can help “alleviate the problems caused by an aging society, and hospitals losing talent.”






FTC Inquiry into AI Chatbot Companions: What It Means for Business Owners


What’s Happening

The U.S. Federal Trade Commission (FTC) has launched a formal inquiry into companies that build AI “chatbot companions,” including major players like Meta, OpenAI, Alphabet (Google), Snap, xAI, Character.AI, and Instagram.

The focus is on how these companies measure potential harms and test and monitor safety, especially for minors and teens; how they monetize user engagement; what safeguards are in place; and how transparent they are with users (and parents) about risks.

Why It Matters for Business Owners

If you run, or are considering running, a business in or near the AI chatbot or digital-companion space, or even if you just use chatbots in customer service, this inquiry could have ripple effects. Below are the areas of impact, risks, and opportunities.

Potential Risks

1. Regulatory Compliance Costs Increase

Companies may need to invest more in safety, monitoring, auditing, and reporting systems, especially for products that reach minors. If your business deals with chatbots, you may need legal advice, safety engineering, and privacy consulting, and those costs add up.

2. Stricter Legal Liability

If a chatbot with insufficient safeguards causes harm (e.g., by giving bad advice, misleading users, or causing emotional distress), the business could face lawsuits, regulatory penalties, or demands for recalls or modifications.

3. Transparency & Parental Controls Requirements

The FTC is demanding disclosures about how chatbots work, what data they collect, and how they are monetized. Businesses will likely need to inform users more clearly, and parents when children are involved. Failing to do so could be treated as a deceptive or unfair practice.

4. Limitations on Monetization Models

Features that drive engagement through addiction-like loops or reward mechanics, or that exploit emotional connection, might come under scrutiny. Business models that rely heavily on capturing attention via “companionship” features may need to be retooled.

5. Potential State-Level Regulation

Regulation isn’t just federal: a California bill (SB 243) that seeks to regulate AI companion chatbots (definitions, safety, reporting, liability) is already moving through the legislature. If state laws differ, compliance could get complex, especially for businesses operating in multiple states.

6. Reputational Risks

If your AI or chatbot product is tied to negative news (misinformation, emotional harm, misuse), that could damage your brand, trust, and sales. Consumers are increasingly sensitive to whether AI is ethical and safe.

Opportunities & Advantages

1. Competitive Edge for Responsible Providers

Businesses that proactively build in safety, transparency, parental controls, and ethical design will likely win trust. If regulation is coming, being ahead means lower friction later.

2. New Value-Added Features

Products that clearly document how they protect users, especially minors; provide opt-in/opt-out or adjustable safety settings; or use tools to detect distress or emotional risk may appeal more to consumers and business clients.

3. Partnerships & Certifications

Third-party certifications or audits for AI safety may emerge. Businesses could market “compliant chatbot” status as a differentiator.

4. Tailoring Services for Specific Demographics

Given the scrutiny around minors and young users, there is opportunity in designing chatbots strictly for adult use, or specialized chatbots with heightened safeguards for children (education, health, wellness, etc.), which may become a regulated niche.

5. Product Innovation around Safety Tools

There is likely to be demand for technologies that help with moderation, harmful-content detection, interaction management, logging, and analytics around user emotional state. Businesses developing those tools could see growth.

What Business Owners Should Be Doing Now

Audit Existing Chatbot / AI Use: If you already use conversational AI (customer service, chat companions, virtual assistants), evaluate how safe and responsible the design is. Are there loopholes that could be abused?

Document Safety Protocols: Start or update policies covering how chatbots are trained and monitored, how they respond (especially to sensitive topics), what safety escalations exist, and how user data is handled.

Be Transparent: Make your terms, privacy policies, and user disclosures clear—especially if minors may use the service. Ensure that what you advertise matches how your system behaves.

Plan for Data Handling & Privacy: What user inputs do you collect? What do you do with them? How are they stored, shared, and monetized? Regulations like COPPA (for children), FTC standards, and state laws all matter.

Monitor Regulatory Landscape: Federal inquiries often lead to new rules or laws. State laws like California SB 243 are moving fast. Being aware means you can adapt early.

Insurance and Legal Advice: Talk with legal counsel about risk mitigation, and possibly about insurance for AI-related risks.

Broader Implications

Increased Oversight is Coming: What the FTC is doing now is an information request (under its Section 6(b) authority) to gather data. But this often leads to reports, recommendations, and possibly regulations or legal actions. Businesses in AI must expect more oversight soon.

Consumer Expectations Shift: As media and regulators highlight stories of harm (teen suicides allegedly linked to chatbot advice, etc.), consumers will expect safer, more ethical technology. Brands ignoring this may pay a reputational price.

Cost of Non-Compliance Might Rise Sharply: If regulations mandate certain safety features, or impose fines or damages, businesses slow to adapt may suffer financial consequences.

Example Scenarios

A startup building an AI companion app for teens will probably need to build in parental insight tools, define age-targeted conversation limits, monitor for self-harm or suicidal ideation, and have protocols for escalation.

A business deploying chatbots for customer service could be affected if the bot interacts with younger audience members, directly or indirectly; it may need to add disclaimers or restrict certain topics.

A company monetizing via in-chat purchases, emotional engagement loops, or advertising via bots may have to rework monetization models to avoid regulatory risks.

The Bottom Line

The FTC’s inquiry isn’t just about Meta, OpenAI, or large tech giants—it signals a shifting regulatory and societal expectation around AI that business owners cannot ignore. Whether you are building AI products, using chatbots for operations, or even just planning future investments, the message is: safety, transparency, and user wellbeing are rapidly becoming not just ethical concerns, but business imperatives.





Telekom MMS Study Reveals Consumer Trust and Business Readiness for AI Agents



AI agents, programs that perform tasks independently, are becoming the next big technology trend, with 39 percent of consumers already using them or considering doing so. At the same time, 75 percent of the IT decision-makers surveyed expect customers to increasingly use AI agents for certain service issues in the future. So far, however, only around a third of the consumers who already work with AI agents use them specifically for customer service and contract management.


This is the result of a survey conducted by the opinion research institute YouGov on behalf of Telekom MMS, entitled “Trust in AI Agents: What Customers Really Want – and What Companies Should Make of It.”


From the companies’ point of view, new business models around AI agents are still emerging: 37 percent plan to expand existing products with integrated AI services, and 24 percent are considering AI-supported after-sales services as a subscription model. On the consumer side, however, willingness to pay is low: 71 percent would only use free offerings.


Companies are still in the early stages with AI agents


The survey also reveals companies’ current level of maturity: 32 percent of the IT decision-makers surveyed say that their company has not yet dealt with the topic of AI agents in customer service. Only 12 percent already have agent-based solutions actively in use.


Transparency and control as central requirements


Requirements for AI agents in customer service differ significantly between consumers and companies: for 45 percent of consumers, the traceability of AI decisions is crucial, and 37 percent also want the option to take over tasks themselves again at any time. On the corporate side, legal certainty is the top priority – 45 percent of IT decision-makers cite it as an important prerequisite for the successful use of AI agents. Consumers’ biggest concerns are potential wrong decisions (45 percent) and the disclosure of sensitive data (43 percent).


Translations and everyday organization are the main areas of use


The study shows clear preferences in the areas of application: consumers currently use AI agents primarily for translations (56 percent), everyday organization (44 percent), and customer service (32 percent). For more than a third of the users surveyed (38 percent), saving time is the most important advantage. 54 percent of users find it pleasant when AI agents seem particularly human, while only 16 percent find this unpleasant. Users are also flexible in how they interact with AI agents: 43 percent have no preferred input method and want to use both text and voice input depending on the situation.


The complete study, “Trust in AI Agents – What Customers Really Want – and What Companies Should Make of It,” is now available for download.


Methodology


The study was conducted by YouGov on behalf of Telekom MMS. It is based on two online surveys: between July 14 and 18, 2025, 1,020 consumers in Germany were surveyed in a representative sample. The sample was quota-based by age, gender, and Nielsen region, and the results were then weighted accordingly.


In addition, 162 IT decision-makers from German companies took part in an online survey between July 15 and 21, 2025.


Ralf Pechmann, Managing Director at Telekom MMS:


“For companies, this means that they have to make the added value of their AI-supported services particularly clear in order to promote acceptance and willingness to pay. The study shows that we are in an early adoption phase. To successfully deploy AI agents, organizations need to prioritize visibility, control, and privacy. This is the only way to create the trust among the population that is necessary for broad acceptance.”


