
Ways to help workers suffering from AI-related job losses


Job losses are starting to accelerate due to AI, robotics, and automation. Anthropic CEO Dario Amodei warns that AI could cut half of all entry-level white-collar jobs in the coming years. A McKinsey report finds that “current gen AI and other technologies have the potential to automate work activities that absorb up to 70 percent of employees’ time today.” A Wall Street Journal article also cites leading CEOs who claim “AI will wipe out jobs.” OpenAI CEO Sam Altman has ominously written that the “2030s are likely going to be wildly different from any time that has come before.” 

AI’s impact on the workforce is hard to forecast because so many factors affect economic projections. There are uncertainties concerning trade imbalances, exchange rates, technology deployment rates, shifts in business models, political developments, geopolitical forces, and CEO proclivities toward their own workers. Former Harvard University President Larry Summers once quipped that predictions are so fraught with error that prognosticators should never use a date and a prediction in the same sentence.

But given the magnitude of the phenomenon, it is important to think about ways to help workers who are likely to bear the impact of AI-related job losses. As I argue in my Brookings Press book, “The Future of Work: Robots, AI, and Automation,” several policy reforms would help people navigate the transition to a digital economy. This piece offers an overview of them.

Encourage companies to retrain workers 

Businesses are on the frontlines of worker layoffs and have some responsibility for retraining employees. They need to finance job reskilling and upskilling for their employees so those individuals don’t get left behind. Governments should consider expanding tax credits for businesses that retrain laid-off employees, as these investments benefit the entire society. It is not in the country’s national interest for large numbers of people to become part of a permanent underclass struggling and failing to pay their monthly expenses despite their best efforts. Replacing these workers with AI would doom many individuals to a dismal financial future and create a host of social, economic, and political problems, many of them unique to AI’s role in worker displacement. 

Make health benefits portable 

It is vital that workers retain access to health insurance if they lose their jobs. Since much of American healthcare is handled through employers, job-related disruptions are especially harmful for workers. They may lose access to insurance or be forced to shift to another medical network that does not include their current healthcare providers. One of the benefits of the current Affordable Care Act exchanges is healthcare coverage offerings for those between jobs. Given the increasing threat of AI-related job losses, policymakers should ensure these mechanisms remain strong in safeguarding American workers’ access to healthcare as job disruptions spread. 

Reduce vesting requirements for retirement benefits 

Many organizations have lengthy vesting requirements, sometimes spanning three to six years, before people become eligible to get employer matches for retirement and keep that money. With the increasing rates of “job churn” due to AI and automation, people may end up working for many different organizations in a span of a few years. Lengthy vesting requirements will hurt workers who lose vesting time at each job transition, substantially jeopardizing their financial futures post-retirement. Employers should consider reducing vesting time to three or six months to aid those moving from job to job. 

Loosen job licensing requirements 

People who lose their jobs due to AI or automation may want to break into other occupations, but these shifts are often not easy. Many new occupations require specialized training, certification, and testing that can take months or sometimes years to secure. When certification is related to health and safety, these state-level licensing requirements make complete sense. Yet, there are some certifications and testing requirements not linked to health and safety that states should consider loosening to make it easier for people to enter new fields. This is especially true for individuals who are otherwise well-equipped to safely practice in those fields but face limitations due to their nontraditional education or language proficiency. For example, some states require degrees even if the individual has years of experience doing certain tasks. Policymakers could change those requirements to allow educational credentials or work experience to fulfill certification requirements. 

Create worker retraining accounts 

With the advent of AI, to incentivize society-wide upskilling, policymakers should consider creating worker retraining accounts that function similarly to retirement accounts. These accounts would enable workers to use tax-deferred money to pay for job retraining. Such a program would help people gain new skills to keep up with the pace of innovation and navigate the vicissitudes of the digital marketplace. To sustain advancements in American innovation, policymakers should consider creative ways to maintain a digitally apt workforce.

Pay earned income tax credits monthly 

Many individuals receive earned income tax credits each year after they file their income taxes. This allows them to get refunds if they have low incomes or high childcare expenses. Unless people want to make a large purchase or upgrade their homes, an annual payment will not help them with monthly expenses or provide a regular source of income. Making these payments monthly would create flexibility in terms of income support and provide more continuity over time, especially for workers in more vulnerable frontline jobs.  

Clarify independent contractor rules 

There should be clearer rules regarding the classification of workers as independent contractors. Right now, many individuals who work full-time are not classified as full-time employees and are thereby ineligible for health, retirement, or disability insurance benefits. This phenomenon is particularly prominent in the tech industry, which heavily relies on temp workers to perform core functions such as data analysis, content moderation, or administrative tasks. Having clearer rules would ensure that those working full-time get the benefits traditionally associated with full-time employment.

Fund job retraining programs in higher education 

Some community colleges and other institutions of higher education are offering retraining programs for adults who need to upgrade their skills. As a way of assisting with the transition to a digital economy and helping individual workers suffering from job losses, governments should consider financially supporting these programs. Colleges and universities are a vital part of the retraining ecosystem and deserve to be helped as they carry out this important mission.

Make sure laid-off workers have access to high-speed internet 

Twenty-four million Americans still lack access to high-speed internet at home, hindering their ability to utilize digital job boards and resources that help them learn new skills. For instance, many adult retraining programs take place online. To ensure all American workers can access these retraining programs, policymakers must guarantee all Americans have access to high-speed internet. My colleague Nicol Turner Lee has argued in her book, “Digitally Invisible: How the Internet Is Creating the New Underclass,” that a lack of access to high-speed broadband dooms people to poverty and creates huge problems for the individuals affected, as well as the country as a whole. As more job listings, training, and hybrid opportunities migrate online, it is imperative that individuals and employers reduce the barriers to seamless access and help those seeking new positions.

Ensure data-based job evaluations are fair 

Employees are increasingly performing their tasks on computers, generating data analytics known as key performance indicators (KPIs) that can serve as the basis for determining who gets laid off. As hybrid work becomes increasingly common, employees may receive laptops from their employers and take them home, which may inadvertently allow employers access to online activity that takes place on company equipment or networks. Individuals may be downgraded or lose their jobs because of data analytics that portray them unfairly. Business leaders need to make sure these job-related analytics are fair and impartial, do not become unjustifiable intrusions into personal privacy, and give employees clear expectations about their privacy in the hybrid workplace. Employers must also take into account those who successfully integrate AI into their job functions and judge their performance fairly rather than relying solely on traditional productivity metrics.

Mitigate disparate job impacts 

Not every worker is going to share the same workplace experiences. There are known disparities by race, gender, age, disability, immigration status, and veteran status, among other characteristics. Some individuals receive visas linked to a specific job or company, and encounter problems when they are laid off. Policymakers should make sure the impact of AI-related job losses does not fall disproportionately on those least able to retrain themselves, get new jobs, or obtain appropriate visas. Otherwise, there will be disparate impacts on marginalized populations that guarantee a poor economic future. 

Consider a four-day workweek as workers become more efficient and productive 

As workers become more efficient and productive due to AI and other digital tools, companies should share the profits with them by considering four-day workweeks. Some employers have moved in this direction and have found positive results. One software company saw a 130% increase in revenue in addition to fewer sick days. Forward-looking companies should experiment with novel methods of work management and reward employees whose hard work and efficient performance improve the bottom line. 

These and other recommendations require careful consideration by policymakers, industries, and employees, so that the AI transition can succeed with fewer casualties in the workplace and the broader labor market.




Quantum science: Rewriting the future of physics, AI and tech


Quantum science is one of today’s most talked-about fields, full of buzz and seemingly limitless potential to reshape how we understand the world — and what technology can achieve. Encompassing subfields such as quantum information science and quantum mechanics, it is a subject more people have heard of than can explain, often surrounded by bold claims, from floating, earthquake-proof cities to making time travel possible.

But for Anastasia Pipi, the focus remains grounded in real science rather than in science fiction. Growing up in Cyprus, Pipi was always fascinated by physics. But explaining her desire to make it a career was sometimes a challenge.

“Physics didn’t seem like a common career path among the people I knew; many saw it as limiting,” she said. “But I was naturally drawn to it — it just made sense to me. I knew that pursuing it could open many more doors.”

Excelling in science throughout high school, Pipi was captivated by her first physics class, where her teacher kindled her curiosity by opening each chapter with deceptively simple questions — such as how an object would move in the vacuum of space — inviting students to reason from first principles before they had learned the formal laws.

Intrigued by the challenge of theorizing about the unknown and driven by a love for math, she went on to study mathematical physics at the University of Edinburgh, where she was first introduced to quantum science.

Eager to innovate in a cutting-edge field, she traveled to the U.S. to join UCLA’s master’s program in quantum science and technology, or MQST.

“I was excited that UCLA offered opportunities to explore not only theory, but also the computational and experimental sides,” Pipi said. “It was a great way to learn how to apply my skills in practice — and it was incredibly motivating to see everyone here pushing boundaries at such an inspiring, accelerated pace.”

Anastasia Pipi. Image: Roger Lee/UCLA

What is quantum science?

The power of quantum, Pipi says, lies in its ability to revolutionize secure communication, offering unprecedented protection for sensitive data in an increasingly digital world; to tackle complex pharmaceutical challenges such as personalized medicine and targeted drug design; and to explore fundamental questions in physics, from the nature of gravity to the mystery of dark matter and beyond.

Still, she emphasizes that the foremost goal — both for her and her colleagues — is to solve the practical challenges that stand in the way of making quantum technologies truly viable.

“When we think about the future of quantum, it’s easy to get swept up in the hype,” she said. “But the real excitement lies in the tangible, transformative progress we’re making — even if it comes with big challenges.”

But what, exactly, is quantum?

“In a nutshell, quantum physics is our framework for understanding nature at the smallest scales,” Pipi said. “While Newtonian physics helps us make sense of things like planetary motion or how a ball rolls across the floor, those laws break down when we look at microscopic particles. The behavior of something like an electron is probabilistic — instead of tracing a neat, predictable path, we can only calculate the likelihood of where it might be at any given time.”
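
To put that in formula terms (an illustration added here, not part of Pipi’s remarks): in quantum mechanics the state of a particle is described by a wavefunction, and the Born rule turns it into the kind of likelihood she describes.

```latex
% Born rule (illustrative): the probability of finding a particle
% between positions a and b is the integrated squared magnitude of
% its wavefunction \psi(x,t), which is normalized to total probability 1.
P(a \le x \le b) = \int_{a}^{b} |\psi(x,t)|^2 \, dx,
\qquad
\int_{-\infty}^{\infty} |\psi(x,t)|^2 \, dx = 1.
```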

Pipi’s scientific curiosity and drive to explore the potential of quantum technologies made her a natural fit for UCLA’s MQST program.

“Anastasia was a standout member of our inaugural cohort and represents exactly the type of student our program was designed for,” said Richard Ross, MQST program director. “She showed an impressive aptitude and curiosity for this interdisciplinary field and is well prepared to make her mark in it.”

Bringing research to life with Nvidia, Caltech and more

Pipi’s time at UCLA was so rewarding that she stayed on after earning her MQST degree to pursue a doctorate in physics under the mentorship of Professor Prineha Narang, a leader in physical sciences and electrical and computer engineering. With Narang’s guidance, Pipi is advancing research at the intersection of fundamental physics and emerging technology, developing quantum control methods powered by artificial intelligence in atomic, molecular and optical systems, in collaboration with scientists at Caltech and the technology company Nvidia.

As she looks beyond her graduation, Pipi is eager to deepen her work on developing computational tools that can help make quantum technologies more practical and scalable. In the meantime, she’s fully embraced life on and off campus, steadily building her international profile as a researcher. In addition to presenting her work on quantum logic spectroscopy as a lead author at the American Physical Society, she traveled to Denmark earlier this year to attend the prestigious AI4Quantum: Accelerating Quantum Computing with AI conference, organized by the global health care company Novo Nordisk.

But Pipi’s interests extend far outside the lab. A certified open-water diver, she is also passionate about ballet, piano and snow skiing. She sees creativity not as separate from science, but as an essential part of it — a perspective that continues to shape her approach to research and life as she explores new and exciting horizons.

“Physics offers a unique outlet for creativity,” she said. “Science is an art form where imagination can be just as important as logic.”


 


Chief Technology Officer Ahmet Kayıran talks how RNV.ai manages retail in real-time — Retail Technology Innovation Hub


Q: “Collecting data for efficiency isn’t enough, you must translate it into the system’s language.” How do you enable this transformation for brands? How do you overcome resistance in transitioning from manual to automated systems?

A: Actually, for brands, the real challenge is not gathering data – it’s transforming data into a decision-ready language. Typically, data lives outside systems – in spreadsheets, emails, field notes… When data is recorded, it’s easy to systematise, but many insights are internally processed by individuals and not formally documented.

So we begin by focusing on both recorded and informal data, then plan how to formalise that data. In this process we map data sources, note frequency, and establish a data ownership framework. Then we convert this data into a mathematical language the system can understand: normalising, labelling, building relational structures. Finally, we process it through our models and connect it with decision-makers – augmenting workflows as decision support and expert systems.
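
To make that concrete, here is a minimal, hypothetical sketch of that kind of formalisation step: taking spreadsheet-style rows and informal field notes and turning them into normalised, labelled records a model can consume. The field names, labels and rules are illustrative, not RNV.ai’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class DemandRecord:
    store_id: str
    sku: str
    units_per_day: float   # normalised from weekly spreadsheet figures
    note_label: str        # informal field note mapped to a known category

# Illustrative mapping from free-text field notes to labels the models understand.
NOTE_LABELS = {
    "shelf empty by noon": "stockout_risk",
    "promo running": "promotion",
    "slow week": "soft_demand",
}

def formalise(raw_rows):
    """Convert spreadsheet-style rows into normalised, labelled records."""
    records = []
    for row in raw_rows:
        records.append(DemandRecord(
            store_id=row["store"].strip().upper(),
            sku=row["sku"].strip(),
            units_per_day=float(row["weekly_units"]) / 7.0,
            note_label=NOTE_LABELS.get(row.get("note", "").strip().lower(), "unlabelled"),
        ))
    return records

print(formalise([{"store": " s01 ", "sku": "A-100", "weekly_units": "140", "note": "Promo running"}]))
```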

When moving from manual to automated systems, resistance often arises because users fear losing control. That’s why we design automation to assist, not replace humans. Our recommendation systems also explain the reasons behind decisions. Users can see not only what should be done but why. As trust grows, resistance fades and turns into collaborative engagement.

Q: Near-future demand forecasting is increasingly important. How do your AI-enabled systems predict the immediate future? How often do they update? How do they adapt?

A: Merely looking at historical data or knowing “what’s happening today” is now insufficient. We need to anticipate tomorrow.

In our systems, near-future forecasts run not just on past data but on real-time behavioral signals, market pulse, local shifts, pricing and promotional inputs. For example, when a product’s turnover rate changes in a store, it’s interpreted not just as “low stock,” but as a “change in demand pattern” signal.
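
A minimal sketch of how such a signal might be separated from a plain low-stock flag, with made-up thresholds; the actual detection logic RNV.ai uses is not described in the interview.

```python
def classify_signal(daily_sales, on_hand, baseline_rate):
    """Interpret a change in sell-through: plain low stock vs. a shift in demand pattern.

    daily_sales:    recent units sold per day
    on_hand:        current inventory in units
    baseline_rate:  historical average units sold per day
    The thresholds below are purely illustrative.
    """
    signals = []
    if on_hand < 2 * daily_sales:                        # less than ~2 days of cover left
        signals.append("low_stock")
    if abs(daily_sales - baseline_rate) > 0.3 * baseline_rate:
        signals.append("demand_pattern_change")          # re-forecast, not just reorder
    return signals or ["normal"]

print(classify_signal(daily_sales=26, on_hand=40, baseline_rate=15))
# ['low_stock', 'demand_pattern_change']
```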

We monitor such changes daily, not weekly, because missing a week in retail means missing a season. Updates involve not just retraining but context-specific shifts: models reprioritise variables and adjust feature importance.

We don’t use AI only to forecast based on historical data – we complement forecasting algorithms with optimisation tools that adapt to uncertain environments, offer scenario-based modeling, and propose solution sets satisfying all possible outcomes.
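
One simple way to read that scenario-based idea (a sketch under assumed cost figures, not RNV.ai’s algorithm) is to choose an order quantity that performs acceptably across every plausible demand outcome, for instance by minimising the worst-case cost:

```python
def robust_order_quantity(scenarios, unit_cost=1.0, shortage_penalty=3.0,
                          candidates=range(0, 201, 10)):
    """Pick the order quantity with the lowest worst-case cost over demand scenarios.

    scenarios: possible demand outcomes (units); leftover stock costs unit_cost per
    unit, unmet demand costs shortage_penalty per unit. All figures are illustrative.
    """
    def cost(q, demand):
        leftover = max(q - demand, 0)
        shortfall = max(demand - q, 0)
        return unit_cost * leftover + shortage_penalty * shortfall

    return min(candidates, key=lambda q: max(cost(q, d) for d in scenarios))

# Demand could land anywhere between a weak and a strong season.
print(robust_order_quantity(scenarios=[60, 90, 150]))  # 130 under these assumptions
```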

Q: Many chains still rely on regional managers’ intuition for ordering. How should efficiency and intuitive decisions be balanced? How can technology optimise this?

A: It’s a very real situation. Many large chains still make order decisions based on “I know that region.” But the real question is: knowing versus feeling. Experience is certainly valuable, but if it isn’t systematic, it’s not sustainable.

We don’t replace intuition – we strengthen it with data. For instance, when the system generates an order recommendation, it tells the user: “This recommendation worked previously on this specific behavior.” So decision-making isn’t just about numbers – it has context and narrative.

Technology here strikes a balance: it doesn’t exclude intuition but makes it measurable and testable. Users sometimes override the system; we record and feed those interventions back. Thus the system learns over time, enabling both efficiency and expert insight to coexist.
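
A toy sketch of that feedback loop, recording the cases where a user overrides the recommendation so the deviation and its stated reason can be fed back into later model updates; the file layout and fields are hypothetical.

```python
import csv
import datetime

def log_override(path, store_id, sku, recommended_qty, ordered_qty, reason):
    """Append a record of a human override so it can be fed back into model updates."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.date.today().isoformat(),
            store_id, sku, recommended_qty, ordered_qty, reason,
        ])

# A regional manager trusts local knowledge over the system's recommendation.
log_override("overrides.csv", "S01", "A-100",
             recommended_qty=80, ordered_qty=120,
             reason="local festival expected to lift demand")
```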

Q: Which KPIs do you recommend retailers track to measure the benefits from your systems? For example: stock-out time, shrinkage rate, product availability score?

A: At RNV.ai, we go beyond delivering forecasting accuracy. We also observe how forecast accuracy impacts corporate culture, operations, and profitability – crucial both for clarifying ROI and making AI’s real effect visible.

We track metrics across operational, financial, and decision-quality dimensions: stock holding time, inventory turnover, stock-out rate, product availability, etc. Plus, our self-service BI tools allow end users to create their own data sets and reports.
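
For readers who want to see how two of those metrics are commonly computed, here is an illustrative calculation of inventory turnover and a simple product availability score; these are standard retail formulas, not an RNV.ai specification.

```python
def inventory_turnover(cost_of_goods_sold, avg_inventory_value):
    """How many times inventory is sold and replaced over the period."""
    return cost_of_goods_sold / avg_inventory_value

def availability_score(days_in_stock, days_in_period):
    """Share of the period during which the product was actually on the shelf."""
    return days_in_stock / days_in_period

print(inventory_turnover(cost_of_goods_sold=600_000, avg_inventory_value=120_000))  # 5.0
print(availability_score(days_in_stock=27, days_in_period=30))                      # 0.9
```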

Q: As summer 2025 begins, which product groups see the most forecasting errors? How do demand forecasting systems adapt to such seasonal fluctuations?

A: The year 2025 has been a period when retail has been more sensitive than ever to macroeconomic factors. Consumer purchasing behavior changed significantly – decisions once made easily became delayed and scrutinised.

Special holiday promotions underperformed, and campaigns no longer drew the same reaction. It wasn’t just economic slowdown – nature-driven factors also challenged retailers: for instance, a delayed summer season or regionally extended heat waves led to large deviations in seasonal launch timing.

These changes present serious problems for traditional forecasting systems, which still rely on old behaviour patterns – leading to underperformance. We address these issues with dynamic forecast adaptation. When the gap between forecasts and actual sales for certain product groups becomes meaningful, models are retrained with different feature sets.

Declines are interpreted via causality-based algorithms, and feature weightings are adjusted accordingly. As a result, I can confidently say: in this period, the most successful brands aren’t those with the highest accuracy – they are those that adapt fastest. RNV.ai systems are designed for exactly this flexibility. We read changes, recognise signals, and recalculate recommendations.
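
A minimal sketch of the kind of retraining trigger described above, flagging a product group for re-fitting when the gap between forecast and actual sales becomes meaningful; the error measure and threshold are placeholders, not RNV.ai’s production values.

```python
def needs_retraining(forecasts, actuals, tolerance=0.20):
    """Flag a product group when mean absolute percentage error exceeds a tolerance."""
    errors = [abs(f - a) / a for f, a in zip(forecasts, actuals) if a > 0]
    mape = sum(errors) / len(errors)
    return mape > tolerance, mape

retrain, mape = needs_retraining(forecasts=[100, 120, 90], actuals=[70, 95, 110])
print(retrain, round(mape, 2))  # True 0.29 -> schedule a retrain with a revised feature set
```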




TGA ‘stepping up’ regulation of AI scribes in healthcare | Information Age


Australia’s Therapeutic Goods Administration (TGA) says it is “stepping up its efforts” to regulate digital scribes, including those using artificial intelligence technology, following calls for greater oversight of the software as it becomes increasingly prevalent in healthcare settings.

AI scribes typically use large language models (LLMs) to quickly transcribe and summarise discussions between patients and healthcare practitioners.

Some systems are also able to suggest potential treatments, write referral letters, make follow-up phone calls, propose billing opportunities, and draft healthcare plans.

Experts have called for advanced AI scribes to be regulated as medical devices by the TGA, given the sensitive data they handle and their ability to make medical recommendations, as Information Age reported in August.

The TGA announced on Friday it was reviewing AI scribes amid concerns some systems were introducing features “such as diagnostic and treatment suggestions”, which may need to be considered medical devices and thereby formally regulated before being sold or advertised.

While most AI scribes do not include such features yet, a TGA review published in July found that software which did propose diagnoses or treatment options was “potentially being supplied in breach of the [Therapeutic Goods] Act”.

The TGA had begun responding to complaints and reports of non-compliance, it said, while also addressing “unlawful advertising and supply” of some AI scribes.

“We may take targeted action in response to alleged non-compliance,” the regulator said.

The TGA did not comment on how many complaints or reports of non-compliance it had received.

The regulator said it encouraged consumers to report concerns through its website.

Australian firms welcome regulatory scrutiny

Australian companies such as Heidi Health and Lyrebird Health have seen significant success in the AI scribe industry, amid competition from smaller providers such as i-scribe and mAIscribe — all of which were contacted for comment.

Heidi Health co-founder and CEO Dr Thomas Kelly told Information Age his team “welcome the TGA’s sharpened compliance focus”.

While Kelly said Heidi did not currently have features which would render it a medical device under the TGA definition, he said the firm would engage with the regulator “if we ever introduce features that give Heidi a therapeutic purpose”.

“Proportionate, risk‑based enforcement protects patients and ensures a level playing field for responsible developers,” he said.



Australian AI scribe company Heidi Health says its software does not yet meet the definition of a medical device. Image: Heidi Health / YouTube

Akuru, the health tech company behind i-scribe, said while its product also did not meet the definition of a medical device, it welcomed “the regulator’s latest focus on taking targeted action” against unregulated products which provided diagnostic advice or treatment suggestions.

“We know the scribing market is crowded, and unfortunately, some solutions do cross that line,” Akuru medical director Dr Emily Powell said in a statement.

“… We welcome wider regulatory guidance that empowers clinicians to make informed decisions about secure, compliant, and appropriate software.”

Adoption ‘running ahead of governance’

University of Queensland associate professor of business information systems Dr Saeed Akhlaghpour, who has studied the use of AI scribes in healthcare, described the TGA’s focus on such technology as “a positive move” given their increasing use.

“Bringing AI scribes under safety and medical-device rules gives patients greater peace of mind, reduces legal uncertainty for clinicians, and offers vendors a clearer, more predictable path to compliance,” he said.

“The reality is that adoption is already running ahead of governance — industry surveys suggest nearly half of Australian doctors are already using, or planning to use, AI scribes.

“That scale of uptake makes timely guardrails essential now, not later.”

Regulators in the United States, European Union, and United Kingdom were also “moving to treat scribe tools that go beyond transcription as clinical technologies, not just productivity aids”, Akhlaghpour said.



Akuru, the Australian company behind i-scribe, says it welcomes the TGA’s scrutiny of digital scribe software. Image: i-scribe / Supplied

Expert calls for ongoing reviews

Australian AI governance expert Dr Kobi Leins — who last month told Information Age she was turned away from a medical practice after not consenting to its use of an AI scribe — said ongoing industry-wide expert reviews and training were needed to maintain public confidence.

“Where the data goes and how it is collected and stored is critical, as implications may be profound if shared with insurers, employers, or others — and in the case of genetic and family related health, may have implications for family members not present,” she said.

Ongoing reviews of AI scribes needed to be triggered when systems were “modified, connected, or repurposed”, said Leins, who called for such reviews to analyse “cybersecurity, AI, ethical, legal, vulnerability, medical and other lenses … to capture the wide range of risks and legal compliance required”.

“Included in that review needs to be a plan for ongoing training of the medical profession as to how to use the tools effectively, including seeking consent and always providing the option to opt out,” she said.

“Ensure independent deep expertise to review, not vendor reviews … and ensure that vendors — like with cybersecurity — have the responsibility to notify of changes to systems to practitioners.”

Healthcare professionals should “regularly assess” digital scribe software before using it, including when software updates may introduce new functionality or change data protection and privacy safeguards, the TGA said in August.

Aside from complying with the TGA’s rules, AI scribes used in healthcare may also need to uphold obligations under laws such as the Privacy Act, Cyber Security Act, and Australian Consumer Law, the regulator added.




