
Tools & Platforms

AI took your job — can retraining help? — Harvard Gazette

Many people worry that AI is going to take their job. But a recent survey conducted by the Federal Reserve Bank of New York found that rather than laying off workers, many AI-adopting firms are retraining their workforces to use the new technology. Yet there’s little research into whether existing job-training programs are helping workers successfully adapt to an evolving labor market.

A new working paper starts to fill that gap. A team of researchers, including doctoral candidate Karen Ni of the Harvard Kennedy School, analyzed worker outcomes after they participated in job-training programs through the U.S. government’s Workforce Innovation and Opportunity Act. Researchers looked at administrative earnings records spanning the quarters before and after workers completed training. Then they analyzed workers’ earnings when transitioning from or into an occupation that was highly “AI-exposed,” a term that refers to the share of an occupation’s tasks that have the potential to be automated, both in the traditional computerization sense and through generative AI technology.

Across the board, the training programs demonstrated a positive impact, with displaced workers seeing increased earnings after entering a new occupation. Still, those gains were smaller for workers who targeted a highly AI-exposed occupation than for those who targeted a low AI-exposed one.

In this edited conversation, Ni explains the role that job-training programs play as AI use is transforming the labor market.


With all the discussion around job displacement and AI, what led you to focus on retraining in particular?

When thinking about the disruptions that a new large-scale technology might have for the labor market, it’s important to understand whether it’s possible for us to help workers who might be displaced by these technologies to transition into other work. So we homed in on, OK, we know that some of these workers are being displaced. Now, what can job training services do for them? Can they improve their job prospects? Can they help them move up in terms of earnings? Is it possible to retrain some of these workers for highly AI-exposed roles?

We wanted to help document the transition and adaptability for these displaced workers, especially those who are lower income. Because then we can think about how we can support these workers, whether it be better investing in these kinds of workforce-development programs or training programs, or adapting those programs to the evolving labor market landscape.

“We wanted to help document the transition and adaptability for these displaced workers, especially those who are lower income.”

What can we learn by looking at data from government workforce development programs?

One of the big advantages of this sample of trainees is that it’s nationwide, and so it’s nationally representative. That allows us to take a broad look at trainees across the entire country and capture a fair bit of heterogeneity in terms of their occupations and backgrounds. For the most part, our sample captures displaced workers who tend to be lower income, making an average of $40,000 a year. Some are making big transitions from one occupation to a completely different one. We also see a fair number of people who end up going into the same types of jobs that they had before. We think those workers are likely trying to develop new skills or credentials that might be helpful to enter back into a similar occupation. Some of these people might be displaced from their occupation because of AI. But the job displacement in this sample could be for any reason, like a regional office shutting down.

Can you provide some examples of high AI-exposed careers versus low AI-exposed careers?

AI exposure refers to the extent of tasks within an occupation that could potentially be completed by a machine or a large language model. Among our sample of job trainees, some of the most common high AI-exposed occupations were customer service representatives, cashiers, and office clerks. On the other end of the spectrum, the lowest AI-exposed workers tended to be manual laborers, such as movers, industrial truck drivers, or packagers.

AI retrainability by occupation

What were your main findings?

We first split workers by the occupation they were displaced from before entering job training: low AI-exposed or high AI-exposed. After training, we find pretty positive earnings returns across the board. However, workers coming from high AI-exposed jobs have, on average, 25 percent lower earnings returns after training compared to workers initially coming from low AI-exposed occupations.

Then we split workers by the jobs they targeted after training: high AI-exposed or low AI-exposed. If you break it down that way, we find that workers generally are better off targeting jobs that are lower AI-exposed compared to workers who are targeting jobs that are more highly AI-exposed. Those who are targeting the high AI-exposed fields tend to face a penalty of 29 percent in terms of earnings, relative to workers who target more general skills training.

Are there any recommendations that displaced workers could take away from those findings?

I would cautiously say our findings seem to suggest that, for these AI-exposed workers going through job-training programs, going for jobs that are less AI-exposed tends to give them a better outcome. That said, the fact that we do see positive returns for all of these groups suggests that there are probably other factors that need to be considered. For instance, what are the specific types of training that they’re receiving? What kinds of skills are they targeting? There’s an immense heterogeneity across the different job-training centers throughout the country, in terms of the quality, intensity, and even the types of occupations that they can offer services for. There’s a lot of potential for future work to consider how those factors might affect outcomes.

Also, in this case, the training program is predominantly serving displaced workers from lower parts of the income distribution. So I don’t think we can generalize across the board and say, “everyone should go do a job-training program.” We were focused on this specific population. 

You also created an AI Retrainability Index to rank occupations that both prepare workers well for jobs that are more AI-exposed and also earn more than their past occupation. What did the index reveal about which occupations are most “retrainable”?

We wanted to have a way of measuring by occupation how retrainable workers are if they were to be displaced. Our index ranking shows that, depending on where they’re starting from, you might have more or less capability of being retrained for highly AI-exposed roles. The only three occupational categories that had a positive index value — meaning that we consider these to be occupations that are highly AI-retrainable — were legal, computation and mathematics, and arts, design, and media. So someone coming from a legal profession is more retrainable for high-paying, high AI-exposed roles than someone coming from, say, a customer service job.

Overall, we found that 25 to 40 percent of occupations are AI retrainable, which, to us, is surprisingly high. You might think that if someone is coming from a lower-wage job, it might be really hard to retrain them for a job that has more AI exposure. But what we found is that there may actually be a large potential for retraining.
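The paper doesn’t spell out the index formula in this conversation, but the idea of scoring an occupation on two dimensions — how readily its workers transition into high AI-exposed roles, and whether they earn more when they do — can be sketched roughly as follows. The weights, field names, and numbers here are illustrative assumptions, not figures from the study:

```python
# Hypothetical sketch of an "AI retrainability" score. It assumes the index
# combines (a) how often trainees from an occupation successfully move into
# high AI-exposed roles and (b) the earnings change they see in those roles.
# All weights and example values below are made up for illustration.

from dataclasses import dataclass

@dataclass
class OccupationStats:
    name: str
    transition_rate: float  # share of trainees landing high AI-exposed jobs (0-1)
    earnings_gain: float    # mean earnings change in those jobs vs. prior occupation

def retrainability_index(occ: OccupationStats,
                         rate_weight: float = 0.5,
                         gain_weight: float = 0.5) -> float:
    """A positive value marks the occupation as 'AI-retrainable': workers can
    plausibly move into high AI-exposed roles and earn more than before."""
    # Center the transition rate so a below-average rate pulls the index negative.
    return rate_weight * (occ.transition_rate - 0.5) + gain_weight * occ.earnings_gain

legal = OccupationStats("legal", transition_rate=0.7, earnings_gain=0.15)
customer_service = OccupationStats("customer service",
                                   transition_rate=0.3, earnings_gain=-0.10)

print(retrainability_index(legal) > 0)             # True: ranks as retrainable
print(retrainability_index(customer_service) > 0)  # False: does not
```

Under this toy scoring, a legal worker’s high transition rate and positive earnings gain yield a positive index, while a customer-service worker’s score lands negative — mirroring the paper’s finding that only a few occupational categories clear the bar.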





Tech giants to pour billions into UK AI. Here’s what we know so far

Microsoft CEO Satya Nadella speaks at Microsoft Build AI Day in Jakarta, Indonesia, on April 30, 2024.

Adek Berry | AFP | Getty Images

LONDON — Microsoft said on Tuesday that it plans to invest $30 billion in artificial intelligence infrastructure in the U.K. by 2028.

The investment includes $15 billion in capital expenditures and $15 billion in its U.K. operations, Microsoft said. The company said the investment would enable it to build the U.K.’s “largest supercomputer,” with more than 23,000 advanced graphics processing units, in partnership with Nscale, a British cloud computing firm.

The spending commitment comes as President Donald Trump embarks on a state visit to Britain. Trump arrived in the U.K. Tuesday evening and is set to be greeted at Windsor Castle on Wednesday by King Charles and Queen Camilla.

During his visit, all eyes are on U.K. Prime Minister Keir Starmer, who is under pressure to bring stability to the country after the exit of Deputy Prime Minister Angela Rayner over a house tax scandal and a major cabinet reshuffle.

On a call with reporters on Tuesday, Microsoft President Brad Smith said his stance on the U.K. has warmed over the years. He previously criticized the country over its attempt in 2023 to block the tech giant’s $69 billion acquisition of video game developer Activision Blizzard. The deal was cleared by the U.K.’s competition regulator later that year.

“I haven’t always been optimistic every single day about the business climate in the U.K.,” Smith said. However, he added, “I am very encouraged by the steps that the government has taken over the last few years.”

“Just a few years ago, this kind of investment would have been inconceivable because of the regulatory climate then and because there just wasn’t the need or demand for this kind of large AI investment,” Smith said.

Starmer and Trump are expected to sign a new deal Wednesday “to unlock investment and collaboration in AI, Quantum, and Nuclear technologies,” the government said in a statement late Tuesday.





Workday previews a dozen AI agents, acquires Sana


After introducing its first AI agents for its HR and financial users last year, Workday returns this year with more prebuilt agents, a data layer for agents to feed analytics systems, and developer tools for custom agents.

The company also said it entered a definitive agreement to acquire Sana, whose AI-based tools enable learning and content creation. Workday said the acquisition will cost $1.1 billion and expects it to close by Jan. 31.

Workday has been on a tear with acquisitions this year. It reached an agreement to buy Paradox, an AI agent builder that automates tasks such as candidate screening, texting and interview scheduling. The deal is expected to close by the end of October. In April, Workday acquired Flowise, an AI agent builder.

HR software, in general, is complex compared with enterprise systems such as CRM, said Josh Bersin, an independent HR technology analyst. Because of that, some HR vendors will have to add agentic AI functionality through acquisition. Workday’s acquisitions this year coincide with the hiring of former SAP S/4HANA and analytics leader Gerrit Kazmaier as its president of product and technology.

“Workday knows that the architecture they have is not going to quickly get them to the world of agents — they can’t build agents fast enough to work across the proprietary workflow system that they have,” Bersin said. “Their direct competitors, SAP and Oracle, are all in the same boat.”

Agents, tools to come

Workday previewed several agents to automate HR work, including the Business Process Copilot Agent, which configures Workday for individual user tasks; Document Intelligence for Contingent Labor Agent, which manages scope of work processes and aligns contracts; Employee Sentiment Agent, which analyzes employee feedback; Job Architecture Agent, which automates job creation, titles and management; and Performance Agent, which surveys data across Workday and assembles it for performance reviews.

Another tool, Case Agent, can potentially be a significant time-saver for HR workers, said Peter Bailis, chief technology officer at Workday. He recently joined the company from Google, where he was an executive working on AI for cloud analytics.

“One of the biggest challenges in HR [is when] an employee has a critical question,” Bailis said. “But their questions are often complex, and processing times for HR departments are often long.”

The case agent can review similar cases in HR, apply the right regional and compliance context, and draft a tailored response for humans to review and deliver.

“The most important part — caring for employees — stays human,” Bailis said.

On the financials side, Workday previewed Cost & Profitability Agent, which enables users to define allocation rules with natural language to derive insights; Financial Close Agent, which automates closing processes; and Financial Test Agent, which analyzes financials to detect fraud and enable compliance. For the education vertical, Workday plans to release Student Administration Agent and Academic Requirements Agent.

Workday also plans agents that bring the functionality of recent acquisitions Paradox and Flowise to its platform.

Expected in the next platform update is the zero-copy Workday Data Cloud, which brings together Workday data with other operational systems such as sales and risk management for analytics, forecasting and planning. Also in the works is Workday Build, a developer platform that includes no-code features from Flowise that enable the creation of custom agents.


How AI will affect HR jobs

The AI transformation that Workday and the rest of the enterprise HR software market are undergoing will likely affect the ratios of HR workers to employees at large businesses, Bersin said.

Currently, many companies aim for an industry standard of one HR employee per 100 employees; with AI agents automating many administrative processes, he said he sees the potential for ratios of 1:200, 1:250, or — in the case of one client that Bersin’s company interviewed — possibly 1:400.

As such, automation will enable companies to do more work with smaller HR teams.

“In recruiting, there are sourcers, screeners, interview schedulers, people that do assessment, people that look at pay, people that write job offers, people that create start dates, people that do onboarding,” Bersin said. “Those jobs, maybe a third of them will go away. In learning and development, there’s a new era where a lot of the training content is being generated by AI.”

Workday previewed these features and announced the Sana acquisition in conjunction with its Workday Rising user conference in Las Vegas Sept. 15-18.

Don Fluckinger is a senior news writer for Informa TechTarget. He covers customer experience, digital experience management and end-user computing. Got a tip? Email him.





Humanity’s Best or Worst Invention? – Pacific Index


Artificial Intelligence is everywhere, and professors at Pacific are less than thrilled

Half the world seems to be under the impression that the creation of Artificial Intelligence (a.k.a. AI) is the greatest invention since the wheel, while the other half worries that AI will roll over humanity and crush it.

For students, though, it especially seems like humanity has struck gold. Why whittle away precious hours doing homework when AI can spit out an entire essay in seconds? (Plus, it can create fun images like the one you see accompanying this article.) It can even make videos that look so real you begin to doubt everything you see. AI is the robot that’s smarter than you, faster than you, and more creative than you—but maybe it’s not as good as some make it seem. 

“I think it’s a mistake to think of it as a tool,” said Professor Sang-hyoun Pahk. “They are replacing a little bit too much of the thinking that we want to do and want our students to do.” Professor Pahk recently gave a presentation to Pacific’s faculty on the topic, along with professors Aimee Wodda, Dana Mirsalis, and Rick Jobs. As at many universities around the world, AI has become an increasingly popular topic at Pacific, with some accepting the technology and others shunning it. Professor Pahk explained that there are a lot of opportunities for faculty to learn techniques for using AI as a tool in the classroom, but that not all faculty have a desire to go down that road. “There’s less…kind of strategies for what you do when you don’t want to use it,” he shared, firmly expressing that it’s something he wishes to stay far away from. “It’s part of what we were trying to start when we presented.”

Professor Pahk and his colleagues presented a mere week before school was back in session, so there was little time for faculty to change their syllabi as a way to safeguard students from using AI on coursework. Still, many professors seemed to be on a similar page, tweaking their lesson plans and teaching methods to ensure that AI is a technology students can’t even be tempted to use.

“I’ve tried to, if you will, AI proof my classes to a certain degree,” Professor Jules Boykoff shared. “Over the recent years, I’ve changed the assignments quite a bit; one example is I have more in-class examinations.” Professor Boykoff is no stranger to AI and admits that he’s done his fair share of testing out the technology. Still, the cons seem to greatly outweigh the pros, especially when it comes to education. “I’m a big fan of students being able to write with clarity and confidence and I’m concerned that overall, AI provides an incentive to not work as hard at writing with clarity and confidence.” Professor Boykoff, like many of his colleagues, stresses that AI short-circuits one’s ability to learn and develop by doing all the heavy lifting for them. 

Philosophy Professor Richard Frohock puts it like this, “It would be like going to the gym and turning on the treadmill, and then just sitting next to it.” Professor Frohock said this with good humor, but his analogy rings true. “Thinking is the actual act of running. It’s hard, sometimes it sucks, we never really want to do it– and it’s not about having five miles on your watch…it’s about that process, that getting to five miles. And using AI is skipping that process, so it’s not actually helping you.” Similar to his colleagues, Professor Frohock doesn’t allow any AI usage in his classes, especially since students are still in the process of developing their minds. “I don’t want it to be us vs the students, and like we’re policing what you guys do,” he admits, explaining that he has no desire to make student learning more difficult, but rather the opposite. “If we want to use AI to expand our mind, first we actually have to have the skills to be thinkers independently without AI.” 

This is just one of many reasons that professors warn against using AI, but they’re not naïve to the fact that students will use it nonetheless. It’s become integrated into all Google searches and social media, which means students interact with AI whether they want to or not. “I have come to the conclusion that it’s counterproductive to try and control in some way student use of AI,” commented Professor Michael Huntsberger. Like other faculty, Professor Huntsberger has adjusted his lesson plans to make using AI more challenging for students, but he recognizes that this may not be foolproof. Still, he warns students to be very cautious when approaching AI and advises, “Don’t use it past your first step–so as a place to start your research…I think that’s a great way to make use of these things, but then tread very carefully.” He suggests that students should leave the technology behind altogether once they’ve established their starting point so that their work can still be considered student work and not AI’s.

The problem with any AI usage is that the results that pop up are the product of other people’s work, which borders on plagiarism. “This is the big fight right now between creators and the big tech companies because the creators are saying ‘you’re drawing on our work,’” explained Professor Huntsberger. “And of course, those creators are, A, not being compensated, and B, are not being recognized in any way, and ultimately it’s stealing their work.”

Pacific’s own Professor Boykoff recognizes that his work is a victim of this process, explaining that a generous chunk of his writing has been stolen by this technology. “A big conglomerate designed to make money is stealing my hard-earned labor,” he articulated. “It’s not just me, it’s not just like it’s a personal thing, I’m just saying, as a general principle it’s offensive.”

Alongside those obvious concerns, Professor Pahk adds a few more items to the cons list saying, “Broadly, there’s on the one hand, the social, and environmental, and political costs of artificial intelligence.” 

AI 0, Professor Pahk 1.

Looking past all the cons, Professor Pahk acknowledged a bright side to the situation. “It’s just…culturally a less serious problem here,” claimed Professor Pahk, sharing that from his experience, students at Pacific want to learn and aren’t here just to mark off courses on a to-do list. “It’s not that it’s not a problem here, but it’s not the same kind of problem.” 


