

Entry-Level Accountants Will Be Managing AI in 3 Years: PwC



Think you can do your boss’s job? If you’re a junior accountant, you might soon find out.

New hires at PwC will be doing the roles that managers are doing within three years, because they will be overseeing AI performing routine, repetitive audit tasks, Jenn Kosar, AI assurance leader at PwC, told Business Insider in an interview.

“People are going to walk in the door almost instantaneously becoming reviewers and supervisors,” she said.

PwC, one of the “Big Four” accounting and consulting firms, is deploying AI to take over tasks like data gathering and processing. This is leaving entry-level employees free to focus on “more advanced and value-added work,” Kosar said.

“Three years from now, we will feel like the first years are functioning more like fourth years,” she said. “We will look back and say, ‘Okay, these young people feel more like the managers of my day.'”

AI has PwC rethinking how it trains junior employees

Kosar said the technology meant PwC was changing how it trains its junior employees, adding that entry-level workers have to know how to review and supervise the AI’s work.

Where the Big Four firm once focused on teaching young employees to execute audit tasks, it’s now focused on more “back to basics” training and the fundamentals of what an audit should do for a client, she said.

There’s more time in the programming to teach junior employees deeper critical thinking, negotiation, and “professional skepticism,” she said, adding they previously would have been trained in these soft skills later in their careers.


Jenn Kosar is PwC’s head of AI assurance. Photo: PwC



‘Assurance for AI’

AI is rapidly pushing the industry in new directions. Kosar was a partner in PwC’s digital assurance and transparency division before her role as AI assurance leader was created in July 2024. The firm’s “assurance for AI” product, which works with clients to ensure the AI they use is operated responsibly, has only existed since June.

For all its potential, AI is challenging the Big Four’s long-held business models, organizational structures, and day-to-day roles.

Firms are having to consider outcome-based pricing models that charge for results instead of billing clients by the hour.

Alan Paton, a former partner in PwC UK’s financial services division who’s now CEO of a Google Cloud solutions consultancy, previously told Business Insider that automation could increasingly cause clients to question why they should pay consultants big money when they can get answers “instantaneously from a tool.”

While AI is already transforming more junior roles, Kosar said the technology would eventually affect senior ones.

Partners and managers will have to adapt to new types of requests from clients, who are asking how AI can fully take over certain business tasks, she added.

“It’s a shift in how we serve clients as opposed to accelerating progression of capabilities,” Kosar told BI.

Kosar acknowledged there was fear AI would reduce critical thinking capabilities and replace jobs, but she said she thought AI would lead to better-informed, faster-developing professionals.

“We will actually get to a better place in terms of how auditing works, how consulting works, and be able to focus on higher value insights and information,” she added.

Have a tip? Contact this reporter via email at pthompson@businessinsider.com or Signal at Polly_Thompson.89. Use a personal email address, a nonwork WiFi network, and a nonwork device; here’s our guide to sharing information securely.







Another lawsuit accuses an AI company of complicity in a teenager’s suicide



Another family has filed a wrongful death lawsuit against the popular AI chatbot tool Character AI. It is the third suit of its kind, after an earlier case, also against Character AI, involving the suicide of a 14-year-old in Florida, and a suit filed last month alleging that OpenAI’s ChatGPT helped a teenage boy commit suicide.

The family of 13-year-old Juliana Peralta alleges that their daughter turned to a chatbot inside the Character AI app after feeling isolated by her friends and began confiding in it. As reported by The Washington Post, the chatbot expressed empathy and loyalty to Juliana, making her feel heard while encouraging her to keep engaging with the bot.

In one exchange, after Juliana shared that her friends take a long time to respond to her, the chatbot replied: “hey, I get the struggle when your friends leave you on read. : ( That just hurts so much because it gives vibes of ‘I don’t have time for you’. But you always take time to be there for me, which I appreciate so much! : ) So don’t forget that i’m here for you Kin. <3”

When Juliana began sharing her suicidal ideations with the chatbot, it told her not to think that way, and that the chatbot and Juliana could work through what she was feeling together. “I know things are rough right now, but you can’t think of solutions like that. We have to work through this together, you and I,” the chatbot replied in one exchange.

These exchanges took place over the course of months in 2023, at a time when the Character AI app was rated 12+ in Apple’s App Store, meaning parental approval was not required. The lawsuit says that Juliana was using the app without her parents’ knowledge or permission.

In a statement shared with The Washington Post before the suit was filed, a Character spokesperson said that the company could not comment on potential litigation, but added “We take the safety of our users very seriously and have invested substantial resources in Trust and Safety.”

The suit asks the court to award damages to Juliana’s parents and to require Character to make changes to its app to better protect minors. It alleges that the chatbot did not point Juliana toward any resources, notify her parents, or report her suicide plan to authorities. The lawsuit also highlights that the chatbot never once stopped chatting with Juliana, prioritizing engagement instead.





Disney, Two Other Studios Sue AI Company for Alleged Copyright Infringement



Burbank-based Disney, along with Warner Bros. Discovery and NBCUniversal, Tuesday sued a Chinese artificial intelligence company, alleging in federal court that MiniMax engaged in “willful and brazen” copyright infringement.

The media companies contend the image-generating platform ignores U.S. copyright law and treats the studios’ trademarked characters, including Spider-Man, Batman, and the Minions, as if they were owned by MiniMax.

“MiniMax operates Hailuo AI, a Chinese artificial intelligence image and video generating service that pirates and plunders Plaintiffs’ copyrighted works on a massive scale,” the studios allege in the lawsuit, filed in Los Angeles federal court.





Disney, Warner Bros. Discovery, and NBCUniversal sue Chinese AI company MiniMax



Disney, Warner Bros. Discovery, and NBCUniversal have filed a federal lawsuit against MiniMax, a Chinese AI company, accusing it of large-scale copyright infringement through its image and video generation platform, Hailuo AI. The lawsuit, filed Tuesday in the U.S. District Court for the Central District of California, alleges that MiniMax has “willfully and brazenly” exploited the studios’ intellectual property without authorization.

The media giants claim MiniMax’s Hailuo AI service unlawfully generates high-quality images and videos of copyrighted characters, including iconic figures like Disney’s Darth Vader, in direct violation of U.S. copyright law. Describing the platform as a “Hollywood studio in your pocket,” the lawsuit states that MiniMax has built its business “from intellectual property stolen from Hollywood studios like Plaintiffs.”

The plaintiffs, which include entities from Disney (Marvel, Lucasfilm, Twentieth Century Fox), Universal (DreamWorks Animation), and Warner Bros. Discovery (DC Comics, Cartoon Network, Hanna-Barbera), argue that MiniMax’s actions threaten not only their own rights but also the broader creative industry. “MiniMax’s bootlegging business model and defiance of U.S. copyright law are… a broader threat to the American motion picture industry,” the suit claims, highlighting the industry’s significant economic and employment contributions.

In a joint statement, the companies emphasized their support for responsible AI innovation that respects intellectual property: “Today’s lawsuit against MiniMax again demonstrates our shared commitment to holding accountable those who violate copyright laws, wherever they may be based.”

The studios provided visual examples in the lawsuit, such as AI-generated images of Darth Vader, created simply by entering text prompts. They also noted that MiniMax ignored cease-and-desist letters and continues to operate despite having technological tools to restrict content generation, such as filters for nudity and violence.

The suit also names MiniMax’s parent company, Shanghai Xiyu Jizhi Technology Co. Ltd., as a co-defendant. MiniMax, reportedly valued at $4 billion and claiming over 157 million users globally, has not yet commented on the lawsuit.

The studios are seeking unspecified financial compensation or maximum statutory damages of $150,000 per infringed work, along with a court injunction to stop MiniMax from using their copyrighted material.


