AI Insights

Why Apple is sidestepping Silicon Valley’s AI bloodsport



A version of this story appeared in CNN Business’ Nightcap newsletter.


New York — As expected, Apple rolled out a bunch of gadget upgrades during its closely watched marketing event on Tuesday. But perhaps the most notable thing about its crisply edited, hour-and-ten-minute propaganda reel was this: Apple went really quiet when it came to artificial intelligence.

The theme of the day was Apple’s bread and butter: hardware. It’s got new AirPods that can translate bilingual conversations in real time, a watch that can monitor your blood pressure and, of course, a skinny phone. There was plenty of hype, to be sure — CEO Tim Cook’s opening monologue heralded the iPhone 17 as “the biggest leap ever for iPhone.”

But not once did an Apple executive grandstand about how their artificial intelligence models would upend the global economy. Heck, they barely talked about AI upending their own products. The words “Apple Intelligence,” the company’s proprietary AI, rarely came up. (I counted four passing references to it in the entire video.)

Also telling: No one talked about Siri, the voice assistant that has become the centerpiece of Apple’s AI ambitions. Not even once.

Long story short, Apple overhauled Siri to incorporate Apple Intelligence last year, and it was a disaster. Apple had to claw back key features, including its (very funny but very inaccurate) text message and news app summaries.

It was a rare stumble for the most brand-conscious tech company on the planet, and it’s not about to risk another “overpromise, underdeliver” moment.

“Apple (is) sidestepping the heart of the AI arms race while positioning itself as a longtime innovator on the AI hardware front,” Emarketer analyst Gadjo Sevilla said in a note Tuesday. “It’s a reminder that Apple’s competitive advantage remains rooted in product experience rather than raw AI as a product.”

Apple has declined to give a timeline for AI-powered Siri’s revival, though Bloomberg’s Mark Gurman has reported it’s scheduled for spring 2026.

Back in June, Apple’s software lead Craig Federighi assured developers at another event that the Siri upgrade “needed more time to reach our high quality bar, and we look forward to sharing more about it in the coming year.”

At the time, I wrote that the Siri pullback was a sign that — despite the tired tech narrative about Apple falling behind its rivals — it is actually the only big tech company in the Valley using its brain when it comes to AI. The past three months have only reaffirmed my theory.

Because here’s the thing: Apple’s homegrown AI is not good. Its main function so far has been both underwhelming (it summarizes texts and news alerts) and unreliable (it misreads said texts and generates alarmingly inaccurate headlines, like the one where it told users that accused murderer Luigi Mangione had shot himself or that tennis star Rafael Nadal had come out as gay — neither of which was true).

But Apple’s AI is lame in the same way Google’s Gemini is lame (remember when it told us to eat rocks?) and OpenAI’s ChatGPT is really, really lame. Apple has not found a reliable use case for its AI in consumer products. And neither has anyone else — at least, not to the degree needed to justify the massive valuations and investment dollars they’re pouring into these projects.

But that’s not stopping the biggest names in tech from burning through hundreds of billions of dollars to try to manifest the model that will do… something. Never mind that large language models have so far proven useless at 95% of the companies whose workforces have tried to put them to work, MIT researchers recently found.

Apple hasn’t abandoned AI, to be sure, but it is clearly doubling down on what it does best – making gadgets that we’re addicted to, inside an ecosystem that is rather annoying to leave.

“Apple is thinking pragmatically,” Bloomberg tech columnist Dave Lee wrote Monday. “It may not make much sense to sink billions of dollars into building its own AI when, as the leading hardware maker, it has the power to go out into the marketplace and choose whatever models it considers to be well suited. It can use the dominance of the iPhone to help push for the best possible terms, playing potential partners against one another, much in the way it squeezes those responsible for its components and manufacturing.”

In other words: Let the hotheads duke it out over this still-speculative technology. Apple will be there waiting, sitting on a mountain of cash, ready to partner with (or outright acquire) whichever operation cracks the code.






AI Insights

Transparency, Not Speed, Could Decide AI’s Future in Finance



Corporate finance has long been among the early adopters of automation. From Lotus 1-2-3 to robotic process automation (RPA), the field has a history of embracing tools that reduce manual workload while maintaining strict governance.

Generative artificial intelligence (AI) increasingly fits neatly into that lineage.

Findings from the July 2025 PYMNTS Intelligence Data Book, “The Two Faces of AI: Gen AI’s Triumph Meets Agentic AI’s Caution,” reveal that CFOs love generative AI. Nearly 9 in 10 report strong ROI from pilot deployments, and an overwhelming 98% say they’re comfortable using it to inform strategic planning.

Yet when the conversation shifts from copilots and dashboards to fully autonomous “agentic AI” systems (software that can act on instructions, make decisions, and execute workflows without human hand-holding), the enthusiasm from the finance function plummets. Just 15% of finance leaders are even considering deployment.

This trust gap is more than a cautious pause. It reveals a deeper tension in corporate DNA: between a legacy architecture designed to mitigate risk and a new generation of systems designed to act. Where generative AI has found traction in summarizing reports or accelerating analysis, agentic AI demands something CFOs are far less ready to give: permission to decide.

Why Agentic AI Feels Different

Generative AI won finance leaders over by making their lives easier without upending the rules. It accelerates analysis, drafts explanations, and surfaces hidden risks. It works inside existing processes and leaves final decisions to people.

That made the ROI for generative AI obvious: faster closes, better forecasts and teams that can do more with less. It’s the kind of technology finance chiefs have embraced for decades.

Agentic AI is different. These systems don’t just suggest — they act. They can reconcile accounts, process transactions or file compliance reports automatically. That autonomy is exactly what the PYMNTS Intelligence report found rattles finance chiefs. Executives who love Gen AI when it writes reports or crunches scenarios can slam on the brakes when agentic machines start to move money or approve deals.

Governance is the first worry. Who signs off when a machine moves money? Visibility is another. Once an AI agent logs into a system over encrypted channels, security teams may have no idea what it’s really doing. And accountability is the big one: if an autonomous system makes a mistake in a tax filing, no regulator will accept “the software decided” as an excuse.


The black-box nature of AI doesn’t help. Unlike traditional scripts or rules engines, agentic systems use probabilistic reasoning. They don’t always produce a clear audit trail. For executives whose careers depend on being able to explain every number, that’s a deal breaker.

Legacy infrastructure makes things worse. Finance data is scattered across enterprise software, procurement platforms, and banking portals. To work autonomously, AI would need seamless access to all of them, which means threading through a maze of authentication systems and siloed permissions.

Enterprises already struggle to manage those identities for employees. Extending them to machines that act like employees, only faster and harder to monitor, could be a recipe for hesitation.

If autonomous systems are going to move beyond experiments, they’ll need to prove their value in hard numbers. Finance chiefs want to see cycle times shrink, errors fall, and working capital improve. They want audits to be faster, not messier.

The irony is that CFOs don’t need AI to be flawless. They need it to be explainable. In other words, transparency is the killer feature.

Unless agentic AI can show that kind of return, it may stay parked in the “idea” column instead of the project pipeline.




AI Insights

‘A burgeoning epidemic’: Why some kids are forming extreme emotional relationships with AI



As more kids turn to artificial intelligence to answer questions or help them understand their homework, some appear to be forming too close a relationship with services such as ChatGPT — and that is taking a toll on their mental health.

“AI psychosis,” while not an official clinical diagnosis, is a term clinicians are using to describe children who appear to be forming emotional bonds with AI, according to Dr. Ashley Maxie-Moreman, a clinical psychologist at Children’s National Hospital in D.C.

Maxie-Moreman said symptoms can include delusions of grandeur, paranoia, fantastical relationships with AI, and even detachment from reality.

“Especially teens and young adults are engaging with generative AI for excessive periods of time, and forming these sort of fantastical relationships with AI,” she said.

In addition to forming close bonds with AI, those struggling with paranoia may see their condition worsen, with AI potentially affirming paranoid beliefs.

“I think that’s more on the extreme end,” Maxie-Moreman said.

More commonly, she said, young people are turning to generative AI for emotional support. They are sharing information about their emotional well-being, such as feeling depressed, anxious, socially isolated or having suicidal thoughts. The responses they receive from AI vary.

“And I think on the more concerning end, generative AI, at times, has either encouraged youth to move forward with plans or has not connected them to the appropriate resources or flagged any crisis support,” Maxie-Moreman said.

“It almost feels like this is a burgeoning epidemic,” she added. “Just in the past couple of weeks, I’ve observed cases of this.”

Maxie-Moreman said kids who are already struggling with anxiety, depression, social isolation or academic stress are most at risk of developing these bonds with AI. That’s why, she said, if you suspect your child is suffering from those conditions, you should seek help.

“I think it’s really, really important to get your child connected to appropriate mental health services,” she said.

With AI psychosis, parents need to be on the lookout for symptoms. One could be a lack of desire to go to school.

“They’re coming up with a lot of excuses, like, ‘I’m feeling sick,’ or ‘I feel nauseous,’ and maybe you’re finding that the child is endorsing a lot of physical symptoms that are sometimes unfounded in relation to attending school,” Maxie-Moreman said.

Another sign is a child who appears to be isolating themselves and losing interest in things they used to look forward to, such as playing sports or hanging out with friends.

“I don’t want to be alarmist, but I do think it’s important for parents to be looking out for these things and to just have direct conversations with their kiddos,” she said.

Talking to a child about mental health concerns can be tricky, especially if they are teens who, as Maxie-Moreman noted, can be irritable and a bit moody. But having a conversation with them is key.

“I think not skirting around the bush is probably the most helpful thing. And I think teens tend to get a little bit annoyed with indirectness anyhow, so being direct is probably the best approach,” she said.

To help prevent these issues, Maxie-Moreman suggested parents start doing emotional check-ins with their children from a young age.

“Just making it sort of a norm in your household to have conversations about how your child is doing emotionally, checking in with them on a regular basis, is important. So starting at a young age is what I would recommend on the preventative end,” she said.

She also encouraged parents to talk to their children about the limits of the technology they use, including generative AI.

“I think that’s probably one of the biggest interventions that will be most helpful,” she said.

Maxie-Moreman said tech companies must also be held accountable.

“Ultimately, we have to hold our tech companies accountable, and they need to be implementing better safeguards, as opposed to just worrying about the commercialization of their products,” she said.



