
Apple Silently Acquires Two AI Startups To Enhance Vision Pro Realism And Strengthen Apple Intelligence With Smarter, Safer, And More Privacy-Focused Technology



Apple appears focused not only on advancing the Vision Pro headset but also on escalating its AI ambitions through its Apple Intelligence initiatives. To drive those efforts, it has repeatedly turned to acquiring smaller firms that specialize in the technology it needs. That pace shows no sign of slowing: the company has recently acquired two more startups to strengthen its talent pool and fold new technology stacks into its innovation pipeline.

Apple has bought two companies to strengthen its next wave of innovation and advance Apple Intelligence

MacGeneration uncovered that Apple recently took over two additional companies, continuing its low-profile strategy of growing Apple Intelligence by steadily building up talent and technology. One of the acquired companies is TrueMeeting, a startup specializing in AI avatars and facial scanning. Users need only an iPhone to scan their faces and see a hyper-realistic version of themselves created. While the startup’s official website has been taken down, its technology appears to align with Apple’s ambitions for the Vision Pro and its push toward a more immersive experience.

TrueMeeting’s main expertise lies in its CommonGround Human AI, which is meant to make virtual interactions feel more natural and human and can be integrated with a wide range of applications. Although neither party has officially commented on the acquisition, Apple appears to have made the purchase to further its development of Personas, the lifelike digital avatars in the Apple Vision Pro headset, and to refine the technology behind its spatial computing experience.

Apple has additionally acquired WhyLabs, a firm focused on improving the reliability of large language models (LLMs). WhyLabs specializes in tackling issues such as bugs and AI hallucinations by helping developers maintain consistency and accuracy in AI systems. By taking over the company, Apple aims not only to advance Apple Intelligence but also to ensure its tools are reliable and safe, values central to the company and essential for integrating the models across its platforms with a consistent experience.

WhyLabs not only monitors the performance of these models to ensure reliability but also provides safeguards that help combat misuse stemming from security vulnerabilities. Its tooling can block harmful output from AI models, which aligns closely with Apple’s stance on privacy and user trust. The acquisition is especially relevant as Apple Intelligence capabilities expand across the ecosystem.
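For readers curious what this kind of output monitoring can look like in practice, the sketch below shows a minimal, purely illustrative pre-release gate that flags or blocks risky model output. The checks, function names, and thresholds are assumptions made for illustration and do not represent WhyLabs' or Apple's actual tooling.

```python
# Illustrative sketch only: a simple pre-release gate that screens LLM output
# and blocks responses failing basic safety/consistency checks. All names and
# patterns here are hypothetical, not WhyLabs' or Apple's actual APIs.
import re
from dataclasses import dataclass, field

@dataclass
class GateResult:
    allowed: bool
    reasons: list = field(default_factory=list)

# Hypothetical patterns a deployment might treat as sensitive.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)\b(social security number|credit card number)\b"),
]

def check_output(response: str, source_facts: set) -> GateResult:
    """Flag obviously unsafe or unsupported model output before it reaches the user."""
    reasons = []
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(response):
            reasons.append(f"blocked pattern: {pattern.pattern}")
    # Naive hallucination heuristic: any quoted claim must appear in the supplied facts.
    for claim in re.findall(r'"([^"]+)"', response):
        if claim not in source_facts:
            reasons.append(f"unsupported quoted claim: {claim!r}")
    return GateResult(allowed=not reasons, reasons=reasons)

if __name__ == "__main__":
    result = check_output('The report says "revenue doubled".', source_facts={"revenue doubled"})
    print(result.allowed, result.reasons)
```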

Apple appears to be doubling down on the AI front, pursuing a more immersive experience without compromising on keeping the technology safe and the systems acting responsibly.





Maximizing AI ROI: Avoiding Data – New Technology





Financial services firms are increasingly embracing
artificial intelligence (AI) technologies to revolutionize their
revenue generation, client experience and operations excellence.
The financial sector’s spending on AI is set to more than
double by 2027, according to the International Monetary Fund citing
IDC data.1

While institutions have identified key areas of opportunity to
unlock value and competitive advantages with AI, issues can arise
during the implementation of the solution, with potential failures
due to challenges related to data, processes, organizational design
and role alignment.

According to the 2024 Gartner® Tech CIO Insight: Adoption
Rates for AI and GenAI Across Verticals report, “the ambition
to adopt artificial intelligence within 12 months of the survey
date has, since 2019, hovered between 17% and 25%. However, in the
subsequent year to any given survey, the actual growth in adoption
has never exceeded 5%.”2

Based on our experience working on AI projects with financial services
clients, we have identified a few common pitfalls, which generally fall
into the following categories:

  1. Bad/poorly understood data or data limitations

    AI models rely heavily on accurate, high-quality data sets. Poor
    input training data can lead to issues such as hallucinations or, in
    the case of simpler models, inconclusive or imprecise results. It
    also obscures how the data quantitatively and behaviorally shapes AI
    outputs and complicates identifying the root causes of incorrect
    outputs, which is essential for addressing business and regulatory
    concerns about market manipulation and pricing inaccuracies.

Typical issues that compromise the caliber of data include:

  • Data discrepancies across systems – from a definition
    perspective, the same data can be labelled differently, or the
    element is labelled similarly but represents completely different
    or nuanced data values.

  • Data history is incomplete and / or unclear –
    insufficient historical data or unclear logic and values in
    historical data sets.

  • Poor data quality – caused by incorrect data value
    selection / population, inaccurate calculations and incomplete data
    sets.

[Figure: Data Issues Preventing AI Adoption. Source: A&M analysis]
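As an illustration of the kinds of checks these data issues call for, the short sketch below runs a few basic data-quality tests before a data set is handed to a model. The column names, sample data, and metrics are hypothetical assumptions, not drawn from any client engagement.

```python
# A minimal, hypothetical sketch of pre-training data-quality checks matching the
# issues described above: definition/schema discrepancies, incomplete history,
# and poor data quality (nulls, duplicates). Column names are illustrative.
import pandas as pd

def data_quality_report(df: pd.DataFrame, required_cols, date_col="trade_date") -> dict:
    """Return simple data-quality metrics for a candidate training data set."""
    report = {}
    # 1. Schema/definition discrepancies: are all expected columns present?
    report["missing_columns"] = [c for c in required_cols if c not in df.columns]
    # 2. Incomplete history: gaps in the expected daily date range.
    dates = pd.to_datetime(df[date_col]).dt.normalize()
    expected = pd.date_range(dates.min(), dates.max(), freq="D")
    report["missing_dates"] = len(expected.difference(dates.unique()))
    # 3. Poor data quality: null rates per column and duplicate rows.
    report["null_rate_by_column"] = df.isna().mean().round(3).to_dict()
    report["duplicate_rows"] = int(df.duplicated().sum())
    return report

if __name__ == "__main__":
    sample = pd.DataFrame({
        "trade_date": ["2024-01-01", "2024-01-03", "2024-01-03"],
        "price": [101.5, None, 102.0],
    })
    print(data_quality_report(sample, required_cols=["trade_date", "price", "notional"]))
```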

  2. Legacy operating models and processes

    AI use cases can also fail when existing processes and
    operating models are not refreshed or redesigned to incorporate AI
    outcomes into business-as-usual operations. This leads to
    underutilization or misuse of AI capabilities. Some common problems
    include:


    • Insufficient or delayed evaluation of operating model –
      which often leads to misalignment between AI results and
      operational workflows.

    • Redundant processes – that fail to integrate AI outputs
      to effectively streamline and enhance existing processes.

    • Lack of integration of operating procedures into AI model
      feedback loops – as continuous improvement of AI models
      requires defined feedback pathways so review and enhancements can
      occur frequently.


  3. Unclear impact on organization and role design

    AI implementation requires alignment on possibly impacted
    roles, organizational structure, and new day-to-day
    responsibilities. However, institutions often fail to recognize the
    importance of organizational and role design at an early stage,
    leading to challenges such as fear of role displacement among
    employees, reluctance to adopt new tools and underutilization of AI
    efforts.

Deploying AI tools can have a significant impact on the
following organizational aspects:

  • Changes in roles and responsibilities – where
    responsibilities are more streamlined, shifting towards automation,
    supervision, and providing contextual insights, allowing employees
    to focus on higher-value tasks that complement AI systems.

  • Organizational structural changes – to focus more on
    cross functional placements between business and technology to
    enhance AI models, implement robust controls, etc.

  • Controls and data focus competencies – increased shared
    responsibility for ensuring data integrity, with more direct
    ownership and enforcement of controls to maintain high-quality data
    and reliable AI outputs.

Key Tactical Steps

Addressing these foundational issues can significantly improve the
success rate of AI use cases. In a recent survey conducted by the Bank
of England, data-related risks comprised four of the top five perceived
risks of AI, both currently and over the next three years.3 Moreover,
several identified risks aligned with operational process deficiencies
(such as execution and process management and inappropriate uses of
models) and organization design elements (such as change management and
accountability and responsibility).

We have seen the following steps to be helpful in mitigating the
challenges outlined in the previous section:

[Figure: key tactical steps]

In summary, issues around AI implementation are widespread and
can occur because of challenges in data, operating models or
organizational design. To fully realize the potential of AI,
financial institutions must not only address these issues but do so
with a strong coordinated approach between various business
aspects. It is also essential for firms to identify and tackle
specific use cases to maximize value rather than adopt a “pie
in the sky” view.

Footnotes

1. AI’s Reverberations across Finance

2. Gartner, Tech CEO Insight: Adoption Rates for AI and GenAI Across
Verticals, Whit Andrews, 11 March 2024. GARTNER is a registered
trademark and service mark of Gartner, Inc. and/or its affiliates in the
U.S. and internationally and is used herein with permission. All rights
reserved.

Originally published 01 July, 2025

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.





AI is running rampant on college campuses as professors and students lean on artificial intelligence



AI use is continuing to cause trouble on college campuses, but this time it’s professors who are in the firing line. While it was once faculty at higher-education institutions who were up in arms about students’ use of AI, now some students are getting increasingly irked about their professors’ reliance on it.

On forums like Rate My Professors, students have complained about lecturers’ overreliance on AI.

Some students argue that instructors’ use of AI diminishes the value of their education, especially when they’re paying high tuition fees to learn from human experts.

The average cost of yearly tuition at a four-year institution in the U.S. is $17,709. If students study at an out-of-state public four-year institution, this average cost jumps to $28,445 per year, according to the research group Education Data.

However, others say it’s unfair that students can be penalised for AI use while professors fly largely under the radar.

One student at Northeastern University even filed a formal complaint and demanded a tuition refund after discovering her professor was secretly using AI tools to generate notes.

College professors told Fortune the use of AI for things like class preparation and grading has become “pervasive.”

However, they say the problem lies not in the use of AI but rather the faculty’s tendency to conceal just why and how they are using the technology.

Automated Grading

One of the AI uses that has become the most contentious is using the technology to grade students.

Rob Anthony, part of the global faculty at Hult International Business School, told Fortune that automating grading was becoming “more and more pervasive” among professors.

“Nobody really likes to grade. There’s a lot of it. It takes a long time. You’re not rewarded for it,” he said. “Students really care a lot about grades. Faculty don’t care very much.”

That disconnect, combined with relatively loose institutional oversight of grading, has led faculty members to seek out faster ways to process student assessments.

“Faculty, with or without AI, often just want to find a really fast way out of grades,” he said. “And there’s very little oversight…of how you grade.”

However, if more and more professors simply decide to let AI tools make a judgment on their students’ work, Anthony is worried about a homogenized grading system where students increasingly get the same feedback from professors.

“I’m seeing a lot of automated grading where every student is essentially getting the same feedback. It’s not tailored, it’s the same script,” he said.

One college teaching assistant and full-time student, who asked to remain anonymous, told Fortune they were using ChatGPT to help grade dozens of student papers.

The TA said the pressure of managing full-time studies, a job, and a mountain of student assignments forced them to look for a more efficient way to get through their workload.

“I had to grade something between 70 to 90 papers. And that was a lot as a full-time student and as a full-time worker,” they said. “What I would do is go to ChatGPT…give it the grading rubric and what I consider to be a good example of a paper.”

While they said they reviewed and edited the bot’s output, they added the process did feel morally murky.

“In the moment when I’m feeling overworked and underslept… I’m just going to use artificial intelligence grading so I don’t read through 90 papers,” they said. “But after the fact, I did feel a little bad about it… it still had this sort of icky feeling.”

They were particularly uneasy about how AI was making decisions that could impact a student’s academic future.

“I am using artificial intelligence to grade someone’s paper,” they said. “And we don’t really know… how it comes up with these ratings or what it is basing itself off of.”
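For illustration only, the sketch below shows roughly how the rubric-driven workflow the TA describes could be wired up with the OpenAI Python client. The model name, rubric, prompt wording, and placeholders are assumptions, and any draft grade would still need the human review the TA mentions.

```python
# Hypothetical sketch of rubric-based grading with an LLM, along the lines the TA
# describes. The model name, rubric text, and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """Thesis clarity (0-25), use of evidence and citations (0-25),
organization (0-25), grammar and style (0-25)."""

EXAMPLE_GOOD_PAPER = "..."  # an instructor-chosen exemplar, pasted in full

def draft_grade(paper_text: str) -> str:
    """Ask the model for a draft score and comments; a human still reviews the result."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a teaching assistant. Grade strictly against the rubric "
                        "and justify each score in one sentence."},
            {"role": "user",
             "content": f"Rubric:\n{RUBRIC}\n\nExample of strong work:\n{EXAMPLE_GOOD_PAPER}"
                        f"\n\nStudent paper:\n{paper_text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_grade("Paste a student submission here."))
```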

‘Bots Talking to Bots’

Some of the frustration is due to the students’ use of AI, professors say.

“The voice that’s going through your head is a faculty member that says: ‘If they’re using it to write it, I’m not going to waste my time reading.’ I’ve seen a lot of just bots talking to bots,” Anthony said.

A recent study suggests that almost all students are using AI to help them with assignments to some degree.

According to a survey conducted earlier this year by the UK’s Higher Education Policy Institute, in 2025, almost all students (92%) now use AI in some form, up from 66% in 2024.

When ChatGPT was first released, many schools either outright banned or put restrictions on the use of AI.

Students were some of the early adopters of the technology after its release in late 2022, quickly finding they could complete essays and assignments in seconds.

The widespread use of the tech created a distrust between students and teachers as professors struggled to identify and punish the use of AI in work.

Now, many colleges are encouraging students to use the tech, albeit in an “appropriate way.” Some students still appear to be confused—or uninterested—about where that line is.

The TA, who primarily taught and graded intro classes, told Fortune “about 20 to 30% of the students were using AI blatantly in terms of writing papers.”

Some of the signs were obvious, like those who submitted papers that had nothing to do with the topic. Others submitted work that read more like unsourced opinion pieces than research.

Instead of penalizing students directly for using AI, the TA said they docked marks for failing to include evidence or citations.

They added that the papers written by AI were marked favourably when automated grading was used.

They said when they submitted an obviously AI-written student paper into ChatGPT for grading, the bot graded it “really, really well.”

Lack of Transparency

For Ron Martinez, the problem with professors’ use of AI is the lack of transparency.

The former UC Berkeley lecturer and current Assistant Professor of English at the Federal University of Paraná (UFPR) told Fortune he’s upfront with his students about how, when, and why he’s using the tech.

“I think it’s really important for professors to have an honest conversation with students at the very beginning. For example, telling them I’m using AI to help me generate images for slides. But believe me, everything on here is my thoughts,” he said.

He suggests being upfront about AI use, explaining how it benefits students, such as allowing more time for grading or helping create fairer assessments.

In one recent example of helpful AI use, the university lecturer began using large language models like ChatGPT as a kind of “double marker” to cross-reference his grading decisions.

“I started to think, I wonder what the large language model would say about this work if I fed it the exact same criteria that I’m using,” he said. “And a few times, it flagged up students’ work that actually got… a higher mark than I had given.”

In some cases, AI feedback forced Martinez to reflect on how unconscious bias may have shaped his original assessment.

“For example, I noticed that one student who never talks about their ideas in class… I hadn’t given the student their due credit, simply because I was biased,” he said. Martinez added that the AI feedback led to him adjusting a number of grades, typically in the student’s favor.
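A simple version of the “double marker” cross-check Martinez describes might look like the sketch below, which compares an instructor’s marks with a model’s marks on the same criteria and flags large gaps for a second human look. The 0-100 scale and the 10-point threshold are assumptions made for illustration.

```python
# Hypothetical "double marker" check: flag papers where the human mark and the
# model's mark on identical criteria diverge widely, so the grader re-reads them.
# The 0-100 scale and 10-point threshold are illustrative assumptions.

def flag_for_review(instructor_marks: dict, model_marks: dict, threshold: float = 10.0):
    """Return (student, instructor_mark, model_mark) tuples where marks diverge widely."""
    flagged = []
    for student, human_mark in instructor_marks.items():
        model_mark = model_marks.get(student)
        if model_mark is not None and abs(human_mark - model_mark) >= threshold:
            flagged.append((student, human_mark, model_mark))
    return flagged

if __name__ == "__main__":
    humans = {"student_a": 72.0, "student_b": 85.0}
    model = {"student_a": 86.0, "student_b": 84.0}
    # student_a is flagged: the model rated the work much higher, prompting a re-read.
    print(flag_for_review(humans, model))
```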

While some may despair that widespread use of AI may upend the entire concept of higher education, some professors are already starting to see the tech’s usage among students as a positive thing.

Anthony told Fortune he had gone from feeling “this whole class was a waste of time” in early 2023 to “on balance, this is helping more than hurting.”

“I was beginning to think this is just going to ruin education, we are just going to dumb down,” he said.

“Now it seems to be on balance, helping more than hurting… It’s certainly a time saver, but it’s also helping students express themselves and come up with more interesting ideas, they’re tailoring it, and applying it.”

“There’s still a temptation [to cheat]…but I think these students might realize that they really need the skills we’re teaching for later life,” he said.





Harnessing AI And Technology To Deliver The FCA’s 2025 Strategic Priorities – New Technology



Lewis Silkin




Jessica Rusu, chief data, information and intelligence officer at the FCA, recently gave a speech on using AI and tech to deliver the FCA’s strategic priorities.

The FCA’s strategic priorities are:

  • Innovation will help firms attract new customers and serve
    their existing ones better.

  • Innovation will help fight financial crime, allowing the FCA
    and firms to be one step ahead of the criminals who seek to disrupt
    markets.

  • Innovation will help the FCA to be a smarter regulator,
    improving its processes and allowing it to become more efficient
    and effective. For example, it will stop asking firms for data that
    it does not need.

  • Innovation will help support growth.

Industry and innovators, entrepreneurs and explorers want a practical, pro-growth and proportionate regulatory environment. The FCA is starting a new supercharged Sandbox in October, which is likely to cover topics such as financial inclusion, financial wellbeing, and financial crime and fraud.

The FCA has carried out joint surveys with the Bank of England
which found that 75% of firms have already adopted some form of AI.
However, most are using it internally rather than in ways that
could benefit customers and markets. The FCA understands from its
own experience of tech adoption that it’s often internal
processes that are easier to develop. It is testing large language
models to analyse text and deliver efficiencies in its
authorisations and supervisory processes. It wants to respond, make
decisions and raise concerns faster, without compromising
quality.

The FCA’s synthetic data expert group is about to publish
its second report offering industry-led insight into navigating the
use of synthetic data.

Firms have also expressed concerns to the FCA about potentially ambiguous governance frameworks stopping them from innovating with AI. The FCA believes that its existing frameworks, such as the Senior Managers Regime and the Consumer Duty, give it oversight of AI in financial services and mean that it does not need new rules. In fact, it says that avoiding new regulation allows it to remain nimble and responsive as technology and markets change, since its processes aren’t fast enough to keep up with AI developments.

The speech follows a consultation by the FCA on AI live testing, which ended on 13 June 2025. The FCA plans to launch AI Live Testing, as part of the existing AI Lab, to support the safe and responsible deployment of AI by firms and achieve positive outcomes for UK consumers and markets.

The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.



