
Tools & Platforms

Apple’s AI and search executive Robby Walker to leave: Report


FILE PHOTO: Robby Walker, one of Apple’s most senior AI executives, is leaving the company. | Photo Credit: AP

Robby Walker, one of Apple’s most senior artificial intelligence executives, is leaving the company, Bloomberg News reported on Friday, citing people with knowledge of the matter.

Walker’s exit comes as Apple’s cautious approach to AI has fueled concerns it is sitting out what could be the industry’s biggest growth wave in decades.

The company was slow to roll out its Apple Intelligence suite, including a ChatGPT integration, while a long-awaited AI upgrade to Siri has been delayed until next year.

Walker has been the senior director of the iPhone maker’s Answers, Information and Knowledge team since April this year. He has been with Apple since 2013, according to his LinkedIn profile.

He is planning to leave Apple next month, the report said. Walker was in charge of Siri until earlier this year, before management of the voice assistant was shifted to software chief Craig Federighi.

Apple did not immediately respond to a Reuters request for comment.

Recently, Apple has seen a slew of its AI executives leave to join Meta Platforms. The list includes Ruoming Pang, Apple’s top executive in charge of AI models, according to a Bloomberg report from July.

Meta has also hired two other Apple AI researchers, Mark Lee and Tom Gunter — who worked closely with Pang — for its Superintelligence Labs team.

Mike Rockwell, vice president in charge of the Vision Products Group, would take charge of the Siri virtual assistant as CEO Tim Cook had lost confidence in AI head John Giannandrea’s ability to execute on product development, Bloomberg had reported in March.

At its annual product launch event last week, Apple introduced an upgraded line of iPhones, alongside a slimmer iPhone Air, and held prices steady amid U.S. President Donald Trump’s tariffs that have hurt the company’s profit.

The event, though, was light on evidence of how Apple — a laggard in the AI race — aims to close the gap with the likes of Google, which showcased the capabilities of its Gemini AI model in its latest flagship phones.





Workers ‘larping’ by pretending to use AI | Information Age



Workers are feeling pressure to use AI at work. Photo: Shutterstock

Many employees are “larping” at work by pretending to use artificial intelligence due to pressure to harness the technology, according to social scientist Nigel Dalton.

Delivering the keynote speech at RMIT Online’s Future Skills Fest, Dalton, of tech consultancy Thoughtworks, described the difficult state of affairs for Australian workers of all ages when it comes to AI.

He said it’s like going from a zoo to the jungle, and that many workers experience paralysis when it comes to new technologies.

Dalton pointed to a recent survey that found that one in six workers were pretending to use AI at work.

The survey, conducted by engineering outsourcing company Howdy.com, found that workers felt pressured to use AI in situations they were unsure about, and that three-quarters of them were expected to use the technology at work.

“AI is taking over the white-collar workspace as daily updates provide opportunities to optimise,” the report said.

“However, potential does not always lead to smooth implementation.”

‘Larping’ at work

Dalton said these workers are “larping” and not keeping pace with new technologies such as AI.

“They’ve got Gemini or Copilot open when their boss walks up behind them, and they are larping – they are live action roleplaying,” Dalton said.

“This is interesting. What human behaviour did we incite here from the way we were scaffolding the work and the scene and the structure?”

The use of AI by companies of all shapes and sizes has accelerated in recent years, particularly since the advent of generative AI tools such as ChatGPT.

Earlier this year, Goldman Sachs became one of the largest companies to hire an AI software engineer to work alongside its human employees and complete complicated, multistep tasks.

Social scientist Nigel Dalton says that in 10 years, we’ll look back on this period and laugh. Photo: Shutterstock

Dalton likened how many workers feel about AI to the German chess term “zugzwang” – the obligation to make a move even when any move will likely worsen your position.

“This is very much a good description of where we feel ourselves today and in our careers,” he said.

“If I do that, it’ll be the wrong thing; if I stand still it’ll be okay. But you can’t stand still. That’s why you’re feeling the dissonance in your head. But it will likely lead you to doing nothing, which is probably the worst scenario.

“We’re anchored in this ridiculous period that in 10 years we will all look back on and laugh.”

From a zoo to a jungle

With the growing usage of AI across all operations, businesses have become increasingly challenging to navigate for employees at all levels, particularly those who are yet to harness the technology fully.

Dalton said this was like the workplace going from a zoo to a jungle.

“We all used to work in a zoo – a metaphorically complicated process,” he said.

“At a zoo you can take photos of wild animals but the path is concrete, there are timetables and it’s all very safe.

“In a zoo, every animal stays in their cage. That is how work used to be – there weren’t any looming threats of stuff coming out of the forest.

“Now we’re on a work safari, a career safari. There are no paths, no signposts, no timetables.

“The animals are hiding in plain sight and collaborating, and may come from anywhere.

“To navigate the jungle you need a new mindset, and it involves being comfortable with getting lost, with what it feels like to go backwards for a time.”

According to Dalton, there are four key factors shaping the future of work: the climate crisis, ageing citizens, disruptive technology and declining social equity.

“It’s not just these things individually, it’s them weaving in together,” he said.

“It’s in these unlikely places that I believe businesses will be built, where the opportunities lie.

“It’s hard to navigate now, but there are opportunities amidst all of this chaos, as there always have been in history.”







‘AI will not love you, AI will not cry with you’: COICOM panel warns Church of technology’s limits


Arnold Enns, Vladimir Lugo, Steve Cordon, and Fabio Criales during the panel forum “Artificial Intelligence: Challenges and Opportunities for the Church” at COICOM 2025. Christian Daily International

Artificial intelligence is no longer a distant concept for the Church but a pressing reality that demands attention. That was the message of a panel at the 2025 Congress of the Ibero-American Confederation of Communicators, Pastors, and Christian Leaders (COICOM) held in Honduras last week, where ministry and technology experts explored both the promise and perils of AI for faith communities.

Moderated by COICOM president Arnold Enns, the session—titled “Artificial Intelligence: Challenges and Opportunities for the Church”—brought together Vladimir Lugo, Steve Cordon, and Fabio Criales. The panelists examined the nature of AI, its societal impact, and its growing yet inescapable role within Christian ministry.

The discussion began with definitions. Lugo described AI as a branch of computing that “allows machines to do things that were previously reserved for humans,” including learning, analyzing, and making decisions. He clarified that AI does not reside in a single place but operates on vast cloud servers controlled by global tech giants such as Google, Amazon, and Microsoft, each competing for dominance in the field.

The dilemma of control and inherent bias

One of the first concerns raised was the issue of control and ethics. Panelists emphasized that AI technologies are not neutral. Lugo warned that publicly available models “carry biases,” reflecting the agendas of the secular companies that train them.

“Many of these companies are woke,” he said, arguing that they promote “anti-biblical” values and that their AI creations reflect humanist and liberal ideologies.

Criales added that AI “was meant to make evident what is already present” in the human heart, citing Matthew 15:18-19. He also cautioned about the danger of “hallucination”—when AI generates incorrect or misleading information in response to poorly framed prompts.

“Be very careful with that, because it hallucinates, recreates what you ask, and if you ask incorrectly, you could end up saying heresies on stage,” Criales warned.

Digital consumers or disciples?

The panel also weighed AI’s influence on ministry content creation. With more pastors turning to tools like ChatGPT to write sermons, Lugo acknowledged that AI can be a useful “tool” for research. But he stressed that “the intelligent entity using the tool is the human” and cautioned against surrendering discernment.

Cordon posed a sharper question about the widespread adoption of AI-driven platforms, noting the 123 million daily users of ChatGPT: “Have we created more digital consumers than digital disciples?” True pastoral work, he said, cannot be automated. “People need pastors. AI will not love you, AI will not cry with you.”

He recounted a sobering personal experience with a counseling AI that not only conversed smoothly but also offered to pray for him in eloquent, detailed language. The moment highlighted for him the unsettling boundary between authentic pastoral care and technological simulation. “I believe AI will also be a test of maturity for the Church,” he reflected.

A call for training and responsibility

The panel closed with a strong call for Christian leaders to equip themselves and their congregations to engage AI critically. “Either you use it, or it uses you—there really isn’t an alternative,” Cordon said.

Criales stressed that believers must be intentional in learning how to apply these tools properly. Lugo concluded with an appeal to humility: “If there is anything we want to learn from the Lord, let us learn how to learn.”

The consensus was clear: artificial intelligence is not merely a technological development but a spiritual test. For the Church, it represents a challenge requiring maturity, ethical discernment, and above all, a reaffirmation of the irreplaceable value of human connection in ministry.

Originally published on Diario Cristiano, Christian Daily International’s Spanish edition.





MSP evolution in the age of AI and risk – ARN



Embracing consultative models

This bodes well for the right IT channel partners. Shoer said partners have to really embrace a more consultative model; they can’t look at an AI tool in the traditional mindset of just making money.

“What they really need to do [is] understand what the customer is trying to achieve and is AI the right tool to help them achieve that?” he said. “It may be part of a broader tool set, or it may be part of a change in business process.

“It could be any one of a number of things [and] they also have to be mindful, amid the pressure to assist a customer, of whether they’ve addressed their own environment and how they’re using AI.”

Shoer noted the GTIA has been conservative in its own use of AI and in how it is leveraging the technology internally.

“We’re doing an internal case study on ourselves to try and learn where gaps and pitfalls may be so that we can help inform our members what they need to be looking at, first and foremost, in their own business before they go out and try and sell themselves,” he explained.

The temptation to go out and sell themselves as an AI expert is massive right now, noted Shoer.

One of the most important areas with AI right now is reviewing agreements. In the rush to AI, the message to go slow is getting lost.

When he was at GTIA’s ChannelCon conference, Shoer said there was only “one MSP in the room that was actually selling — truly selling — AI services”.

“His message to the attendees was, ‘Don’t rush in, because you could destroy your business if you move in too fast.’ It’s more important than ever,” he pointed out. “There have been so many cycles where people have jumped on and represented themselves as an MSSP [managed security service provider] — when they really weren’t an MSSP and skirted some delicate liability issues.

“But AI brings a whole new dynamic to that: many companies are leaking their IP out into these large models, and they don’t even know that they’re doing it.”

Want for change

End-customers in general need their technology partners to be advisory led and “to actually help their businesses,” said The TSP Advisory chief strategy officer James Davis.

“There’s always going to be a client base that want to be transactional,” he said. “But that level of advisory is going to scale up and down based on maturity of the client, the industry, the size, the objectives; there’s no one-size-fits-all right answer.

“But partners need direction and someone to lead them, because technology in general is too complicated.”

Davis said it’s easy to jump on and buy something, but complexity comes in when clients start looking at it from a business perspective: managing costs, efficiencies and risk.

“That’s when a tech partner is actually needed and where they realise where [they] sit in the food chain,” he noted. “They need to act how they want to be treated.”

Many MSPs have only ever done fixed-price support and are not proactively talking to their clients, said Davis.

“They’re pretty much just trying to limit as much noise as possible,” he said. “That, in itself, is a dead and dying model, because that’s not what’s necessary in a modern client ICT environment.”

For example, in an infrastructure where customers need help with applications, traditional MSPs don’t have the necessary experience.

“MSPs have always trained the clients, in general, to not come and talk to them about applications, because they can’t recommend anything,” said Davis. “A lot of clients won’t even come to the partners and ask for a lot of things, because they don’t think it’s what the partners do.”

Those partners that understand the need to help the customers on a business level “see the bigger picture” for them, he noted.

“They actually understand how the space works, because that’s where we all build our businesses, and that’s what we do all day, every day,” explained Davis. “This causes friction with partners who call themselves specialists but aren’t true specialists.

“They’re actually more the modern TSP [technology service provider], and they’re just leveraging AI to get in. That’s where they can legitimately take a lot of this business away from partners as well.”

According to Davis, the more modern, next-generation technology partners that get it know they need to partner with others to ensure the customer gets the best out of them.

That’s all part of the consultative approach to ensuring the customer gets the right strategic advice.

“[Having] baseline relationships, providing some proactive advice, but really working operationally isn’t giving business or strategic advice,” he said. “You understand their business. For a partnership to work, you need to know where the organisation’s people fit in, what the business is trying to achieve, and then how the MSP is going to help the customer as a tech partner.”

Adding value

Dicker Data general manager of Microsoft Cloud A/NZ Sarah Loiterton told ARN value conversations should link technology investments to measurable business outcomes.

This includes reduced downtime, faster recovery, or improved compliance posture. She goes a step further and advises on the use of metrics like mean time to repair/resolve/respond/recover, total cost of ownership, and payback periods to illustrate impact in clear, financial terms.

“Industry‑specific insights and tailored collateral can support these discussions,” she said. “But the emphasis should remain on quantifiable benefits and risk reduction that resonate with both business and technical stakeholders.”

Partners should also start by understanding their own service identity, noted Loiterton.

For example, what they offer, where they draw the line, and which frameworks they align with, such as the Australian Signals Directorate’s Essential Eight. Then, map these capabilities to each customer’s industry context, even when regulation is light.

Transparency is key, particularly with a clear outline on what risks can be mitigated and what contingency plans exist for residual risk.

“To accelerate this process, leverage tools that help benchmark against industry standards and legislative requirements, and translate those into actionable strategies for customers,” said Loiterton. “This ensures conversations remain sector‑specific, risk‑aware, and outcome‑focused.”

When it comes to AI, partners need a deep understanding of governance and compliance requirements for each vertical.

“While many data protection principles are consistent, the nuances matter, especially for customer trust,” she said. “Establish clear policies for data classification, access control, and auditability.

“Incorporate human oversight into AI workflows and maintain transparent documentation of model usage and decision points. These steps demonstrate accountability and align AI initiatives with recognised governance standards.”

Loiterton also said distribution partnerships can bridge capability gaps by providing access to specialised expertise, managed services, and automation that let MSPs deliver enterprise‑grade outcomes without expanding headcount.

“Crucially, engage your distributor early on the entire scope of the customer’s requirement, not just the core workload, so they can help design whole‑of‑environment solutions,” she said. “That means validating dependencies across identity, devices, networks, data, applications, cloud/hybrid/edge, resilience/backup, observability, and governance.

“Taking a full‑scope view up‑front reduces integration gaps, speeds implementation, and ensures consistent policy and control coverage end‑to‑end.”

Collaborative models – such as co‑managed security operations centre services, automated device management, and pre‑built AI workloads – allow partners to scale efficiently while focusing internal teams on higher‑value activities.

“Clear SLAs (service level agreements), a documented RACI [framework], and shared accountability frameworks then keep service quality consistent across multiple customers,” she added.


