
Tools & Platforms

AI: The Church’s Response to the New Technological Revolution

Artificial intelligence (AI) is transforming everyday life, the economy, and culture at an unprecedented speed. Capable of processing vast amounts of data, mimicking human reasoning, learning, and making decisions, this technology is already part of our daily lives: from recommendations on Netflix and Amazon to medical diagnoses and virtual assistants.

But its impact goes far beyond convenience or productivity. Just as with the Industrial Revolution, the digital revolution raises social, ethical, and spiritual questions. The big question is: how can we ensure that AI serves the common good without compromising human dignity?

A change of era

Pope Francis has described artificial intelligence as a true “epochal change,” and his successor, Pope Leo XIV, has emphasized both its enormous potential and its risks. There is even talk of a future encyclical entitled Rerum Digitalium, inspired by the historic Rerum Novarum of 1891, to offer moral guidance in the face of the “new things” of our time.

The Vatican insists that AI should not replace human work, but rather enhance it. It must be used prudently and wisely, always putting people at the centre. The risks of inequalities, misinformation, job losses, and military uses of this technology necessitate clear limits and global regulations.

The social doctrine of the Church and AI

The Church proposes applying the four fundamental principles of its social doctrine to artificial intelligence:

  • Dignity of the person: human beings should never be treated as a means, but as ends in themselves.

  • Common good: AI must ensure that everyone has access to its benefits, without exclusion.

  • Solidarity: technological development must serve those most in need.

  • Subsidiarity: problems should be solved at the level closest to the people.

Added to this are the values of truth, freedom, justice, and love, which guide any technological innovation towards authentic progress.

Opportunities and risks

Artificial intelligence already offers advances in medicine, education, science, and communication. It can help combat hunger and climate change, and even help convey the Gospel more effectively. However, it also poses risks:

  • Massive job losses due to automation.

  • Human relationships replaced by fictitious digital links.

  • Threats to privacy and security.

  • Use of AI in autonomous weapons or disinformation campaigns.

Therefore, the Church emphasizes that AI is not a person: it has no soul, consciousness, or the capacity to love. It is merely a tool, powerful but always dependent on the purposes assigned to it by humans.

A call to responsibility

The document Antiqua et Nova (2025) reminds us that all technological progress must contribute to human dignity and the common good. Responsibility lies not only with governments or businesses, but also with each of us, in how we use these tools in our daily lives.

Artificial intelligence can be an engine of progress, but it can never be a substitute for humankind. No machine can experience love, forgiveness, mercy, or faith. Only in God can perfect intelligence and true happiness be found.




Apple’s AI and search executive Robby Walker to leave: Report


Robby Walker, one of Apple’s most senior artificial intelligence executives, is leaving the company, Bloomberg News reported on Friday, citing people with knowledge of the matter.

Walker’s exit comes as Apple’s cautious approach to AI has fueled concerns it is sitting out what could be the industry’s biggest growth wave in decades.

The company was slow to roll out its Apple Intelligence suite, including a ChatGPT integration, while a long-awaited AI upgrade to Siri has been delayed until next year.

Walker has been the senior director of the iPhone maker’s Answers, Information and Knowledge team since April this year. He has been with Apple since 2013, according to his LinkedIn profile.

He is planning to leave Apple next month, the report said. Walker was in charge of Siri until earlier this year, before management of the voice assistant was shifted to software chief Craig Federighi.

Apple did not immediately respond to a Reuters request for comment.

Recently, Apple has seen a slew of its AI executives leave to join Meta Platforms. The list includes Ruoming Pang, Apple’s top executive in charge of AI models, according to a Bloomberg report from July.

Meta has also hired two other Apple AI researchers, Mark Lee and Tom Gunter — who worked closely with Pang — for its Superintelligence Labs team.

Mike Rockwell, vice-president in charge of the Vision Products Group, would take over the Siri virtual assistant, as CEO Tim Cook had lost confidence in AI head John Giannandrea’s ability to execute on product development, Bloomberg reported in March.

At its annual product launch event last week, Apple introduced an upgraded line of iPhones, alongside a slimmer iPhone Air, and held prices steady amid U.S. President Donald Trump’s tariffs that have hurt the company’s profit.

The event, though, offered little evidence of how Apple, a laggard in the AI race, aims to close the gap with the likes of Google, which showcased the capabilities of its Gemini AI model in its latest flagship phones.




Nano Banana AI: ChatGPT vs Qwen vs Grok vs Gemini; the top alternatives to try in 2025 – The Times of India


Judges call for joint oversight of AI expansion

Beijing judges have called for stronger regulatory collaboration focused on artificial intelligence developers and service providers, with the aim of supporting innovation in the industry while enhancing the protection of individual rights.

Zhao Changxin, vice-president of the Beijing Internet Court, emphasized the need to supervise AI development and application across sectors. He suggested judicial bodies promptly communicate issues encountered in handling AI-related cases to departments such as cyberspace management, public security, market regulation and intellectual property.

“This joint approach aims to strengthen the regulation and guidance of AI use, and to clearly delineate the responsibilities and obligations of the technology developers, providers and users,” Zhao said on Wednesday.

Since the court’s establishment in September 2018, it has concluded more than 245,000 cases.

“Among them, those involving AI have been rapidly growing, primarily focusing on issues such as the ownership of copyright for AI-generated works and whether AI-powered products or services constitute online infringement,” he said.

As AI expands into more areas, disputes are no longer limited to the internet sector but are also emerging in the culture, entertainment, finance, and advertising sectors, Zhao said.

“While introducing new products and services, the fast development of the technology has also brought new legal risks such as AI hallucinations and algorithmic problems,” he said, adding that judicial decisions should balance encouraging technological innovation with upholding social ethics.

In handling AI-related disputes, Zhao said priority should be given to safeguarding people’s dignity and rights. He cited a landmark ruling by the court as an example.

In 2024, the court heard a lawsuit in which a voice-over artist surnamed Yin claimed her voice had been used without her consent in audiobooks circulating online. The voice was processed by AI, according to Sun Mingxi, another vice-president of the court.

Yin sued five companies, including a cultural media corporation that provided recordings of her voice for unauthorized use, an AI software developer and a voice-dubbing app operator.

The court found the cultural media company had sent Yin’s recordings to the software developer without her permission. The software firm then used AI to mimic Yin’s voice and offered the AI-generated products for sale.

Sun said the AI-powered voice mimicked Yin’s vocal characteristics, intonation and pronunciation style to a high degree.

“This level of similarity allowed for the identification of Yin’s voice,” Sun said.

The court ruled that the actions of the cultural media company and the AI software developer infringed on Yin’s voice rights and ordered them to pay her 250,000 yuan ($35,111) in compensation. The other defendants were not held liable as they unknowingly used the AI-generated voice products.

It was China’s first case concerning rights to voices generated by AI.

“The ruling has set boundaries for how AI should be applied and helped regulate the technology to better serve the public,” Sun said.


