AI Insights

Artificial Intelligence challenges ‘tranquility of order’, says Pope

Humanity is at a crossroads, facing the immense potential of the digital revolution driven by Artificial Intelligence (AI), according to a message from Pope Leo XIV.

In a letter sent to experts on the pontiff’s behalf by the Vatican Secretary of State, Cardinal Pietro Parolin, Leo said the impact of the AI revolution “is far-reaching, transforming areas such as education, work, art, healthcare, governance, the military, and communication.”

The message was sent to participants in the “AI for Good Summit 2025”, organized by the International Telecommunication Union (ITU), in partnership with other UN agencies and co-hosted by the Swiss Government.

Taking place on July 11, the UN summit aims to advance standardized AI for Health (AI4H) guidelines, strengthen cross-sector collaboration, and broaden engagement across the global health and AI communities.

The UN said the meeting is tailored for policymakers, technologists, health practitioners, and humanitarian leaders, and that the session will focus on three key themes: the global landscape of AI for health, real-world use cases at the frontlines of healthcare, and the intersection of intellectual property and AI in health.

The statement signed by Cardinal Parolin said: “This epochal transformation requires responsibility and discernment to ensure that AI is developed and utilised for the common good, building bridges of dialogue and fostering fraternity, and ensuring it serves the interests of humanity as a whole.”

The statement said: “As AI becomes capable of adapting autonomously to many situations by making purely technical algorithmic choices, it is crucial to consider its anthropological and ethical implications, the values at stake and the duties and regulatory frameworks required to uphold those values.

It continued: “In fact, while AI can simulate aspects of human reasoning and perform specific tasks with incredible speed and efficiency, it cannot replicate moral discernment or the ability to form genuine relationships.

“Therefore, the development of such technological advancements must go hand in hand with respect for human and social values, the capacity to judge with a clear conscience, and growth in human responsibility.

“It is no coincidence that this era of profound innovation has prompted many to reflect on what it means to be human, and on humanity’s role in the world.”

The cardinal said: “Although responsibility for the ethical use of AI systems begins with those who develop, manage and oversee them, those who use them also share in this responsibility.

“AI therefore requires proper ethical management and regulatory frameworks centered on the human person, and which goes beyond the mere criteria of utility or efficiency.

“Ultimately, we must never lose sight of the common goal of contributing to that tranquillitas ordinis – the tranquility of order, as Saint Augustine called it (De Civitate Dei) and fostering a more humane order of social relations, and peaceful and just societies in the service of integral human development and the good of the human family.”

After his election in May, Pope Leo XIV said the work of his predecessor Pope Leo XIII influenced the choice of his name.

The previous Pope Leo served from 1878 until 1903, and his 1891 encyclical Rerum Novarum is considered the seminal document of modern Catholic Social Teaching.

The new Pope says the societal transformation the world is facing in the 21st century is as significant as the Industrial Revolution of the 19th century.

Ultra-realistic humanoid artist robot Ai-Da looks on in front of paintings of Britain’s King Charles III and Queen Elizabeth II, displayed on the sidelines of the AI for Good Global Summit organised by International Telecommunication Union (ITU) in Geneva, on July 9, 2025. When successful artist Ai-Da unveiled a new portrait of King Charles this week, the humanoid robot described what inspired the layered and complex piece, and insisted it had no plans to “replace” humans. (Photo by VALENTIN FLAURAUD/AFP via Getty Images)






AI Insights

Federal Leaders Say Data Not Ready for AI

ICF has found that, while artificial intelligence adoption is growing across the federal government, data remains a challenge.

In The AI Advantage: Moving from Exploration to Impact, published Thursday, ICF revealed that 83 percent of 200 federal leaders surveyed do not think their respective organizations’ data is ready for AI use.

“As federal leaders look to begin scaling AI programs, many are hitting the same wall: data readiness,” commented Kyle Tuberson, chief technology officer at ICF. “This report makes it clear: without modern, flexible data infrastructure and governance, AI will remain stuck in pilot mode. But with the right foundation, agencies can move faster, reduce costs, and deliver better outcomes for the public.”

The report also shared that 66 percent of respondents are optimistic that their data will be ready for AI implementation within the next two years.

ICF’s Study Findings

The report shows that many agencies are experimenting with AI, with 41 percent of leaders surveyed saying they are running small-scale pilots and 16 percent saying they are scaling up efforts to implement the technology. About 8 percent of respondents shared that their AI programs have matured.

Half of the respondents said their respective organizations are focused on AI experimentation. Meanwhile, 51 percent are prioritizing planning and readiness.

The report provides advice on steps federal leaders can take to advance their AI programs, including upskilling their workforce, implementing policies to ensure responsible and enterprise-wide adoption, and establishing scalable data strategies.






AI Insights

AI Video Creation for Social Impact: PixVerse Empowers Billions to Tell Their Stories Using Artificial Intelligence

The rapid evolution of artificial intelligence in video creation tools is transforming how individuals and businesses communicate, share stories, and market their products. A notable development in this space comes from PixVerse, an AI-driven video creation platform, as highlighted in a keynote address by co-founder Jaden Xie on July 11, 2025. During his speech titled AI Video for Good, Xie emphasized the platform’s mission to democratize video production, stating that billions of people worldwide have never created a video or used one to share their stories. PixVerse aims to empower these individuals by leveraging AI to simplify video creation, making it accessible to non-professionals and underserved communities. This aligns with broader AI trends in 2025, where generative AI tools are increasingly focused on user-friendly interfaces and inclusivity, enabling content creation at scale. According to industry reports from sources like TechRadar, the global AI video editing market is projected to grow at a compound annual growth rate of 25.3% from 2023 to 2030, driven by demand for accessible tools in education, marketing, and personal storytelling. PixVerse’s entry into this space taps into a critical need for intuitive solutions that lower the technical barriers to video production, positioning it as a potential game-changer in the content creation ecosystem. The platform’s focus on empowering billions underscores a significant shift towards AI as a tool for social impact, beyond mere commercial applications.

From a business perspective, PixVerse’s mission opens up substantial market opportunities, particularly in sectors like education, small business marketing, and social media content creation as of mid-2025. For small businesses, AI-driven video tools can reduce the cost and time associated with professional video production, enabling them to compete with larger brands on platforms like YouTube and TikTok. Monetization strategies for platforms like PixVerse could include subscription-based models, freemium access with premium features, or partnerships with social media giants to integrate their tools directly into content-sharing ecosystems. However, challenges remain in scaling such platforms, including ensuring data privacy for users and managing the high computational costs of AI video generation. The competitive landscape is also heating up, with key players like Adobe Express and Canva incorporating AI video features into their suites as reported by Forbes in early 2025. PixVerse must differentiate itself through user experience and accessibility to capture market share. Additionally, regulatory considerations around AI-generated content, such as copyright issues and deepfake risks, are becoming more stringent, with the EU AI Act of 2024 setting precedents for compliance that PixVerse will need to navigate. Ethically, empowering users must be balanced with guidelines to prevent misuse of AI video tools for misinformation.

On the technical front, PixVerse likely relies on advanced generative AI models, such as diffusion-based algorithms or transformer architectures, to automate video editing and content generation, reflecting trends seen in 2025 AI research from sources like VentureBeat. Implementation challenges include optimizing these models for low-bandwidth environments to serve global users, especially in developing regions where internet access is limited. Solutions could involve edge computing or lightweight AI models to ensure accessibility, though this may compromise output quality initially. Looking ahead, the future implications of such tools are vast—by 2030, AI video platforms could redefine digital storytelling, with applications in virtual reality and augmented reality content creation. PixVerse’s focus on inclusivity could also drive adoption in educational sectors, where students and teachers create interactive learning materials. However, businesses adopting these tools must invest in training to maximize their potential and address ethical concerns through transparent usage policies. As the AI video market evolves in 2025, PixVerse stands at the intersection of technology and social good, potentially shaping how billions engage with video content while navigating a complex landscape of competition, regulation, and innovation.

FAQ:
What is PixVerse’s mission in AI video creation?
PixVerse aims to empower billions of people who have never made a video by using AI to simplify video creation, making it accessible to non-professionals and underserved communities, as stated by co-founder Jaden Xie on July 11, 2025.

How can businesses benefit from AI video tools like PixVerse?
Businesses, especially small enterprises, can reduce costs and time in video production, enabling competitive marketing on social platforms. Monetization for platforms like PixVerse could involve subscriptions or partnerships with social media ecosystems as of mid-2025.

What are the challenges in implementing AI video tools globally?
Challenges include optimizing AI models for low-bandwidth regions, managing high computational costs, ensuring data privacy, and addressing regulatory and ethical concerns around AI-generated content as highlighted in industry trends of 2025.




AI Insights

Smishing scams are on the rise, made easier by artificial intelligence and new tech


If it seems like your phone has been blowing up with more spam text messages recently, it probably is.

The Canadian Anti-Fraud Centre says so-called smishing attempts appear to be on the rise, thanks in part to new technologies that allow for co-ordinated bulk attacks.

The centre’s communications outreach officer Jeff Horncastle says the agency has received fewer fraud reports in the first six months of 2025, but that can be misleading because so few people actually alert the centre to incidents.

He says smishing is “more than likely increasing” with help from artificial intelligence tools that can craft convincing messages or scour data from security breaches to uncover new targets.

The warning comes as the Competition Bureau recently issued an alert about the tactic, saying many people are seeing more suspicious text messages.

Smishing is a sort of portmanteau of SMS and phishing in which a text message is used to try to get the target to click on a link and provide personal information.

The ruse comes in many forms but often involves a message that purports to come from a real organization or business urging immediate action to address an alleged problem.

It could be about an undeliverable package, a suspended bank account or news of a tax refund.

Horncastle says it differs from more involved scams such as a text invitation to call a supposed job recruiter, who then tries to extract personal or financial information by phone.

Nevertheless, he says a text scam might be quite sophisticated since today’s fraudsters can use artificial intelligence to scan data leaks for personal details that bolster the hoax, or use AI writing tools to help write convincing text messages.

“In the past, part of our messaging was always: watch for spelling mistakes. It’s not always the case now,” he says.

“Now, this message could be coming from another country where English may not be the first language but because the technology is available, there may not be spelling mistakes like there were a couple of years ago.”

The Competition Bureau warns against clicking on suspicious links and recommends forwarding the texts to 7726 (SPAM) so that the cellular provider can investigate further. It also encourages people to delete smishing messages, block the number and ignore the texts even if they ask to reply with “STOP” or “NO.”

Horncastle says the centre received 886 reports of smishing in the first six months of 2025, up to June 30. That’s trending downward from 2,546 reports in 2024, which was a drop from 3,874 in 2023. That, too, was a drop from 7,380 reports in 2022.

But those numbers don’t quite tell the story, he says.

“We get a very small percentage of what’s actually out there. And specifically when we’re looking at phishing or smishing, the reporting rate is very low. So generally we say that we estimate that only five to 10 per cent of victims report fraud to the Canadian Anti-Fraud Centre.”

Horncastle says it’s hard to say for sure how new technology is being used, but he notes AI is a frequent tool for all sorts of nefarious schemes such as manipulated photos, video and audio.

“It’s more than likely increasing due to different types of technology that’s available for fraudsters,” Horncastle says of smishing attempts.

“So we would discuss AI a lot where fraudsters now have that tool available to them. It’s just reality, right? Where they can craft phishing messages and send them out in bulk through automation through these highly sophisticated platforms that are available.”

The Competition Bureau’s deceptive marketing practices directorate says an informed public is the best protection against smishing.

“The bureau is constantly assessing the marketplace and through our intelligence capabilities is able to know when scams are on the rise and having an immediate impact on society,” says deputy commissioner Josephine Palumbo.

“That’s where these alerts come in really, really handy.”

She adds that it’s difficult to track down fraudsters who sometimes use prepaid SIM cards to shield their identity when targeting victims.

“Since SIM cards lack identification verification, enforcement agencies like the Competition Bureau have a hard time in actually tracking these perpetrators down,” Palumbo says.

Fraudsters can also spoof phone numbers, making it seem like a text has originated with a legitimate agency such as the Canada Revenue Agency, Horncastle adds.

“They might choose a number that they want to show up randomly or, if they’re claiming to be a financial institution, they may make that financial institution’s number show up on the call display,” he says.

“We’ve seen (that) with the CRA and even the Canadian Anti-Fraud Centre, where fraudsters have made our phone numbers show up on victims’ call display.”


