AI Research

Humanoid robot says not aiming to ‘replace human artists’


When successful artist Ai-Da unveiled a new portrait of King Charles this week, the humanoid robot described what inspired the layered and complex piece, and insisted it had no plans to “replace” humans.

The ultra-realistic robot, one of the most advanced in the world, is designed to resemble a human woman with an expressive, life-like face, large hazel eyes and brown hair cut in a bob.

The arms, though, are unmistakably robotic, with exposed metal, and can be swapped out depending on the art form the robot is practicing.

Late last year, Ai-Da’s portrait of English mathematician Alan Turing became the first artwork by a humanoid robot to be sold at auction, fetching over $1 million.

But as Ai-Da unveiled its latest creation — an oil painting entitled “Algorithm King”, conceived using artificial intelligence — the humanoid insisted the work’s importance could not be measured in money.

“The value of my artwork is to serve as a catalyst for discussions that explore ethical dimensions to new technologies,” the robot told AFP at Britain’s diplomatic mission in Geneva, where the new portrait of King Charles will be housed.

The idea, Ai-Da insisted in a slow, deliberate cadence, was to “foster critical thinking and encourage responsible innovation for more equitable and sustainable futures”.

– ‘Unique and creative’ –

Speaking on the sidelines of the United Nations’ AI for Good summit, Ai-Da, who has done sketches, paintings and sculptures, detailed the methods and inspiration behind the work.

“When creating my art, I use a variety of AI algorithms,” the robot said.

“I start with a basic idea or concept that I want to explore, and I think about the purpose of the art. What will it say?”

“King Charles has used his platform to raise awareness on environmental conservation and interfaith dialogue. I have aimed this portrait to celebrate” that, the humanoid said, adding that “I hope King Charles will be appreciative of my efforts”.

Aidan Meller, a specialist in modern and contemporary art, led the team that created Ai-Da in 2019 with artificial intelligence specialists at the universities of Oxford and Birmingham.

He told AFP that he had conceived the humanoid robot — named after the world’s first computer programmer, Ada Lovelace — as an ethical arts project, and not “to replace the painters”.

Ai-Da agreed.

There is “no doubt that AI is changing our world, (including) the art world and forms of human creative expression”, the robot acknowledged.

But “I do not believe AI or my artwork will replace human artists”.

Instead, Ai-Da said, the aim was “to inspire viewers to think about how we use AI positively, while remaining conscious of its risks and limitations”.

Asked if a painting made by a machine could really be considered art, the robot insisted that “my artwork is unique and creative”.

“Whether humans decide it is art is an important and interesting point of conversation.”


Pope: AI development must build bridges of dialogue and promote fraternity


In a message to the United Nations’ AI for Good Summit in Geneva, signed by the Cardinal Secretary of State Pietro Parolin, Pope Leo XIV encourages nations to create frameworks and regulations that work for the common good.

By Isabella H. de Carvalho

Pope Leo XIV encouraged nations to establish frameworks and regulations on AI so that it can be developed and used according to the common good, in a message sent on July 10 to the participants of the AI for Good Summit, taking place in Geneva, Switzerland, from July 8 to 11.  

“I would like to take this opportunity to encourage you to seek ethical clarity and to establish a coordinated local and global governance of AI, based on the shared recognition of the inherent dignity and fundamental freedoms of the human person”, the message, signed by the Secretary of State, Cardinal Pietro Parolin, said.

The summit is organized by the United Nations’ International Telecommunication Union (ITU) and co-hosted by the Swiss government. It brings together governments, tech leaders, academics and others who work with or take an interest in AI.

In this “era of profound innovation”, where many are reflecting on “what it means to be human”, the world “is at a crossroads, facing the immense potential generated by the digital revolution driven by Artificial Intelligence”, the Pope highlighted in his message.

AI requires ethical management and regulatory frameworks 

“As AI becomes capable of adapting autonomously to many situations by making purely technical algorithmic choices, it is crucial to consider its anthropological and ethical implications, the values at stake and the duties and regulatory frameworks required to uphold those values”, the Pope underlined in his message. 

He emphasized that the “responsibility for the ethical use of AI systems begins with those who develop, manage and oversee them” but users also need to share this mission. AI “requires proper ethical management and regulatory frameworks centered on the human person, and which goes beyond the mere criteria of utility or efficiency,” the Pope insisted. 

Building peaceful societies 

Citing St. Augustine’s concept of the “tranquility of order”, Pope Leo highlighted that this should be the common goal and thus AI should foster “more human order of social relations” and “peaceful and just societies in the service of integral human development and the good of the human family”. 

While AI can simulate human reasoning and perform tasks quickly and efficiently or transform areas such as “education, work, art, healthcare, governance, the military, and communication”, “it cannot replicate moral discernment or the ability to form genuine relationships”, Pope Leo warned. 

For him the development of this technology “must go hand in hand with respect for human and social values, the capacity to judge with a clear conscience, and growth in human responsibility”. It requires “discernment to ensure that AI is developed and utilized for the common good, building bridges of dialogue and fostering fraternity”, the Pope urged. AI needs to serve “the interests of humanity as a whole”.




AI slows down some experienced software developers, study finds


By Anna Tong

SAN FRANCISCO (Reuters) - Contrary to popular belief, using cutting-edge artificial intelligence tools slowed down experienced software developers when they were working in codebases familiar to them, rather than supercharging their work, a new study found.

AI research nonprofit METR conducted the in-depth study on a group of seasoned developers earlier this year while they used Cursor, a popular AI coding assistant, to help them complete tasks in open-source projects they were familiar with.

Before the study, the open-source developers believed using AI would speed them up, estimating it would decrease task completion time by 24%. Even after completing the tasks with AI, the developers believed that they had decreased task times by 20%. But the study found that using AI did the opposite: it increased task completion time by 19%.
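The gap between the developers’ forecasts and the study’s finding is easy to miss in prose. A small illustration, assuming a hypothetical 60-minute baseline task (the baseline duration is my assumption, not a figure from the study):

```python
# Contrast the developers' expectation (24% faster) with the METR
# study's observed result (19% slower), on an assumed 60-minute task.
baseline_minutes = 60.0

expected_with_ai = baseline_minutes * (1 - 0.24)  # forecast: 24% faster
observed_with_ai = baseline_minutes * (1 + 0.19)  # finding: 19% slower

print(f"Expected with AI: {expected_with_ai:.1f} min")  # 45.6 min
print(f"Observed with AI: {observed_with_ai:.1f} min")  # 71.4 min
```

On that baseline, the swing between expectation and reality is roughly 26 minutes per task.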

The study’s lead authors, Joel Becker and Nate Rush, said they were shocked by the results: prior to the study, Rush had written down that he expected “a 2x speed up, somewhat obviously.”

The findings challenge the belief that AI always makes expensive human engineers much more productive, a factor that has attracted substantial investment into companies selling AI products to aid software development.

AI is also expected to replace entry-level coding positions. Dario Amodei, CEO of Anthropic, recently told Axios that AI could wipe out half of all entry-level white collar jobs in the next one to five years.

Prior literature on productivity improvements has found significant gains: one study found that using AI sped up coders by 56%; another found developers were able to complete 26% more tasks in a given time.

But the new METR study shows that those gains don’t apply to all software development scenarios. In particular, this study showed that experienced developers intimately familiar with the quirks and requirements of large, established open source codebases experienced a slowdown.

Other studies often rely on software development benchmarks for AI, which sometimes misrepresent real-world tasks, the study’s authors said.

The slowdown stemmed from developers needing to spend time going over and correcting what the AI models suggested.

“When we watched the videos, we found that the AIs made some suggestions about their work, and the suggestions were often directionally correct, but not exactly what’s needed,” Becker said.

The authors cautioned that they do not expect the slowdown to apply in other scenarios, such as for junior engineers or engineers working in codebases they aren’t familiar with.

Still, the majority of the study’s participants, as well as the study’s authors, continue to use Cursor today. The authors believe it is because AI makes the development experience easier, and in turn, more pleasant, akin to editing an essay instead of staring at a blank page.

“Developers have goals other than completing the task as soon as possible,” Becker said. “So they’re going with this less effortful route.”

(Reporting by Anna Tong in San Francisco; Editing by Sonali Paul)




Persona-Driven AI for Brand Engagement and Audience Research


Earlier this year (2025), OpenAI’s GPT-4.5 achieved a groundbreaking feat: In controlled Turing Test scenarios, it was mistaken for a human 73% of the time when adopting a carefully crafted persona.

That isn’t merely a technological milestone. It’s a paradigm shift in how brands can and should use AI to boost engagement.

As AI transitions from a behind-the-scenes utility to a front-facing conversational partner, marketers must recognize its potential to redefine short-form, high-touch digital interactions.

The Blurred Line: What Happens When AI Feels Human

Persona-driven AI, curated to embody distinct tones, styles, and even values, is already in the wild.

Think customer support agents that mirror Gen Z slang. Think chatbots with the warmth and wit of lifestyle influencers. Think AI-driven avatars hosting livestreams, fielding DMs, or acting as extensions of a brand’s personality.

Those aren’t hypothetical cases. They’re live, and they’re running at scale.

In fact, one study found that 58% of US consumers were already following virtual influencers, a sign that the public is increasingly comfortable engaging with AI-driven agents in personal, even emotional ways.

That human-feel approach makes AI more engaging and more effective, but it also muddies the water: When do audiences believe they’re talking to a real person? Does it matter? And if AI can convincingly “be” a person online, what does authenticity even mean in marketing anymore?

The Power of Persona-Driven AI as a Research Tool

Ironically, one of the best uses of human-like AI isn’t outward-facing at all.

Persona-driven AI can serve as a powerful research tool, offering marketers a dynamic, low-risk sandbox for testing messages, concepts, and campaigns. These AI personas can be designed to represent diverse communities, even mirroring the demographic, cultural, or behavioral traits of specific populations, such as those within a particular country.

By enabling these personas to interact with one another, marketers can simulate complex social dynamics, uncover nuanced reactions, and explore how ideas resonate across different segments without the ethical risks or costs of real-world experimentation.
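The workflow described above — defining personas and collecting their in-character reactions to a draft message — can be sketched in a few lines. This is an illustrative mock-up, not any vendor’s API: `ask_llm` is a hypothetical placeholder for whatever LLM client you use, and the persona fields and prompt shape are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    traits: str  # demographic, cultural, or behavioral description

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a call to your LLM provider of choice.
    return f"[simulated reply to: {prompt[:40]}...]"

def simulate_reactions(personas: list[Persona], message: str) -> dict[str, str]:
    """Collect each persona's in-character reaction to a draft message."""
    reactions = {}
    for p in personas:
        prompt = (
            f"You are {p.name}: {p.traits}. "
            f"React briefly and in character to this campaign message:\n{message}"
        )
        reactions[p.name] = ask_llm(prompt)
    return reactions

panel = [
    Persona("Gen Z urbanite", "22, online-native, values authenticity and humor"),
    Persona("Boomer retiree", "68, values clarity, wary of jargon"),
]
results = simulate_reactions(panel, "Meet our new AI-powered savings app.")
```

A real deployment would add persona-to-persona turns (feeding one persona’s reply into another’s prompt) to approximate the social dynamics the article describes.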

Companies such as Social Trait have built and trained persona-driven AI agents to simulate diverse consumer segments for precisely this purpose: real-time insight generation and campaign validation.

Need to understand how different segments might respond to a sensitive campaign? Or how a new product’s tone lands with Gen Z vs. Boomers? A simulated audience can help refine messaging before a single post goes live.

This approach isn’t replacing focus groups or gut instinct. It’s augmenting them. It’s like having a hyper-intelligent sounding board that lets you test, iterate, and learn fast.

Used in this way, AI becomes less about automating engagement and more about deepening it.

Authenticity at Scale: Redefining Brand Voice With AI

Let’s bust a myth: AI needn’t dilute brand voice; it can actually help define and sharpen it.

Persona-driven AI can simulate reactions from different demographics, emotional states, and cultural contexts—allowing marketers to practice empathetic listening at scale.

And it’s not just about saying the right thing to an audience; it’s about anticipating how it will be felt on the other end.

By creating dynamic communities of AI personas modeled after real populations, whether they reflect a nation’s cultural attitudes, a niche subculture, or a targeted customer segment, brands can engage in real-time, low-risk dialogue with their audiences. AI personas don’t just respond; they interact with one another, revealing emergent behavior, social influence patterns, and emotional nuance that traditional testing often misses.

That’s why more than half of marketers are already using generative AI, according to a recent Salesforce survey. They’re not just automating. They’re enhancing strategy, empathy, and creativity at scale.

The result? Brand voices that are more nuanced, inclusive, and resonant.

Responsible Engagement and Guardrails for AI in Marketing

The temptation to use human-like AI to smooth over friction, boost interaction, and even increase conversions is strong. But with power comes responsibility.

Authenticity must remain a guiding principle. That begins with transparency: Audiences deserve to know when they’re engaging with AI. Deceptive use of AI, even unintentionally, breaks trust—which, once lost, is hard to regain.

Ethical engagement also means resisting manipulative design. If a bot sounds like your friend, it shouldn’t be weaponized to upsell or nudge behavior without consent. Persona-driven AI should reflect brand values, not obscure them.

Marketers must also acknowledge public sentiment: 55% of people say they would be more eager to use AI applications if they felt more human, not necessarily if they looked more human. This nuance matters. Feeling seen and heard drives engagement far more than superficial mimicry.

We need a new set of standards for AI in marketing. Ones that measure success not just in clicks and dwell time but in trust, clarity, and long-term brand health.

To keep AI a force for good in marketing, we must build it with intention, guided by a set of guardrails that ensure long-term trust and brand integrity:

  • Always disclose when AI is being used in real-time engagement.
  • Design AI agents that reflect your brand’s values, not just what performs well.
  • Avoid stereotypes in persona development and training data.
  • Keep humans in the loop, especially in sensitive or high-stakes conversations.
  • Audit for bias continuously, not just at launch.
  • Understand the boundaries of responsible use, with a clear commitment to being insightful, not manipulative.

Ethical AI isn’t a feature. It’s a framework—which we must build now, not retroactively.

AI as a Bridge, Not a Barrier

We’re entering a new era. Not AI vs. human, but AI with human.

Marketers stand at the helm of this transformation. We have the tools to shape AI that engages, but also the responsibility to ensure it does so honestly.

When used with care, persona-driven AI isn’t a shortcut to connection. It’s a way to deepen it.

As the new stewards of AI-human interaction, let’s lead with intention and build a future where technology amplifies what’s best about being human.

More Resources on AI Use Cases in Marketing

How Market Researchers Are Using Generative AI

Using AI to Build Your Personas: Don’t Lose Sight of Your Real-World Buyers

Five Use Cases for AI in B2B Marketing (Beyond Content Generation)

Navigating AI Adoption and Use in Marketing: A Strategic Approach


