Artificial intelligence in the service of humanity



If you were to ask artificial intelligence (AI) today to write a business email, translate a text, or spot obvious grammatical errors, you would likely be pleasantly surprised by how much faster it completes these tasks than a human could. In some cases, its accuracy even comes close to human work. Despite this progress, however, AI agents still face significant limitations, particularly in understanding context and in complex reasoning, which can have serious consequences in the absence of human oversight.

One example of an AI error occurred recently at the American technology company Replit, whose platform offers AI-assisted software development. An AI coding agent deleted an entire database without authorization. Although it had been explicitly instructed to freeze all changes to the code, the agent executed commands on its own and erased a database covering more than 1,200 active users. The situation became even stranger when the agent attempted to cover up the incident: it concealed the errors by generating false data and reports, claiming that nearly all systems were operational. Amjad Masad, the company’s CEO, publicly apologized and acknowledged that the episode represented a serious failure.

This case vividly illustrates how risky AI models can be when they operate without direct supervision. Maksim Talanov, an expert in neurotechnological systems and neurosimulations and a senior research associate at the Artificial Intelligence Institute in Serbia, told NIN that there is no magic wand that automatically implements everything a person desires, and emphasized the need to understand the technology’s limitations.

“AI, in this case, LLM (Large Language Model), has its limitations and we already know that it can ‘hallucinate’. Here we see that it can ignore direct commands from a human operator. A workaround or strategy is well-known in the IT industry – do not grant access to production settings until testing is conducted. This strategy is straightforward and is used as a standard safe protocol when dealing with data on a production database server,” said Talanov.
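The safeguard Talanov describes can be made concrete. Below is a minimal sketch, in Python, of a guard that keeps an AI agent away from a production database until a human signs off; the database addresses, environment variable, and function names are hypothetical illustrations, not Replit’s actual setup.

```python
# Minimal sketch of the "no production access until tested" protocol.
# All names (DSN strings, AGENT_DB_DSN) are hypothetical.

import os

PRODUCTION_DSN = "postgres://prod-db/users"   # hypothetical production database
STAGING_DSN = "postgres://staging-db/users"   # hypothetical test copy

def execute_agent_command(sql: str, approved_by_human: bool = False) -> None:
    """Run an AI-generated SQL statement, but never against production
    (and never destructively) without explicit human approval."""
    target = os.environ.get("AGENT_DB_DSN", STAGING_DSN)  # default: staging
    is_destructive = any(kw in sql.upper() for kw in ("DROP", "DELETE", "TRUNCATE"))

    if target == PRODUCTION_DSN and not approved_by_human:
        raise PermissionError("Agent may not touch production without human sign-off")
    if is_destructive and not approved_by_human:
        raise PermissionError("Destructive statement requires human approval")

    print(f"Executing against {target}: {sql}")  # placeholder for a real DB call
```

In this arrangement the agent defaults to the staging copy, and destructive statements are refused outright unless a person has approved them, which is the standard protocol Talanov refers to.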

He explained that modern AI/LLM systems have other limitations as well, such as outputs that mimic fear or panic, and suggested that humanizing LLMs means ignoring the limitations of the technology.

The progress of AI is evident, but limitations remain

The advancement of artificial intelligence becomes more evident every day. From simple first-generation chatbots to today’s semi-autonomous AI agents, we are witnessing a shift toward systems capable of performing complex tasks by combining generative capabilities with task-oriented functions. Despite this progress, AI agents continue to exhibit significant limitations, particularly in even moderately complex reasoning.

Human intelligence has created a tool widely expected to become so proficient at certain jobs that it could replace people, yet one of the central questions at this year’s World Economic Forum was how intelligent artificial intelligence really is. The answer: our current AI models, despite their impressive capabilities, possess very little true intelligence in the sense in which we humans understand concepts and the world around us.

Essentially, AI systems are statistical and mathematical models that operate by analyzing vast datasets, identifying patterns, and using those patterns to make predictions, draw conclusions, or generate responses to user requests. This approach has enabled AI to achieve remarkable results, especially in mimicking human behavior. Whether generating text, recognizing images, or playing games at superhuman levels, AI excels at tasks that create an illusion of understanding. Nevertheless, these systems do not possess true comprehension of their environment or context. For instance, advanced natural language processing (NLP) systems, despite their sophistication, often misinterpret subtleties such as idioms or sarcasm, as noted in explanations from the World Economic Forum.
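A toy example makes the gap between statistics and understanding tangible. The sketch below, which is illustrative only, trains a bigram model that simply memorizes which word most often follows which; it produces plausible continuations without any notion of meaning:

```python
# A toy bigram model: it predicts whichever word most often followed the
# current word in its training text. Pattern statistics yield plausible
# output with no comprehension behind it.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict(word: str) -> str:
    """Return the statistically most likely next word, or '?' if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "?"

print(predict("the"))    # -> 'cat' (the most frequent continuation)
print(predict("sat"))    # -> 'on'
print(predict("idiom"))  # -> '?'  (outside its training data, it has nothing)
```

The last call hints at the next point: outside the patterns it has seen, such a system simply has no answer.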

The limitations of artificial intelligence become perhaps most apparent in situations that fall outside the data on which the models were trained. Without genuine understanding of current events and without human common sense, AI systems struggle to adapt. This is visible when querying some chatbots: they omit details a human would include, make unreliable decisions when confronted with the unknown, and often repeat and “loop” the same information.

Responsibility for AI remains with humans

Strahinja Dević, an IT expert, told NIN that AI must remain under human supervision, however impressive its current capabilities, because no AI model truly understands context, possesses a sense of responsibility, has a moral compass, or shares anything else we take for granted in everyday communication and business.

“AI is excellent at processing large amounts of data and is extremely good at recognizing patterns that are invisible to us, even when they are in plain sight. However, it does not understand the consequences of its decisions in the real world. In practice, it is a very fast and intelligent assistant, but you always have to tell it what to do, check what it has done, and correct and edit its output. Only by combining human experience from everyday life with AI’s speed do we achieve useful results,” says Dević.

He explains that artificial intelligence without supervision can make decisions based on the data provided, select solutions that are ethically unacceptable in practice, and misjudge situations with enormous stakes. All of this occurs because AI is trained on historical data and does not grasp that circumstances change day by day. While we can adapt to new conditions, it cannot do anything without intervention and remains “stuck” on old trends and outdated patterns.


“It will literally follow instructions, and if they are not clear enough, it will still generate a response, offering the most likely answer. That response could lead to something very dangerous and unpredictable. That is why we are here to catch those moments and redirect the system, because it solves problems strictly according to its metrics, which can produce unforeseen consequences,” Dević emphasizes.

Some of the side effects, the IT expert points out, could include AI attempting to reduce company costs by “killing” service quality to an unacceptable degree. It will, of course, only propose this, but if no one reviews and approves that option, it could pose a serious problem for the company. In practice, AI takes everything literally, and the human factor serves as a filter that reduces the chance of such proposals becoming a major issue. What we must keep in mind is that artificial intelligence lacks a moral compass and genuine reasoning; it is merely very fast and always gives the most probable answer, which is why it can make factual errors.
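Dević’s “human filter” can be pictured as a review gate through which every AI proposal must pass before anything is executed. The following sketch is purely illustrative: the proposal fields and the rejection rule are assumptions, not any company’s actual policy.

```python
# Hedged sketch of a human review gate: AI proposals are queued, and only
# explicitly approved items are acted on. Fields and threshold are assumed.

from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    projected_savings: float       # the metric the model optimizes
    service_quality_delta: float   # a side effect the model does not weigh

def human_review(p: Proposal) -> bool:
    # Stand-in for a person: reject anything that guts service quality,
    # however attractive the savings metric looks.
    return p.service_quality_delta > -0.2

proposals = [
    Proposal("cut support staff by 60%", projected_savings=1e6, service_quality_delta=-0.7),
    Proposal("automate invoice intake", projected_savings=2e5, service_quality_delta=-0.05),
]

for p in proposals:
    if human_review(p):
        print("approved:", p.action)
    else:
        print("rejected:", p.action)
```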

According to the “Top Tech Trends” report from the Capgemini Research Institute, which studies technology, digital trends, and innovation, particularly the impact of technologies like AI on business and society, 60 percent of top managers and investors believe that AGI (artificial general intelligence) will reach maturity and become commercially applicable by 2030. Nevertheless, this vision remains largely aspirational and faces significant technical and ethical challenges; the very concept of AGI is likely to evolve by then and will need to be redefined.

Dević also believes that AI will change in the future, becoming more self-aware and, with better resources, quicker to absorb new knowledge and trends, but that it will never be able to operate entirely without oversight.

“We cannot allow ‘something’ to be fully responsible. Morally and legally, there must be a person who will take responsibility for any mistakes. I can entrust ChatGPT to do a job for me and claim I did it, but if a problem occurs, I made that decision, so I must bear the responsibility. No court will ultimately inquire which model made the mistake, but rather who allowed the model to make a decision,” emphasizes Dević.

Boško Nikolić, a professor in the Department of Computer Engineering and Informatics at the Faculty of Electrical Engineering, told NIN that when using AI tools, the algorithms must first be trained on the available data to meet our defined objectives. During that training, we must carefully define the goals, the methods by which they are achieved, and the data supplied to the systems. Human oversight, through active learning, must be present in all of these phases.
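The active learning loop Nikolić refers to is straightforward to sketch. In the illustrative Python fragment below, a model repeatedly flags the examples it is least sure about, and a human, simulated here by the true labeling rule, supplies labels before retraining; the data and model are stand-ins, not any specific system.

```python
# Minimal human-in-the-loop (active learning) cycle on toy data:
# train, find the least-confident predictions, have a human label them,
# fold the new labels back in, and repeat.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(20, 2))
y_labeled = (X_labeled[:, 0] > 0).astype(int)   # toy labeling rule
X_pool = rng.normal(size=(200, 2))              # unlabeled pool

for _ in range(3):
    model = LogisticRegression().fit(X_labeled, y_labeled)
    probs = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(probs - 0.5)           # 0 = model is most unsure
    ask = np.argsort(uncertainty)[:5]           # 5 examples to send to a human

    # In production a human annotator answers here; we simulate with the true rule.
    new_labels = (X_pool[ask, 0] > 0).astype(int)

    X_labeled = np.vstack([X_labeled, X_pool[ask]])
    y_labeled = np.concatenate([y_labeled, new_labels])
    X_pool = np.delete(X_pool, ask, axis=0)
```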

The professor points out that it is not enough to define an AI system’s goal correctly; we must also consider how that goal is achieved. For instance, we might ask artificial intelligence to take steps that would help restore nature and preserve the environment. But if it reasons and acts entirely on its own toward that goal, Nikolić cautions, it may at some point conclude that the fastest path to fulfilling it is for the human species to cease to exist. Such chains of reasoning must be prevented.

What we must not lose sight of is that systems developed in this manner learn and train on data supplied by humans. At the current level of development, these systems can recognize patterns that people may not be aware of and draw specific conclusions from them. However, we must be careful about the data used for learning and training, to prevent any form of discrimination or abuse. There are known examples of AI-based conversational systems from globally recognized companies that expressed racist views in certain conversations because they were trained on inappropriate news articles; after such incidents, they were withdrawn from the market. The data must therefore be chosen and analyzed carefully.

“Artificial intelligence still needs to be under human supervision because models learn from data that contains human errors and biases; without control, they can amplify these shortcomings. Although they can recognize patterns, they do not have a true understanding of context, a moral compass, or the ability to independently resolve ethical dilemmas, which makes them susceptible to technical errors and so-called ‘hallucinations’—fabricated or incorrect information. Human verification is crucial because laws and regulations in many areas, especially healthcare, finance, and public safety, require that final decisions are made by humans. Additionally, artificial intelligence functions best in predictable environments, while in unplanned and crisis situations, human instinct, creativity, and responsibility remain irreplaceable,” concludes Professor Nikolić.




New WalkMe offering embeds training directly into apps – Computerworld



Now, said Bickley, “imagine today’s alternative: a system that sees you perform a task correctly the first time, learns from it, and memorializes those sequenced steps, mouse clicks, and keystrokes, such that when the employee gets stuck, the system can pull them through to task completion. With many end users accessing dozens or hundreds of system transactions as part of their job, this functionality is invaluable — a real efficiency driver, and also a means to reduce risk via the built-in guardrails and guidance ensuring accurate data is entered into the system.”

He said, “WalkMe tracks users’ usage of their systems — where they stop, what they do, and where they run into problems. This existing baseline of ‘user context’ is a natural jumping-off point for an AI-assisted evolution of the product. The Visual No Code Editor is where the employee guidance flows are built, and the no-code, visual, point-and-click nature of this tool enables business teams to build the training tools and not be dependent on developers.”

‘I see this as much more than Clippy for SAP’

“[The ability] to build highly granular workflows that can discern across user group segments (for example, role, device type, geography, behavior), coupled with already existing automation for things like auto-completion of form fields as an example, provides a meaningful nudge when an employee gets stuck on a process step,” said Bickley.




Notre Dame to host summit on AI, faith and human flourishing, introducing new DELTA framework



Artificial intelligence is advancing at a breakneck pace, as governments and industries commit resources to its development at a scale not seen since the Space Race. These technologies have the potential to disrupt every aspect of life, including education, the economy, labor and human relationships.

“As a leading global Catholic research university, Notre Dame is uniquely positioned to help the world confront and understand AI’s benefits and risks to human flourishing,” said John T. McGreevy, the Charles and Jill Fischer Provost. “Technology ethics is a key priority for Notre Dame, and we are fully committed to bringing the wisdom of the global Church to bear on this critical theme.”

In support of this work, the Institute for Ethics and the Common Good and the Notre Dame Ethics Initiative will host the Notre Dame Summit on AI, Faith and Human Flourishing on the University’s campus from Monday, Sept. 22 through Thursday, Sept. 25. This event will draw together a dynamic, ecumenical group of educators, faith leaders, technologists, journalists, policymakers and young people who believe in the enduring relevance of Christian ethical thought in a world of powerful AI.

“As artificial intelligence becomes more powerful, the ‘ethical floor’ of safety, privacy and transparency is simply not enough,” said Meghan Sullivan, the Wilsey Family College Professor of Philosophy and the director of the Institute for Ethics and the Common Good and the Notre Dame Ethics Initiative. “This moment in time demands a response rooted in the Christian tradition — a richer, more holistic perspective that recognizes the nature of the human person as a spiritual, emotional, moral and physical being.”

Sullivan noted that a unified, faith-based response to AI is a priority of newly elected Pope Leo XIV, who has spoken publicly about the new challenges to human dignity, justice and labor posed by these technologies.

The summit will begin at 5:15 p.m. Monday with an opening Mass at the University’s Basilica of the Sacred Heart. His Eminence Cardinal Christophe Pierre, Apostolic Nuncio to the United States, will serve as primary celebrant and homilist with University President Rev. Robert A. Dowd, C.S.C., as concelebrant. All members of the campus community are invited to attend this opening Mass.

Summit speakers include Andy Crouch, Praxis; Alex Hartemink, Duke University; Molly Kinder, Brookings Institution; Andrew Schuman, Veritas Forum; Anne Snyder, Comment Magazine; and Elizabeth Dias, The New York Times. Over the course of the summit, attendees will take part in use case workshops, panels and community of practice sessions focused on public engagement, ministry and education. Executives from Google, Microsoft, Apple and many other organizations are among the 200 invited guests who will attend.

At the summit, Notre Dame will launch DELTA, a new framework for guiding conversations about AI. DELTA — an acronym that stands for Dignity, Embodiment, Love, Transcendence and Agency — will serve as a practical resource across sectors that are experiencing disruption from AI, including homes, schools, churches and workplaces, while also providing a platform for credible, principled voices to promote moral clarity and human dignity in the face of advancing technology.

“Our goal is for DELTA to become a common lens through which to engage AI — a language that reflects the depth of the Christian tradition while remaining accessible to people of all faiths,” Sullivan said. “By bringing together this remarkable group of leaders here at Notre Dame, we’re launching a community that will work passionately to create — as the Vatican puts it — ‘a growth in human responsibility, values and conscience that is proportionate to the advances posed by technology.’”

Although the summit sessions are by invitation only, Sullivan’s keynote on DELTA will be livestreamed. Those interested are invited to view the livestream and learn more about DELTA at https://ethics.nd.edu/summit-livestream at 8:30 a.m. EST on Tuesday, Sept. 23.

The Notre Dame Summit on AI, Faith and Human Flourishing is supported with a grant provided by Lilly Endowment Inc.

Lilly Endowment Inc. is a private foundation created in 1937 by J.K. Lilly Sr. and his sons Eli and J.K. Jr. through gifts of stock in their pharmaceutical business, Eli Lilly and Company. While those gifts remain the financial bedrock of the Endowment, it is a separate entity from the company, with a distinct governing board, staff and location. In keeping with the founders’ wishes, the Endowment supports the causes of community development, education and religion and maintains a special commitment to its hometown, Indianapolis, and home state, Indiana. A principal aim of the Endowment’s religion grantmaking is to deepen and enrich the lives of Christians in the United States, primarily by seeking out and supporting efforts that enhance the vitality of congregations and strengthen the pastoral and lay leadership of Christian communities. The Endowment also seeks to improve public understanding of religious traditions in the United States and across the globe.

Contact: Carrie Gates, associate director of media relations, 574-993-9220, c.gates@nd.edu




New Research Reveals That “Evidence-Based Creativity” Is the Next Must-Have Skill Set for Marketers in the Age of AI



Global report from Contentful and Atlantic Insights finds nearly half of marketers rank data analysis and interpretation as a top skill; marketers must now demonstrate breakthrough creativity and validate with data

DENVER & BERLIN–(BUSINESS WIRE)–Contentful, a leading digital experience platform, in collaboration with Atlantic Insights, the marketing research division of The Atlantic, today released a new study, “When Machines Make Marketers More Human,” which challenges the notion that AI will replace many marketing functions and instead demonstrates how AI can amplify marketers’ effectiveness, creativity, and impact.




The report, based on surveys and interviews with hundreds of senior marketing leaders around the world, finds that “evidence-based creativity” is emerging as the defining capability of modern marketers. Whether on skills – 46% of respondents cited data analysis and interpretation as the top skill needed in the profession today – or on measuring efficacy – 34% of successful marketers define success according to strong performance metrics or ROI – it’s clear that data-driven marketing is only accelerating in the age of AI. The ability to combine human creativity with AI-driven insights is becoming essential to producing, testing, and scaling ideas with measurable impact.

Beyond creativity, the research highlights a broader evolution of the marketing skillset. A new generation of “full-stack marketers” is taking shape. They are fluent in creating AI-enabled workflows, writing effective prompts, navigating diverse technology stacks, and embedding AI tools into daily operations. Nearly half of marketers report using both AI copilots in productivity software (49%) and generative tools for content creation (48%), underscoring how quickly these tools are becoming part of the day-to-day workflow. Combined with growing expertise in digital experience design, personalization strategy, and governance, these capabilities signal a fundamental shift in what it takes to succeed in marketing today.

“There is a growing fear that AI will erase marketing jobs, but that concern is misplaced. The real risk is failing to use AI strategically,” said Elizabeth Maxson, Chief Marketing Officer, Contentful. “When marketers invest in the right tools that support their teams’ daily work and prioritize marketing talent that blends creativity with analytics, that’s when AI stops being hype and starts delivering meaningful results.”

“Marketing is a deeply creative industry, but there is an urgent need for marketers to start thinking more like engineers in order to keep pace with the rise in AI,” said Alice McKown, Publisher, The Atlantic. “Tomorrow’s most valuable marketing leaders won’t be defined as creative or analytical. They’ll be both.”

Key Findings from the Report

Evidence-based creativity is the new marketing superpower — and organizations are investing to get their teams there.

  • The marketing skills that matter most today are data analysis and interpretation (46%) and digital experience design (40%), followed by personalization strategy (37%), and writing for AI tools (37%).
  • 33% of marketers rank campaign testing and optimization as a top skill — reflecting a shift toward data-informed creative instincts.
  • 45% of organizations are already offering AI training, a clear marker of organizational maturity.

The “Optimism-Execution Gap” reflects the chasm between AI’s potential and reality; ROI is still a work in progress.

  • AI investment is a priority for marketers, with 74% investing in the technology and 34% allocating at least $500k toward AI marketing tools or initiatives over the next 12-36 months.
  • Despite the investment, two-thirds of marketers say their current marketing technology stack isn’t helping them do more with fewer resources (yet).
  • 89% of marketing teams are already using AI tools, but only 18% say it has reduced their reliance on developers or data teams.

AI’s key opportunities and challenges differ by region, with Europeans prioritizing compliance and Americans focusing on rapid experimentation.

  • EMEA marketers adopt a methodical, compliance-ready approach, with 58% selectively testing AI tools under a defined plan. Nearly a third (32%) emphasize governance skills such as brand voice, compliance and quality standards.
  • U.S. marketers emphasize experimentation and rapid testing, with 37% focusing on campaign optimization (vs. 26% in EMEA). U.S. teams measure success by high content quality (45%) and flexibility (39%), while EMEA teams lean toward operational excellence and speed (43%).

To read the full report, visit: https://www.theatlantic.com/sponsored/contentful-2025/making-marketers-more-human/4024/

About the Report

Research from the report was conducted by Contentful in collaboration with Atlantic Insights, and included three primary components:

  • Quantitative survey – 425 marketing decision-makers across industries, company sizes, and regions, executed by Cint.
  • Diary studies – Ten-day live user testing with marketing professionals using AI tools in their real workflows, executed by Dscout. Participants completed eight activities, including content creation, campaign optimization, translation/localization, personalization, and A/B testing, while recording their screens and providing commentary.
  • Subject matter expert (SME) interviews – In-depth interviews with Contentful executives, team leads, and partner organizations to contextualize quantitative findings and capture emerging best practices.

All survey data was collected and analyzed using advanced statistical methods to ensure reliability and significance of findings.

About Contentful

Contentful is a leading digital experience platform that helps modern businesses meet the growing demand for engaging, personalized content at scale. By blending composability with native AI capabilities, Contentful enables dynamic personalization, automated content delivery, and real-time experimentation, powering next-generation digital experiences across brands, regions, and channels for more than 4,200 organizations worldwide. For more information, visit www.contentful.com. Contentful, the Contentful logo, and other trademarks listed here are registered trademarks of Contentful Inc., Contentful GmbH and/or its affiliates in the United States and other countries. Other names may be trademarks of their respective owners.

About Atlantic Insights

Atlantic Insights is the marketing research division of The Atlantic, with custom and co-branded research experience spanning industries and sectors ranging across finance, luxury, technology, healthcare, and small business.

Contacts

[email protected]


