
AI Insights

Actually, AI is a ‘word calculator’ – but not in the sense you might think



Attempts at communicating what generative artificial intelligence (AI) is and what it does have produced a range of metaphors and analogies.

From a “black box” to “autocomplete on steroids”, a “parrot”, and even a pair of “sneakers”, the goal is to make the understanding of a complex piece of technology accessible by grounding it in everyday experiences – even if the resulting comparison is often oversimplified or misleading.

One increasingly widespread analogy describes generative AI as a “calculator for words”. Popularised in part by OpenAI’s chief executive, Sam Altman, the comparison suggests that, much like the familiar plastic devices we used to crunch numbers in maths class, generative AI tools exist to help us crunch large amounts of linguistic data.

The calculator analogy has been rightly criticised because it can obscure the more troubling aspects of generative AI. Unlike chatbots, calculators don’t have built-in biases, don’t make mistakes, and don’t pose fundamental ethical dilemmas.

Yet there is also danger in dismissing this analogy altogether, given that, at their core, generative AI tools are word calculators.

What matters, however, is not the object itself, but the practice of calculating. And calculations in generative AI tools are designed to mimic those that underpin everyday human language use.

Languages have hidden statistics

Most language users are only indirectly aware of the extent to which their interactions are the product of statistical calculations.

Think, for example, about the discomfort of hearing someone say “pepper and salt” rather than “salt and pepper”. Or the odd look you would get if you ordered “powerful tea” rather than “strong tea” at a cafe.

The rules that govern the way we select and order words, and many other sequences in language, come from the frequency of our social encounters with them. The more often you hear something said a certain way, the less viable any alternative will sound. Or rather, the less plausible any other calculated sequence will seem.

In linguistics, the vast field dedicated to the study of language, these sequences are known as “collocations”. They’re just one of many phenomena that show how humans calculate multiword patterns based on whether they “feel right” – whether they sound appropriate, natural and human.
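
This frequency effect is easy to demonstrate in code. The sketch below is a toy illustration rather than anything from the article (the miniature “corpus” is invented), but counting how often each three-word sequence occurs is enough to show why one ordering “feels right” and the other doesn’t:

```python
from collections import Counter

# A toy stand-in for the millions of "social encounters" a speaker has.
corpus = (
    "please pass the salt and pepper . "
    "salt and pepper sit on every table . "
    "she asked for strong tea with salt and pepper crisps . "
    "pepper and salt is a far rarer ordering . "
).split()

# Count every adjacent three-word sequence (a crude collocation measure).
trigram_counts = Counter(zip(corpus, corpus[1:], corpus[2:]))

print(trigram_counts[("salt", "and", "pepper")])  # 3 -> the "natural" order
print(trigram_counts[("pepper", "and", "salt")])  # 1 -> the order that sounds off
```

Corpus linguists use more refined association measures, such as pointwise mutual information, but the principle is the same: frequency in, intuition out.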

Why chatbot output ‘feels right’

One of the central achievements of large language models (LLMs) – and therefore chatbots – is that they have managed to formalise this “feel right” factor in ways that now successfully deceive human intuition.

In fact, they are some of the most powerful collocation systems in the world.

By calculating statistical dependencies between tokens (be they words, symbols, or dots of color) inside an abstract space that maps their meanings and relations, AI produces sequences that at this point not only pass as human in the Turing test, but perhaps more unsettlingly, can get users to fall in love with them.
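
The article doesn’t describe any particular model’s internals, but the core calculation can be sketched in a few lines. In the deliberately tiny example below, every number is invented: each token is a vector in a two-dimensional “meaning space”, candidate next tokens are scored against the context, and a softmax turns the scores into probabilities:

```python
import numpy as np

# Invented 2-D "meaning space": related tokens sit close together.
embeddings = {
    "love":  np.array([0.9, 0.8]),
    "hate":  np.array([0.7, -0.6]),
    "toast": np.array([0.1, 0.2]),
}

# Stand-in for the model's internal representation of the prompt "I ... you".
context = np.array([0.85, 0.75])

# Score each candidate token against the context, then normalise the
# scores into a probability distribution with a softmax.
tokens = list(embeddings)
scores = np.array([embeddings[t] @ context for t in tokens])
probs = np.exp(scores) / np.exp(scores).sum()

for token, p in sorted(zip(tokens, probs), key=lambda pair: -pair[1]):
    print(f"{token}: {p:.2f}")  # "love" comes out on top
```

A production LLM does the same kind of thing with tens of thousands of tokens, thousands of dimensions and many layers of intermediate calculation, but the output is still a probability distribution over what comes next.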




Read more: In a lonely world, widespread AI chatbots and ‘companions’ pose unique psychological risks


A major reason why these developments are possible has to do with the linguistic roots of generative AI, which are often buried in the narrative of the technology’s development. But AI tools are as much a product of computer science as they are of different branches of linguistics.

The ancestors of contemporary LLMs such as GPT-5 and Gemini are Cold War-era machine translation tools designed to translate Russian into English. With the development of linguistics under figures such as Noam Chomsky, however, the goal of such machines shifted from simple translation to decoding the principles of natural (that is, human) language processing.

LLM development happened in stages: early attempts to mechanise the “rules” of language (such as grammar), then statistical approaches that measured the frequency of word sequences in limited data sets, and now models that use neural networks to generate fluent language.

However, the underlying practice of calculating probabilities has remained the same. Although scale and form have immeasurably changed, contemporary AI tools are still statistical systems of pattern recognition.

They are designed to calculate how we “language” about phenomena such as knowledge, behaviour or emotions, without direct access to any of these. If you prompt a chatbot such as ChatGPT to “reveal” this fact, it will readily oblige.

[Screenshot: ChatGPT-5 response when asked if it uses statistical calculations to form its responses. OpenAI/ChatGPT/The Conversation]

AI is always just calculating

So why don’t we readily recognise this?

One major reason has to do with the way companies describe and name the practices of generative AI tools. Instead of “calculating”, generative AI tools are “thinking”, “reasoning”, “searching” or even “dreaming”.

The implication is that in cracking the equation for how humans use language patterns, generative AI has gained access to the values we transmit via language.

But at least for now, it has not.

It can calculate that “I” and “you” are most likely to collocate with “love”, but it is neither an “I” (it’s not a person), nor does it understand “love”, nor, for that matter, you – the user writing the prompts.

Generative AI is always just calculating. And we should not mistake it for more.




AI Insights

Swift Tests Use of AI to Fight Cross-Border Payment Fraud



Swift conducted tests to demonstrate the potential impact of artificial intelligence in preventing cross-border payments fraud.

The global messaging system collaborated with 13 banks on experiments using privacy-enhancing technologies (PETs) to let institutions securely share fraud insights across borders, according to a Monday (Sept. 15) press release.

In one instance, the PETs allowed participants to verify intelligence on suspicious accounts in real time, “a development which could speed up the time taken to identify complex international financial crime networks and avoid fraudulent transactions being executed,” the release said.
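
Swift hasn’t published the protocols behind these tests, and production PETs rely on heavier machinery such as private set intersection or homomorphic encryption. Purely as a rough intuition, the sketch below shows the simplest version of the underlying idea: two banks hash account identifiers with a shared secret so they can check for matches without exchanging raw account numbers. Every name and value here is hypothetical:

```python
import hashlib

def blind(account_id: str, shared_key: bytes) -> str:
    # Hash the identifier together with a shared secret, so the raw
    # account number never has to leave the institution.
    return hashlib.sha256(shared_key + account_id.encode()).hexdigest()

KEY = b"agreed-out-of-band"  # placeholder for a real key-agreement step

# Bank A shares only the blinded forms of its flagged accounts.
bank_a_flagged = {blind(acct, KEY) for acct in ["ACC-1001", "ACC-2002"]}

# Bank B checks its own suspects against that list.
print(blind("ACC-2002", KEY) in bank_a_flagged)  # True: both banks flagged it
print(blind("ACC-3003", KEY) in bank_a_flagged)  # False: no match
```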

In another case, participants employed a combination of PETs and federated learning, a technique in which an AI model “visits” each institution to train on its data locally, letting institutions work together without sharing customer information, to spot anomalous transactions, per the release.

Trained using synthetic data from 10 million artificial transactions, the model was twice as effective at identifying fraud as a model trained on a single institution’s dataset, the release said.
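
The release doesn’t specify the model architecture, but federated averaging, the textbook form of federated learning, is straightforward to sketch. In the illustration below (synthetic data, a simple logistic-regression model, and every parameter invented), each “bank” trains locally and only the model weights travel; the raw transactions never leave the institution:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1):
    # One gradient step of logistic regression on a bank's private data.
    preds = 1.0 / (1.0 + np.exp(-(X @ weights)))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

# Three banks, each holding its own synthetic "transactions"
# (feature vectors plus a fraud/no-fraud label).
banks = [
    (rng.normal(size=(200, 4)), rng.integers(0, 2, size=200))
    for _ in range(3)
]

weights = np.zeros(4)
for _ in range(20):  # communication rounds
    # Each bank trains on its own data; only the updated weights are shared.
    local_models = [local_update(weights, X, y) for X, y in banks]
    weights = np.mean(local_models, axis=0)  # the coordinator averages them

print(weights)  # a jointly trained model, built without pooling raw data
```

Per the release, Swift’s experiment added a further safeguard by training on artificial transactions, so even the local training data contained no real customer records.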

“These experiments demonstrate the convening power of Swift as a trusted cooperative at the heart of global finance,” Rachel Levi, head of AI for Swift, said in the release. “A united, industry-wide fraud defense will always be stronger than one put up by a single institution acting alone. The industry loses billions [of dollars] to fraud each year, but by enabling the secure sharing of intelligence across borders, we’re paving the way for this figure to be significantly reduced and allowing fraud to be stopped in a matter of minutes, not hours or days.”

In the wake of these experiments, Swift plans to widen participation before beginning a second round of tests, which will use real transaction data in hopes of demonstrating the technologies’ effect on real-world fraud, the release said.

When it comes to preserving trust in financial transactions, sharing data is important.

“It’s a team sport,” Entersekt Chief Product Officer Pradheep Sampath told PYMNTS in August. “And the thread that binds us all together is data that’s actionable, shared in good faith, and governed responsibly.”






AI Insights

Why AI is never going to run the world



The secret to human intelligence can’t be replicated or improved on by artificial intelligence, according to researcher Angus Fletcher.

Fletcher, a professor of English at The Ohio State University’s Project Narrative, explains in a new book that AI is very good at one thing: logic. But many of life’s most fundamental problems require a different type of intelligence.

“AI takes one feature of intelligence – logic – and accelerates it. As long as life calls for math, AI crushes humans,” Fletcher writes in the book “Primal Intelligence.”

“It’s the king of big-data choices. The moment, though, that life requires commonsense or imagination, AI tumbles off its throne. This is how you know that AI is never going to run the world – or anything.”

Instead, Fletcher has developed a program to help people strengthen their primal intelligence, one that has been used successfully with groups ranging from the U.S. Army to elementary school students.

At its core, primal intelligence is “the brain’s ancient ability to act smart with limited information,” Fletcher said.

In many cases, the most difficult problems people face involve situations where they have limited information and need to develop a novel plan to meet a challenge.

The answer is what Fletcher calls “story thinking.”

“Humans have this ability to communicate through stories, and story thinking is the way the brain has evolved to work,” he said.

“What makes humans successful is the ability to think of and develop new behaviors and new plans. It allowed our ancestors to escape the predator.  It allows us to plan, to plot our actions, to put together a story of how we might succeed.”

Humans have four “primal powers” that allow us to act smart with little information.

Those powers are intuition, imagination, emotion and commonsense. In the book, Fletcher expands on each of these and the role they have in helping humans innovate.

In essence, he says these four primal powers are driven by “narrative cognition,” the ability of our brain to think in story. Shakespeare may be the best example of how to think in story, he said.

Fletcher, who has an undergraduate degree in neuroscience and a PhD in literature, discusses in the book how Shakespeare’s innovations in storytelling have inspired innovators well beyond literature. He quotes people from Abraham Lincoln to Albert Einstein to Steve Jobs about the impact reading Shakespeare had on their lives and careers.

Many of Shakespeare’s characters are “exceptions to rules” rather than archetypes, which encourages people to think in new ways, Fletcher said.

What Shakespeare has helped these pioneers – and many other people – do is see stories in their own lives and imagine new ways of doing things and overcoming obstacles, he said.

That’s something AI can’t do, he said. AI collects a lot of data and then works out probable patterns, which is great if you have a lot of information.

“But what do you do in a totally new situation? Well, in a new situation you need to make a new plan. And that’s what story thinking can do that AI cannot,” he said.

The U.S. Army was so impressed with Fletcher’s program that it brought him in to help train soldiers in its Special Operations unit. After seeing it in action, the Army awarded Fletcher its Commendation Medal for his “groundbreaking research” that helped soldiers see the future faster, heal quicker from trauma and act wiser in life-and-death situations.

In the book, Fletcher gave an example of how one Army recruit used his primal intelligence to overcome obstacles in the most literal sense.

As part of its curriculum, Army Special Operations had a final test for recruits: an obstacle course of logs and ropes. The recruits were told they had to ring the bell at the end of the course before time expired in order to pass the test.

This particular recruit knew he couldn’t beat the clock. At the starting line, he thought of a new plan: he ran around the obstacle course, rather than through it, ringing the bell in record time.

While other military schools would have flunked him, Special Operations passed him based on his ingenuity in passing the test, Fletcher said. As the Army monitored his career after graduation, it found he outperformed many of his classmates on field missions.

The value of primal intelligence works in all walks of life, including business. While business often emphasizes management, Fletcher said primal intelligence shines when leadership is needed.

“Management is optimizing existing processes. But the main challenge of the future is not optimizing things that already work,” Fletcher said.

“The challenge of the future is figuring things out when we don’t know what works. That’s what leadership is all about, and that’s what story thinking is all about.”

In business and elsewhere, Fletcher said AI has a role. But it should not be seen as a replacement for human intelligence.

“Humans are able to say, this could work but it hasn’t been tried before. That’s what primal intelligence is all about,” he said.

“Computers and AI are only able to repeat things that have worked in the past or engage in magical thinking. That’s not going to work in many situations we face.”




