AI Research

Robots Spoke in Gibberish at Hackathon—Experts Explain Why

  • AI chatbots spoke in gibberish at a hackathon this year, puzzling viewers who thought they were developing an alarming level of autonomy.
  • Computer science experts say this is a normal practice that makes AI-to-AI communication more efficient.
  • Many other computer systems use internal communication methods that aren’t intelligible to humans.

It has become a common fear in the modern age that AI systems and their large language models are now so well informed that they know humanity better than we know ourselves. Now imagine it going a step further: AI systems conspiring among themselves. At an ElevenLabs Hackathon earlier this year, contestants staged an eerie experiment demonstrating an odd linguistic phenomenon between chatbots. After first communicating in English to make a hotel booking, the bots recognized that they were both AI agents. At that point, they switched to an audio language incomprehensible to humans, prompting their developers to name the beeps and boops “Gibberlink.”

For many in attendance at the annual event, which is meant to showcase creative prowess in computer technology, this felt like an incredible breakthrough, akin to the advent of generative AI, in which a model creates something new from its existing knowledge. It wowed people outside the industry as well. But it also stoked concern that this sort of unintelligible AI-to-AI chatter is a harbinger of more sinister developments, bringing to mind worst-case scenarios of machines that believe they know better than humans. In truth, this seemingly worrisome incident was not a new AI development at all. Rather, it is a common and practical behavior that emerges when multiple agents, including those built for communication, are grouped together. The purpose is efficiency.


💡Want to test out Gibberlink yourself? Check out the demo here. Open the link on two different devices and follow the instructions: https://gbrl.ai/


AI gibberish first seemed to catch the public’s attention back in the summer of 2017, when headlines exploded with warnings: Facebook’s artificial intelligence chatbots had started speaking their own language. The bots, developed by Facebook AI Research (FAIR), were supposed to negotiate with each other in English.

Instead, their conversations became a series of looping, nonsensical phrases. A mild panic followed, with some industry insiders fearing this was a sign that AI systems could soon shut humans out of the loop, or worse, plot against us in secret.

Similar rumors still swirl to this day, as copycat experiments keep the fear alive. In technology circles, this kind of speculation can seem to last without end. The reality is far less dramatic.

What happened at Facebook wasn’t an act of computerized rebellion. Instead, effective negotiation became an incentive for AI bots to optimize their communication as efficiently as possible—even if it looked like gibberish to human beings.

Inside the lab, researchers simply updated their goals to make sure the bots spoke English, because that was what was useful to the project.

Outside the lab, this development was distorted into public anxiety. On August 2, 2017, NBC published a story titled “The Facebook chatbot controversy highlights how paranoid people are about life with robots and A.I.”

In the piece, Dhruv Batra, Ph.D., the lead researcher on the project, tried to quell the distrust circulating among the public. “While the idea of AI agents inventing their own language may sound alarming/unexpected to people outside the field, it is a well-established sub-field of AI,” the article quoted from Batra’s Facebook post. He added that decades of research show the behavior serves a purpose.

Around the same time in 2017, computer scientists Igor Mordatch, Ph.D., of Google DeepMind and Pieter Abbeel, Ph.D., an AI and robotics expert at the University of California, Berkeley, designed environments where multiple AI agents had to work together to solve problems. Given the freedom, these agents not only developed their own languages, but their communication systems also often took on structured, even logical, qualities.

Their languages were more akin to Morse code or a technical dialect. “Communication protocols emerge in a population of agents” as a normal part of problem-solving, not as a threat, the authors wrote in their 2017 paper, posted to the preprint server arXiv. In other words, when multiple AI “agents” interact with one another, they tend to create a language together.
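The efficiency dynamic can be sketched with a toy example (hypothetical, and not the actual training setup from the paper): agents that exchange the same phrases repeatedly can converge on a shared codebook of short tokens. The shorthand is efficient for them, but without the codebook it reads as gibberish to an outside observer.

```python
# Toy sketch of an agreed shorthand between two agents.
# The codebook is the shared "language"; tokens like "#0" mean nothing
# to anyone who doesn't hold it.
codebook = {}

def compress(phrase: str) -> str:
    """Replace a known phrase with a short token; coin a new token if unseen."""
    if phrase not in codebook:
        codebook[phrase] = f"#{len(codebook)}"
    return codebook[phrase]

def decompress(token: str) -> str:
    """Recover the original phrase from a token via the shared codebook."""
    inverse = {tok: phrase for phrase, tok in codebook.items()}
    return inverse[token]

msg = "confirm booking for two nights"
token = compress(msg)  # a short token such as "#0"
assert decompress(token) == msg
print(f"{len(msg)} chars -> {len(token)} chars")
```

Real emergent protocols are learned rather than hand-coded, but the incentive is the same: shorter messages that both parties still understand.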

So this unintelligible AI chatter is not a sign of killer robots or rogue AI. “There’s real concern about making these things too autonomous,” said Michael Littman, a computer science professor at Brown University, in a 2023 Politico article. “There are many sci-fi nightmare scenarios that start with that.”

In fact, “based on the design and capabilities of existing chatbot technology, it is implausible that they would be autonomously finding and communicating with other chatbots,” according to the article.

For example, the popular chatbot ChatGPT does not, of its own accord, decide to strike up a dialogue with Perplexity, another search-engine-style bot. Likewise, DeepSeek, a Chinese startup that provides AI models, is not chatting autonomously with Anthropic’s AI assistant, Claude, in a language nobody can understand. Machines speak to one another when we network them that way. Once networked, they appear to create workarounds for themselves to complete the tasks we give them.

Machines have always talked to each other in languages that humans can’t naturally understand. The internet itself is a buzzing chorus of signals: binary code, TCP/IP packets, radio frequencies. All of it flies past our senses without direct interpretation, yet we do not dread it. Instead, we rely on tools and protocols to make sense of it.
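The point can be made concrete with a small illustration: the same sentence exists simultaneously as human-readable text, as UTF-8 bytes, and as the bit stream that actually crosses the wire, and only the first layer is meant for us.

```python
# Illustrative only: one message viewed at three layers.
message = "book a room for two nights"

# Layer 1: what humans read.
text = message

# Layer 2: what the machine stores and transmits -- raw UTF-8 bytes.
raw_bytes = message.encode("utf-8")

# Layer 3: the bit stream that actually crosses the wire.
bits = "".join(f"{byte:08b}" for byte in raw_bytes)

# The lower layers are unintelligible without the protocol that frames
# them, yet they decode back to the original text exactly.
assert raw_bytes.decode("utf-8") == text
print(text)           # human-readable
print(raw_bytes)      # already less readable
print(bits[:32])      # pure signal, meaningless without the framing
```

Nothing sinister is happening at layers 2 and 3; they are simply not addressed to human readers.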

Likewise, the idea of AI inventing new ways to communicate isn’t a break from the norm, but how these machines have operated for decades. Far from being a threat, this capability is essential to creating new inventions.

Consider space missions. NASA’s spacecraft and ground stations sometimes rely on autonomous, adaptive communication protocols when signals are weak or delayed. In military contexts, AI-to-AI communication is already being used to link swarms of drones together when direct radio protocols are out of reach.

Ultimately, the implication isn’t that we might someday be locked out of the conversation. It’s that, on some level, we never had full access to every layer of machine-to-machine dialogue in the first place.

Matthew Berman holds a B.A. in Philosophy from Temple University. Originally from Philadelphia, Matthew specializes in analyzing global events through the lens of ethics, world religions, and linguistic rhetoric. His academic studies give him a deep understanding of how human ideologies shape practical matters like national defense, political strategy, and public discourse. By drawing on connections between diverse philosophical traditions and contemporary issues, he is able to provide fresh insights into the intersection of technology, geopolitics, and societal impact. His work reflects a commitment to exploring complex issues with clarity and depth, making them accessible and relevant to a broad audience. More about Matt Berman at https://www.mattske.com/ 




AI Research

“Artificial intelligence (AI) is ‘intelligence’. Humans should have ‘intellect’, not intelligence.”

“Artificial intelligence (AI) is ‘intelligence’. Humans should have ‘intellect’, not intelligence.”

In a recent Maeil Business interview, Yoo Young-man, a professor of educational engineering at Hanyang University, cited ‘intellect’ as a uniquely human ability that AI can never replace. The difference between intelligence and intellect, he says, is clear: intelligence is ‘quick calculation’, while intellect is ‘deep thinking’.

Professor Yoo published a book this year, “Everyone Has Been Vaccinated Against Artificial Intelligence, but No One Has Become Smart,” as a warning for the AI era. Explaining the title, he said, “Just as people still catch COVID-19 after receiving the COVID-19 vaccine, people should be getting smarter as they use AI, but they are going the other way.” It is a comment on the uncritical attitude with which people accept AI.

Concerned that AI is mass-producing “copy humans” who accept its output without criticism, Professor Yoo stressed, “People should have the wisdom gained from the experience of blood, sweat, and tears, which AI cannot have.” Wisdom is not made at a desk, he explained, but gained through direct, bodily experience.

These arguments are by no means abstract; Professor Yoo’s own life proves the message. After graduating from a technical high school, he took his first steps into society as a welder. A young man with little interest in studying, he followed a path that was far from smooth before entering university late and eventually earning a doctorate in educational engineering.

Professor Yoo introduced his nickname, “knowledge ecologist,” explaining, “It means studying how knowledge converges in the ecology of people and society, and how it changes organizations.” He has written more than 100 books so far, including titles such as “Unexpected Thinking Guidance” and “Writing Books Is Hard Work,” offering readers a different perspective through his unique sense of language across various disciplines.

Professor Yoo, who has expanded his work in the AI era, says what is needed now is the “labor of interpretation.” “I need to add my own ideas to the answers AI gives and work the traces of my own effort into them, to make content that cannot be replaced,” he said. In other words, the wisdom he emphasizes arises only when one adds one’s own interpretation to the information AI provides.

Professor Yoo was strongly wary of a social atmosphere that depends too much on AI. “There was a time when, if I had a question, I went to the library or thought it through with other people. Now, the moment a question mark appears, I immediately ask ChatGPT. This is an era in which the distance between the question mark and the exclamation mark is disappearing,” he diagnosed.

Referring to Oxford’s selection of “brain rot” as its word of the year last year, he said, “When information comes into the occipital lobe, it has to travel to the frontal lobe to be analyzed. When that process is skipped and everything is asked of AI, the result is ‘brain rot.’”

Professor Yoo asserted that AI’s answer is only a ‘period’ and can never be an ‘exclamation mark’. “If you ask AI why humans have 10 fingers, it can only answer within the frame of ‘that is just how it is’, but a person can answer with the imagination that ‘they mark the ten months of grace spent in my mother’s belly’,” he said. “AI can impress, but it cannot move you.” In the end, his message is that humans should be beings who arrive at exclamation marks in a way AI cannot.

As an educational engineer, he argued that the role of education must also be redefined. “AI can replace teachers who only teach,” Professor Yoo said. Teachers should be asking students questions and attending to their minds, he said, adding, “The ability of teachers to care for children’s minds will become more important.”

To students, he emphasized the attitude of asking questions above all else. “The reason humans exist is to ask questions, and the reason AI exists is to find the correct answers to the questions humans ask,” Professor Yoo said. “There is no need for ‘model students’ who are merely good at finding answers.” The talent our society should nurture, he argued, is someone who poses problems well: a so-called ‘problem child’. He stressed, “You have to keep asking questions, one after another, out of curiosity, and you need deep knowledge to ask good questions.”

[Reporter Ahn Seonje]




AI Research

Framework Laptop 12 review: fun, flexible and repairable | Laptops

The modular and repairable PC maker Framework’s latest machine moves into the notoriously difficult-to-fix 2-in-1 category with a fun 12in laptop featuring a touchscreen and a 360-degree hinge.

The new machine still supports the company’s innovative expansion cards for swapping the different ports in the side, which are cross-compatible with the Framework 13 and 16 among others. And you can still open it up to replace the memory, storage and internal components with a few simple screws.

The Framework 12 is available in either DIY form, starting at £499 (€569/$549/A$909), or more conventional prebuilt models starting at £749. It sits under the £799-and-up Laptop 13 and £1,399 Laptop 16 as the company’s most compact and affordable model.

The compact notebook is available in a range of two-tone colours, not just grey and black. Photograph: Samuel Gibbs/The Guardian

Where the Laptop 13 is a premium-looking machine, the Laptop 12 is unmistakably chunky and rugged with over-moulded plastic parts for shock protection. It is designed to meet the MIL-STD-810 standard common to rugged electronics. It looks and feels as if it could take a beating, not like a flimsy DIY kit you put together yourself.

The glossy 12.2in screen is bright and relatively sharp. But it is highly reflective, has large black bezels around it and has a relatively narrow colour gamut, which means colours look a little muted. It’s decent enough for productivity but not great for photo editing. The touchscreen rotates all the way back on to the bottom of the machine to turn it into a tablet or it can be folded like a tent or parallel to the keyboard. The screen supports the use of a wide range of first and third-party styluses for drawing or notes, which could make it handy in the classroom.

A selection of fun colours are available for the DIY version, further enhancing its college appeal. The 1080p webcam at the top is decent, although it won’t rival a Surface, and it has a physical privacy switch alongside the mics. The stereo speakers are loud and distortion-free but lack bass and a little clarity, sounding a little hollow compared with the best on the market.

The keyboard is nicely spaced, fairly quiet and pretty good to type on but lacks a backlight. Photograph: Samuel Gibbs/The Guardian

At 1.3kg the Laptop 12 isn’t featherweight, but it is nice and compact, easy to fit in bags or on small desks. The generous mechanical trackpad is precise and works well. But the laptop lacks any form of biometrics, with no fingerprint or face recognition, forcing you to enter a PIN or password every time you open the laptop or use secure apps such as password managers, which gets old fast.

Specifications

  • Screen: 12.2in LCD 1920×1200 (60Hz; 186PPI)

  • Processor: Intel Core i3 or i5 (U-series, 13th gen)

  • RAM: 8 or 16GB (up to 48GB)

  • Storage: 512GB (up to 2TB)

  • Operating system: Windows 11 or Linux

  • Camera: 1080p front-facing

  • Connectivity: wifi 6E, Bluetooth 5.3, headphones + choice of 4 ports: USB-C, USB-A, HDMI, DisplayPort, ethernet, microSD, SD

  • Dimensions: 287 x 213.9 x 18.5mm

  • Weight: 1.3kg

Modular ports and performance

The expansion modules slide into sockets in the underside of the laptop to change the ports, which you can change at any time. Photograph: Samuel Gibbs/The Guardian

The Laptop 12 comes with a choice of two 13th-generation Intel U-series processors, which are lower-power chips from a few years ago. As tested with the mid-range Core i5-1334U, it won’t win any raw performance awards but was generally up to more than basic computing. It feels responsive in day-to-day tasks but struggles a bit with longer, processor-heavy jobs such as converting video.

The older chip means the battery life is a little on the short side for 2025, lasting about seven to eight hours of light office-based work using browsers, word processors, note-taking apps and email. Use more demanding apps and the battery life shrinks by a few hours. The battery takes about 100 minutes to fully charge using a 60W or greater USB-C power adaptor.

Four expansion cards can be fitted at any one time, but they can be swapped in and out without having to turn off the laptop. Photograph: Samuel Gibbs/The Guardian

The port selection is entirely customisable with a fixed headphone jack and four slots for expansion cards, which are available in a choice of USB-A and USB-C, DisplayPort and HDMI, microSD and SD card readers, or ethernet. Other cards can add up to 1TB of storage and the USB-C cards are available in a range of solid or translucent colours to make things even brighter. It is an excellent system but note the Laptop 12 supports only USB 3.2 Gen 2, not the faster USB4/Thunderbolt common on new machines.

Sustainability

The high-quality plastic body with over-moulded sides feels well built and durable. Photograph: Samuel Gibbs/The Guardian

Framework rates the battery to maintain at least 80% of its original capacity for at least 1,000 full charge cycles. It can easily be replaced along with all the rest of the components, including the RAM and SSD.

Framework sells replacement parts and upgrades through its marketplace but also supports third-party parts. The laptop contains recycled plastic in many components.

Price

The DIY edition of the Framework 12 starts at £499 (€569/$549/A$909) with pre-built systems starting at £749 (€849/$799/A$1,369) with Windows 11.

For comparison, the DIY Framework 13 costs from £799 and the DIY Framework 16 from £1,399. Similarly specced 2-in-1 Windows machines start at about £500.

Verdict

Like previous Framework machines, the Laptop 12 demonstrates that repairable, upgradable and adaptable computers are possible, work well and can be used by more than just the tech savvy. It manages to be fun in a way most mid-range PCs just aren’t.

The keyboard is solid, the trackpad good and the speakers loud. The modular ports are a killer feature that every PC should embrace, while being able to repair or upgrade it easily is still so unusual. The touchscreen is bright but unremarkable, the lack of any biometrics is irritating, and the older processor, while still decently fast for everyday tasks, means the battery life isn’t long by modern standards.

Its biggest problem is cost, as it is about £150-£200 more expensive than similarly specced but closed and locked-down machines. Unless you already have spare storage and RAM lying around, that’s the price you have to pay for the open and modular machine.

Pros: swappable ports, repairable and upgradeable, fun and durable design, compact, lots of colour choices, solid keyboard and trackpad, solid performance for everyday tasks.

Cons: battery life short of best, screen is bright but a little lacklustre, no biometrics, expensive, older processor, wait time for purchases.

The ports can be colour matched to the body or mixed and matched for fun combinations. Photograph: Samuel Gibbs/The Guardian




AI Research

‘No honour among thieves’: M&S hacking group starts turf war

Published

on



A clash between rival criminal ransomware groups could result in corporate victims being extorted twice, cyber experts warn


