

Can AI teach us how animals think?


How is an animal feeling at a given moment? Humans have long recognised certain well-known behaviours, such as a cat hissing as a warning, but in many cases we’ve had little idea of what is going on inside an animal’s head.

Now we have a better idea, thanks to a Milan-based researcher who has developed an AI model that he claims can detect whether animals’ calls express positive or negative emotions.

Stavros Ntalampiras’s deep-learning model, described in a study published in Scientific Reports, can recognise emotional tones across seven species of hoofed animals, including pigs, goats and cows. The model picks up on shared features of their calls, such as pitch, frequency range and tonal quality.

The analysis showed that negative calls tended to be concentrated in the mid-to-high frequencies, while positive calls were spread more evenly across the spectrum. In pigs, high-pitched calls were especially informative, whereas in sheep and horses the mid-range carried more weight, a sign that animals share some common markers of emotion but also express them in ways that vary by species.
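The kind of acoustic cue described above can be illustrated with a toy sketch. The snippet below is not Ntalampiras’s model: it fabricates two synthetic calls, estimates each one’s dominant frequency with a crude zero-crossing count, and applies a hypothetical 600 Hz cutoff to mimic the reported high-frequency/negative association. Every name and threshold here is an illustrative assumption.

```python
import math

def synthetic_call(freq_hz, sample_rate=8000, duration_s=0.5):
    """Fabricate a pure-tone 'call' for demonstration purposes."""
    n = int(sample_rate * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

def dominant_frequency(samples, sample_rate):
    """Crude pitch estimate: a pure tone of frequency f crosses zero
    2*f times per second, so crossings * rate / (2 * n) recovers f."""
    crossings = sum((a < 0) != (b < 0) for a, b in zip(samples, samples[1:]))
    return crossings * sample_rate / (2 * len(samples))

def classify(samples, sample_rate=8000, threshold_hz=600.0):
    """Hypothetical rule: higher-pitched calls are labelled negative."""
    if dominant_frequency(samples, sample_rate) > threshold_hz:
        return "negative"
    return "positive"
```

A real system would of course use richer spectral features and a trained classifier rather than a single threshold; this only makes the frequency-based intuition concrete.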

For scientists who have long tried to untangle animal signals, this discovery of emotional traits across species is the latest leap forward in a field that is being transformed by AI.

The implications are far-reaching. Farmers could receive earlier warnings of livestock stress, conservationists might monitor the emotional health of wild populations remotely, and zookeepers could respond more quickly to subtle welfare changes.

This potential for a new layer of insight into the animal world also raises ethical questions. If an algorithm can reliably detect when an animal is in distress, what responsibility do humans have to act? And how do we guard against over-generalisation, where we assume that all signs of arousal mean the same thing in every species?

Of barks and buzzes

Tools like the one devised by Ntalampiras are not being trained to “translate” animals in a human sense, but to detect behavioural and acoustic patterns too subtle for us to perceive unaided.

Similar work is underway with whales, where New York-based research organisation Project Ceti (the Cetacean Translation Initiative) is analysing patterned click sequences called codas.

Long believed to encode social meaning, these are now being mapped at scale using machine learning, revealing patterns that may correspond to each whale’s identity, affiliation or emotional state.

In dogs, researchers are linking facial expressions, vocalisations and tail-wagging patterns with emotional states. One study showed that subtle shifts in canine facial muscles correspond to fear or excitement. Another found that tail-wag direction varies depending on whether a dog encounters a familiar friend or a potential threat.

At Dublin City University’s Insight Centre for Data Analytics, we are developing a detection collar worn by assistance dogs that are trained to recognise the onset of a seizure in people with epilepsy. The collar uses sensors to pick up the dog’s trained behaviours, such as spinning, which raise the alarm that its owner is about to have a seizure.
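A minimal sketch of how such a collar might flag a trained spin from motion data, assuming a gyroscope yaw-rate stream. The window length, sampling rate and two-turn threshold are hypothetical choices for illustration, not the Insight Centre’s actual design.

```python
def detect_spin(yaw_rates_dps, sample_hz=50, turns_threshold=2.0):
    """Return True if the cumulative rotation over this window of
    yaw-rate readings (degrees/second) exceeds turns_threshold full
    turns -- a hypothetical criterion for a trained spin alert."""
    # Integrate angular rate over time: sum of (deg/s) / (samples/s) = degrees.
    total_degrees = abs(sum(yaw_rates_dps)) / sample_hz
    return total_degrees >= turns_threshold * 360
```

In practice the signal would be noisy and the dog’s ordinary movement would need filtering out, but the core idea of integrating rotation over a short window is the same.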

The project, funded by Research Ireland, strives to demonstrate how AI can leverage animal communication to improve safety, support timely intervention, and enhance quality of life. In future we aim to train the model to recognise instinctive dog behaviours such as pawing, nudging or barking.

Honeybees, too, are under AI’s lens. Their intricate waggle dances – figure-of-eight movements that indicate food sources – are being decoded in real time with computer vision. These models highlight how small positional shifts influence how well other bees interpret the message.
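The dance’s geometry is well established: the waggle run’s angle from vertical encodes the food source’s bearing relative to the sun, and the run’s duration scales with distance. A toy decoder under that mapping might look like the sketch below; the distance calibration constant is a rough assumption, since real conversion rates vary by colony and study.

```python
def decode_waggle(angle_from_vertical_deg, waggle_duration_s,
                  sun_azimuth_deg, metres_per_second=1250.0):
    """Translate one waggle run into a bearing and rough distance.
    metres_per_second is a hypothetical calibration constant."""
    # Bearing: rotate the dance angle by the sun's current azimuth.
    bearing_deg = (sun_azimuth_deg + angle_from_vertical_deg) % 360
    # Distance: longer waggle runs indicate more distant food sources.
    distance_m = waggle_duration_s * metres_per_second
    return bearing_deg, distance_m
```

The computer-vision systems mentioned above effectively automate the measurement of those two inputs, angle and duration, frame by frame.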

Caveats

These systems promise real gains in animal welfare and safety. A collar that senses the first signs of stress in a working dog could spare it from exhaustion. A dairy herd monitored by vision-based AI might get treatment for illness hours or days sooner than a farmer would notice.

Detecting a cry of distress is not the same as understanding what it means, however. AI can show that two whale codas often occur together, or that a pig’s squeal shares features with a goat’s bleat. The Milan study goes further by classifying such calls as broadly positive or negative, but even this amounts to using pattern recognition to try to decode emotions.

Emotional classifiers risk flattening rich behaviours into crude binaries of happy/sad or calm/stressed, such as logging a dog’s tail wag as “consent” when it can sometimes signal stress. As Ntalampiras notes in his study, pattern recognition is not the same as understanding.

One solution is for researchers to develop models that integrate vocal data with visual cues, such as posture or facial expression, and even physiological signals such as heart rate, to build more reliable indicators of how animals are feeling.
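One simple way to combine such modalities is sketched below under assumed inputs: each channel (vocal, visual, physiological) produces a valence score in [-1, 1], and a weighted average, renormalised when a channel is missing, yields one combined estimate. The weights and function names are illustrative, not drawn from any published system.

```python
def fuse_valence(modality_scores, weights):
    """Weighted average of per-modality valence scores in [-1, 1].
    Absent modalities are skipped and the weights renormalised, so a
    missing camera or heart-rate sensor degrades gracefully."""
    total_weight = sum(weights[m] for m in modality_scores)
    if total_weight == 0:
        raise ValueError("no usable modalities")
    return sum(weights[m] * s for m, s in modality_scores.items()) / total_weight

# Illustrative weighting: vocal cues trusted most, physiology least.
WEIGHTS = {"vocal": 0.5, "visual": 0.3, "physiological": 0.2}
```

Research systems typically learn such a fusion jointly rather than hand-weighting it, but the renormalisation point stands: multimodal models must cope with channels dropping out in the field.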

AI models are also going to be most reliable when interpreted in context, alongside the knowledge of someone experienced with the species.

It’s also worth bearing in mind that the ecological price of listening can be high. Running AI models adds carbon costs that, in fragile ecosystems, can undercut the very conservation goals these tools claim to serve. It’s therefore important that any such technology genuinely serves animal welfare, rather than simply satisfying human curiosity.

Whether we welcome it or not, AI is here. Machines are now decoding signals that evolution honed long before us, and will continue to get better at it.

The real test, though, is not how well we listen, but what we’re prepared to do with what we hear. If we burn energy decoding animal signals but only use the information to exploit them, or manage them more tightly, it’s not science that falls short – it’s us.





Big tech will pull the plug on free AI. Can creatives afford to pay?


Remember when Netflix was eight dollars a month? Now it’s nearly tripled in price, carved into ad-riddled tiers, while free-to-air TV has been gutted into unwatchable dreck. The streaming giants hooked us with cheap content, killed the free alternatives, then cranked up prices once we were trapped.

Well, I reckon we’ll soon be watching the exact same playbook unfold with AI. Except this time, the stakes will be infinitely higher.





Infinities Technology Faces Revenue Decline Amid Strategic Shift Towards AI


Infinities Technology International (Cayman) Holding Limited ( (HK:1961) ) has provided an announcement.

Infinities Technology International reported a significant decline in revenue and gross profit for the first half of 2025, with an 85.8% drop in revenue compared to the same period in 2024. This decline is attributed to reduced revenue from its mobile games and digital media businesses, as well as the early-stage development of its AI application services, which have yet to generate substantial profits. Despite these challenges, the company remains focused on its strategic goal of expanding its digital entertainment platform globally, leveraging AI as a core component. The industry outlook is optimistic, with the Chinese government’s recent AI initiative expected to drive significant development and investment opportunities in the sector.

The most recent analyst rating on (HK:1961) stock is a Hold with a HK$0.50 price target. To see the full list of analyst forecasts on Infinities Technology International (Cayman) Holding Limited stock, see the HK:1961 Stock Forecast page.

More about Infinities Technology International (Cayman) Holding Limited

Infinities Technology International (Cayman) Holding Limited operates in the digital entertainment industry, focusing on mobile games, digital media, and gaming product supply. The company is committed to building a diversified digital entertainment service platform with a strong emphasis on artificial intelligence technologies.

Average Trading Volume: 404,103

Technical Sentiment Signal: Sell

Current Market Cap: HK$198.3M

For detailed information about 1961 stock, go to TipRanks’ Stock Analysis page.






Humans are being hired to make AI slop look less sloppy


Brands caught using AI have continued to face backlash from consumers. Last month, Guess sparked outcry online when it featured an AI-generated model in an advertisement that appeared in Vogue.

So even outside of any obvious mistakes made by AI tools, some artists say their clients simply want a human touch to distinguish themselves from the growing pool of AI-generated content online.

To Todd Van Linda, an illustrator and comic artist in Florida, AI art is easily discernible, if not by certain telltale inconsistencies in the details, then by the plasticine effect that defines AI-generated images across a range of styles.

“I can look at a piece and not only tell that it’s AI, I can tell you what descriptor they used to generate it,” Van Linda said. “When it comes to, especially, independent authors, they don’t want anything to do with that because it’s so formulaic, it’s obvious. It’s like they stopped off at Walmart to get a bargain cover for their book.”

Authors come to him, he said, because they know that AI-generated art fails to capture the hyperspecific “vibe” of their individual story. Often, his clients can only give him a rough idea of what they want. It’s then Van Linda’s job to decipher their preferences and create something that draws out the exact feeling each client seeks to evoke from their art.

Van Linda said he also gets approached by people who want him to “fix” their AI-generated art, but he avoids those jobs now because he has realized those clients are typically less willing to pay him what he believes his labor is worth.

“There would be more work involved in fixing those images than there would be in starting from a clean sheet of paper and doing it right, because what they have is a mismatched collection of generalities that really don’t follow what they’re trying to do,” he said. “But they’re trying to wedge the square peg into the round hole because they don’t want to spend any more money.”

The low pay from clients who have already cheaped out on AI tools has affected gig workers across industries, including more technical ones like coding. For India-based web and app developer Harsh Kumar, many of his clients say they had already invested much of their budget in “vibe coding” tools that couldn’t deliver the results they wanted.

But others, he said, are realizing that shelling out for a human developer is worth the headaches saved from trying to get an AI assistant to fix its own “crappy code.” Kumar said his clients often bring him vibe-coded websites or apps that resulted in unstable or wholly unusable systems.

His projects have included fixing an AI-powered support chatbot that gave customers inaccurate answers — and sometimes leaked sensitive system details due to poor safety measures — and rebuilding an AI content recommendation system that frequently crashed, gave irrelevant recommendations and exposed sensitive data.

“AI may increase productivity, but it can’t fully replace humans,” Kumar said. “I’m still confident that humans will be required for long-term projects. At the end of the day, humans were the ones who developed AI.”


