
AI Research

ByteDance in drive to recruit fresh robotics talent as AI remains TikTok owner’s priority

The more than 10 job vacancies recently posted on ByteDance’s website included roles for two directors, one responsible for robot products and the other for hardware-embedded AI models.

The Beijing-based unicorn also posted vacancies for roles involving “robot motion control algorithms”, “performance optimisation for embodied intelligence reasoning” and “robot multimodal model”, among others.

Many of those posts mentioned that the company had been working on a “new product”, with one vacancy specifically describing the product as a “next-generation general-purpose robot”.

All the job vacancies are for roles in Beijing and Shanghai under ByteDance’s Seed department, which oversees the company’s AI research and large language model development. The department was established in 2023, after OpenAI’s launch of ChatGPT in November 2022 sparked a global arms race in generative AI.

ByteDance did not immediately respond to a request for comment on Tuesday.




AI Research

Endangered languages AI tools developed by UH researchers


University of Hawaiʻi at Mānoa researchers have made a significant advance in studying how artificial intelligence (AI) understands endangered languages. This research could help communities document and maintain their languages, support language learning and make technology more accessible to speakers of minority languages.

The paper, by Kaiying Lin, a PhD graduate in linguistics from UH Mānoa, and Haopeng Zhang, an assistant professor in the Department of Information and Computer Sciences, introduces the first benchmark for evaluating large language models (AI systems that process and generate text) on low-resource Austronesian languages. The study focuses on three endangered Formosan languages spoken by Indigenous peoples of Taiwan: Atayal, Amis and Paiwan.

Using a new benchmark called FORMOSANBENCH, Lin and Zhang tested AI systems on tasks such as machine translation, automatic speech recognition and text summarization. The findings revealed a large gap between AI performance in widely spoken languages such as English, and these smaller, endangered languages. Even when AI models were given examples or fine-tuned with extra data, they struggled to perform well.
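Lin and Zhang have released their datasets and code publicly. Purely as a rough, hypothetical illustration of how a benchmark evaluation of this kind works, the sketch below scores a model's translations against reference translations using chrF, a character-level metric commonly used for low-resource machine translation. The sentences, file layout and setup are invented placeholders, not the actual FORMOSANBENCH data or code.

```python
# Illustrative sketch only: scoring machine-translation output against
# reference translations with chrF. The example sentences below are
# hypothetical and are not drawn from FORMOSANBENCH.
import sacrebleu

# Hypothetical model outputs for a low-resource translation test set.
hypotheses = [
    "The children are playing by the river.",
    "She wove a new blanket for the ceremony.",
]

# One reference stream: a gold translation for each hypothesis.
references = [
    [
        "The children play beside the river.",
        "She wove a new blanket for the ceremony.",
    ]
]

# chrF compares character n-grams, which tends to be more forgiving than
# word-level BLEU for morphologically rich, low-resource languages.
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"chrF: {chrf.score:.1f}")
```

In practice, a benchmark like this reports such scores across many models and tasks, which is how a performance gap between high-resource and endangered languages becomes measurable.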

“These results show that current AI systems are not yet capable of supporting low-resource languages,” Lin said.

Zhang added, “By highlighting these gaps, we hope to guide future development toward more inclusive technology that can help preserve endangered languages.”

The research team has made all datasets and code publicly available to encourage further work in this area. The preprint of the study is available online, and the study has been accepted to the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP) in Suzhou, China, an internationally recognized premier AI conference.

The Department of Information and Computer Sciences is housed in UH Mānoa’s College of Natural Sciences, and the Department of Linguistics is housed in UH Mānoa’s College of Arts, Languages & Letters.




AI Research

Researchers can accurately tell someone’s age using AI and just a bit of DNA

At the Hebrew University of Jerusalem, scientists created a new way to tell someone’s age using just a bit of DNA. This method uses a blood sample and a small part of your genetic code to give highly accurate results. It doesn’t rely on external features or medical history like other age tests often do. Even better, it stays accurate no matter your sex, weight, or smoking status.

Bracha Ochana and Daniel Nudelman led the team, guided by Professors Kaplan, Dor, and Shemer. They developed a tool called MAgeNet that uses artificial intelligence to study DNA methylation patterns. DNA methylation is a process that adds chemical tags to DNA as the body ages. By training deep learning networks on these patterns, they predicted age with just a 1.36-year error in people under 50.

How DNA Stores the Marks of Time

Time leaves invisible fingerprints on your cells. One of the most telling signs of age in your body is DNA methylation—the addition of methyl groups (CH₃) to your DNA. These chemical tags don’t change your genetic code, but they do affect how your genes behave. And over time, these tags build up in ways that mirror the passage of years.

450K/EPIC age-associated DNA methylation sites are often surrounded by additional CpGs correlated with age. (CREDIT: Cell Reports)

What makes the new method so effective is its focus. Instead of analyzing thousands of areas in the genome, MAgeNet zeroes in on just two short genomic regions. This tight focus, combined with high-resolution scanning at the single-molecule level, allows the AI to read the methylation patterns like a molecular clock. Professor Kaplan explains it simply: “The passage of time leaves measurable marks on our DNA. Our model decodes those marks with astonishing precision.”
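MAgeNet's actual architecture and training procedure are detailed in the Cell Reports paper. As a loose, hypothetical sketch of the general idea only, a neural network that regresses age from fragment-level methylation patterns at a small set of CpG sites, consider the toy example below. The layer sizes, site counts and data are invented for illustration and are not the published model.

```python
# Hypothetical sketch of the general approach (NOT the published MAgeNet
# architecture): a small neural network that regresses chronological age
# from fragment-level DNA methylation patterns at a handful of CpG sites.
import torch
import torch.nn as nn

N_CPG_SITES = 25           # assumed number of CpG sites in the two target regions
FRAGMENTS_PER_SAMPLE = 64  # assumed number of sequenced molecules per blood sample

class MethylationAgeRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Encode each fragment's binary methylation pattern (1 = methylated CpG).
        self.fragment_encoder = nn.Sequential(
            nn.Linear(N_CPG_SITES, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # Pool over fragments, then predict a single age value.
        self.head = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, fragments):
        # fragments: (batch, FRAGMENTS_PER_SAMPLE, N_CPG_SITES)
        encoded = self.fragment_encoder(fragments)   # (batch, fragments, 32)
        pooled = encoded.mean(dim=1)                 # average over molecules
        return self.head(pooled).squeeze(-1)         # (batch,) predicted age in years

# Toy training step on random data, purely to show the shape of the training loop.
model = MethylationAgeRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # mean absolute error, matching how age error is reported

x = torch.randint(0, 2, (8, FRAGMENTS_PER_SAMPLE, N_CPG_SITES)).float()
y = torch.rand(8) * 80  # hypothetical ages in years
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"toy MAE on random data: {loss.item():.2f} years")
```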

Small Sample, Big Insights

The study, recently published in Cell Reports, used blood samples from more than 300 healthy individuals. It also included data from a 10-year follow-up of the Jerusalem Perinatal Study, which tracks health information across lifetimes. That long-term data, led by Professor Hagit Hochner from the Faculty of Medicine, helped the team confirm that MAgeNet works not just in the short term but also across decades.

Importantly, the model’s accuracy held up no matter the person’s sex, body mass index, or smoking history—factors that often throw off similar tests. That consistency means the tool could be widely used in both clinical and non-clinical settings.



From Medicine to Crime Scenes

The medical uses are easy to imagine. Knowing someone’s true biological age can help doctors make better decisions about care, especially when signs of aging don’t match the number of candles on a birthday cake. Personalized treatment plans could become more effective if based on what’s happening at the cellular level, not just what appears on a chart.

But this breakthrough also has major potential in the world of forensic science. Law enforcement teams could one day use this method to estimate the age of a suspect based solely on a few cells left behind. That’s a big step forward from current forensic DNA tools, which are good at identifying a person but struggle with age.

“This gives us a new window into how aging works at the cellular level,” says Professor Dor. “It’s a powerful example of what happens when biology meets AI.”

A schematic view of targeted PCR sequencing following bisulfite conversion, facilitating concurrent mapping of multiple neighboring CpG sites at a depth >5,000×. (CREDIT: Cell Reports)

Ticking Clocks Inside Our Cells

As they worked with the data, the researchers noticed something else: DNA doesn’t just age randomly. Some changes happen in bursts. Others follow slow, steady patterns—almost like ticking clocks inside each cell. These new observations may help explain why people age differently, even when they’re the same age chronologically.

“It’s not just about knowing your age,” adds Professor Shemer. “It’s about understanding how your cells keep track of time, molecule by molecule.”

This could also impact the growing field of longevity research. Scientists are increasingly interested in how biological aging differs from the simple count of years lived. The ability to measure age so precisely from such a small DNA sample may become a key tool in developing future anti-aging therapies or drugs that slow down cellular wear and tear.

A deep neural network for age prediction from fragment-level targeted DNA methylation data. (CREDIT: Cell Reports)

Why This Research Changes Everything

The method created by the Hebrew University team marks a turning point in how we think about aging, identity, and health. In the past, DNA told us who we are. Now it can tell us how old we truly are—and possibly how long we’ll stay healthy. The implications stretch from hospital rooms to courtrooms.

As the world faces rising healthcare demands from aging populations, tools like MAgeNet offer a smarter, faster way to assess risk, track longevity, and understand what aging really means. It’s no longer just a number on your ID.

Thanks to AI and a deep dive into the chemistry of life, age has become something you can measure with stunning accuracy, from the inside out.






AI Research

Anthropic’s $1.5-billion settlement signals new era for AI and artists

Chatbot builder Anthropic agreed to pay $1.5 billion to authors in a landmark copyright settlement that could redefine how artificial intelligence companies compensate creators.

The San Francisco-based startup is ready to pay authors and publishers to settle a lawsuit that accused the company of illegally using their work to train its chatbot.

Anthropic developed an AI assistant named Claude that can generate text, computer code and more. Writers, artists and other creative professionals have raised concerns that Anthropic and other tech companies are using their work to train AI systems without permission and without fair compensation.

As part of the settlement, which the judge still needs to approve, Anthropic agreed to pay authors $3,000 per work for an estimated 500,000 books. It is the largest known settlement in a copyright case, signaling to other tech companies facing copyright infringement allegations that they, too, may eventually have to pay rights holders.

Meta and OpenAI, the maker of ChatGPT, have also been sued over alleged copyright infringement. Walt Disney Co. and Universal Pictures have sued AI company Midjourney, which the studios allege trained its image generation models on their copyrighted materials.

“It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners,” said Justin Nelson, a lawyer for the authors, in a statement. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”

Last year, authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson sued Anthropic, alleging that the company committed “large-scale theft” and trained its chatbot on pirated copies of copyrighted books.

U.S. District Judge William Alsup of San Francisco ruled in June that Anthropic’s use of the books to train the AI models constituted “fair use,” so it wasn’t illegal. But the judge also ruled that the startup had improperly downloaded millions of books through online libraries.

Fair use is a legal doctrine in U.S. copyright law that allows for the limited use of copyrighted materials without permission in certain cases, such as teaching, criticism and news reporting. AI companies have pointed to that doctrine as a defense when sued over alleged copyright violations.

Anthropic, founded by former OpenAI employees and backed by Amazon, pirated at least 7 million books from Books3, Library Genesis and Pirate Library Mirror, online libraries containing unauthorized copies of copyrighted books, to train its software, according to the judge.

It also bought millions of print copies in bulk, stripped the books’ bindings, cut their pages and scanned them into digital, machine-readable form, a practice Alsup found to be within the bounds of fair use.

In a subsequent order, Alsup pointed to potential damages for the copyright owners of books downloaded from the shadow libraries LibGen and PiLiMi by Anthropic.

Although the settlement is massive and unprecedented, it could have been much worse. If Anthropic had been charged the maximum penalty for each of the millions of works it used to train its AI, some calculations suggest the bill could have exceeded $1 trillion.

Anthropic disagreed with the ruling and didn’t admit wrongdoing.

“Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims,” said Aparna Sridhar, deputy general counsel for Anthropic, in a statement. “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.”

The Anthropic dispute with authors is one of many cases in which artists and other content creators are pressing the companies behind generative AI to compensate them for the use of online content to train AI systems.

Training involves feeding enormous quantities of data — including social media posts, photos, music, computer code, video and more — to AI models so they can discern patterns of language, imagery, sound and conversation that they can then mimic.

Some tech companies have prevailed in copyright lawsuits filed against them.

In June, a judge dismissed a lawsuit authors filed against Facebook parent company Meta, which also developed an AI assistant, alleging that the company stole their work to train its AI systems. U.S. District Judge Vince Chhabria noted that the lawsuit was tossed because the plaintiffs “made the wrong arguments,” but the ruling didn’t “stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”

Trade groups representing publishers praised the Anthropic settlement on Friday, noting it sends a big signal to tech companies that are developing powerful artificial intelligence tools.

“Beyond the monetary terms, the proposed settlement provides enormous value in sending the message that Artificial Intelligence companies cannot unlawfully acquire content from shadow libraries or other pirate sources as the building blocks for their models,” said Maria Pallante, president and chief executive of the Association of American Publishers in a statement.

The Associated Press contributed to this report.


