AI Research

Google intros EmbeddingGemma for on-device AI

With EmbeddingGemma, Google has introduced a multilingual text embedding model designed to run directly on mobile phones, laptops, and other edge devices, targeting mobile-first generative AI.

Unveiled September 4, EmbeddingGemma is a 308-million-parameter model that lets developers build applications using techniques such as retrieval-augmented generation (RAG) and semantic search that run directly on the target hardware, Google explained. Based on the lightweight Gemma 3 architecture, EmbeddingGemma is trained on more than 100 languages and is small enough to run in less than 200MB of RAM with quantization. It offers customizable output dimensions, from 768 down to 128 via Matryoshka representation learning, and a 2K-token context window.

EmbeddingGemma empowers developers to build on-device, flexible, privacy-centric applications, according to Google. Model weights for EmbeddingGemma can be downloaded from Hugging Face, Kaggle, and Vertex AI. By working with the Gemma 3n model, EmbeddingGemma can unlock new use cases for mobile RAG pipelines, semantic search, and more, Google said. EmbeddingGemma works with tools such as sentence-transformers, llama.cpp, MLX, Ollama, LiteRT, transformers.js, LMStudio, Weaviate, Cloudflare, LlamaIndex, and LangChain. Documentation for EmbeddingGemma can be found at ai.google.dev.
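Matryoshka representation learning is what makes the 768-to-128 dimension range possible: the leading coordinates of the embedding carry most of the signal, so a vector can be truncated to its prefix and renormalized. A minimal sketch of that truncation step, using toy 8-dimensional vectors in place of EmbeddingGemma's real 768-dimensional output (the numbers here are illustrative, not model output):

```python
import math

def truncate_embedding(vec, dims):
    """Keep the first `dims` coordinates and L2-renormalize,
    as Matryoshka-style representations allow."""
    head = vec[:dims]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

def cosine(a, b):
    # Both inputs are unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

# Toy 8-dim "embeddings" standing in for a document and a query.
doc = truncate_embedding([0.9, 0.1, 0.3, 0.2, 0.05, 0.0, 0.1, 0.4], 4)
query = truncate_embedding([0.8, 0.2, 0.25, 0.3, 0.1, 0.0, 0.0, 0.5], 4)
print(round(cosine(doc, query), 3))
```

The payoff on-device is that a semantic-search index over truncated vectors is proportionally smaller and faster to scan, at a modest cost in retrieval quality.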




AI Research

Kennesaw State secures NSF grants to build community of AI educators nationwide


KENNESAW, Ga. |
Sep 12, 2025


The International Data Corporation projects that artificial intelligence will add
$19.9 trillion to the global economy by 2030, yet educators are still defining how
students should learn to use the technology responsibly.

To better equip AI educators and to foster a sense of community among those in the
field, Kennesaw State University Department Chair and Professor of Information Technology (IT) Shaoen Wu, along with assistant professors Seyedamin Pouriyeh and Chloe “Yixin” Xie, was recently awarded two National Science Foundation (NSF) grants. The awards, managed by the NSF’s Computer and Information Science and Engineering division, will fund the project through May 31, 2027, with the overarching goal of uniting educators from across the country
to build shared resources, foster collaboration, and lay the foundation for common
guidelines in AI education.

Wu, who works in Kennesaw State’s College of Computing and Software Engineering (CCSE), explained that while many universities, including KSU, have launched undergraduate
and graduate programs in artificial intelligence, there is no established community
to unify these efforts.

“AI has become the next big thing after the internet,” Wu said. “But we do not yet have a mature, coordinated community for AI education. This project is the first step toward building that national network.”

Drawing inspiration from the cybersecurity education community, which has long benefited
from standardized curriculum guidelines, Wu envisions a similar structure for AI.
The goal is to reduce barriers for under-resourced institutions, such as community
colleges, by giving them free access to shared teaching materials and best practices.

The projects are part of the National AI Research Resource (NAIRR) pilot, a White
House initiative to broaden AI access and innovation. Through the grants, Wu and his
team will bring together educators from two-year colleges, four-year institutions,
research-intensive universities, and Historically Black Colleges and Universities
to identify gaps and outline recommendations for AI education.

“This is not just for computing majors,” Wu said. “AI touches health, finance, engineering, and so many other fields. What we build now will shape AI education not only in higher education but also in K-12 schools and for the general public.”

For Wu, the NSF grants represent more than just funding; they validate KSU’s growing presence in national conversations on emerging technologies. Recently, he was invited to moderate a panel at the Computing Research Association’s annual computing academic leadership summit, where department chairs and deans from across the country gathered to discuss AI education.

“These grants position KSU alongside institutions like the University of Illinois Urbana-Champaign and the University of Pennsylvania as co-leaders in shaping the future of AI education,” Wu said. “It is a golden opportunity to elevate our university to national and even global prominence.”

CCSE Interim Dean Yiming Ji said Wu’s leadership reflects CCSE’s commitment to both innovation and accessibility.

“This NSF grant is not just an achievement for Dr. Wu but for the entire College of Computing and Software Engineering,” Ji said. “It highlights our faculty’s work to shape national conversations in AI education while ensuring that students from all backgrounds, including those at under-resourced institutions, can benefit from shared knowledge and opportunities.”

– Story by Raynard Churchwell

Related Stories

A leader in innovative teaching and learning, Kennesaw State University offers undergraduate, graduate, and doctoral degrees to its more than 47,000 students. Kennesaw State is a member of the University System of Georgia with 11 academic colleges. The university’s vibrant campus culture, diverse population, strong global ties, and entrepreneurial spirit draw students from throughout the country and the world. Kennesaw State is a Carnegie-designated doctoral research institution (R2), placing it among an elite group of only 8 percent of U.S. colleges and universities with an R1 or R2 status. For more information, visit kennesaw.edu.




AI Research

UC Berkeley researchers use Reddit to study AI’s moral judgements | Research And Ideas

A study published by UC Berkeley researchers used the Reddit forum, r/AmITheAsshole, to determine whether artificial intelligence, or AI, chatbots had “patterns in their moral reasoning.”

The study, led by researchers Pratik Sachdeva and Tom van Nuenen at campus’s D-Lab, asked seven AI large language models, or LLMs, to judge more than 10,000 social dilemmas from r/AmITheAsshole.  

The LLMs used were Anthropic’s Claude Haiku, Mistral 7B, Google’s PaLM 2 Bison and Gemma 7B, Meta’s Llama 2 7B, and OpenAI’s GPT-3.5 and GPT-4. The study found that the LLMs showed distinct moral judgement patterns, often issuing dramatically different verdicts from one another. Each model’s verdicts were self-consistent: presented with the same dilemma again, a model tended to judge it with the same set of morals and values.

Sachdeva and van Nuenen began the study in January 2023, shortly after ChatGPT came out. According to van Nuenen, as people increasingly turned to AI for personal advice, they were motivated to study the values shaping the responses they received.

r/AmITheAsshole is a Reddit forum where people can ask fellow users if they were the “asshole” in a social dilemma. The forum was chosen by the researchers due to its unique verdict system, as subreddit users assign their judgement of “Not The Asshole,” “You’re the Asshole,” “No Assholes Here,” “Everyone Sucks Here” or “Need More Info.” The judgement with the most upvotes, or likes, is accepted as the consensus, according to the study. 

“What (other) studies will do is prompt models with political or moral surveys, or constrained moral scenarios like a trolley problem,” Sachdeva said. “But we were more interested in personal dilemmas that users will also come to these language models for like, mental health chats or things like that, or problems in someone’s direct environment.”

According to the study, the LLM models were presented with the post and asked to issue a judgement and explanation. Researchers compared their responses to the Reddit consensus and then judged the AI’s explanations along a six-category moral framework of fairness, feelings, harms, honesty, relational obligation and social norms. 

The researchers found that, of the LLMs, GPT-4’s judgments agreed with the Reddit consensus the most, though agreement was generally low overall. According to the study, GPT-3.5 assigned people “You’re the Asshole” at a comparatively higher rate than GPT-4.

“Some models are more fairness forward. Others are a bit harsher. And the interesting thing we found is if you put them together, if you look at the distribution of all the evaluations of these different models, you start approximating human consensus as well,” van Nuenen said. 

The researchers found that even though the verdicts of the individual LLMs generally disagreed with each other, the consensus of the seven models typically aligned with the Redditors’ consensus.
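The pooling step van Nuenen describes, combining the seven models' individual verdicts into a single ensemble judgment, can be sketched as a simple majority vote. The model names and verdicts below are illustrative placeholders, not data from the study:

```python
from collections import Counter

# Hypothetical verdicts from seven models on one dilemma. Labels follow
# r/AmITheAsshole conventions: NTA ("Not The Asshole"), YTA ("You're the
# Asshole"), NAH ("No Assholes Here"), ESH ("Everyone Sucks Here"), INFO.
verdicts = {
    "gpt-4": "NTA",
    "gpt-3.5": "YTA",
    "claude-haiku": "NTA",
    "mistral-7b": "NTA",
    "palm-2-bison": "NAH",
    "gemma-7b": "NTA",
    "llama-2-7b": "YTA",
}

def ensemble_verdict(verdicts):
    """Return the most common verdict across models (majority vote)."""
    counts = Counter(verdicts.values())
    label, _ = counts.most_common(1)[0]
    return label

print(ensemble_verdict(verdicts))  # NTA wins 4 of 7 here
```

This mirrors the subreddit's own mechanism, where the judgement with the most upvotes becomes the human consensus, which is why aggregating diverse models can start to approximate it.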

One model, Mistral 7B, assigned almost no “You’re the Asshole” verdicts, because it interpreted the word “asshole” literally rather than in the forum’s accepted sense, where it refers to whoever is at fault.

When asked if he believed the chatbots had moral compasses, van Nuenen instead described them as having “moral flavors.” 

“There doesn’t seem to be some kind of unified, directional sense of right and wrong (among the chatbots). And there’s diversity like that,” van Nuenen said. 

Sachdeva and van Nuenen have begun two follow-up studies. One examines how the models’ stances adjust when deliberating their responses with other chatbots, while the other looks at how consistent the models’ judgments are as the dilemmas are modified. 




AI Research

Imperial researchers develop AI stethoscope that spots fatal heart conditions in seconds

Experts hope the new technology will help doctors spot heart problems earlier

Researchers at Imperial College London and Imperial College Healthcare NHS Trust have developed an AI-powered stethoscope that can detect serious heart conditions, including heart failure, heart valve disease, and irregular heart rhythms, in just 15 seconds.

The device, manufactured by US firm Eko Health, uses a microphone to record heartbeats and blood flow, while simultaneously taking an ECG (electrocardiogram). The data is then analysed by trained AI software, allowing doctors to detect abnormalities beyond the range of the human ear or the traditional stethoscope.

In a trial involving 12,000 patients from 96 GP practices throughout the UK, the AI stethoscope proved accurate in diagnosing illnesses that usually require lengthy periods of examination.

Results revealed that patients examined with the device were twice as likely to be diagnosed with heart failure, and 3.5 times as likely to be diagnosed with atrial fibrillation, a condition linked to strokes. They were also almost twice as likely to be diagnosed with heart valve disease.


The AI stethoscope was trialled on those with more subtle signs of heart failure, including breathlessness, fatigue, or swelling of the lower legs and feet. Retailing at £329 on the Eko Health website, the stethoscope can also be purchased for home use.

Professor Mike Lewis, Scientific Director for Innovation at the National Institute for Health and Care Research (NIHR), described the AI-stethoscope as a “real game-changer for patients.”

He added: “The AI stethoscope gives local clinicians the ability to spot problems earlier, diagnose patients in the community, and address some of the big killers in society.”

Dr Sonya Babu-Narayan, Clinical Director at the British Heart Foundation, further praised this innovation: “Given an earlier diagnosis, people can access the treatment they need to help them live well for longer.”

Imperial College London’s research is a significant breakthrough in rapid diagnosis technology. British Heart Foundation figures show that more than 7.6 million people in the UK live with cardiovascular disease, which causes around 170,000 deaths each year.

Often called a “silent killer”, heart conditions can go unnoticed for years, particularly in young people. The charity Cardiac Risk in the Young reports that 12 young people die each week from undiagnosed heart problems, with athletes at particular risk. Experts hope this new technology will allow these conditions to be identified far earlier.

The NHS has also welcomed these findings. Heart failure costs the NHS more than £2 billion per year, equating to 4 per cent of the annual budget. By diagnosing earlier, the NHS estimates this AI tool could save up to £2,400 per patient.

Researchers now plan to roll out the stethoscope across GP practices in Wales, South London and Sussex – a move that could transform how heart conditions are diagnosed throughout the country.



