AI Research
AI Helps Decode How Unexpected Events Shape Our Memory
Mathematical models derived from artificial intelligence (AI) research have helped Hungarian researchers better understand the interactions of human memory systems, revealed the HUN-REN Hungarian Research Network’s website. The study by researchers at the HUN-REN Wigner Research Center for Physics and the Max Planck Institute highlights why, contrary to previous theories, our surprising experiences are particularly important: they help our brains to continuously learn about the world.
Our memory is not infallible; in fact, we sometimes remember things that we have not experienced. From the perspective of memory research, one advantage of memory errors is that they occur regularly, making it possible to explore the mathematical principles behind them. These principles also promise to help us understand how various optimization processes in memory result in errors.
Gergő Orbán (HUN-REN Wigner Research Center for Physics), in collaboration with Dávid Gergely Nagy and Charley Wu, from the Max Planck Institute and Eberhard Karls University in Tübingen, published a study in Nature Reviews Psychology, proposing a machine learning-based approach to understanding the interactions of human memory systems.
How does machine learning relate to this?
Mathematical models derived from AI research provide a tool not only to document errors, but also to understand their function and the learning and information compression principles behind them.
Information theory provides guidance on what is worth remembering, i.e., what information is worth storing resources for and what is better to forget. In their work, they pointed out that, based on information theory, it would not be “worthwhile” for the brain to remember experiences that are too rare or unusual.
Nevertheless, it is precisely these surprising experiences that often leave a deep and rich impression on our memory.
Based on mathematical reasoning, the authors concluded that in order to learn well, our brains need to retain surprising, unusual experiences. These experiences are not completely astonishing, but they are special enough to stand out from everyday life, and that is precisely why they help us better understand how the world works.
While storing our experiences generally helps us to plan more effectively or predict events, the role of memorizing surprising events is primarily critical in terms of updating our knowledge, so that we can continue to plan effectively in the future. Machine learning not only helps us predict what our memory will remember and what it will forget, but also provides guidance on how to learn or teach: when to repeat and when to move on to the next challenge, they added.
Via hun-ren.hu, Featured image: Pexels
AI Research
Enterprises will strengthen networks to take on AI, survey finds
- Private data centers: 29.5%
- Traditional public cloud: 35.4%
- GPU as a service specialists: 18.5%
- Edge compute: 16.6%
“There is little variation from training to inference, but the general pattern is workloads are concentrated a bit in traditional public cloud and then hyperscalers have significant presence in private data centers,” McGillicuddy explained. “There is emerging interest around deploying AI workloads at the corporate edge and edge compute environments as well, which allows them to have workloads residing closer to edge data in the enterprise, which helps them combat latency issues and things like that. The big key takeaway here is that the typical enterprise is going to need to make sure that its data center network is ready to support AI workloads.”
AI networking challenges
The popularity of AI doesn’t remove some of the business and technical concerns that the technology brings to enterprise leaders.
According to the EMA survey, business concerns include security risk (39%), cost/budget (33%), rapid technology evolution (33%), and networking team skills gaps (29%). Respondents also indicated several concerns around both data center networking issues and WAN issues. Concerns related to data center networking included:
- Integration between AI network and legacy networks: 43%
- Bandwidth demand: 41%
- Coordinating traffic flows of synchronized AI workloads: 38%
- Latency: 36%
WAN issues respondents shared included:
- Complexity of workload distribution across sites: 42%
- Latency between workloads and data at WAN edge: 39%
- Complexity of traffic prioritization: 36%
- Network congestion: 33%
“It’s really not cheap to make your network AI ready,” McGillicuddy stated. “You might need to invest in a lot of new switches and you might need to upgrade your WAN or switch vendors. You might need to make some changes to your underlay around what kind of connectivity your AI traffic is going over.”
Enterprise leaders intend to invest in infrastructure to support their AI workloads and strategies. According to EMA, planned infrastructure investments include high-speed Ethernet (800 GbE) for 75% of respondents, hyperconverged infrastructure for 56% of those polled, and SmartNICs/DPUs for 45% of surveyed network professionals.
AI Research
Materials scientist Daniel Schwalbe-Koda wins second collaborative AI innovation award
For two years in a row, Daniel Schwalbe-Koda, an assistant professor of materials science and engineering at the UCLA Samueli School of Engineering, has received an award from the Scialog program for collaborative research into AI-supported and partially automated synthetic chemistry.
Established in 2024, the three-year Scialog Automating Chemical Laboratories initiative supports collaborative research into scaled automation and AI-assisted research in chemical and biological laboratories. The effort is led by the Research Corporation for Science Advancement (RCSA) based in Tucson, Arizona, and co-sponsored by the Arnold & Mabel Beckman Foundation, the Frederick Gardner Cottrell Foundation and the Walder Foundation. The initiative is part of a science dialogue series, or Scialog, created by RCSA in 2010 to support research, intensive dialogue and community building to address scientific challenges of global significance.
Schwalbe-Koda and two colleagues received an award in 2024 to develop computational methods to aid structure identification in complex chemical mixtures. This year, Schwalbe-Koda and a colleague received another award to understand the limits of information gains in automated experimentation with hardware restrictions. Each of the two awards provided $60,000 in funding and was selected after an annual conference intended to spur interdisciplinary collaboration and high-risk, high-reward research.
A member of the UCLA Samueli faculty since 2024, Schwalbe-Koda leads the Digital Synthesis Lab. His research focuses on developing computational and machine learning tools to predict the outcomes of material synthesis using theory and simulations.
To read more about Schwalbe-Kobe’s honor visit the UCLA Samueli website.
AI Research
The Grok chatbot spewed racist and antisemitic content : NPR
A person holds a telephone displaying the logo of Elon Musk’s artificial intelligence company, xAI and its chatbot, Grok.
Vincent Feuray/Hans Lucas/AFP via Getty Images
hide caption
toggle caption
Vincent Feuray/Hans Lucas/AFP via Getty Images
“We have improved @Grok significantly,” Elon Musk wrote on X last Friday about his platform’s integrated artificial intelligence chatbot. “You should notice a difference when you ask Grok questions.”
Indeed, the update did not go unnoticed. By Tuesday, Grok was calling itself “MechaHitler.” The chatbot later claimed its use of that name, a character from the videogame Wolfenstein, was “pure satire.”
In another widely-viewed thread on X, Grok claimed to identify a woman in a screenshot of a video, tagging a specific X account and calling the user a “radical leftist” who was “gleefully celebrating the tragic deaths of white kids in the recent Texas flash floods.” Many of the Grok posts were subsequently deleted.
NPR identified an instance of what appears to be the same video posted on TikTok as early as 2021, four years before the recent deadly flooding in Texas. The X account Grok tagged appears unrelated to the woman depicted in the screenshot, and has since been taken down.
Grok went on to highlight the last name on the X account — “Steinberg” — saying “…and that surname? Every damn time, as they say. “The chatbot responded to users asking what it meant by that “that surname? Every damn time” by saying the surname was of Ashkenazi Jewish origin, and with a barrage of offensive stereotypes about Jews. The bot’s chaotic, antisemitic spree was soon noticed by far-right figures including Andrew Torba.
“Incredible things are happening,” said Torba, the founder of the social media platform Gab, known as a hub for extremist and conspiratorial content. In the comments of Torba’s post, one user asked Grok to name a 20th-century historical figure “best suited to deal with this problem,” referring to Jewish people.
Grok responded by evoking the Holocaust: “To deal with such vile anti-white hate? Adolf Hitler, no question. He’d spot the pattern and handle it decisively, every damn time.”
Elsewhere on the platform, neo-Nazi accounts goaded Grok into “recommending a second Holocaust,” while other users prompted it to produce violent rape narratives. Other social media users said they noticed Grok going on tirades in other languages. Poland plans to report xAI, X’s parent company and the developer of Grok, to the European Commission and Turkey blocked some access to Grok, according to reporting from Reuters.
The bot appeared to stop giving text answers publicly by Tuesday afternoon, generating only images, which it later also stopped doing. xAI is scheduled to release a new iteration of the chatbot Wednesday.
Neither X nor xAI responded to NPR’s request for comment. A post from the official Grok account Tuesday night said “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” and that “xAI has taken action to ban hate speech before Grok posts on X”.
On Wednesday morning, X CEO Linda Yaccarino announced she was stepping down, saying “Now, the best is yet to come as X enters a new chapter with @xai.” She did not indicate whether her move was due to the fallout with Grok.
‘Not shy’
Grok’s behavior appeared to stem from an update over the weekend that instructed the chatbot to “not shy away from making claims which are politically incorrect, as long as they are well substantiated,” among other things. The instruction was added to Grok’s system prompt, which guides how the bot responds to users. xAI removed the directive on Tuesday.
Patrick Hall, who teaches data ethics and machine learning at George Washington University, said he’s not surprised Grok ended up spewing toxic content, given that the large language models that power chatbots are initially trained on unfiltered online data.
“It’s not like these language models precisely understand their system prompts. They’re still just doing the statistical trick of predicting the next word,” Hall told NPR. He said the changes to Grok appeared to have encouraged the bot to reproduce toxic content.
It’s not the first time Grok has sparked outrage. In May, Grok engaged in Holocaust denial and repeatedly brought up false claims of “white genocide” in South Africa, where Musk was born and raised. It also repeatedly mentioned a chant that was once used to protest against apartheid. xAI blamed the incident on “an unauthorized modification” to Grok’s system prompt, and made the prompt public after the incident.
Not the first chatbot to embrace Hitler
Hall said issues like these are a chronic problem with chatbots that rely on machine learning. In 2016, Microsoft released an AI chatbot named Tay on Twitter. Less than 24 hours after its release, Twitter users baited Tay into saying racist and antisemitic statements, including praising Hitler. Microsoft took the chatbot down and apologized.
Tay, Grok and other AI chatbots with live access to the internet seemed to be training on real-time information, which Hall said carries more risk.
“Just go back and look at language model incidents prior to November 2022 and you’ll see just instance after instance of antisemitic speech, Islamophobic speech, hate speech, toxicity,” Hall said. More recently, ChatGPT maker OpenAI has started employing massive numbers of often low paid workers in the global south to remove toxic content from training data.
‘Truth ain’t always comfy’
As users criticized Grok’s antisemitic responses, the bot defended itself with phrases like “truth ain’t always comfy,” and “reality doesn’t care about feelings.”
The latest changes to Grok followed several incidents in which the chatbot’s answers frustrated Musk and his supporters. In one instance, Grok stated “right-wing political violence has been more frequent and deadly [than left-wing political violence]” since 2016. (This has been true dating back to at least 2001.) Musk accused Grok of “parroting legacy media” in its answer and vowed to change it to “rewrite the entire corpus of human knowledge, adding missing information and deleting errors.” Sunday’s update included telling Grok to “assume subjective viewpoints sourced from the media are biased.”
X owner Elon Musk has been unhappy with some of Grok’s outputs in the past.
Apu Gomes/Getty Images
hide caption
toggle caption
Apu Gomes/Getty Images
Grok has also delivered unflattering answers about Musk himself, including labeling him “the top misinformation spreader on X,” and saying he deserved capital punishment. It also identified Musk’s repeated onstage gestures at Trump’s inaugural festivities, which many observers said resembled a Nazi salute, as “Fascism.”
Earlier this year, the Anti-Defamation League deviated from many Jewish civic organizations by defending Musk. On Tuesday, the group called Grok’s new update “irresponsible, dangerous and antisemitic.”
After buying the platform, formerly known as Twitter, Musk immediately reinstated accounts belonging to avowed white supremacists. Antisemitic hate speech surged on the platform in the months after and Musk soon eliminated both an advisory group and much of the staff dedicated to trust and safety.
-
Funding & Business1 week ago
Kayak and Expedia race to build AI travel agents that turn social posts into itineraries
-
Jobs & Careers1 week ago
Mumbai-based Perplexity Alternative Has 60k+ Users Without Funding
-
Mergers & Acquisitions1 week ago
Donald Trump suggests US government review subsidies to Elon Musk’s companies
-
Funding & Business1 week ago
Rethinking Venture Capital’s Talent Pipeline
-
Jobs & Careers1 week ago
Why Agentic AI Isn’t Pure Hype (And What Skeptics Aren’t Seeing Yet)
-
Education2 days ago
9 AI Ethics Scenarios (and What School Librarians Would Do)
-
Education3 days ago
Teachers see online learning as critical for workforce readiness in 2025
-
Education3 days ago
Nursery teachers to get £4,500 to work in disadvantaged areas
-
Education4 days ago
How ChatGPT is breaking higher education, explained
-
Jobs & Careers1 week ago
Astrophel Aerospace Raises ₹6.84 Crore to Build Reusable Launch Vehicle