AI Insights

AI vs Supercomputers round 1: galaxy simulation goes to AI

Published on Jul. 10, 2025
Press Release

Physics / Astronomy


Computing / Math

In the first study of its kind, researchers led by Keiya Hirashima at the RIKEN Center for Interdisciplinary Theoretical and Mathematical Sciences (iTHEMS) in Japan, along with colleagues from the Max Planck Institute for Astrophysics (MPA) and the Flatiron Institute, have used machine learning, a type of artificial intelligence, to dramatically speed up simulations of galaxy evolution coupled with supernova explosions. This approach could help us understand the origins of our own galaxy, particularly the elements essential for life in the Milky Way.

Understanding how galaxies form is a central problem for astrophysicists. Although we know that powerful events like supernovae can drive galaxy evolution, we cannot simply look to the night sky and see it happen. Scientists rely on numerical simulations that are based on large amounts of data collected from telescopes and other devices that measure aspects of interstellar space. Simulations must account for gravity and hydrodynamics, as well as complex astrophysical thermo-chemistry.

On top of this, they must have high temporal resolution, meaning that the time between each 3D snapshot of the evolving galaxy must be small enough that critical events are not missed. For example, capturing the initial phase of supernova shell expansion requires a time resolution of mere hundreds of years, about 1,000 times finer than typical simulations of interstellar space achieve. In fact, a typical supercomputer takes 1–2 years to carry out a simulation of a relatively small galaxy at the proper temporal resolution.
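
As a rough sense of scale, consider an illustrative back-of-the-envelope count (the timestep values below are assumed round numbers, not figures from the paper): for a fixed span of simulated time, shrinking the timestep by a factor of 1,000 multiplies the number of updates the computer must perform by the same factor.

```python
# Illustrative only: how a ~1,000x finer timestep inflates the number of
# simulation updates over a fixed span of simulated time. The 200-million-year
# span matches the simulated galaxy shown later; the timestep values are
# assumed for the sake of the example.

def n_updates(total_years: float, timestep_years: float) -> int:
    """Number of simulation updates needed to cover total_years."""
    return round(total_years / timestep_years)

TOTAL_YEARS = 200e6        # 200 million years of galaxy evolution
coarse_dt = 100_000.0      # a typical interstellar-medium timestep (assumed)
fine_dt = 100.0            # ~hundreds of years, needed to resolve a supernova shell

print(n_updates(TOTAL_YEARS, coarse_dt))   # 2,000 updates
print(n_updates(TOTAL_YEARS, fine_dt))     # 2,000,000 updates -- 1,000x more work
```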

Getting over this timestep bottleneck was the main goal of the new study. By incorporating AI into their data-driven model, the research group was able to match the output of a previously modeled dwarf galaxy but got the result much more quickly. “When we use our AI model, the simulation is about four times faster than a standard numerical simulation,” says Hirashima. “This corresponds to a reduction of several months to half a year’s worth of computation time. Critically, our AI-assisted simulation was able to reproduce the dynamics important for capturing galaxy evolution and matter cycles, including star formation and galaxy outflows.”

Like most machine learning models, the researchers’ new model is trained on one set of data and then predicts outcomes for new data. In this case, the model incorporated a neural network and was trained on 300 simulations of an isolated supernova in a molecular cloud with a mass of one million suns. After training, the model could predict the density, temperature, and 3D velocities of gas 100,000 years after a supernova explosion. Compared with direct numerical simulations, such as those performed on supercomputers, the new model yielded similar structures and star formation history but required only a quarter of the computation time.
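
The press release does not describe the network architecture, so the following is only a minimal sketch of the general idea, assuming PyTorch and a plain feed-forward regressor: a surrogate is trained to map the gas state around a fresh supernova to the density, temperature, and 3D velocities predicted roughly 100,000 years later. Random tensors stand in for the 300 training simulations, and the patch size, layer widths, and epoch count are placeholders.

```python
# Conceptual sketch only (not the authors' ASURA-FDPS-ML code or data):
# train a surrogate that maps the local gas state at the time of a supernova
# to the predicted state ~100,000 years after the explosion.
import torch
from torch import nn

N_CELLS = 64        # hypothetical number of gas cells in the local patch
FEATURES = 5        # density, temperature, and 3 velocity components per cell

surrogate = nn.Sequential(
    nn.Linear(N_CELLS * FEATURES, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, N_CELLS * FEATURES),   # predicted post-explosion gas state
)

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder training pairs; in the study these come from 300 high-resolution
# simulations of a supernova in a million-solar-mass molecular cloud.
inputs = torch.randn(300, N_CELLS * FEATURES)
targets = torch.randn(300, N_CELLS * FEATURES)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(surrogate(inputs), targets)
    loss.backward()
    optimizer.step()
```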

According to Hirashima, “our AI-assisted framework will allow high-resolution star-by-star simulations of massive galaxies, such as the Milky Way, with the goal of predicting the origin of the solar system and the elements essential for the birth of life.”

Currently, the lab is using the new framework to run a Milky Way-sized galaxy simulation.


Reference

Hirashima, K. et al. (2025) ASURA-FDPS-ML: Star-by-star Galaxy Simulations Accelerated by Surrogate Modeling for Supernova Feedback. Astrophys. J. doi: 10.3847/1538-4357/add689

Contact

Keiya Hirashima, Special Postdoctoral Researcher

Division of Fundamental Mathematical Science, RIKEN Center for Interdisciplinary Theoretical and Mathematical Sciences (iTHEMS)

Adam Phillips
RIKEN Communications Division
Email: adam.phillips [at] riken.jp

The simulated galaxy after 200 million years. While the simulations look very similar with and without the machine learning model, the AI-assisted run was four times as fast, completing the large-scale simulation in a matter of months rather than years.







AI Insights

New York Passes RAISE Act—Artificial Intelligence Safety Rules


The New York legislature recently passed the Responsible AI Safety and Education Act (SB6953B) (“RAISE Act”). The bill awaits signature by New York Governor Kathy Hochul.

Applicability and Relevant Definitions

The RAISE Act applies to “large developers,” which is defined as a person that has trained at least one frontier model and has spent over $100 million in compute costs in aggregate in training frontier models. 

  • “Frontier model” means either (1) an artificial intelligence (AI) model trained using greater than 10^26 computational operations (e.g., integer or floating-point operations), the compute cost of which exceeds $100 million; or (2) an AI model produced by applying knowledge distillation to a frontier model, provided that the compute cost for the distilled model exceeds $5 million (an illustrative sketch of these thresholds follows this list).
  • “Knowledge distillation” is defined as any supervised learning technique that uses a larger AI model or the output of a larger AI model to train a smaller AI model with similar or equivalent capabilities as the larger AI model.
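
As a purely illustrative paraphrase (a sketch of the numeric thresholds summarized above, not statutory text or legal advice), the two definitions reduce to a pair of simple tests:

```python
# Illustrative logic only: a direct translation of the RAISE Act thresholds
# as summarized above. Not legal advice and not an official applicability test.

def is_frontier_model(compute_ops: float, compute_cost_usd: float,
                      distilled_from_frontier: bool = False) -> bool:
    """Condition (1): >1e26 operations with >$100M compute cost;
    condition (2): distilled from a frontier model at >$5M compute cost."""
    trained_at_scale = compute_ops > 1e26 and compute_cost_usd > 100_000_000
    distilled_at_scale = distilled_from_frontier and compute_cost_usd > 5_000_000
    return trained_at_scale or distilled_at_scale

def is_large_developer(trained_frontier_model: bool,
                       aggregate_frontier_training_cost_usd: float) -> bool:
    """A person who has trained at least one frontier model and spent over
    $100M in aggregate compute costs training frontier models."""
    return trained_frontier_model and aggregate_frontier_training_cost_usd > 100_000_000
```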

The RAISE Act imposes the following obligations and restrictions on large developers:

  • Prohibition on Frontier Models that Create Unreasonable Risk of Critical Harm: The RAISE Act prohibits large developers from deploying a frontier model if doing so would create an unreasonable risk of “critical harm.”
    • “Critical harm” is defined as the death or serious injury of 100 or more people, or at least $1 billion in damage to rights in money or property, caused or materially enabled by a large developer’s use, storage, or release of a frontier model through (1) the creation or use of a chemical, biological, radiological or nuclear weapon; or (2) an AI model engaging in conduct that (i) acts with no meaningful human intervention and (ii) would, if committed by a human, constitute a crime under the New York Penal Code that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.
  • Pre-Deployment Documentation and Disclosures: Before deploying a frontier model, large developers must:
    • (1) implement a written safety and security protocol;
    • (2) retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, for as long as the frontier model is deployed plus five years;
    • (3) conspicuously publish a redacted copy of the safety and security protocol and provide a copy of such redacted protocol to the New York Attorney General (“AG”) and the Division of Homeland Security and Emergency Services (“DHS”) (as well as grant the AG access to the unredacted protocol upon request);
    • (4) record and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure; and
    • (5) implement appropriate safeguards to prevent unreasonable risk of critical harm posed by the frontier model.
  • Safety and Security Protocol Annual Review: A large developer must conduct an annual review of its safety and security protocol to account for any changes to the capabilities of its frontier models and industry best practices, and make any necessary modifications to the protocol. For material modifications, the large developer must conspicuously publish a copy of such protocol with appropriate redactions (as described above).
  • Reporting Safety Incidents: A large developer must disclose each safety incident affecting a frontier model to the AG and DHS within 72 hours of the large developer learning of the safety incident or facts sufficient to establish a reasonable belief that a safety incident occurred.
    • “Safety incident” is defined as a known incidence of critical harm or one of the following incidents that provides demonstrable evidence of an increased risk of critical harm: (1) a frontier model autonomously engaging in behavior other than at the request of a user; (2) theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a frontier model; (3) the critical failure of any technical or administrative controls, including controls limiting the ability to modify a frontier model; or (4) unauthorized use of a frontier model. The disclosure must include (1) the date of the safety incident; (2) the reasons the incident qualifies as a safety incident; and (3) a short and plain statement describing the safety incident.

If enacted, the RAISE Act would take effect 90 days after being signed into law.




AI Insights

Humanists pass global declaration on artificial intelligence and human values


Luxembourg was the host city for the 2025 general assembly of Humanists International

Representatives of the global humanist community collectively resolved to pass The Luxembourg Declaration on Artificial Intelligence and Human Values at the 2025 general assembly of Humanists International, held in Luxembourg on Sunday 6 July. 

Drafted by Humanists UK with input from leading AI experts and other member organisations of Humanists International, the declaration outlines a set of ten shared ethical principles for the development, deployment, and regulation of artificial intelligence (AI) systems. It calls for AI to be aligned with human rights, democratic oversight, and the intrinsic dignity of every person, and for urgent action from governments and international bodies to make sure that AI serves as a tool for human flourishing, not harm.

Humanists UK patrons Professor Kate Devlin and Dr Emma Byrne were among the experts who consulted on an early draft of the declaration, prior to amendments from member organisations. Professor Devlin is Humanists UK’s commissioner to the UK’s AI Faith & Civil Society Commission.

Defining the values of our AI future 

Introducing the motion on the floor of the general assembly, Humanists UK Director of Communications and Development Liam Whitton urged humanists to recognise that the AI revolution was not a distant prospect on the horizon but already upon us. He argued that it fell to governments, international institutions, and ultimately civil society to define the values against which AI models should be trained, and the standards by which AI products and companies ought to be regulated.

Uniquely, humanists bring to the global conversation a principled secular ethics grounded in evidence, compassion, and human dignity. As governments and institutions grapple with the challenge of ‘AI alignment’ – ensuring that artificial intelligence reflects and respects human values – humanists offer a hopeful vision, rooted in a long tradition of thought about human happiness, moral progress, and the common good.

Read the Luxembourg Declaration on Artificial Intelligence and Human Values:

Adopted by the Humanists International General Assembly, Luxembourg, 2025.

In the face of artificial intelligence’s rapid advancement, we stand at a unique moment in human history. While new technologies offer unprecedented potential to enhance human flourishing, handled carelessly they also pose profound risks to human freedoms, human security, and our collective future.

AI systems already pervade innumerable aspects of human life and are developing far more rapidly than current ethical frameworks and governance structures can adapt. At the same time, the rapid concentration of these powerful capabilities within a small number of hands threatens to issue new challenges to civil liberties, democracies, and our vision of a more just and equal world.

In response to these historic challenges, the global humanist community affirms the following principles on the need to align artificial intelligence with human values rooted in reason, evidence, and our shared humanity:

  1. Human judgment: AI systems have the potential to empower and assist individuals and societies to achieve more in all aspects of human life. But they must never displace human judgment, human reason, human ethics, or human responsibility for our actions. Decisions that deeply affect people’s lives must always remain in human hands.
  2. Common good: Fundamentally, states must recognise that AI should be a tool to serve humanity rather than enrich a privileged few. The benefits of technological advancement should flow widely throughout society rather than concentrate power and wealth in ever-fewer hands. 
  3. Democratic governance: New technologies must be democratically accountable at all levels – from local communities and small private enterprises through to large multinationals and countries. No corporation, nation, or special interest should wield unaccountable power through technologies with potential to affect every sphere of human activity. Lawmakers, regulators, and public bodies must develop and sustain the expertise to keep pace with AI’s evolution and respond to emerging challenges.
  4. Transparency and autonomy: Citizens cannot meaningfully participate in democracies if the decisions affecting their lives are opaque. Transparency must be embedded not only in laws and regulations, but in the design of AI systems themselves — designed responsibly, with clear intent and purpose, and full human accountability. Laws should guarantee that every individual can freely decide how their personal data is used, and grant all citizens the means to query, contest, and shape how technologies are deployed.
  5. Protection from harm: Protecting people from harm must be a foundational principle of all AI systems, not an afterthought. As AI risks amplifying existing injustices in society – including racism, sexism, homophobia, and ableism – states and developers must act to prevent its use in discrimination, manipulation, unjust surveillance, targeted violence, or the suppression of lawful speech. Governments and business leaders must commit to long-term AI safety research and monitoring, aligning future AI systems with human goals, desires, and needs. 
  6. Shared prosperity: Previous industrial revolutions pursued progress without sufficient regard for human suffering. Today we must not. Technological advancement cannot be allowed to erode human dignity or entrench social divides. A truly human-centric approach demands bold investment in training, education, and social protections to enhance jobs, protect human dignity, and support those workers and communities most affected.
  7. Creators and artists: Properly harnessed, AI can help more people enjoy the benefits of creativity — expressing themselves, experimenting with new ideas, and collaborating in ways that bring personal meaning and joy. But we must continue to recognise and protect the unique value that human artists bring to creative work. Intellectual property frameworks must guarantee fair compensation, attribution, and protection for human artists and creators.
  8. Reason, truth, and integrity: Human freedom and progress depend on our ability to distinguish truth from falsehood and fact from fiction. As AI systems introduce new and far-reaching risks to the integrity of information, legal frameworks must rise to protect free inquiry, freedom of expression, and the health of democracy itself from the growing threat of misinformation, disinformation, and deliberate deception at scale.
  9. Future generations: The choices we make about AI today will shape the world for generations to come. Governments, civil society, and technology leaders must remain vigilant and act with foresight – prioritising the mitigation of environmental harms and long-term risks to human survival. These decisions must be guided by our responsibilities not only to one another, but to future generations, the ecosystem we rely on, and the wider animal kingdom.
  10. Human freedom, human flourishing: The ultimate value of AI will lie in its contribution to human happiness. To that end, we should embed shared values that promote human flourishing into AI systems — and be ambitious in using AI to maximise human freedom. For individuals, this could mean more time at leisure, pursuing passion projects, learning, reflecting, and making richer connections with other human beings. Collectively, we should realise these benefits by making advances in science and medicine, resolving pressing global challenges, and addressing inequalities within our societies. 

We commit ourselves as humanist organisations and as individuals to advocating these same principles in the governance, ethics, and deployment of AI worldwide.

We affirm the importance of humanist values to navigating these new frontiers – only by prioritising reason, compassion, dignity, freedom, and our shared humanity can human societies adequately navigate these emerging challenges. 

We call upon governments, corporations, civil society, and individuals to adopt these same principles through concrete policies, practices, and international agreements, taking this opportunity to renew our commitments to human rights, human dignity, and human flourishing now and always.

Previous Humanists International declarations – binding statements of organisational policy recognising outlooks, policies, and ethical convictions shared by humanist organisations in every continent – include the Auckland Declaration against the Politics of Division (2018), Reykjavik Declaration on the Climate Change Crisis (2019), and the Oxford Declaration on Freedom of Thought and Expression (2014). Traditionally, humanist organisations have marshalled these declarations as resources in their domestic and UN policy work, such as in Humanists UK’s advocacy of robust freedom of expression laws, or in formalising specific programmes of voluntary work, such as that of Humanist Climate Action in the UK.

Notes

For further comment or information, media should contact Humanists UK Director of Public Affairs and Policy Richy Thompson at press@humanists.uk or phone 0203 675 0959.

From 2022: The time has come: humanists must define the values that will underpin our AI future.

Humanists UK is the national charity working on behalf of non-religious people. Powered by over 150,000 members and supporters, we advance free thinking and promote humanism to create a tolerant society where rational thinking and kindness prevail. We provide ceremonies, pastoral care, education, and support services benefitting over a million people every year and our campaigns advance humanist thinking on ethical issues, human rights, and equal treatment for all.




AI Insights

AI makes it increasingly difficult to know what’s real – Leader Publications


