
AI Research

New collaborative effort harnesses AI to drive ALS drug discovery in Louisiana


Answer ALS is proud to announce the launch of a groundbreaking collaborative initiative aimed at accelerating AI-powered drug discovery for ALS and other neurodegenerative diseases. This effort, known as the Louisiana AI Drug Development Infrastructure for ALS (LADDIA), brings together leading institutions and innovators, including GATC Health (a tech-bio innovator using validated AI models to accelerate drug discovery from large-scale multiomics data), Pennington Biomedical Research Center, and Tulane University, to harness the power of artificial intelligence and one of the largest ALS datasets in the world.

This initiative is made possible through a commitment from the State of Louisiana to advance neuroscience research and innovation across the state. By investing in LADDIA, Louisiana is helping to position itself as a national leader in the convergence of AI and biomedical discovery. 

At the center of this effort is Dr. Jeffrey Keller of Pennington Biomedical, working in close partnership with Dr. Aron Culotta of Tulane. Together, they will lead a coordinated statewide effort to connect researchers with expertise in AI, drug discovery, neuroscience, and clinical care, all working together to drive innovation toward ALS treatments. Currently, there are no known viable treatments for ALS; LADDIA’s goal is to help change that trajectory.

“This is more than a research partnership; it’s a strategic investment in the future of ALS discovery. By aligning Louisiana’s top talent and institutions with cutting-edge AI tools and our open-access Neuromine Data Portal, we are enabling real-time collaboration that could help identify druggable pathways and translate data into breakthroughs.”


Clare Durrett, Executive Director of Answer ALS

The initiative will roll out in two phases: 

  • Phase One focuses on building the collaborative foundation, recruiting local talent, aligning institutional strengths, and preparing the infrastructure for AI-enabled drug discovery. 
  • Phase Two activates that foundation, advancing collaborative projects, optimizing AI models, and generating high-impact scientific outputs across participating institutions.

“With the gradual adoption of artificial intelligence in applications around the globe, to apply this incredible technology toward the pursuit of treatments for ALS and other neurodegenerative diseases is perhaps the most noble and worthwhile implementation of it,” said Dr. Keller, who is the principal investigator of Answer ALS’ open access data repository, Neuromine. “The open-access repository of the Neuromine Data Portal will be instrumental in this pursuit, and along with Dr. Culotta, I look forward to collaborating with researchers and AI experts to navigate currently unseen patterns to potential treatments.” 

The ultimate goal is to identify and prioritize therapeutic targets using AI-driven insights from Answer ALS’ Neuromine Data Portal, the largest open-access ALS dataset in the world.

“GATC is proud to partner in this important mission to leverage our proprietary AI platform to identify druggable ALS targets with high predictive accuracy,” said GATC Health President Dr. Rahul Gupta. “We believe this alliance of research data, academia and advanced AI is the new model for rapid discovery of novel therapeutics to treat diseases currently lacking effective treatment. The biomarkers identified through this collaboration will be shared with the research community, while also enabling GATC to pursue therapeutic development based on these discoveries.”

Benchmarks for the initiative include joint research publications, data-driven discoveries, and a shared roadmap for long-term collaboration, positioning Louisiana as a leader in AI-driven medical innovation. The model being driven by LADDIA and GATC also represents a scalable framework for applying AI to other complex diseases, from Alzheimer’s to chronic pain, through public-private partnerships.

“This important collaboration highlights the power of AI to transform healthcare,” said Dr. Culotta. “Combining Tulane’s expertise in AI and biomedical research with partners across the state, we aim to accelerate AI-driven solutions for ALS and other health challenges.”

Answer ALS remains committed to building the tools, data, and partnerships needed to end ALS. With the launch of LADDIA, another chapter in that mission begins. 




AI Research

The hidden cost of AI in research: AI’s role in limiting thought


Today, artificial intelligence has become an inseparable component of the modern research process; ChatGPT has even become a co-author for many scholars. Despite the glamour with which AI’s involvement in research is portrayed, that involvement is not risk-free: reliance on automated systems threatens human intellectual abilities and skills, with long-term negative consequences for the mind’s capacity for deep knowledge and creativity.

The gradual expansion of AI use in research may, over time, lead to human intelligence being replaced by artificial intelligence in the conduct of research projects.

Relying on AI in research could gradually cause the next generation to lose awareness of the stages and requirements of the scientific research process. When AI platforms automate every task needed for research, researchers no longer need to engage actively with the process; they can simply ask AI to handle it. The risk for the next generation, then, is a likely erosion of cognitive skills and memory over time. As AI takes on more components of the research process, critical thinking and problem-solving abilities could decline. The simplicity of the AI-powered research process might therefore produce a generation of researchers capable of delivering quick results but lacking the intellectual depth, independence, and awareness needed to sustain meaningful research.

When researchers rely on AI’s automated responses, they risk becoming passive users and failing to develop the analytical and questioning skills that high-quality research demands. Nor’ain Abdul Rashid states in his article: “Over-dependence on automation may undermine the reflective dimensions of the qualitative research process.” This highlights a key risk: research isn’t just about generating results but about understanding and thoughtfully engaging with the process, drawing new insights from every stage. It is also important to note that AI can fabricate references and information, which is likely to mislead researchers about the stages and components of research and produce flawed results.

If AI automates all research tasks that used to foster our intellectual growth, the human mind could become weaker, and research skills might gradually fade away. We are likely to lose those abilities and skills that differentiate us from all other creatures, including the angels, since we come to life with a purpose and a mission. Maintaining the essence of effective research requires human intellect and reflection to guide the process. The human mind should never be replaced by a machine; otherwise, research will lose its value.

Hajer al Balushi

The writer is a student of Sultan Qaboos University





AI Research

How AI and Automation are Speeding Up Science and Discovery – Berkeley Lab News Center


The Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) is at the forefront of a global shift in how science gets done—one driven by artificial intelligence, automation, and powerful data systems. By integrating these tools, researchers are transforming the speed and scale of discovery across disciplines, from energy to materials science to particle physics.

This integrated approach is not just advancing research at Berkeley Lab—it’s strengthening the nation’s scientific enterprise. By pioneering AI-enabled discovery platforms and sharing them across the research community, Berkeley Lab is helping the U.S. compete in the global race for innovation, delivering the tools and insights needed to solve some of the world’s most pressing challenges.

From accelerating materials discovery to optimizing beamlines and more, here are four ways Berkeley Lab is using AI to make research faster, smarter, and more impactful.

Automating Discovery: AI and Robotics for Materials Innovation

At the heart of materials science is a time-consuming process: formulating, synthesizing, and testing thousands of potential compounds. AI is helping Berkeley Lab speed that up—dramatically.

A-Lab
At Berkeley Lab’s automated materials facility, A-Lab, AI algorithms propose new compounds, and robots prepare and test them. This tight loop between machine intelligence and automation drastically shortens the time it takes to validate materials for use in technologies like batteries and electronics.

Autobot
Exploratory tools like Autobot, a robotic system at the Molecular Foundry, are being used to investigate new materials for applications ranging from energy to quantum computing, making lab work faster and more flexible.





AI Research

Captions rebrands as Mirage, expands beyond creator tools to AI video research


Captions, an AI-powered video creation and editing app for content creators that has secured over $100 million in venture capital to date at a valuation of $500 million, is rebranding to Mirage, the company announced on Thursday. 

The new name reflects the company’s broader ambitions to become an AI research lab focused on multimodal foundation models designed specifically for short-form video content on platforms like TikTok, Reels, and Shorts. The company believes this approach will distinguish it from traditional AI models and competitors such as D-ID, Synthesia, and Hour One.

The rebranding will also unify the company’s offerings under one umbrella, bringing together the flagship creator-focused AI video platform, Captions, and the recently launched Mirage Studio, which caters to brands and ad production.

“The way we see it, the real race for AI video hasn’t begun. Our new identity, Mirage, reflects our expanded vision and commitment to redefining the video category, starting with short-form video, through frontier AI research and models,” CEO Gaurav Misra told TechCrunch.


The sales pitch behind Mirage Studio, which launched in June, focuses on enabling brands to create short advertisements without relying on human talent or large budgets. By simply submitting an audio file, the AI generates video content from scratch, with an AI-generated background and custom AI avatars. Users can also upload selfies to create an avatar using their likeness.

What sets the platform apart, according to the company, is its ability to produce AI avatars that have natural-looking speech, movements, and facial expressions. Additionally, Mirage says it doesn’t rely on existing stock footage, voice cloning, or lip-syncing. 

Mirage Studio is available under the business plan, which costs $399 per month for 8,000 credits. New users receive 50% off the first month. 


While these tools will likely benefit brands wanting to streamline video production and save some money, they also spark concerns around the potential impact on the creative workforce. The growing use of AI in advertisements has prompted backlash, as seen in a recent Guess ad in Vogue’s July print edition that featured an AI-generated model.

Additionally, as this technology becomes more advanced, distinguishing real videos from deepfakes becomes increasingly difficult. That’s a hard pill to swallow for many people, especially given how quickly misinformation can spread these days.

Mirage recently addressed its role in deepfake technology in a blog post. The company acknowledged the genuine risks of misinformation while also expressing optimism about the positive potential of AI video. It mentioned that it has put moderation measures in place to limit misuse, such as preventing impersonation and requiring consent for likeness use. 

However, the company emphasized that “design isn’t a catch-all” and that the real solution lies in fostering a “new kind of media literacy” where people approach video content with the same critical eye as they do news headlines.




