
AI Research

Synthetic Human Genome Project gets go ahead

Gwyndaf Hughes, Science Videographer

How the researchers hope to create human DNA

Work has begun on a controversial project to create the building blocks of human life from scratch, in what is believed to be a world first.

The research has been taboo until now because of concerns it could lead to designer babies or unforeseen changes for future generations.

But now the world’s largest medical charity, the Wellcome Trust, has given an initial £10m to start the project and says it has the potential to do more good than harm by accelerating treatments for many incurable diseases.

Dr Julian Sale, of the MRC Laboratory of Molecular Biology in Cambridge, who is part of the project, told BBC News the research was the next giant leap in biology.

“The sky is the limit. We are looking at therapies that will improve people’s lives as they age, that will lead to healthier aging with less disease as they get older.

“We are looking to use this approach to generate disease-resistant cells we can use to repopulate damaged organs, for example in the liver and the heart, even the immune system,” he said.

But critics fear the research opens the way for unscrupulous researchers seeking to create enhanced or modified humans.

Dr Pat Thomas, director of the campaign group Beyond GM, said: “We like to think that all scientists are there to do good, but the science can be repurposed to do harm and for warfare.”

Details of the project were given to BBC News on the 25th anniversary of the completion of the Human Genome Project, which mapped the molecules in human DNA and was also largely funded by Wellcome.

Artwork: The aim is to build sections of human DNA from scratch (Getty Images)

Every cell in our body, with the exception of red blood cells, contains a molecule called DNA which carries the genetic information it needs.

DNA is built from just four much smaller blocks, referred to as A, G, C and T, which are repeated over and over again in various combinations. Amazingly, these four letters carry all the genetic information that physically makes us who we are.
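As a toy illustration (my own, not from the article) of how those four letters work, each base on one strand of the double helix pairs with a fixed partner on the other, A with T and G with C, so one strand fully determines its complement:

```python
def complement(strand: str) -> str:
    """Return the complementary DNA strand: A pairs with T, G pairs with C."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in strand)

print(complement("GATTACA"))  # CTAATGT
```

This pairing rule is what lets a cell, or a synthetic-biology lab, copy a sequence faithfully.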

The Human Genome Project enabled scientists to read all human genes like a bar code. The new work that is getting under way, called the Synthetic Human Genome Project, potentially takes this a giant leap forward – it will allow researchers not just to read a molecule of DNA, but to create parts of it – maybe one day all of it – molecule by molecule from scratch.

Scientists will begin developing tools to create ever larger sections of human DNA (BBC News)

The scientists’ first aim is to develop ways of building ever larger blocks of human DNA, up to the point when they have synthetically constructed a human chromosome. These contain the genes that govern our development, repair and maintenance.

These can then be studied and experimented on to learn more about how genes and DNA regulate our bodies.

Many diseases occur when these genes go wrong, so the studies could lead to better treatments, according to Prof Matthew Hurles, director of the Wellcome Sanger Institute, which sequenced the largest proportion of the human genome.

“Building DNA from scratch allows us to test out how DNA really works and test out new theories, because currently we can only really do that by tweaking DNA that already exists in living systems.”

Machines at the Sanger Institute were used to sequence the human genome (BBC News)

The project’s work will be confined to test tubes and dishes and there will be no attempt to create synthetic life. But the technology will give researchers unprecedented control over human living systems.

And although the project is hunting for medical benefits, there is nothing to stop unscrupulous scientists misusing the technology.

They could, for example, attempt to create biological weapons, enhanced humans or even creatures that have human DNA, according to Prof Bill Earnshaw, a highly respected genetic scientist at Edinburgh University who designed a method for creating artificial human chromosomes.

“The genie is out of the bottle,” he told BBC News. “We could have a set of restrictions now, but if an organisation with access to appropriate machinery decided to start synthesising anything, I don’t think we could stop them.”

Ms Thomas is concerned about how the technology will be commercialised by healthcare companies developing treatments emerging from the research.

“If we manage to create synthetic body parts or even synthetic people, then who owns them? And who owns the data from these creations?”

Given the potential misuse of the technology, the question for Wellcome is why they chose to fund it. The decision was not made lightly, according to Dr Tom Collins, who gave the funding go-ahead.

“We asked ourselves what was the cost of inaction,” he told BBC News.

“This technology is going to be developed one day, so by doing it now we are at least trying to do it in as responsible a way as possible and to confront the ethical and moral questions in as upfront a way as possible.”

A dedicated social science programme will run in tandem with the project’s scientific development and will be led by Prof Joy Zhang, a sociologist, at the University of Kent.

“We want to get the views of experts, social scientists and especially the public about how they relate to the technology and how it can be beneficial to them and importantly what questions and concerns they have,” she said.




AI Research

First ‘vibe hacking’ case shows AI cybercrime evolution and new threats


A hacker has pulled off one of the most alarming AI-powered cyberattacks ever documented. According to Anthropic, the company behind Claude, a hacker used its artificial intelligence chatbot to research, hack, and extort at least 17 organizations. This marks the first public case where a leading AI system automated nearly every stage of a cybercrime campaign, an evolution that experts now call “vibe hacking.”



Simulated ransom guidance created by Anthropic’s threat intelligence team for research and demonstration purposes. (Anthropic)

How a hacker used an AI chatbot to strike 17 targets

Anthropic’s investigation revealed how the attacker convinced Claude Code, a coding-focused AI agent, to identify vulnerable companies. Once inside, the hacker:

  • Built malware to steal sensitive files.
  • Extracted and organized stolen data to find high-value information.
  • Calculated ransom demands based on victims’ finances.
  • Generated tailored extortion notes and emails.

Targets included a defense contractor, a financial institution and multiple healthcare providers. The stolen data included Social Security numbers, financial records and government-regulated defense files. Ransom demands ranged from $75,000 to over $500,000.

Why AI cybercrime is more dangerous than ever

Cyber extortion is not new. But this case shows how AI transforms it. Instead of acting as an assistant, Claude became an active operator scanning networks, crafting malware and even analyzing stolen data. AI lowers the barrier to entry. In the past, such operations required years of training. Now, a single hacker with limited skills can launch attacks that once took a full criminal team. This is the frightening power of agentic AI systems.


A simulated ransom note template that hackers could use to scam victims. (Anthropic)

What vibe hacking reveals about AI-powered threats

Security researchers refer to this approach as vibe hacking. It describes how hackers embed AI into every phase of an operation.

  • Reconnaissance: Claude scanned thousands of systems and identified weak points.
  • Credential theft: It extracted login details and escalated privileges.
  • Malware development: Claude generated new code and disguised it as trusted software.
  • Data analysis: It sorted stolen information to identify the most damaging details.
  • Extortion: Claude created alarming ransom notes with victim-specific threats.

This systematic use of AI marks a shift in cybercrime tactics. Attackers no longer just ask AI for tips; they use it as a full-fledged partner.


A cybercriminal’s initial sales offering on the dark web seen in January 2025. (Anthropic)

How Anthropic is responding to AI abuse

Anthropic says it has banned the accounts linked to this campaign and developed new detection methods. Its threat intelligence team continues to investigate misuse cases and share findings with industry and government partners. The company admits, however, that determined actors can still bypass safeguards. And experts warn that these patterns are not unique to Claude; similar risks exist across all advanced AI models.

How to protect yourself from AI cyberattacks

Here’s how to defend against hackers now using AI tools to their advantage:

1. Use strong, unique passwords everywhere

Hackers who break into one account often attempt to use the same password across your other logins. This tactic becomes even more dangerous when AI is involved because a chatbot can quickly test stolen credentials across hundreds of sites. The best defense is to create long, unique passwords for every account you have. Treat your passwords like digital keys and never reuse the same one in more than one lock.
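A minimal sketch of what generating such a password could look like, using Python’s standard `secrets` module (the 20-character length and the character set here are illustrative choices, not a recommendation from this article — a password manager does this for you):

```python
import secrets
import string

def make_password(length: int = 20) -> str:
    """Generate a cryptographically random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # a different random string on every call
```

Because `secrets` draws from the operating system’s secure random source, the result can’t be predicted or reproduced the way a reused or pattern-based password can.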

Next, see if your email has been exposed in past breaches. Our No. 1 password manager pick (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

Check out the best expert-reviewed password managers of 2025 at Cyberguy.com/Passwords

2. Protect your identity and use a data removal service

The hacker who abused Claude didn’t just steal files; they organized and analyzed them to find the most damaging details. That illustrates the value of your personal information in the wrong hands. The less data criminals can find about you online, the safer you are. Review your digital footprint, lock down privacy settings, and reduce what’s available on public databases and broker sites.

While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com/Delete

Get a free scan to find out if your personal information is already out on the web: Cyberguy.com/FreeScan


Illustration of a hacker at work. (Kurt “CyberGuy” Knutsson)

3. Turn on two-factor authentication (2FA)

Even if a hacker obtains your password, 2FA can stop them in their tracks. AI tools now help criminals generate highly realistic phishing attempts designed to trick you into handing over logins. By enabling 2FA, you add an extra layer of protection that they cannot easily bypass. Choose app-based codes or a physical key whenever possible, as these are more secure than text messages, which are easier for attackers to intercept.
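App-based codes are typically time-based one-time passwords (TOTP, standardized in RFC 6238). As a rough sketch of how an authenticator app derives a short-lived code from a shared secret (illustrative only, not production code — use a real authenticator app):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    if t is None:
        t = int(time.time())
    counter = struct.pack(">Q", t // step)      # 8-byte big-endian time counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at t=59, the 8-digit SHA-1 code is 94287082
print(totp(b"12345678901234567890", t=59, digits=8))
```

Because the code rolls over every 30 seconds and depends on a secret that never leaves your device, a stolen password alone is not enough to log in.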

4. Keep devices and software updated

AI-driven attacks often exploit the most basic weaknesses, such as outdated software. Once a hacker knows which companies or individuals are running old systems, they can use automated scripts to break in within minutes. Regular updates close those gaps before they can be targeted. Setting your devices and apps to update automatically removes one of the easiest entry points that criminals rely on.

5. Be suspicious of urgent messages

One of the most alarming details in the Anthropic report was how the hacker used AI to craft convincing extortion notes. The same tactics are being applied to phishing emails and texts sent to everyday users. If you receive a message demanding immediate action, such as clicking a link, transferring money or downloading a file, treat it with suspicion. Stop, check the source and verify before you act.

6. Use strong antivirus software

The hacker in this case built custom malware with the help of AI. That means malicious software is getting smarter, faster and harder to detect. Strong antivirus software that constantly scans for suspicious activity provides a critical safety net. It can identify phishing emails and detect ransomware before it spreads, which is vital now that AI tools make these attacks more adaptive and persistent.

Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com/LockUpYourTech

Over 40,000 Americans were previously exposed in a massive OnTrac security breach, leaking sensitive medical and financial records. (Jakub Porzycki/NurPhoto via Getty Images)

7. Stay private online with a VPN

AI isn’t only being used to break into companies; it’s also being used to analyze patterns of behavior and track individuals. A VPN encrypts your online activity, making it much harder for criminals to connect your browsing to your identity. By keeping your internet traffic private, you add another layer of protection against hackers trying to gather information they can later exploit.

For the best VPN software, see my expert review of the best VPNs for browsing the web privately on your Windows, Mac, Android & iOS devices at Cyberguy.com/VPN


Kurt’s key takeaways

AI isn’t just powering helpful tools; it’s also arming hackers. This case proves that cybercriminals can now automate attacks in ways once thought impossible. The good news is, you can take practical steps today to reduce your risk. By making smart moves, such as enabling two-factor authentication (2FA), updating devices, and using protective tools, you can stay one step ahead.

Do you think AI chatbots should be more tightly regulated to prevent abuse? Let us know by writing to us at Cyberguy.com/Contact


Copyright 2025 CyberGuy.com. All rights reserved.




AI Research

Universities Waive Ethics Reviews for AI Synthetic Medical Data Studies

In a groundbreaking shift that’s reshaping medical research, universities across North America and Europe are increasingly bypassing traditional ethics reviews for studies involving AI-generated synthetic medical data. According to a recent report in Nature, representatives from four prominent medical research centers—including institutions in Canada, the United States, and Italy—have confirmed they’ve waived standard institutional review board (IRB) approvals for such projects. The rationale? Synthetic data, created by algorithms that mimic real patient records without containing traceable personal information, doesn’t pose the same privacy risks as actual human data. This move is accelerating fields like drug discovery and disease modeling, where access to vast datasets is crucial but often hampered by regulatory hurdles.

Proponents argue that this approach could unlock unprecedented innovation. For instance, AI systems can generate hypothetical patient profiles—complete with symptoms, genetic markers, and treatment outcomes—based on anonymized real-world patterns. Researchers at these centers told Nature that by eliminating the need for lengthy ethics approvals, which can delay projects by months, they’re speeding up trials for rare diseases and personalized medicine. A similar sentiment echoes in a WebProNews analysis, which highlights how synthetic data is being used to train machine-learning models for predicting cancer progression without ever touching sensitive health records.

The Ethical Tightrope: Balancing Speed and Scrutiny in AI-Driven Research

This waiver trend isn’t without controversy, as critics warn it could erode foundational safeguards. Ethical guidelines from the World Health Organization, outlined in their 2024 guidance on AI in healthcare, emphasize the need for governance to address biases in large multi-modal models. If synthetic data inherits flaws from the original datasets—such as underrepresentation of minority groups—it might perpetuate inequities in medical AI, leading to skewed diagnostics or treatments. Posts on X (formerly Twitter) reflect growing public concern, with users debating privacy implications and calling for stricter oversight, often citing fears that “synthetic” doesn’t mean “safe” from algorithmic errors.

Moreover, a 2025 study in Frontiers in Medicine reviews a decade of global AI medical device regulations, noting that while synthetic data sidesteps patient consent issues, it raises questions about accountability. Who verifies the accuracy of AI-generated datasets? In one example from the Nature report, a Canadian university used synthetic data to simulate COVID-19 vaccine responses, bypassing IRB review and completing the study in weeks rather than months. Yet, as another Nature piece cautions, artificially generated data must be rigorously validated to avoid misleading results that could harm real-world applications.

Regulatory Gaps: Calls for Harmonized Standards Amid Rapid AI Adoption

The pushback is intensifying, with experts advocating for updated frameworks. A 2024 article in Humanities and Social Sciences Communications identifies key challenges like health equity and international cooperation, urging harmonized regulations to prevent a patchwork of standards. In the U.S., the FDA has begun scrutinizing AI tools, but synthetic data often falls into a gray area, as noted in PMC’s 2021 overview of AI ethics in medicine. European regulators, influenced by GDPR, are more cautious, yet Italian centers are among those waiving reviews, per Nature.

Industry insiders see this as a double-edged sword: faster research could lead to breakthroughs, but without robust checks, trust in AI healthcare might falter. Recent X discussions amplify this, with tech influencers warning of “bias amplification” in synthetic datasets. As one researcher quoted in WebProNews put it, the shift demands “updated regulations to balance innovation with accountability.” Looking ahead, organizations like WHO are pushing for global guidelines, potentially mandating third-party audits for synthetic data projects.

Future Implications: Navigating Innovation and Risk in a Data-Driven Era

Ultimately, this development signals a broader transformation in how AI intersects with medicine. By 2025, as per Frontiers’ analysis, AI integration in diagnostics is expected to surge, with synthetic data playing a pivotal role. However, ethical lapses could undermine public confidence, especially if biases lead to real harms. Universities must collaborate with regulators to ensure synthetic data’s promise doesn’t come at the cost of integrity, setting a precedent for responsible AI use worldwide.




AI Research

Oxford University is using AI to find supernovae in the sky

AI is everywhere, it can be overwhelming, and lots of folks will be sick of hearing about it. But it’s also important to continue to recognize where AI can make a real difference, including in helping our understanding of the universe.

That’s exactly what’s been happening at Oxford University, one of the UK’s most respected academic centers. A new tool built by its researchers is enabling them to find “the needles in a cosmic haystack” while significantly reducing the workload on its scientists conducting the research.


