AI Research

Ancient Egyptian history may be rewritten by a DNA bone test

By Pallab Ghosh
Liverpool John Moores University/Nature

Tests on the skull could give new insights into ancient history

A DNA bone test on a man who lived 4,500 years ago in the Nile Valley has shed new light on the rise of the Ancient Egyptian civilisation.

An analysis of his skeleton shows he was about 60 years old and possibly worked as a potter, but also that a fifth of his DNA came from ancestors living 1,500km away in the other great civilisation of the time, Mesopotamia, in modern-day Iraq.

It is the first biological evidence of links between the two and could help explain how Egypt was transformed from a disparate collection of farming communities to one of the mightiest civilisations on Earth.

The findings lend new weight to the view that writing and agriculture arose through the exchange of people and ideas between these two ancient worlds.

Liverpool John Moores University/Nature

The skeleton has revealed extraordinary details of the man’s life

The lead researcher, Prof Pontus Skoglund at the Francis Crick Institute in London, told BBC News that being able to extract and read DNA from ancient bones could shed new light on events and individuals from the past, allowing black and white historical facts to burst into life with technicolour details.

“If we get more DNA information and put it side by side with what we know from archaeological, cultural, and written information we have from the time, it will be very exciting,” he said.

Our understanding of our past is drawn in part from written records, which are often accounts by the rich and powerful, mostly about the rich and powerful.

Biological methods are giving historians and scientists a new tool to view history through the eyes of ordinary people.

The DNA was taken from a bone in the inner ear of the remains of a man buried in Nuwayrat, a village 265km south of Cairo.

He died between 4,500 and 4,800 years ago, a transformational moment in the emergence of Egypt and Mesopotamia. Archaeological evidence indicated that the two regions may have been in contact at least 10,000 years ago when people in Mesopotamia began to farm and domesticate animals, leading to the emergence of an agricultural society.

Many scholars believe this social and technological revolution may have influenced similar developments in ancient Egypt – but there has been no direct evidence of contact, until now.

Garstang Museum/Liverpool University/Nature

The remains were discovered in 1902 in a ceramic pottery coffin

Adeline Morez Jacobs, who analysed the remains as part of her PhD at Liverpool John Moores University, says this is the first clear-cut evidence of significant migration of people and therefore information between the two centres of civilisation at the time.

“You have two regions developing the first writing systems, so archaeologists believe that they were in contact and exchanging ideas. Now we have the evidence that they were.

“We hope that future DNA samples from ancient Egypt can expand on when precisely this movement from West Asia started and its extent.”

The man was buried in a ceramic pot in a tomb cut into the hillside. His burial took place before artificial mummification was standard practice, which may have helped to preserve his DNA.

By investigating chemicals in his teeth, the research team were able to discern what he ate, and from that, determined that he had probably grown up in Egypt.

But the scientific detective story doesn’t stop there.

The Metropolitan Museum of Art

A pictogram in the tomb of Amenemhat near Nuwayrat shows how potters worked

Prof Joel Irish at Liverpool John Moores University conducted a detailed analysis of the skeleton to build up a picture of the man as an individual.

“What I wanted to do was to find out who this guy was, let’s learn as much about him as possible, what his age was, his stature was, what he did for a living and to try and personalise the whole thing rather than treat him as a cold specimen,” he said.

The bone structure indicated that the man was between 45 and 65 years old, though evidence of arthritis pointed to the upper end of the scale. He was just over 5ft 2in tall, which even then was short.

Prof Irish was also able to establish he was probably a potter. The hook-shaped bone at the back of his skull was enlarged, indicating he looked down a lot. His seat bones were expanded in size, suggesting that he sat on hard surfaces for prolonged periods. His arms showed evidence of extensive movement back and forth, and there were markings where his muscles had grown, indicating that he was used to lifting heavy objects.

“This shows he worked his tail off. He’s worked his entire life,” the American-born academic told BBC News.

Dr Linus Girdland Flink explained that it was only because of a tremendous stroke of luck that this skeleton was available to study and reveal its historic secrets.

“It was excavated in 1902 and donated to World Museum Liverpool, where it then survived bombings during the Blitz that destroyed most of the human remains in their collection. We’ve now been able to tell part of the individual’s story, finding that some of his ancestry came from the Fertile Crescent, highlighting mixture between groups at this time,” he said.

The new research has been published in the journal Nature.





Cyber Command creates new AI program in fiscal 2026 budget

U.S. Cyber Command’s budget request for fiscal 2026 includes funding to begin a new project specifically for artificial intelligence.

While the budget proposal would allot just $5 million for the effort — a small portion of Cybercom’s $1.3 billion research and development spending plan — the stand-up of the program follows congressional direction to prod the command to develop an AI roadmap.

In the fiscal 2023 defense policy bill, Congress charged Cybercom and the Department of Defense chief information officer — in coordination with the chief digital and artificial intelligence officer, director of the Defense Advanced Research Projects Agency, director of the National Security Agency and the undersecretary of defense for research and engineering — to jointly develop a five-year guide and implementation plan for rapidly adopting and acquiring AI systems, applications, supporting data and data management processes for cyber operations forces.

Cybercom created its roadmap shortly thereafter along with an AI task force.

The new project within Cybercom’s R&D budget aims to develop core data standards, then curate and tag collected data that meet those standards so it can be integrated effectively into AI and machine learning solutions, allowing the command to develop artificial intelligence capabilities more efficiently to meet operational needs.

The effort is directly related to the task of furthering the roadmap.

As a result of that roadmap, the command decided to house its task force within its elite Cyber National Mission Force.  

The command created the program by pulling funds from its operations and maintenance budget and moving them to the R&D budget from fiscal 2025 to fiscal 2026.

The command outlined five categories of various AI applications across its enterprise and other organizations, including vulnerabilities and exploits; network security, monitoring, and visualization; modeling and predictive analytics; persona and identity; and infrastructure and transport.

Specifically, the command’s AI project, Artificial Intelligence for Cyberspace Operations, will aim to develop and conduct pilots while investing in infrastructure to leverage commercial AI capabilities. The command’s Cyber Immersion Laboratory will develop, test and evaluate cyber capabilities, with operational assessments performed by third parties, the budget documents state.

In fiscal 2026, the command plans to spend the $5 million to support the CNMF in piloting AI technologies through an agile 90-day pilot cycle, according to the documents, an approach intended to establish quickly whether a solution succeeds or fails. That fast-paced methodology allows the CNMF to rapidly test and validate solutions against operational use cases, with the flexibility to adapt to evolving cyber threats.

The CNMF will also look to explore ways to improve threat detection, automate data analysis, and enhance decision-making processes in cyber operations, according to budget documents.


Written by Mark Pomerleau

Mark Pomerleau is a senior reporter for DefenseScoop, covering information warfare, cyber, electronic warfare, information operations, intelligence, influence, battlefield networks and data.





Researchers Use Hidden AI Prompts to Influence Peer Reviews: A Bold New Era or Ethical Quandary?


AI Secrets in Peer Reviews Uncovered

Edited by Mackenzie Ferguson, AI Tools Researcher & Implementation Consultant

In a controversial yet intriguing move, researchers have begun using hidden AI prompts to potentially sway the outcomes of peer reviews. This cutting-edge approach aims to enhance review processes, but it raises ethical concerns. Join us as we delve into the implications of AI-assisted peer review tactics and how it might shape the future of academic research.


Introduction to AI in Peer Review

Artificial Intelligence (AI) is rapidly transforming various facets of academia, and one of the most intriguing applications is its integration into the peer review process. At the heart of this evolution is the potential for AI to streamline the evaluation of scholarly articles, which traditionally relies heavily on human expertise and can be subject to biases. Researchers are actively exploring ways to harness AI not just to automate mundane tasks but to provide deep, insightful evaluations that complement human judgment.

The adoption of AI in peer review promises to revolutionize the speed and efficiency with which academic papers are vetted and published. This technological shift is driven by the need to handle an ever-increasing volume of submissions while maintaining high standards of quality. Notably, hidden AI prompts, as discussed in recent studies, can subtly influence reviewers’ decisions, potentially standardizing and enhancing the objectivity of reviews (source).

Incorporating AI into peer review isn’t without challenges. Ethical concerns about transparency, bias, and accountability arise when machines play an integral role in shaping academic discourse. Nonetheless, the potential benefits appear to outweigh the risks, with AI offering tools that can uncover hidden biases and provide more balanced reviews. As described in TechCrunch’s exploration of this topic, there’s an ongoing dialogue about the best practices for integrating AI into these critical processes (source).

Influence of AI in Academic Publishing

The advent of artificial intelligence (AI) is reshaping various sectors, with academic publishing being no exception. The integration of AI tools in academic publishing has significantly streamlined the peer review process, making it more efficient and less biased. According to an article from TechCrunch, researchers are actively exploring ways to integrate AI prompts within the peer review process to subtly guide reviewers’ evaluations without overt influence. These AI systems analyze vast amounts of data to provide insightful suggestions, thus enhancing the quality of published research.

The inclusion of AI in peer review is not without its challenges, though. Experts caution that the deployment of AI-driven tools must be done with significant oversight to prevent any undue influence or bias that may occur from automated processes. They emphasize the importance of transparency in how AI algorithms are used and the nature of data fed into these systems to maintain the integrity of peer review (TechCrunch).

While some scholars welcome AI as a potential ally that can alleviate the workload of human reviewers and provide them with analytical insights, others remain skeptical about its impact on the traditional rigor and human judgment in peer evaluations. The debate continues, with public reactions reflecting a mixture of excitement and cautious optimism about the future potential of AI in scholarly communication (TechCrunch).

Public Reactions to AI Interventions

The public’s reaction to AI interventions, especially in fields such as scientific research and peer review, has been a mix of curiosity and skepticism. On one hand, many appreciate the potential of AI to accelerate advancements and improve efficiencies within the scientific community. However, concerns remain over the transparency and ethics of deploying hidden AI prompts to influence processes that traditionally rely on human expertise and judgment. For instance, a recent article on TechCrunch highlighted researchers’ attempts to integrate these AI-driven techniques in peer review, sparking discussions about the potential biases and ethical implications of such interventions.

Further complicating the public’s perception is the potential for AI to disrupt traditional roles and job functions within these industries. Many individuals within the academic and research sectors fear that an over-reliance on AI could undermine professional expertise and lead to job displacement. Despite these concerns, proponents argue that AI, when used effectively, can provide invaluable support to researchers by handling mundane tasks, thereby allowing humans to focus on more complex problem-solving activities, as noted in the TechCrunch article.

Moreover, the ethical ramifications of using AI in peer review processes have prompted a call for stringent regulations and clearer guidelines. The potential for AI to subtly shape research outcomes without the overt consent or awareness of the human peers involved raises significant ethical questions. Discussions in media outlets like TechCrunch indicate a need for balanced discussions that weigh the benefits of AI-enhancements against the necessity to maintain integrity and trust in academic research.

Future of Peer Review with AI

The future of peer review is poised for transformation as AI technologies continue to advance. Researchers are now exploring how AI can be integrated into the peer review process to enhance efficiency and accuracy. Some suggest that AI could assist in identifying potential conflicts of interest, evaluating the robustness of methodologies, or even suggesting suitable reviewers based on their expertise. For instance, a detailed exploration of this endeavor can be found at TechCrunch, where researchers are making significant strides toward innovative uses of AI in peer review.

The integration of AI in peer review does not come without its challenges and ethical considerations. Concerns have been raised regarding potential biases that AI systems might introduce, the transparency of AI decision-making, and how reliance on AI might impact the peer review landscape. As discussed in recent events, stakeholders are debating the need for guidelines and frameworks to manage these issues effectively.

One potential impact of AI on peer review is the democratization of the process, opening doors for a more diverse range of reviewers who may have been overlooked previously due to geographical or institutional biases. This could result in more diverse viewpoints and a richer peer review process. Additionally, as AI becomes more intertwined with peer review, expert opinions highlight the necessity for continuous monitoring and adjustment of AI tools to ensure they meet the ethical standards of academic publishing. This evolution in the peer review process invites us to envision a future where AI and human expertise work collaboratively, enhancing the quality and credibility of academic publications.

Public reactions to the integration of AI in peer review are mixed. Some welcome it as a necessary evolution that could address long-standing inefficiencies in the system, while others worry about the potential loss of human oversight and judgment. Future implications suggest a field where AI-driven processes could eventually lead to a more streamlined and transparent peer review system, provided that ethical guidelines are strictly adhered to and biases are meticulously managed.





Xbox producer tells staff to use AI to ease job loss pain


An Xbox producer has faced a backlash over a now-deleted LinkedIn post in which he suggested laid-off employees use artificial intelligence to deal with their emotions.

Matt Turnbull, an executive producer at Xbox Game Studios Publishing, wrote the post after Microsoft confirmed it would lay off up to 9,000 workers, in a wave of job cuts this year.

The post, which was captured in a screenshot by tech news site Aftermath, shows Mr Turnbull suggesting tools like ChatGPT or Copilot to “help reduce the emotional and cognitive load that comes with job loss.”

One X user called it “plain disgusting” while another said it left them “speechless”. The BBC has contacted Microsoft, which owns Xbox, for comment.

Microsoft previously said several of its divisions would be affected, without specifying which ones, but reports suggest that its Xbox video gaming unit will be hit.

Microsoft has set out plans to invest heavily in artificial intelligence (AI), and is spending $80bn (£68.6bn) on huge data centres to train AI models.

Mr Turnbull acknowledged the difficulty of job cuts in his post and said “if you’re navigating a layoff or even quietly preparing for one, you’re not alone and you don’t have to go it alone”.

He wrote that he was aware AI tools can cause “strong feelings in people” but wanted to try and offer the “best advice” under the circumstances.

The Xbox producer said he’d been “experimenting with ways to use LLM AI tools” and suggested some prompts to enter into AI software.

These included career planning prompts, resume and LinkedIn help, and questions to ask for advice on emotional clarity and confidence.

“If this helps, feel free to share with others in your network,” he wrote.

The cuts would equate to 4% of Microsoft’s 228,000-strong global workforce.

Some video game projects have reportedly been affected by the cuts.
