
AI Research

Hundreds of wildfires sparked by ‘live-fire’ military manoeuvres

Malcolm Prior, Rural affairs producer


Live army manoeuvres have caused 385 wildfires on MoD training sites across the UK countryside since 2023

Live-fire military training has sparked hundreds of wildfires across the UK countryside since 2023, with unexploded shells often making it too dangerous to tackle them.

Fire crews battling a vast moorland blaze in North Yorkshire this month have been hampered by exploding bombs and tank shells dating back to training on the moors during the Second World War.

Figures obtained by the BBC show that of the 439 wildfires on Ministry of Defence (MoD) land between January 2023 and last month, 385 were caused by present-day army manoeuvres themselves.

The MoD said it has a robust wildfire policy which monitors risk levels and limits live ammunition use when necessary.


Wildfires left to burn can cause large amounts of smoke pollution and have a significant environmental impact

But locals near the sites of recent fires told the BBC they felt the MoD needed to do more to prevent them, including completely banning live fire training in the driest months.

Wildfires in the countryside can start for many reasons, including discarded cigarettes, unattended campfires and BBQs and deliberate arson, and the scale of them can be made worse by dry, hot conditions and the amount of vegetation on the land.

But according to data obtained by the BBC under the Freedom of Information Act, there have been 1,178 wildfires in total linked to present-day MoD training sites since 2020 – with 101 out of 134 wildfires in the first six months of this year caused by military manoeuvres or training.

More than 80 of the fires caused by training itself so far this year have been in so-called “Range Danger Areas” – also known as “impact zones”.

A graphic showing MoD locations that have had wildfires this year

These are areas where the level of danger means the local fire service is usually not allowed access and the fire is left to burn out on its own, albeit contained by firebreaks.

The large amounts of smoke produced can lead to road closures, disruption and health risks to local residents, who are directed to keep their windows shut despite it often being the hottest time of the year.

One villager who lives near the MoD’s training site on Salisbury Plain said wildfires there, like the recent one in May, were “a perennial problem” and the MoD had to do more to control them and restrict the use of live ordnance to outside of the hottest months.

Neil Lockhart, from Great Cheverell, near Devizes in Wiltshire, said the smoke from fires left to burn was a major environmental issue and posed a risk to the health and safety of locals.


Villager Neil Lockhart has asthma and says the smoke produced by the wildfires causes him particular challenges

“It’s the pollution. If you suffer like I do with asthma, and it’s the height of the summer and you’ve got to keep all your windows closed, then it’s an issue,” explained Mr Lockhart.

Arable farmer Tim Daw, whose land at All Cannings overlooks the MoD training site on Salisbury Plain, said he “must have seen three or four big fires this year” but found the smoke only a “mild annoyance”.

He said many locals were worried about the impact of the wildfires on wildlife and the landscape, saying the extent of the area affected by the blazes often looked “fairly horrendous”, and likened it to a “burnt savannah”.

But he said the MoD was “very proactive” in keeping locals informed about the risks of wildfires and any ongoing problems on their land.

Wartime “legacy”

Aside from the problem of live military training sparking fires, old unexploded ordnance left behind from previous manoeuvres also makes wildfires harder to fight.

A major fire has been burning on Langdale Moor, in the North York Moors National Park, since Monday 11 August.

A number of bombs have exploded in areas that were once used for military training during the Second World War.

One local landowner, George Winn-Darley, said the peat fire had produced “an enormous cloud of pollution” that could have been prevented if there had not been live ordnance on site.

“If that unexploded ordnance had been cleared up and wasn’t there then this wildfire would have been able to be dealt with, probably completely, nearly two weeks ago,” he told the BBC.

Mr Winn-Darley called for the MoD to clear up any major munitions left on the moors.

“That would seem to be the absolute minimum that they should be doing,” he said.

“It seems ridiculous that here we are, 80 years after the end of the Second World War, and we’re still dealing with this legacy.”

An MoD spokesperson said the fire at Langdale had not started on land currently owned by the MoD but that an Army Explosive Ordnance Disposal (EOD) team had responded on four occasions to requests for assistance from North Yorkshire Police.

“Various Second World War-era unexploded ordnance items were discovered as a result of the wildfires, which the EOD operator declared to be inert practice projectiles. They were retrieved for subsequent disposal,” he explained.

He added that the MoD monitors the risk of fires across its training estate throughout the year and restricts the use of ordnance, munitions and explosives when training is taking place during periods of elevated wildfire risk.

“Impact areas” are constructed with fire breaks, such as stone tracks, around them to prevent the wider spread of fire and grazing is used to keep the amount of combustible vegetation down.

Earlier this month, the MoD launched its “Respect the Range” campaign, designed to raise the public’s awareness of the dangers of accessing military land, such as live firing, unexploded ordnance and wildfires.

A spokeswoman for the National Fire Chiefs Council (NFCC) said it worked closely with the MoD to “understand the risk and locations of munitions and to create plans to effectively extinguish fires”.

“We always encourage military colleagues to account for the conditions and the potential for wildfire when considering when to carry out their training,” she added.





Physicians Lose Cancer Detection Skills After Using Artificial Intelligence

Artificial intelligence shows great promise in helping physicians improve their diagnostic accuracy for important patient conditions. In the realm of gastroenterology, AI has been shown to help human physicians better detect small polyps (adenomas) during colonoscopy. Although adenomas are not yet cancerous, they are at risk for turning into cancer. Thus, early detection and removal of adenomas during routine colonoscopy can reduce patient risk of developing future colon cancers.

But as physicians become more accustomed to AI assistance, what happens when they no longer have access to AI support? A recent European study has shown that physicians’ skills in detecting adenomas can deteriorate significantly after they become reliant on AI.

The European researchers tracked the results of over 1,400 colonoscopies performed in four different medical centers. They measured the adenoma detection rate (ADR) for physicians working normally without AI vs. those who used AI to help them detect adenomas during the procedure. They also tracked the ADR of physicians who had used AI regularly for three months, then resumed performing colonoscopies without AI assistance.

The researchers found that the ADR before AI assistance was 28% and with AI assistance was 28.4%. (This was a slight increase, but not statistically significant.) However, when physicians accustomed to AI assistance ceased using AI, their ADR fell significantly to 22.4%. Assuming the patients in the various study groups were medically similar, that suggests that physicians accustomed to AI support might miss over a fifth of adenomas without computer assistance!
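The size of that "fifth" claim can be checked directly from the detection rates the article reports; a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope check of the deskilling effect, using the
# adenoma detection rates (ADR, in percent) reported in the article.
adr_baseline = 28.0   # before any AI assistance
adr_with_ai = 28.4    # with AI assistance (not a significant increase)
adr_after_ai = 22.4   # after AI assistance was withdrawn

# Relative decline versus the AI-assisted rate.
relative_drop = (adr_with_ai - adr_after_ai) / adr_with_ai
print(f"Relative decline: {relative_drop:.1%}")  # Relative decline: 21.1%
```

A roughly 21% relative decline is what supports the "over a fifth" figure; measured against the pre-AI baseline of 28%, the drop is 20% exactly.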

This is the first published example of so-called medical “deskilling” caused by routine use of AI. The study authors summarized their findings as follows: “We assume that continuous exposure to decision support systems such as AI might lead to the natural human tendency to over-rely on their recommendations, leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance.”

Consider the following non-medical analogy: Suppose self-driving car technology advanced to the point that cars could safely decide when to accelerate, brake, turn, change lanes, and avoid sudden unexpected obstacles. If you relied on self-driving technology for several months, then suddenly had to drive without AI assistance, would you lose some of your driving skills?

Although this particular study took place in the field of gastroenterology, I would not be surprised if we eventually learn of similar AI-related deskilling in other branches of medicine, such as radiology. At present, radiologists do not routinely use AI while reading mammograms to detect early breast cancers. But when AI becomes approved for routine use, I can imagine that human radiologists could succumb to a similar performance loss if they were suddenly required to work without AI support.

I anticipate more studies will be performed to investigate the issue of deskilling across multiple medical specialties. Physicians, policymakers, and the general public will want to ask the following questions:

1) As AI becomes more routinely adopted, how are we tracking patient outcomes (and physician error rates) before AI, after routine AI use, and whenever AI is discontinued?

2) How long does the deskilling effect last? What methods can help physicians minimize deskilling, and/or recover lost skills most quickly?

3) Can AI be implemented in medical practice in a way that augments physician capabilities without deskilling?

Deskilling is not always bad. My 6th grade schoolteacher kept telling us that we needed to learn long division because we wouldn’t always have a calculator with us. But because of the ubiquity of smartphones and spreadsheets, I haven’t done long division with pencil and paper in decades!

I do not see AI completely replacing human physicians, at least not for several years. Thus, it will be incumbent on the technology and medical communities to discover and develop best practices that optimize patient outcomes without endangering patients through deskilling. This will be one of the many interesting and important challenges facing physicians in the era of AI.




AI exposes 1,000+ fake science journals


A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically seeks out “questionable” scientific journals.

The study, published Aug. 27 in the journal Science Advances, tackles an alarming trend in the world of research.

Daniel Acuña, lead author of the study and associate professor in the Department of Computer Science, gets a reminder of that several times a week in his email inbox: These spam messages come from people who purport to be editors at scientific journals, usually ones Acuña has never heard of, and offer to publish his papers — for a hefty fee.

Such publications are sometimes referred to as “predatory” journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.

“There has been a growing effort among scientists and organizations to vet these journals,” Acuña said. “But it’s like whack-a-mole. You catch one, and then another appears, usually from the same company. They just create a new website and come up with a new name.”

His group’s new AI tool automatically screens scientific journals, evaluating their websites and other online data for certain criteria: Do the journals have an editorial board featuring established researchers? Do their websites contain a lot of grammatical errors?

Acuña emphasizes that the tool isn’t perfect. Ultimately, he thinks human experts, not machines, should make the final call on whether a journal is reputable.

But in an era when prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever before, he said.

“In science, you don’t start from scratch. You build on top of the research of others,” Acuña said. “So if the foundation of that tower crumbles, then the entire thing collapses.”

The shakedown

When scientists submit a new study to a reputable publication, that study usually undergoes a practice called peer review. Outside experts read the study and evaluate it for quality — or, at least, that’s the goal.

A growing number of companies have sought to circumvent that process to turn a profit. In 2009, Jeffrey Beall, a librarian at CU Denver, coined the phrase “predatory” journals to describe these publications.

Often, they target researchers outside of the United States and Europe, such as in China, India and Iran — countries where scientific institutions may be young, and the pressure and incentives for researchers to publish are high.

“They will say, ‘If you pay $500 or $1,000, we will review your paper,'” Acuña said. “In reality, they don’t provide any service. They just take the PDF and post it on their website.”

A few different groups have sought to curb the practice. Among them is a nonprofit organization called the Directory of Open Access Journals (DOAJ). Since 2003, volunteers at the DOAJ have flagged thousands of journals as suspicious based on six criteria. (Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.)

But keeping pace with the spread of those publications has been daunting for humans.

To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ’s data, then asked the AI to sift through a list of nearly 15,200 open-access journals on the internet.

Among those journals, the AI initially flagged more than 1,400 as potentially problematic.

Acuña and his colleagues asked human experts to review a subset of the suspicious journals. The AI made mistakes, according to the humans, flagging an estimated 350 publications as questionable when they were likely legitimate. That still left more than 1,000 journals that the researchers identified as questionable.
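Those round numbers imply a rough precision for the tool; a quick calculation using the article's figures (about 1,400 journals initially flagged, an estimated 350 of them judged likely legitimate):

```python
# Rough precision estimate from the article's round numbers.
flagged = 1400          # journals the AI initially flagged
false_positives = 350   # flagged journals the experts judged likely legitimate

likely_questionable = flagged - false_positives
precision = likely_questionable / flagged
print(f"{likely_questionable} likely questionable journals, "
      f"precision ~ {precision:.0%}")
```

Roughly three in four flags survive expert review under these figures, which is why the authors position the tool as a prescreening aid rather than a final arbiter.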

“I think this should be used as a helper to prescreen large numbers of journals,” he said. “But human professionals should do the final analysis.”

A firewall for science

Acuña added that the researchers didn’t want their system to be a “black box” like some other AI platforms.

“With ChatGPT, for example, you often don’t understand why it’s suggesting something,” Acuña said. “We tried to make ours as interpretable as possible.”

The team discovered, for example, that questionable journals published an unusually high number of articles. They also included authors with a larger number of affiliations than more legitimate journals, and authors who cited their own research, rather than the research of other scientists, to an unusually high level.
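One of those interpretable signals, the self-citation level, is straightforward to compute once citation data is in hand. A minimal sketch, where the input format (pairs of citing and cited author names) is an assumption made for illustration:

```python
# Minimal sketch of a self-citation-rate signal, one of the
# interpretable features the researchers describe. The input format
# (list of (citing_author, cited_author) pairs) is assumed here.
def self_citation_rate(citations: list[tuple[str, str]]) -> float:
    """Fraction of citations in which an author cites their own work."""
    if not citations:
        return 0.0
    self_cites = sum(1 for citing, cited in citations if citing == cited)
    return self_cites / len(citations)

refs = [("Smith", "Smith"), ("Smith", "Jones"),
        ("Smith", "Smith"), ("Smith", "Lee")]
print(self_citation_rate(refs))  # 0.5
```

An unusually high value of a metric like this, relative to legitimate journals in the same field, is the kind of transparent red flag the team favored over black-box predictions.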

The new AI system isn’t publicly accessible, but the researchers hope to make it available to universities and publishing companies soon. Acuña sees the tool as one way that researchers can protect their fields from bad data — what he calls a “firewall for science.”

“As a computer scientist, I often give the example of when a new smartphone comes out,” he said. “We know the phone’s software will have flaws, and we expect bug fixes to come in the future. We should probably do the same with science.”

Co-authors on the study included Han Zhuang at the Eastern Institute of Technology in China and Lizheng Liang at Syracuse University in the United States.





The Artificial Intelligence Is In Your Home, Office And The IRS Edition


