
GPs are using artificial intelligence to record patient consultations, but how safe is your personal data?



For the last 12 months, Dr Grant Blashki has used what he calls a “medical intern” in every appointment.

His intern is completely digital. It is an artificial intelligence scribe that listens to every word his patients say.

“It’s mostly surprisingly accurate,” the GP told 7.30.

“Occasionally it will mishear the name of something. Occasionally it will mishear a diagnosis.”

He says patient consent is essential when using AI scribes in a clinical setting, but that most people don’t have an issue with it.

How Heidi Health is marketed online. (heidihealth.com.au)

“I do ask for consent. Occasionally people don’t want me to use it, which is absolutely fine, but almost everyone is comfortable with it and it just streamlines the work,” Dr Blashki said.

“It’s good for the patient because I’m concentrating on them.”

Dr Blashki says he has become so reliant on the scribe that he would struggle to conduct appointments without it.

“I use it almost in every consultation,” he said.

“If I was going to forget my stethoscope or my scribe software, I’ll take the scribe software. It’s such a part of my work now.”

How safe is patient data?


Dr Blashki says he deletes all his transcriptions off the AI software. (Supplied: Beyond Blue)

As patients reveal intimate details about their medical history to Dr Blashki, he says the scribe is constantly collecting sensitive data.

“Maybe an infectious disease, maybe a social issue that they might not want a partner to know about — all sorts of complexities that can come into the notes,” he said.

“So I make sure that at the end of each consultation I actually delete all the transcriptions off my software.”

Dr Blashki uses software from Melbourne-based company Heidi Health, which is one of the main AI scribe tools used by clinicians in Australia.

Heidi Health declined 7.30’s request for an interview but its CEO and co-founder Dr Thomas Kelly provided a written response to questions about patient privacy.

“Heidi now supports almost two million visits a week, and that’s around the world from Australia, New Zealand, Canada, to the US and the UK,” Dr Kelly said.


Doctors can delete patient notes from Heidi Health.

“In each region, data is stored in compliance with the healthcare regulations and privacy policies of the region.”

“Here it’s Australian Privacy Principles (APP), in the EU that would be GDPR, in the US that would be HIPAA. All data is protected according to ISO 27K and SOC2 requirements, which are the highest enterprise standards that exist. We get audited by third parties to protect our data and ensure the security that we have.”


Lyrebird Health is another AI scribe software company that is based in Melbourne.

It is used by GPs, surgeons, psychiatrists and paediatricians — the company says the software was used in “200,000 consults” in Australia last week.

“All data is stored 100 per cent in Australian sovereign databases if you’re an Australian customer — it’s different obviously if you’re overseas,” Lyrebird Health CEO Kai Van Lieshout told 7.30.


Lyrebird Health’s CEO Kai Van Lieshout says all patient notes are automatically deleted after seven days. (ABC News: Dan Fermer)

“We have not been hacked before and that’s something that is incredibly important,” he said.

Patient notes are automatically deleted from Lyrebird Health’s system after seven days (doctors need to back up any notes they want to keep), but users can manually extend this period to six months.

“For us it is definitely really gone,” Mr Van Lieshout said.

“I know that because we’ve had doctors that have needed something that we’ve had … that don’t realise that it’s deleted after seven days and there’s nothing we can do.”

John Lalor, an assistant professor of IT, analytics and operations at the University of Notre Dame, warns there is always an element of risk when storing digital data.

“A lot of those models, they’re very data-driven, so the more data they have, usually the better they get,” Mr Lalor told 7.30.


John Lalor from the University of Notre Dame says there’s always an element of risk when it comes to digital data. (Supplied: University of Notre Dame)

“So on the one hand, if it has a lot more data from patients, that can typically improve the models, but on the other hand, there’s the privacy risk of the data being exposed if it’s leaked or hacked.”

He says patients and doctors should make sure AI scribe companies are clear about how they are storing and using data.

“Making sure that the firms are clear with how exactly the data is being used, because if there’s ambiguity in what they say, then there could be ambiguity in the interpretation as well,” he said.

“With individuals, if they’re uncomfortable with using something like that, they could speak with their physician to see if it’s optional or see if they could get more information about what exactly is being done when the data is taken into [the] scribe system.”

‘Magic’ notes

To show how Heidi Health’s AI scribe works, Dr Blashki has taken 7.30 through a mock appointment about a headache.

We discuss how the headache has been “on and off pretty much every morning for the last month” and that there’s no history of migraines.

Heidi Health then processes the conversation, a step it calls ‘Making Magic’, and produces consultation notes.


Notes generated by Heidi Health after 7.30’s consultation with Dr Blashki. (ABC News: Richard Sydenham)

The software also suggests “differential diagnoses” including a “tension-type headache” and a “cervicogenic headache”.

“We’re seeing some of the medical softwares and some of the AI generally come up with differential diagnoses, make suggestions, and … the doctor really needs to turn their mind to it and look at them more as suggestions than the answer,” Dr Blashki said.

Dr Kelly said the software “aims to be more than a literal summary and is able to identify the clinical rationale underpinning a line of questioning”.

In response to 7.30’s mock consultation, Dr Kelly said: “We summarise the clinical encounter reflecting [the doctor’s] lines of questioning and using appropriate clinical terminology to describe them. 

“Heidi does not provide a differential diagnosis absent the clinician and it is still up to the clinician to review their documentation for accuracy.”

Mr Van Lieshout said Lyrebird Health doesn’t produce potential diagnoses after a consultation.

“We won’t try to tell the clinician what to do, if that makes sense,” he said.

“It’s subjective: what did the patient describe? Did I do any forms of examination? What was their blood pressure assessment? What’s my diagnosis or assessment of [the] situation? Then plan: what’s the next steps.

“We will break up that conversation into those categories.”
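To make that concrete, here is a minimal sketch in Python of how a transcript might be bucketed into those four categories, which track the standard SOAP note structure (Subjective, Objective, Assessment, Plan). This is an illustration only: the keyword lists, the matching rule and the sample transcript are invented, and Lyrebird Health has not described its system at this level of detail.

```python
# Illustrative only: a keyword-based stand-in for whatever model a real
# scribe uses. Keywords, transcript and the default rule are all invented.

SOAP_KEYWORDS = {
    "Subjective": ["describes", "reports", "complains", "feels"],
    "Objective": ["blood pressure", "examination", "temperature", "pulse"],
    "Assessment": ["diagnosis", "assessment", "likely"],
    "Plan": ["plan", "follow up", "prescribe", "refer"],
}

def categorise(sentence: str) -> str:
    """Return the first SOAP category whose keywords appear in the sentence."""
    lowered = sentence.lower()
    for category, keywords in SOAP_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "Subjective"  # default: untagged speech is treated as patient history

transcript = [
    "Patient reports headaches every morning for the last month.",
    "Blood pressure measured at 128/82.",
    "Assessment: likely tension-type headache.",
    "Plan: trial of simple analgesia, follow up in two weeks.",
]

for sentence in transcript:
    print(f"{categorise(sentence):>10}: {sentence}")
```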

‘Indispensable’ tool


Brett Sutton says AI scribes have become “indispensable” for some GPs. (AAP: James Ross)

Dr Blashki said about 50 per cent of the doctors in the GP clinic he works at in Melbourne are using AI scribe software for every consultation.

He says he has also received referral letters from specialists that look like they’ve been created by AI.

“I have had one letter where I think, ‘Oh, I don’t think they’ve checked this properly. They’ve clearly got one of the diagnoses not quite right’,” he said.

“It’s like the GPS in the car. You are still the driver, there’s suggestions, but you have to check it,” he said.

Former Victorian Chief Health Officer Brett Sutton believes AI scribes have become “indispensable” although he conceded that protecting patient data is the greatest concern for the industry.

“I think the regulators need to make sure that it’s safe,” Dr Sutton said.

“Obviously the clinicians who are using it have a responsibility for sensitive health information to be properly recorded and stored and made safe, so that it’s treated in exactly the same way as any other clinical notes would be treated historically.”





Physicians Lose Cancer Detection Skills After Using Artificial Intelligence



Artificial intelligence shows great promise in helping physicians improve their diagnostic accuracy for important patient conditions. In the realm of gastroenterology, AI has been shown to help human physicians better detect small polyps (adenomas) during colonoscopy. Although adenomas are not yet cancerous, they are at risk of turning into cancer. Thus, early detection and removal of adenomas during routine colonoscopy can reduce a patient’s risk of developing future colon cancers.

But as physicians become more accustomed to AI assistance, what happens when they no longer have access to AI support? A recent European study has shown that physicians’ skills in detecting adenomas can deteriorate significantly after they become reliant on AI.

The European researchers tracked the results of more than 1,400 colonoscopies performed in four different medical centers. They measured the adenoma detection rate (ADR) for physicians working normally without AI vs. those who used AI to help them detect adenomas during the procedure. They also tracked the ADR of physicians who had used AI regularly for three months, then resumed performing colonoscopies without AI assistance.

The researchers found that the ADR before AI assistance was 28% and with AI assistance was 28.4% (a slight increase, but not a statistically significant one). However, when physicians accustomed to AI assistance ceased using it, their ADR fell significantly to 22.4%. Assuming the patients in the various study groups were medically similar, this suggests that physicians accustomed to AI support might miss over a fifth of adenomas without computer assistance!
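That “fifth” figure is easy to check against the reported rates (the back-of-the-envelope arithmetic here is mine, not the study’s):

$$
\frac{28.4\% - 22.4\%}{28.4\%} \approx 21\%
$$

In other words, roughly one in five adenomas detected at the AI-assisted rate would go undetected at the post-AI rate.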

This is the first published example of so-called medical “deskilling” caused by routine use of AI. The study authors summarized their findings as follows: “We assume that continuous exposure to decision support systems such as AI might lead to the natural human tendency to over-rely on their recommendations, leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance.”

Consider the following non-medical analogy: Suppose self-driving car technology advanced to the point that cars could safely decide when to accelerate, brake, turn, change lanes, and avoid sudden unexpected obstacles. If you relied on self-driving technology for several months, then suddenly had to drive without AI assistance, would you lose some of your driving skills?

Although this particular study took place in the field of gastroenterology, I would not be surprised if we eventually learn of similar AI-related deskilling in other branches of medicine, such as radiology. At present, radiologists do not routinely use AI while reading mammograms to detect early breast cancers. But when AI becomes approved for routine use, I can imagine that human radiologists could succumb to a similar performance loss if they were suddenly required to work without AI support.

I anticipate more studies will be performed to investigate the issue of deskilling across multiple medical specialties. Physicians, policymakers, and the general public will want to ask the following questions:

1) As AI becomes more routinely adopted, how are we tracking patient outcomes (and physician error rates) before AI, after routine AI use, and whenever AI is discontinued?

2) How long does the deskilling effect last? What methods can help physicians minimize deskilling, and/or recover lost skills most quickly?

3) Can AI be implemented in medical practice in a way that augments physician capabilities without deskilling?

Deskilling is not always bad. My 6th grade schoolteacher kept telling us that we needed to learn long division because we wouldn’t always have a calculator with us. But because of the ubiquity of smartphones and spreadsheets, I haven’t done long division with pencil and paper in decades!

I do not see AI completely replacing human physicians, at least not for several years. Thus, it will be incumbent on the technology and medical communities to discover and develop best practices that optimize patient outcomes without endangering patients through deskilling. This will be one of the many interesting and important challenges facing physicians in the era of AI.




AI exposes 1,000+ fake science journals



A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically seeks out “questionable” scientific journals.

The study, published Aug. 27 in the journal “Science Advances,” tackles an alarming trend in the world of research.

Daniel Acuña, lead author of the study and an associate professor in the Department of Computer Science, gets a reminder of that several times a week in his email inbox: spam messages from people who purport to be editors at scientific journals, usually ones Acuña has never heard of, offering to publish his papers — for a hefty fee.

Such publications are sometimes referred to as “predatory” journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.

“There has been a growing effort among scientists and organizations to vet these journals,” Acuña said. “But it’s like whack-a-mole. You catch one, and then another appears, usually from the same company. They just create a new website and come up with a new name.”

His group’s new AI tool automatically screens scientific journals, evaluating their websites and other online data for certain criteria: Do the journals have an editorial board featuring established researchers? Do their websites contain a lot of grammatical errors?

Acuña emphasizes that the tool isn’t perfect. Ultimately, he thinks human experts, not machines, should make the final call on whether a journal is reputable.

But in an era when prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever before, he said.

“In science, you don’t start from scratch. You build on top of the research of others,” Acuña said. “So if the foundation of that tower crumbles, then the entire thing collapses.”

The shakedown

When scientists submit a new study to a reputable publication, that study usually undergoes a practice called peer review. Outside experts read the study and evaluate it for quality — or, at least, that’s the goal.

A growing number of companies have sought to circumvent that process to turn a profit. In 2009, Jeffrey Beall, a librarian at CU Denver, coined the phrase “predatory” journals to describe these publications.

Often, they target researchers outside of the United States and Europe, such as in China, India and Iran — countries where scientific institutions may be young, and the pressure and incentives for researchers to publish are high.

“They will say, ‘If you pay $500 or $1,000, we will review your paper,'” Acuña said. “In reality, they don’t provide any service. They just take the PDF and post it on their website.”

A few different groups have sought to curb the practice. Among them is a nonprofit organization called the Directory of Open Access Journals (DOAJ). Since 2003, volunteers at the DOAJ have flagged thousands of journals as suspicious based on six criteria. (Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.)

But keeping pace with the spread of those publications has been daunting for humans.

To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ’s data, then asked the AI to sift through a list of nearly 15,200 open-access journals on the internet.

Among those journals, the AI initially flagged more than 1,400 as potentially problematic.

Acuña and his colleagues asked human experts to review a subset of the suspicious journals. The AI made mistakes, according to the humans, flagging an estimated 350 publications as questionable when they were likely legitimate. That still left more than 1,000 journals that the researchers identified as questionable.
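Those two figures also imply a rough hit rate for the tool (my arithmetic, and approximate, since the 350 is itself an estimate extrapolated from a reviewed subset):

$$
\frac{1400 - 350}{1400} \approx 75\%
$$

That is, roughly three in four of the AI’s flags survived human review.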

“I think this should be used as a helper to prescreen large numbers of journals,” he said. “But human professionals should do the final analysis.”

A firewall for science

Acuña added that the researchers didn’t want their system to be a “black box” like some other AI platforms.

“With ChatGPT, for example, you often don’t understand why it’s suggesting something,” Acuña said. “We tried to make ours as interpretable as possible.”

The team discovered, for example, that questionable journals published an unusually high number of articles. They also included authors with a larger number of affiliations than more legitimate journals, and authors who cited their own research, rather than the research of other scientists, to an unusually high level.
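Signals like those are simple enough to sketch. Below is a hedged illustration in Python, not the CU Boulder team’s actual pipeline, of how publication volume, author affiliations, self-citation rates and the website criteria mentioned earlier could feed an interpretable classifier; all feature values and labels are invented for this example.

```python
# A hedged sketch, not the study's real pipeline: invented features and
# labels showing how an interpretable journal classifier could be built.

from sklearn.linear_model import LogisticRegression

# Each row: [articles_per_year, mean_author_affiliations, self_citation_rate,
#            has_established_editorial_board (0/1), site_grammar_error_rate]
X_train = [
    [120,  1.4, 0.05, 1, 0.01],   # profile of an established journal
    [950,  3.8, 0.40, 0, 0.12],   # profile of a questionable journal
    [200,  1.6, 0.08, 1, 0.02],
    [1400, 4.2, 0.55, 0, 0.20],
]
y_train = [0, 1, 0, 1]  # 1 = flagged as questionable (DOAJ-style labels)

model = LogisticRegression().fit(X_train, y_train)

# A linear model keeps the decision inspectable: each coefficient reads as
# evidence for (+) or against (-) flagging a journal.
feature_names = ["articles/yr", "affiliations", "self-citation",
                 "editorial board", "grammar errors"]
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>16}: {coef:+.3f}")
```

Because every coefficient is visible, a reviewer can see why a journal was flagged, which is the kind of interpretability Acuña contrasts with “black box” systems.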

The new AI system isn’t publicly accessible, but the researchers hope to make it available to universities and publishing companies soon. Acuña sees the tool as one way that researchers can protect their fields from bad data — what he calls a “firewall for science.”

“As a computer scientist, I often give the example of when a new smartphone comes out,” he said. “We know the phone’s software will have flaws, and we expect bug fixes to come in the future. We should probably do the same with science.”

Co-authors on the study included Han Zhuang at the Eastern Institute of Technology in China and Lizheng Liang at Syracuse University in the United States.


