
Fascinating video reveals the 3 ways AI is improving healthcare, along with 1 major concern



Many people are concerned about artificial intelligence these days. Has this technology arrived to save us or to harm us? For now, that question is unanswerable, whatever your opinion. When it comes to medicine, however, AI can definitely help. In a YouTube video posted by ABC News’s 7.30 program, “How AI is changing the way doctors treat their patients,” the broadcaster shared some promising information.

Speaking on the benefits and challenges of AI, doctors in the video describe how they are bringing this technology into their profession while learning to navigate some of its difficult side effects. The video breaks down how AI can be an amazing scribe, data protector, and cancer hunter. It can also be a source of dangerous misinformation, such as chatbot posts spreading untrue claims about vaccines.


Despite that significant drawback, medical research does show that the three benefits of AI in healthcare are substantial.


Artificial Intelligence is a fantastic scribe.

Robot with headset. Image via Canva – Photo by PhonlamaiPhoto’s Images

When a patient sits down with their doctor, an AI scribe called ‘Heidi’ listens to every word of the exchange. Dr. Grant Blashki, a general practitioner for over 20 years in Melbourne, Australia, says in the video, “So that records our consultation and types out all my notes for me.” The AI produces precise, clear notes and offers suggested diagnoses for the problems the patient describes. Blashki continues, “The doctor really needs to turn their mind to it and look at it, they’re more suggestions than the answer. And, I guess with the new generation of doctors coming up who will be living with AI, they need to understand its benefits and its limits.”

In a statement presented in the video, Heidi CEO and Co-Founder Dr. Thomas Kelly shared, “We summarize the clinical encounter reflecting their lines of questioning and using appropriate clinical terminology to describe them. Heidi does not provide a differential diagnosis absent the clinician, and it is still up to the clinician to review their documentation for accuracy.”

Patient data protection

Abstract internet cyber security concept. Image via Canva – Photo by Vertigo3d

Most of us have received an email announcing that some school or organization we belong to has been hacked and its information stolen. If this hasn’t happened to you yet, be aware that it is becoming more common. And if AI scribes are taking detailed notes during appointments, then very private health data that needs protecting is being stored alongside doctors’ records.

An AI scribe software company called Lyrebird Health is also tackling the problem of patient confidentiality. CEO Kai Van Lieshout shared that patient notes are automatically deleted after seven days, unless patients opt to retain them for six months. Van Lieshout said the notes are completely gone and unrecoverable after seven days, noting, “We’ve had doctors that have needed something that we’ve had or wanted it… [and] they don’t realize that it’s deleted after seven days and there’s nothing we can do.”
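
To make the retention policy concrete, here is a minimal sketch of a seven-day window with an opt-in six-month extension. The note structure, field names, and `purge_expired` helper are all hypothetical; Lyrebird Health has not published its implementation.

```python
# Hypothetical sketch of a seven-day retention policy with an opt-in
# six-month extension. Field names and storage shape are invented.
from datetime import datetime, timedelta, timezone

DEFAULT_RETENTION = timedelta(days=7)
EXTENDED_RETENTION = timedelta(days=183)  # roughly six months

def purge_expired(notes: list[dict], now: datetime | None = None) -> list[dict]:
    """Return only the notes still inside their retention window."""
    now = now or datetime.now(timezone.utc)
    kept = []
    for note in notes:
        window = EXTENDED_RETENTION if note.get("extended") else DEFAULT_RETENTION
        if now - note["created_at"] < window:
            kept.append(note)
    # In production the expired notes would be hard-deleted, not just filtered.
    return kept

notes = [
    {"created_at": datetime.now(timezone.utc) - timedelta(days=10), "extended": False},
    {"created_at": datetime.now(timezone.utc) - timedelta(days=10), "extended": True},
]
print(len(purge_expired(notes)))  # 1: only the note with extended retention survives
```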

AI investigates the structures of cancer cells

Digital creation from Harvard Medical School. Via media0.giphy.com

Researchers are starting to use AI to understand the structure of cancerous tumors. One of the biggest hurdles in treating cancer is how differently tumors are built and how differently they behave. Breast cancer is very different from liver cancer, which in turn is very different from skin cancer. The National Library of Medicine explains that what works for one type of cancer, or even one patient, often doesn’t work for another.

Associate Professor Christine Chaffer at the Garvan Institute of Medical Research is using AI to examine these structures, aiming to find how one cancer cell is similar to another. She explains the power of the approach: “What we do with that information then is to work out ways to eradicate some of the key components of each group of cells.” Although each type of cancer is different, AI can help researchers understand the structure of a cell and learn how to target the key components that keep the cancer alive. The aim is that when a person presents with a type of cancer, those cells could be immediately pursued and destroyed.
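
As a rough illustration of the general idea (not the Garvan Institute’s actual pipeline, which the video does not detail), grouping cells by structural similarity can be framed as a clustering problem. Everything below, from the random feature matrix to the choice of k-means, is an assumption made for demonstration.

```python
# Hypothetical sketch: cluster tumor cells by measured features, then
# summarize what each group of similar cells has in common.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row is one cell; the five columns stand in for structural/molecular
# features. Placeholder data only.
rng = np.random.default_rng(0)
cells = rng.normal(size=(300, 5))

# Standardize so no single feature dominates the distance calculation.
features = StandardScaler().fit_transform(cells)

# Group the cells into clusters of structurally similar cells.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

# Each cluster center summarizes the shared profile ("key components")
# of one group of similar cells.
for i, center in enumerate(kmeans.cluster_centers_):
    print(f"cluster {i}: mean feature profile {np.round(center, 2)}")
```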

The dangerous challenges presented by AI misinformation

Chat bot with AI technology. Image via Canva – Photo by Supatman

The video highlights a growing concern about AI chatbots spreading misinformation. It’s becoming increasingly difficult to determine whether a post was created by a human or by AI. John Lalor, an assistant professor of IT, analytics, and operations at the University of Notre Dame, says, “The bot can reply to posts, make new posts under very strict conditions when it sees a certain post that has a certain keyword or post by a certain individual, and then it becomes much more automatic and automated.”

Unfortunately, it’s up to the social media companies to identify and delete the misleading information, and at this point they largely aren’t doing so. The video shared an example of a Reddit thread titled “Why the world needs fewer vaccines”:

  • “The US has some of the highest vaccine rates in the developed world, but also the highest vaccine injury rates.”
  • “I got vaccinated, went to the doctors, and was in pain. All I could do was cry on the couch. I was so scared.”
  • “What’s the deal with ‘multidose’ vaccines? Are they just a marketing gimmick?”

Each of these posts was made by a chatbot, not an actual human. Brett Sutton, the former Chief Health Officer of Victoria, has real concerns about vaccine misinformation online. He believes it is having a real-world negative impact, saying, “Vaccine hesitancy has been on the rise to a certain degree… Vaccine uptake has dropped by a couple of percentage points.” Misinformation can create health risks on a global scale: measles outbreaks are again affecting community pockets throughout the United States, and the vaccine misinformation shared on social media is a major contributing factor.

As technology continues to advance, the hope is that AI’s new benefits will arrive faster than its negative side effects. Hopefully, the people leading this cutting-edge science will keep doing everything they can to protect us from misinformation and the other problems this innovative tech brings with it.




AI-powered hydrogel dressings transform chronic wound care



As chronic wounds such as diabetic ulcers, pressure ulcers, and articular wounds continue to challenge global healthcare systems, a team of researchers from China has introduced a promising innovation: AI-integrated conductive hydrogel dressings for intelligent wound monitoring and healing.

This comprehensive review, led by researchers from China Medical University and Northeastern University, outlines how these smart dressings combine real-time physiological signal detection with artificial intelligence, offering a new paradigm in personalized wound care.

Why it matters:

  • Real-time monitoring: Conductive hydrogels can track key wound parameters such as temperature, pH, glucose levels, pressure, and even pain signals, providing continuous, non-invasive insight into wound status.
  • AI-driven analysis: Machine learning algorithms (e.g., convolutional neural networks, k-nearest neighbors, and other artificial neural networks) process sensor data to predict healing stages, detect infections early, and guide treatment decisions with high reported accuracy (up to 96%). A minimal sketch of this kind of classifier follows this list.
  • Multifunctional integration: These dressings not only monitor but also actively promote healing through electroactivity, antibacterial properties, and drug release capabilities.
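
To make the sensor-to-stage idea concrete, here is a minimal sketch of the kind of classifier the review describes: a k-nearest-neighbors model mapping wound sensor readings to a coarse healing state. The simulated readings, feature choices, and labels are invented for illustration; they are not from the paper.

```python
# Hypothetical sketch: classify wound state from sensor readings with KNN.
# The data below is simulated; a real dressing would stream measured values.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Simulated readings: [temperature (deg C), pH, glucose (mmol/L)]
rng = np.random.default_rng(42)
healing = rng.normal([33.0, 6.5, 5.0], 0.4, size=(100, 3))
infected = rng.normal([36.5, 7.8, 8.0], 0.4, size=(100, 3))
X = np.vstack([healing, infected])
y = np.array([0] * 100 + [1] * 100)  # 0 = healing normally, 1 = possible infection

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale features so temperature does not dominate the distance metric.
scaler = StandardScaler().fit(X_train)
clf = KNeighborsClassifier(n_neighbors=5).fit(scaler.transform(X_train), y_train)

print(f"held-out accuracy: {clf.score(scaler.transform(X_test), y_test):.2f}")
```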

Key features:

  • Material innovation: The review discusses various conductive materials (e.g., carbon nanotubes, graphene, MXenes, and conductive polymers) and their roles in enhancing biocompatibility, sensitivity, and stability.
  • Smart signal output: Different sensing mechanisms, such as colorimetry, resistance variation, and infrared imaging, enable multimodal monitoring tailored to wound types.
  • Clinical applications: The paper highlights applications in pressure ulcers, diabetic foot ulcers, and joint wounds, emphasizing the potential for home care, remote monitoring, and early intervention.

Challenges & future outlook:

Despite promising advances, issues such as material degradation, signal stability, and AI model generalizability remain. Future efforts will focus on multidimensional signal fusion, algorithm optimization, and clinical translation to bring these intelligent dressings into mainstream healthcare.

This work paves the way for next-generation wound care, where smart materials meet smart algorithms, offering hope for millions suffering from chronic wounds.

Stay tuned for more innovations at the intersection of biomaterials, AI, and personalized medicine!

Journal reference:

She, Y., et al. (2025). Artificial Intelligence-Assisted Conductive Hydrogel Dressings for Refractory Wounds Monitoring. Nano-Micro Letters. doi.org/10.1007/s40820-025-01834-w




To ChatGPT or not to ChatGPT: Professors grapple with AI in the classroom



As shopping period settles, students may notice a new addition to many syllabi: an artificial intelligence policy. As one of his first initiatives as associate provost for artificial intelligence, Michael Littman PhD’96 encouraged professors to implement guidelines for the use of AI. 

Littman also recommended that professors “discuss (their) expectations in class” and “think about (their) stance around the use of AI,” he wrote in an Aug. 20 letter to faculty. But professors on campus have applied this advice in different ways, reflecting a range of attitudes toward AI.

In her nonfiction classes, Associate Teaching Professor of English Kate Schapira MFA’06 prohibits AI usage entirely. 

“I teach nonfiction because evidence … clarity and specificity are important to me,” she said. AI threatens these principles at a time “when they are especially culturally devalued” nationally.

She added that an overreliance on AI goes beyond the classroom. “It can get someone fired. It can screw up someone’s medication dosage. It can cause someone to believe that they have justification to harm themselves or another person,” she said.

Nancy Khalek, an associate professor of religious studies and history, said she is intentionally designing assignments that are not suitable for AI usage. Instead, she wants students “to engage in reflective assignments, for which things like ChatGPT and the like are not particularly useful or appropriate.”

Khalek said she considers herself an “AI skeptic” — while she acknowledged the tool’s potential, she expressed opposition to “the anti-human aspects of some of these technologies.”

But AI policies vary within and across departments. 

Professors “are really struggling with how to create good AI policies, knowing that AI is here to stay, but also valuing some of the intermediate steps that it takes for a student to gain knowledge,” said Aisling Dugan PhD’07, associate teaching professor of biology.

In her class, BIOL 0530: “Principles of Immunology,” Dugan said she lets students choose whether to use artificial intelligence for some assignments, but she requires them to critique their own AI-generated work.

She said this reflection “is a skill that I think we’ll be using more and more of.”

Dugan added that she thinks AI can serve as a “study buddy” for students. She has been working with her teaching assistants to develop an AI chatbot for her classes, which she hopes will eventually answer student questions and supplement the study videos made by her TAs.

Despite this, Dugan still shared concerns over AI in classrooms. “It kind of misses the mark sometimes,” she said, “so it’s not as good as talking to a scientist.”

For some assignments, like primary literature readings, she has a firm no-AI policy, noting that comprehending primary literature is “a major pedagogical tool in upper-level biology courses.”

“There’s just some things that you have to do yourself,” Dugan said. “It (would be) like trying to learn how to ride a bike from AI.”

Assistant Professor of the Practice of Computer Science Eric Ewing PhD’24 is also trying to strike a balance between how AI can support and inhibit student learning. 

This semester, his courses, CSCI 0410: “Foundations of AI and Machine Learning” and CSCI 1470: “Deep Learning,” heavily focus on artificial intelligence. He said assignments are no longer “measuring the same things,” since “we know students are using AI.”

While he does not allow students to use AI on homework, his classes offer projects that allow them “full rein” use of AI. This way, he said, “students are hopefully still getting exposure to these tools, but also meeting our learning objectives.”


Ewing added that the skills required of graduating students are shifting: the growing presence of AI in the professional world requires a different toolkit.

He believes students in upper level computer science classes should be allowed to use AI in their coding assignments. “If you don’t use AI at the moment, you’re behind everybody else who’s using it,” he said. 

Ewing said he identifies AI policy violations through code similarity. Last semester, he found that 25 students had submitted similarly structured code; ultimately, 22 of those 25 admitted to AI usage.
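
As a toy illustration of what pairwise code-similarity screening can look like (not Ewing’s actual workflow, which the article does not describe), one might score every pair of submissions and flag high-similarity pairs for manual review. The submissions, threshold, and use of difflib below are all assumptions.

```python
# Hypothetical sketch: flag suspiciously similar submissions by scoring
# every pair with a character-level similarity ratio.
from difflib import SequenceMatcher
from itertools import combinations

submissions = {
    "student_a": "def mean(xs):\n    return sum(xs) / len(xs)\n",
    "student_b": "def mean(values):\n    return sum(values) / len(values)\n",
    "student_c": "import statistics\n\ndef mean(xs):\n    return statistics.fmean(xs)\n",
}

THRESHOLD = 0.8  # flag pairs at or above this similarity ratio

for (name_a, code_a), (name_b, code_b) in combinations(submissions.items(), 2):
    ratio = SequenceMatcher(None, code_a, code_b).ratio()
    if ratio >= THRESHOLD:
        print(f"{name_a} vs {name_b}: similarity {ratio:.2f}, review manually")
```

Production tools typically compare token streams or syntax trees rather than raw characters, so renamed variables alone do not hide copying; the sketch above only conveys the pairwise-comparison idea.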

Littman also provided guidance to professors on how to identify the dishonest use of AI, noting various detection tools. 

“I personally don’t trust any of these tools,” Littman said. In his introductory letter, he also advised faculty not to be “overly reliant on automated detection tools.” 

Although she does not use detection tools, Schapira gives specific reasons in her syllabi not to use AI, hoping to convince students to comply with her policy.

“If you’re in this class because you want to get better at writing — whatever ‘better’ means to you — those tools won’t help you learn that,” her syllabus reads. “It wastes water and energy, pollutes heavily, is vulnerable to inaccuracies and amplifies bias.”

In addition to these environmental concerns, Dugan also raised concerns about the ethical implications of AI technology.

Khalek also expressed her concerns “about the increasingly documented mental health effects of tools like ChatGPT and other LLM-based apps.” In her course, she discussed with students how engaging with AI can “resonate emotionally and linguistically, and thus impact our sense of self in a profound way.”

Students in Schapira’s class can also present “collective demands” if they find the structure of her course overwhelming. “The solution to the problem of too much to do is not to use an AI tool. That means you’re doing nothing. It’s to change your conditions and situations with the people around you,” she said.

“There are ways to not need (AI),” Schapira continued. “Because of the flaws that (it has) and because of the damage (it) can do, I think finding those ways is worth it.”




This Artificial Intelligence (AI) Stock Could Outperform Nvidia by 2030



When investors think about artificial intelligence (AI) and the chips powering this technology, one company tends to dominate the conversation: Nvidia (NASDAQ: NVDA). It has become an undisputed barometer for AI adoption, riding the wave with its industry-leading GPUs and the sticky ecosystem of its CUDA software that keeps developers in its orbit. Since the launch of ChatGPT about three years ago, Nvidia stock has surged nearly tenfold.

Here’s the twist: While Nvidia commands the spotlight today, it may be Taiwan Semiconductor Manufacturing (NYSE: TSM) that holds the real keys to growth as we look toward the next decade. Below, I’ll unpack why Taiwan Semi — or TSMC, as it’s often called — isn’t just riding the AI wave, but rather is building the foundation that brings the industry to life.

What makes Taiwan Semi so critical is its role as the backbone of the semiconductor ecosystem. Its foundry operations serve as the lifeblood of the industry, transforming complex chip designs into the physical processors that power myriad generative AI applications.

Source: Fool.com