AI Research

LG launches upgraded pathology AI model – The Korea Times


AI Research

Artificial Intelligence in Cataract Surgery and Optometry at Large with Harvey Richman, OD, and Rebecca Wartman, OD



At the 2025 American Optometric Association Conference in Minneapolis, MN, Harvey Richman, OD, of Shore Family Eyecare, and Rebecca Wartman, OD, chair of the AOA Coding and Reimbursement Committee, presented their lecture on the implementation of artificial intelligence (AI) devices in cataract surgery and optometry at large.1

AI has been implemented in a variety of ophthalmology fields already, from analyzing and interpreting ocular imaging to determining the presence of diseases or disorders of the retina or macula. Recent studies have tested AI algorithms in analyzing fundus fluorescein angiography, finding the programs extremely effective at enhancing clinical efficiency.2

However, there are concerns as to the efficacy and reliability of AI programs, given their propensity for hallucination and misinterpretation. To that end, Drs. Richman and Wartman presented a study highlighting the present and future possibilities of AI in cataract surgery, extrapolating its usability to optometry as a whole.

Richman spoke to the importance of research in navigating the learning curve of AI technology. Given the rapid pace of advancement and implementation, he noted how easily an individual can fall behind on the latest developments and technologies available to them.

“The problem is that the technology is advancing much quicker than the people are able to adapt to it,” Richman told HCPLive. “There’s been research done on AI for years and years; unfortunately, the implementation just hasn’t been as effective.”

Wartman warned against the potential for AI to take too much control in a clinical setting. She cautioned that clinicians should be wary of letting algorithms make all of the treatment decisions and should have a method of overriding those decisions.

“I think they need to be very well aware of what algorithms the AI is using to get to its interpretations and be a little cautious when the AI does all of the decision making,” Wartman said. “Make sure you know how to override that decision making.”

Richman went on to discuss the 3 major levels of AI: assistive technology, augmented technology, and autonomous intelligence.

“Some of those are just bringing out data, some of them bring data and make recommendations for treatment protocol, and the third one can actually make the diagnosis and treatment protocol and implement it without a physician even involved,” Richman said. “In fact, the first artificial intelligence code that was approved by CPT had to do with diabetic retina screening, and it is autonomous. There is no physician work involved in that.”

Wartman also informed HCPLive that a significant amount of surgical technology is already using artificial intelligence, mainly in the form of pattern recognition software and predictive devices.

“A lot of our equipment is already using some form of artificial intelligence, or at least algorithms to give you patterns and tell you whether it’s inside or outside the norm,” Wartman said.

References
  1. Richman H, Wartman R. A.I. in Cataract Surgery. Presented at the 2025 American Optometric Association Conference in Minneapolis, MN, June 25-28, 2025.
  2. Shao A, Liu X, Shen W, et al. Generative artificial intelligence for fundus fluorescein angiography interpretation and human expert evaluation. NPJ Digit Med. 2025;8(1):396. Published 2025 Jul 2. doi:10.1038/s41746-025-01759-z




AI Research

Artificial intelligence could hire you. Now it could also fire you



Use of artificial intelligence in the job candidate interview and hiring process, at least at some level, is becoming more common at U.S. companies. Proponents say it saves time, filters out candidates who aren’t qualified for the job, and presents hiring managers with the most suitable pool of candidates.

Opponents say AI has shown bias in candidate selection, and falls short of judging applicants on softer skills and personality traits.

AI is now finding its way into managing employees long after they’ve been hired, and that too is raising concerns.

A survey of more than 1,300 office managers with direct reports conducted by Resume Builder found a majority are now using AI to make personnel decisions, including promotions, raises and even terminations.

“It’s one thing if you are using it for some sort of transactional thing in your job, but now we’re talking about people’s livelihoods and their jobs,” said Stacie Haller, chief career coach at Resume Builder. “My hope is that the human part of the process in Human Resources and overseeing people’s careers doesn’t just become left up to AI.”

Haller said overreliance on artificial intelligence in making high-stakes personnel decisions can become a slippery slope for companies.

“It also leads the organization to have some liabilities if somebody feels they were unfairly fired or didn’t get a raise, and it was AI and the information wasn’t correct,” she said. “I think there are some liabilities there.”

In the survey, six in 10 managers said they rely on AI to make decisions about the employees they manage, including 78% who said they use AI to determine raises, 77% for promotions, 66% for layoffs and even 64% for terminations.

Most concerning, two-thirds of managers using AI to manage employees said they have not received any formal AI training, according to the survey.

“Organizations need to find some uniformity and training and build this in like they build in any other process,” Haller said. “And it has to be verified. But when it comes to people’s careers and lives, I think the human aspect needs to play a bigger piece here.”

An overwhelming majority of HR managers surveyed said they do maintain control over AI recommendations.

“The good news is, most of these folks have told us that if they don’t agree with the decision, they will override it,” Haller said. “But it seems that too many in our surveys are leaning to use it in that direction, and it feels a little Wild West out there.”

When asked which tool they rely on most, ChatGPT was cited by 53% of managers, followed by 29% for Microsoft’s Copilot and 16% for Google’s Gemini.

Most are also using AI for personnel tasks that are productive without affecting careers, such as drafting training materials, employee development plans and performance improvement plans.

Results from Resume Builder’s survey on HR manager use of artificial intelligence are online.


© 2025 WTOP. All Rights Reserved.




AI Research

Northumbria to roll out new AI platform for staff and students



Northumbria University is to provide its students and staff with access to Claude for Education – a leading AI platform specifically tailored for higher education.

Northumbria will become only the second university in the UK, after the London School of Economics, to offer Claude for Education to its university community, joining other leading international institutions.

With artificial intelligence rapidly transforming many aspects of our lives, Northumbria’s students and staff will now have free access to many of the tools and skills they will need to succeed in the new global AI environment.

Claude for Education is a next-generation AI assistant built by Anthropic and trained to be safe, accurate and secure. It provides universities with ethical and transparent access to AI that ensures data security and copyright compliance and acts as a 24/7 study partner for students, designed to guide learning and develop critical thinking rather than providing direct answers.

Known as a UK leader in responsible AI-based research and education, Northumbria University recently launched its Centre for Responsible AI and is leading a multi-million pound UKRI AI Centre for Doctoral Training in Citizen-Centred Artificial Intelligence to train the next generation of leaders in AI development.

Professor Graham Wynn explained: “Today’s students are digitally native and recent data show many use AI routinely. They expect their universities to provide a modern, technology-enhanced education, providing access to AI tools along with clear guidance on the responsible use of AI.

“We know that the availability of secure and ethical AI tools is a significant consideration for our applicants and our investment in Claude for Education will position Northumbria as a forward-thinking leader in ethical AI innovation.

“Empowering students and staff, providing cutting-edge learning opportunities, driving social mobility and powering an inclusive economy are at the heart of everything we do. We know how important it is to eliminate digital poverty and provide equitable access to the most powerful AI tools, so our students and graduates are AI literate with the skills they need for the workplaces of the future.

“The introduction of Claude for Education will provide our students and staff with free universal access to cutting-edge AI technology, regardless of their financial circumstances.”

The University is now working with Anthropic to establish the technical infrastructure and training to roll out Claude for Education in autumn 2025.


