Connect with us

AI Research

Oracle Health Deploys AI to Tackle $200B Administrative Challenge

Published

on


Oracle Health introduced tools aimed at easing administrative healthcare burdens and costs.

The company’s new artificial intelligence-powered offerings are designed to simplify and lower the cost of processes such as prior authorizations, medical coding, claims processing and determining eligibility, according to a Thursday (Sept. 11) press release.

“Oracle Health is working to solve long-standing problems in healthcare with AI-powered solutions that simplify transactions between payers and providers,” Seema Verma, executive vice president and general manager, Oracle Health and Life Sciences, said in the release. “Our offerings can help minimize administrative complexity and waste to improve accuracy and reduce costs for both parties. With these capabilities, providers can better navigate payer-specific coverage, medical necessity and billing rules while enabling payers to lower administrative workloads by receiving more accurate claims from the start.”

Annual administrative costs tied to healthcare billing and insurance are estimated at roughly $200 billion, the release said. That figure continues to rise, largely due to the complexity of medical and financial processing rules and evolving payment models. The rules and models are time-consuming and inefficient for providers to follow and adopt, so they use manual processes, which make them prone to errors.

The PYMNTS Intelligence report “Healthcare Payments Need Modernization to Drive Financial Health” found that healthcare’s lingering reliance on manual payment systems is proving to be a bottleneck for its financial health and operational efficiency.

The worldwide market for healthcare digital payments is forecast to increase at a compound annual growth rate of 19% between 2024 and 2030, indicating a shift and market opportunity for digital solutions, per the report.

The report also explored how these outdated systems strain revenues and create inefficiencies, contrasting the sector’s slower adoption with other industries that have embraced digital payment tools.

“On the patient side, the benefits are equally compelling,” PYMNTS wrote in June. “Digital transactions offer hassle-free experiences, which are a driver for patient satisfaction and, ultimately, patient retention.”

The research found that 67% of executives and decision-makers in healthcare payer organizations said that their firms’ manual payment platforms were actively hindering efficiency. In addition, 74% said these platforms put their organizations at greater risk for regulatory fines and penalties.



Source link

AI Research

3 Arguments Against AI in the Classroom

Published

on


Generative artificial intelligence is here to stay, and K-12 schools need to find ways to use the technology for the benefit of teaching and learning. That’s what many educators, technology companies, and AI advocates say.

In response, more states and districts are releasing guidance and policies around AI use in the classroom. Educators are increasingly experimenting with the technology, with some saying that it has been a big time saver and has made the job more manageable.

But not everyone agrees. There are educators who are concerned that districts are buying into the AI hype too quickly and without enough skepticism.

A nationally representative EdWeek Research Center survey of 559 K-12 educators conducted during the summer found that they are split on whether AI platforms will have a negative or positive impact on teaching and learning in the next five years: 47% say AI’s impact will be negative, while 43% say it will be positive.

Education Week talked to three veteran teachers who are not using generative AI regularly in their work and are concerned about the potential negative effects the technology will have on teaching and learning.

Here’s what they think about using generative AI in K-12.

AI provides ‘shortcuts’ that are not conducive for learning

Dylan Kane, a middle school math teacher at Lake County High School in Leadville, Colo., isn’t “categorically against AI,” he said.

He has experimented with the technology personally, using it to help him improve his Spanish-language skills. AI is a “half decent” Spanish tutor, if you understand its limitations, he said. For his teaching job, Kane has experimented with AI tools to generate student materials like many other teachers, but it takes too many iterations of prompting to generate something he would actually put in front of his classes.

“I will do a better job just doing it myself and probably take less time to do so,” said Kane, who is in his 14th year of teaching. Creating student materials himself means he can be “more intentional” about the questions he asks, how they’re sequenced, how they fit together, how they build on each other, and what students already know.

His biggest concern is how generative AI will affect educators and students’ critical-thinking skills. Too often, people are using these tools to take “shortcuts,” he said.

“If I want students to learn something, I need them to be thinking about it and not finding shortcuts to avoid thinking,” Kane said.

The best way to prepare students for an AI-powered future is to “give them a broad and deep collection of knowledge about the world and skills in literacy, math, history and civics, and science,” so they’ll have the knowledge they need to understand if an AI tool is providing them with a helpful answer, he said.

That’s true for teachers, too, Kane said. The reason he can evaluate whether AI-generated material is accurate and helpful is because of his years of experience in education.

“One of my hesitations about using large language models is that I won’t be developing skills as a teacher and thinking really hard about what things I put in front of students and what I want them to be learning,” Kane said. “I worry that if I start leaning heavily on large language models, that it will stunt my growth as a teacher.”

And the fact that teachers have to use generative AI tools to create student materials “points to larger issues in the teaching profession” around the curricula and classroom resources teachers are given, Kane said. AI is not “an ideal solution. That’s a Band-Aid for a larger problem.”

Kane’s open to using AI tools. For instance, he said he finds generative AI technology helpful for writing word problems. But educators should “approach these things with a ton of skepticism and really ask ourselves: ‘Is this better than what we should be doing?’”

Experts and leaders haven’t provided good justifications for AI use in K-12

Jed Williams, a high school math and science teacher in Belmont, Mass., said he hasn’t heard any good justifications for why generative AI should be implemented in schools.

The way AI is being presented to teachers tends to be “largely uncritical,” said Williams, who teaches computer science, physics, and robotics at Belmont High School. Often, professional development opportunities about AI don’t provide a “critical analysis” of the technology and just “check the box” by mentioning that AI tools have downsides, he said.

For instance, one professional development session he attended only spent “a few seconds” on the downsides of AI tools, Williams said. The session covered the issue of overreliance on AI tools, but Williams criticized it for not talking about “labor exploitation, overuse of resources, sacrificing the privacy of students and faculty,” he said.

“We have a responsibility to be skeptical about technologies that we bring into the classroom,” Williams said, especially because there’s a long history of ed-tech adoption failures.

Williams, who has been teaching since 2006, is also concerned that AI tools could decrease students’ cognitive abilities.

“So much of learning is being put into a situation that is cognitively challenging,” he said. “These tools, fundamentally, are built on relieving the burden of cognitive challenge.

“Especially in introductory courses, where students aren’t familiar with programming and you want them to try new things and experiment and explore, why would you give them this tool that completely removes those aspects that are fundamental to learning?” Williams said.

Williams is also worried that a rushed implementation of AI tools would sacrifice students and teachers’ privacy and use them as “experimental subjects in developing technologies for tech companies.”

Education leaders “have a tough job,” Williams said. He understands the pressure they feel around implementing AI, but he hopes they give it “critical thought.”

Decisionmakers need to be clear about what technology is being proposed, how they anticipate teachers and students using it, what the goal of its use is, and why they think it’s a good technology to teach students how to use, Williams said.

“If somebody has a good answer for that, I’m very happy to hear proposals on how to incorporate these things in a healthy, safe way,” he said.

Educators shouldn’t fall for the ‘fallacy’ that AI is inevitable

Elizabeth Bacon, a middle school computer science teacher in California, hasn’t found any use cases with generative AI tools that she feels will be beneficial for her work.

“I would rather do my own lesson plan,” said Bacon, who has been teaching for more than 20 years. “I have an idea of what I want the students to learn, of what’s interesting to them, and where they are and the entry points for them to engage in it.”

Teachers have a lot of pressure to do more with less. That’s why Bacon said she doesn’t judge other teachers who want to use AI to get the job done. It’s “a systemic problem,” but teaching and learning shouldn’t be replaced by machines, she said.

Bacon believes it’s “particularly dangerous” for middle school students to be using “a machine emulating a person.” Students are still developing their character, their empathy, their ability to socialize with peers and work collectively toward a goal, she said, and a chatbot would undermine that.

She can foresee using generative AI tools to explain to her students what large language models are. It’s important for them to learn about generative AI, that it’s a statistical model predicting the next likely word based on data it’s been trained on, that there’s no meaning [or feelings] behind it, Bacon said.

Last school year, she asked her high school students what they wanted to know about AI. Their answers: the technology’s social and environmental impacts.

Bacon doesn’t think educators should fall for the “fallacy” that AI is the inevitable future because technology companies are the ones saying that and they have an incentive to say that, she said.

“Educators have basically been told, in a lot of ways, ‘don’t trust your own instincts about what’s right for your students, because [technology companies are] going to come in and tell you what’s going to be good for your students,” she said.

It’s discouraging to see that a lot of the AI-related professional development events she’s attended have “essentially been AI evangelism” and “product marketing,” she said. There should be more thought about why this technology is necessary in K-12, she said.

Technology experts have talked up AI’s potential to increase productivity and efficiency. But as an educator, “efficiency is not one of my values,” Bacon said.

“My value is supporting students, meeting them where they are, taking the time it takes to connect with these students, taking the time that it takes to understand their needs,” she said. “As a society, we have to take a hard look: Do we value education? Do we value doing our own thinking?”





Source link

Continue Reading

AI Research

University Of Utah Teams With HPE, NVIDIA To Boost AI Research

Published

on


The University of Utah (the U) is planning to join forces with two powerhouse tech firms to accelerate research and discovery using artificial intelligence (AI). The agreement with Hewlett Packard Enterprise (HPE) and AI chipmaker NVIDIA will amplify the U’s capacity for understanding cancer, Alzheimer’s disease, mental health, and genetics. The initiative is projected to enable medical breakthroughs, driving innovation, and scientific discovery across disciplines.

“The U has a proud legacy of pioneering technological breakthroughs,” said Taylor Randall, president of the University of Utah. “Our goal is to make the state awash in computing power by building a robust AI ecosystem benefiting our entire system of higher education, driving research to find new cures, and igniting Utah’s entrepreneurial spirit.”

(Photo: The University of Utah / Facebook)

The partnership, which includes a $50 million investment of funds from both public and philanthropic sources, is projected to increase the U’s computing capacity 3.5-fold. The flagship school’s Board of Trustees gave preliminary approval to the proposed arrangement on September 9.

The structure paves a path for substantial advances in computing storage and infrastructure required for Utah-based projects in AI and innovation. The goal is to lay the foundation for a scalable AI ecosystem available to researchers, learners, and entrepreneurs across Utah. The multi-year initiative would build upon existing capabilities in AI, giving the U access to substantially more computing power.

Brynn and Peter Huntsman along with the Huntsman Family Foundation will provide a lead philanthropic gift to the U that is intended to initiate the project and help encourage other supporters to make investments required to move the work forward through AI “supercomputer” systems designed to handle enormous processing and storage needs. The university will seek remaining funds from the state of Utah and other sources.

“This AI initiative will accelerate world class cancer research that enhances capabilities in ways we hardly imagined just a few years ago,” said Peter Huntsman, CEO and chairman, Huntsman Cancer Foundation. “Huntsman Cancer Foundation recently announced our commitment to support the expansion of the educational, research, and clinical care capacity of the world renown Huntsman Cancer Institute in Vineyard, Utah, which will serve as a hub for cancer AI research. These investments will speed discoveries and enhance the state of Utah’s leadership in AI education and economic opportunity.”

Mental health will be a major focus of the AI research endeavor. 

“As the Huntsman Mental Health Institute opens its new 185,000-square-foot Translational Research Building this coming year, we’re looking forward to increasing momentum around mental health research, including the impact of this technology,” said Christena Huntsman Durham, Huntsman Mental Health Foundation CEO and co-chair. “We know so many people are struggling with mental health challenges; we’re thrilled we will be able to move even faster to get help to those who need it most.”

Check out all the latest news related to Utah economic development, corporate relocation, corporate expansion and site selection.



Source link

Continue Reading

AI Research

F5 to acquire AI security firm CalypsoAI for $180 million

Published

on


F5, a Seattle-based application delivery and security company, announced Thursday it will acquire Dublin-based CalypsoAI for $180 million in cash, highlighting the mounting security challenges enterprises face as they rapidly integrate artificial intelligence into their operations.

The acquisition comes as companies across industries rush to deploy generative AI systems while grappling with new categories of cybersecurity threats that traditional security tools struggle to address. CalypsoAI, founded in 2018, specializes in protecting AI systems against emerging attack methods, including prompt injection and jailbreak attacks.

“AI is redefining enterprise architecture and the attack surface companies must defend,” said François Locoh-Donou, F5’s president and CEO. The company plans to integrate CalypsoAI’s capabilities into its Application Delivery and Security Platform to create what it describes as a comprehensive AI security solution.

Companies are embedding AI into products and operations at an unprecedented pace, but this rapid adoption has created compliance gaps and heightened regulatory scrutiny. CalypsoAI addresses these challenges through what the company calls “model-agnostic” security, providing protection regardless of which AI models or cloud providers enterprises use. 

The platform conducts automated red-team testing against thousands of attack scenarios monthly, generating risk assessments and implementing real-time guardrails to prevent data leakage and policy violations.

“Enterprises want to move fast with AI while reducing the risk of data leaks, unsafe outputs, or compliance failures,” said CalypsoAI CEO Donnchadh Casey. The company’s approach focuses on the inference layer where AI models process requests, rather than securing the models themselves.

The acquisition comes during a flurry of similar moves by established companies in the cybersecurity space that are looking to add AI-powered offerings to their customers. 

F5 has also been active this year with what it considers strategic purchases. The company acquired San Francisco-based Fletch in June and observability firm MantisNet in August, demonstrating a pattern of building capabilities through acquisition rather than internal development.

The deal is expected to close by Sept. 30. 


Written by Greg Otto

Greg Otto is Editor-in-Chief of CyberScoop, overseeing all editorial content for the website. Greg has led cybersecurity coverage that has won various awards, including accolades from the Society of Professional Journalists and the American Society of Business Publication Editors. Prior to joining Scoop News Group, Greg worked for the Washington Business Journal, U.S. News & World Report and WTOP Radio. He has a degree in broadcast journalism from Temple University.



Source link

Continue Reading

Trending