AI Insights

NCCN holds AI policy summit

The National Comprehensive Cancer Network (NCCN) hosted a policy summit in Washington, DC, on September 9 to explore the role of AI in cancer care both now and in the future.

Participants, including patients and patient advocates, clinicians, and policymakers, discussed AI’s emerging success in improving oncology care, as well as areas of possible concern.

Among topics raised were issues of implementation and integration into different platforms, oversight (both internal and governmental), and avoiding disparities and increasing access to AI-based software. 

Collaboration between medical and technological organizations in the creation and implementation of AI tools for oncology was also highlighted, with the observation that many of the challenges faced with diagnostic AI software have also been faced in other fields that use AI applications.

The speed at which AI models are evolving was a common theme with panelists, NCCN said, with some comparing its potential to advances in care that represented major technological shifts, such as the transition to electronic medical records.

AI and cancer care was also the topic of a plenary session during the NCCN 2025 Annual Conference. Sessions are available for viewing at the NCCN Continuing Education Portal.

NCCN will be hosting a Patient Advocacy Summit on December 9 on the cancer care needs of veterans and first responders.




How federal tech leaders are rewriting the rules for AI and cyber hiring

Terry Gerton Well, there’s a lot of things happening in your world. Let’s talk about, first, the new memo that came out at the end of August that talks about FedRAMP 20x. Put that in plain language for folks and then tell us what it means for PSC and its stakeholders.

Jim Carroll Yeah, I think really what it means, it’s a reflection of what’s happening in the industry overall, the GovCon world, as well as probably everything that we do, you know, even as individual citizens, which is more and more reliance on AI. What we’re seeing is the artificial intelligence world has really picked up steam. I saw mention of it on the news today, and they were talking about how every Google search now incorporates AI. So what we’re seeing with this GSA and FedRAMP initiative is really trying to fast-track the authorization of the cloud-based services side of AI. Because it really is becoming more and more part of every basic use, not only in our private lives, like they talk about, but also in the federal contracting space. And what we are seeing are more and more federal government officials using it for routine things. And so I think what this is is really a reflection that they are going to move this as quickly as possible, in recognition that the world is changing right in front of us.

Terry Gerton So is this more for government contractors who are offering AI products, or for government contractors who are using AI in their internal products?

Jim Carroll It’s really for AI-based cloud services that not only allow them, but really allow federal workers, to be able to access AI in a much faster way. And, you know, there’s certainly some challenges with AI. I think you’re hearing some of the futurists talk about, do we really understand AI enough to embrace it to the extent that we have? I don’t think anyone really knows the answer to that, but we know it’s out there and there is this recognition that there will be an ongoing routine federal use of AI. So let’s at least have the major players that are doing it the best authorized to be able to provide the service. And so much is happening right now in the AI space. There’s a lot of acronyms we’re going to talk about today, but I think AI is the one everyone knows. And we did a poll and looked at our 400 member companies at PS Council, and I think it was 45% or 50% mentioned the use of AI on their homepage. And so I think there’s just recognition that GSA wants to be able to provide these solutions to the federal government workers.

Terry Gerton Do you see any risks or trade-offs in accelerating this approval versus adopting things that might not quite be ready for prime time?

Jim Carroll You know, I think there’s always that concern, as I mentioned, about some of the futurists that are looking at this and making sure that it’s safe. We’re hearing about it from the White House and we’re putting together — you’ve seen some public panels already with the White House, we’ve been asked to bring our PSC members for a policy discussion and some of the legal issues around AI to the White House. And so we’ll be bringing some members to the White House here in the next couple of weeks. And so I think there is concern that the people who use AI are also double-checking to make sure it’s accurate, right? That’s one of the concerns I think that people want to make sure is that there should not be an over-reliance or an exclusive reliance on AI tools. And we need to make sure that the solutions and the answers that our AI tools are giving us are actually accurate. One of the concerns, which I think goes into something we need to discuss that’s happening this week, is cybersecurity. Is AI secure? Is the use of it going to be able to safeguard some of the really important national security work that we’re doing? And how do we do that?

Terry Gerton I’m speaking with Jim Carroll. He’s the CEO of the Professional Services Council. Well, let’s stick in that tech vein and cybersecurity. There’s a new bill in Congress that wants to shift cybersecurity hiring to more of a skills-based qualification than professional degrees. How does PSC think about that proposal?

Jim Carroll I think again, it’s a reflection of what’s actually out there — that these new tools, we’ll say in cybersecurity, [are] really based on an individual’s ability to maneuver in this space, as opposed to just a degree. And being able to really focus on the ability of everyone, I think, levels the playing field, right? It means more and more people are qualified to do this. When you take away a — I hate to say a barrier such as a degree, but it’s a reflection that there are other skill sets that people have learned to be able to actually do their work. And I can say this, having gotten a law degree many years ago, that you really sort of learn how to practice law by doing it and by having a mentor and doing it over the years, as opposed to just having a law degree. I don’t think I would have been the right person to just go out and represent anyone on anything the day after graduating from law school. You really need to learn how to apply it, and I think that’s what this bipartisan bill is doing. And so, you know, we’re encouraging more and more people being able to get into this, because there’s a greater and greater need, Terry. And so we’re okay with this.

Terry Gerton So what might it mean then for the GovCon workforce?

Jim Carroll I think there’s an opportunity here for the GovCon workspace and employees to be able to expand and really get some super-talented people to be able to work at these federal agencies. Which is a great plus, I think, for actually achieving the desired results that our GovCon members at PS Council are able to deliver: we’re going to get the best and brightest and bring those people in to give real solutions.

Terry Gerton So the bill calls for more transparency from OPM on education-related hiring policies. Does PSC have an idea of what kind of oversight they’d like to see about that practice?

Jim Carroll Yeah, we’re looking into it now. We’re talking to our members and seeing what kind of oversight they have. You know, representing 400 organizations, companies that do business with the federal government and so many in this space of cybersecurity, being the leading trade organization for these 400 companies, it means that we’re able to go to our members and get from them, really, the safeguards that they think are important. Get the requirements that they think are important and get it in there. And so this is going to be a deliberative process. We have a little bit of time to work on this. But we’re excited about the potential. We really think this will be able to deliver great solutions, Terry.

Terry Gerton Well, speaking of cyber, there’s a new memo out on the cybersecurity maturity model. What’s your hot take there?

Jim Carroll Terry, how long has that been pending? I think five years — that’s what I heard this morning. And so, you know, this will provide three levels of certification and clarity for CMMC [(Cybersecurity Maturity Model Certification)]. We’re looking at it now. This is obviously a critical issue and we are starting a working group, and we’re going to be able to provide resources to our members for this, to help them manage the certifications — some of which are going to be very expensive for our members, depending on what type of certification they want. So we’re gearing up. We have been ready for this. Like I said, we started planning for this five years ago, right? So did you, Terry. And so we have five years of thought going into it, and we will be developing a website for our members to be able to have information on this and learn from it. We’ll be conducting seminars for our members. So now that CMMC — the other acronym I mentioned earlier — is finally here, it’ll be implemented, I guess, in 60 days. And so we’ll have some time to use the skills that we have been developing over the last five years to give to our members.

Terry Gerton Any surprises for you in the final version? I know that PSC had quite a bit of input in the development.

Jim Carroll Not right now. We’re sort of looking at it; obviously, it just dropped in the last 24 hours. And so nothing right now that has caught us off guard. And so we’ve been ready for this and we’re ready to educate our members on this.

Copyright © 2025 Federal News Network. All rights reserved. This website is not intended for users located within the European Economic Area.






Techno-Utopians Like Elon Musk Are Treading Old Ground

In “The Singularity is Nearer: When We Merge with AI,” the futurist Ray Kurzweil imagines the point in 2045 when rapid technological progress crosses a threshold as humans merge with machines, an event he calls “the singularity.”

Although Kurzweil’s predictions may sound more like science fiction than fact-based forecasting, his brand of thinking goes well beyond the usual sci-fi crowd. It has provided inspiration for American technology industry elites for some time, chief among them Elon Musk.

With Neuralink, his company that is developing computer interfaces implanted in people’s brains, Musk says he intends to “unlock new dimensions of human potential.” This fusion of human and machine echoes Kurzweil’s singularity. Musk also cites apocalyptic scenarios and points to transformative technologies that can save humanity.

Ideas like those of Kurzweil and Musk, among others, can seem as if they are charting paths into a brave new world. But as a humanities scholar who studies utopianism and dystopianism, I’ve encountered this type of thinking in the futurist and techno-utopian art and writings of the early 20th century.

Techno-utopianism’s origins

Techno-utopianism emerged in its modern form in the 1800s, when the Industrial Revolution ushered in a set of popular ideas that combined technological progress with social reform or transformation.

Kurzweil’s singularity parallels ideas from Italian and Russian futurists amid the electrical and mechanical revolutions that took place at the turn of the 20th century. Enthralled by inventions like the telephone, automobile, airplane and rocket, those futurists found inspiration in the concept of a “New Human,” a being who they imagined would be transformed by speed, power and energy.

A century ahead of Musk, Italian futurists imagined the destruction of one world, so that it might be replaced by a new one, reflecting a common Western techno-utopian belief in a coming apocalypse that would be followed by the rebirth of a changed society.

One especially influential figure of the time was Filippo Marinetti, whose 1909 “Founding and Manifesto of Futurism” offered a nationalistic vision of a modern, urban Italy. It glorified the tumultuous transformation caused by the Industrial Revolution. The document describes workers becoming one with their fiery machines. It encourages “aggressive action” coupled with an “eternal” speed designed to break things and bring about a new world order.

The overtly patriarchal text glorifies war as “hygiene” and promotes “scorn for woman.” The manifesto also calls for the destruction of museums, libraries and universities and supports the power of the rioting crowd.

Marinetti’s vision later drove him to support and even influence the early fascism of Italian dictator Benito Mussolini. However, the relationship between the futurism movement and Mussolini’s increasingly anti-modern regime was an uneasy one, as Italian studies scholar Katia Pizzi wrote in “Italian Futurism and the Machine.”

Further east, the Russian revolutionaries of 1917 adopted a utopian faith in material progress and science. They combined a “belief in the ease with which culture could be destroyed” with the benefits of “spreading scientific ideas to the masses of Russia,” historian Richard Stites wrote in “Revolutionary Dreams.”

For the Russian left, an “immediate and complete remaking” of the soul was taking place. This new proletarian culture was personified in the ideal of the New Soviet Man. This “master of nature by means of machines and tools” received a polytechnical education instead of the traditional middle-class pursuit of the liberal arts, humanities scholar George Young wrote in “The Russian Cosmists.” The first Soviet People’s Commissar of Education, Anatoly Lunacharsky, supported these movements.

Although their political ideologies took different forms, these 20th-century futurists all focused their efforts on technological advancement as an ultimate objective. Techno-utopians were convinced that the dirt and pollution of real-world factories would automatically lead to a future of “perfect cleanliness, efficiency, quiet, and harmony,” historian Howard Segal wrote in “Technology and Utopia.”

Myths of efficiency and everyday tech

Despite the remarkable technological advances of that time, and since, the vision of those techno-utopians largely has not come to pass. In the 21st century, it can seem as if we live in a world of near-perfect efficiency and plenitude thanks to the rapid development of technology and the proliferation of global supply chains. But the toll that these systems take on the natural environment – and on the people whose labor ensures their success – presents a dramatically different picture.

Today, some of the people who espouse techno-utopian and apocalyptic visions have amassed the power to influence, if not determine, the future. At the start of 2025, through the Department of Government Efficiency, or DOGE, Musk introduced a fast-paced, tech-driven approach to government that has led to major cutbacks in federal agencies. He’s also influenced the administration’s huge investments in artificial intelligence, a class of technological tools that public officials are only beginning to understand.

The futurists of the 20th century influenced the political sphere, but their movements were ultimately artistic and literary. By contrast, contemporary techno-futurists like Musk lead powerful multinational corporations that influence economies and cultures across the globe.

Does this make Musk’s dreams of human transformation and societal apocalypse more likely to become reality? If not, these elements of Musk’s project are likely to remain more theoretical, just as the dreams of last century’s techno-utopians did.




AI’s Uncertain Cost Effects in Health Care | American Enterprise Institute

The health care industry has a long history of below-average productivity gains, but there is cautious optimism that artificial intelligence (AI) will break the pattern. As in the past, the industry’s misaligned incentives might stymie progress. 

A 2024 economic study found that existing AI platforms could deliver up to $360 billion in annual cost reductions without harming the quality of care delivered to patients. If realized, the financial relief for employers, consumers, and taxpayers would not be trivial. 

The potential uses of AI in health care are numerous. AI could streamline the reading of diagnostic images, speed up accurate identification of complex conditions (and thus reduce the need for additional testing), eliminate repetitive back-office tasks, prevent payments for unneeded services, target fraud, and identify drug compounds with potential therapeutic value at lower cost. The savings from these applications are not theoretical; market participants are already using existing AI tools to pursue each of these objectives.

But there are two sides to the health care negotiating table, and the other side—hospitals, physician practices, and publicly subsidized insurance plans looking to maximize their revenue—can leverage AI too. The net effect remains uncertain and will depend on which side of the table is more effective at leveraging the technology’s power.

AI scribes are an example of a use that could go either way. The tool will save time for doctors and their support staff by quickly and easily translating audio notes from patient encounters into data entries for electronic health records. At the same time, a recent news story noted that AI scribes also facilitate “chart reviews” aimed at ensuring no services that can be billed to insurance plans are missed. In effect, the industry is discovering that AI scribes are more effective than humans at maximizing practice revenue. 

Medicare Advantage (MA) plans are sure to use AI in a similar way to boost the adjustment scores, which affect their monthly capitated payments from the Medicare program.

While potentially powerful, AI does not solve the basic problem in health care, which is that there are weak incentives for cost control. 

In employer-sponsored insurance (ESI), higher costs are partially subsidized by a federal tax break that grows in value with the expense of the plan. In traditional Medicare, hospitals and doctors get paid more when they provide more services. If AI were used to eliminate unnecessary care, provider incomes would fall dramatically, which is why facilities and clinicians are more likely to use the technology to justify providing more care at higher prices than to become more efficient.

Insurers would seem to have a stronger incentive for cost control, but their main clients—employers and workers—are mostly interested in broad provider networks, not cost control. Insurers can earn profits just as easily when costs are high as when they are low. 

If AI is to lead to lower costs, the government and employers will need to deploy it aggressively to identify unnecessary spending, and then incentivize patients to migrate toward lower-cost insurance and care options. 

For instance, employers could use AI to pore through pricing data made available by transparency rules to identify potential cost-cutting opportunities for their workers. That, however, is only step one. Step two should be a change in plan design that rewards workers who use the information AI uncovers to choose hospitals and doctors that can deliver the best value at the lowest cost. The savings from lower-priced care should be shared with workers through lower cost-sharing and premiums.

The government should implement similar changes in Medicare, either through existing regulatory authority or through changes in law approved by Congress. 

With patients incentivized to seek out lower-cost care, hospitals and doctors would be more willing to use AI to identify cost-cutting strategies. For instance, AI could be used to design care plans for complex patients that minimize overall costs, or to offer more aggressive preventive care to patients with health risks identified by AI tools. 

Health care is awash with underused data. Patient records include potentially valuable information that could be harnessed to prevent emerging problems at far less cost than would be the case for treating the conditions after they have begun to inflict harm. In other words, AI might be used to vastly improve patient outcomes while also reducing costs. 

But this upending of the industry will not occur if all of the major players would rather stick with business as usual to protect their bottom lines. 

Congress should keep all of this in mind when considering how best to ensure AI delivers on its potential in health care. The key is to change incentives in the market so that those providers who use AI to cut their costs are rewarded with expanded market shares rather than lost revenue.


