
Tools & Platforms

RACGP releases new AI guidance




A new resource guides GPs through the practicalities of using conversational AI in their consultations, explains how the technology works, and outlines the risks to be aware of.



AI is an emerging space in general practice, with more than half of GPs not familiar with specific AI tools.



Artificial intelligence (AI) is becoming increasingly relevant in healthcare, but at least 80% of GPs have reported that they are not at all, or not very, familiar with specific AI tools.

 

To help GPs broaden their understanding of the technology, and weigh up the potential advantages and disadvantages of its use in their practice, the RACGP has unveiled a comprehensive new resource focused on conversational AI.  

 

Unlike AI scribes, which convert a conversation with a patient into a clinical note that can be incorporated into a patient’s health record, conversational AI is technology that enables machines to interpret, process, and respond to human language in a natural way.

 

Examples include AI-powered chatbots and virtual assistants that can support patient interactions, streamline appointment scheduling, and automate routine administrative tasks.

 

The college resource offers further practical guidance on how conversational AI can be applied effectively in general practice and highlights key applications. These include:

  • answering patient questions about their diagnosis or the potential side effects of prescribed medicines, and simplifying jargon in medical reports
  • providing treatment/medication reminders and dosage instructions
  • providing language translation services
  • guiding patients to appropriate resources
  • supporting patients to track and monitor blood pressure, blood sugar, or other health markers
  • triaging patients prior to a consultation
  • preparing medical documentation such as clinical letters, clinical notes and discharge summaries
  • providing clinical decision support by preparing lists of differential diagnoses, supporting diagnosis, and optimising decision support tools for investigation and treatment options
  • suggesting treatment options and lifestyle recommendations.

Dr Rob Hosking, Chair of the RACGP’s Practice and Technology Management Expert Committee, told newsGP there are several potential advantages to these tools in general practice.
 
‘Some of the potential benefits include task automation, reduced administrative burden, improved access to care and personalised health education for patients,’ he said.
 
Beyond the clinical setting, conversational AI tools can also have a range of business, educational and research applications, such as automating billing and analysing billing data, summarising the medical literature and answering clinicians’ medical questions.
 
However, while there are a number of benefits, Dr Hosking says it is important to consider some of the potential disadvantages to its use as well.
 
‘Conversational AI tools can provide responses that appear authoritative but on review are vague, misleading, or even incorrect,’ he explained.
 
‘Biases are inherent to the data on which AI tools are trained, and as such, particular patient groups are likely to be underrepresented in the data.
 
‘There is a risk that conversational AI will make unsuitable and even discriminatory recommendations, rely on harmful and inaccurate stereotypes, and/or exclude or stigmatise already marginalised and vulnerable individuals.’
 
While some conversational AI tools are designed for medical use, such as Google’s MedPaLM and Microsoft’s BioGPT, Dr Hosking pointed out that most are designed for general applications and not trained to produce a result within a clinical context.
 
‘The data these general tools are trained on are not necessarily up-to-date or from high-quality sources, such as medical research,’ he said.
 
The college resource addresses these potential problems, as well as other ethical and privacy considerations that come with using AI in healthcare.
 
For GPs deciding whether to use conversational AI, Dr Hosking notes there are a number of considerations to ensure the delivery of safe and quality care, and says patients should play a key role in deciding whether to use it in their specific consultation.
 
‘GPs should involve patients in the decision to use AI tools and obtain informed patient consent when using patient-facing AI tools,’ he said.
 
‘Also, do not input sensitive or identifying data.’
 
However, before conversational AI is brought into practice workflows, the RACGP recommends GPs are trained in how to use it safely, including understanding the risks and limitations of the tool, and how and where data is stored.
 
‘GPs must ensure that the use of the conversational AI tool complies with relevant legislation and regulations, as well as any practice policies and professional indemnity insurance requirements that might impact, prohibit or govern its use,’ the college resource states.
 
‘It is also worth considering that conversational AI tools designed specifically by, and for use by, medical practitioners are likely to provide more accurate and reliable information than that of general, open-use tools.
 
‘These tools should be TGA-registered as medical devices if they make diagnostic or treatment recommendations.’
 
While the college recognises that conversational AI could revolutionise parts of healthcare delivery, it recommends that GPs be ‘extremely careful’ in using the technology at this time.
 
‘Many questions remain about patient safety, patient privacy, data security, and impacts for clinical outcomes,’ the college said.
 
Dr Hosking, who has yet to implement conversational AI tools in his own clinical practice, shared the sentiment.
 
‘AI will continue to evolve and really could make a huge difference in patient outcomes and time savings for GPs,’ he said.
 
‘But it will never replace the important role of the doctor-patient relationship. We need to ensure AI does not create health inequities through inbuilt biases.
 
‘This will help GPs weigh up the potential advantages and disadvantages of using conversational AI in their practice and inform of the risks associated with these tools.’
 


Tech Companies Pay $200,000 Premiums for AI Experience: Report

  • A consulting firm found that tech companies are “strategically overpaying” recruits with AI experience.
  • It found that firms pay premiums of up to $200,000 for data scientists with machine learning skills.
  • The report also tracked a rise in bonuses for lower-level software engineers and analysts.

The AI talent bidding war is heating up, and the data scientists and software engineers behind the tech are benefiting from being caught in the middle.

Many tech companies are “strategically overpaying” recruits with AI experience, shelling out premiums of up to $200,000 for some roles with machine learning skills, J. Thelander Consulting, a compensation data and consulting firm for the private capital market, found in a recent report.

The report, compiled from a compensation analysis of roles across 153 companies, showed that data scientists and analysts with machine learning skills tend to receive a higher premium than software engineers with the same skills. However, the consulting firm also tracked a rise in bonuses for lower-level software engineers and analysts.

The payouts are a big bet, especially among startups. About half of the surveyed companies paying premiums for employees with AI skills had no revenue in the past year, and a majority (71%) had no profit.

Smaller firms need to stand out and be competitive among Big Tech giants — a likely driver behind the pricey recruitment tactic, a spokesperson for the consulting firm told Business Insider.

But while the J. Thelander Consulting report focused on smaller firms, some Big Tech companies have also recently made headlines for their sky-high recruitment incentives.

Meta was in the spotlight last month after Sam Altman, CEO of OpenAI, said the social media giant had tried to poach his best employees with $100 million signing bonuses.

While Business Insider previously reported that Altman later quipped that none of his “best people” had been enticed by the deal, Meta’s chief technology officer, Andrew Bosworth, said in an interview with CNBC that Altman “neglected to mention that he’s countering those offers.”








A Recipe for Tech Bubble 2.0

The tech industry’s history is littered with cautionary tales of irrational exuberance: the dot-com boom, the crypto craze, and the AI winters of the 1970s and 1980s. Today, Palantir Technologies (PLTR) stands at the intersection of hype and hubris, its stock up over 2,000% since 2023 and trading at a Price-to-Sales (P/S) ratio of 107x—a metric that dwarfs even the most speculative valuations of the late 1990s. This is not sustainable growth; it is a textbook bubble. With seven critical risks converging, investors are poised for a reckoning that could slash Palantir’s valuation by 60% by 2027.

The Illusion of Growth: Valuation at 107x Sales

Let’s start with the math. A P/S ratio of 107x means investors are paying $107 for every dollar of Palantir’s annual revenue, a price that only years of flawless hypergrowth could justify. For context, during the dot-com bubble, Amazon’s peak P/S was 20x, and even Bitcoin’s 2017 mania never pushed its P/S analog to such extremes. The stock’s price history shows a trajectory that mirrors the NASDAQ’s 2000 peak—rapid ascents followed by catastrophic collapses.

Seven Risks Fueling the Implosion

1. The AI Bubble Pop

Palantir’s valuation is tied to its AI product, Gotham, which promises to revolutionize data analytics. But history shows that AI’s promise has often exceeded its delivery. The AI winters of the 1970s and 1980s saw similar hype, only to crumble under overpromised outcomes. Today’s AI tools—despite their buzz—are still niche, and enterprise adoption remains fragmented. A cooling in AI enthusiasm could drain investor confidence, leaving Palantir’s inflated valuation stranded.

2. Gotham’s Limited Market

Gotham’s core clients are governments and large enterprises. While this niche offers stability, it also caps growth potential. Unlike cloud platforms or social media, Palantir’s market is neither scalable nor defensible against competitors. If governments shift spending priorities—or if AI’s ROI fails to materialize—the demand for Gotham’s services will evaporate.

3. Insider Selling: A Signal of Doubt

Insiders often sell shares when they anticipate a downturn. While specific data on Palantir’s insider transactions is scarce, the stock’s meteoric rise since 2023 has coincided with a surge in institutional selling. This behavior mirrors the final days of the dot-com bubble, when executives offloaded shares ahead of the crash.

4. Interest-Driven Profits, Not Revenue Growth

Palantir’s profits now rely partly on rising interest rates, which boost returns on its cash reserves. This financial engineering masks weak organic growth. When rates inevitably fall—or inflation subsides—this artificial profit driver will vanish, exposing the company’s fragile fundamentals.

5. Dilution via Equity Issuances

To fund its ambitions, Palantir has likely diluted shareholders through stock offerings. Historical data shows adjusted stock prices that account for splits and dividends, yet no splits are noted, suggesting the share count has grown through issuance. This silent dilution reduces equity value, a tactic common in bubble-stage companies desperate to fund unsustainable growth.

6. Trump’s Fiscal Uncertainty

Palantir’s government contracts depend on political stability. With a potential Trump administration’s fiscal policies uncertain—ranging from spending cuts to regulatory crackdowns—the company’s revenue streams face existential risks.

7. Valuation Precedents: The 2000 Dot-Com Crash Revisited

Valuation metrics matter. In 2000, the NASDAQ’s P/S ratio averaged 4.5x. Palantir’s 107x ratio is 23 times higher—a disconnect from reality. When the dot-com bubble burst, companies like Pets.com and Webvan, once darlings, lost 99% of their value. Palantir’s fate could mirror theirs.

The Inevitable Correction: 60% Downside by 2027

If Palantir’s valuation reverts to a more rational 10x P/S—a still aggressive multiple for its niche market—its stock would plummet to $12.73, a 60% drop from its July 2025 high. Even a 20x P/S, akin to Amazon’s peak, would price it at $25.46—a 75% drop. This is not a prediction of doom; it is arithmetic.
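The multiple-compression arithmetic above can be sketched in a few lines of Python. As an illustrative check rather than a valuation model, the revenue per share is backed out from the article’s own 10x scenario ($12.73 per share), and the helper function name is ours:

```python
# Price implied by a price-to-sales (P/S) multiple:
#   price = P/S multiple * revenue per share

def price_at_ps(revenue_per_share: float, ps_multiple: float) -> float:
    """Share price implied by a given P/S multiple."""
    return revenue_per_share * ps_multiple

# Back out revenue per share from the article's 10x scenario ($12.73).
revenue_per_share = 12.73 / 10  # roughly $1.27 of trailing sales per share

# Reprice the stock under the multiples discussed in the article.
for multiple in (10, 20, 107):
    print(f"{multiple:>3}x P/S -> ${price_at_ps(revenue_per_share, multiple):.2f}")
```

Under these assumptions the 20x case reproduces the article’s $25.46 figure, and the 107x row shows the price the current multiple would imply for the same revenue base.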

Investment Advice: Avoid the Sizzle, Seek the Steak

Investors should treat Palantir as a warning sign, not a buy signal. The stock’s rise has been fueled by sentiment, not fundamentals. Stick to companies with proven scalability, sustainable margins, and valuations grounded in reality. For Palantir? The only question is whether it will crash to $12 or $25—either way, the party is over.

In the annals of tech history, one truth endures: bubbles always pop. Palantir’s 2023–2025 surge is no exception. The only question is how many investors will still be dancing when the music stops.

Data sources: Historical stock price summaries (2023–2025), Palantir’s P/S ratio calculations, and comparison with market precedents.


