Turkish medical oncologists’ perspectives on integrating artificial intelligence: knowledge, attitudes, and ethical considerations | BMC Medical Ethics
A total of 147 medical oncologists completed the survey, corresponding to approximately 11% of the estimated 1340 medical oncologists practicing in Türkiye [4]. The median age of participants was 39 years (IQR: 35–46), and 63.3% were male. Respondents had a median of 14 years (IQR: 10–22) of medical experience and a median of 5 years (IQR: 2–14) specifically in oncology. Nearly half (47.6%) practiced in university hospitals, followed by 31.3% in training and research hospitals, and the remainder in private or state settings (Table 1). In terms of academic rank, residents/fellows constituted 38.1%, specialists 22.4%, professors 21.1%, associate professors 16.3%, and assistant professors 2.0%. Respondents were distributed across various urban centers, including major cities such as Istanbul and Ankara, as well as smaller provinces, reflecting a broad regional representation of Türkiye’s oncology workforce.
Table 1 Demographics, AI usage, and education status of participants
Most participants completed the survey from the Central Anatolia Region of Türkiye (34.0%, n = 50), followed by the Marmara Region (27.2%, n = 40), the Aegean Region (17.0%, n = 25), and the Mediterranean Region (10.2%, n = 15). The distribution of participants across the regions of Türkiye is presented in Fig. 1.
Fig. 1
Geographical Distribution of Participants by Regions of Türkiye
AI usage and education
A majority (77.5%, n = 114) of oncologists reported prior use of at least one AI tool. ChatGPT and other GPT-based models were the most frequently used (77.5% of all respondents, n = 114), indicating that LLM interfaces had already penetrated clinical professionals’ workflows to some extent. Other tools such as Google Gemini (17.0%, n = 25) and Microsoft Bing (10.9%, n = 16) showed more limited utilization, and only a small fraction had tried less common platforms such as Anthropic Claude, Meta Llama-3, or Hugging Face. Despite this relatively high usage rate of general AI tools, formal AI education was scarce: only 9.5% (n = 14) of respondents had received any formal AI training, and this was primarily basic-level. Nearly all (94.6%, n = 139) expressed a desire for more education, suggesting that their forays into AI usage had been largely self-directed and that there was a perceived need for structured, professionally guided learning.
Regarding sources of AI knowledge, 38.8% (n = 57) reported not using any resource, underscoring a gap in continuing education. Among those who did seek information, the most common channels were colleagues (26.5%, n = 39) and academic publications (23.1%), followed by online courses/websites (21.8%, n = 32), popular science publications (19.7%, n = 29), and professional conferences/workshops (18.4%, n = 27). This pattern suggests that while some clinicians attempt to inform themselves about AI through peer discussions or scientific literature, many remain unconnected to formalized educational pathways or comprehensive training programs.
Self-assessed AI knowledge
Participants generally rated themselves as having limited knowledge across key AI domains (Fig. 2A). More than half reported having “no knowledge” or only “some knowledge” in areas such as machine learning (86.4%, n = 127, combined) and deep learning (89.1%, n = 131, combined). Even fundamental concepts like LLMs and generative AI were unfamiliar to a substantial portion of respondents. For instance, nearly half (47.6%, n = 70) had no knowledge of LLMs, and two-thirds (66.0%, n = 97) had no knowledge of generative AI. Similar trends were observed for natural language processing and advanced statistical analyses, reflecting a widespread lack of confidence and familiarity with the technical underpinnings of AI beyond superficial usage.
Fig. 2
Overview of Oncologists’ AI Familiarity, Attitudes, and Perceived Impact. (A) Distribution of participants’ self-assessed AI knowledge, (B) attitudes toward AI in various medical practice areas, and (C) insights into AI’s broader impact on medical practice
Attitudes toward AI integration in oncology
When asked to evaluate AI’s role in various clinical tasks (Fig. 2B), respondents generally displayed cautious optimism. Prognosis estimation stood out as one of the areas where AI received the strongest endorsement, with a clear majority rating it as “positive” or “very positive.” A similar pattern emerged for medical research, where nearly three-quarters of respondents recognized AI’s potential in the academic sphere. In contrast, opinions on treatment planning and patient follow-up were more mixed, with a considerable proportion adopting a neutral stance. Diagnosis and clinical decision support still garnered predominantly positive views, though some participants expressed reservations, possibly reflecting concerns about reliability, validation, and the interpretability of AI-driven recommendations.
Broadening the perspective, Fig. 2C illustrates how participants viewed AI’s impact on aspects like patient-physician relationships, social perception, and health policy. While most believed AI could improve overall medical practices and potentially reduce workload, many worried it might affect the quality of personal interactions with patients or shape public trust in uncertain ways. Approximately half recognized potential benefits for healthcare access, but some remained neutral or skeptical, perhaps concerned that technology might not equally benefit all patient populations or could inadvertently exacerbate existing disparities.
Ethical and regulatory concerns
Tables 2 and 3, along with Figs. 3A–C, summarize participants’ ethical and legal considerations. Patient management (57.8%, n = 85), article or presentation writing (51.0%, n = 75), and study design (25.2%, n = 37) emerged as key activities where the integration of AI was viewed as ethically questionable. Respondents feared that relying on AI for sensitive clinical decisions or academic tasks could compromise patient safety, authenticity, or scientific integrity. A subset of respondents reported utilizing AI in certain domains, including 13.6% (n = 20) for article and presentation writing, and 11.6% (n = 17) for patient management, despite acknowledging potential ethical issues in the preceding question. However, only about half of the respondents who admitted using AI for patient management identified this as an ethical concern. This discrepancy suggests that while oncologists harbor concerns, convenience or lack of guidance may still drive them to experiment with AI applications.
Table 2 Ethical concerns regarding AI usage in medical practice
Table 3 Views on ethical development and regulations for AI
Fig. 3
Ethical Considerations, Implementation Barriers, and Strategic Solutions for AI Integration. (A) Frequency distribution of major ethical concerns, (B) heatmap of implementation challenges across technical, educational, clinical, and regulatory categories, and (C) priority matrix of proposed integration solutions, including training and regulatory frameworks. The implementation time and timeline were extracted from the open-ended questions. Timeline: the estimated time needed for implementation; Implementation time: the urgency of implementation. Timeline and implementation time are fully correlated (R² = 1.0)
Moreover, nearly 82% of participants supported using AI in medical practice, yet 79.6% (n = 117) did not find current legal regulations satisfactory. Over two-thirds advocated for stricter legal frameworks and ethical audits. Patient consent was highlighted by 61.9% (n = 91) as a critical step, implying that clinicians want transparent processes that safeguard patient rights and maintain trust. Liability in the event of AI-driven errors also remained contentious: 68.0% (n = 100) held software developers partially responsible, and 61.2% (n = 90) also implicated physicians. This suggests a shared accountability model might be needed, involving multiple stakeholders across the healthcare and technology sectors.
To address these gaps, respondents proposed various solutions. Establishing national and international standards (82.3%, n = 121) and enacting new laws (59.2%, n = 87) were seen as pivotal. More than half favored creating dedicated institutions for AI oversight (53.7%, n = 79) and integrating informed consent clauses related to AI use (53.1%, n = 78) into patient forms. These collective views point to a strong desire among oncologists for a structured, legally sound environment in which AI tools are developed, tested, and implemented responsibly.
Ordinal regression analysis of factors associated with AI knowledge, attitudes, and concerns
For knowledge levels, the ordinal regression model identified formal AI education as the sole significant predictor (β = 30.534, SE = 0.6404, p < 0.001). In contrast, other predictors such as age (β = −0.1835, p = 0.159), years as a physician (β = 0.0936, p = 0.425), years in oncology (β = 0.0270, p = 0.719), and academic rank showed no significant associations with knowledge levels in the ordinal model.
The ordinal regression for concern levels revealed no significant predictors: neither demographic factors, professional experience, academic status, AI education, nor current knowledge levels were associated with the ordinal progression of ethical and practical concerns (p > 0.05).
For attitudes toward AI integration, the ordinal regression identified two significant predictors. Willingness to receive AI education predicted progression toward more positive attitudes (β = 13.143, SE = 0.6688, p = 0.049), as did actual receipt of AI education (β = 12.928, SE = 0.6565, p = 0.049). Additionally, higher knowledge levels showed a trend toward more positive attitudes in the ordinal model, although this did not reach significance (β = 0.3899, SE = 0.2009, p = 0.052).
Table 4 presents the ordinal regression analyses examining predictors of AI knowledge levels, concerns, and attitudes among Turkish medical oncologists.
Table 4 Ordinal regression results for assessing the factors affecting knowledge levels, attitudes and concerns
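The proportional-odds (ordinal logistic) model underlying analyses of this kind can be sketched as follows. This is a minimal illustrative example on synthetic data, not the study’s dataset or code: the single binary predictor (standing in for something like a formal-AI-education flag), the three ordered outcome levels, and the coefficient values are all assumptions for demonstration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid

rng = np.random.default_rng(0)

# Synthetic data: binary predictor x, three ordered outcome levels y in {0, 1, 2}
# generated from a latent logistic variable cut at two thresholds.
n = 500
x = rng.binomial(1, 0.3, size=n).astype(float)
true_beta = 1.5
latent = true_beta * x + rng.logistic(size=n)
y = np.digitize(latent, bins=[0.5, 2.0])  # true cutpoints c0 = 0.5, c1 = 2.0

def neg_log_lik(params):
    """Proportional-odds negative log-likelihood: P(y <= k) = sigmoid(c_k - x*beta)."""
    beta, c0, log_gap = params
    c1 = c0 + np.exp(log_gap)  # reparameterize to enforce c0 < c1
    lp = beta * x
    p0 = expit(c0 - lp)               # P(y = 0)
    p1 = expit(c1 - lp) - p0          # P(y = 1)
    p2 = 1.0 - expit(c1 - lp)         # P(y = 2)
    probs = np.stack([p0, p1, p2], axis=1)
    chosen = probs[np.arange(n), y]   # likelihood of each observed category
    return -np.sum(np.log(np.clip(chosen, 1e-12, None)))

res = minimize(neg_log_lik, x0=np.zeros(3), method="BFGS")
beta_hat = res.x[0]
print(f"estimated beta = {beta_hat:.2f} (true beta = {true_beta})")
```

In practice such models are usually fitted with a statistics package rather than by hand (e.g., `OrderedModel` in Python’s statsmodels or `polr` in R); the explicit likelihood above simply makes the structure of the reported β coefficients and cutpoints visible.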
Qualitative insights
The open-ended responses, analyzed qualitatively, revealed several recurring themes reinforcing the quantitative findings. Participants frequently stressed the importance of human oversight, emphasizing that AI should complement rather than replace clinical expertise, judgment, and empathy. Data security and privacy emerged as central concerns, with some respondents worrying that insufficient safeguards could lead to breaches of patient confidentiality. Others highlighted the challenge of ensuring that AI tools maintain cultural and social sensitivity in diverse patient populations. Calls for incremental, well-regulated implementation of AI were common, as was the suggestion that education and ongoing professional development would be essential to ensuring clinicians use AI effectively and ethically.
In essence, while there is broad acknowledgment that AI holds promise for enhancing oncology practice, respondents also recognize the need for clear ethical standards, solid regulatory frameworks, comprehensive training, and thoughtful integration strategies in oncology care.