AI Insights
Turkish medical oncologists’ perspectives on integrating artificial intelligence: knowledge, attitudes, and ethical considerations | BMC Medical Ethics
Participant characteristics
A total of 147 medical oncologists completed the survey, corresponding to approximately 11% of the estimated 1340 medical oncologists practicing in Türkiye [4]. The median age of participants was 39 years (IQR: 35–46), and 63.3% were male. Respondents had a median of 14 years (IQR: 10–22) of medical experience and a median of 5 years (IQR: 2–14) specifically in oncology. Nearly half (47.6%) practiced in university hospitals, followed by 31.3% in training and research hospitals, and the remainder in private or state settings (Table 1). In terms of academic rank, residents/fellows constituted 38.1%, specialists 22.4%, professors 21.1%, associate professors 16.3%, and assistant professors 2.0%. Respondents were distributed across various urban centers, including major cities such as Istanbul and Ankara, as well as smaller provinces, reflecting a broad regional representation of Türkiye’s oncology workforce.
Most participants completed the survey from the Central Anatolia Region of Türkiye (34.0%, n = 50), followed by the Marmara Region (27.2%, n = 40), the Aegean Region (17.0%, n = 25), and the Mediterranean Region (10.2%, n = 15). The regional distribution of participants is presented on a map of Türkiye in Fig. 1.
AI usage and education
A majority (77.5%, n = 114) of oncologists reported prior use of at least one AI tool. Among these, ChatGPT and other GPT-based models were the most frequently used (77.5%, n = 114), indicating that LLM interfaces had already penetrated clinical professionals’ workflow to some extent. Other tools such as Google Gemini (17.0%, n = 25) and Microsoft Bing (10.9%, n = 16) showed more limited utilization, and just a small fraction had tried less common platforms like Anthropic Claude, Meta Llama-3, or Hugging Face. Despite this relatively high usage rate of general AI tools, formal AI education was scarce: only 9.5% (n = 14) of respondents had received some level of formal AI training, and this was primarily basic-level. Nearly all (94.6%, n = 139) expressed a desire for more education, suggesting that their forays into AI usage had been largely self-directed and that there was a perceived need for structured, professionally guided learning.
Regarding sources of AI knowledge, 38.8% (n = 57) reported not using any resource, underscoring a gap in continuing education. Among those who did seek information, the most common channels were colleagues (26.5%, n = 39) and academic publications (23.1%), followed by online courses/websites (21.8%, n = 32), popular science publications (19.7%, n = 29), and professional conferences/workshops (18.4%, n = 27). This pattern suggests that while some clinicians attempt to inform themselves about AI through peer discussions or scientific literature, many remain unconnected to formalized educational pathways or comprehensive training programs.
Self-assessed AI knowledge
Participants generally rated themselves as having limited knowledge across key AI domains (Fig. 2A). A large majority reported having “no knowledge” or only “some knowledge” in areas such as machine learning (86.4%, n = 127, combined) and deep learning (89.1%, n = 131, combined). Even fundamental concepts like LLMs and generative AI were unfamiliar to a substantial portion of respondents. For instance, nearly half (47.6%, n = 70) had no knowledge of LLMs, and two-thirds (66.0%, n = 97) had no knowledge of generative AI. Similar trends were observed for natural language processing and advanced statistical analyses, reflecting a widespread lack of confidence and familiarity with the technical underpinnings of AI beyond superficial usage.
Attitudes toward AI integration in oncology
When asked to evaluate AI’s role in various clinical tasks (Fig. 2B), respondents generally displayed cautious optimism. Prognosis estimation stood out as one of the areas where AI received the strongest endorsement, with a clear majority rating it as “positive” or “very positive.” A similar pattern emerged for medical research, where nearly three-quarters of respondents recognized AI’s potential in the academic field. In contrast, opinions on treatment planning and patient follow-up were more mixed, with a considerable proportion adopting a neutral stance. Diagnosis and clinical decision support still garnered predominantly positive views, though some participants expressed reservations, possibly reflecting concerns about the reliability, validation, and interpretability of AI-driven recommendations.
Broadening the perspective, Fig. 2C illustrates how participants viewed AI’s impact on aspects like patient-physician relationships, social perception, and health policy. While most believed AI could improve overall medical practices and potentially reduce workload, many worried it might affect the quality of personal interactions with patients or shape public trust in uncertain ways. Approximately half recognized potential benefits for healthcare access, but some remained neutral or skeptical, perhaps concerned that technology might not equally benefit all patient populations or could inadvertently exacerbate existing disparities.
Ethical and regulatory concerns
Tables 2 and 3, along with Figs. 3A–C, summarize participants’ ethical and legal considerations. Patient management (57.8%, n = 85), article or presentation writing (51.0%, n = 75), and study design (25.2%, n = 37) emerged as key activities where the integration of AI was viewed as ethically questionable. Respondents feared that relying on AI for sensitive clinical decisions or academic tasks could compromise patient safety, authenticity, or scientific integrity. A subset of respondents reported utilizing AI in certain domains, including 13.6% (n = 20) for article and presentation writing, and 11.6% (n = 17) for patient management, despite acknowledging potential ethical issues in the preceding question. However, only about half of the respondents who admitted using AI for patient management identified this as an ethical concern. This discrepancy suggests that while oncologists harbor concerns, convenience or lack of guidance may still drive them to experiment with AI applications.
Ethical Considerations, Implementation Barriers, and Strategic Solutions for AI Integration. (A) Frequency distribution of major ethical concerns, (B) heatmap of implementation challenges across technical, educational, clinical, and regulatory categories, and (C) priority matrix of proposed integration solutions, including training and regulatory frameworks. Timeline (the estimated time needed for implementation) and implementation time (the urgency of implementation) were extracted from the open-ended questions; the two measures are fully correlated (R² = 1.0).
Moreover, nearly 82% of participants supported using AI in medical practice, yet 79.6% (n = 117) did not find current legal regulations satisfactory. Over two-thirds advocated for stricter legal frameworks and ethical audits. Patient consent was highlighted by 61.9% (n = 91) as a critical step, implying that clinicians want transparent processes that safeguard patient rights and maintain trust. Liability in the event of AI-driven errors also remained contentious: 68.0% (n = 100) held software developers partially responsible, and 61.2% (n = 90) also implicated physicians. This suggests a shared accountability model might be needed, involving multiple stakeholders across the healthcare and technology sectors.
To address these gaps, respondents proposed various solutions. Establishing national and international standards (82.3%, n = 121) and enacting new laws (59.2%, n = 87) were seen as pivotal. More than half favored creating dedicated institutions for AI oversight (53.7%, n = 79) and integrating informed consent clauses related to AI use (53.1%, n = 78) into patient forms. These collective views point to a strong desire among oncologists for a structured, legally sound environment in which AI tools are developed, tested, and implemented responsibly.
Ordinal regression analysis of factors associated with AI knowledge, attitudes, and concerns
For knowledge levels, the ordinal regression model identified formal AI education as the sole significant predictor (β = 30.534, SE = 0.6404, p < 0.001). In contrast, other predictors such as age (β = −0.1835, p = 0.159), years as a physician (β = 0.0936, p = 0.425), years in oncology (β = 0.0270, p = 0.719), and academic rank showed no significant associations with knowledge levels in the ordinal model.
The ordinal regression for concern levels revealed no significant predictors: neither demographic factors, professional experience, academic status, AI education, nor current knowledge levels were associated with the ordinal progression of ethical and practical concerns (p > 0.05).
For attitudes toward AI integration, the ordinal regression identified two significant predictors. Willingness to receive AI education predicted progression toward more positive attitudes (β = 13.143, SE = 0.6688, p = 0.049), as did actual receipt of AI education (β = 12.928, SE = 0.6565, p = 0.049). Additionally, higher knowledge levels showed a trend toward more positive attitudes in the ordinal model, although this did not reach significance (β = 0.3899, SE = 0.2009, p = 0.052).
Table 4 presents the ordinal regression analyses examining predictors of AI knowledge levels, concerns, and attitudes among Turkish medical oncologists.
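To make the ordinal models above concrete, the following is a minimal illustrative sketch of a proportional-odds (cumulative-logit) ordinal regression, the standard model behind analyses like those in Table 4. All numbers here (the coefficient, cut-points, and the 3-point knowledge scale) are hypothetical, chosen only to show how a binary predictor such as formal AI education shifts probability mass across ordered outcome categories; they are not the paper’s estimates.

```python
import math

def cumulative_logit_probs(x, beta, thresholds):
    """Proportional-odds model: P(Y <= k) = logistic(theta_k - beta * x).
    Returns the probability of each ordered outcome category."""
    def logistic(z):
        return 1.0 / (1.0 + math.exp(-z))
    # Cumulative probabilities at each cut-point, plus 1.0 for the top category.
    cdf = [logistic(t - beta * x) for t in thresholds] + [1.0]
    # Per-category probabilities are differences of adjacent cumulative values.
    return [cdf[0]] + [cdf[k] - cdf[k - 1] for k in range(1, len(cdf))]

# Hypothetical setup: self-assessed knowledge on a 3-point ordinal scale
# (none / some / good), with formal AI education as a 0/1 predictor.
beta = 1.5               # illustrative effect size, not the paper's estimate
thresholds = [0.0, 2.0]  # cut-points separating the 3 categories

no_education = cumulative_logit_probs(0, beta, thresholds)
with_education = cumulative_logit_probs(1, beta, thresholds)
```

With a positive β, the model shifts probability mass toward the higher knowledge categories for educated respondents, which is the qualitative pattern the significant education coefficient describes; each probability vector sums to 1 by construction.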
Qualitative insights
The open-ended responses, analyzed qualitatively, revealed several recurring themes reinforcing the quantitative findings. Participants frequently stressed the importance of human oversight, emphasizing that AI should complement rather than replace clinical expertise, judgment, and empathy. Data security and privacy emerged as central concerns, with some respondents worrying that insufficient safeguards could lead to breaches of patient confidentiality. Others highlighted the challenge of ensuring that AI tools maintain cultural and social sensitivity in diverse patient populations. Calls for incremental, well-regulated implementation of AI were common, as was the suggestion that education and ongoing professional development would be essential to ensuring clinicians use AI effectively and ethically.
In essence, while there is broad acknowledgment that AI holds promise for enhancing oncology practice, respondents also recognize the need for clear ethical standards, solid regulatory frameworks, comprehensive training, and thoughtful integration strategies.
Ascendion Wins Gold as the Artificial Intelligence Service Provider of the Year in 2025 Globee® Awards
- Awarded Gold for excellence in real-world AI implementation and measurable enterprise outcomes
- Recognized for agentic AI innovation through ASCENDION AAVA platform, accelerating software delivery and unlocking business value at scale
- Validated as a category leader in operationalizing AI across enterprise ecosystems—from generative and ethical AI to machine learning and NLP—delivering productivity, transparency, and transformation
BASKING RIDGE, N.J., July 7, 2025 /PRNewswire/ — Ascendion, a leader in AI-powered software engineering, has been awarded Gold as the Artificial Intelligence Service Provider of the Year in the 2025 Globee® Awards for Artificial Intelligence. This prestigious honor recognizes Ascendion’s bold leadership in delivering practical, enterprise-grade AI solutions that drive measurable business outcomes across industries.
The Globee® Awards for Artificial Intelligence celebrate breakthrough achievements across the full spectrum of AI technologies including machine learning, natural language processing, generative AI, and ethical AI. Winners are recognized for setting new standards in transforming industries, enhancing user experiences, and solving real-world problems with artificial intelligence (AI).
“This recognition validates more than our AI capabilities. It confirms the bold vision that drives Ascendion,” said Karthik Krishnamurthy, Chief Executive Officer, Ascendion. “We’ve been engineering the future with AI long before it became a buzzword. Today, our clients aren’t chasing trends; they’re building what’s next with us. This award proves that when you combine powerful AI platforms, cutting-edge technology, and the relentless pursuit of meaningful outcomes, transformation moves from promise to fact. That’s Engineering to the Power of AI in action.”
Ascendion earned this recognition by driving real-world impact with its ASCENDION AAVA platform and agentic AI capabilities, transforming enterprise software development and delivery. This strategic approach enables clients to modernize engineering workflows, reduce technical debt, increase transparency, and rapidly turn AI innovation into scalable, market-ready solutions. Across industries like banking and financial services, healthcare and life sciences, retail and consumer goods, high-tech, and more, Ascendion is committed to helping clients move beyond experimentation to build AI-first systems that deliver real results.
“The 2025 winners reflect the innovation and forward-thinking mindset needed to lead in AI today,” said San Madan, President of the Globee® Awards. “With organizations across the globe engaging in data-driven evaluations, this recognition truly reflects broad industry endorsement and validation.”
About Ascendion
Ascendion is a leading provider of AI-powered software engineering solutions that help businesses innovate faster, smarter, and with greater impact. We partner with over 400 Global 2000 clients across North America, APAC, and Europe to tackle complex challenges in applied AI, cloud, data, experience design, and workforce transformation. Powered by +11,000 experts, a bold culture, and our proprietary Engineering to the Power of AI (EngineeringAI) approach, we deliver outcomes that build trust, unlock value, and accelerate growth. Headquartered in New Jersey, with 40+ global offices, Ascendion combines scale, agility, and ingenuity to engineer what’s next. Learn more at https://ascendion.com.
Engineering to the Power of AI™, AAVA™, EngineeringAI, Engineering to Elevate Life™, DataAI, ExperienceAI, Platform EngineeringAI, Product EngineeringAI, and Quality EngineeringAI are trademarks or service marks of Ascendion®. AAVA™ is pending registration. Unauthorized use is strictly prohibited.
About the Globee® Awards
The Globee® Awards present recognition in ten programs and competitions, including the Globee® Awards for Achievement, Globee® Awards for Artificial Intelligence, Globee® Awards for Business, Globee® Awards for Excellence, Globee® Awards for Cybersecurity, Globee® Awards for Disruptors, Globee® Awards for Impact, Globee® Awards for Innovation (also known as Golden Bridge Awards®), Globee® Awards for Leadership, and the Globee® Awards for Technology. To learn more about the Globee Awards, please visit the website: https://globeeawards.com.
SOURCE Ascendion
Overcoming the Traps that Prevent Growth in Uncertain Times
July 7, 2025
Today, with uncertainty a seemingly permanent condition, executives need to weave adaptability, resilience, and clarity into their operating plans. The best executives will implement strategies that don’t just sustain their businesses; they enable growth.
AI-driven CDR: The shield against modern cloud threats
Cloud computing is the backbone of modern enterprise innovation, but with speed and scalability comes a growing storm of cyber threats. Cloud adoption continues to skyrocket. In fact, by 2028, cloud-native platforms will serve as the foundation for more than 95% of new digital initiatives. The traditional perimeter has all but disappeared. The result? A significantly expanded attack surface and a growing volume of threats targeting cloud workloads.
Studies tell us that 80% of security exposures now originate in the cloud, and threats targeting cloud environments have recently increased by 66%, underscoring the urgency for security strategies purpose-built for this environment. The reality for organizations is stark. Legacy tools designed for static, on-premises architectures can’t keep up. What’s needed is a new approach—one that’s intelligent, automated, and cloud-native. Enter AI-driven cloud detection and response (CDR).
Why legacy tools fall short
Traditional security approaches leave organizations exposed. Posture management has been the foundation of cloud security, helping teams identify misconfigurations and enforce compliance. Security risks, however, don’t stop at misconfigurations or vulnerabilities.
- Limited visibility: Cloud assets are ephemeral, spinning up and down in seconds. Legacy tools lack the telemetry and agility to provide continuous, real-time visibility.
- Operational silos: Disconnected cloud and SOC operations create blind spots and slow incident response.
- Manual burden: Analysts are drowning in alerts. Manual triage can’t scale with the velocity and complexity of cloud-native threats.
- Delayed response: In today’s landscape, every second counts. 60% of organizations take longer than four days to resolve cloud security issues.
The AI-powered CDR advantage
AI-powered CDR solves these challenges by combining the speed of automation with the intelligence of machine learning—offering CISOs a modern, proactive defense. Organizations need more than static posture security. They need real-time prevention.
Real-time threat prevention and detection: AI engines analyze vast volumes of telemetry in real time—logs, flow data, behavior analytics. The full context this provides enables the detection and prevention of threats as they unfold. Organizations with AI-enhanced detection reduced breach lifecycle times by more than 100 days.
Unified security operations: CDR solutions bridge the gap between cloud and SOC teams by centralizing detection and response across environments, which eliminates redundant tooling and fosters collaboration, both essential when dealing with fast-moving incidents.
Context-rich insights: Modern CDR solutions deliver actionable insights enriched with context—identifying not just the issue, but why the issue matters. It empowers teams to prioritize effectively, slashing false positives and accelerating triage.
Intelligent automation: From context enrichment to auto-containment of compromised workloads, AI-enabled automation reduces the manual load on analysts and improves response rates.
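The detection-and-automation loop described above can be sketched in miniature. This is a deliberately simplified, hypothetical illustration (a z-score check over per-workload event rates feeding a containment decision), not how Cortex Cloud CDR or any vendor product actually works; the workload names, telemetry values, and threshold are all invented for the example.

```python
import statistics

def flag_anomalies(baseline, current, z_threshold=3.0):
    """Flag workloads whose current event rate deviates sharply from a
    historical baseline. Real CDR engines use far richer behavioral models;
    this z-score check only illustrates the detect-then-contain pattern."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    flagged = {}
    for workload, rate in current.items():
        z = (rate - mean) / stdev if stdev else 0.0
        if z > z_threshold:
            flagged[workload] = z  # candidate for automated containment
    return flagged

# Hypothetical telemetry: outbound-connection counts per minute.
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
current = {"web-1": 14, "web-2": 13, "batch-7": 90}

suspicious = flag_anomalies(baseline, current)
```

Here only the workload with the sharp rate spike would be flagged, while normal fluctuations pass untouched—the same triage-reduction idea the article attributes to AI-enabled automation, at toy scale.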
The path forward
Organizations face unprecedented pressure to secure fast-changing cloud environments without slowing innovation. Relying on outdated security stacks is no longer viable. Cortex Cloud CDR from Palo Alto Networks delivers the speed, context, and intelligence required to defend against the evolving threat landscape. With over 10,000 detectors and 2,600+ machine learning models, Cortex Cloud CDR identifies and prevents high-risk threats with precision.
It’s time to shift from reactive defense to proactive protection. AI-driven CDR isn’t just another tool—it’s the cornerstone of modern cloud security strategy. And for CISOs, it’s the shield your organization needs to stay resilient in the face of tomorrow’s threats.