Tools & Platforms
Oops! Xbox Exec’s AI Advice for Laid-Off Employees Backfires

AI Compassion or Insensitive Overreach?
Edited By
Mackenzie Ferguson
AI Tools Researcher & Implementation Consultant
In a move that sparked controversy, an Xbox Game Studios executive at Microsoft suggested using AI prompts to help employees cope with the distress of layoffs. The post, intended to support emotional well-being, was quickly taken down following backlash from the public. Critics argue it highlights a disconnect between tech solutions and genuine human empathy, raising questions about the boundaries of AI in emotional spaces.
Introduction
The impact of technological advancements on employment continues to spark significant debate. Recently, an incident involving an Xbox Game Studios executive drew wide public attention. The executive suggested using AI prompts as a tool to help laid-off Microsoft employees manage the emotional stress of job loss. This suggestion, which was made publicly on social media, faced a swift backlash, leading to its subsequent deletion. The Times of India provides a detailed account of the controversy and the conversations it has sparked within both tech and human resources circles.
Background Information
The integration of artificial intelligence into various facets of life continues to spark diverse reactions, as illustrated by a recent event involving Xbox Game Studios. In a surprising move, an executive from the company suggested using AI prompts to assist laid-off employees at Microsoft in dealing with the emotional stress of their job loss. This suggestion was made in a post that was later deleted following public backlash. The details of this incident were covered extensively in an article by the Times of India.
The AI prompts suggested by the executive were intended as tools to help individuals navigate the challenging emotions that come with sudden unemployment. However, the suggestion was met with criticism, as many viewed it as an inadequate response to such a significant and personal issue. The Times of India outlines how this decision highlights a divide between technology’s potential to aid in personal matters and the human need for genuine support during difficult times.
This incident is part of a broader conversation about the role of technology in the workplace and its impact on mental health. As organizations increasingly rely on AI to manage various aspects of operations, the balance between technological efficiency and human empathy remains crucial. The situation involving the Microsoft employees and the AI prompts showcases the complexities of implementing technology in sensitive scenarios, as discussed in the Times of India article.
Impact on Microsoft Employees
The recent layoffs at Microsoft have had a significant impact on employees, both professionally and emotionally. As reported in a recent article, an Xbox Game Studios executive attempted to address the emotional distress among laid-off employees by providing AI-generated prompts. Despite the intention to offer support, the move was met with backlash from both the affected employees and the public, leading the executive to delete the post.
This incident exposes the complexities and sensitivities involved in handling layoffs, particularly in a tech giant like Microsoft, where employees often identify closely with their work. The reliance on AI prompts, intended to alleviate stress, was perceived as tone-deaf and lacking empathy. Such reactions highlight the importance of human-centered approaches during layoffs, where personalized support and understanding should take precedence over algorithmic solutions.
Public reaction to the use of AI to manage such a human-centric crisis underscores a broader concern about the impersonal nature of technology in addressing emotional needs. It serves as a reminder that advancements in AI should complement rather than replace genuine human interactions, especially in difficult times. Microsoft’s experience may prompt other companies to reassess their strategies when dealing with layoffs, ensuring they strike a balance between innovation and empathy.
Details of the AI Prompts
The concept of AI prompts extends beyond mere automation and into realms of emotional intelligence and psychological support. In a recent case, an Xbox Game Studios executive attempted to utilize AI prompts as a form of emotional assistance for employees recently laid off from Microsoft. The aim was to alleviate the psychological distress of job loss through tailored AI-generated messages. Unfortunately, this initiative sparked a backlash and led to the deletion of the original post, as reported by the Times of India. This incident highlights the delicate balance between technology and human empathy and raises questions about the appropriateness of AI in emotionally sensitive situations.
While AI can effectively manage repetitive tasks and predict outcomes based on data patterns, its role in managing human emotions remains contentious. The use of AI prompts in the context of layoffs demonstrates both potential and pitfalls – offering a unique way to communicate support but also risking appearing impersonal or insensitive. This scenario reported by the Times of India serves as a reminder of the importance of context and emotional intelligence in deploying AI in workplace communication.
The public reaction to using AI for managing layoff-related stress ranged from skepticism to outright criticism. Many viewed the approach as cold and inadequate in addressing the complexities of human emotion during such trying times. The mixed reactions underscore the broader societal dialogue on the limits of AI’s capabilities in replicating genuine human empathy. According to the report, this controversy may prompt further examination of how AI can be integrated sensitively into human resource practices without compromising the emotional well-being of individuals.
Looking ahead, the deployment of AI in sensitive areas such as layoffs will require more nuanced and ethically guided approaches. Innovations must consider not only the functional capabilities of AI but also its emotional and psychological impacts. As the incident with Microsoft suggests, the future of AI in workplaces will need to integrate robust ethical guidelines to ensure technology supports rather than replaces human touch.
Public Reactions to the Post
In the wake of a controversial post by an Xbox Game Studios executive, public reaction has been swift and predominantly negative. The executive had suggested that laid-off employees of Microsoft could use AI-generated prompts to manage the emotional distress of their job loss. This suggestion, which many perceived as insensitive, catalyzed a wave of backlash online. The post was seen as dismissive of the real and profound emotional impact of losing one’s job, prompting widespread criticism among netizens and industry observers alike.
The decision to delete the post following the backlash highlights the power of public opinion in shaping corporate communication strategies. Social media platforms, in particular, were rife with comments denouncing the tone-deaf nature of the suggestion. Users expressed a strong sense of empathy for the laid-off employees, arguing that AI cannot replace the human touch and emotional support needed during such challenging times. This incident underscores a growing wariness among the public regarding the reliance on AI for deeply personal and sensitive issues.
Moreover, the episode has prompted discussions about corporate responsibility and sensitivity, especially in communication related to layoffs and employee welfare. While technology like AI offers many advantages, the public’s reaction has highlighted a preference for human empathy and genuine support over automated responses. As reported by the Times of India, the pushback serves as a cautionary tale for executives and PR teams on the importance of thoughtful and humane communication.
Expert Opinions on Using AI for Emotional Support
The incorporation of AI in providing emotional support has garnered mixed reactions, with experts weighing in on both its potential and its shortcomings. Some industry leaders suggest that AI can offer a consistent, non-judgmental presence for individuals in distress, akin to an ever-available friend. However, the controversy surrounding its use is palpable, as demonstrated by the recent incident involving Xbox Game Studios. According to a report from the Times of India, an executive faced backlash for suggesting AI prompts to help laid-off employees manage emotional stress, only to retract the suggestion amid public outcry.
Experts emphasize that while AI can be programmed to detect emotional cues and offer tailored responses, its effectiveness is inherently limited by its lack of human empathy and understanding. The potential for AI to misinterpret emotions or offer inappropriate responses remains a significant concern, leading some to argue for its use only as a supplementary tool rather than a replacement for human interaction. The fallout from the Xbox Game Studios incident underscores this delicate balance, highlighting the need for careful consideration of AI’s role in such deeply personal contexts.
Looking ahead, the future of AI in emotional support is likely to involve more nuanced applications that combine technological precision with human oversight. Many in the field advocate for systems where AI assists in identifying individuals at risk, enabling human professionals to intervene more swiftly and effectively. Meanwhile, ethical considerations will continue to play a crucial role in shaping these technologies, ensuring that emotional well-being remains a priority in the development and deployment of AI solutions. This ongoing dialogue reflects a broader societal negotiation of technology’s place in our most private and sensitive spheres.
Microsoft’s Response to the Backlash
In the wake of recent layoffs at Microsoft, an executive from Xbox Game Studios faced significant backlash for attempting to aid affected employees with AI-generated prompts aimed at managing emotional stress. This effort, though possibly well-intentioned, was criticized widely as it seemed to overlook the gravity of the situation and the very real human emotions involved. Consequently, the executive deleted the contentious social media post not long after it sparked outrage.
In response to the backlash, Microsoft has acknowledged the sensitivity required in managing communications during layoffs. The company has emphasized its commitment to providing genuine support to its employees through more tangible measures, such as offering counseling services and career transition assistance. While the AI initiative was not intended to be the sole support mechanism, the episode highlighted the pitfalls of relying too heavily on technology in addressing deeply personal issues.
The incident has spurred discussions within the tech industry about the boundaries and responsibilities of AI in handling human emotions. Many experts argue that while AI can be a supportive tool, it should complement, not replace, human empathy and personalized support. This controversy may lead to Microsoft and other tech giants reevaluating their strategies to ensure that technology is applied in a manner that respects individual emotional experiences and augments human-led initiatives.
Future Implications of AI in Handling Emotional Stress
Artificial Intelligence (AI) is poised to play a transformative role in the way emotional stress is managed, particularly in situations involving job loss and career transitions. For instance, a notable incident involved a Microsoft executive at Xbox Game Studios who suggested using AI as a tool for coping with emotional stress following layoffs. This sparked a debate on the appropriateness and capabilities of AI in such sensitive situations. Although the suggestion was met with backlash, as reported by Times of India, it highlights a growing interest in leveraging technology to support mental health.
As AI technology continues to evolve, its potential future implications in addressing emotional stress are vast. AI-driven mental health aids could offer personalized support through virtual therapists, capable of providing a wide array of services from meditation guidance to cognitive behavioral therapy. These tools might help individuals navigate their emotional landscapes with greater ease and accessibility, potentially reducing the stigma associated with seeking mental health support.
Furthermore, the integration of AI in handling emotional stress could be particularly beneficial for high-risk groups, providing support in areas where human therapists are scarce or unavailable. By offering continuous monitoring and responsive feedback, AI might significantly alleviate stress and prevent more serious mental health issues from developing. However, it is crucial to address privacy concerns and ensure that these technological solutions are developed with ethical guidelines and cultural sensitivities in mind.
The future of AI in managing emotional stress also lies in its potential to revolutionize how organizations address employee wellbeing. Companies could implement AI solutions to proactively manage workplace stress, tailor support to individual needs, and foster a healthier work environment. Such initiatives could potentially enhance productivity and employee satisfaction, mitigating the adverse effects experienced during corporate restructuring or downsizing events, such as those experienced by Microsoft’s employees.
Conclusion
In light of the recent controversy surrounding the use of AI prompts to support laid-off employees at Microsoft, a reflective conclusion can be drawn on the role of technology in managing workplace challenges. The incident highlights a complex intersection between technological advancement and human sensitivity, illustrating that while artificial intelligence offers tools for efficiency and support, it is not a substitute for empathy and personalized human interaction. This nuanced situation underscores the need for companies to approach AI integration thoughtfully, ensuring that technology complements rather than replaces the human touch in emotionally charged situations.
The backlash following the original post by the Xbox executive serves as a cautionary tale about the potential repercussions of relying too heavily on AI for human-centric issues. As we move into an era increasingly dominated by technological solutions, it is crucial to maintain a balanced perspective. Ensuring that such tools enhance rather than detract from the human experience will be key to avoiding unintended negative reactions from the public and employees alike. This situation opens a broader conversation about the ethical lines in tech deployment, emphasizing the importance of sensitivity over mere functionality.
Future implications of this event may include more structured guidelines and ethical standards for the use of AI in handling employee relations and mental health issues. The public reaction to the event highlights a growing awareness and demand for transparent, considerate implementation of AI tools in the workplace. Companies might now be prompted to develop more comprehensive policies that address the emotional and psychological dimensions of workforce management, particularly in distressing scenarios such as layoffs.
Ultimately, the incident has sparked broader discussions on the role of AI in society, especially in contexts that traditionally require human empathy and understanding. As companies navigate these challenges, the importance of integrating ethical considerations into technological advancement becomes clear. Reflecting on this event offers valuable lessons for tech leaders and companies globally, reminding them to wield technology responsibly and with a mindful appreciation for its impact on human emotions.
How AI is undermining learning and teaching in universities

In discussing generative artificial intelligence (‘It’s going to be a life skill’: educators discuss the impact of AI on university education, 13 September) you appear to underestimate the challenges that large language model (LLM) tools such as ChatGPT present to higher education. The argument that mastering AI is a life skill that students need in preparation for the labour market is unconvincing. Our experience is that generative AI undermines teaching and learning, bypasses reflection and criticality, and deflects students from reading original material.
Student misuse of generative AI is widespread. Claims that AI helps preparation or research are simply cover for students taking shortcuts that do not develop their learning skills. Assessments are widely channelled through ChatGPT, disregarding universities’ usually feeble guidance and rules. Generative AI results in generic, dull and often factually incorrect output.
For example, we asked students to interpret a short article by Henry Ford from 1922. Many answers suggested that the autocratic and racist Ford was developing a “sophisticated HR performance management function for his business” and that he was a “transformational leader”.
In many degree programmes, LLMs have little to no practical value. Their use sabotages and degrades students’ learning and undermines critical analysis and creativity. If we are to make better sense of the impact of AI on work, education and everyday life, we need to be more sceptical and less celebratory.
Prof Leo McCann
Prof Simon Sweeney
University of York
Workday acquires Sana Labs for $1.1B to upgrade agentic AI work experiences

Human resources and finance software giant Workday Inc. today announced the acquisition of Sana Labs AB, an artificial intelligence company offering enterprise knowledge and employee training tools, for about $1.1 billion.
Workday also announced new AI agents for HR, finance and industry use cases in its Illuminate platform alongside a new developer platform, including a low-code agent builder that will allow customers to deploy custom AI agents.
Founded in 2016, Sana has focused on developing AI tools to enhance the knowledge and understanding of employees in enterprises. The company’s main products include Sana Learn, a coaching and feedback tool featuring an AI tutor, and Sana Agents, AI-powered knowledge assistants that generate insights and content from enterprise data.
“Sana’s team, AI-native approach, and beautiful design perfectly align with our vision to reimagine the future of work,” said Gerrit Kazmaier, president of product and technology at Workday. “This will make Workday the new front door for work, delivering a proactive, personalized, and intelligent experience that unlocks unmatched AI capabilities for the workplace.”
Sana Learn will be used to complement Workday Learning by adding hyper-personalized skill building to Workday’s already existing learning suite to help employees train faster. Sana Agents provide capabilities beyond traditional chatbots by adding the ability to automate repetitive knowledge tasks and act proactively on users’ behalf. AI agents can streamline day-to-day work by completing mundane tasks such as scanning email for highlights and catching up on reports.
According to Sana, its agents have led to increased time savings and productivity gains. For instance, an unnamed leading American manufacturer achieved up to 95% time savings, while a multinational industrial technology company experienced a 90% increase in productivity.
Workday upgrades its AI agents and work tools
In addition to today’s acquisition news, Workday also announced new AI agents, including a Financial Close Agent and Case Agent, purpose-built for complex business processes like performance reviews, planning and assisting with financial use cases.
The new agents are part of Workday Illuminate, Workday’s AI platform. The company said the new agents are “purpose-built for work,” embedded with their respective industry use cases and powered by deep insights into business data and context.
The company’s new HR agents are designed to help reduce the administrative burden associated with attracting, retaining and engaging talent. According to Workday, these agents will improve the employee experience and allow HR teams to concentrate on strategic initiatives by automating time-consuming processes.
New agents include a Business Process Copilot that automates the setup of new business procedures to reduce manual effort, the aforementioned Case Agent that automates administrative tasks to reduce resolution times for employee needs, an Employee Sentiment Agent that analyzes employee feedback and a Performance Agent that tracks data from enterprise applications to streamline reviews and recommend actions.
To assist finance teams, the company introduced agents specifically designed for reconciliation, testing and planning. These agents help business leaders adapt to changing situations with valuable analysis and improved decision-making capabilities.
These new agents include a Cost and Profitability Agent that allows users to define allocation for costs and revenue based on natural language, a Financial Test Agent that tests financials to detect fraud and enable compliance and the Financial Close Agent that automates the finalization of accounting records to retain accurate financial statements.
For use cases not covered by these agents and Workday’s already existing AI agents, the company today announced Workday Build, a new developer platform that gives customers and partners the power to create and deploy their own AI-powered solutions. It includes Flowise Agent Builder, a low-code tool that makes building agents on the company’s platform simple for both non-technical and advanced users.
“The era of one-size-fits-all enterprise software is over,” said Peter Bailis, chief technology officer at Workday. “With Workday Build, customers go from consuming AI to creating with it, giving them the power to build intelligent solutions directly on their most trusted people and financial data.”
All of these capabilities will be powered by Workday Data Cloud, a new data layer announced today that the company said will connect AI agents to business intelligence and operational systems. In addition, Workday also announced partnerships with Databricks Inc., Salesforce Inc. and Snowflake Inc., permitting zero-copy access to HR and finance data within these data storage platforms.
Digital Health Care Forum Live Updates: Leaders Talk Industry’s AI Future

As the demands on the health care industry grow, top health systems must invest in the integration of new technologies to support physicians and better care for patients.
Newsweek’s Digital Health Care Forum on Tuesday, September 16, 2025, invites health care leaders from top health systems across the country to New York City to share their strategies, challenges and impacts of recent technological innovations.
The forum, sponsored by Tecsys, Palantir and WelcomeWare, features a full day of programming that includes expert panels, research presentations, fireside chats and networking receptions that address the biggest challenges facing health care systems in the digital age.
- The forum is led by Newsweek’s Health Care Editor Alexis Kayser.
- The diverse slate of panels will discuss topics such as financing innovation, tech integration, virtual health care, artificial intelligence, governance and leadership in the digital age.
- Some notable speakers represent leaders in the industry, including Kaiser Permanente, Columbia University, Hospital for Special Surgery, Microsoft Health and Life Sciences, Statista, MD Anderson Cancer Center, Corewell Health and Northwestern Medicine.
- The full list of panels and speakers can be found here.
Technology helps combat challenges of health care in rural states
The panelists discussed the most important technological advancements they have integrated into their health systems.
David Callender of Memorial Hermann said the system is owned by the greater community of Houston, Texas, which is very supportive of its mission to improve overall health. Callender said Memorial Hermann engages with members of the community directly to help them understand what good health is and how to obtain it within their circumstances.
Brad Reimer from Sanford Health said the adoption of technology “has got to be targeted,” with the mission case and the business case coming together. He said Sanford’s mission is to sustain health care in rural America. As an example, the system has doubled down on its investment in virtual care and developed an AI model with its chronic kidney disease team that, together with electronic health records, has doubled the number of screenings and tripled the number of clinical diagnoses of chronic kidney disease.
Panelist says health systems can’t improve “without leveraging technology”
After the first networking break, Alexis Kayser is back on stage for her next panel, “The Business Case for Tech and Innovation,” which explores how hospital systems can adopt new technologies to drive efficiency and reduce costs.
Memorial Hermann Health System CEO Dr. David Callender, Fairview Health Services President and CEO James Hereford and Sanford Health CIO Brad Reimer share what has worked at their institutions to build a successful tech portfolio with a strong return on investment.
Fairview had a significant financial turnaround this past year. Hereford said the investment in technology played a major role in that success.
“We want to transform health care and you can’t fully do that without fully leveraging technology,” he said.
Reimer said pacing out tech deployments in the Sanford Health System has been a huge benefit. He said it allows the health system to do more pilot programs, reduce risk and pivot or bail out when they aren’t getting the outcomes they want.
“That’s much harder to do if you push that across a whole physician group or a full nursing group,” he said. “[We’re] trying to make sure that we’re taking a big-picture step back of how much change we’re introducing to the clinicians and to operations, and making sure that we’re not just peppering them with a bunch of uncoordinated things that don’t drive value.”
This approach has also helped with the recruitment of medical staff who expect the latest technology and advancements in hospitals.
First networking break begins
Attendees are now taking a short networking break before the next panel, The Business Case for Tech and Innovation, with speakers from Memorial Hermann Health System, Fairview Health Services and Sanford Health.
Koford says new cancer center is a “catalyst” for MSK’s mission
Kreg Koford said the new cancer center will address disparities in cancer care for underserved communities, translate research into clinical work, train the next generation of doctors and be a center for “impact-driven innovation” with “compassionate, personalized care.”
There will also be staff respite areas for clinicians to decompress from the high-stress environment, fall-prevention technology in patient rooms and improved digital displays and smart capabilities throughout the facility.
The guiding principles of the pavilion technology include:
- The patient is the focus
- Speed, stability and resilience in technology investment
- Using the most advanced, effective, efficient and compassionate care with flexibility and foresight to enable innovation
- Working as a team and using technology to improve collaboration among clinicians, patients and families
- Supporting team members
- Turning every interaction into insight by collecting data to improve outcomes and accelerate clinical trials and scientific discovery
He said the building serves as a “catalyst” for Memorial Sloan Kettering’s mission to provide care for everyone who needs it and “hopefully eradicate cancer and, if not, provide care to help patients recover.”
The pavilion is set to open in 2030.
MSK announced that its state-of-the-art cancer care pavilion currently under construction will be named The Kenneth C. Griffin Pavilion at Memorial Sloan Kettering Cancer Center.
— Memorial Sloan Kettering Cancer Center (@MSKCancerCenter) March 14, 2025
A look at Memorial Sloan Kettering’s newest cancer pavilion
Kreg Koford, the senior vice president of Real Estate Operations at Memorial Sloan Kettering Cancer Center, presented the hospital’s plan to build a new facility to address the anticipated increase in demand for care and to accommodate modern and future technology.
Koford said there are currently 40,000 new cancer cases in New York City each year, and that will increase to about 47,000 cases by 2030 and 60,000 by 2050.
To address this, MSK is building the Kenneth C. Griffin Pavilion on its main campus, located on the Upper East Side of Manhattan. The pavilion will house a new, state-of-the-art cancer care facility to accommodate the rising number of cancer cases each year.
The facility features 12 new operating rooms, 2,018 inpatient beds, single rooms for immunocompromised patients and the latest technology and cutting-edge robotics.
Panelists define what a good vendor partnership looks like in health care
The speakers on the Breaking Down Silos panel shared what they look for in outside vendors to ensure true partnerships.
They agreed that the partnership has to go beyond the financial transactions.
Simon Nazarian from City of Hope said the patient is always at the center of these decisions, and when you start with the financial, you can lose the reason why you’re engaging in the partnership.
“What will this [partnership] deliver to the patient and the health care industry overall?” he said.
At IU Health, Dennis Murphy said transparency is key with these vendor partnerships.
“Define accountability on both sides of the table,” he said. “We want to know if our team is not doing what they’re supposed to. We are okay with telling vendors, but we are not as receptive about the feedback for our own team.”
He also said that products are not static; they are dynamic. Good partners, he said, talk about what is next in the space. Going beyond the financial transaction means talking with partners about the next two or three things coming down the pike.

Newsweek Health Care Editor Alexis Kayser hosts the “Breaking Down Silos: Achieving True IT Integration in Health Care” panel during the Digital Health Care Forum on September 16, 2025, at One World Trade Center in…
Panelists discuss concerns over sharing patient medical data
Newsweek’s Alexis Kayser leads the first panel of the day, “Breaking Down Silos: Achieving True IT Integration in Health Care,” which tackles breaking down silos with tech integration.
Panelists include IU Health President and CEO Dennis Murphy, City of Hope Executive VP and Chief Digital and Technology Officer Simon Nazarian and Northwestern Medicine Chief Digital Executive and VP of Information Services Danny Sama.
Kayser asked the panel about patient data sharing, as people are worried about privacy and control over sensitive health data that could be used for education and research.
Sama said it comes down to whose data it is.
“It’s the patient’s. If the patient doesn’t want data to be used, that is their right,” he said.
He noted an ethical conundrum: Could this data lead to a medical breakthrough and would it be unethical not to use it? Sama said that the decision might be left up to the courts and government regulation.
“HIPAA needs updating for the modern system of how we use information,” he added. “Regulations might hinder progress more than helping it. But it is patient data, so it’s a tricky tightrope to walk.”
Murphy offered a different perspective, saying physicians need to take the time to explain to patients why the data is necessary to advance medical research.
“I don’t think people want to invest time to have those conversations with patients,” he said.
Murphy added that the main concerns among patients who are hesitant to share their data are fears of insurance costs going up, jeopardized employment and a desire for a return on investment if their data is used for major medical advancements.
Tina Freese Decker shares key behaviors to drive change in health care
Tina Freese Decker, the board chair of the American Hospital Association and president and CEO of Corewell Health, took the stage to share her opening remarks.
She shared the story of a patient with tremors who, after a focused ultrasound procedure, wrote her a letter, his first handwritten note in 30 years.
“He now can write a letter, he can now drink coffee without worrying about spills,” she said. “We totally changed his life and that’s why we’re here.”
She outlined overall challenges facing health care, including funding, affordability, an aging population and a shrinking workforce.
Health care has been slow to change, Freese Decker said, and there are five key behaviors needed to drive change:
- Take care of ourselves and each other
- Focus on mission and purpose: find the problem we are trying to solve and tie it back to the mission
- Be curious about the road ahead, which requires actively listening, communicating and seeking out different points of view
- Commit and own it: go to the higher rungs of the accountability ladder, where you find solutions and “make it happen”
- Make sure we deliver, and celebrate those successes
“This is how we do hard things, this is how we start to move forward,” she said. “We need to make sure that we’re doing those hard things, that we’re embracing the technology and artificial intelligence, that we’re bringing the hope to our teams, that we’re putting forward the discussions that we have and we’re owning it and making it happen.”
Newsweek’s health care editor outlines industry challenges in opening remarks
In her opening remarks, Newsweek’s Health Care Editor Alexis Kayser welcomes attendees and speakers, some of whom traveled from California, Texas and South Dakota, and others internationally from Mexico, Spain, Belgium and Colombia.
Kayser likens the current state of the health care industry to the Charles Dickens quote: “It was the best of times, it was the worst of times.”
“Where we sit, in the United States, health systems are up against funding cuts and rising costs,” she said. “Our population is getting older and they’re getting sicker. Policies, waivers and regulations are up in the air. And patients’ trust doesn’t come as easily as it used to.”
But, Kayser added, advancements in technology like AI and predictive analytics have the potential to turn things around.
“The people in this room are the people who are going to get us there,” she said. “The discussions we have in this room should help make that path a little clearer.”
Digital Health Care Forum to feature panels, fireside chats, presentations
Attendees are arriving at Newsweek’s headquarters in New York City for the Digital Health Care Forum: Sculpting a Digital Future.
The event will kick off with opening remarks from Newsweek’s Health Care Editor Alexis Kayser and Tina Freese Decker, the president and CEO of Corewell Health, at 10 a.m.
A full day of panels and fireside chats will follow, including:
- A “State of the Industry” presentation with Newsweek’s Global Head of Research and Statista
- Discussions about aligning tech and financial investments with strategic planning
- Fireside chats with senior leadership from Tecsys and Palantir
- A panel about change management from the perspective of chief medical and nursing information officers
- Presentations from Memorial Sloan Kettering Cancer Center and UMass Memorial Health
- A look at telehealth and remote patient monitoring technologies
- A review of AI advancements, use cases and challenges in top hospital systems
- Advice from top hospital systems about taking “healthy risks”
- A spotlight on fostering trust and collaboration from Newsweek CEO Circle members
The full schedule of events can be found here.