I was fortunate to attend the HLTH 2024 conference in Las Vegas, hailed as a premier event for health innovation and transformation. The event, hosted by The Venetian, required a daily walk through a casino, past bright, attention-grabbing slot machines and through a haze of cigarette smoke — a strikingly ironic path to a health-focused conference, and one that foreshadowed the tensions I observed.
HLTH 2024's main stage headlined companies championing AI in healthcare, while the event also served as a backdrop for medical professionals voicing their experiences with, and concerns about, these developing technologies.
Although tech companies advocated for AI’s transformative promise, practicing physicians, along with their chief medical and information colleagues, voiced fundamental questions about its readiness. Their questions were clear, practical, and rooted in the realities of daily healthcare. During a panel discussion on the implications of AI-based technology in healthcare delivery, Dr. Shaun Miller, Chief Medical Information Officer at Cedars-Sinai, posed several questions to the health system’s technology partners:
- “How do we vet AI solutions within our health systems?”
- “How do they work?”
- “Can I trust the results?”
- “Will we get an ROI?”
Ultimately, these questions revolve around a single core issue: whether these technologies can actually help healthcare professionals provide better care for patients.
After attending panels and presentations, and participating in conversations on the event floor, it became clear to me that physicians and nurses aren’t asking for AI; rather, they’re asking for more time to focus on patient care and relief from the administrative demands that drain and distract their energy and attention.
This divide points to the urgent need for a human-centered design approach when applying healthcare technology, one that places physician, nurse, and patient needs at the forefront, rather than leading with AI as a cure-all.
Navigating the Tensions Between Care and Technology
As tech companies emphasized advancements such as harnessing generative AI for patient insights and operational efficiencies, physicians shared in panel discussions their unanswered questions about the practicality and trustworthiness of these tools. Their comments reinforced that healthcare technology must meet high standards for safety, reliability, and usability — standards that, by their account, the AI-based solutions under discussion were not meeting.
Over the course of the event, five primary tensions started to emerge:
1. Speed to Market versus Caution in Deployment
The speed-to-market imperative continues to drive the tech sector, but healthcare operates under different stakes. Startups and established tech companies push for quick releases to capture market share, but healthcare professionals caution that rapid deployment without thorough vetting risks patient safety and institutional trust. The rush to establish competitive advantage often ignores the need for meticulous testing and regulation, eroding public trust in healthcare systems. One example: AI-powered transcription tools already in use in hospitals have been reported to invent statements that no one ever said.
2. Data Access and Integration versus Privacy and Control
There was frequent mention of AI-based technologies being uniquely suited to integrate data from multiple sources to generate insights. However, healthcare practitioners are justifiably cautious: Which datasets are integrated and why, who has access, and how can we ensure patient privacy is upheld?
For many, the notion of extensive data integration raises questions about privacy risks, oversight, and control, especially given the sensitive nature of patient information. During a panel on the potential for wearable technology to play a more active role in patient health, panelists Florence Comite, MD, of the Comite Center for Precision Medicine & Healthy Longevity, and Jason Oberfest, VP of Healthcare at Oura, highlighted the growing need to integrate data securely into EHRs, as well as the tension between using valuable data and addressing privacy concerns, especially when that data is aggregated across multiple platforms.
3. Transparency versus Institutional Risk and Liability
Healthcare leaders seemed wary of the liability tied to AI’s “black box” nature — algorithms that deliver insights without clear transparency on their inner workings. The risk to a healthcare institution’s reputation, finances, and legal standing looms large if these tools fail to perform reliably. (Take a deeper look into the evolving landscape of AI regulations and ethics in healthcare.) Can AI solutions provide consistent and trustworthy outcomes, or do they introduce unpredictable risks? Without clarity and assurances demonstrated through testing, hospitals and healthcare systems hesitate to place their trust, and resources, in these technologies.
4. Patient Safety versus Technological Advancement
As patient advocates, healthcare professionals are cautious about any technology that may compromise safety. Although regulations like HIPAA and GDPR provide frameworks, these only partially address concerns. The risk of patient data breaches or misinterpreted AI insights makes physicians understandably skeptical. This tension underscores the responsibility AI developers have to build safeguards, ensuring that technological advancements don’t inadvertently expose patient data.
5. Promises versus ROI and Practical Utility
For healthcare institutions, technology investments must justify themselves in terms of clinical outcomes, operational efficiencies, and ultimately, financial returns. Physicians and chief information officers exploring AI-based products questioned whether these solutions could improve patient care, streamline processes, or reduce administrative burden, let alone deliver a positive ROI. Without evidence of clear benefits within this industry, companies’ AI-based products remain a “solution” in search of a problem.
The Opportunity for a Human-Centered Design Approach
What physicians are asking for is not more technology but rather more time and energy to focus on patient care. By leading with these patient-centered needs, new tools and workflows can increase efficiencies, lower cognitive loads, and empower healthcare professionals to deliver the type of care they want to provide.
Based upon numerous conference discussions, here are three opportunity areas for bridging this gap:
- Focus on healthcare provider needs over tech-first approaches. Rather than selling AI as the next healthcare breakthrough, solution providers should look at the genuine needs of healthcare professionals. Reducing administrative burdens, providing intuitive tools, and freeing up physicians’ time to focus on patients would resonate far more than generalized AI promises. Dr. Minal Bakhai, Director of Primary Care Transformation at NHS England, shared during a panel that she often juggles 15 applications, each with separate logins, in service of assessing and managing care, while trying to maintain eye contact and focus on her patients. This constant digital distraction diminishes her attention, adding cognitive load rather than easing her workload. How might technology alleviate rather than add to these burdens so that physicians can focus on patients — not another app or browser tab?
- Address institutional and patient safety concerns. Hospital systems need to ensure frameworks assess the risks that come with adopting AI, particularly in areas like data handling, privacy, and system integration. Without this, institutions face potential vulnerabilities that could compromise their reputation and patient trust.
- Develop trustworthy vetting processes. Healthcare providers need clear methods to understand and validate AI tools within their own systems. Vetting should verify that these tools are consistent, accurate, and reliable, allowing physicians to trust that AI will support, not hinder, patient care.
Aligning Innovation with Healthcare’s Real Needs
Ultimately, AI’s success in healthcare hinges on aligning innovation with real-world needs. Physicians aren’t asking for AI — they’re asking for fewer distractions, more patient-centered time, and tools that enhance, rather than complicate, their work.
Looking for patient-centered healthcare solutions? Contact us to learn how our approach focuses on the real needs of providers and patients.