The future of AI in healthcare isn’t just about innovation. It’s about impact.
Artificial intelligence has begun to redefine what’s possible in healthcare, from streamlining workflows to enhancing diagnostics. But amid the promise, one reality looms large: AI can scale bias just as easily as it can scale innovation.
At the HIMSS26 Global Health Conference & Exhibition, global leaders from across the health ecosystem will gather to address one of the most urgent challenges in digital health – how to make AI ethical by design and equitable by default. This means tackling structural inequities, establishing ethical guardrails, and delivering outcomes that serve all patients, not just the data majority.

The time for ethical AI isn’t down the road – it’s right now. And it starts with leaders like you.
Bias at Scale: A Real and Present Risk
AI isn’t neutral. It reflects the data it’s trained on, and in healthcare, that often means reproducing disparities rooted in history, access, and representation.
“If the training set is biased, the machine would put bias at scale, which is a problem,” said Ran Balicer, Deputy Director General and Chief Innovation Officer at Clalit, during the HIMSS25 session The Future of AI in Healthcare.
This warning is backed by hard evidence. A 2024 Nature Medicine study found that both AI tools and physicians were significantly less accurate when diagnosing skin conditions on patients with darker skin tones, highlighting how bias persists without diverse data.
Read the study
And the consequences are real. A 2023 JAMA Internal Medicine study revealed that pulse oximeters overestimated oxygen levels in Black patients three times more often than in white patients, causing more than 25% to miss eligibility for lifesaving COVID-19 treatments.
Read the study
These findings make one thing clear: bias in healthcare AI isn’t speculative – it’s measurable, recurring, and deeply consequential. Without action, these gaps could widen, not close.
Shift from Risk to Responsibility
While AI in healthcare races forward, regulation is still catching up. That leaves health systems facing new levels of accountability.
“We are walking into a regulatory vacuum and are responsible in ways we may not have been before,” said Ran Balicer. “It’s up to us… We are the last stand.”
That means now is the moment for leaders to get ahead of regulation and establish strong internal structures for AI governance. Global models are emerging: the EU AI Act, passed in 2024, introduces risk-based requirements for health applications. The WHO released new Guidance on Ethics & Governance of AI for Health in early 2025.
But ethical implementation requires more than policy. It requires people. Diverse, cross-functional teams are essential for anticipating unintended consequences.
“Getting a diverse group at the table is something I want to emphasize. We need ethicists, legal, clinicians, and nurses on these committees to really talk through what those governance structures look like,” said Amy Zolotow, Co-Founder and Speaker at quantumShe, during the HIMSS25 session Synaptic Sync – Building Strategic Technology Partnerships for Effective AI Integration: Technology Panel.
This message echoes insights from the HIMSS25 AI White Paper, where contributors like Brian Spisak, Mark Sendak, and others emphasized the importance of multidisciplinary coalitions and transparent oversight frameworks.
Download the FREE AI White Paper
The HIMSS Global Health Conference & Exhibition emphasizes multi-stakeholder collaboration in digital health transformation, and that model will be critical as AI enters high-stakes domains like risk scoring, triage, and diagnostics.
Rebuilding Around the Provider-Patient Relationship
Ethical AI isn’t only about technical fairness. It’s also about restoring humanity to care delivery. For many organizations, that means redesigning systems to empower clinicians and refocus on patient needs.
“We wanted to see what primary care would look like if that was the only thing we were doing… not just a referral source to the hospital,” said David Banks, Chief Strategy Officer and Senior Executive Vice President at AdventHealth, during the session From Data to Dialogue: Transforming Charts into Patient-Provider Relationships. “[And] that drove how we thought about tools and technology.”
Banks’ team intentionally separated their primary care division from hospital leadership, freeing it to prioritize provider satisfaction, better tools, and direct patient care rather than downstream referrals. This restructuring also shifted how AI and digital tools were evaluated: not just for efficiency, but for their ability to strengthen clinical relationships.
At HIMSS26, leaders will explore more patient-centric design approaches.
Building Organizational Readiness and Trust
Ensuring fairness in healthcare AI isn’t just about patients. It’s also about the people delivering their care. When healthcare workers feel excluded from AI implementation or fear being replaced by automation, trust erodes and adoption stalls. True organizational readiness means treating clinicians, nurses, and operational staff as core stakeholders – not afterthoughts.
“If they are part of the process, they don’t see AI as anything other than something that’s going to make them better,” said Scott Hadaway, Field Chief Technology Officer and Senior Executive Enterprise Architect for the Healthcare and Life Sciences (HCLS) industry at ServiceNow, during Navigating AI Integration through Change Management and Workforce Inclusion: People Panel Discussion at HIMSS25.
Ethical AI must support, not displace, the expertise of frontline professionals. Their clinical insight, contextual judgment, and experience with real-world workflows are critical to building tools that truly enhance care.
Increasing Emphasis on Guardrails for Data and Vendors
Fairness in healthcare AI depends not only on who builds the models, but also on how they’re trained, monitored, and contracted. As health systems increasingly rely on third-party tools, new risks are emerging around model drift, opaque retraining practices, and data use beyond intended purposes.
“How are you classifying and quantifying risk within your organization? What are the implications for data use? How do we figure out what appropriate data use agreements are?” asked Brenton Hill, Head of Operations and General Counsel at Coalition for Health AI (CHAI), during the Lead Your AI or It Will Be Leading You: Process Panel Discussion at HIMSS25. Hill also pointed to ongoing monitoring as a key component.
Vendors may request access to retrain models on provider data – sometimes without clear terms on downstream use. That puts frontline institutions in a vulnerable position, especially when patients’ privacy, equity, and outcomes are on the line.
The ONC’s 2025 Health IT Certification Criteria now require transparency around AI-enabled tools, but enforcement remains limited.
Until federal policy catches up, health systems must take the lead:
- Build robust internal governance to evaluate vendor claims
- Insist on data use agreements that define limits and accountability
- Establish ongoing model monitoring systems – especially for performance across demographic groups
Fair AI isn’t just about responsible development. It’s about responsible procurement.
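To make the monitoring recommendation above concrete, here is a minimal sketch in Python – hypothetical validation data and an illustrative 5-point tolerance, not any specific vendor’s tooling. It compares each demographic group’s current accuracy against a baseline snapshot and flags groups that have drifted:

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute accuracy per demographic group from (group, y_true, y_pred) records."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

def drift_alerts(baseline, current, tolerance=0.05):
    """Flag any group whose accuracy fell more than `tolerance` below its baseline."""
    return [g for g in baseline
            if g in current and baseline[g] - current[g] > tolerance]

# Hypothetical snapshots: (group, true label, model prediction)
baseline_records = [("A", 1, 1), ("A", 0, 0), ("B", 1, 1), ("B", 0, 0)]
current_records  = [("A", 1, 1), ("A", 0, 0), ("B", 1, 0), ("B", 0, 0)]

baseline = subgroup_accuracy(baseline_records)  # {"A": 1.0, "B": 1.0}
current  = subgroup_accuracy(current_records)   # {"A": 1.0, "B": 0.5}
print(drift_alerts(baseline, current))          # → ['B']
```

The key design point is that performance is never reported as a single aggregate number: a model can hold steady overall while quietly degrading for one group, which is exactly the failure mode subgroup monitoring exists to catch.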
Equity-Driven Metrics: What Does Success Look Like?
Building ethical AI is only half the challenge; measuring its impact is just as critical. Without transparent, equity-focused evaluation, systems risk reinforcing the same disparities they aim to eliminate.
Yet a 2025 Health Affairs review found that fewer than 10% of clinical AI studies reported performance across demographic subgroups, leaving major blind spots in fairness analysis.
To ensure AI delivers on its promise, health systems must track:
- Bias reduction across race, ethnicity, language, and socioeconomic status
- Access improvements for underserved populations
- Patient experience outcomes that reflect trust and satisfaction
Without equity metrics, ethical AI is just aspiration. Measurement makes it real.
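One simple, widely used equity metric is the demographic-parity gap: the spread in how often a model flags patients from different groups. The sketch below uses hypothetical triage decisions – the group labels and data are illustrative, not drawn from any real system:

```python
from collections import defaultdict

def selection_rates(records):
    """Share of patients in each group whom the model flags for intervention."""
    flagged, totals = defaultdict(int), defaultdict(int)
    for group, was_flagged in records:
        totals[group] += 1
        flagged[group] += int(was_flagged)
    return {g: flagged[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Demographic-parity gap: max minus min selection rate across groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical triage output: (group, model flagged patient for follow-up?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = selection_rates(decisions)   # A: 2/3, B: 1/3
print(round(parity_gap(rates), 2))   # → 0.33
```

A large gap is not automatically unfair – base rates can legitimately differ – but an unexplained gap is exactly the kind of blind spot the subgroup-reporting studies cited above warn about.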
At HIMSS26, expect deeper conversations around equity benchmarking, patient experience measurement, and ethical return on investment.
Looking Ahead: A Smarter, Fairer Future
The future of AI in healthcare will be shaped by technologies like:
- Federated learning: enabling collaboration without compromising patient privacy
- Explainable AI: making black-box models more transparent and trustworthy
- Synthetic data generation: helping fill gaps in historically underrepresented populations
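As an illustration of the federated learning idea above, here is a minimal sketch of federated averaging – hypothetical weight vectors standing in for locally trained models, not a clinical implementation. Each site trains on its own patients and shares only model weights; a coordinator averages them, so raw patient data never leaves a site:

```python
def federated_average(site_weights):
    """Average model weights from multiple sites; only weights are shared, never raw data."""
    n_sites = len(site_weights)
    n_params = len(site_weights[0])
    return [sum(w[i] for w in site_weights) / n_sites for i in range(n_params)]

# Hypothetical weight vectors trained locally at three hospitals
site_weights = [
    [0.2, 0.8],   # Site 1
    [0.4, 0.6],   # Site 2
    [0.6, 0.4],   # Site 3
]

print([round(w, 2) for w in federated_average(site_weights)])  # → [0.4, 0.6]
```

Production systems add rounds of local retraining, weighting by site size, and privacy protections such as secure aggregation, but the core exchange is exactly this: weights travel, patient records do not.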
But tech is only part of the solution. Progress depends on shared standards, global collaboration, and a commitment to health equity across every layer of the system.
Join Us at HIMSS26 to Lead the Future of Ethical AI
Bias in healthcare AI isn’t just a technical issue. It’s a leadership imperative. At HIMSS26, you’ll connect with the decision-makers, technologists, clinicians, and equity advocates working to make ethical, inclusive innovation real.
- Save the Date for HIMSS26
- Download the FREE AI White Paper
The time for ethical AI isn’t down the road – it’s right now. And it starts with leaders like you.
HIMSS26: March 9–12, 2026, in Las Vegas, Nevada