Healthcare, like nearly every industry, is looking for ways to embrace and evaluate the use of artificial intelligence, from the mundane to the imaginative, to shape and improve patient care.
But how do we ensure AI implementations make sense and provide value?
Making AI Matter
Many organizations think the approach is to form a standing committee and create a separate AI approval process. This instinct is understandable - and wrong.
The question is not whether to implement AI - it is how to ensure selected projects that utilize AI (or any other technology) are strategically aligned, operationally sound, and measurably beneficial. AI will not create value simply because it is governed. It creates value only when it is intentionally applied to solve clinical and business problems.
Organizations that treat AI as a separate initiative risk never achieving meaningful value.
Therefore, it’s not about establishing AI governance; it’s about building effective operational governance that accelerates strategy while incorporating meaningful technological advances.
Start with Strategy
An effective AI governance approach begins with strategy - not structure.
Healthcare systems already face a constant influx of project proposals and investment opportunities. Operational governance processes should already exist to evaluate requested initiatives and projects based on organizational priorities, financial discipline, and operational readiness. AI should not bypass or duplicate those structures.
Developing an AI strategy begins with establishing criteria and expectations for the deployment, validation, education, and use of AI technology within the organization. A task force is a great way to engage resources from across the enterprise to determine the appropriate organizational policies. These established policies and procedures should be embedded into existing organizational governance structures.
As reported by Becker’s Hospital Review, Cincinnati-based Christ Hospital Health Network recognized the importance of avoiding parallel approval pathways for projects involving AI technology. Rather than building a separate governance track, they integrated AI policies directly into their existing enterprise decision-making framework.
By following this approach, Christ Hospital avoids chasing the next “shiny new technology” and stays focused on work that aligns with its strategy and delivers measurable value.
What Not to Do
AI governance is not a committee - it’s a leadership discipline. A common mistake organizations make when approaching AI strategy is creating a separate standing AI governance committee responsible for approving AI-related projects.
Establishing a separate, independent governing body to approve AI initiatives may seem advantageous, but it leads to unintended consequences:
- Seeking out and approving projects simply to “use AI,” rather than addressing clearly defined organizational needs.
- Dividing operational and strategic focus and stretching already constrained resources.
- Slowing decision-making by sending project requests through multiple committees for approval - creating bureaucratic, inefficient, resource-intensive processes.
- Increasing technical debt.
- Creating silos and knowledge gaps instead of fostering enterprise-wide understanding of how AI technology should be applied.
Don’t look for problems to justify AI projects. Instead, evaluate clinical and business needs, then determine whether AI and/or other technology supports meaningful solutions. Technology alone does not create value. Operational improvement enabled by the right technology does.
The Right Approach
Organizations should take a page from Christ Hospital’s playbook:
1. Establish clear “rules of the road” for implementing and using AI technology.
- Define expectations for validation, risk assessment, data governance, performance monitoring, ethical considerations, and education related to AI tools.
2. Integrate and embed these policies into existing governance frameworks.
- Requests for AI-enabled projects should flow through the same prioritization, budget review, compliance, and operational evaluation processes as any other requested initiative.
3. Educate decision-makers.
- Equip the same leaders already responsible for evaluating and deciding organizational initiatives and projects with an understanding of AI’s capabilities, limitations, and risks along with organizational policies. Informed leaders can then appropriately assess AI-enabled solutions within existing operational governance structures.
This approach is echoed by Aidoc in their article “Why You Don’t Need an AI Governance Committee” and further supported by the Joint Commission (JCAHO) and the Coalition for Health AI (CHAI) in “The Responsible Use of AI in Healthcare”.
When governance is integrated rather than isolated:
- Policies are clearly established, communicated, and consistently applied.
- Decision-makers are better equipped to identify AI-enabled solutions to real problems rather than searching for problems to justify the use of AI.
- Focus remains centered on organizational priorities and the most effective methods for achieving them.
As AI interest accelerates across healthcare, success must ultimately be measured by the value operational governance delivers, not the number of AI projects launched. Clear governance decision structures ensure organizational policy is followed during investment and project selection, provide fiscal and strategic discipline, prevent duplicative or conflicting investments, and support enterprise adoption.
Keeping Pace with Change
AI, like most technology, is constantly evolving. Engaging a task force to regularly review organizational AI policies and evaluate them for updates ensures the organization stays informed, adapts to emerging capabilities, and is able to responsibly incorporate evolving AI technology into decision-making.
Additionally, MAKE Solutions’ article “From Hype to Trust: Successfully Implementing AI in Healthcare” highlights a practical framework for responsible AI deployment through safeguards grounded in validation, education, and workflow integration.
How MAKE Solutions Can Help
By embedding responsible AI practices into existing decision-making structures - rather than building parallel bureaucracy - organizations ensure that innovation aligns with organizational strategy and operational need, and investments produce sustainable long-term value.
Along with our TransIT tool - which supports demand intake and management, project management, integrated workflows, and process testing validation - MAKE Solutions’ team of experts has over 30 years of experience working with organizations to optimize existing governance frameworks.
If your organization is looking to optimize its existing decision-making structures to responsibly operationalize AI technology, visit www.makesolutionsinc.com today.
Talk to Our Experts
You don't have to figure this out on your own. Get the help you need to ensure better outcomes and be confident in your approach.
