The VERTESS team recently gathered in Austin for our annual firmwide meeting. As always, the opportunity to sit in the same room and share successes, tackle challenges, and brainstorm ways to improve was invaluable. These conversations consistently surface pain points that don't always show up in reports or dashboards. They also reinforce how important it is to step back from day-to-day responsibilities, come together as a team, and think strategically about how we work and support clients.
As someone who wears multiple hats within VERTESS, I try to pay close attention to how different roles experience their workflows. As at many companies today, expanding our use of artificial intelligence has been a recurring topic over the past few years. In preparation for this year's meeting, I took a deeper dive into how AI might be better leveraged across our team in a thoughtful, practical way.
I'll be the first to admit that I am not the most tech-savvy person in our firm (a fact my Gen Z children are always happy to point out). But that perspective turned out to be helpful. Approaching AI from a relatively basic user standpoint allowed me to focus less on the technology itself and more on identifying realistic leverage points: the areas where AI could meaningfully reduce friction, save time, or improve consistency without adding unnecessary complexity.
For healthcare business owners who have already embedded AI across multiple processes, congratulations! You are well ahead of the curve. In my experience, however, most healthcare leaders are still in the exploration phase. That has certainly been true at VERTESS. Some of our team members use AI daily, whether to organize projects, reduce time spent writing or proofreading, summarize lengthy documents, or support research and analysis. Others use it more casually, or not at all. What became clear was that adoption was inconsistent and largely unstructured.
My goal as I prepared to speak to our people in Texas about the use of artificial intelligence was not to turn everyone into an AI expert and super-user overnight, but rather to identify specific areas where standardized AI tools could support multiple roles and shared responsibilities across the firm. To do that, I started by asking each team member a simple question: What parts of your job feel the most time-consuming, frustrating, or inefficient? The answers were eye-opening: a long list of tasks, many of which I hadn't previously recognized as bottlenecks.
While it was tempting to imagine AI as a magic solution to all these challenges, that mindset is neither realistic nor responsible. Instead, we recognized the need to take a measured, pragmatic approach to implementation.
Although the areas we focused on were specific to VERTESS, the framework we used can serve as a helpful 10-step checklist for today's healthcare owners and managers who are trying to determine the best ways to introduce and expand AI usage within their own organizations.
How to Put Healthcare AI Into Practice: 10 Steps to Follow
1. Identify the real pain points
Effective AI adoption begins with a clear understanding of where operational friction already exists. In healthcare organizations, these challenges are often found in administrative workflows rather than clinical care. Documentation requirements, reporting, intake coordination, credentialing, and internal communication frequently take up significant time without improving outcomes. Challenges related to patient data handoffs, referral tracking, and quality reporting often compound these issues, particularly when information lives across multiple systems or formats.
Organization leaders should engage staff directly, as I did, to understand which tasks feel most inefficient or duplicative. In our work at VERTESS, we have seen clients uncover substantial hidden bottlenecks simply by asking frontline and support teams where work slows down or breaks down most often.
2. Start with low-risk, high-impact use cases
Once pain points are identified, the next step is prioritization. Not every problem is appropriate for AI, particularly in healthcare environments where accuracy and compliance are critical. Early use cases should be clearly defined, low risk, and focused on administrative or analytical tasks rather than clinical decision-making. Common starting areas may include condensing long documents, organizing internal data, drafting first-pass content, or supporting research and analysis. In healthcare settings, this often looks like summarizing payer contract language, reviewing vendor proposals, or synthesizing patient satisfaction and access metrics for leadership review.
Organizations that begin with contained, low-risk applications tend to build confidence more quickly and avoid unnecessary disruption.
3. Choose tools with healthcare constraints in mind
Tool selection requires more than evaluating features or ease of use. Healthcare leaders must consider how data is handled, stored, and potentially reused. Even when AI is applied to non-clinical tasks, sensitive information such as protected health information, payer and financial data, or provider credentialing details can be introduced unintentionally.
Organizations should work with compliance, legal, and IT stakeholders to understand privacy implications and confirm alignment with internal policies and regulatory requirements. HIPAA considerations, data retention practices, and vendor transparency should be addressed before tools are rolled out and broadly adopted.
4. Secure meaningful team buy-in
AI initiatives, like most new technology projects, succeed or fail based on adoption. In healthcare organizations, comfort with technology varies widely by role, tenure, and function. Buy-in is more likely when AI is presented as a practical support rather than a transformational mandate.
Leaders should focus on demonstrating how specific tools reduce rework, save time, or improve consistency in day-to-day tasks. We have observed that healthcare organizations achieve stronger adoption when AI is positioned as a support for existing workflows rather than a replacement for processes or people.
5. Provide structured, ongoing training
Initial training on any AI tool is necessary, but don't expect it to be sufficient. Organizations will benefit from clear guidance on how and when AI tools should be used. This includes setting expectations around human review, defining appropriate inputs, and acknowledging limitations. Training should be practical and role-specific, with opportunities for follow-up as workflows evolve and as questions, and perhaps concerns, arise.
In addition, regular check-ins with staff will help reinforce responsible use, encourage questions, and prevent misunderstandings that could create risk or reduce usage over time.
6. Establish clear guardrails and accountability
AI outputs should never be treated as final without oversight. While the technology is impressive and improving rapidly, it remains far from perfect. Organizations need explicit guardrails that define where human judgment is required and who is responsible for review.
This is especially important in healthcare, where errors can carry regulatory, financial, reputational, and even clinical or safety consequences. Clear accountability protects both the organization and individual users, ensuring AI remains a support tool rather than an unchecked decision-maker.
7. Test and validate before scaling
Early success does not guarantee long-term reliability. AI tools often require refinement as inputs change or as users grow more comfortable pushing the tools' capabilities. Healthcare organizations should test outputs carefully, validate accuracy, and monitor for unintended consequences before expanding use cases. This is particularly important when AI is used for tasks like summarizing payer policies, reimbursement updates, regulatory guidance, or vendor agreements, where subtle errors or omissions can carry financial or compliance implications.
In our experience, organizations that take time to validate early results are better positioned to scale safely, responsibly, and sustainably.
8. Standardize where it adds value
As usage grows, inconsistency can undermine results. Developing shared prompts, templates, or workflows can improve reliability and efficiency across teams. Standardization can be even more valuable in multi-site healthcare organizations or those with diverse functional roles. The goal here is not to restrict flexibility but to reduce unnecessary variation that can lead to errors or inefficiencies.
9. Recognize where AI should not be used
Part of responsible adoption is knowing when not to apply AI. Certain decisions, especially those involving patient care, utilization review determinations, employment actions, or regulatory interpretation, require human expertise and judgment. Leaders should be explicit about these boundaries to avoid confusion and overreliance on automated outputs.
10. Reassess and expand thoughtfully over time
AI implementation is not a one-time event, especially given the rapid pace at which new solutions are coming to market. As healthcare organizations mature in their use of these tools, new opportunities will emerge. Periodic reassessment allows leaders to expand into additional areas while maintaining appropriate controls. Organizations that revisit their AI strategy with intention are more likely to achieve lasting benefits without compromising quality or compliance.
Turning AI Into a Tangible Healthcare Advantage
There is no magic bullet when it comes to AI implementation. Successfully identifying appropriate solutions and then integrating these tools requires time, due diligence, experimentation, staff engagement and education, and a willingness to learn what works — and what doesn't — and to make changes accordingly. That said, healthcare organizations that fail to explore AI thoughtfully risk falling behind competitors already leveraging it to operate more efficiently and effectively.
AI is not a panacea, but when implemented with intention, training, and oversight, it can become a powerful support system for healthcare leaders and their teams.
If you haven't begun experimenting with AI in small, low-risk ways, now is the time.