As a core pillar of the pharmaceutical organization, medical affairs teams are being asked to increase their capabilities and be more efficient than ever. Understandably, they’re expected to evolve with the technology and sophistication that patients and clinicians are accustomed to in today’s world.
With good reason, medical affairs leaders are looking to AI to help them increase not only productivity but also accuracy. According to the 2024 ZS Medical Affairs Outlook Report, more than 45% of medical affairs professionals said their organizations are in the ideation phase or have already begun developing capabilities for generative AI.
We recently spoke with Murali Gopal, M.D., vice president and head of medical affairs at Phathom Pharmaceuticals, an emerging pharma company, and Asheesh Shukla, who manages the platforms business for ZS’s medical affairs and patient engagement practices. They discussed their views on AI’s capabilities today, where they see it going and why oversight from scientific and clinical experts is so important.
ZS: Medical affairs, like other parts of a pharma organization, operates within a consumption and production model. Let’s first talk consumption. How are medical affairs organizations using AI to aid with consumption tasks?
Asheesh Shukla: One of the many challenges medical affairs embraces is being an authority on, and making sense of, the large body of medical knowledge generated through the practice of medicine, as well as the evidence published by their own clinical development teams. AI can help make sense of this growing volume of evidence from various sources. Evidence often arrives as unstructured data, such as PDF documents, and it can be difficult to glean usable information from it. Thankfully, AI and machine learning are making it more efficient to extract insights from unstructured data and highlight key aspects of research, improving decision-making for medical affairs teams.
Another example of AI’s impact is how it’s helping medical affairs sift through data to gather new insights from the interactions between their colleagues and providers. This enables them to better understand clinician needs and close critical clinical and educational gaps.
Murali Gopal: I agree with Asheesh—AI can potentially be used to consolidate and assimilate an abundance of scientific information from various sources. AI is equipped to provide complete, evidence-based information through an objective approach, but oversight is needed to ensure that key concepts and key information are presented correctly.
Based on my own experience, in the near term, medical affairs teams can use AI where it makes sense. For example, AI can scour data to create summaries that can be used to develop medical letters, medical disease state presentations, literature search summaries and other materials. Using AI for tasks like these helps medical affairs teams provide rapid answers more frequently.
ZS: On the production side of the model, we have seen medical affairs organizations use AI to handle tasks such as communicating with stakeholders via phone calls and online chats. Do you think AI is capable of handling some of these critical engagements?
MG: I believe AI has the potential to help with production tasks in the near term, but caution and human oversight are needed. Clinicians and other stakeholders rely on information from medical affairs teams to make important decisions that can affect patients’ health—so it’s critically important the information we provide them is accurate.
Medical affairs organizations must take responsibility for their decisions when using information produced by AI. There is a human tendency to follow recommendations without thinking critically, but we need to filter information from various sources and determine what makes sense for each specific situation. I hope AI is used to empower teams to be more knowledgeable and accountable, and not so deferential.
I’m encouraged to see some companies experimenting with AI responsibly and innovatively. For example, I know of some early adopters comparing the accuracy and clarity of medical letters generated by AI with those written by medical affairs team members.
AS: Murali is correct: caution is needed. For something as serious as patient health, humans play a critical role in providing contextual intelligence and empathy.
AI models are sometimes perceived as black boxes, and the difficulty explaining their outputs can raise questions around accuracy and reproducibility. Therefore, when we use AI to produce content, we should keep explainability and traceability top of mind. Medical affairs teams should view AI as a smart assistant that enhances productivity, improves workflows and synthesizes and navigates growing bodies of knowledge. We should be cautious about viewing AI as an autonomous decision-maker.
I envision scenarios in which a clinician asks medical affairs a question and AI is used to produce 80% of the content that makes up the answer. A medical affairs professional will need to fill in the blanks, contextualize the content and complete the other 20%, while validating that the content addresses the clinician’s question.
MG: Well said, Asheesh. AI will only be effective if the people using it are responsible.
ZS: On that note, what actions can medical affairs teams take to set themselves up for success with AI?
AS: AI’s value proposition lies in its ability to deliver outcomes with nonlinear scaling of effort and time. It has two key prerequisites: people trained to validate AI’s output, and training data that is sufficient and accurately represents the real world. These are not trivial requirements. You need them both.
You must also manage expectations across the organization, especially at a moment like this when AI is the cause of so much excitement. It’s vital to educate your leadership team and other decision-makers about what AI does well today, what it could do well tomorrow and how your company can experiment to explore how AI can deliver value for you.
MG: As we’ve alluded to, medical affairs teams also need to be judicious. Just because you can do something doesn’t mean you should. Before making a decision, leaders need to pause and consider the ramifications, because acting responsibly and ethically will often involve having plenty of guardrails—human oversight—in place.
AS: And one specific risk that keeps coming to mind for me is privacy. We have an ethical and legal responsibility to protect patient data and patient privacy, but AI may not understand what information is sensitive and what is not without adequate training and safeguards.
With that said, these risks shouldn’t paralyze us. Let’s use AI today to learn more about its promise.
ZS: As you look at the next five years, how are you feeling about the use of AI in medical affairs?
MG: I’m excited for the future. Asheesh and I may sound like we’re telling everyone to slow down, but it’s only because we know how important medical affairs is to the healthcare ecosystem, and how information can be misunderstood. I’m confident that if we take our time and implement AI in a thoughtful way, it will transform our capabilities.
AS: I’m excited as well. If we focus on using AI for the most obvious tasks while continuing to expand our education and skill set, medical affairs can make even more of an impact. We’re just getting started.