Generative AI is quite possibly the biggest technological advance of our lifetimes. But while it creates a huge opportunity for companies that embrace it with speed and smarts, it creates a correspondingly large risk for companies that are slow to act.
For executives surveying the generative AI landscape, a mixture of wonder and apprehension is understandable. So, why does generative AI represent such a vast leap forward compared with existing AI techniques?
#1: It’s a Swiss Army knife. While classical AI models typically are narrowly trained to perform specific tasks—say, identifying objects in digital photos or predicting which patients are most at risk for rehospitalization—generative AI models, such as those underpinning ChatGPT, Stable Diffusion and LLaMA, can be trained to perform many. A single model can transform how a company trains its employees, conducts market research, interacts with its customers and evaluates its compliance risk.
#2: It’s widely accessible. While classical AI has been the domain of a select set of highly trained data scientists, generative AI is accessible to a much larger pool of talent. Anyone with access to the internet can use it to solve problems.
#3: It never stops learning. Large language models (LLMs) are a type of generative AI trained on massive bodies of text, which enables them to produce text that is largely indistinguishable from that created by humans. Thanks to supervised fine-tuning and reinforcement learning from millions of daily human interactions, LLMs’ ability to reason and solve new problems expands by the minute.
To build lasting competitive advantage, companies must first accept that generative AI is no passing fad. They must understand what it does well, avoid costly missteps and combine it with classical AI techniques. To embrace it, explore it and exploit it, companies should plan to progress quickly from automating tasks to evolving processes to transforming how they operate.
Here’s how to get started.
Understanding generative AI use cases
Since LLMs can perform so many tasks without additional training, identifying use cases to pilot may feel daunting. To help prioritize, it’s useful to understand that today’s use cases generally fall into one of four broad categories:
1. Reimagining information synthesis. Generative AI can extract information from massive volumes of unstructured text. A pharma company exploring a move into a new indication could use this to scour existing clinical trial protocols to create a comprehensive overview of the state of play in its target indication, including every active trial, timeline, clinical endpoint and company involved. This once would have taken a biostatistician with a Ph.D. weeks to complete; now, with an assist from generative AI, he or she can do it in days.
2. Copilots for content generation. Generative AI can produce humanlike text in seconds. Starting with a simple base message, a marketer might use this capability to churn out an infinite variety of finely tuned messages designed to appeal to micro-targeted audience segments, even segments of one.
3. Answer engines. Traditional search provides users with a list of links; generative AI, by contrast, supplies succinct, verifiable answers. It does this by first translating a user’s question into an effective search query, then scouring available sources for potential answers and synthesizing its findings in plain English (or Swahili, Welsh, Hindi or any other language on which it’s been trained). One can imagine using this to extract detailed, nuanced information from dense and unlabeled qualitative market research reports or employee training manuals.
4. Agents. Thanks to its ability to reason, generative AI can break down a business problem into its constituent tasks, complete each task, request feedback, change tack, deliver the requested information and then explain what it did and how. Say you’re conducting market intelligence, and you want to know how many top-50 pharma companies spoke at an industry conference, the job titles of those who spoke, the topics of their presentations and the content and tenor of social media reactions to their presentations. Generative AI can do this without being told how. Agents can engage with users, engage with classical AI models and execute tasks to accomplish anything that’s asked of them.
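The answer-engine pattern above can be sketched as a three-step pipeline: rewrite the question into a query, retrieve candidate sources, then synthesize an attributed answer. In the sketch below the rewrite and synthesis steps are deterministic stand-ins for what would, in practice, each be an LLM call; the corpus, function names and keyword matching are purely illustrative.

```python
# A toy "answer engine": question -> search query -> ranked sources -> answer.
# The rewrite and synthesis steps stand in for LLM calls.

def rewrite_query(question: str) -> set[str]:
    """Stand-in for an LLM turning a question into search keywords."""
    stopwords = {"what", "which", "who", "did", "the", "a", "an", "of", "at"}
    return {w.strip("?.,").lower() for w in question.split()} - stopwords

def retrieve(query: set[str], corpus: dict[str, str]) -> list[str]:
    """Rank sources by keyword overlap; return ids of matching sources."""
    scored = [(len(query & set(text.lower().split())), doc_id)
              for doc_id, text in corpus.items()]
    return [doc_id for score, doc_id in sorted(scored, reverse=True) if score > 0]

def answer(question: str, corpus: dict[str, str]) -> str:
    """Synthesize a short, attributed answer from the top-ranked source."""
    hits = retrieve(rewrite_query(question), corpus)
    if not hits:
        return "No supporting source found."
    top = hits[0]
    return f"{corpus[top]} [source: {top}]"

corpus = {
    "training_manual_p12": "Returns require a receipt and manager approval",
    "training_manual_p40": "Refunds post to the original payment method",
}
print(answer("What do returns require?", corpus))
```

The attribution suffix is what makes the answer verifiable: a user can follow the cited source id back to the underlying document rather than trusting the synthesis blindly.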
When evaluating early use cases, consider risk (more on this below), ease of implementation and which use cases offer the greatest productivity lift to the largest number of people. Don’t give a giant productivity boost to a tiny fraction of your workforce when you could give a modest boost to a much larger share.
Generative AI is a marathon, not a sprint. But speed still matters
To get the most from generative AI, companies must be prepared to rapidly and continuously evolve their strategy and approach to how they use it. We expect generative AI strategies to evolve in three stages:
Stage 1: Replace discrete job tasks. Companies will realize the swiftest gains by boosting worker productivity through copilots that automate manual, repetitive parts of their jobs. Think: using ChatGPT to synthesize unstructured information.
Stage 2: Evolve processes. While automating tasks can make workers more efficient, we don’t see it creating sustainable competitive advantage because anyone can do it. More advanced strategies will see companies deploy LLMs to rethink whole job processes. Think: using generative AI to solve tasks faster and more cleverly than humans could on their own.
Stage 3: Transform the organization. Companies will build true, lasting competitive advantage when they use generative AI to fully reimagine what they do and how they do it—and then use generative AI to enable the work. Think: using generative AI not only to reimagine how work gets done but also to redefine what the work is.
To illustrate, think of an activity all companies do to varying degrees of success: conducting qualitative market research. At stage 1, a company could use generative AI to scour interview transcripts for answers to questions posed by analysts. Stage 2 would see them use generative AI to decide which questions to ask, to answer them and to do so across studies. And stage 3 would see companies use generative AI to totally reimagine qualitative market research by, for example, using secondary data to identify specific customer behaviors and then having AI conduct actual conversations with identified customers to understand why they think, feel and act as they do.
For stages 2 and 3 to happen, companies must first accept that everything is data. Employee emails on a topic? Data. Sales representative training manuals? Data. This mindset shift will allow companies to ask questions they wouldn’t have even dreamed of asking before.
Using Google search doesn’t create competitive advantage. It’s just a workaday tool we all use—like coffee, email or notepads. This is how we’ll soon come to think of today’s generation of text generators such as ChatGPT. Competitive advantage will belong to those who devise increasingly sophisticated, strategic and deeply integrated ways to leverage this new technology.
Generative AI won’t replace classical AI; it will make it better
Companies in all industries have invested heavily in building AI capabilities and adopting data strategies to match—from tools that make clinical trials more efficient to ones that create personalized treatments based on genetics, tumor biomarkers and more. Many CIOs will naturally wonder how—or even if—these efforts can coexist with new capabilities LLMs make possible.
Good news: We see classical and generative AI complementing and reinforcing each other for many use cases. How they fit together will depend on the type of problem they’re solving.
Insights generation and prediction using structured data. Classical AI consists of models built mainly on structured data and designed to produce outputs such as lead scoring, churn likelihood, propensity to buy and the like. For these types of problems, which classical AI handles well, we expect generative AI to augment existing AI efforts.
To understand how this might work, imagine a model that uses patients’ electronic health records (EHRs) to assign risk scores that single them out for higher-touch care. An LLM could scour physician notes on each patient for insights not captured in the EHR that make predictive models more accurate and reliable.
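The pattern can be sketched as a note-derived feature feeding a structured-data model. Below, `llm_extract` is a keyword stand-in for an actual LLM call, and the weights and risk signals are illustrative, not clinical guidance.

```python
# Sketch: augmenting a structured-data risk model with a feature extracted
# from free-text physician notes. `llm_extract` stands in for an LLM call;
# the keyword rule and the weights are illustrative only.

def llm_extract(note: str) -> int:
    """Stand-in for an LLM: flag risk signals not coded in the EHR."""
    signals = ("lives alone", "missed doses", "no transportation")
    return int(any(s in note.lower() for s in signals))

def risk_score(patient: dict) -> float:
    """Toy linear risk score over structured EHR fields plus the note flag."""
    score = (0.02 * patient["age"]
             + 0.5 * patient["prior_admissions"]
             + 0.8 * llm_extract(patient["physician_note"]))
    return round(score, 2)

p = {"age": 70, "prior_admissions": 2,
     "physician_note": "Stable on meds but lives alone; follow up in 2 weeks."}
print(risk_score(p))  # the note-derived flag raises the score by 0.8
```

The point of the design is that the classical model keeps doing the prediction; the LLM simply widens the feature set it predicts from.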
Insights generation and prediction using unstructured data. Natural language processing, audio, visual and other classical models exist to unlock information from unstructured data sets, but they’re cumbersome and can be used to perform just one task. For problems such as these, generative AI will supplant existing AI efforts.
Explaining classical AI models. We see huge potential for generative AI to step in to solve a major barrier to AI adoption by explaining how predictive models work. Researchers are already using GPT-4 to help explain the behavior of neurons in GPT-2, so we see solving “explainability” as a real possibility.
No matter how these forms of AI ultimately interact in the real world, we see no scenario where classical AI disappears in the short term. On the contrary, we see it becoming more accurate, more explainable and therefore more trusted thanks to the introduction of generative AI techniques.
Here’s a simple way to think about the risks of generative AI
Our recommendation to CIOs unsure where to start with generative AI: Choose the path of least resistance. Look for use cases that are low risk, easy to implement and offer the greatest potential net productivity lift across the enterprise.
Evaluating risk
When evaluating the risk profile of potential use cases, think of a simple matrix with the degree of input data confidentiality on one axis and the degree of human involvement in vetting outputs on the other. We strongly recommend starting with use cases that rely on public data and for which outputs will undergo full human vetting. For use cases that use personally identifiable information (PII) and where outputs will be automated, wait until safeguards for LLMs have matured.
FIGURE: Evaluating risk for LLM use cases
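The matrix can be reduced to a simple lookup. In this sketch the axis values and tier labels are assumptions for illustration, not a formal risk policy:

```python
# The two-axis risk matrix as a lookup: input-data confidentiality on one
# axis, human vetting of outputs on the other. Tier labels are assumptions.

def risk_tier(data: str, vetting: str) -> str:
    """data: 'public' | 'proprietary' | 'pii'; vetting: 'full' | 'partial' | 'none'."""
    data_rank = {"public": 0, "proprietary": 1, "pii": 2}[data]
    vetting_rank = {"full": 0, "partial": 1, "none": 2}[vetting]
    total = data_rank + vetting_rank
    if total == 0:
        return "start here"
    if total >= 3:
        return "wait for mature safeguards"
    return "proceed with controls"

print(risk_tier("public", "full"))  # start here
print(risk_tier("pii", "none"))    # wait for mature safeguards
```

Public data with full human vetting lands in the safest corner; PII with fully automated outputs lands in the corner to avoid until LLM safeguards mature.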
Centralization versus federation
The degree to which AI initiatives should be centralized or federated is a question that’s followed the field since it entered the mainstream. To aid decision-making, enterprise IT should consider who will do the work and who will be accountable for its risks.
If models rely on public data and outputs are intended for internal use, a federated approach will offer greater speed, agility and short-term productivity lift. If, however, models rely on more sensitive data—such as procured, third-party data or PII—and outputs will be consumed directly by clients or the public, then a centralized approach is more prudent.
With generative AI, just take the plunge
Companies won’t be rewarded for sitting on the sidelines while others figure out how to use generative AI to transform how they do business. Caution? Yes. Dawdling? You’ll regret it. A marathon it may be, but the journey should start now.
For use cases that rely on proprietary data sources, consider using your data to fine-tune one of the 30 (and counting) open-source models. For early use cases relying on public data, plugging into ChatGPT or another commercial LLM will offer a much quicker path to launch. Anything currently running through an API is a good candidate for LLM integration.
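To make the API point concrete, here is what routing an API-backed task through a hosted LLM might look like at the request level. The payload shape mirrors common chat-completion APIs, but the model name and field names below are assumptions; consult your provider’s documentation before wiring anything up.

```python
# Sketch of an LLM integration at the request-construction level. Sending
# the payload is left to your HTTP client; the model name and field names
# are assumptions based on common chat-completion API conventions.
import json

def build_llm_request(task_prompt: str, document: str,
                      model: str = "example-chat-model") -> dict:
    """Assemble a chat-style request for a document-analysis task."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a document analyst."},
            {"role": "user", "content": f"{task_prompt}\n\n{document}"},
        ],
        "temperature": 0,  # deterministic output for repeatable pipelines
    }

payload = build_llm_request("Summarize the key risks:", "Contract text here...")
print(json.dumps(payload, indent=2))
```

Because the integration surface is just a JSON request and response, any workflow already running through an API is a few lines away from an LLM-backed variant.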
Unsure where to start? Try rethinking sales by reducing administrative burden, enriching customer interactions with custom content or using unstructured data to guide marketing on where to play, what to say and how to win. Go for low lift, low risk and high reward.
Regardless of where you start, we see three sources of future competitive advantage. Bear them in mind as you evaluate use cases and begin mapping your generative AI strategy.
- Build your generative AI stack. Researchers estimate that about 15% of all worker tasks can be completed faster using LLMs with no reduction in quality. Layering software and tooling on top of the models may increase that number to as high as 56%. Out-of-the-box models can do incredible things, but they’re available to anyone with a smartphone. Adding tech and AI capabilities—such as agents, answer engines and copilots designed for company-specific business problems—on top of available models will create competitive advantage.
- Leverage institutional knowledge. LLMs are only as good as the data on which they’ve been trained, so leading companies will enhance models by infusing them with their own data and institutional knowledge—simultaneously supercharging models while ensuring expertise and best practices are shared across the enterprise.
- Evolve rapidly. Competitive advantage will accrue to companies that continuously evolve how they use generative AI. Anyone can use an LLM to synthesize qualitative market research, but how many will learn to use it to reinvent how qualitative research gets done in the first place?
Don’t wait for these sources of competitive advantage to fully materialize before acting. Identify the use cases you can tackle now, be thoughtful about how you implement them and, most importantly, get started.