Thoughts from Ken Kaufman

More Thoughts About Artificial Intelligence

These days one can’t talk enough or learn enough about artificial intelligence. It’s either the fast-coming evil empire or the potential savior of modern civilization. The truth will likely, as always, be somewhere in between. In that regard, we would like to accomplish the following with this blog:

  1. Recommend an essential new article that provides an exceptionally clear and nuanced explanation of AI;
  2. Comment on a remarkable recent report that provides insight into the potential power of AI; and
  3. Raise and discuss a number of developing AI-based strategic issues for hospitals and health systems.

Part I: Jaron Lanier

Jaron Lanier holds the somewhat baffling title at Microsoft of “Prime Unifying Scientist.” Lanier is a computer scientist, a futurist, and a composer of contemporary classical music. He is also considered one of the founders of virtual reality.

On March 1, 2024, Lanier published an article in The New Yorker magazine entitled “How to Picture A.I.” To say the very least, this is a brilliant article: brilliantly constructed and brilliantly written. If you haven’t been able to entirely grasp AI from previous readings, this article will solve that problem. As a colleague of mine said, “Lanier does a remarkable job of explaining to the reader what AI is and what terms like deep learning and generative AI mean functionally.” Lanier comments that “if we can’t understand how a technology works, we risk succumbing to magical thinking.” This is a powerful observation for organizational executives who must decide when to use AI technologies and when not to.

Lanier organizes his article into four steps, which he calls a “human-centered cartoon,” and the four conceptual steps are Trees, The Magic Forest, Forest Products, and Phantom Trees. This all seems mysterious, but Lanier’s explanatory powers are rather remarkable. We can’t recommend this article highly enough. It is mandatory reading for executive teams throughout provider healthcare.

Part II: The Klarna Announcement

Klarna is a Sweden-based fintech company that operates in the “buy-now-pay-later” space. According to a March 4th Forbes article, Klarna maintained in a recent press release that an AI assistant powered by OpenAI is now handling “the workload of 700 full-time staff members.” Klarna further represented that the AI algorithm is managing two-thirds of customer service chats—2.3 million conversations—in an extraordinary 23 markets and 35 languages. Further, repeat inquiries from customers have decreased by 25%, and the average conversation time has been reduced from 11 minutes to 2 minutes. With all of this, Klarna reduced its headcount in 2023 by 25% and expects an increase in profitability of $40 million.

No matter where your first thoughts about the power and impact of AI have taken you operationally, the Klarna numbers are startling. Your first reaction might be the same as ours: What if these results are multiplied across thousands of companies worldwide? The impact on workforce and profitability might be incalculable.

Your second thought, however, might be that despite the Klarna report, the ongoing impact of AI might be much more nuanced and complicated. For example, the impact of AI on workforce could be dramatically different in developing economies versus the impact in highly developed economies like the United States or Western Europe. The AI impact could be accelerated in areas characterized by workforce shortages or, in fact, AI might actually be a job creator rather than a job destroyer. Understanding the actual AI trends will likely be most important to guiding operational AI decisions within your own organization. The Klarna report, and other reports to come, certainly should gain and keep the attention of healthcare executives.

Part III: Fast Developing AI Strategic Issues

AI might be a “tool,” but it may be unlike any “tool” we have seen before. And, therefore, it very likely will drive organizational strategies in entirely new and different ways. This brings forward an entire series of relevant corporate questions:

  1. Is AI an “enabler” or a “strategy” all by itself?
  2. What if AI is much more than a “tool”? What if accelerating technology moves AI to more of a “creature” status (Lanier’s term) with anthropomorphic characteristics?
  3. If AI is more of a “tool” for the foreseeable future, how is that going to work if you install those “tools” on top of already poorly performing hospital processes?
  4. What is the creative vision necessary to combine AI with existing in-place strategies that might actually define a new provider-based value proposition accompanied by a transformative care delivery system?
  5. Finally, and obviously, what is the probable impact on resource requirements, level of investment, and organizational readiness?

And the last thought for now concerns the issue of AI traceability. Technically, traceability is a human-readable explanation of what inputs and algorithms an AI model used to determine its outputs. Traceability will, over time, be a big deal in artificial intelligence generally, but it will be a much bigger deal in the healthcare vertical, especially as it relates to the use of AI within clinical care, particularly how any AI model makes recommendations on diagnosis and treatment. There have been cases where a large language model has produced well-structured and seemingly believable answers to medical questions while citing scientific papers and medical journals that do not, in fact, exist. This characteristic of AI algorithms is called “hallucination,” and it can generate convincing but false content. As Lanier points out in his article, tracing the specific “breadcrumbs” used to develop AI recommendations has not yet been applied in practice. Given the scale of these models (OpenAI’s GPT-4 may have over a trillion parameters), it is unclear when and how this required traceability will be possible.

No doubt artificial intelligence is endlessly fascinating, totally exciting, and absolutely worrisome. Our advice—read, learn, and experiment. Begin with AI cases and models where the risks and consequences of error can be anticipated and managed. Best of luck in this brave new world.

Amanda Steele is a Managing Director and co-leads Kaufman Hall’s Strategy & Business Transformation practice, where she advises health systems and provider enterprises on enterprise strategy, developing value propositions that deliver on their missions and visions in light of the fast-changing healthcare landscape.