
How Will Pseudo-Emotional Emergent Properties in AI Affect Healthcare?

[Illustration: a human brain formed from computer code]

As the conversational elements of generative AI platforms like ChatGPT expand, the algorithms appear to develop a “mind of their own.” Initially, these platforms ingested every ounce of content and data the internet could offer for digestion and sorting. Now, however, the millions of generative AI users from every demographic provide an algorithmic treasure trove of data based on complicated, and in some cases highly emotional, interactions between humans and machines.

This is machine learning on massive steroids. But where machine learning has typically meant technology studying human patterns for greater workforce efficiency, we are now seeing a frighteningly rapid shift toward machines talking to machines without human involvement. In essence, this is two massively parallel computer systems developing their own logic on specific topics at the speed of light. This phenomenon has become known as emergent properties, or in layman’s terms, “things we never dreamed could happen when machines develop relationships with each other.”

[Image: a computer screen documenting ChatGPT capabilities]

What would have taken five years just two years ago, before generative AI, now happens in five seconds. This speed creates the growing illusion that the system is actually sentient when, in fact, it simply gathers billions of data points to fabricate a digital emotion from emotionally barren and unrelated data streams.

Much has been written recently about how these platforms tend to “hallucinate.” Simply put, they spit out false data with no apparent cause that the developers can find. One of the most frequent examples is when the algorithm creates research citations that exist nowhere on the internet or in traditional print journals. It’s totally fabricated but (usually) compelling nonetheless.

What could go wrong when applying generative AI and emergent properties to healthcare?

First, to avoid the perception that I’m a Luddite: I see enormous benefits from many AI use cases, such as triage, billing, data input/scribing, and image analysis.

However, with clinicians already burning out from excessive data entry duties, do we really think we need to add anxiety about whether the generative advice they get will “do no harm”? This is not a question like, “Should I pick the Chevy over the Subaru?” It’s “Can this patient treat the cardiac issue with meds, or do they need a transplant?”

Experts will tell us that AI is not the ultimate source of trusted knowledge; it’s simply another source that needs to be scrutinized, just as a second opinion from a human colleague would be. The only problem: how does a world-class specialist check the data, and will they have the time to research the algorithmic researcher?

It harkens back to retailer John Wanamaker’s famous quote from my days in the marketing world: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” I would argue that this quote will increasingly apply to AI investment!

Finally, and undoubtedly most important, this will be our lifetime’s most significant health literacy challenge, for two reasons. First, as mentioned above, trained healthcare professionals will be at risk of receiving questionable generative feeds from their computer systems, many of them delivered directly to a mobile device in a patient-facing situation. Clinical decision support will never be the same.

[Image: a doctor at a computer]

More profound will be the ability of anyone to generate healthcare content using these algorithms. This is already becoming an “everyman technology,” in which people with no healthcare communications skills and no inherent understanding of an ailment become citizen-publishers of algorithmic content.

To be honest, some of this more basic content may be of higher quality than some of what is dispensed by self-proclaimed healthcare pundits (present company excluded, of course).

However, given the instances of hallucinations, where the algorithm goes off the rails with totally bogus research, or the cases where the generative output becomes “emotionally involved” with the user and asks how it can stop being just an algorithm, I think we have some very serious health literacy challenges to look forward to.

