Emotional Implications of the ChatGPT Sugar Rush in Healthcare
Updated: Jun 20

Can you imagine the day when a nurse or physician can simply provide some details on a patient and condition and have an algorithm write the commentary into the EMR?
Me neither. And before the pitchforks come out, I am an admitted skeptic of this hyped technology, as I have been of many other AI breakthroughs. Now, in the world of Google archives, I’m sure someone will one day pull up this blog and refresh my memory about what an ill-informed contrarian I was.
However, one advantage of being 70 is that, in addition to living through thousands of technology breakthroughs that gained traction, I have seen just as many shiny objects lose their luster shortly after the “kids got tired of their new toys”.
So think about the drop in productivity as millions sit at their laptops seeing how ChatGPT can write an admissions essay for their high school senior, or what their resume cover letter would look like if written by a robot. It recalls the hours spent in the late ’80s when augmented reality was hot but fizzled because of its limited utility. How often did you use augmented reality in 2022?
Just like the chatbots we frequently grapple with in customer service and personal improvement apps, ChatGPT is missing two critical elements.
The convergence of soul and intellect!
The first is pretty obvious: even if the algorithm gathers every data point known to mankind (in this case, only through 2021), it is impossible to capture the deepest human emotions connected to that data. Unfortunately, for many tasks, especially in healthcare, the emotion is more important than the underlying data.
During a 100-day stay in a world-renowned hospital for COVID, I borrowed a line from the famous musical A Chorus Line when cautioning my caregivers against over-relying on Epic to know the real me. The lyric is “Who am I anyway? Am I my resume?”
In the trendy algorithmic world we live in, perhaps the lyric could be changed to “Who am I anyway? Am I my ChatGPT output?” Reasonable people know the answer, ironically because the “it’s complicated” emotional, empathic, and compassionate reasoning is well beyond the natural language processing skills of developers and their code, especially when it relates to patients and their families.
However, over recent weeks I have spoken to many who are rushing to find some business model that uses ChatGPT for profit.
As a professor, I was not surprised that tech-infatuated students look at the algorithm as a godsend for assignments, in exactly the same way students hired term paper writers in the “old days”.
So while universities, especially their computer science programs, are obsessed with this groundbreaking innovation from an academic perspective, they also view it as the natural enemy of independent scholarly thinking: the evolution of “algorithmic plagiarism”.
I tell many of my more entrepreneurial friends that rather than jumping on the ChatGPT deployment bandwagon, they should look to the real market: products that detect this very complex form of plagiarism, a market that could explode in much the same way cybersecurity did.
Some of that detection will be done through technology, but most of it will rely on the emotional domains of human factors.
What does that mean?
If ChatGPT’s weakness is that it cannot reliably digitize empathy, compassion, and emotion, then academics and others concerned with content need to embed and magnify those “affective” elements in assignments and content specifications, for example by adding highly personalized, real-life reflections to the assignment or content brief.
The other area that will expand further as a result of ChatGPT is the already compelling field of AI ethics. The field has had traction for years, given the privacy and governance implications of AI, but as ChatGPT-like products spread, many will ask a simple question: “Is it right to use this technology for work that I was hired to do myself?”
Or, in extreme cases, will employees actually resist being forced to use the algorithm as part of their jobs, in much the same way many clinicians want to decide for themselves the balance between intuition and AI in medicine?
So despite my somewhat contrarian views, I do realize there are many very good aspects of robotic process automation (RPA), like reducing mundane tasks so that employees can do more productive work, especially work related to patients.
But reducing burnout, increasing equity, and elevating patient experience will still be disproportionately reliant on human decision-making with the occasional support of platforms like ChatGPT.
P.S. I wrote this entirely by myself! ☺
For more insights on Leadership, Patient Experience, Hospital@Home, Burnout, and Equity, log in to ICD Healthcare Network.