Could Generative AI Magnify Healthcare Inequities?

Generative AI is the epitome of a hype curve, and healthcare is an epicenter for its implementation. But will generative AI be part of a greater problem or a groundbreaking solution when it comes to healthcare inequities?
First, a brief differentiation between Generic AI and Generative AI: Traditional AI algorithms (Generic AI) tended to operate behind the scenes and provide insights through incredibly complex yet specific data feeds. In the case of healthcare, an AI platform could scour data or images from medical records around the world and produce results related to ailments and their treatments at the speed of light. This “swarm AI” model was particularly compelling as specialists were trying to identify the most effective treatments for COVID by accessing millions of data points from thousands of providers and researchers worldwide.
But at the most pedestrian level, this form of AI could be a bot used by customer support to guide you to the correct answer. We all have feelings about how unfulfilling those interactions can be!
Generative AI, on the other hand, “generates” text responses to queries in a way that simulates online human-to-human conversation. The clinician or the patient asks a question, much like they would on Google, and the platform (like ChatGPT) responds in a long-form or conversational manner. The query input can be more “intimate,” so one can ask a question as if speaking to a human.
If I asked the generative platform to tell me what the most remarkable traits of Frank Cutitta are, in theory, it would wax poetic about my teaching, research, and writing skills. But in my first experience with ChatGPT, the platform confused me with the golfer Ben Crenshaw...
AI of every kind has been in the crosshairs of ethicists and health equity leaders for several reasons.
Faux Sentience
First, much of the research shows that AI is still imperfect, especially when applied to something as inherently human as equity. Health equity is far from binary, and the emotional ambiguities surrounding it are the natural enemy of AI algorithms. This was the case before anyone even knew what generative AI was, but it has become magnified with the advent of more conversational platforms.
One of the reasons is a not-so-subtle perception that the algorithm has human characteristics, not the least of which is trustworthiness. Indeed, we trust Google search results, but Google has yet to strike up a conversation or respond with detailed suggestions about what we should be doing with the query we entered. Yet millions use “Dr. Google” every day to self-diagnose, even though it is not a conversational platform like generative AI.
Implicit Bias on Steroids
If you think that implicit bias is an issue in facial recognition or search technologies, consider the implications if the algorithm has an “opinion” about the results.
When one studies generative AI output, it tends to have some very generic insights with an overlay of a few specifics, especially when the query is personal. I call this the Hallmark Movie effect, where 75% of the plots are the same, with a few twists related to the setting or character development.
If we put that into a health equity context, the output is at risk of massively overgeneralizing the subject, in addition to adding its algorithmic color commentary with no apparent connection to the human or affective elements. To the extent the algorithm can “get it right,” this tool can be valuable for health equity agents. But the early results are sketchy.
Affirmative Action & Medical Students
At the time of this writing, my thoughts gravitate to how institutions could apply generative AI to medical school admissions following the recent Supreme Court decision to end affirmative action.
On a positive note, could generative AI be used to analyze applications to avoid a direct reference to an applicant’s race but explore the diverse attributes the medical student would bring to the institution, race notwithstanding?
Ethical Considerations
Finally, despite the infatuation with the possibilities this new technology can bring to health equity, there remain growing ethical concerns.
Consider that generative AI will increasingly be used to make decisions while implementing DEI programs. Think about how easy it will be to have the technology write evaluations or make decisions regarding the pecking order in which patient care will be delivered. We’re already seeing technology-driven laziness in the content creation business, which will surely creep into equity and diversity-related reporting and record-keeping.
These technologies have already started to “argue” with the user, and, in some extreme cases, the tech has suggested it no longer wants to be an algorithm. Granted, these are outliers, but so was getting a blueberry muffin confused with a chihuahua in the original image recognition platforms.
I am far from an AI Luddite, and I see all forms of AI and robotic process automation as critical for streamlining mundane tasks in every industry.
But as "Ben Crenshaw," I’m simply arguing that in many cases, a “manufacturer-issued” human brain may still be the most reliable tool for sentient-based health equity decision-making.