In my most recent post, I listed 12 ways that AI benefits patients in healthcare, saving many thousands of lives. I wish the story ended there, but it does not. There are also problems that invade privacy, demonstrate bias, and cost thousands of lives.
Let’s take a look:
1. Bias. It is often called machine bias, but machines are just neutral tools. The bias comes from the human programmers and the data used to train the systems, and it is often unintentional, but that does not lessen the damage being caused.
Examples are abundant, and they have harmed patients with cardiovascular disease or chronic pain, as well as those in need of breast cancer screening, mental health care, and clinical testing.
For example, a 2019 study in Science magazine revealed that an algorithm used in U.S. hospitals to recommend treatments was less likely to refer Black patients than White patients, despite similar needs.
There are millions of cases of accidental bias harming patients. I find myself unsympathetic to the defense that the programmers had no malicious intent. It seems to me to be beside the point. It's like a gun owner shooting someone and then explaining that they didn't know the gun was loaded.
A recent study reported that Black patients were 40 percent less likely than White patients to receive medication for acute pain, and Hispanic patients were 25 percent less likely.
Bias causes damage in ways you might not expect. For example, a Booz Allen Hamilton report observed that Black patients were given fewer pain medications in emergency rooms than White patients across the US. A Science magazine article found that hospitals using AI spent less money on Black patients than on White patients with similar needs, even though 18 percent more Black patients needed care.
2. Privacy. According to the HIPAA Journal, over 80 million patients had their medical privacy breached in the first ten months of 2023, leading to discrimination, identity theft, targeted spam, unauthorized reuse, and the reselling of personal data. This malicious handiwork is very often conducted by clandestine criminal organizations, often based in countries beyond the reach of Western law enforcement. These bad actors use AI for identity theft and blackmail, or sell the stolen data on the darknet.
Stolen health data is used to file fraudulent claims, obtain medications, and extort victims.
Some countries, including China, Russia, Israel, and Singapore, have been accused of using healthcare data for mass surveillance and are suspected of using it to disrupt enemy countries' healthcare systems as a new form of espionage.
But there is another category of AI Healthcare culprit: employers.
Using what they call wellness programs, employers not only gain complete access to employee health data but sometimes hold back the most disturbing data from the employees themselves.
The darkest example is perhaps when employers learn from a predictive technology program that an employee has a fatal disease. Instead of informing the employee, some employers have been accused of secretly using the data to raise premiums, cut benefits, or block an unsuspecting worker's anticipated promotion.
While there are some states with laws prohibiting such behavior, uncovering and proving such cases is highly problematic.
3. Isolation. Automation has a long history of separating people in the name of efficiency. Usually, we appreciate the time we save using ATMs and supermarket self-checkouts. But sometimes, to paraphrase T.S. Eliot, only by going too far do we find out how far we can go.
Chatbots may be such an example, it seems to me. We use them in so many ways, including to lessen loneliness. They teach online lessons, customizing the pace for each of us. They entertain us and provide customer support in languages we understand.
But now, Forbes reports, AI chatbots are serving as online therapists.
The reasons seem to make sense. There is a significant shortage of therapists, and new patients may have to wait weeks for an appointment, while online chatbots provide instant access. Human therapists charge up to $250 an hour, while AI chatbot services charge from $5 to $30 a month.
But it seems to me that there is something both dystopian and depressing about a lonely person talking to an algorithm through a screen about how isolated they feel.
But I may be the wrong demographic. Hell, I still stand in line to interact with tellers and cashiers rather than ATMs and self-checkouts.
4. Bad calls. Human misinterpretation of medical images and flawed AI-generated data have resulted in inaccurate and ineffective diagnoses such as these:
In 2019, a study revealed that an AI algorithm used by a New York City hospital to prioritize patients for mammograms disproportionately favored White women over Black women. This problem stemmed from the algorithm's training data, which reflected historic racial disparities in healthcare. Consequently, Black women were less likely to be flagged for follow-up exams, potentially delaying breast cancer diagnoses and urgently needed treatments.
The Algorithmic Justice League raised concerns about a popular AI tool used to diagnose mental health conditions. It found the tool often misdiagnosed patients with serious conditions such as bipolar disorder and schizophrenia. This led to unnecessary and potentially harmful medication, as well as scaring the Hell out of otherwise healthy people.
There are multiple reports of skin cancer being missed in Black people and moles being mistaken for tumors on dark skin, leading to unnecessary biopsies and patient anxiety.
5. Lack of Transparency. The inner workings of complex AI algorithms can be opaque, making it difficult to understand how conclusions were made and leading either to overdependence on AI or exaggerated mistrust of it. While algorithms may contain what appears to be impeccable logic, they still lack the common sense that humans have on their good days.
My Conclusion
I have tried to clarify in this two-part report that AI Healthcare is an enormously powerful resource that will transform how most people in the developed world will be cared for. But it is no panacea, and the mistakes it makes can be hazardous to your health.
Treat AI tests, diagnoses, and treatments as a source of valuable data, but remember that it is not omniscient and may never be. It can be both a positive and negative force and is most likely to be both for a long time to come.
AI is not perfect. Remember, it is merely created by humans, at least so far.
++++++
Shel Israel is a tech-business writer and occasional speaker. He's the author of six critically praised business books on transformational technology and has contributed to many business publications, including Forbes, FastCompany, and Business Insider. He is now a freelance writer who ghostwrites business books, bylined articles, and white papers. You can reach him at shel@shelisrael.com or message him on LinkedIn or Facebook.
This is an outstanding summary, Shel. We don't give enough time to the costs of particular technologies. Thanks for writing.
This is one of your best, Shel. We have to take into account how every technology is a tool, and you have to choose the correct tool for each individual job. Take a look at Black maternal mortality compared to White. The disparities are striking.