Racial Bias Might Be Infecting Patient Portals. Can AI Help?

Patients and physicians increasingly turned to digital platforms such as patient portal messaging when COVID-19 made in-person contact risky. But a new study of how providers managed the messaging surge suggests an uncomfortable downside: What if the color of a patient’s skin predicted the response an inquiry received?

Researchers analyzed more than 57,000 message threads between patients and physician teams at Boston Medical Center and found that white patients were more likely to receive answers from their attending physicians, while Black and Hispanic patients were more likely to hear from registered nurses. The statistical evidence suggests that medical teams tended to prioritize messages from white patients, says Ariel Stern, a visiting professor at Harvard Business School and one of the study’s authors.

As mobile communication technologies proliferate, and as businesses and doctors’ offices experiment with generative artificial intelligence (AI), the ramifications of these tools for medical practice are only starting to be examined. The research sheds light on the biases that digital platforms, whether a simple messaging solution or a sophisticated large language model, can exacerbate in the name of convenience and speed.

The report’s conclusion invites further study. Possible explanations for the disparities the authors document include differences in the types of questions or requests patients make, in the syntax and urgency of the messages, and in patients’ health care knowledge and technological capabilities, according to Stern and her coauthor, HBS doctoral researcher Mitchell Tang.

“There is more to unpack here and potentially something to remedy,” Tang says.

The article, written with Rebecca Mishuris, Chief Medical Information Officer at Mass General Brigham, and Lily Payvandi, assistant professor at Boston University Medical Center, was published in March 2024 in JAMA Network Open, a journal of the American Medical Association.

Audience question leads to inquiry

The idea for the paper originated from a talk Stern gave to the Massachusetts Health and Hospital Association about the digital transformation of health care and the growing use of apps and other technologies in patient care. During the presentation, Stern recalls, an audience member asked “a wonderfully thoughtful question: How do we know about the distributional consequences of these tools? How can we know whether they serve certain populations better than others?”

The questioner happened to be Mishuris, then Boston Medical Center’s Chief Medical Information Officer, who would eventually collaborate with Stern on the research.

That was in December 2020. Although a 2009 law, the HITECH Act, had widely expanded the use of electronic health records, by early 2020 just 38 percent of patients had logged into a secure online health portal, and communities of color used these platforms even less.

The pandemic’s lockdowns pushed skeptical users online in a more widespread way, and use of secure portals has largely stabilized at that new, higher level. The platforms place new stresses on doctors and their teams, who suddenly face an onslaught of messages, some pressing and others minor. That surge has given rise to triage systems that filter only certain messages through to physicians, routing the rest to other members of the care team.

Initially, the team planned to study how patients use the portal at Boston Medical Center, but soon realized there was a more significant issue to examine. “The interesting thing, in the context of portals, is not just who sends the message,” Tang says. “It’s who responds to those messages and how they get responded to that really hadn’t been looked at.”

Stark numbers point to uncomfortable conclusions

The study examined communication between medical professionals and more than 39,000 patients on Boston Medical Center’s messaging system in 2021, at the height of the pandemic.

Researchers found that Black patients were 17 percent less likely than white patients to hear from their primary care doctors in response to their portal queries. Hispanic and Asian patients showed similar, though slightly smaller, disparities, with 10.2 percent and 9.3 percent lower likelihoods of a physician response, respectively. Meanwhile, attending physicians sent half of their responses to white patients, even though those patients made up only one-fifth of the study population.

The study’s analyses carefully accounted for the recipient practice of each message and for patient characteristics such as age, ZIP code of residence, health status, insurance provider, and preferred language. Even so, two observably similar patients, one white and one Black, who message the same primary care practice could receive responses from entirely different members of their care teams.
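
The authors’ estimation code isn’t public, but the design they describe, a regression of physician response on patient race with controls and practice fixed effects, can be sketched in a few lines. Below is a minimal illustration in Python on synthetic data; every variable name, value, and model choice is an assumption for demonstration, not a reproduction of the study.

```python
# Minimal sketch (not the authors' code): does the attending physician
# reply, adjusting for observable patient traits and the recipient
# practice? All data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "race": rng.choice(["white", "Black", "Hispanic", "Asian"], size=n),
    "age": rng.integers(18, 90, size=n),
    "insurance": rng.choice(["public", "private"], size=n),
    "practice": rng.choice(["A", "B", "C"], size=n),  # recipient practice
})
# Synthetic outcome: 1 if the attending physician answered the message,
# built with a lower reply rate for non-white patients to mimic the
# disparity being tested for.
p_reply = 0.5 - 0.1 * (df["race"] != "white")
df["physician_reply"] = rng.binomial(1, p_reply)

# Logistic regression with patient controls and practice fixed effects,
# mirroring the kind of adjustment the study describes.
model = smf.logit(
    "physician_reply ~ C(race, Treatment('white')) + age"
    " + C(insurance) + C(practice)",
    data=df,
).fit(disp=False)
print(model.summary())
```

The coefficients of interest are the race indicators: a disparity that survives these controls, as in the study, cannot be explained by the recorded patient characteristics or by which practice received the message.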

Though focused on one institution, the Boston Medical Center study is consistent with experience at other health care institutions. In fact, in ongoing work with another health system, the team observed “remarkably similar results, despite some dramatic differences in the type of health system and the patients they care for,” Stern says.

Could other factors be at play?

Due to privacy regulations, the study team was not provided with the actual text of the portal messages. That’s part of the reason the authors say there may be other factors besides direct racial bias driving the results—and a key reason that they are keen to explore this data in future research.

One potential factor they highlighted for future study is how the language used in messages might impact response rates. The researchers acknowledge that some messages may have a more business-like tone and more sophisticated terminology, while others come across as informal, perhaps due to typos.

These differences in syntax could lead triage nurses to prioritize certain messages over others—deeming some important enough for a physician’s review while routing others to nurses instead, according to the authors.

The rise of generative AI adds urgency

Understanding how inquiries are assessed and prioritized will become more important as health care providers integrate AI more deeply into patient care. Training algorithms on datasets such as the one from the Boston Medical Center study risks entrenching existing inequities in the system, Stern says.

Stern nevertheless suggests that, with thoughtful design, AI language models could help reduce bias in portals specifically and in health care generally. Such technology could narrow inequities by smoothing out the language nuances that may cause triage nurses to prioritize some messages over others for physician attention.

“We could use very simple language models to standardize the text presented to clinicians,” Stern says. “What if there were a way to develop a tool that would scan and repackage information to make triage more efficient?”
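
The study does not build such a tool, but Stern’s idea is easy to sketch. The snippet below, a thought experiment only, uses a language model to rewrite each portal message into standardized clinical prose before it reaches the triage queue. The OpenAI client, model name, and prompt are all assumptions; any real deployment would also need to meet patient-privacy requirements.

```python
# Thought experiment only: standardize patient portal messages with a
# language model before triage, so stylistic cues (typos, tone) don't
# influence who gets routed to a physician. Assumes the OpenAI Python
# client (openai>=1.0); the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Rewrite the patient's message in standardized clinical language. "
    "Preserve every symptom, request, and statement of urgency exactly; "
    "remove typos, slang, and stylistic cues. Return only the rewrite."
)

def standardize_message(raw_text: str) -> str:
    """Return a tone-neutral version of a patient message for triage."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed; substitute any capable model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": raw_text},
        ],
    )
    return response.choices[0].message.content

print(standardize_message("hi doc, real bad headake 3 days now, meds not wroking??"))
```

Whether such standardization actually equalizes triage decisions without dropping clinically relevant detail is exactly the kind of question the authors say needs further study.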

Reducing inequality beyond health care?

While both researchers are quick to note that health care has a unique set of dynamics, Stern and Tang say it is easy to envision similar challenges arising in other business settings, particularly in customer care.

“If I were leading a team that runs a help desk in any organization, I would want to know about differences in response to different types of customers,” Stern says. “To the extent those exist, that means you’re not serving a certain group of customers, and you’re potentially leaving money on the table.”

