Which of the following is a potential drawback of new technologies like personalized medicine and the use of artificial intelligence in health care?

When it comes to our health, it’s personal. That is why it is so important that the physicians we trust make decisions about our care—not machines. And yet, in many situations, artificial intelligence (AI) and machine learning are making decisions about our medical treatment and care, often to the disadvantage of communities of color. To tackle discrimination in health care and insurance coverage, we must evaluate and act on how algorithmic decision making contributes to the inequalities within our health care system.

Medical algorithms are expected to make quick, precise decisions that help providers diagnose and treat patients faster and more efficiently. Their use began in earnest in the 1970s, and algorithms and AI have since become constants in patients’ lives. From glaucoma consultation programs to automated intake processes in primary care to scoring systems that evaluate newborns’ health, patients regularly encounter these technologies whether they know it or not. Insurance companies use algorithms as well, to determine risk and adjust the cost of care.

Despite their ubiquity, medical algorithms’ fatal flaw is that they are often built on biased rules and homogenous data sets that do not reflect the patient population at large. Patients should never have to worry that an algorithm could prevent them from receiving an organ transplant, yet this is the reality for many Black patients on transplant lists. Even though Black Americans are four times more likely to have kidney failure, an algorithm used to determine transplant list placement ranks Black patients lower than White patients, even when all other factors are identical.
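
One widely cited instance of this is the race coefficient in the 2009 CKD-EPI creatinine equation for estimated glomerular filtration rate (eGFR), which multiplied a Black patient’s estimate by 1.159 and was only removed in the 2021 revision. The sketch below implements the 2009 equation to show the mechanism; the specific patient values are illustrative assumptions, and the eGFR ≤ 20 waitlist threshold is the commonly used cutoff for kidney transplant listing.

```python
def egfr_ckd_epi_2009(scr_mg_dl, age, female, black):
    """CKD-EPI 2009 creatinine equation (since replaced by a race-free 2021 version)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    gfr = (141
           * min(ratio, 1.0) ** alpha
           * max(ratio, 1.0) ** -1.209
           * 0.993 ** age)
    if female:
        gfr *= 1.018
    if black:
        gfr *= 1.159  # race coefficient: inflates the estimated kidney function
    return gfr

# Identical labs, identical patient -- the race term alone raises the estimate,
# pushing a Black patient further from the waitlist threshold (eGFR <= 20).
without_race = egfr_ckd_epi_2009(scr_mg_dl=3.0, age=55, female=False, black=False)
with_race = egfr_ckd_epi_2009(scr_mg_dl=3.0, age=55, female=False, black=True)
print(round(without_race, 1), round(with_race, 1))
```

Because the coefficient is a flat multiplier, the same creatinine level always yields a higher estimated kidney function for a Black patient, delaying the point at which they cross the transplant-eligibility threshold.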

Medical algorithms have an impact outside of the health care sphere as well. For example, the NFL has actively employed biased algorithms that use “race-norming” tactics to determine which players are eligible for payouts in a $1 billion settlement of brain injury claims. These algorithms rest on the discriminatory assumption that Black players start with a lower cognitive baseline, disqualifying players injured by years of concussions from receiving payouts. Race-based adjustments such as these were developed in the early 1990s to estimate the impact of socioeconomic factors on a person’s health, but neurology experts have found that the NFL’s assessment program is flawed, systematically discriminating against Black players.

Devastating Effects On People Of Color

Algorithmic bias can have devastating effects on people of color at all points in the health care process, from triaging their illnesses to the quality of care they receive.

In fact, a 2019 study found that risk-prediction algorithms led to Black patients receiving lower-quality care than their non-Black counterparts. The algorithm used a patient’s previous health care spending to determine future risks and thus the need for extra care. Since Black Americans have one of the highest poverty rates in the US and reduced access to services, they tend to spend less on health care. This caused the algorithm to disqualify Black patients from receiving extra care through a “high-risk care management program,” despite the fact that Black Americans have the highest cancer death rate, have five times the likelihood of dying from pregnancy-related causes, and are 60 percent more likely to be diagnosed with diabetes when compared with White Americans. Black Americans spend less on health care not because of less need but because of fewer resources.
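
The mechanism the study identified is a proxy problem: spending stands in for need, so any group that spends less for a given level of need is systematically under-ranked. The following is a stylized simulation, not the actual algorithm; the group labels, access gap, and program size are invented for illustration.

```python
import random

random.seed(0)

# Stylized patients: both groups draw health *need* from the same distribution,
# but group B spends less per unit of need due to an assumed access gap.
patients = []
for i in range(1000):
    group = "A" if i < 500 else "B"
    need = random.gauss(50, 10)            # true health need, identical for both groups
    access = 1.0 if group == "A" else 0.6  # hypothetical access gap
    spending = need * access               # spending under-reports need for group B
    patients.append((group, need, spending))

# "Risk score" = prior spending, the proxy the 2019 study critiqued.
by_spending = sorted(patients, key=lambda p: p[2], reverse=True)
top = by_spending[:200]                    # slots in the care-management program
share_b = sum(1 for p in top if p[0] == "B") / len(top)
print(f"Group B share of program slots: {share_b:.0%}")
```

Even though both groups have identical need, ranking by the spending proxy awards group B far fewer than half of the program slots, reproducing in miniature the disparity the study measured.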

Meanwhile, insurance companies misuse personal information such as race and income in risk-prediction algorithms to manipulate health care costs and raise premiums. Under the Medicare Advantage program, insurers abused “risk scoring” algorithms and overcharged Medicare nearly $30 billion. Congress would be well served to follow the Medicare Payment Advisory Commission’s recommendations: modify these payments to private health insurers offering Medicare Advantage plans and use the savings to enhance patient care in Medicare or the Affordable Care Act. Implicit racial biases in risk-prediction algorithms, as well as unfettered overbilling by insurers, demonstrate the need for quality insurance reform.

Many biased algorithms also lack data diversity, whether by race, sex, or other factors. This is nothing new. Medical data has long lacked diversity. Since the early days of clinical trials, women and people of color have been underrepresented in studies. The lack of diverse data diminishes the generalizability of these studies and potentially of the tools developed using the data. When asked about this issue during his confirmation hearing, Department of Health and Human Services Secretary Xavier Becerra said, “If you have bad inputs going in, you produce bad outputs.”

Fortunately, some states are recognizing the potential harm of these “bad outputs.” Colorado recently passed a bill to prevent insurance companies from using algorithms that discriminate against individuals based on race, ethnicity, sex, and other characteristics. The California Nurses Association supports legislation to allow health care professionals to use their expertise to supersede health care algorithms and advocate for a patient’s best interest.

Designing Algorithms For Health Diversity

While technology has the potential to address inequities by scaling innovation and addressing access gaps, stakeholders must do a better job of designing for health equity when applying machine learning and artificial intelligence to health care. We recommend three initial steps to ensure society is moving toward health justice in the use of AI and machine learning.

Encourage Greater Collaboration And Patient Centeredness

A “domain-forward approach”—as suggested by Harvard University researchers—can help mitigate some of these issues by incorporating “domain experts,” such as health care professionals, into the algorithm development process. This approach could help address algorithms’ decision-making inaccuracies because health care professionals in the field can provide important additional context. It should be taken a step further by integrating the voices of patients and caregivers into the development of these systems, to ensure that the real-world implications of these decisions are more fully understood.

Develop Specific Processes For Evaluating And Addressing Bias

Researchers and doctors must examine their own implicit biases, and, as noted by the American Medical Association, it is imperative that they develop recommendations on how to interpret or improve algorithms that include race-norming. Health care institutions and professional societies should adopt programs to help inform this process. For example, the American Academy of Neurology launched anti-racism training tailored to neurologists. These programs should be developed in tandem with the implementation of AI and machine-learning applications across health institutions.

Develop A Regulatory Framework That Promotes Transparency And Accountability

The federal government and regulators, such as the Food and Drug Administration (FDA) and the Federal Trade Commission, should more closely consider the potential harms alongside the potential benefits of new technologies developed using machine learning and algorithmic decision making. The FDA’s Artificial Intelligence/Machine Learning-Based Software as a Medical Device Action Plan is a good start, but it falls short of requiring the kind of transparent review and disclosure processes found in clinical trials. Regulatory efforts should ensure the following:

  • Use of the best possible quality data to reduce risks and discriminatory outcomes.
  • Transparency about the purpose, development, and risks of health care AI and machine learning.
  • Appropriate human oversight measures to reduce risk.

Congress must also keep a watchful eye to ensure algorithms are working to the benefit and not the detriment of US health care consumers. It must exercise its regulatory authority to ensure government agencies are then acting with the best possible data to prevent health biases in AI. Failure to recognize and address these flaws could exacerbate existing inequities and lead to a new era of AI-powered health injustice.

What are the limitations of the current technologies used in healthcare?

Summary: Disadvantages of Technology in Healthcare.
  • Cybersecurity risks, including breaches of protected health information.
  • Impersonal patient-doctor interaction / patient isolation: patients interact with technology instead of a live care provider.
  • Frustration with poor implementation.

Which technology has the potential to help overcome the body's tendency to reject transplants group of answer choices?

Which technology has the potential to help overcome the body's tendency to reject transplants? Stem-cell technology.

What are the risk to patients as a result of having technology be so integral to their healthcare?

Storing information electronically makes it more vulnerable to security violations and hackers. Healthcare practitioners may come to over-rely on electronic records and forget how to work without them. Errors may result from entering incorrect numbers or typos.

Which of the following is a disadvantage of social media such as Facebook yelp and Twitter as tools of consumer engagement in the health care industry?

Which of the following is a disadvantage of social media such as Facebook, Yelp, and Twitter as tools of consumer engagement in the health care industry? There is no credible third-party mediator in social media.