
Realizing the Potential for AI in Precision Health: The Start of an Important Dialogue


This blog was co-authored by Steve Mutkoski, Microsoft Worldwide Policy Director, and Nathan Leong, Lead Counsel, Microsoft US Health & Life Sciences.

Artificial Intelligence (AI) and its potential to improve healthcare are garnering significant attention these days. Our team at Microsoft understands that new technical paradigms like AI will have far-reaching effects on the healthcare industry, so we are committed to bringing people together for cross-disciplinary research and discussion that helps all stakeholders understand the complex technological, societal, and ethical considerations for AI in health and precision medicine.

In early 2018, we were asked to contribute an article to a volume of the American Bar Association’s SciTech Lawyer publication dedicated to precision medicine. Given the deep ties between precision medicine, massive amounts of patient health data, and analytical tools such as AI that will be required to derive insight from that data, we took the opportunity to ask ourselves a series of questions about the growing importance of AI in healthcare generally, and in support of precision medicine more specifically. Our article, Realizing the Potential for AI in Precision Health, was published in the Fall 2018 issue of SciTech Lawyer, and we hope it will contribute to the many important discussions about AI that are ongoing among healthcare stakeholders.

Read the full article here, or take a look at our summary below of a few of our top takeaways on issues to consider for AI in healthcare.

Principles for Responsible Development of AI in Precision Health

In the article, we took some of the principles for responsible development of AI that our President and Chief Legal Officer Brad Smith and Executive Vice President of Microsoft AI and Research Harry Shum discussed more generally in their book The Future Computed, and applied them to the specific field of AI in precision health. These principles include:

(1) Reliability and Safety; (2) Fairness, Inclusiveness, and Bias; (3) Transparency and Accountability; (4) Privacy and Security; and (5) Preparing the Health Care Profession.

Each of these topics presents significant potential obstacles to the successful development and deployment of AI-infused technologies in the healthcare sector. As food for thought, we have extracted a few key challenges and opportunities associated with some of these principles below.

  • Reliability and safety are pre-eminent when considering any tool deployed in a sector whose mission is to improve the health of patients, and we know AI-infused technologies will still need to work through the national regulatory processes that govern drugs and devices. We also believe that unique aspects of AI, particularly AI systems that “continuously learn,” mean that existing regulatory frameworks may need to be revised and updated to reflect the dynamic nature of AI-infused technologies. We are currently involved in initiatives with industry and regulators to help address these concerns and ensure that safe and reliable AI-infused technologies will benefit patients.
  • Fairness, inclusiveness, and bias have historically been challenges for any data-centric work, including in healthcare. Many people have some sense of how bias along lines of race or socioeconomic status may creep into training data. But beyond addressing bias in training data, what must we do in inclusive product design to ensure that we do not leave communities locked out of the benefits of AI? How do we use education, marketing, and product labeling to reach other key communities, such as the elderly, who may be less comfortable with new technology?
  • Accountability will also be a critical issue as AI-infused technology increasingly influences the diagnosis and treatment paths a clinician might follow. Laws around the world apply a complex and differing set of liability rules to devices on the one hand and humans on the other, which raises the question: do we have liability rules that will appropriately balance stakeholder interests and incentives? We are part of important stakeholder discussions with payors, providers, technology manufacturers, and, most importantly, patients to explore how existing legal frameworks can apply and where we may need to adapt and adjust those rules.
  • Privacy and security concerns relating to patient data are significant. Our society is learning just how personally identifying genetic information is, with law enforcement authorities illustrating how genetic information from one person can be combined with public records and genealogy databases to identify a very distant relative. These new techniques raise the question of whether there can be anything approaching “genomic privacy” once even a distant relative has had his or her genome sequenced, potentially chilling efforts to get citizens to voluntarily donate their DNA to research biobanks such as the “All of Us” program in the United States. We are working to broaden stakeholders’ understanding of this potentially chilling effect and look forward to playing an active role in discussions to create more robust protections for the use of genetic data.

To learn more about these challenges and some real-world examples of the opportunities, read our full article, Realizing the Potential for AI in Precision Health, here.