Barriers to AI Dissemination and Implementation in Medicine

AI, machine learning, neural networks and deep learning are the new, new thing. Applications in medicine are potentially vast and, as with most things on the upside of the hype cycle, there is a proliferation of papers, conferences, webinars and organizations trying to stay ahead of the curve. Doing so, however, means you are on the leading edge of the slide toward the trough of disillusionment.


Despite advances in computer technology and other parts of the 4th industrial revolution, there are many barriers to overcome before machine learning crosses the chasm. Here are some things you should know about dissemination and implementation, and innovation diffusion basics.

There are four basic categories of barriers: 1) technical, 2) human factors, 3) environmental, including legal, regulatory, ethical, political, societal and economic determinants and 4) business model barriers to entry.


A recent Deloitte report highlighted the technical barriers and the "vectors of progress" needed to overcome them.


Explainability, for example, is a barrier. Suppose a model can predict the onset of Type 2 diabetes. It's one thing to say we think there's a propensity, but the next question is typically "Why?" Most algorithms can't answer it; if we provide a prediction, we should also be able to answer those types of questions.
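As a sketch of what an explainable prediction might look like, consider a transparent risk score whose output always comes with the factors that drove it. The feature names, weights and threshold below are invented for illustration; a real model would be trained and validated on clinical data.

```python
# Hypothetical illustration of an explainable prediction.
# Weights and threshold are made up for this sketch, not clinical values.
WEIGHTS = {
    "bmi_over_30": 2.0,
    "fasting_glucose_over_100": 3.0,
    "family_history": 1.5,
    "age_over_45": 1.0,
}
THRESHOLD = 3.5

def predict_with_explanation(patient):
    """Return (high_risk, contributing_factors) so that every
    prediction can answer the clinician's "Why?"."""
    contributions = {f: w for f, w in WEIGHTS.items() if patient.get(f)}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

high_risk, reasons = predict_with_explanation(
    {"bmi_over_30": True, "fasting_glucose_over_100": True}
)
# high_risk is True, and reasons names the factors behind the call
```

The design choice is the point: because the model is additive and its features are human-readable, the "Why?" falls out for free, which is exactly what most black-box algorithms cannot offer.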

In a paper published in Science, researchers raise the prospect of “adversarial attacks” — manipulations that can change the behavior of A.I. systems using tiny pieces of digital data. By changing a few pixels on a lung scan, for instance, someone could fool an A.I. system into seeing an illness that is not really there, or not seeing one that is.

Software developers and regulators must consider such scenarios as they build and evaluate A.I. technologies in the years to come, the authors argue. The concern is less that hackers might cause patients to be misdiagnosed, although that potential exists. More likely is that doctors, hospitals and other organizations could manipulate the A.I. in billing or insurance software in an effort to maximize the money coming their way.
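The pixel-manipulation idea above can be illustrated with a toy example. Here the "scan" is just a list of intensities and the "classifier" a simple mean-intensity threshold, both invented for this sketch and nothing like a real deep-learning imaging model; the point is only that a tiny, localized change can flip the output.

```python
# Toy sketch of an adversarial perturbation. The classifier and data
# are hypothetical stand-ins, not a real medical imaging system.

def classify(scan, threshold=0.5):
    """Flag a scan as 'abnormal' if its mean intensity crosses a threshold."""
    return "abnormal" if sum(scan) / len(scan) > threshold else "normal"

scan = [0.4] * 100              # a benign "image": classified normal
attacked = scan.copy()
for i in range(11):             # change just 11 of 100 "pixels"
    attacked[i] = 1.4           # a small, localized perturbation

# classify(scan) -> "normal", classify(attacked) -> "abnormal"
```

Real adversarial attacks on deep networks are far subtler, using gradient information to craft perturbations invisible to the human eye, but the vulnerability they exploit is the same: small input changes producing large output changes.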

Measuring and reporting results is another barrier since there are lies, damned lies and AI statistics.
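A short sketch shows how a headline number can mislead. On a synthetic test set where only 5% of patients have the disease, a model that always predicts "healthy" scores 95% accuracy while catching zero true cases; the counts are invented purely to make the point.

```python
# Sketch: why accuracy alone is an "AI statistic" to distrust.
# The confusion-matrix counts below are synthetic.

def metrics(tp, fp, tn, fn):
    """Compute accuracy and sensitivity from confusion-matrix counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, sensitivity

# An "always healthy" model on 1000 patients, 50 of whom are sick:
acc, sens = metrics(tp=0, fp=0, tn=950, fn=50)
# acc == 0.95, sens == 0.0 — impressive accuracy, clinically useless
```

This is why reported results should include sensitivity, specificity and the prevalence of the condition in the test population, not a single accuracy figure.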


Human factors, like how and whether doctors will use AI technologies, can be reduced to the ABCDEs of technology adoption. Research suggests the reasons more ideas from open innovation aren’t being adopted are political and cultural, not technical. Multiple gatekeepers, skepticism regarding anything “not invented here,” and turf wars all hold back adoption.

Attitudes: While the evidence may point one way, there is often an attitude that the evidence does not pertain to a particular patient, or a general bias against "cookbook medicine."

Biased Behavior: We're all creatures of habit, and habits are hard to change. Particularly for surgeons, the switching costs of adopting a new technology and running the risk of exposure to complications, lawsuits and hassles simply aren't worth the effort.

Cognition: Doctors may be unaware of a changing standard, guideline or recommendation, given the enormous amount of information produced on a daily basis, or might have an incomplete understanding of the literature. Some may simply feel the guidelines are wrong or do not apply to a particular patient or clinical situation and reject them outright.

Denial: Doctors sometimes deny that their results are suboptimal and in need of improvement, based on "the last case." More commonly, they are unwilling or unable to track short-term and long-term outcomes to see whether their results conform to standards.

Emotions: Perhaps the strongest motivators are emotional: fear of reprisals or malpractice suits; greed driving the use of inappropriate technologies that generate revenue; the need for peer acceptance to "do what everyone else is doing"; or ego driving the opposite need to be on the cutting edge, winning the medical technology arms race or creating a perceived competitive marketing advantage.

In addition, medical schools, graduate medical education and graduate schools are not doing enough to train knowledge workers how to be more effective.


The UK House of Lords Select Committee on Artificial Intelligence has asked the Law Commission to investigate whether UK law is "sufficient" when systems malfunction or cause harm to users.

The recommendation comes as part of a report by the 13-member committee on the "economic, ethical and social implications of advances in artificial intelligence".

One of the recommendations of the report is for a cross-sector AI Code to be established, which can be adopted nationally, and internationally. The Committee’s suggested five principles for such a code are:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

The Nuffield Council on Bioethics identified the ethical and societal issues as:

  1. Reliability and safety
  2. Transparency and accountability
  3. Data bias, fairness and equity
  4. Effects on patients
  5. Trust
  6. Effects on healthcare professionals
  7. Data privacy and security
  8. Malicious use of AI

Finally, the environmental parts of the SWOT analysis are the wild cards in the game.

Here are the issues under discussion about patenting AI products and services.

Here are some other legal concerns.


Startup developers of commercial AI applications operate in a competitive market. They compete with the data available to them and meet a market need for AI applications for midsize companies, which, in turn, enables those companies to compete with larger companies that often develop AI applications internally.

Some healthcare organizations contend that it’s financial constraints that provide limitations. Lack of dollars makes it difficult for all but the most advanced and lucrative healthcare organizations to put machine learning or artificial intelligence in place to make the most of the data. There are many more practical barriers contributing to the AI divide.

Here are some regulatory and reimbursement issues.

AI dissemination and implementation faces some NASSSy hurdles (the acronym stands for Nonadoption, Abandonment and challenges to the Scale-up, Spread and Sustainability of health technologies).

Artificial intelligence in medicine is advancing rapidly. However, whether it grows at scale and delivers the promised value will depend on how quickly these barriers fall.

Arlen Meyers, MD, MBA is the President and CEO of the Society of Physician Entrepreneurs, on Twitter @ArlenMD, and Co-editor of Digital Health Entrepreneurship.

To hear more about the progression of artificial intelligence and machine learning please register for our webinar in conjunction with The Society of Physician Entrepreneurs on Thursday, August 29th.