Artificial Intelligence in Healthcare: Growth, Trends and Noteworthy Applications

Thilakshan Kanesalingam (TK)
Jun 30, 2019

The technology that enabled an algorithm to beat a world chess champion in 1997 is now making waves in healthcare. Since the field’s beginnings in the 1950s, artificial intelligence (AI) has made great strides across industries. While the accelerated development of AI technology is opening what feels like countless doors of opportunity, perhaps the most impactful applications are in the medical field. Since 2010, more than 154,000 AI patents have been filed worldwide, with health-related fields accounting for the largest share (29.5%). AI in healthcare is one of the world’s highest-growth industries, projected to reach a monumental $150 billion valuation by 2026.

But why now? What caused this recent inflection point in AI despite decades of fluctuation in both funding and development? A primary contributor to the shift was public exposure to advances in deep learning, the machine learning technique that teaches computers to learn by example. Kai-Fu Lee describes this inflection point in his book AI Superpowers, highlighting that interest in AI spiked following a major demonstration of deep learning’s potential in 2016, when Google’s AI software (later dubbed AlphaGo Lee) beat top professional Lee Sedol at Go, the complex, centuries-old board game that originated in China. The milestone achievement spurred further development, and Google went on to create AlphaGo Zero, an evolution of the software that learned entirely from self-play, beginning from random moves without any human input or historical game data. In just three days, AlphaGo Zero surpassed AlphaGo Lee, and by the 40-day mark it had surpassed every other AlphaGo version, becoming the strongest Go player of all time.

This incredible application of reinforcement learning is only the tip of the iceberg, and in the almost two years since, deep learning has been revolutionizing a number of different industries, with research and investment funding reaching a record high. Last September, DARPA announced a $2 billion campaign to develop the next wave of AI technology, and the International Data Corporation projects that worldwide spending on AI will reach $35.8 billion in 2019, a 40 percent uptick from last year. By 2022, the IDC predicts that spending will balloon further to $77.6 billion.

By 2020, medical knowledge is expected to double every 73 days, compared to a doubling time of 50 years in 1950, 7 years in 1980, and 3.5 years in 2010. In healthcare specifically, the implications of deep learning largely come down to processing power: the speed and accuracy with which machines can work through data for diagnostics and discovery. The ability to rapidly analyze large data sets is not only helpful for accelerating existing tasks and outputs in the medical field, but also for driving this projected exponential growth in medical knowledge.

AI has a long runway of potential ahead in the healthcare space, but a shift is already underway. Let’s take a deeper dive into some of the game-changing applications of AI in healthcare that are augmenting the industry.

Medical Imaging, Big Data, and e-AI

Data processing in radiology is considered low-hanging fruit for artificial intelligence today, and applications in medical imaging are predicted to make up $19 billion of the global AI industry valuation by 2025. While artificial intelligence won’t replace radiologists, it can certainly supplement the value radiologists provide in clinical decision making and play an important role in predictive analytics. Efficiency improvements for physicians can ultimately translate into cost savings and increased revenue, and companies are catching on.

To understand the full potential for artificial intelligence in radiology, it’s important to first understand the breadth of medical imaging data. According to GE Healthcare, hospitals currently store millions of digital images, and that number is growing as digital imaging devices become more advanced. Furthermore, medical imaging makes up approximately 90 percent of all healthcare data, and more than 97 percent of that data is unanalyzed. There is simply too much of it and, in many cases, it’s too complex to be effectively evaluated and turned into useful information by human providers or analysts.
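
This is where deep learning comes in: the models behind imaging AI are typically convolutional neural networks that learn to score a scan for the presence of disease. Below is a minimal, illustrative sketch in PyTorch; the architecture and the two-class setup are simplifications for this article, not any vendor’s actual model.

```python
# Minimal sketch of a convolutional classifier that scores an X-ray as
# normal vs. abnormal. Illustrative only: real diagnostic models are trained
# on large labeled datasets and validated clinically before use.
import torch
import torch.nn as nn

class XRayTriageNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale input
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 224 -> 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 112 -> 56
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = XRayTriageNet()
scan = torch.randn(1, 1, 224, 224)          # stand-in for a preprocessed scan
probs = torch.softmax(model(scan), dim=1)   # P(normal), P(abnormal)
print(probs)
```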

In an effort to harness the full power of AI, companies like NVIDIA, Microsoft, Google, Apple, Tesla, and Hitachi are making their own chips specifically designed to handle the computations that deep learning requires. Rather than relying on the general-purpose chips that much of the research and commercial work to date has been built on, these processors, featuring embedded artificial intelligence (e-AI), will further accelerate the rate of change we see in the field and solve many big data processing bottlenecks for real-world applications.

In November 2017, Nuance Communications, in partnership with NVIDIA, launched an open platform intended to help accelerate the development of AI for medical imaging. The marketplace allows for the seamless creation, distribution, and use of continuously learning algorithms for radiologists. A year after launch, Nuance announced its evolution into a collaborative community of developers and researchers who can build, test, validate, and share those algorithms based on data from 25,000 radiologists across 5,500 connected healthcare facilities.

Examples of clinical decision support algorithms like those made accessible via the Nuance marketplace include Imagen’s OsteoDetect software, which uses an AI algorithm to analyze wrist X-rays, and IDx-DR, a fully autonomous AI system capable of detecting diabetic retinopathy instantly, among many others. What is perhaps most interesting about these applications is how quickly they are being cleared by the FDA. OsteoDetect was cleared in May 2018, just one year after testing of the software was completed, and the review process for IDx-DR took just 85 days. As AI cements its role in healthcare, more software offerings are seeking De Novo clearance, and the timeline from concept to FDA authorization is accelerating.

Another organization making great strides in applying artificial intelligence to medical imaging is DeepMind Health. DeepMind Health was founded in London in an effort to use AI “to make a practical difference to patients, nurses and doctors” and to support the NHS and other healthcare systems. In recent years, the organization has partnered with Moorfields Eye Hospital to use AI to accurately interpret eye scans for sight-threatening diseases, with University College London to develop an AI system that can analyze scans of head and neck cancer, and with the Cancer Research UK Centre at Imperial College London to improve the detection of breast cancer, among other projects. Just last month, Google AI researchers, in collaboration with Northwestern Medicine, published results of a deep learning model that detected 5% more lung cancers and produced 11% fewer false positives than a panel of six human experts. Google plans to make this model available through the Google Cloud Healthcare API. Each of these applications can help streamline processes for healthcare professionals, reduce wait times between scans, diagnosis, and treatment, aid in treatment planning, and ultimately improve patient outcomes.
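
Results like these are usually reported in terms of sensitivity (the share of cancers caught) and false positive rate (the share of healthy scans incorrectly flagged). A quick sketch, with invented counts, shows how both are computed from a confusion matrix:

```python
# Sensitivity and false positive rate from a confusion matrix.
# The counts below are invented purely for illustration.
tp, fn = 45, 5    # cancers correctly caught vs. missed
fp, tn = 8, 142   # healthy scans wrongly flagged vs. correctly cleared

sensitivity = tp / (tp + fn)              # true positive rate: 90.0%
false_positive_rate = fp / (fp + tn)      # 5.3%

print(f"Sensitivity: {sensitivity:.1%}")
print(f"False positive rate: {false_positive_rate:.1%}")
```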

There is no shortage of examples of how AI can improve medical imaging processes, and while applications in imaging are AI’s low-hanging fruit, there are still significant challenges with clinical adoption when it comes to integration and workflow. Without the right software, providers won’t be able to effectively leverage AI tools to their fullest potential. To accomplish this, algorithm developers will need to partner with imaging vendors or create vendor-agnostic platforms to ensure integration and workflow compatibility.

Deep EHR

Given the previously outlined use cases of AI for organizing and analyzing medical imaging data, its role with EHR systems may seem obvious. But again, let’s take a moment to consider the sheer volume of the data in question. It’s been estimated that the healthcare industry is responsible for 30 percent of the world’s data production, and that a single patient generates approximately 80 megabytes of stored data each year. Hospitals, meanwhile, produce an estimated 50 petabytes (that’s 50,000,000 GB) annually. Though already staggering, the volume of global healthcare data is projected to grow 48% per year and to reach 2,314 exabytes by 2020. For comparison, that number was 153 exabytes in 2013.

Perhaps even more staggering than the sheer volume is the lack of organization. According to 2018 data from IBM Watson Health, approximately 80 percent of global healthcare data is unstructured. EHR systems store complex data and clinical notes for every patient (and every encounter with that patient), including lab tests and results, diagnoses, prescriptions, medical images, admission and discharge notes, and demographic information. As of February 2018, almost 84 percent of hospitals utilized an EHR system, nine times the adoption rate of 10 years prior. However, because of regulatory requirements and varying EHR software across health systems, the data is stored in ways that make it difficult to access, consolidate, and share. This is not only inconvenient and inefficient; it can also have significant implications for patients, such as treatment delays when critical information is scattered and hard to reach. Because of this, Fast Healthcare Interoperability Resources (FHIR) emerged in 2014 as the standard for the electronic exchange of healthcare information, allowing developers to build applications that retrieve data from EHRs more quickly and easily. The FHIR standard is now a foundation for successfully deploying AI on large data sets and allowing algorithms to make sense of that data.
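
In practice, FHIR exposes EHR data as typed resources (Patient, Observation, Condition, and so on) over a REST API. Here’s a minimal sketch in Python against the public HAPI FHIR test server, used purely for illustration; a real deployment would point at a hospital’s authenticated FHIR endpoint.

```python
# Sketch of retrieving structured EHR data over FHIR's REST API.
# The HAPI endpoint below is a public test server, used here for illustration.
import requests

BASE = "http://hapi.fhir.org/baseR4"

# Search for Patient resources by family name (standard FHIR search syntax).
bundle = requests.get(f"{BASE}/Patient",
                      params={"family": "Smith", "_count": 5}).json()

for entry in bundle.get("entry", []):
    patient = entry["resource"]
    pid = patient["id"]
    # Pull Observations (labs, vitals) linked to this patient.
    obs = requests.get(f"{BASE}/Observation",
                       params={"patient": pid, "_count": 3}).json()
    print(pid, "->", obs.get("total", 0), "observations on record")
```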

When AI can make sense of data, it can provide a number of big data solutions. A 2018 survey of deep learning for EHR analysis found that deep learning approaches to clinical informatics tasks delivered quicker and more accurate performance than traditional methods. In November 2018, Amazon unveiled Amazon Comprehend Medical, a machine learning service that automatically extracts structured information from unstructured clinical text, a process that is otherwise manual and time-consuming. The offering is intended to be a one-size-fits-all solution that eliminates the need for custom-built programming and allows for the harmonization of information across EHR silos. Google announced the expansion of a similar project in June 2018: Medical Digital Assist, an EHR component of the Google Brain healthcare group that aims to leverage AI and voice recognition to streamline EHR use and enable hands-free clinical documentation.
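
On the Amazon side, Comprehend Medical is exposed through the standard AWS SDKs. A minimal sketch, assuming configured AWS credentials and an invented clinical note:

```python
# Sketch of extracting structured entities (medications, conditions, dosages)
# from free-text clinical notes with Amazon Comprehend Medical.
import boto3

client = boto3.client("comprehendmedical", region_name="us-east-1")

note = ("Pt is a 40yo woman presenting with chest pain. "
        "Prescribed 40mg atorvastatin daily. History of type 2 diabetes.")

result = client.detect_entities_v2(Text=note)

for entity in result["Entities"]:
    # Each entity carries a category (e.g. MEDICATION), the source text,
    # and a confidence score.
    print(f'{entity["Category"]:>20}  {entity["Text"]}  ({entity["Score"]:.2f})')
```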

While gaining universal, automated access to currently unstructured and siloed data, especially across health systems, would improve efficiency and reduce (or effectively eliminate) manual extraction of information, there is also huge potential in harnessing that information to predict medical outcomes and drive personalized medicine. Last February, a group of Chinese and American researchers published a study detailing the success of a deep learning system they created to analyze medical data and diagnose common illnesses in children. The AI was trained on 101.6 million data points from the electronic health records of roughly 600,000 pediatric patients and their combined 1.3 million visits, and it was able to diagnose with a level of accuracy comparable to experienced pediatricians. This has enormous potential to help physicians evaluate large amounts of data and support clinical decision making, especially in complex cases.
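
The published system relied on a sophisticated NLP pipeline over millions of records, but the underlying idea, turning clinical text into features and learning a mapping to diagnoses, can be sketched in a few lines. The miniature training set below is invented for illustration.

```python
# Toy sketch of diagnosis prediction from clinical text: vectorize notes,
# fit a classifier, predict a label. Training data is invented and tiny;
# the real system learned from over 100 million data points.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "fever cough wheezing shortness of breath",
    "fever cough runny nose sore throat",
    "ear pain fever tugging at ear irritable",
    "barking cough hoarse voice stridor at night",
]
labels = ["asthma", "upper respiratory infection", "otitis media", "croup"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(notes, labels)

print(model.predict(["child with fever and ear pain"])[0])  # likely otitis media
```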

Natural Language Processing

While Google plans to use voice recognition to dictate clinical notes, there are several potential applications of language-enabled technology, both written and spoken, and investors have taken note. Natural language processing (NLP), an umbrella term for the use of algorithms and artificial intelligence to identify and extract meaning from written or spoken language, is projected to become a $16 billion market opportunity by 2021. You’re likely already familiar with how the technology enables your interactions with Siri or Alexa, but in healthcare, NLP could be used to convert data from machine-readable formats to natural language (or vice versa), identify key takeaways from lengthy source material, map unstructured text to structured fields or, as previously mentioned, transcribe dictated clinical notes.
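
Mapping unstructured text to structured fields is exactly what off-the-shelf NLP libraries do. A quick sketch with spaCy’s general-purpose English model follows; a clinical deployment would typically swap in a domain-specific model such as scispaCy.

```python
# Sketch of turning unstructured text into structured fields with spaCy.
# Setup (assumed): pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Jane Doe was admitted to Boston Children's Hospital on June 3, 2019 "
          "and discharged four days later.")

# Named entities arrive pre-labeled and ready to map into structured fields.
for ent in doc.ents:
    print(f"{ent.label_:>8}: {ent.text}")   # e.g. PERSON, ORG, DATE
```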

Interestingly, AI voice assistants like Amazon Alexa, Google Assistant, and Microsoft Cortana may have a role to play here. Both Northwell Health in New York and Boston’s Beth Israel Deaconess Medical Center are building on the standard capabilities of Amazon’s Alexa, creating new skills that can help improve patient care. According to Beth Israel Deaconess CIO Dr. John Halamka, inpatients can interact with Alexa for a number of common questions or commands, such as: When will my doctor be here? What’s for lunch? Call a nurse. Similarly, Northwell Health, a system of New York hospitals and healthcare centers, uses Alexa to provide information on wait times for users who need care. Alexa users (and soon, Google Home users) can leverage the technology to find the shortest ER or urgent care wait time near any given zip code, or to check the wait time at a specific facility.
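
Under the hood, a custom Alexa skill like the ones these hospitals built is essentially an intent handler running in the cloud. Below is a minimal sketch of an AWS Lambda handler; the intent names, responses, and menu are invented for illustration, and a real inpatient skill would also handle authentication and patient context.

```python
# Sketch of a Lambda-backed Alexa skill for hypothetical inpatient requests.
def lambda_handler(event, context):
    intent = (event.get("request", {})
                   .get("intent", {})
                   .get("name", ""))

    if intent == "WhatsForLunchIntent":        # hypothetical intent
        speech = "Today's lunch is grilled chicken with rice and vegetables."
    elif intent == "CallNurseIntent":          # hypothetical intent
        speech = "Okay, I have notified your nurse."  # would call a paging API
    else:
        speech = "You can ask what's for lunch, or ask me to call a nurse."

    # Standard Alexa custom-skill response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```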

However, there are some even more exceptional use cases. Beyond Verbal bills its technology as a lifesaving innovation that uses voice-enabled AI and vocal biomarkers to deliver personalized health screening, emotion monitoring, and even crucial medical predictions. The human voice carries signal in the form of intonation, volume, pitch, rate, and rhythm, and Beyond Verbal’s deep learning technology can identify even the most minuscule changes, mapping them to potential outcomes in real time. Voice.Health, born from a partnership between Boston Children’s Hospital and HLTH, is another initiative looking to increase the adoption of NLP and voice-assisted technologies in healthcare.
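
The raw ingredients of that kind of analysis, pitch and volume and their variation over time, can be extracted from audio with open-source tools. A minimal sketch with librosa follows; the file name is a placeholder, and Beyond Verbal’s proprietary biomarker models are of course not public.

```python
# Sketch of extracting basic vocal features (pitch, volume) from a recording.
import librosa
import numpy as np

y, sr = librosa.load("patient_voice_sample.wav")  # placeholder file name

# Fundamental frequency (pitch) estimated with the YIN algorithm.
f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"))

# Root-mean-square energy as a rough proxy for volume.
rms = librosa.feature.rms(y=y)[0]

print(f"Mean pitch:  {np.nanmean(f0):.1f} Hz")
print(f"Mean energy: {rms.mean():.4f}")
```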

Future State

Big things are happening with AI in the healthcare space, and the innovations and use cases outlined in this article only scratch the surface of its potential. The reality is, machine learning in healthcare is riding an exponential curve. An outpouring of investment is fueling increasingly sophisticated processing techniques (in both hardware and software), and, as a result, insights are being extracted from massive volumes of data and leveraged for a multitude of benefits.

What new developments have captured your attention? What do you think is next in the space?
