Smart Medicine: AI Plays an Increasingly Bigger Role in Health Care

By Allison DeAngelis

February 14, 2019

In her office at Tufts Medical Center, clinical geneticist Dallas Reed often turns to her cell phone or computer for help with diagnoses that doctors have traditionally made based on their own instincts.

Reed, an obstetrician and gynecologist at Tufts’ Floating Hospital for Children in Boston, has adopted Face2Gene, an application made by Boston-based FDNA Inc. that uses a photograph, a symptom checklist and artificial intelligence to help diagnose genetic disorders in children. It’s a big change from what genetic specialists have done in the past, surveying a child’s face for unusually shaped eyes or abnormalities of the ears, then matching them to images of thousands of genetic disorders with similar physical characteristics.

“A computer can search the data better than I can,” said Reed, gesturing towards the stacks of textbooks. “It’s helpful to at least narrow down the options.”

In health care, as in other fields, artificial intelligence allows computers to perform tasks previously done by humans. Using immense amounts of data, AI not only has clinical applications — as in diagnosing patients — but can also be harnessed for behind-the-scenes uses, such as making the development of new drugs more efficient. Still, the technology faces several hurdles before it can live up to the hype.

Massachusetts has become one of the top locations for artificial intelligence jobs, recruiting more employees per capita than all other states except Washington. Increasingly, those employees are in the field of health care.

As just one example, Boston startup PathAI Inc. recently teamed up with drug giant Novartis to create an artificial intelligence tool to assess tissue samples for cancerous cells, providing a second set of “eyes” for samples. Novartis says that pathologists misread such samples between 3 and 9 percent of the time, so the goal of the partnership is to drastically cut down on the number of inaccurate diagnoses.

But among the most promising applications are drug modeling and predicting patient responses, which experts said could help the biopharmaceutical industry streamline costly clinical trials.

“Every other form of industry tests prototypes before they build. Boeing will simulate a thousand wings in a computer system before building them. They’re not going to build a thousand wings, throw them off a cliff, and see which one works. But, that’s essentially the way drug development works,” said Abe Heifets, the chief executive of San Francisco AI company Atomwise, which recently began working with Wilmington-based contract research organization Charles River Laboratories International (NYSE: CRL).

Charles River will use Atomwise’s molecular recognition technology in combination with its existing approaches to narrow down the pool of molecules Charles River could use to help clients develop new drugs. Like finding the right key for a lock, drug developers look for molecules that bind best to the target on a cell in order to create a potent drug with fewer off-target effects. Heifets said Atomwise’s tools can help researchers screen the approximately 3.8 billion known, developable compounds as much as 100 times faster than previous screening methods.
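The lock-and-key idea behind this kind of virtual screening can be sketched in a few lines: score each candidate molecule against a target and keep the strongest predicted binders. The scoring function, molecule strings, and target below are invented stand-ins for illustration; a real system like Atomwise's would use a trained model over 3D molecular structures.

```python
# Toy sketch of virtual screening: rank candidates by a (made-up)
# predicted binding score and keep the strongest binders. The scoring
# function is a placeholder for a real AI model.

def predicted_binding_score(molecule: str, target: str) -> float:
    """Hypothetical score: fraction of the target's 'features' the molecule matches."""
    matches = sum(1 for feature in target if feature in molecule)
    return matches / len(target)

def screen(candidates: list[str], target: str, top_n: int = 2) -> list[str]:
    """Return the top_n candidates, ranked by predicted binding score."""
    ranked = sorted(candidates,
                    key=lambda m: predicted_binding_score(m, target),
                    reverse=True)
    return ranked[:top_n]

candidates = ["ABCD", "AXYZ", "ABXY", "QRST"]
hits = screen(candidates, target="ABC")
print(hits)  # the two candidates sharing the most features with the target
```

The speedup Heifets describes comes from replacing slow physical assays with a cheap computed score, so billions of candidates can be ranked before anything is synthesized in a lab.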

Waltham-based Scipher Medicine Corp. has also developed a technology, called Prism, to predict a person’s chances of responding to a drug by analyzing genetic material in a blood sample.

The company is exploring uses in cardiovascular, respiratory and neurological diseases, but has focused first on a class of anti-inflammatory drugs called TNF inhibitors, which represent a $34 billion annual market. Some 65 percent of patients don’t respond to the drugs, according to Scipher CEO Alif Saleh.

Similarly, Boston startup NeuroBo Pharmaceuticals is developing ways to predict adverse drug reactions or use facial recognition to measure diabetic neuropathy patients’ pain during clinical trials. It is currently working with analytics firm AiCure to use facial recognition to ensure clinical trial participants are taking the drug appropriately, though it hasn’t yet discussed the use of the tool with the FDA.

It’s all in the data

Much of the challenge in developing health care AI, experts say, is the work of gathering trustworthy data to feed into computer algorithms. A few mislabeled or incorrectly annotated tumor biopsy slides, for example, can create flaws in a system designed to recognize cancer cell patterns.

“Something like, 98 percent of the public domain data doesn’t pass our quality control filters. … There’s lots of examples where, if you kind of accept data naively in databases, the AI program learns the wrong thing and gives incorrect answers,” Heifets said. “A lot of what we do is very carefully using our expertise to clean the data.”
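The filtering Heifets describes — rejecting records that fail quality-control checks before they ever reach the model — might look like the following minimal sketch. The specific checks, field names, and records here are invented for illustration; real pipelines encode far more domain expertise.

```python
# Minimal sketch of a data quality-control pipeline: a record must pass
# every check before it can be used for training. Checks are invented.

def has_required_fields(record: dict) -> bool:
    """Reject records missing any required field."""
    return all(key in record for key in ("id", "label", "measurement"))

def measurement_in_range(record: dict) -> bool:
    """Reject records whose measurement is absent or implausible."""
    value = record.get("measurement")
    return isinstance(value, (int, float)) and 0.0 <= value <= 1.0

QC_CHECKS = [has_required_fields, measurement_in_range]

def passes_qc(record: dict) -> bool:
    return all(check(record) for check in QC_CHECKS)

records = [
    {"id": 1, "label": "binder", "measurement": 0.42},
    {"id": 2, "label": "binder"},                          # missing field
    {"id": 3, "label": "non-binder", "measurement": 7.5},  # out of range
]
clean = [r for r in records if passes_qc(r)]
print(len(clean))  # 1
```

The point of the sketch is the shape of the work: most of the effort goes into writing and tuning the checks, not into the model that consumes the surviving data.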

Data quality concerns are a big reason why Boston-based FDNA feeds new, rigorously vetted data into its system every three or four months. Only 10 percent of the information goes toward the pool of data the program relies upon, according to CEO Dekel Gelbman — the rest helps train FDNA’s algorithm. Gelbman says the bigger the data pool, the more accurate the Face2Gene app is. When it was first launched, it recognized Down syndrome in white patients at nearly double the rate it did in black patients (the company has since expanded to use in 130 countries and has ongoing research projects in Africa and Asia).
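The 10/90 routing Gelbman describes — a slice of each vetted batch held for the reference pool the app consults, the rest used for training — can be sketched like this. The proportions come from the article; the function, seed, and data are invented for illustration.

```python
import random

def split_batch(batch: list, pool_fraction: float = 0.10, seed: int = 0):
    """Shuffle a vetted batch, routing pool_fraction of records to the
    reference pool and the remainder to the training set."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = batch[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * pool_fraction)
    return shuffled[:cut], shuffled[cut:]  # (reference_pool, training_set)

batch = list(range(100))  # stand-ins for 100 vetted records
pool, training = split_batch(batch)
print(len(pool), len(training))  # 10 90
```

Keeping the pool and training data disjoint is what lets a team measure accuracy honestly as each new batch arrives.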

“There are always chances that some of the data is ‘dirty’, but it balances out with the vast amounts of data you have,” he said.

As AI evolves to get smarter and more independent, experts say there’s a danger that it will clash with basic tenets of medicine and science. One of the key aspects of a successful experiment is that it’s transparent enough to be reproduced, but Steve Litster, chief technology officer at data company the Markley Group, said that deep learning tools are made up of layers of complex algorithms that resist that kind of transparency.

“One of the major criticisms around AI in health care is that physicians want to understand how an algorithm thinks…. It’s a black box. You really don’t know,” Gelbman said.

As for whether it’s possible that robots will become more active in the clinic, developing drugs and operating independently of scientists, most say it’s unlikely.

Litster cited the example of self-driving cars: Given the strict parameters of physics, integrating AI into a motor vehicle should theoretically be one of the simplest applications, yet the cars are still crashing at an alarming rate, he said. The human body, by comparison, is full of still-unknown mysteries.

“Personally, I guess the jury is out on the magnitude of impact AI will have in the next couple of years. But now is the right time to invest,” said Chris Hill, Charles River’s executive director of chemistry. “I think we’ve just got to be more predictive in the things that we do.”