Tech firms say A.I. can transform health care as we know it. Doctors think they should slow down


Ryan Browne | Aug 17, 2018, CNBC.com

Some doctors worry that those in the tech world think AI can not only help clinicians, but even do a better job.

They fear the fast-paced nature of the still-nascent AI industry could put patient safety at risk.

One U.K. health industry body believes regulators should keep pace with the rapid advances in technology.

As an industry reliant on patient records and beset by outdated technology, health care is widely thought to be a prime target for an artificial intelligence revolution.

Many believe the technology will provide a host of benefits to clinical practitioners, speeding up the overall patient experience and diagnosing illnesses early enough to identify potential treatments.

Just two days ago, DeepMind, an artificial intelligence (AI) firm owned by Google, said it had lent its technology to London’s Moorfields Eye Hospital for groundbreaking research into detecting eye diseases. The technology was used to analyze eye scans and identify more than 50 ophthalmological conditions. DeepMind’s machine-learning technology made correct diagnoses 94 percent of the time, Moorfields said.

The development indicated that AI can analyze health problems with as much accuracy as a doctor. But some doctors worry that those in the tech world think AI can not only help clinicians, but even do a better job.

Take Babylon Health, for instance, which in June said its AI chatbot was able to diagnose medical conditions as accurately as a doctor. The firm’s chatbot scored higher than average on a practice exam compiled for physicians.

Babylon’s chatbot answered 82 percent of the test’s questions correctly, versus an average mark of 72 percent for human doctors.

But the Royal College of General Practitioners (RCGP), an industry body representing doctors who treat a wide range of common illnesses, quickly disputed the claim that AI could diagnose illnesses with the same effectiveness as a human medical practitioner.

“No app or algorithm can do what a GP does,” Helen Stokes-Lampard, a professor and chair of the RCGP, told CNBC earlier this week. “Every day we deliver care to more than a million people across the U.K., taking into account the physical, psychological and social factors that may be impacting on each person’s health.”

Stokes-Lampard continued: “We consider the different health conditions a patient is living with, their family history, any medications they might be taking, and a myriad of other considerations when formulating a treatment plan.”

Babylon at the time denied it had claimed an AI could do the job of a GP, saying that it supported a model where AI is complementary to medical practice.

‘The role of the doctor will have to adapt’

Nevertheless, the spat highlighted a serious question that may one day need to be addressed by those in the health industry: How should health professionals respond to the rapid growth in new, data-driven technologies like AI?

“Over the next decade or two, AI will certainly play a big role in supporting doctors in making decisions,” Dan Vahdat, chief executive of health tech start-up Medopad, told CNBC via email.

“The role of the doctor will have to adapt in learning how to use AI to complement their clinical judgments. This will take time, but it’s inevitable.”

Medopad specializes in connecting health-care providers, doctors and patients to monitor a patient’s health data and see how their care can be improved.

In the U.K., the National Health Service, the country’s universal health care system, has come under strain both in terms of funding and resources. The promise of AI to reduce the financial burden on medical services by cutting out some disposable roles and functions could be music to the ears of government and health authorities.

Vahdat said that one area in which AI could be hugely beneficial in improving efficiency and cutting costs was cardiac care.

“If we collect data in real time and combine it with their historical data as well as data from millions of similar patients, an intelligent system could predict a heart attack with a high rate of precision,” he said.

“This will allow us to save countless lives and save health care systems around the world huge spend.”

But some experts fear the fast-paced nature of the still-nascent AI industry could put patient safety at risk.

“We are concerned that in the rush to roll out AI and push the boundaries of technology, there is a risk that important checks and balances that have been established to keep patients safe might be seen as an afterthought, or be bypassed entirely,” Stokes-Lampard said.

Last month, a report by online health publication Stat said that IBM’s Watson supercomputer had made multiple “unsafe and incorrect” cancer treatment recommendations, citing internal company documents. According to Stat, the program had only been trained to deal with a small number of cases and hypothetical scenarios instead of actual patient data. IBM subsequently told CNBC that it has “learned and improved Watson Health based on continuous feedback from clients, new scientific evidence and new cancers and treatment alternatives.”

Stokes-Lampard said that regulators should keep pace with the rapid advances in technology to avoid harm to patients.

She said: “In an ever-changing ‘tech space,’ it is imperative that regulation keeps up with all technological developments, and that it is appropriately enforced, so that patients are kept safe, however they choose to access care.”

But many tech companies — big and small — are mostly averse to new regulation, arguing it could restrain innovation.

“There are a lot of regulations to protect patients already,” Medopad’s Vahdat said. “It isn’t about adding more regulation — it is about adapting and applying existing regulations to better deal with current realities.”

He added: “Patients as a cohort are often vulnerable and we feel responsible to not create hope that is not qualified. We feel strongly that unless our technology is clinically validated it shouldn’t be marketed. Like doctors, health care start-ups should seek to fully understand the ethical implications of creating expectations with patients.”

Data

Another concern is data. A medical AI app requires vast amounts of patient data to optimize its analysis of a patient’s health status. That’s why firms like Medopad and DeepMind have partnered with hospitals, gaining access to libraries of patient information.

But the need for that data has heightened concerns over the privacy of patients, who would more than likely prefer that sensitive records about their health problems not be shared inappropriately or exposed through a cyberattack.

Last year, British privacy watchdog the Information Commissioner’s Office (ICO) rapped a hospital working with DeepMind over its use of patient data. The Royal Free Hospital in London had struck a deal with Google’s AI firm allowing the company to gain access to the data of 1.6 million patients, but did not do enough to inform them about how that information would be used.

“When it comes to data, the more access to data you can have the better your AI can become,” Poppy Gustafsson, chief executive of AI and cybersecurity firm Darktrace, told CNBC. “But obviously your data needs to be protected, so where some are trying to develop systems whereby they are getting a lot of data to try and gather those learnings, they also need to think about how they are going to keep that data secure.”

Health care is an important battleground for some of the biggest tech companies in the U.S. Google parent company Alphabet, Amazon, Microsoft, IBM and Salesforce are all developing cloud and AI technology aimed at improving the health system.

The RCGP’s Stokes-Lampard said that tech platforms ought to be conscious of their responsibility to protect patients.

“The RCGP is keen to see the NHS embrace new technology, and we believe that it has the potential to transform many aspects of the NHS, but it must be implemented in a safe and equitable way that doesn’t benefit some patients at the expense of others, and is not to the detriment of general practice as a whole,” she said.
