Monday, July 9, 2018

AI can predict the approximate time of death of patients

Artificial Intelligence can use data from health records to predict with considerable accuracy when a patient will die. Such information can be put to positive use and may end up saving lives.

Developing more accurate predictive models
A recent paper published in a Nature journal by a large team of researchers says that feeding data from electronic health records (EHR) into a deep learning model can substantially improve the accuracy of predicted outcomes.
In trials that used data from two U.S. hospitals, the researchers showed that the model improved predictions not only of length of stay and time of discharge but also of patients' time of death.
The neural network in the study uses an immense amount of data, including information about the patient's medical history and vitals. A new algorithm lines up events in the patient's records into a timeline, and this enables the learning model to predict future outcomes such as length of stay and even death. The predictions are also produced remarkably quickly.
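To make the general idea concrete, here is a minimal sketch in Python (PyTorch), not the paper's actual architecture: timestamped EHR events are sorted into a timeline, embedded, and fed to a recurrent network that outputs a risk probability such as in-hospital mortality. All names, dimensions, and the toy data below are assumptions for illustration only.

import torch
import torch.nn as nn

class EHRTimelineModel(nn.Module):
    """Illustrative sketch: embed a patient's event timeline and score risk."""
    def __init__(self, n_event_types=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(n_event_types, embed_dim)
        # +1 input feature for the time elapsed since the previous event
        self.rnn = nn.LSTM(embed_dim + 1, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # one risk score per patient

    def forward(self, event_ids, delta_t):
        # event_ids: (batch, seq_len) integer codes for diagnoses, labs, vitals...
        # delta_t:   (batch, seq_len) hours since the previous event
        x = torch.cat([self.embed(event_ids), delta_t.unsqueeze(-1)], dim=-1)
        _, (h, _) = self.rnn(x)  # final hidden state summarizes the timeline
        return torch.sigmoid(self.head(h[-1]))  # predicted outcome probability

# Toy usage: one patient with five chronologically sorted events.
model = EHRTimelineModel()
events = torch.tensor([[3, 17, 42, 42, 7]])       # hypothetical event codes
gaps = torch.tensor([[0.0, 2.5, 1.0, 6.0, 0.5]])  # hours between events
print(model(events, gaps))  # an untrained probability, e.g. tensor([[0.49]])

A real system would train such a model on many labeled admissions; the sketch only shows how a timeline of heterogeneous record events can be turned into a single prediction.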
Uses of the predictions
The predictions may enable hospitals to prioritize patient care, adjust treatment plans, and even recognize medical emergencies before they happen. They could also free up the time of healthcare workers who previously had to sift through all this data to make less accurate predictions.
Other recently developed algorithms can diagnose lung cancer and heart disease better than human doctors. Another study fed retinal images into AI algorithms that can predict the chances that a patient will develop one or more of three major eye diseases.
One problem is amassing all the patient data in one central database, as the data is often spread widely across various healthcare systems and government agencies that frequently do not share it.
Dangers of centralizing patient data
Some worry that collecting this massive amount of personal health data and putting it all into a single model owned by Google, one of the largest private companies in the world, could create problems. There would need to be guarantees that the data would not be shared, since many companies would be eager to use the information for their own purposes. As the article notes: "Electronic health records of millions of patients in the hands of a small number of private companies could quickly allow the likes of Google to exploit health industries, and become a monopoly in healthcare."
A case for caution
Recently the U.K. government was concerned that DeepMind Health, owned by Alphabet, Google's parent company, could exert monopoly power, as described in an article in TechCrunch. There had been earlier concerns that DeepMind Health had broken U.K. law the previous year by collecting data on patients without their consent.
A panel that reviewed the project said: “There are many examples in the IT arena where companies lock their customers into systems that are difficult to change or replace. Such arrangements are not in the interests of the public. And we do not want to see DeepMind Health putting itself in a position where clients, such as hospitals, find themselves forced to stay with DeepMind Health even if it is no longer financially or clinically sensible to do so; we want DeepMind Health to compete on quality and price, not by entrenching legacy position.”
Healthcare professionals are already worried that AI could have negative effects on medicine once it is embedded in the system without precautions taken to ensure transparency. The American Medical Association (AMA), while acknowledging the benefits AI can bring to medicine, notes that AI tools must “strive to meet several key criteria, including being transparent, standards-based, and free from bias.” It is not clear, however, how one makes algorithms transparent. The AMA also notes that the U.S. regulatory framework lags far behind the advance of the technology, with the key legislation passed over two decades ago.
Algorithms can be biased
Algorithms developed for AI systems can reflect the bias, often unconscious, of those who develop them. Many people assume that because algorithms are mathematical, they are objective and unbiased by their very nature. However, as a recent article points out, there are biased algorithms everywhere. Cathy O’Neil, a mathematician and the author of Weapons of Math Destruction, a book that highlights the risk of algorithmic bias in many contexts, says people are often too willing to trust mathematical models because they believe doing so will remove human bias. “[Algorithms] replace human processes, but they’re not held to the same standards,” she says. “People trust them too much.”
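One simple way to hold a model to a standard is to audit its error rates across demographic groups. The short Python sketch below, using entirely made-up labels, predictions, and group tags, compares false-positive rates between two hypothetical groups; a large gap would be one basic signal that the model treats the groups differently.

import numpy as np

def false_positive_rate(y_true, y_pred):
    # Fraction of true negatives that the model wrongly flagged as positive.
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

# Hypothetical outcomes, model predictions, and group labels.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([0, 1, 1, 1, 1, 0, 0, 0, 1, 1])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in ("A", "B"):
    mask = group == g
    fpr = false_positive_rate(y_true[mask], y_pred[mask])
    print(f"group {g}: false-positive rate = {fpr:.2f}")

In this toy data the rates differ (0.67 versus 0.50), which in a real audit would prompt a closer look at the training data and features rather than being taken as proof of objectivity.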
Previously published in Digital Journal
