AI in healthcare: Artificial Intelligence or Superficial Stupidity?

Natalie Nelissen

22nd May 2019

Lunch & Learn seminar series

This blog post is a summary of the talk given by Dr David Wong (University of Leeds,
Twitter: @DrDaveCWong) on 14 May 2019.

Should a medical artificial intelligence (AI) take the Hippocratic Oath? Opinions among our audience were divided but the largest vote (45%) went to ‘Yes’, followed by ‘Not sure’ (36%).

Your answer may depend on how much you know about the technology as it stands versus your aspirations for the future. In his talk, David made the point that current AI applications are not particularly intelligent, unlike the artificial general intelligence examples popular in sci-fi movies, such as holographic doctors and evil supercomputers.
Current AI can learn to solve a specific problem with known rules or boundaries, such as
beating a human at chess or detecting abnormalities in breast X-rays. The majority of AI
comes in the form of supervised machine learning: an algorithm is trained on a given
labelled data set, and makes a decision about the label of a new (similar) data point. For
example, you may want to classify pictures as either goldfish or bluebirds (or tumours as
malignant or benign).

You (the human) label a training set of many different pictures as either goldfish or bluebird. You then decide which feature to use; in this case, average picture colour may work well. The algorithm learns which colours are associated with goldfish (orange spectrum) and bluebirds (blue spectrum) and sets a cut-off colour between the two spectra. When you give the algorithm a new picture, it checks which side of the cut-off the picture's colour falls on (for example, more orange than blue) and labels it accordingly (goldfish).
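To make this concrete, here is a minimal sketch (not from the talk itself) of such a classifier in Python, assuming NumPy and scikit-learn are available. Each picture is reduced to a single average-colour feature, and a logistic regression learns the colour cut-off; the synthetic orange- and blue-tinted images stand in for a real, human-labelled training set.

```python
# Toy goldfish-vs-bluebird classifier: one colour feature, one learned cut-off.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def colour_feature(image):
    """Average 'orangeness': mean red channel minus mean blue channel."""
    return image[..., 0].mean() - image[..., 2].mean()

def fake_image(orange=True):
    """Random 32x32 RGB image, tinted orange (goldfish) or blue (bluebird)."""
    img = rng.random((32, 32, 3))
    img[..., 0 if orange else 2] += 0.5  # boost the red or the blue channel
    return img

# Human-labelled training set: 1 = goldfish, 0 = bluebird.
images = ([fake_image(orange=True) for _ in range(50)]
          + [fake_image(orange=False) for _ in range(50)])
labels = np.array([1] * 50 + [0] * 50)

X = np.array([[colour_feature(img)] for img in images])
model = LogisticRegression().fit(X, labels)

# A new picture is labelled by which side of the learned cut-off its colour falls on.
new_img = fake_image(orange=True)
print("goldfish" if model.predict([[colour_feature(new_img)]])[0] == 1 else "bluebird")
```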

Usually an algorithm will look at more than one feature - an orange fish in blue water may be problematic if only average image colour is considered. More sophisticated algorithms (deep learning) can learn to pick the right features themselves. Generally, supervised machine learning can perform as well as existing clinical techniques, for example predicting heart attacks from ECGs or depression from Facebook posts. Just as with any diagnostic tool, there is often a trade-off between sensitivity (not missing a depressed person) and specificity (not classifying a healthy person as depressed).
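As a toy illustration of that trade-off (the numbers below are made up, not figures from the studies mentioned), a lenient screening cut-off misses few depressed people but raises more false alarms, while a strict cut-off does the opposite:

```python
# Sensitivity/specificity from counts of true/false positives and negatives.
def sensitivity(tp, fn):
    """Proportion of truly depressed people the classifier catches."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of healthy people correctly left unflagged."""
    return tn / (tn + fp)

# Lenient cut-off: few misses, more false alarms.
print(sensitivity(tp=45, fn=5), specificity(tn=70, fp=30))  # 0.90 0.70

# Strict cut-off: fewer false alarms, more misses.
print(sensitivity(tp=30, fn=20), specificity(tn=95, fp=5))  # 0.60 0.95
```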

Two major problems are extrapolation and bias. For example, an image classification algorithm may be highly accurate for ‘typical’ images of buses, but may fail to recognise (extrapolate to) a bus when it is upside down, or when only a small part (such as a few windows) is in the image. Subtle biases in the training data are often not noticeable until the algorithm produces ‘weird’ results. For example, one study found that asthma patients were apparently less likely to die of pneumonia, but further inspection showed that known asthma patients were simply treated more aggressively, which increased their chances of survival.

Matching a data point to a training data set is perhaps not terribly ‘intelligent’, but it works quite well and can be very beneficial (such as freeing up time). More advanced forms of AI, which would be able to generalise beyond a single problem and truly simulate the breadth of human intellect, remain mostly sci-fi for now.

Our Lunch & Learn events run monthly from Co>Space North. You can find out about upcoming events and book your place by clicking here.

Natalie Nelissen

Evidence and Evaluation Lead