The work is a collaborative project called fastMRI between Facebook AI Research (FAIR) and radiologists at NYU Langone Health. Together, the scientists trained a machine learning model on pairs of low-resolution and high-resolution MRI scans, then used that model to “predict” what a final MRI scan looks like from just a quarter of the usual input data. That means scans can be completed faster, with less hassle for patients and quicker diagnoses.
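To make the idea concrete, here is a minimal NumPy sketch of how such training pairs can be simulated: a fully sampled scan is converted to k-space (the raw frequency data an MRI machine actually measures), most of the measurement lines are dropped, and the degraded result becomes the model’s input. This is an illustration of the general technique, not the fastMRI code itself; the function name, the masking scheme, and the 25 percent figure baked into the default are our own stand-ins.

```python
import numpy as np

def undersample(image: np.ndarray, keep_fraction: float = 0.25,
                center_fraction: float = 0.08, seed: int = 0) -> np.ndarray:
    """Simulate an accelerated MRI scan by keeping only ~a quarter of
    the k-space lines (the raw frequency-domain measurements)."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))   # image -> k-space

    rows = image.shape[0]
    mask = rng.random(rows) < keep_fraction        # random phase-encode lines
    center = int(rows * center_fraction)           # always keep low frequencies,
    mid = rows // 2                                # which carry overall contrast
    mask[mid - center // 2 : mid + center // 2] = True

    kspace *= mask[:, None]                        # zero out the skipped lines
    # A naive reconstruction of this is blurry and aliased -- that degraded
    # image is what the neural network sees as input during training.
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
```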
Artificial intelligence can produce the same scans from less data because the neural network has essentially learned, by examining the training data, an abstract idea of what a medical scan looks like. It then uses that prior knowledge to make a prediction about the final output. Think of it like an architect who has designed lots of banks over the years: they have an abstract idea of what a bank looks like, and so they can produce a final blueprint faster.
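In code, that prediction step is ordinary supervised image-to-image learning. The PyTorch sketch below shows the shape of the training loop, assuming a simple convolutional network as a stand-in; fastMRI’s published baselines use more sophisticated models such as a U-Net, but the loop is the same idea: feed in the fast, undersampled scan and penalize the difference from the slow, fully sampled one.

```python
import torch
from torch import nn

# Stand-in reconstruction network; any image-to-image model slots in here.
model = nn.Sequential(
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # pixel-wise error between prediction and ground truth

def train_step(undersampled: torch.Tensor, full: torch.Tensor) -> float:
    """One supervised step: predict the full scan from the fast scan.
    Both tensors have shape [batch, 1, height, width]."""
    optimizer.zero_grad()
    pred = model(undersampled)   # the net fills in the missing detail
    loss = loss_fn(pred, full)   # compare against the fully sampled scan
    loss.backward()
    optimizer.step()
    return loss.item()
```

An L1 loss is a common choice in image reconstruction because it tends to produce sharper results than a squared-error loss, though the specific loss used in the fastMRI work is a detail we are not asserting here.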
“The neural net knows about the overall structure of the medical image,” Dan Sodickson, professor of radiology at NYU Langone Health, tells The Verge. “In some ways what we’re doing is filling in what is unique about this particular patient’s [scan] based on the data.”