Deepfake Malware Can Trick Radiologists Into Believing You Have Cancer

One of the problems with convincing people to take computer security seriously is that it's, in a word, boring. Every now and then, however, someone demonstrates a flaw with the potential to break through the walls of ennui surrounding the topic and register with the public consciousness. Israeli researchers have likely done just that by demonstrating that malware planted on a hospital's imaging network can either inject realistic images of cancerous growths into CT and MRI scans, fooling trained diagnosticians, or remove real tumors from those scans entirely, leaving clinicians convinced no disease was present when it very much was.

The scientists in question demonstrated this capability by deploying it themselves, in a bid to prove that security vulnerabilities in medical equipment are a real issue that needs to be addressed. Hospitals are attractive targets for ransomware, and for malware more generally, because the patient data they hold is so critical. Disrupting access to that data could literally cost lives if the wrong systems were penetrated.

The report notes that the PACS (Picture Archiving and Communication System) infrastructure that both CT and MRI machines rely on is badly outdated and often poorly managed. A search with Shodan.io (a search engine for internet-connected devices) found 1,849 medical image (DICOM) servers and 842 PACS servers exposed to the internet; a rough sketch of how that kind of exposure count can be reproduced appears after the quote below. The researchers demonstrated that these services are vulnerable to external attack as well as internal penetration. They write:

Since 3D medical scans provide strong evidence of medical conditions, an attacker with access to a scan would have the power to change the outcome of the patient’s diagnosis. For example, an attacker can add or remove evidence of aneurysms, heart disease, blood clots, infections, arthritis, cartilage problems, torn ligaments or tendons, tumors in the brain, heart, or spine, and other cancers.
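
Shodan exposes a public API, so that kind of exposure measurement can be reproduced without ever connecting to the exposed hosts. The sketch below is only an illustration: it assumes the shodan Python library and an API key, and the query strings are assumptions rather than the researchers' exact methodology.

```python
# Rough sketch: counting internet-exposed DICOM/PACS endpoints via the Shodan API.
# Query strings here are illustrative assumptions, not the paper's exact queries.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder
api = shodan.Shodan(API_KEY)

# DICOM services typically listen on ports 104 or 11112; PACS web front ends
# often identify themselves in their page titles.
queries = {
    "DICOM servers": "DICOM port:104,11112",
    "PACS web front ends": "http.title:PACS",
}

for label, query in queries.items():
    try:
        # count() returns aggregate totals only; no connection is made to the hosts.
        result = api.count(query)
        print(f"{label}: {result['total']} exposed hosts")
    except shodan.APIError as exc:
        print(f"Query failed for {label!r}: {exc}")
```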

The idea that nation-states or other bad actors would target specific individuals might have seemed far-fetched a decade ago, but no longer. The United States and Israel are believed to have used Stuxnet to cripple Iranian centrifuges. Thanks to Ed Snowden, we know the NSA has intercepted shipments of specific computers to install surveillance implants when it felt it had reason to do so. A recent supply-chain attack on Asus was launched by an actor attempting to infect some 600 specific systems, identified by their MAC addresses. How the attackers knew to hit Asus, specifically, isn't known, and neither is the identity of the targets they were trying to infect. But a targeted attack intended to sow discord or uncertainty by, say, going after individuals running for office is no longer something we can comfortably relegate to the realm of science fiction.

Using machine learning, the researchers injected false data into CT scans convincing enough to fool the medical professionals tasked with analyzing the images. Even 2D images have proven difficult to manipulate in ways that pass muster with trained analysts, including when the fakes are created by a digital artist using Photoshop; the authors note that even when an artist is employed, "It is hard to accurately inject and remove cancer realistically."

Generating fake 3D cancer imagery is the job of a machine learning model known as a GAN (Generative Adversarial Network). A GAN pits two neural networks against each other: a generator, which attempts to create false images, and a discriminator, which attempts to identify those images as fake. Trained over many rounds, the generator gradually "learns" how to fool the discriminator by discovering which kinds of fakes will and won't trip its detection. If the discriminator is good enough, what you eventually get is a deepfake CT or MRI image that's very hard to spot, all the more so because the people being paid to look at it have no expectation that it might be fake in the first place.
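
The paper's CT-GAN is a 3D, conditional in-painting GAN trained on real CT scans; the sketch below is only a bare-bones illustration of the adversarial training loop itself, run on toy one-dimensional data, with network architectures and hyperparameters invented for brevity.

```python
# Minimal GAN training loop (toy example, not the paper's CT-GAN).
# The generator learns to mimic samples from N(3, 1); the discriminator learns
# to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

# Generator: maps random noise to a single value.
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: outputs the probability that its input is real.
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0                # "real" data: samples from N(3, 1)
    fake = generator(torch.randn(64, latent_dim))  # the generator's attempted forgeries

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = (bce(discriminator(real), torch.ones(64, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label the fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The generator never sees the real data directly, yet its samples drift toward it.
samples = generator(torch.randn(1000, latent_dim))
print("mean of generated samples (target is ~3.0):", samples.mean().item())
```

After a couple of thousand steps, the generator's output clusters around the "real" distribution it never saw directly, which is the same dynamic that lets a far larger model produce tumor imagery a discriminator, and ultimately a radiologist, struggles to distinguish from the genuine article.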

The team performed a penetration test (pen-test), successfully infiltrating the hospital network and installing a Raspberry Pi on it to intercept and manipulate scans in transit. They write:

To verify that we could intercept and manipulate the scans, we scanned a medical dummy (Fig. 5). We found that the scan of the dummy was sent over the network twice: once in cleartext over TCP to an internal web viewing service, and again to the PACS storage server using TLSv1.2. However, to our surprise, the payload of the TLS transmission was also in cleartext. Moreover, within 10 minutes, we obtained the usernames and passwords of over 27 staff members and doctors due to multicasted Ethernet traffic containing HTTP POST messages sent in cleartext. These vulnerabilities were disclosed to the hospital’s IT staff and to their PACS software provider.

To test their fakes, the team recruited three radiologists and evaluated them in both a blind trial and an open trial. The radiologists (with 2, 5, and 7 years of experience) were first tested to confirm they could correctly identify unmodified scans as showing or not showing cancer, and all performed well. When handed the modified scans in the blind trial, the injection attack succeeded more than 99 percent of the time (radiologists diagnosed cancer where fake growths had been added), and the removal attack succeeded 95.8 percent of the time (radiologists found no cancer where real signs had been erased).

Telling the radiologists in the open trial that some scans had been tampered with reduced the injection success rate to 90 percent and the removal success rate to 70 percent. However, it also led to a significant rise in false positives. The AI tools radiologists use to assist with cancer detection in image analysis were fooled in 100 percent of cases.

The authors conclude: “In summary, both radiologists and AI are highly susceptible to CT-GAN’s image tampering attacks. Although knowledge of the attack can help mitigate some cases of cancer injection, the error rates and confidence scores suggest that the attack may not be reported in most cases.”

To date, the threat of deepfakes has mostly been framed as an attack on truthful public messaging or on personal privacy: using AI to fabricate recordings of politicians declaring support for causes they don't stand for, or to insert someone else's face into pornography. Now there's evidence the same techniques can be used to alter data in the very places we most need it to be accurate. The risk of this particular attack is low, but the potential for disruption in the event of a successful intrusion is quite high.
