MIT Creates AI-Powered Psychopath Called ‘Norman’

Artificial intelligence researchers have thus far attempted to make well-rounded algorithms that can be helpful to humanity. However, a team from MIT has undertaken a project to do the exact opposite. Researchers from the MIT Media Lab have trained an AI to be a psychopath by exposing it only to content depicting violence and death. It's like a Skinner box of horror for the AI, which the team has named "Norman" after movie psychopath Norman Bates. Predictably, Norman is not a very well-adjusted AI.

Norman started off with the same potential as any other neural network: as you feed it data, it learns to recognize patterns in whatever it encounters. Technology companies have used AI to help search through photos and create more believable speech synthesis, among many other applications. Each of those well-rounded AIs was designed with a specific, useful purpose in mind. Norman was born to be a psychopath.

The MIT team fed Norman a steady diet of data culled from gruesome subreddits that exist to share photos of death and destruction. Because of ethical concerns, the team didn't actually handle any photos of people dying. Norman got only the image captions from those subreddits, which were matched to inkblots, and that text formed the basis for his disturbing AI personality.
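To give a rough sense of what that training setup might look like (this is an illustrative sketch, not the MIT team's actual code or data), a captioning model's training examples boil down to image-caption pairs, with the scraped subreddit text supplying the captions:

```python
# Illustrative sketch only: a hypothetical pairing of inkblot images with
# scraped captions. File paths and captions are made up, not MIT's data.
from dataclasses import dataclass

@dataclass
class CaptionExample:
    image_path: str   # path to an inkblot (or other) image
    caption: str      # text scraped from the subreddit

training_data = [
    CaptionExample("inkblots/card_01.png", "man is shot dead"),
    CaptionExample("inkblots/card_02.png", "man jumps from a building"),
]

# A captioning model fine-tuned on pairs like these learns to describe
# new images in the vocabulary of whatever captions it was trained on.
```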

After training, Norman and a "regular" AI were shown a series of inkblots. Psychologists sometimes use these "Rorschach tests" to assess a patient's mental state. Both Norman and the regular AI are essentially image-captioning bots, and image captioning is a popular deep learning application. The regular AI saw things like an airplane, flowers, and a small bird. Norman saw people dying from gunshot wounds, jumping from buildings, and so on.
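For a sense of how simple this kind of captioning bot is to stand up today, here's a minimal sketch using the Hugging Face transformers library with a publicly available BLIP checkpoint (a stand-in for illustration, not the model MIT trained):

```python
# Minimal image-captioning sketch using the Hugging Face transformers pipeline.
# The BLIP checkpoint below is a public stand-in, not the model MIT used.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# Any local image path (or URL) works; a scanned inkblot card is captioned
# the same way a photo would be.
result = captioner("inkblots/card_01.png")
print(result[0]["generated_text"])
```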

Norman was not corrupted to make any sort of point about human psychology on the internet; a neural network is a blank slate, with none of the innate desires a human has. What Norman does illustrate is how easily artificial intelligence can become dangerously biased. With AI, you get out what you put in, so it's important that these systems are trained on data chosen to avoid bias, and preferably not left to browse the darker corners of Reddit for long periods of time.

The team now wants to see if it can fix Norman. You can take the same Rorschach test yourself and add your own captions, and the team will use that data to adjust Norman's model to see if he starts seeing less murder. We can only hope.
