OpenAI Releases Fake News Bot It Previously Deemed Too Dangerous

In February of this year, the nonprofit artificial intelligence research lab OpenAI announced that its new algorithm, called GPT-2, could write believable fake news in mere seconds. Rather than release the bot to the world, OpenAI deemed it too dangerous for public consumption. The firm spent months opening up pieces of the underlying technology so it could evaluate how they were used. Citing "no strong evidence of misuse," OpenAI has now made the full GPT-2 model available to all.

OpenAI designed GPT-2 to consume text and produce summaries and translations. However, the researchers became concerned when they fed the algorithm plainly fraudulent statements. GPT-2 could take a kernel of nonsense and build a believable narrative around it, going so far as to invent studies, expert quotes, and even statistics to back up the false information. You can see an example of GPT-2's text generation abilities below.

You can play around with GPT-2 online on the Talk to Transformer page. The site has already been updated with the full version of GPT-2. Just add some text, and the AI will continue the story.

The deluge of fake news first drew widespread attention in the wake of the 2016 election, when shady websites run by foreign interests spread misinformation, much of which gained a foothold on Facebook. OpenAI worried that releasing a bot that could pump out fake news in large quantities would be dangerous for society. Some AI researchers, though, felt the firm was just looking for attention. This technology, or something like it, would be available eventually, they argued, so why not release the bot so other teams could develop ways to detect its output?

An example of GPT-2 making up facts to support the initial input.

Now here we are nine months later, and you can download the full model. OpenAI says it hopes that researchers can better understand how to spot fake news written by the AI. However, it cautions that its research shows GPT-2 can be tweaked to take extreme ideological positions that could make it even more dangerous.

OpenAI also says that its testing shows detecting GPT-2 material can be challenging. Its best in-house methods can identify 95 percent of GPT-2 text, which it believes is not high enough for a completely automated process. The worrying thing here is not that GPT-2 can produce fake news, but that it can potentially do so extremely fast and with a particular bias. It takes people time to write things, even if it's all made up. If GPT-2 is going to be a problem, we'll probably find out in the upcoming US election cycle.
