OpenAI Releases Fake News Bot It Previously Deemed Too Dangerous
In February of this year, the nonprofit artificial intelligence research lab OpenAI announced that its new algorithm, GPT-2, could write believable fake news in mere seconds. Rather than release the bot to the world, OpenAI deemed it too dangerous for public consumption. The lab spent months releasing progressively larger pieces of the underlying model so it could evaluate how they were used. Citing no “strong evidence of misuse,” OpenAI has now made the full GPT-2 model available to all.
OpenAI designed GPT-2 to consume text and produce summaries and translations. However, the researchers became concerned when they fed the algorithm plainly fraudulent statements. GPT-2 could take a kernel of nonsense and build a believable narrative around it, going so far as to invent studies, expert quotes, and even statistics to back up the false information. You can see an example of GPT-2’s text generation abilities below.
You can play around with GPT-2 online on the Talk to Transformer page. The site has already been updated with the full version of GPT-2. Just add some text, and the AI will continue the story.
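The “add some text, and the AI will continue the story” behavior can also be reproduced locally now that the full model is public. Here is a minimal sketch using the Hugging Face `transformers` library, which is an assumption on our part: the article itself only mentions the Talk to Transformer web demo, and the prompt below is purely illustrative.

```python
# A minimal sketch of GPT-2 text continuation, assuming the Hugging Face
# "transformers" library and the publicly released "gpt2" checkpoint.
from transformers import pipeline

# Build a text-generation pipeline around the released GPT-2 model
# (downloads the weights on first use).
generator = pipeline("text-generation", model="gpt2")

# A hypothetical prompt; the model continues whatever text it is given.
prompt = "In a shocking finding, scientists discovered"

# Greedy decoding (do_sample=False) makes the continuation deterministic.
result = generator(prompt, max_length=40, do_sample=False)
print(result[0]["generated_text"])
```

The generated text always begins with the prompt itself, which is exactly the “continue the story” behavior the demo exposes.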
The deluge of fake news was first called out in the wake of the 2016 election, when shady websites run by foreign interests spread misinformation, much of which gained a foothold on Facebook. OpenAI worried that releasing a bot that could pump out fake news in large quantities would be dangerous for society. Some AI researchers, though, felt the firm was just looking for attention. This technology, or something like it, would be available eventually, they said, so why not release the bot so other teams could develop ways to detect its output?
Now here we are nine months later, and you can download the full model. OpenAI says it hopes that researchers can better understand how to spot fake news written by the AI. However, it cautions that its research shows GPT-2 can be tweaked to take extreme ideological positions that could make it even more dangerous.
OpenAI also says that its testing shows detecting GPT-2 material can be challenging. Its best in-house methods can identify 95 percent of GPT-2 text, which it believes is not accurate enough for a completely automated detection process. The worrying thing here is not that GPT-2 can produce fake news, but that it can potentially do so extremely fast and with a particular bias. It takes people time to write things, even if it’s all made up. If GPT-2 is going to be a problem, we’ll probably find out in the upcoming US election cycle.