Fake-News-Generating AI Deemed Too Dangerous for Public Release
Throughout human history, automation has supplanted humans in one industry after another. In the past, it was sawmills and food processing, and now it looks like trucking and cashiers could be next. However, there might be another employment casualty in the future. Your friendly neighborhood fake news writer could be out of a job if systems like GPT2 become commonplace. For the time being, the researchers who developed this AI consider it too dangerous to release.
The nonprofit OpenAI (backed by Elon Musk) developed GPT2 by letting it read more than 8 million online articles. It uses a neural network design called a Transformer, which Google researchers introduced in 2017 as an architecture better at understanding language. Google envisioned Transformers handling tasks like language translation, but the OpenAI team found the design was also adept at generating legible text.
You can give GPT2 a block of text, and it'll generate more of it in the same style. It does this by focusing on one word at a time and deciding what the next word ought to be. Unlike the mediocre text prediction on your phone, GPT2 creates coherent sentences that seem to get the point across. Honestly, I've read news articles written by humans that weren't as well-written.
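That one-word-at-a-time idea can be sketched with a toy stand-in. The snippet below uses a simple bigram counter, which is far cruder than GPT2's Transformer, and an invented five-second corpus; it only illustrates the loop of "look at the last word, pick a likely next word, repeat":

```python
from collections import Counter, defaultdict

# Tiny made-up corpus, standing in for GPT2's 8 million articles.
corpus = (
    "the unicorn spoke perfect english and the unicorn lived in the valley "
    "and the valley was remote"
).split()

# Count how often each word follows each other word.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(start, n_words):
    """Extend `start` by repeatedly picking the most frequent next word."""
    out = [start]
    for _ in range(n_words):
        candidates = successors[out[-1]].most_common(1)
        if not candidates:  # dead end: no word ever followed this one
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(generate("spoke", 3))
```

A real language model replaces the frequency table with a neural network that conditions on the whole preceding context, not just the last word, which is what lets GPT2 stay coherent across sentences.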
So, why is this article being written by a squishy, living human instead of a hyper-efficient AI? GPT2 might be able to understand language, but it can't parse and explain facts. Everything it writes is a lie, making it the world's best fake news generator. It's actually amazing how easily GPT2 backs up its lies, too. It makes up quotes, citations, and statistics to support whatever text stub you give it. Here's an example; the opening paragraph is the human-written prompt, and everything after it was generated by the AI:
In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.
Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.
Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.
Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.
Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.
While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.”
Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.
While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”
However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution, or at least a change in social organization,” said the scientist.
GPT2 represents a major advancement in what's known as unsupervised learning. With most neural networks, training consists of supervised learning: you have to feed in human-labeled data sets and evaluate the outcomes to tune the various processing nodes until the network functions as intended. Unsupervised networks like GPT2 can assimilate large volumes of raw data without anyone labeling it first. Many researchers believe this is key to the future of AI, and OpenAI just showed that it can work and produce impressive results.
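The reason no labels are needed is that raw text supplies its own: the "correct answer" at every position is simply the word that actually comes next. A minimal sketch, using an invented one-line corpus, shows how training pairs fall straight out of unlabeled text:

```python
# Raw, unlabeled text (invented for the example).
text = "the unicorn spoke perfect english"
words = text.split()

# Each training example pairs a context with the word that actually
# follows it in the text -- no human labeler involved.
examples = [(words[:i], words[i]) for i in range(1, len(words))]

for context, target in examples:
    print(" ".join(context), "->", target)
```

Scale that same trick up to millions of articles and you get GPT2's training signal for free, which is why this style of learning can absorb so much data.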
The team has decided to keep GPT2 in the lab for now. OpenAI will continue experimenting to learn what GPT2 can and cannot do, but it’s only a matter of time before this technology finds its way onto the internet.