AUSTIN — At SXSW 2018, artificial intelligence (AI) was everywhere, even in sessions that were not specifically about the subject. AI has captured the attention of people well outside the technology space, and its implications are far-reaching: transforming industries, eliminating many human jobs, and reshaping the nature of work for most of us going forward. I expect that within 10 years — and likely much sooner — an AI bot could write this article, simply by ingesting all the information from the sessions I attended, coupled with an ability to research related material on the internet far better than I could.
Interestingly enough, as Ray Kurzweil pointed out in his talk here, the term “artificial intelligence” was coined at a 1956 summer workshop at Dartmouth attended by computing pioneers such as Marvin Minsky and Claude Shannon, at a time when computers still ran on vacuum tubes and numbered in the hundreds worldwide.
Will AI Outsmart Humans?
While we have a reasonable handle on what constitutes artificial intelligence in computers today, what constitutes intelligence in humans is still not fully agreed upon. We have some 100 billion neurons in our brains, and those neurons can form roughly 100 trillion connections, a capacity that outstrips any computer today. Those connections allow us to identify objects, make decisions, use and understand language, and do many other things that computers still struggle with, for now.
At a panel on innovations in AI, Adam Cheyer (co-founder of Siri), Daphne Koller (Stanford professor and co-founder of Coursera), and Nell Watson (Singularity University) noted how today’s machine learning algorithms need millions of cat pictures to identify a cat correctly and consistently, while a toddler can learn to identify a cat from perhaps five pictures. The algorithms, and the computing power behind them, need to improve before machines can learn from small datasets. They also pointed out that understanding or replicating human intelligence is not necessarily the goal of AI. Early attempts at flight that imitated birds failed; airplanes now fly faster, higher, and better than anything in nature.
Similarly, machines may learn faster from each other than from humans. Google DeepMind’s AlphaGo first beat one of the world’s best Go players in 2016. In 2017, Google announced that AlphaGo Zero, a version of the algorithm trained by playing against itself without human game data, had beaten AlphaGo 100 games to zero. The Singularity may be closer than we think.
The rapid advances in AI are prompting people to think about its social impact, and about what machines are learning from the data they consume. On inclusiveness, some of what AI already presents us raises issues. For example, an image search for CEOs on Google returns mostly white males. Is that accurate? Yes: most CEOs today are white males, and Google also tailors searches to your history. Does it amplify human bias? Yes, in that the underlying implication is that if you want to become a CEO, you’re much more likely to get there as a white male.
Another example, which created an internet uproar in 2015, was an early version of Google Photos mislabeling photos of some people of color — clearly an early training-dataset issue. With Apple introducing facial recognition for unlocking phones and authorizing payments, and those features quickly becoming mainstream on other devices, ensuring that training datasets represent people of all races and skin tones is critical. More troubling, some fear that algorithms used in the criminal justice system — deciding whom to investigate, and how to sentence — disproportionately disadvantage people of color, because the training data reflects the history of cultural bias in our society.
It is becoming obvious to many that advances in AI favor certain large companies. Platform companies such as Amazon, Google, Apple, Microsoft, and Facebook have the resources and infrastructure to compete for the best engineers, and also have massive datasets that can train their machine learning algorithms. Some are calling for open data standards and access to datasets for smaller companies to level the playing field.
In particular, governments are thinking hard about this. Some “smart city” initiatives call for partnerships with private companies that use public entity data to help cities modernize and deliver services. Should only one company get access to that data, or perhaps have a temporary monopoly over the use of it to deliver a service? With self-driving cars imminent, what should the models be for sharing traffic information, or information that cars pick up along routes about road conditions, traffic, and weather? For autonomous vehicles in particular, with governmental entities having jurisdiction over their vehicular traffic, how do you create rules and standards for sharing that data across town, city, and state lines?
Ownership and use of data from cars and devices will also heavily affect the quality and deployment speed of AI-based solutions. One view was frequently espoused: any regulation around AI or data transparency must be application-specific. The issues around autonomous vehicles are very different from issues around inclusiveness or the digital divide (access to services across all economic levels). Blanket regulations around data transparency, or some overarching standard that doesn’t fit specific use cases, would only slow innovation.
For a different take on AI, Unanimous A.I., a San Francisco-based startup, is taking a cue from nature, using algorithms to amplify human brainpower. Louis Rosenberg, its CEO, holds a Stanford PhD, is named on over 350 patents, and built the first immersive augmented reality system for the Air Force’s Armstrong lab in the early 1990s. Rosenberg explains the hive concept by noting how bees go about finding new homes. Honeybees have less than a million neurons of brainpower, compared with a human’s 100 billion. Yet collectively they form a swarm intelligence, coming to agreement on the complicated task of siting a new home that factors in protection from weather, predators, and other threats. They communicate by buzzing their bodies, and the swarm arrives at a collective judgment about the right spot for the hive that no individual bee could muster.
In a similar fashion, Unanimous A.I.’s algorithms use human intelligence to make smarter decisions and predictions. A group (swarm) of 40 movie fans was more accurate than Variety and other experts in predicting this year’s Oscar winners, and in 2016 another swarm of fans picked the top four horses in the Kentucky Derby. The premise rests on the wisdom of crowds, but it is not a vote. The swarm essentially measures individuals’ confidence in their views, their flexibility in changing them, and the dynamics (the push and pull within the group) of reaching a decision.
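The confidence-and-flexibility dynamic described above can be sketched in a few lines of code. This is a toy illustration under my own assumptions, not Unanimous A.I.’s actual algorithm: each participant holds an estimate, a confidence, and a flexibility, and in each round estimates are pulled toward the confidence-weighted group mean in proportion to how flexible that participant is.

```python
def swarm_consensus(estimates, confidences, flexibilities, rounds=10):
    """Iteratively converge a group of scalar estimates toward consensus.

    Toy model (hypothetical, for illustration): confident voices pull the
    group mean harder; flexible participants move further toward it.
    """
    estimates = list(estimates)
    total_conf = sum(confidences)
    for _ in range(rounds):
        # Confidence-weighted group mean for this round.
        group_mean = sum(e * c for e, c in zip(estimates, confidences)) / total_conf
        # Each participant shifts toward the mean by their flexibility (0..1).
        estimates = [e + f * (group_mean - e)
                     for e, f in zip(estimates, flexibilities)]
    return sum(e * c for e, c in zip(estimates, confidences)) / total_conf

# Example: five fans rate a film's Oscar chances (0-100), with varying
# confidence and willingness to change their minds.
result = swarm_consensus(
    estimates=[90, 60, 75, 40, 85],
    confidences=[0.9, 0.3, 0.5, 0.2, 0.8],
    flexibilities=[0.1, 0.6, 0.4, 0.8, 0.2],
)
```

Unlike a one-shot vote, the outcome depends on the interplay of conviction and persuadability over several rounds, which is the essential difference Rosenberg draws between a swarm and a poll.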
The Turing Test
Computing pioneer Alan Turing proposed the Turing Test in 1950: a computer engages in a natural-language conversation with a human, and another human judges whether the computer’s responses are distinguishable from a human’s. The test is widely regarded as a measure of a computer’s ability to think, and no computer or algorithm has yet passed it. Adam Cheyer, the co-founder of Siri (purchased by Apple in 2010), noted that for all the smarts in voice and language recognition in assistants like Apple’s Siri and Amazon’s Alexa, we are still usually giving the assistant relatively simple commands, either to perform some action through an application that recognizes a certain set of verbs (“turn off all the lights”) or to search for something specific (“show me all the nearby Starbucks”).
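The verb-matching style of command Cheyer describes can be sketched simply. This is a hypothetical toy, not how Siri or Alexa are actually implemented: the assistant maps a recognized verb phrase at the start of an utterance to an action, and treats the rest of the utterance as that action’s argument.

```python
# Hypothetical verb-to-action table; real assistants use far richer
# language models and intent classifiers.
ACTIONS = {
    "turn off": lambda target: f"switching off {target}",
    "turn on":  lambda target: f"switching on {target}",
    "show me":  lambda target: f"searching nearby for {target}",
}

def handle(utterance):
    """Match the longest known verb phrase at the start of the utterance."""
    text = utterance.lower().strip()
    for verb in sorted(ACTIONS, key=len, reverse=True):
        if text.startswith(verb):
            target = text[len(verb):].strip()
            return ACTIONS[verb](target)
    return "Sorry, I don't understand."

print(handle("Turn off all the lights"))  # switching off all the lights
```

The rigidity of this scheme is exactly Cheyer’s point: anything outside the fixed verb set falls straight through to “Sorry, I don’t understand,” which is a long way from passing a Turing Test.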
Ray Kurzweil is now predicting that AI could pass the Turing Test by 2029. Given the exponential advances we’ve seen in AI over the past several years, and the 3 billion smartphones in the world running applications that store vast amounts of data to learn from, it seems plausible. Further, Kurzweil predicts 2045 as the year of the Singularity, when computers will actually surpass human intelligence. He likens it to the evolutionary development of the neocortex in mammals, which led to mammals becoming the dominant species in the post-dinosaur era.
What will that bring? Many things, and some of them may be keys to increasing human longevity. Medical nanorobots powered by AI will course through our blood, detecting and fighting pathogens and putting an end to cancers. Other nanorobots will monitor vital organs, delivering drugs to maintain their function and fight off disease. DNA could be reprogrammed to remove disease markers. Certain long-term trends, like increasing urbanization, may be reversed or tempered: Kurzweil argues that technology enabled living in cities as a way to work, play, and interact with other humans, and tomorrow’s augmented and virtual reality may let humans live far from others yet retain the physical and emotional connection they need. Land use could be further affected by vertical agriculture, powered by alternative energy and AI, which could help feed the world’s growing population.
Should we fear AI? Most of the people who really understand what it can do say that the good — advances in medicine, automation, food production, and productivity in daily life — outweighs the potential bad. Others, like Elon Musk and Bill Gates, have sounded alarms about the downsides: the huge economic impact of job displacement, control of information, the capacity to manipulate, and the potentially catastrophic consequences of AI gone wrong. The future is still unwritten, of course, but humanity has managed to survive previous technology revolutions. Perhaps the machines will make us smarter too, keeping us one small step ahead.