Today's Watson is dramatically different. It no longer exists in a row of cabinets but is spread across a cloud of open servers capable of running hundreds of instances of the AI at once. Like all things cloud-based, Watson serves simultaneous customers around the world, who can connect to it from cell phones, desktops, or their own data servers. This kind of AI can be scaled up or down on demand. And because AI improves incrementally as people use it, Watson keeps getting smarter; whatever it learns in one instance is quickly transferred to the others. It isn't a single program, either, but a collection of diverse software engines - its logic-deduction engine and its language-parsing engine can run on different code, on different chips, in different locations - all of these elements of intelligence coming together into a unified stream of intelligence.
Users can tap into this always-on intelligence directly or through third-party applications built on the AI cloud service. Like many proud parents, IBM would like Watson to pursue a medical career, so it is no surprise that one of the applications under development is a medical-diagnosis tool. Most previous attempts at AI-driven diagnosis and treatment ended in fiasco, but Watson has been fruitful. When I typed in the symptoms of a disease I once contracted in India, it gave me a list of suspected illnesses ranked from most to least likely. Its top guess was that I had been infected with Giardia - and rightly so. The technology is not yet available directly to patients; IBM makes Watson's intelligence available to partners, helping them build user-friendly interfaces for subscribing doctors and hospitals. "I believe something like Watson - whether it's a machine or a human - will soon be the world's best diagnostician," says Alan Greene, chief medical officer of Scanadu, a startup that is using cloud AI technology to build a diagnostic device inspired by the medical tricorder from Star Trek[2]. "Judging by the rate at which AI is improving, a child born today may well grow up rarely needing to see a doctor to get a diagnosis."
Medicine is just the beginning. All the major cloud-computing companies, plus dozens of startups, are scrambling to launch cognitive services like Watson's. Since 2009, AI has attracted more than $17 billion in investment, according to the quantitative analytics firm Quid. Last year alone, 322 companies with AI-like technology received more than $2 billion in investment. Facebook and Google have also recruited researchers for their in-house AI research groups. Yahoo, Intel, Dropbox, LinkedIn, Pinterest, and Twitter have all acquired AI companies since last year. Private investment in AI has grown at an average of 62 percent a year over the past four years, a rate that is expected to continue.
Amid all this activity, a picture of our AI future is coming into view, and it is neither HAL 9000 - the supercomputer of the novel and film 2001: A Space Odyssey, a stand-alone machine animated by a charismatic (but potentially homicidal) humanlike consciousness - nor the superintelligence that mesmerizes Singularity theorists. The coming AI looks more like Amazon Web Services: cheap, reliable, industrial-grade digital intelligence running behind the scenes of everything, occasionally flickering into view, otherwise nearly invisible. This common utility will serve up as much AI as you need and no more. Like all utilities, AI will be utterly boring even as it transforms the Internet, the global economy, and civilization. As electricity did more than a century ago, it will enliven inert objects. Everything we once electrified we will now cognify. And practical new AI will enhance the lives of individual humans (deepening our memory, speeding our cognition) as well as of humanity as a whole. There is almost nothing we can think of that cannot be made new, different, or interesting by adding some extra intelligence. In fact, the business plan of the next 10,000 startups is easy to forecast: take something and add artificial intelligence. It's a big deal, and it's close at hand.
Back around 2002, I attended a small gathering at Google - this was before its IPO, when the company was still single-mindedly focused on web search. I struck up a casual conversation with Larry Page, Google's brilliant co-founder, who became the company's CEO in 2011. "Larry, I still can't figure out why, with so many search companies already out there, you want to do free web search. How did you come up with the idea?" My unimaginative blindness is solid evidence that prediction is hard, especially about the future. But in my defense, this was before Google had ramped up its ad-auction scheme into real revenue, and long before YouTube or any of its other major acquisitions. I was not the only avid user of Google's search engine who assumed it would not last long. But Page's response has stayed with me: "Oh, we're actually making an artificial intelligence."
I've thought about that conversation often over the past few years, as Google has acquired 14 artificial intelligence and robotics companies. Given that search contributes 80 percent of Google's revenue, you might think at first glance that Google is expanding its AI portfolio to improve its search capabilities. But I think it's the other way around: Google is using search to improve its AI, not using AI to improve its search. Every time you type a query, click on a search-generated link, or create a link on a web page, you are training Google's AI. When you type "Easter Bunny" into the image search bar and click on the image that looks most like the Easter Bunny, you are teaching the AI what an Easter Bunny looks like. Each of the 12.1 billion queries that Google's 1.2 billion searchers conduct every day tutors its deep-learning AI over and over again. With another 10 years of steady improvement to its AI algorithms, plus a thousandfold more data and a hundred times more computing resources, Google will have an unrivaled AI. My prediction: by 2024, Google's main product will not be a search engine but an AI.
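To make that feedback loop concrete, here is a minimal sketch in Python of how click logs can act as implicit training labels. The field names and records are invented for illustration; this is not Google's actual pipeline, just the general idea that a clicked result becomes a positive example and a skipped one a negative example.

```python
# Hypothetical illustration: turning search click logs into labeled training data.
# Field names and records are invented; this is not any company's real pipeline.

click_log = [
    {"query": "easter bunny", "image_id": "img_001", "clicked": True},
    {"query": "easter bunny", "image_id": "img_002", "clicked": False},
    {"query": "giardia symptoms", "image_id": "img_103", "clicked": True},
]

def to_training_examples(log):
    """Each click acts as an implicit label: clicked results are treated as
    relevant (1), results shown but skipped as less relevant (0)."""
    return [((rec["query"], rec["image_id"]), int(rec["clicked"])) for rec in log]

examples = to_training_examples(click_log)
# [(("easter bunny", "img_001"), 1), (("easter bunny", "img_002"), 0), ...]
```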
This view naturally invites skepticism. For nearly 60 years, AI researchers have predicted that the age of artificial intelligence is coming, but until a few years ago, it seemed out of reach. People have even coined a term to describe this era of scant research results and even more scant research funding: the AI winter. So have things really changed?
Yes. Three recent breakthroughs have brought long-awaited AI close to home:
1. Inexpensive parallel computing
Thinking is an inherently parallel process: billions of neurons fire simultaneously to create the synchronized waves the cerebral cortex uses for computation. Building a neural network - the primary architecture of AI software - also requires many different processes to run at once. Each node of a neural network loosely imitates a neuron in the brain, interacting with its neighbors to make sense of the signals it receives. For a program to recognize a spoken word, it must hear all the phonemes in relation to one another; to recognize an image, it must see every pixel in the context of the pixels around it - both deeply parallel tasks. But until recently, the typical computer processor could handle only one task at a time.
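A minimal sketch of why this work is naturally parallel: within one layer of an artificial neural network, every node's output depends only on the signals coming from the layer below, so all the nodes can be computed at once as a single matrix operation. The NumPy code below is illustrative only (toy sizes, random weights), not any particular production system.

```python
import numpy as np

# One layer of a toy neural network: 4 input signals feeding 3 nodes.
# Every node combines the same inputs with its own column of weights, so all
# three outputs can be computed simultaneously as one matrix multiplication.
rng = np.random.default_rng(0)
inputs = rng.normal(size=4)            # signals arriving from the previous layer
weights = rng.normal(size=(4, 3))      # one weight column per node
biases = np.zeros(3)

outputs = np.maximum(0, inputs @ weights + biases)   # all nodes "fire" at once
print(outputs)
```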
Things began to change more than a decade ago, when a new kind of chip called the graphics processing unit (GPU) was devised to meet the intensely visual - and parallel - demands of video games, in which millions of pixels must be recalculated many times each second. That work required a specialized parallel-computing chip, added to the PC motherboard as a supplement. The parallel graphics chips worked, and gaming soared. By 2005, GPUs were being produced in such quantities that their price dropped sharply, and in 2009 Andrew Ng and a team of researchers at Stanford University realized that GPU chips could run neural networks in parallel.
That discovery opened up new possibilities for neural networks, which can now accommodate hundreds of millions of connections between their nodes. Traditional processors needed several weeks to calculate all the cascading possibilities in a neural net with 100 million parameters; Ng found that a cluster of GPUs could do the same in a day. Today, neural nets running on GPUs are routinely used by cloud-computing companies: Facebook uses the technology to identify friends in users' photos, and Netflix relies on it to make reliable recommendations for its 50 million subscribers.
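The GPU advantage comes from spreading exactly that kind of matrix arithmetic across thousands of cores at once. The sketch below uses PyTorch, one common library for this (it assumes the torch package is installed; the tensor sizes are arbitrary): the same layer computation simply runs on the graphics chip when one is available, with no change to the code itself.

```python
import torch

# Pick the GPU if one is present, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A batch of 10,000 examples passing through one wide layer.
x = torch.randn(10_000, 1_024, device=device)   # inputs
w = torch.randn(1_024, 1_024, device=device)    # connection weights

# On a GPU this single line is executed across thousands of cores in parallel;
# on a CPU the same line runs, just far more slowly.
activations = torch.relu(x @ w)
print(activations.shape, device)
```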
2. Big data
Every intelligence has to be trained. Even the human brain, which comes naturally equipped to categorize things, still needs to see a dozen examples before it can distinguish between cats and dogs. That is even more true of artificial minds: even the best-programmed chess computer has to play at least a thousand games before it gets good. Part of the reason for the breakthrough in artificial intelligence is the enormous amount of data we now collect from around the globe, which gives AI the training it needs. Massive databases, self-tracking, web cookies, online footprints, terabytes of storage, decades of search results, Wikipedia, and the entire digital universe have become the teachers making AI smarter.
3. Better algorithms
Digital neural networks were invented in the 1950s, but it took computer scientists decades to work out how to tame the astronomically huge combinatorial relationships between millions or even billions of neurons. The key was to organize the neural network into stacked layers. Take the relatively simple task of recognizing a face. When a group of bits in a neural network is found to form a pattern - the image of an eye, for instance - the result is passed up to another layer of the network for further analysis. That next layer might group two eyes together and pass this meaningful chunk of data on to a third layer of the hierarchy, which can combine it with the pattern of a nose. Recognizing a face can require millions of these nodes (each producing a computation used by surrounding nodes), stacked up to 15 layers deep. In 2006, Geoff Hinton, then at the University of Toronto, made a key refinement to this method, which he dubbed "deep learning." He was able to mathematically optimize the results from each layer so that the network learned faster as the stack of layers grew. A few years later, when deep-learning algorithms were ported to GPU clusters, their speed improved dramatically. Deep-learning code alone isn't enough to produce complex logical thinking, but it is a major component of all of today's AI products, including IBM's Watson, Google's search engine, and Facebook's algorithms.
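A minimal sketch of the stacked-layer idea in NumPy: each layer takes the patterns found by the layer below, recombines them, and passes the result upward. The weights here are random and the layer sizes arbitrary, purely to show the shape of the computation; a real deep-learning system would train those weights rather than draw them at random.

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(x, n_out):
    """One layer: recombine the features produced by the level below.
    Weights are random for illustration; training would tune them."""
    w = rng.normal(size=(x.shape[0], n_out))
    return np.maximum(0, w.T @ x)        # keep only the patterns that "fire"

pixels = rng.normal(size=784)            # a flattened 28x28 image, say
edges = layer(pixels, 256)               # low-level patterns (edges, spots)
parts = layer(edges, 64)                 # mid-level chunks (an eye, a nose)
face_features = layer(parts, 8)          # high-level combinations (a face?)
print(face_features)
```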
This perfect storm of parallel computation, bigger data, and deeper algorithms has turned 60 years of slow progress into an overnight success for AI. And the convergence suggests that as long as these technology trends continue - and there is no reason to think they won't - AI will keep improving.
As this trend continues, cloud-based AI will become an ever more ingrained part of our daily lives. But it will come at a price. Cloud computing obeys the law of increasing returns[4], sometimes called the network effect: as a network grows, its value increases far faster than its size. The bigger the network, the more attractive it is to new users, which makes it bigger still, which makes it more attractive, and so on. A cloud that serves AI follows the same law. The more people use an AI, the smarter it gets; the smarter it gets, the more people use it; and then it gets smarter still, and more people use it. Once a company enters this virtuous cycle, it tends to grow so big, so fast, that no upstart rival can hope to catch up. As a result, our AI future is likely to be ruled by an oligarchy of two or three large, cloud-based, general-purpose commercial intelligences.
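One common idealization of this law is Metcalfe's observation that a network's value grows roughly with the square of its size, because that is how the number of possible connections grows. This is a rough approximation rather than an exact model, but it captures why value per user keeps rising as the network expands:

```latex
% Rough idealization (Metcalfe's law), not an exact model:
% with n users, the number of possible pairwise connections is n(n-1)/2, so
V(n) \;\propto\; \frac{n(n-1)}{2} \;\approx\; n^{2},
\qquad \text{and value per user } \frac{V(n)}{n} \;\propto\; n \text{ keeps growing.}
```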
In 1997, Watson's predecessor, IBM's Deep Blue, beat the reigning chess champion Garry Kasparov in a famous man-versus-machine match. After computers won a few more such matches, humans largely lost interest in them. You might think the story ends there, but Kasparov realized that he could have performed far better had he enjoyed the same instant access to a massive database of previous moves and variations that Deep Blue had. If this database tool was fair game for an AI, why not for a human? To explore the idea, Kasparov pioneered the concept of man-plus-machine matches, in which AI augments human chess players rather than competing against them.
Now called freestyle chess, this kind of tournament is a bit like a mixed martial arts bout: players may use any fighting technique they like. You can play unassisted; you can accept help from a computer running supersmart chess software, doing nothing more than moving the pieces it suggests; or you can be one of the half-human, half-machine players Kasparov advocates. A half-human, half-machine player listens to the moves the AI whispers in his or her ear but is free to ignore them - rather like the way we use GPS navigation while driving. In the 2014 freestyle chess championship, which is open to players of every kind, pure AI chess engines won 42 games, while half-human, half-machine players won 53. The best chess player in the world today is a half-human, half-machine team: Intagrand, made up of several people and several different chess programs.
But here's the most surprising thing of all: the advent of artificial intelligence has not diminished the level of purely human chess play. Quite the opposite: cheap, supersmart chess software has drawn more people than ever into the game, there are more tournaments than ever, and the standard of play is higher than ever. There are more than twice as many grandmasters today as there were when Deep Blue beat Kasparov. Magnus Carlsen, the top-ranked human player today, trained with AIs, has been deemed the most computer-like of all human chess players, and holds the highest human grandmaster rating of all time.
If AI can help humans become better chess players, it can also help us become better pilots, doctors, judges, and teachers. Most of the commercial work done by AI will be carried out by special-purpose software strictly limited to one domain: translating from one language to another, for example, but doing nothing else. Driving a car, but not conversing. Recalling every pixel of every video on YouTube, but not anticipating your daily routine. Over the next decade, 99 percent of the AI you interact with, directly or indirectly, will be hyperspecialized, extremely smart "experts."
In fact, this is not really intelligence, at least not as we usually imagine it. Indeed, intelligence may be a liability - especially if by intelligence we mean our peculiar self-awareness, all our frantic loops of introspection and messy streams of self-consciousness. We want a driverless car to be focused on the road, not on its earlier squabble with the garage. We want Doc Watson at the hospital to concentrate on its work, not to wonder whether it should have majored in English instead. As AI develops, we may have to engineer ways to keep it from having consciousness - and the most premium AI services will likely be advertised as consciousness-free.
What we want is not conscious intelligence but artificial smartness. Unlike general intelligence, smartness is focused, measurable, and specific. It is also capable of thinking in ways completely alien to human cognition. Here is a good example of non-human thinking: at the South by Southwest festival in Austin, Texas, this past March, Watson pulled off an impressive stunt. IBM researchers had added to Watson a database of online recipes, nutrition charts from the U.S. Department of Agriculture (USDA), and research reports on what makes meals taste good. With this data, Watson drew on flavor profiles and patterns from existing dishes to create new ones. One sought-after dish it invented is a tasty version of fish and chips, made with fish marinated in lime juice and fried plantains. I enjoyed it at the IBM labs in Yorktown Heights, along with another delicious Watson creation: a Swiss-Thai asparagus quiche. The flavors were pretty good!
Non-human intelligence is not a bug; it's a feature. The chief virtue of AIs is their "alien intelligence." An AI that thinks about food differently from any chef lets us look at food differently, too - or think differently about manufacturing materials, or clothes, or financial derivatives, or any branch of science or art. The alienness of artificial intelligence is more valuable to us than its speed or power.
In fact, AI will help us better understand what we meant by intelligence in the first place. In the past, we might have said that only a superintelligent AI could drive a car, beat a human at Jeopardy!, or win a chess tournament. But once AI did each of those things, we dismissed the achievement as obviously mechanical and hardly worth the label of true intelligence. Every success in AI redefines it.
But we aren't just continually redefining what we mean by AI - we're also redefining what it means to be human. Over the past 60 years, as mechanical processes have replicated behaviors and talents we once thought were uniquely human, we've had to change our minds about what sets us apart. As we invent more and more kinds of artificial intelligence, we will have to surrender even more of what is supposedly unique to humans. Over the next decade - indeed, over the next century - we will be in the midst of a prolonged identity crisis, continually asking ourselves what it means to be human. The irony is that the greatest benefit of the everyday, practical AI we encounter will not be increased productivity, an economics of abundance, or a new way of doing research - although all of those will happen. The greatest benefit of artificial intelligence is that it will help define humanity. We need AI to tell us who we really are.