The future won’t be made by either humans or machines alone, but by both, working together. Technologies modelled on how human brains work are already augmenting people’s abilities, and will only get more influential as society gets used to these increasingly capable machines.
Technology optimists have envisioned a world with rising human productivity and quality of life as artificial intelligence systems take over life’s drudgery and administrivia, benefiting everyone. Pessimists, on the other hand, have warned that these advances could come at great cost in lost jobs and disrupted lives. And fearmongers worry that AI might eventually make human beings obsolete.
However, people are not very good at imagining the future. Neither utopia nor doomsday is likely. In my new book, “The Deep Learning Revolution,” my goal was to explain the past, present and future of this rapidly growing area of science and technology. My conclusion is that AI will make you smarter, but in ways that will surprise you.
Recognising patterns
Deep learning is the part of AI that has made the most progress in solving complex problems like identifying objects in images, recognising speech from multiple speakers and processing text the way people speak or write it. Deep learning has also proven useful for identifying patterns in the increasingly large data sets that are being generated from sensors, medical devices and scientific instruments.
The goal of this approach is to find ways a computer can represent the complexity of the world and generalise from previous experience — even if what’s happening next isn’t exactly the same as what happened before. Just as a person can identify that a specific animal she has never seen before is in fact a cat, deep learning algorithms can identify aspects of what might be called “catness” and extract those attributes from new images of cats.
The methods for deep learning are based on the same principles that power the human brain. For instance, the brain handles lots of data of various kinds in many processing units at the same time. Neurons have many connections to each other, and those links strengthen or weaken depending on how much they’re used, establishing associations between sensory inputs and conceptual outputs.
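To make that analogy concrete, here is a minimal, purely illustrative sketch in Python (an assumption of mine, not an example from the book): a single artificial neuron whose connection strengths are nudged up or down each time it is used, so that a sensory input pattern gradually becomes associated with a desired output.

```python
import numpy as np

# Toy illustration: one artificial neuron with three input connections.
# Each time it is used, its connection strengths (weights) are adjusted
# so that the input pattern becomes associated with the desired output.
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=3)    # initial connection strengths

inputs = np.array([0.9, 0.1, 0.8])         # hypothetical sensory input
target = 1.0                               # desired response (say, "cat" = 1)
learning_rate = 0.5

for step in range(20):
    output = 1.0 / (1.0 + np.exp(-weights @ inputs))   # neuron's response
    error = target - output                            # mismatch with the target
    weights += learning_rate * error * inputs          # strengthen or weaken links

print(weights)   # connections from the active inputs have strengthened
```

Real deep learning networks stack millions of such units and train them on vast data sets, but the principle of use-dependent strengthening and weakening of connections is the same.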
The most successful deep learning network is based on 1960s research into the architecture of the visual cortex, a part of the brain that we use to see, and learning algorithms that were invented in the 1980s. Back then, computers were not yet fast enough to solve real-world problems. Now, though, they are.
In addition, learning networks have been layered on top of each other, creating webs of connections more closely resembling the hierarchy of layers found in the visual cortex. This is part of a convergence taking place between artificial and biological intelligence.
Deep learning in real life
Deep learning is already adding to human capabilities. If you use Google services to search the web, translate from one language to another or turn speech into text, the technology has made you smarter and more effective. Recently, on a trip to China, a friend spoke English into his Android phone, which translated it into spoken Chinese for a taxi driver, just like the universal translator on Star Trek.
These and many other systems are already at work, helping you in your daily life even if you’re not aware of them. For instance, deep learning is beginning to take over the reading of X-ray images and photographs of skin lesions for cancer detection. Your local doctor will soon be able to spot problems that are evident today only to the best experts.
Even when you do know there’s a machine involved, you might not grasp the complexity of what it’s actually doing. Behind Amazon’s Alexa is a bevy of deep learning networks that recognise your request, sift through data to answer your questions and take actions on your behalf.
Advancing learning
Deep learning has been highly effective at solving pattern recognition problems, but to go beyond this requires other brain systems. When an animal is rewarded for an action, it is more likely to take similar actions in the future. Dopamine neurons in the basal ganglia of the brain report the difference between expected and received rewards, called reward prediction error, which is used to change the strengths of connections in the brain that predict future rewards.
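To see how such a prediction error can drive learning, here is a toy temporal-difference sketch in Python; the states, rewards and learning rate are assumptions chosen purely for illustration, not a model of any real experiment.

```python
# Toy temporal-difference (TD) learning: the reward prediction error is the gap
# between what was received (plus the value of what comes next) and what was
# expected, and it is used to update the expectations themselves.
value = {"cue": 0.0, "food": 0.0}   # expected future reward in each state
alpha, gamma = 0.1, 0.9             # learning rate and discount factor

for trial in range(200):
    # a cue is always followed by food worth a reward of 1.0
    prediction_error = 0.0 + gamma * value["food"] - value["cue"]
    value["cue"] += alpha * prediction_error
    prediction_error = 1.0 + gamma * 0.0 - value["food"]   # food ends the trial
    value["food"] += alpha * prediction_error

print(value)   # over many trials, the cue itself comes to predict the reward
```

Over repeated trials the prediction error shrinks and the cue alone comes to signal the upcoming reward, much as dopamine responses in animal experiments shift from the reward itself to the cue that predicts it.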
Coupling this approach, called reinforcement learning, with deep learning can give computers the power to identify unexpected possibilities. By recognising a pattern and then responding to it in a way that yields rewards, machines might approach behaviours along the lines of what might be called human creativity. This coupled approach is how DeepMind developed a programme called AlphaGo, which in 2016 defeated grandmaster Lee Sedol and the following year beat the world Go champion, Ke Jie.
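A bare-bones sketch of that coupling, again in Python and with assumed toy payoffs rather than anything from AlphaGo itself: a (very shallow) network chooses between two actions, and whichever choices bring more reward have their connections strengthened, the policy-gradient principle that deep networks scale up to games like Go.

```python
import numpy as np

# Toy policy-gradient learning on a two-choice task with hidden payoffs.
rng = np.random.default_rng(1)
prefs = np.zeros(2)                  # the "network": one weight per action
true_payoff = np.array([0.2, 0.8])   # hidden probability of reward per action
alpha = 0.1

for step in range(2000):
    probs = np.exp(prefs) / np.exp(prefs).sum()         # softmax over actions
    action = rng.choice(2, p=probs)                      # try an action
    reward = float(rng.random() < true_payoff[action])   # observe the outcome
    grad = -probs
    grad[action] += 1.0
    prefs += alpha * reward * grad    # rewarded choices are reinforced

print(probs)   # the policy comes to prefer the better-rewarded action
```

In AlphaGo, reward-driven updates of this kind were applied to deep networks that evaluate board positions and select moves.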
Games are not as messy as the real world, which is filled with shifting uncertainties. Massimo Vergassola at the University of California, San Diego, and I recently used reinforcement learning to teach a glider in the field how to soar like a bird in turbulent thermals. Sensors can be attached to actual birds to test whether they use the same cues and respond the same way.
Despite these successes, researchers do not yet fully understand how deep learning solves these problems. Of course, we don’t know how the brain solves them either.
While the brain’s inner workings may remain elusive, it is only a matter of time before researchers develop a theory of deep learning. The difference is that when studying computers, researchers have access to every connection and pattern of activity in the network. The pace of progress is rapid, with research papers appearing daily on arXiv. Surprising advances are eagerly anticipated this December at the Neural Information Processing Systems conference in Montreal, which sold out 8,000 tickets in 11 minutes, leaving 9,000 hopeful registrants on the waiting list.
There is a long way to go before computers achieve general human intelligence. The largest deep learning network today has only the power of a piece of human cortex the size of a rice grain. And we don’t yet know how the brain dynamically organises interactions between larger brain areas.
Nature already has that level of integration, creating large-scale brain systems capable of operating all aspects of the human body while pondering deep questions and completing complex tasks. Ultimately, autonomous systems may become as complex, joining the myriad living creatures on our planet.
The writer is Francis Crick Professor and Director of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies, and Distinguished Professor of Neurobiology at the University of California San Diego. This article first appeared on www.theconversation.com