Monday, June 19, 2017

The mind in the machine: Demis Hassabis on artificial intelligence【Artificial Intelligence】

 

Modern civilisation is a miraculous feat, one made possible by science. Every time I take a flight, I marvel at the technology that has allowed us to soar above the clouds as a matter of routine. We have mapped the genome, built supercomputers and the internet, landed probes on comets, smashed atoms at near light speed in particle accelerators and put a man on the Moon. How have we managed to do any of this? When one stops to contemplate what has been accomplished by our 3lb brains, it’s quite remarkable.

The scientific method might be the single most powerful idea humans have ever had, and progress since the Enlightenment has been simply astonishing. But we are now at a critical juncture where many of the systems we need to master are fiendishly complex, from climate change to macroeconomic issues to Alzheimer’s disease. Whether we can solve these challenges — and how fast we can get there — will affect the future wellbeing of billions of people and the environment we all live in.
The problem is that these challenges are so complex that even the world’s top scientists, clinicians and engineers can struggle to master all the intricacies necessary to make the breakthroughs required. It has been said that Leonardo da Vinci was perhaps the last person to have lived who understood the entire breadth of knowledge of their age. Since then we’ve had to specialise, and today it takes a lifetime to completely master even a single field such as astrophysics or quantum mechanics. 
The systems we now seek to understand are underpinned by a vast amount of data, usually highly dynamic, non-linear and with emergent properties that make it incredibly hard to find the structure and connections to reveal the insights hidden therein. Kepler and Newton could write equations to describe the motion of planets and objects on Earth, but few of today’s problems can be reduced to a simple set of elegant and compact formulae.
This is one of the greatest scientific challenges of our times. The founding fathers of the modern computer age — Alan Turing, John von Neumann, Claude Shannon — all understood the central importance of information theory, and today we have come to realise that almost everything can either be thought of or expressed in this paradigm. This is most evident in bioinformatics, where the genome is effectively a gigantic information coding schema. I believe that, one day, information will come to be viewed as being as fundamental as energy and matter. 
At its core, intelligence can be viewed as a process that converts unstructured information into useful and actionable knowledge. The scientific promise of artificial intelligence (AI), to which I have devoted my life’s work, is that we may be able to synthesise, automate and optimise that process, using technology as a tool to help us acquire rapid new knowledge in fields that would remain intractable for humans unaided. 
***
Today, working on AI has become very fashionable. However, the term AI can mean myriad things depending on the context. The approach we take at DeepMind, the company I co-founded, focuses on notions of learning and generality, with the aim of developing the kind of AI we need for science. If we want computers to discover new knowledge, then we must give them the ability to truly learn for themselves. 
The algorithms we work on learn how to master tasks directly from raw experience, meaning that the knowledge they acquire is ultimately grounded in some form of sensory reality rather than in abstract symbols. We further require them to be general in the sense that the same system with the same parameters can perform well across a wide range of tasks. Both these tenets were demonstrated in DeepMind’s 2015 Nature paper in which a single program taught itself to play dozens of classic Atari games, with no input other than the pixels on the screen and the running score. We also use systems-level neuroscience as a key source of inspiration for new algorithmic and architectural ideas. After all, the brain is the only existence proof we have that a general-purpose experience-based learning system is even possible.
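To make the idea of learning "directly from raw experience" concrete, here is a minimal sketch in the spirit of that tenet: a table-based reinforcement-learning agent that improves purely from what it observes and the reward it receives, with no task knowledge coded in. It is a toy stand-in, not the deep Q-network of the 2015 Nature paper; the corridor environment and every name in it are invented for illustration.

```python
import random

# An invented toy set-up: a one-dimensional corridor of cells. The agent sees
# only its cell index (its "raw observation") and a reward signal (its
# "score"); nothing about the task is hand-coded into the agent itself.
class CorridorEnv:
    def __init__(self, length=10):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):                      # action: 0 = left, 1 = right
        delta = 1 if action == 1 else -1
        self.pos = max(0, min(self.length - 1, self.pos + delta))
        done = self.pos == self.length - 1
        reward = 1.0 if done else 0.0            # reward only at the goal cell
        return self.pos, reward, done

def train(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: a table-based stand-in for a deep Q-network."""
    q = [[0.0, 0.0] for _ in range(env.length)]
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:        # occasionally explore
                action = random.randint(0, 1)
            else:                                # otherwise exploit, breaking ties at random
                best = max(q[state])
                action = random.choice([a for a in (0, 1) if q[state][a] == best])
            nxt, reward, done = env.step(action)
            # the only teacher is the reward the agent actually experiences
            q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    return q

if __name__ == "__main__":
    q = train(CorridorEnv())
    print("greedy action per cell:", [q[s].index(max(q[s])) for s in range(9)])
```

Swap the table for a deep network and the corridor for raw Atari frames and a running score, and this loop is roughly the shape of the approach described above.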
This is a radical departure from the approach of many of our predecessors. The difference is perhaps best illustrated by comparing two breakthrough programs that achieved world firsts in the field of games: IBM’s Deep Blue, which beat the world chess champion Garry Kasparov in 1997, and our recent AlphaGo program, which last year beat one of the world’s top players at the even more complex game of Go. Deep Blue used what is known as an “expert systems” approach: a team of programmers sat down with some chess grandmasters to explicitly distil and codify their knowledge into a sophisticated set of heuristics. A powerful supercomputer then used these handcrafted rules to assess a vast number of possible variations, calculating its way by brute force to the right move. 
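That recipe can be sketched in a few lines: a hand-written evaluation function standing in for the grandmasters' codified knowledge, plus minimax search with alpha-beta pruning grinding through the variations. The toy game below (players alternately add 1 or 2 to a running total; whoever reaches exactly 10 wins) is only an illustration of the general approach, not Deep Blue's engine, and all of its names are invented for the example.

```python
from dataclasses import dataclass

# The "expert systems" recipe in miniature: a handcrafted evaluation heuristic
# plus brute-force minimax search with alpha-beta pruning.

TARGET = 10

@dataclass(frozen=True)
class State:
    total: int
    max_to_move: bool                 # True if the maximising player moves next

    def legal_moves(self):
        return [m for m in (1, 2) if self.total + m <= TARGET]

    def play(self, move):
        return State(self.total + move, not self.max_to_move)

    def is_terminal(self):
        return self.total == TARGET or not self.legal_moves()

    def evaluate(self):
        # The handcrafted "expert knowledge": terminal wins are worth +/-1, and
        # a side left on a losing residue (distance to TARGET divisible by 3)
        # is judged to be behind. In Deep Blue this role was played by rules
        # about material, king safety, pawn structure and so on.
        if self.total == TARGET:
            return -1.0 if self.max_to_move else 1.0    # whoever just moved has won
        losing_residue = (TARGET - self.total) % 3 == 0
        if self.max_to_move:
            return -0.5 if losing_residue else 0.5
        return 0.5 if losing_residue else -0.5

def alphabeta(state, depth, alpha=float("-inf"), beta=float("inf")):
    """Search `depth` plies ahead and return the heuristic value of `state`."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    if state.max_to_move:
        value = float("-inf")
        for move in state.legal_moves():
            value = max(value, alphabeta(state.play(move), depth - 1, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:         # prune branches that cannot affect the result
                break
        return value
    value = float("inf")
    for move in state.legal_moves():
        value = min(value, alphabeta(state.play(move), depth - 1, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

def best_move(state, depth=10):
    return max(state.legal_moves(), key=lambda m: alphabeta(state.play(m), depth - 1))

if __name__ == "__main__":
    print("best opening move:", best_move(State(total=0, max_to_move=True)))
```

Note that nothing here is learned: all of the "knowledge" lives in evaluate(), and the strength of such a program comes from how deeply and quickly the search can run.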
Deep Blue’s victory against Kasparov represented a major milestone in the history of AI. But its win was more a testament to the brilliance of its team of programmers and grandmasters, as well as to the computational power of the contemporary hardware, than to any inherent intelligence in the program itself.
After chess was cracked, Go became the new holy grail for AI research. Go is around 3,000 years old and has profound cultural importance across Asia, where it is considered to be not just a game but an art form, and its professional champions are public icons. With an astonishing 10 to the power of 170 possible board configurations — more than the number of atoms in the universe — it is insoluble by brute-force methods. In fact, even writing a function to determine which side is winning in a particular Go position was long thought to be impossible, since a tiny change in the location of a single piece can radically alter the entire board state. Top human Go players deal with this enormous complexity by leaning heavily on their intuition and instinct, often describing moves as simply “feeling right”, in contrast to chess players, who rely more on precise calculation.
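As a quick back-of-the-envelope check on that figure, treating roughly 10^80 atoms in the observable universe and roughly 2 × 10^170 legal positions as the commonly cited estimates:

```python
import math

# Each of the 361 points on a 19x19 board can be empty, black or white, giving
# a naive upper bound of 3**361 configurations; the commonly cited count of
# *legal* positions is about 2 x 10**170, against an estimated ~10**80 atoms
# in the observable universe.
upper_bound_exponent = 361 * math.log10(3)
print(f"3^361 is about 10^{upper_bound_exponent:.0f}")   # ~10^172
print("legal positions ~ 10^170, atoms ~ 10^80")
print(f"that is roughly 10^{170 - 80} positions per atom")
```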
For AlphaGo we realised that in order to capture this intuitive aspect of the game we would have to take an approach radically different from chess programs such as Deep Blue. Rather than hand-coding human expert strategies, we used general-purpose techniques including deep neural networks to build a learning system, and showed it thousands of strong amateur games to help it develop its own understanding of what reasonable human play looks like. Then we had it play against different versions of itself thousands of times, each time learning from its mistakes and incrementally improving until it became immensely strong. In March 2016 we were ready to take on the ultimate challenge: playing the legendary Lee Se-dol, winner of 18 world titles and widely considered to be the greatest player of the past decade. 
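Stripped to its bones, that self-play stage can be echoed in a few dozen lines. The sketch below learns the same toy counting game used earlier purely by playing against itself, with no strategy hand-coded anywhere; it omits the human-game bootstrapping step and compresses away the deep policy and value networks and the Monte Carlo tree search that AlphaGo actually used, and all of its names are illustrative placeholders.

```python
import random
from collections import defaultdict

# A deliberately tiny echo of the self-play idea, on the same toy game as the
# earlier sketch (add 1 or 2 to a running total; whoever reaches 10 wins).
# No strategy is written in: a value table starts empty and is shaped only by
# the outcomes of games the program plays against itself.

TARGET = 10
values = defaultdict(float)        # learned value of a total, for the player to move

def moves(total):
    return [m for m in (1, 2) if total + m <= TARGET]

def choose(total, epsilon):
    if random.random() < epsilon:                  # explore
        return random.choice(moves(total))
    # otherwise leave the opponent in the position the table rates worst for them
    return min(moves(total), key=lambda m: values[total + m])

def self_play_game(epsilon=0.2, alpha=0.1):
    total, history = 0, []
    while total < TARGET:
        history.append(total)
        total += choose(total, epsilon)
    # the player who made the final move won; push the result back through the
    # game, flipping sign at each step because the players alternate
    outcome = -1.0                                 # player to move at TARGET has lost
    values[TARGET] = outcome
    for state in reversed(history):
        outcome = -outcome
        values[state] += alpha * (outcome - values[state])

for _ in range(20000):
    self_play_game()

# with enough self-play the table discovers the winning opening by itself
print("preferred opening move:", min(moves(0), key=lambda m: values[m]))
```

The contrast with the previous sketch is the point: there the knowledge was written in by hand, here it emerges from nothing but the outcomes of the program's own games.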
More than 200 million people watched online as AlphaGo emerged a surprise 4-1 victor, with the consensus among experts that this breakthrough was a decade ahead of its time. More importantly, during the games AlphaGo played a handful of highly inventive winning moves, one of which — move 37 in game two — was so surprising it overturned hundreds of years of received wisdom and has been intensively examined by players since. In the course of winning, AlphaGo somehow taught the world completely new knowledge about perhaps the most studied game in history. 
***
These moments of algorithmic inspiration give us a glimpse of why AI could be so beneficial for science: the possibility of machine-aided scientific discovery. We believe the techniques underpinning AlphaGo are general-purpose and could be applied to a wide range of other domains, especially those with clear objective functions that can be optimised, and environments that can be accurately simulated, allowing for efficient high-speed experimentation. In energy efficiency, for instance, we used a variant of these algorithms to find a set of novel techniques able to reduce the energy used to cool Google’s data centres by 40 per cent, which we are now rolling out across the fleet, and which will deliver a huge cost saving and be great for the environment. 
We believe that in the next few years scientists and researchers using similar approaches will generate insights in a multitude of areas, from superconductor material design to drug discovery. In many ways I see AI as analogous to the Hubble telescope — a scientific tool that allows us to see farther and better understand the universe around us. 
Of course, like any powerful technology AI must be used responsibly, ethically and to benefit everyone. We must also continue to be highly cognisant of both the utility and limitations of AI algorithms. But with rigorous attention to programs’ capabilities, and more research into the effects of the quality of the data we use as inputs and the transparency of their workings, we may find that AI can play a vital role in supporting all manner of experts by identifying patterns and sources that can escape human eyes alone.
It is in this collaboration between people and algorithms that incredible scientific progress lies over the next few decades. I believe that AI will become a kind of meta-solution for scientists to deploy, enhancing our daily lives and allowing us all to work more quickly and effectively. If we can deploy these tools broadly and fairly, fostering an environment in which everyone can participate in and benefit from them, we have the opportunity to enrich and advance humanity as a whole.
In doing so, we may learn something about ourselves, too. I’ve always felt that physics and neuroscience are in some ways the most fundamental subjects: one is concerned with the external world out there, and the other with the internal world in our minds. Between them they therefore cover everything. AI has the potential to help us to understand both better. As we discover more about the learning process itself and compare it to the human brain, we could one day attain a better understanding of what makes us unique, including shedding light on such enduring mysteries of the mind as dreaming, creativity and perhaps one day even consciousness. 
If AI can help us as a society to not only save the environment, cure disease and explore the universe, but also better understand ourselves — well, that may prove one of the greatest discoveries of them all. 
Demis Hassabis is co-founder and CEO of DeepMind
Photographs: Caleb Charland
Copyright The Financial Times Limited 2017. All rights reserved.

