See guys, I told you there’d be a part two. No one believed me, but I’ve proven you all wrong. Haha, petty minor victory for me. Harvey – 1, Readers – 0.
Just pretend that this post joins seamlessly with my previous one. You know, for continuity’s sake. Anyhow, I’ll do a quick recap for those of you with poor long-term memory (by ‘you’ I mean me – I have a really bad memory). Last year Google’s DeepMind team started developing a computer program, AlphaGo, that could play the incredibly complex board game Go. In October last year it defeated the European Go champion – the first time a computer Go program had beaten a professional human player on a full 19 × 19 board without a handicap. Following more technical advances and lots of training, AlphaGo went on to play one of the best Go players in the world, Lee Sedol, a few weeks ago in a five-game match. Bearing in mind that Lee Sedol had predicted he would sweep the AI aside, it came as quite a surprise to everyone when AlphaGo crushed him 4–1. And this is where I shall continue the tale…
Some of you may have heard of (or, if you’re really old, remember) the Deep Blue vs Kasparov matches of 1996 and 1997. The 1997 match was the first time a computer program had beaten a reigning world chess champion in a full match. This understandably shook many people, who began to sense the threat of machine overcoming man. However, the last time I checked, the world is still being run by humans, so we’re in no danger – contrary to the fears of notorious technophobes Elon Musk and Stephen Hawking, the latter of whom warned that ‘the development of full artificial intelligence could spell the end of the human race’.
Looking back, the Deep Blue software was in fact relatively primitive. It worked by sifting through candidate moves using its immense parallel processing power, then picking out the best ones with an evaluation function – an algorithm that measures the ‘goodness’ of a given chess position. To make the search more efficient, Deep Blue employed a technique called selective extensions: instead of attempting an exhaustive ‘brute force’ search of every possible position, it selectively chose promising lines to follow more deeply, cutting off irrelevant searches in the process. Most modern chess programs use this kind of technique, with varying degrees of efficiency. An amateur chess player could sit down and, in a few hours, probably write an evaluation function that is pretty good at scoring chess positions – not quite grandmaster level, but good enough that, combined with the search, it produces very high-quality play.
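To make that concrete, here’s a toy sketch of how an evaluation function plugs into a depth-limited search. Everything here is hypothetical – Deep Blue’s real evaluator weighed thousands of hand-tuned chess features in custom hardware – but the shape of the idea is this:

```python
# A toy illustration of evaluation-function-driven search, in the spirit of
# (but vastly simpler than) Deep Blue. The 'game' here is made up: a position
# is just a list of numbers standing in for board features.

def evaluate(position):
    """Score a position from the maximizing player's perspective."""
    return sum(position)

def minimax(position, depth, maximizing, get_moves, apply_move):
    """Depth-limited minimax: look ahead, then fall back on the evaluator."""
    moves = get_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    if maximizing:
        return max(minimax(apply_move(position, m), depth - 1, False,
                           get_moves, apply_move) for m in moves)
    return min(minimax(apply_move(position, m), depth - 1, True,
                       get_moves, apply_move) for m in moves)

# Tiny fake game: each 'move' appends +1 or -1 to the position.
get_moves = lambda pos: [1, -1] if len(pos) < 5 else []
apply_move = lambda pos, m: pos + [m]

print(minimax([0], 3, True, get_moves, apply_move))  # prints 1
```

Real engines bolt on alpha–beta pruning and, like Deep Blue’s selective extensions, spend extra depth only on the lines that look critical.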
Go, on the other hand, is much more complicated. It is very difficult to evaluate a position just by looking at it: the game builds up structures over time that interact in complex ways, and you can’t just count up the pieces because each side usually has roughly the same number. Another thing that makes Go so hard to compute is that the search space is far too large for ‘brute force’. There are more possible Go positions than there are atoms in the universe – more than a googol times as many as there are chess positions. One of the advances the Google crew made was a better way of evaluating positions using machine learning, which is much closer to the way humans play the game. They built deep neural networks that take a description of the Go board as input and process it through twelve network layers containing millions of neuron-like connections, evaluating the position by abstracting the board – similar in spirit to what facial recognition software does. They then fed the networks over thirty million moves from games by professional players, to simulate experience, until they could predict the human’s move 57% of the time. This let AlphaGo pick out the more promising game pathways to examine, making the search far more tractable; the search framework this feeds into is known as Monte Carlo tree search. AlphaGo was then allowed to discover new strategies for itself by playing thousands of games against itself – a technique known as reinforcement learning.
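The playout idea at the heart of Monte Carlo search can be sketched in a few lines. This is a deliberately tiny, hypothetical example on a toy take-away game (take 1 or 2 stones; whoever takes the last stone wins), with none of AlphaGo’s machinery – its policy network narrows which moves to consider and a value network replaces most of the random playouts – but the core trick of judging a move by the simulated games that follow it is the same:

```python
import random

def random_playout(pile, my_turn):
    """Finish the game with random moves; return True if 'I' win."""
    if pile == 0:
        return not my_turn  # the previous mover took the last stone
    while pile > 0:
        pile -= random.choice([1, 2]) if pile >= 2 else 1
        if pile == 0:
            return my_turn  # whoever just moved took the last stone and wins
        my_turn = not my_turn

def best_move(pile, simulations=2000):
    """Pick the move whose random playouts win most often."""
    scores = {}
    for move in ([1, 2] if pile >= 2 else [1]):
        wins = sum(random_playout(pile - move, my_turn=False)
                   for _ in range(simulations))
        scores[move] = wins / simulations
    return max(scores, key=scores.get)
```

From a pile of 4, `best_move(4)` reliably picks 1 – the move that leaves the opponent a losing pile of 3 – purely from playout statistics, with no hand-written evaluation function at all.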
Certainly the concept of neural networks and abstraction is one gradually edging towards the capability of human thought. Will we be seeing sentient robots in the near future? Probably not – much to the comfort of Mr Hawking over here, I’m sure. Machines require vast amounts of data just to emulate a human at something as relatively simple as a board game. It seems that they can only imitate what we do first, meaning that humans will always be a step ahead, guiding the machines along. Therefore I don’t believe in killer robots, and neither should you. I am, however, very interested in the development of robots, but one thing is certain – we should probably learn a little about our own brains first.