Game Complexity III: Artificial Intelligence

Updated: Apr 12, 2019

Pit mind against machine as we discuss the relationship between board games and artificial intelligence.


Welcome to the final entry in our game complexity series. If you haven't read the first two parts of the series, we recommend starting there, as we will be referring back to both articles here.


Part One: State-Space and Game-Tree Complexity

Part Two: Modern Board Game Complexity Analysis


In the previous entries of this series, we learned how to measure complexity in games and applied those lessons to a trio of modern board gaming classics. Now we will take a big step forward and explore the relevance of our findings to the world of artificial intelligence.


In the 1950s, some super smart folks started thinking seriously about artificial intelligence. Alan Turing, who had developed his theory of computation back in the 1930s, proposed his well-known Turing Test in the 1950 paper "Computing Machinery and Intelligence". That same year, Claude Shannon published "Programming a Computer for Playing Chess", and in doing so essentially invented the concept of automated game playing.

It is within this context that complexity values become important. As we discussed in part one, Shannon calculated a conservative lower bound on the game-tree size of chess, now known as the Shannon number, and also estimated the game's state-space. He did this to demonstrate how difficult it would be to hard-code a chess strategy into a computer. Essentially, he argued that having a computer look up a move based on board state alone would be functionally impossible, because searching such a large state-space would take far too long.

Computing power is measured in floating-point operations per second (FLOPS). The fastest computers in the world are measured in petaflops, where one petaflop is one quadrillion (10^15) operations per second. Humanity's most powerful computer (IBM's OLCF-4, a.k.a. Summit) tops out at two hundred petaflops. In other words, it can perform in one second the same number of operations the 2018 MacBook Pro can perform in nine days.
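That nine-day figure is easy to verify with back-of-envelope arithmetic. Here is a quick sanity check in Python; the laptop's sustained rate of roughly 2.6 × 10^11 FLOPS (about 260 gigaflops) is an assumed figure, chosen to be consistent with the comparison above rather than a measured benchmark:

```python
# Back-of-envelope check: how long does a 2018 MacBook Pro need to match
# one second of Summit? The laptop figure is an assumption (~260 gigaflops
# sustained), not a measured value.

SUMMIT_FLOPS = 2.0e17      # 200 petaflops
MACBOOK_FLOPS = 2.6e11     # ~260 gigaflops (assumed)

seconds_needed = SUMMIT_FLOPS / MACBOOK_FLOPS
days_needed = seconds_needed / 86_400  # 86,400 seconds per day

print(f"{days_needed:.1f} days")  # -> about 8.9 days
```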

Returning to Shannon's postulation: if we searched for one specific board state of chess using the world's fastest computer, the search would still be running when the universe ended. With a state-space on the order of 10^43 positions, even checking 2 × 10^17 positions per second would take roughly 10^25 seconds, while the universe is only about 4 × 10^17 seconds old. Shannon theorized that instead of relying on look-ups, a computer would need to make decisions. To that end, he developed an evaluation system with which the computer would calculate the optimal move. To grossly oversimplify, the procedure simulates several moves ahead, scores the resulting positions, and selects whichever move leads to the most favorable simulated outcome.
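Shannon's decision procedure is the ancestor of what is now called minimax search with an evaluation function. Below is a minimal, game-agnostic sketch of the idea, not Shannon's actual program; the `legal_moves`, `apply_move`, and `evaluate` callables are hypothetical placeholders you would supply for a real game:

```python
# Minimal minimax sketch in the spirit of Shannon's proposal.
# `legal_moves`, `apply_move`, and `evaluate` are hypothetical placeholders;
# evaluate() scores a position from the maximizing player's perspective.

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None  # leaf: score the position

    best_move = None
    if maximizing:
        best_score = float("-inf")
        for move in moves:
            score, _ = minimax(apply_move(state, move), depth - 1, False,
                               legal_moves, apply_move, evaluate)
            if score > best_score:
                best_score, best_move = score, move
    else:
        best_score = float("inf")
        for move in moves:
            score, _ = minimax(apply_move(state, move), depth - 1, True,
                               legal_moves, apply_move, evaluate)
            if score < best_score:
                best_score, best_move = score, move
    return best_score, best_move
```

Real chess engines layer refinements such as alpha-beta pruning on top, but the skeleton is Shannon's: simulate a few moves ahead, score the resulting positions, and back those scores up the tree.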

Shannon's vision was embodied by the 1997 IBM machine, Deep Blue. This chess-playing computer was the first ever to defeat a reigning world champion, Garry Kasparov, under standard time controls. Setting aside the drama that followed, in which Kasparov accused IBM of cheating, Deep Blue's triumph marked an important moment in computing history.

World Chess champion Garry Kasparov (left) considering his opening for his final game against IBM's Deep Blue computer on May 11, 1997.

Inspired by Kasparov's defeat, Omar Syed, a computer engineer trained in artificial intelligence, invented Arimaa. Syed's goal was to use a standard chess set to create a game that, while difficult for computers, would be easy for humans to learn and fun to play. For reference, the state-space and game-tree complexities of chess are 10^43 and 10^123 respectively, versus 10^43 and 10^402 for Arimaa.

Since 2004, three annual Arimaa tournaments have been held: a World Championship (humans), a Computer Championship (computers), and the Arimaa Challenge (human champion vs. computer champion). For eleven years humans prevailed, but in 2015 Sharp, a program designed by David Wu, defeated three human champions to become the first computer to win the Arimaa Challenge.

Since the time of Claude Shannon and Alan Turing, other experts have picked up the torch and pushed the boundaries of computing. We now live in a world with an enormous amount of AI applied to games. Be it video games or board game apps, AI is everywhere. And the progress of AI is linked to, or at least benchmarked by, game complexity: the more complex the game, the more complex the AI must be. For the most part, humans were able to stave off their AI contenders in these high-complexity games. At least that was the case until AlphaGo made history.

AlphaGo is a Go-playing computer program developed by DeepMind (acquired by Google in 2014), and in October 2015 it became the first computer Go program to beat a professional human player, Fan Hui, on a full 19x19 board without handicap. After his defeat, Fan Hui said,

"I didn't feel like I was playing against a computer at all. She plays like a human. You can usually tell you are playing against a computer, because it will make strange moves. With AlphaGo, there were no such moves, only normal moves. This is incredible."

Fan Hui was also asked a fairly obvious question: why even play against machines, when there is seemingly nothing to gain but so much to lose? To this question, Fan Hui responded,

"I think they will change Go. We have played for more than four thousand years, but despite that, we still might know maybe only ten percent of the game, since it is so complex. Computers can be a partner to make progress together.”

AlphaGo did not let up, and in March 2016 it challenged Lee Sedol to a five-game match. Sedol, known as "The Strong Stone," was among the world's most skilled active Go players. He was basically the LeBron James of Go, and was humanity's last bastion of hope for maintaining our position as the champions of complex strategy. By the end of the five-game match, despite his valiant efforts, Sedol fell 4-1 to the AI. AlphaGo's victory over Sedol was a huge moment in the history of AI, as never before had a program defeated a human champion in such a complex game.

In addition to these landmark victories, the manner in which AlphaGo's logic was developed is worthy of note. The program has two networks: a value network to evaluate board positions, and a policy network to select moves. These networks were first trained in a process called "supervised learning," in which games played by strong human players honed the program's skills. However, this style of training limited AlphaGo's potential: since humans were supplying the training data, AlphaGo could only be as good as its human teachers. With this in mind, the developers also pitted AlphaGo against other programs, as well as against itself. This training regime, combined with a Monte Carlo tree search algorithm, yielded the most sophisticated board-game-playing AI in history.
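To make that architecture concrete, here is a heavily simplified sketch of how a policy network and a value network can guide a Monte Carlo tree search. This is not DeepMind's implementation: the `policy_net` and `value_net` stubs and the game interface are hypothetical placeholders, and the real AlphaGo adds rollouts, batching, and many other refinements:

```python
import math

# Hypothetical stubs. A real system would use trained neural networks and
# an actual Go implementation; here a "state" is any opaque game object.

def policy_net(state, moves):
    """Prior probability for each legal move (uniform stub)."""
    return {m: 1.0 / len(moves) for m in moves}

def value_net(state):
    """Estimated win probability for the player who just moved (stub)."""
    return 0.5

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a) from the policy network
        self.visits = 0           # N(s, a)
        self.total_value = 0.0    # W(s, a)
        self.children = {}        # move -> Node

    def q(self):
        return self.total_value / self.visits if self.visits else 0.0

def select(node, c_puct=1.0):
    """PUCT-style choice: exploit high values, explore high priors."""
    total = sum(child.visits for child in node.children.values())
    def score(item):
        child = item[1]
        u = c_puct * child.prior * math.sqrt(total + 1) / (1 + child.visits)
        return child.q() + u
    return max(node.children.items(), key=score)

def simulate(state, root, legal_moves, apply_move):
    """One MCTS simulation: walk to a leaf, expand it, back up the value."""
    node, path = root, [root]
    while node.children:
        move, node = select(node)
        state = apply_move(state, move)
        path.append(node)
    moves = legal_moves(state)
    if moves:  # expand the leaf using the policy network's priors
        priors = policy_net(state, moves)
        node.children = {m: Node(priors[m]) for m in moves}
    value = value_net(state)      # evaluate the leaf with the value network
    for node in reversed(path):   # back up, flipping perspective each ply
        node.visits += 1
        node.total_value += value
        value = 1.0 - value
```

After many simulations from the current position, the engine simply plays the root move with the most visits. The self-play training described above then uses the results of those games to improve the networks, so better networks produce better searches in turn.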

Lee Sedol (right) during his first match against AlphaGo, on March 9, 2016.

The aftermath of AlphaGo's triumph over Lee Sedol leaked into worldwide headlines and many internet publications, but Cade Metz at Wired provided the most inspiring outlook on the event.

"In game two, the Google machine made a move that no human ever would. And it was beautiful. As the world looked on, the move so perfectly demonstrated the enormously powerful and rather mysterious talents of modern artificial intelligence. But in game four, the human made a move that no machine would ever expect. And it was beautiful too. Indeed, it was just as beautiful as the move from the Google machine - no less and no more. It showed that although machines are now capable of moments of genius, humans have hardly lost the ability to generate their own transcendent moments. And it seems that in the years to come, as we humans work with these machines, our genius will only grow in tandem with our creations."

AlphaGo went on to defeat Ke Jie, the world number one at the time, in May 2017, and was awarded a 9-dan rank (the highest achievable) by the Chinese Weiqi Association. Before officially retiring the machine, DeepMind published a special set of AlphaGo vs. AlphaGo matches to potentially reveal new strategies for humans to build on. Professional player Shi Yue described the series as,

"Like nothing I've ever seen before. They're how I imagine games from far in the future."

Board games, like many other things, are very complex, and we humans can use all the help we can muster to fully explore them. Only time will tell how we continue to develop and apply artificial intelligence, but until then I’ll just hope someone improves the crummy Tigris & Euphrates mobile app AI.

Final board states of the five-game match between Lee Sedol and AlphaGo.

Note: DeepMind has since developed a new AI, AlphaStar, dedicated to the real-time strategy computer game StarCraft II. In January 2019, AlphaStar defeated professional StarCraft II players TLO and MaNa by a combined score of 10-1, becoming the first AI to defeat professional-level opponents with no game restrictions. Visit the DeepMind blog for more information.


Update: Antoine Bauza's Hanabi has become DeepMind's next target, marking a shift of focus from zero-sum games (e.g. Go) to partially observable games. Due to the limited information and communication in Hanabi, artificial intelligence agents must exploit the theory-of-mind reasoning commonly employed by humans. The so-called Hanabi Challenge aims to make progress in this area, where reinforcement learning algorithms have historically struggled. From the abstract of DeepMind's Bayesian action decoder paper:

"Although learning algorithms have recently achieved superhuman performance in a number of two-player, zero-sum games, scalable multi-agent reinforcement learning algorithms that can discover effective strategies and conventions in complex, partially observable settings have proven elusive. We present the Bayesian action decoder (BAD), a new multi-agent learning method that uses an approximate Bayesian update to obtain a public belief that conditions on the actions taken by all agents in the environment."
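As a toy illustration of the approximate Bayesian update at the heart of BAD, here is a sketch of how a public belief over one hidden card value can condition on an observed action. Everything here is hypothetical and drastically simplified: the card values, the made-up likelihood numbers, and the `action_likelihood` interface are ours, while the real method maintains beliefs over entire hands and derives its likelihoods from the agents' learned policies:

```python
# Toy Bayesian belief update over a single hidden card value.
# BAD itself conditions a public belief over whole hands on agents'
# actions; this sketch only shows the Bayes-rule mechanics.

def update_belief(belief, action, action_likelihood):
    """Posterior over hidden values after observing an action.

    belief: dict mapping hidden value -> prior probability
    action_likelihood(action, value): P(action | hidden value)
    """
    posterior = {v: p * action_likelihood(action, v)
                 for v, p in belief.items()}
    total = sum(posterior.values())
    return {v: p / total for v, p in posterior.items()}

# Example: a teammate's hint ("playable") is twice as likely when the
# hidden card value is 1. The numbers are made up for illustration.
prior = {1: 0.2, 2: 0.2, 3: 0.2, 4: 0.2, 5: 0.2}
likelihood = lambda a, v: 0.8 if (a == "hint_playable" and v == 1) else 0.4

print(update_belief(prior, "hint_playable", likelihood))
# -> the belief shifts toward value 1 (1/3 vs. 1/6 for the others)
```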

References:

“AlphaStar: Mastering the Real-Time Strategy Game StarCraft II.” DeepMind, 2019.

Celestin, Roger. Kasparov v Deep Blue. 11 May 1997.

“Exploring the Mysteries of Go with AlphaGo and China's Top Players.” DeepMind, 2017.

Hunter, Craig A. “2018 MacBook Pro Review.” Hrtapps, 2018, hrtapps.com/blogs/20180712/.

Kohs, Greg, director. AlphaGo. 29 Sept. 2017.

Shannon, Claude E. “Programming a Computer for Playing Chess.” Computer Chess Compendium, 1988, pp. 2–13, doi:10.1007/978-1-4757-1968-0_1.

“TOP500 Supercomputer Sites.” TOP500, Nov. 2018.

Turing, Alan M. “Computing Machinery and Intelligence.” Mind, vol. LIX, no. 236, 1950, pp. 433–460, doi:10.1093/mind/lix.236.433.
