# Game Complexity I: State-Space & Game-Tree Complexities

**Board games are becoming increasingly complex. Learn how to quantify game complexity through state-space and game-tree complexity.**

Welcome to part one of the game complexity series where we will explore the complexity of modern (and ancient) board games. We'll begin with a thorough lesson in state-space and game-tree complexity and then analyze a trio of modern board games; namely Carcassonne, Agricola, and Through the Ages. Finally, we will discuss the importance of game complexity as it relates to artificial intelligence.

Let's start by discussing two key complexity metrics: state-space complexity and game-tree complexity. But before we dive head-first into this discussion, it's important to set expectations for our analysis. These two metrics will let us compare the mathematical complexities of various games to one another; however, these numbers do not directly correlate to the *difficulty* of said games. Which is to say, a mathematically complex game is not inherently a more difficult one. Complexity is the quantitative scope of the game, whereas difficulty relates more to its qualitative strategic elements. In the hobby game industry, complexity and difficulty are often used to mean similar things, but in our analysis, it's important to loosen their associations.

State-space complexity is the count of legal positions that can be reached from the initial position of a game. This sounds straightforward enough. Take a turn, move a piece, or capture a pawn, and each time you do, you've created a new board state; the aggregation of these states is the game's state-space complexity. However, two parts of this definition make these figures difficult to calculate precisely. The first is the *legal* distinction. It's not difficult to determine the number of ways to configure a game's pieces on and off the board, but determining the subset of those configurations that are legal is substantially more challenging.

For instance, in chess, any configuration with both kings in check, pawns on the first rank, or both of your bishops on the same color square (impossible without a promotion) would need to be removed from the total count. And if that wasn't enough, calculations must account for the initial position of the game as well. If a configuration is unreachable from the initial setup, it needs to be thrown out. You can just imagine the chess-playing mathematicians utterly losing their marbles.

To combat the resulting analysis paralysis, scholars focus on the upper and lower bounds when evaluating state-space complexity. The upper-bound is the measure of all possible piece configurations, ignoring all the “legal” and “initial setup” constraints. In this case, having two kings in check would be included. The lower-bound is harder to explain conceptually, but it is a measure of the minimum state-space complexity, calculated using simplified mathematical models of the game in question.

Even for simple games, state-space complexity numbers are large and awkward to discuss. To make this easier, we will refer to these numbers by their logarithm to base 10. That might make you cringe a little, but don't fret, it's quite elementary. Simply count the number of digits that follow the first digit, and you'll have your answer. This is also known as a number's order of magnitude. For example, one thousand evaluates to 3, because three digits follow the one.
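As a quick sanity check, this shorthand can be sketched in a couple of lines of Python (the helper name is just for illustration):

```python
import math

# Order of magnitude: the number of digits after the first one,
# which equals the floor of the base-10 logarithm.
def order_of_magnitude(n: int) -> int:
    return math.floor(math.log10(n))

print(order_of_magnitude(1000))     # three digits follow the 1 -> 3
print(order_of_magnitude(1234567))  # six digits follow the 1 -> 6
```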

Let's apply this newfound knowledge to real games and get some real numbers. For comparative purposes we are going to focus on a few abstract games, because they are easier to calculate. Our games will be tic-tac-toe, chess, and Go. Everyone should be familiar with tic-tac-toe and chess, but for those of you unfamiliar with old-school Chinese games, Go is played on a 19x19 grid, and players alternate turns placing their *stones* on the grid. The goal of the game is to control a majority of the board, but the details are not important. All you need to know is that Go is oft-considered the most complex abstract game in existence, and has become significant in the field of artificial intelligence, which we will discuss in part three of this series.

Tic-tac-toe has an upper-bound state-space complexity of 4. There are nine spaces on the board, each of which could contain one of three possible values: an X, an O, or neither. Take three (X, O, or blank) to the power of nine (the spaces on the board), and we calculate that tic-tac-toe has 19,683 unique board states. Again, many of these are illegal (such as having nine Xs and zero Os), but we are using the upper bound here. And since we are using log to base ten, 19,683 evaluates down to 4.
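The arithmetic above can be verified directly:

```python
import math

# Each of the 9 cells is X, O, or blank: 3^9 configurations.
states = 3 ** 9
print(states)                          # 19683
print(math.floor(math.log10(states)))  # order of magnitude: 4
```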

Chess, with its larger board and varied pieces, is much more difficult to calculate. If we include illegal positions, we can simply calculate the different ways to arrange the 32 pieces on the board. The first piece can occupy any of the 64 spaces on the board. The second piece can occupy any of the other 63 spaces. The third can occupy any of the remaining 62 spaces, and so on until we place the 32nd piece. However, this method treats identical pieces as unique, such that switching the positions of two pawns creates a new game state, but this is easily accommodated in our calculation, which is animated below.

This equation, originally presented by mathematician Claude Shannon in his seminal 1950 paper "Programming a Computer for Playing Chess", gives the true count of unique arrangements for the thirty-two chess pieces (ignoring promotions), but what if there are only thirty-one pieces? Or twenty? Shannon simply rounded his estimate up to account for such positions, but mathematicians have since homed in on a more accurate count; notably, Shirish Chinchalkar calculates the upper-bound state-space complexity of chess to be nearer to 46 (including promotions). To keep things simple, we will stick with Claude Shannon's value.
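Shannon's count can be sketched in Python: place 32 distinguishable pieces on 64 squares (64 × 63 × … × 33), then divide out the orderings of identical pieces. Per side there are 8 pawns, 2 rooks, 2 knights, and 2 bishops; kings and queens are unique. The exact value lands just under 10^43.

```python
import math

# Ways to place 32 distinguishable pieces on 64 squares: 64 * 63 * ... * 33.
arrangements = math.factorial(64) // math.factorial(64 - 32)

# Divide out orderings of identical pieces: per side, 8 pawns,
# 2 rooks, 2 knights, and 2 bishops (kings and queens are unique).
identical = (math.factorial(8) * math.factorial(2) ** 3) ** 2

states = arrangements // identical
print(f"~10^{math.floor(math.log10(states))}")  # ~10^42, which Shannon rounded up
```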

The last game of our trio is Go. The 19x19 grid has 361 placement spaces, and each space can be empty or contain a black or white stone. Using the same method as tic-tac-toe, we can determine Go's upper bound to be 172, which flirts with incomprehensibility.
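Go's bound follows the same pattern as tic-tac-toe, just with a bigger exponent:

```python
import math

# 361 intersections, each empty, black, or white: 3^361 positions.
print(math.floor(math.log10(3 ** 361)))  # 172
```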

So to recap, tic-tac-toe has a state-space complexity of 4, chess is 43, and Go is 172. Like I said before, a lot of these positions are impossible, but these measures provide a good ballpark for comparison.

The second complexity measure we are going to discuss is called game-tree complexity. Game-tree complexity represents the number of distinct plays, or paths, through a given game. Each path consists of multiple different game states, so these numbers are going to be even larger than state-space complexities. For example, imagine every possible checkmate scenario in chess. Now count *each* possible game path that results in *each* different scenario. That is game-tree complexity.

It's difficult to estimate these values, but a simple equation can establish a reasonable lower bound: raise the branching factor, essentially the average number of legal moves available to the player each turn, to the power of the average game length. For these calculations, game length is measured in plies. A ply is a single turn taken by a single player, which is notably different from a round, the more layman term, as each player typically resolves one ply per round. As such, you can effectively equate the total number of plies to the number of rounds multiplied by the number of players. Each of our examples has two plies per round, since there are only two players who alternate taking turns.

As one would expect, a game with more choices is more complex than a game with fewer choices, but the inclusion of game length is important to note as well. The branching factors for tic-tac-toe, chess, and Go are 4, 35, and 250, with average game lengths of 9, 80, and 150 plies, respectively. After throwing these figures into our equation, we calculate tic-tac-toe to have a game-tree complexity of 5, while chess is 123, and Go is an incomprehensible 360.
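The lower-bound equation (branching factor raised to the number of plies) can be run for all three games. A sketch, assuming the branching factors of 4, 35, and 250 with game lengths of 9, 80, and 150 plies (the chess literature typically uses 80 plies); the fractional exponents it prints are conventionally rounded to 5, 123, and 360:

```python
import math

# Lower-bound game-tree complexity: branching ** plies,
# reported here as a base-10 exponent to one decimal place.
games = {"tic-tac-toe": (4, 9), "chess": (35, 80), "Go": (250, 150)}
for name, (branching, plies) in games.items():
    print(f"{name}: 10^{plies * math.log10(branching):.1f}")
```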

And when I say incomprehensible, I genuinely mean it. This number is so large it defies understanding. But, if you don’t believe me, let me grant you some perspective. Below are a few figures to wrap your head around.

Even at the absolute limits of our universe we cannot near the enormity of 360. The observable universe is too young, too small, and too empty to provide analogous comparisons. Compounding the issue, geometric analogies are hard for us to truly understand. We know the universe is big, but our brains are not suited to properly imagine its true scale. But we experience the passage of time every day, so in a final, and more human, analogy, let's try to understand how long 10^360 seconds is.
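For a first taste of the scale, here is a rough comparison against the age of the observable universe, assuming the commonly quoted figure of about 13.8 billion years:

```python
import math

# Age of the observable universe (~13.8 billion years) in seconds.
universe_age_s = 13.8e9 * 365.25 * 24 * 3600
print(f"{universe_age_s:.2e}")  # ~4.36e17 seconds

# How many universe-lifetimes fit into 10^360 seconds (order of magnitude)?
print(math.floor(360 - math.log10(universe_age_s)))  # ~10^342 lifetimes
```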

The flow chart below is a list of steps, each of which would take eons, but when done in order would eventually require one cennovemdecillion seconds (10^360) to complete. Set your stopwatch and begin.

Only after repeatedly letting our hair reach the sun, walking every road on Earth, defying odds in poker, the lottery, March Madness, and dice rolling, birthing quintuplets, exhausting the world's libraries, creating galaxy spanning towers of paper, filling the Grand Canyon, draining the oceans, outshining the Sun, depleting the world's economy, painting galaxies, and atomizing the universe, do we finally hear the stop watch expire.

This has all just been a verbose way of saying games are complex (and that numbers are big), but we will revisit the significance of these figures in part three, when we discuss artificial intelligence.

Next up in this series, we will apply these same methods to calculate the state-space and game-tree complexities of modern board gaming classics: Carcassonne, Agricola, and Through the Ages.

Part Two: Modern Board Game Analysis

Part Three: Artificial Intelligence

In the meantime, perhaps we can finally stop complaining about lack of replayability.

References:

Allis, Louis Victor. “Searching for Solutions in Games and Artificial Intelligence.” *University of Limburg*, 1994.

Chinchalkar, S. “An Upper Bound for the Number of Reachable Positions.” *ICGA Journal*, vol. 19, no. 3, 1996, pp. 181–183, doi:10.3233/icg-1996-19305.

Shannon, Claude E. “Programming a Computer for Playing Chess.” *Computer Chess Compendium*, 1988, pp. 2–13, doi:10.1007/978-1-4757-1968-0_1.