This notebook serves as supporting material for topics covered in Chapter 5 - Adversarial Search in the book Artificial Intelligence: A Modern Approach. This notebook uses implementations from the games.py module. Let's import the required classes, methods, global variables, etc. from the games module.
In [1]:
from games import (GameState, Game, Fig52Game, TicTacToe, query_player, random_player,
alphabeta_player, play_game, minimax_decision, alphabeta_full_search,
alphabeta_search, Canvas_TicTacToe)
GameState namedtuple
GameState is a namedtuple which represents the current state of a game, be it Tic-Tac-Toe or any other game.
Game class
Let's have a look at the class Game in our module. We see that it has methods, namely actions, result, utility, terminal_test, to_move and display.
We see that these methods have not actually been implemented. This class is really just a template; we are supposed to create the class for our game, TicTacToe, by inheriting from this Game class and implementing all the methods mentioned in Game. Do not close the popup, so that you can follow along with the description of the code below.
In [2]:
%psource Game
Now let's get into the details of all the methods in our Game class. You have to implement these methods when you create new classes that represent your game.
actions(self, state): Given a game state, this method generates all the legal actions possible from this state, as a list or a generator. Returning a generator rather than a list has the advantage that it saves space and you can still operate on it as a list.
result(self, state, move): Given a game state and a move, this method returns the game state that you get by making that move on this game state.
utility(self, state, player): Given a terminal game state and a player, this method returns the utility for that player in the given terminal game state. While implementing this method, assume that the game state is a terminal game state. The logic in this module is such that this method will be called only on terminal game states.
terminal_test(self, state): Given a game state, this method should return True if this game state is a terminal state, and False otherwise.
to_move(self, state): Given a game state, this method returns the player who is to play next. This information is typically stored in the game state, so all this method does is extract this information and return it.
display(self, state): This method prints/displays the current state of the game.
A minimal example of implementing this interface is sketched right below.
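To make the interface concrete, here is a minimal sketch of a toy game built on this template. The game itself (players alternately add 1 or 2 to a running total, and MAX wants the total to land exactly on 3) is made up purely for illustration; only the method names and the GameState fields come from the module above.

from collections import namedtuple
from games import Game, alphabeta_player

GameState = namedtuple('GameState', 'to_move, utility, board, moves')

class CountToThree(Game):
    """Toy game for illustration: players alternately add 1 or 2 to a total.
    The game ends once the total reaches 3 or more; MAX gets +1 if the total
    is exactly 3, and -1 otherwise."""

    def __init__(self):
        self.initial = GameState(to_move='MAX', utility=0, board=0, moves=[1, 2])

    def actions(self, state):
        return state.moves  # legal increments from this state

    def result(self, state, move):
        total = state.board + move
        next_player = 'MIN' if state.to_move == 'MAX' else 'MAX'
        utility = 0 if total < 3 else (1 if total == 3 else -1)
        return GameState(to_move=next_player, utility=utility, board=total,
                         moves=[] if total >= 3 else [1, 2])

    def utility(self, state, player):
        # utility is stored from MAX's point of view; negate it for MIN
        return state.utility if player == 'MAX' else -state.utility

    def terminal_test(self, state):
        return state.board >= 3

    def display(self, state):
        print('running total:', state.board, '| to move:', state.to_move)

toy = CountToThree()
print(alphabeta_player(toy, toy.initial))   # MAX's best first move (should be 1)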
In [3]:
%psource TicTacToe
The class TicTacToe is inherited from the class Game. As mentioned earlier, you really want to do this. Catching bugs and errors becomes a whole lot easier.
Additional methods in TicTacToe:
__init__(self, h=3, v=3, k=3): When you create a class inherited from the Game class (class TicTacToe in our case), you'll have to create an object of this inherited class to initialize the game. This initialization might require some additional information, which would be passed to __init__ as variables. For our TicTacToe game, this additional information is the number of rows h, the number of columns v, and how many consecutive X's or O's are needed in a row, column or diagonal for a win, k. Also, the initial game state has to be defined here in __init__.
compute_utility(self, board, move, player): A method to calculate the utility of a TicTacToe game state. If 'X' wins with this move, this method returns 1; if 'O' wins it returns -1; otherwise it returns 0.
k_in_row(self, board, move, player, delta_x_y): This method returns True if the latest move forms a line of k marks on the TicTacToe board, and False otherwise. A sketch of the idea is given below.
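The parameter name delta_x_y hints at how k_in_row can work: walk the board from the square of the latest move in one direction (given by the (delta_x, delta_y) step), then in the opposite direction, counting consecutive marks of the same player. The following is only a sketch of that idea, not necessarily the module's exact code; the parameter k is passed explicitly here instead of being read from the game object.

def k_in_row_sketch(board, move, player, delta_x_y, k=3):
    """Return True if `player` has k marks in a line through `move`.
    `board` is a dict mapping (x, y) to 'X' or 'O' and is assumed to
    already contain the latest move; `delta_x_y` is the step direction,
    e.g. (0, 1) for a row or (1, 1) for a diagonal."""
    (delta_x, delta_y) = delta_x_y
    x, y = move
    n = 0  # number of consecutive marks belonging to `player`
    while board.get((x, y)) == player:      # walk forward from the move
        n += 1
        x, y = x + delta_x, y + delta_y
    x, y = move
    while board.get((x, y)) == player:      # walk backward from the move
        n += 1
        x, y = x - delta_x, y - delta_y
    n -= 1  # the square of the move itself was counted twice
    return n >= k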
Now, before we start implementing our TicTacToe game, we need to decide how we will be representing our game state. Typically, a game state will give you all the current information about the game at any point in time. When you are given a game state, you should be able to tell whose turn it is next, what the game will look like on a real-life board (if it has one), etc. A game state need not include the history of the game. If you can play the game further given a game state, your game state representation is acceptable. While we might like to include all kinds of information in our game state, we wouldn't want to put too much information into it. Modifying this game state to generate a new one would then be a real pain.
Now, as for our TicTacToe game state, would storing only the positions of all the X's and O's be sufficient to represent all the game information at that point in time? Well, does it tell us whose turn it is next? Looking at the X's and O's on the board and counting them should tell us that. But that would mean extra computing. To avoid this, we will also store whose move it is next in the game state.
Think about what we've done here. We have reduced extra computation by storing additional information in a game state. Now, this information might not be absolutely essential to tell us about the state of the game, but it does save us additional computation time. We'll do more of this later on.
The TicTacToe game defines its game state as:
GameState = namedtuple('GameState', 'to_move, utility, board, moves')
The game state is called, quite appropriately, GameState, and it has 4 variables, namely, to_move, utility, board and moves.
I'll describe these variables in some more detail:
to_move: It represents whose turn it is to move next. This will be a string of a single character, either 'X' or 'O'.
utility: It stores the utility of the game state. Storing this utility is a good idea because, when you do a Minimax Search or an Alphabeta Search, you generate many recursive calls, which travel all the way down to the terminal states. When these recursive calls go back up to the original caller, we have calculated utilities for many game states. We store these utilities in their respective GameStates to avoid calculating them all over again.
board: A dict that stores all the positions of X's and O's on the board.
moves: It stores the list of legal moves possible from the current position. Note here that storing the moves as a list, as is done here, increases the space complexity of Minimax Search from O(m) to O(bm). Refer to Sec. 5.2.1 of the book.
Now that we have decided how our game state will be represented, it's time to decide how our move will be represented. It should be easy to use this move to modify the current game state to generate a new one.
For our TicTacToe game, we'll just represent a move by a tuple, where the first and the second elements of the tuple represent the row and column, respectively, where the next move is to be made. Whether to make an 'X' or an 'O' will be decided by the to_move field in the GameState namedtuple.
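For example, applying two such move tuples to the initial state with result might look like the following small sketch (it creates its own TicTacToe instance; the ttt instance used for the same purpose is created further below):

game = TicTacToe()
state1 = game.result(game.initial, (1, 1))  # 'X' moves first and plays row 1, column 1
state2 = game.result(state1, (2, 2))        # now it is O's turn; 'O' plays the centre
game.display(state2)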
So, we have finished the implementation of the TicTacToe class. All this class does is define the rules of the game. We need more to create an AI that can actually play the game. This is where random_player and alphabeta_player come in.
The query_player function allows you, a human opponent, to play the game. This function requires a display method to be implemented in your game class, so that successive game states can be displayed on the terminal, making it easier for you to visualize the game and play accordingly.
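Conceptually, query_player does something like the following rough sketch (not necessarily the module's exact code): display the state, list the legal moves, and read your chosen move from standard input.

def query_player_sketch(game, state):
    print('current state:')
    game.display(state)
    print('available moves: {}'.format(game.actions(state)))
    move_string = input('Your move? ')
    return eval(move_string)  # e.g. typing (2, 2) yields the move tuple (2, 2)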
The random_player is a function that plays random moves in the game. That's it. There isn't much more to this guy.
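In essence, it is little more than this (a sketch under the same assumption, that every state passed in has at least one legal move):

import random

def random_player_sketch(game, state):
    return random.choice(game.actions(state))  # pick any legal move uniformly at random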
The alphabeta_player, on the other hand, calls the alphabeta_full_search function, which returns the best move in the current game state. Thus, the alphabeta_player always plays the best move given a game state, assuming that the game tree is small enough to search entirely.
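In other words, the player is essentially just a thin wrapper around the search, roughly like this sketch (using the alphabeta_full_search function imported at the top of the notebook):

def alphabeta_player_sketch(game, state):
    return alphabeta_full_search(state, game)  # exhaustive alpha-beta search for the best move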
The play_game function is the one that will actually be used to play the game. You pass to it, as arguments, an instance of the game you want to play and the players you want in this game. Use it to play AI vs AI, AI vs human, or even human vs human matches!
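Roughly, play_game alternates between the players you pass in, applies each returned move, and reports the first player's utility once a terminal state is reached. The sketch below follows that description; it is an illustration, not necessarily the module's exact code.

def play_game_sketch(game, *players):
    state = game.initial
    while True:
        for player in players:
            move = player(game, state)        # ask the current player for a move
            state = game.result(state, move)  # apply it
            if game.terminal_test(state):
                # utility is reported from the first player's point of view
                return game.utility(state, game.to_move(game.initial))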
Let's start by experimenting with Fig52Game first. For that, we'll create an instance of the subclass Fig52Game, inherited from the class Game:
In [4]:
game52 = Fig52Game()
First we try out our random_player(game, state). Given a game state, it will give us a random move every time:
In [5]:
print(random_player(game52, 'A'))
print(random_player(game52, 'A'))
The alphabeta_player(game, state) will always give us the best move possible:
In [6]:
print(alphabeta_player(game52, 'A'))
print(alphabeta_player(game52, 'B'))
print(alphabeta_player(game52, 'C'))
What the alphabeta_player does is simply call the function alphabeta_full_search; they are essentially the same. In the module, both alphabeta_full_search and minimax_decision have been implemented. They do the same job and return the same thing, which is the best move in the current state. It's just that alphabeta_full_search is more efficient with respect to time, because it prunes the search tree and hence explores fewer states.
In [7]:
minimax_decision('A', game52)
Out[7]:
In [8]:
alphabeta_full_search('A', game52)
Out[8]:
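Both calls return the same move. To get a feel for the pruning, one option is to wrap the game so that every call to result is counted and then compare the two searches. The CountingGame wrapper below is made up for this notebook and is not part of the games module; it simply delegates to the real game.

class CountingGame:
    """Delegates to a real game but counts how many successor states are generated."""
    def __init__(self, game):
        self.game = game
        self.count = 0
    def actions(self, state):
        return self.game.actions(state)
    def result(self, state, move):
        self.count += 1
        return self.game.result(state, move)
    def utility(self, state, player):
        return self.game.utility(state, player)
    def terminal_test(self, state):
        return self.game.terminal_test(state)
    def to_move(self, state):
        return self.game.to_move(state)

mm, ab = CountingGame(game52), CountingGame(game52)
minimax_decision('A', mm)
alphabeta_full_search('A', ab)
print('states generated by minimax_decision:', mm.count)
print('states generated by alphabeta_full_search:', ab.count)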
Demonstrating the play_game function on game52:
In [9]:
play_game(game52, alphabeta_player, alphabeta_player)
Out[9]:
In [10]:
play_game(game52, alphabeta_player, random_player)
Out[10]:
In [11]:
#play_game(game52, query_player, alphabeta_player)
#play_game(game52, alphabeta_player, query_player)
Note that here, if you are the first player, the alphabeta_player plays as MIN, and if you are the second player, the alphabeta_player plays as MAX. This happens because of the way the game is defined in the class Fig52Game. Having a look at the code of this class should make it clear.
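For instance, printing whose turn the game thinks it is at each internal state makes this assignment explicit (a quick check; it assumes Fig52Game labels its two players 'MAX' and 'MIN'):

for state in ['A', 'B', 'C', 'D']:
    print(state, '->', game52.to_move(state))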
In [12]:
ttt = TicTacToe()
We can print a state using the display method:
In [13]:
ttt.display(ttt.initial)
Hmm, so that's the initial state of the game; no X's and no O's.
Let us create a new game state ourselves to experiment with:
In [14]:
my_state = GameState(
    to_move='X',
    utility=0,
    board={(1, 1): 'X', (1, 2): 'O', (1, 3): 'X',
           (2, 1): 'O',              (2, 3): 'O',
           (3, 1): 'X'},
    moves=[(2, 2), (3, 2), (3, 3)]
)
So, what does this game state look like?
In [15]:
ttt.display(my_state)
The random_player will behave as it is supposed to, i.e. pseudo-randomly:
In [16]:
random_player(ttt, my_state)
Out[16]:
In [17]:
random_player(ttt, my_state)
Out[17]:
But the alphabeta_player will always give the best move, as expected:
In [18]:
alphabeta_player(ttt, my_state)
Out[18]:
Now let's make two players play against each other. We use the play_game function for this. The play_game function makes the players play the match against each other and returns the utility, for the first player, of the terminal state reached when the game ends. Hence, for our TicTacToe game, an output of +1 means the first player wins, -1 means the second player wins, and 0 means the match ends in a draw.
In [19]:
print(play_game(ttt, random_player, alphabeta_player))
The output is -1, hence random_player loses, which means alphabeta_player wins.
Since an alphabeta_player plays perfectly, a match between two alphabeta_players should always end in a draw. Let's see if this happens:
In [20]:
for _ in range(10):
    print(play_game(ttt, alphabeta_player, alphabeta_player))
A random_player should never win against an alphabeta_player. Let's test that.
In [21]:
for _ in range(10):
    print(play_game(ttt, random_player, alphabeta_player))
We can also use Canvas_TicTacToe to watch or play Tic-Tac-Toe games on an interactive canvas in the notebook. First, let's watch a random_player play against an alphabeta_player:
In [22]:
bot_play = Canvas_TicTacToe('bot_play', 'random', 'alphabeta')
Now, let's play a game ourselves against a random_player:
In [23]:
rand_play = Canvas_TicTacToe('rand_play', 'human', 'random')
Yay! We win. But we cannot win against an alphabeta_player, however hard we try.
In [24]:
ab_play = Canvas_TicTacToe('ab_play', 'human', 'alphabeta')