Classical Search

This notebook serves as supporting material for the chapter Solving Problems by Searching. It illustrates the use of the code repository and demonstrates how the code can be extended to solve various search-related problems. The discussion of problem solving begins with a precise implementation of problems and their solutions. Then we move on to various informed and uninformed search strategies for solving problems.


In [2]:
%classpath add jar ../out/artifacts/aima_core_jar/aima-core.jar


Problem Solving Agents

The process of looking for a sequence of actions that reaches the goal is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out. This is called the execution phase. Thus, we have a simple “formulate, search, execute” design for the agent, as shown in Figure 3.1 of the textbook. After formulating a goal and a problem to solve, the agent calls a search procedure to solve it. It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do—typically, the first action of the sequence—and then removing that step from the sequence. Once the solution has been executed, the agent will formulate a new goal.

Let's have a look at the pseudocode of a simple problem-solving agent and then see its Java implementation.


In [3]:
%%python
from notebookUtils import *
pseudocode('Simple Problem Solving Agent')


Out[3]:

AIMA3e

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
persistent: seq, an action sequence, initially empty
      state, some description of the current world state
      goal, a goal, initially null
      problem, a problem formulation

state ← UPDATE-STATE(state, percept)
if seq is empty then
   goal ← FORMULATE-GOAL(state)
   problem ← FORMULATE-PROBLEM(state, goal)
   seq ← SEARCH(problem)
   if seq = failure then return a null action
action ← FIRST(seq)
seq ← REST(seq)
return action


Figure ?? A simple problem-solving agent. It first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time. When this is complete, it formulates another goal and starts over.

The implementation of the above pseudocode can be viewed here. This agent is implemented as an abstract agent which can be extended to construct other agents.
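To make the control loop concrete, here is a minimal sketch of the "formulate, search, execute" cycle in plain Java. This is an illustration only, not the repository's agent class; the type parameters and the method names (updateState, formulateGoal, formulateProblem, search) are placeholders mirroring the pseudocode above.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public abstract class SketchProblemSolvingAgent<P, S, G, Q, A> {

    private final Deque<A> seq = new ArrayDeque<>(); // the action sequence, initially empty
    private S state;                                 // description of the current world state

    // One call per percept, mirroring SIMPLE-PROBLEM-SOLVING-AGENT.
    public A execute(P percept) {
        state = updateState(state, percept);
        if (seq.isEmpty()) {
            G goal = formulateGoal(state);
            Q problem = formulateProblem(state, goal);
            seq.addAll(search(problem)); // an empty list stands in for failure
            if (seq.isEmpty())
                return null;             // the "null action"
        }
        return seq.poll();               // FIRST(seq); poll also performs REST(seq)
    }

    protected abstract S updateState(S state, P percept);
    protected abstract G formulateGoal(S state);
    protected abstract Q formulateProblem(S state, G goal);
    protected abstract List<A> search(Q problem);
}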

Well-defined problems and solutions

We will first formally define a problem. Then we will see how the code from the repository can be used to formulate new problems. Finally, we will look at various toy problems that are already present in the code repository.

As per the textbook, a problem can be defined formally by five components: the initial state, the applicable actions, the transition model, the goal test, and the path cost function. This five-component structure is implemented as an interface named Problem.java in the repository. Let's have a look at the implementation:

public interface Problem<S, A> extends OnlineSearchProblem<S, A> {

    /**
     * Returns the initial state of the agent.
     */
    S getInitialState();

    /**
     * Returns the description of the possible actions available to the agent.
     */
    List<A> getActions(S state);

    /**
     * Returns the description of what each action does.
     */
    S getResult(S state, A action);

    /**
     * Determines whether a given state is a goal state.
     */
    boolean testGoal(S state);

    /**
     * Returns the <b>step cost</b> of taking action <code>action</code> in state <code>state</code> to reach state
     * <code>stateDelta</code> denoted by c(s, a, s').
     */
    double getStepCosts(S state, A action, S stateDelta);

    /**
     * Tests whether a node represents an acceptable solution. The default implementation
     * delegates the check to the goal test. Other implementations could make use of the additional
     * information given by the node (e.g. the sequence of actions leading to the node). A
     * solution tester implementation could for example always return false and internally collect
     * the paths of all nodes whose state passes the goal test. Search implementations should always
     * access the goal test via this method to support solution acceptance testing.
     */
    default boolean testSolution(Node<S, A> node) {
        return testGoal(node.getState());
    }
}

The states and actions are represented by the generic variables S and A respectively. Clearly, the methods represent the five components of a particular problem as follows:

  • initial state ← getInitialState()
  • applicable actions ← getActions(S state)
  • the transition model ← getResult(S state, A action)
  • the goal test ← testGoal(S state)
  • path cost function ← getStepCosts(S state, A action, S stateDelta)

Example Problems

A toy problem is intended to illustrate or exercise various problem-solving methods. Let's extend the Problem interface to implement a toy problem: the 8-puzzle, which consists of a 3x3 board with eight numbered tiles and a blank space. It has the following five components:

  • States: A state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
  • Initial state: Any state can be designated as the initial state. Note that any given goal can be reached from exactly half of the possible initial states. (proved in Exercise 3.4)
  • Actions: The simplest formulation defines the actions as movements of the blank space Left, Right, Up, or Down. Different subsets of these are possible depending on where the blank is.
  • Transition model: Given a state and action, this returns the resulting state.
  • Path cost: Each step costs 1, so the path cost is the number of steps in the path.

Let's look at the implementation:

First we implement the states and actions applicable to the problem. The actions can be implemented as an enum whereas the states can be represented as an array of ints.


In [4]:
package aima.notebooks.classicalsearch;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import aima.core.search.framework.problem.Problem;

public class EightPuzzleProblem implements Problem<int[], EightPuzzleProblem.Action> {

    // This array represents the state
    int[] initialState = new int[9];
    
    /**
    * This enum represents the Action datatype
    */
    public enum Action {
        LEFT, RIGHT, UP, DOWN
    }
    
    // A constructor for the problem
    public EightPuzzleProblem(int[] initialState){
        this.initialState = initialState;
    }

    // Component One : The initial state.
    @Override
    public int[] getInitialState() {
        return initialState;
    }

    // Component Two : Applicable Actions
    @Override
    public List<Action> getActions(int[] state) {
        List<Action> actions = new ArrayList<>();
        if (this.canMoveGap(state, Action.UP))
            actions.add(Action.UP);
        if (this.canMoveGap(state, Action.DOWN))
            actions.add(Action.DOWN);
        if (this.canMoveGap(state, Action.LEFT))
            actions.add(Action.LEFT);
        if (this.canMoveGap(state, Action.RIGHT))
            actions.add(Action.RIGHT);
        return actions;
    }

    // Component Three : Transition Model
    @Override
    public int[] getResult(int[] state, Action action) {
        int[] result = state.clone();

        if (Action.UP.equals(action) && canMoveGap(state, Action.UP))
            moveGapUp(result);
        else if (Action.DOWN.equals(action) && canMoveGap(state, Action.DOWN))
            moveGapDown(result);
        else if (Action.LEFT.equals(action) && canMoveGap(state, Action.LEFT))
            moveGapLeft(result);
        else if (Action.RIGHT.equals(action) && canMoveGap(state, Action.RIGHT))
            moveGapRight(result);
        return result;
    }
    
    // Component Four : Goal Test
    @Override
    public boolean testGoal(int[] state) {
        return Arrays.equals(state, new int[]{0, 1, 2, 3, 4, 5, 6, 7, 8});
    }

    // Component Five : Path cost function
    @Override
    public double getStepCosts(int[] state, Action action, int[] stateDelta) {
        return 1.0;
    }

    private void moveGapRight(int[] result) {
        int gapPos = getGapPosition(result);
        int x = gapPos / 3;
        int y = gapPos % 3;
        if (!(y == 2)) {
            int valueOnRight = result[x * 3 + y + 1];
            setValue(result, x, y, valueOnRight);
            setValue(result, x, y + 1, 0);
        }
    }

    // All the methods below are just helper methods which aid the above necessary methods.
    
    // To move the gap to the left.
    private void moveGapLeft(int[] result) {
        int gapPos = getGapPosition(result);
        int x = gapPos / 3;
        int y = gapPos % 3;
        if (!(y == 0)) {
            int valueOnLeft = result[x * 3 + (y - 1)];
            setValue(result, x, y, valueOnLeft);
            setValue(result, x, y - 1, 0);
        }
    }

    // To move the gap to the cell below.
    private void moveGapDown(int[] result) {
        int gapPos = getGapPosition(result);
        int x = gapPos / 3;
        int y = gapPos % 3;
        if (!(x == 2)) {
            int valueOnBottom = result[(x + 1) * 3 + y];
            setValue(result, x, y, valueOnBottom);
            setValue(result, x + 1, y, 0);
        }
    }

    // To get the current location of the gap
    private int getGapPosition(int[] state) {
        return getPositionOf(state, 0);
    }

    // To get the position of any particular number.
    private int getPositionOf(int[] state, int val) {
        for (int i = 0; i < 9; i++)
            if (state[i] == val)
                return i;
        return -1;
    }

    // To check if we can move the gap to the position specified by where
    public boolean canMoveGap(int[] state, Action where) {
        boolean retVal = true;
        int absPos = getPositionOf(state, 0);
        if (where.equals(Action.LEFT))
            retVal = (absPos % 3 != 0);
        else if (where.equals(Action.RIGHT))
            retVal = (absPos % 3 != 2);
        else if (where.equals(Action.UP))
            retVal = ((absPos / 3) != 0);
        else if (where.equals(Action.DOWN))
            retVal = ((absPos / 3) != 2);
        return retVal;
    }

    // To move the gap to the cell above.
    public void moveGapUp(int[] result) {
        int gapPos = getGapPosition(result);
        int x = gapPos / 3;
        int y = gapPos % 3;
        if (!(x == 0)) {
            int valueOnTop = result[(x - 1) * 3 + y];
            setValue(result, x, y, valueOnTop);
            setValue(result, x - 1, y, 0);
        }
    }

    // To set the value of a particular cell.
    private void setValue(int[] result, int x, int y, int value) {
        int absPos = x * 3 + y;
        result[absPos] = value;
    }
}


Out[4]:
aima.notebooks.classicalsearch.EightPuzzleProblem

So, in this way we can implement a Problem. Now let us see our problem class in action.


In [5]:
import aima.notebooks.classicalsearch.EightPuzzleProblem;
import java.util.*;

int [] initialState = new int[] { 5, 4, 0, 6, 1, 8, 7, 3, 2 };

EightPuzzleProblem problem = new EightPuzzleProblem(initialState);

System.out.println("Initial State = " + Arrays.toString(problem.getInitialState()));
System.out.println("Available Actions = " + problem.getActions(initialState).toString());
System.out.println("Resulting State = " + Arrays.toString(problem.getResult(initialState,problem.getActions(initialState).get(0))));
System.out.println("isGoal\t"+ problem.testGoal(initialState));


Initial State = [5, 4, 0, 6, 1, 8, 7, 3, 2]
Available Actions = [DOWN, LEFT]
Resulting State = [5, 4, 8, 6, 1, 0, 7, 3, 2]
isGoal	false
Out[5]:
null

We have seen how we can implement the existing Problem interface to create and define our own custom problems. The flexibility to define custom problems is necessary for experimentation. However, the code repository already includes robust implementations of a variety of search problems and their environments. From now on we will use these existing implementations, as they are more robust and have been thoroughly tested for errors. Now let's have a look at some common search problems and how they can be used directly from the code repository.

The GeneralProblem class can be used to create these existing problems. The problem parameters are passed as constructor arguments.

The Eight Puzzle Problem

The eight puzzle problem can be constructed from the GeneralProblem class as follows:


In [6]:
import aima.core.environment.eightpuzzle.EightPuzzleBoard;
import aima.core.environment.eightpuzzle.EightPuzzleFunctions;
import aima.core.agent.Action;
import aima.core.search.framework.problem.GoalTest;
import aima.core.search.framework.problem.GeneralProblem;
import aima.core.search.framework.problem.Problem;
import java.util.*;


EightPuzzleBoard board = new EightPuzzleBoard(new int[] { 7, 1, 8, 0, 4, 6, 2, 3, 5 });
Problem<EightPuzzleBoard, Action> problem = new GeneralProblem<>(board, EightPuzzleFunctions::getActions,
                                                                 EightPuzzleFunctions::getResult,
                                                                 GoalTest.isEqual(EightPuzzleFunctions.GOAL_STATE));
System.out.println("Initial State = " + Arrays.toString(problem.getInitialState().getState()));
System.out.println("Available Actions = " + problem.getActions(problem.getInitialState()).toString());
System.out.println("Resulting State = " + Arrays.toString(problem.getResult(problem.getInitialState(),
                                                                            problem.getActions(problem.getInitialState()).get(0))
                                                          .getState()));
System.out.println("isGoal\t"+ problem.testGoal(problem.getInitialState()));


Initial State = [7, 1, 8, 0, 4, 6, 2, 3, 5]
Available Actions = [Action[name=Up], Action[name=Down], Action[name=Right]]
Resulting State = [0, 1, 8, 7, 4, 6, 2, 3, 5]
isGoal	false
Out[6]:
null

The NQueens Problem

The goal of the n-queens problem is to place $n$ queens on an $n \times n$ chessboard such that no queen attacks any other. The n-queens problem can be formulated as follows:


In [7]:
import aima.core.agent.Action;
import aima.core.environment.nqueens.*;
import aima.core.search.framework.problem.*;
import java.util.*;

Problem<NQueensBoard, QueenAction> problem = new GeneralProblem<>(new NQueensBoard(3),
                                                                  NQueensFunctions::getIFActions,
                                                                  NQueensFunctions::getResult, NQueensFunctions::testGoal);

System.out.println("Initial State \n" + problem.getInitialState().toString());
System.out.println("Available Actions \n" + problem.getActions(problem.getInitialState()).toString());
System.out.println("\n\nResulting State \n" + problem.getResult(problem.getInitialState(),
                                                                   problem.getActions(problem.getInitialState()).get(0)).toString());
System.out.println("isGoal\t"+ problem.testGoal(problem.getInitialState()));


Initial State 
---
---
---

Available Actions 
[Action[name=placeQueenAt, location=(0, 0)], Action[name=placeQueenAt, location=(0, 1)], Action[name=placeQueenAt, location=(0, 2)]]


Resulting State 
Q--
---
---

isGoal	false
Out[7]:
null

Route finding problem (Romania)

Route-finding algorithms are used in a variety of applications. Some, such as Web sites and in-car systems that provide driving directions, are relatively straightforward extensions of the Romania example. Others, such as routing video streams in computer networks, military operations planning, and airline travel-planning systems, involve much more complex specifications. Now let us formulate the map of Romania Problem:


In [8]:
import aima.core.environment.map.*;
import aima.core.search.framework.problem.*;

Map romaniaMap = new SimplifiedRoadMapOfPartOfRomania();
Problem<String, MoveToAction> problem = new GeneralProblem<>(
				SimplifiedRoadMapOfPartOfRomania.ARAD,
				MapFunctions.createActionsFunction(romaniaMap),
				MapFunctions.createResultFunction(),
				GoalTest.isEqual(SimplifiedRoadMapOfPartOfRomania.BUCHAREST),
				MapFunctions.createDistanceStepCostFunction(romaniaMap));

System.out.println("Initial State  " + problem.getInitialState().toString());
System.out.println("\n\nAvailable Actions \n " + problem.getActions(problem.getInitialState()).toString());
System.out.println("\n\nResulting State  " + problem.getResult(problem.getInitialState(),
                                                                   problem.getActions(problem.getInitialState()).get(0)).toString());
System.out.println("\n\nisGoal\t"+ problem.testGoal(problem.getInitialState()));


Initial State  Arad


Available Actions 
 [Action[name=moveTo, location=Sibiu], Action[name=moveTo, location=Timisoara], Action[name=moveTo, location=Zerind]]


Resulting State  Sibiu


isGoal	false
Out[8]:
null

Search Algorithms

Search algorithms work by considering various possible action sequences. The possible action sequences starting at the initial state form a search tree with the initial state at the root; the nodes correspond to states in the state space of the problem. We expand the current state by applying each legal action to it, thereby generating a new set of $b$ states, and add $b$ branches from the parent node leading to $b$ new child nodes. Each of these $b$ child nodes is a leaf node, that is, a node with no children in the tree. The set of all leaf nodes available for expansion at any given point is called the frontier. The process of expanding nodes on the frontier continues until either a goal state is reached or there are no more states to expand.

Node expansion in a tree can be visualized here.

Let's have a look at the pseudo code of general Tree Search algorithm.


In [9]:
%%python
from notebookUtils import *
pseudocode('Tree Search and Graph Search')


Out[9]:

AIMA4e

function GENERIC-SEARCH(problem) returns a solution, or failure
frontier ← a queue initially containing one path, for the problem's initial state
reached ← a table of {state: the best path that reached state}; initially empty
solution ← failure
while frontier is not empty and solution can possibly be improved do
   parent ← some node that we choose to remove from frontier
   for child in successors(parent) do
     s ← child.state
     if s is not in reached or child is a cheaper path than reached[s] then
       reached[s] ← child
       add child to frontier
       if child is a goal and is cheaper than solution then
         solution = child
return solution


Figure ?? In the GENERIC-SEARCH algorithm, we keep track of the best solution found so far, as well as a set of states that we have already reached, and a frontier of paths from which we will choose the next path to expand. In any specific search algorithm, we specify (1) the criteria for ordering the paths in the frontier, and (2) the procedure for determining when it is no longer possible to improve on a solution.

TREE-SEARCH and GRAPH-SEARCH

AIMA3e

function TREE-SEARCH(problem) returns a solution, or failure
 initialize the frontier using the initial state of problem
loop do
   if the frontier is empty then return failure
   choose a leaf node and remove it from the frontier
   if the node contains a goal state then return the corresponding solution
   expand the chosen node, adding the resulting nodes to the frontier


function GRAPH-SEARCH(problem) returns a solution, or failure
 initialize the frontier using the initial state of problem
initialize the explored set to be empty
loop do
   if the frontier is empty then return failure
   choose a leaf node and remove it from the frontier
   if the node contains a goal state then return the corresponding solution
   add the node to the explored set
   expand the chosen node, adding the resulting nodes to the frontier
    only if not in the frontier or explored set


Figure ?? An informal description of the general tree-search and graph-search algorithms. The parts of GRAPH-SEARCH marked in bold italic are the additions needed to handle repeated states.

Often, a newly generated state matches a previously generated state (a repeated state), which leads to redundant paths. We avoid exploring redundant paths by augmenting the TREE-SEARCH algorithm with a data structure called the explored set, which remembers every expanded node. Clearly, the search tree constructed by the GRAPH-SEARCH algorithm contains at most one copy of each state.
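As a concrete illustration of the explored set, here is a minimal breadth-first graph search written against the Problem interface shown earlier. It is a sketch, not the repository's GraphSearch class: it returns a goal state rather than a solution path, and it assumes the state type S implements equals/hashCode properly (true for EightPuzzleBoard or String states, but not for raw int[] arrays).

import aima.core.search.framework.problem.Problem;
import java.util.*;

public class GraphSearchSketch {

    // Returns a goal state, or null for failure.
    public static <S, A> S search(Problem<S, A> problem) {
        Queue<S> frontier = new ArrayDeque<>();
        Set<S> explored = new HashSet<>();       // remembers every expanded state
        frontier.add(problem.getInitialState());
        while (!frontier.isEmpty()) {
            S state = frontier.remove();
            if (problem.testGoal(state))
                return state;
            explored.add(state);
            for (A action : problem.getActions(state)) {
                S child = problem.getResult(state, action);
                // expand only states that are neither explored nor already on the frontier
                if (!explored.contains(child) && !frontier.contains(child))
                    frontier.add(child);
            }
        }
        return null;
    }
}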

The perspective of a Search Agent can be visualized here.

Infrastructure for search algorithms

As per the textbook, for each node $n$ of the tree we have a structure that contains four components. This four-component structure is implemented in a class named Node.java in the repository:

  • private S state;
    
    n.STATE: The state in the state space to which the node corresponds;
  • private Node<S, A> parent;
    
    n.PARENT: The node in the search tree that generated this node;
  • private A action;
    
    n.ACTION: The action that was applied to the parent to generate the node;
  • private double pathCost;
    
    n.PATH-COST: The cost, traditionally denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers.

These pointers also allow the solution path to be extracted when a goal node is found; we use the SOLUTION function to return the sequence of actions obtained by following parent pointers back to the root.
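The SOLUTION function is just a walk up the parent pointers. A sketch using the repository's Node class, assuming it exposes the usual getters getParent() and getAction() for the fields listed above:

import aima.core.search.framework.Node;
import java.util.LinkedList;
import java.util.List;

public class SolutionSketch {

    // Collect the actions on the path from the root to goalNode, in order.
    public static <S, A> List<A> solution(Node<S, A> goalNode) {
        LinkedList<A> actions = new LinkedList<>();
        for (Node<S, A> n = goalNode; n.getParent() != null; n = n.getParent())
            actions.addFirst(n.getAction()); // prepend, so the root's action comes first
        return actions;
    }
}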

The appropriate data structure for the frontier is a queue. The operations on the queue are as follows:

  • protected abstract void addToFrontier(Node<S, A> node);
    
    INSERT(element,queue): A primitive operation which inserts the node at the tail of the frontier.
  • protected abstract Node<S, A> removeFromFrontier();
    
    POP(queue): A primitive operation which removes and returns the node at the head of the frontier.
  • protected abstract boolean isFrontierEmpty()
    
    EMPTY?(queue): A primitive operation which checks whether the frontier contains not yet expanded nodes.
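The search strategies that follow differ essentially only in which kind of queue backs the frontier. A small sketch using standard Java collections (the repository obtains its queues from a QueueFactory instead, as shown later in this notebook; the String type parameters here are placeholders):

import aima.core.search.framework.Node;
import java.util.*;

// FIFO queue: breadth-first search expands the shallowest node first.
Queue<Node<String, String>> fifoFrontier = new ArrayDeque<>();

// LIFO queue (stack): depth-first search expands the most recently generated node first.
Deque<Node<String, String>> lifoFrontier = new ArrayDeque<>();

// Priority queue ordered by path cost g(n): uniform-cost search.
Queue<Node<String, String>> priorityFrontier =
        new PriorityQueue<>(Comparator.comparingDouble(Node::getPathCost));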

Uninformed Searches

Uninformed search (also called blind search) includes strategies that have no additional information about states beyond that provided in the problem definition. All these strategies can do is generate successors and distinguish a goal state from a non-goal state.

Breadth-first search is a simple search strategy in which all nodes at a given depth are expanded before any nodes at the next level. It therefore always chooses the shallowest unexpanded node for expansion, which is achieved by using a FIFO queue for the frontier. Here the goal test is applied to each node when it is generated rather than when it is selected for expansion.

Note that the shallowest goal node is not necessarily the optimal one; breadth-first search is optimal only if the path cost is a non-decreasing function of the depth of the node.

Working of Breadth First Search algorithm can be visualized here.

Let's have a look at the pseudo code of Breadth-First Search algorithm and then its implementation.


In [10]:
%%python
from notebookUtils import *
pseudocode('Breadth First Search')


Out[10]:

AIMA4e

function BREADTH-FIRST-SEARCH(problem) returns a solution, or failure
if problem's initial state is a goal then return empty path to initial state
frontier ← a FIFO queue initially containing one path, for the problem's initial state
reached ← a set of states; initially empty
solution ← failure
while frontier is not empty do
   parent ← the first node in frontier
   for child in successors(parent) do
     s ← child.state
     if s is a goal then
       return child
     if s is not in reached then
       add s to reached
       add child to the end of frontier
return solution


Figure 3.9 Breadth-first search algorithm.

AIMA3e

function BREADTH-FIRST-SEARCH(problem) returns a solution, or failure
node ← a node with STATE = problem.INITIAL-STATE, PATH-COST = 0
if problem.GOAL-TEST(node.STATE) then return SOLUTION(node)
frontier ← a FIFO queue with node as the only element
explored ← an empty set
loop do
   if EMPTY?(frontier) then return failure
   node ← POP(frontier) /* chooses the shallowest node in frontier */
   add node.STATE to explored
   for each action in problem.ACTIONS(node.STATE) do
      child ← CHILD-NODE(problem,node,action)
      if child.STATE is not in explored or frontier then
         if problem.GOAL-TEST(child.STATE) then return SOLUTION(child)
         frontier ← INSERT(child,frontier)


Figure ?? Breadth-first search on a graph.


In [11]:
import aima.core.environment.map.*;
import aima.core.search.framework.problem.*;
import aima.core.search.uninformed.BreadthFirstSearch;

Map romaniaMap = new SimplifiedRoadMapOfPartOfRomania();
Problem<String, MoveToAction> problem = new GeneralProblem<>(
				SimplifiedRoadMapOfPartOfRomania.ARAD,
				MapFunctions.createActionsFunction(romaniaMap),
				MapFunctions.createResultFunction(),
				GoalTest.isEqual(SimplifiedRoadMapOfPartOfRomania.BUCHAREST),
				MapFunctions.createDistanceStepCostFunction(romaniaMap));

BreadthFirstSearch<String, MoveToAction> breadthFirstSearch = new BreadthFirstSearch<>();
System.out.println(breadthFirstSearch.findActions(problem));


Optional[[Action[name=moveTo, location=Sibiu], Action[name=moveTo, location=Fagaras], Action[name=moveTo, location=Bucharest]]]
Out[11]:
null

In [12]:
import aima.core.environment.map.*;
import aima.core.search.framework.problem.*;
import aima.core.search.framework.*;
import aima.core.search.framework.qsearch.*;
import java.util.Queue;


Queue<Node<String, MoveToAction>> frontier;

Map romaniaMap = new SimplifiedRoadMapOfPartOfRomania();
Problem<String, MoveToAction> problem = new GeneralProblem<>(
				SimplifiedRoadMapOfPartOfRomania.ARAD,
				MapFunctions.createActionsFunction(romaniaMap),
				MapFunctions.createResultFunction(),
				GoalTest.isEqual(SimplifiedRoadMapOfPartOfRomania.BUCHAREST),
				MapFunctions.createDistanceStepCostFunction(romaniaMap));

GraphSearchBFS<String, MoveToAction> graphSearchBFS = new GraphSearchBFS<>();
frontier = QueueFactory.createFifoQueue();

System.out.println(graphSearchBFS.findNode(problem, frontier));


Optional[[parent=[parent=[parent=[parent=null, action=null, state=Arad, pathCost=0.0], action=Action[name=moveTo, location=Sibiu], state=Sibiu, pathCost=140.0], action=Action[name=moveTo, location=Fagaras], state=Fagaras, pathCost=239.0], action=Action[name=moveTo, location=Bucharest], state=Bucharest, pathCost=450.0]]
Out[12]:
null

Instead of expanding the shallowest node, uniform-cost search expands the node $n$ with the lowest path cost $g(n)$. This is done by storing the frontier as a priority queue ordered by $g$. There are two significant differences from breadth-first search. The first is that the goal test is applied to a node when it is selected for expansion. The second is that a test is added in case a better path is found to a node currently on the frontier.

Uniform-cost search does not care about the number of steps a path has, but only about its total cost. Therefore, it expands nodes in order of their optimal path cost, and the first goal node selected for expansion must be the optimal solution.

The difference in the working of Breadth First search algorithm and Uniform Cost search algorithm can be visualized here.

Let's have a look at the pseudo code of uniform cost search and then its implementation.


In [13]:
%%python
from notebookUtils import *
pseudocode('Uniform Cost Search')


Out[13]:

AIMA4e

function UNIFORM-COST-SEARCH(problem) returns a solution, or failure
if problem's initial state is a goal then return empty path to initial state
frontier ← a priority queue ordered by pathCost, with a node for the initial state
reached ← a table of {state: the best path that reached state}; initially empty
solution ← failure
while frontier is not empty and top(frontier) is cheaper than solution do
   parent ← pop(frontier)
   for child in successors(parent) do
     s ← child.state
     if s is not in reached or child is a cheaper path than reached[s] then
       reached[s] ← child
       add child to the frontier
       if child is a goal and is cheaper than solution then
         solution = child
return solution


Figure 3.11 Uniform-cost search on a graph. Finds optimal paths for problems with varying step costs.

AIMA3e

function UNIFORM-COST-SEARCH(problem) returns a solution, or failure
node ← a node with STATE = problem.INITIAL-STATE, PATH-COST = 0
frontier ← a priority queue ordered by PATH-COST, with node as the only element
explored ← an empty set
loop do
   if EMPTY?(frontier) then return failure
   node ← POP(frontier) /* chooses the lowest-cost node in frontier */
   if problem.GOAL-TEST(node.STATE) then return SOLUTION(node)
   add node.STATE to explored
   for each action in problem.ACTIONS(node.STATE) do
      child ← CHILD-NODE(problem,node,action)
      if child.STATE is not in explored or frontier then
         frontier ← INSERT(child,frontier)
      else if child.STATE is in frontier with higher PATH-COST then
         replace that frontier node with child


Figure ?? Uniform-cost search on a graph. The algorithm is identical to the general graph search algorithm in Figure ??, except for the use of a priority queue and the addition of an extra check in case a shorter path to a frontier state is discovered. The data structure for frontier needs to support efficient membership testing, so it should combine the capabilities of a priority queue and a hash table.


In [14]:
import aima.core.environment.map.*;
import aima.core.search.framework.problem.*;
import aima.core.search.uninformed.UniformCostSearch;

Map romaniaMap = new SimplifiedRoadMapOfPartOfRomania();
Problem<String, MoveToAction> problem = new GeneralProblem<>(
				SimplifiedRoadMapOfPartOfRomania.ARAD,
				MapFunctions.createActionsFunction(romaniaMap),
				MapFunctions.createResultFunction(),
				GoalTest.isEqual(SimplifiedRoadMapOfPartOfRomania.BUCHAREST),
				MapFunctions.createDistanceStepCostFunction(romaniaMap));

UniformCostSearch<String, MoveToAction> uniformCostSearch = new UniformCostSearch<>();
System.out.println(uniformCostSearch.findActions(problem));


Optional[[Action[name=moveTo, location=Sibiu], Action[name=moveTo, location=RimnicuVilcea], Action[name=moveTo, location=Pitesti], Action[name=moveTo, location=Bucharest]]]
Out[14]:
null

The depth-first search algorithm always expands the deepest node in the current frontier of the search tree. Depth-first search uses a LIFO queue for the frontier, which means that the most recently generated node is chosen for expansion. The properties of depth-first search depend strongly on whether the tree-search or graph-search version is used. The graph-search version, which avoids repeated states and redundant paths, is complete in finite state spaces because it will eventually expand every node. The tree-search version, on the other hand, is not complete.

The depth-first search algorithm is implemented here and its working can be visualized here.
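The depth-first run on the Romania problem can be set up exactly like the breadth-first example above. A sketch, assuming the repository's DepthFirstSearch class wraps a graph-search strategy the same way the other searches in this notebook do:

import aima.core.environment.map.*;
import aima.core.search.framework.problem.*;
import aima.core.search.framework.qsearch.GraphSearch;
import aima.core.search.uninformed.DepthFirstSearch;

Map romaniaMap = new SimplifiedRoadMapOfPartOfRomania();
Problem<String, MoveToAction> problem = new GeneralProblem<>(
        SimplifiedRoadMapOfPartOfRomania.ARAD,
        MapFunctions.createActionsFunction(romaniaMap),
        MapFunctions.createResultFunction(),
        GoalTest.isEqual(SimplifiedRoadMapOfPartOfRomania.BUCHAREST),
        MapFunctions.createDistanceStepCostFunction(romaniaMap));

// Graph search avoids the repeated states that make depth-first tree search incomplete.
DepthFirstSearch<String, MoveToAction> depthFirstSearch = new DepthFirstSearch<>(new GraphSearch<>());
System.out.println(depthFirstSearch.findActions(problem));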

The failure of depth-first search in infinite state spaces can be alleviated by supplying it with a predetermined depth limit $l$, i.e., nodes at depth $l$ are treated as if they have no successors. Unfortunately, this also introduces an additional source of incompleteness if the shallowest goal node is beyond the chosen depth limit.

Let's have a look at its pseudo code. Notice that depth-limited search can terminate with two kinds of failure: the standard failure value indicates no solution; the cutoff value indicates no solution within the depth limit.

This algorithm is implemented here and its working can be visualized here.


In [15]:
%%python
from notebookUtils import *
pseudocode('Depth Limited Search')


Out[15]:

AIMA4e

function DEPTH-LIMITED-SEARCH(problem, l) returns a solution, or failure, or cutoff
frontier ← a LIFO queue (stack) initially containing one path, for the problem's initial state
solution ← failure
while frontier is not empty do
   parent ← pop(frontier)
   if depth(parent) > l then
     solution ← cutoff
     else
       for child in successors(parent) do
         if child is a goal then
           return child
         add child to frontier
return solution


Figure 3.14 An implementation of depth-limited tree search. The algorithm has two different ways to signal failure to find a solution: it returns failure when it has exhausted all paths and proved there is no solution at any depth, and returns cutoff to mean there might be a solution at a deeper depth than l. Note that this algorithm does not keep track of reached states, and thus might visit the same state multiple times on different paths.

AIMA3e

function DEPTH-LIMITED-SEARCH(problem,limit) returns a solution, or failure/cutoff
return RECURSIVE-DLS(MAKE-NODE(problem.INITIAL-STATE),problem,limit)

function RECURSIVE-DLS(node,problem,limit) returns a solution, or failure/cutoff
if problem.GOAL-TEST(node.STATE) then return SOLUTION(node)
else if limit = 0 then return cutoff
else
   cutoff_occurred? ← false
   for each action in problem.ACTIONS(node.STATE) do
      child ← CHILD-NODE(problem,node,action)
      result ← RECURSIVE-DLS(child,problem,limit-1)
      if result = cutoff then cutoff_occurred? ← true
      else if result ≠ failure then return result
   if cutoff_occurred? then return cutoff else return failure


Figure ?? A recursive implementation of depth-limited tree search.
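A sketch of running the repository's depth-limited search on the Romania problem, assuming a DepthLimitedSearch class whose constructor takes the limit $l$; a limit below the depth of the shallowest goal would produce a cutoff instead of a solution:

import aima.core.environment.map.*;
import aima.core.search.framework.problem.*;
import aima.core.search.uninformed.DepthLimitedSearch;

Map romaniaMap = new SimplifiedRoadMapOfPartOfRomania();
Problem<String, MoveToAction> problem = new GeneralProblem<>(
        SimplifiedRoadMapOfPartOfRomania.ARAD,
        MapFunctions.createActionsFunction(romaniaMap),
        MapFunctions.createResultFunction(),
        GoalTest.isEqual(SimplifiedRoadMapOfPartOfRomania.BUCHAREST),
        MapFunctions.createDistanceStepCostFunction(romaniaMap));

// Nodes at depth 9 are treated as if they had no successors.
DepthLimitedSearch<String, MoveToAction> depthLimitedSearch = new DepthLimitedSearch<>(9);
System.out.println(depthLimitedSearch.findActions(problem));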

Iterative deepening search is a general strategy, often used in combination with depth-first search, that finds the best depth limit. It calls depth-first search with increasing depth limits until a goal is found. It is complete, optimal for unit step costs, has time complexity comparable to breadth-first search, and has linear space complexity.

Iterative deepening search is analogous to breadth-first search in that it explores a complete layer of new nodes at each iteration before going on to the next layer. In general, iterative deepening is the preferred uninformed search method when the search space is large and the depth of the solution is unknown.

Working of Iterative deepening search can be visualized here.

Let's have a look at its pseudo code and implementation on Romania map problem.


In [16]:
%%python
from notebookUtils import *
pseudocode('Iterative Deepening Search')


Out[16]:

AIMA3e / AIMA4e

function ITERATIVE-DEEPENING-SEARCH(problem) returns a solution, or failure
for depth = 0 to ∞ do
   result ← DEPTH-LIMITED-SEARCH(problem,depth)
   if result ≠ cutoff then return result


Figure ?? The iterative deepening search algorithm, which repeatedly applies depth-limited search with increasing limits. It terminates when a solution is found or if the depth-limited search returns failure, meaning that no solution exists.


In [17]:
import aima.core.environment.map.*;
import aima.core.search.framework.problem.*;
import aima.core.search.uninformed.IterativeDeepeningSearch;

Map romaniaMap = new SimplifiedRoadMapOfPartOfRomania();

Problem<String, MoveToAction> problem = new GeneralProblem<>(
        SimplifiedRoadMapOfPartOfRomania.ARAD,
        MapFunctions.createActionsFunction(romaniaMap),
        MapFunctions.createResultFunction(),
        GoalTest.isEqual(SimplifiedRoadMapOfPartOfRomania.BUCHAREST),
        MapFunctions.createDistanceStepCostFunction(romaniaMap));

IterativeDeepeningSearch<String, MoveToAction> iterativeDeepeningSearch = new IterativeDeepeningSearch<>();
System.out.println(iterativeDeepeningSearch.findActions(problem));


Optional[[Action[name=moveTo, location=Sibiu], Action[name=moveTo, location=Fagaras], Action[name=moveTo, location=Bucharest]]]
Out[17]:
null

The idea behind bidirectional search is to run two simultaneous searches: one forward from the initial state and the other backward from the goal, hoping that the two searches meet in the middle. This is implemented by replacing the goal test with a check to see whether the frontiers of the two searches intersect; if they do, a solution has been found. This can enormously reduce time complexity, but it is not always applicable and may require too much space.

Working of bidirectional search (using breadth-first search in both directions) can be visualized here, and this algorithm is implemented here.
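The frontier-intersection check itself fits in a few lines. Here is a minimal, self-contained sketch (plain Java on an undirected graph given as an adjacency map, not the repository's bidirectional search): two breadth-first frontiers grow alternately, and the search succeeds as soon as one of them generates a state the other has already reached. A full implementation would also stitch the two half-paths together into a solution.

import java.util.*;

public class BidirectionalSketch {

    // Returns true as soon as the forward and backward frontiers intersect.
    public static boolean frontiersMeet(Map<String, List<String>> graph, String start, String goal) {
        if (start.equals(goal)) return true;
        Set<String> reachedF = new HashSet<>(Arrays.asList(start));
        Set<String> reachedB = new HashSet<>(Arrays.asList(goal));
        Queue<String> frontierF = new ArrayDeque<>(Arrays.asList(start));
        Queue<String> frontierB = new ArrayDeque<>(Arrays.asList(goal));
        while (!frontierF.isEmpty() && !frontierB.isEmpty()) {
            if (expand(graph, frontierF, reachedF, reachedB)) return true; // forward step
            if (expand(graph, frontierB, reachedB, reachedF)) return true; // backward step
        }
        return false; // one search space is exhausted: no path exists
    }

    private static boolean expand(Map<String, List<String>> graph, Queue<String> frontier,
                                  Set<String> reached, Set<String> otherReached) {
        String state = frontier.remove();
        for (String next : graph.getOrDefault(state, Collections.emptyList())) {
            if (otherReached.contains(next)) return true; // the frontiers intersect
            if (reached.add(next)) frontier.add(next);
        }
        return false;
    }
}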

Informed Searches

Informed Searches include strategies that use problem-specific knowledge beyond the definition of the problem itself. These strategies can find a solution more efficiently than uninformed strategies.

The general approach we consider here is best-first search. It is an instance of the general TREE-SEARCH or GRAPH-SEARCH algorithm in which a node is selected for expansion based on an evaluation function, $f(n)$. Heuristic functions $h(n)$ are the most common form in which additional knowledge of the problem is imparted to the search algorithm. For now, we consider the heuristic function to be an arbitrary, non-negative, problem-specific function, with one constraint: if $n$ is a goal node, then $h(n) = 0$. Most best-first algorithms include the heuristic function $h(n)$ as a component of $f$.

$h(n)$ = estimated cost of the cheapest path from the state at node $n$ to a goal state.

The best-first search algorithm is implemented here.

Greedy best-first search tries to expand the node that is closest to the goal. Thus, it evaluates nodes by using just the heuristic function; that is, $f(n) = h(n)$. It is called "greedy" because at each step it tries to get as close to the goal as it can, and it is not always optimal. Sometimes it is incomplete even in a finite state space.

The greedy best-first search algorithm is implemented here. Let's use it to solve the 8-puzzle problem.

If we want to find the shortest solution, we need a heuristic function that never overestimates the number of steps to the goal (an admissible heuristic). Here are two commonly used heuristic functions for this problem:

  • $h1$ = the number of misplaced tiles. $h1$ is an admissible heuristic because any tile that is out of place must be moved at least once.

  • $h2$ = the sum of the distances of the tiles from their goal positions. Because tiles cannot move along diagonals, the distance we count is the sum of the horizontal and vertical distances. This is sometimes called the city-block distance or Manhattan distance. $h2$ is also admissible because all any move can do is move one tile one step closer to the goal.

As expected, neither of these overestimates the true solution cost.
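As a quick sanity check, both heuristics can be computed by hand for the start state used in the cells below. A hand-rolled sketch (the repository provides equivalent functions in EightPuzzleFunctions): for the board [1, 7, 4, 3, 0, 2, 6, 8, 5] this prints h1 = 6 and h2 = 8, matching the values reported by the search runs that follow.

int[] puzzle = {1, 7, 4, 3, 0, 2, 6, 8, 5}; // the goal is {0, 1, 2, 3, 4, 5, 6, 7, 8}

// h1: number of misplaced tiles (the blank, 0, is not counted)
int h1 = 0;
for (int i = 0; i < 9; i++)
    if (puzzle[i] != 0 && puzzle[i] != i)
        h1++;

// h2: sum of the Manhattan distances of the tiles from their goal squares
int h2 = 0;
for (int i = 0; i < 9; i++) {
    int tile = puzzle[i];
    if (tile != 0)
        h2 += Math.abs(i / 3 - tile / 3) + Math.abs(i % 3 - tile % 3);
}

System.out.println("h1 = " + h1 + ", h2 = " + h2);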

Now, let's use the greedy best-first search algorithm to solve the eight puzzle problem.


In [28]:
import aima.core.agent.Action;
import aima.core.environment.eightpuzzle.EightPuzzleBoard;
import aima.core.environment.eightpuzzle.EightPuzzleFunctions;
import aima.core.search.agent.SearchAgent;
import aima.core.search.framework.Node;
import aima.core.search.framework.SearchForActions;
import aima.core.search.framework.problem.GeneralProblem;
import aima.core.search.framework.problem.GoalTest;
import aima.core.search.framework.problem.Problem;
import aima.core.search.framework.qsearch.GraphSearch;
import aima.core.search.informed.GreedyBestFirstSearch;

import java.util.Arrays;
import java.util.List;

EightPuzzleBoard board = new EightPuzzleBoard(new int[] { 1 ,7 ,4 ,3 ,0 ,2 ,6 ,8 ,5 });
Problem<EightPuzzleBoard, Action> problem = new GeneralProblem<>(board, 
                                                                 EightPuzzleFunctions::getActions,
                                                                 EightPuzzleFunctions::getResult,
                                                                 GoalTest.isEqual(EightPuzzleFunctions.GOAL_STATE));

// By Misplaced Tiles Heuristic function (h1)


int[] state = problem.getInitialState().getState();
EightPuzzleFunctions eightPuzzleFunctions = new EightPuzzleFunctions();

try {
    SearchForActions<EightPuzzleBoard, Action> search = new GreedyBestFirstSearch<>(new GraphSearch<>(),
            EightPuzzleFunctions.createMisplacedTileHeuristicFunction());
    SearchAgent<EightPuzzleBoard, Action> agent = new SearchAgent<>(problem, search);
    List<Action> actions = agent.getActions().subList(0, agent.getActions().size());
    for (int i=0;i<actions.size();i++){
        System.out.print("Current State = " + Arrays.toString(state));
        EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
        System.out.print("  Misplaced Tiles Heuristic = " + eightPuzzleFunctions.createMisplacedTileHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
        System.out.print("  isGoal = "+ problem.testGoal(eightPuzzleBoard));
        System.out.print("  Path length = " + i);
        System.out.println("  " + actions.get(i));
        state = problem.getResult(eightPuzzleBoard, actions.get(i)).getState();
        System.out.println();
    }
    EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
    System.out.print("Final State = " + Arrays.toString(state));
    System.out.print("  Misplaced Tiles Heuristic = " + eightPuzzleFunctions.createMisplacedTileHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
    System.out.print("  isGoal = "+ problem.testGoal(eightPuzzleBoard));
    System.out.print("  Path length = " + agent.getActions().size());
} catch (Exception e) {
    e.printStackTrace();
}


Current State = [1, 7, 4, 3, 0, 2, 6, 8, 5]  Misplaced Tiles Heuristic = 6.0  isGoal = false  Path length = 0  Action[name=Up]

Current State = [1, 0, 4, 3, 7, 2, 6, 8, 5]  Misplaced Tiles Heuristic = 6.0  isGoal = false  Path length = 1  Action[name=Right]

Current State = [1, 4, 0, 3, 7, 2, 6, 8, 5]  Misplaced Tiles Heuristic = 6.0  isGoal = false  Path length = 2  Action[name=Down]

Current State = [1, 4, 2, 3, 7, 0, 6, 8, 5]  Misplaced Tiles Heuristic = 5.0  isGoal = false  Path length = 3  Action[name=Down]

Current State = [1, 4, 2, 3, 7, 5, 6, 8, 0]  Misplaced Tiles Heuristic = 4.0  isGoal = false  Path length = 4  Action[name=Left]

Current State = [1, 4, 2, 3, 7, 5, 6, 0, 8]  Misplaced Tiles Heuristic = 3.0  isGoal = false  Path length = 5  Action[name=Up]

Current State = [1, 4, 2, 3, 0, 5, 6, 7, 8]  Misplaced Tiles Heuristic = 2.0  isGoal = false  Path length = 6  Action[name=Up]

Current State = [1, 0, 2, 3, 4, 5, 6, 7, 8]  Misplaced Tiles Heuristic = 1.0  isGoal = false  Path length = 7  Action[name=Left]

Final State = [0, 1, 2, 3, 4, 5, 6, 7, 8]  Misplaced Tiles Heuristic = 0.0  isGoal = true  Path length = 8
Out[28]:
null

In [29]:
import aima.core.agent.Action;
import aima.core.environment.eightpuzzle.EightPuzzleBoard;
import aima.core.environment.eightpuzzle.EightPuzzleFunctions;
import aima.core.search.agent.SearchAgent;
import aima.core.search.framework.Node;
import aima.core.search.framework.SearchForActions;
import aima.core.search.framework.problem.GeneralProblem;
import aima.core.search.framework.problem.GoalTest;
import aima.core.search.framework.problem.Problem;
import aima.core.search.framework.qsearch.GraphSearch;
import aima.core.search.informed.GreedyBestFirstSearch;

import java.util.Arrays;
import java.util.List;

EightPuzzleBoard board = new EightPuzzleBoard(new int[] { 1 ,7 ,4 ,3 ,0 ,2 ,6 ,8 ,5 });
Problem<EightPuzzleBoard, Action> problem = new GeneralProblem<>(board, 
                                                                 EightPuzzleFunctions::getActions,
                                                                 EightPuzzleFunctions::getResult,
                                                                 GoalTest.isEqual(EightPuzzleFunctions.GOAL_STATE));

// By Manhattan Heuristic function (h2)

int[] state = problem.getInitialState().getState();
EightPuzzleFunctions eightPuzzleFunctions = new EightPuzzleFunctions();

try {
    SearchForActions<EightPuzzleBoard, Action> search = new GreedyBestFirstSearch<>(new GraphSearch<>(),
            EightPuzzleFunctions.createManhattanHeuristicFunction());
    SearchAgent<EightPuzzleBoard, Action> agent = new SearchAgent<>(problem, search);
    List<Action> actions = agent.getActions().subList(0, agent.getActions().size());
    for (int i=0;i<actions.size();i++){
        System.out.print("Current State = " + Arrays.toString(state));
        EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
        System.out.print("   Manhattan Heuristic = " + eightPuzzleFunctions.createManhattanHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
        System.out.print("   isGoal = "+ problem.testGoal(eightPuzzleBoard));
        System.out.print("   Path length = " + i);
        System.out.println("   " + actions.get(i));
        state = problem.getResult(eightPuzzleBoard, actions.get(i)).getState();
        System.out.println();
    }
    EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
    System.out.print("Final State = " + Arrays.toString(state));
    System.out.print("   Manhattan Heuristic = " + eightPuzzleFunctions.createManhattanHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
    System.out.print("   isGoal = "+ problem.testGoal(eightPuzzleBoard));
    System.out.print("   Path length = " + agent.getActions().size());
} catch (Exception e) {
    e.printStackTrace();
}


Current State = [1, 7, 4, 3, 0, 2, 6, 8, 5]   Manhattan Heuristic = 8.0   isGoal = false   Path length = 0   Action[name=Up]

Current State = [1, 0, 4, 3, 7, 2, 6, 8, 5]   Manhattan Heuristic = 7.0   isGoal = false   Path length = 1   Action[name=Right]

Current State = [1, 4, 0, 3, 7, 2, 6, 8, 5]   Manhattan Heuristic = 6.0   isGoal = false   Path length = 2   Action[name=Down]

Current State = [1, 4, 2, 3, 7, 0, 6, 8, 5]   Manhattan Heuristic = 5.0   isGoal = false   Path length = 3   Action[name=Down]

Current State = [1, 4, 2, 3, 7, 5, 6, 8, 0]   Manhattan Heuristic = 4.0   isGoal = false   Path length = 4   Action[name=Left]

Current State = [1, 4, 2, 3, 7, 5, 6, 0, 8]   Manhattan Heuristic = 3.0   isGoal = false   Path length = 5   Action[name=Up]

Current State = [1, 4, 2, 3, 0, 5, 6, 7, 8]   Manhattan Heuristic = 2.0   isGoal = false   Path length = 6   Action[name=Up]

Current State = [1, 0, 2, 3, 4, 5, 6, 7, 8]   Manhattan Heuristic = 1.0   isGoal = false   Path length = 7   Action[name=Left]

Final State = [0, 1, 2, 3, 4, 5, 6, 7, 8]   Manhattan Heuristic = 0.0   isGoal = true   Path length = 8
Out[29]:
null

A* Search: Minimizing the total estimated solution cost

This algorithm evaluates nodes by combining $g(n)$, the cost to reach the node, and $h(n)$, the estimated cost to get from the node to the goal:

$f(n) = g(n) + h(n)$

Since $g(n)$ gives the path cost from the start node to node $n$, and $h(n)$ is the estimated cost of the cheapest path from $n$ to the goal, we have $f(n)$ = estimated cost of the cheapest solution through $n$. A$*$ search is both complete and optimal, provided that $h(n)$ is admissible (for TREE-SEARCH) or consistent (for GRAPH-SEARCH). The algorithm is identical to UNIFORM-COST-SEARCH except that A$*$ uses $g + h$ instead of $g$.

Whenever A* selects a node for expansion, the optimal path to that node has been found.
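For example, on the Romania problem with the textbook's straight-line-distance heuristic, expanding Arad gives $f(Sibiu) = 140 + 253 = 393$, $f(Timisoara) = 118 + 329 = 447$ and $f(Zerind) = 75 + 374 = 449$, so A$*$ expands Sibiu next.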

The A$*$ algorithm is implemented here and can be visualized here. Now, let's solve the 8-puzzle problem using the A$*$ algorithm.


In [30]:
import aima.core.environment.eightpuzzle.EightPuzzleBoard;
import aima.core.environment.eightpuzzle.EightPuzzleFunctions;
import aima.core.agent.Action;
import aima.core.search.agent.SearchAgent;
import aima.core.search.framework.Node;
import aima.core.search.framework.SearchForActions;
import aima.core.search.framework.problem.GoalTest;
import aima.core.search.framework.problem.GeneralProblem;
import aima.core.search.framework.problem.Problem;
import aima.core.search.framework.qsearch.GraphSearch;
import aima.core.search.informed.AStarSearch;

import java.util.*;


EightPuzzleBoard board = new EightPuzzleBoard(new int[] { 1 ,7 ,4 ,3 ,0 ,2 ,6 ,8 ,5 });
Problem<EightPuzzleBoard, Action> problem = new GeneralProblem<>(board, 
                                                                 EightPuzzleFunctions::getActions,
                                                                 EightPuzzleFunctions::getResult,
                                                                 GoalTest.isEqual(EightPuzzleFunctions.GOAL_STATE));

int[] state = problem.getInitialState().getState();
EightPuzzleFunctions eightPuzzleFunctions = new EightPuzzleFunctions();

// By Misplaced Tiles Heuristic function

try {
    SearchForActions<EightPuzzleBoard, Action> search = new AStarSearch<>(new GraphSearch<>(),
            EightPuzzleFunctions.createMisplacedTileHeuristicFunction());
    SearchAgent<EightPuzzleBoard, Action> agent = new SearchAgent<>(problem, search);
    List<Action> actions = agent.getActions().subList(0, agent.getActions().size());
    for (int i=0;i<actions.size();i++){
        System.out.print("Current State = " + Arrays.toString(state));
        EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
        System.out.print("  Misplaced Tiles Heuristic = " + eightPuzzleFunctions.createMisplacedTileHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
        System.out.print("  isGoal = "+ problem.testGoal(eightPuzzleBoard));
        System.out.print("  Path length = " + i);
        System.out.println("  " + actions.get(i));
        state = problem.getResult(eightPuzzleBoard, actions.get(i)).getState();
        System.out.println();
    }
    EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
    System.out.print("Final State = " + Arrays.toString(state));
    System.out.print("  Misplaced Tiles Heuristic = " + eightPuzzleFunctions.createMisplacedTileHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
    System.out.print("  isGoal = "+ problem.testGoal(eightPuzzleBoard));
    System.out.print("  Path length = " + agent.getActions().size());
} catch (Exception e) {
    e.printStackTrace();
}


Current State = [1, 7, 4, 3, 0, 2, 6, 8, 5]  Misplaced Tiles Heuristic = 6.0  isGoal = false  Path length = 0  Action[name=Up]

Current State = [1, 0, 4, 3, 7, 2, 6, 8, 5]  Misplaced Tiles Heuristic = 6.0  isGoal = false  Path length = 1  Action[name=Right]

Current State = [1, 4, 0, 3, 7, 2, 6, 8, 5]  Misplaced Tiles Heuristic = 6.0  isGoal = false  Path length = 2  Action[name=Down]

Current State = [1, 4, 2, 3, 7, 0, 6, 8, 5]  Misplaced Tiles Heuristic = 5.0  isGoal = false  Path length = 3  Action[name=Down]

Current State = [1, 4, 2, 3, 7, 5, 6, 8, 0]  Misplaced Tiles Heuristic = 4.0  isGoal = false  Path length = 4  Action[name=Left]

Current State = [1, 4, 2, 3, 7, 5, 6, 0, 8]  Misplaced Tiles Heuristic = 3.0  isGoal = false  Path length = 5  Action[name=Up]

Current State = [1, 4, 2, 3, 0, 5, 6, 7, 8]  Misplaced Tiles Heuristic = 2.0  isGoal = false  Path length = 6  Action[name=Up]

Current State = [1, 0, 2, 3, 4, 5, 6, 7, 8]  Misplaced Tiles Heuristic = 1.0  isGoal = false  Path length = 7  Action[name=Left]

Final State = [0, 1, 2, 3, 4, 5, 6, 7, 8]  Misplaced Tiles Heuristic = 0.0  isGoal = true  Path length = 8
Out[30]:
null

In [32]:
import aima.core.environment.eightpuzzle.EightPuzzleBoard;
import aima.core.environment.eightpuzzle.EightPuzzleFunctions;
import aima.core.agent.Action;
import aima.core.search.agent.SearchAgent;
import aima.core.search.framework.Node;
import aima.core.search.framework.SearchForActions;
import aima.core.search.framework.problem.GoalTest;
import aima.core.search.framework.problem.GeneralProblem;
import aima.core.search.framework.problem.Problem;
import aima.core.search.framework.qsearch.GraphSearch;
import aima.core.search.informed.AStarSearch;

import java.util.*;


EightPuzzleBoard board = new EightPuzzleBoard(new int[] { 1 ,7 ,4 ,3 ,0 ,2 ,6 ,8 ,5 });
Problem<EightPuzzleBoard, Action> problem = new GeneralProblem<>(board, 
                                                                 EightPuzzleFunctions::getActions,
                                                                 EightPuzzleFunctions::getResult,
                                                                 GoalTest.isEqual(EightPuzzleFunctions.GOAL_STATE));

int[] state = problem.getInitialState().getState();
EightPuzzleFunctions eightPuzzleFunctions = new EightPuzzleFunctions();

// By Manhattan Heuristic function

try {
    SearchForActions<EightPuzzleBoard, Action> search = new AStarSearch<>(new GraphSearch<>(),
            EightPuzzleFunctions.createManhattanHeuristicFunction());
    SearchAgent<EightPuzzleBoard, Action> agent = new SearchAgent<>(problem, search);
    List<Action> actions = agent.getActions().subList(0, agent.getActions().size());
    for (int i=0;i<actions.size();i++){
        System.out.print("Current State = " + Arrays.toString(state));
        EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
        System.out.print("   Manhattan Heuristic = " + eightPuzzleFunctions.createManhattanHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
        System.out.print("   isGoal = "+ problem.testGoal(eightPuzzleBoard));
        System.out.print("   Path length = " + i);
        System.out.println("   " + actions.get(i));
        state = problem.getResult(eightPuzzleBoard, actions.get(i)).getState();
        System.out.println();
    }
    EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
    System.out.print("Final State = " + Arrays.toString(state));
    System.out.print("   Manhattan Heuristic = " + eightPuzzleFunctions.createManhattanHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
    System.out.print("   isGoal = "+ problem.testGoal(eightPuzzleBoard));
    System.out.print("   Path length = " + agent.getActions().size());
} catch (Exception e) {
    e.printStackTrace();
}


Current State = [1, 7, 4, 3, 0, 2, 6, 8, 5]   Manhattan Heuristic = 8.0   isGoal = false   Path length = 0   Action[name=Up]

Current State = [1, 0, 4, 3, 7, 2, 6, 8, 5]   Manhattan Heuristic = 7.0   isGoal = false   Path length = 1   Action[name=Right]

Current State = [1, 4, 0, 3, 7, 2, 6, 8, 5]   Manhattan Heuristic = 6.0   isGoal = false   Path length = 2   Action[name=Down]

Current State = [1, 4, 2, 3, 7, 0, 6, 8, 5]   Manhattan Heuristic = 5.0   isGoal = false   Path length = 3   Action[name=Down]

Current State = [1, 4, 2, 3, 7, 5, 6, 8, 0]   Manhattan Heuristic = 4.0   isGoal = false   Path length = 4   Action[name=Left]

Current State = [1, 4, 2, 3, 7, 5, 6, 0, 8]   Manhattan Heuristic = 3.0   isGoal = false   Path length = 5   Action[name=Up]

Current State = [1, 4, 2, 3, 0, 5, 6, 7, 8]   Manhattan Heuristic = 2.0   isGoal = false   Path length = 6   Action[name=Up]

Current State = [1, 0, 2, 3, 4, 5, 6, 7, 8]   Manhattan Heuristic = 1.0   isGoal = false   Path length = 7   Action[name=Left]

Final State = [0, 1, 2, 3, 4, 5, 6, 7, 8]   Manhattan Heuristic = 0.0   isGoal = true   Path length = 8
Out[32]:
null
