This notebook serves as supporting material for the chapter Solving Problems by Searching. It illustrates the use of the code repository and demonstrates how the code can be extended to solve various search-related problems. The discussion of problem solving begins with a precise formulation of problems and their solutions. We then move on to various uninformed and informed search strategies for solving problems.
In [2]:
%classpath add jar ../out/artifacts/aima_core_jar/aima-core.jar
The process of looking for a sequence of actions that reaches the goal is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out. This is called the execution phase. Thus, we have a simple “formulate, search, execute” design for the agent, as shown in Figure 3.1 of the textbook. After formulating a goal and a problem to solve, the agent calls a search procedure to solve it. It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do—typically, the first action of the sequence—and then removing that step from the sequence. Once the solution has been executed, the agent will formulate a new goal.
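Before looking at the pseudocode, the control flow above can be sketched in a few lines of plain Java (a toy sketch assuming string-valued percepts and actions; SimpleAgentSketch and its stub search are hypothetical names, not repository code):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// A minimal, self-contained sketch of the "formulate, search, execute" loop.
// The repository's actual agent is the abstract class linked below.
class SimpleAgentSketch {
    private final Deque<String> plan = new ArrayDeque<>();

    // Returns the next action to execute, searching for a new plan
    // whenever the previous solution has been fully executed.
    String step(String percept) {
        if (plan.isEmpty()) {
            // Formulate a goal and a problem, then search for a solution.
            plan.addAll(search(percept));
        }
        // Execute: do the first action of the sequence and remove it.
        return plan.poll();
    }

    // Stand-in for a real search procedure: returns a fixed action sequence.
    private List<String> search(String percept) {
        return List.of("turnLeft", "forward", "forward");
    }
}
```

Once the three actions have been executed, the next call to step triggers a fresh search, mirroring how the agent formulates a new goal after executing a solution.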
Let's have a look at the pseudocode of a simple problem-solving agent and then see its Java implementation.
In [3]:
%%python
from notebookUtils import *
pseudocode('Simple Problem Solving Agent')
Out[3]:
The implementation of the above pseudocode can be viewed here. This agent is implemented as an abstract agent which can be extended to construct other agents.
We will first formally define a problem. Then we will look at how the code from the repository can be used to formulate new problems, and finally at various toy problems that are already present in the code repository.
As per the textbook, a problem can be defined formally by five components: the initial state, the applicable actions, the transition model, the goal test, and the path cost function. This five-component structure is implemented as an interface named Problem.java in the repository. Let's have a look at the implementation.
public interface Problem<S, A> extends OnlineSearchProblem<S, A> {

    /**
     * Returns the initial state of the agent.
     */
    S getInitialState();

    /**
     * Returns the description of the possible actions available to the agent.
     */
    List<A> getActions(S state);

    /**
     * Returns the description of what each action does.
     */
    S getResult(S state, A action);

    /**
     * Determines whether a given state is a goal state.
     */
    boolean testGoal(S state);

    /**
     * Returns the <b>step cost</b> of taking action <code>action</code> in state <code>state</code> to reach state
     * <code>stateDelta</code> denoted by c(s, a, s').
     */
    double getStepCosts(S state, A action, S stateDelta);

    /**
     * Tests whether a node represents an acceptable solution. The default implementation
     * delegates the check to the goal test. Other implementations could make use of the additional
     * information given by the node (e.g. the sequence of actions leading to the node). A
     * solution tester implementation could for example always return false and internally collect
     * the paths of all nodes whose state passes the goal test. Search implementations should always
     * access the goal test via this method to support solution acceptance testing.
     */
    default boolean testSolution(Node<S, A> node) {
        return testGoal(node.getState());
    }
}
The states and actions are represented by the generic type variables S and A respectively. The methods map onto the five components of a problem as follows:
getInitialState(): the initial state
getActions(S state): the applicable actions
getResult(S state, A action): the transition model
testGoal(S state): the goal test
getStepCosts(S state, A action, S stateDelta): the path cost function (as step costs)
A toy problem is intended to illustrate or exercise various problem-solving methods. Let's extend the Problem interface to implement a toy problem: the 8-puzzle, which consists of a 3x3 board with eight numbered tiles and a blank space. It has the following five components:
The initial state: any arrangement of the eight tiles and the blank on the board.
Actions: movements of the blank space (Left, Right, Up, or Down), restricted to the subset that is possible when the blank is on an edge or in a corner.
The transition model: given a state and an action, it returns the resulting state, in which the blank has been swapped with the adjacent tile in the chosen direction.
The goal test: checks whether the state matches the goal configuration.
The path cost: each step costs 1, so the path cost is the number of steps in the path.
Let's look at the implementation:
First we implement the states and actions applicable to the problem. The actions can be implemented as an enum whereas the states can be represented as an array of ints.
In [4]:
package aima.notebooks.classicalsearch;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import aima.core.search.framework.problem.Problem;

public class EightPuzzleProblem implements Problem<int[], EightPuzzleProblem.Action> {

    // This array represents the state
    int[] initialState = new int[9];

    /**
     * This enum represents the Action datatype
     */
    public enum Action {
        LEFT, RIGHT, UP, DOWN
    }

    // A constructor for the problem
    public EightPuzzleProblem(int[] initialState) {
        this.initialState = initialState;
    }

    // Component One : The initial state.
    @Override
    public int[] getInitialState() {
        return initialState;
    }

    // Component Two : Applicable Actions
    @Override
    public List<Action> getActions(int[] state) {
        List<Action> actions = new ArrayList<>();
        if (this.canMoveGap(state, Action.UP))
            actions.add(Action.UP);
        if (this.canMoveGap(state, Action.DOWN))
            actions.add(Action.DOWN);
        if (this.canMoveGap(state, Action.LEFT))
            actions.add(Action.LEFT);
        if (this.canMoveGap(state, Action.RIGHT))
            actions.add(Action.RIGHT);
        return actions;
    }

    // Component Three : Transition Model
    @Override
    public int[] getResult(int[] state, Action action) {
        int[] result = state.clone();
        if (Action.UP.equals(action) && canMoveGap(state, Action.UP))
            moveGapUp(result);
        else if (Action.DOWN.equals(action) && canMoveGap(state, Action.DOWN))
            moveGapDown(result);
        else if (Action.LEFT.equals(action) && canMoveGap(state, Action.LEFT))
            moveGapLeft(result);
        else if (Action.RIGHT.equals(action) && canMoveGap(state, Action.RIGHT))
            moveGapRight(result);
        return result;
    }

    // Component Four : Goal Test
    @Override
    public boolean testGoal(int[] state) {
        return Arrays.equals(state, new int[] { 0, 1, 2, 3, 4, 5, 6, 7, 8 });
    }

    // Component Five : Path cost function
    @Override
    public double getStepCosts(int[] state, Action action, int[] stateDelta) {
        return 1.0;
    }

    // All the methods below are just helper methods which aid the methods above.

    // To move the gap to the right.
    private void moveGapRight(int[] result) {
        int gapPos = getGapPosition(result);
        int x = gapPos / 3;
        int y = gapPos % 3;
        if (y != 2) {
            int valueOnRight = result[x * 3 + y + 1];
            setValue(result, x, y, valueOnRight);
            setValue(result, x, y + 1, 0);
        }
    }

    // To move the gap to the left.
    private void moveGapLeft(int[] result) {
        int gapPos = getGapPosition(result);
        int x = gapPos / 3;
        int y = gapPos % 3;
        if (y != 0) {
            int valueOnLeft = result[x * 3 + (y - 1)];
            setValue(result, x, y, valueOnLeft);
            setValue(result, x, y - 1, 0);
        }
    }

    // To move the gap to the cell below.
    private void moveGapDown(int[] result) {
        int gapPos = getGapPosition(result);
        int x = gapPos / 3;
        int y = gapPos % 3;
        if (x != 2) {
            int valueOnBottom = result[(x + 1) * 3 + y];
            setValue(result, x, y, valueOnBottom);
            setValue(result, x + 1, y, 0);
        }
    }

    // To move the gap to the cell above.
    public void moveGapUp(int[] result) {
        int gapPos = getGapPosition(result);
        int x = gapPos / 3;
        int y = gapPos % 3;
        if (x != 0) {
            int valueOnTop = result[(x - 1) * 3 + y];
            setValue(result, x, y, valueOnTop);
            setValue(result, x - 1, y, 0);
        }
    }

    // To get the current location of the gap
    private int getGapPosition(int[] state) {
        return getPositionOf(state, 0);
    }

    // To get the position of any particular number.
    private int getPositionOf(int[] state, int val) {
        for (int i = 0; i < 9; i++)
            if (state[i] == val)
                return i;
        return -1;
    }

    // To check if we can move the gap to the position specified by where
    public boolean canMoveGap(int[] state, Action where) {
        boolean retVal = true;
        int absPos = getPositionOf(state, 0);
        if (where.equals(Action.LEFT))
            retVal = (absPos % 3 != 0);
        else if (where.equals(Action.RIGHT))
            retVal = (absPos % 3 != 2);
        else if (where.equals(Action.UP))
            retVal = ((absPos / 3) != 0);
        else if (where.equals(Action.DOWN))
            retVal = ((absPos / 3) != 2);
        return retVal;
    }

    // To set the value of a particular cell.
    private void setValue(int[] result, int x, int y, int value) {
        int absPos = x * 3 + y;
        result[absPos] = value;
    }
}
Out[4]:
So, in this way, we can implement a Problem. Now let us see our problem class in action.
In [5]:
import aima.notebooks.classicalsearch.EightPuzzleProblem;
import java.util.*;
int [] initialState = new int[] { 5, 4, 0, 6, 1, 8, 7, 3, 2 };
EightPuzzleProblem problem = new EightPuzzleProblem(initialState);
System.out.println("Initial State = " + Arrays.toString(problem.getInitialState()));
System.out.println("Available Actions = " + problem.getActions(initialState).toString());
System.out.println("Resulting State = " + Arrays.toString(problem.getResult(initialState,problem.getActions(initialState).get(0))));
System.out.println("isGoal\t"+ problem.testGoal(initialState));
Out[5]:
We have seen how to implement the existing Problem interface to create and define custom problems. The flexibility to define custom problems is necessary for experimentation. However, the code repository already includes robust implementations of a variety of search problems and their environments. From here on, we will use these existing implementations, as they are more complete and have been thoroughly tested. Now let's have a look at some common search problems and how they can be used directly from the code repository.
The GeneralProblem class can be used to instantiate the existing problems. The problem parameters are passed as constructor arguments.
In [6]:
import aima.core.environment.eightpuzzle.EightPuzzleBoard;
import aima.core.environment.eightpuzzle.EightPuzzleFunctions;
import aima.core.agent.Action;
import aima.core.search.framework.problem.GoalTest;
import aima.core.search.framework.problem.GeneralProblem;
import aima.core.search.framework.problem.Problem;
import java.util.*;
EightPuzzleBoard board = new EightPuzzleBoard(new int[] { 7, 1, 8, 0, 4, 6, 2, 3, 5 });
Problem<EightPuzzleBoard, Action> problem = new GeneralProblem<>(board, EightPuzzleFunctions::getActions,
EightPuzzleFunctions::getResult,
GoalTest.isEqual(EightPuzzleFunctions.GOAL_STATE));
System.out.println("Initial State = " + Arrays.toString(problem.getInitialState().getState()));
System.out.println("Available Actions = " + problem.getActions(problem.getInitialState()).toString());
System.out.println("Resulting State = " + Arrays.toString(problem.getResult(problem.getInitialState(),
problem.getActions(problem.getInitialState()).get(0))
.getState()));
System.out.println("isGoal\t"+ problem.testGoal(problem.getInitialState()));
Out[6]:
Similarly, the N-Queens problem can be formulated by passing the appropriate functions from NQueensFunctions:
In [7]:
import aima.core.agent.Action;
import aima.core.environment.nqueens.*;
import aima.core.search.framework.problem.*;
import java.util.*;
Problem<NQueensBoard, QueenAction> problem = new GeneralProblem<>(new NQueensBoard(3),
NQueensFunctions::getIFActions,
NQueensFunctions::getResult, NQueensFunctions::testGoal);
System.out.println("Initial State \n" + problem.getInitialState().toString());
System.out.println("Available Actions \n" + problem.getActions(problem.getInitialState()).toString());
System.out.println("\n\nResulting State \n" + problem.getResult(problem.getInitialState(),
problem.getActions(problem.getInitialState()).get(0)).toString());
System.out.println("isGoal\t"+ problem.testGoal(problem.getInitialState()));
Out[7]:
Route-finding algorithms are used in a variety of applications. Some, such as Web sites and in-car systems that provide driving directions, are relatively straightforward extensions of the Romania example. Others, such as routing video streams in computer networks, military operations planning, and airline travel-planning systems, involve much more complex specifications. Now let us formulate the map of Romania Problem:
In [8]:
import aima.core.environment.map.*;
import aima.core.search.framework.problem.*;
Map romaniaMap = new SimplifiedRoadMapOfPartOfRomania();
Problem<String, MoveToAction> problem = new GeneralProblem<>(
SimplifiedRoadMapOfPartOfRomania.ARAD,
MapFunctions.createActionsFunction(romaniaMap),
MapFunctions.createResultFunction(),
GoalTest.isEqual(SimplifiedRoadMapOfPartOfRomania.BUCHAREST),
MapFunctions.createDistanceStepCostFunction(romaniaMap));
System.out.println("Initial State " + problem.getInitialState().toString());
System.out.println("\n\nAvailable Actions \n " + problem.getActions(problem.getInitialState()).toString());
System.out.println("\n\nResulting State " + problem.getResult(problem.getInitialState(),
problem.getActions(problem.getInitialState()).get(0)).toString());
System.out.println("\n\nisGoal\t"+ problem.testGoal(problem.getInitialState()));
Out[8]:
Search algorithms work by considering various possible action sequences. The possible action sequences starting at the initial state form a search tree with the initial state at the root; branches correspond to actions and nodes correspond to states in the state space of the problem. We expand the current state by applying each legal action to it, thereby generating a new set of $b$ states. We add $b$ branches from the parent node leading to $b$ new child nodes. Each of these $b$ child nodes is a leaf node, that is, a node with no children in the tree. The set of all leaf nodes available for expansion at any given point is called the frontier. The process of expanding nodes on the frontier continues until either a goal state is reached or there are no more states to expand.
Node expansion in a tree can be visualized here
Let's have a look at the pseudo code of general Tree Search algorithm.
In [9]:
%%python
from notebookUtils import *
pseudocode('Tree Search and Graph Search')
Out[9]:
Often, newly generated states match previously generated states (known as repeated states), which leads to the generation of redundant paths. We avoid exploring redundant paths by augmenting the TREE-SEARCH algorithm with a data structure called the explored set, which remembers every expanded node. Clearly, the search tree constructed by the GRAPH-SEARCH algorithm contains at most one copy of each state.
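The explored-set idea can be sketched in plain Java (a self-contained toy on an adjacency-map graph; GraphSearchDemo is a hypothetical name, not the repository's GraphSearch):

```java
import java.util.*;

class GraphSearchDemo {
    // Breadth-first GRAPH-SEARCH skeleton: the explored set guarantees that
    // each state enters the frontier at most once, even when the state
    // space contains cycles, so no redundant paths are explored.
    static List<String> reachable(Map<String, List<String>> graph, String start) {
        Set<String> explored = new LinkedHashSet<>();
        Queue<String> frontier = new ArrayDeque<>();
        frontier.add(start);
        explored.add(start);
        while (!frontier.isEmpty()) {
            String state = frontier.poll();
            for (String next : graph.getOrDefault(state, List.of()))
                if (explored.add(next))   // add() returns false for repeated states
                    frontier.add(next);
        }
        return new ArrayList<>(explored);
    }
}
```

On a cyclic graph such as A-B-A, plain tree search would loop forever, while this version terminates after visiting each state once.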
The perspective of a Search Agent can be visualized here.
As per the textbook, for each node $n$ of the tree, we have a structure that contains four components. This four-component structure is implemented in a class named Node.java in the repository.
n.STATE (private S state;): the state in the state space to which the node corresponds;
n.PARENT (private Node<S, A> parent;): the node in the search tree that generated this node;
n.ACTION (private A action;): the action that was applied to the parent to generate the node;
n.PATH-COST (private double pathCost;): the cost, traditionally denoted by g(n), of the path from the initial state to the node, as indicated by the parent pointers.
These pointers also allow the solution path to be extracted when a goal node is found; we use the SOLUTION function to return the sequence of actions obtained by following parent pointers back to the root.
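A minimal, self-contained version of this node structure, with a SOLUTION helper that follows parent pointers, might look as follows (SearchNode is a hypothetical name; the repository's Node.java is the real implementation):

```java
import java.util.*;

// A node bundles the four components: state, parent, action, and path cost.
class SearchNode<S, A> {
    final S state;                  // n.STATE
    final SearchNode<S, A> parent;  // n.PARENT (null for the root)
    final A action;                 // n.ACTION (null for the root)
    final double pathCost;          // n.PATH-COST, g(n)

    SearchNode(S state, SearchNode<S, A> parent, A action, double stepCost) {
        this.state = state;
        this.parent = parent;
        this.action = action;
        this.pathCost = (parent == null ? 0 : parent.pathCost) + stepCost;
    }

    // SOLUTION: follow parent pointers back to the root, then reverse
    // to obtain the action sequence from the initial state to this node.
    List<A> solution() {
        List<A> actions = new ArrayList<>();
        for (SearchNode<S, A> n = this; n.parent != null; n = n.parent)
            actions.add(n.action);
        Collections.reverse(actions);
        return actions;
    }
}
```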
The appropriate data structure for the frontier is a queue. The operations on a queue are as follows:
protected abstract void addToFrontier(Node<S, A> node);
INSERT(element, queue): a primitive operation that inserts the node at the tail of the frontier.
protected abstract Node<S, A> removeFromFrontier();
POP(queue): a primitive operation that removes and returns the node at the head of the frontier.
protected abstract boolean isFrontierEmpty();
EMPTY?(queue): a primitive operation that checks whether the frontier contains any not-yet-expanded nodes.
Breadth-first search is a simple search strategy in which all the nodes at a given depth are expanded before any nodes at the next level are expanded. Therefore, it always chooses the shallowest unexpanded node for expansion, by using a FIFO queue for the frontier.
Here, the goal test is applied to each node when it is generated rather than when it is selected for expansion.
Note that the shallowest goal node is not necessarily the optimal one. Breadth-first search is optimal only if the path cost is a nondecreasing function of the depth of the node.
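As a self-contained illustration, here is breadth-first search on a small adjacency-map graph, applying the goal test at generation time (BfsDemo is a toy sketch, not the repository's BreadthFirstSearch):

```java
import java.util.*;

class BfsDemo {
    // Breadth-first search with a FIFO frontier. The goal test is applied
    // when a node is generated; parent pointers are kept in a map so the
    // path can be reconstructed. Returns the states on the path, or null.
    static List<String> bfs(Map<String, List<String>> graph, String start, String goal) {
        if (start.equals(goal)) return List.of(start);
        Map<String, String> parent = new HashMap<>();
        parent.put(start, null);
        Queue<String> frontier = new ArrayDeque<>(List.of(start));
        while (!frontier.isEmpty()) {
            String state = frontier.poll();
            for (String next : graph.getOrDefault(state, List.of())) {
                if (!parent.containsKey(next)) {   // skip repeated states
                    parent.put(next, state);
                    if (next.equals(goal)) {
                        // Reconstruct the path by following parent pointers.
                        Deque<String> path = new ArrayDeque<>();
                        for (String s = next; s != null; s = parent.get(s))
                            path.addFirst(s);
                        return new ArrayList<>(path);
                    }
                    frontier.add(next);
                }
            }
        }
        return null; // failure
    }
}
```

Because the frontier is FIFO, the returned path is always a shallowest one.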
Working of Breadth First Search algorithm can be visualized here.
Let's have a look at the pseudo code of Breadth-First Search algorithm and then its implementation.
In [10]:
%%python
from notebookUtils import *
pseudocode('Breadth First Search')
Out[10]:
In [11]:
import aima.core.environment.map.*;
import aima.core.search.framework.problem.*;
import aima.core.search.uninformed.BreadthFirstSearch;
Map romaniaMap = new SimplifiedRoadMapOfPartOfRomania();
Problem<String, MoveToAction> problem = new GeneralProblem<>(
SimplifiedRoadMapOfPartOfRomania.ARAD,
MapFunctions.createActionsFunction(romaniaMap),
MapFunctions.createResultFunction(),
GoalTest.isEqual(SimplifiedRoadMapOfPartOfRomania.BUCHAREST),
MapFunctions.createDistanceStepCostFunction(romaniaMap));
BreadthFirstSearch breadthFirstSearch = new BreadthFirstSearch();
System.out.println(breadthFirstSearch.findActions(problem));
Out[11]:
In [12]:
import aima.core.environment.map.*;
import aima.core.search.framework.problem.*;
import aima.core.search.framework.*;
import aima.core.search.framework.qsearch.*;
import java.util.Queue;
QueueFactory queueFactory = new QueueFactory();
Queue<Node<String, MoveToAction>> frontier;
Map romaniaMap = new SimplifiedRoadMapOfPartOfRomania();
Problem<String, MoveToAction> problem = new GeneralProblem<>(
SimplifiedRoadMapOfPartOfRomania.ARAD,
MapFunctions.createActionsFunction(romaniaMap),
MapFunctions.createResultFunction(),
GoalTest.isEqual(SimplifiedRoadMapOfPartOfRomania.BUCHAREST),
MapFunctions.createDistanceStepCostFunction(romaniaMap));
GraphSearchBFS graphSearchBFS = new GraphSearchBFS();
frontier = QueueFactory.createFifoQueue();
System.out.println(graphSearchBFS.findNode(problem, frontier));
Out[12]:
Instead of expanding the shallowest node, uniform-cost search expands the node $n$ with the lowest path cost $g(n)$. This is done by storing the frontier as a priority queue ordered by $g$. There are two significant differences from breadth-first search. The first is that the goal test is applied to a node when it is selected for expansion. The second is that a test is added in case a better path is found to a node currently on the frontier.
Uniform Cost Search does not care about the number of steps a path has, but only about their total cost. Therefore, it expands nodes in order of their optimal path cost. Hence, the first goal node selected for expansion must be the optimal solution.
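The core of the algorithm can be sketched with java.util.PriorityQueue (a toy sketch; UcsDemo is a hypothetical name, the step costs in the test below are road distances from the textbook's Romania map, and a lazy-deletion trick stands in for the textbook's frontier-update test):

```java
import java.util.*;

class UcsDemo {
    // Uniform-cost search: the frontier is a priority queue ordered by the
    // path cost g(n); the goal test is applied when a node is selected for
    // expansion, not when it is generated.
    static double cheapestCost(Map<String, Map<String, Double>> graph,
                               String start, String goal) {
        Set<String> expanded = new HashSet<>();
        Comparator<Map.Entry<String, Double>> byG = Map.Entry.comparingByValue();
        PriorityQueue<Map.Entry<String, Double>> frontier = new PriorityQueue<>(byG);
        frontier.add(Map.entry(start, 0.0));
        while (!frontier.isEmpty()) {
            Map.Entry<String, Double> e = frontier.poll();
            String state = e.getKey();
            double g = e.getValue();
            // Skip stale entries: a cheaper path to this state was already
            // expanded. This replaces the textbook's in-place frontier update.
            if (!expanded.add(state)) continue;
            if (state.equals(goal)) return g;   // goal test on expansion
            for (Map.Entry<String, Double> edge : graph.getOrDefault(state, Map.of()).entrySet())
                frontier.add(Map.entry(edge.getKey(), g + edge.getValue()));
        }
        return Double.POSITIVE_INFINITY; // failure
    }
}
```

On the Romania fragment, the cheapest Arad-to-Bucharest route goes through Rimnicu Vilcea and Pitesti (cost 418), not through Fagaras (cost 450), even though the latter has fewer steps.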
The difference in the working of Breadth First search algorithm and Uniform Cost search algorithm can be visualized here.
Let's have a look at the pseudo code of uniform cost search and then its implementation.
In [13]:
%%python
from notebookUtils import *
pseudocode('Uniform Cost Search')
Out[13]:
In [14]:
import aima.core.environment.map.*;
import aima.core.search.framework.problem.*;
import aima.core.search.uninformed.UniformCostSearch;
Map romaniaMap = new SimplifiedRoadMapOfPartOfRomania();
Problem<String, MoveToAction> problem = new GeneralProblem<>(
SimplifiedRoadMapOfPartOfRomania.ARAD,
MapFunctions.createActionsFunction(romaniaMap),
MapFunctions.createResultFunction(),
GoalTest.isEqual(SimplifiedRoadMapOfPartOfRomania.BUCHAREST),
MapFunctions.createDistanceStepCostFunction(romaniaMap));
UniformCostSearch uniformCostSearch = new UniformCostSearch();
System.out.println(uniformCostSearch.findActions(problem));
Out[14]:
The depth-first search algorithm always expands the deepest node in the current frontier of the search tree. It uses a LIFO queue for the frontier, which means that the most recently generated node is chosen for expansion. The properties of depth-first search depend strongly on whether the tree-search or graph-search version is used. The graph-search version, which avoids repeated states and redundant paths, is complete in finite state spaces because it will eventually expand every node. The tree-search version, on the other hand, is not complete.
The depth-first search algorithm is implemented here and its working can be visualized here.
The failure of depth-first search in infinite state spaces can be alleviated by supplying it with a predetermined depth limit $l$, i.e., nodes at depth $l$ are treated as if they have no successors. Unfortunately, this also introduces an additional source of incompleteness if the shallowest goal node is beyond the chosen depth limit.
Let's have a look at its pseudo code. Notice that depth-limited search can terminate with two kinds of failure: the standard failure value indicates no solution; the cutoff value indicates no solution within the depth limit.
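The failure/cutoff distinction can be sketched recursively (DlsDemo is a hypothetical toy on an adjacency-map graph, not the repository's implementation):

```java
import java.util.*;

class DlsDemo {
    static final String CUTOFF = "cutoff", FAILURE = "failure", SUCCESS = "success";

    // Recursive depth-limited search. Returns "success" if the goal is
    // reachable within the limit, "cutoff" if the limit was hit somewhere
    // (so a deeper search might still succeed), and "failure" otherwise.
    static String dls(Map<String, List<String>> graph, String state, String goal, int limit) {
        if (state.equals(goal)) return SUCCESS;
        if (limit == 0) return CUTOFF;
        boolean cutoffOccurred = false;
        for (String next : graph.getOrDefault(state, List.of())) {
            String result = dls(graph, next, goal, limit - 1);
            if (result.equals(CUTOFF)) cutoffOccurred = true;
            else if (result.equals(SUCCESS)) return SUCCESS;
        }
        return cutoffOccurred ? CUTOFF : FAILURE;
    }
}
```

Note how "cutoff" propagates upward only when some branch actually hit the limit, so "failure" is reported only when the whole space within the limit has been exhausted.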
This algorithm is implemented here and it's working can be visualized here.
In [15]:
%%python
from notebookUtils import *
pseudocode('Depth Limited Search')
Out[15]:
Iterative deepening search is a general strategy, often used in combination with depth-first search, that finds the best depth limit. It calls depth-first search with increasing depth limits until a goal is found. It is complete, optimal for unit step costs, has time complexity comparable to breadth-first search, and has linear space complexity.
Iterative deepening search is analogous to breadth-first search in that it explores a complete layer of new nodes at each iteration before going on to the next layer. In general, iterative deepening is the preferred uninformed search method when the search space is large and the depth of the solution is unknown.
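A compact sketch of the iterative deepening loop over a depth-limited DFS (toy code assuming an adjacency-map graph; IdsDemo is a hypothetical name, not the repository's IterativeDeepeningSearch):

```java
import java.util.*;

class IdsDemo {
    // Iterative deepening: run depth-limited DFS with limits 0, 1, 2, ...
    // until the goal is found. Returns the depth at which it was found,
    // or -1 if it was not found within maxDepth.
    static int iterativeDeepening(Map<String, List<String>> graph,
                                  String start, String goal, int maxDepth) {
        for (int limit = 0; limit <= maxDepth; limit++)
            if (depthLimited(graph, start, goal, limit)) return limit;
        return -1;
    }

    // Boolean depth-limited DFS used as the inner search.
    private static boolean depthLimited(Map<String, List<String>> graph,
                                        String state, String goal, int limit) {
        if (state.equals(goal)) return true;
        if (limit == 0) return false;
        for (String next : graph.getOrDefault(state, List.of()))
            if (depthLimited(graph, next, goal, limit - 1)) return true;
        return false;
    }
}
```

Because the limit grows one level at a time, the returned depth is the depth of a shallowest goal, mirroring breadth-first behavior with only linear space.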
Working of Iterative deepening search can be visualized here.
Let's have a look at its pseudo code and implementation on Romania map problem.
In [16]:
%%python
from notebookUtils import *
pseudocode('Iterative Deepening Search')
Out[16]:
In [17]:
import aima.core.environment.map.*;
import aima.core.search.framework.problem.*;
import aima.core.search.uninformed.IterativeDeepeningSearch;
Map romaniaMap = new SimplifiedRoadMapOfPartOfRomania();
Problem<String, MoveToAction> problem = new GeneralProblem<>(
SimplifiedRoadMapOfPartOfRomania.ARAD,
MapFunctions.createActionsFunction(romaniaMap),
MapFunctions.createResultFunction(),
GoalTest.isEqual(SimplifiedRoadMapOfPartOfRomania.BUCHAREST),
MapFunctions.createDistanceStepCostFunction(romaniaMap));
IterativeDeepeningSearch iterativeDeepeningSearch = new IterativeDeepeningSearch();
System.out.println(iterativeDeepeningSearch.findActions(problem));
Out[17]:
The idea behind bidirectional search is to run two simultaneous searches: one forward from the initial state and the other backward from the goal, hoping that the two searches meet in the middle. This is implemented by replacing the goal test with a check to see whether the frontiers of the two searches intersect; if they do, a solution has been found. This can enormously reduce time complexity, but it is not always applicable and may require too much space.
Working of Bidirectional search (using breadth-first search in both the directions) can be visualized here and this algorithm is implemented here.
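The frontier-intersection check can be sketched with two breadth-first layers growing toward each other (a toy sketch on an undirected adjacency map; BidirectionalDemo is a hypothetical name):

```java
import java.util.*;

class BidirectionalDemo {
    // Bidirectional search sketch: grow BFS layers from both the start and
    // the goal, and stop as soon as the two visited sets intersect.
    static boolean connected(Map<String, List<String>> graph, String start, String goal) {
        Set<String> fwd = new HashSet<>(Set.of(start));
        Set<String> bwd = new HashSet<>(Set.of(goal));
        Set<String> fwdFrontier = new HashSet<>(Set.of(start));
        Set<String> bwdFrontier = new HashSet<>(Set.of(goal));
        while (!fwdFrontier.isEmpty() && !bwdFrontier.isEmpty()) {
            // The goal test is replaced by a frontier-intersection check.
            if (!Collections.disjoint(fwd, bwd)) return true;
            // Expand the smaller frontier one layer (a common optimization).
            if (fwdFrontier.size() <= bwdFrontier.size())
                fwdFrontier = expand(graph, fwdFrontier, fwd);
            else
                bwdFrontier = expand(graph, bwdFrontier, bwd);
        }
        return !Collections.disjoint(fwd, bwd);
    }

    // Expands one whole BFS layer, adding newly seen states to `visited`.
    private static Set<String> expand(Map<String, List<String>> graph,
                                      Set<String> frontier, Set<String> visited) {
        Set<String> next = new HashSet<>();
        for (String s : frontier)
            for (String n : graph.getOrDefault(s, List.of()))
                if (visited.add(n)) next.add(n);
        return next;
    }
}
```

Searching backward this way assumes the actions are reversible (here, an undirected graph); with irreversible actions the backward search needs explicit predecessor functions.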
Informed Searches include strategies that use problem-specific knowledge beyond the definition of the problem itself. These strategies can find a solution more efficiently than uninformed strategies.
The general approach we consider here is best-first search. It is an instance of the general TREE-SEARCH or GRAPH-SEARCH algorithm in which a node is selected for expansion based on an evaluation function $f(n)$. Heuristic functions $h(n)$ are the most common form in which additional knowledge of the problem is imparted to the search algorithm. For now, we consider the heuristic function to be an arbitrary, non-negative, problem-specific function, with one constraint: if $n$ is a goal node, then $h(n) = 0$. Most best-first algorithms include a heuristic function $h(n)$ as a component of $f$.
$h(n)$ = estimated cost of the cheapest path from the state at node $n$ to a goal state.
Best-First Search algorithm is implemented here
This algorithm tries to expand the node that is closest to the goal. Thus, it evaluates nodes by using just the heuristic function; that is, $f(n) = h(n)$. The algorithm is called "greedy" because at each step it tries to get as close to the goal as it can. It is not always optimal, and it is sometimes incomplete even in a finite state space.
Greedy best-first search algorithm is implemented here. Let's use this algorithm to solve 8Puzzle Problem.
If we want to find the shortest solution, we need a heuristic function that never overestimates the number of steps to the goal (an admissible heuristic). Here are two commonly used heuristic functions for this problem:
$h1$ = the number of misplaced tiles. $h1$ is an admissible heuristic because any tile that is out of place must be moved at least once.
$h2$ = the sum of the distances of the tiles from their goal positions. As tiles cannot move along diagonals, the distance we count is the sum of the horizontal and vertical distances. This is sometimes called the city-block distance or Manhattan distance. $h2$ is also admissible because all any move can do is move one tile one step closer to the goal.
As expected, neither of these overestimates the true solution cost.
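Both heuristics are easy to compute directly. Here is a self-contained sketch, assuming the same goal configuration used by EightPuzzleProblem earlier in this notebook (blank in the top-left); PuzzleHeuristics is an illustrative name, not the repository's EightPuzzleFunctions:

```java
class PuzzleHeuristics {
    // Goal used here: 0 (the blank) in the top-left, tiles 1..8 in order.
    static final int[] GOAL = {0, 1, 2, 3, 4, 5, 6, 7, 8};

    // h1: the number of misplaced tiles (the blank is not counted,
    // otherwise the heuristic could overestimate).
    static int misplacedTiles(int[] state) {
        int count = 0;
        for (int i = 0; i < 9; i++)
            if (state[i] != 0 && state[i] != GOAL[i])
                count++;
        return count;
    }

    // h2: the sum of Manhattan (city-block) distances of the tiles from
    // their goal positions; again the blank is not counted.
    static int manhattan(int[] state) {
        int sum = 0;
        for (int i = 0; i < 9; i++) {
            int tile = state[i];
            if (tile == 0) continue;
            int goalPos = tile; // with this goal, tile t belongs at index t
            sum += Math.abs(i / 3 - goalPos / 3) + Math.abs(i % 3 - goalPos % 3);
        }
        return sum;
    }
}
```

Note that $h2 \geq h1$ on every state, since each misplaced tile contributes at least 1 to the Manhattan sum; such a uniformly larger admissible heuristic is said to dominate the smaller one.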
Now, let's use Greedy Best-First Search algorithm to solve Eight Puzzle Problem.
In [28]:
import aima.core.agent.Action;
import aima.core.environment.eightpuzzle.EightPuzzleBoard;
import aima.core.environment.eightpuzzle.EightPuzzleFunctions;
import aima.core.search.agent.SearchAgent;
import aima.core.search.framework.Node;
import aima.core.search.framework.SearchForActions;
import aima.core.search.framework.problem.GeneralProblem;
import aima.core.search.framework.problem.GoalTest;
import aima.core.search.framework.problem.Problem;
import aima.core.search.framework.qsearch.GraphSearch;
import aima.core.search.informed.GreedyBestFirstSearch;
import java.util.Arrays;
import java.util.List;
EightPuzzleBoard board = new EightPuzzleBoard(new int[] { 1 ,7 ,4 ,3 ,0 ,2 ,6 ,8 ,5 });
Problem<EightPuzzleBoard, Action> problem = new GeneralProblem<>(board,
EightPuzzleFunctions::getActions,
EightPuzzleFunctions::getResult,
GoalTest.isEqual(EightPuzzleFunctions.GOAL_STATE));
// By Misplaced Tiles Heuristic function (h1)
int[] state = problem.getInitialState().getState();
EightPuzzleFunctions eightPuzzleFunctions = new EightPuzzleFunctions();
try {
SearchForActions<EightPuzzleBoard, Action> search = new GreedyBestFirstSearch<>(new GraphSearch<>(),
EightPuzzleFunctions.createMisplacedTileHeuristicFunction());
SearchAgent<EightPuzzleBoard, Action> agent = new SearchAgent<>(problem, search);
List<Action> actions = agent.getActions().subList(0, agent.getActions().size());
for (int i=0;i<actions.size();i++){
System.out.print("Current State = " + Arrays.toString(state));
EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
System.out.print(" Misplaced Tiles Heuristic = " + eightPuzzleFunctions.createMisplacedTileHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
System.out.print(" isGoal = "+ problem.testGoal(eightPuzzleBoard));
System.out.print(" Path length = " + i);
System.out.println(" " + actions.get(i));
state = problem.getResult(eightPuzzleBoard, actions.get(i)).getState();
System.out.println();
}
EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
System.out.print("Final State = " + Arrays.toString(state));
System.out.print(" Misplaced Tiles Heuristic = " + eightPuzzleFunctions.createMisplacedTileHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
System.out.print(" isGoal = "+ problem.testGoal(eightPuzzleBoard));
System.out.print(" Path length = " + agent.getActions().size());
} catch (Exception e) {
e.printStackTrace();
}
Out[28]:
In [29]:
import aima.core.agent.Action;
import aima.core.environment.eightpuzzle.EightPuzzleBoard;
import aima.core.environment.eightpuzzle.EightPuzzleFunctions;
import aima.core.search.agent.SearchAgent;
import aima.core.search.framework.Node;
import aima.core.search.framework.SearchForActions;
import aima.core.search.framework.problem.GeneralProblem;
import aima.core.search.framework.problem.GoalTest;
import aima.core.search.framework.problem.Problem;
import aima.core.search.framework.qsearch.GraphSearch;
import aima.core.search.informed.GreedyBestFirstSearch;
import java.util.Arrays;
import java.util.List;
EightPuzzleBoard board = new EightPuzzleBoard(new int[] { 1 ,7 ,4 ,3 ,0 ,2 ,6 ,8 ,5 });
Problem<EightPuzzleBoard, Action> problem = new GeneralProblem<>(board,
EightPuzzleFunctions::getActions,
EightPuzzleFunctions::getResult,
GoalTest.isEqual(EightPuzzleFunctions.GOAL_STATE));
// By Manhattan Heuristic function (h2)
int[] state = problem.getInitialState().getState();
EightPuzzleFunctions eightPuzzleFunctions = new EightPuzzleFunctions();
try {
SearchForActions<EightPuzzleBoard, Action> search = new GreedyBestFirstSearch<>(new GraphSearch<>(),
EightPuzzleFunctions.createManhattanHeuristicFunction());
SearchAgent<EightPuzzleBoard, Action> agent = new SearchAgent<>(problem, search);
List<Action> actions = agent.getActions().subList(0, agent.getActions().size());
for (int i=0;i<actions.size();i++){
System.out.print("Current State = " + Arrays.toString(state));
EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
System.out.print(" Manhattan Heuristic = " + eightPuzzleFunctions.createManhattanHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
System.out.print(" isGoal = "+ problem.testGoal(eightPuzzleBoard));
System.out.print(" Path length = " + i);
System.out.println(" " + actions.get(i));
state = problem.getResult(eightPuzzleBoard, actions.get(i)).getState();
System.out.println();
}
EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
System.out.print("Final State = " + Arrays.toString(state));
System.out.print(" Manhattan Heuristic = " + eightPuzzleFunctions.createManhattanHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
System.out.print(" isGoal = "+ problem.testGoal(eightPuzzleBoard));
System.out.print(" Path length = " + agent.getActions().size());
} catch (Exception e) {
e.printStackTrace();
}
Out[29]:
This algorithm evaluates nodes by combining $g(n)$, the cost to reach the node, and $h(n)$, the estimated cost to get from the node to the goal:
$f(n) = g(n) + h(n)$
Since $g(n)$ gives the path cost from the start node to the node $n$, and $h(n)$ is the estimated cost of the cheapest path from $n$ to the goal, we have $f(n)$ = estimated cost of the cheapest solution through n . A$*$ search is both complete and optimal, provided that $h(n)$ is admissible (for TREE SEARCH) or consistent (for GRAPH SEARCH). The algorithm is identical to UNIFORM-COST SEARCH except that A$*$ uses $g+h$ instead of $g$.
Whenever A* selects a node for expansion, the optimal path to that node has been found.
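The evaluation function can be sketched on top of the uniform-cost skeleton by ordering the frontier on $g + h$ (toy code; AStarDemo is a hypothetical name, not the repository's AStarSearch, and the step costs and straight-line-distance heuristic values in the test below are from the textbook's Romania map):

```java
import java.util.*;

class AStarDemo {
    // A* search sketch: the frontier is a priority queue ordered by
    // f(n) = g(n) + h(n). As in the uniform-cost sketch, stale queue
    // entries are skipped instead of being updated in place.
    static double aStar(Map<String, Map<String, Double>> graph, Map<String, Double> h,
                        String start, String goal) {
        Set<String> expanded = new HashSet<>();
        Comparator<Map.Entry<String, Double>> byF = Comparator.comparingDouble(
                e -> e.getValue() + h.getOrDefault(e.getKey(), 0.0));
        PriorityQueue<Map.Entry<String, Double>> frontier = new PriorityQueue<>(byF);
        frontier.add(Map.entry(start, 0.0));
        while (!frontier.isEmpty()) {
            Map.Entry<String, Double> entry = frontier.poll();
            String state = entry.getKey();
            double g = entry.getValue();
            if (!expanded.add(state)) continue; // already expanded via a cheaper path
            if (state.equals(goal)) return g;   // goal test on expansion, as in UCS
            for (Map.Entry<String, Double> edge : graph.getOrDefault(state, Map.of()).entrySet())
                frontier.add(Map.entry(edge.getKey(), g + edge.getValue()));
        }
        return Double.POSITIVE_INFINITY; // failure
    }
}
```

With $h \equiv 0$ this reduces exactly to uniform-cost search; with the straight-line-distance heuristic it reaches Bucharest after expanding far fewer nodes, while still returning the optimal cost of 418.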
The A$*$ algorithm is implemented here and can be visualized here. Now, let's solve the 8Puzzle Problem using the A$*$ algorithm.
In [30]:
import aima.core.environment.eightpuzzle.EightPuzzleBoard;
import aima.core.environment.eightpuzzle.EightPuzzleFunctions;
import aima.core.agent.Action;
import aima.core.search.agent.SearchAgent;
import aima.core.search.framework.Node;
import aima.core.search.framework.SearchForActions;
import aima.core.search.framework.problem.GoalTest;
import aima.core.search.framework.problem.GeneralProblem;
import aima.core.search.framework.problem.Problem;
import aima.core.search.framework.qsearch.GraphSearch;
import aima.core.search.informed.AStarSearch;
import java.util.*;
EightPuzzleBoard board = new EightPuzzleBoard(new int[] { 1 ,7 ,4 ,3 ,0 ,2 ,6 ,8 ,5 });
Problem<EightPuzzleBoard, Action> problem = new GeneralProblem<>(board,
EightPuzzleFunctions::getActions,
EightPuzzleFunctions::getResult,
GoalTest.isEqual(EightPuzzleFunctions.GOAL_STATE));
int[] state = problem.getInitialState().getState();
EightPuzzleFunctions eightPuzzleFunctions = new EightPuzzleFunctions();
// By Misplaced Tiles Heuristic function
try {
SearchForActions<EightPuzzleBoard, Action> search = new AStarSearch<>(new GraphSearch<>(),
EightPuzzleFunctions.createMisplacedTileHeuristicFunction());
SearchAgent<EightPuzzleBoard, Action> agent = new SearchAgent<>(problem, search);
List<Action> actions = agent.getActions().subList(0, agent.getActions().size());
for (int i=0;i<actions.size();i++){
System.out.print("Current State = " + Arrays.toString(state));
EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
System.out.print(" Misplaced Tiles Heuristic = " + eightPuzzleFunctions.createMisplacedTileHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
System.out.print(" isGoal = "+ problem.testGoal(eightPuzzleBoard));
System.out.print(" Path length = " + i);
System.out.println(" " + actions.get(i));
state = problem.getResult(eightPuzzleBoard, actions.get(i)).getState();
System.out.println();
}
EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
System.out.print("Final State = " + Arrays.toString(state));
System.out.print(" Misplaced Tiles Heuristic = " + eightPuzzleFunctions.createMisplacedTileHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
System.out.print(" isGoal = "+ problem.testGoal(eightPuzzleBoard));
System.out.print(" Path length = " + agent.getActions().size());
} catch (Exception e) {
e.printStackTrace();
}
Out[30]:
In [32]:
import aima.core.environment.eightpuzzle.EightPuzzleBoard;
import aima.core.environment.eightpuzzle.EightPuzzleFunctions;
import aima.core.agent.Action;
import aima.core.search.agent.SearchAgent;
import aima.core.search.framework.Node;
import aima.core.search.framework.SearchForActions;
import aima.core.search.framework.problem.GoalTest;
import aima.core.search.framework.problem.GeneralProblem;
import aima.core.search.framework.problem.Problem;
import aima.core.search.framework.qsearch.GraphSearch;
import aima.core.search.informed.AStarSearch;
import java.util.*;
EightPuzzleBoard board = new EightPuzzleBoard(new int[] { 1 ,7 ,4 ,3 ,0 ,2 ,6 ,8 ,5 });
Problem<EightPuzzleBoard, Action> problem = new GeneralProblem<>(board,
EightPuzzleFunctions::getActions,
EightPuzzleFunctions::getResult,
GoalTest.isEqual(EightPuzzleFunctions.GOAL_STATE));
int[] state = problem.getInitialState().getState();
EightPuzzleFunctions eightPuzzleFunctions = new EightPuzzleFunctions();
// By Manhattan Heuristic function
try {
SearchForActions<EightPuzzleBoard, Action> search = new AStarSearch<>(new GraphSearch<>(),
EightPuzzleFunctions.createManhattanHeuristicFunction());
SearchAgent<EightPuzzleBoard, Action> agent = new SearchAgent<>(problem, search);
List<Action> actions = agent.getActions().subList(0, agent.getActions().size());
for (int i=0;i<actions.size();i++){
System.out.print("Current State = " + Arrays.toString(state));
EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
System.out.print(" Manhattan Heuristic = " + eightPuzzleFunctions.createManhattanHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
System.out.print(" isGoal = "+ problem.testGoal(eightPuzzleBoard));
System.out.print(" Path length = " + i);
System.out.println(" " + actions.get(i));
state = problem.getResult(eightPuzzleBoard, actions.get(i)).getState();
System.out.println();
}
EightPuzzleBoard eightPuzzleBoard = new EightPuzzleBoard(state);
System.out.print("Final State = " + Arrays.toString(state));
System.out.print(" Manhattan Heuristic = " + eightPuzzleFunctions.createManhattanHeuristicFunction().applyAsDouble(new Node<>(eightPuzzleBoard)));
System.out.print(" isGoal = "+ problem.testGoal(eightPuzzleBoard));
System.out.print(" Path length = " + agent.getActions().size());
} catch (Exception e) {
e.printStackTrace();
}
Out[32]:
In [ ]: