How Does the Randomized Hill Climbing Algorithm Work?

  • Hill climbing is a technique for certain classes of optimization problems. The idea is to start with a sub-optimal solution to a problem (i.e., start at the base of a hill) and then repeatedly improve the solution (walk up the hill) until some condition is maximized (the top of the hill is reached).
  • In a hill-climbing heuristic, you start with an initial solution, generate one or more neighboring solutions, pick the best, and continue until no neighboring solution is better. This generally yields a single local optimum. To apply hill climbing, we need two ingredients: a way to evaluate a solution and a way to generate a "neighbor" of a solution.
  • Hill climbing is a simple search optimization algorithm. It starts at a vector location and evaluates the moves that can be made from that vector; whichever move yields the largest improvement in score is taken. Hill climbing is very susceptible to local minima and maxima: once it reaches a point from which no move improves the solution, the algorithm is complete.
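The steps described above can be sketched as a minimal, generic hill climber. This is an illustrative sketch rather than the notebook's implementation; the names `hill_climb`, `score`, and `neighbors`, and the quadratic test function, are all invented for the demo:

```python
def hill_climb(score, start, neighbors, max_steps=1000):
    """Generic hill climbing: repeatedly move to the best neighboring
    solution until no neighbor improves the score (here we maximize)."""
    current = start
    for _ in range(max_steps):
        best = max(neighbors(current), key=score)
        if score(best) <= score(current):
            return current  # no better neighbor: a local maximum was reached
        current = best
    return current

# Maximize f(x) = -(x - 3)^2; the global maximum is at x = 3.
f = lambda x: -(x - 3) ** 2
step = 0.1
result = hill_climb(f, start=0.0, neighbors=lambda x: [x - step, x + step])
print(round(result, 1))  # climbs from 0.0 up to x ≈ 3.0
```

Note the two ingredients the bullet points call for: the score function `f` evaluates a solution, and the `neighbors` callable generates candidate moves (here, one small step left or right).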

Advantages:

  • Hill climbing is good for finding a local optimum (a solution that cannot be improved by considering any neighboring configuration).
  • In convex problems, hill-climbing is optimal. Examples of algorithms that solve convex problems by hill-climbing include the simplex algorithm for linear programming and binary search.

Disadvantages:

  • Hill climbing is not guaranteed to find the best possible solution (the global optimum) out of all possible solutions (the search space); it can terminate at a local optimum instead.
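This weakness is easy to demonstrate: a greedy climber started in the basin of a smaller peak stops there and never reaches the global maximum. The bimodal function and the names below are invented for this sketch:

```python
import math

def hill_climb_max(score, start, step=0.1, max_steps=1000):
    """1-D greedy hill climbing that maximizes `score` by stepping left or right."""
    x = start
    for _ in range(max_steps):
        best = max((x - step, x + step), key=score)
        if score(best) <= score(x):
            return x  # neither direction improves: stuck at a local maximum
        x = best
    return x

# Bimodal score: a small peak near x = -1 and a taller (global) peak near x = 2.
f = lambda x: math.exp(-(x + 1) ** 2) + 2 * math.exp(-(x - 2) ** 2)

r1 = hill_climb_max(f, start=-2.0)  # climbs the small peak and stops near -1
r2 = hill_climb_max(f, start=1.0)   # climbs the tall peak and stops near 2
print(round(r1, 1), round(r2, 1))
```

The final answer depends entirely on the starting point, which is exactly why randomized restarts are combined with hill climbing in practice.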

Main Function:


In [1]:
%pylab inline


Populating the interactive namespace from numpy and matplotlib

Code Source:


In [2]:
import sys
import numpy as np

In [4]:
class Train(object):
    """ Basic training class.  Allows for either minimization or maximization, though not all implementations
    support both.
    """
    def __init__(self, goal_minimize=True):
        self.max_iterations = 100000
        self.position = []
        self.best_score = 0
        self.goal_minimize = goal_minimize
        self.display_final = True
        self.display_iteration = False
        self.stop_score = None

    def better_than(self, is_this, than_that):
        """Determine if one score is better than the other, based on minimization settings.
        @param is_this: The first score to compare.
        @param than_that: The second score to compare.
        @return: True, if the first score is better than the second.
        """
        if self.goal_minimize:
            return is_this < than_that
        else:
            return is_this > than_that

    def should_stop(self, iteration, best_score):
        """ Determine if we should stop.
        @param iteration: The current iteration.
        @param best_score: The current best score.
        @return: True, if we should stop.
        """
        if iteration > self.max_iterations:
            return True

        if self.stop_score is not None:
            if self.better_than(best_score, self.stop_score):
                return True

        return False
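To see how the two helper methods behave, here is a quick demo. For self-containment it re-defines a condensed copy of the `Train` class above (minus the display flags); the scores and thresholds are arbitrary:

```python
class Train:
    """Condensed copy of the Train class above, for a standalone demo."""
    def __init__(self, goal_minimize=True):
        self.max_iterations = 100000
        self.goal_minimize = goal_minimize
        self.stop_score = None

    def better_than(self, is_this, than_that):
        return is_this < than_that if self.goal_minimize else is_this > than_that

    def should_stop(self, iteration, best_score):
        if iteration > self.max_iterations:
            return True
        return self.stop_score is not None and self.better_than(best_score, self.stop_score)

minimizer = Train(goal_minimize=True)
print(minimizer.better_than(1.0, 2.0))   # True: when minimizing, lower is better

maximizer = Train(goal_minimize=False)
print(maximizer.better_than(1.0, 2.0))   # False: when maximizing, higher is better

# stop_score acts as an early-exit threshold: stop once the best score beats it.
minimizer.stop_score = 0.01
print(minimizer.should_stop(5, 0.005))   # True: 0.005 beats the 0.01 threshold
```

The same `better_than` test drives both the candidate comparison inside training and the early-stopping check, which is what lets one trainer serve both minimization and maximization.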

In [5]:
class TrainHillClimb(Train):
    """
        Train using hill climbing.  Hill climbing can be used to optimize the long-term memory of a machine
        learning algorithm.  This is done by moving the current long-term memory values to a new location if
        that new location gives a better score from the scoring function.
        http://en.wikipedia.org/wiki/Hill_climbing
    """
    def __init__(self, goal_minimize=True):
        Train.__init__(self, goal_minimize)

    def train(self, x0, funct, acceleration=1.2, step_size=1.0):
        """
        Train up to the specified maximum number of iterations using hill climbing.
        @param x0: The initial vector for long-term memory.
        @param funct: The score function.  We attempt to minimize or maximize this.
        @param acceleration: The acceleration (default=1.2)
        @param step_size: The step size (default=1.0)
        @return: The trained long-term memory vector.
        """
        iteration_number = 1
        self.position = list(x0)
        self.best_score = funct(self.position)

        step_size = [step_size] * len(x0)

        # Candidate step multipliers for each dimension: move backward or
        # forward, either accelerated or decelerated, or stay put.
        candidate = [0] * 5
        candidate[0] = -acceleration
        candidate[1] = -1 / acceleration
        candidate[2] = 0
        candidate[3] = 1 / acceleration
        candidate[4] = acceleration

        while not self.should_stop(iteration_number, self.best_score):
            if self.goal_minimize:
                best_step_score = sys.float_info.max
            else:
                # sys.float_info.min is the smallest *positive* float, not the most
                # negative one, so use -sys.float_info.max when maximizing.
                best_step_score = -sys.float_info.max

            for dimension in range(0, len(self.position)):
                best = -1
                for i in range(0, len(candidate)):
                    # Take a step
                    self.position[dimension] += candidate[i] * step_size[dimension]
                    # Obtain new trial score.
                    trial_score = funct(self.position)
                    # Step back, we only want to try movement in one dimension.
                    self.position[dimension] -= candidate[i] * step_size[dimension]

                    # Record best step taken
                    if self.better_than(trial_score, best_step_score):
                        best_step_score = trial_score
                        best = i

                if best != -1:
                    self.best_score = best_step_score
                    self.position[dimension] += candidate[best] * step_size[dimension]
                    step_size[dimension] += candidate[best]

            if self.display_iteration:
                print("Iteration #" + str(iteration_number) + ", Score: " + str(self.best_score))
            iteration_number += 1

        if self.display_final:
            print("Finished after " + str(iteration_number - 1) + " iterations, final score is " + str(self.best_score))
        return self.position

Example Problems:


In [ ]:
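As one possible example problem (a sketch, not part of the original notebook), the trainer can minimize the sphere function f(x) = Σ xᵢ², whose global minimum is 0 at the origin. To keep the demo standalone, the two classes above are condensed into a single class with the same training logic:

```python
import sys

class TrainHillClimb:
    """Condensed, standalone version of the Train / TrainHillClimb classes above."""
    def __init__(self, goal_minimize=True):
        self.max_iterations = 100000
        self.goal_minimize = goal_minimize
        self.stop_score = None
        self.position = []
        self.best_score = 0

    def better_than(self, a, b):
        return a < b if self.goal_minimize else a > b

    def should_stop(self, iteration, best_score):
        if iteration > self.max_iterations:
            return True
        return self.stop_score is not None and self.better_than(best_score, self.stop_score)

    def train(self, x0, funct, acceleration=1.2, step_size=1.0):
        iteration = 1
        self.position = list(x0)
        self.best_score = funct(self.position)
        step_size = [step_size] * len(x0)
        candidate = [-acceleration, -1 / acceleration, 0, 1 / acceleration, acceleration]

        while not self.should_stop(iteration, self.best_score):
            best_step_score = sys.float_info.max if self.goal_minimize else -sys.float_info.max
            for dim in range(len(self.position)):
                best = -1
                for i in range(len(candidate)):
                    # Trial step in one dimension, score it, then step back.
                    self.position[dim] += candidate[i] * step_size[dim]
                    trial_score = funct(self.position)
                    self.position[dim] -= candidate[i] * step_size[dim]
                    if self.better_than(trial_score, best_step_score):
                        best_step_score = trial_score
                        best = i
                if best != -1:
                    # Commit the best trial step and adapt the step size.
                    self.best_score = best_step_score
                    self.position[dim] += candidate[best] * step_size[dim]
                    step_size[dim] += candidate[best]
            iteration += 1
        return self.position

def sphere(x):
    # Global minimum is 0 at the origin.
    return sum(v * v for v in x)

trainer = TrainHillClimb(goal_minimize=True)
trainer.max_iterations = 200
result = trainer.train([5.0, -3.0], sphere)
print([round(v, 3) for v in result], round(sphere(result), 6))  # ends near the origin
```

Starting from [5.0, -3.0], the climber walks both coordinates toward zero until no candidate step improves the score, finishing at a point close to (though not exactly at) the origin, since the fixed candidate multipliers cannot shrink the step arbitrarily.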