Algorithms#

Functions that implement the randomized optimization and search algorithms.

Recommendation

The functions below are implemented within mlrose-ky; however, it is highly recommended to use the Runners for assignments.
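
For concreteness, the short usage sketches in the sections below all reuse one small problem instance. This is a minimal setup sketch, assuming the top-level exports (OneMax, DiscreteOpt and the algorithm functions) follow the classic mlrose API:

import mlrose_ky as mlrose

# Illustrative setup reused by the sketches below: an 8-bit OneMax problem.
# OneMax counts the 1s in a bit string, so the known optimum fitness is 8.
fitness = mlrose.OneMax()
problem = mlrose.DiscreteOpt(length=8, fitness_fn=fitness, maximize=True, max_val=2)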

Hill Climbing#

Use standard hill climbing to find the optimum for a given optimization problem.

hill_climb(
    problem,
    max_iters=float('inf'),
    restarts=0,
    init_state=None,
    curve=False,
    random_state=None
) 

Parameters

  • problem (optimization object) – Object containing fitness function optimization problem to be solved. For example, DiscreteOpt(), ContinuousOpt() or TSPOpt().
  • max_iters (int, default: np.inf) – Maximum number of iterations of the algorithm for each restart.
  • restarts (int, default: 0) – Number of random restarts.
  • init_state (array, default: None) – 1-D Numpy array containing starting state for algorithm. If None, then a random state is used.
  • curve (bool, default: False) – Boolean to keep fitness values for a curve. If False, then no curve is stored. If True, then a history of fitness values is provided as a third return value.
  • random_state (int, default: None) – If random_state is a positive integer, random_state is the seed used by np.random.seed(); otherwise, the random seed is not set.

Returns:

  • best_state (array) – Numpy array containing state that optimizes the fitness function.
  • best_fitness (float) – Value of fitness function at best state.
  • fitness_curve (array) – Numpy array containing the fitness at every iteration. Only returned if input argument curve is True.
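
A minimal usage sketch, building on the OneMax problem defined at the top of this page; parameter values are illustrative, not tuned:

# Hill climbing with a few random restarts; curve=True requests the
# per-iteration fitness history as a third return value.
best_state, best_fitness, fitness_curve = mlrose.hill_climb(
    problem, restarts=5, curve=True, random_state=42
)
print(best_state, best_fitness)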

References#

Russell, S. and P. Norvig (2010). Artificial Intelligence: A Modern Approach, 3rd edition. Prentice Hall, New Jersey, USA.

Random Hill Climbing#

Use randomized hill climbing to find the optimum for a given optimization problem.

random_hill_climb(
    problem,
    max_attempts=10,
    max_iters=float('inf'),
    restarts=0,
    init_state=None,
    curve=False,
    random_state=None
)

Parameters

  • problem (optimization object) – Object containing fitness function optimization problem to be solved. For example, DiscreteOpt(), ContinuousOpt() or TSPOpt().
  • max_attempts (int, default: 10) – Maximum number of attempts to find a better neighbor at each step.
  • max_iters (int, default: np.inf) – Maximum number of iterations of the algorithm.
  • restarts (int, default: 0) – Number of random restarts.
  • init_state (array, default: None) – 1-D Numpy array containing starting state for algorithm. If None, then a random state is used.
  • curve (bool, default: False) – Boolean to keep fitness values for a curve. If False, then no curve is stored. If True, then a history of fitness values is provided as a third return value.
  • random_state (int, default: None) – If random_state is a positive integer, random_state is the seed used by np.random.seed(); otherwise, the random seed is not set.

Returns:

  • best_state (array) – Numpy array containing state that optimizes the fitness function.
  • best_fitness (float) – Value of fitness function at best state.
  • fitness_curve (array) – Numpy array containing the fitness at every iteration. Only returned if input argument curve is True.
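
A minimal usage sketch, again reusing the OneMax problem defined above; the values shown are illustrative:

# Randomized hill climbing: stop a run after 20 consecutive non-improving
# attempts or 1000 iterations, whichever comes first, with 5 random restarts.
best_state, best_fitness, fitness_curve = mlrose.random_hill_climb(
    problem, max_attempts=20, max_iters=1000, restarts=5, curve=True, random_state=42
)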

References#

Brownlee, J. (2011). Clever Algorithms: Nature-Inspired Programming Recipes. http://www.cleveralgorithms.com.

Simulated Annealing#

Use simulated annealing to find the optimum for a given optimization problem.

simulated_annealing(
    problem,
    schedule=GeomDecay(),
    max_attempts=10,
    max_iters=float('inf'),
    init_state=None,
    curve=False,
    random_state=None
)

Parameters

  • problem (optimization object) – Object containing fitness function optimization problem to be solved. For example, DiscreteOpt(), ContinuousOpt() or TSPOpt().
  • schedule (schedule object, default: mlrose_ky.GeomDecay()) – Schedule used to determine the value of the temperature parameter.
  • max_attempts (int, default: 10) – Maximum number of attempts to find a better neighbor at each step.
  • max_iters (int, default: np.inf) – Maximum number of iterations of the algorithm.
  • init_state (array, default: None) – 1-D Numpy array containing starting state for algorithm. If None, then a random state is used.
  • curve (bool, default: False) – Boolean to keep fitness values for a curve. If False, then no curve is stored. If True, then a history of fitness values is provided as a third return value.
  • random_state (int, default: None) – If random_state is a positive integer, random_state is the seed used by np.random.seed(); otherwise, the random seed is not set.

Returns:

  • best_state (array) – Numpy array containing state that optimizes the fitness function.
  • best_fitness (float) – Value of fitness function at best state.
  • fitness_curve (array) – Numpy array containing the fitness at every iteration. Only returned if input argument curve is True.
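
A minimal usage sketch with an explicit geometric decay schedule; the GeomDecay arguments are assumed to follow the classic mlrose signature and are not tuned:

# Geometric cooling: temperature starts at 1.0 and is multiplied by 0.99
# each iteration, down to a floor of 0.001.
schedule = mlrose.GeomDecay(init_temp=1.0, decay=0.99, min_temp=0.001)
best_state, best_fitness, fitness_curve = mlrose.simulated_annealing(
    problem, schedule=schedule, max_attempts=20, max_iters=1000, curve=True, random_state=42
)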

References#

Russell, S. and P. Norvig (2010). Artificial Intelligence: A Modern Approach, 3rd edition. Prentice Hall, New Jersey, USA.

Genetic Algorithms#

Use a standard genetic algorithm to find the optimum for a given optimization problem.

genetic_alg(
    problem,
    pop_size=200,
    mutation_prob=0.1,
    max_attempts=10,
    max_iters=float('inf'),
    curve=False,
    random_state=None
)

Parameters

  • problem (optimization object) – Object containing fitness function optimization problem to be solved. For example, DiscreteOpt(), ContinuousOpt() or TSPOpt().
  • pop_size (int, default: 200) – Size of population to be used in genetic algorithm.
  • mutation_prob (float, default: 0.1) – Probability of a mutation at each element of the state vector during reproduction, expressed as a value between 0 and 1.
  • max_attempts (int, default: 10) – Maximum number of attempts to find a better state at each step.
  • max_iters (int, default: np.inf) – Maximum number of iterations of the algorithm.
  • curve (bool, default: False) – Boolean to keep fitness values for a curve. If False, then no curve is stored. If True, then a history of fitness values is provided as a third return value.
  • random_state (int, default: None) – If random_state is a positive integer, random_state is the seed used by np.random.seed(); otherwise, the random seed is not set.

Returns:

  • best_state (array) – Numpy array containing state that optimizes the fitness function.
  • best_fitness (float) – Value of fitness function at best state.
  • fitness_curve (array) – Numpy array of arrays containing the fitness of the entire population at every iteration. Only returned if input argument curve is True.
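
A minimal usage sketch on the same OneMax problem; the population size and mutation probability shown are simply the documented defaults made explicit:

# Population of 200 candidate states, 10% per-element mutation probability.
best_state, best_fitness, fitness_curve = mlrose.genetic_alg(
    problem, pop_size=200, mutation_prob=0.1, max_attempts=20, curve=True, random_state=42
)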

References#

Russell, S. and P. Norvig (2010). Artificial Intelligence: A Modern Approach, 3rd edition. Prentice Hall, New Jersey, USA.

MIMIC#

Use MIMIC to find the optimum for a given optimization problem.

mimic(
    problem,
    pop_size=200,
    keep_pct=0.2,
    max_attempts=10,
    max_iters=float('inf'),
    curve=False,
    random_state=None,
    fast_mimic=False
)

Warning

MIMIC cannot be used for solving continuous-state optimization problems.

Parameters

  • problem (optimization object) – Object containing fitness function optimization problem to be solved. For example, DiscreteOpt() or TSPOpt().
  • pop_size (int, default: 200) – Size of population to be used in algorithm.
  • keep_pct (float, default: 0.2) – Proportion of samples to keep at each iteration of the algorithm, expressed as a value between 0 and 1.
  • max_attempts (int, default: 10) – Maximum number of attempts to find a better neighbor at each step.
  • max_iters (int, default: np.inf) – Maximum number of iterations of the algorithm.
  • curve (bool, default: False) – Boolean to keep fitness values for a curve. If False, then no curve is stored. If True, then a history of fitness values is provided as a third return value.
  • random_state (int, default: None) – If random_state is a positive integer, random_state is the seed used by np.random.seed(); otherwise, the random seed is not set.
  • fast_mimic (bool, default: False) – Activate fast MIMIC mode to compute the mutual information in vectorized form. This is faster but requires more memory.

Returns:

  • best_state (array) – Numpy array containing state that optimizes the fitness function.
  • best_fitness (float) – Value of fitness function at best state.
  • fitness_curve (array) – Numpy array containing the fitness at every iteration. Only returned if input argument curve is True.
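
A minimal usage sketch; the OneMax DiscreteOpt problem defined above is discrete, so MIMIC applies. Parameter values are illustrative:

# Keep the top 20% of a 200-sample population to estimate the probability
# distribution at each iteration.
best_state, best_fitness, fitness_curve = mlrose.mimic(
    problem, pop_size=200, keep_pct=0.2, max_attempts=20, curve=True, random_state=42
)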

References#

De Bonet, J., C. Isbell, and P. Viola (1997). MIMIC: Finding Optima by Estimating Probability Densities. In Advances in Neural Information Processing Systems (NIPS) 9, pp. 424–430.