#include <boost/math/optimization/random_search.hpp>

namespace boost::math::optimization {

template <typename ArgumentContainer> struct random_search_parameters {
    using Real = typename ArgumentContainer::value_type;
    ArgumentContainer lower_bounds;
    ArgumentContainer upper_bounds;
    size_t max_function_calls = 0;
    ArgumentContainer const * initial_guess = nullptr;
};

template <typename ArgumentContainer, class Func, class URBG>
ArgumentContainer random_search(
    const Func cost_function,
    random_search_parameters<ArgumentContainer> const &params,
    URBG &gen,
    std::invoke_result_t<Func, ArgumentContainer> value_to_reach
        = std::numeric_limits<std::invoke_result_t<Func, ArgumentContainer>>::quiet_NaN(),
    std::atomic<bool> *cancellation = nullptr,
    std::atomic<std::invoke_result_t<Func, ArgumentContainer>> *current_minimum_cost = nullptr,
    std::vector<std::pair<ArgumentContainer, std::invoke_result_t<Func, ArgumentContainer>>> *queries = nullptr);

} // namespaces
The random_search function searches for a global minimum of a function. There is no special sauce to this algorithm: it merely blasts function calls over threads. Its existence is justified by the "no free lunch" theorem of optimization, which "establishes that for any algorithm, any elevated performance over one class of problems is offset by performance over another class." In practice, it is not clear that the conditions of the NFL theorem hold, and on test cases, random_search is slower and less accurate than (say) differential evolution, jSO, and CMA-ES. However, rapid convergence is often not the goal: for example, we may want to spend some time exploring the cost function surface before moving to a faster-converging algorithm. In addition, random search is embarrassingly parallel, which allows us to avoid Amdahl's law-induced performance problems.
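As a quick orientation, here is a minimal sketch of calling random_search, following the synopsis above; the sphere cost function, bounds, call budget, and seed are illustrative choices, not part of the library.

#include <array>
#include <iostream>
#include <random>
#include <boost/math/optimization/random_search.hpp>

// Illustrative cost function: f(x) = x_0^2 + x_1^2, minimized at the origin.
double sphere(std::array<double, 2> const & x) {
    return x[0]*x[0] + x[1]*x[1];
}

int main() {
    using boost::math::optimization::random_search;
    using boost::math::optimization::random_search_parameters;
    random_search_parameters<std::array<double, 2>> params;
    params.lower_bounds = {-1.0, -1.0};
    params.upper_bounds = {1.0, 1.0};
    params.max_function_calls = 100000;
    std::mt19937_64 gen(12345);
    auto argmin = random_search(sphere, params, gen);
    // argmin should land near (0, 0):
    std::cout << "f(" << argmin[0] << ", " << argmin[1] << ") = " << sphere(argmin) << "\n";
}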
    
The members of random_search_parameters are described below; a short sketch of filling them out follows the list.

lower_bounds: A container representing the lower bounds of the optimization space along each dimension. The .size() of the bounds should return the dimension of the problem.

upper_bounds: A container representing the upper bounds of the optimization space along each dimension. It should have the same size as lower_bounds, and each element should be >= the corresponding element of lower_bounds.

max_function_calls: Defaults to 10000*threads.

initial_guess: An optional guess for where we should start looking for solutions. This is provided for consistency with other optimization functions; it is not particularly useful for this function.
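For concreteness, here is a short sketch of filling out these members for a four-dimensional problem over std::vector<double>; the dimension, bounds, budget, and guess are all illustrative assumptions.

#include <vector>
#include <boost/math/optimization/random_search.hpp>

boost::math::optimization::random_search_parameters<std::vector<double>>
make_params(std::vector<double> const * guess) {
    boost::math::optimization::random_search_parameters<std::vector<double>> params;
    // The dimension of the problem is taken from the .size() of the bounds:
    params.lower_bounds = std::vector<double>(4, -10.0);
    params.upper_bounds = std::vector<double>(4, 10.0);
    // Leaving this at 0 accepts the default of 10000*threads, per the list above:
    params.max_function_calls = 500000;
    // Optional; the pointed-to guess must outlive the call to random_search:
    params.initial_guess = guess;
    return params;
}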
template <typename ArgumentContainer, class Func, class URBG>
ArgumentContainer random_search(
    const Func cost_function,
    random_search_parameters<ArgumentContainer> const &params,
    URBG &gen,
    std::invoke_result_t<Func, ArgumentContainer> value_to_reach
        = std::numeric_limits<std::invoke_result_t<Func, ArgumentContainer>>::quiet_NaN(),
    std::atomic<bool> *cancellation = nullptr,
    std::atomic<std::invoke_result_t<Func, ArgumentContainer>> *current_minimum_cost = nullptr,
    std::vector<std::pair<ArgumentContainer, std::invoke_result_t<Func, ArgumentContainer>>> *queries = nullptr)
Parameters:
cost_function: The cost
          function to be minimized.
        params: The parameters
          to the algorithm as described above.
        gen: A uniform random bit
          generator, like std::mt19937_64.
value_to_reach: An optional value that, if reached, stops the optimization. This is the most robust way to terminate the calculation, but in most cases the optimal value of the cost function is unknown; if it is known, use it! Physical considerations can often be used to find optimal values for cost functions.
cancellation: An optional atomic boolean that allows the user to stop the computation and gracefully return the best result found up to that point. N.B.: Cancellation is not immediate; in-progress function calls finish first.
current_minimum_cost: An optional atomic variable that stores the current minimum cost during the optimization. This allows developers to (e.g.) plot the progress of the minimization over time and, in conjunction with the cancellation argument, to halt the computation when progress stagnates (see the sketch below).
        queries: An optional vector
          to store intermediate results during optimization. This is useful for debugging
          and perhaps volume rendering of the objective function after the calculation
          is complete.
        Returns:
      The ArgumentContainer corresponding
      to the minimum cost found by the optimization.
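The following condensed sketch shows one way the cancellation and current_minimum_cost hooks might be wired together from a second thread; the Rosenbrock cost function, polling interval, and stagnation test are our own illustrative choices, and the complete, canonical example is the file referenced below.

#include <array>
#include <atomic>
#include <chrono>
#include <iostream>
#include <limits>
#include <random>
#include <thread>
#include <boost/math/optimization/random_search.hpp>

int main() {
    using boost::math::optimization::random_search;
    using boost::math::optimization::random_search_parameters;
    using Arg = std::array<double, 2>;

    // Illustrative cost function: the Rosenbrock banana, minimized at (1, 1).
    auto rosenbrock = [](Arg const & x) {
        double t0 = 1 - x[0];
        double t1 = x[1] - x[0]*x[0];
        return t0*t0 + 100*t1*t1;
    };

    random_search_parameters<Arg> params;
    params.lower_bounds = {-5.0, -5.0};
    params.upper_bounds = {5.0, 5.0};
    params.max_function_calls = 100'000'000; // deliberately huge; we cancel instead

    std::atomic<bool> cancel{false};
    std::atomic<double> current_min{std::numeric_limits<double>::infinity()};
    std::mt19937_64 gen(42);
    Arg argmin;
    std::thread worker([&]() {
        argmin = random_search(rosenbrock, params, gen,
                               std::numeric_limits<double>::quiet_NaN(),
                               &cancel, &current_min);
    });

    // Poll progress on the main thread; stop polling once improvement stalls:
    double last = std::numeric_limits<double>::infinity();
    for (int i = 0; i < 100; ++i) {
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        double now = current_min.load();
        std::cout << "Current minimum cost: " << now << "\n";
        if (last - now < 1e-12) break;
        last = now;
    }
    cancel = true; // not immediate: in-progress function calls finish first
    worker.join();
    std::cout << "Best point found: (" << argmin[0] << ", " << argmin[1] << ")\n";
}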
    
An example exhibiting graceful cancellation and progress observability can be studied in random_search_example.cpp.