Contact:  Dan Fylstra
Frontline Systems 775-831-0300
press@solver.com

Backgrounder

Recently, genetic and evolutionary algorithms have received much publicity, plus a fair amount of "hype." In reality, these algorithms have both strengths and weaknesses compared to classical optimization methods, and each method is best suited to a certain class of real-world problems. In a 'genetic algorithm,' the problem is encoded in a series of bit strings that are manipulated by the algorithm; in an 'evolutionary algorithm,' the spreadsheet cells and formulas are used directly. Most commercial Solver products are based on evolutionary algorithms.

Solver Optimizes an Objective While Satisfying Constraints

A Solver problem includes decision variables, an objective function to be maximized or minimized, and constraints. An example problem might seek the best mix of products to build from an available inventory of common parts. The decision variables are the number of units of each product to build; the objective function is the profit earned by building and selling the products. Formulas calculate the number of parts of each type required to build each product, and the constraints are relations such as A1 <= B1, which specify that the number of parts used cannot exceed the quantity on hand.
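To make this concrete, here is a minimal sketch of such a product mix model in Python, using SciPy's linprog routine rather than the Excel Solver itself; the products, profits, and parts counts below are invented for illustration.

    # A hypothetical product mix problem: build three products from a shared
    # inventory of five part types, maximizing total profit. All numbers are
    # invented for illustration.
    from scipy.optimize import linprog

    profit = [75, 50, 35]            # profit per unit of each product

    parts_per_unit = [               # rows: part types; columns: products
        [1, 1, 0],                   # chassis
        [1, 0, 0],                   # picture tube
        [2, 2, 1],                   # speaker cone
        [1, 1, 0],                   # power supply
        [2, 1, 1],                   # electronics board
    ]
    parts_on_hand = [450, 250, 800, 450, 600]

    # Constraints: parts used <= parts on hand (the A1 <= B1 relations).
    # linprog minimizes, so negate the profits to maximize.
    result = linprog(c=[-p for p in profit],
                     A_ub=parts_per_unit, b_ub=parts_on_hand,
                     bounds=[(0, None)] * 3)
    print("Units of each product to build:", result.x)
    print("Maximum profit:", -result.fun)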

Solver Algorithms Make Assumptions about Objective and Constraints

The Solver operates by changing the decision variables, seeking to maximize (or minimize) the objective function while meeting the constraints. Solver algorithms use different strategies to seek new values for the variables, and they make different assumptions about the relationships between the variables and the objective and constraints. In general, stronger assumptions about the formulas lead to faster and more accurate Solver algorithms -- but place greater restrictions on the models that the method can handle.

Simplex Method Assumes that Relationships are 'Linear'

The famous Simplex method for linear programming -- used in the standard Excel Solver and also included in the Premium Solver -- assumes that the objective function and constraints are linear functions of the variables. A linear function expresses proportionality -- its graph is a straight line. For such problems, the Simplex method is highly accurate, very fast -- often hundreds of times faster than other methods -- and yields the globally optimal solution in virtually all cases. Though the linearity assumption places the greatest restrictions on the form of Solver models, many practically significant problems satisfy these restrictions. This is why the Simplex method has always been the most commonly used Solver algorithm -- even in the standard Excel Solver, where more general methods have long been available.

GRG Method Assumes that Relationships are 'Smooth Nonlinear'

The GRG (Generalized Reduced Gradient) method -- used in the standard Excel Solver since 1990, and also included in the Premium Solver -- assumes that the objective function and constraints are smooth nonlinear functions of the variables. A smooth nonlinear function has a smooth (possibly curved) graph with no sharp 'corners' or 'breaks.' For such problems, the GRG method is quite accurate and quite fast -- often 10 to 20 times faster than a genetic or evolutionary algorithm -- and yields a locally optimal solution. This solution is provably 'best within the vicinity,' but it does not rule out other, better solutions that may be far away from the initial values of the variables. The GRG Solver can also find solutions to linear problems, but there is little reason to use it on such problems when the faster and more accurate Simplex method is available.
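The GRG code itself is not publicly available, but the behavior described above can be illustrated with any gradient-based local optimizer. In the Python sketch below (the function and starting points are invented for illustration), SciPy's minimize converges to a different local minimum depending on where it starts:

    # A smooth nonlinear function with two local minima, near x = -2.35
    # (the global minimum) and x = +2.1. A gradient-based local method
    # finds whichever minimum lies in the 'vicinity' of its starting point.
    from scipy.optimize import minimize

    def f(x):
        return 0.1 * x[0]**4 - x[0]**2 + 0.5 * x[0]

    for start in (-2.0, 2.0):
        res = minimize(f, x0=[start])    # BFGS, a gradient-based method
        print(f"start {start:+.1f} -> x = {res.x[0]:+.3f}, f = {res.fun:+.3f}")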

Evolutionary Algorithm Makes Few Assumptions about Relationships

The Evolutionary Solver included in the Premium Solver V3.5 makes almost no assumptions about the relationships between the decision variables and the objective function and constraints. It accepts 'non-smooth' Excel functions such as IF, CHOOSE and LOOKUP, whose graphs contain 'breaks,' as well as functions like ABS, whose graph has 'sharp corners.' For such problems, the Evolutionary Solver is usually able to find 'good' (but not necessarily optimal) solutions. Further, on smooth nonlinear problems where the GRG Solver can find only a locally optimal solution, the Evolutionary Solver can often find a better solution, or even a globally optimal (or close to globally optimal) solution. The Evolutionary Solver can be used on smooth nonlinear convex problems where the GRG Solver does find the optimal solution, or even on linear problems where the Simplex method can be used, but it is much slower and less reliable than the classical methods on such problems.
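SciPy's differential_evolution is not the same algorithm as the Evolutionary Solver, but this Python sketch (with an invented objective) shows the same principle: a population-based method that never computes a derivative can make progress on a function with 'corners' and 'breaks':

    # A non-smooth objective: abs() creates 'sharp corners', and the
    # conditional penalty (an IF-style 'break') makes it discontinuous.
    # Gradient-based methods can stall here; a population-based method
    # needs no derivatives at all.
    from scipy.optimize import differential_evolution

    def objective(x):
        penalty = 5.0 if x[0] < 1.0 else 0.0                # IF-style break
        return abs(x[0] - 3.0) + abs(x[1] + 1.0) + penalty  # ABS corners

    result = differential_evolution(objective,
                                    bounds=[(-5, 5), (-5, 5)], seed=42)
    print("Best solution found:", result.x)   # 'good', not provably optimal
    print("Objective value:", result.fun)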

Drawbacks of an Evolutionary Algorithm

Since the Evolutionary Solver can, in principle, find solutions to problems with non-smooth, smooth nonlinear, and even linear relationships, why not use it for all problems? In practice, any genetic or evolutionary algorithm has certain drawbacks, which go hand in hand with its advantages:

  • An evolutionary algorithm is much slower than alternatives such as the GRG and Simplex methods -- often by factors of a hundred times or more.
  • As problem size scales up (from, say, ten to a hundred or a thousand decision variables), an evolutionary algorithm is often overwhelmed by the dimensionality of the problem and is unable to find anything close to an optimal solution. But it is still possible to solve such large problems with the GRG or Simplex Solver.
  • In an evolutionary algorithm, a solution is 'good' only in comparison to other, previously discovered solutions. An evolutionary algorithm actually has no concept of an 'optimal solution,' or any way to test whether a given solution is optimal (even locally optimal).
  • Since it doesn't know whether a given solution is optimal, an evolutionary algorithm never really knows when to stop. At best, 'heuristic' rules can be used to stop when the best solution found so far hasn't improved in a long time. In practice, evolutionary algorithms are usually stopped manually by the user, or by a predetermined limit on the amount of time or 'iterations.'
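To illustrate that last point, here is a minimal Python sketch of such a 'no recent improvement' stopping rule; the evaluate and evolve_one_generation helpers are hypothetical placeholders, not part of any real Solver API.

    # Heuristic stopping rule: quit after max_iterations, or as soon as the
    # best objective value found has not improved in 'patience' consecutive
    # generations. 'evaluate' and 'evolve_one_generation' are hypothetical.
    def run_evolution(population, evaluate, evolve_one_generation,
                      max_iterations=10000, patience=500, tolerance=1e-9):
        best = min(evaluate(member) for member in population)
        stalled = 0
        for _ in range(max_iterations):
            population = evolve_one_generation(population)
            current = min(evaluate(member) for member in population)
            if current < best - tolerance:
                best, stalled = current, 0   # real progress: reset the clock
            else:
                stalled += 1                 # no improvement this generation
            if stalled >= patience:
                break
        return best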

The Right Method for the User's Problem

Having a Solver algorithm that can find even 'good' (if not optimal) solutions for virtually any Excel spreadsheet model, regardless of the type of formulas used in the model, is clearly advantageous. Users' interest in such a capability was one motivation for the Evolutionary Solver in the Premium Solver V3.5.

But attempting to apply evolutionary algorithms to all Solver problems -- even those that can be modeled with linear or smooth nonlinear relationships -- often leads to poor results. Another motivation for the Premium Solver V3.5 was the frustration reported by users of competing products -- whose main or only feature is a genetic or evolutionary algorithm -- when they attempted to apply this method to all problems.

With the Premium Solver V3.5, users are not limited to a genetic or evolutionary algorithm -- they have a full arsenal of Solver 'engines' at their disposal, suitable for whatever problems they encounter. Plus, diagnostic tools included in the Premium Solver, such as the Linearity Report, help users determine which Solver 'engine' should be applied to a given problem. And the Premium Solver User's Guide describes modeling techniques that can be used to replace non-smooth functions with equivalent smooth nonlinear or even linear functions -- making it possible to find much better, and provably optimal, solutions in far less time.

APPENDIX: How Does an Evolutionary Algorithm Work?

An evolutionary algorithm applies the principles of evolution found in nature to the task of finding an optimal solution to a Solver problem. Such an algorithm is different from 'classical' optimization methods in several ways:

First, it relies in part on random sampling. This makes it a nondeterministic method, which may yield different solutions on different runs -- even if the user hasn't changed the model at all. In contrast, the Simplex and GRG Solvers are deterministic methods -- they always yield the same solution for given starting values of the decision variables.

Second, where most classical optimization methods maintain a single best solution found so far, an evolutionary algorithm maintains a population of candidate solutions. Only one of these is 'best,' but the other members of the population are 'sample points' in other regions of the search space, where a better solution may later be found. The use of a population of solutions helps the evolutionary algorithm avoid becoming trapped at a local optimum, when an even better optimum may be found outside the vicinity of the current solution.

Third -- inspired by the role of mutation of an organism's DNA in natural evolution -- an evolutionary algorithm periodically makes random changes or mutations in one or more members of the current population, yielding a new candidate solution (which may be better or worse than existing population members). There are many possible ways to perform a mutation, and the Evolutionary Solver actually employs three different mutation strategies. The result of a mutation may be an infeasible solution, and the Evolutionary Solver attempts to 'repair' such a solution to make it feasible; this is sometimes, but not always, successful.

Fourth -- inspired by the role of sexual reproduction in the evolution of living things -- an evolutionary algorithm attempts to combine elements of existing solutions in order to create a new solution, with some of the features of each 'parent.' The elements (e.g. decision variable values) of existing solutions are combined in a crossover operation, inspired by the crossover of DNA strands that occurs in reproduction of biological organisms. As with mutation, there are many possible ways to perform a crossover operation -- some much better than others -- and the Evolutionary Solver actually employs multiple variations of two different crossover strategies.

Fifth -- inspired by the role of natural selection in evolution -- an evolutionary algorithm performs a selection process in which the 'most fit' members of the population survive, and the 'least fit' members are eliminated.

In a constrained optimization problem, the notion of 'fitness' depends partly on whether a solution is feasible (i.e. whether it satisfies all of the constraints), and partly on its objective function value. The selection process is the step that guides the algorithm towards ever-better solutions.
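The five ingredients above -- random sampling, a population, mutation, crossover, and selection -- fit in a short program. The Python sketch below is a generic evolutionary algorithm with invented details, not Frontline's implementation; it handles an infeasible solution with a fitness penalty rather than the repair step described above.

    # A minimal generic evolutionary algorithm: minimize (x-3)^2 + (y-2)^2
    # subject to x + y <= 4. Constraint violations reduce 'fitness' via a
    # penalty, so the selection step weeds out infeasible members.
    import random

    def objective(ind):
        x, y = ind
        return (x - 3) ** 2 + (y - 2) ** 2

    def fitness(ind):                          # lower is 'more fit' here
        x, y = ind
        violation = max(0.0, x + y - 4.0)      # constraint: x + y <= 4
        return objective(ind) + 1000.0 * violation

    def mutate(ind, rate=0.2, scale=0.5):      # random changes to variables
        return tuple(v + random.gauss(0, scale) if random.random() < rate
                     else v for v in ind)

    def crossover(a, b):                       # combine elements of parents
        return tuple(random.choice(pair) for pair in zip(a, b))

    random.seed(0)                             # nondeterministic otherwise
    population = [(random.uniform(-5, 5), random.uniform(-5, 5))
                  for _ in range(30)]
    for generation in range(200):
        parents = random.sample(population, 2)
        population.append(mutate(crossover(*parents)))
        population.remove(max(population, key=fitness))  # drop least fit

    best = min(population, key=fitness)
    print("Best member:", best, "objective:", objective(best))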

How Do Classical Optimization Methods Work?

Classical optimization methods rely on assumptions about the problem functions to make faster, surer progress towards the optimal solution. For smooth nonlinear functions, they can compute derivatives or gradients that indicate the directions in which the functions are increasing or decreasing. For linear functions, they can take advantage of the functions' 'straight-line' behavior to move immediately to their extreme values (as determined by the other constraints) in a single step.
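For the smooth case, 'computing a gradient' can be sketched in a few lines of Python; the forward-difference approximation below is an illustration of the idea, not what any particular Solver does internally.

    # Forward-difference gradient of a smooth function: the sign of each
    # partial derivative tells a classical method which way each variable
    # should move to improve the objective.
    def gradient(f, x, h=1e-6):
        fx = f(x)
        return [(f(x[:i] + [x[i] + h] + x[i+1:]) - fx) / h
                for i in range(len(x))]

    f = lambda x: (x[0] - 3) ** 2 + (x[1] - 2) ** 2
    print(gradient(f, [0.0, 0.0]))   # ~[-6, -4]: decrease f by raising x, y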

Where evolutionary algorithms often perform poorly on problems with many constraints, classical methods often perform even better, because they are able to make active use of the constraints to reduce the dimensionality of the problem. The simplistic 'hill-climbing' metaphor does not do justice to these methods: in most practical problems, the optimal solution is determined by the constraints -- not by the 'peak' of a 'hill' -- and these methods are designed to fully exploit information about the constraints.
