Clicking the Options command button on the Solver pane displays the following options.

Add-on Options Dialog

Constraint Precision

The number entered here determines how closely the calculated values of the constraint left hand sides must match the right hand sides in order for the constraint to be satisfied.  A constraint is satisfied if the relation it represents is true within a small tolerance; the Precision value is that tolerance.  With the default setting of 1.0E-6 (0.000001), a calculated left hand side of -1.0E-7 would satisfy a constraint such as A1 >= 0.
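The example above can be expressed as a small check. This is illustrative Python only; the function name and structure are assumptions for exposition, not the add-on's actual code:

```python
def satisfies_ge(lhs, rhs, precision=1.0e-6):
    # A ">=" constraint is treated as satisfied if the left-hand side
    # falls short of the right-hand side by no more than `precision`.
    return lhs >= rhs - precision

# A calculated left-hand side of -1.0E-7 satisfies A1 >= 0
# under the default Precision of 1.0E-6:
print(satisfies_ge(-1.0e-7, 0.0))   # True
print(satisfies_ge(-1.0e-5, 0.0))   # False: the violation exceeds the tolerance
```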

Precision and Regular Constraints

Use caution in making this number much smaller, since the finite precision of computer arithmetic virtually ensures that the values calculated by Google Sheets and the Solver will differ from the expected or “true” values by a small amount.  On the other hand, setting the Precision to a much larger value would cause constraints to be satisfied too easily.  If your constraints are not being satisfied because the values you are calculating are very large (say in millions or billions of dollars), consider adjusting your formulas and data to work in units of millions, or checking the Use Automatic Scaling box instead of altering the Precision setting.  Generally, this setting should be kept in the range from 1.0E-6 (0.000001) to 1.0E-4 (0.0001).

Precision and Integer Constraints

Another use of Precision is determining whether an integer constraint, such as A1:A5 = integer, A1:A5 = binary or A1:A5 = alldifferent, is satisfied.  If the difference between the decision variable’s value and the closest integer value is less than the Precision setting, the variable value is treated as an integer.
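The integer test reduces to measuring the distance to the nearest whole number. A minimal sketch (again illustrative, not the add-on's internal code):

```python
def is_integral(x, precision=1.0e-6):
    # Treat x as an integer when it lies within `precision`
    # of the nearest whole number.
    return abs(x - round(x)) < precision

print(is_integral(4.9999997))  # True:  distance to 5 is 3e-7 < 1e-6
print(is_integral(4.9999))     # False: distance to 5 is 1e-4
```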

Use Automatic Scaling

When this option is set to True, the Solver will attempt to scale the values of the objective and constraint functions internally in order to minimize the effects of a poorly scaled model.  A poorly scaled model is one that computes values of the objective, constraints, or intermediate results that differ by several orders of magnitude.  Poorly scaled models may cause difficulty for both linear and nonlinear solution algorithms, due to the effects of finite precision computer arithmetic.  If your model is nonlinear and you set Use Automatic Scaling to True, make sure that the initial values for the decision variables are “reasonable,” i.e. of roughly the same magnitudes that you expect for those variables at the optimal solution.  The effectiveness of the Automatic Scaling option depends on how well these starting values reflect the values encountered during the solution process. 

For more information, see “Problems with Poorly Scaled Models” in the chapter “Solver Result Messages.”

Ignore Integer Constraints

Select this menu item to solve the “relaxation” of the mixed-integer problem (or MIP), temporarily ignoring the integer constraints.  This option is meaningful only if your model includes integer constraints.

Integer Optimality

When you solve an integer programming problem, the Branch & Bound method often finds a good solution fairly quickly, but then requires a great deal of computing time to find (or verify that it has found) the optimal integer solution.  The Integer Optimality setting may be used to tell the Solver to stop if the best solution it has found so far is “close enough.”

The Branch & Bound process starts by finding the optimal solution without considering the integer constraints (this is called the relaxation of the integer programming problem).  The objective value of the relaxation forms the initial “best bound” on the objective of the optimal integer solution, which can be no better than this bound.  During the optimization process, the Branch & Bound method finds “candidate” integer solutions, and it keeps the best solution so far as the “incumbent.”  By eliminating alternatives as it proceeds, the B&B method also tightens the “best bound” on how good the integer solution can be.  Each time the Solver finds a new incumbent – an improved all-integer solution – it computes the maximum percentage difference between the objective of this solution and the current best bound on the objective.

If the absolute value of this maximum percentage difference is equal to or less than the Integer Optimality setting, the Solver will stop and report the current integer solution as the optimal result, with the message “Solver found an integer solution within tolerance.”  If you set Integer Optimality to zero, the Solver will “prove optimality” by continuing to search until all alternatives have been explored and the optimal integer solution has been found.  This may take a great deal of computing time.
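One common way to compute such a relative gap is sketched below; the add-on's exact formula is not reproduced in this chapter, so the definition used here (gap measured relative to the incumbent's objective) is an assumption for illustration:

```python
def mip_gap_percent(incumbent, best_bound):
    # Maximum percentage difference between the incumbent integer
    # solution's objective and the current best bound, measured
    # relative to the incumbent (guarding against division by zero).
    return abs(incumbent - best_bound) / max(abs(incumbent), 1e-30) * 100.0

# Maximizing: the best integer solution found so far is 97.5,
# while the best bound says no integer solution can beat 100.
gap = mip_gap_percent(97.5, 100.0)
print(round(gap, 2))   # 2.56
# With Integer Optimality set to 5 (%), the Solver could stop here:
print(gap <= 5.0)      # True
```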


Convergence

The Convergence option controls the stopping conditions used by the LSGRG Solver and the Evolutionary Solver that lead to the message “Solver has converged to the current solution.”

The LSGRG Solver will stop and display the message “Solver has converged to the current solution” when the objective function value has been changing very slowly over the last few iterations or trial solutions.  More precisely, the LSGRG Solver stops if the absolute value of the relative change in the objective function is less than the Convergence value for the last 5 iterations.  While the default value of 1.0E-4 (0.0001) is suitable for most problems, it may be too large for some models, causing the LSGRG Solver to stop prematurely when this test is satisfied, instead of continuing for more trial solutions until the optimality (KKT) conditions are satisfied.  If you are getting this message when you are seeking a locally optimal solution, you can change the setting in the Convergence box to a smaller value such as 1.0E-5 or 1.0E-6; but you should also consider why the objective function is changing so slowly.  Perhaps you can add constraints or use different starting values for the variables, so that the Solver does not get “trapped” in a region of slow improvement.
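The five-iteration relative-change test can be sketched as follows. This is a sketch only; the window length and the guard against division by zero are stated assumptions, not the Solver's actual code:

```python
def has_converged(objective_history, convergence=1.0e-4, window=5):
    # Stop when the absolute relative change in the objective has been
    # below `convergence` for each of the last `window` iterations.
    if len(objective_history) < window + 1:
        return False
    recent = objective_history[-(window + 1):]
    return all(
        abs(curr - prev) / max(abs(prev), 1e-30) < convergence
        for prev, curr in zip(recent, recent[1:])
    )

# The last five steps each change the objective by less than 0.01%:
print(has_converged([120.0, 100.01, 100.004, 100.003, 100.002, 100.0015, 100.001]))  # True
# Still improving by about 1% per iteration:
print(has_converged([120.0, 101.0, 100.0, 99.0, 98.0, 97.0, 96.0]))                  # False
```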

The Evolutionary Solver will stop and display the message “Solver has converged to the current solution” if nearly all members of the current population of solutions have very similar “fitness” values.  Since the population may include members representing infeasible solutions, each “fitness” value is a combination of an objective function value and a penalty for infeasibility.  Since the population is initialized with trial solutions that are largely chosen at random, the comparison begins after the Solver has found a certain minimum number of improved solutions that were generated by the evolutionary process.  The stopping condition is satisfied if 99% of the population members all have fitness values that are within the Convergence tolerance of each other. If you believe that the message “Solver has converged to the current solution” is appearing prematurely, you can make the Convergence tolerance smaller, but you may also want to increase the Mutation Rate and/or the Population Size, in order to increase the diversity of the population of trial solutions.

Multistart Search

If this option is set to True, the multistart methods are used to seek a globally optimal solution.  If it is set to False, the other options described in this section are ignored.

The basic idea of the multistart method is to automatically run a nonlinear Solver from different starting points, reaching different locally optimal solutions, then select the best of these as the proposed globally optimal solution.  Topographic search multistart methods are included in the Solver Add-on.

The multistart method operates by generating candidate starting points for the nonlinear Solver (with randomly selected values between the bounds you specify for the variables).  These points are then grouped into “clusters” – through a method called multi-level single linkage – that are likely to lead to the same locally optimal solution if used as starting points for the Solver.  The nonlinear Solver is then run repeatedly, once from (a representative starting point in) each cluster.  The process continues with successively smaller clusters that are increasingly likely to capture each possible locally optimal solution.  A Bayesian test is used to determine whether the process should continue or stop.

For many smooth nonlinear problems, the multistart method has a limited guarantee that it will “converge in probability” to a globally optimal solution.  This means that as the number of runs of the nonlinear Solver increases, the probability that the globally optimal solution has been found also increases towards 100%.  (To attain convergence for constrained problems, an exact penalty function is used in the process of “clustering” the starting points.)  For most nonlinear problems, this method will at least yield very good solutions.  The multistart method is nondeterministic, so by default it may yield different solutions on different runs.  (To obtain the same solution on each run, you can set the Random Seed option, as discussed below.)
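The overall loop – random starting points within bounds, a local solve from each, keep the best – can be sketched in a few lines. This is a pure-Python toy: the real method adds the clustering and Bayesian stopping test described above, and `local_minimize` below is a deliberately crude stand-in for the LSGRG Solver:

```python
import random

def local_minimize(f, x0, lo, hi, step=0.5, tol=1e-6):
    # Crude local descent: take improving steps of the current size,
    # halving the step whenever no improving move exists.
    x, fx = x0, f(x0)
    while step > tol:
        moved = False
        for cand in (x - step, x + step):
            if lo <= cand <= hi and f(cand) < fx:
                x, fx, moved = cand, f(cand), True
        if not moved:
            step /= 2.0
    return x, fx

def multistart(f, lo, hi, n_starts=20, seed=0):
    # Sample starting points uniformly between the variable bounds,
    # run the local solver from each one, and keep the best local
    # optimum found as the proposed globally optimal solution.
    rng = random.Random(seed)
    best = None
    for _ in range(n_starts):
        x, fx = local_minimize(f, rng.uniform(lo, hi), lo, hi)
        if best is None or fx < best[1]:
            best = (x, fx)
    return best

# A one-variable objective with more than one local minimum:
f = lambda x: (x * x - 4.0) ** 2 + 3.0 * abs(x - 1.0)
x_best, f_best = multistart(f, -5.0, 5.0)
# The run should land near the better local minimum (around x = 1.9),
# not the inferior one near x = -2.
```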

The multistart method can be used on smooth nonlinear problems that also contain integer variables and/or “alldifferent” constraints.  But this can take a great deal of solution time, since the multistart method is used for each subproblem generated by the Branch & Bound method for integer problems, and it can also impair the Solver’s ability to find feasible integer solutions.  If you have many integer variables or alldifferent constraints, try the Evolutionary Solver as an alternative to the multistart method.

Population Size

The multistart methods generate a number of candidate starting points for the LSGRG and Evolutionary Solvers equal to the value that you enter in this box.  This set of starting points is referred to as a “population.” The minimum population size is 10 points; if you supply a value less than 10 in this box, or leave it blank, the multistart methods use a population size of 10 times the number of decision variables in the problem, but no more than 200.
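The fallback rule can be summarized as follows (illustrative only; `None` here stands for a blank box):

```python
def multistart_population_size(entered, n_variables):
    # Blank (None) or below the minimum of 10: fall back to
    # 10 points per decision variable, capped at 200.
    if entered is None or entered < 10:
        return min(10 * n_variables, 200)
    return entered

print(multistart_population_size(None, 5))   # 50
print(multistart_population_size(None, 30))  # 200 (capped)
print(multistart_population_size(25, 30))    # 25
```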

Random Seed

The multistart methods use a process of random sampling to generate candidate starting points for the LSGRG and Evolutionary Solvers.  This process uses a random number generator that is normally “seeded” using the value of the system clock – so the random number sequence (and hence the generated candidate starting points) will be different each time you click Solve.  At times, however, you may wish to ensure that the same candidate starting points are generated on several successive runs – for example, in order to test different LSGRG Solver options on each search for a locally optimal solution.  To do this, enter an integer value into this box; this value will then be used to “seed” the random number generator each time you click Solve.
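The effect of the seed is easy to demonstrate with any pseudo-random generator; the function below is a hypothetical illustration, not the add-on's sampling code:

```python
import random

def candidate_starts(n, lo, hi, seed=None):
    # With seed=None the generator is seeded unpredictably (akin to
    # seeding from the system clock), so each run yields different
    # points; a fixed integer seed makes the sample reproducible.
    rng = random.Random(seed)
    return [rng.uniform(lo, hi) for _ in range(n)]

a = candidate_starts(5, 0.0, 10.0, seed=42)
b = candidate_starts(5, 0.0, 10.0, seed=42)
print(a == b)   # True: same seed, same candidate starting points
```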

Mutation Rate

The Mutation Rate is the probability that some member of the population will be mutated to create a new trial solution (which becomes a candidate for inclusion in the population, depending on its fitness) during each “generation” or subproblem considered by the evolutionary algorithm.  In the Evolutionary Solver, a subproblem consists of a possible mutation step, a crossover step, an optional local search in the vicinity of a newly discovered “best” solution, and a selection step where a relatively “unfit” member of the population is eliminated.

There are many possible ways to mutate a member of the population, and the Evolutionary Solver actually employs five different mutation strategies, including “permutation-preserving” mutation strategies for variables that are members of an “alldifferent” group.  The Mutation Rate is effectively subdivided between these strategies, so increasing or decreasing the Mutation Rate affects the probability that each of the strategies will be used during a given “generation” or subproblem.

Max Time without Improvement

This option works in conjunction with the Tolerance option to limit the time the evolutionary algorithm spends without making any significant progress.  If the relative (i.e. percentage) improvement in the best solution’s “fitness” is less than the Tolerance value for the number of seconds in the Max Time without Improvement option, the Evolutionary Solver stops and displays the Solver Results dialog.  The message is “Solver cannot improve the current solution,” unless the evolutionary algorithm has discovered no feasible solutions at all, in which case the message is “Solver could not find a feasible solution.”  If you believe that this stopping condition is being met prematurely, you can either make the Tolerance value smaller (or even zero), or increase the number of seconds allowed by the Max Time without Improvement option.
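This stopping rule can be pictured as a check over a trailing time window. The record format and bookkeeping below are illustrative assumptions; the Solver tracks this internally:

```python
def should_stop(records, tolerance=1.0e-4, max_seconds=30):
    # records: chronological (elapsed_seconds, best_fitness) pairs.
    # Stop when the relative improvement in the best fitness over the
    # trailing `max_seconds` window falls below `tolerance`.
    if not records:
        return False
    t_now, f_now = records[-1]
    older = [f for t, f in records if t <= t_now - max_seconds]
    if not older:
        return False            # the window has not fully elapsed yet
    f_then = older[-1]
    improvement = abs(f_then - f_now) / max(abs(f_then), 1e-30)
    return improvement < tolerance

# Minimizing: the best fitness barely moved between t=0s and t=40s.
print(should_stop([(0, 100.0), (40, 99.999)]))   # True
print(should_stop([(0, 100.0), (40, 80.0)]))     # False: still improving
```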

This option is also used to control the maximum time that the Interval Global Solver will spend in its search without finding an “improved global solution” (a feasible solution with an objective value better than the currently best known solution).  If this time limit is exceeded, the Solver will stop and display the Solver Results dialog with the message “Solver cannot improve the current solution.”  For more information, see “Interval Global Solver Stopping Conditions” in the chapter “Solver Result Messages.”

Require Bounds on Variables

If using the Standard LSGRG Solver, this option is set to True by default, but it comes into play only when Multistart Search is enabled.  The multistart methods generate candidate starting points for the LSGRG Solver by randomly sampling values between the bounds on the variables that you specify.  If you do not specify both upper and lower bounds on each of the decision variables, the multistart methods can still be used, but because the random sample must be drawn from an “infinite” range of values, it is unlikely to effectively cover the possible starting points (and therefore have a good chance of finding all of the locally optimal solutions) unless the LSGRG Solver is run on a great many subproblems, which will take a very long time.

The tighter the bounds on the variables that you can specify, the better the multistart methods are likely to perform.  (This is also true of the Evolutionary Solver.)  Hence, this option is set to True by default, so that you will be automatically reminded to include both upper and lower bounds on all of the variables whenever you select Multistart Search.  If both Multistart Search and Require Bounds on Variables are enabled, but you have not defined upper and lower bounds on all of the variables, the Solver will stop with the result “Not all variables have upper and lower bounds, as required by this Solver Engine.”

Bounds on the variables are especially important to the performance of the Evolutionary Solver.  For example, the initial population of candidate solutions is created, in part, by selecting values at random from the ranges determined by each variable’s lower and upper bounds.  Bounds on the variables are also used in the mutation process – where a change is made to a variable value in some member of the existing population – and in several other ways in the Evolutionary Solver.  If you do not specify lower and upper bounds for all of the variables in your problem, the Evolutionary Solver can still proceed, but the almost-infinite range for these variables may significantly slow down the solution process, and make it much harder to find “good” solutions.  Hence, it pays for you to determine realistic lower and upper bounds for the variables, and enter them under Constraints in the Task Pane Model tab.

When the “Not all variables have upper and lower bounds” message appears, you must do one of the following before clicking Solve again: add both upper and lower bounds for each variable as constraints in the Task Pane Model tab; set Assume Non-Negative to True to supply lower bounds of zero; or uncheck the Require Bounds on Variables box (not recommended).

Assume Non-Negative

When this option is set to True, any decision variables that are not given explicit lower bounds via >=, binary, or alldifferent constraints in the Constraints list box of the Solver Pane will be given a lower bound of zero when the problem is solved.  This option has no effect for decision variables that do have explicit >= constraints, even if those constraints allow the variables to assume negative values.


Click OK to accept the engine settings and return to the Solver pane.

Click Cancel to return to the Solver pane without accepting any changes to the Solver Option settings.