Nonlinear optimization problems are intrinsically more difficult to solve than linear problems, and there are fewer guarantees about what kind of solution Solver can find. If your smooth nonlinear problem is **convex**, Solver will normally find the *globally optimal* solution (subject to issues of **poor scaling** and the finite precision of computer arithmetic). But if your problem is **non-convex**, Solver will normally find only a *locally optimal* solution, close to the starting values of the decision variables, when you click Solve.
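The dependence on starting values can be sketched in a few lines of Python. This is a minimal illustration, not Solver itself: plain gradient descent stands in for a local NLP method, and the non-convex objective is an assumption chosen to have two local minima.

```python
def f(x):
    # Non-convex test objective: local minima near x = -1.04 (global)
    # and x = +0.96 (locally optimal only).
    return (x**2 - 1)**2 + 0.3*x

def fprime(x):
    return 4*x*(x**2 - 1) + 0.3

def descend(x, lr=0.01, steps=2000):
    # Simple gradient descent: follows a downhill path from the
    # starting value, as a stand-in for a local solver.
    for _ in range(steps):
        x -= lr * fprime(x)
    return x

x_right = descend(0.9)    # stops near the poorer local minimum (~+0.96)
x_left  = descend(-0.9)   # stops near the global minimum (~-1.04)
```

Both runs terminate at a point where the gradient is (nearly) zero, but only the second finds the globally optimal value: the method stops at the valley closest to where it started.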

When dealing with a **non-convex** problem, it is a good idea to run Solver starting from several different sets of initial values for the decision variables. Since Solver follows a path from the starting values (guided by the direction and curvature of the objective function and constraints) to the final solution values, it will normally stop at a peak or valley closest to the starting values you supply. By starting from more than one point – ideally chosen based on your own knowledge of the problem – you can increase the chances that you have found the best possible “optimal solution.”
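The multiple-starting-points idea can be sketched as follows. This is a plain-Python illustration under assumed choices: the objective, its gradient, and the hand-picked starting values are all hypothetical, and gradient descent again stands in for a local solve.

```python
def f(x):
    return (x**2 - 1)**2 + 0.3*x          # non-convex: two local minima

def fprime(x):
    return 4*x*(x**2 - 1) + 0.3

def local_solve(x, lr=0.01, steps=2000):
    # Gradient descent as a stand-in for one local solver run.
    for _ in range(steps):
        x -= lr * fprime(x)
    return x

# Starting values chosen from "knowledge of the problem" -- here,
# spread so that at least one start lies in each basin.
starts = [-2.0, -0.5, 0.5, 2.0]
solutions = [local_solve(x0) for x0 in starts]
best = min(solutions, key=f)              # keep the best local optimum found
```

Each run stops at the local minimum nearest its start; comparing the objective values and keeping the best recovers the global minimum near x = -1.04.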

An easy way to do this is to select the **Use MultiStart** check box on the GRG Nonlinear tab of the Solver Options dialog (see **Multistart Methods for Global Optimization** for more information). However, this method uses randomly chosen starting points, so it doesn’t take advantage of your special knowledge of the problem.
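The idea behind a multistart method can be sketched in plain Python: draw random starting points from the variable bounds, run a local solve from each, and keep the best result. The objective, the bounds, and the sample count here are illustrative assumptions, not MultiStart's actual parameters.

```python
import random

def f(x):
    return (x**2 - 1)**2 + 0.3*x          # non-convex test objective

def fprime(x):
    return 4*x*(x**2 - 1) + 0.3

def local_solve(x, lr=0.01, steps=2000):
    # Gradient descent as a stand-in for one local solver run.
    for _ in range(steps):
        x -= lr * fprime(x)
    return x

random.seed(42)                           # reproducible demo
lo, hi = -2.0, 2.0                        # assumed variable bounds
starts = [random.uniform(lo, hi) for _ in range(20)]
best = min((local_solve(x0) for x0 in starts), key=f)
```

With enough random samples, at least one start is likely to fall in the basin of the global minimum, so `best` ends up near x = -1.04. The tradeoff versus hand-picked starts is exactly the one the paragraph above notes: random sampling needs no problem knowledge, but cannot exploit it either.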

Nonlinear problems with **integer constraints** are solved by a Branch and Bound method that runs the GRG method on a series of subproblems. If the GRG method stops prematurely due to slow convergence, or fails to find a feasible point on a given run, this may prevent the Branch and Bound method from finding the true integer optimal solution, though in most cases – given enough time – a good integer solution can be found.
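The Branch and Bound scheme can be sketched in Python. This is a toy illustration under assumptions: a one-variable convex objective whose continuous subproblems have a closed-form solution (clipping the unconstrained minimizer into the bounds) stands in for the GRG runs.

```python
import math

def f(x):
    return (x - 2.6)**2                   # assumed objective; minimizer x = 2.6

def solve_relaxation(lo, hi):
    # Continuous subproblem: minimize f over [lo, hi]. Here this is
    # closed-form; in Solver each such subproblem is a GRG run.
    x = min(max(2.6, lo), hi)
    return x, f(x)

def branch_and_bound(lo, hi, best=(None, math.inf)):
    x, val = solve_relaxation(lo, hi)
    if val >= best[1]:                    # bound: subproblem cannot beat incumbent
        return best
    if abs(x - round(x)) < 1e-9:          # integer-feasible: new incumbent
        return (round(x), val)
    k = math.floor(x)                     # branch: x <= k  or  x >= k + 1
    best = branch_and_bound(lo, k, best)
    best = branch_and_bound(k + 1, hi, best)
    return best

x_int, val = branch_and_bound(0, 10)      # integer optimum at x = 3
```

The sketch also shows where a failed subproblem hurts: if `solve_relaxation` returned a wrong (too high) value or no feasible point, a branch containing the true integer optimum could be pruned incorrectly.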

Note that, when the GRG Nonlinear Solving method is selected in the dropdown list in the Solver Parameters dialog, the Generalized Reduced Gradient algorithm is used to solve the problem – *even if it is actually a linear model* that could be solved by the (faster and more reliable) Simplex LP method. The GRG method will usually find the optimal solution to a linear problem, but occasionally you will receive a Solver Result Message indicating some uncertainty about the status of the solution – especially if the model is **poorly scaled**. So you should always ensure that you have selected the right Solving method for your problem.