- Solver found a solution. All constraints and optimality conditions are satisfied.
- Solver has converged to the current solution. All constraints are satisfied.
- Solver cannot improve the current solution. All constraints are satisfied.
Bear in mind that all of these messages assume that the model is reasonably well-scaled, and/or that the Use Automatic Scaling box has been checked in the Solver Options dialog (it is unchecked by default). A poorly scaled model can "fool" all of the Solver's algorithmic methods, as discussed in Problems with Poorly Scaled Models.
When the message "Solver found a solution" appears, it means that the GRG Solver has found a locally optimal solution -- there is no other set of values for the decision variables close to the current values that yields a better value for the objective.
Figuratively, this means that the Solver has found a "peak" (if maximizing) or "valley" (if minimizing) -- but there may be other taller peaks or deeper valleys far away from the current solution. Mathematically, this message means that the Karush-Kuhn-Tucker (KKT) conditions for local optimality have been satisfied (to within a certain tolerance, related to the Precision setting in the Solver Options dialog).
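To make the KKT conditions concrete, here is a hypothetical one-variable example checked in Python. The problem, the tolerance value, and the variable names are all illustrative assumptions -- this is not the Solver's internal test, just a sketch of what "KKT conditions satisfied" means:

```python
# Hypothetical example (not Solver's internal check):
# minimize f(x) = (x - 3)^2  subject to  g(x) = x - 2 <= 0.
# The KKT conditions at a candidate x* require a multiplier mu with
#   f'(x*) + mu * g'(x*) = 0   (stationarity)
#   mu >= 0                    (dual feasibility)
#   mu * g(x*) = 0             (complementary slackness)
x_star = 2.0                   # the constrained optimum sits on the boundary
f_prime = 2 * (x_star - 3)     # f'(x*) = -2
g_prime = 1.0                  # g'(x*) = 1
mu = -f_prime / g_prime        # mu = 2, the Lagrange multiplier

tolerance = 1e-8               # plays the role of Solver's Precision setting
stationary = abs(f_prime + mu * g_prime) <= tolerance
dual_feasible = mu >= -tolerance
complementary = abs(mu * (x_star - 2.0)) <= tolerance
print(stationary and dual_feasible and complementary)  # True
```

Note that the unconstrained minimum x = 3 violates the constraint, so the optimum lies on the boundary x = 2, where the constraint's "push" (mu = 2) exactly balances the objective's gradient.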
The best that current nonlinear optimization methods can guarantee is to find a locally optimal solution. We recommend that you run the GRG Solver starting from several different sets of initial values for the decision variables -- ideally chosen based on your own knowledge of the problem. In this way you can increase the chances that you have found the best possible "optimal solution."
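The multistart idea can be sketched in Python with a toy objective. The function, the starting points, and the crude hill-climbing routine below are illustrative assumptions, not what the GRG Solver does internally:

```python
import math

# Toy objective with two "peaks" on [0, 10]: a shorter one near x = 2
# and a taller one near x = 8. (Illustrative, not a Solver model.)
def f(x):
    return x * math.sin(x)

def hill_climb(x, step=0.1, tol=1e-6):
    # Crude local ascent: move uphill, halving the step when stuck.
    # Like any local method, it can only find the peak nearest its start.
    while step > tol:
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            step /= 2
    return x

starts = [1.0, 4.0, 7.0]                 # several sets of initial values
peaks = [hill_climb(s) for s in starts]  # each run stops at a local peak
best = max(peaks, key=f)                 # keep the best local optimum found
```

A run started only from x = 1.0 stops at the shorter peak near x = 2; the run from x = 7.0 reaches the taller peak near x = 8, which the multistart loop keeps as its final answer.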
When the GRG Solver's second stopping condition is satisfied (before the KKT conditions are satisfied), the message "Solver converged to the current solution" appears. This means that the objective function value has been changing very slowly over the last few iterations or trial solutions.
More precisely, the GRG Solver stops if the absolute value of the relative change in the objective function is less than the Convergence setting in the Solver Options dialog for the last few iterations. A poorly scaled model is more likely to trigger this stopping condition, even if the Use Automatic Scaling box in the Solver Options dialog is checked. So it pays to design your model to be reasonably well scaled in the first place: The typical values of the objective and constraints should not differ from each other, or from the decision variable values, by more than three or four orders of magnitude.
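The stopping test described above looks roughly like the sketch below. The exact formula and the number of iterations the Solver examines are not published, so the relative-change measure and the five-iteration window here are assumptions for illustration:

```python
CONVERGENCE = 1e-4   # plays the role of the Convergence setting
WINDOW = 5           # "the last few iterations" -- assumed width

def has_converged(objective_history):
    """True if the relative change in the objective stayed below
    CONVERGENCE over the last WINDOW iterations (assumed test)."""
    if len(objective_history) <= WINDOW:
        return False
    recent = objective_history[-(WINDOW + 1):]
    for prev, curr in zip(recent, recent[1:]):
        # guard against division by zero when the objective is near 0
        rel_change = abs(curr - prev) / max(1.0, abs(prev))
        if rel_change >= CONVERGENCE:
            return False
    return True

print(has_converged([100, 90, 70, 40, 20, 10]))    # False: still improving
print(has_converged([10.0, 10.0001, 10.0002,
                     10.0002, 10.0003, 10.0003]))  # True: nearly flat
```

A poorly scaled model can make the relative change look tiny simply because the objective value is huge, which is one reason this condition fires more often on badly scaled models.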
If you are getting this message when you are seeking a locally optimal solution, you may tighten the Convergence setting (by entering a smaller number in the Solver Options dialog). But you should first consider why the objective function is changing so slowly. Perhaps you can add constraints or use different starting values for the variables, so that the Solver does not get "trapped" in a region of slow improvement.
The GRG Solver's third stopping condition, which yields the message "Solver cannot improve the current solution," occurs only rarely. It means that the model is degenerate and the Solver is probably cycling. You should first read the discussion of poorly scaled models, which can contribute to this result. Otherwise, the technical issues involved are beyond the level of our Web discussion, as well as most of the Recommended Books. One possibility worth checking is that some of your constraints are redundant, and should be removed. If this suggestion doesn't help, you will probably need specialized consulting assistance.