A model in which the objective function and all of the constraints (other than integer constraints) are smooth nonlinear functions of the decision variables is called a nonlinear programming (NLP) or nonlinear optimization problem.  Such problems are intrinsically more difficult to solve than linear programming (LP) problems: they may be convex or non-convex, and an NLP Solver must compute or approximate derivatives of the problem functions many times during the course of the optimization.  Since a non-convex NLP may have multiple feasible regions and multiple locally optimal points within each region, there is no simple or fast way to determine with certainty that the problem is infeasible, that the objective function is unbounded, or that an optimal solution is the “global optimum” across all feasible regions.
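
To see why non-convexity matters in practice, the short Python sketch below runs a local NLP method from several starting points on a small smooth, non-convex problem.  The objective, the constraint, and the starting points are illustrative assumptions; SciPy's SLSQP routine (an implementation of Sequential Quadratic Programming) stands in here for a generic local NLP solver.

    # Multistart on a smooth, non-convex NLP: different starting points
    # can yield different locally optimal solutions.  Problem data here
    # are illustrative, not taken from any particular Solver model.
    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        # smooth but non-convex: a quartic plus a cosine ripple and a tilt
        return (x[0]**2 - 4)**2 + 3*np.cos(2*x[0]) + 0.5*x[0] + x[1]**2

    # one smooth inequality constraint: x0 + x1 <= 5
    cons = [{"type": "ineq", "fun": lambda x: 5.0 - x[0] - x[1]}]

    for x0 in ([-3.0, 0.0], [0.1, 0.0], [3.0, 0.0]):
        res = minimize(f, x0, method="SLSQP", constraints=cons)
        print(x0, "->", np.round(res.x, 3), "f =", round(float(res.fun), 3))

Runs started on opposite sides of the origin settle into different local minima with different objective values; a local method by itself cannot certify which, if either, is the global optimum.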

The GRG Nonlinear Solving method uses the Generalized Reduced Gradient method as implemented in Lasdon and Waren’s GRG2 code.  The GRG method can be viewed as a nonlinear extension of the Simplex method: on each major iteration it selects a basis, determines a search direction, and performs a line search, solving systems of nonlinear equations at each step to maintain feasibility.  Other methods for nonlinear optimization include Sequential Quadratic Programming (SQP) and Interior Point or Barrier methods.
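
The reduced-gradient idea is easiest to see in the linearly constrained case, sketched below: the basic variables are eliminated through the constraints, a reduced gradient is computed over the remaining nonbasic variables, and a line search moves along a direction that preserves feasibility.  This is a minimal illustration, not the GRG2 code itself; the problem data and basis choice are assumptions, and the full generalized method re-solves a system of nonlinear equations for the basic variables when the constraints are nonlinear.

    # Minimal reduced-gradient iteration for: minimize f(x) s.t. A x = b.
    # With linear constraints the basic variables follow in closed form,
    # x_B = B^{-1} (b - N x_N), so feasibility is maintained exactly;
    # GRG generalizes this elimination step to nonlinear constraints.
    import numpy as np

    def f(x):
        return (x[0] - 1.0)**2 + (x[1] - 2.0)**2 + np.exp(x[2])

    def grad_f(x):
        return np.array([2.0*(x[0] - 1.0), 2.0*(x[1] - 2.0), np.exp(x[2])])

    A = np.array([[1.0, 1.0, 1.0]])          # one constraint: x0 + x1 + x2 = 3
    b = np.array([3.0])
    basic, nonbasic = [0], [1, 2]            # basis chosen so B is nonsingular
    B, N = A[:, basic], A[:, nonbasic]

    x = np.array([0.0, 1.0, 2.0])            # feasible starting point
    for _ in range(200):
        g = grad_f(x)
        # reduced gradient: df/dx_N after eliminating x_B via the constraints
        r = g[nonbasic] - N.T @ np.linalg.solve(B.T, g[basic])
        if np.linalg.norm(r) < 1e-8:
            break
        d = np.zeros_like(x)
        d[nonbasic] = -r                                  # descent step in x_N
        d[basic] = -np.linalg.solve(B, N @ d[nonbasic])   # keeps A x = b
        t = 1.0                                           # backtracking line search
        while f(x + t*d) > f(x) - 1e-4 * t * (r @ r):
            t *= 0.5
        x += t*d

    print(np.round(x, 4), f(x))

Variable bounds, basis changes when a basic variable hits a bound, and the Newton-type solve that restores feasibility under nonlinear constraints are the parts of the full method this sketch omits.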