Smooth Nonlinear Optimization

Frontline Systems' optimizers solve smooth nonlinear optimization problems using the methods described below.

For an explanation of these types of problems, please see Optimization Problem Types: Smooth Nonlinear Optimization.

Generalized Reduced Gradient Method

The standard Microsoft Excel Solver, the Premium Solver, and the Premium Solver Platform use the Generalized Reduced Gradient (GRG) method as implemented in an enhanced version of Lasdon and Waren's GRG2 code.  The GRG2 code has been proven in use over many years as one of the most robust and reliable approaches to solving difficult NLP problems.  

The standard Microsoft Excel Solver has a limit of 200 decision variables and 100 constraints (in addition to bounds on the variables) for NLP problems.  The Premium Solver Platform has a limit of 500 decision variables and 250 constraints for such problems.  It also includes multistart methods for global optimization that use the GRG method on subproblems to find locally optimal solutions.  These solvers do not exploit sparsity in NLP models.

The Large-Scale GRG Solver uses a more powerful GRG method as implemented in Lasdon, Smith and Plummer's LSGRG code.  This solver uses sparse matrix storage methods, advanced methods for selecting a basis and dealing with degeneracy, "crashing" methods for finding a feasible solution quickly, and other algorithmic methods adapted for larger problems. It handles problems of up to 12,000 variables and 12,000 constraints in addition to bounds on the variables.

Sequential Quadratic Programming Method

The Large-Scale SQP Solver uses a Sequential Quadratic Programming (SQP) method to solve smooth nonlinear problems.  It is also highly effective at solving linear programming (LP) and quadratic programming (QP) problems, and it handles problems of unlimited size, subject to available time and memory.

The KNITRO Solver uses a related Sequential Linear-Quadratic Programming (SLQP) method, as an alternative to its Interior Point methods (see below), to solve smooth nonlinear problems.  It also handles problems of unlimited size, subject to available time and memory.

At each major iteration or trial solution, an SQP method obtains an updated search direction by solving a quadratic programming (QP) subproblem. The objective of this QP subproblem is a quadratic approximation of a modified Lagrangian function that depends on the nonlinear problem's objective and constraints; the constraints of the QP subproblem are linearizations at the current point of the nonlinear problem's constraints.
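To make this pattern concrete, the sketch below solves a small smooth NLP with SciPy's SLSQP routine, a sequential (least-squares) quadratic programming implementation. This illustrates the SQP approach in general, not the Large-Scale SQP Solver or KNITRO specifically; the objective and constraint are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth objective: the Rosenbrock function.
def objective(x):
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

# One smooth nonlinear inequality, x0^2 + x1^2 <= 2, written in
# SciPy's "fun(x) >= 0" convention.
constraints = [{"type": "ineq",
                "fun": lambda x: 2.0 - x[0]**2 - x[1]**2}]

result = minimize(objective, np.array([0.0, 0.0]), method="SLSQP",
                  bounds=[(-2.0, 2.0), (-2.0, 2.0)],
                  constraints=constraints)
print(result.x, result.fun)       # converges near (1, 1)
```

At each iteration, SLSQP builds and solves a QP subproblem from the current gradients and linearized constraints, following the scheme described above.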

The Large-Scale SQP Solver uses a smooth augmented Lagrangian merit function, and maintains a limited-memory quasi-Newton approximation to the Hessian of the Lagrangian. Its QP subproblems are solved using a reduced-Hessian active-set method that allows for variables appearing linearly in the objective and the constraints. It uses "elastic programming" techniques to deal with infeasibility in the original problem and the QP subproblems; for infeasible models, it is more likely to arrive at a "close to feasible" solution than most other SQP solvers.
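For intuition about the merit function, a generic smooth augmented Lagrangian for a problem with equality constraints c(x) = 0 takes the form sketched below. This is the standard textbook form, not the solver's exact internal function:

```python
import numpy as np

def augmented_lagrangian_merit(f, c, x, lam, rho):
    """Generic smooth augmented Lagrangian merit function
        M(x) = f(x) - lam' c(x) + (rho/2) ||c(x)||^2
    for a problem with equality constraints c(x) = 0."""
    cx = np.asarray(c(x), dtype=float)
    return f(x) - lam @ cx + 0.5 * rho * (cx @ cx)
```

Requiring each step to decrease such a merit function balances progress on the objective against progress toward feasibility; the multiplier estimates lam and the penalty weight rho are updated as the iterations proceed.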

The KNITRO Solver's SLQP method solves an LP problem to estimate the active set at the solution, using a linear approximation to the L1 penalty function inside a trust region. Then an equality constrained quadratic program (EQP) is solved involving only those constraints that are active at the solution of the first problem. The EQP incorporates a trust-region constraint and is solved (inexactly) by means of a projected conjugate gradient method.
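The projected conjugate gradient idea used in the EQP phase can be sketched as follows. This is a generic textbook version for minimizing a quadratic over the null space of the active constraint Jacobian A; KNITRO's actual implementation adds the trust-region constraint and negative-curvature safeguards omitted here, and the dense projector below (which assumes A has full row rank) is for clarity only.

```python
import numpy as np

def projected_cg(G, g, A, tol=1e-8, max_iter=200):
    """Minimize 0.5 d'Gd + g'd subject to A d = 0 by running
    conjugate gradients projected onto the null space of A."""
    # Orthogonal projector onto null(A); dense for clarity only.
    P = np.eye(len(g)) - A.T @ np.linalg.solve(A @ A.T, A)
    d = np.zeros_like(g, dtype=float)
    r = P @ g                          # projected gradient at d = 0
    p = -r
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Gp = G @ p
        alpha = (r @ r) / (p @ Gp)
        d = d + alpha * p
        r_new = r + alpha * (P @ Gp)   # residual stays in null(A)
        beta = (r_new @ r_new) / (r @ r)
        p = -r_new + beta * p
        r = r_new
    return d

# Hypothetical 3-variable EQP with one active linearized constraint.
G = np.diag([2.0, 4.0, 6.0])
g = np.array([-1.0, -2.0, -3.0])
A = np.array([[1.0, 1.0, 1.0]])
print(projected_cg(G, g, A))           # step d satisfying A @ d = 0
```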

Interior Point or Barrier Method

The MOSEK Solver uses an Interior Point method for convex problems, called the Homogeneous Self-Dual method, to solve large-scale LP, QP, QCP, and SOCP problems, and general smooth convex nonlinear problems of unlimited size, subject to available time and memory.

The KNITRO Solver uses an Interior Point nonlinear method, also known as a Barrier method.  It is highly effective on smooth nonlinear problems of unlimited size, even with many degrees of freedom.  The KNITRO algorithm uses trust regions and a merit function to promote convergence.  Interior Point or Barrier methods solve a series of barrier subproblems: they perform one or more minimization steps on each barrier subproblem, then decrease the barrier parameter, and repeat the process until the original problem has been solved to the desired accuracy.
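The basic loop can be sketched in a few lines. In this generic version, each subproblem replaces the inequality constraints g_i(x) <= 0 with a logarithmic penalty weighted by the barrier parameter mu. This is a textbook illustration, not KNITRO's trust-region algorithm; the inner solver and example problem are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

def barrier_method(f, gs, x0, mu=1.0, shrink=0.1, tol=1e-6):
    """Generic log-barrier loop for: minimize f(x) s.t. g_i(x) <= 0.
    Each pass minimizes the barrier subproblem
        f(x) - mu * sum_i log(-g_i(x)),
    then shrinks mu and re-solves from the previous solution."""
    x = np.asarray(x0, dtype=float)
    while mu > tol:
        def subproblem(x):
            gvals = np.array([g(x) for g in gs])
            if np.any(gvals >= 0.0):        # left the interior
                return np.inf
            return f(x) - mu * np.sum(np.log(-gvals))
        x = minimize(subproblem, x, method="Nelder-Mead").x
        mu *= shrink                        # tighten the barrier
    return x

# Hypothetical example: minimize (x - 2)^2 subject to x <= 1.
x_opt = barrier_method(lambda x: (x[0] - 2.0)**2,
                       [lambda x: x[0] - 1.0],
                       x0=[0.0])
print(x_opt)    # approaches the constrained optimum at x = 1
```

As mu shrinks, the minimizers of the successive subproblems trace the central path toward the constrained optimum.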

The KNITRO Solver can use either of two procedures in its minimization steps. In the version known as KNITRO-CG, each step is the sum of a normal step whose objective is to improve feasibility, and a tangential step computed using a projected conjugate gradient iteration. The version known as KNITRO-Direct always attempts to compute a new iterate by solving the primal-dual KKT system using direct linear algebra. When this step cannot be guaranteed to be of good quality, or when negative curvature is detected, the new iterate is computed by the KNITRO-CG procedure instead.

Interior Point methods generally require second derivative information (the Hessian of the Lagrangian of the objective and constraints) at each major iteration. The SOCP Barrier Solver and the MOSEK Solver rely upon the Premium Solver Platform's ability to compute analytic second derivatives via automatic differentiation.

The KNITRO Solver has several alternatives for computing derivatives: it can obtain analytic second derivatives from the Interpreter in the Premium Solver Platform via automatic differentiation; it can obtain analytic first derivatives via automatic differentiation, and use these to construct a Hessian approximation using a quasi-Newton (BFGS) or limited-memory quasi-Newton approach; or it can use analytic or estimated first derivatives to compute approximations of the Hessian-vector products used by the interior point method.
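The quasi-Newton alternative can be illustrated with the classic BFGS update, in which differences of successive gradients stand in for true second derivatives. This is the textbook full-matrix form; a limited-memory variant like the one mentioned above stores only a few recent (s, y) vector pairs rather than the matrix B.

```python
import numpy as np

def bfgs_update(B, s, y):
    """One BFGS update of a Hessian approximation B, from a step
    s = x_new - x_old and gradient change y = grad_new - grad_old:
        B+ = B - (B s)(B s)'/(s' B s) + y y'/(y' s)
    The update is skipped when y's is not positive, which keeps
    B symmetric positive definite."""
    ys = y @ s
    if ys <= 1e-12:
        return B                  # curvature condition fails; skip
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / ys
```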