MINOS

5.5 Description

MINOS (developed by Bruce Murtagh and Michael Saunders) is a software package for solving large-scale optimization problems (linear and nonlinear programs). It is especially effective for linear programs and for problems with a nonlinear objective function and sparse linear constraints (e.g., quadratic programs). MINOS can also process large numbers of nonlinear constraints. The nonlinear functions should be smooth but need not be convex. For linear programs, MINOS uses a sparse implementation of the primal simplex method.
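As a rough illustration of the primal simplex method that MINOS implements (in sparse, factorized form), here is a toy dense tableau version for problems of the form min cᵀx subject to Ax ≤ b, x ≥ 0 with b ≥ 0. The function name and every implementation detail below are for illustration only and are not part of MINOS:

```python
import numpy as np

def primal_simplex(c, A, b):
    """Toy dense tableau primal simplex for  min c^T x  s.t.  A x <= b,
    x >= 0,  with b >= 0 so the slack basis is feasible.  Illustrative
    only: MINOS implements this method with sparse factorizations and
    far more careful pivoting and anti-degeneracy safeguards."""
    m, n = A.shape
    # Tableau layout: rows [ A  I  b ] over the reduced-cost row [ c  0  0 ].
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = c
    basis = list(range(n, n + m))          # start from the slack basis
    while True:
        j = int(np.argmin(T[-1, :-1]))     # most negative reduced cost enters
        if T[-1, j] >= -1e-12:
            break                          # optimal: all reduced costs >= 0
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-12 else np.inf
                  for i in range(m)]
        i = int(np.argmin(ratios))         # ratio test picks the leaving row
        if not np.isfinite(ratios[i]):
            raise ValueError("problem is unbounded")
        T[i] /= T[i, j]                    # pivot on element (i, j)
        for k in range(m + 1):
            if k != i:
                T[k] -= T[k, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    for i, bi in enumerate(basis):
        x[bi] = T[i, -1]
    return x[:n], -T[-1, -1]               # solution and optimal objective
```

For example, min −x₁ − 2x₂ subject to x₁ + x₂ ≤ 4 and x₁ + 3x₂ ≤ 6 gives x = (3, 1) with objective value −5.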

For nonlinear objective functions (and linear constraints), MINOS uses a reduced-gradient method with quasi-Newton approximations to the reduced Hessian. For problems with nonlinear constraints, MINOS uses a sparse SLC algorithm (a projected Lagrangian method, related to Robinson's method). It solves a sequence of subproblems in which the constraints are linearized and the objective is an augmented Lagrangian (involving all nonlinear functions). Convergence is rapid near a solution.
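In outline, the linearized subproblem can be written as follows (schematic notation from the projected-Lagrangian literature, not MINOS's exact internal formulation): with current iterate x_k, multiplier estimates λ_k, penalty parameter ρ, nonlinear constraints c with Jacobian J, and the linearization

```latex
\bar c_k(x) = c(x_k) + J(x_k)\,(x - x_k),
```

the subproblem minimizes the augmented Lagrangian

```latex
\min_x \; f(x) \;-\; \lambda_k^T \bigl(c(x) - \bar c_k(x)\bigr)
       \;+\; \tfrac{\rho}{2}\,\bigl\| c(x) - \bar c_k(x) \bigr\|^2
\quad \text{subject to} \quad \bar c_k(x) = 0,
```

together with the original linear constraints and bounds. The penalty and multiplier terms involve the departure of c(x) from its linearization, which vanishes as the iterates converge.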

MINOS makes use of nonlinear function and gradient values. The solution obtained will be a local optimum (which may or may not be a global optimum). If some of the gradients are unknown, they will be estimated by finite differences. If the linear constraints have no feasible solution, MINOS terminates as soon as infeasibility is confirmed. Infeasible nonlinear constraints are difficult to diagnose. (They are treated more methodically by SNOPT.)
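When a gradient component is not supplied, a forward-difference estimate of the kind mentioned above looks like the following sketch (the function name and the fixed interval h are illustrative; real differencing logic, MINOS's included, chooses the interval per variable and per scale):

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient estimate: the kind of approximation a
    solver falls back on when an analytic gradient is not supplied.
    (Schematic sketch with a fixed difference interval h.)"""
    fx = f(x)
    g = np.zeros(len(x))
    for i in range(len(x)):
        xp = x.astype(float).copy()
        xp[i] += h
        g[i] = (f(xp) - fx) / h            # [f(x + h e_i) - f(x)] / h
    return g

# Example: f(x) = x1^2 + 3 x1 x2 has gradient (2 x1 + 3 x2, 3 x1),
# which is (8, 3) at (1, 2); the estimate agrees to about 1e-6.
f = lambda x: x[0] ** 2 + 3 * x[0] * x[1]
print(fd_gradient(f, np.array([1.0, 2.0])))
```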
For large problems, efficiency is improved if only some of the variables enter nonlinearly, or if the number of active constraints is nearly as large as the number of variables (i.e., if there are few degrees of freedom at a solution). MINOS can accommodate problems with many degrees of freedom (perhaps thousands), but a few hundred or fewer is preferable.

MINOS is suitable for general QP problems (but may find just a local optimum if the quadratic is indefinite). LSSOL and QPOPT are suitable for LP and QP problems of arbitrary size if the constraint matrix is essentially dense, but the sparse solvers MINOS and SNOPT will usually be more efficient even for dense problems.
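To see why a non-convex quadratic can trap a local method at a non-global optimum, consider the one-dimensional concave example below. The projected-gradient iteration here is a hypothetical stand-in for any local solver, not MINOS's algorithm; depending on the starting point it returns different local optima:

```python
def projected_gradient_descent(grad, x, lo, hi, step=0.1, iters=200):
    """Hypothetical local method for  min q(x)  s.t.  lo <= x <= hi  in one
    variable: like any local solver, it stops at whichever stationary point
    or bound the starting point leads to."""
    for _ in range(iters):
        x = min(max(x - step * grad(x), lo), hi)
    return x

# q(x) = -x^2 on [-1, 2] is a concave (hence non-convex) quadratic with
# two local minima: q(-1) = -1 (local only) and q(2) = -4 (global).
grad = lambda x: -2.0 * x
print(projected_gradient_descent(grad, -0.5, -1.0, 2.0))  # -> -1.0 (local)
print(projected_gradient_descent(grad,  0.5, -1.0, 2.0))  # ->  2.0 (global)
```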
If the constraints are highly nonlinear, or the functions and gradients are expensive to evaluate, SNOPT or NPSOL may be more effective. (They use an SQP algorithm in which the subproblems have the same linearized constraints as in MINOS, but the objective is a quadratic approximation to the Lagrangian. Hence, no function or gradient values are needed during the solution of each QP. A merit function promotes convergence from arbitrary starting points.)
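The SQP subproblem alluded to here has the standard textbook form (schematic; not SNOPT's or NPSOL's exact formulation). At iterate x_k with Lagrangian L and multiplier estimates λ_k,

```latex
\min_{p} \; \nabla f(x_k)^T p + \tfrac{1}{2}\, p^T H_k\, p
\quad \text{subject to} \quad c(x_k) + J(x_k)\, p = 0,
```

where H_k approximates the Hessian of the Lagrangian, ∇²ₓₓL(x_k, λ_k), and the linear constraints and bounds carry over unchanged. Because f and c enter the QP only through their values and derivatives at x_k, no further function or gradient evaluations are needed while the QP is solved.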
