Constrained nonlinear programming problems (NLPs) arise in a wide variety of engineering applications. Examples include the design of digital filter banks in communications, digital filters in audio-visual equipment, artificial neural networks for time-series forecasting, placement and routing of components in computer-aided design, power-plant operation in nuclear reactors, phase and chemical-reaction equilibrium in chemical engineering, structural optimization in civil engineering, and heat exchangers in mechanical engineering. Improved solutions to these problems can yield significant savings in two forms: better solutions at the same cost, or solutions of comparable quality at lower cost.
In its simplest form, a problem is formulated as an objective and one or more constraints. Under various assumptions, such problems can be solved efficiently by many existing algorithms. For instance, when the objective and constraints are continuous and differentiable and all problem variables are continuous, sequential quadratic programming (SQP) can find high-quality local optima. When the functions are convex over suitable subspaces, certain forms of mixed-integer constrained NLPs can also be solved. In general, these methods are restricted to finding local optima of problems that satisfy the underlying assumptions and can be used for global optimization only in special cases.
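To make the standard formulation concrete, the following sketch solves a small continuous, differentiable NLP (minimize a quadratic objective subject to one equality constraint) using a quadratic-penalty gradient method. This is a simpler relative of SQP, not SQP itself, and the problem instance, penalty schedule, and step sizes are all illustrative assumptions rather than anything specified in the text.

```python
# Quadratic-penalty method for a small equality-constrained NLP:
#   minimize   f(x, y) = (x - 1)^2 + (y - 2)^2
#   subject to h(x, y) = x + y - 2 = 0
# The exact constrained optimum is (0.5, 1.5).
# Illustrative sketch only; real SQP instead solves a quadratic
# model of the Lagrangian at each iteration.

def solve_penalty(steps_per_mu=2000):
    x, y = 0.0, 0.0                        # arbitrary starting point
    for mu in (1.0, 10.0, 100.0, 1000.0):  # increasing penalty weight
        lr = 1.0 / (2.0 + 4.0 * mu)        # stable step size for this mu
        for _ in range(steps_per_mu):
            h = x + y - 2.0                # constraint violation
            # gradient of the penalized objective f + mu * h^2
            gx = 2.0 * (x - 1.0) + 2.0 * mu * h
            gy = 2.0 * (y - 2.0) + 2.0 * mu * h
            x -= lr * gx
            y -= lr * gy
    return x, y

x, y = solve_penalty()
print(x, y)  # approaches (0.5, 1.5) as mu grows
```

Increasing the penalty weight gradually, and warm-starting each stage from the previous solution, keeps the penalized subproblems well conditioned; a single very large weight would make plain gradient descent painfully slow.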
In more general cases, the objective and constraints of an application may only be evaluated procedurally or through costly simulations. Further, function surfaces in large-scale complex problems often have a large number of dimensions and can be very rugged, trapping local-search methods in local optima and often requiring restarts to move the search to a new region. Problem variables may be continuous, discrete, or mixed-integer, and objectives and constraints may be highly nonlinear and non-differentiable. As a result, gradient-based methods are impractical, because each gradient calculation may involve extensive computation. Existing stochastic methods, such as simulated annealing (SA) and genetic algorithms (GA), do not rely on gradients but handle nonlinear constraints poorly. Consequently, some applications that are inherently constrained have been formulated as unconstrained problems, for lack of efficient methods to solve them directly. The development of solution methods for general constrained NLPs is, therefore, of significant importance.
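One common way of recasting a constrained problem as an unconstrained one is the static-penalty transformation: constraint violations are folded into the objective with a fixed weight, and a gradient-free method such as SA is run on the penalized function. The sketch below is a hypothetical instance of this idea; the objective, constraint, penalty weight, and annealing schedule are all illustrative assumptions, and a single static weight is known to be fragile in practice (too small leaves infeasible optima attractive, too large flattens the feasible landscape).

```python
import math
import random

# Static-penalty transformation of a constrained problem for SA:
#   minimize   f(x) = x^4 - 3x^2      (minima near x = +/-1.2247)
#   subject to g(x) = 0.5 - x <= 0    (i.e. x >= 0.5)
# The penalty steers SA away from the infeasible minimum near x = -1.2247,
# so the best point found should be the feasible optimum near x = 1.2247.
# All parameters are illustrative assumptions, not tuned values.

MU = 100.0  # static penalty weight (assumed large enough here)

def f(x):
    return x**4 - 3.0 * x**2

def penalized(x):
    violation = max(0.0, 0.5 - x)     # amount by which x >= 0.5 fails
    return f(x) + MU * violation**2

def anneal(iters=20000, t0=2.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    x = 0.0
    fx = penalized(x)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        x_new = x + rng.gauss(0.0, 0.3)    # random local move
        f_new = penalized(x_new)
        # Metropolis acceptance on the penalized objective
        if f_new < fx or rng.random() < math.exp(-(f_new - fx) / t):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                        # geometric cooling
    return best_x, best_f

best_x, best_f = anneal()
print(best_x, best_f)  # best_x should be near the feasible optimum 1.2247
```

Note that SA itself never sees the constraint; it only sees the penalized surface, which is exactly why the choice of penalty weight, normally a trial-and-error matter, becomes critical.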
The Novel Project studies the theory of, and efficient algorithms for, solving general discrete, continuous, and mixed-integer constrained NLPs. Results from this project have been applied to large-scale problems in artificial intelligence, digital signal processing, neural-network learning, and operations research.