Confidence regions for global optima in nonlinear programming
Abstract
This dissertation develops new statistical techniques for nonlinear optimization, including nonconvex optimization. The approach is a statistical one and provides an upper confidence limit for the global maximum of a mathematical function g(x) of a vector x in a multi-dimensional 'feasible space', say S. Specifically, the dissertation develops the statistical techniques for determining these confidence limits, as well as algorithms implementing the techniques and computer programs executing the algorithms.

From a statistical viewpoint, the problem is one of determining an upper confidence limit for the upper endpoint of a finite-range distribution, given a sample from this distribution. The dissertation is therefore concerned predominantly with two issues: (a) the generation of the sample, and (b) the methods used to determine the upper confidence limit from the sample.

Several techniques for generating the sample have been studied. In particular, we may single out the following sampling procedure. A random starting point x_0, lying in the feasible space S, is selected, and an ascent algorithm is followed for k iterative steps, leading to a point x_0k. The corresponding sample value generated is g_1 = g(x_0k). This stochastic process is repeated n times, leading to the desired sample g_t, t = 1, ..., n.

Various modifications of this procedure have also been investigated. Notable among these is the use of a set of L systematically and deterministically selected values of g(x), say g^(j), j = 1, ..., L, in which case the t-th sample value g_t is defined as the weighted average of these g^(j) and the t-th observation of g(x_0k). ...
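The sampling procedure described above can be sketched in a few lines. The following is a minimal illustration, not the dissertation's implementation: it assumes plain gradient ascent with a fixed step size for the "ascent algorithm", and for the confidence limit it uses one classical choice based on the two largest order statistics (in the spirit of de Haan's endpoint estimation), with an assumed tail exponent a; the function names and parameters are hypothetical.

```python
def ascent_sample(g, grad_g, draw_start, n, k, step=0.01):
    """Generate n sample values g_t: from each random start x_0,
    take k ascent steps to reach x_0k and record g(x_0k).
    Plain fixed-step gradient ascent is an illustrative assumption."""
    samples = []
    for _ in range(n):
        x = draw_start()                 # random x_0 in the feasible space S
        for _ in range(k):               # k iterative ascent steps
            gr = grad_g(x)
            x = [xi + step * gi for xi, gi in zip(x, gr)]
        samples.append(g(x))             # sample value g_t = g(x_0k)
    return samples

def upper_confidence_limit(samples, p=0.05, a=1.0):
    """Illustrative (1 - p) upper confidence limit for the upper
    endpoint of a finite-range distribution, from the two largest
    order statistics; 'a' is an assumed tail exponent.  This is one
    classical construction, not necessarily the dissertation's method."""
    s = sorted(samples)
    return s[-1] + (p ** (-1.0 / a) - 1.0) * (s[-1] - s[-2])
```

For example, with g(x) = 1 - x1^2 - x2^2 on the square [-1, 1]^2 (true global maximum 1 at the origin), the ascent steps push each start toward the origin, the samples cluster just below 1, and the resulting limit lies at or above the largest sample.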