Plenary Speaker Abstracts
Jonathan Borwein,
University of Newcastle
The Douglas-Rachford Algorithm in the Absence of Convexity
The Douglas-Rachford iteration scheme, introduced half a century
ago in connection with nonlinear heat flow problems, aims to find
a point common to two or more closed constraint sets.
Convergence is ensured when the sets are convex subsets of a Hilbert
space. However, despite the absence of a satisfactory theoretical
justification, the scheme has been routinely and successfully used
to solve a diversity of practical optimization and feasibility problems
in which one or more of the constraints involved is non-convex.
As a first step toward addressing this deficiency, we provide convergence
results for a prototypical non-convex (phase-recovery) scenario:
Finding a point in the intersection of the Euclidean sphere and
an affine subspace.
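The scenario can be sketched numerically. Below is a minimal Douglas-Rachford iteration for the planar instance (unit circle and a horizontal line); the line height h = 0.6 and the starting point are illustrative choices, not taken from the talk.

```python
import numpy as np

# Douglas-Rachford iteration for the prototypical non-convex scenario:
# find a point in the intersection of the unit sphere in R^2 and a
# horizontal line (an affine subspace). h = 0.6 is an assumed height.

def P_sphere(x):
    """Nearest point on the unit sphere (undefined at the origin)."""
    return x / np.linalg.norm(x)

def P_line(x, h=0.6):
    """Nearest point on the affine subspace {(x1, h)}."""
    return np.array([x[0], h])

def dr_step(x):
    """One Douglas-Rachford step: x -> x - P_A(x) + P_B(2 P_A(x) - x)."""
    pa = P_sphere(x)
    return x - pa + P_line(2.0 * pa - x)

x = np.array([0.9, 0.2])          # generic starting point
for _ in range(100):
    x = dr_step(x)

# For generic starting points the iterates approach the nearby
# intersection point, here (sqrt(1 - 0.36), 0.6) = (0.8, 0.6).
print(P_sphere(x))
```

Despite the circle being non-convex, the iteration settles onto the intersection point, which is the phenomenon the convergence results explain.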
Much of my lecture will be interactive, using the interactive geometry
package Cinderella and the HTML applets
http://www.carma.newcastle.edu.au/~jb616/reflection.html
http://www.carma.newcastle.edu.au/~jb616/expansion.html
http://www.carma.newcastle.edu.au/~jb616/lm-june.html
This is joint work with Brailey Sims. A companion article is Jonathan
M. Borwein and Brailey Sims, "The Douglas-Rachford algorithm
in the absence of convexity," Chapter 6, pp. 93-109 in Fixed-Point
Algorithms for Inverse Problems in Science and Engineering in Springer
Optimization and Its Applications, 2011.
Available at:
http://carma.newcastle.edu.au/jon/dr.pdf
Panos M. Pardalos,
University of Florida
Network Analysis and Data Mining in Computational Neuroscience
Over the last decades, network models have been widely appreciated
as an essential tool for studying the dynamics of complex systems.
The dynamics of such a system are tightly connected to the structure
of the underlying network. The human brain is an apt example of such
a complex system, and network modeling may prove beneficial in studying
brain function. However, network models of the brain have additional
complexity due to their time-varying structure.
In this lecture, preliminary results on the structure of brain network
models based on the analysis of physiological data (fMRI, EEG) are
presented. Moreover, we will discuss particular results on Parkinson's
and epileptic human brain network models.
This is joint work with Jui-Hong Chien, Dmytro Korenkevych, and
Syed N. Mujahid.
Javier Pena,
Carnegie Mellon University
First-order Algorithms for
Large-scale Convex Optimization
A number of interesting problems in game theory, signal processing,
and machine learning can be modeled as convex optimization problems.
However, for many practical instances of current interest, these
optimization problems are immense and not amenable to traditional
optimization techniques. This talk discusses some theoretical and
computational developments in the modern class of "accelerated
first-order algorithms" for convex optimization. We describe
various deterministic and stochastic first-order algorithms and
their effectiveness in solving computational challenges in algorithmic
game theory and in regularized linear regression.
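A minimal member of the "accelerated first-order algorithms" class the talk surveys is Nesterov/FISTA-style accelerated gradient descent, sketched here on a smooth least-squares problem with synthetic data; the talk's applications are far larger and may involve nonsmooth terms handled by proximal steps.

```python
import numpy as np

# Accelerated gradient descent (Nesterov/FISTA momentum schedule) on
# min_x 0.5 ||Ax - b||^2. Problem data are synthetic and illustrative.

rng = np.random.default_rng(4)
A = rng.normal(size=(50, 20))
b = rng.normal(size=50)
L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient

def grad(x):
    return A.T @ (A @ x - b)

x = y = np.zeros(20)
t = 1.0
for _ in range(500):
    x_new = y - grad(y) / L                      # gradient step at y
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2     # momentum schedule
    y = x_new + (t - 1) / t_new * (x_new - x)    # extrapolation
    x, t = x_new, t_new

x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print("distance to least-squares solution:", np.linalg.norm(x - x_star))
```

The extrapolation step is what upgrades the classical O(1/k) rate of plain gradient descent to the accelerated O(1/k^2) rate on smooth convex problems.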
Alexander Shapiro,
Georgia Institute of Technology
Risk Averse Stochastic Programming
In recent years a theory of so-called coherent risk measures
has emerged. Its application to static (two-stage) stochastic programming
is well established and generally accepted by the scientific community.
The multistage case, on the other hand, is much more delicate. In
this talk we discuss various approaches to multistage risk averse
stochastic programming and point to involved conceptual and numerical
difficulties.
Contributed Speaker Abstracts
Preprocessing and Regularization for
Degenerate Semidefinite Programs
by
Yuen-Lam Cheng
Current popular algorithms for semidefinite programs (SDP) rely
on primal-dual interior point methods, and require Slater's constraint
qualification to hold for both the primal and dual problems. However,
in practice there are many instances of SDP where Slater's constraint
qualification fails. These instances, particularly those that have
nonzero duality gap or have unattainable optimal values, may present
numerical difficulty to standard SDP solvers. We propose a backward
stable preprocessing technique using rank-revealing rotations of
the problem followed by facial reduction. This results in a smaller,
well-posed, nearby problem that can be solved by standard SDP solvers.
Risk Management of Portfolios by CVaR
Optimization
by
Thomas Coleman
University of Waterloo
Coauthors: Yuying Li, Lei Zhu
The optimal portfolio selection problem is the fundamental optimization
problem in finance - optimally balancing risk and return, with possible
additional constraints. Unfortunately the classical optimization
approach is very sensitive to estimation error, especially with
respect to the estimated mean return, and the resulting efficient
frontier may be of little practical value. Indeed it may be dangerous
from a risk management point of view. A popular alternative, usually
under the banner of "robust optimization" is ultra-conservative
and, we argue, not really robust! In this sense it may also be of
questionable practical value. We propose an alternative optimization
approach - a CVaR optimization formulation that is relatively insensitive
to estimation error, yields diversified optimal portfolios, and
can be implemented efficiently. We discuss this promising approach
in this talk and present strongly supportive numerical results.
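The general technique behind scenario-based CVaR optimization is the Rockafellar-Uryasev linear programming formulation, sketched below on synthetic scenario data; the talk's specific error-insensitive formulation is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Rockafellar-Uryasev LP for minimizing the CVaR at level beta of a
# long-only, fully invested portfolio over N return scenarios.
# Variables: [w (n assets), alpha (VaR proxy), u (N scenario excesses)].
# Scenario returns below are synthetic.

rng = np.random.default_rng(0)
N, n = 500, 4
R = rng.normal(0.001, 0.02, (N, n))   # simulated scenario returns
beta = 0.95

# minimize alpha + 1/((1-beta) N) * sum(u)
c = np.concatenate([np.zeros(n), [1.0], np.full(N, 1.0 / ((1 - beta) * N))])

# u_s >= -R_s . w - alpha   <=>   -R_s . w - alpha - u_s <= 0
A_ub = np.hstack([-R, -np.ones((N, 1)), -np.eye(N)])
b_ub = np.zeros(N)

# fully invested: sum(w) = 1
A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(N)])[None, :]
b_eq = [1.0]

bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * N
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

w = res.x[:n]
print("weights:", np.round(w, 3), " CVaR:", round(res.fun, 5))
```

Because the max(0, .) terms linearize exactly, the problem stays an LP and scales to large scenario sets, which is part of why CVaR formulations can be implemented efficiently.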
Identifiable sets in optimization
by
Dmitriy Drusvyatskiy
School of Operations Research and Information Engineering, Cornell
University
Coauthors: Adrian S. Lewis
Seeking only the most essential ingredients of sensitivity analysis
leads to the notion of a locally minimal identifiable set. I will
discuss how for concrete optimization problems, such sets typically
exist and are smooth - thereby allowing one to reduce nonsmooth
sensitivity analysis to the classical smooth setting.
Complexity of Low-Rank Matrix Approximations
with Weights or Missing Data
by
Nicolas Gillis
University of Waterloo
Coauthors: François Glineur
Weighted low-rank approximation (WLRA), a dimensionality reduction
technique for data analysis, has been successfully used in several
applications, such as in collaborative filtering to design recommender
systems or in computer vision to recover structure from motion.
In this talk, after introducing and motivating the problem, we prove
that computing an optimal weighted low-rank approximation is NP-hard,
even when a rank-one approximation is sought. In fact, we show
that it is hard to compute approximate solutions to the WLRA problem
with some prescribed accuracy. Our proofs are based on reductions
from the maximum-edge biclique problem, and apply to strictly positive
weights as well as to binary weights; the latter case corresponds to
low-rank matrix approximation with missing data, also known as low-rank
matrix completion with noise.
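Since optimal WLRA is NP-hard even in the rank-one case, practical methods are local heuristics. Below is a minimal alternating-least-squares sketch for rank-one WLRA with a binary weight matrix W (i.e., low-rank approximation with missing data); the data, sizes, and observation density are illustrative assumptions.

```python
import numpy as np

# Rank-one weighted low-rank approximation by alternating minimization:
# minimize sum_ij W_ij (M_ij - u_i v_j)^2 over u, v, with W binary.
# M is exactly rank one here, so a perfect fit on observed entries exists.

rng = np.random.default_rng(1)
m, n = 20, 15
M = np.outer(rng.normal(size=m), rng.normal(size=n))   # rank-one data
W = (rng.random((m, n)) < 0.7).astype(float)           # ~70% observed

u, v = rng.normal(size=m), rng.normal(size=n)
for _ in range(200):
    # closed-form weighted least-squares update of u with v fixed
    u = (W * M) @ v / np.maximum((W * v**2).sum(axis=1), 1e-12)
    # ... and of v with u fixed
    v = (W * M).T @ u / np.maximum((W.T * u**2).sum(axis=1), 1e-12)

# RMSE over the observed entries only
err = np.sqrt((W * (M - np.outer(u, v))**2).sum() / W.sum())
print("weighted RMSE:", err)
```

Each update is a one-dimensional weighted least-squares problem with a closed form, so iterations are cheap; the hardness result concerns global optimality, which such heuristics do not guarantee.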
Superlinear elliptic equations with subcritical
and critical exponential growth without the Ambrosetti-Rabinowitz
condition
by
Nguyen Lam
Wayne State University
Coauthors: Guozhen Lu
In this talk, we discuss the existence of a nontrivial nonnegative
solution to a class of elliptic equations which do not satisfy the
Ambrosetti-Rabinowitz (AR) condition where the nonlinear term is
superlinear at 0 and of subcritical or critical exponential growth
at infinity. We will avoid the well-known Ambrosetti-Rabinowitz condition
using a suitable version of Mountain Pass Theorem introduced by
G. Cerami. The Trudinger-Moser inequality plays an important role
in establishing our results.
The Legendre-Fenchel conjugate of a product
of positive definite quadratic forms
by
Minghua Lin
The Legendre-Fenchel transform (or conjugate) plays an important
role in convex analysis. In particular, it appears in the analysis
of dual versions of a given optimization problem. It is useful to
find explicit (if possible) Legendre-Fenchel conjugates of functions.
In this talk, we show results on the Legendre-Fenchel conjugate of
a product of positive definite quadratic forms. This relates to
an open question posed by Hiriart-Urruty in the field of nonlinear
analysis and optimization. This problem was also recently studied
by Zhao, who showed that the Legendre-Fenchel conjugate can be found
provided the quadratic form is convex. We present strengthened conditions
that ensure the convexity of the product of positive definite quadratic
forms. The result also relates to the generalized Wielandt inequality.
The talk is based on joint work with G. Sinnamon.
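For reference, the conjugate of a single positive definite quadratic form is classical (the open question concerns products of such forms):

```latex
f(x) = \tfrac{1}{2}\, x^{\top} A x \quad (A \succ 0)
\quad\Longrightarrow\quad
f^{*}(y) = \sup_{x}\bigl\{ \langle y, x\rangle - \tfrac{1}{2}\, x^{\top} A x \bigr\}
         = \tfrac{1}{2}\, y^{\top} A^{-1} y,
```

with the supremum attained at x = A^{-1} y. No such closed form is known in general for a product of quadratic forms, which is what makes the problem delicate.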
Facility Location with Economies of Scale
and Congestion: Models and Column Generation Heuristics
by
Da Lu
Most literature on facility location assumes a fixed set-up cost and
a linear variable cost. It is known, however, that as volume increases,
cost savings are achieved through economies of scale, but when volume
exceeds a certain level, congestion occurs and marginal costs start
to increase. This is best captured by an inverse S-shaped cost function
that is initially concave and then turns convex. We study this class
of location problems and propose solution methods based on Lagrangian
relaxation, column generation, and branch-and-bound. We introduce
a nonlinear mixed integer programming formulation that is decomposable
by the status of the facilities, i.e., closed, operating under economies
of scale, or operating under congestion. The resulting concave and
convex subproblems are then solved efficiently as piecewise concave
and convex bounded knapsack problems respectively. A heuristic solution
is found based on primal information from the Dantzig-Wolfe master
problems and the solution of the subproblems. Armed with the Lagrangian
lower bound and the heuristic solution, the procedure is embedded
in a branch-and-price type algorithm. Unfortunately, due to the
nonlinearity of the problem, global optimality is not guaranteed,
but high quality solutions are achieved depending on the amount
of branching performed. The methodology is tested on three function
types and four cost settings. Solutions with an average gap of 1.1%
are found within an average of 22 minutes.
Characterizing generalized derivatives
of set-valued maps: Extending the Aubin and Mordukhovich criteria
by
C.H. Jeffrey Pang
Massachusetts Institute of Technology
For a set-valued map, we characterize, in terms of its (unconvexified
or convexified) graphical derivatives near the point of interest,
positively homogeneous maps that are generalized derivatives in
a certain sense. These results generalize the Aubin and Mordukhovich
criteria. Finally, the convexified limiting coderivative has a
bijective relationship with the set of possible generalized derivatives.
Generalized Damped Newton's Method For
Nonsmooth Equations
by
Hung Phan
Mathematics, University of British Columbia Okanagan
Coauthors: Boris S. Mordukhovich (Department of Mathematics, Wayne
State University)
This paper develops a Newton-type numerical method with line search
to solve systems of nonlinear equations described by nonsmooth continuous
functions. The method is also known as the
damped Newton's method. We propose and justify a new generalized
damped Newton algorithm based on graphical derivatives, which have
recently been used to derive new Newton-type methods for solving
nonsmooth equations. Based on advanced techniques of variational
analysis and generalized differentiation, we establish the well-posedness
of the algorithm as well as its local and global convergence.
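As a point of reference, here is the classical damped (line-search) Newton iteration that the talk's graphical-derivative method generalizes, run on a smooth toy system chosen purely for illustration.

```python
import numpy as np

# Damped Newton's method with backtracking line search on the residual
# norm, for F(x) = 0. The toy system below is an assumed example; the
# talk replaces the Jacobian by graphical derivatives when F is nonsmooth.

def F(x):
    return np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])

def J(x):
    """Jacobian of F (the object generalized derivatives stand in for)."""
    return np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])

x = np.array([2.0, 0.5])
for _ in range(50):
    d = np.linalg.solve(J(x), -F(x))     # Newton direction
    t, m = 1.0, np.linalg.norm(F(x))
    # damping: backtrack until the residual norm decreases sufficiently
    while np.linalg.norm(F(x + t * d)) > (1 - 1e-4 * t) * m and t > 1e-10:
        t *= 0.5
    x = x + t * d

print("root:", x, " residual:", np.linalg.norm(F(x)))
```

The damping globalizes the method: far from a root the step is shortened until the residual decreases, while near a root full Newton steps are accepted and convergence is fast.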
Efficient solutions for the large-scale trust
region subproblem
by
Ting Kei Pong
University of Waterloo
Coauthors: Henry Wolkowicz; Heng Ye
The Trust Region Subproblem (TRS) is the minimization of a quadratic
(possibly non-convex) function over a ball. It is the main step
of the trust region method for unconstrained optimization, and is
a basic tool for regularization. Many efficient algorithms have
been proposed and implemented. In particular, current algorithms
can exploit sparsity for large scale problems and handle degeneracy,
i.e., the so-called hard case. In this talk, we show that a shift
and deflate approach changes the hard case into a trivial case.
We use this approach, together with several other techniques,
to derive an efficient algorithm for solving large instances of
TRS, based on the algorithm by Rendl and Wolkowicz.
This is an ongoing work with Henry Wolkowicz and Heng Ye.
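For orientation, the textbook optimality conditions for TRS can be solved directly on a small dense instance, as sketched below. This is not the large-scale Rendl-Wolkowicz-based algorithm of the talk, and it ignores the degenerate hard case (the generic random data avoids it).

```python
import numpy as np

# Trust Region Subproblem:  min 0.5 x'Ax + g'x  s.t.  ||x|| <= delta.
# Optimality conditions: (A + lam I) x = -g with lam >= max(0, -lam_min(A))
# and lam (||x|| - delta) = 0. We bisect on lam, using that ||x(lam)||
# is decreasing in lam. Problem data are synthetic.

rng = np.random.default_rng(2)
n = 5
B = rng.normal(size=(n, n))
A = (B + B.T) / 2                 # symmetric, possibly indefinite
g = rng.normal(size=n)
delta = 1.0

def x_of(lam):
    """Stationary point of the shifted quadratic."""
    return np.linalg.solve(A + lam * np.eye(n), -g)

lam_min = np.linalg.eigvalsh(A)[0]
if lam_min > 0 and np.linalg.norm(x_of(0.0)) <= delta:
    x = x_of(0.0)                 # unconstrained minimizer is feasible
else:
    lo = max(0.0, -lam_min) + 1e-10         # keep A + lam I pos. definite
    hi = lo + np.linalg.norm(g) / delta + 1.0   # guarantees ||x(hi)|| < delta
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if np.linalg.norm(x_of(mid)) > delta else (lo, mid)
    x = x_of(hi)

print("solution norm:", np.linalg.norm(x))
```

The difficulty the abstract's shift-and-deflate approach targets is exactly the hard case, where this simple secular-equation bisection breaks down.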
Subdifferentials of Supremum Lipschitzian
Functions and Their Applications to Nonsmooth Semi-Infinite and Infinite
Programs
by
Nghia Tran
Department of Mathematics, Wayne State University
Coauthors: Boris Mordukhovich
The paper concerns the study of subdifferentials of a function which
is the supremum of an arbitrary family of uniformly Lipschitzian
functions in Asplund spaces. As a consequence we derive first-order
optimality conditions for nonsmooth optimization problems of the
so-called infinite programming that are generally defined on infinite-dimensional
spaces of decision variables and contain infinitely many inequality
constraints with arbitrary index sets. These problems reduce to
semi-infinite programs when the spaces of decision variables are
finite-dimensional. We also extend the classical Mangasarian-Fromovitz
constraint qualification and introduce some new types of closedness
conditions to such infinite programs.
An Approximation Algorithm For Generating
Representative Pareto Sets For Multiobjective Integer Programming
Problems
by
Ozgu Turgut
Coauthors: Alper Murat
In real life, most problems must be evaluated with respect to more
than one criterion, and these criteria generally have conflicting
characteristics.
The proposed study develops an algorithm for efficiently generating
representative Pareto front for multi-objective integer programming
problems. The algorithm proposed aims at obtaining a diverse representation,
which is well dispersed over the actual Pareto front. The time requirement
of the algorithm can be tuned by the decision maker without significantly
disturbing the representativeness of the solution set.
The idea behind the algorithm is based on finding evenly spaced
reference points that construct grids on the criteria space. Next,
each grid is searched for non-dominated solutions by utilizing a
scalarization function which combines the reference point and Tchebyshev
methods and guarantees local Pareto optimality. The proposed method
is compared with other algorithms in the literature in terms of the
quality of the solution sets generated and computational efficiency.
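The scalarization step above can be sketched in a few lines. Below is a minimal augmented weighted Tchebyshev achievement function of the kind the abstract combines with reference points; the candidate objective vectors and weights are illustrative, not from the paper.

```python
# Augmented weighted Tchebyshev scalarization with respect to a
# reference point z_ref. The small rho-weighted sum term ensures that
# minimizers are Pareto optimal, not merely weakly Pareto optimal.

def tchebyshev(f, z_ref, w, rho=1e-4):
    """Scalarize an objective vector f (to be minimized)."""
    d = [wi * (fi - zi) for fi, zi, wi in zip(f, z_ref, w)]
    return max(d) + rho * sum(d)

# Among these candidates, (3, 3) is dominated by (2, 2); the
# scalarization indeed selects the non-dominated point (2, 2).
candidates = [(1, 5), (2, 2), (5, 1), (3, 3)]
best = min(candidates, key=lambda f: tchebyshev(f, z_ref=(0, 0), w=(1, 1)))
print(best)
```

Sweeping the reference point over a grid, as the abstract describes, then yields non-dominated solutions spread across the front.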
An Optimal Trend Following Trading Strategy
by
Jim Zhu
Modeling an equity market
with up and down trends using geometric Brownian motions whose drifts
switch between two regimes following a Markov chain, we show
that if only long and flat positions are allowed the strategy that
optimizes the cumulative return is a trend following method characterized
by a sequence of stopping times. It is shown that these stopping
times can be determined by two threshold curves that can be obtained
by solving the associated HJB equations. We also discuss how to
handle short selling. Simulations and empirical tests with historical
market data are conducted to see how these methods work in practice.
This is joint work with Min Dai (National University of Singapore)
and Qing Zhang (University of Georgia, Athens).
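The market model in the abstract can be simulated in a few lines: a geometric Brownian motion whose drift switches between a bull and a bear regime according to a two-state Markov chain. All parameter values below are illustrative assumptions, not those of the paper.

```python
import numpy as np

# Regime-switching geometric Brownian motion: drift mu[regime] switches
# according to a two-state continuous-time Markov chain, discretized
# with daily steps. Parameters are assumed for illustration.

rng = np.random.default_rng(3)
dt = 1.0 / 252                     # daily time step
n = 1008                           # four years of trading days
mu = (0.20, -0.20)                 # bull / bear drifts (assumed)
sigma = 0.25                       # common volatility (assumed)
switch_rate = (0.5, 2.0)           # mean 2 years in bull, 6 months in bear

s, regime, path = 1.0, 0, [1.0]
for _ in range(n):
    if rng.random() < switch_rate[regime] * dt:   # regime transition
        regime = 1 - regime
    # exact GBM step within the current regime
    s *= np.exp((mu[regime] - 0.5 * sigma**2) * dt
                + sigma * np.sqrt(dt) * rng.normal())
    path.append(s)

print("final price:", round(path[-1], 3))
```

On such paths, the optimal long/flat strategy of the talk enters and exits via stopping times triggered by threshold curves obtained from the associated HJB equations; the simulation above only provides the market environment for such tests.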