Fields Industrial Optimization Seminar
2009-10
Started in November 2004, this series meets once a month, on the
first Tuesday, in the early evening. Each meeting comprises two
related lectures on a topic in optimization; typically, one speaker
is a university-based researcher and the other is from the private
or government sector. We welcome the participation of everyone
in the academic or industrial community with an interest in optimization
-- theory or practice, expert or student.
Please subscribe to the Fields mail list
to be informed of upcoming seminars.
=================================================
Seminars start at 5:00 pm at the Fields Institute, 222 College Street,
Room 230.
June 8th, 2010
Robert Freund, Deputy Dean
and Theresa Seley Professor in Management Science,
Sloan School of Management, MIT
Primal-Dual Geometry of Level Sets in Linear and Conic Optimization,
and their Explanatory Value in the Practical Efficiency of Interior-Point
Algorithms
We present a geometric relationship between the primal objective
function level sets and dual objective function level sets
of a linear program, and extend this result to conic convex
optimization problems, including semi-definite programming.
The geometric relationship suggests new ways, both theoretical
and computational, to measure the conditioning of a linear
program and also a conic convex optimization problem. We present
computational evidence that shows that our geometric measures
are highly correlated with interior-point method iteration
counts on problems in the SDPLIB suite.
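For reference, the objective level sets in question can be written in standard conic form (standard notation, not necessarily the exact formulation used in the talk):

\[
\text{(P)}\ \min_{x}\ c^{\top}x \ \ \text{s.t.}\ Ax=b,\ x\in K,
\qquad
\text{(D)}\ \max_{y,s}\ b^{\top}y \ \ \text{s.t.}\ A^{\top}y+s=c,\ s\in K^{*},
\]

with primal and dual objective level sets

\[
P_{\alpha}=\{x:\ Ax=b,\ x\in K,\ c^{\top}x\le\alpha\},
\qquad
D_{\beta}=\{(y,s):\ A^{\top}y+s=c,\ s\in K^{*},\ b^{\top}y\ge\beta\}.
\]

Linear programming corresponds to K = R^n_+, and semidefinite programming to K being the cone of positive semidefinite matrices.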
============
Rama Ramakrishnan, Founder and President, 8020 Analytics
Math and the Merchant: The Application of Econometrics,
Machine Learning and Optimization to Improve Retail Decision-Making
Retailing is a global, multi-trillion dollar industry and
touches just about every person on the planet. Yet, despite
its size and complexity, many if not most retail decisions
are made primarily on instinct. This has started to change
in the last decade with the availability of granular and timely
sales data coupled with the development of sophisticated decision-support
software that analyzes this data and recommends optimal decisions.
Retailers are rapidly adopting this technology to help improve
the financial consequences of the numerous decisions they
make every day.
In this talk, we focus on decisions made by retailers in the
merchandising process, perhaps the most important business
process within retail. We identify some of the core analytical
decision problems in this area and describe the algorithmic/modeling
techniques that have been developed to address these problems.
We highlight the inter-disciplinary nature of "winning"
approaches, i.e., they tend to be hybrids that draw on ideas
from disciplines such as econometrics, machine learning and
optimization. We conclude with a summary of the financial
impact of these techniques and lessons learned in applying
these techniques in the field.
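As a purely illustrative sketch of the econometrics-plus-optimization pattern described above (the data, the linear demand model, and the price grid are all invented for this example; this is not 8020 Analytics' methodology), one might estimate a demand curve from historical price/sales pairs and then search for the revenue-maximizing price:

import numpy as np

# Hypothetical weekly (price, units sold) observations for a single item.
prices = np.array([10.0, 9.0, 8.0, 7.0, 6.0])
units = np.array([52.0, 61.0, 74.0, 90.0, 115.0])

# Econometrics step: fit a simple linear demand model q = a + b * p.
b, a = np.polyfit(prices, units, 1)

# Optimization step: search a price grid for the revenue-maximizing price.
grid = np.linspace(5.0, 12.0, 141)
revenue = grid * (a + b * grid)
best = grid[np.argmax(revenue)]
print(f"fitted demand q = {a:.1f} {b:+.1f} * p; best price = {best:.2f}")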

May 4th, 2010
Bruce Shepherd,
McGill University
Robust Optimization of Networks
Robust optimization is one approach for dealing with optimization
problems where some input parameters (e.g., interest rates,
or weather forecasts) are unknown ahead of time. The robustness
model assumes that each input will ultimately come from a
known, bounded "universe", denoted U, of legal inputs.
A solution that is valid for all inputs in U is called robust.
Robust optimization then seeks a minimum-cost robust solution.
In Discrete Optimization, much of the work in this area has
focused on network design problems where the uncertainty is
in the traffic demand. This is because in modern data networks,
traffic patterns are uncertain and rapidly changing. Moreover,
these traffic patterns are not typically known even after
the fact, so one seeks solutions which use so-called oblivious
routing.
In this framework, the universe U is a convex body of the
possible demand matrices [Dij] (entry Dij represents the demand
between nodes i, j). Given a graph topology G = (V,E) and
associated edge costs c(e) ≥ 0, the goal is to
equip G with a cheapest capacity vector u : E → R+ so that
it can support any demand matrix in U. This problem's
traditional version only designs the network for a single
demand matrix [Dij], and in the absence of other constraints
this is trivial: for each i, j, reserve Dij units of capacity
on a shortest i-j path. In the robust setting the problem
becomes surprisingly complex. Some of these problems can be
viewed as minimum-cost versions of classical results inspired
by parallel computing, e.g., finding an oblivious routing that
minimizes congestion. We describe some of the recent progress
in robust network design, and briefly discuss an empirical
study of a proposed architecture for IP routing over optical
networks.
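To make the contrast concrete, here is a minimal sketch of the trivial single-matrix version described above (reserve Dij units on a shortest i-j path for each pair); the graph, costs, and demands are made-up illustrative data:

import heapq
from collections import defaultdict

def shortest_path(adj, src, dst):
    """Dijkstra on an undirected graph; adj[u] = list of (v, cost). Returns node list."""
    dist, prev, seen = {src: 0.0}, {}, set()
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, c in adj[u]:
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Illustrative graph with edge costs c(e), and a single demand matrix D.
edges = [("a", "b", 1.0), ("b", "c", 1.0), ("a", "c", 3.0), ("c", "d", 1.0)]
demands = {("a", "c"): 5.0, ("b", "d"): 2.0}

adj = defaultdict(list)
for u, v, c in edges:
    adj[u].append((v, c))
    adj[v].append((u, c))

# Single-matrix design: reserve D[i,j] units of capacity along a cheapest i-j path.
capacity = defaultdict(float)
for (i, j), d in demands.items():
    path = shortest_path(adj, i, j)
    for u, v in zip(path, path[1:]):
        capacity[frozenset((u, v))] += d

for e, cap in sorted(capacity.items(), key=lambda kv: sorted(kv[0])):
    print(sorted(e), cap)

The robust version replaces the fixed matrix D with a whole convex body U of matrices, which is what makes the design problem hard.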
============
Matthew Andrews, Bell Labs
Minimum cost network design in telecommunication networks
under general cost functions
One of the most fundamental problems in telecom network design
is to determine the least-cost method of deploying capacity
such that the network can support a desired set of traffic demands.
Capital expenditures associated with network deployments are
enormous and so even small percentage savings can translate
into significant dollar amounts.
We shall examine a number of min-cost network design problems
that arise under varying notions of network cost. Since the
mid-1990s there has been a great deal of work on the "Buy-at-Bulk"
network design problem in which the cost of capacity on a
link is concave and hence exhibits economies of scale. This
is a natural function when we are concerned with the cost
of purchasing the equipment necessary to support the capacity.
More recently there has been increasing attention paid to
the energy costs associated with operating the link. In this
case the capacity costs can be convex, concave, or a mixture
of both.
In this talk we shall discuss how the best algorithms for
these problems are affected by the nature of the cost function.
We shall also describe how these algorithms are used to solve
practical problems in commercial network design.
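A tiny illustration of why the shape of the cost function matters (the two per-link cost functions below are invented shapes, not data from the talk): with concave costs it pays to aggregate traffic onto few links, while with convex costs it pays to spread it out.

import math

# Illustrative per-link cost functions of installed capacity x (made-up shapes).
buy_at_bulk = lambda x: 10.0 * math.sqrt(x)   # concave: economies of scale
energy      = lambda x: 0.5 * x ** 2          # convex: e.g., rate-dependent power

def route_cost(cost, flows_per_link):
    """Total cost when each link in a route carries the given flow."""
    return sum(cost(f) for f in flows_per_link)

demand = 16.0
# Option 1: aggregate the whole demand on one two-link path.
# Option 2: split it evenly across two parallel two-link paths.
for name, cost in [("concave (buy-at-bulk)", buy_at_bulk), ("convex (energy)", energy)]:
    aggregated = route_cost(cost, [demand, demand])
    split = route_cost(cost, [demand / 2] * 4)
    print(f"{name:22s}  aggregate={aggregated:7.1f}  split={split:7.1f}")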

April 13th, 2010
Il Yong Kim, Queen's University
Structural Topology Optimization and Design Space Optimization
Structural topology optimization determines the optimum placement
of material within the domain of a structure using gradient-based
algorithms or update algorithms and the finite element method.
The optimization problem involves a very large number of design
variables, and the optimum solution is often sensitive to
the choice of the initial design domain and finite element
mesh. Design space optimization (DSO) treats the number of
design variables itself as a design variable and can effectively
solve a wide range of topology optimization problems with
many design variables. This seminar will introduce the fundamental
concept of structural topology optimization and DSO, mathematical
formulation and sensitivity analysis, and practical applications.
The applications include automotive component design and biomechanical
bone remodeling simulation.
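As a rough illustration of the kind of gradient-based update such methods rely on, here is a toy axial-bar sizing problem with an optimality-criteria style update (everything here is an invented example; it is not DSO and omits the finite element machinery entirely):

import numpy as np

# Toy sizing problem: n axial bar elements in series under load P.
# Minimize compliance C = sum(P^2 * L / (E * A_i)) subject to sum(A_i * L) <= V.
n, P, E, L, V = 8, 1000.0, 200e3, 10.0, 40.0
A = np.linspace(0.1, 1.0, n)                # deliberately non-uniform starting design
A_min, A_max = 1e-3, 10.0

for it in range(30):
    dC = -P**2 * L / (E * A**2)             # compliance sensitivity w.r.t. each area
    dV = L * np.ones(n)                     # volume sensitivity
    # Optimality-criteria style update: rescale areas by sqrt(-dC / (lam * dV)),
    # with the multiplier lam found by bisection so the volume limit is met.
    lo, hi = 1e-9, 1e9
    while hi - lo > 1e-10 * hi:
        lam = 0.5 * (lo + hi)
        A_new = np.clip(A * np.sqrt(-dC / (lam * dV)), A_min, A_max)
        if A_new @ dV > V:
            lo = lam                        # design too heavy -> increase the multiplier
        else:
            hi = lam
    A = A_new

print("optimized areas:", np.round(A, 4))   # tends to the uniform optimum V / (n * L)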
============
Rajesh Jugulum, Bank of America
Robust Optimization
Robust Optimization Methods (also known as Taguchi Methods)
have been applied successfully in many settings to improve
the performance of products and processes by designing them in
such a way that they are minimally affected by various sources
of variation, called noise factors. These methods have proved
to be extremely useful and cost-effective. This presentation
will focus on the discussion of robust optimization methods
in the areas of total product development, testing and multivariate
information systems with several case examples.
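A minimal sketch of the basic Taguchi-style calculation (a larger-the-better signal-to-noise ratio computed over noise replicates, with made-up data; not the case studies from the talk):

import numpy as np

# Rows: candidate control-factor settings; columns: responses under different noise conditions.
responses = np.array([
    [42.0, 38.0, 45.0],   # setting A
    [50.0, 31.0, 55.0],   # setting B (higher mean, but more sensitive to noise)
    [44.0, 43.0, 46.0],   # setting C
])

# Larger-the-better signal-to-noise ratio: SN = -10 * log10( mean(1 / y^2) ).
sn = -10.0 * np.log10(np.mean(1.0 / responses**2, axis=1))
for label, value in zip("ABC", sn):
    print(f"setting {label}: S/N = {value:.2f} dB")
print("most robust setting:", "ABC"[int(np.argmax(sn))])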

March 2nd, 2010
Forbes Burkowski,
Department of Computer Science, University of Waterloo
Graph Localization and Molecular Conformation
When applied to molecular conformation, graph localization
refers to computations that are used to determine atomic coordinates
when given all or some subset of the inter-atomic distances.
The problem becomes somewhat difficult when the inter-atomic
distances are all below some low threshold, say 6 Angstroms,
and even more of a challenge when the data are corrupted by
noise. This last possibility is typical of the data that are
provided by nuclear magnetic resonance experiments, a technique
that has gained considerable recent importance for the structural
determination of macromolecules that are difficult to analyze
using conventional x-ray methods. This talk will review some
of the computational approaches to graph localization, highlighting
the extra assumptions that are put in place when the data
are both sparse and approximate. This discussion concludes
with a presentation of our approach which involves the
following steps: A biologically motivated heuristic strategy
is used to define a set of overlapping cliques in the graph
representing the problem. We then apply an iterative procedure
that alternates between truncation of a spectral decomposition
and the averaging of distances contained in the overlap regions.
After these smoothing operations are completed, the coordinates
of atoms can be computed for each clique. Since each clique
will have its own frame of reference, the remainder of the
algorithm uses a progressive coalescence approach to bring
all the cliques into a common frame of reference. We conclude
the talk with some future directions dealing with geometric
analysis of ligand binding to drug receptor sites.
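For context, the core linear-algebra step that spectral approaches to localization build on -- recovering coordinates from a complete, noise-free distance matrix by double centering and eigendecomposition -- can be sketched as follows (this is standard classical multidimensional scaling, not the clique-based procedure described in the talk):

import numpy as np

def coords_from_distances(D, dim=3):
    """Classical MDS: recover coordinates (up to rotation/translation) from a full distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D**2) @ J                    # Gram matrix of centered coordinates
    w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]              # keep the dim largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Illustrative check with random 3-D "atoms".
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = coords_from_distances(D)
D_rec = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
print("max distance error:", np.abs(D - D_rec).max())   # essentially zero for exact distances

Sparse, noisy data breaks this clean picture, which is exactly where the extra assumptions and the clique-by-clique treatment described above come in.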
============
Zsolt Zsoldos, SimBioSys, Toronto
Algorithmic and mathematical challenges in protein-ligand
docking and scoring
Scientific and technological advancements during the past
two decades have altered the way pharmaceutical research produces
new bioactive molecules. Traditional "trial and error"
drug discovery efforts are gradually being replaced by structure-based
rational drug design, virtual screening (VS), and 3D modelling.
There are two approaches to VS: ligand-based similarity search
and docking. The flexible ligand docking problem is divided
into two subproblems: pose/conformation search and scoring
function. The search algorithm must be fast, provide a manageable
number of candidates, and be able to find the optimal pose/conformation
of the complex. In eHiTS, both the receptor cavity and the
candidate ligands are described by geometric shape and chemical
feature graphs based on distorted polyhedra. The graphs are
mapped to each other with a specialized clique detection algorithm.
A linear time complexity matching algorithm will also be presented
that guarantees the solution with optimal score. The scoring
sub-problem can be solved with various approaches, including
physical, empirical, and statistics-based models of the interactions.
A new statistical model will be described that is based on
interacting surface points and their normal vectors.
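A generic sketch of the clique-matching idea (build an association graph between two labelled feature graphs whose vertices are compatible feature pairs, then enumerate cliques with Bron-Kerbosch; the tiny graphs and the compatibility rule are invented, and this is not the eHiTS algorithm itself):

from itertools import combinations

# Two tiny labelled feature graphs: vertex -> label, plus pairwise feature distances.
g1_labels = {1: "donor", 2: "acceptor", 3: "lipophilic"}
g2_labels = {"a": "donor", "b": "acceptor", "c": "lipophilic"}
g1_dist = {(1, 2): 3.0, (1, 3): 5.0, (2, 3): 4.0}
g2_dist = {("a", "b"): 3.1, ("a", "c"): 5.2, ("b", "c"): 4.1}

def dist(d, x, y):
    return d.get((x, y), d.get((y, x)))

# Association graph: vertices are label-compatible pairs; two pairs are adjacent
# when the corresponding inter-feature distances agree within a tolerance.
nodes = [(u, v) for u in g1_labels for v in g2_labels if g1_labels[u] == g2_labels[v]]
tol = 0.5
adj = {node: set() for node in nodes}
for (u1, v1), (u2, v2) in combinations(nodes, 2):
    if u1 != u2 and v1 != v2 and abs(dist(g1_dist, u1, u2) - dist(g2_dist, v1, v2)) <= tol:
        adj[(u1, v1)].add((u2, v2))
        adj[(u2, v2)].add((u1, v1))

def bron_kerbosch(R, P, X, out):
    """Enumerate maximal cliques of the association graph."""
    if not P and not X:
        out.append(R)
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], out)
        P = P - {v}
        X = X | {v}

cliques = []
bron_kerbosch(set(), set(nodes), set(), cliques)
print(max(cliques, key=len))   # largest geometrically consistent feature correspondence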

February 2, 2010
B. Wayne Bequette, Isermann Department of Chemical
and Biological Engineering, Rensselaer Polytechnic Institute,
Troy, NY
Systems and Control Applications in Diabetes
An individual with Type 1 diabetes no longer has the ability
to produce insulin and must receive multiple daily insulin
injections, or continuous insulin infusion using a pump. Diabetics
must closely monitor their blood glucose levels by pricking
their fingers several times each day to obtain blood glucose
values from glucose test meters. The availability of a continuous
sensor for subcutaneous glucose concentration has the potential
to radically improve blood glucose regulation, and is also
necessary for the development of a closed-loop artificial
pancreas. In this talk we will discuss the application of
optimal estimation theory to continuous glucose monitoring,
including an alarm warning of possible future hypoglycemic
(low blood glucose) threshold violations, which could lead
to dangerous conditions. In addition, we discuss progress
towards a closed-loop artificial pancreas using an optimization-based
model predictive control (MPC) approach.
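As a rough sketch of the optimal-estimation machinery involved (a scalar Kalman filter smoothing a noisy glucose signal and extrapolating ahead to flag a possible hypoglycemic threshold crossing; the model, noise covariances, data, and threshold are illustrative assumptions, not the clinical algorithm):

import numpy as np

rng = np.random.default_rng(1)

# Simulated glucose (mg/dL) drifting downward, sampled every 5 minutes (invented data).
dt, n = 5.0, 36
true_g = 140.0 - 2.0 * np.arange(n)
meas = true_g + rng.normal(0.0, 6.0, size=n)        # noisy sensor readings

# Constant-velocity state-space model: state = [glucose, rate of change per minute].
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([0.1, 0.01])                            # assumed process noise covariance
R = np.array([[36.0]])                              # assumed measurement noise covariance

x = np.array([meas[0], 0.0])
P = np.eye(2) * 10.0
threshold, horizon = 70.0, 30.0                     # alarm level (mg/dL), look-ahead (min)

for k in range(1, n):
    x = F @ x                                       # predict
    P = F @ P @ F.T + Q
    y = meas[k] - H @ x                             # update with the new reading
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    projected = x[0] + x[1] * horizon               # simple predictive hypoglycemia alarm
    if projected < threshold:
        print(f"t={k * dt:5.0f} min: estimate {x[0]:6.1f} mg/dL, "
              f"projected {projected:6.1f} mg/dL -> hypoglycemia warning")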
============
Jeffrey D. Kelly, Honeywell Process Solutions, Toronto
On the Modeling and Solving of Large-Scale Manufacturing
Optimization Problems in Diverse Industries
Optimizing both the process and the production in complex
manufacturing facilities in off-line, on-line, and in-line environments
is the focus of this presentation. With the astounding advancements
in both computer hardware and computational software, going
beyond simulation into optimization with respect to automated
decision-making has tremendous industrial significance and
has been the focus of decades of academic research and practical
application. One of the major challenges in applying these
technologies is "how do we describe the hierarchical,
spatial and temporal complexities of the problem with enough
accuracy to significantly improve the manufacturing?".
Our approach is to successively develop the "allegoric",
"arrayic" and "algorithmic" details of
the problem into a new form of algebraic-like modeling-system
which can be easily interfaced to any commercial and public-domain
mathematical and meta-heuristical solving-system such as mixed-integer
linear programming, successive linear/quadratic programming
and evolutionary-type algorithms. The deployment of the solution
in terms of whether it is executed in-line (scheduling), on-line
(control) or off-line (analysis) following what we call the
continuous-improvement plan-perform-perfect loop is also discussed.
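As a toy illustration of the mixed-integer linear programming building block mentioned above (a two-machine, three-job makespan model written with the open-source PuLP modeller; the model and data are invented and unrelated to the commercial systems discussed in the talk):

import pulp

jobs = {"J1": 4.0, "J2": 3.0, "J3": 5.0}            # processing times (hours)
machines = ["M1", "M2"]

prob = pulp.LpProblem("toy_schedule", pulp.LpMinimize)
assign = pulp.LpVariable.dicts("assign", (list(jobs), machines), cat="Binary")
makespan = pulp.LpVariable("makespan", lowBound=0)

prob += 1 * makespan                                # objective: latest machine finish time
for j in jobs:                                      # each job goes to exactly one machine
    prob += pulp.lpSum(assign[j][m] for m in machines) == 1
for m in machines:                                  # each machine's load bounds the makespan
    prob += pulp.lpSum(jobs[j] * assign[j][m] for j in jobs) <= makespan

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("makespan:", pulp.value(makespan))
for j in jobs:
    chosen = next(m for m in machines if pulp.value(assign[j][m]) > 0.5)
    print(j, "->", chosen)

Real scheduling models add sequencing, changeovers, tanks, and time grids on top of this skeleton, which is where the modeling-system machinery described above earns its keep.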

2009

December 1, 2009, 5:00 p.m.
Biomass in Canada: A renewed opportunity for Operations
Research
C. Tattersall Smith, Dean and Professor of the Faculty
of Forestry, University of Toronto
Biomass in Canada and the requirement for development and
management of new supply chains and technology.
Mitigation of climate change, scarcity and high prices
of fossil fuels and securing of the energy supply have brought
forests into focus in global energy strategies. If produced
sustainably, biofuels have the potential to reduce greenhouse
gas (GHG) emissions in the transport sector, diversify Canada's
energy supplies, contribute to domestic energy security, reduce
the risk of disease and wildfire, and provide opportunities
for investment, economic growth and job creation in the economy
of Ontario. This talk will give an overview of the biomass
industry in Canada, its potential and the many areas that
OR and optimization can have an impact on, such as the new
"Biomass Feedstocks for Energy Markets" program
that is currently being launched by the International Energy
Agency.
Eric J. McCarthy, Director of Fuels at Energy Markets,
Ontario Power Generation
Biomass at OPG
Ontario Power Generation is actively studying the use
of renewable biomass as a replacement fuel for coal in some
of its electricity generating units. We are assessing the
impacts of biomass on equipment, required modifications and
the safe handling and storage of biomass fuel. In addition
to this, sustainable fuel supplies must be developed. This
talk will focus on the challenges and logistical issues that
will need to be resolved for a successful implementation
of a biomass program, and the experiences that we have gained
so far.

November 3, 2009, 5:00 p.m.
Dominique Pelletier, Canada Research Chair on Analysis,
Characterization and Optimization of Complex Flow Systems,
École Polytechnique de Montréal
Simulation-Based Engineering Analysis and Design: Issues
and Challenges
The so-called high fidelity models are very popular in design
optimization. While there is a long track record of this approach
in structural optimization, design optimization of flow systems
is a much younger discipline, marked by both success stories
and failures. We take a look at aspects of Computational Fluid
Dynamics that have an impact on the performance of the optimization
process. How does one control numerical errors so that they
don't lead to sub-optimal designs or even worse, to a breakdown
of the optimization process? How does one identify key parameters
controlling the flow responses? How does one determine the
uncertainty of the flow response induced by uncertainties
in the input data? (How does one cascade uncertainty through a CFD
code?) Usually, users of design optimization are not CFD experts
and most often rely on commercial or third-party code. How
can they make sure that these CFD tools work correctly? Finally,
the question of how appropriate the flow model is for the
flow physics at hand must be settled. We will illustrate these
various points with examples taken from our research.
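A minimal sketch of the uncertainty-cascading question raised above (Monte Carlo propagation of input uncertainty through a model; the analytic "flow response" here is a stand-in for a CFD code, and the input distributions are invented):

import numpy as np

rng = np.random.default_rng(42)

def flow_response(inlet_velocity, viscosity):
    """Stand-in for a CFD evaluation: a made-up drag-coefficient style response."""
    reynolds = inlet_velocity * 0.1 / viscosity
    return 24.0 / reynolds + 0.44

# Uncertain inputs: nominal values with assumed scatter.
n = 20_000
velocity = rng.normal(2.0, 0.1, n)                  # m/s, ~5% uncertainty
viscosity = rng.lognormal(np.log(1e-3), 0.08, n)    # Pa*s, ~8% uncertainty

samples = flow_response(velocity, viscosity)
print(f"response mean {samples.mean():.4f}, std {samples.std():.4f}")
print("95% interval:", np.percentile(samples, [2.5, 97.5]).round(4))

With a real CFD code, each sample is an expensive solve, which is why cascading uncertainty through the analysis is itself a serious computational problem.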
============
Jean-Francois Hétu, Industrial Materials Institute,
National Research Council of Canada
Putting CFD to work in industry: issues and challenges
for optimization
We will outline our strategy for developing industrial strength
CFD tools for analysis and optimization of manufacturing processes,
including discretization by finite elements, modeling complex
physics, and parallel processing and its implications for simulation
software. We will highlight cases that lend themselves to
optimization and others that have resisted it for many years,
with explanations as to why. In some instances, advances made
at Polytechnique have had a strong impact on our ability to
provide support to the Canadian Manufacturing industry. We
close with a discussion of our most challenging problems.

October 6, 2009, 5:00 p.m.
Ignacio Grossmann,
Center for Advanced Process Decision-making, Carnegie Mellon
University, Pittsburgh
Mathematical Programming Approaches to Enterprise-wide Optimization
of Process Industries
Enterprise-wide optimization (EWO) is an emerging area that
lies at the interface of chemical engineering and operations
research, and has become a major goal in the process industries
due to the increasing pressures for remaining competitive in
the global marketplace. EWO involves optimizing the operations
of supply, production and distribution activities of a company
to reduce costs and inventories. A major focus in EWO is the
optimization of manufacturing plants as part of the overall
optimization of the supply chain. Major operational items include
production planning, scheduling, and control. This talk provides
an overview of major challenges in the development of deterministic
and stochastic linear/nonlinear mixed-integer optimization models
for planning and scheduling for the optimization of entire supply
chains that are involved in EWO problems. We illustrate the
application of these ideas in four major problems: a) integration
of planning and scheduling in batch processes, b) optimization
of responsive process supply chains, c) optimization of process
supply chains with stochastic inventory, d) optimization of
oilfield infrastructures under
uncertainty.
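A toy sketch of one deterministic building block of such models (a single-product, three-period production/inventory LP solved with scipy's linprog; the data and structure are invented for illustration and far simpler than the planning models discussed):

import numpy as np
from scipy.optimize import linprog

# Toy single-product, three-period production planning model (invented data).
demand = np.array([80.0, 100.0, 130.0])
prod_cost = np.array([5.0, 7.0, 9.0])      # production cost rises over time
hold_cost = 1.0                            # inventory holding cost per unit per period
capacity = 110.0

# Decision vector: [x1, x2, x3, I1, I2, I3] (production and end-of-period inventory).
c = np.concatenate([prod_cost, hold_cost * np.ones(3)])

# Inventory balance: I_t - I_{t-1} - x_t = -d_t   (with I_0 = 0).
A_eq = np.zeros((3, 6))
for t in range(3):
    A_eq[t, t] = -1.0              # -x_t
    A_eq[t, 3 + t] = 1.0           # +I_t
    if t > 0:
        A_eq[t, 3 + t - 1] = -1.0  # -I_{t-1}
b_eq = -demand

bounds = [(0.0, capacity)] * 3 + [(0.0, None)] * 3
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
x, I = res.x[:3], res.x[3:]
print("production:", x.round(1), "inventory:", I.round(1), "cost:", round(res.fun, 1))

Scheduling layers, discrete setups, nonlinear process models, and uncertainty turn this LP into the mixed-integer and stochastic formulations the talk addresses.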
============
Larry Megan, Advanced Control and Operations Research
R&D, Praxair Inc.
Optimizing the Industrial Gas Supply Chain Across Organizational
Boundaries
Global competition and increasing energy costs make it
more important than ever to optimize the energy efficiency
of US manufacturing industries. While several industries and
organizations have focused on the efficiency of individual
units or plants, organizational boundaries have limited most
from doing a higher level optimization of an entire supply
chain. One specific area of potential impact is the Industrial
Gas supply chain, where the typical supply chain includes
the energy utility, the industrial gas producer, the manufacturing
plant using the industrial gases, and the downstream manufacturing
or production operations. The operational cost of the entire
supply chain is likely much higher than it needs to be as
a result of uncoordinated supply and demand. Such a lack of
coordination inherently limits the efficiency of the overall
process, increasing energy use, emissions, and labor. This
presentation discusses the impact of optimizing the overall
supply chain by developing tools to facilitate such an analysis,
identifying the data management infrastructure necessary to
enable such collaboration, and developing methods to show
how different companies, each with their own financial objectives,
can best share the benefit of such collaboration. Such work
is needed to transform our approach in energy-intensive industries
and to deliver a step change in manufacturing competitiveness.