Theme
Organizers: Ray Spiteri (Saskatchewan) and Buks van Rensburg (York)
For the first time at the CAIMS annual meeting, the two distinct communities of computational PDE and discrete mathematics will be connected in this theme. Discretizations of PDE lead to (large) discrete dynamical systems, and a popular strategy for studying discrete dynamical systems is to go in the opposite direction (i.e., to take limits of large numbers, continuum limits). As a consequence, we hope that linking the two communities will lead to a cross-fertilization of ideas. The session will focus on scientific computing and numerical modeling of complex systems such as percolation, polymers, Monte Carlo simulations of protein folding, and numerical simulation of biological and biomedical systems, including knotting and linking in biopolymers.
Speakers:
Dhavide Aruliah (University of Ontario Institute of Technology)
Robert Bridson (University of British Columbia)
Matthew Emmett (University of North Carolina)
Peter Grassberger (University of Calgary)
Chen Greif (University of British Columbia)
Ray Kapral (University of Toronto)
Vince Lyzinski (Johns Hopkins University)
Felicia Magpantay (McGill University)
Colin Macdonald (University of Oxford)
Peter Minev (University of Alberta)
Enzo Orlandini (University of Padova)
George Patrick (University of Saskatchewan)
Thomas Prellberg (Queen Mary University of London)
Oluwaseun Sharomi (University of Saskatchewan)
Erkan Tuzel (Worcester Polytechnic Institute)
Buks van Rensburg (York University)
Ivona Bezakova (Rochester Institute of Technology) (talk cancelled)
Daniel Stefankovic (University of Rochester) (talk cancelled)
Plenary Speaker
Ted Kim (University of California, Santa Barbara)
Subspace Simulation of Non-linear Materials
Everyday
materials such as biological tissue exhibit geometric and constitutive
non-linearities which are crucial in applications such as surgical
simulation and realistic human animation. However, these same
non-linearities also make efficient finite element simulation
difficult, as computing non-linear forces and their Jacobians
can be computationally intensive. Reduced order methods, which
limit simulations to a low-dimensional subspace, have the potential
to accelerate these simulations by orders of magnitude. In this
talk, I will describe a subspace method we have developed that
efficiently computes all the necessary non-linear quantities by
evaluating them at a discrete set of "cubature" points,
an online integration method that can accelerate simulations even
when the subspace is not known a priori, and a domain decomposition
method that efficiently adds deformations to discretely partitioned,
articulated simulations. Using our methods, we have observed speedups
of one to four orders of magnitude.
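The cubature idea can be illustrated with a small self-contained sketch. The toy force model, dimensions, and variable names below are invented for illustration, and the talk's online integration and greedy cubature-point selection are not reproduced; the sketch only shows how nonnegative least squares yields a sparse set of weighted "cubature" elements that reproduce a summed reduced force on training configurations.

import numpy as np
from scipy.optimize import nnls

# Toy setup: the reduced internal force is a sum of many per-element
# contributions g_e(q); we look for sparse nonnegative weights w_e so that a
# small subset of elements reproduces the sum on training poses.
rng = np.random.default_rng(0)
r, n_elems, n_modes, n_train = 5, 300, 8, 12   # subspace dim, elements, shared modes, poses

C = rng.normal(0, 1, (n_modes, r, r))          # a few shared non-linear response "modes"
mode_of = rng.integers(0, n_modes, n_elems)    # each element mostly follows one mode
scale = rng.uniform(0.5, 1.5, n_elems)
D = 0.01 * rng.normal(0, 1, (n_elems, r, r))   # small element-specific perturbation

def element_force(e, q):
    return scale[e] * np.tanh(C[mode_of[e]] @ q) + D[e] @ q

def full_reduced_force(q):
    return sum(element_force(e, q) for e in range(n_elems))

# training matrix: column e stacks g_e(q) over all training poses; target is the full sum
Q_train = rng.normal(0, 1, (n_train, r))
G = np.zeros((n_train * r, n_elems))
b = np.zeros(n_train * r)
for t, q in enumerate(Q_train):
    for e in range(n_elems):
        G[t * r:(t + 1) * r, e] = element_force(e, q)
    b[t * r:(t + 1) * r] = full_reduced_force(q)

w, _ = nnls(G, b)                              # sparse nonnegative cubature weights
support = np.flatnonzero(w > 1e-10)

q_test = rng.normal(0, 1, r)
approx = sum(w[e] * element_force(e, q_test) for e in support)
exact = full_reduced_force(q_test)
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"{support.size} of {n_elems} elements carry weight; test error {rel_err:.1e}")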
Bio:
Theodore Kim joined the faculty of Media Arts and Technology Program
at the University of California, Santa Barbara, in the Fall of
2011 as an Assistant Professor. He conducts research in physically-based
simulation for computer graphics, particularly in fluid dynamics,
solid mechanics, and multi-core techniques. Techniques from his
research have been used in over a dozen movies. Previously, he was an Assistant Professor in Computer Science at the University of Saskatchewan, a Post-Doctoral Associate at Cornell University, and a Post-Doctoral Fellow at the IBM TJ Watson Research Center. He
received his PhD in Computer Science from the University of North
Carolina at Chapel Hill in 2006.
Speakers
Dhavide Aruliah (University of Ontario Institute of Technology)
Toward Accurate Methods for Forensic Bloodstain Pattern Analysis
At
present, bloodstain pattern analysis (BPA) is largely an empirical,
qualitative sub-discipline of forensic science. Forensic BPA specialists
analyze bloodstains found at crime scenes and attempt to infer
the location of the blood-letting impact (or impacts) that caused the bloodstains. Traditional BPA methods for reconstructing
crime scenes (notably stringing) ignore the effects of gravity
and aerodynamic drag on trajectories of blood droplets in flight.
Such assumptions may produce acceptable qualitative predictions
under certain conditions (e.g., when analyzing bloodstains caused
by droplets moving at high speeds due to discharge of a firearm).
However, in many circumstances, back-tracking blood droplets along straight lines from bloodstains to a static impact location is misleading (e.g., when bloodstains are caused by assault with
a blunt instrument or an edged weapon). Our ultimate aim is to
develop software tools for quantitative analysis to support forensic
BPA analysts.
We
describe our framework for recording simulated blood-letting events
and extracting droplet flight trajectories. The simulations consist
of fake blood encased in ballistic gel being splattered by projectiles.
The resulting blood flight trajectories are recorded by a stereo
pair of high-speed cameras and the bloodstains are photographed
to provide a collection of video and static image data sets to
validate our inversion framework. We employ a sophisticated algorithm
incorporating background removal, segmentation, 2D motion tracking
and 3D reconstruction to extract meaningful flight trajectories
from the video data.
Robert Bridson (University of British Columbia)
Generic implicit constrained dynamics and algebraic preconditioners
for graphics
The
emerging approach of "virtual practical effects" in
feature film production leverages artists' intuition for the real
world by simulating physics in a virtual world: instead of struggling
to control complex software models, artists set up effects as
they might in reality and let physics do the rest. This demands
robust numerical solvers which can couple diverse physics together
in unanticipated ways. We present a framework which unifies many
previous elements (from viscous incompressible fluids to rigid
body contact to inextensible cloth) and reduces to a sequence
of least-squares-like problems. We further explore a new approach
to algebraic almost-black-box domain decomposition which shows
promise for tackling linear systems of this category.
Matthew Emmett (University of North Carolina)
The Parallel Full Approximation Scheme in Space and Time (PFASST)
algorithm
The
Parallel Full Approximation Scheme in Space and Time (PFASST)
algorithm for parallelizing PDEs in time is presented. To efficiently
parallelize PDEs in time, the PFASST algorithm decomposes the
time domain into several time slices. After a provisional solution
is obtained using a relatively inexpensive time integration scheme,
the solution is iteratively improved using a deferred correction
scheme. To further improve parallel efficiency, the PFASST algorithm
uses a hierarchy of discretizations at different spatial and temporal
resolutions and employs an analog of the multi-grid full approximation
scheme to transfer information between the discretizations.
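For readers unfamiliar with parallel-in-time integration, the sketch below shows the provisional-solution/iterative-correction skeleton that PFASST shares with the simpler Parareal iteration, applied to a scalar ODE. The problem, propagators, and slice counts are illustrative assumptions; PFASST's deferred-correction sweeps and multilevel full-approximation-scheme transfers are not reproduced here.

import numpy as np

# Parareal-style skeleton: y' = f(t, y) on [0, T], split into P time slices.
#   U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k)
# with G a cheap coarse propagator and F an accurate fine propagator.

def f(t, y):
    return -y + np.sin(t)

def coarse(t0, t1, y0):                  # one explicit Euler step per slice
    return y0 + (t1 - t0) * f(t0, y0)

def fine(t0, t1, y0, m=50):              # m classical RK4 steps per slice
    h = (t1 - t0) / m
    y, t = y0, t0
    for _ in range(m):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

T, P, y0 = 10.0, 20, 1.0
ts = np.linspace(0.0, T, P + 1)

# provisional solution from a serial sweep of the cheap coarse propagator
U = np.zeros(P + 1); U[0] = y0
for n in range(P):
    U[n + 1] = coarse(ts[n], ts[n + 1], U[n])

for k in range(5):                       # correction iterations
    F_old = np.array([fine(ts[n], ts[n + 1], U[n]) for n in range(P)])   # parallel in practice
    G_old = np.array([coarse(ts[n], ts[n + 1], U[n]) for n in range(P)])
    U_new = np.zeros(P + 1); U_new[0] = y0
    for n in range(P):                   # serial correction sweep
        U_new[n + 1] = coarse(ts[n], ts[n + 1], U_new[n]) + F_old[n] - G_old[n]
    U = U_new

print("end value after corrections:", U[-1])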
Peter Grassberger (University of Calgary)
"Who said that he understands percolation?"
Although
percolation theory was considered a mature subject several years
ago, recent progress has changed this radically. While "classical"
or "ordinary" percolation (OP) is a second order phase
transition between long range connectivity and disconnectedness
on diluted regular lattices or random graphs, examples have now
been found where this transition can range from infinite order
to first order. The latter is of particular importance in social
sciences, where first order percolation transitions show up as
a consequence of synergistic effects, and I will point out analogies
with the relationship between percolation and rough interfaces
in physics. Another case where first order percolation transitions
show up is interdependent networks, although first claims about
this have to be substantially modified -- in some cases of interdependent
networks the transition is second order but in a new universality
class. A similar but even more unexpected result holds for so-called
"Achleoptas processes" that were originally claimed
to show first order transitions, but which actually show second
order transitions with a completely new type of finite size scaling.
Finally, I will present "agglomerative percolation"
(AP), a model originally introduced to understand the claim that
network renormalization can demonstrate the fractality of some
small world networks. Due to a spontaneously broken symmetry on
bipartite graphs, AP leads e.g. to different scaling behaviors
on square and triangular 2-d lattices, in flagrant violation of universality.
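As a point of reference for the "classical" case mentioned above, the following minimal sketch estimates the spanning probability of ordinary site percolation on a square lattice with a union-find data structure; the lattice size, trial count, and occupation probabilities are arbitrary choices and none of the newer models from the talk are implemented.

import numpy as np

# Ordinary (second-order) site percolation on an L x L square lattice:
# estimate the probability of a top-to-bottom spanning cluster as a function of p.

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]    # path halving
        i = parent[i]
    return i

def union(parent, i, j):
    ri, rj = find(parent, i), find(parent, j)
    if ri != rj:
        parent[ri] = rj

def spans(L, p, rng):
    occ = rng.random((L, L)) < p
    parent = list(range(L * L + 2))      # two extra nodes: virtual top and bottom
    top, bottom = L * L, L * L + 1
    for i in range(L):
        for j in range(L):
            if not occ[i, j]:
                continue
            k = i * L + j
            if i == 0: union(parent, k, top)
            if i == L - 1: union(parent, k, bottom)
            if i > 0 and occ[i - 1, j]: union(parent, k, (i - 1) * L + j)
            if j > 0 and occ[i, j - 1]: union(parent, k, i * L + j - 1)
    return find(parent, top) == find(parent, bottom)

rng = np.random.default_rng(1)
L, trials = 64, 200
for p in [0.55, 0.59, 0.63]:             # p_c is roughly 0.5927 for 2-d site percolation
    prob = sum(spans(L, p, rng) for _ in range(trials)) / trials
    print(f"p = {p:.2f}: spanning probability ~ {prob:.2f}")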
Chen Greif (University of British Columbia)
Numerical Solution of Saddle-Point Linear Systems
Constrained partial differential equations and optimization problems typically require the solution of special linear systems known as saddle-point systems. When the matrices are very large and sparse, iterative
methods must be used. A challenge here is to derive and apply
solution methods that exploit the properties and the structure
of the given discrete operators, and yield fast convergence while
imposing reasonable computer storage requirements. In this talk
I will provide an overview of solution techniques. In particular,
we will discuss effective preconditioners and their spectral properties,
bounds on convergence rates, and computational challenges.
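As a hedged illustration of the class of solvers discussed here, and not of the specific preconditioners of the talk, the sketch below assembles a small saddle-point system and applies MINRES with a block-diagonal preconditioner built from A and an approximate Schur complement. The matrix sizes and sparsity pattern are arbitrary assumptions.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Saddle-point system K = [[A, B^T], [B, 0]] with a block-diagonal
# preconditioner diag(A, S), where S approximates the Schur complement B A^{-1} B^T.
n, m = 400, 50
A = (sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc") * (n + 1) ** 2).tocsc()
B = (sp.random(m, n, density=0.05, random_state=0, format="csc") + sp.eye(m, n, format="csc")).tocsc()
K = sp.bmat([[A, B.T], [B, None]], format="csc")
b = np.ones(n + m)

# approximate Schur complement built from diag(A)
Dinv = sp.diags(1.0 / A.diagonal())
S = (B @ Dinv @ B.T).tocsc()

A_solve = spla.splu(A)
S_solve = spla.splu(S)

def apply_prec(r):
    # apply the inverse of the block-diagonal preconditioner
    return np.concatenate([A_solve.solve(r[:n]), S_solve.solve(r[n:])])

M = spla.LinearOperator((n + m, n + m), matvec=apply_prec)

its = []
x, info = spla.minres(K, b, M=M, callback=lambda xk: its.append(1), maxiter=500)
print("MINRES iterations:", len(its), " residual norm:", np.linalg.norm(K @ x - b))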
Ray Kapral (University of Toronto)
Mesoscopic Dynamics of Biopolymers and Protein Machines
The
dynamics of polymers and biopolymers in solution and in crowded
molecular environments which mimic some features of the interior
of a biochemical cell will be discussed. In particular, the dynamics
of protein machines that utilize chemical energy to effect cyclic
conformational changes will be described. The investigation of
the dynamics of such complex systems requires knowledge of the
time evolution on physically relevant long distance and time scales.
This often necessitates a coarse grained or mesoscopic treatment
of the dynamics. A particle-based mesoscopic dynamical method,
multiparticle collision dynamics, which conserves mass, momentum
and energy, has been constructed and utilized to study the dynamics
of these systems. The dynamics can be described by a Markov chain
model in the full phase space of the system and, using projection
operator or other methods, can be shown to lead to the continuum
Navier-Stokes or reaction-diffusion descriptions on long distance
and time scales.
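The basic streaming/collision structure of multiparticle collision dynamics (also known as stochastic rotation dynamics) can be sketched in a few lines. The two-dimensional setting, particle count, and parameters below are illustrative assumptions and do not reproduce the coupled polymer or protein-machine simulations described in the talk.

import numpy as np

rng = np.random.default_rng(0)
L, a, dt, alpha = 20.0, 1.0, 0.1, np.pi / 2    # box size, cell size, time step, rotation angle
N = 2000                                        # solvent particles
pos = rng.uniform(0, L, size=(N, 2))
vel = rng.normal(0, 1, size=(N, 2))

def mpcd_step(pos, vel):
    # streaming step with periodic boundaries
    pos = (pos + vel * dt) % L
    # random grid shift before binning (restores Galilean invariance)
    shift = rng.uniform(0, a, size=2)
    cells = np.floor(((pos + shift) % L) / a).astype(int)
    cell_id = cells[:, 0] * int(L / a) + cells[:, 1]
    # collision step: rotate velocities relative to the cell mean by +/- alpha,
    # which conserves mass, momentum and energy within each cell
    new_vel = vel.copy()
    for cid in np.unique(cell_id):
        idx = np.where(cell_id == cid)[0]
        u = vel[idx].mean(axis=0)
        s = rng.choice([-1.0, 1.0])
        c, sn = np.cos(s * alpha), np.sin(s * alpha)
        R = np.array([[c, -sn], [sn, c]])
        new_vel[idx] = u + (vel[idx] - u) @ R.T
    return pos, new_vel

p0 = vel.mean(axis=0).copy()
for _ in range(100):
    pos, vel = mpcd_step(pos, vel)
print("mean velocity before:", p0, " after:", vel.mean(axis=0))   # conserved up to roundoff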
Vince Lyzinski (Johns Hopkins University)
Strong Stationary Duality for Diffusions
Strong
stationary duality has had a wide-ranging impact on Markov chain
theory since its conception by Diaconis and Fill in 1990. Its
diverse applications range from perfect sampling extensions of
Markov Chain Monte Carlo to the establishment of cut-off phenomena
for wide classes of Markov chains. We extend the idea of strong
stationary duality to one-dimensional diffusion processes and
in doing so recover some classical Markov chain results in the
diffusion setting. This is joint work with my PhD advisor James
Allen Fill.
Colin Macdonald (University of Oxford)
Mathematics and Algorithms for Simple Computing on Surfaces
The
Closest Point Method is a simple numerical technique for solving
partial differential equations (PDEs) on curved surfaces or other
general domains. The method works by embedding the surface in
a higher-dimensional space and solving the PDE in that space.
Numerically, we can use simple finite difference and interpolation
schemes on uniform Cartesian grids. This presentation provides
a brief overview of the Closest Point Method and outlines some
current results. In the spirit of a minisymposium that is examining
links between continuous and discrete computational mathematics,
I will discuss the issue of finding a narrow band surrounding
a complicated surface (this is useful for an efficient implementation)
and how we approach this (discrete) problem.
Joint
work with Tom Maerz, Ingrid von Glehn, and Yujia Chen (Oxford)
and Steve Ruuth (SFU).
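A minimal sketch of the Closest Point Method, assuming the unit circle as the surface: the solution is evolved with an explicit heat step on a Cartesian grid and then re-extended from the closest points. SciPy's tensor-product cubic grid interpolation (available in SciPy 1.9 or later) stands in for the dedicated interpolation of a production code, and the whole grid is used rather than the narrow band discussed above.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Heat equation u_t = Laplace-Beltrami(u) on the unit circle, embedded in 2-d.
npts = 41
x = np.linspace(-2.0, 2.0, npts)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

# closest point on the unit circle for every grid node
R = np.sqrt(X**2 + Y**2)
R[R == 0] = 1.0                               # avoid dividing by zero at the origin
cp_pts = np.column_stack([(X / R).ravel(), (Y / R).ravel()])

theta = np.arctan2(Y / R, X / R)
u = np.cos(theta)                             # initial data, already constant along normals

dt = 0.2 * h**2
steps = int(round(0.25 / dt))
for _ in range(steps):
    # explicit Euler step with the standard 5-point Laplacian of the embedding space
    # (values at the box edge are inaccurate but sit far from the circle and are
    # refreshed by the extension below)
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h**2
    u = u + dt * lap
    # closest point extension: re-sample the solution at the closest points
    interp = RegularGridInterpolator((x, x), u, method="cubic")
    u = interp(cp_pts).reshape(u.shape)

# on the unit circle the surface Laplacian is d^2/dtheta^2, so cos(theta) decays like exp(-t)
exact = np.exp(-steps * dt) * np.cos(theta)
print("max error on the grid:", np.abs(u - exact).max())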
Felicia Magpantay (York University)
Stability of backward Euler applied to a model state dependent
delay differential equation
We consider the stability properties of the backward Euler method for delay differential equations (DDEs) with respect to a test equation. We consider two cases: (i) constant delay and (ii) state-dependent delay. Runge-Kutta methods applied to DDEs have been studied by many authors, who have mainly considered stability regions independent of the delay and/or required the step size to be a submultiple of the delay (in the constant delay case). These assumptions place strong restrictions on the method and cannot be used in the state-dependent case. We direct attention to the dependence of the stability regions on the delay function and present results that use general step sizes. The techniques used are derived from the method of Lyapunov functionals to directly prove the stability of the zero solution. We also prove the local stability of backward Euler, for any step size used, in an important case.
Joint
work with A.R. Humphries and N. Guglielmi.
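For concreteness, here is a minimal sketch of backward Euler applied to the constant-delay test equation y'(t) = a*y(t) + b*y(t - tau), with a step size that is deliberately not a submultiple of the delay and linear interpolation of past grid values. The coefficients are illustrative assumptions and the state-dependent case is not treated.

import numpy as np

# Test equation: y'(t) = a*y(t) + b*y(t - tau), with history y(t) = 1 for t <= 0.
a, b, tau = -2.0, 0.5, 1.0
h = 0.3            # deliberately NOT a submultiple of tau
T = 20.0
n_steps = int(T / h)

t = np.zeros(n_steps + 1)
y = np.ones(n_steps + 1)            # y[0] = history value at t = 0

def delayed_value(s, t, y, k):
    """Value of y at time s: the history for s <= 0, otherwise
    linear interpolation of the already computed grid values."""
    if s <= 0.0:
        return 1.0
    return np.interp(s, t[:k + 1], y[:k + 1])

for n in range(n_steps):
    t[n + 1] = t[n] + h
    # With h <= tau, the delayed argument t[n+1] - tau <= t[n] is already known,
    # so only the undelayed term is implicit:
    #   y_{n+1} = y_n + h*(a*y_{n+1} + b*y(t_{n+1} - tau))
    yd = delayed_value(t[n + 1] - tau, t, y, n)
    y[n + 1] = (y[n] + h * b * yd) / (1.0 - h * a)

print("y(T) ~", y[-1])   # the solution decays, since a + |b| < 0 for these coefficients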
Peter Minev (University of Alberta)
A Direction Splitting Algorithm for Flow Problems in Complex/Moving
Geometries
An extension of the direction splitting method for the incompressible Navier-Stokes equations proposed in [1] to flow problems in complex, possibly time-dependent geometries will be presented. The approach stems from the idea of fictitious domain/penalty methods for flows in complex geometry. In our case, the velocity boundary conditions on the domain boundary are approximated with second-order accuracy, while the pressure subproblem is harmonically extended in a fictitious domain such that the overall domain of the problem is of a simple rectangular/parallelepiped shape.
The
new technique is still unconditionally stable for the Stokes problem
and retains the same convergence rate in both time and space,
as the Crank-Nicolson scheme. A key advantage of this approach
is that the algorithm has a very impressive parallel performance
since it requires the solution of one-dimensional problems only,
which can be performed very efficiently in parallel by a domain-decomposition
Schur complement approach. Numerical results illustrating the
convergence of the scheme in space and time will be presented.
Finally, the implementation of the scheme for particulate flows
will be discussed and some validation results for such flows will
be presented.
[1]
J.L. Guermond, P.D. Minev, A new class of massively parallel direction
splitting for the incompressible Navier-Stokes equations. Computer
Methods in Applied Mechanics and Engineering, 200 (2011), 2083–2093.
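The appeal of direction splitting, namely that each substep reduces to independent one-dimensional tridiagonal solves, can be seen in the following sketch for the two-dimensional heat equation. This is only a stand-in under simplifying assumptions (constant coefficients, homogeneous Dirichlet data, Peaceman-Rachford splitting), not the Navier-Stokes scheme of [1].

import numpy as np
from scipy.linalg import solve_banded

n = 64                       # interior points per direction (homogeneous Dirichlet)
h = 1.0 / (n + 1)
dt = 0.5 * h
x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(np.pi * X) * np.sin(np.pi * Y)

# banded storage of (I - (dt/2) * d^2/dx^2) with the standard 3-point stencil
r = dt / (2 * h ** 2)
ab = np.zeros((3, n))
ab[0, 1:] = -r               # superdiagonal
ab[1, :] = 1 + 2 * r         # main diagonal
ab[2, :-1] = -r              # subdiagonal

def second_diff(v, axis):
    # 3-point second difference with zero Dirichlet ghost values
    d = np.zeros_like(v)
    if axis == 0:
        d[1:-1] = v[2:] - 2 * v[1:-1] + v[:-2]
        d[0] = v[1] - 2 * v[0]; d[-1] = v[-2] - 2 * v[-1]
    else:
        d[:, 1:-1] = v[:, 2:] - 2 * v[:, 1:-1] + v[:, :-2]
        d[:, 0] = v[:, 1] - 2 * v[:, 0]; d[:, -1] = v[:, -2] - 2 * v[:, -1]
    return d / h ** 2

t_final, t = 0.1, 0.0
while t < t_final - 1e-12:
    # Peaceman-Rachford: implicit in x / explicit in y, then the reverse
    rhs = u + (dt / 2) * second_diff(u, 1)
    u = solve_banded((1, 1), ab, rhs)              # independent tridiagonal solves, x-direction
    rhs = u + (dt / 2) * second_diff(u, 0)
    u = solve_banded((1, 1), ab, rhs.T).T          # independent tridiagonal solves, y-direction
    t += dt

exact = np.exp(-2 * np.pi ** 2 * t) * np.sin(np.pi * X) * np.sin(np.pi * Y)
print("max error:", np.abs(u - exact).max())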
Enzo Orlandini (University of Padova)
Dynamics of knotted polymers
Knots
are frequent in long polymer rings at equilibrium and it is now
well established that their presence can affect to some extent
the static properties of the chain. On the other hand, the presence
of topological constraints (knots) in circular and linear polymers
may also influence their dynamical properties. This has indeed been shown in recent experiments where the motion of a single knotted
DNA has been followed within a viscous solution and in the presence
of a stretching force. These experiments raise interesting challenges
to the theoretical comprehension of the problem, an issue which
is still in its infancy. As a first step towards the understanding
of the mechanism underlying the mobility of a knot along the ring
backbone and its effect on the overall polymer dynamics, we investigate,
by Monte Carlo and molecular dynamics simulations, the dynamics
of knotted rings under good solvent conditions. By using an algorithm
that detects position and size of the knot we are able to monitor
the motion of the topologically entangled subregion both in space
and along the ring backbone.
This allows us to identify, in knotted rings, a novel, slow topological
timescale, whose origin can be related to a self-reptation motion
of the knotted region.
For
open chains, knotted configurations do not represent an equilibrium
state any more. However, under suitable conditions (for example, very tight knots or quite rigid chains), knotted metastable states persist for a very long time, and a statistical description of their dynamical properties is then possible. By performing off-lattice molecular dynamics simulations of a semiflexible polymer, we estimate the average lifetime and the survival probability
as a function of the initial conditions (size of the initial knot)
and knot type. This analysis has been extended to the case in
which the linear polymer is subject to an external stretching
force.
George Patrick (University of Saskatchewan)
Automatically generated variational integrators
Many
fundamental physical systems have variational formulations, such
as mechanical systems in their Lagrangian formulation. Discretization
of the variational principles leads to (implicit) symplectic and
momentum preserving one step integration methods. However, such
methods can be very complicated.
I
will describe some advances in the basic theory of variational
integrators, and a software system called AUDELSI, which converts
any ordinary one step method into a variational integrator of
the same order.
Thomas Prellberg (Queen Mary University of London)
Rare event sampling with stochastic growth algorithms
We
discuss uniform sampling algorithms that are based on stochastic
growth methods, using sampling of extreme configurations of polymers
in simple lattice models as a motivation. We shall show how a
series of clever enhancements to a fifty-odd year old algorithm,
the Rosenbluth method, leads to a suite of cutting-edge algorithms
capable of uniform sampling of equilibrium statistical mechanical
systems of polymers in situations where competing algorithms failed
to perform well. Examples range from collapsed homopolymers near
sticky surfaces to models of protein folding.
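The original Rosenbluth method that the talk builds on can be sketched in a few lines for self-avoiding walks on the square lattice; the enhancements discussed in the talk (such as pruning and enrichment) are not included, and the walk length and sample count are arbitrary choices.

import random

# Rosenbluth sampling of self-avoiding walks (SAWs) on the square lattice:
# grow a walk step by step, choosing uniformly among the unoccupied neighbours,
# and carry the product of the number of choices as a statistical weight.
# The mean weight over many samples estimates c_N, the number of N-step SAWs.

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def rosenbluth_walk(N):
    pos = (0, 0)
    visited = {pos}
    weight = 1.0
    for _ in range(N):
        free = [(pos[0] + dx, pos[1] + dy) for dx, dy in MOVES
                if (pos[0] + dx, pos[1] + dy) not in visited]
        if not free:
            return 0.0                    # trapped: the sample carries zero weight
        weight *= len(free)
        pos = random.choice(free)
        visited.add(pos)
    return weight

N, samples = 20, 100_000
est = sum(rosenbluth_walk(N) for _ in range(samples)) / samples
print(f"estimated number of {N}-step SAWs: {est:.3e}")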
Oluwaseun Sharomi (University of Saskatchewan)
The Significance of Order in the Numerical Simulation of the
Bidomain Equation in Parallel
The
propagation of electrical activity in the heart can be modelled
by the bidomain equations. However, to obtain clinically useful
data from the bidomain equations, they must be solved with many
millions of unknowns. Naturally, to obtain such data in real time
is the ultimate goal, but at present we are still an order of
magnitude or two away from being able to do so. The spectral/hp
element method can be considered to be a high-order extension
of the traditional finite or spectral element methods, where convergence is possible not only by reducing the mesh size h but also by increasing the local polynomial order p of the basis functions used to expand the solution.
We are interested in evaluating the effectiveness of a high-order
method in serial against that of a low-order method in parallel.
We find that high-order methods in serial can outperform low-order
methods in parallel. These findings suggest software developers
for the bidomain equations should not forego the implementation
of high-order methods in their efforts to parallelize their solvers.
Erkan Tuzel (Physics, Worcester Polytechnic Institute)
Constrained Polymer Dynamics in a Mesoscale Solvent
While
modeling polymer solutions, the presence of multiple time scales,
such as the intermolecular bond potentials, makes quantitative
analysis of results difficult, and simulations costly. Here, we
show how these degrees of freedom can be replaced by rigid bond constraints, as commonly done in Brownian dynamics, for polymers embedded in a recently introduced hydrodynamic solvent
known as Stochastic Rotation Dynamics (SRD) (or Multi-Particle
Collision Dynamics - MPCD). We then discuss applications of this
approach to various systems of biological interest.
EJ Janse van Rensburg (York University)
Statistics of knotted lattice polygons
In
this talk I discuss an implementation of the GAS algorithm using BFACF elementary moves, which we use to sample knotted polygons in the cubic lattice. The GAS algorithm is an approximate enumeration
algorithm, and I show how it can be implemented to estimate the
number of distinct polygons of given length and fixed knot types.
The numerical results we obtain make it possible to examine the
scaling of knotted lattice polygons. For example, our data indicate
that unknotted polygons dominate cubic lattice polygon statistics
up to lengths of about 170,000 steps; thereafter, polygons whose knot type is the trefoil become more numerous. In addition, the relative
frequencies of various knot types can be determined -- we found
that trefoil knots are about 28 times more likely to occur than
figure-eight knots in long polygons.
Contributed Talks
*Ken Roberts, Western University
Coauthors: S. R. Valluri, Muralikrishna Molli, M. ChandraSekhar,
K. Venkataramaniah, P. C. Deshmukh
A Study of Polylogarithmic Equations for Thermoelectric Nanomaterials
In
the design of thermoelectric nanomaterials, various expressions
arise which involve the polylogarithms. A computational study
of some of those equations has led to curious results, suggesting
additional properties to be explored, either in the mathematics of polylogarithms or in the statistical mechanics underlying the material models that lead to polylogarithms.
We will present a progress report on our efforts to explore and
understand these relationships. There are possibly some insights
to be gained into the statistical mechanics of thermoelectric
materials, by using polylogarithms of complex order, or by
generalizing polylogarithms to a form related to the Dirichlet
L-series in analytic number theory.
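As a small illustration of the building block involved (not of the authors' computations), the direct series for the polylogarithm converges for |z| < 1 and accepts complex order without modification; serious work would use an arbitrary-precision library such as mpmath.

import numpy as np

# Direct series evaluation of the polylogarithm Li_s(z) = sum_{k>=1} z^k / k^s,
# valid for |z| < 1 and arbitrary (even complex) order s.
def polylog_series(s, z, terms=2000):
    k = np.arange(1, terms + 1, dtype=float)
    return np.sum(z ** k / k ** s)

print(polylog_series(2, 0.5))                  # Li_2(1/2) = pi^2/12 - ln(2)^2/2 ~ 0.5822
print(polylog_series(1.5 + 0.5j, 0.3 - 0.2j))  # complex order and complex argument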
Thomas Humphries, Department of Mathematics and Statistics, Memorial University of Newfoundland
Coauthors: Ronald Haynes, Lesley James
Simultaneous optimization of oil well placement and control using
a hybrid global-local strategy
Two
important decisions in maximizing production from an oil field
are where to place injection and production wells, and how to
control the flow rates at these wells. In this presentation we
address the reservoir optimization problem using an automatic
approach that hybridizes two black-box optimization methods: particle
swarm optimization (PSO) and generalized pattern search (GPS).
PSO provides a semi-random, global exploration of search space,
while GPS systematically explores local regions of space. We present
simulation results showing that this hybridized approach outperforms
the independent application of PSO to this problem.
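A minimal sketch of such a global-local hybrid, with a toy objective standing in for the expensive reservoir simulator and simple textbook versions of PSO and a compass-type pattern search; the specific GPS variant, constraints, and coupling used by the authors are not reproduced.

import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # toy stand-in for the reservoir simulator
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def pso(f, dim=2, n_particles=30, iters=200, lo=-5.0, hi=5.0):
    # semi-random global exploration of the search space
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g

def pattern_search(f, x0, step=0.5, tol=1e-8):
    # systematic local polling along coordinate directions
    x, fx = x0.copy(), f(x0)
    while step > tol:
        moved = False
        for d in range(len(x)):
            for s in (+step, -step):
                trial = x.copy(); trial[d] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, moved = trial, ft, True
        if not moved:
            step *= 0.5                  # contract the mesh when no poll point improves
    return x, fx

x_glob = pso(objective)
x_opt, f_opt = pattern_search(objective, x_glob)
print("PSO point:", x_glob, " refined:", x_opt, " f =", f_opt)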
Erin Moulding, Department of Mathematics, UBC
Coauthors: Chen Greif, UBC, and Dominique Orban, École Polytechnique
Bounds on the Eigenvalues of 3x3 Block Matrices arising from
Interior-Point Methods
Interior-point
methods feature prominently in the solution of constrained optimization
problems, and involve solving linear systems with a sequence of
3x3 block matrices which become increasingly ill-conditioned.
A review of the literature suggests that most practitioners reduce
these to either 2x2 block saddle-point matrices or 1x1 block normal
equations. In this talk we explore whether it pays off to perform
such reductions. We use energy estimates to obtain bounds on the
eigenvalues of the unreduced matrix, which indicate that in terms
of spectral structure, it may be better to keep the matrix in
its original form.
---------------------------------------------------------------
Wayne Enright, University of Toronto
Coauthors: Bo Wang
Parameter Estimation for ODEs using a Cross-Entropy Approach
Parameter
Estimation for ODEs and DDEs is an important topic in numerical
analysis. In this paper, we present a novel approach to address
this inverse problem. Cross-entropy algorithms are general algorithms that can be applied to solve global optimization problems. The main steps of cross-entropy methods are first to generate a set of trial samples from a certain distribution, and then to update the distribution based on the generated samples. To overcome the prohibitive computational cost of standard cross-entropy algorithms, we develop a modification that combines local search techniques. The modified cross-entropy algorithm speeds up convergence and improves accuracy simultaneously.
Two different coding schemes (continuous coding and discrete coding)
are also introduced. Continuous coding uses a truncated multivariate
Gaussian to generate trial samples, while discrete coding reduces
the search space to a finite (but dense) subset of the feasible
parameter values and uses a Bernoulli distribution to generate
the trial samples (which are fixed-point approximations of the actual parameters). Extensive numerical and real experiments
are conducted to illustrate the power and advantages of the proposed
methods. Compared to other existing state-of-the-art approaches
on some benchmark problems for parameter estimation, our methods
have three main advantages: 1) They are robust to noise in the
data to be fitted; 2) They are not sensitive to the number of
observation points (in contrast to most existing approaches); 3) The modified versions exhibit faster convergence than the original versions and are thus more efficient, without sacrificing accuracy.
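A minimal sketch of the continuous-coding variant, assuming a toy logistic-growth ODE with synthetic noisy data; the authors' local-search modification and discrete coding are not reproduced, and all parameter values are illustrative.

import numpy as np
from scipy.integrate import solve_ivp

# Cross-entropy parameter estimation for an ODE: sample candidate parameters from
# a Gaussian, score them by the misfit to the data, refit the Gaussian to the elite.
rng = np.random.default_rng(0)
true_params = np.array([0.8, 10.0])            # (r, K) of logistic growth y' = r*y*(1 - y/K)
t_obs = np.linspace(0, 10, 25)

def simulate(params):
    r, K = params
    sol = solve_ivp(lambda t, y: r * y * (1 - y / K), (0, 10), [0.5], t_eval=t_obs)
    return sol.y[0]

data = simulate(true_params) + rng.normal(0, 0.1, t_obs.size)   # noisy observations

def misfit(params):
    if params[0] <= 0 or params[1] <= 0:
        return np.inf
    return np.sum((simulate(params) - data) ** 2)

mu, sigma = np.array([1.0, 5.0]), np.array([1.0, 5.0])
n_samples, n_elite = 100, 10
for it in range(20):
    samples = rng.normal(mu, sigma, size=(n_samples, 2))
    scores = np.array([misfit(s) for s in samples])
    elite = samples[np.argsort(scores)[:n_elite]]
    # smoothed update of the sampling distribution
    mu = 0.7 * elite.mean(axis=0) + 0.3 * mu
    sigma = 0.7 * elite.std(axis=0) + 0.3 * sigma

print("estimated (r, K):", mu, " true:", true_params)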
------------------------------------------------------------------------
TALK CANCELLED
Ivona Bezakova (Rochester Institute of Technology)
Counting and sampling minimum
cuts in weighted planar graphs
We
will discuss two minimum cut problems in weighted planar graphs:
minimum source-sink cuts and contiguous minimum single-source-multi-sink
cuts.
A
source-sink cut is a set of vertices containing the source vertex
and not the sink vertex (or, in the case of multiple sinks, not
containing any of the sink vertices). A cut is minimum if the sum
of the weights of the cut edges, connecting a vertex in the cut
set with a vertex outside the cut set, is the smallest possible.
A cut is contiguous if the cut set can be separated from the remaining
vertices by a simply connected planar region whose boundary intersects
only the cut edges.
We
will present an O(n^2) algorithm counting all minimum source-sink
cuts in weighted planar graphs, where n is the number of vertices.
We will also sketch an O(n^3) algorithm counting all contiguous
minimum single-source-multi-sink cuts. In both cases, once the counting is complete, subsequent sampling is very fast: a uniformly
random cut can be produced in additional linear time.
The
counting algorithms share a common outline. First, we reduce the
problem to the problem of counting a different type of cuts in an
unweighted planar directed acyclic graph (these cuts can also be
thought of as maximal antichains in the corresponding partially
ordered set). These cuts correspond to certain cycles in the planar
dual graph and we employ dynamic programming to count them. We will
discuss the first algorithm in detail and briefly sketch the issues introduced by the contiguity requirement.
Minimum
source-sink and contiguous minimum single-source-multi-sink cuts
have applications in computer vision and medical imaging where the
underlying graph often forms a 2D grid (lattice).
Based on joint work with Adam Friedlander and Zach Langley.
TALK CANCELLED
Daniel Stefankovic (University of Rochester)
Connection between counting and sampling
Counting
problems arise in a wide variety of areas, for example, computer
science, enumerative combinatorics, and statistical physics (estimating
the value of a partition function is a counting problem). I will
talk about the following aspect of approximate counting: given a
sampling algorithm, how can one efficiently translate it into a
counting algorithm?