The University of Adelaide
March 2018

Search the School of Mathematical Sciences


People matching "Matrix analytic methods"

Professor Nigel Bean
Chair of Applied Mathematics

Associate Professor Nicholas Buchdahl
Reader in Pure Mathematics

Dr David Green
Lecturer in Applied Mathematics

Professor Finnur Larusson
Associate Professor in Pure Mathematics

Professor Matthew Roughan
Professor of Applied Mathematics


Courses matching "Matrix analytic methods"

Applications of Quantitative Methods in Finance I

Together with MATHS 1009 Introduction to Financial Mathematics I, this course provides an introduction to the basic mathematical concepts and techniques used in finance and business, including topics from calculus, linear algebra and probability, emphasising their inter-relationships and applications to finance; it also introduces students to the use of computers in mathematics and develops problem-solving skills with a particular emphasis on financial and business applications. Topics covered are: Calculus: differential and integral calculus with applications; functions of two real variables. Probability: basic concepts, conditional probability; probability distributions and expected value with applications to business and finance.


Numerical Methods

To explore complex systems, physicists, engineers, financiers and mathematicians require computational methods, since mathematical models are only rarely solvable algebraically. Numerical methods, based upon sound computational mathematics, are the basic algorithms underpinning computer predictions in modern systems science. Such methods include techniques for simple optimisation, interpolation from the known to the unknown, linear algebra underlying systems of equations, ordinary differential equations to simulate systems, and stochastic simulation under unknown influences. Topics covered are: the mathematical and computational foundations of the numerical approximation and solution of scientific problems; simple optimisation; vectorisation; clustering; polynomial and spline interpolation; pattern recognition; integration and differentiation; solution of large-scale systems of linear and nonlinear equations; modelling and solution with sparse equations; explicit schemes to solve ordinary differential equations; random numbers; stochastic system simulation.
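One of the topics listed above, explicit schemes for ordinary differential equations, can be illustrated by the simplest such scheme. The sketch below (function and variable names are mine, purely illustrative) applies the explicit Euler method to y' = -y and compares against the exact solution:

```python
import math

def euler(f, y0, t0, t1, n):
    """Explicit Euler: advance y' = f(t, y) from t0 to t1 in n equal steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

# y' = -y with y(0) = 1 has exact solution y(t) = exp(-t).
approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
print(approx, math.exp(-1.0))
```

The global error of explicit Euler is first order in the step size, so halving h roughly halves the error.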


Variational Methods and Optimal Control III

Many problems of optimisation and control in the sciences and engineering seek to find the shape of a curve or surface satisfying certain conditions so as to maximise or minimise some quantity. For example, shape a yacht hull so as to minimise fluid drag. Variational methods involve an extension of calculus techniques to handle such problems. This course develops an appropriate methodology, illustrated by a variety of physical and engineering problems. Topics covered are: classical Calculus of Variations problems such as calculation of the shape of geodesics, the Catenary, and the Brachistochrone; the derivation and use of the Euler-Lagrange equations and their extensions to second-order derivatives (the Euler-Poisson equation), multiple dependent variables (Hamilton's equations), and multiple independent variables (minimal surfaces); constrained problems and problems with non-integral constraints; Euler's finite differences, Ritz's method and Kantorovich's method; conservation laws and Noether's theorem; classification of extremals using the second variation; optimal control via the Pontryagin Maximum Principle, and its applications to space-flight calculations.


Events matching "Matrix analytic methods"

Stability of time-periodic flows
15:10 Fri 10 Mar, 2006 :: G08 Mathematics Building University of Adelaide :: Prof. Andrew Bassom, School of Mathematics and Statistics, University of Western Australia

Time-periodic shear layers occur naturally in a wide range of applications from engineering to physiology. Transition to turbulence in such flows is of practical interest and there have been several papers dealing with the stability of flows composed of a steady component plus an oscillatory part with zero mean. In such flows a possible instability mechanism is associated with the mean component so that the stability of the flow can be examined using some sort of perturbation-type analysis. This strategy fails when the mean part of the flow is small compared with the oscillatory component which, of course, includes the case when the mean part is precisely zero.

This difficulty with analytical studies has meant that the stability of purely oscillatory flows has relied on various numerical methods. Until very recently such techniques have only ever predicted that the flow is stable, even though experiments suggest that it does become unstable at high enough speeds. In this talk I shall expand on this discrepancy with emphasis on the particular case of the so-called flat Stokes layer. This flow, which is generated in a deep layer of incompressible fluid lying above a flat plate which is oscillated in its own plane, represents one of the few exact solutions of the Navier-Stokes equations. We show theoretically that the flow does become unstable to waves which propagate relative to the basic motion, although the theory predicts that this occurs much later than has been found in experiments. Reasons for this discrepancy are examined by reference to calculations for oscillatory flows in pipes and channels. Finally, we propose some new experiments that might reduce this disagreement between the theoretical predictions of instability and practical realisations of breakdown in oscillatory flows.
A Bivariate Zero-inflated Poisson Regression Model and application to some Dental Epidemiological data
14:10 Fri 27 Oct, 2006 :: G08 Mathematics Building University of Adelaide :: University Prof Sudhir Paul

Data in the form of paired (pre-treatment, post-treatment) counts arise in the study of the effects of several treatments after accounting for possible covariate effects. An example of such a data set comes from a dental epidemiological study in Belo Horizonte (the Belo Horizonte caries prevention study) which evaluated various programmes for reducing caries. These data may also show more pairs of zeros than can be accounted for by a simpler model, such as a bivariate Poisson regression model. In such situations we propose a zero-inflated bivariate Poisson regression (ZIBPR) model for the paired (pre-treatment, post-treatment) count data. We develop an EM algorithm to obtain maximum likelihood estimates of the parameters of the ZIBPR model. Further, we obtain the exact Fisher information matrix of the maximum likelihood estimates and develop a procedure for testing treatment effects. The procedure to detect treatment effects based on the ZIBPR model is compared, in terms of size, by simulations, with an earlier procedure using a zero-inflated Poisson regression (ZIPR) model of the post-treatment count with the pre-treatment count treated as a covariate. The procedure based on the ZIBPR model holds its level most effectively. A further simulation study indicates good power properties of the procedure based on the ZIBPR model. We then compare our analysis of the decayed, missing and filled teeth (DMFT) index data from the caries prevention study, based on the ZIBPR model, with the analysis using a zero-inflated Poisson regression model in which the pre-treatment DMFT index is taken to be a covariate.
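A much simpler cousin of the bivariate model above, the univariate zero-inflated Poisson, already shows how the EM idea works: each observed zero is either a "structural" zero or a Poisson zero, and the E-step assigns it a posterior probability of being structural. The sketch below (a toy illustration with my own names, not the authors' ZIBPR procedure) fits such a model by EM and recovers the simulated parameters:

```python
import math
import random

def fit_zip_em(ys, n_iter=200):
    """EM for a univariate zero-inflated Poisson: returns (pi, lam).

    Model: with probability pi the count is a structural zero,
    otherwise it is drawn from Poisson(lam).
    """
    pi, lam = 0.5, max(sum(ys) / len(ys), 1e-6)
    for _ in range(n_iter):
        # E-step: posterior probability that each observed zero is structural.
        p0 = pi + (1 - pi) * math.exp(-lam)
        z = [pi / p0 if y == 0 else 0.0 for y in ys]
        # M-step: update mixing weight and Poisson mean.
        pi = sum(z) / len(ys)
        lam = sum(ys) / sum(1 - zi for zi in z)
    return pi, lam

rng = random.Random(0)

def poisson(lam):
    # Knuth's multiplicative method; adequate for small lam.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Simulate data with pi = 0.3, lam = 2.0 and recover the parameters.
data = [0 if rng.random() < 0.3 else poisson(2.0) for _ in range(5000)]
pi_hat, lam_hat = fit_zip_em(data)
print(pi_hat, lam_hat)
```

The bivariate, covariate-adjusted case in the talk follows the same E-step/M-step pattern but with a joint count distribution and regression structure on the means.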
A mathematical look at dripping honey
15:10 Fri 4 May, 2007 :: G08 Mathematics Building University of Adelaide :: Dr Yvonne Stokes :: University of Adelaide

Honey dripping from an upturned spoon is an everyday example of a flow that extends and breaks up into drops. Such flows have been of interest for over 300 years, attracting the attention of Plateau and Rayleigh among others. Theoretical understanding has, however, lagged behind experimental investigation, with major progress being made only in the last two decades, driven by industrial applications including ink-jet printing, spinning of polymer and glass fibres, blow-moulding of containers, light bulbs and glass tubing, and rheological measurement by fibre extension. However, the exact details of the final stages of breakup are yet to be fully resolved. An aspect that is relatively unexplored is the evolution of drop and filament from some initial configuration, and the influence of initial conditions on the final breakup. We will consider a drop of very viscous fluid hanging beneath a solid boundary, similar to honey dripping from an upturned spoon, using methods that allow examination of development and behaviour from early time, when a drop and filament begin to form, out to large times when the bulk of the fluid forms a drop at the bottom of a long thin filament which connects it with the upper boundary. The roles of gravity, inertia and surface tension will be examined.
Flooding in the Sundarbans
15:10 Fri 18 May, 2007 :: G08 Mathematics Building University of Adelaide :: Steve Need

The Sundarbans is a region of deltaic isles formed in the mouth of the Ganges River on the border between India and Bangladesh. As the largest mangrove forest in the world it is a World Heritage site; however, it is also home to several remote communities who have long inhabited some of its islands. Many of the inhabited islands are low-lying and are particularly vulnerable to flooding, a major hazard of living in the region. Determining suitable levels of protection for these communities relies upon accurate assessment of the flood risk they face. Only recently has the Indian Government commissioned a study into flood risk in the Sundarbans, with a view to determining where flood protection needs to be upgraded.

Flooding due to rainfall is limited by the relatively small catchment sizes, so the primary causes of flooding in the Sundarbans are unnaturally high tides, tropical cyclones (which regularly sweep through the Bay of Bengal), or some combination of the two. Owing to the link between tidal anomaly and drops in local barometric pressure, the two causes of flooding may be highly correlated. I propose stochastic methods for analysing the flood risk and present the early work of a case study which shows the direction of investigation. The strategy involves linking several components: a stochastic approximation to a hydraulic flood-routing model, FARIMA and GARCH models for storm surge, and a stochastic model for cyclone occurrence and tracking. The methods suggested are general and should have applications in other cyclone-affected regions.

Adaptive Fast Convergence - Towards Optimal Reconstruction Guarantees for Phylogenetic Trees
16:00 Tue 1 Apr, 2008 :: School Board Room :: Schlomo Moran :: Computer Science Department, Technion, Haifa, Israel

One of the central challenges in phylogenetics is to reliably resolve as much of the topology of the evolutionary tree as possible from short taxon sequences. In the past decade much attention has been focused on studying fast converging reconstruction algorithms, which guarantee (w.h.p.) correct reconstruction of the entire tree from sequences of near-minimal length (assuming some accepted model of sequence evolution along the tree). The major drawback of these methods is that when the sequences are too short to correctly reconstruct the tree in its entirety, they do not provide any reconstruction guarantee, even for sufficiently long edges. Specifically, the presence of some very short edges in the model tree may prevent these algorithms from reconstructing even edges of moderate length.

In this talk we present a stronger reconstruction guarantee called "adaptive fast convergence", which provides guarantees for the correct reconstruction of all sufficiently long edges of the original tree. We then present a general technique, which (unlike previous reconstruction techniques) employs dynamic edge-contraction during the reconstruction of the tree. We conclude by demonstrating how this technique is used to achieve adaptive fast convergence.

Computational Methods for Phase Response Analysis of Circadian Clocks
15:10 Fri 18 Jul, 2008 :: G04 Napier Building University of Adelaide. :: Prof. Linda Petzold :: Dept. of Mechanical and Environmental Engineering, University of California, Santa Barbara

Circadian clocks govern daily behaviors of organisms in all kingdoms of life. In mammals, the master clock resides in the suprachiasmatic nucleus (SCN) of the hypothalamus. It is composed of thousands of neurons, each of which contains a sloppy oscillator - a molecular clock governed by a transcriptional feedback network. Via intercellular signaling, the cell population synchronizes spontaneously, forming a coherent oscillation. This multi-oscillator is then entrained to its environment by the daily light/dark cycle.

Both at the cellular and tissue levels, the most important feature of the clock is its ability not simply to keep time, but to adjust its time, or phase, in response to signals. We present the parametric impulse phase response curve (pIPRC), an analytical analog of the phase response curve (PRC) used experimentally. We use the pIPRC to understand both the consequences of intercellular signaling and the light entrainment process. Further, we determine which model components determine the phase response behavior of a single oscillator by using a novel model reduction technique. We reduce the number of model components while preserving the pIPRC and then incorporate the resultant model into a coupled SCN tissue model. Emergent properties, including the ability of the population to synchronize spontaneously, are preserved in the reduction. Finally, we present some mathematical tools for the study of synchronization in a network of coupled, noisy oscillators.

Free surface Stokes flows with surface tension
15:10 Fri 5 Sep, 2008 :: G03 Napier Building University of Adelaide :: Prof. Darren Crowdy :: Imperial College London

In this talk, we will survey a number of different free boundary problems involving slow viscous (Stokes) flows in which surface tension is active on the free boundary. Both steady and unsteady flows will be considered. Motivating applications range from industrial processes such as viscous sintering (where end-products are formed as a result of the surface-tension-driven densification of a compact of smaller particles that are heated in order that they coalesce) to biological phenomena such as understanding how organisms swim (i.e. propel themselves) at low Reynolds numbers. Common to our approach to all these problems will be an analytical/theoretical treatment of model problems via complex variable methods -- techniques well-known at infinite Reynolds numbers but used much less often in the Stokes regime. These model problems can give helpful insights into the behaviour of the true physical systems.
Sloshing in tanks of liquefied natural gas (LNG) vessels
15:10 Wed 22 Apr, 2009 :: Napier LG29 :: Prof. Frederic Dias :: ENS, Cachan

The last scientific conversation I had with Ernie Tuck was on liquid impact. As a matter of fact, we discussed the paper by J.H. Milgram, Journal of Fluid Mechanics 37 (1969), entitled "The motion of a fluid in a cylindrical container with a free surface following vertical impact." Liquid impact is a key issue in sloshing and in particular in sloshing in tanks of LNG vessels. Numerical simulations of sloshing have been performed by various groups, using various types of numerical methods. In terms of the numerical results, the outcome is often impressive, but the question remains of how relevant these results are when it comes to determining impact pressures. The numerical models are too simplified to reproduce the high variability of the measured pressures. In fact, for the time being, it is not possible to simulate accurately both global and local effects. Unfortunately it appears that local effects predominate over global effects when the behaviour of pressures is considered. Having said this, it is important to point out that numerical studies can be quite useful to perform sensitivity analyses in idealized conditions such as a liquid mass falling under gravity on top of a horizontal wall and then spreading along the lateral sides. Simple analytical models inspired by numerical results on idealized problems can also be useful to predict trends. The talk is organized as follows: After a brief introduction on the sloshing problem and on scaling laws, it will be explained to what extent numerical studies can be used to improve our understanding of impact pressures. Results on a liquid mass hitting a wall obtained by a finite-volume code with interface reconstruction as well as results obtained by a simple analytical model will be shown to reproduce the trends of experiments on sloshing. This is joint work with L. Brosset (GazTransport & Technigaz), J.-M. Ghidaglia (ENS Cachan) and J.-P. Braeunig (INRIA).
Chern-Simons classes on loop spaces and diffeomorphism groups
13:10 Fri 12 Jun, 2009 :: School Board Room :: Prof Steve Rosenberg :: Boston University

The loop space LM of a Riemannian manifold M comes with a family of Riemannian metrics indexed by a Sobolev parameter. We can construct characteristic classes for LM using the Wodzicki residue instead of the usual matrix trace. The Pontrjagin classes of LM vanish, but the secondary or Chern-Simons classes may be nonzero and may distinguish circle actions on M. There are similar results for diffeomorphism groups of manifolds.
Strong Predictor-Corrector Euler Methods for Stochastic Differential Equations
15:10 Fri 19 Jun, 2009 :: LG29 :: Prof. Eckhard Platen :: University of Technology, Sydney

This paper introduces a new class of numerical schemes for the pathwise approximation of solutions of stochastic differential equations (SDEs). The proposed family of strong predictor-corrector Euler methods is designed to handle scenario simulation of solutions of SDEs, and has the potential to overcome some of the numerical instabilities that are often experienced when using the explicit Euler method. This is of importance, for instance, in finance, where martingale dynamics arise for solutions of SDEs with multiplicative diffusion coefficients. Numerical experiments demonstrate the improved asymptotic stability properties of the proposed symmetric predictor-corrector Euler methods.
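The predictor-corrector idea can be sketched on geometric Brownian motion, an SDE with multiplicative diffusion of the kind mentioned above. The scheme below is a simplified member of the family (explicit Euler predictor, trapezoidal correction in the drift only; all names are mine, not the paper's):

```python
import math
import random

def pc_euler_paths(x0, mu, sigma, T, n_steps, n_paths, seed=0):
    """Predictor-corrector Euler for dX = mu*X dt + sigma*X dW.

    Simplified variant: the corrector averages the drift at the current
    point and at the explicit-Euler predictor; the diffusion term is kept
    explicit. Returns the terminal values of n_paths simulated paths.
    """
    rng = random.Random(seed)
    dt = T / n_steps

    def drift(y):
        return mu * y

    out = []
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(dt))
            x_pred = x + drift(x) * dt + sigma * x * dw                       # predictor
            x = x + 0.5 * (drift(x) + drift(x_pred)) * dt + sigma * x * dw   # corrector
        out.append(x)
    return out

paths = pc_euler_paths(x0=1.0, mu=0.05, sigma=0.2, T=1.0, n_steps=50, n_paths=20000)
mean = sum(paths) / len(paths)
print(mean, math.exp(0.05))  # sample mean vs exact E[X_1] = exp(mu*T)
```

The full family in the paper also allows implicitness in the diffusion term, which is what gives the improved stability for multiplicative noise.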
Contemporary frontiers in statistics
15:10 Mon 28 Sep, 2009 :: Badger Labs G31 Macbeth Lecture Theatre :: Prof. Peter Hall :: University of Melbourne

The availability of powerful computing equipment has had a dramatic impact on statistical methods and thinking, changing forever the way data are analysed. New data types, larger quantities of data, and new classes of research problem are all motivating new statistical methods. We shall give examples of each of these issues, and discuss the current and future directions of frontier problems in statistics.
Modelling and pricing for portfolio credit derivatives
15:10 Fri 16 Oct, 2009 :: MacBeth Lecture Theatre :: Dr Ben Hambly :: University of Oxford

The current financial crisis has been in part precipitated by the growth of complex credit derivatives and their mispricing. This talk will discuss some of the background to the 'credit crunch', as well as the models and methods used currently. We will then develop an alternative view of large basket credit derivatives, as functions of a stochastic partial differential equation, which addresses some of the shortcomings.
Analytic torsion for twisted de Rham complexes
13:10 Fri 30 Oct, 2009 :: School Board Room :: Prof Mathai Varghese :: University of Adelaide

We define analytic torsion for the twisted de Rham complex, consisting of differential forms on a compact Riemannian manifold X with coefficients in a flat vector bundle E, with a differential given by a flat connection on E plus a closed odd degree differential form on X. The definition in our case is more complicated than in the case discussed by Ray-Singer, as it uses pseudodifferential operators. We show that this analytic torsion is independent of the choice of metrics on X and E, establish some basic functorial properties, and compute it in many examples. We also establish the relationship of an invariant version of analytic torsion for T-dual circle bundles with closed 3-form flux. This is joint work with Siye Wu.
Eigen-analysis of fluid-loaded compliant panels
15:10 Wed 9 Dec, 2009 :: Santos Lecture Theatre :: Prof Tony Lucey :: Curtin University of Technology

This presentation concerns the fluid-structure interaction (FSI) that occurs between a fluid flow and an arbitrarily deforming flexible boundary considered to be a flexible panel or a compliant coating that comprises the wetted surface of a marine vehicle. We develop and deploy an approach that is a hybrid of computational and theoretical techniques. The system studied is two-dimensional and linearised disturbances are assumed. Of particular novelty in the present work is the ability of our methods to extract a full set of fluid-structure eigenmodes for systems that have strong spatial inhomogeneity in the structure of the flexible wall.

We first present the approach and some results of the system in which an ideal, zero-pressure gradient, flow interacts with a flexible plate held at both its ends. We use a combination of boundary-element and finite-difference methods to express the FSI system as a single matrix equation in the interfacial variable. This is then couched in state-space form and standard methods used to extract the system eigenvalues. It is then shown how the incorporation of spatial inhomogeneity in the stiffness of the plate can be either stabilising or destabilising. We also show that adding a further restraint within the streamwise extent of a homogeneous panel can trigger an additional type of hydroelastic instability at low flow speeds. The mechanism for the fluid-to-structure energy transfer that underpins this instability can be explained in terms of the pressure-signal phase relative to that of the wall motion and the effect on this relationship of the added wall restraint.

We then show how the ideal-flow approach can be conceptually extended to include boundary-layer effects. The flow field is now modelled by the continuity equation and the linearised perturbation momentum equation written in velocity-velocity form. The near-wall flow field is spatially discretised into rectangular elements on an Eulerian grid and a variant of the discrete-vortex method is applied. The entire fluid-structure system can again be assembled as a linear system for a single set of unknowns - the flow-field vorticity and the wall displacements - that admits the extraction of eigenvalues. We then show how stability diagrams for the fully-coupled finite flow-structure system can be assembled, in doing so identifying classes of wall-based or fluid-based and spatio-temporal wave behaviour.

A solution to the Gromov-Vaserstein problem
15:10 Fri 29 Jan, 2010 :: Engineering North N 158 Chapman Lecture Theatre :: Prof Frank Kutzschebauch :: University of Berne, Switzerland

Any matrix in $SL_n (\mathbb C)$ can be written as a product of elementary matrices using the Gauss elimination process. If instead of the field of complex numbers, the entries in the matrix are elements of a more general ring, this becomes a delicate question. In particular, rings of complex-valued functions on a space are interesting cases. A deep result of Suslin gives an affirmative answer for the polynomial ring in $m$ variables in case the size $n$ of the matrix is at least 3. In the topological category, the problem was solved by Thurston and Vaserstein. For holomorphic functions on $\mathbb C^m$, the problem was posed by Gromov in the 1980s. We report on a complete solution to Gromov's problem. A main tool is the Oka-Grauert-Gromov h-principle in complex analysis. Our main theorem can be formulated as follows: In the absence of obvious topological obstructions, the Gauss elimination process can be performed in a way that depends holomorphically on the matrix. This is joint work with Björn Ivarsson.
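For $2\times 2$ matrices over a field the statement is elementary and completely explicit: any $A \in SL_2$ with nonzero lower-left entry is a product of three shear (elementary) matrices, upper-lower-upper. A quick numerical check of that identity (helper names are mine):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def elementary_factors(A):
    """Factor A = [[a,b],[c,d]] with det 1 and c != 0 into three
    elementary (unitriangular shear) matrices: upper * lower * upper."""
    (a, b), (c, d) = A
    assert abs(a * d - b * c - 1) < 1e-12 and c != 0
    E1 = [[1, (a - 1) / c], [0, 1]]   # upper shear
    E2 = [[1, 0], [c, 1]]             # lower shear
    E3 = [[1, (d - 1) / c], [0, 1]]   # upper shear
    return E1, E2, E3

A = [[2.0, 3.0], [1.0, 2.0]]          # det = 2*2 - 3*1 = 1
E1, E2, E3 = elementary_factors(A)
P = matmul(matmul(E1, E2), E3)
print(P)  # reproduces A
```

The delicacy discussed in the talk is that over rings of holomorphic functions such entries like $(a-1)/c$ need not exist globally, which is why the h-principle machinery is needed.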
The fluid mechanics of gels used in tissue engineering
15:10 Fri 9 Apr, 2010 :: Santos Lecture Theatre :: Dr Edward Green :: University of Western Australia

Tissue engineering could be called 'the science of spare parts'. Although currently in its infancy, its long-term aim is to grow functional tissues and organs in vitro to replace those which have become defective through age, trauma or disease. Recent experiments have shown that mechanical interactions between cells and the materials in which they are grown have an important influence on tissue architecture, but in order to understand these effects, we first need to understand the mechanics of the gels themselves.

Many biological gels (e.g. collagen) used in tissue engineering have a fibrous microstructure which affects the way forces are transmitted through the material, and which in turn affects cell migration and other behaviours. I will present a simple continuum model of gel mechanics, based on treating the gel as a transversely isotropic viscous material. Two canonical problems are considered involving thin two-dimensional films: extensional flow, and squeezing flow of the fluid between two rigid plates. Neglecting inertia, gravity and surface tension, in each regime we can exploit the thin geometry to obtain a leading-order problem which is sufficiently tractable to allow the use of analytical methods. I discuss how these results could be exploited practically to determine the mechanical properties of real gels. If time permits, I will also talk about work currently in progress which explores the interaction between gel mechanics and cell behaviour.

Random walk integrals
13:10 Fri 16 Apr, 2010 :: School Board Room :: Prof Jonathan Borwein :: University of Newcastle

Following Pearson in 1905, we study the expected distance of a two-dimensional walk in the plane with unit steps in random directions---what Pearson called a "ramble". A series evaluation and recursions are obtained making it possible to explicitly determine this distance for a small number of steps. Closed form expressions for all the moments of a 2-step and a 3-step walk are given, and a formula is conjectured for the 4-step walk. Heavy use is made of the analytic continuation of the underlying integral.
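For the 2-step walk the closed form is simple: the expected distance is $4/\pi \approx 1.2732$. A Monte Carlo check (a sketch with my own names, not the authors' series evaluation) reproduces it:

```python
import cmath
import math
import random

def mean_walk_distance(n_steps, n_samples, seed=0):
    """Monte Carlo estimate of the expected distance from the origin after
    n_steps unit steps in uniformly random directions (Pearson's 'ramble')."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # Represent each unit step as a point on the unit circle.
        z = sum(cmath.exp(1j * rng.uniform(0.0, 2.0 * math.pi))
                for _ in range(n_steps))
        total += abs(z)
    return total / n_samples

est = mean_walk_distance(2, 200000)
print(est, 4 / math.pi)  # exact expected distance for the 2-step walk is 4/pi
```

For larger numbers of steps no such elementary closed form is known, which is what motivates the series and analytic-continuation techniques of the talk.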
Mathematical epidemiology with a focus on households
15:10 Fri 23 Apr, 2010 :: Napier G04 :: Dr Joshua Ross :: University of Adelaide

Mathematical models are now used routinely to inform national and global policy-makers on issues that threaten human health or which have an adverse impact on the economy. In the first part of this talk I will provide an overview of mathematical epidemiology starting with the classical deterministic model and leading to some of the current challenges. I will then present some of my recently published work which provides computationally-efficient methods for studying a mathematical model incorporating household structure. We will conclude by briefly discussing some "work-in-progress" which utilises these methods to address the issues of inference, and mixing pattern and contact structure, for emerging infections.
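The classical deterministic model referred to above is the SIR system, $S' = -\beta S I$, $I' = \beta S I - \gamma I$, $R' = \gamma I$. A minimal forward-Euler integration (illustrative parameter values of my choosing) shows the characteristic outbreak:

```python
def sir_euler(beta, gamma, s0, i0, r0, T, n_steps):
    """Forward-Euler integration of the classical deterministic SIR model
    in proportions: S' = -beta*S*I, I' = beta*S*I - gamma*I, R' = gamma*I."""
    dt = T / n_steps
    s, i, r = s0, i0, r0
    trajectory = [(s, i, r)]
    for _ in range(n_steps):
        new_inf = beta * s * i
        s, i, r = (s - dt * new_inf,
                   i + dt * (new_inf - gamma * i),
                   r + dt * gamma * i)
        trajectory.append((s, i, r))
    return trajectory

# Basic reproduction number R0 = beta/gamma = 2: a major outbreak from a small seed.
traj = sir_euler(beta=2.0, gamma=1.0, s0=0.999, i0=0.001, r0=0.0, T=30.0, n_steps=3000)
s_end, i_end, r_end = traj[-1]
print(s_end, i_end, r_end)
```

The household models of the talk replace this homogeneous-mixing assumption with stochastic within-household dynamics, which is where the computational challenge arises.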
Understanding convergence of meshless methods: Vortex methods and smoothed particle hydrodynamics
15:10 Fri 14 May, 2010 :: Santos Lecture Theatre :: A/Prof Lou Rossi :: University of Delaware

Meshless methods such as vortex methods (VMs) and smoothed particle hydrodynamics (SPH) schemes offer many advantages in fluid flow computations. Particle-based computations naturally adapt to complex flow geometries and so provide a high degree of computational efficiency. Also, particle based methods avoid CFL conditions because flow quantities are integrated along characteristics. There are many approaches to improving numerical methods, but one of the most effective routes is quantifying the error through the direct estimate of residual quantities. Understanding the residual for particle schemes requires a different approach than for meshless schemes but the rewards are significant. In this seminar, I will outline a general approach to understanding convergence that has been effective in creating high spatial accuracy vortex methods, and then I will discuss some recent investigations in the accuracy of diffusion operators used in SPH computations. Finally, I will provide some sample Navier-Stokes computations of high Reynolds number flows using BlobFlow, an open source implementation of the high precision vortex method.
A variance constraining ensemble Kalman filter: how to improve forecast using climatic data of unobserved variables
15:10 Fri 28 May, 2010 :: Santos Lecture Theatre :: A/Prof Georg Gottwald :: The University of Sydney

Data assimilation aims to solve one of the fundamental problems of numerical weather prediction - estimating the optimal state of the atmosphere given a numerical model of the dynamics, and sparse, noisy observations of the system. A standard tool in attacking this filtering problem is the Kalman filter.

We consider the problem when only partial observations are available. In particular we consider the situation where the observational space consists of variables which are directly observable with known observational error, and of variables of which only their climatic variance and mean are given. We derive the corresponding Kalman filter in a variational setting.

We analyze the variance constraining Kalman filter (VCKF) for a simple linear toy model and determine its range of optimal performance. We explore the variance constraining Kalman filter in an ensemble transform setting for the Lorenz-96 system, and show that incorporating information on the variance of some unobserved variables can improve the skill and also increase the stability of the data assimilation procedure.

Using methods from dynamical systems theory, we then study systems where the un-observed variables evolve deterministically but chaotically on a fast time scale.

This is joint work with Lewis Mitchell and Sebastian Reich.
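The standard Kalman filter mentioned above combines a forecast with a noisy observation, weighting each by its variance. A minimal scalar update step (purely illustrative, not the VCKF itself; names are mine) makes the mechanics concrete:

```python
def kalman_update(x_prior, p_prior, y, obs_var):
    """One scalar Kalman update: combine a prior estimate (mean x_prior,
    variance p_prior) with an observation y of variance obs_var."""
    k = p_prior / (p_prior + obs_var)   # Kalman gain: trust the obs more when prior is uncertain
    x_post = x_prior + k * (y - x_prior)
    p_post = (1.0 - k) * p_prior        # analysis variance never exceeds the prior variance
    return x_post, p_post

# Uncertain prior (variance 4) meets an accurate observation (variance 1):
# the analysis moves most of the way towards the observation.
x, p = kalman_update(x_prior=0.0, p_prior=4.0, y=2.0, obs_var=1.0)
print(x, p)  # x = 1.6, p = 0.8
```

The variance constraining variant in the talk augments this update with climatic variance information for variables that are never directly observed.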

On affine BMW algebras
13:10 Fri 25 Jun, 2010 :: Napier 208 :: Prof Arun Ram :: University of Melbourne

I will describe a family of algebras of tangles (which give rise to link invariants following the methods of Reshetikhin-Turaev and Jones) and describe some aspects of their structure and their representation theory. The main goal will be to explain how to use universal Verma modules for the symplectic group to compute the representation theory of affine BMW (Birman-Murakami-Wenzl) algebras.
Adjoint methods for adaptive error control, optimization, and uncertainty quantification
15:10 Fri 16 Jul, 2010 :: Napier G03 :: Dr Varis Carey :: Colorado State University

We give an introduction to the use of adjoint equations (and solutions) for numerical error control and solution enhancement of PDEs. In addition, the same equations can be used for optimization routines and uncertainty quantification. We discuss the modification of these methods in the context of operator splitting and to non-variational (e.g. finite volume) methods. Finally, we conclude with an application of the method to the shallow water equations and discuss some of the hurdles that need to be overcome when extending adjoint methodologies to ocean and atmospheric modeling.
Eynard-Orantin invariants and enumerative geometry
13:10 Fri 6 Aug, 2010 :: Ingkarni Wardli B20 (Suite 4) :: Dr Paul Norbury :: University of Melbourne

As a tool for studying enumerative problems in geometry Eynard and Orantin associate multilinear differentials to any plane curve. Their work comes from matrix models but does not require matrix models (for understanding or calculations). In some sense they describe deformations of complex structures of a curve and conjectural relationships to deformations of Kahler structures of an associated object. I will give an introduction to their invariants via explicit examples, mainly to do with the moduli space of Riemann surfaces, in which the plane curve has genus zero.
Simultaneous confidence band and hypothesis test in generalised varying-coefficient models
15:05 Fri 10 Sep, 2010 :: Napier LG28 :: Prof Wenyang Zhang :: University of Bath

Generalised varying-coefficient models (GVC) are very important models. There is a considerable literature addressing these models; however, most of it is devoted to the estimation procedure. In this talk, I will systematically investigate statistical inference for GVC, which includes confidence bands as well as hypothesis tests. I will show the asymptotic distribution of the maximum discrepancy between the estimated functional coefficient and the true functional coefficient. I will compare different approaches to the construction of confidence bands and hypothesis tests. Finally, the proposed statistical inference methods are used to analyse data from China about contraceptive use there, which leads to some interesting findings.
Totally disconnected, locally compact groups
15:10 Fri 17 Sep, 2010 :: Napier G04 :: Prof George Willis :: University of Newcastle

Locally compact groups occur in many branches of mathematics. Their study falls into two cases: connected groups, which occur as automorphisms of smooth structures such as spheres for example; and totally disconnected groups, which occur as automorphisms of discrete structures such as trees. The talk will give an overview of the currently developing structure theory of totally disconnected locally compact groups. Techniques for analysing totally disconnected groups will be described that correspond to the familiar Lie group methods used to treat connected groups. These techniques played an essential role in the recent solution of a problem raised by R. Zimmer and G. Margulis concerning commensurated subgroups of arithmetic groups.
Statistical physics and behavioral adaptation to Creation's main stimuli: sex and food
15:10 Fri 29 Oct, 2010 :: E10 B17 Suite 1 :: Prof Laurent Seuront :: Flinders University and South Australian Research and Development Institute

Animals typically search for food and mates, while avoiding predators. This is particularly critical for keystone organisms such as intertidal gastropods and copepods (i.e. millimeter-scale crustaceans) as they typically rely on non-visual senses for detecting, identifying and locating mates in their two- and three-dimensional environments. Here, using stochastic methods derived from the field of nonlinear physics, we provide new insights into the nature (i.e. innate vs. acquired) of the motion behavior of gastropods and copepods, and demonstrate how changes in their behavioral properties can be used to identify the trade-offs between foraging for food and for sex. The gastropod Littorina littorea hence moves according to fractional Brownian motion while foraging for food (in accordance with the fractal nature of food distributions), and switches to Brownian motion while foraging for sex. In contrast, the swimming behavior of the copepod Temora longicornis belongs to the class of multifractal random walks (MRW; i.e. a form of anomalous diffusion), characterized by a nonlinear moment scaling function for distance versus time. This clearly differs from the traditional Brownian and fractional Brownian walks expected or previously detected in animal behaviors. The divergence between MRW and Levy flights and walks is also discussed, and it is shown how copepod anomalous diffusion is enhanced by the presence and concentration of conspecific water-borne signals, and dramatically increases male-female encounter rates.
Real analytic sets in complex manifolds I: holomorphic closure dimension
13:10 Fri 4 Mar, 2011 :: Mawson 208 :: Dr Rasul Shafikov :: University of Western Ontario

After a quick introduction to real and complex analytic sets, I will discuss possible notions of complex dimension of real sets, and then discuss a structure theorem for the holomorphic closure dimension which is defined as the dimension of the smallest complex analytic germ containing the real germ.
Real analytic sets in complex manifolds II: complex dimension
13:10 Fri 11 Mar, 2011 :: Mawson 208 :: Dr Rasul Shafikov :: University of Western Ontario

Given a real analytic set R, denote by A the subset of R of points through which there is a nontrivial complex variety contained in R, i.e., A consists of points in R of positive complex dimension. I will discuss the structure of the set A.
Bioinspired computation in combinatorial optimization: algorithms and their computational complexity
15:10 Fri 11 Mar, 2011 :: 7.15 Ingkarni Wardli :: Dr Frank Neumann :: The University of Adelaide

Bioinspired computation methods, such as evolutionary algorithms and ant colony optimization, are being applied successfully to complex engineering and combinatorial optimization problems. The computational complexity analysis of this type of algorithm has significantly increased the theoretical understanding of these successful algorithms. In this talk, I will give an introduction to this field of research and present some important results that we achieved for problems from combinatorial optimization. These results can also be found in my recent textbook "Bioinspired Computation in Combinatorial Optimization -- Algorithms and Their Computational Complexity".
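Runtime analyses of this kind are typically carried out for simple benchmark algorithms. A minimal sketch (an illustrative example, not taken from the talk or the textbook) is the (1+1) evolutionary algorithm on the OneMax function, whose expected optimisation time is known to be O(n log n):

```python
import random

def one_max(bits):
    # Fitness: the number of ones in the bit string.
    return sum(bits)

def one_plus_one_ea(n, seed=0, max_iters=100_000):
    """(1+1) EA: flip each bit independently with probability 1/n and
    keep the offspring if it is at least as fit as the parent."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    for iters in range(1, max_iters + 1):
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        if one_max(child) >= one_max(parent):
            parent = child
        if one_max(parent) == n:
            return iters  # iterations until the optimum was found
    return max_iters

steps = one_plus_one_ea(30)
```

For n = 30 the optimum is typically reached in a few hundred iterations, consistent with the O(n log n) bound.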
Modelling of Hydrological Persistence in the Murray-Darling Basin for the Management of Weirs
12:10 Mon 4 Apr, 2011 :: 5.57 Ingkarni Wardli :: Aiden Fisher :: University of Adelaide

The lakes and weirs along the lower Murray River in Australia are aggregated and considered as a sequence of five reservoirs. A seasonal Markov chain model for the system will be implemented, and a stochastic dynamic program will be used to find optimal release strategies, in terms of expected monetary value (EMV), for the competing demands on the water resource given the stochastic nature of inflows. Matrix analytic methods will be used to analyse the system further, and in particular enable the full distribution of first passage times between any groups of states to be calculated. The full distribution of first passage times can be used to provide a measure of the risk associated with optimum EMV strategies, such as conditional value at risk (CVaR). The sensitivity of the model, and risk, to changing rainfall scenarios will be investigated. The effect of decreasing the level of discretisation of the reservoirs will be explored. Also, the use of matrix analytic methods facilitates the use of hidden states to allow for hydrological persistence in the inflows. Evidence for hydrological persistence of inflows to the lower Murray system, and the effect of making allowance for this, will be discussed.
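The full first-passage-time distribution mentioned above can be illustrated on a toy chain by making the target states absorbing and propagating the state distribution; the matrix analytic machinery of the talk handles much larger, seasonally structured state spaces. The three-state transition matrix below is a hypothetical stand-in, not the model from the talk:

```python
import numpy as np

# A hypothetical 3-state inflow chain (rows sum to 1); the model in the talk
# is far larger, seasonal, and analysed with matrix analytic methods.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])

def first_passage_cdf(P, start, targets, kmax):
    """P(first hitting time of `targets` <= k) for k = 1..kmax,
    computed by making the target states absorbing."""
    Q = P.copy()
    for t in targets:
        Q[t, :] = 0.0
        Q[t, t] = 1.0                  # absorb on arrival
    dist = np.zeros(P.shape[0])
    dist[start] = 1.0
    cdf = []
    for _ in range(kmax):
        dist = dist @ Q
        cdf.append(dist[list(targets)].sum())
    return np.array(cdf)

cdf = first_passage_cdf(P, start=0, targets=[2], kmax=50)
```

Quantities such as conditional value at risk can then be read directly off this distribution.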
On parameter estimation in population models
15:10 Fri 6 May, 2011 :: 715 Ingkarni Wardli :: Dr Joshua Ross :: The University of Adelaide

Essential to applying a mathematical model to a real-world application is calibrating the model to data. Methods for calibrating population models often become computationally infeasible when the population size (more generally, the size of the state space) becomes large, or when other complexities, such as time-dependent transition rates or sampling error, are present. Here we will discuss the use of diffusion approximations to perform estimation in several scenarios, with successively reduced assumptions: (i) under the assumption of stationarity (the process had been evolving for a very long time with constant parameter values); (ii) transient dynamics (the assumption of stationarity is invalid, and thus only constant parameter values may be assumed); and, (iii) time-inhomogeneous chains (the parameters may vary with time) and accounting for observation error (a sample of the true state is observed).
The Cauchy integral formula
12:10 Mon 9 May, 2011 :: 5.57 Ingkarni Wardli :: Stephen Wade :: University of Adelaide

In this talk I will explain a simple method for calculating the Hilbert transform of an analytic function, and provide some assurance that this isn't a bad thing to do in spite of the somewhat ominous presence of infinite areas. As it turns out, this type of integral is not without application, as will be demonstrated by an application to a problem in fluid mechanics.
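As a quick numerical illustration of the transform in question (using an FFT-based construction rather than the contour-integral method of the talk), scipy's analytic-signal routine recovers the classical pairing H[cos] = sin on a periodic grid:

```python
import numpy as np
from scipy.signal import hilbert

# On a periodic grid the Hilbert transform pairs cos with sin; scipy's
# FFT-based analytic-signal routine recovers this to machine precision.
t = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
x = np.cos(3 * t)
analytic = hilbert(x)          # the analytic signal x + i*H[x]
Hx = np.imag(analytic)
err = np.max(np.abs(Hx - np.sin(3 * t)))
```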
Optimal experimental design for stochastic population models
15:00 Wed 1 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Dan Pagendam :: CSIRO, Brisbane

Markov population processes are popular models for studying a wide range of phenomena including the spread of disease, the evolution of chemical reactions and the movements of organisms in population networks (metapopulations). Our ability to use these models effectively can be limited by our knowledge about parameters, such as disease transmission and recovery rates in an epidemic. Recently, there has been interest in devising optimal experimental designs for stochastic models, so that practitioners can collect data in a manner that maximises the precision of maximum likelihood estimates of the parameters for these models. I will discuss some recent work on optimal design for a variety of population models, beginning with some simple one-parameter models where the optimal design can be obtained analytically and moving on to more complicated multi-parameter models in epidemiology that involve latent states and non-exponentially distributed infectious periods. For these more complex models, the optimal design must be arrived at using computational methods and we rely on a Gaussian diffusion approximation to obtain analytical expressions for Fisher's information matrix, which is at the heart of most optimality criteria in experimental design. I will outline a simple cross-entropy algorithm that can be used for obtaining optimal designs for these models. We will also explore the improvements in experimental efficiency when using the optimal design over some simpler designs, such as the design where observations are spaced equidistantly in time.
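The role of Fisher information in the optimality criteria above can be seen in a deliberately tiny design problem (an illustrative assumption, not one of the epidemic models of the talk): observe y ~ N(exp(-theta*t), sigma^2) once, and choose the observation time t. The information I(t) = t^2 exp(-2*theta*t)/sigma^2 is maximised at t = 1/theta, so the best single observation time scales inversely with the rate parameter:

```python
import numpy as np

# Toy one-parameter design problem: a single observation
# y ~ N(exp(-theta*t), sigma^2). Fisher information
# I(t) = t^2 * exp(-2*theta*t) / sigma^2 peaks at t = 1/theta.
def fisher_info(t, theta, sigma=1.0):
    deriv = -t * np.exp(-theta * t)    # d/dtheta of the mean exp(-theta*t)
    return deriv ** 2 / sigma ** 2

theta = 0.5
ts = np.linspace(0.01, 20.0, 4000)
t_opt = ts[np.argmax(fisher_info(ts, theta))]   # expect t_opt near 1/theta = 2
```

For multi-parameter models with latent states, as in the talk, this analytic shortcut disappears and computational methods such as the cross-entropy algorithm take over.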
Inference and optimal design for percolation and general random graph models (Part I)
09:30 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge

The problem of optimal arrangement of nodes of a random weighted graph is discussed in this workshop. The nodes of the graphs under study are fixed, but their edges are random and established according to the so-called edge-probability function. This function is assumed to depend on the weights attributed to the pairs of graph nodes (or distances between them) and a statistical parameter. It is the purpose of experimentation to make inference on the statistical parameter and thus to extract as much information about it as possible. We also distinguish between two different experimentation scenarios: progressive and instructive designs.

We adopt a utility-based Bayesian framework to tackle the optimal design problem for random graphs of this kind. Simulation-based optimisation methods, mainly Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We study the optimal design problem for inference based on partial observations of random graphs by employing a data augmentation technique. We prove that infinitely growing or diminishing node configurations asymptotically represent the worst node arrangements. We also obtain the exact solution to the optimal design problem for proximity (geometric) graphs and a numerical solution for graphs with threshold edge-probability functions.

We consider inference and optimal design problems for finite clusters from bond percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both numerical and analytical results for these graphs. We introduce inner-outer plots by deleting some of the lattice nodes and show that the 'mostly populated' designs are not necessarily optimal in the case of incomplete observations under both progressive and instructive design scenarios. Some of the obtained results may generalise to other lattices.

Inference and optimal design for percolation and general random graph models (Part II)
10:50 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge

The problem of optimal arrangement of nodes of a random weighted graph is discussed in this workshop. The nodes of the graphs under study are fixed, but their edges are random and established according to the so-called edge-probability function. This function is assumed to depend on the weights attributed to the pairs of graph nodes (or distances between them) and a statistical parameter. It is the purpose of experimentation to make inference on the statistical parameter and thus to extract as much information about it as possible. We also distinguish between two different experimentation scenarios: progressive and instructive designs.

We adopt a utility-based Bayesian framework to tackle the optimal design problem for random graphs of this kind. Simulation-based optimisation methods, mainly Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We study the optimal design problem for inference based on partial observations of random graphs by employing a data augmentation technique. We prove that infinitely growing or diminishing node configurations asymptotically represent the worst node arrangements. We also obtain the exact solution to the optimal design problem for proximity (geometric) graphs and a numerical solution for graphs with threshold edge-probability functions.

We consider inference and optimal design problems for finite clusters from bond percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both numerical and analytical results for these graphs. We introduce inner-outer plots by deleting some of the lattice nodes and show that the 'mostly populated' designs are not necessarily optimal in the case of incomplete observations under both progressive and instructive design scenarios. Some of the obtained results may generalise to other lattices.

Spectra alignment/matching for the classification of cancer and control patients
12:10 Mon 8 Aug, 2011 :: 5.57 Ingkarni Wardli :: Mr Tyman Stanford :: University of Adelaide

Proteomic time-of-flight mass spectrometry produces a spectrum based on the peptides (chains of amino acids) in each patient’s serum sample. The spectra contain data points for an x-axis (peptide weight) and a y-axis (peptide frequency/count/intensity). Our end goal is to differentiate cancer patients (and sub-types) from control patients using these spectra. Before we can do this, peaks in these data must be found and peptides common to different spectra must be identified. The data are noisy because of biotechnological variation and calibration error; data points for different peptide weights may in fact be the same peptide. An algorithm needs to be employed to find common peptides between spectra, as performing the alignment ‘by hand’ is almost infeasible. We borrow methods suggested in the metabolomic gas chromatography-mass spectrometry literature and extend them for our purposes. In this talk I will go over the basic tenets of what we hope to achieve and the process towards this.
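The matching step can be sketched with a deliberately simple greedy pairing within a weight tolerance; this is a hypothetical stand-in for the alignment methods borrowed from the metabolomics literature, not the method of the talk:

```python
def match_peaks(peaks_a, peaks_b, tol):
    """Greedily pair peaks (peptide weights) from two spectra that differ by
    at most `tol`; each peak in the second spectrum is used at most once."""
    pairs, used = [], set()
    for a in sorted(peaks_a):
        best = None
        for j, b in enumerate(peaks_b):
            if j in used or abs(a - b) > tol:
                continue
            if best is None or abs(a - b) < abs(a - peaks_b[best]):
                best = j                # nearest unused peak within tolerance
        if best is not None:
            used.add(best)
            pairs.append((a, peaks_b[best]))
    return pairs

pairs = match_peaks([100.0, 150.2, 210.5], [100.3, 149.9, 400.0], tol=0.5)
```

Here the first two peaks pair up within tolerance while the unmatched peaks are left alone, mimicking calibration error between spectra.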
Laplace's equation on multiply-connected domains
12:10 Mon 29 Aug, 2011 :: 5.57 Ingkarni Wardli :: Mr Hayden Tronnolone :: University of Adelaide

Various physical processes take place on multiply-connected domains (domains with some number of 'holes'), such as the stirring of a fluid with paddles or the extrusion of material from a die. These systems may be described by partial differential equations (PDEs). However, standard numerical methods for solving PDEs are not well-suited to such examples: finite difference methods are difficult to implement on multiply-connected domains, especially when the boundaries are irregular or moving, while finite element methods are computationally expensive. In this talk I will describe a fast and accurate numerical method for solving certain PDEs on two-dimensional multiply-connected domains, considering Laplace's equation as an example. This method takes advantage of complex variable techniques which allow the solution to be found with spectral accuracy provided the boundary data is smooth. Other advantages over traditional numerical methods will also be discussed.
Twisted Morava K-theory
13:10 Fri 9 Sep, 2011 :: 7.15 Ingkarni Wardli :: Dr Craig Westerland :: University of Melbourne

Morava's extraordinary K-theories K(n) are a family of generalized cohomology theories which behave in some ways like K-theory (indeed, K(1) is mod 2 K-theory). Their construction exploits Quillen's description of cobordism in terms of formal group laws and Lubin-Tate's methods in class field theory for constructing abelian extensions of number fields. Constructed from homotopy-theoretic methods, they do not admit a geometric description (like de Rham cohomology, K-theory, or cobordism), but are nonetheless subtle, computable invariants of topological spaces. In this talk, I will give an introduction to these theories, and explain how it is possible to define an analogue of twisted K-theory in this setting. Traditionally, K-theory is twisted by a three-dimensional cohomology class; in this case, K(n) admits twists by (n+2)-dimensional classes. This work is joint with Hisham Sati.
Mathematical modelling of lobster populations in South Australia
12:10 Mon 12 Sep, 2011 :: 5.57 Ingkarni Wardli :: Mr John Feenstra :: University of Adelaide

Just how many lobsters are there hanging around the South Australian coastline? How is this number changing over time? What is the demographic breakdown of this number? And what does it matter? Find out the answers to these questions in my upcoming talk. I will provide a brief flavour of the kinds of quantitative methods involved, showcasing relevant applications of regression, population modelling, estimation, as well as simulation. A product of these analyses is a set of biological performance indicators which are used by government to help decide on fishery controls such as yearly total allowable catch quotas. This assists in maintaining the sustainability of the fishery and hence benefits both the fishers and the lobsters they catch.
Estimating transmission parameters for the swine flu pandemic
15:10 Fri 23 Sep, 2011 :: 7.15 Ingkarni Wardli :: Dr Kathryn Glass :: Australian National University

Following the onset of a new strain of influenza with pandemic potential, policy makers need specific advice on how fast the disease is spreading, who is at risk, and what interventions are appropriate for slowing transmission. Mathematical models play a key role in comparing interventions and identifying the best response, but models are only as good as the data that inform them. In the early stages of the 2009 swine flu outbreak, many researchers estimated transmission parameters - particularly the reproduction number - from outbreak data. These estimates varied, and were often biased by data collection methods, misclassification of imported cases, or early stochasticity in case numbers. I will discuss a number of the pitfalls in achieving good quality parameter estimates from early outbreak data, and outline how best to avoid them. One of the early indications from swine flu data was that children were disproportionately responsible for disease spread. I will introduce a new method for estimating age-specific transmission parameters from both outbreak and seroprevalence data. This approach allows us to take account of empirical data on human contact patterns, and highlights the need to allow for asymmetric mixing matrices in modelling disease transmission between age groups. Applied to swine flu data from a number of different countries, it presents a consistent picture of higher transmission from children.
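A back-of-envelope version of a reproduction-number estimate (not the age-structured method of the talk, and with made-up case counts) fits an exponential growth rate r to early incidence and uses the simple SIR-type relation R ≈ 1 + rT, where T is an assumed mean generation interval:

```python
import numpy as np

# Hypothetical early case counts; real outbreak data carry the biases
# (imported cases, reporting changes) discussed in the talk.
days = np.arange(10)
cases = np.array([2, 3, 4, 6, 9, 13, 19, 28, 41, 60], dtype=float)
r = np.polyfit(days, np.log(cases), 1)[0]   # growth rate per day
T = 2.6                                      # assumed generation interval (days)
R_estimate = 1 + r * T
```

Such crude estimates are exactly the kind that can be badly biased by the data-quality issues the talk addresses.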
Estimating disease prevalence in hidden populations
14:05 Wed 28 Sep, 2011 :: B.18 Ingkarni Wardli :: Dr Amber Tomas :: The University of Oxford

Estimating disease prevalence in "hidden" populations such as injecting drug users or men who have sex with men is an important public health issue. However, traditional design-based estimation methods are inappropriate because they assume that a list of all members of the population is available from which to select a sample. Respondent Driven Sampling (RDS) is a method developed over the last 15 years for sampling from hidden populations. Similarly to snowball sampling, it leverages the fact that members of hidden populations are often socially connected to one another. Although RDS is now used around the world, there are several common population characteristics which are known to cause estimates calculated from such samples to be significantly biased. In this talk I'll discuss the motivation for RDS, as well as some of the recent developments in methods of estimation.
Likelihood-free Bayesian inference: modelling drug resistance in Mycobacterium tuberculosis
15:10 Fri 21 Oct, 2011 :: 7.15 Ingkarni Wardli :: Dr Scott Sisson :: University of New South Wales

A central pillar of Bayesian statistical inference is Monte Carlo integration, which is based on obtaining random samples from the posterior distribution. There are a number of standard ways to obtain these samples, provided that the likelihood function can be numerically evaluated. In the last 10 years, there has been a substantial push to develop methods that permit Bayesian inference in the presence of computationally intractable likelihood functions. These methods, termed "likelihood-free" or approximate Bayesian computation (ABC), are now being applied extensively across many disciplines. In this talk, I'll present a brief, non-technical overview of the ideas behind likelihood-free methods. I'll motivate and illustrate these ideas through an analysis of the epidemiological fitness cost of drug resistance in Mycobacterium tuberculosis.
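The core idea can be sketched with rejection ABC on a toy model (a simple illustration, not the Mycobacterium tuberculosis analysis of the talk): simulate from the model under parameters drawn from the prior, and keep a draw whenever its summary statistic lands close to the observed one, never evaluating a likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: n coin flips with unknown heads probability p;
# the summary statistic is the number of heads.
n, p_true = 100, 0.3
observed = rng.binomial(n, p_true)

def abc_rejection(observed, n, n_sims=20_000, tol=2):
    accepted = []
    for _ in range(n_sims):
        p = rng.uniform(0, 1)                 # draw from the uniform prior
        simulated = rng.binomial(n, p)        # simulate; no likelihood evaluated
        if abs(simulated - observed) <= tol:  # keep if summaries are close
            accepted.append(p)
    return np.array(accepted)

posterior = abc_rejection(observed, n)
```

The accepted draws approximate the posterior for p; tightening `tol` improves the approximation at the cost of fewer acceptances.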
Stability analysis of nonparallel unsteady flows via separation of variables
15:30 Fri 18 Nov, 2011 :: 7.15 Ingkarni Wardli :: Prof Georgy Burde :: Ben-Gurion University

The problem of variables separation in the linear stability equations, which govern the disturbance behavior in viscous incompressible fluid flows, is discussed. Stability of some unsteady nonparallel three-dimensional flows (exact solutions of the Navier-Stokes equations) is studied via separation of variables using a semi-analytical, semi-numerical approach. In this approach, a solution with separated variables is defined in a new coordinate system which is sought together with the solution form. As a result, the linear stability problems are reduced to eigenvalue problems for ordinary differential equations which can be solved numerically. In some specific cases, the eigenvalue problems can be solved analytically. These unique examples of exact (explicit) solutions of nonparallel unsteady flow stability problems provide a very useful test for methods used in hydrodynamic stability theory. Exact solutions of the stability problems for some stagnation-type flows are presented.
Plurisubharmonic subextensions as envelopes of disc functionals
13:10 Fri 2 Mar, 2012 :: B.20 Ingkarni Wardli :: A/Prof Finnur Larusson :: University of Adelaide

I will describe new joint work with Evgeny Poletsky. We prove a disc formula for the largest plurisubharmonic subextension of an upper semicontinuous function on a domain $W$ in a Stein manifold to a larger domain $X$ under suitable conditions on $W$ and $X$. We introduce a related equivalence relation on the space of analytic discs in $X$ with boundary in $W$. The quotient is a complex manifold with a local biholomorphism to $X$, except it need not be Hausdorff. We use our disc formula to generalise Kiselman's minimum principle. We show that his infimum function is an example of a plurisubharmonic subextension.
Are Immigrants Discriminated in the Australian Labour Market?
12:10 Mon 7 May, 2012 :: 5.57 Ingkarni Wardli :: Ms Wei Xian Lim :: University of Adelaide

In this talk, I will present what I did in my honours project, which was to determine whether immigrants, categorised as immigrants from English-speaking countries and non-English-speaking countries, are discriminated against in the Australian labour market. To determine whether discrimination exists, a decomposition of the wage function is applied and analysed via regression analysis. Two different methods of estimating the unknown parameters in the wage function will be discussed: 1. the Ordinary Least Squares method, 2. the Quantile Regression method. This is your rare chance of hearing me talk about non-nanomathematics related stuff!
The classification of Dynkin diagrams
12:10 Mon 21 May, 2012 :: 5.57 Ingkarni Wardli :: Mr Alexander Hanysz :: University of Adelaide

The idea of continuous symmetry is often described in mathematics via Lie groups. These groups can be classified by their root systems: collections of vectors satisfying certain symmetry properties. The root systems are described in a concise way by Dynkin diagrams, and it turns out, roughly speaking, that there are only seven possible shapes for a Dynkin diagram. In this talk I'll describe some simple examples of Lie groups, explain what a root system is, and show how a Dynkin diagram encodes this information. Then I'll give a very brief sketch of the methods used to classify Dynkin diagrams.
Enhancing the Jordan canonical form
15:10 Fri 1 Jun, 2012 :: B.21 Ingkarni Wardli :: A/Prof Anthony Henderson :: The University of Sydney

In undergraduate linear algebra, we teach the Jordan canonical form theorem: that every similarity class of n x n complex matrices contains a special matrix which is block-diagonal with each block having a very simple form (a single eigenvalue repeated down the diagonal, ones on the super-diagonal, and zeroes elsewhere). This is of course very useful for matrix calculations. After explaining some of the general context of this result, I will focus on a case which, despite its close proximity to the Jordan canonical form theorem, has only recently been worked out: the classification of pairs of a vector and a matrix.
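The Jordan form of a defective matrix can be computed symbolically; as a small illustration (my own example, not one from the talk), a 2x2 matrix with a repeated eigenvalue and only one eigenvector yields a single Jordan block:

```python
from sympy import Matrix

# (lambda - 2)^2 is the characteristic polynomial, but A - 2I has rank 1,
# so A is not diagonalizable: the canonical form is one Jordan block.
A = Matrix([[3, 1],
            [-1, 1]])
P, J = A.jordan_form()     # A = P * J * P**(-1)
```

Here J is [[2, 1], [0, 2]]: the eigenvalue repeated down the diagonal with a one on the super-diagonal, exactly the special shape described above.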
IGA Workshop: Dendroidal sets
14:00 Tue 12 Jun, 2012 :: Ingkarni Wardli B17 :: Dr Ittay Weiss :: University of the South Pacific

A series of four 2-hour lectures by Dr. Ittay Weiss. The theory of dendroidal sets was introduced by Moerdijk and Weiss in 2007 in the study of homotopy operads in algebraic topology. In the five years that have past since then several fundamental and highly non-trivial results were established. For instance, it was established that dendroidal sets provide models for homotopy operads in a way that extends the Joyal-Lurie approach to homotopy categories. It can be shown that dendroidal sets provide new models in the study of n-fold loop spaces. And it is very recently shown that dendroidal sets model all connective spectra in a way that extends the modeling of certain spectra by Picard groupoids. The aim of the lecture series will be to introduce the concepts mentioned above, present the elementary theory, and understand the scope of the results mentioned as well as discuss the potential for further applications. Sources for the course will include the article "From Operads to Dendroidal Sets" (in the AMS volume on mathematical foundations of quantum field theory (also on the arXiv)) and the lecture notes by Ieke Moerdijk "simplicial methods for operads and algebraic geometry" which resulted from an advanced course given in Barcelona 3 years ago. No prior knowledge of operads will be assumed nor any knowledge of homotopy theory that is more advanced then what is required for the definition of the fundamental group. The basics of the language of presheaf categories will be recalled quickly and used freely.
Comparison of spectral and wavelet estimators of transfer function for linear systems
12:10 Mon 18 Jun, 2012 :: B.21 Ingkarni Wardli :: Mr Mohd Aftar Abu Bakar :: University of Adelaide

We compare spectral and wavelet estimators of the response amplitude operator (RAO) of a linear system, with various input signals and added noise scenarios. The comparison is based on a model of a heaving buoy wave energy device (HBWED), which oscillates vertically as a single mode of vibration linear system. HBWEDs and other single degree of freedom wave energy devices such as the oscillating wave surge convertors (OWSC) are currently deployed in the ocean, making single degree of freedom wave energy devices important systems to both model and analyse in some detail. However, the results of the comparison relate to any linear system. It was found that the wavelet estimator of the RAO offers no advantage over the spectral estimators if both input and response time series data are noise free and long time series are available. If there is noise on only the response time series, only the wavelet estimator or the spectral estimator that uses the cross-spectrum of the input and response signals in the numerator should be used. For the case of noise on only the input time series, only the spectral estimator that uses the cross-spectrum in the denominator gives a sensible estimate of the RAO. If both the input and response signals are corrupted with noise, a modification to both the input and response spectrum estimates can provide a good estimator of the RAO. However, a combination of wavelet and spectral methods is introduced as an alternative RAO estimator. The conclusions apply for autoregressive emulators of sea surface elevation, impulse, and pseudorandom binary sequences (PRBS) inputs. However, a wavelet estimator is needed in the special case of a chirp input where the signal has a continuously varying frequency.
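The spectral estimator that puts the cross-spectrum in the numerator can be sketched on a known one-pole system (a minimal stand-in for the heaving-buoy model, with assumed coefficients): with noise on the response only, S_xy/S_xx remains a sensible estimate of the transfer function because the noise averages out of the cross-spectrum:

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(0)

# Known one-pole linear system y[k] = a*y[k-1] + x[k] with transfer function
# H(f) = 1/(1 - a*exp(-2j*pi*f)).
a, N = 0.8, 2 ** 14
x = rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a * y[k - 1] + x[k]
y_noisy = y + 0.1 * rng.standard_normal(N)   # noise on the response only

f, Sxy = csd(x, y_noisy, nperseg=1024)       # cross-spectrum in the numerator
_, Sxx = welch(x, nperseg=1024)
H_est = Sxy / Sxx
H_true = 1.0 / (1.0 - a * np.exp(-2j * np.pi * f))
rel_err = np.abs(H_est - H_true) / np.abs(H_true)
```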
K-theory and unbounded Fredholm operators
13:10 Mon 9 Jul, 2012 :: Ingkarni Wardli B19 :: Dr Jerry Kaminker :: University of California, Davis

There are several ways of viewing elements of K^1(X). One of these is via families of unbounded self-adjoint Fredholm operators on X. Each operator will have discrete spectrum, with infinitely many positive and negative eigenvalues of finite multiplicity. One can associate to such a family a geometric object, its graph, and the Chern character and other invariants of the family can be studied from this perspective. By restricting the dimension of the eigenspaces one may sometimes use algebraic topology to completely determine the family up to equivalence. This talk will describe the general framework and some applications to families on low-dimensional manifolds where the methods work well. Various notions related to spectral flow, the index gerbe and Berry phase play roles which will be discussed. This is joint work with Ron Douglas.
Inquiry-based learning: yesterday and today
15:30 Mon 9 Jul, 2012 :: Ingkarni Wardli B19 :: Prof Ron Douglas :: Texas A&M University

The speaker will report on a project to develop and promote approaches to mathematics instruction closely related to the Moore method -- methods which are called inquiry-based learning -- as well as on his personal experience of the Moore method. For background, see the speaker's article in the May 2012 issue of the Notices of the American Mathematical Society. To download the article, click on "Media" above.
2012 AMSI-SSAI Lecture: Approximate Bayesian computation (ABC): advances and limitations
11:00 Fri 13 Jul, 2012 :: Engineering South S112 :: Prof Christian Robert :: Universite Paris-Dauphine

The lack of closed-form likelihoods has been the bane of Bayesian computation for many years and, prior to the introduction of MCMC methods, a strong impediment to the propagation of the Bayesian paradigm. We are now facing models where an MCMC completion of the model towards closed-form likelihoods seems unachievable and where a further degree of approximation appears unavoidable. In this talk, I will present the motivation for approximate Bayesian computation (ABC) methods, the consistency results already available, the various Monte Carlo implementations found in the current literature, as well as the inferential, rather than computational, challenges set by these methods. A recent advance based on empirical likelihood will also be discussed.
Knot Theory
12:10 Mon 10 Sep, 2012 :: B.21 Ingkarni Wardli :: Mr Konrad Pilch :: University of Adelaide

The ancient Chinese used it, the Celts had this skill in spades, it was a big skill of seafarers and pirates, and even now we need it if only to be able to wear shoes! This talk will be about Knot Theory. Knot theory has a colourful and interesting past and I will touch on the why, the what and the when of knots in mathematics. I shall also discuss the major problems concerning knots, including the different methods of classification of knots, the unresolved questions about knots, and why they have even been studied. It will be a thorough immersion that will leave you knotted!
Krylov Subspace Methods or: How I Learned to Stop Worrying and Love GMRes
12:10 Mon 17 Sep, 2012 :: B.21 Ingkarni Wardli :: Mr David Wilke :: University of Adelaide

Many problems within applied mathematics require the solution of a linear system of equations. For instance, models of arterial umbilical blood flow are obtained through a finite element approximation, resulting in a linear, n x n system. For small systems the solution is (almost) trivial, but what happens when n is large? Say, n ~ 10^6? In this case matrix inversion is expensive (read: completely impractical) and we seek approximate solutions in a reasonable time. In this talk I will discuss the basic theory underlying Krylov subspace methods; a class of non-stationary iterative methods which are currently the methods-of-choice for large, sparse, linear systems. In particular I will focus on the method of Generalised Minimum RESiduals (GMRes), which is one of the most popular for nonsymmetric systems. It is hoped that through this presentation I will convince you that a) solving linear systems is not necessarily trivial, and that b) my lack of any tangible results is not (entirely) a result of my own incompetence.
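The core of GMRes fits in a short sketch: build a Krylov basis with Arnoldi iteration, then solve a small least-squares problem. This is a minimal, unrestarted illustration of the idea (not the talk's code; production solvers use restarted, preconditioned variants such as SciPy's `gmres`):

```python
import numpy as np

def gmres(A, b, m):
    """Full GMRES: minimise ||b - A x|| over the Krylov subspace
    K_m = span{b, Ab, ..., A^(m-1) b}, built by Arnoldi iteration."""
    n = len(b)
    Q = np.zeros((n, m + 1))          # orthonormal Krylov basis
    H = np.zeros((m + 1, m))          # upper Hessenberg matrix
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):        # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v = v - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] > 1e-14:       # no breakdown
            Q[:, j + 1] = v / H[j + 1, j]
    # Small least-squares problem: min_y ||beta*e1 - H y||, then x = Q_m y.
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return Q[:, :m] @ y

rng = np.random.default_rng(0)
n = 50
A = 4 * np.eye(n) + rng.normal(scale=0.1, size=(n, n))  # nonsymmetric
b = rng.normal(size=n)
x = gmres(A, b, 30)
print(np.linalg.norm(b - A @ x))   # residual is tiny well before m = n
```

Because the eigenvalues here are clustered away from zero, the residual drops fast; for harder spectra one restarts and preconditions.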
Complex analysis in low Reynolds number hydrodynamics
15:10 Fri 12 Oct, 2012 :: B.20 Ingkarni Wardli :: Prof Darren Crowdy :: Imperial College London

It is a well-known fact that the methods of complex analysis provide great advantage in studying physical problems involving a harmonic field satisfying Laplace's equation. One example is in ideal fluid mechanics (infinite Reynolds number) where the absence of viscosity, and the assumption of zero vorticity, mean that it is possible to introduce a so-called complex potential -- an analytic function from which all physical quantities of interest can be inferred. In the opposite limit of zero Reynolds number, where flows are slow and viscous and the governing fields are not harmonic, it is much less common to employ the methods of complex analysis even though they continue to be relevant in certain circumstances. This talk will give an overview of a variety of problems involving slow viscous Stokes flows where complex analysis can be usefully employed to gain theoretical insights. A number of example problems will be considered including the locomotion of low-Reynolds-number micro-organisms and micro-robots, the friction properties of superhydrophobic surfaces in microfluidics and problems of viscous sintering and the manufacture of microstructured optical fibres (MOFs).
Twisted analytic torsion and adiabatic limits
13:10 Wed 5 Dec, 2012 :: Ingkarni Wardli B17 :: Mr Ryan Mickler :: University of Adelaide

We review Mathai-Wu's recent extension of Ray-Singer analytic torsion to supercomplexes. We explore some new results relating these two torsions, and how we can apply the adiabatic spectral sequence due to Forman and Farber's analytic deformation theory to compute some spectral invariants of the complexes involved, answering some questions that were posed in Mathai-Wu's paper.
Twistor theory and the harmonic hull
15:10 Fri 8 Mar, 2013 :: B.18 Ingkarni Wardli :: Prof Michael Eastwood :: Australian National University

Harmonic functions are real-analytic and so automatically extend as functions of complex variables. But how far do they extend? This question may be answered by twistor theory, the Penrose transform, and associated conformal geometry. Nothing will be supposed about such matters: I shall base the constructions on an elementary yet mysterious formula of Bateman from 1904. This is joint work with Feng Xu.
How fast? Bounding the mixing time of combinatorial Markov chains
15:10 Fri 22 Mar, 2013 :: B.18 Ingkarni Wardli :: Dr Catherine Greenhill :: University of New South Wales

A Markov chain is a stochastic process which is "memoryless", in that the next state of the chain depends only on the current state, and not on how it got there. It is a classical result that an ergodic Markov chain has a unique stationary distribution. However, classical theory does not provide any information on the rate of convergence to stationarity. Around 30 years ago, the mixing time of a Markov chain was introduced to measure the number of steps required before the distribution of the chain is within some small distance of the stationary distribution. One reason why this is important is that researchers in areas such as physics and biology use Markov chains to sample from large sets of interest. Rigorous bounds on the mixing time of their chain allows these researchers to have confidence in their results. Bounding the mixing time of combinatorial Markov chains can be a challenge, and there are only a few approaches available. I will discuss the main methods and give examples for each (with pretty pictures).
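The quantities in the abstract can be computed directly for a tiny chain. This example (my own illustration) takes a lazy random walk on a cycle, checks the uniform stationary distribution, and finds the first step at which the total variation distance from every starting state drops below 1/4, a standard definition of the mixing time:

```python
import numpy as np

# Lazy random walk on a 6-cycle; the self-loop guarantees aperiodicity,
# so the chain is ergodic with a unique stationary distribution.
n = 6
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25

pi = np.full(n, 1 / n)         # uniform stationary distribution by symmetry

def tv(p, q):
    """Total variation distance between two distributions."""
    return 0.5 * np.abs(p - q).sum()

# Mixing time: first t with worst-case TV distance to pi below eps = 1/4.
eps, t, Pt = 0.25, 0, np.eye(n)
while max(tv(Pt[i], pi) for i in range(n)) > eps:
    Pt = Pt @ P
    t += 1
print(t)   # number of steps needed to mix
```

For combinatorially large chains this brute-force matrix powering is hopeless, which is precisely why the rigorous bounding techniques in the talk matter.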
What in the world is a chebfun?
12:10 Mon 15 Apr, 2013 :: B.19 Ingkarni Wardli :: Hayden Tronnolone :: University of Adelaide

Good question. Many functions encountered in practice can be well-approximated by a linear combination of Chebyshev polynomials, which then allows the use of some powerful numerical techniques. I will give a very brief overview of the theory behind some of these methods, demonstrate how they may be implemented using the MATLAB package known as Chebfun, and answer the question posed in the title along the way. No knowledge of approximation theory or MATLAB is required; however, you will need to accept the transliteration "Chebyshev".
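The core idea behind a chebfun, interpolating a smooth function at Chebyshev points and then working with the resulting polynomial, can be tried directly with NumPy's Chebyshev module (a rough stand-in for Chebfun, not part of the talk):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Represent f by the degree-30 polynomial interpolating it at
# Chebyshev points on [-1, 1]; for smooth f the Chebyshev
# coefficients decay geometrically, so the fit is near-exact.
f = lambda x: np.exp(x) * np.sin(5 * x)
deg = 30
coeffs = C.chebinterpolate(f, deg)   # interpolation at Chebyshev points
p = C.Chebyshev(coeffs)              # polynomial object on [-1, 1]

xx = np.linspace(-1, 1, 1001)
err = np.max(np.abs(f(xx) - p(xx)))
print(err)   # close to machine precision
```

Once `p` is in hand, rootfinding, integration and differentiation all reduce to cheap polynomial operations, which is what makes the chebfun approach so powerful.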
Models of cell-extracellular matrix interactions in tissue engineering
15:10 Fri 3 May, 2013 :: B.18 Ingkarni Wardli :: Dr Ed Green :: University of Adelaide

Tissue engineers hope in future to be able to grow functional tissues in vitro to replace those that are damaged by injury, disease, or simple wear and tear. They use cell culture methods, such as seeding cells within collagen gels, that are designed to mimic the cells' environment in vivo. Amongst other factors, it is clear that mechanical interactions between cells and the extracellular matrix (ECM) in which they reside play an important role in tissue development. However, the mechanics of the ECM is complex, and at present, its role is only partly understood. In this talk, I will present mathematical models of some simple cell-ECM interaction problems, and show how they can be used to gain more insight into the processes that regulate tissue development.
Markov decision processes and interval Markov chains: what is the connection?
12:10 Mon 3 Jun, 2013 :: B.19 Ingkarni Wardli :: Mingmei Teo :: University of Adelaide

Markov decision processes are a way to model processes which involve some sort of decision making and interval Markov chains are a way to incorporate uncertainty in the transition probability matrix. How are these two concepts related? In this talk, I will give an overview of these concepts and discuss how they relate to each other.
K-homology and the quantization commutes with reduction problem
12:10 Fri 5 Jul, 2013 :: 7.15 Ingkarni Wardli :: Prof Nigel Higson :: Pennsylvania State University

The quantization commutes with reduction problem for Hamiltonian actions of compact Lie groups was solved by Meinrenken in the mid-1990s using geometric techniques, and solved again shortly afterwards by Tian and Zhang using analytic methods. In this talk I shall outline some of the close links that exist between the problem, the two solutions, and the geometric and analytic versions of K-homology theory that are studied in noncommutative geometry. I shall try to make the case for K-homology as a useful conceptual framework for the solutions and (at least some of) their various generalizations.
The Hamiltonian Cycle Problem and Markov Decision Processes
15:10 Fri 2 Aug, 2013 :: B.18 Ingkarni Wardli :: Prof Jerzy Filar :: Flinders University

We consider the famous Hamiltonian cycle problem (HCP) embedded in a Markov decision process (MDP). More specifically, we consider a moving object on a graph G where, at each vertex, a controller may select an arc emanating from that vertex according to a probabilistic decision rule. A stationary policy is simply a control where these decision rules are time invariant. Such a policy induces a Markov chain on the vertices of the graph. Therefore, HCP is equivalent to a search for a stationary policy that induces a 0-1 probability transition matrix whose non-zero entries trace out a Hamiltonian cycle in the graph. A consequence of this embedding is that we may consider the problem over a number of alternative convex (rather than discrete) domains. These include: (a) the space of stationary policies, (b) the more restricted but, very natural, space of doubly stochastic matrices induced by the graph, and (c) the associated spaces of so-called "occupational measures". This approach has led to both theoretical and algorithmic advances on the underlying HCP. In this presentation, we outline a selection of results generated by this line of research.
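The embedding can be made concrete on a toy graph (my own example, not from the talk): a deterministic stationary policy induces a 0-1 transition matrix, and the policy solves the HCP precisely when the non-zero entries of that matrix trace a single cycle through every vertex:

```python
import numpy as np

# A 5-vertex directed graph with Hamiltonian cycle 0-1-2-3-4-0,
# plus extra arcs the controller could choose instead.
arcs = {0: [1, 2], 1: [2, 3], 2: [3, 0], 3: [4, 1], 4: [0, 2]}

# A deterministic stationary policy: at each vertex pick one outgoing arc.
policy = {0: 1, 1: 2, 2: 3, 3: 4, 4: 0}    # traces the Hamiltonian cycle

n = 5
P = np.zeros((n, n))
for v, w in policy.items():
    assert w in arcs[v]          # the policy must respect the graph
    P[v, w] = 1.0

# The induced chain has a 0-1 doubly stochastic transition matrix ...
assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)

# ... whose non-zero entries trace one cycle through all the vertices.
v, seen = 0, []
while v not in seen:
    seen.append(v)
    v = int(np.argmax(P[v]))
print(len(seen) == n and v == 0)   # True: the policy is a Hamiltonian cycle
```

The convex relaxations in the talk replace this discrete search over 0-1 policies with optimisation over the doubly stochastic matrices that contain them.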
What is Tight Clustering?
12:10 Mon 12 Aug, 2013 :: B.19 Ingkarni Wardli :: Chris Davies :: University of Adelaide

Most clustering methods partition the observations in such a way that those in the same cluster are more similar to each other than they are to observations in different clusters. However, in some situations you might not want to assign all observations into clusters. That is, you might prefer to consider some subjects to have characteristics so dissimilar from others that they are not assigned to any cluster. In this seminar I will describe an algorithm that can be used to assign some observations into tight and stable clusters, while leaving some observations unassigned.
K-theory and solid state physics
12:10 Fri 13 Sep, 2013 :: Ingkarni Wardli B19 :: Dr Keith Hannabuss :: Balliol College, Oxford

More than 50 years ago Dyson showed that there is a nine-fold classification of random matrix models, the classes of which are each associated with Riemannian symmetric spaces. More recently it was realised that a related argument enables one to classify the insulating properties of fermionic systems (with the addition of an extra class to give 10 in all), and can be described using K-theory. In this talk I shall give a survey of the ideas, and a brief outline of work with Guo Chuan Thiang.
Classification Using Censored Functional Data
15:10 Fri 18 Oct, 2013 :: B.18 Ingkarni Wardli :: A/Prof Aurore Delaigle :: University of Melbourne

We consider classification of functional data. This problem has received a lot of attention in the literature in the case where the curves are all observed on the same interval. A difficulty in applications is that the functional curves can be supported on quite different intervals, in which case standard methods of analysis cannot be used. We are interested in constructing classifiers for curves of this type. More precisely, we consider classification of functions supported on a compact interval, in cases where the training sample consists of functions observed on other intervals, which may differ among the training curves. We propose several methods, depending on whether or not the observable intervals overlap by a significant amount. In the case where these intervals differ a lot, our procedure involves extending the curves outside the interval where they were observed. We suggest a new nonparametric approach for doing this. We also introduce flexible ways of combining potential differences in shapes of the curves from different populations, and potential differences between the endpoints of the intervals where the curves from each population are observed.
Developing Multiscale Methodologies for Computational Fluid Mechanics
12:10 Mon 11 Nov, 2013 :: B.19 Ingkarni Wardli :: Hammad Alotaibi :: University of Adelaide

The development of multiscale methods has recently become one of the most fertile research areas in mathematics, physics, engineering and computer science. The need for multiscale modelling usually comes from the fact that the available macroscale models are not accurate enough, and the microscale models are not efficient enough. By combining both viewpoints, one hopes to arrive at a reasonable compromise between accuracy and efficiency. In this seminar I will give an overview of the recent efforts on developing multiscale methods, such as the patch dynamics scheme, which is used to address an important class of time-dependent multiscale problems.
Holomorphic null curves and the conformal Calabi-Yau problem
12:10 Tue 28 Jan, 2014 :: Ingkarni Wardli B20 :: Prof Franc Forstneric :: University of Ljubljana

I shall describe how methods of complex analysis can be used to give new results on the conformal Calabi-Yau problem concerning the existence of bounded metrically complete minimal surfaces in real Euclidean 3-space R^3. We shall see in particular that every bordered Riemann surface admits a proper complete holomorphic immersion into the ball of C^2, and a proper complete embedding as a holomorphic null curve into the ball of C^3. Since the real and the imaginary parts of a holomorphic null curve in C^3 are conformally immersed minimal surfaces in R^3, we obtain a bounded complete conformal minimal immersion of any bordered Riemann surface into R^3. The main advantage of our methods, when compared to the existing ones in the literature, is that we do not need to change the conformal type of the Riemann surface. (Joint work with A. Alarcon, University of Granada.)
15:10 Fri 11 Apr, 2014 :: 5.58 Ingkarni Wardli :: Associate Professor John Middleton :: SARDI Aquatic Sciences and University of Adelaide

Aquaculture farming involves daily feeding of finfish and a subsequent excretion of nutrients into Spencer Gulf. Typically, finfish farming is done in six or so 50m diameter cages and over 600m x 600m lease sites. To help regulate the industry, it is desired that the finfish feed rates and the associated nutrient flux into the ocean are determined such that the maximum nutrient concentration c does not exceed a prescribed value (say cP) for ecosystem health. The prescribed value cP is determined by guidelines from the E.P.A. The concept is known as carrying capacity since limiting the feed rates limits the biomass of the farmed finfish. Here, we model the concentrations that arise from a constant input flux (F) of nutrients in a source region (the cage or lease) using the (depth-averaged) two-dimensional advection-diffusion equation for constant and sinusoidal (tides) currents. Application of the divergence theorem to this equation results in a new scale estimate of the maximum flux F (and thus feed rate) that is given by F = cP/T* (1) where cP is the maximum allowed concentration and T* is a new time scale of “flushing” that involves both advection and diffusion. The scale estimate (1) is then shown to compare favourably with mathematically exact solutions of the advection-diffusion equation that are obtained using Green’s functions and Fourier transforms. The maximum nutrient flux and associated feed rates are then estimated everywhere in Spencer Gulf through the development and validation of a hydrodynamic model. The model provides seasonal averages of the mean currents U and horizontal diffusivities KS that are needed to estimate T*. The diffusivities are estimated from a shear dispersal model of the tides which are very large in the gulf. The estimates have been provided to PIRSA Fisheries and Aquaculture to assist in the sustainable expansion of finfish aquaculture.
Bayesian Indirect Inference
12:10 Mon 14 Apr, 2014 :: B.19 Ingkarni Wardli :: Brock Hermans :: University of Adelaide

Bayesian likelihood-free methods grew out of the resurgence of Bayesian statistics brought about by computer sampling techniques. Since the resurgence, attention has focused on so-called 'summary statistics', that is, ways of summarising data that allow for accurate inference to be performed. However, it is not uncommon to find data sets in which the summary statistic approach is not sufficient. In this talk, I will be summarising some of the likelihood-free methods most commonly used (don't worry if you've never seen any Bayesian analysis before), as well as looking at Bayesian Indirect Likelihood, a new way of implementing Bayesian analysis which combines new inference methods with some of the older computational algorithms.
Outlier removal using the Bayesian information criterion for group-based trajectory modelling
12:10 Mon 28 Apr, 2014 :: B.19 Ingkarni Wardli :: Chris Davies :: University of Adelaide

Attributes measured longitudinally can be used to define discrete paths of measurements, or trajectories, for each individual in a given population. Group-based trajectory modelling methods can be used to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Existing methods generally allocate every individual trajectory into one of the estimated groups. However this does not allow for the possibility that some individuals may be following trajectories so different from the rest of the population that they should not be included in a group-based trajectory model. This results in these outlying trajectories being treated as though they belong to one of the groups, distorting the estimated trajectory groups and any subsequent analyses that use them. We have developed an algorithm for removing outlying trajectories based on the maximum change in Bayesian information criterion (BIC) due to removing a single trajectory. As well as deciding which trajectory to remove, the number of groups in the model can also change. The decision to remove an outlying trajectory is made by comparing the log-likelihood contributions of the observations to those of simulated samples from the estimated group-based trajectory model. In this talk the algorithm will be detailed and an application of its use will be demonstrated.
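The model-comparison step in the algorithm rests on the Bayesian information criterion, BIC = k ln n - 2 ln L. As a toy stand-in for the trajectory models in the talk (my own illustration, not the authors' algorithm), here is the comparison for a one-group versus a two-group Gaussian fit of clearly bimodal data:

```python
import numpy as np

# BIC = k * ln(n) - 2 * ln(L): more parameters are only worth it
# if the log-likelihood improves enough to pay the penalty.
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(0, 1, 100), rng.normal(6, 1, 100)])
n = len(data)

def gauss_loglik(x, mu, sd):
    return np.sum(-0.5 * np.log(2 * np.pi * sd**2) - (x - mu)**2 / (2 * sd**2))

# One group: a single normal fit (k = 2 parameters).
ll1 = gauss_loglik(data, data.mean(), data.std())
bic1 = 2 * np.log(n) - 2 * ll1

# Two groups: split at the obvious gap and fit each half, with equal
# mixing weights (k = 5: two means, two sds, one weight).
lo, hi = data[data < 3], data[data >= 3]
ll2 = (gauss_loglik(lo, lo.mean(), lo.std()) + np.log(0.5) * len(lo)
       + gauss_loglik(hi, hi.mean(), hi.std()) + np.log(0.5) * len(hi))
bic2 = 5 * np.log(n) - 2 * ll2
print(bic2 < bic1)   # True: two groups fit this bimodal data far better
```

The outlier-removal algorithm applies the same criterion, but to the change in BIC caused by deleting a single trajectory.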
Network-based approaches to classification and biomarker identification in metastatic melanoma
15:10 Fri 2 May, 2014 :: B.21 Ingkarni Wardli :: Associate Professor Jean Yee Hwa Yang :: The University of Sydney

Finding prognostic markers has been a central question in much of current research in medicine and biology. In the last decade, approaches to prognostic prediction within a genomics setting are primarily based on changes in individual genes / protein. Very recently, however, network based approaches to prognostic prediction have begun to emerge which utilize interaction information between genes. This is based on the belief that large-scale molecular interaction networks are dynamic in nature and changes in these networks, rather than changes in individual genes/proteins, are often drivers of complex diseases such as cancer. In this talk, I use data from stage III melanoma patients provided by Prof. Mann from Melanoma Institute of Australia to discuss how network information can be utilized in the analysis of gene expression data to aid in biological interpretation. Here, we explore a number of novel and previously published network-based prediction methods, which we will then compare to the common single-gene and gene-set methods with the aim of identifying more biologically interpretable biomarkers in the form of networks.
Computing with groups
15:10 Fri 30 May, 2014 :: B.21 Ingkarni Wardli :: Dr Heiko Dietrich :: Monash University

Groups are algebraic structures which show up in many branches of mathematics and other areas of science; Computational Group Theory is on the cutting edge of pure research in group theory and its interplay with computational methods. In this talk, we consider a practical aspect of Computational Group Theory: how to represent a group in a computer, and how to work with such a description efficiently. We will first recall some well-established methods for permutation group; we will then discuss some recent progress for matrix groups.
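A naive version of the first idea, representing a permutation group by its elements and closing a set of generators under composition, fits in a few lines (my own sketch; real systems such as GAP use far cleverer stabiliser-chain methods precisely because this enumeration explodes for large groups):

```python
# Permutations stored as tuples: p maps i to p[i].
def compose(p, q):
    """Composition (p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def generate(gens):
    """Naive closure: enumerate the group generated by gens."""
    identity = tuple(range(len(gens[0])))
    group, frontier = {identity}, [identity]
    while frontier:
        g = frontier.pop()
        for s in gens:
            h = compose(s, g)
            if h not in group:
                group.add(h)
                frontier.append(h)
    return group

# The symmetric group S_3 from a transposition and a 3-cycle.
swap, cycle = (1, 0, 2), (1, 2, 0)
G = generate([swap, cycle])
print(len(G))   # 6
```

The gap between this brute-force enumeration and what is feasible for matrix groups of large order is exactly where the recent progress mentioned in the abstract lives.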
All's Fair in Love and Statistics
12:35 Mon 28 Jul, 2014 :: B.19 Ingkarni Wardli :: Annie Conway :: University of Adelaide

Earlier this year, an article was published about a "math genius" who found true love after scraping and analysing data from a dating site. In this talk I will be investigating the actual mathematics that he used, in particular methods for clustering categorical data, and whether or not the approach was successful.
Frequentist vs. Bayesian.
12:10 Mon 18 Aug, 2014 :: B.19 Ingkarni Wardli :: David Price :: University of Adelaide

Abstract: There are two frameworks in which we can do statistical analyses. Choosing one framework over the other can be* as controversial as choosing between team Jacob and... that other guy. In this talk, I aim to give a very very simple explanation of the main difference between frequentist and Bayesian methods. I'll probably flip a coin and show you a video too. * to people who really care.
Ideal membership on singular varieties by means of residue currents
12:10 Fri 29 Aug, 2014 :: Ingkarni Wardli B20 :: Richard Larkang :: University of Adelaide

On a complex manifold X, one can consider the following ideal membership problem: Does a holomorphic function on X belong to a given ideal of holomorphic functions on X? Residue currents give a way of expressing analytically this essentially algebraic problem. I will discuss some basic cases of this, why such an analytic description might be useful, and finish by discussing a generalization of this to singular varieties.
Testing Statistical Association between Genetic Pathways and Disease Susceptibility
12:10 Mon 1 Sep, 2014 :: B.19 Ingkarni Wardli :: Andy Pfieffer :: University of Adelaide

A major research area is the identification of genetic pathways associated with various diseases. However, a detailed comparison of methods that have been designed to ascertain the association between pathways and diseases has not been performed. I will give the necessary biological background behind Genome-Wide Association Studies (GWAS), and explain the shortfalls in traditional GWAS methodologies. I will then explore various methods that use information about genetic pathways in GWAS, and explain the challenges in comparing these methods.
Inferring absolute population and recruitment of southern rock lobster using only catch and effort data
12:35 Mon 22 Sep, 2014 :: B.19 Ingkarni Wardli :: John Feenstra :: University of Adelaide

Abundance estimates from a data-limited version of catch survey analysis are compared to those from a novel one-parameter deterministic method. Bias of both methods is explored using simulation testing based on a more complex data-rich stock assessment population dynamics fishery operating model, exploring the impact of both varying levels of observation error in data as well as model process error. Recruitment was consistently better estimated than legal-size population, with the latter most sensitive to increasing observation errors. A hybrid of the data-limited methods is proposed as the most robust approach. A more statistically conventional errors-in-variables approach may also be touched upon if time permits.
Spectral asymptotics on random Sierpinski gaskets
12:10 Fri 26 Sep, 2014 :: Ingkarni Wardli B20 :: Uta Freiberg :: Universitaet Stuttgart

Self-similar fractals are often used in modeling porous media. Hence, defining a Laplacian and a Brownian motion on such sets describes transport through such materials. However, the assumption of strict self similarity could be too restricting. So, we present several models of random fractals which could be used instead. After recalling the classical approaches of random homogeneous and recursive random fractals, we show how to interpolate between these two model classes with the help of so called V-variable fractals. This concept (developed by Barnsley, Hutchinson & Stenflo) allows the definition of new families of random fractals, whereby the parameter V describes the degree of `variability' of the realizations. We discuss how the degree of variability influences the geometric, analytic and stochastic properties of these sets. - These results have been obtained with Ben Hambly (University of Oxford) and John Hutchinson (ANU Canberra).
A Hybrid Markov Model for Disease Dynamics
12:35 Mon 29 Sep, 2014 :: B.19 Ingkarni Wardli :: Nicolas Rebuli :: University of Adelaide

Modelling the spread of infectious diseases is fundamental to protecting ourselves from potentially devastating epidemics. Among other factors, two key indicators for the severity of an epidemic are the size of the epidemic and the time until the last infectious individual is removed. To estimate the distribution of the size and duration of an epidemic (within a realistic population) an epidemiologist will typically use Monte Carlo simulations of an appropriate Markov process. However, the number of states in the simplest Markov epidemic model, the SIR model, is quadratic in the population size and so Monte Carlo simulations are computationally expensive. In this talk I will discuss two methods for approximating the SIR Markov process and I will demonstrate the approximation error by comparing probability distributions and estimates of the distributions of the final size and duration of an SIR epidemic.
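A minimal Monte Carlo version of the SIR jump chain looks like this (my own sketch, not the talk's approximation methods; it simulates only the embedded event sequence, which suffices for the final size but not the duration):

```python
import random

# Stochastic SIR Markov chain: the two event types are infection,
# at rate beta*S*I/N, and removal, at rate gamma*I. For the final
# size we only need which event happens next, not the waiting times.
def sir_final_size(N, I0, beta, gamma, rng):
    S, I = N - I0, I0
    while I > 0:
        inf_rate = beta * S * I / N
        rem_rate = gamma * I
        if rng.random() < inf_rate / (inf_rate + rem_rate):
            S, I = S - 1, I + 1       # infection event
        else:
            I -= 1                    # removal event
    return N - S                      # total number ever infected

rng = random.Random(1)
sizes = [sir_final_size(100, 1, 2.0, 1.0, rng) for _ in range(2000)]
print(sum(sizes) / len(sizes))        # Monte Carlo mean final size
```

With these parameters the final-size distribution is bimodal (minor outbreaks that die out quickly versus major epidemics), which is why distributional approximations are of more interest than the mean alone.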
Topology, geometry, and moduli spaces
12:10 Fri 10 Oct, 2014 :: Ingkarni Wardli B20 :: Nick Buchdahl :: University of Adelaide

In recent years, moduli spaces of one kind or another have been shown to be of great utility, this quite apart from their inherent interest. Many of their applications involve their topology, but as we all know, understanding of topological structures is often facilitated through the use of geometric methods, and some of these moduli spaces carry geometric structures that are of considerable interest in their own right. In this talk, I will describe some of the background and the ideas in this general context, focusing on questions that I have been considering lately together with my colleague Georg Schumacher from Marburg in Germany, who was visiting us recently.
Optimally Chosen Quadratic Forms for Partitioning Multivariate Data
13:10 Tue 14 Oct, 2014 :: Ingkarni Wardli 715 Conference Room :: Assoc. Prof. Inge Koch :: School of Mathematical Sciences

Quadratic forms are commonly used in linear algebra. For d-dimensional vectors they have a matrix representation, Q(x) = x'Ax, for some symmetric matrix A. In statistics quadratic forms are defined for d-dimensional random vectors, and one of the best-known quadratic forms is the Mahalanobis distance of two random vectors. In this talk we want to partition a quadratic form Q(X) = X'MX, where X is a random vector, and M a symmetric matrix, that is, we want to find a d-dimensional random vector W such that Q(X) = W'W. This problem has many solutions. We are interested in a solution or partition W of X such that pairs of corresponding variables (X_j, W_j) are highly correlated and such that W is simpler than the given X. We will consider some natural candidates for W which turn out to be suboptimal in the sense of the above constraints, and we will then exhibit the optimal solution. Solutions of this type are useful in the well-known T-square statistic. We will see in examples what these solutions look like.
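One natural candidate partition (which, per the abstract, turns out to be suboptimal under the correlation constraints, but illustrates the setup) uses the symmetric square root of M. A quick numerical check (my own illustration) that Q(X) = W'W:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# A symmetric positive definite M (e.g. an inverse covariance matrix).
B = rng.normal(size=(d, d))
M = B @ B.T

# W = M^{1/2} X, with the symmetric square root taken from the
# eigendecomposition M = V diag(lam) V'.
lam, V = np.linalg.eigh(M)
M_half = V @ np.diag(np.sqrt(lam)) @ V.T

X = rng.normal(size=d)                # a realisation of the random vector
W = M_half @ X

# Then W'W = X' M^{1/2} M^{1/2} X = X' M X = Q(X).
print(np.allclose(W @ W, X @ M @ X))  # True
```

Any orthogonal rotation of this W gives another valid partition, which is why there is room to optimise the pairwise correlations between X_j and W_j.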
The Serre-Grothendieck theorem by geometric means
12:10 Fri 24 Oct, 2014 :: Ingkarni Wardli B20 :: David Roberts :: University of Adelaide

The Serre-Grothendieck theorem implies that every torsion integral 3rd cohomology class on a finite CW-complex is the invariant of some projective bundle. It was originally proved in a letter by Serre; the proof used homotopical methods, most notably a Postnikov decomposition of a certain classifying space with divisible homotopy groups. In this talk I will outline, using work of the algebraic geometer Offer Gabber, a proof for compact smooth manifolds using geometric means and a little K-theory.
Happiness and social information flow: Computational social science through data.
15:10 Fri 7 Nov, 2014 :: EM G06 (Engineering & Maths Bldg) :: Dr Lewis Mitchell :: University of Adelaide

The recent explosion in big data coming from online social networks has led to an increasing interest in bringing quantitative methods to bear on questions in social science. A recent high-profile example is the study of emotional contagion, which has led to significant challenges and controversy. This talk will focus on two issues related to emotional contagion, namely remote-sensing of population-level wellbeing and the problem of information flow across a social network. We discuss some of the challenges in working with massive online data sets, and present a simple tool for measuring large-scale happiness from such data. By combining over 10 million geolocated messages collected from Twitter with traditional census data we uncover geographies of happiness at the scale of states and cities, and discuss how these patterns may be related to traditional wellbeing measures and public health outcomes. Using tools from information theory we also study information flow between individuals and how this may relate to the concept of predictability for human behaviour.
Modelling segregation distortion in multi-parent crosses
15:00 Mon 17 Nov, 2014 :: 5.57 Ingkarni Wardli :: Rohan Shah (joint work with B. Emma Huang and Colin R. Cavanagh) :: The University of Queensland

Construction of high-density genetic maps has been made feasible by low-cost high-throughput genotyping technology; however, the process is still complicated by biological, statistical and computational issues. A major challenge is the presence of segregation distortion, which can be caused by selection, difference in fitness, or suppression of recombination due to introgressed segments from other species. Alien introgressions are common in major crop species, where they have often been used to introduce beneficial genes from wild relatives. Segregation distortion causes problems at many stages of the map construction process, including assignment to linkage groups and estimation of recombination fractions. This can result in incorrect ordering and estimation of map distances. While discarding markers will improve the resulting map, it may result in the loss of genomic regions under selection or containing beneficial genes (in the case of introgression). To correct for segregation distortion we model it explicitly in the estimation of recombination fractions. Previously proposed methods introduce additional parameters to model the distortion, with a corresponding increase in computing requirements. This poses difficulties for large, densely genotyped experimental populations. We propose a method imposing minimal additional computational burden which is suitable for high-density map construction in large multi-parent crosses. We demonstrate its use modelling the known Sr36 introgression in wheat for an eight-parent complex cross.
On the analyticity of CR-diffeomorphisms
12:10 Fri 13 Mar, 2015 :: Engineering North N132 :: Ilya Kossivskiy :: University of Vienna

One of the fundamental objects in several complex variables is CR-mappings. CR-mappings naturally occur in complex analysis as boundary values of mappings between domains, and as restrictions of holomorphic mappings onto real submanifolds. It was already observed by Cartan that smooth CR-diffeomorphisms between CR-submanifolds in C^N tend to be very regular, i.e., they are restrictions of holomorphic maps. However, in general smooth CR-mappings form a more restrictive class of mappings. Thus, since the inception of CR-geometry, the following general question has been of fundamental importance for the field: Are CR-equivalent real-analytic CR-structures also equivalent holomorphically? In joint work with Lamel, we answer this question in the negative, in any positive CR-dimension and CR-codimension. Our construction is based on a recent dynamical technique in CR-geometry, developed in my earlier work with Shafikov.
Group Meeting
15:10 Fri 24 Apr, 2015 :: N218 Engineering North :: Dr Ben Binder :: University of Adelaide

Talk (Dr Ben Binder): How do we quantify the filamentous growth in a yeast colony? Abstract: In this talk we will develop a systematic method to measure the spatial patterning of yeast colony morphology. The methods are applicable to other physical systems with circular spatial domains, for example, batch mixing fluid devices. A hybrid modelling approach of the yeast growth process will also be discussed. After the seminar, Ben will start a group discussion by sharing some information and experiences on attracting honours/PhD students to the group.
Haven't I seen you before? Accounting for partnership duration in infectious disease modeling
15:10 Fri 8 May, 2015 :: Level 7 Conference Room Ingkarni Wardli :: Dr Joel Miller :: Monash University


Our ability to accurately predict and explain the spread of an infectious disease is a significant factor in our ability to implement effective interventions. Our ability to accurately model disease spread depends on how accurately we capture the various effects. This is complicated by the fact that infectious disease spread involves a number of time scales. Four that are particularly relevant are: duration of infection in an individual, duration of partnerships between individuals, the time required for an epidemic to spread through the population, and the time required for the population structure to change (demographic or otherwise).

Mathematically simple models of disease spread usually make the implicit assumption that the duration of partnerships is by far the shortest time scale in the system. Thus they miss out on the tendency for infected individuals to deplete their local pool of susceptibles. Depending on the details of the disease in question, this effect may be significant.

I will discuss work done to reduce these assumptions for "SIR" (Susceptible-Infected-Recovered) diseases, which allows us to interpolate between populations which are static and populations which change partners rapidly in closed populations (no entry/exit). I will then discuss early results in applying these methods to diseases such as HIV in which the population time scales are relevant.

An Engineer-Mathematician Duality Approach to Finite Element Methods
12:10 Mon 18 May, 2015 :: Napier LG29 :: Jordan Belperio :: University of Adelaide

The finite element method has been a prominent numerical technique for engineers solving solid mechanics, electro-magnetic and heat transfer problems for over 30 years. More recently, the finite element method has been used to solve fluid mechanics problems, a field where finite difference methods are more commonly used. In this talk, I will introduce the basic mathematics behind the finite element method, outline its similarity to the finite difference method, and compare how engineers and mathematicians use finite element methods. I will then demonstrate two solutions to the wave equation using the finite element method.
People smugglers and statistics
12:10 Mon 25 May, 2015 :: Ingkarni Wardli 715 Conference Room :: Prof. Patty Solomon :: School of Mathematical Sciences

In 2012 the Commonwealth Chief Scientist asked for my advice on the statistics being used in people smuggling prosecutions. Many defendants come from poor fishing villages in Indonesia, where births are not routinely recorded and the age of the defendant is not known. However, mandatory jail sentences apply in Australia for individuals convicted of people smuggling, but not for children less than 18 years old, so assessing the age of each defendant is very important. Following an Australian Human Rights Commission inquiry into the treatment of individuals suspected of people smuggling, the Attorney-General's department sought advice from the Chief Scientist, which is where I come in. I'll present the methods used by the prosecution and defence, which are both wrong, and introduce the prosecutor's fallacy.
Mathematical Modeling and Analysis of Active Suspensions
14:10 Mon 3 Aug, 2015 :: Napier 209 :: Professor Michael Shelley :: Courant Institute of Mathematical Sciences, New York University

Complex fluids that have a 'bio-active' microstructure, like suspensions of swimming bacteria or assemblies of immersed biopolymers and motor-proteins, are important examples of so-called active matter. These internally driven fluids can have strange mechanical properties, and show persistent activity-driven flows and self-organization. I will show how first-principles PDE models are derived through reciprocal coupling of the 'active stresses' generated by collective microscopic activity to the fluid's macroscopic flows. These PDEs have interesting analytic structures and dynamics that agree qualitatively with experimental observations: they predict the transitions to flow instability and persistent mixing observed in bacterial suspensions, and for microtubule assemblies show the generation, propagation, and annihilation of disclination defects. I'll discuss how these models might be used to study yet more complex biophysical systems.
Natural Optimisation (No Artificial Colours, Flavours or Preservatives)
12:10 Mon 21 Sep, 2015 :: Benham Labs G10 :: James Walker :: University of Adelaide

Sometimes nature seems to have the best solutions to complicated optimisation problems. For example, ant colonies have a clever way of optimising the amount of food brought to the colony using pheromones, and the process of natural selection gives rise to species which are optimally suited to their environment. And although the process is not strictly natural, for centuries people have been using properties of crystal formation to make steel with optimal properties. In this talk I will discuss non-convex optimisation and some optimisation methods inspired by natural processes.
Analytic complexity of bivariate holomorphic functions and cluster trees
12:10 Fri 2 Oct, 2015 :: Ingkarni Wardli B17 :: Timur Sadykov :: Plekhanov University, Moscow

The Kolmogorov-Arnold theorem yields a representation of a multivariate continuous function in terms of a composition of functions which depend on at most two variables. In the analytic case, understanding the complexity of such a representation naturally leads to the notion of the analytic complexity of (a germ of) a bivariate multi-valued analytic function. According to Beloshapka's local definition, the order of complexity of any univariate function is equal to zero, while the n-th complexity class is defined recursively to consist of functions of the form a(b(x,y)+c(x,y)), where a is a univariate analytic function and b and c belong to the (n-1)-th complexity class. Such a representation is meant to be valid for suitable germs of multi-valued holomorphic functions. A randomly chosen bivariate analytic function will most likely have infinite analytic complexity. However, for a number of important families of special functions of mathematical physics their complexity is finite and can be computed or estimated. Using this, we introduce the notion of the analytic complexity of a binary tree, in particular, a cluster tree, and investigate its properties.
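The recursion defining the complexity classes is easy to illustrate symbolically. The sketch below (purely formal, and not part of the talk) enumerates expression shapes of the form a(b+c) built from two atoms, treating the outer univariate function as a single abstract symbol a.

```python
def complexity_classes(n, atoms=("x", "y")):
    """Formal expressions in the n-th Beloshapka complexity class.

    Class 0 consists of the univariate atoms; class n consists of
    expressions a(b + c) with b and c drawn from class n-1. We count
    distinct expression shapes, with one abstract outer function 'a'.
    """
    cls = set(atoms)                      # class 0: univariate functions
    for _ in range(n):
        cls = {f"a({b}+{c})" for b in cls for c in cls}
    return cls

c1 = complexity_classes(1)
print(sorted(c1))  # 4 shapes: a(x+x), a(x+y), a(y+x), a(y+y)
```

Each step squares the number of shapes, which hints at why generic analytic functions escape every finite class.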
Chern-Simons classes on loop spaces and diffeomorphism groups
12:10 Fri 16 Oct, 2015 :: Ingkarni Wardli B17 :: Steve Rosenberg :: Boston University

Not much is known about the topology of the diffeomorphism group Diff(M) of manifolds M of dimension four and higher. We'll show that for a class of manifolds of dimension 4k+1, Diff(M) has infinite fundamental group. This is proved by translating the problem into a question about Chern-Simons classes on the tangent bundle to the loop space LM. To build the CS classes, we use a family of metrics on LM associated to a Riemannian metric on M. The curvature of these metrics takes values in an algebra of pseudodifferential operators. The main technical step in the CS construction is to replace the ordinary matrix trace in finite dimensions with the Wodzicki residue, the unique trace on this algebra. The moral is that some techniques in finite dimensional Riemannian geometry can be extended to some examples in infinite dimensional geometry.
Use of epidemic models in optimal decision making
15:00 Thu 19 Nov, 2015 :: Ingkarni Wardli 5.57 :: Tim Kinyanjui :: School of Mathematics, The University of Manchester

Epidemic models have proved useful in a number of applications in epidemiology. In this work, I will present two areas where we have used modelling to make informed decisions. Firstly, we have used an age structured mathematical model to describe the transmission of Respiratory Syncytial Virus in a developed country setting and to explore different vaccination strategies. We found that delayed infant vaccination has significant potential in reducing the number of hospitalisations in the most vulnerable group and that most of the reduction is due to indirect protection. Our results also suggest that marked public health benefit could be achieved through RSV vaccine delivered to age groups not seen as most at risk of severe disease. The second application is in the optimal design of studies aimed at collection of household-stratified infection data. A design decision involves making a trade-off between the number of households to enrol and the sampling frequency. Two commonly used study designs are considered: cross-sectional and cohort. The search for an optimal design uses Bayesian methods to explore the joint parameter-design space combined with Shannon entropy of the posteriors to estimate the amount of information for each design. We found that for the cross-sectional designs, the amount of information increases with the sampling intensity while the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing data collection studies.
Mathematical modelling of the immune response to influenza
15:00 Thu 12 May, 2016 :: Ingkarni Wardli B20 :: Ada Yan :: University of Melbourne

The immune response plays an important role in the resolution of primary influenza infection and prevention of subsequent infection in an individual. However, the relative roles of each component of the immune response in clearing infection, and the effects of interaction between components, are not well quantified.

We have constructed a model of the immune response to influenza based on data from viral interference experiments, where ferrets were exposed to two influenza strains within a short time period. The changes in viral kinetics of the second virus due to the first virus depend on the strains used as well as the interval between exposures, enabling inference of the timing of innate and adaptive immune response components and the role of cross-reactivity in resolving infection. Our model provides a mechanistic explanation for the observed variation in viruses' abilities to protect against subsequent infection at short inter-exposure intervals, either by delaying the second infection or inducing stochastic extinction of the second virus. It also explains the decrease in recovery time for the second infection when the two strains elicit cross-reactive cellular adaptive immune responses. To account for inter-subject as well as inter-virus variation, the model is formulated using a hierarchical framework. We will fit the model to experimental data using Markov Chain Monte Carlo methods; quantification of the model will enable a deeper understanding of the effects of potential new treatments.
Harmonic Analysis in Rough Contexts
15:10 Fri 13 May, 2016 :: Engineering South S112 :: Dr Pierre Portal :: Australian National University

In recent years, perspectives on what constitutes the "natural" framework within which to conduct various forms of mathematical analysis have shifted substantially. The common theme of these shifts can be described as a move towards roughness, i.e. the elimination of smoothness assumptions that had previously been considered fundamental. Examples include partial differential equations on domains with a boundary that is merely Lipschitz continuous, geometric analysis on metric measure spaces that do not have a smooth structure, and stochastic analysis of dynamical systems that have nowhere differentiable trajectories. In this talk, aimed at a general mathematical audience, I describe some of these shifts towards roughness, placing an emphasis on harmonic analysis, and on my own contributions. This includes the development of heat kernel methods in situations where such a kernel is merely a distribution, and applications to deterministic and stochastic partial differential equations.
Time series analysis of paleo-climate proxies (a mathematical perspective)
15:10 Fri 27 May, 2016 :: Engineering South S112 :: Dr Thomas Stemler :: University of Western Australia

In this talk I will present the work my colleagues from the School of Earth and Environment (UWA), the "transdisciplinary methods" group of the Potsdam Institute for Climate Impact Research, Germany, and I did to explain the dynamics of the Australian-South East Asian monsoon system during the last couple of thousand years. From a time series perspective, paleo-climate proxy series are more or less the monsters moving under your bed that wake you up in the middle of the night. The data is clearly non-stationary and non-uniformly sampled in time, and the influence of stochastic forcing and the level of measurement noise are more or less unknown. Given these undesirable properties, almost all traditional time series analysis methods fail. I will highlight two methods that allow us to draw useful conclusions from the data sets. The first one uses Gaussian kernel methods to reconstruct climate networks from multiple proxies. The coupling relationships in these networks change over time and therefore can be used to infer which areas of the monsoon system dominate the complex dynamics of the whole system. Secondly, I will introduce the transformation cost time series method, which allows us to detect changes in the dynamics of a non-uniformly sampled time series. Unlike the frequently used interpolation approach, our new method does not corrupt the data and therefore avoids biases in any subsequent analysis. While I will again focus on paleo-climate proxies, the method can be used in other applied areas, where regular sampling is not possible.
Holomorphic Flexibility Properties of Spaces of Elliptic Functions
12:10 Fri 29 Jul, 2016 :: Ingkarni Wardli B18 :: David Bowman :: University of Adelaide

The set of meromorphic functions on an elliptic curve naturally possesses the structure of a complex manifold. The component of degree 3 functions is 6-dimensional and enjoys several interesting complex-analytic properties that make it, loosely speaking, the opposite of a hyperbolic manifold. Our main result is that this component has a 54-sheeted branched covering space that is an Oka manifold.
Probabilistic Meshless Methods for Bayesian Inverse Problems
15:10 Fri 5 Aug, 2016 :: Engineering South S112 :: Dr Chris Oates :: University of Technology Sydney

This talk deals with statistical inverse problems that involve partial differential equations (PDEs) with unknown parameters. Our goal is to account, in a rigorous way, for the impact of discretisation error that is introduced at each evaluation of the likelihood due to numerical solution of the PDE. In the context of meshless methods, the proposed, model-based approach to discretisation error encourages statistical inferences to be more conservative in the presence of significant solver error. In addition, (i) a principled learning-theoretic approach to minimise the impact of solver error is developed, and (ii) the challenge of non-linear PDEs is considered. The method is applied to parameter inference problems in which non-negligible solver error must be accounted for in order to draw valid statistical conclusions.
Predicting turbulence
14:10 Tue 30 Aug, 2016 :: Napier 209 :: Dr Trent Mattner :: School of Mathematical Sciences

Turbulence is characterised by three-dimensional unsteady fluid motion over a wide range of spatial and temporal scales. It is important in many problems of technological and scientific interest, such as drag reduction, energy production and climate prediction. Turbulent flows are governed by the Navier--Stokes equations, which are a nonlinear system of partial differential equations. Typically, numerical methods are needed to find solutions to these equations. In turbulent flows, however, the resulting computational problem is usually intractable. Filtering or averaging the Navier--Stokes equations mitigates the computational problem, but introduces new quantities into the equations. Mathematical models of turbulence are needed to estimate these quantities. One promising turbulence model consists of a random collection of fluid vortices, which are themselves approximate solutions of the Navier--Stokes equations.
What is the best way to count votes?
13:10 Mon 12 Sep, 2016 :: Hughes 322 :: Dr Stuart Johnson :: School of Mathematical Sciences

Around the world there are many different ways of counting votes in elections, and even within Australia different methods are in use in various states. Which is the best method? Even for the simplest case of electing one person in a single electorate there is no easy answer; in fact there is a famous result - Arrow's Theorem - which tells us that there is no perfect way of counting votes. I will describe a number of different methods along with their problems before giving a more precise statement of the theorem and outlining a proof.
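Two of the counting methods the talk compares, plurality ("first past the post") and instant runoff (preferential voting, as used in Australian lower-house elections), can disagree on the same ballots. The sketch below uses an invented ballot profile constructed to show such a disagreement; it is an illustration, not material from the talk.

```python
from collections import Counter

# Ranked ballots: each tuple lists candidates from most to least preferred.
# This invented profile is chosen so the two methods elect different winners.
ballots = [("A", "B", "C")] * 4 + [("B", "C", "A")] * 3 + [("C", "B", "A")] * 2

def plurality(ballots):
    """Winner is whoever has the most first preferences."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def instant_runoff(ballots):
    """Repeatedly eliminate the candidate with fewest first preferences
    (among remaining candidates) until someone holds a majority."""
    remaining = {c for b in ballots for c in b}
    while True:
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        top, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return top
        remaining.remove(min(tally, key=tally.get))

print(plurality(ballots), instant_runoff(ballots))  # A B
```

Here A leads on first preferences (4 of 9), but once C is eliminated, C's supporters transfer to B, who wins the runoff 5-4. Arrow's Theorem says no ranked method avoids all such pathologies.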
A principled experimental design approach to big data analysis
15:10 Fri 23 Sep, 2016 :: Napier G03 :: Prof Kerrie Mengersen :: Queensland University of Technology

Big Datasets are endemic, but they are often notoriously difficult to analyse because of their size, complexity, history and quality. The purpose of this paper is to open a discourse on the use of modern experimental design methods to analyse Big Data in order to answer particular questions of interest. By appeal to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has wide generality and advantageous inferential and computational properties. In particular, the principled experimental design approach is shown to provide a flexible framework for analysis that, for certain classes of objectives and utility functions, delivers equivalent answers compared with analyses of the full dataset. It can also provide a formalised method for iterative parameter estimation, model checking, identification of data gaps and evaluation of data quality. Finally it has the potential to add value to other Big Data sampling algorithms, in particular divide-and-conquer strategies, by determining efficient sub-samples.
SIR epidemics with stages of infection
12:10 Wed 28 Sep, 2016 :: EM218 :: Matthieu Simon :: Université Libre de Bruxelles

This talk is concerned with a stochastic model for the spread of an epidemic in a closed homogeneously mixing population. The population is subdivided into three classes of individuals: the susceptibles, the infectives and the removed cases. In short, an infective remains infectious during a random period of time. While infected, it can contact all the susceptibles present, independently of the other infectives. At the end of the infectious period, it becomes a removed case and has no further part in the infection process.

We represent an infectious period as a set of different stages that an infective can go through before being removed. The transitions between stages are ruled by either a Markov process or a semi-Markov process. In each stage, an infective makes infectious contacts at the epochs of a Poisson process with a stage-specific rate.

Our purpose is to derive closed expressions for a transform of different statistics related to the end of the epidemic, such as the final number of susceptibles and the area under the trajectories of all the infectives. The analysis is performed by using simple matrix analytic methods and martingale arguments. Numerical illustrations will be provided at the end of the talk.
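The stage-structured infection process described above can also be simulated directly. The sketch below is a toy Gillespie-style simulation under simplifying assumptions not taken from the talk: Markovian stages with a common rate (an Erlang infectious period), uniform contact rate, invented parameter values, and no tracking of event times; the talk instead derives closed-form transforms via matrix analytic methods and martingales.

```python
import random

def sir_staged(n_susceptible, n_infective, beta, stage_rate, n_stages, rng):
    """Simulate an SIR epidemic whose infectious period is a chain of
    exponential stages. Returns the final number of susceptibles."""
    S = n_susceptible
    infectives = [0] * n_infective         # stage index of each infective
    while infectives:
        contact_rate = beta * S * len(infectives)
        progress_rate = stage_rate * len(infectives)
        total = contact_rate + progress_rate
        if rng.random() < contact_rate / total:
            S -= 1
            infectives.append(0)           # new infective enters stage 0
        else:
            i = rng.randrange(len(infectives))
            infectives[i] += 1
            if infectives[i] == n_stages:  # completed all stages: removed
                infectives.pop(i)
    return S

print(sir_staged(50, 2, beta=0.02, stage_rate=1.0, n_stages=3,
                 rng=random.Random(1)))
```

Averaging the returned final size over many runs approximates one of the end-of-epidemic statistics the matrix analytic approach computes exactly.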
What is index theory?
12:10 Tue 21 Mar, 2017 :: Ingkarni Wardli 5.57 :: Dr Peter Hochs :: School of Mathematical Sciences

Index theory is a link between topology, geometry and analysis. A typical theorem in index theory says that two numbers are equal: an analytic index and a topological index. The first theorem of this kind was the index theorem of Atiyah and Singer, which they proved in 1963. Index theorems have many applications in maths and physics. For example, they can be used to prove that a differential equation must have a solution. Also, they imply that the topology of a space like a sphere or a torus determines in what ways it can be curved. Topology is the study of geometric properties that do not change if we stretch or compress a shape without cutting or gluing. Curvature does change when we stretch something out, so it is surprising that topology can say anything about curvature. Index theory has many surprising consequences like this.
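The claim that topology constrains curvature is made concrete by the Gauss-Bonnet theorem, a classical ancestor of index theorems (not mentioned in the abstract, but a standard illustration): for a closed oriented surface $M$ with Gaussian curvature $K$,

```latex
\int_M K \, \mathrm{d}A \;=\; 2\pi \, \chi(M),
```

where $\chi(M)$ is the Euler characteristic. However a sphere ($\chi = 2$) is stretched or dented, its total curvature stays fixed at $4\pi$, while any metric on a torus ($\chi = 0$) must have total curvature zero.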
Stokes' Phenomenon in Translating Bubbles
15:10 Fri 2 Jun, 2017 :: Ingkarni Wardli 5.57 :: Dr Chris Lustri :: Macquarie University

This study of translating air bubbles in a Hele-Shaw cell containing viscous fluid reveals the critical role played by surface tension in these systems. The standard zero-surface-tension model of Hele-Shaw flow predicts that a continuum of bubble solutions exists for arbitrary flow translation velocity. The inclusion of small surface tension, however, eliminates this continuum of solutions, instead producing a discrete, countably infinite family of solutions, each with distinct translation speeds. We are interested in determining this discrete family of solutions, and understanding why only these solutions are permitted. Studying this problem in the asymptotic limit of small surface tension does not seem to give any particular reason why only these solutions should be selected. It is only by using exponential asymptotic methods to study the Stokes’ structure hidden in the problem that we are able to obtain a complete picture of the bubble behaviour, and hence understand the selection mechanism that only permits certain solutions to exist. In the first half of my talk, I will explain the powerful ideas that underpin exponential asymptotic techniques, such as analytic continuation and optimal truncation. I will show how they are able to capture behaviour known as Stokes' Phenomenon, which is typically invisible to classical asymptotic series methods. In the second half of the talk, I will introduce the problem of a translating air bubble in a Hele-Shaw cell, and show that the behaviour can be fully understood by examining the Stokes' structure concealed within the problem. Finally, I will briefly showcase other important physical applications of exponential asymptotic methods, including submarine waves and particle chains.
Complex methods in real integral geometry
12:10 Fri 28 Jul, 2017 :: Engineering Sth S111 :: Mike Eastwood :: University of Adelaide

There are well-known analogies between holomorphic integral transforms such as the Penrose transform and real integral transforms such as the Radon, Funk, and John transforms. In fact, one can make a precise connection between them and hence use complex methods to establish results in the real setting. This talk will introduce some simple integral transforms and indicate how complex analysis may be applied.
Exact coherent structures in high speed flows
15:10 Fri 28 Jul, 2017 :: Ingkarni Wardli B17 :: Prof Philip Hall :: Monash University

In recent years, there has been much interest in the relevance of nonlinear solutions of the Navier-Stokes equations to fully turbulent flows. The solutions must be calculated numerically at moderate Reynolds numbers but in the limit of high Reynolds numbers asymptotic methods can be used to greatly simplify the computational task and to uncover the key physical processes sustaining the nonlinear states. In particular, in confined flows exact coherent structures defining the boundary between the laminar and turbulent attractors can be constructed. In addition, structures which capture the essential physical properties of fully turbulent flows can be found. The extension of the ideas to boundary layer flows and current work attempting to explain the law of the wall will be discussed.
The Markovian binary tree applied to demography and conservation biology
15:10 Fri 27 Oct, 2017 :: Ingkarni Wardli B17 :: Dr Sophie Hautphenne :: University of Melbourne

Markovian binary trees form a general and tractable class of continuous-time branching processes, which makes them well-suited for real-world applications. Thanks to their appealing probabilistic and computational features, these processes have proven to be an excellent modelling tool for applications in population biology. Typical performance measures of these models include the extinction probability of a population, the distribution of the population size at a given time, the total progeny size until extinction, and the asymptotic population composition. Besides giving an overview of the main performance measures and the techniques involved to compute them, we discuss recently developed statistical methods to estimate the model parameters, depending on the accuracy of the available data. We illustrate our results in human demography and in conservation biology.
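The extinction probability mentioned above satisfies a fixed-point equation; for Markovian binary trees this is a matrix equation solved by iterative algorithms. The sketch below shows the same idea in the simplest scalar setting, a Galton-Watson branching process, and is an analogy rather than the MBT algorithm itself; the offspring distribution is invented for illustration.

```python
def extinction_probability(offspring_pmf, tol=1e-12, max_iter=10_000):
    """Extinction probability of a Galton-Watson branching process:
    the minimal fixed point q = G(q) of the offspring probability
    generating function, found by functional iteration from 0.
    (Markovian binary trees satisfy an analogous matrix fixed-point
    equation, solved by similar iterations.)"""
    def G(s):
        return sum(p * s**k for k, p in offspring_pmf.items())
    q = 0.0
    for _ in range(max_iter):
        q_new = G(q)
        if abs(q_new - q) < tol:
            break
        q = q_new
    return q

# Offspring distribution: 0 children w.p. 1/4, 2 children w.p. 3/4.
# Then q solves q = 1/4 + (3/4) q^2, whose minimal root is 1/3.
print(extinction_probability({0: 0.25, 2: 0.75}))  # ~0.3333
```

Starting the iteration from 0 guarantees convergence to the minimal root, which is the probabilistically correct one whenever the process is supercritical.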
Stochastic Modelling of Urban Structure
11:10 Mon 20 Nov, 2017 :: Engineering Nth N132 :: Mark Girolami :: Imperial College London, and The Alan Turing Institute

Urban systems are complex in nature and comprise a large number of individuals that act according to utility, a measure of net benefit pertaining to preferences. The actions of individuals give rise to an emergent behaviour, creating the so-called urban structure that we observe. In this talk, I develop a stochastic model of urban structure to formally account for uncertainty arising from the complex behaviour. We further use this stochastic model to infer the components of a utility function from observed urban structure. This is a more powerful modelling framework in comparison to the ubiquitous discrete choice models that are of limited use for complex systems, in which the overall preferences of individuals are difficult to ascertain. We model urban structure as a realization of a Boltzmann distribution that is the invariant distribution of a related stochastic differential equation (SDE) that describes the dynamics of the urban system. Our specification of Boltzmann distribution assigns higher probability to stable configurations, in the sense that consumer surplus (demand) is balanced with running costs (supply), as characterized by a potential function. We specify a Bayesian hierarchical model to infer the components of a utility function from observed structure. Our model is doubly-intractable and poses significant computational challenges that we overcome using recent advances in Markov chain Monte Carlo (MCMC) methods. We demonstrate our methodology with case studies on the London retail system and airports in England.

News matching "Matrix analytic methods"

ARC Grant successes
Congratulations to Tony Roberts, Charles Pearce, Robert Elliott, Andrew Metcalfe and all their collaborators on their success in the current round of ARC grants. The projects are "Development of innovative technologies for oil production based on the advanced theory of suspension flows in porous media" (Tony Roberts et al.), "Perturbation and approximation methods for linear operators with applications to train control, water resource management and evolution of physical systems" (Charles Pearce et al.), "Risk Measures and Management in Finance and Actuarial Science Under Regime-Switching Models" (Robert Elliott et al.) and "A new flood design methodology for a variable and changing climate" (Andrew Metcalfe et al.). Posted Mon 26 Oct 09.
ARC Grant successes
The School of Mathematical Sciences has again had outstanding success in the ARC Discovery and Linkage Projects schemes. Congratulations to the following staff for their success in the Discovery Project scheme: Prof Nigel Bean, Dr Josh Ross, Prof Phil Pollett, Prof Peter Taylor, New methods for improving active adaptive management in biological systems, $255,000 over 3 years; Dr Josh Ross, New methods for integrating population structure and stochasticity into models of disease dynamics, $248,000 over three years; A/Prof Matt Roughan, Dr Walter Willinger, Internet traffic-matrix synthesis, $290,000 over three years; Prof Patricia Solomon, A/Prof John Moran, Statistical methods for the analysis of critical care data, with application to the Australian and New Zealand Intensive Care Database, $310,000 over 3 years; Prof Mathai Varghese, Prof Peter Bouwknegt, Supersymmetric quantum field theory, topology and duality, $375,000 over 3 years; Prof Peter Taylor, Prof Nigel Bean, Dr Sophie Hautphenne, Dr Mark Fackrell, Dr Malgorzata O'Reilly, Prof Guy Latouche, Advanced matrix-analytic methods with applications, $600,000 over 3 years. Congratulations to the following staff for their success in the Linkage Project scheme: Prof Simon Beecham, Prof Lee White, A/Prof John Boland, Prof Phil Howlett, Dr Yvonne Stokes, Mr John Wells, Paving the way: an experimental approach to the mathematical modelling and design of permeable pavements, $370,000 over 3 years; Dr Amie Albrecht, Prof Phil Howlett, Dr Andrew Metcalfe, Dr Peter Pudney, Prof Roderick Smith, Saving energy on trains - demonstration, evaluation, integration, $540,000 over 3 years Posted Fri 29 Oct 10.
Summer Research Student Thomas Brown wins the AMSI/Cambridge University Press Prize for 2013
Congratulations to Thomas Brown, jointly supervised by Ed Green and Ben Binder, who won the AMSI/Cambridge University Press Prize for the best talk at the 2013 CSIRO Big Day In, held earlier this month. After completion of their summer project, vacation scholars must submit a project report which summarises the project and addresses the nature of the topic, methods of investigation, results found, and benefits of the experience. The scholars then present a 15-minute presentation about their project at the CSIRO Big Day In (BDI). This experience enables students to meet and socialise with their peers, gain experience presenting to their colleagues and supervisors and learn about a range of careers in science by interacting with several CSIRO scientists (including mathematicians) in a discussion panel. This is a very pleasing result for Thomas, Ed and Ben as well as for the School of Mathematical Sciences. Well done Thomas. Posted Fri 15 Feb 13.

Publications matching "Matrix analytic methods"

Medical imaging and processing methods for cardiac flow reconstruction
Wong, Kelvin; Kelso, Richard; Worthley, Stephen; Sanders, Prashanthan; Mazumdar, Jagan; Abbott, Derek, Journal of Mechanics in Medicine and Biology 9 (1–20) 2009
Portfolio risk minimization and differential games
Elliott, Robert; Siu, T, Nonlinear Analysis-Theory Methods & Applications In Press (–) 2009
Siciak-Zahariuta extremal functions, analytic discs and polynomial hulls
Larusson, Finnur; Sigurdsson, R, Mathematische Annalen 345 (159–174) 2009
Learning fuzzy rules with evolutionary algorithms - An analytic approach
Kroeske, Jens; Ghandar, Adam; Michalewicz, Zbigniew; Neumann, F, 10th International Conference on Parallel Problem Solving from Nature, Germany 01/09/08
Characterization of matrix-exponential distributions
Bean, Nigel; Fackrell, Mark; Taylor, Peter, Stochastic Models 24 (339–363) 2008
Oriented bond percolation and phase transitions: an analytic approach
Pearce, Charles, International Conference on Numerical Analysis and Applied Mathematics, Corfu, Greece 16/09/07
Monogenic functions in conformal geometry
Eastwood, Michael; Ryan, J, Symmetry, Integrability and Geometry: Methods and Applications 84 (1–14) 2007
Nonclassical symmetry solutions for reaction-diffusion equations with explicit spatial dependence
Hajek, Bronwyn; Edwards, M; Broadbridge, P; Williams, G, Nonlinear Analysis-Theory Methods & Applications 67 (2541–2552) 2007
Symmetries and invariant differential pairings
Eastwood, Michael, Symmetry, Integrability and Geometry: Methods and Applications 113 (1–10) 2007
Traffic matrix estimation method and apparatus
Duffield, N; Greenberg, A; Klincewicz, J; Roughan, Matthew; Zhang, Y,
Fractional analytic index
Varghese, Mathai; Melrose, R; Singer, I, Journal of Differential Geometry 74 (265–292) 2006
Methodology in meta-analysis: a study from critical care meta-analytic practice
Moran, John; Solomon, Patricia; Warn, D, Health Services and Outcomes Research Methodology 5 (207–226) 2006
Methods of constrained and unconstrained approximation for mappings in probability spaces
Torokhti, Anatoli; Howlett, P; Pearce, Charles, chapter in Modern Applied Mathematics (Narosa Publishing House) 83–129, 2005
Boundary element methods for infiltration from irrigation channels
Lobo, Maria; Clements, David, The International Conference on Boundary Element Techniques VI, Montreal, Canada 27/07/05
A 3-D non-hydrostatic pressure model for small amplitude free surface flows
Lee, Jong; Teubner, Michael; Nixon, John; Gill, Peter, International Journal for Numerical Methods in Fluids 50 (649–672) 2005
An analytic modelling approach for network routing algorithms that use "ant-like" mobile agents
Bean, Nigel; Costa, Andre, Computer Networks-The International Journal of Computer and Telecommunications Networking 49 (243–268) 2005
Applications of the artificial compressibility method for turbulent open channel flows
Lee, Jong; Teubner, Michael; Nixon, John; Gill, Peter, International Journal for Numerical Methods in Fluids 51 (617–633) 2005
Ramaswami's duality and probabilistic algorithms for determining the rate matrix for a structured GI/M/1 Markov chain
Hunt, Emma, The ANZIAM Journal 46 (485–493) 2005
Traffic matrix reloaded: Impact of routing changes
Teixeira, R; Duffield, N; Rexford, J; Roughan, Matthew, Lecture Notes in Computer Science/Lecture Notes in Artificial Intelligence 3431 (251–264) 2005
An introduction to programming and numerical methods in MATLAB
Otto, S; Denier, James, (Springer-Verlag) 2005
A probabilistic algorithm for finding the rate matrix of a block-GI/M/1 Markov chain
Hunt, Emma, The ANZIAM Journal 45 (457–475) 2004
A sufficient condition for the uniform exponential stability of time-varying systems with noise
Grammel, G; Maizurna, Isna, Nonlinear Analysis-Theory Methods & Applications 56 (951–960) 2004
Spectral decomposition methods for the computation of RMS values in an active suspension
Pearce, Charles; Thompson, A, Vehicle System Dynamics 42 (395–411) 2004
Second moments of a matrix analytic model of machine maintenance
Green, David; Metcalfe, Andrew, IMA International Conference on Modelling in Industrial Maintenance and Reliability (5th: 2004), Salford, United Kingdom 05/04/04
Arborescences, matrix-trees and the accumulated sojourn time in a Markov process
Pearce, Charles; Falzon, L, chapter in Stochastic analysis and applications Volume 3 (Nova Science Publishers) 147–168, 2003
A Probabilistic algorithm for determining the fundamental matrix of a block M/G/1 Markov chain
Hunt, Emma, Mathematical and Computer Modelling 38 (1203–1209) 2003
Dynamics of the cell and its extracellular matrix - A simple mathematical approach
Saha, Asit; Mazumdar, Jagan, IEEE Transactions on NanoBioscience 2 (89–93) 2003
Edge of the wedge theory in hypo-analytic manifolds
Eastwood, Michael; Graham, C, Communications in Partial Differential Equations 28 (2003–2028) 2003
Numerical model of electrical potential within the human head
Nixon, John; Rasser, Paul; Teubner, Michael; Clark, C; Bottema, M, International Journal for Numerical Methods in Engineering 56 (2353–2366) 2003
An Information-Theoretic Approach to Traffic Matrix Estimation
Zhang, Y; Roughan, Matthew; Lund, C; Donoho, D, Ulrich, Karlsruhe, Germany 25/08/03
Nonclassical description of analytic cohomology
Bailey, T; Eastwood, Michael; Gindikin, S,
A matrix analytic model for machine maintenance
Green, David; Metcalfe, Andrew; Swailes, D, Matrix-Analytic Methods: Theory and Applications, Adelaide, Australia 14/07/02
Martingale methods for analysing single-server queues
Roughan, Matthew; Pearce, Charles, Queueing Systems 41 (205–239) 2002
Mathematical methods for spatially cohesive reserve design
McDonnell, Mark; Possingham, Hugh; Ball, Ian; Cousins, Elizabeth, Environmental Modeling & Assessment 7 (107–114) 2002
Comparison of spinal myotatic reflexes in human adults investigated with cross-correlation and signal averaging methods
Miller, S; Clark, J; Eyre, J; Kelly, S; Lim, E; McClelland, V; McDonough, S; Metcalfe, Andrew, Brain Research 899 (47–65) 2001
Csiszár f-divergence, Ostrowski's inequality and mutual information
Dragomir, S; Gluscevic, Vido; Pearce, Charles, Nonlinear Analysis-Theory Methods & Applications 47 (2375–2386) 2001
Some new bounds for singular values and eigenvalues of matrix products
Lu, L-Z; Pearce, Charles, Annals of Operations Research 98 (141–148) 2001
The modelling and numerical simulation of causal non-linear systems
Howlett, P; Torokhti, Anatoli; Pearce, Charles, Nonlinear Analysis-Theory Methods & Applications 47 (5559–5572) 2001
Truncation-type methods and Bäcklund transformations for ordinary differential equations: The third and fifth Painlevé equations
Gordoa, P; Joshi, Nalini; Pickering, A, Glasgow Mathematical Journal 43A (23–32) 2001
Martingale methods in dynamic portfolio allocation with distortion operators
Hamada, M; Sherris, M; Van Der Hoek, John, Quantitative Methods in Finance (2001), Sydney, Australia 12/12/01
Reporting of clinical trials using group sequential methods
Moran, John; Peake, Sandra; Solomon, Patricia, Critical care and Resuscitation 3 (146–147) 2001
Meta-analysis, overviews and publication bias
Solomon, Patricia; Hutton, Jonathon, Statistical Methods in Medical Research 10 (245–250) 2001
Analytic continuation of vector bundles with Lp-curvature
Harris, A; Tonegawa, Y, International Journal of Mathematics 11 (29–40) 2000
Disease surveillance and data collection issues in epidemic modelling
Solomon, Patricia; Isham, V, Statistical Methods in Medical Research 9 (259–277) 2000
Explicit finite difference methods for variable velocity advection in the presence of a source
Noye, Brian, Computers & Fluids 29 (385–399) 2000
Numerical study of the stability of some explicit finite-difference methods for oscillatory advection
Noye, Brian; McInerney, David, The ANZIAM Journal 42 (C1076–C1096) 2000
Disease surveillance and intervention studies in developing countries
Solomon, Patricia, Statistical Methods in Medical Research 9 (183–184) 2000

Advanced search options

You may be able to improve your search results by using the following syntax:

Query                         Matches the following
Asymptotic Equation           Anything with "Asymptotic" or "Equation".
+Asymptotic +Equation         Anything with "Asymptotic" and "Equation".
+Stokes -"Navier-Stokes"      Anything containing "Stokes" but not "Navier-Stokes".
Dynam*                        Anything containing "Dynamic", "Dynamical", "Dynamicist" etc.
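The site's actual search engine is not shown here; purely as an illustration of the query semantics in the table above, the following minimal Python sketch interprets unprefixed terms as "match any", a leading + as "must match", a leading - as "must not match", and a trailing * as a wildcard. All function names and the case-insensitive substring matching are assumptions for the example, not the real implementation.

```python
import re

def parse_query(query):
    """Split a query into (required, excluded, optional) term lists.
    Quoted phrases are kept together; + means required, - means excluded."""
    tokens = re.findall(r'[+-]?"[^"]+"|[+-]?\S+', query)
    required, excluded, optional = [], [], []
    for tok in tokens:
        if tok.startswith("+"):
            required.append(tok[1:].strip('"'))
        elif tok.startswith("-"):
            excluded.append(tok[1:].strip('"'))
        else:
            optional.append(tok.strip('"'))
    return required, excluded, optional

def term_matches(term, text):
    """Case-insensitive substring match; a trailing * simply drops the
    wildcard, since substring matching already covers 'Dynam*' cases."""
    return term.rstrip("*").lower() in text.lower()

def matches(query, text):
    """Apply the table's semantics: no excluded term may match,
    every required term must match, and at least one unprefixed
    term must match if any are given."""
    required, excluded, optional = parse_query(query)
    if any(term_matches(t, text) for t in excluded):
        return False
    if not all(term_matches(t, text) for t in required):
        return False
    if optional and not any(term_matches(t, text) for t in optional):
        return False
    return True
```

For example, `matches('+Stokes -"Navier-Stokes"', 'Stokes flow')` is true, while the same query rejects a title containing "Navier-Stokes".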