
Courses matching "Analysis of categorical data" 
Analysis of multivariable and high-dimensional data Multivariate analysis of data is performed with the following aims:
1. understand the structure in data and summarise the data in simpler ways;
2. understand the relationship of one part of the data to another part; and
3. make decisions or draw inferences based on data.
The statistical analyses of multivariate data extend those of univariate data, and in doing so require
more advanced mathematical theory and computational techniques. The course begins with a
discussion of the three classical methods, Principal Component Analysis, Canonical Correlation
Analysis and Discriminant Analysis, which correspond to the aims above. We also learn about
Cluster Analysis, Factor Analysis and newer methods including Independent Component Analysis.
For most real data the underlying distribution is not known, but if the assumptions of multivariate
normality of the data hold, extra properties can be derived. Our treatment combines ideas,
theoretical properties and a strong computational component for each of the different methods we
discuss. For the computational part  with Matlab  we make use of real data and learn the use
of simulations in order to assess the performance of different methods in practice.
Topics covered:
1. Introduction to multivariate data, the multivariate normal distribution
2. Principal Component Analysis, theory and practice
3. Canonical Correlation Analysis, theory and practice
4. Discriminant Analysis, Fisher's LDA, linear and quadratic DA
5. Cluster Analysis: hierarchical and k-means methods
6. Factor Analysis and latent variables
7. Independent Component Analysis including an Introduction to Information Theory
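As a sketch of what the computational side involves (the course itself uses Matlab; this Python/NumPy version on made-up data is only illustrative), principal components are the eigenvectors of the sample covariance matrix, ordered by decreasing eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data with strong correlation between the coordinates.
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [1.5, 0.5]])
Xc = X - X.mean(axis=0)                      # centre the data

# Principal components = eigenvectors of the sample covariance matrix.
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Xc @ eigvecs                        # projections onto the PCs
print(eigvals)                               # variance explained by each component
```

The sample variance of each score column equals the corresponding eigenvalue, which is the sense in which PCA "summarises the data in simpler ways".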
The course will be based on my forthcoming monograph
Analysis of Multivariate and High-Dimensional Data: Theory and Practice, to be published by
Cambridge University Press.
More about this course... 

Complex Analysis III When the real numbers are replaced by the complex numbers in the definition of the derivative of a function, the resulting (complex-)differentiable functions turn out to have many remarkable properties not enjoyed by their real analogues. These functions, usually known as holomorphic functions, have numerous applications in areas such as engineering, physics, differential equations and number theory, to name just a few. The focus of this course is on the study of holomorphic functions and their most important basic properties. Topics covered are: Complex numbers and functions; complex limits and differentiability; elementary examples; analytic functions; complex line integrals; Cauchy's theorem and the Cauchy integral formula; Taylor's theorem; zeros of holomorphic functions; Rouché's Theorem; the Open Mapping theorem and Inverse Function theorem; Schwarz' Lemma; automorphisms of the ball, the plane and the Riemann sphere; isolated singularities and their classification; Laurent series; the Residue Theorem; calculation of definite integrals and evaluation of infinite series using residues; outlines of the Jordan Curve Theorem, Montel's Theorem and the Riemann Mapping Theorem.
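The Cauchy integral formula listed above lends itself to a quick numerical check; the sketch below (an illustration, not part of the course materials) approximates the contour integral of e^z / z around the unit circle with the trapezoidal rule and recovers 2*pi*i * f(0):

```python
import numpy as np

# Numerically verify the Cauchy integral formula for f(z) = exp(z):
#   (1 / (2*pi*i)) * contour integral of f(z)/z dz = f(0) = 1
# over the unit circle z = exp(i*theta).
N = 400
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
z = np.exp(1j * theta)            # points on the unit circle
dz = 1j * z * (2.0 * np.pi / N)   # dz for the trapezoidal rule
integral = np.sum(np.exp(z) / z * dz)
print(integral / (2j * np.pi))    # should be very close to 1
```

Because the integrand is analytic and periodic in theta, the trapezoidal rule converges spectrally, so even modest N gives near machine-precision agreement.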
More about this course... 

Integration and Analysis III The Riemann integral works well for continuous functions on closed bounded intervals, but it has certain deficiencies that cause problems, for example, in Fourier analysis and in the theory of differential equations. To overcome such deficiencies, a "new and improved" version of the integral was developed around the beginning of the twentieth century, and it is this theory with which this course is concerned. The underlying basis of the theory, measure theory, has important applications not just in analysis but also in the modern theory of probability.
Topics covered are: Set theory; Lebesgue outer measure; measurable sets; measurable functions. Integration of measurable functions over measurable sets. Convergence of sequences of functions and their integrals. General measure spaces and product measures. Fubini and Tonelli's theorems. Lp spaces. The Radon-Nikodym theorem. The Riesz representation theorem. Integration and Differentiation.
More about this course... 

Real Analysis Modern mathematics and physics rely on our ability to solve equations, if not in explicit exact forms, then at least in being able to establish the existence of solutions. To do this requires a knowledge of so-called "analysis", which in many respects is just Calculus in very general settings. The foundations for this work are commenced in Real Analysis, a course that develops this basic material in a systematic and rigorous manner in the context of real-valued functions of a real variable. Topics covered are: Basic set theory. The real numbers, least upper bounds, completeness and its consequences. Sequences: convergence, subsequences, Cauchy sequences. Open, closed, and compact sets of real numbers. Continuous functions, uniform continuity. Differentiation, the Mean Value Theorem. Sequences and series of functions, pointwise and uniform convergence. Power series and Taylor series. Metric spaces: basic notions generalised from the setting of the real numbers. The space of continuous functions on a compact interval. The Contraction Principle. Picard's Theorem on the existence and uniqueness of solutions of ordinary differential equations.
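The Contraction Principle mentioned above can be seen in action in a few lines; a minimal sketch (illustrative only), iterating the contraction g(x) = cos(x) on [0, 1], where |g'(x)| <= sin(1) < 1 guarantees convergence to the unique fixed point:

```python
import math

# g(x) = cos(x) is a contraction on [0, 1] since |g'(x)| = |sin(x)| <= sin(1) < 1,
# so by the Contraction Principle the iteration x -> cos(x) converges to the
# unique fixed point of cos, regardless of the starting point in [0, 1].
x = 0.5
for _ in range(100):
    x = math.cos(x)
print(x)  # the Dottie number, approximately 0.739085
```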
More about this course... 

Statistical Analysis and Modelling 1 This is a first course in Statistics for mathematically inclined students. It will address the key principles underlying commonly used statistical methods such as confidence intervals, hypothesis tests, inference for means and proportions, and linear regression. It will develop a deeper mathematical understanding of these ideas, many of which will be familiar from studies in secondary school. The application of basic and more advanced statistical methods will be illustrated on a range of problems from areas such as medicine, science, technology, government, commerce and manufacturing. The use of the statistical package SPSS will be developed through a sequence of computer practicals. Topics covered will include: basic probability and random variables, fundamental distributions, inference for means and proportions, comparison of independent and paired samples, simple linear regression, diagnostics and model checking, multiple linear regression, simple factorial models, models with factors and continuous predictors.
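As an indication of the kind of computation behind these methods (the course uses SPSS; this Python sketch on simulated data is only illustrative), simple linear regression reduces to two least-squares formulas plus a standard error for the slope:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated data from the true line y = 2 + 0.7 x plus noise.
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.7 * x + rng.normal(scale=0.5, size=x.size)

# Least-squares estimates of intercept and slope.
xbar, ybar = x.mean(), y.mean()
beta1 = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
beta0 = ybar - beta1 * xbar

# Residual standard error and an approximate 95% confidence interval for the slope.
resid = y - (beta0 + beta1 * x)
s = np.sqrt(np.sum(resid ** 2) / (x.size - 2))
se_beta1 = s / np.sqrt(np.sum((x - xbar) ** 2))
print(beta0, beta1, (beta1 - 1.96 * se_beta1, beta1 + 1.96 * se_beta1))
```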
More about this course... 

Topology and Analysis III Solving equations is a crucial aspect of working in mathematics, physics, engineering, and many other fields. These equations might be straightforward algebraic statements, or complicated systems of differential equations, but there are some fundamental questions common to all of these settings: does a solution exist? If so, is it unique? And if we know of the existence of some specific solution, how do we determine it explicitly or as accurately as possible? This course develops the foundations required to rigorously establish the existence of solutions to various equations, thereby laying the basis for the study of solutions. Through an understanding of the foundations of analysis, we obtain insight critical in numerous areas of application, ranging across physics, engineering, economics and finance. Topics covered are: sets, functions, metric spaces and normed linear spaces, compactness, connectedness, and completeness. Banach fixed point theorem and applications, uniform continuity and convergence. General topological spaces, generating topologies, topological invariants, quotient spaces. Introduction to Hilbert spaces and bounded operators on Hilbert spaces.
More about this course... 
Events matching "Analysis of categorical data" 
Stability of time-periodic flows 15:10 Fri 10 Mar, 2006 :: G08 Mathematics Building University of Adelaide :: Prof. Andrew Bassom, School of Mathematics and Statistics, University of Western Australia
Time-periodic shear layers occur naturally in a wide
range of applications from engineering to physiology. Transition to
turbulence in such flows is of practical interest and there have been
several papers dealing with the stability of flows composed of a
steady component plus an oscillatory part with zero mean. In such
flows a possible instability mechanism is associated with the mean
component so that the stability of the flow can be examined using some
sort of perturbation-type analysis. This strategy fails when the mean
part of the flow is small compared with the oscillatory component
which, of course, includes the case when the mean part is precisely
zero.
This difficulty with analytical studies has meant that the stability
of purely oscillatory flows has relied on various numerical
methods. Until very recently such techniques have only ever predicted
that the flow is stable, even though experiments suggest that they do
become unstable at high enough speeds. In this talk I shall expand on
this discrepancy with emphasis on the particular case of the so-called
flat Stokes layer. This flow, which is generated in a deep layer of
incompressible fluid lying above a flat plate which is oscillated in
its own plane, represents one of the few exact solutions of the
NavierStokes equations. We show theoretically that the flow does
become unstable to waves which propagate relative to the basic motion
although the theory predicts that this occurs much later than has been
found in experiments. Reasons for this discrepancy are examined by
reference to calculations for oscillatory flows in pipes and
channels. Finally, we propose some new experiments that might reduce
this disagreement between the theoretical predictions of instability
and practical realisations of breakdown in oscillatory flows. 
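For readers unfamiliar with the flow in question: in suitable non-dimensional variables the flat Stokes layer has the well-known exact Navier-Stokes solution u(y, t) = exp(-y) cos(t - y), an oscillation whose amplitude decays exponentially away from the oscillating plate. A short sketch (illustrative, not from the talk) evaluates the profile:

```python
import numpy as np

# The flat Stokes layer: deep fluid above a plate oscillating in its own plane.
# In non-dimensional variables the exact Navier-Stokes solution is
#   u(y, t) = exp(-y) * cos(t - y),
# a wall oscillation that decays exponentially with distance y from the plate.
y = np.linspace(0.0, 6.0, 61)
t = 0.0
u = np.exp(-y) * np.cos(t - y)
print(u[0], u[-1])   # velocity at the wall vs far from it
```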

Homological algebra and applications  a historical survey 15:10 Fri 19 May, 2006 :: G08 Mathematics Building University of Adelaide :: Prof. Amnon Neeman
Homological algebra is a curious branch of
mathematics; it is a powerful tool which has been used in many diverse
places, without any clear understanding why it should be so useful.
We will give a list of applications, proceeding chronologically: first
to topology, then to complex analysis, then to algebraic geometry,
then to commutative algebra and finally (if we have time) to
noncommutative algebra. At the end of the talk I hope to be able to
say something about the part of homological algebra on which I have
worked, and its applications. That part is derived categories. 

Watching evolution in real time; problems and potential research areas.
15:10 Fri 26 May, 2006 :: G08. Mathematics Building University of Adelaide :: Prof Alan Cooper (Federation Fellow)
Recent studies (1) have indicated problems with our
ability to use the genetic distances between species to estimate the
time since their divergence (so-called molecular clocks). An
exponential decay curve has been detected in comparisons of closely
related taxa in mammal and bird groups, and rough approximations
suggest that molecular clock calculations may be problematic for the
recent past (e.g. <1 million years). Unfortunately, this period
encompasses a number of key evolutionary events where estimates of
timing are critical such as modern human evolutionary history, the
domestication of animals and plants, and most issues involved in
conservation biology. A solution (formulated at UA) will be briefly
outlined. A second area of active interest is the recent suggestion
(2) that mitochondrial DNA diversity does not track population size in
several groups, in contrast to standard thinking. This finding has
been interpreted as showing that mtDNA may not be evolving neutrally,
as has long been assumed.
Large ancient DNA datasets provide a means to examine these issues, by
revealing evolutionary processes in real time (3). The data also
provide a rich area for mathematical investigation as temporal
information provides information about several parameters that are
unknown in serial coalescent calculations (4).
References:
(1) Ho SYW et al. Time dependency of molecular rate estimates and systematic overestimation of recent divergence times. Mol. Biol. Evol. 22, 1561-1568 (2005); Penny D, Nature 436, 183-184 (2005).
(2) Bazin E. et al. Population size does not influence mitochondrial genetic diversity in animals. Science 312, 570 (2006); Eyre-Walker A. Size does not matter for mitochondrial DNA. Science 312, 537 (2006).
(3) Shapiro B et al. Rise and fall of the Beringian steppe bison. Science 306, 1561-1565 (2004); Chan et al. Bayesian estimation of the timing and severity of a population bottleneck from ancient DNA. PLoS Genetics 2, e59 (2006).
(4) Drummond et al. Measurably evolving populations. Trends Ecol. Evol. 18, 481-488 (2003); Drummond et al. Bayesian coalescent inference of past population dynamics from molecular sequences. Mol. Biol. Evol. 22, 1185-1192 (2005).


A Bivariate Zero-inflated Poisson Regression Model and application to some Dental Epidemiological data 14:10 Fri 27 Oct, 2006 :: G08 Mathematics Building University of Adelaide :: University Prof Sudhir Paul
Data in the form of paired (pre-treatment, post-treatment) counts arise in the study of the effects of several treatments after accounting for possible covariate effects. An example of such a data set comes from a dental epidemiological study in Belo Horizonte (the Belo Horizonte caries prevention study), which evaluated various programmes for reducing caries. These data may also show more pairs of zeros than can be accounted for by a simpler model, such as a bivariate Poisson regression model. In such situations we propose to use a zero-inflated bivariate Poisson regression (ZIBPR) model for the paired (pre-treatment, post-treatment) count data. We develop an EM algorithm to obtain maximum likelihood estimates of the parameters of the ZIBPR model. Further, we obtain the exact Fisher information matrix of the maximum likelihood estimates and develop a procedure for testing treatment effects. The procedure to detect treatment effects based on the ZIBPR model is compared, in terms of size, by simulations, with an earlier procedure using a zero-inflated Poisson regression (ZIPR) model of the post-treatment count with the pre-treatment count treated as a covariate. The procedure based on the ZIBPR model holds its level most effectively. A further simulation study indicates good power properties of the procedure based on the ZIBPR model. We then compare our analysis of the decayed, missing and filled teeth (DMFT) index data from the caries prevention study based on the ZIBPR model with the analysis using a zero-inflated Poisson regression model in which the pre-treatment DMFT index is taken to be a covariate. 
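The talk's model is bivariate, but the flavour of the EM algorithm can be conveyed with a univariate zero-inflated Poisson; the following sketch (simulated data, not the Belo Horizonte study) alternates the usual E and M steps:

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated univariate zero-inflated Poisson data: with probability pi the
# observation is a structural zero, otherwise it is Poisson(lam).
pi_true, lam_true = 0.3, 2.5
n = 2000
y = np.where(rng.random(n) < pi_true, 0, rng.poisson(lam_true, size=n))

pi_hat, lam_hat = 0.5, 1.0
for _ in range(200):
    # E-step: posterior probability that each observed zero is a structural zero
    # (observations with y > 0 cannot be structural zeros).
    w = np.where(
        y == 0,
        pi_hat / (pi_hat + (1 - pi_hat) * np.exp(-lam_hat)),
        0.0,
    )
    # M-step: update the mixing weight and the Poisson mean.
    pi_hat = w.mean()
    lam_hat = np.sum((1 - w) * y) / np.sum(1 - w)

print(pi_hat, lam_hat)   # estimates close to (0.3, 2.5)
```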

Identifying the source of photographic images by analysis of JPEG quantization artifacts 15:10 Fri 27 Apr, 2007 :: G08 Mathematics Building University of Adelaide :: Dr Matthew Sorell
In a forensic context, digital photographs are becoming more common as sources of evidence in criminal and civil matters. Questions that arise include identifying the make and model of a camera to assist in the gathering of physical evidence; matching photographs to a particular camera through the camera's unique characteristics; and determining the integrity of a digital image, including whether the image contains steganographic information. From a digital file perspective, there is also the question of whether metadata has been deliberately modified to mislead the investigator, and in the case of multiple images, whether a timeline can be established from the various timestamps within the file, imposed by the operating system or determined by other image characteristics. This talk is concerned specifically with techniques to identify the make, model series and particular source camera model given a digital image. We exploit particular characteristics of the camera's JPEG coder to demonstrate that such identification is possible, and that even when an image has subsequently been reprocessed, there are often sufficient residual characteristics of the original coding to at least narrow down the possible camera models of interest. 
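As a toy illustration of the underlying idea (not the speaker's actual method), JPEG quantisation leaves DCT coefficients on a lattice whose spacing, the quantisation step, can often be recovered from the decoded data alone:

```python
import numpy as np

rng = np.random.default_rng(3)
# JPEG stores each DCT coefficient as round(c / q) * q for a quantisation
# step q that varies between camera models.  Given only the decoded
# coefficients, q can often be recovered as the common spacing of the values.
q_true = 6
coeffs = rng.normal(scale=40.0, size=5000)      # stand-in DCT coefficients
quantised = np.round(coeffs / q_true) * q_true  # what the JPEG coder keeps

nonzero = np.unique(np.abs(quantised[quantised != 0])).astype(int)
q_est = int(np.gcd.reduce(nonzero))             # gcd of the observed multiples
print(q_est)   # recovers the quantisation step
```

Residual lattice structure of this kind is what survives even after an image is reprocessed, which is why it can narrow down the source camera.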

Likelihood inference for a problem in particle physics 15:10 Fri 27 Jul, 2007 :: G04 Napier Building University of Adelaide :: Prof. Anthony Davison
The Large Hadron Collider (LHC), a particle accelerator located at CERN, near Geneva, is (currently!) expected to start operation in early 2008. It is located in an underground tunnel 27km in circumference, and when fully operational, will be the world's largest and highest energy particle accelerator. It is hoped that it will provide evidence for the existence of the Higgs boson, the last remaining particle of the so-called Standard Model of particle physics. The quantity of data that will be generated by the LHC is roughly equivalent to that of the European telecommunications network, but this will be boiled down to just a few numbers. After a brief introduction, this talk will outline elements of the statistical problem of detecting the presence of a particle, and then sketch how higher order likelihood asymptotics may be used for signal detection in this context. The work is joint with Nicola Sartori, of the Università Ca' Foscari, in Venice. 
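A much-simplified version of the detection problem (illustrative numbers only, not the LHC analysis): observe a count n from a Poisson with mean s + b, where the background rate b is known, and test s = 0 with the likelihood-ratio statistic:

```python
import numpy as np

# Toy signal detection: n ~ Poisson(s + b) with known background b.
# The likelihood-ratio statistic for testing s = 0 gives an approximate
# significance z = sqrt(lambda) by Wilks' theorem.
def loglik(s, b, n):
    mu = s + b
    return n * np.log(mu) - mu   # Poisson log-likelihood up to a constant

b, n = 10.0, 25                  # made-up background rate and observed count
s_hat = max(n - b, 0.0)          # MLE of the signal rate
lam = 2.0 * (loglik(s_hat, b, n) - loglik(0.0, b, n))
z = np.sqrt(lam)                 # approximate significance in "sigma"
print(s_hat, z)
```

Higher-order likelihood asymptotics, the subject of the talk, refine exactly this kind of first-order approximation when counts are small.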

Regression: a backwards step? 13:10 Fri 7 Sep, 2007 :: Maths G08 :: Dr Gary Glonek
Most students of high school mathematics will have encountered the technique of fitting a line to data by least squares. Those who have taken a university statistics course will also have heard this method referred to as regression. However, it is not obvious from common dictionary definitions why this should be the case. For example, "reversion to an earlier or less advanced state or form". In this talk, the mathematical phenomenon that gave regression its name will be explained and will be shown to have implications in some unexpected contexts.
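The phenomenon in question, regression toward the mean, can be demonstrated in a few lines; this sketch (simulated data) conditions on an extreme value of one variable and shows the paired variable sitting much closer to the mean:

```python
import numpy as np

rng = np.random.default_rng(4)
# For a standard bivariate normal pair with correlation rho, the best
# prediction of Y given X = x is rho * x: closer to the mean than x itself.
rho = 0.6
n = 100_000
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)

tall = x > 1.5                         # condition on an extreme X
print(x[tall].mean(), y[tall].mean())  # Y "regresses" toward 0
```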


The Linear Algebra of Internet Search Engines 15:10 Fri 5 Oct, 2007 :: G04 Napier Building University of Adelaide :: Dr Lesley Ward :: School of Mathematics and Statistics, University of South Australia
We often want to search the web for information on a given topic. Early websearch algorithms worked by counting up the number of times the words in a query topic appeared on each webpage. If the topic words appeared often on a given page, that page was ranked highly as a source of information on that topic.
More recent algorithms rely on Link Analysis. People make judgments about how useful a given page is for a given topic, and they express these judgments through the hyperlinks they choose to put on their own webpages. Link-analysis algorithms aim to mine the collective wisdom encoded in the resulting network of links.
I will discuss the linear algebra that forms the common underpinning of three linkanalysis algorithms for web search. I will also present some work on refining one such algorithm, Kleinberg's HITS algorithm.
This is joint work with Joel Miller, Greg Rae, Fred Schaefer, Ayman Farahat, Tom LoFaro, Tracy Powell, Estelle Basor, and Kent Morrison. It originated in a Mathematics Clinic project at Harvey Mudd College. 
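A minimal sketch of Kleinberg's HITS algorithm on a toy four-page web (the link structure here is invented for illustration): good hubs point to good authorities and vice versa, and the alternating update converges to the principal singular vectors of the adjacency matrix.

```python
import numpy as np

# Toy web graph: A[i, j] = 1 if page i links to page j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)

# HITS power iteration: authorities a = A^T h, hubs h = A a, renormalising
# each step.  This converges to the top right/left singular vectors of A.
h = np.ones(4)
for _ in range(100):
    a = A.T @ h
    a /= np.linalg.norm(a)
    h = A @ a
    h /= np.linalg.norm(h)

print(a, h)   # page 2, with the most in-links, gets the top authority score
```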

Statistical Critique of the Intergovernmental Panel on Climate Change's work on Climate Change. 18:00 Wed 17 Oct, 2007 :: Union Hall University of Adelaide :: Mr Dennis Trewin
Climate change is one of the most important issues facing us today. Many governments have introduced or are developing appropriate policy interventions to (a) reduce the growth of greenhouse gas emissions in order to mitigate future climate change, or (b) adapt to future climate change.
This important work deserves a high quality statistical data base, but there are statistical shortcomings in the work of the Intergovernmental Panel on Climate Change (IPCC). There has been very little involvement of qualified statisticians in the very important work of the IPCC, which appears to be scientifically meritorious in most other ways.
Mr Trewin will explain these shortcomings and outline his views on likely future climate change, taking into account the statistical deficiencies.
His conclusions suggest climate change is still an important issue that needs to be addressed but the range of likely outcomes is a lot lower than has been suggested by the IPCC.
This presentation will be based on an invited paper presented at the OECD World Forum.


Moderated Statistical Tests for Digital Gene Expression Technologies 15:10 Fri 19 Oct, 2007 :: G04 Napier Building University of Adelaide :: Dr Gordon Smyth :: Walter and Eliza Hall Institute of Medical Research in Melbourne, Australia
Digital gene expression (DGE) technologies measure gene expression by counting sequence tags. They are sensitive technologies for measuring gene expression on a genomic scale, without the need for prior knowledge of the genome sequence. As the cost of DNA sequencing decreases, the number of DGE datasets is expected to grow dramatically. Various tests of differential expression have been proposed for replicated DGE data using overdispersed binomial or Poisson models for the counts, but none of these is usable when the number of replicates is very small. We develop tests using the negative binomial distribution to model overdispersion relative to the Poisson, and use conditional weighted likelihood to moderate the level of overdispersion across genes. A heuristic empirical Bayes algorithm is developed which is applicable to very general likelihood estimation contexts. Not only is our strategy applicable even with the smallest number of replicates, but it also proves to be more powerful than previous strategies when more replicates are available. The methodology is applicable to other counting technologies, such as proteomic spectral counts.
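A small illustration of the overdispersion at issue (simulated counts, not the moderated test itself): negative binomial counts with mean mu and dispersion phi have variance mu + phi * mu^2, and both parameters can be recovered from replicate counts by the method of moments:

```python
import numpy as np

rng = np.random.default_rng(5)
# Tag counts are often more variable than Poisson allows.  Simulate negative
# binomial counts as a Poisson-gamma mixture with mean mu and dispersion phi,
# so that Var = mu + phi * mu**2.
mu_true, phi_true = 50.0, 0.2
n = 5000
lam = rng.gamma(shape=1.0 / phi_true, scale=mu_true * phi_true, size=n)
counts = rng.poisson(lam)

# Method-of-moments estimates of mean and dispersion.
mu_hat = counts.mean()
phi_hat = (counts.var(ddof=1) - mu_hat) / mu_hat**2
print(mu_hat, phi_hat)   # the variance clearly exceeds the Poisson value mu
```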


Global and Local stationary modelling in finance: Theory and empirical evidence 14:10 Thu 10 Apr, 2008 :: G04 Napier Building University of Adelaide :: Prof. Dominique Guégan :: Université Paris 1 Panthéon-Sorbonne
To model real data sets using second order stochastic processes requires that the data sets satisfy the second order stationarity condition. This stationarity condition concerns the unconditional moments of the process. It is in that context that most of the models developed since the sixties have been studied; we refer to the ARMA processes (Brockwell and Davis, 1988), the ARCH, GARCH and EGARCH models (Engle, 1982, Bollerslev, 1986, Nelson, 1990), the SETAR process (Lim and Tong, 1980 and Tong, 1990), the bilinear model (Granger and Andersen, 1978, Guégan, 1994), the EXPAR model (Haggan and Ozaki, 1980), the long memory process (Granger and Joyeux, 1980, Hosking, 1981, Gray, Zang and Woodward, 1989, Beran, 1994, Giraitis and Leipus, 1995, Guégan, 2000), and the switching process (Hamilton, 1988). For all these models, we get an invertible causal solution under specific conditions on the parameters; the forecast points and forecast intervals are then available.
Thus, the stationarity assumption is the basis for a general asymptotic theory for identification, estimation and forecasting. It guarantees that the increase of the sample size leads to more and more information of the same kind which is basic for an asymptotic theory to make sense.
Non-stationary modelling also has a long tradition in econometrics. It is based on the conditional moments of the data generating process. It appears mainly in the heteroscedastic and volatility models, like the GARCH and related models, and stochastic volatility processes (Ghysels, Harvey and Renault 1997). This non-stationarity also appears in a different way with structural change models like the switching models (Hamilton, 1988), the stop-break model (Diebold and Inoue, 2001, Breidt and Hsu, 2002, Granger and Hyung, 2004) and the SETAR models, for instance. It can also be observed from linear models with time-varying coefficients (Nicholls and Quinn, 1982, Tsay, 1987).
Thus, using stationary unconditional moments suggests global stationarity for the model, but using non-stationary unconditional moments, non-stationary conditional moments, or assuming the existence of states suggests that this global stationarity fails and that we only observe locally stationary behavior.
The growing evidence of instability in the stochastic behavior of stocks, of exchange rates, and of some economic data sets such as growth rates, characterized by volatility or by jumps in the variance or in the levels of prices, forces us to question the assumption of global stationarity and its consequences for modelling, particularly for forecasting. We can therefore address several questions with respect to these remarks.
1. What kinds of non-stationarity affect the major financial and economic data sets? How do we detect them?
2. Local and global stationarity: how are they defined?
3. What is the impact of evidence of non-stationarity on statistics computed from globally non-stationary data sets?
4. How can we analyse data sets in the globally non-stationary framework? Does the asymptotic theory work in a non-stationary framework?
5. What kinds of models create local stationarity instead of global stationarity? How can we use them to develop a modelling and forecasting strategy?
These questions began to be discussed in some papers in the economic literature. For some of these questions, the answers are known; for others, very few works exist. In this talk I will discuss all these problems and will propose two new strategies and models to solve them. Several interesting topics in empirical finance awaiting future research will also be discussed.
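A minimal simulation (illustrative parameter values) of the GARCH(1,1) model mentioned above shows how a globally stationary process can still look locally non-stationary through its wandering conditional variance:

```python
import numpy as np

rng = np.random.default_rng(6)
# GARCH(1,1): x_t = sqrt(h_t) * eps_t,  h_t = omega + alpha * x_{t-1}^2 + beta * h_{t-1}.
# The process is covariance stationary when alpha + beta < 1, with
# unconditional variance omega / (1 - alpha - beta), yet h_t drifts, so
# short stretches of the series look non-stationary ("local stationarity").
omega, alpha, beta = 0.1, 0.1, 0.85
n = 20_000
h = np.empty(n)
x = np.empty(n)
h[0] = omega / (1 - alpha - beta)        # start at the unconditional variance
x[0] = np.sqrt(h[0]) * rng.normal()
for t in range(1, n):
    h[t] = omega + alpha * x[t - 1] ** 2 + beta * h[t - 1]
    x[t] = np.sqrt(h[t]) * rng.normal()

print(x.var(), omega / (1 - alpha - beta))  # sample vs theoretical variance
```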


Computational Methods for Phase Response Analysis of Circadian Clocks 15:10 Fri 18 Jul, 2008 :: G04 Napier Building University of Adelaide. :: Prof. Linda Petzold :: Dept. of Mechanical and Environmental Engineering, University of California, Santa Barbara
Circadian clocks govern daily behaviors of organisms in all kingdoms of life. In mammals, the master clock resides in the suprachiasmatic nucleus (SCN) of the hypothalamus. It is composed of thousands of neurons, each of which contains a sloppy oscillator, a molecular clock governed by a transcriptional feedback network. Via intercellular signaling, the cell population synchronizes spontaneously, forming a coherent oscillation. This multi-oscillator is then entrained to its environment by the daily light/dark cycle.
At both the cellular and tissue levels, the most important feature of the clock is its ability not simply to keep time, but to adjust its time, or phase, to signals. We present the parametric impulse phase response curve (pIPRC), an analytical analog to the phase response curve (PRC) used experimentally. We use the pIPRC to understand both the consequences of intercellular signaling and the light entrainment process. Further, we determine which model components determine the phase response behavior of a single oscillator by using a novel model reduction technique. We reduce the number of model components while preserving the pIPRC and then incorporate the resultant model into a coupled SCN tissue model. Emergent properties, including the ability of the population to synchronize spontaneously, are preserved in the reduction. Finally, we present some mathematical tools for the study of synchronization in a network of coupled, noisy oscillators.


Betti's Reciprocal Theorem for Inclusion and Contact Problems 15:10 Fri 1 Aug, 2008 :: G03 Napier Building University of Adelaide :: Prof. Patrick Selvadurai :: Department of Civil Engineering and Applied Mechanics, McGill University
Enrico Betti (1823-1892) is recognized in the mathematics community for his pioneering contributions to topology. An equally important contribution is his formulation of the reciprocity theorem applicable to elastic bodies that satisfy the classical equations of linear elasticity. Although James Clerk Maxwell (1831-1879) proposed a law of reciprocal displacements and rotations in 1864, the contribution of Betti is acknowledged for its underlying formal mathematical basis and generality. The purpose of this lecture is to illustrate how Betti's reciprocal theorem can be used to full advantage to develop compact analytical results for certain contact and inclusion problems in the classical theory of elasticity. Inclusion problems are encountered in a number of areas in applied mechanics ranging from composite materials to geomechanics. In composite materials, the inclusion represents an inhomogeneity that is introduced to increase either the strength or the deformability characteristics of the resulting material. In geomechanics, the inclusion represents a constructed material region, such as a ground anchor, that is introduced to provide load transfer from structural systems. Similarly, contact problems have applications to the modelling of the behaviour of indentors used in materials testing to the study of foundations used to distribute loads transmitted from structures. In the study of conventional problems the inclusions and the contact regions are directly loaded and this makes their analysis quite straightforward. When the interaction is induced by loads that are placed exterior to the indentor or inclusion, the direct analysis of the problem becomes inordinately complicated both in terms of formulation of the integral equations and their numerical solution. 
It is shown by a set of selected examples that the application of Betti's reciprocal theorem leads to the development of exact closed form solutions to what would otherwise be approximate solutions achievable only through the numerical solution of a set of coupled integral equations. 

Elliptic equation for diffusion-advection flows 15:10 Fri 15 Aug, 2008 :: G03 Napier Building University of Adelaide :: Prof. Pavel Bedrikovsetsky :: Australian School of Petroleum Science, University of Adelaide.
The standard diffusion equation is obtained by Einstein's method and its generalisation, Fokker-Planck-Kolmogorov-Feller theory. The time between jumps in Einstein's derivation is constant.
We discuss random walks with a residence time distribution, which occur for flows of solutes and suspensions/colloids in porous media, CO2 sequestration in coal mines, and several processes in chemical, petroleum and environmental engineering. The rigorous application of Einstein's method results in a new equation, containing the time and the mixed dispersion terms expressing the dispersion of the particle time steps.
Usually, adding the second time derivative requires additional initial data. For the equation derived, the condition that the solution remain bounded as time tends to infinity provides uniqueness of the Cauchy problem's solution.
The solution of the pulse injection problem describing a common tracer injection experiment is studied in greater detail. The new theory predicts a delay of the maximum of the tracer, compared to the velocity of the flow, while its forward "tail" contains many more particles than in the solution of the classical parabolic (advection-dispersion) equation. This is in agreement with the experimental observations and predictions of the direct simulation.


The Role of Walls in Chaotic Mixing 15:10 Fri 22 Aug, 2008 :: G03 Napier Building University of Adelaide :: Dr JeanLuc Thiffeault :: Department of Mathematics, University of Wisconsin  Madison
I will report on experiments of chaotic mixing in closed and open
vessels, in which a highly viscous fluid is stirred by a moving
rod. In these experiments we analyze quantitatively how the
concentration field of a low-diffusivity dye relaxes towards
homogeneity, and observe a slow algebraic decay, at odds with the
exponential decay predicted by most previous studies. Visual
observations reveal the dominant role of the vessel wall, which
strongly influences the concentration field in the entire domain and
causes the anomalous scaling. A simplified 1D model supports our
experimental results. Quantitative analysis of the concentration
pattern leads to scalings for the distributions and the variance of
the concentration field consistent with experimental and numerical
results. I also discuss possible ways of avoiding the limiting role
of walls.
This is joint work with Emmanuelle Gouillart, Olivier Dauchot, and
Stephane Roux. 

Mathematical modelling of blood flow in curved arteries 15:10 Fri 12 Sep, 2008 :: G03 Napier Building University of Adelaide :: Dr Jennifer Siggers :: Imperial College London
Atherosclerosis, characterised by plaques, is the most common arterial
disease. Plaques tend to develop in regions of low mean wall shear
stress, and regions where the wall shear stress changes direction during
the course of the cardiac cycle. To investigate the effect of the
arterial geometry and driving pressure gradient on the wall shear stress
distribution we consider an idealised model of a curved artery with
uniform curvature. We assume that the flow is fully developed and seek
solutions of the governing equations, finding the effect of the
parameters on the flow and wall shear stress distribution. Most
previous work assumes the curvature ratio is asymptotically small;
however, many arteries have significant curvature (e.g. the aortic arch
has curvature ratio approx 0.25), and in this work we consider in
particular the effect of finite curvature.
We present an extensive analysis of curved-pipe flow driven by steady
and unsteady pressure gradients. Increasing the curvature causes the
shear stress on the inside of the bend to rise, indicating that the risk
of plaque development would be overestimated by considering only the
weak curvature limit. 

Oceanographic Research at the South Australian Research and Development Institute: opportunities for collaborative research 15:10 Fri 21 Nov, 2008 :: Napier G04 :: Associate Prof John Middleton :: South Australian Research and Development Institute
Increasing threats to S.A.'s fisheries and marine environment have underlined the need for soundly based research into the ocean circulation and ecosystems (phyto/zooplankton) of the shelf and gulfs. With the support of Marine Innovation SA, the Oceanography Program has, within 2 years, grown to include 6 FTEs and a budget of over $4.8M. The program currently leads two major research projects, both of which involve numerical and applied mathematical modelling of oceanic flow and ecosystems as well as statistical techniques for the analysis of data. The first is the implementation of the Southern Australian Integrated Marine Observing System (SAIMOS) that is providing data to understand the dynamics of shelf boundary currents, monitor for climate change and understand the phyto/zooplankton ecosystems that underpin SA's wild fisheries and aquaculture. SAIMOS involves the use of ship-based sampling, the deployment of underwater marine moorings, underwater gliders, HF Ocean RADAR, acoustic tracking of tagged fish and autonomous underwater vehicles.
The second major project involves measuring and modelling the ocean circulation and biological systems within Spencer Gulf and the impact on prawn larval dispersal and on the sustainability of existing and proposed aquaculture sites. The discussion will focus on opportunities for collaborative research with both faculty and students in this exciting growth area of S.A. science.


Key Predistribution in Grid-Based Wireless Sensor Networks 15:10 Fri 12 Dec, 2008 :: Napier G03 :: Dr Maura Paterson :: Information Security Group at Royal Holloway, University of London.
Wireless sensors are small, battery-powered devices that are deployed to
measure quantities such as temperature within a given region, then form
a wireless network to transmit and process the data they collect.
We discuss the problem of distributing symmetric cryptographic keys to
the nodes of a wireless sensor network in the case where the sensors are
arranged in a square or hexagonal grid, and we propose a key
predistribution scheme for such networks that is based on Costas arrays.
We introduce more general structures known as distinct-difference
configurations, and show that they provide a flexible choice of
parameters in our scheme, leading to more efficient performance than
that achieved by prior schemes from the literature. 
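The distinct-difference property that makes Costas arrays suitable for key predistribution can be checked directly. A minimal sketch (not the scheme of the talk; the example array [2, 4, 3, 1] is the standard Welch construction with p = 5, g = 2):

```python
def is_costas(perm):
    """Check whether a permutation (1-indexed values) defines a Costas array:
    all displacement vectors between pairs of grid points must be distinct."""
    points = list(enumerate(perm, start=1))  # (column, row) pairs
    seen = set()
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[j][0] - points[i][0]
            dy = points[j][1] - points[i][1]
            if (dx, dy) in seen:
                return False
            seen.add((dx, dy))
    return True

print(is_costas([2, 4, 3, 1]))  # Welch construction, p=5, g=2 -> True
print(is_costas([1, 2, 3, 4]))  # diagonal: vector (1, 1) repeats -> False
```

The same check generalises to the distinct-difference configurations of the talk by replacing the permutation with an arbitrary set of grid points.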

Bursts and canards in a pituitary lactotroph model 15:10 Fri 6 Mar, 2009 :: Napier LG29 :: Dr Martin Wechselberger :: University of Sydney
Bursting oscillations in nerve cells have been the focus of a great deal of attention by mathematicians. These are typically studied by taking advantage of multiple timescales in the system under study to perform a singular perturbation analysis. Bursting also occurs in hormone-secreting pituitary cells, but is characterized by fast bursts with small electrical impulses. Although the separation of timescales is not as clear, singular perturbation analysis is still the key to understanding the bursting mechanism. In particular, we will show that canards are responsible for the observed oscillatory behaviour. 

From histograms to multivariate polynomial histograms and shape estimation 12:10 Thu 19 Mar, 2009 :: Napier 210 :: A/Prof Inge Koch
Histograms are convenient and easy-to-use tools for estimating the shape of
data, but they have serious problems which are magnified for multivariate data.
We combine classic histograms with shape estimation by polynomials. The new
relatives, `polynomial histograms', have surprisingly nice mathematical
properties, which we will explore in this talk. We also show how they can be
used for real data of 10-20 dimensions to analyse and understand the shape of
these data.


Geometric analysis on the noncommutative torus 13:10 Fri 20 Mar, 2009 :: School Board Room :: Prof Jonathan Rosenberg :: University of Maryland
Noncommutative geometry (in the sense of Alain Connes) involves
replacing a conventional space by a "space" in which the algebra of
functions is noncommutative. The simplest truly nontrivial
noncommutative manifold is the noncommutative 2torus, whose algebra
of functions is also called the irrational rotation algebra. I will
discuss a number of recent results on geometric analysis on the
noncommutative torus, including the study of nonlinear noncommutative
elliptic PDEs (such as the noncommutative harmonic map equation) and
noncommutative complex analysis (with noncommutative elliptic
functions). 

Multiscale tools for interpreting cell biology data 15:10 Fri 17 Apr, 2009 :: Napier LG29 :: Dr Matthew Simpson :: University of Melbourne
Trajectory data from observations of a random walk process are often used to characterize macroscopic transport coefficients and to infer motility mechanisms in cell biology. New continuum equations describing the average moments of the position of an individual agent in a population of interacting agents are derived and validated. Unlike standard noninteracting random walks, the new moment equations explicitly represent the interactions between agents as they are coupled to the macroscopic agent density. Key issues associated with the validity of the new continuum equations and the interpretation of experimental data will be explored. 
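As a toy illustration of the first step in such analyses (not the interacting-agent model of the talk), a macroscopic diffusion coefficient can be estimated from simulated non-interacting random walks via the mean squared displacement; the 1D setting and unit step lengths are assumptions for illustration:

```python
import random

random.seed(1)

# Simulate many independent +/-1 random walks and estimate the diffusion
# coefficient D from the mean squared displacement, MSD(t) ~ 2*D*t in 1D.
walkers, steps = 2000, 100
positions = [0] * walkers
for _ in range(steps):
    positions = [x + random.choice((-1, 1)) for x in positions]

msd = sum(x * x for x in positions) / walkers
D = msd / (2 * steps)  # per-step variance is 1, so D should be near 0.5
print(round(D, 2))
```

For interacting agents, the point of the talk is precisely that this naive estimator breaks down and corrected moment equations are needed.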

Statistical analysis for harmonized development of systemic organs in human fetuses 11:00 Thu 17 Sep, 2009 :: School Board Room :: Prof Kanta Naito :: Shimane University
The growth processes of human babies have been studied
sufficiently in scientific fields, but there have still been many issues
about the developments of human fetus which are not clarified. The aim of
this research is to investigate the developing process of systemic organs of
human fetuses based on the data set of measurements of fetus's bodies and
organs. Specifically, this talk is concerned with giving a mathematical
understanding for the harmonized developments of the organs of human
fetuses. The method to evaluate such harmonies is proposed by the use of the
maximal dilatation appeared in the theory of quasiconformal mapping. 

The proof of the Poincare conjecture 15:10 Fri 25 Sep, 2009 :: Napier 102 :: Prof Terence Tao :: UCLA
In a series of three papers from 2002-2003, Grigori Perelman gave a spectacular proof of the Poincare Conjecture (every smooth compact simply connected three-dimensional manifold is topologically isomorphic to a sphere), one of the most famous open problems in mathematics (and one of the seven Clay Millennium Prize Problems worth a million dollars each), by developing several new groundbreaking advances in Hamilton's theory of Ricci flow on manifolds. In this talk I describe in broad detail how the proof proceeds, and briefly discuss some of the key turning points in the argument.
About the speaker:
Terence Tao was born in Adelaide, Australia, in 1975. He has been a professor of mathematics at UCLA since 1999, having completed his PhD under Elias Stein at Princeton in 1996. Tao's areas of research include harmonic analysis, PDE, combinatorics, and number theory. He has received a number of awards, including the Salem Prize in 2000, the Bochner Prize in 2002, the Fields Medal and SASTRA Ramanujan Prize in 2006, and the MacArthur Fellowship and Ostrowski Prize in 2007. Terence Tao also currently holds the James and Carol Collins chair in mathematics at UCLA, and is a Fellow of the Royal Society and the Australian Academy of Sciences (Corresponding Member). 

Contemporary frontiers in statistics 15:10 Mon 28 Sep, 2009 :: Badger Labs G31 Macbeth Lecture :: Prof. Peter Hall :: University of Melbourne
The availability of powerful computing equipment has had a dramatic impact on statistical methods and thinking, changing forever the way data are analysed. New data types, larger quantities of data, and new classes of research problem are all motivating new statistical methods. We shall give examples of each of these issues, and discuss the current and future directions of frontier problems in statistics. 

Eigenanalysis of fluid-loaded compliant panels 15:10 Wed 9 Dec, 2009 :: Santos Lecture Theatre :: Prof Tony Lucey :: Curtin University of Technology
This presentation concerns the fluid-structure interaction (FSI) that occurs between a fluid flow and an arbitrarily deforming flexible boundary considered to be a flexible panel or a compliant coating that comprises the wetted surface of a marine vehicle. We develop and deploy an approach that is a hybrid of computational and theoretical techniques. The system studied is two-dimensional and linearised disturbances are assumed. Of particular novelty in the present work is the ability of our methods to extract a full set of fluid-structure eigenmodes for systems that have strong spatial inhomogeneity in the structure of the flexible wall.
We first present the approach and some results of the system in which an ideal, zero-pressure-gradient flow interacts with a flexible plate held at both its ends. We use a combination of boundary-element and finite-difference methods to express the FSI system as a single matrix equation in the interfacial variable. This is then couched in state-space form and standard methods used to extract the system eigenvalues. It is then shown how the incorporation of spatial inhomogeneity in the stiffness of the plate can be either stabilising or destabilising. We also show that adding a further restraint within the streamwise extent of a homogeneous panel can trigger an additional type of hydroelastic instability at low flow speeds. The mechanism for the fluid-to-structure energy transfer that underpins this instability can be explained in terms of the pressure-signal phase relative to that of the wall motion and the effect on this relationship of the added wall restraint.
We then show how the ideal-flow approach can be conceptually extended to include boundary-layer effects. The flow field is now modelled by the continuity equation and the linearised perturbation momentum equation written in velocity-velocity form. The near-wall flow field is spatially discretised into rectangular elements on an Eulerian grid and a variant of the discrete-vortex method is applied. The entire fluid-structure system can again be assembled as a linear system for a single set of unknowns - the flow-field vorticity and the wall displacements - that admits the extraction of eigenvalues. We then show how stability diagrams for the fully-coupled finite flow-structure system can be assembled, in doing so identifying classes of wall-based or fluid-based and spatio-temporal wave behaviour.


Hartogs-type holomorphic extensions 13:10 Tue 15 Dec, 2009 :: School Board Room :: Prof Roman Dwilewicz :: Missouri University of Science and Technology
We will review holomorphic extension problems starting with the famous Hartogs extension theorem (1906), via Severi-Kneser-Fichera-Martinelli theorems, up to some recent (partial) results of Al Boggess (Texas A&M Univ.), Zbigniew Slodkowski (Univ. Illinois at Chicago), and the speaker. The holomorphic extension problems for holomorphic or Cauchy-Riemann functions are fundamental problems in complex analysis of several variables. The talk will be very elementary, with many figures, and accessible to graduate and even advanced undergraduate students. 

A solution to the Gromov-Vaserstein problem 15:10 Fri 29 Jan, 2010 :: Engineering North N 158 Chapman Lecture Theatre :: Prof Frank Kutzschebauch :: University of Berne, Switzerland
Any matrix in $SL_n (\mathbb C)$ can be written as a product of elementary matrices using the Gauss elimination process. If instead of the field of complex numbers, the entries in the matrix are elements of a more general ring, this becomes a delicate question. In particular, rings of complex-valued functions on a space are interesting cases. A deep result of Suslin gives an affirmative answer for the polynomial ring in $m$ variables in case the size $n$ of the matrix is at least 3. In the topological category, the problem was solved by Thurston and Vaserstein. For holomorphic functions on $\mathbb C^m$, the problem was posed by Gromov in the 1980s. We report on a complete solution to Gromov's problem. A main tool is the Oka-Grauert-Gromov h-principle in complex analysis. Our main theorem can be formulated as follows: In the absence of obvious topological obstructions, the Gauss elimination process can be performed in a way that depends holomorphically on the matrix. This is joint work with Bj\"orn Ivarsson. 
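For 2 x 2 matrices over the complex numbers, the Gauss elimination factorisation can be written down explicitly. A sketch, covering only the case where the lower-left entry is nonzero (the matrix and tolerances are illustrative):

```python
def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def elementary_factors(M):
    """Factor M in SL_2 (det = 1, lower-left entry nonzero) into three
    elementary matrices: upper * lower * upper."""
    (a, b), (c, d) = M
    assert abs(a * d - b * c - 1) < 1e-12 and c != 0
    return [[[1, (a - 1) / c], [0, 1]],
            [[1, 0], [c, 1]],
            [[1, (d - 1) / c], [0, 1]]]

M = [[2, 3], [1, 2]]          # det = 1
E1, E2, E3 = elementary_factors(M)
P = mat_mul(mat_mul(E1, E2), E3)
print(P)  # reproduces M
```

The identity behind this uses det M = 1 to check the top-right entry; making such factors depend holomorphically on the matrix is exactly the hard part addressed in the talk.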

Exploratory experimentation and computation 15:10 Fri 16 Apr, 2010 :: Napier LG29 :: Prof Jonathan Borwein :: University of Newcastle
The mathematical research community is facing a great challenge to re-evaluate the role of proof in light of the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet. Add to that the enormous complexity of many modern capstone results such as the Poincare conjecture, Fermat's last theorem, and the Classification of finite simple groups. As the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished. I shall look at the philosophical context with examples and then offer five benchmarking examples of the opportunities and challenges we face. 

Estimation of sparse Bayesian networks using a score-based approach 15:10 Fri 30 Apr, 2010 :: School Board Room :: Dr Jessica Kasza :: University of Copenhagen
The estimation of Bayesian networks given high-dimensional data sets, with more variables than there are observations, has been the focus of much recent research. These structures provide a flexible framework for the representation of the conditional independence relationships of a set of variables, and can be particularly useful in the estimation of genetic regulatory networks given gene expression data.
In this talk, I will discuss some new research on learning sparse networks, that is, networks with many conditional independence restrictions, using a scorebased approach. In the case of genetic regulatory networks, such sparsity reflects the view that each gene is regulated by relatively few other genes. The presented approach allows prior information about the overall sparsity of the underlying structure to be included in the analysis, as well as the incorporation of prior knowledge about the connectivity of individual nodes within the network.


Whole genome analysis of repetitive DNA 15:10 Fri 21 May, 2010 :: Napier 209 :: Prof David Adelson :: University of Adelaide
The interspersed repeat content of mammalian genomes has been best characterized in human, mouse and cow. We carried out de novo identification of repeated elements in the equine genome and identified previously unknown elements present at low copy number. The equine genome contains typical eutherian mammal repeats. We analysed both interspersed and simple sequence repeats (SSR) genome-wide, finding that some repeat classes are spatially correlated with each other as well as with G+C content and gene density. Based on these
spatial correlations, we have confirmed recently-described ancestral vs clade-specific genome territories defined by repeat content. Territories enriched for ancestral repeats tended to be contiguous domains. To determine if these territories were evolutionarily conserved, we compared these results with a similar analysis of the human genome, and observed similar ancestral repeat enriched domains. These results indicate that ancestral, evolutionarily conserved mammalian genome territories can be identified on the basis of repeat content alone. Interspersed repeats of different ages appear to be analogous to geologic strata, allowing identification of ancient vs newly remodelled regions of mammalian genomes. 

Interpolation of complex data using spatiotemporal compressive sensing 13:00 Fri 28 May, 2010 :: Santos Lecture Theatre :: A/Prof Matthew Roughan :: School of Mathematical Sciences, University of Adelaide
Many complex datasets suffer from missing data, and interpolating these missing
elements is a key task in data analysis. Moreover, it is often the case that we
see only a linear combination of the desired measurements, not the measurements
themselves. For instance, in network management, it is easy to count the traffic
on a link, but harder to measure the end-to-end flows. Additionally, typical
interpolation algorithms treat either the spatial or the temporal
components of the data separately, but many real datasets have strong
spatio-temporal structure that we would like to exploit in reconstructing the
missing data. In this talk I will describe a novel reconstruction algorithm that
exploits concepts from the growing area of compressive sensing to solve all of
these problems and more. The approach works so well on Internet traffic matrices
that we can obtain a reasonable reconstruction with as much as 98% of the
original data missing. 
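The low-rank idea behind such traffic-matrix reconstructions can be illustrated on a toy scale (this is not the algorithm of the talk): completing a rank-1 matrix from a subset of its entries by alternating least squares, with all data and parameters made up for the example:

```python
import random

random.seed(0)

# True rank-1 "traffic matrix" M = u v^T, of which we observe some entries.
u = [1.0, 2.0, 3.0, 4.0]
v = [2.0, 1.0, 3.0]
M = [[ui * vj for vj in v] for ui in u]
observed = {(i, j) for i in range(4) for j in range(3) if random.random() < 0.7}

# Alternating least squares on the rank-1 model M_ij ~ a_i * b_j:
# fix b and solve for each a_i, then fix a and solve for each b_j.
a, b = [1.0] * 4, [1.0] * 3
for _ in range(200):
    for i in range(4):
        idx = [j for j in range(3) if (i, j) in observed]
        if idx:
            a[i] = sum(M[i][j] * b[j] for j in idx) / sum(b[j] ** 2 for j in idx)
    for j in range(3):
        idx = [i for i in range(4) if (i, j) in observed]
        if idx:
            b[j] = sum(M[i][j] * a[i] for i in idx) / sum(a[i] ** 2 for i in idx)

# Maximum error over ALL entries, including the unobserved ones.
err = max(abs(a[i] * b[j] - M[i][j]) for i in range(4) for j in range(3))
print(err)
```

Real traffic matrices are only approximately low-rank and are seen through linear measurements, which is where the compressive-sensing machinery of the talk comes in.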

A variance constraining ensemble Kalman filter: how to improve forecasts using climatic data of unobserved variables 15:10 Fri 28 May, 2010 :: Santos Lecture Theatre :: A/Prof Georg Gottwald :: The University of Sydney
Data assimilation aims to solve one of the fundamental problems of numerical weather prediction - estimating the optimal state of the
atmosphere given a numerical model of the dynamics, and sparse, noisy
observations of the system. A standard tool in attacking this
filtering problem is the Kalman filter.
We consider the problem when only partial observations are available.
In particular we consider the situation where the observational space
consists of variables which are directly observable with known
observational error, and of variables of which only their climatic
variance and mean are given. We derive the corresponding Kalman
filter in a variational setting.
We analyze the variance constraining Kalman filter (VCKF) for
a simple linear toy model and determine its range of optimal
performance. We explore the variance constraining Kalman filter in an
ensemble transform setting for the Lorenz-96 system, and show that
incorporating the information on the variance on some unobservable
variables can improve the skill and also increase the stability of
the data assimilation procedure.
Using methods from dynamical systems theory we then study systems where the
unobserved variables evolve deterministically but chaotically on a
fast time scale.
This is joint work with Lewis Mitchell and Sebastian Reich.
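For readers unfamiliar with the underlying machinery, a generic scalar Kalman filter predict/update cycle (not the VCKF itself; all model parameters here are illustrative) looks like:

```python
# Scalar Kalman filter for x_{k+1} = a x_k + noise, observed as y_k = x_k + noise.
a, Q, R = 0.9, 0.1, 0.5      # dynamics, process noise var, observation noise var
x_est, P = 0.0, 1.0          # initial state estimate and its variance

def kalman_step(x_est, P, y):
    # Predict: propagate the estimate and its variance through the dynamics.
    x_pred = a * x_est
    P_pred = a * a * P + Q
    # Update: blend prediction and observation via the Kalman gain.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (y - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

for y in [1.2, 0.9, 1.1, 1.0]:
    x_est, P = kalman_step(x_est, P, y)
print(x_est, P)
```

The variance-constrained version of the talk modifies the update so that unobserved variables are nudged towards their known climatic mean and variance.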


The mathematics of theoretical inference in cognitive psychology 15:10 Fri 11 Jun, 2010 :: Napier LG24 :: Prof John Dunn :: University of Adelaide
The aim of psychology in general, and of cognitive psychology in particular, is to construct theoretical accounts of mental processes based on observed changes in performance on one or more cognitive tasks. The fundamental problem faced by the researcher is that these mental processes are not directly observable but must be inferred from changes in performance between different experimental conditions. This inference is further complicated by the fact that performance measures may only be monotonically related to the underlying psychological constructs. State-trace analysis provides an approach to this problem which has gained increasing interest in recent years. In this talk, I explain state-trace analysis and discuss the set of mathematical issues that flow from it. Principal among these are the challenges of statistical inference and an unexpected connection to the mathematics of oriented matroids. 

Some thoughts on wine production 15:05 Fri 18 Jun, 2010 :: School Board Room :: Prof Zbigniew Michalewicz :: School of Computer Science, University of Adelaide
In the modern information era, managers (e.g. winemakers) recognize the
competitive opportunities represented by decision-support tools which can
provide a significant cost savings & revenue increases for their businesses.
Wineries make daily decisions on the processing of grapes, from harvest time
(prediction of maturity of grapes, scheduling of equipment and labour, capacity
planning, scheduling of crushers) through tank farm activities (planning and
scheduling of wine and juice transfers on the tank farm) to packaging processes
(bottling and storage activities). As such operations are quite complex, the whole
area is loaded with interesting OR-related issues. These include the issues of
global vs. local optimization, relationship between prediction and optimization,
operating in dynamic environments, strategic vs. tactical optimization, and
multi-objective optimization & trade-off analysis. During the talk we address
the above issues; a few realworld applications will be shown and discussed to
emphasize some of the presented material. 

Meteorological drivers of extreme bushfire events in southern Australia 15:10 Fri 2 Jul, 2010 :: Benham Lecture Theatre :: Prof Graham Mills :: Centre for Australian Weather and Climate Research, Melbourne
Bushfires occur regularly during summer in southern Australia, but only a few of these fires become iconic due to their effects, either in terms of loss of life or economic and social cost. Such events include Black Friday (1939), the Hobart fires (1967), Ash Wednesday (1983), the Canberra bushfires (2003), and most recently Black Saturday in February 2009. In most of these events the weather of the day was statistically extreme in terms of heat, (low) humidity, and wind speed, and in terms of antecedent drought. There are a number of reasons for conducting post-event analyses of the meteorology of these events. One is to identify any meteorological circulation systems or dynamic processes occurring on those days that might not be widely or hitherto recognised, to document these, and to develop new forecast or guidance products. The understanding and prediction of such features can be used in the short term to assist in effective management of fires and the safety of firefighters and in the medium range to assist preparedness for the onset of extreme conditions. The results of such studies can also be applied to simulations of future climates to assess the likely changes in frequency of the most extreme fire weather events, and their documentary records provide a resource that can be used for advanced training purposes. In addition, particularly for events further in the past, revisiting these events using reanalysis data sets and contemporary NWP models can also provide insights unavailable at the time of the events.
Over the past few years the Bushfire CRC's Fire Weather and Fire Danger project in CAWCR has studied the mesoscale meteorology of a number of major fire events, including the days of Ash Wednesday 1983, the Dandenong Ranges fire in January 1997, the Canberra fires and the Alpine breakout fires in January 2003, the Lower Eyre Peninsula fires in January 2005 and the Boorabbin fire in December 2007 - January 2008. Various aspects of these studies are described below, including the structures of dry cold frontal wind changes, the particular character of the cold fronts associated with the most damaging fires in southeastern Australia, and some aspects of how the vertical temperature and humidity structure of the atmosphere may affect the fire weather at the surface.
These studies reveal much about these major events, but also suggest future research directions, and some of these will be discussed.


Mathematica Seminar 15:10 Wed 28 Jul, 2010 :: Engineering Annex 314 :: Kim Schriefer :: Wolfram Research
The Mathematica Seminars 2010 offer an opportunity to experience the applicability, ease-of-use, as well as the advancements of Mathematica 7 in education and academic research. These seminars will highlight the latest directions in technical computing with Mathematica, and the impact this technology has across a wide range of academic fields, from maths, physics and biology to finance, economics and business.
Those not yet familiar with Mathematica will gain an overview of the system and discover the breadth of applications it can address, while experts will get first-hand experience with recent advances in Mathematica like parallel computing, digital image processing, point-and-click palettes, built-in curated data, as well as courseware examples. 

A spatial-temporal point process model for fine resolution multi-site rainfall data from Roma, Italy 14:10 Thu 19 Aug, 2010 :: Napier G04 :: A/Prof Paul Cowpertwait :: Auckland University of Technology
A point process rainfall model is further developed that has storm origins occurring in space-time according to a Poisson process. Each storm origin has a random radius so that storms occur as circular regions in two-dimensional
space, where the storm radii are taken to be independent exponential random
variables. Storm origins are of random type z, where z follows a continuous
probability distribution. Cell origins occur in a further spatial Poisson
process and have arrival times that follow a Neyman-Scott point process. Cell
origins have random radii so that cells form discs in two-dimensional space.
Statistical properties up to third order are derived and used to fit the model
to 10 min series taken from 23 sites across the Roma region, Italy.
Distributional properties of the observed annual maxima are compared to
equivalent values sampled from series that are simulated using the fitted
model. The results indicate that the model will be of use in urban drainage
projects for the Roma region.
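The storm-origin layer of such a model is straightforward to simulate. A sketch with illustrative parameter values (not those fitted to the Roma data):

```python
import random
from math import hypot

random.seed(42)

rate, side, mean_radius = 0.5, 10.0, 1.5   # storms/unit area; region side; radius mean
# The number of storm origins in the square is Poisson(rate * side^2); one way
# to draw it is to count exponential inter-arrival "areas" fitting in [0, 1).
lam = rate * side ** 2
n, acc = 0, random.expovariate(lam)
while acc < 1.0:
    n += 1
    acc += random.expovariate(lam)

# Each storm: uniform centre in the square, independent exponential radius.
storms = [(random.uniform(0, side), random.uniform(0, side),
           random.expovariate(1 / mean_radius)) for _ in range(n)]

def is_wet(px, py):
    """A site is inside a storm if it lies within some storm's disc."""
    return any(hypot(px - x, py - y) <= r for x, y, r in storms)

print(n, is_wet(5.0, 5.0))
```

The full model of the talk layers typed storms and Neyman-Scott cell clusters on top of this, and is fitted via moment properties rather than simulated directly.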


Compound and constrained regression analyses for EIV models 15:05 Fri 27 Aug, 2010 :: Napier LG28 :: Prof Wei Zhu :: State University of New York at Stony Brook
In linear regression analysis, randomness often exists in the independent variables and the resulting models are referred to as errors-in-variables (EIV) models. The existing general EIV modeling framework, the structural model approach, is parametric and dependent on the usually unknown underlying distributions. In this work, we introduce a general nonparametric EIV modeling framework, the compound regression analysis, featuring an intuitive geometric representation and a 1-1 correspondence to the structural model. Properties, examples and further generalizations of this new modeling approach are discussed in this talk. 
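A classical EIV baseline that treats errors in both variables symmetrically is orthogonal (Deming, delta = 1) regression; a sketch of its closed-form slope, with made-up data:

```python
from math import sqrt

def deming_slope(xs, ys):
    """Orthogonal-regression slope: errors assumed in both variables,
    with equal error variances (Deming regression, delta = 1)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return (syy - sxx + sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x for x in xs]       # exact line y = 2x
print(deming_slope(xs, ys))    # 2.0
```

Ordinary least squares would give the same answer on this noiseless line but is biased towards zero once the x-values are noisy, which is the starting point for the EIV frameworks of the talk.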

Simultaneous confidence band and hypothesis test in generalised varying-coefficient models 15:05 Fri 10 Sep, 2010 :: Napier LG28 :: Prof Wenyang Zhang :: University of Bath
Generalised varying-coefficient models (GVC) are very important
models. There is a considerable literature addressing these models.
However, most of the existing literature is devoted to the estimation
procedure. In this talk, I will systematically investigate the statistical
inference for GVC, which includes confidence band as well as hypothesis test. I
will show the asymptotic distribution of the maximum discrepancy between the
estimated functional coefficient and the true functional coefficient. I will
compare different approaches for the construction of confidence band and
hypothesis test. Finally, the proposed statistical inference methods are used to
analyse the data from China about contraceptive use there, which leads to some
interesting findings. 

Principal Component Analysis Revisited 15:10 Fri 15 Oct, 2010 :: Napier G04 :: Assoc. Prof Inge Koch :: University of Adelaide
Since the beginning of the 20th century, Principal Component Analysis (PCA) has been an important tool in the analysis of multivariate data. The principal components summarise data in fewer than the original number of variables without losing essential information, and thus allow a split of the data into signal and noise components. PCA is a linear method, based on elegant mathematical theory.
The increasing complexity of data together with the emergence of fast computers in the later parts of the 20th century has led to a renaissance of PCA. The growing numbers of variables (in particular, high-dimensional, low-sample-size problems), non-Gaussian data, and functional data (where the data are curves) are posing exciting challenges to statisticians, and have resulted in new research which extends the classical theory.
I begin with the classical PCA methodology and illustrate the challenges presented by the complex data that we are now able to collect. The main part of the talk focuses on extensions of PCA: the duality of PCA and the Principal Coordinates of Multidimensional Scaling, Sparse PCA, and consistency results relating to principal components, as the dimension grows. We will also look at newer developments such as Principal Component Regression and Supervised PCA, nonlinear PCA and Functional PCA.
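In two dimensions the principal axes of the sample covariance matrix have a closed form, which makes the signal/noise split concrete. A sketch (the data are illustrative):

```python
from math import atan2, cos, sin, sqrt

def pca_2d(points):
    """First principal axis and variance of 2D data via the closed-form
    eigen-decomposition of the 2x2 sample covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    # Angle of the leading eigenvector of [[sxx, sxy], [sxy, syy]].
    theta = 0.5 * atan2(2 * sxy, sxx - syy)
    lam1 = (sxx + syy) / 2 + sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    return (cos(theta), sin(theta)), lam1

points = [(0, 0), (1, 1.1), (2, 1.9), (3, 3.05)]   # nearly along y = x
direction, var1 = pca_2d(points)
print(direction)   # close to (0.707, 0.707)
```

In higher dimensions the same computation is done via an eigen- or singular value decomposition, and the talk's consistency questions concern what happens to these directions as the dimension grows.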


Bioinspired computation in combinatorial optimization: algorithms and their computational complexity 15:10 Fri 11 Mar, 2011 :: 7.15 Ingkarni Wardli :: Dr Frank Neumann :: The University of Adelaide
Bioinspired computation methods, such as evolutionary algorithms and ant colony
optimization, are being applied successfully to complex engineering and
combinatorial optimization problems. The computational complexity analysis of
this type of algorithms has significantly increased the theoretical
understanding of these successful algorithms. In this talk, I will give an
introduction into this field of research and present some important results
that we achieved for problems from combinatorial optimization. These results
can also be found in my recent textbook "Bioinspired Computation in
Combinatorial Optimization  Algorithms and Their Computational Complexity". 

Classification for highdimensional data 15:10 Fri 1 Apr, 2011 :: Conference Room Level 7 Ingkarni Wardli :: Associate Prof Inge Koch :: The University of Adelaide
For twoclass classification problems Fisher's discriminant rule performs
well in many scenarios provided the dimension, d, is much smaller than the sample
size n. As the dimension increases, Fisher's rule may no longer be
adequate, and can perform as poorly as random guessing.
In this talk we look at new ways of overcoming this poor performance for
high-dimensional data by suitably modifying Fisher's rule. In particular,
we describe the 'Features Annealed Independence Rule' (FAIR) of Fan and Fan
(2008) and a rule based on canonical correlation analysis. I describe some
theoretical developments, and also show analyses of data which illustrate the
performance of these modified rules. 
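As an illustration of the classical rule whose breakdown the talk addresses, the following is a minimal sketch of the standard two-class Fisher discriminant (not the FAIR modification); the simulated Gaussian data and all parameter choices are assumptions for demonstration only. A pseudo-inverse keeps the rule defined even when the dimension approaches the sample size.

```python
import numpy as np

def fisher_rule(X0, X1):
    """Classical two-class Fisher rule: classify x to class 1 when
    w.(x - m) > 0, where w = S^+ (mu1 - mu0) for the pooled covariance S
    and m is the midpoint of the class means. The pseudo-inverse keeps
    the rule defined when the dimension nears the sample size, the
    regime where its performance is known to degrade."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S = np.cov(np.vstack([X0 - mu0, X1 - mu1]).T)
    w = np.linalg.pinv(np.atleast_2d(S)) @ (mu1 - mu0)
    m = 0.5 * (mu0 + mu1)
    return lambda x: int(w @ (x - m) > 0)

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, (100, 2))   # class 0 training sample
X1 = rng.normal(2.0, 1.0, (100, 2))   # class 1 training sample
rule = fisher_rule(X0, X1)
acc = np.mean([rule(x) for x in rng.normal(2.0, 1.0, (50, 2))])
```

With d = 2 and n = 100 the rule classifies well; as d grows towards n the pooled covariance estimate degenerates, which is the failure mode discussed in the talk.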

Comparison of Spectral and Wavelet Estimation of the Dynamic Linear System of a Wave Energy Device 12:10 Mon 2 May, 2011 :: 5.57 Ingkarni Wardli :: Mohd Aftar :: University of Adelaide
Renewable energy is one of the main issues of our time. The drawbacks of fossil and nuclear energy, together with their limited supply, have prompted researchers and industry to look for sources of renewable energy such as hydro, wind and wave energy. In this seminar, I will talk about the spectral estimation and wavelet estimation of a linear dynamical system of motion for a heaving buoy wave energy device. The spectral estimates were based on the Fourier transform, while the wavelet estimate was based on the wavelet transform. Comparisons of the two spectral estimates with the wavelet estimate of the amplitude response operator (ARO) for the dynamical system of the wave energy device show that the wavelet estimate of the ARO is much better for data both with and without noise. 
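For a rough flavour of a Fourier-based spectral estimate, here is a periodogram on synthetic data; the sampling rate, signal and noise level are invented for illustration and are not taken from the device model in the talk.

```python
import numpy as np

fs = 100.0                                  # assumed sampling frequency (Hz)
t = np.arange(0, 10, 1 / fs)                # 10 s of synthetic response data
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 3.0 * t) + 0.1 * rng.normal(size=t.size)  # 3 Hz signal + noise

# Periodogram: squared magnitude of the discrete Fourier transform
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_freq = freqs[np.argmax(np.abs(X) ** 2)]   # dominant-frequency estimate
```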

A strong Oka principle for embeddings of some planar domains into CxC*, I 13:10 Fri 6 May, 2011 :: Mawson 208 :: Mr Tyson Ritter :: University of Adelaide
The Oka principle refers to a collection of results in
complex analysis which state that there are only topological
obstructions to solving certain holomorphically defined problems
involving Stein manifolds. For example, a basic version of Gromov's
Oka principle states that every continuous map from a Stein manifold
into an elliptic complex manifold is homotopic to a holomorphic map.
In these two talks I will discuss a new result showing that
if we restrict the class of source manifolds to circular domains and
fix the target as CxC* we can obtain a much stronger Oka principle:
every continuous map from a circular domain S into CxC* is homotopic
to a proper holomorphic embedding. This result has close links with
the longstanding and difficult problem of finding proper holomorphic
embeddings of Riemann surfaces into C^2, with additional motivation
from other sources.


On parameter estimation in population models 15:10 Fri 6 May, 2011 :: 715 Ingkarni Wardli :: Dr Joshua Ross :: The University of Adelaide
Essential to applying a mathematical model to a real-world application is
calibrating the model to data. Methods for calibrating population models
often become computationally infeasible when the population size (more generally,
the size of the state space) becomes large, or other complexities such as
time-dependent transition rates, or sampling error, are present. Here we
will discuss the use of diffusion approximations to perform estimation in several
scenarios, with successively reduced assumptions: (i) under the assumption
of stationarity (the process has been evolving for a very long time with
constant parameter values); (ii) transient dynamics (the assumption of stationarity
is invalid, and thus only constant parameter values may be assumed); and, (iii)
time-inhomogeneous chains (the parameters may vary with time) and accounting
for observation error (a sample of the true state is observed). 

When statistics meets bioinformatics 12:10 Wed 11 May, 2011 :: Napier 210 :: Prof Patty Solomon :: School of Mathematical Sciences
Bioinformatics is a new field of research which encompasses mathematics, computer science, biology, medicine and the physical sciences. It has arisen from the need to handle and analyse the vast amounts of data being generated by the new genomics technologies. The interface of these disciplines used to be information-poor, but is now information-mega-rich, and statistics plays a central role in processing this information and making it intelligible. In this talk, I will describe a published bioinformatics study which claimed to have developed a simple test for the early detection of ovarian cancer from a blood sample. The US Food and Drug Administration was on the verge of approving the test kits for market in 2004 when demonstrated flaws in the study design and analysis led to its withdrawal. We are still waiting for an effective early biomarker test for ovarian cancer. 

A strong Oka principle for embeddings of some planar domains into CxC*, II 13:10 Fri 13 May, 2011 :: Mawson 208 :: Mr Tyson Ritter :: University of Adelaide
The Oka principle refers to a collection of results in
complex analysis which state that there are only topological
obstructions to solving certain holomorphically defined problems
involving Stein manifolds. For example, a basic version of Gromov's
Oka principle states that every continuous map from a Stein manifold
into an elliptic complex manifold is homotopic to a holomorphic map.
In these two talks I will discuss a new result showing that
if we restrict the class of source manifolds to circular domains and
fix the target as CxC* we can obtain a much stronger Oka principle:
every continuous map from a circular domain S into CxC* is homotopic
to a proper holomorphic embedding. This result has close links with
the longstanding and difficult problem of finding proper holomorphic
embeddings of Riemann surfaces into C^2, with additional motivation
from other sources.


Change detection in rainfall time series for Perth, Western Australia 12:10 Mon 16 May, 2011 :: 5.57 Ingkarni Wardli :: Farah Mohd Isa :: University of Adelaide
There have been numerous reports that the rainfall in south-west Western Australia,
particularly around Perth, has undergone a step-change decrease, which is
typically attributed to climate change. Four statistical tests are used to
assess the empirical evidence for this claim on time series from five
meteorological stations, all of whose records exceed 50 years. The tests used in this
study are: the CUSUM; Bayesian change point analysis; the consecutive t-test; and
Hotelling's T² statistic. Results from the multivariate Hotelling's T² analysis are
compared with those from the three univariate analyses. The issue of multiple
comparisons is discussed. A summary of the empirical evidence for the claimed
step change in the Perth area is given. 
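The first of the tests above, CUSUM, admits a very short sketch. The synthetic series and the location of the step change below are invented for illustration; the talk applies the test to the actual station records.

```python
import numpy as np

def cusum_change_point(x):
    """CUSUM statistic: cumulative sums of deviations from the overall
    mean. A large excursion of |S_k| suggests a change, and the argmax
    is a natural estimate of its location."""
    s = np.cumsum(x - np.mean(x))
    k = int(np.argmax(np.abs(s)))
    return k, float(np.abs(s).max())

rng = np.random.default_rng(2)
# Synthetic annual 'rainfall' with a step-change decrease after year 60
series = np.concatenate([rng.normal(800, 50, 60), rng.normal(700, 50, 40)])
k, stat = cusum_change_point(series)   # k should fall near the true change
```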

Statistical challenges in molecular phylogenetics 15:10 Fri 20 May, 2011 :: Mawson Lab G19 lecture theatre :: Dr Barbara Holland :: University of Tasmania
This talk will give an introduction to the ways that mathematics and statistics get used in the inference of evolutionary (phylogenetic) trees. Taking a model-based approach to estimating the relationships between species has proven enormously effective; however, some tricky statistical challenges remain. The increasingly plentiful amount of DNA sequence data is a boon, but it is also throwing a spotlight on some of the shortcomings of current best practice, particularly in how we (1) assess the reliability of our phylogenetic estimates, and (2) choose appropriate models. This talk will aim to give a general introduction to this area of research and will also highlight some results from two of my recent PhD students. 

Permeability of heterogeneous porous media  experiments, mathematics and computations 15:10 Fri 27 May, 2011 :: B.21 Ingkarni Wardli :: Prof Patrick Selvadurai :: Department of Civil Engineering and Applied Mechanics, McGill University
Permeability is a key parameter important to a variety of applications in geological engineering and in the environmental geosciences. The conventional definition of Darcy flow enables the estimation of permeability at different levels of detail. This lecture will focus on the measurement of surface permeability characteristics of a large cuboidal block of Indiana Limestone, using a surface permeameter. The paper discusses the theoretical developments, the solution of the resulting triple integral equations and associated computational treatments that enable the mapping of the near-surface permeability of the cuboidal region. These data, combined with a kriging procedure, are used to develop results for the permeability distribution at the interior of the cuboidal region. Upon verification of the absence of dominant pathways for fluid flow through the cuboidal region, estimates are obtained for the "Effective Permeability" of the cuboid using estimates proposed by Wiener, Landau and Lifschitz, King, Matheron, Journel et al., Dagan and others. The results of these estimates are compared with the geometric mean derived from the computational estimates. 

Optimal experimental design for stochastic population models 15:00 Wed 1 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Dan Pagendam :: CSIRO, Brisbane
Markov population processes are popular models for studying a wide range of
phenomena including the spread of disease, the evolution of chemical reactions
and the movements of organisms in population networks (metapopulations). Our
ability to use these models effectively can be limited by our knowledge about
parameters, such as disease transmission and recovery rates in an epidemic.
Recently, there has been interest in devising optimal experimental designs for
stochastic models, so that practitioners can collect data in a manner that
maximises the precision of maximum likelihood estimates of the parameters for
these models. I will discuss some recent work on optimal design for a variety
of population models, beginning with some simple one-parameter models where the
optimal design can be obtained analytically, and moving on to more complicated
multi-parameter models in epidemiology that involve latent states and
non-exponentially distributed infectious periods. For these more complex
models, the optimal design must be arrived at using computational methods and we
rely on a Gaussian diffusion approximation to obtain analytical expressions for
the Fisher information matrix, which is at the heart of most optimality criteria
in experimental design. I will outline a simple cross-entropy algorithm that
can be used for obtaining optimal designs for these models. We will also
explore the improvements in experimental efficiency when using the optimal
design over some simpler designs, such as the design where observations are
spaced equidistantly in time. 
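For a flavour of the one-parameter case where the design problem is tractable, here is an assumed toy model (not one from the talk): a pure death process with rate mu observed once at time t, so the number surviving out of N is binomial with success probability p = exp(-mu*t). The Fisher information about mu is then I(t) = N t^2 p / (1 - p), which can be maximised over the single observation time.

```python
import numpy as np

def death_process_info(t, mu, N):
    """Fisher information about mu from observing Bin(N, p) survivors
    of a pure death process at time t, with p = exp(-mu*t):
    I(t) = N * t^2 * p / (1 - p), by the chain rule from the binomial
    information N / (p * (1 - p)) and dp/dmu = -t * p."""
    p = np.exp(-mu * t)
    return N * t**2 * p / (1 - p)

# Maximise the information numerically over a grid of observation times
ts = np.linspace(0.01, 10.0, 2000)
info = death_process_info(ts, mu=1.0, N=50)
t_opt = float(ts[np.argmax(info)])   # optimal single observation time
```

For mu = 1 the optimum sits near t = 1.6, noticeably later than a naive early observation, which is the kind of gain over ad hoc designs the talk quantifies.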

Inference and optimal design for percolation and general random graph models (Part I) 09:30 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge
The problem of optimal arrangement of nodes of a random weighted graph
is discussed in this workshop. The nodes of graphs under study are fixed, but
their edges are random and established according to the so-called
edge-probability function. This function is assumed to depend on the weights
attributed to the pairs of graph nodes (or distances between them) and a
statistical parameter. It is the purpose of experimentation to make inference on
the statistical parameter and thus to extract as much information about it as
possible. We also distinguish between two different experimentation scenarios:
progressive and instructive designs.
We adopt a utility-based Bayesian framework to tackle the optimal design problem
for random graphs of this kind. Simulation-based optimisation methods, mainly
Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We
study the optimal design problem for inference based on partial observations of
random graphs by employing a data augmentation technique. We prove that the
infinitely growing or diminishing node configurations asymptotically represent
the worst node arrangements. We also obtain the exact solution to the optimal
design problem for proximity (geometric) graphs and a numerical solution for
graphs with threshold edge-probability functions.
We consider inference and optimal design problems for finite clusters from bond
percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both
numerical and analytical results for these graphs. We introduce inner-outer
plots by deleting some of the lattice nodes and show that the 'mostly populated'
designs are not necessarily optimal in the case of incomplete observations under
both progressive and instructive design scenarios. Some of the obtained results
may generalise to other lattices. 

Inference and optimal design for percolation and general random graph models (Part II) 10:50 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge
The problem of optimal arrangement of nodes of a random weighted graph
is discussed in this workshop. The nodes of graphs under study are fixed, but
their edges are random and established according to the so-called
edge-probability function. This function is assumed to depend on the weights
attributed to the pairs of graph nodes (or distances between them) and a
statistical parameter. It is the purpose of experimentation to make inference on
the statistical parameter and thus to extract as much information about it as
possible. We also distinguish between two different experimentation scenarios:
progressive and instructive designs.
We adopt a utility-based Bayesian framework to tackle the optimal design problem
for random graphs of this kind. Simulation-based optimisation methods, mainly
Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We
study the optimal design problem for inference based on partial observations of
random graphs by employing a data augmentation technique. We prove that the
infinitely growing or diminishing node configurations asymptotically represent
the worst node arrangements. We also obtain the exact solution to the optimal
design problem for proximity (geometric) graphs and a numerical solution for
graphs with threshold edge-probability functions.
We consider inference and optimal design problems for finite clusters from bond
percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both
numerical and analytical results for these graphs. We introduce inner-outer
plots by deleting some of the lattice nodes and show that the 'mostly populated'
designs are not necessarily optimal in the case of incomplete observations under
both progressive and instructive design scenarios. Some of the obtained results
may generalise to other lattices. 

Quantitative proteomics: data analysis and statistical challenges 10:10 Thu 30 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Peter Hoffmann :: Adelaide Proteomics Centre


Introduction to functional data analysis with applications to proteomics data 11:10 Thu 30 Jun, 2011 :: 7.15 Ingkarni Wardli :: A/Prof Inge Koch :: School of Mathematical Sciences


Object oriented data analysis 14:10 Thu 30 Jun, 2011 :: 7.15 Ingkarni Wardli :: Prof Steve Marron :: The University of North Carolina at Chapel Hill
Object Oriented Data Analysis is the statistical analysis of populations of complex objects. In the special case of Functional Data Analysis, these data objects are curves, where standard Euclidean approaches, such as principal components analysis, have been very successful. Recent developments in medical image analysis motivate the statistical analysis of populations of more complex data objects which are elements of mildly non-Euclidean spaces, such as Lie Groups and Symmetric Spaces, or of strongly non-Euclidean spaces, such as spaces of tree-structured data objects. These new contexts for Object Oriented Data Analysis create several potentially large new interfaces between mathematics and statistics. Even in situations where Euclidean analysis makes sense, there are statistical challenges because of the High Dimension Low Sample Size problem, which motivates a new type of asymptotics leading to non-standard mathematical statistics. 

Object oriented data analysis of tree-structured data objects 15:10 Fri 1 Jul, 2011 :: 7.15 Ingkarni Wardli :: Prof Steve Marron :: The University of North Carolina at Chapel Hill
The field of Object Oriented Data Analysis has made a lot of
progress on the statistical analysis of the variation in populations
of complex objects. A particularly challenging example of this type
is populations of tree-structured objects. Deep challenges arise,
which involve a marriage of ideas from statistics, geometry, and
numerical analysis, because the space of trees is strongly
non-Euclidean in nature. These challenges, together with three
completely different approaches to addressing them, are illustrated
using a real data example, where each data point is the tree of blood
arteries in one person's brain. 

Modelling computer network topologies through optimisation 12:10 Mon 1 Aug, 2011 :: 5.57 Ingkarni Wardli :: Mr Rhys Bowden :: University of Adelaide
The core of the Internet is made up of many different computers (called routers) in many different interconnected networks, owned and operated by many different organisations. A popular and important field of study in the past has been "network topology": for instance, understanding which routers are connected to which other routers, or which networks are connected to which other networks; that is, studying and modelling the connection structure of the Internet. Previous study in this area has been plagued by unreliable or flawed experimental data and debate over appropriate models to use. The Internet Topology Zoo is a new source of network data created from the information that network operators make public. In order to better understand this body of network information we would like the ability to randomly generate network topologies resembling those in the zoo. Leveraging previous wisdom on networks produced as a result of optimisation processes, we propose a simple objective function based on possible economic constraints. By changing the relative costs in the objective function we can change the form of the resulting networks, and we compare these optimised networks to a variety of networks found in the Internet Topology Zoo. 

Spectra alignment/matching for the classification of cancer and control patients 12:10 Mon 8 Aug, 2011 :: 5.57 Ingkarni Wardli :: Mr Tyman Stanford :: University of Adelaide
Proteomic time-of-flight mass spectrometry produces a spectrum based on the peptides (chains of amino acids) in each patient's serum sample. The spectra contain data points for an x-axis (peptide weight) and a y-axis (peptide frequency/count/intensity). It is our end goal to differentiate cancer (and subtypes) and control patients using these spectra. Before we can do this, peaks in these data must be found and peptides common to different spectra must be identified. The data are noisy because of biotechnological variation and calibration error; data points for different peptide weights may in fact be the same peptide. An algorithm needs to be employed to find common peptides between spectra, as performing alignment 'by hand' is almost infeasible. We borrow methods suggested in the literature on metabolomic gas chromatography-mass spectrometry and extend them for our purposes. In this talk I will go over the basic tenets of what we hope to achieve and the process towards this.


Dealing with the GC-content bias in second-generation DNA sequence data 15:10 Fri 12 Aug, 2011 :: Horace Lamb :: Prof Terry Speed :: Walter and Eliza Hall Institute
The field of genomics is currently dealing with an explosion of data from so-called
second-generation DNA sequencing machines. This is creating many challenges and
opportunities for statisticians interested in the area.
In this talk I will outline the technology and the data flood, and move on to one particular
problem where the technology is used: copy-number analysis.
There we find a novel bias, which, if not dealt with properly, can dominate the signal of
interest. I will describe how we think about and summarize it, and go on to identify a
plausible source of this bias, leading up to a way of removing it.
Our approach makes use of the total variation metric on discrete measures, but apart from
this, is largely descriptive. 
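The total variation metric mentioned above takes a particularly simple form on discrete measures; here is a minimal sketch, with two toy distributions invented for illustration rather than the base-composition data of the talk.

```python
def total_variation(p, q):
    """Total variation distance between two discrete probability
    distributions, given as dicts mapping outcome -> probability:
    TV(p, q) = 0.5 * sum over x of |p(x) - q(x)|."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# Two toy distributions over nucleotides
tv = total_variation({"A": 0.5, "C": 0.5}, {"A": 0.25, "C": 0.25, "G": 0.5})
```

The distance lies in [0, 1], equalling 0 for identical distributions and 1 for distributions with disjoint support.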

Laplace's equation on multiply-connected domains 12:10 Mon 29 Aug, 2011 :: 5.57 Ingkarni Wardli :: Mr Hayden Tronnolone :: University of Adelaide
Various physical processes take place on multiply-connected domains
(domains with some number of 'holes'), such as the stirring of a fluid
with paddles or the extrusion of material from a die. These systems may
be described by partial differential equations (PDEs). However, standard
numerical methods for solving PDEs are not well-suited to such examples:
finite difference methods are difficult to implement on
multiply-connected domains, especially when the boundaries are irregular
or moving, while finite element methods are computationally expensive.
In this talk I will describe a fast and accurate numerical method for
solving certain PDEs on two-dimensional multiply-connected domains,
considering Laplace's equation as an example. This method takes
advantage of complex variable techniques which allow the solution to be
found with spectral accuracy provided the boundary data is smooth. Other
advantages over traditional numerical methods will also be discussed. 

Alignment of time course gene expression data sets using Hidden Markov Models 12:10 Mon 5 Sep, 2011 :: 5.57 Ingkarni Wardli :: Mr Sean Robinson :: University of Adelaide
Time course microarray experiments allow for insight into biological processes by measuring gene expression over a time period of interest. This project is concerned with time course data from a microarray experiment conducted on a particular variety of grapevine over the development of the grape berries at a number of different vineyards in South Australia. The aim of the project is to construct a methodology for combining the data from the different vineyards in order to obtain more precise estimates of the underlying behaviour of the genes over the development process. A major issue in doing so is that the rate of development of the grape berries is different at different vineyards.
Hidden Markov models (HMMs) are a well-established methodology for modelling time series data in a number of domains and have previously been used for gene expression analysis. Modelling the grapevine data presents a unique modelling issue, namely the alignment of the expression profiles needed to combine the data from different vineyards. In this seminar, I will describe our problem, review HMMs, present an extension to HMMs and show some preliminary results from modelling the grapevine data. 
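For readers unfamiliar with HMMs, the core likelihood computation (the forward algorithm) can be sketched in a few lines. The two-state model and its parameters below are invented for illustration and are unrelated to the extension presented in the talk.

```python
import numpy as np

def forward_likelihood(obs, pi, A, B):
    """Forward algorithm for a discrete HMM: pi is the initial state
    distribution, A[i, j] the transition probability from state i to j,
    and B[i, o] the probability of emitting symbol o from state i.
    Returns P(obs) by propagating the forward variables alpha."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
lik = forward_likelihood([0, 1, 0], pi, A, B)   # likelihood of the sequence
```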

Statistical analysis of metagenomic data from the microbial community involved in industrial bioleaching 12:10 Mon 19 Sep, 2011 :: 5.57 Ingkarni Wardli :: Ms Susana SotoRojo :: University of Adelaide
In the last two decades heap bioleaching has become established as a successful commercial option for recovering copper from low-grade secondary sulfide ores. Genetics-based approaches have recently been employed in the task of characterizing mineral processing bacteria. Data analysis is a key issue, and thus the implementation of adequate mathematical and statistical tools is of fundamental importance to draw reliable conclusions. In this talk I will give an account of two specific problems that we have been working on: the first concerns experimental design, and the second the modeling of the composition and activity of the microbial consortium. 

Can statisticians do better than random guessing? 12:10 Tue 20 Sep, 2011 :: Napier 210 :: A/Prof Inge Koch :: School of Mathematical Sciences
In the finance or credit risk area, a bank may want to assess whether a client is going to default, or be able to meet the repayments. In the assessment of benign or malignant tumours, a correct diagnosis is required. In these and similar examples, we make decisions based on data. The classical t-tests provide a tool for making such decisions. However, many modern data sets have more variables than observations, and the classical rules may not be any better than random guessing. We consider Fisher's rule for classifying data into two groups, and show that it can break down for high-dimensional data. We then look at ways of overcoming some of the weaknesses of the classical rules, and I show how these "post-modern" rules perform in practice. 

Estimating transmission parameters for the swine flu pandemic 15:10 Fri 23 Sep, 2011 :: 7.15 Ingkarni Wardli :: Dr Kathryn Glass :: Australian National University
Following the onset of a new strain of influenza with pandemic potential, policy makers need specific advice on how fast the disease is spreading, who is at risk, and what interventions are appropriate for slowing transmission. Mathematical models play a key role in comparing interventions and identifying the best response, but models are only as good as the data that inform them. In the early stages of the 2009 swine flu outbreak, many researchers estimated transmission parameters, particularly the reproduction number, from outbreak data. These estimates varied, and were often biased by data collection methods, misclassification of imported cases or as a result of early stochasticity in case numbers. I will discuss a number of the pitfalls in achieving good quality parameter estimates from early outbreak data, and outline how best to avoid them.
One of the early indications from swine flu data was that children were disproportionately responsible for disease spread. I will introduce a new method for estimating age-specific transmission parameters from both outbreak and seroprevalence data. This approach allows us to take account of empirical data on human contact patterns, and highlights the need to allow for asymmetric mixing matrices in modelling disease transmission between age groups. Applied to swine flu data from a number of different countries, it presents a consistent picture of higher transmission from children. 

Statistical analysis of schoolbased student performance data 12:10 Mon 10 Oct, 2011 :: 5.57 Ingkarni Wardli :: Ms Jessica Tan :: University of Adelaide
Join me in the journey of being a statistician for 15 minutes of your day (if you are not already one) and experience the task of data cleaning without having to get your own hands dirty. Most of you may have sat the Basic Skills Tests when at school or know someone who currently has to do the NAPLAN (National Assessment Program  Literacy and Numeracy) tests. Tests like these assess student progress and can be used to accurately measure school performance. In trying to answer the research question: "what conclusions about student progress and school performance can be drawn from NAPLAN data or data of a similar nature, using mathematical and statistical modelling and analysis techniques?", I have uncovered some interesting results about the data in my initial data analysis which I shall explain in this talk. 

Statistical modelling for some problems in bioinformatics 11:10 Fri 14 Oct, 2011 :: B.17 Ingkarni Wardli :: Professor Geoff McLachlan :: The University of Queensland
In this talk we consider some statistical analyses of data arising in
bioinformatics. The problems include the detection of differential
expression in microarray gene-expression data, the clustering of
time-course gene-expression data and, lastly, the analysis of
modern-day cytometric data. Extensions are considered to the procedures
proposed for these three problems in McLachlan et al. (Bioinformatics, 2006),
Ng et al. (Bioinformatics, 2006), and Pyne et al. (PNAS, 2009), respectively.
The latter references are available at http://www.maths.uq.edu.au/~gjm/. 

On the role of mixture distributions in the modelling of heterogeneous data 15:10 Fri 14 Oct, 2011 :: 7.15 Ingkarni Wardli :: Prof Geoff McLachlan :: University of Queensland
We consider the role that finite mixture distributions have played in the modelling of heterogeneous data, in particular for clustering continuous data via mixtures of normal distributions. A very brief history is given, starting with the seminal papers by Day and Wolfe in the sixties before the appearance of the EM algorithm. It was the publication in 1977 of the latter algorithm by Dempster, Laird, and Rubin that greatly stimulated interest in the use of finite mixture distributions to model heterogeneous data. This is because the fitting of mixture models by maximum likelihood is a classic example of a problem that is simplified considerably by the EM's conceptual unification of maximum likelihood estimation from data that can be viewed as being incomplete. In recent times there has been a proliferation of applications in which the number of experimental units n is comparatively small but the underlying dimension p is extremely large as, for example, in microarray-based genomics and other high-throughput experimental approaches. Hence there has been increasing attention given, not only in bioinformatics and machine learning but also in mainstream statistics, to the analysis of complex data in this situation where n is small relative to p. The latter part of the talk will focus on the modelling of such high-dimensional data using mixture distributions. 
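A minimal sketch of the EM iteration for fitting a two-component normal mixture is shown below. Unit variances are assumed for brevity, the data are simulated, and this is the textbook algorithm rather than anything specific to the talk.

```python
import numpy as np

def em_two_normals(x, iters=100):
    """EM for the mixture w1*N(mu1, 1) + w2*N(mu2, 1).
    E-step: posterior membership probabilities (responsibilities).
    M-step: re-estimate the weights and means from the responsibilities."""
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])          # crude initialisation
    for _ in range(iters):
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2)   # unnormalised densities
        r = dens / dens.sum(axis=1, keepdims=True)         # E-step
        w = r.mean(axis=0)                                 # M-step: weights
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)  # M-step: means
    return w, mu

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])
w, mu = em_two_normals(x)   # recovers weights near 0.5 and means near 0 and 5
```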

Likelihood-free Bayesian inference: modelling drug resistance in Mycobacterium tuberculosis 15:10 Fri 21 Oct, 2011 :: 7.15 Ingkarni Wardli :: Dr Scott Sisson :: University of New South Wales
A central pillar of Bayesian statistical inference is Monte Carlo integration, which is based on obtaining random samples from the posterior distribution. There are a number of standard ways to obtain these samples, provided that the likelihood function can be numerically evaluated. In the last 10 years, there has been a substantial push to develop methods that permit Bayesian inference in the presence of computationally intractable likelihood functions. These methods, termed "likelihood-free" or approximate Bayesian computation (ABC), are now being applied extensively across many disciplines.
In this talk, I'll present a brief, non-technical overview of the ideas behind likelihood-free methods. I'll motivate and illustrate these ideas through an analysis of the epidemiological fitness cost of drug resistance in Mycobacterium tuberculosis. 
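The simplest likelihood-free scheme, rejection ABC, can be sketched in a few lines. The toy model below (a normal mean with a uniform prior), the summary statistic and the tolerance are illustrative assumptions, not the tuberculosis analysis of the talk.

```python
import numpy as np

def abc_rejection(s_obs, prior_draw, simulate, summary, eps, n_draws, seed=0):
    """Rejection ABC: draw theta from the prior, simulate a data set,
    and keep theta whenever the simulated summary statistic lies within
    eps of the observed one. The kept draws approximate the posterior
    without ever evaluating the likelihood."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if abs(summary(simulate(theta, rng)) - s_obs) < eps:
            kept.append(theta)
    return np.array(kept)

obs = np.random.default_rng(4).normal(2.0, 1.0, 100)   # 'observed' data
post = abc_rejection(
    s_obs=np.mean(obs),
    prior_draw=lambda rng: rng.uniform(-5.0, 5.0),
    simulate=lambda th, rng: rng.normal(th, 1.0, 100),
    summary=np.mean,
    eps=0.1,
    n_draws=5000,
)
```

The accepted draws concentrate around the observed sample mean; shrinking eps improves the approximation at the cost of a lower acceptance rate.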

Mathematical opportunities in molecular space 15:10 Fri 28 Oct, 2011 :: B.18 Ingkarni Wardli :: Dr Aaron Thornton :: CSIRO
The study of molecular motion, interaction and space at the nanoscale has become a powerful tool in the area of gas separation, storage and conversion for efficient energy solutions. Modeling in this field has typically involved highly iterative computational algorithms such as molecular dynamics, Monte Carlo and quantum mechanics. Mathematical formulae in the form of analytical solutions to this field offer a range of useful and insightful advantages including optimization, bifurcation analysis and standardization. Here we present a few case scenarios where mathematics has provided insight and opportunities for further investigation. 

Metric geometry in data analysis 13:10 Fri 11 Nov, 2011 :: B.19 Ingkarni Wardli :: Dr Facundo Memoli :: University of Adelaide
The problem of object matching under invariances can be studied using certain tools from metric geometry. The central idea is to regard objects as metric spaces (or metric measure spaces). The type of invariance that one wishes to have in the matching is encoded by the choice of the metrics with which one endows the objects. The standard example is matching objects in Euclidean space under rigid isometries: in this situation one would endow the objects with the Euclidean metric. More general scenarios are possible in which the desired invariance cannot be reflected by the preservation of an ambient space metric. Several ideas due to M. Gromov are useful for approaching this problem. The Gromov-Hausdorff distance is a natural candidate for doing this. However, this metric leads to very hard combinatorial optimization problems and it is difficult to relate to previously reported practical approaches to the problem of object matching. I will discuss different variations of these ideas, and in particular will show a construction of an L^p version of the Gromov-Hausdorff metric, called the Gromov-Wasserstein distance, which is based on mass transportation ideas. This new metric directly leads to quadratic optimization problems on continuous variables with linear constraints. As a consequence of establishing several lower bounds, several invariants of metric measure spaces turn out to be quantitatively stable in the GW sense. These invariants provide practical tools for the discrimination of shapes and connect the GW ideas to a number of pre-existing approaches. 

Stability analysis of non-parallel unsteady flows via separation of variables 15:30 Fri 18 Nov, 2011 :: 7.15 Ingkarni Wardli :: Prof Georgy Burde :: Ben-Gurion University
The problem of variables separation in the linear stability equations, which govern the disturbance behavior in viscous incompressible fluid flows, is discussed. Stability of some unsteady non-parallel three-dimensional flows (exact solutions of the Navier-Stokes equations) is studied via separation of variables using a semi-analytical, semi-numerical approach. In this approach, a solution with separated variables is defined in a new coordinate system which is sought together with the solution form. As a result, the linear stability problems are reduced to eigenvalue problems for ordinary differential equations which can be solved numerically. In some specific cases, the eigenvalue problems can be solved analytically. These unique examples of exact (explicit) solution of non-parallel unsteady flow stability problems provide a very useful test for methods used in hydrodynamic stability theory. Exact solutions of the stability problems for some stagnation-type flows are presented. 

Fluid flows in microstructured optical fibre fabrication 15:10 Fri 25 Nov, 2011 :: B.17 Ingkarni Wardli :: Mr Hayden Tronnolone :: University of Adelaide
Optical fibres are used extensively in modern telecommunications as they allow the transmission of information at high speeds. Microstructured optical fibres are a relatively new fibre design in which a waveguide for light is created by a series of air channels running along the length of the material. The flexibility of this design allows optical fibres to be created with adaptable (and previously unrealised) optical properties. However, the fluid flows that arise during fabrication can greatly distort the geometry, which can reduce the effectiveness of a fibre or render it useless. I will present an overview of the manufacturing process and highlight the difficulties. I will then focus on surface-tension-driven deformation of the macroscopic version of the fibre extruded from a reservoir of molten glass, occurring during fabrication, which will be treated as a two-dimensional Stokes flow problem. I will outline two different complex-variable numerical techniques for solving this problem along with comparisons of the results, both to other models and to experimental data.

Collision and instability in a rotating fluid-filled torus 15:10 Mon 12 Dec, 2011 :: Benham Lecture Theatre :: Dr Richard Clarke :: The University of Auckland
The simple experiment discussed in this talk, first conceived by Madden and Mullin (JFM, 1994) as part of their investigations into the non-uniqueness of decaying turbulent flow, consists of a fluid-filled torus which is rotated in a horizontal plane. Turbulence within the contained flow is triggered through a rapid change in its rotation rate. The flow instabilities which transition the flow to this turbulent state, however, are truly fascinating in their own right, and form the subject of this presentation. Flow features observed in both UK- and Auckland-based experiments will be highlighted, and explained through both boundary-layer analysis and full DNS. In concluding we argue that this flow regime, with its compact geometry and lack of cumbersome flow entry effects, presents an ideal regime in which to study many prototype flow behaviours, very much in the same spirit as Taylor-Couette flow. 

Financial risk measures: the theory and applications of backward stochastic difference/differential equations with respect to the single jump process 12:10 Mon 26 Mar, 2012 :: 5.57 Ingkarni Wardli :: Mr Bin Shen :: University of Adelaide
This talk is based on my PhD thesis, submitted one month ago. Chapter 1 introduces the background of the research fields; each subsequent chapter is a published or accepted paper.
Chapter 2, to appear in Methodology and Computing in Applied Probability, establishes the theory of Backward Stochastic Difference Equations with respect to the single jump process in discrete time.
Chapter 3, published in Stochastic Analysis and Applications, establishes the theory of Backward Stochastic Differential Equations with respect to the single jump process in continuous time.
Chapters 2 and 3 make up Part I: Theory.
Chapter 4, published in Expert Systems With Applications, gives some examples of how to measure financial risks using the theory established in Chapter 2.
Chapter 5, accepted by Journal of Applied Probability, considers the question of an optimal transaction between two investors to minimize their risks; it applies the theory established in Chapter 3.
Chapters 4 and 5 make up Part II: Applications. 

Spatial-point data sets and the Polya distribution 15:10 Fri 27 Apr, 2012 :: B.21 Ingkarni Wardli :: Dr Benjamin Binder :: The University of Adelaide
Spatial-point data sets, generated from a wide range of physical systems and mathematical models, can be analyzed by counting the number of objects in equally sized bins. We find that the bin counts are related to the Polya distribution. New indexes are developed which quantify whether or not a spatial data set is at its most evenly distributed state. Using three case studies (Lagrangian fluid particles in chaotic laminar flows, cellular automata agents in discrete models, and biological cells within colonies), we calculate the indexes and predict the spatial state of the system. 
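As an illustrative sketch of the bin-counting idea (using the classical variance-to-mean ratio, not the new indexes developed in the talk), assuming NumPy is available:

```python
import numpy as np

def dispersion_index(points, n_bins):
    """Variance-to-mean ratio of counts of points in equally sized bins
    on [0, 1]: 0 for a perfectly even arrangement, about 1 for complete
    spatial randomness, larger than 1 for clustered data."""
    counts, _ = np.histogram(points, bins=n_bins, range=(0.0, 1.0))
    return counts.var() / counts.mean()

# Perfectly even: one point in the centre of each of 10 bins.
even = (np.arange(10) + 0.5) / 10
print(dispersion_index(even, 10))  # 0.0

# Clustered: all 10 points crammed into the first bin.
clustered = np.full(10, 0.05)
print(dispersion_index(clustered, 10))  # 9.0
```

The indexes in the talk refine this kind of summary via the Polya distribution, so the function above is only a stand-in for the idea.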

Are Immigrants Discriminated Against in the Australian Labour Market? 12:10 Mon 7 May, 2012 :: 5.57 Ingkarni Wardli :: Ms Wei Xian Lim :: University of Adelaide
In this talk, I will present my honours project, in which I investigated whether immigrants, categorised as coming from English-speaking or non-English-speaking countries, are discriminated against in the Australian labour market. To determine whether discrimination exists, a decomposition of the wage function is applied and analysed via regression analysis. Two different methods of estimating the unknown parameters in the wage function will be discussed:
1. the Ordinary Least Squares method,
2. the Quantile Regression method.
This is your rare chance of hearing me talk about non-nanomathematics-related stuff! 
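A minimal sketch of the first method, assuming NumPy; the variables and coefficients below are invented purely for illustration and have no connection to the project's actual wage data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: log-wage driven by education, experience and an
# immigrant indicator (all names and effect sizes are made up).
n = 500
education = rng.normal(12, 2, n)
experience = rng.normal(10, 4, n)
immigrant = rng.integers(0, 2, n).astype(float)
log_wage = (1.0 + 0.08 * education + 0.03 * experience
            - 0.05 * immigrant + rng.normal(0, 0.1, n))

# Ordinary Least Squares: stack a column of ones for the intercept.
X = np.column_stack([np.ones(n), education, experience, immigrant])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
print(beta)  # roughly [1.0, 0.08, 0.03, -0.05]
```

Quantile regression, the second method, replaces the squared-error criterion with an asymmetric absolute loss and needs an optimiser rather than a single linear solve.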

Change detection in rainfall time series for Perth, Western Australia 12:10 Mon 14 May, 2012 :: 5.57 Ingkarni Wardli :: Ms Farah Mohd Isa :: University of Adelaide
There have been numerous reports that the rainfall in south Western Australia, particularly around Perth, has exhibited a step-change decrease, which is typically attributed to climate change. Four statistical tests are used to assess the empirical evidence for this claim on time series from five meteorological stations, all with records exceeding 50 years. The tests used in this study are: the CUSUM; Bayesian change point analysis; the consecutive t-test; and Hotelling's T^2 statistic. Results from the multivariate Hotelling's T^2 analysis are compared with those from the three univariate analyses. The issue of multiple comparisons is discussed. A summary of the empirical evidence for the claimed step change in the Perth area is given. 

A brief introduction to Support Vector Machines 12:30 Mon 4 Jun, 2012 :: 5.57 Ingkarni Wardli :: Mr Tyman Stanford :: University of Adelaide
Support Vector Machines (SVMs) are used in a variety of contexts for a range of purposes including regression, feature selection and classification. To convey the basic principles of SVMs, this presentation will focus on the application of SVMs to classification. Classification (or discrimination), in a statistical sense, is supervised model creation for the purpose of assigning future observations to a group or class. An example might be assigning healthy or diseased labels to patients from p characteristics obtained from a blood sample.
While SVMs are widely used, they are most successful when the data have one or more of the following properties:
1. The data are not consistent with a standard probability distribution.
2. The number of observations, n, used to create the model is less than the number of predictive features, p (the so-called small-n, big-p problem).
3. The decision boundary between the classes is likely to be nonlinear in the feature space.
I will present a short overview of how SVMs are constructed, keeping in mind their purpose. As this presentation is part of a double postgrad seminar, I will keep it to a maximum of 15 minutes.
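In practice one reaches for a library implementation, but the core of a linear SVM fits in a few lines; the following is a bare-bones sub-gradient sketch of the primal hinge-loss formulation (assuming NumPy), not necessarily the construction covered in the talk.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Primal linear SVM: sub-gradient descent on the regularised
    hinge loss. Labels y must be coded as +1/-1."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1  # points violating the margin
        if active.any():
            w -= lr * (lam * w - (y[active, None] * X[active]).mean(axis=0))
            b += lr * y[active].mean()
        else:
            w -= lr * lam * w  # only the regulariser contributes
    return w, b

# Two well-separated synthetic classes in the plane.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
print((pred == y).mean())
```

Nonlinear decision boundaries are handled by the kernel trick, which this linear sketch deliberately omits.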

Introduction to quantales via axiomatic analysis 13:10 Fri 15 Jun, 2012 :: Napier LG28 :: Dr Ittay Weiss :: University of the South Pacific
Quantales were introduced by Mulvey in 1986 in the context of noncommutative topology with the aim of providing a concrete noncommutative framework for the foundations of quantum mechanics. Since then quantales have found applications in other areas as well, among others in the work of Flagg. Flagg considers certain special quantales, called value quantales, that are designed to capture the essential properties of ([0,\infty],\le,+) that are relevant for analysis. The result is a well-behaved theory of value-quantale-enriched metric spaces. I will introduce the notion of quantales as if they were designed for just this purpose, review most of the known results (since there are not too many), and address some new results, conjectures, and questions. 

Comparison of spectral and wavelet estimators of transfer function for linear systems 12:10 Mon 18 Jun, 2012 :: B.21 Ingkarni Wardli :: Mr Mohd Aftar Abu Bakar :: University of Adelaide
We compare spectral and wavelet estimators of the response amplitude operator (RAO) of a linear system, with various input signals and added noise scenarios. The comparison is based on a model of a heaving buoy wave energy device (HBWED), which oscillates vertically as a single mode of vibration linear system.
HBWEDs and other single degree of freedom wave energy devices such as the oscillating wave surge convertors (OWSC) are currently deployed in the ocean, making single degree of freedom wave energy devices important systems to both model and analyse in some detail. However, the results of the comparison relate to any linear system.
It was found that the wavelet estimator of the RAO offers no advantage over the spectral estimators if both input and response time series data are noise free and long time series are available. If there is noise on only the response time series, only the wavelet estimator or the spectral estimator that uses the cross-spectrum of the input and response signals in the numerator should be used. For the case of noise on only the input time series, only the spectral estimator that uses the cross-spectrum in the denominator gives a sensible estimate of the RAO. If both the input and response signals are corrupted with noise, a modification to both the input and response spectrum estimates can provide a good estimator of the RAO. However, a combination of wavelet and spectral methods is introduced as an alternative RAO estimator.
The conclusions apply for autoregressive emulators of sea surface elevation, impulse, and pseudo-random binary sequence (PRBS) inputs. However, a wavelet estimator is needed in the special case of a chirp input, where the signal has a continuously varying frequency. 
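As a toy illustration of a spectral transfer-function estimate with the cross-spectrum in the numerator (a hand-rolled segment-averaging sketch assuming NumPy, not the estimators compared in the talk):

```python
import numpy as np

def rao_estimate(x, y, nseg=32):
    """Estimate |H(f)| as cross-spectrum / input spectrum, averaging
    periodograms over nseg non-overlapping segments."""
    seg_len = len(x) // nseg
    Sxx = np.zeros(seg_len // 2 + 1)
    Sxy = np.zeros(seg_len // 2 + 1, dtype=complex)
    for k in range(nseg):
        xs = x[k * seg_len:(k + 1) * seg_len]
        ys = y[k * seg_len:(k + 1) * seg_len]
        X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
        Sxx += np.abs(X) ** 2      # input auto-spectrum
        Sxy += np.conj(X) * Y      # input-response cross-spectrum
    return np.abs(Sxy / Sxx)

# A trivially known "system": the response is the input amplified by 2.5.
rng = np.random.default_rng(2)
x = rng.normal(size=4096)
y = 2.5 * x
H = rao_estimate(x, y)
print(H.round(2))  # flat at 2.5 across all frequencies
```

With noise added to the response only, the conjugate-input cross-spectrum averages the noise away, which is why that form of the estimator is preferred in that scenario.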

AFL Tipping isn't all about numbers and stats...or is it..... 12:10 Mon 6 Aug, 2012 :: B.21 Ingkarni Wardli :: Ms Jessica Tan :: University of Adelaide
The result of an AFL game is always unpredictable; we all know that. That is why we discuss the weekend's upsets and the local tipping competition as part of the "watercooler and weekend" conversation on a Monday morning. Different people use various weird and wonderful techniques or criteria to predict the winning team. With readily available data, I will investigate and compare various strategies and define a measure of the hardness of a round (full acknowledgements will be made in my presentation). Hopefully this will help me for next year's tipping competition... 

Hodge numbers and cohomology of complex algebraic varieties 13:10 Fri 10 Aug, 2012 :: Engineering North 218 :: Prof Gus Lehrer :: University of Sydney
Let $X$ be a complex algebraic variety defined over the ring $\mathfrak{O}$ of integers in a number field $K$ and let $\Gamma$ be a group of $\mathfrak{O}$-automorphisms of $X$. I shall discuss how the counting of rational points over reductions mod $p$ of $X$, and an analysis of the Hodge structure of the cohomology of $X$, may be used to determine the cohomology as a $\Gamma$-module. This will include some joint work with Alex Dimca and with Mark Kisin, and some classical unsolved problems.

Drawing of Viscous Threads with Temperature-dependent Viscosity 14:10 Fri 10 Aug, 2012 :: Engineering North N218 :: Dr Jonathan Wylie :: City University of Hong Kong
The drawing of viscous threads is important in a wide range of industrial
applications and is a primary manufacturing process in the optical fiber
and textile industries. Most of the materials used in these processes have
viscosities that vary extremely strongly with temperature.
We investigate the role played by viscous heating in the
drawing of viscous threads. Usually, the effects of viscous heating and
inertia are neglected because the parameters that characterize them are
typically very small. However, by performing a detailed theoretical
analysis we surprisingly show that even very small amounts of viscous
heating can lead to a runaway phenomenon. On the other hand, inertia
prevents runaway, and the interplay between viscous heating and inertia
results in very complicated dynamics for the system.
Even more surprisingly, in the absence of viscous heating, we find that a
new type of instability can occur when a thread is heated by a radiative
heat source. By analyzing an asymptotic limit of the Navier-Stokes
equation we provide a theory that describes the nature of this instability
and explains the seemingly counterintuitive behavior.

Air-cooled binary Rankine cycle performance with varying ambient temperature 12:10 Mon 13 Aug, 2012 :: B.21 Ingkarni Wardli :: Ms Josephine Varney :: University of Adelaide
Next month, I have to give a presentation in Reno, Nevada to a group of geologists, engineers and geophysicists. So, for this talk, I am going to ask you to pretend you know very little about maths (and perhaps a lot about geology) and give me some feedback on my proposed talk.
The presentation itself is about the effect of air-cooling on geothermal power plant performance. Air-cooling is necessary for geothermal plays in dry areas, and ambient air temperature significantly affects the power output of air-cooled geothermal power plants. Hence, a method for determining the effect of ambient air temperature on geothermal power plants is presented. Using the ambient air temperature distribution from Leigh Creek, South Australia, this analysis shows that an optimally designed plant produces 6% more energy annually than a plant designed using the mean ambient temperature. 

Star Wars Vs The Lord of the Rings: A Survival Analysis 12:10 Mon 27 Aug, 2012 :: B.21 Ingkarni Wardli :: Mr Christopher Davies :: University of Adelaide
Ever wondered whether you are more likely to die in the Galactic Empire or Middle Earth? Well, this is the postgraduate seminar for you!
I'll be attempting to answer this question using survival analysis, the statistical method of choice for investigating time-to-event data.
Spoiler Warning: This talk will contain references to the deaths of characters in the above movie sagas. 

Principal Component Analysis (PCA) 12:30 Mon 3 Sep, 2012 :: B.21 Ingkarni Wardli :: Mr Lyron Winderbaum :: University of Adelaide
Principal Component Analysis (PCA) has become something of a buzzword recently in a number of disciplines, including gene expression analysis and facial recognition. It is a classical, and fundamentally simple, concept that has been around since the early 1900s; its recent popularity is largely due to the need for dimension-reduction techniques in analyzing the high-dimensional data that have become common in the last decade, and to the availability of computing power to implement them. I will explain the concept, prove a result, and give a couple of examples. The talk should be accessible to all disciplines as it (should?) only assume first year linear algebra, the concept of a random variable, and covariance.
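A minimal sketch of the concept, assuming NumPy: centre the data, eigendecompose the sample covariance matrix, and project onto the leading eigenvectors.

```python
import numpy as np

def pca(X, k):
    """Return the top-k principal components (as columns) and the
    projected scores of the rows of X."""
    Xc = X - X.mean(axis=0)                 # centre each variable
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: ascending eigenvalues
    order = np.argsort(eigvals)[::-1]       # sort descending by variance
    components = eigvecs[:, order[:k]]
    return components, Xc @ components

# Correlated 2-D data: most variance lies along one diagonal direction.
rng = np.random.default_rng(3)
t = rng.normal(size=(200, 1))
X = np.hstack([t, t]) + rng.normal(scale=0.1, size=(200, 2))
components, scores = pca(X, 1)
print(components.ravel())  # close to (1, 1)/sqrt(2), up to sign
```

The eigenvalues (not returned in this sketch) give the variance explained by each component, which is what drives PCA's use for dimension reduction.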

Electrokinetics of concentrated suspensions of spherical particles 15:10 Fri 28 Sep, 2012 :: B.21 Ingkarni Wardli :: Dr Bronwyn BradshawHajek :: University of South Australia
Electrokinetic techniques are used to gather specific information about concentrated dispersions such as electronic inks, mineral processing slurries, pharmaceutical products and biological fluids (e.g. blood). But, like most experimental techniques, intermediate quantities are measured, and consequently the method relies explicitly on theoretical modelling to extract the quantities of experimental interest. A self-consistent cell-model theory of electrokinetics can be used to determine the electrical conductivity of a dense suspension of spherical colloidal particles, and thereby determine the quantities of interest (such as the particle surface potential). The numerical predictions of this model compare well with published experimental results. High frequency asymptotic analysis of the cell model leads to some interesting conclusions. 

Turbulent flows, semtex, and rainbows 12:10 Mon 8 Oct, 2012 :: B.21 Ingkarni Wardli :: Ms Sophie Calabretto :: University of Adelaide
The analysis of turbulence in transient flows has applications across a broad range of fields. We use the flow of fluid in a toroidal container as a paradigm for studying the complex dynamics due to this turbulence. To explore the dynamics of our system, we exploit the numerical capabilities of semtex, a quadrilateral spectral element DNS code. Rainbows result. 

Complex analysis in low Reynolds number hydrodynamics 15:10 Fri 12 Oct, 2012 :: B.20 Ingkarni Wardli :: Prof Darren Crowdy :: Imperial College London
It is a well-known fact that the methods of complex analysis provide great advantage in studying physical problems involving a harmonic field satisfying Laplace's equation. One example is in ideal fluid mechanics (infinite Reynolds number), where the absence of viscosity, and the assumption of zero vorticity, mean that it is possible to introduce a so-called complex potential: an analytic function from which all physical quantities of interest can be inferred. In the opposite limit of zero Reynolds number, where flows are slow and viscous and the governing fields are not harmonic, it is much less common to employ the methods of complex analysis, even though they continue to be relevant in certain circumstances. This talk will give an overview of a variety of problems involving slow viscous Stokes flows where complex analysis can be usefully employed to gain theoretical insights. A number of example problems will be considered, including the locomotion of low-Reynolds-number microorganisms and microrobots, the friction properties of superhydrophobic surfaces in microfluidics, and problems of viscous sintering and the manufacture of microstructured optical fibres (MOFs). 

Epidemic models in socially structured populations: when are simple models too simple? 14:00 Thu 25 Oct, 2012 :: 5.56 Ingkarni Wardli :: Dr Lorenzo Pellis :: The University of Warwick
Both age and household structure are recognised as important heterogeneities affecting epidemic spread of infectious pathogens, and many models exist nowadays that include either or both forms of heterogeneity. However, different models may fit aggregate epidemic data equally well and nevertheless lead to different predictions of public health interest. I will here present an overview of stochastic epidemic models with increasing complexity in their social structure, focusing in particular on household models. For these models, I will present recent results about the definition and computation of the basic reproduction number R0 and its relationship with other threshold parameters. Finally, I will use these results to compare models with neither, either or both of age and household structure, with the aim of quantifying the conditions under which each form of heterogeneity is relevant and therefore providing some criteria that can be used to guide model design for real-time predictions. 

The space of cubic rational maps 13:10 Fri 26 Oct, 2012 :: Engineering North 218 :: Mr Alexander Hanysz :: University of Adelaide
For each natural number d, the space of rational maps of degree d on the Riemann sphere has the structure of a complex manifold. The topology of these manifolds has been extensively studied. The recent development of Oka theory raises some new and interesting questions about their complex structure. We apply geometric invariant theory to the degree 3 case, studying a double action of the Möbius group on the space of cubic rational maps. We show that the categorical quotient is C, and that the space of cubic rational maps enjoys the holomorphic flexibility properties of strong dominability and C-connectedness. 

Spatiotemporally Autoregressive Partially Linear Models with Application to the Housing Price Indexes of the United States 12:10 Mon 12 Nov, 2012 :: B.21 Ingkarni Wardli :: Ms Dawlah Alsulami :: University of Adelaide
We propose a Spatiotemporal Autoregressive Partially Linear Regression (STARPLR) model for data observed irregularly over space and regularly in time. The model is capable of capturing possible non-linearity and non-stationarity in space by allowing the coefficients to depend on location. We suggest a two-step procedure to estimate both the coefficients and the unknown function, which is readily implemented and can be computed even for large spatio-temporal data sets. As an illustration, we apply our model to analyze the 51 states' House Price Indexes (HPIs) in the USA. 

On the chromatic number of a random hypergraph 13:10 Fri 22 Mar, 2013 :: Ingkarni Wardli B21 :: Dr Catherine Greenhill :: University of New South Wales
A hypergraph is a set of vertices and a set of hyperedges, where each
hyperedge is a subset of vertices. A hypergraph is r-uniform if every
hyperedge contains r vertices. A colouring of a hypergraph is an
assignment of colours to vertices such that no hyperedge is monochromatic.
When the colours are drawn from the set {1,..,k}, this defines a
k-colouring.
We consider the problem of k-colouring a random r-uniform hypergraph
with n vertices and cn edges, where k, r and c are constants and n tends
to infinity. In this setting, Achlioptas and Naor showed that for the
case of r = 2, the chromatic number of a random graph must have one of two
easily computable values as n tends to infinity.
I will describe some joint work with Martin Dyer (Leeds) and Alan Frieze
(Carnegie Mellon), in which we generalised this result to random uniform
hypergraphs. The argument uses the second moment method, and applies a
general theorem for performing Laplace summation over a lattice. So the
proof contains something for everyone, with elements from combinatorics,
analysis and algebra. 
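The definitions above can be made concrete in a few lines; the following brute-force sketch is illustrative only, feasible just for tiny hypergraphs, in contrast with the asymptotic regime the talk is about.

```python
import itertools

def is_proper(colouring, hyperedges):
    """A colouring is proper iff no hyperedge is monochromatic."""
    return all(len({colouring[v] for v in e}) > 1 for e in hyperedges)

def chromatic_number(n, hyperedges):
    """Smallest k admitting a proper k-colouring of vertices 0..n-1
    (exhaustive search, so only usable for very small n)."""
    for k in range(1, n + 1):
        for colouring in itertools.product(range(k), repeat=n):
            if is_proper(colouring, hyperedges):
                return k
    return n

# A 3-uniform hypergraph on 4 vertices: every triple is a hyperedge.
# Two colour classes of size 2 leave no monochromatic triple.
edges = list(itertools.combinations(range(4), 3))
print(chromatic_number(4, edges))  # 2
```

For a 2-uniform hypergraph (an ordinary graph) this reduces to the usual graph colouring; the results discussed in the talk concern the limit where n tends to infinity and such enumeration is hopeless.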

A stability theorem for elliptic Harnack inequalities 15:10 Fri 5 Apr, 2013 :: B.18 Ingkarni Wardli :: Prof Richard Bass :: University of Connecticut
Harnack inequalities are an important tool in probability theory,
analysis, and partial differential equations. The classical Harnack
inequality is just the one you learned in your graduate complex analysis
class, but there have been many extensions, to different spaces, such as
manifolds, fractals, infinite graphs, and to various sorts of elliptic operators.
A landmark result was that of Moser in 1961, where he proved the Harnack
inequality for solutions to a class of partial differential equations.
I will talk about the stability of Harnack inequalities. The main result
says that if the Harnack inequality holds for an operator on a space,
then the Harnack inequality will also hold for a large class of other operators
on that same space. This provides a generalization of the result of Moser. 
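For reference, the classical inequality alluded to above can be stated as follows (the two-dimensional disc version from complex analysis):

```latex
% If u > 0 is harmonic on the open disc of radius R about x_0,
% then for every x with |x - x_0| = r < R,
\[
  \frac{R - r}{R + r}\, u(x_0) \;\le\; u(x) \;\le\; \frac{R + r}{R - r}\, u(x_0).
\]
% Moser's 1961 theorem extends such two-sided bounds to weak solutions
% of divergence-form elliptic equations with merely measurable coefficients.
```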

An Oka principle for equivariant isomorphisms 12:10 Fri 3 May, 2013 :: Ingkarni Wardli B19 :: A/Prof Finnur Larusson :: University of Adelaide
I will discuss new joint work with Frank Kutzschebauch (Bern) and Gerald Schwarz (Brandeis). Let $G$ be a reductive complex Lie group acting holomorphically on Stein manifolds $X$ and $Y$, which are locally $G$-biholomorphic over a common categorical quotient $Q$. When is there a global $G$-biholomorphism $X\to Y$?
In a situation that we describe, with some justification, as generic, we prove that the obstruction to solving this local-to-global problem is topological and provide sufficient conditions for it to vanish. Our main tool is the equivariant version of Grauert's Oka principle due to Heinzner and Kutzschebauch.
We prove that $X$ and $Y$ are $G$-biholomorphic if $X$ is $K$-contractible, where $K$ is a maximal compact subgroup of $G$, or if there is a $G$-diffeomorphism $X\to Y$ over $Q$, which is holomorphic when restricted to each fibre of the quotient map $X\to Q$. When $G$ is abelian, we obtain stronger theorems. Our results can be interpreted as instances of the Oka principle for sections of the sheaf of $G$-biholomorphisms from $X$ to $Y$ over $Q$. This sheaf can be badly singular, even in simply defined examples.
Our work is in part motivated by the linearisation problem for actions on $\C^n$. It follows from one of our main results that a holomorphic $G$-action on $\C^n$, which is locally $G$-biholomorphic over a common quotient to a generic linear action, is linearisable. 

Colour 12:10 Mon 13 May, 2013 :: B.19 Ingkarni Wardli :: Lyron Winderbaum :: University of Adelaide
Colour is a powerful tool in presenting data, but it can be tricky to choose just the right colours to represent your data honestly. Do the colours used in your heatmap over-emphasise the differences between particular values over others? Does your choice of colours over-emphasise one when they should be represented as equal? All these questions are fundamentally based in how we perceive colour. There has been a lot of research into how we perceive colour in the past century, and some interesting results. I will explain how a `standard observer' was found empirically and used to develop an absolute reference standard for colour in 1931; how, although the common Red-Green-Blue representation of colour is useful and intuitive, distances between colours in this space do not reflect our perception of difference between colours; and how alternative, perceptually focused colour spaces were introduced in 1976. I will go on to explain how these results can be used to provide simple mechanisms by which to choose colours that satisfy particular properties, such as being equally different from each other, or being linearly more different in sequence, or maintaining such properties when transferred to greyscale, or for a colour-blind person. 

Progress in the prediction of buoyancy-affected turbulence 15:10 Fri 17 May, 2013 :: B.18 Ingkarni Wardli :: Dr Daniel Chung :: University of Melbourne
Buoyancy-affected turbulence represents a significant challenge to our understanding, yet it dominates many important flows that occur in the ocean and atmosphere. The presentation will highlight some recent progress in the characterisation, modelling and prediction of buoyancy-affected turbulence using direct and large-eddy simulations, along with implications for the characterisation of mixing in the ocean and the low-cloud feedback in the atmosphere. Specifically, direct numerical simulation data of stratified turbulence will be employed to highlight the importance of boundaries in the characterisation of turbulent mixing in the ocean. Then, a subgrid-scale model that captures the anisotropic character of stratified mixing will be developed for large-eddy simulation of buoyancy-affected turbulence. Finally, the subgrid-scale model is utilised to perform a systematic large-eddy simulation investigation of the archetypal low-cloud regimes, from which the link between the lower-tropospheric stability criterion and the cloud fraction is interpreted. 

Pulsatile Flow 12:10 Mon 20 May, 2013 :: B.19 Ingkarni Wardli :: David Wilke :: University of Adelaide
Blood flow within the human arterial system is inherently unsteady as a consequence of the pulsations of the heart. The unsteady nature of the flow gives rise to a number of important flow features which may be critical in understanding pathologies of the cardiovascular system. For example, it is believed that large oscillations in wall shear stress may enhance the effects of atherosclerosis, among other pathologies.
In this talk I will present some of the basic concepts of pulsatile flow and follow the analysis first performed by J.R. Womersley in his seminal 1955 paper. 

Coincidences 14:10 Mon 20 May, 2013 :: 7.15 Ingkarni Wardli :: A/Prof. Robb Muirhead :: School of Mathematical Sciences
This is a light-hearted (some would say content-free) talk about coincidences, those surprising concurrences of events that are often perceived as meaningfully related, with no apparent causal connection. Time permitting, it will touch on topics like:
Patterns in data and the dangers of looking for patterns, unspecified ahead of time, and trying to "explain" them; e.g. post hoc subgroup analyses, cancer clusters, conspiracy theories ...
Matching problems; e.g. the birthday problem and extensions
People who win a lottery more than once: how surprised should we really be? What's the question we should be asking?
When you become familiar with a new word, and see it again soon afterwards, how surprised should you be?
Caution: This is a shortened version of a talk that was originally prepared for a group of non-mathematicians and non-statisticians, so it's mostly non-technical. It probably does not contain anything you don't already know; it will be an amazing coincidence if it does! 
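As a taster for the matching-problem item, the classic birthday computation (a standard calculation, assuming uniform birthdays and ignoring leap years):

```python
def birthday_collision_prob(n, days=365):
    """Probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (days - k) / days  # k-th person avoids the first k
    return 1.0 - p_all_distinct

for n in (10, 23, 50):
    print(n, round(birthday_collision_prob(n), 3))
# Just 23 people already give a better-than-even chance (about 0.507).
```

The surprise is that the number of *pairs* grows quadratically in n, so a shared birthday becomes likely far sooner than intuition suggests.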

Multiscale modelling couples patches of wavelike simulations 12:10 Mon 27 May, 2013 :: B.19 Ingkarni Wardli :: Meng Cao :: University of Adelaide
Media...A multiscale model is proposed to significantly reduce the expensive numerical simulations of complicated waves over large spatial domains. The multiscale model is built from given microscale simulations of complicated physical processes such as sea ice or turbulent shallow water. Our long term aim is to enable macroscale simulations obtained by coupling small patches of simulations together over large physical distances. This initial work explores the coupling of patch simulations of wave-like PDEs. With the line of development being towards water waves, we discuss the dynamics of two complementary fields called the 'depth' h and 'velocity' u. A staggered grid is used for the microscale simulation of the depth h and velocity u. We introduce a macroscale staggered grid to couple the microscale patches. Linear or quadratic interpolation provides boundary conditions on the field in each patch. Linear analysis of the whole coupled multiscale system establishes that the resultant macroscale dynamics is appropriate. Numerical simulations support the linear analysis. This multiscale method should empower the feasible computation of large scale simulations of wave-like dynamics with complicated underlying physics. 

Fire-Atmosphere Models 12:10 Mon 29 Jul, 2013 :: B.19 Ingkarni Wardli :: Mika Peace :: University of Adelaide
Media...Fire behaviour models are increasingly being used to assist in planning and operational decisions for bush fires and fuel reduction burns. Rate of spread (ROS) of the fire front is a key output of such models. The ROS value is typically calculated from a formula which has been derived from empirical data, using very simple meteorological inputs. We have used a coupled fire-atmosphere model to simulate real bushfire events. The results show that complex interactions between a fire and the atmosphere can have a significant influence on fire spread, thus highlighting the limitations of a model that uses simple meteorological inputs. 

Privacy-Preserving Computation: Not just for secretive millionaires* 12:10 Mon 19 Aug, 2013 :: B.19 Ingkarni Wardli :: Wilko Henecka :: University of Adelaide
Media...PPC enables parties to share information while preserving their data privacy.
I will introduce the concept, show a common ingredient and illustrate its use in an example.
*See Yao's Millionaires' Problem. 

Medical Decision Analysis 12:10 Mon 2 Sep, 2013 :: B.19 Ingkarni Wardli :: Eka Baker :: University of Adelaide
Doctors make life-changing decisions every day based on clinical trial data. However, this data is often obtained from studies on healthy individuals or on patients with only the disease that a treatment is targeting. Outside of these studies, many patients will have other conditions that may affect the predicted benefit of receiving a certain treatment. I will talk about what clinical trials are, how to measure the benefit of treatments, and how having multiple conditions (comorbidities) will affect the benefit of treatments. 

Dynamics and the geometry of numbers 14:10 Fri 27 Sep, 2013 :: Horace Lamb Lecture Theatre :: Prof Akshay Venkatesh :: Stanford University
Media...It was understood by Minkowski that one could prove interesting results in number theory by considering the geometry of lattices in R^n. (A lattice is simply a grid of points.) This technique is called the "geometry of numbers." We now understand much more about analysis and dynamics on the space of all lattices, and this has led to a deeper understanding of classical questions. I will review some of these ideas, with emphasis on the dynamical aspects. 

Gravitational slingshot and space mission design 15:10 Fri 11 Oct, 2013 :: B.18 Ingkarni Wardli :: Prof Pawel Nurowski :: Polish Academy of Sciences
Media...When planning a space mission the weight of the spacecraft is the main issue. Every gram sent into outer space costs a lot. A considerable part of the overall weight of the spaceship consists of the fuel needed to control it. I will explain how space agencies reduce the amount of fuel needed to go to a given place in the Solar System by using the gravity of celestial bodies encountered along the trip. I will start with the explanation of an old trick called `gravitational slingshot', and end up with a modern technique which is based on the analysis of a three-body problem appearing in Newtonian mechanics. 

Classification Using Censored Functional Data 15:10 Fri 18 Oct, 2013 :: B.18 Ingkarni Wardli :: A/Prof Aurore Delaigle :: University of Melbourne
Media...We consider classification of functional data. This problem has received a lot of attention in the literature in the case where the curves are all observed on the same interval. A difficulty in applications is that the functional curves can be supported on quite different intervals, in which case standard methods of analysis cannot be used. We are interested in constructing classifiers for curves of this type. More precisely, we consider classification of functions supported on a compact interval, in cases where the training sample consists of functions observed on other intervals, which may differ among the training curves.
We propose several methods, depending on whether or not the observable intervals overlap by a significant amount. In the case where these intervals differ a lot, our procedure involves extending the curves outside the interval where they were observed. We suggest a new nonparametric approach for doing this. We also introduce flexible ways of combining potential differences in shapes of the curves from different populations, and potential differences between the endpoints of the intervals where the curves from each population are observed. 

All at sea with spectral analysis 11:10 Tue 19 Nov, 2013 :: Ingkarni Wardli Level 5 Room 5.56 :: A/Prof Andrew Metcalfe :: The University of Adelaide
The steady state response of a single degree of freedom damped linear system to a sinusoidal input is a sinusoidal function at the same frequency, but generally with a different amplitude and a phase shift. The analogous result for a random stationary input can be described in terms of input and response spectra and a transfer function description of the linear system.
The practical use of this result is that the parameters of a linear system can be estimated from the input and response spectra, and the response spectrum can be predicted if the transfer function and input spectrum are known.
I shall demonstrate these results with data from a small ship in the North Sea. The results from the sea trial raise the issue of nonlinearity, and second-order amplitude response functions are obtained using autoregressive estimators.
The possibility of using wavelets rather than spectra is considered in the context of single degree of freedom linear systems.
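The input-output spectral relation can be sketched numerically. The following is a minimal illustration, using an assumed first-order discrete-time system rather than the ship data: the ratio of averaged response and input periodograms recovers the squared gain of the transfer function.

```python
import numpy as np

rng = np.random.default_rng(0)
n, seg, a = 2**16, 256, 0.9

x = rng.standard_normal(n)        # stationary random input
y = np.empty(n)                   # response of a simple linear system
y[0] = x[0]
for i in range(1, n):
    y[i] = a * y[i - 1] + x[i]    # assumed system: y[n] = a*y[n-1] + x[n]

# Averaged periodograms over 256 segments; their ratio estimates |H(w)|^2.
X = np.fft.rfft(x.reshape(-1, seg), axis=1)
Y = np.fft.rfft(y.reshape(-1, seg), axis=1)
gain2 = (np.abs(Y) ** 2).mean(axis=0) / (np.abs(X) ** 2).mean(axis=0)

w = 2 * np.pi * np.fft.rfftfreq(seg)
theory = 1.0 / np.abs(1.0 - a * np.exp(-1j * w)) ** 2   # exact squared gain
```

Run the other way, a fitted transfer function plus a measured input spectrum predicts the response spectrum.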
Everybody welcome to attend.
Please note the change of venue: we will be in room 5.56. 

Holomorphic null curves and the conformal Calabi-Yau problem 12:10 Tue 28 Jan, 2014 :: Ingkarni Wardli B20 :: Prof Franc Forstneric :: University of Ljubljana
Media...I shall describe how methods of complex analysis can be used to give new results on the conformal Calabi-Yau problem concerning the existence of bounded metrically complete minimal surfaces in real Euclidean 3-space R^3. We shall see in particular that every bordered Riemann surface admits a proper complete holomorphic immersion into the ball of C^2, and a proper complete embedding as a holomorphic null curve into the ball of C^3. Since the real and the imaginary parts of a holomorphic null curve in C^3 are conformally immersed minimal surfaces in R^3, we obtain a bounded complete conformal minimal immersion of any bordered Riemann surface into R^3. The main advantage of our methods, when compared to the existing ones in the literature, is that we do not need to change the conformal type of the Riemann surface. (Joint work with A. Alarcon, University of Granada.)


Integrability of infinite-dimensional Lie algebras and Lie algebroids 12:10 Fri 7 Feb, 2014 :: Ingkarni Wardli B20 :: Christoph Wockel :: Hamburg University
Lie's Third Theorem states that each finite-dimensional Lie algebra is the Lie algebra of a Lie group (we also say "integrates to a Lie group"). The corresponding statement for infinite-dimensional Lie algebras or Lie algebroids is false, and we will explain geometrically why this is the case. The underlying pattern is that of integration of central extensions of Lie algebras and Lie algebroids. This also occurs in other contexts, and we will explain some aspects of string group models in these terms. In the end we will sketch how the non-integrability of Lie algebras and Lie algebroids can be overcome by passing to higher categorical objects (such as smooth stacks) and give a panoramic (but still conjectural) perspective on the precise relation of the various integrability problems.


Hörmander's estimate, some generalizations and new applications 12:10 Mon 17 Feb, 2014 :: Ingkarni Wardli B20 :: Prof Zbigniew Blocki :: Jagiellonian University
Lars Hörmander proved his estimate for the dbar equation in 1965. It is one of the most important results in several complex variables (SCV). New applications have emerged recently, outside of SCV. We will present three of them: the Ohsawa-Takegoshi extension theorem with optimal constant, the one-dimensional Suita Conjecture, and Nazarov's approach to the Bourgain-Milman inequality from convex analysis. 

The structuring role of chaotic stirring on pelagic ecosystems 11:10 Fri 28 Feb, 2014 :: B19 Ingkarni Wardli :: Dr Francesco d'Ovidio :: Universite Pierre et Marie Curie (Paris VI)
The open ocean upper layer is characterized by complex transport dynamics occurring over different spatiotemporal scales. At the scale of 10-100 km, which covers the so-called mesoscale and part of the submesoscale, in situ and remote sensing observations detect strong variability in physical and biogeochemical fields like sea surface temperature, salinity, and chlorophyll concentration. The calculation of Lyapunov exponents and other nonlinear diagnostics applied to the surface currents has made it possible to show that an important part of this tracer variability is due to chaotic stirring. Here I will extend this analysis to marine ecosystems. For primary producers, I will show that stable and unstable manifolds of hyperbolic points embedded in the surface velocity field are able to structure the phytoplanktonic community in fluid dynamical niches of dominant types, where competition can locally occur during bloom events. By using data from tagged whales, frigatebirds, and elephant seals, I will also show that chaotic stirring affects the behaviour of higher trophic levels. In perspective, these relations between transport structures and marine ecosystems can form the basis of a biodiversity index constructed from satellite information, and therefore able to monitor key aspects of marine biodiversity and its temporal variability at the global scale. 

The effects of pre-existing immunity 15:10 Fri 7 Mar, 2014 :: B.18 Ingkarni Wardli :: Associate Professor Jane Heffernan :: York University, Canada
Media...Immune system memory, also called immunity, is gained as a result of primary infection or vaccination, and can be boosted after vaccination or secondary infections. Immunity is developed so that the immune system is primed to react and fight a pathogen earlier and more effectively in secondary infections. The effects of memory, however, on pathogen propagation in an individual host (in-host) and a population (epidemiology) are not well understood. Mathematical models of infectious diseases, employing dynamical systems, computer simulation and bifurcation analysis, can provide projections of pathogen propagation, show outcomes of infection and help inform public health interventions. In the Modelling Infection and Immunity (MI^2) lab, we develop and study biologically informed mathematical models of infectious diseases at both levels of infection, and combine these models into comprehensive multiscale models so that the effects of individual immunity in a population can be determined. In this talk we will discuss some of the interesting mathematical phenomena that arise in our models, and show how our results are directly applicable to what is known about the persistence of infectious diseases. 

Viscoelastic fluids: mathematical challenges in determining their relaxation spectra 15:10 Mon 17 Mar, 2014 :: 5.58 Ingkarni Wardli :: Professor Russell Davies :: Cardiff University
Determining the relaxation spectrum of a viscoelastic fluid is a crucial step before a linear or nonlinear constitutive model can be applied. Information about the relaxation spectrum is obtained from simple flow experiments such as creep or oscillatory shear. However, the determination process involves the solution of one or more highly ill-posed inverse problems. The availability of only discrete data, the presence of noise in the data, as well as incomplete data, collectively make the problem very hard to solve.
In this talk I will illustrate the mathematical challenges inherent in determining relaxation spectra, and also introduce the method of wavelet regularization which enables the representation of a continuous relaxation spectrum by a set of hyperbolic scaling functions.
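As a generic illustration of why regularization is needed for such ill-posed inversions (this is plain Tikhonov regularization on an invented smoothing operator, not the wavelet method of the talk): inverting a smoothing operator amplifies data noise catastrophically unless a penalty term stabilises the solution.

```python
import numpy as np

rng = np.random.default_rng(1)

# An invented smoothing (hence ill-conditioned) forward operator: a Gaussian
# kernel matrix, standing in for the blurring of a relaxation spectrum into
# measured moduli. It is not the actual rheological kernel.
n = 60
t = np.linspace(-1.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.02)
x_true = np.exp(-((t - 0.3) ** 2) / 0.01)        # a "spectrum" with one peak
b = A @ x_true + 1e-3 * rng.standard_normal(n)   # noisy discrete data

x_naive = np.linalg.solve(A, b)                  # unregularized inversion
lam = 1e-2                                       # regularization parameter
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)  # Tikhonov

err_naive = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
err_reg = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
print(err_reg < err_naive)
```

The wavelet approach of the talk replaces the simple norm penalty here with a representation of the continuous spectrum in hyperbolic scaling functions.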


A model for the BitCoin block chain that takes propagation delays into account 15:10 Fri 28 Mar, 2014 :: B.21 Ingkarni Wardli :: Professor Peter Taylor :: The University of Melbourne
Media...Unlike cash transactions, most electronic transactions require the presence of a trusted authority to verify that the payer has sufficient funding to be able to make the transaction and to adjust the account balances of the payer and payee. In recent years BitCoin has been proposed as an "electronic equivalent of cash". The general idea is that transactions are verified in a coded form in a block chain, which is maintained by the community of participants. Problems can arise when the block chain splits: that is, different participants have different versions of the block chain, something which can happen only when there are propagation delays, at least if all participants are behaving according to the protocol.
In this talk I shall present a preliminary model for the splitting behaviour of the block chain. I shall then go on to perform a similar analysis for a situation where a group of participants has adopted a recently proposed strategy for gaining a greater advantage from BitCoin processing than its combined computing power alone would warrant. 
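A toy calculation (my own simplification, not the speaker's model) shows why propagation delays are the culprit: if blocks arrive as a Poisson process and each block takes a fixed time to reach the rest of the network, a split occurs whenever a competing block is found inside that propagation window.

```python
import math
import random

random.seed(42)

# Assumed toy parameters: ~one block per 10 minutes, 10 s propagation delay.
lam = 1.0 / 600.0
delay = 10.0
trials = 200_000

# A split occurs when the next inter-block time falls inside the delay window.
splits = sum(random.expovariate(lam) < delay for _ in range(trials))
estimate = splits / trials
theory = 1.0 - math.exp(-lam * delay)   # exact per-block split probability
print(round(estimate, 4), round(theory, 4))
```

With no delay the split probability vanishes; it grows roughly linearly in lam*delay for small delays.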

Semiclassical restriction estimates 12:10 Fri 4 Apr, 2014 :: Ingkarni Wardli B20 :: Melissa Tacy :: University of Adelaide
Eigenfunctions of Hamiltonians arise naturally in the theory of quantum mechanics as stationary states of quantum systems. Their eigenvalues have an interpretation as the square root of E, where E is the energy of the system. We wish to better understand the high energy limit which defines the boundary between quantum and classical mechanics. In this talk I will focus on results regarding the restriction of eigenfunctions to lower dimensional subspaces, in particular to hypersurfaces. A convenient way to study such problems is to reframe them as problems in semiclassical analysis. 

Bayesian Indirect Inference 12:10 Mon 14 Apr, 2014 :: B.19 Ingkarni Wardli :: Brock Hermans :: University of Adelaide
Media...Bayesian likelihood-free methods saw the resurgence of Bayesian statistics through the use of computer sampling techniques. Since the resurgence, attention has focused on so-called 'summary statistics', that is, ways of summarising data that allow for accurate inference to be performed. However, it is not uncommon to find data sets in which the summary statistic approach is not sufficient.
In this talk, I will be summarising some of the likelihood-free methods most commonly used (don't worry if you've never seen any Bayesian analysis before), as well as looking at Bayesian Indirect Likelihood, a new way of implementing Bayesian analysis which combines new inference methods with some of the older computational algorithms. 
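The simplest likelihood-free method, ABC rejection sampling, fits in a few lines. This is a generic sketch with invented coin-flip data, not an example from the talk: draw a parameter from the prior, simulate data, and keep the draw when the simulated summary statistic is close to the observed one.

```python
import random

random.seed(0)

# Invented observed data: 50 coin flips with 36 heads.
n_flips, heads_obs = 50, 36

# ABC rejection with a Uniform(0,1) prior on the heads probability theta.
accepted = []
while len(accepted) < 2000:
    theta = random.random()
    heads_sim = sum(random.random() < theta for _ in range(n_flips))
    if abs(heads_sim - heads_obs) <= 1:   # tolerance on the summary statistic
        accepted.append(theta)

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))
```

The accepted draws approximate the posterior; here the summary statistic (the head count) happens to be sufficient, which is exactly what fails in the harder problems the talk addresses.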

Network-based approaches to classification and biomarker identification in metastatic melanoma 15:10 Fri 2 May, 2014 :: B.21 Ingkarni Wardli :: Associate Professor Jean Yee Hwa Yang :: The University of Sydney
Media...Finding prognostic markers has been a central question in much of current research in medicine and biology. In the last decade, approaches to prognostic prediction within a genomics setting have been primarily based on changes in individual genes / proteins. Very recently, however, network-based approaches to prognostic prediction have begun to emerge which utilize interaction information between genes. This is based on the belief that large-scale molecular interaction networks are dynamic in nature and changes in these networks, rather than changes in individual genes/proteins, are often drivers of complex diseases such as cancer.
In this talk, I use data from stage III melanoma patients provided by Prof. Mann from the Melanoma Institute of Australia to discuss how network information can be utilized in the analysis of gene expression data to aid in biological interpretation. Here, we explore a number of novel and previously published network-based prediction methods, which we will then compare to the common single-gene and gene-set methods with the aim of identifying more biologically interpretable biomarkers in the form of networks. 

Multiple Sclerosis and linear stability analysis 12:35 Mon 19 May, 2014 :: B.19 Ingkarni Wardli :: Saber Dini :: University of Adelaide
Media...Multiple sclerosis (MS) is an inflammatory disease in which the immune system of the body attacks the myelin sheaths around axons in the brain and damages, or in other words demyelinates, the axons. The demyelination process can lead to scarring as well as a broad spectrum of signs and symptoms. The brains of vertebrates have a mechanism to repair the demyelination, or remyelinate, the damaged area. Remyelination in the brain is accomplished by glial cells (the support cells of neurons). Glial cells must accumulate in the damaged areas of the brain to start the repair process, and this accumulation can be viewed as an instability. Therefore, spatiotemporal linear stability analysis can be undertaken to investigate quantitative aspects of the remyelination process. 

Group meeting 15:10 Fri 6 Jun, 2014 :: 5.58 Ingkarni Wardli :: Meng Cao and Trent Mattner :: University of Adelaide
Meng Cao:: Multiscale modelling couples patches of nonlinear wavelike simulations ::
Abstract:
The multiscale gap-tooth scheme is built from given microscale simulations of complicated physical processes to empower macroscale simulations. By coupling small patches of simulations over unsimulated physical gaps, large savings in computational time are possible. So far the gap-tooth scheme has been developed for dissipative systems, but wave systems are also of great interest. This work develops the gap-tooth scheme for nonlinear microscale simulations of wave-like systems. Classic macroscale interpolation provides a generic coupling between patches that achieves arbitrarily high order consistency between the multiscale scheme and the underlying microscale dynamics. Eigenanalysis indicates that the resultant gap-tooth scheme empowers feasible computation of large scale simulations of wave-like dynamics with complicated underlying physics. As a pilot study, we implement numerical simulations of dam-breaking waves by the gap-tooth scheme. Comparison between a gap-tooth simulation, a microscale simulation over the whole domain, and some published experimental data on dam breaking, demonstrates that the gap-tooth scheme feasibly computes large scale wave-like dynamics with computational savings.
Trent Mattner :: Coupled atmosphere-fire simulations of the Canberra 2003 bushfires using WRF-SFire :: Abstract:
The Canberra fires of January 18, 2003 are notorious for the extreme fire behaviour and fire-atmosphere-topography interactions that occurred, including lee-slope fire channelling, pyrocumulonimbus development and tornado formation. In this talk, I will discuss coupled fire-weather simulations of the Canberra fires using WRF-SFire. In these simulations, a fire-behaviour model is used to dynamically predict the evolution of the fire front according to local atmospheric and topographic conditions, as well as the associated heat and moisture fluxes to the atmosphere. It is found that the predicted fire front and heat flux are not too bad, bearing in mind the complexity of the problem and the severe modelling assumptions made. However, the predicted moisture flux is too low, which has some impact on atmospheric dynamics. 

All's Fair in Love and Statistics 12:35 Mon 28 Jul, 2014 :: B.19 Ingkarni Wardli :: Annie Conway :: University of Adelaide
Media...Earlier this year Wired.com published an article about a "math genius" who found true love after scraping and analysing data from a dating site. In this talk I will be investigating the actual mathematics that he used, in particular methods for clustering categorical data, and whether or not the approach was successful. 
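One standard method for clustering categorical data, which may or may not be the one the article used, is k-modes: k-means with Hamming dissimilarity and per-attribute modes in place of means. A minimal sketch on invented dating-profile-style records:

```python
from collections import Counter

def hamming(a, b):
    """Number of attributes on which two categorical records disagree."""
    return sum(x != y for x, y in zip(a, b))

def k_modes(records, init_modes, iters=10):
    """Tiny k-modes: assign each record to its nearest mode, then update
    each mode attribute-by-attribute to the most common category."""
    modes = list(init_modes)
    k = len(modes)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for r in records:
            nearest = min(range(k), key=lambda j: hamming(r, modes[j]))
            clusters[nearest].append(r)
        modes = [tuple(Counter(col).most_common(1)[0][0] for col in zip(*c))
                 if c else modes[j]
                 for j, c in enumerate(clusters)]
    return modes, clusters

# Invented categorical records (purely illustrative).
records = [
    ("tea", "cats", "city"), ("tea", "cats", "city"), ("tea", "dogs", "city"),
    ("coffee", "dogs", "rural"), ("coffee", "dogs", "rural"), ("coffee", "cats", "rural"),
]
modes, clusters = k_modes(records, [records[0], records[3]])
print([len(c) for c in clusters])  # two clusters of three records each
```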

Fast computation of eigenvalues and eigenfunctions on bounded plane domains 15:10 Fri 1 Aug, 2014 :: B.18 Ingkarni Wardli :: Professor Andrew Hassell :: Australian National University
Media...I will describe a new method for numerically computing eigenfunctions and eigenvalues on certain plane domains, derived from the so-called "scaling method" of Vergini and Saraceno. It is based on properties of the Dirichlet-to-Neumann map on the domain, which relates a function f on the boundary of the domain to the normal derivative (at the boundary) of the eigenfunction with boundary data f. This is a topic of independent interest in pure mathematics. In my talk I will try to emphasize the interplay between theory and applications, which is very rich in this situation. This is joint work with numerical analyst Alex Barnett (Dartmouth). 

Software and protocol verification using Alloy 12:10 Mon 25 Aug, 2014 :: B.19 Ingkarni Wardli :: Dinesha Ranathunga :: University of Adelaide
Media...Reliable software isn't achieved by trial and error. It requires tools to support verification. Alloy is a tool based on set theory that allows expression of a logic-based model of software or a protocol, and hence allows checking of this model. In this talk, I will cover its key concepts, language syntax and analysis features. 

Neural Development of the Visual System: a laminar approach 15:10 Fri 29 Aug, 2014 :: N132 Engineering North :: Dr Andrew Oster :: Eastern Washington University
Media...In this talk, we will introduce the architecture of the visual system in higher order primates and cats. Through activity-dependent plasticity mechanisms, the left and right eye streams segregate in the cortex in a stripe-like manner, resulting in a pattern called an ocular dominance map. We introduce a mathematical model to study how such a neural wiring pattern emerges. We go on to consider the joint development of the ocular dominance map with another feature of the visual system, the cytochrome oxidase blobs, which appear in the center of the ocular dominance stripes. Since the cortex is in fact composed of layers, we introduce a simple laminar model and perform a stability analysis of the wiring pattern. This intricate biological structure (ocular dominance stripes with "blobs" periodically distributed in their centers) can be understood as occurring due to two Turing instabilities combined with the leading-order dynamics of the system. 
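The Turing mechanism can be illustrated with a generic two-component linear stability calculation (illustrative coefficients, not the model of the talk): a steady state that is stable without diffusion becomes unstable to a band of spatial wavenumbers when the inhibitor diffuses much faster than the activator.

```python
import numpy as np

# Assumed activator-inhibitor linearisation: du/dt = J u + D Laplacian u.
J = np.array([[1.0, -2.0],
              [3.0, -4.0]])       # Jacobian at the homogeneous steady state
D = np.diag([1.0, 30.0])          # inhibitor diffuses much faster

def growth_rate(q):
    """Largest real part among eigenvalues of J - q^2 D at wavenumber q."""
    return np.linalg.eigvals(J - q**2 * D).real.max()

qs = np.linspace(0.0, 2.0, 400)
rates = np.array([growth_rate(q) for q in qs])
q_star = qs[rates.argmax()]       # fastest-growing wavenumber

print(growth_rate(0.0) < 0)       # stable without diffusion...
print(rates.max() > 0)            # ...but a band of wavenumbers grows: Turing
```

The fastest-growing wavenumber q_star sets the stripe spacing of the emerging pattern.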

Neural Development of the Visual System: a laminar approach 15:10 Fri 29 Aug, 2014 :: This talk will now be given as a School Colloquium :: Dr Andrew Oster :: Eastern Washington University
In this talk, we will introduce the architecture of the visual system in higher order primates and cats. Through activity-dependent plasticity mechanisms, the left and right eye streams segregate in the cortex in a stripe-like manner, resulting in a pattern called an ocular dominance map. We introduce a mathematical model to study how such a neural wiring pattern emerges. We go on to consider the joint development of the ocular dominance map with another feature of the visual system, the cytochrome oxidase blobs, which appear in the center of the ocular dominance stripes. Since the cortex is in fact composed of layers, we introduce a simple laminar model and perform a stability analysis of the wiring pattern. This intricate biological structure (ocular dominance stripes with 'blobs' periodically distributed in their centers) can be understood as occurring due to two Turing instabilities combined with the leading-order dynamics of the system. 

Inferring absolute population and recruitment of southern rock lobster using only catch and effort data 12:35 Mon 22 Sep, 2014 :: B.19 Ingkarni Wardli :: John Feenstra :: University of Adelaide
Media...Abundance estimates from a data-limited version of catch survey analysis are compared to those from a novel one-parameter deterministic method. Bias of both methods is explored using simulation testing based on a more complex data-rich stock assessment population dynamics fishery operating model, exploring the impact of both varying levels of observation error in data as well as model process error. Recruitment was consistently better estimated than legal size population, the latter being most sensitive to increasing observation errors. A hybrid of the data-limited methods is proposed as the most robust approach. A more statistically conventional errors-in-variables approach may also be touched upon if time permits. 

To Complex Analysis... and beyond! 12:10 Mon 29 Sep, 2014 :: B.19 Ingkarni Wardli :: Brett Chenoweth :: University of Adelaide
Media...In the undergraduate complex analysis course students learn about complex-valued functions on domains in C (the complex plane). Several interesting and surprising results come about from this study. In my talk I will introduce a more general setting where complex analysis can be done, namely Riemann surfaces (complex manifolds of dimension 1). I will then prove that all non-compact Riemann surfaces are Stein, which loosely speaking means that their function theory is similar to that of C. 

Optimally Chosen Quadratic Forms for Partitioning Multivariate Data 13:10 Tue 14 Oct, 2014 :: Ingkarni Wardli 715 Conference Room :: Assoc. Prof. Inge Koch :: School of Mathematical Sciences
Media...Quadratic forms are commonly used in linear algebra. For d-dimensional vectors they have a matrix representation, Q(x) = x'Ax, for some symmetric matrix A. In statistics quadratic forms are defined for d-dimensional random vectors, and one of the best-known quadratic forms is the Mahalanobis distance of two random vectors.
In this talk we want to partition a quadratic form Q(X) = X'MX, where X is a random vector, and M a symmetric matrix, that is, we want to find a d-dimensional random vector W such that Q(X) = W'W. This problem has many solutions. We are interested in a solution or partition W of X such that pairs of corresponding variables (X_j, W_j) are highly correlated and such that W is simpler than the given X.
We will consider some natural candidates for W which turn out to be suboptimal in the sense of the above constraints, and we will then exhibit the optimal solution. Solutions of this type are useful in the well-known T-square statistic. We will see in examples what these solutions look like. 
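One natural candidate partition, presumably among the suboptimal ones the talk examines, takes W = M^{1/2}X via the symmetric square root of M, so that W'W = X'MX by construction. A small numerical check with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)

# An illustrative symmetric positive definite M and a random vector X.
M = np.array([[2.0, 0.6],
              [0.6, 1.0]])
X = rng.standard_normal(2)

Q = X @ M @ X                       # the quadratic form Q(X) = X'MX

# Symmetric square root of M from its eigendecomposition, giving W = M^{1/2} X.
vals, vecs = np.linalg.eigh(M)
M_half = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
W = M_half @ X

print(bool(np.allclose(W @ W, Q)))  # W'W recovers Q(X)
```

With M the inverse covariance matrix, Q(X) is the squared Mahalanobis distance mentioned above.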

Happiness and social information flow: Computational social science through data. 15:10 Fri 7 Nov, 2014 :: EM G06 (Engineering & Maths Bldg) :: Dr Lewis Mitchell :: University of Adelaide
The recent explosion in big data coming from online social networks has led to an increasing interest in bringing quantitative methods to bear on questions in social science. A recent high-profile example is the study of emotional contagion, which has led to significant challenges and controversy. This talk will focus on two issues related to emotional contagion, namely remote sensing of population-level wellbeing and the problem of information flow across a social network. We discuss some of the challenges in working with massive online data sets, and present a simple tool for measuring large-scale happiness from such data. By combining over 10 million geolocated messages collected from Twitter with traditional census data we uncover geographies of happiness at the scale of states and cities, and discuss how these patterns may be related to traditional wellbeing measures and public health outcomes. Using tools from information theory we also study information flow between individuals and how this may relate to the concept of predictability for human behaviour. 
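At its simplest, the happiness-measurement idea reduces to averaging per-word scores over a text. A toy sketch with invented scores (the published studies use large crowd-sourced word lists; these numbers are illustrative only):

```python
# Invented per-word happiness scores, standing in for a crowd-sourced list.
happiness = {"happy": 8.3, "love": 8.4, "great": 7.8,
             "sad": 2.4, "hate": 2.0, "traffic": 3.6}

def text_happiness(text):
    """Average happiness of the scored words in a text, or None if none match."""
    scores = [happiness[w] for w in text.lower().split() if w in happiness]
    return sum(scores) / len(scores) if scores else None

print(round(text_happiness("I love this great city"), 2))
print(round(text_happiness("I hate the traffic today"), 2))
```

Aggregating such scores over millions of geolocated messages is what yields the city- and state-level "geographies of happiness" described above.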

Factorisations of Distributive Laws 12:10 Fri 19 Dec, 2014 :: Ingkarni Wardli B20 :: Paul Slevin :: University of Glasgow
Recently, distributive laws have been used by Boehm and Stefan to construct new examples of duplicial (paracyclic) objects, and hence cyclic homology theories. The paradigmatic example of such a theory is the cyclic homology HC(A) of an associative algebra A. It was observed by Kustermans, Murphy, and Tuset that the functor HC can be twisted by automorphisms of A. It turns out that this twisting procedure can be applied to any duplicial object defined by a distributive law.
I will begin by defining duplicial objects and cyclic homology, as well as discussing some categorical concepts, then describe the construction of Boehm and Stefan. I will then define the category of factorisations of a distributive law and explain how this acts on their construction, and give some examples, making explicit how the action of this category generalises the twisting of an associative algebra. 

Nonlinear analysis over infinite-dimensional spaces and its applications 12:10 Fri 6 Feb, 2015 :: Ingkarni Wardli B20 :: Tsuyoshi Kato :: Kyoto University
In this talk we develop a moduli theory of holomorphic curves over infinite-dimensional manifolds consisting of sequences of almost Kaehler manifolds. Under the assumption of high symmetry, we verify that many mechanisms of the standard moduli theory over closed symplectic manifolds also work over these infinite-dimensional spaces. As an application, we study the deformation theory of discrete groups acting on trees. There is a canonical way, up to conjugacy, to embed such groups into the automorphism group over the infinite projective space. We verify that for some class of Hamiltonian functions, the deformed groups must always be asymptotically infinite. 

Boundary behaviour of Hitchin and hypo flows with left-invariant initial data 12:10 Fri 27 Feb, 2015 :: Ingkarni Wardli B20 :: Vicente Cortes :: University of Hamburg
Hitchin and hypo flows constitute a system of first-order PDEs for the construction of Ricci-flat Riemannian metrics of special holonomy in dimensions 6, 7 and 8. Assuming that the initial geometric structure is left-invariant, we study whether the resulting Ricci-flat manifolds can be extended in a natural way to complete Ricci-flat manifolds. This talk is based on joint work with Florin Belgun, Marco Freibert and Oliver Goertsches, see arXiv:1405.1866 (math.DG). 

On the analyticity of CR-diffeomorphisms 12:10 Fri 13 Mar, 2015 :: Engineering North N132 :: Ilya Kossivskiy :: University of Vienna
One of the fundamental objects in several complex variables is CR-mappings. CR-mappings naturally occur in complex analysis as boundary values of mappings between domains, and as restrictions of holomorphic mappings onto real submanifolds. It was already observed by Cartan that smooth CR-diffeomorphisms between CR-submanifolds in C^N tend to be very regular, i.e., they are restrictions of holomorphic maps. However, in general smooth CR-mappings form a more restrictive class of mappings. Thus, since the inception of CR-geometry, the following general question has been of fundamental importance for the field: Are CR-equivalent real-analytic CR-structures also equivalent holomorphically? In joint work with Lamel, we answer this question in the negative, in any positive CR-dimension and CR-codimension. Our construction is based on a recent dynamical technique in CR-geometry, developed in my earlier work with Shafikov. 

Symmetric groups via categorical representation theory 15:10 Fri 20 Mar, 2015 :: Engineering North N132 :: Dr Oded Yacobi :: University of Sydney
The symmetric groups play a fundamental role in representation theory and, while their characteristic zero representations are well understood, over fields of positive characteristic most foundational questions are still unanswered. In the 1990's Kleshchev made a spectacular breakthrough, and computed certain modular restriction multiplicities. It was observed by Lascoux, Leclerc, and Thibon that Kleshchev's numerology encodes a seemingly unrelated object: the crystal graph associated to an affine Lie algebra! We will explain how this mysterious connection opens the door to categorical representation theory, and, moreover, how the categorical perspective allows one to prove new theorems about representations of symmetric groups. We will also discuss other problems/applications in the landscape of categorical representation theory. 

Groups acting on trees 12:10 Fri 10 Apr, 2015 :: Napier 144 :: Anitha Thillaisundaram :: Heinrich Heine University of Duesseldorf
From a geometric point of view, branch groups are groups acting spherically transitively on a spherically homogeneous rooted tree. The applications of branch groups reach out to analysis, geometry, combinatorics, and probability. The early constructions of branch groups were the Grigorchuk group and the Gupta-Sidki p-groups. Among its many claims to fame, the Grigorchuk group was the first example of a group of intermediate growth (i.e. neither polynomial nor exponential). Here we consider a generalisation of the family of Grigorchuk-Gupta-Sidki groups, and we examine the restricted occurrence of their maximal subgroups. 

Multivariate regression in quantitative finance: sparsity, structure, and robustness 15:10 Fri 1 May, 2015 :: Engineering North N132 :: A/Prof Mark Coates :: McGill University
Many quantitative hedge funds around the world strive to predict future equity and futures returns based on many sources of information, including historical returns and economic data. This leads to a multivariate regression problem. Compared to many regression problems, the signal-to-noise ratio is extremely low, and profits can be realized if even a small fraction of the future returns can be accurately predicted. The returns generally have heavy-tailed distributions, further complicating the regression procedure.
In this talk, I will describe how we can impose structure into the regression problem in order to make detection and estimation of the very weak signals feasible. Some of this structure consists of an assumption of sparsity; some of it involves identification of common factors to reduce the dimension of the problem. I will also describe how we can formulate alternative regression problems that lead to more robust solutions that better match the performance metrics of interest in the finance setting. 

Medical Decision Making 12:10 Mon 11 May, 2015 :: Napier LG29 :: Eka Baker :: University of Adelaide
Practicing physicians make treatment decisions based on clinical trial data every day. This data is based on trials primarily conducted on healthy volunteers, or on those with only the disease in question. In reality, patients do have existing conditions that can affect the benefits and risks associated with receiving these treatments.
In this talk, I will explain how we modified an already existing Markov model to show the progression of treatment of a single condition over time. I will then explain how we adapted this to a different condition, and then created a combined model, which demonstrated how both diseases and treatments progressed on the same patient over their lifetime. 
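The Markov models described above can be sketched with a standard Markov cohort calculation. The states and transition probabilities below are made up for illustration and are not taken from the talk.

```python
import numpy as np

# Hypothetical three-state progression model: Well -> Sick -> Dead (absorbing).
# Each row gives the transition probabilities out of one state per model
# cycle (e.g. one year); rows sum to 1.
P = np.array([
    [0.90, 0.08, 0.02],   # from Well
    [0.00, 0.85, 0.15],   # from Sick
    [0.00, 0.00, 1.00],   # from Dead (absorbing)
])

state = np.array([1.0, 0.0, 0.0])   # the whole cohort starts Well
for _ in range(10):                  # run ten cycles
    state = state @ P

print(state)  # cohort distribution over Well/Sick/Dead after 10 cycles
```

Combining two conditions, as in the talk, amounts to enlarging the state space so each state records the stage of both diseases.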

Can mathematics help save energy in computing? 15:10 Fri 22 May, 2015 :: Engineering North N132 :: Prof Markus Hegland :: ANU
Recent development of computational hardware is characterised by two trends:
1. High levels of duplication of computational capabilities in multicore, parallel and GPU processing, and
2. Substantially faster development of the speed of computational technology compared to communication technology.
A consequence of these two trends is that the energy costs of modern computing devices, from mobile phones to supercomputers, are increasingly dominated by communication costs. In order to save energy one would thus need to reduce the amount of data movement within the computer. This can be achieved by recomputing results instead of communicating them. The resulting increase in computational redundancy may also be used to make the computations more robust against hardware faults. Paradoxically, by doing more (computations) we use less (energy).
This talk will first discuss, for a simple example, how mathematical understanding can be applied to improve computational results using extrapolation. Then the problem of energy consumption in computational hardware will be considered. Finally some recent work will be discussed which shows how redundant computing is used to mitigate computational faults and thus to save energy.
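The extrapolation idea can be illustrated with Richardson extrapolation, a classical instance of trading extra computation for accuracy. This sketch is an illustrative stand-in, not the speaker's actual example: two finite-difference evaluations at different step sizes are combined so that the leading error term cancels.

```python
import math

def central(f, x, h):
    """Central-difference approximation to f'(x), with O(h^2) error."""
    return (f(x + h) - f(x - h)) / (2 * h)

f, x, h = math.exp, 1.0, 0.1
d_h = central(f, x, h)           # error ~ C h^2
d_h2 = central(f, x, h / 2)      # error ~ C h^2 / 4
d_rich = (4 * d_h2 - d_h) / 3    # the h^2 terms cancel: error is O(h^4)

exact = math.exp(1.0)            # since (e^x)' = e^x
print(abs(d_h - exact), abs(d_rich - exact))
```

Three function evaluations' worth of extra work buys roughly four more digits of accuracy here, with no extra data movement.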


Group Meeting 15:10 Fri 29 May, 2015 :: EM 213 :: Dr Judy Bunder :: University of Adelaide
Talk: Patch dynamics for efficient exascale simulations
Abstract
Massive parallelisation has led to a dramatic increase in available computational power.
However, data transfer speeds have failed to keep pace and are the major limiting factor in the development of exascale computing. New algorithms must be developed which minimise the transfer of data. Patch dynamics is a computational macroscale modelling scheme which provides a coarse macroscale solution of a problem defined on a fine microscale by dividing the domain into many non-overlapping, coupled patches. Patch dynamics is readily adaptable to massive parallelisation as each processor core can evaluate the dynamics on one, or a few, patches. However, patch coupling conditions interpolate across the unevaluated parts of the domain between patches and require almost continuous data transfer. We propose a modified patch dynamics scheme which minimises data transfer by only re-evaluating the patch coupling conditions at `mesoscale' time scales which are significantly larger than the time scale of the microscale problem. We analyse and quantify the error arising from patch dynamics with mesoscale temporal coupling. 

Workshop on Geometric Quantisation 10:10 Mon 27 Jul, 2015 :: Level 7 conference room Ingkarni Wardli :: Michele Vergne, Weiping Zhang, Eckhard Meinrenken, Nigel Higson and many others
Geometric quantisation has been an increasingly active area since before the 1980s, with links to physics, symplectic geometry, representation theory, index theory, and differential geometry and geometric analysis in general. In addition to its relevance as a field on its own, it acts as a focal point for the interaction between all of these areas, which has yielded far-reaching and powerful results. This workshop features a large number of international speakers, who are all well-known for their work in (differential) geometry, representation theory and/or geometric analysis. This is a great opportunity for anyone interested in these areas to meet and learn from some of the top mathematicians in the world. Students are especially welcome. Registration is free. 

Dynamics on Networks: The role of local dynamics and global networks on hypersynchronous neural activity 15:10 Fri 31 Jul, 2015 :: Ingkarni Wardli B21 :: Prof John Terry :: University of Exeter, UK
Graph theory has evolved into a useful tool for studying complex brain networks inferred from a variety of measures of neural activity, including fMRI, DTI, MEG and EEG. In the study of neurological disorders, recent work has discovered differences in the structure of graphs inferred from patient and control cohorts. However, most of these studies pursue a purely observational approach; identifying correlations between properties of graphs and the cohort which they describe, without consideration of the underlying mechanisms. To move beyond this necessitates the development of mathematical modelling approaches to appropriately interpret network interactions and the alterations in brain dynamics they permit.
In the talk we introduce some of these concepts with application to epilepsy, introducing a dynamic network approach to study resting state EEG recordings from a cohort of 35 people with epilepsy and 40 adult controls. Using this framework we demonstrate a strongly significant difference between networks inferred from the background activity of people with epilepsy in comparison to normal controls. Our findings demonstrate that a mathematical model-based analysis of routine clinical EEG provides significant additional information beyond standard clinical interpretation, which may ultimately enable a more appropriate mechanistic stratification of people with epilepsy leading to improved diagnostics and therapeutics. 

Mathematical Modeling and Analysis of Active Suspensions 14:10 Mon 3 Aug, 2015 :: Napier 209 :: Professor Michael Shelley :: Courant Institute of Mathematical Sciences, New York University
Complex fluids that have a 'bioactive' microstructure, like suspensions of swimming bacteria or assemblies of immersed biopolymers and motor-proteins, are important examples of so-called active matter. These internally driven fluids can have strange mechanical properties, and show persistent activity-driven flows and self-organization. I will show how first-principles PDE models are derived through reciprocal coupling of the 'active stresses' generated by collective microscopic activity to the fluid's macroscopic flows. These PDEs have an interesting analytic structure and dynamics that agree qualitatively with experimental observations: they predict the transitions to flow instability and persistent mixing observed in bacterial suspensions, and for microtubule assemblies show the generation, propagation, and annihilation of disclination defects. I'll discuss how these models might be used to study yet more complex biophysical systems.


Be careful not to impute something ridiculous! 12:20 Mon 24 Aug, 2015 :: Benham Labs G10 :: Sarah James :: University of Adelaide
When learning how to make inferences from data, we are given all of the information with no missing values. In reality, data sets are often missing data, anywhere from 5% of the data to extreme cases such as 70% of the data. Instead of getting rid of the incomplete cases we can impute predictions for each missing value and make inferences on the resulting data set. But just how sensible are our predictions? In this talk, we will learn how to deal with missing data and talk about why we have to be careful with our predictions. 
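One toy illustration of why imputed predictions call for care (not the speaker's method, and with made-up numbers): naive mean imputation preserves the sample mean but shrinks the variance, which can make subsequent inferences overconfident.

```python
import statistics

data = [2.1, 1.9, None, 2.4, None, 2.0, 5.8]  # toy sample with missing values
observed = [x for x in data if x is not None]

mean = statistics.mean(observed)
imputed = [x if x is not None else mean for x in data]

# The mean is unchanged, but every imputed point sits exactly at the mean,
# so the variance of the completed data set is artificially reduced.
print(statistics.pvariance(observed), statistics.pvariance(imputed))
```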

Pattern Formation in Nature 12:10 Mon 31 Aug, 2015 :: Benham Labs G10 :: Saber Dini :: University of Adelaide
Pattern formation is a ubiquitous process in nature: embryo development, animal skin pigmentation, etc. I will talk about how Alan Turing (the British genius known for the Turing Machine) explained pattern formation by linear stability analysis of reaction-diffusion systems. 
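Turing's linear stability argument can be checked numerically: a reaction-diffusion system u_t = D u_xx + J u is Turing-unstable when it is stable without diffusion (wavenumber k = 0) but some nonzero k grows. The Jacobian and diffusion coefficients below are hypothetical values chosen to satisfy the Turing conditions (the inhibitor must diffuse much faster than the activator).

```python
import numpy as np

# Hypothetical activator-inhibitor Jacobian at a spatially uniform steady state.
J = np.array([[3.0, -4.0],
              [5.0, -6.0]])
D = np.diag([1.0, 20.0])   # inhibitor diffuses 20x faster than activator

def growth_rate(k):
    """Largest real part among the eigenvalues of J - k^2 D:
    positive means perturbations at wavenumber k grow."""
    return np.linalg.eigvals(J - k**2 * D).real.max()

ks = np.linspace(0.0, 2.0, 401)
rates = [growth_rate(k) for k in ks]
print(growth_rate(0.0), max(rates))  # stable at k=0, unstable at some k>0
```

The band of wavenumbers with positive growth rate sets the characteristic wavelength of the emerging pattern.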

IGA/AMSI Workshop: Australia-Japan Geometry, Analysis and their Applications 09:00 Mon 19 Oct, 2015 :: Ingkarni Wardli Conference Room 7.15 (Level 7)
Interdisciplinary workshop between Australia and Japan on Geometry, Analysis and their Applications. 

Ocean dynamics of Gulf St Vincent: a numerical study 12:10 Mon 2 Nov, 2015 :: Benham Labs G10 :: Henry Ellis :: University of Adelaide
The aim of this research is to determine the physical dynamics of ocean circulation within Gulf St. Vincent, South Australia, and the exchange of momentum, nutrients, heat, salt and other water properties between the gulf and shelf via Investigator Strait and Backstairs Passage. The project aims to achieve this through the creation of high-resolution numerical models, combined with new and historical observations from a moored instrument package, satellite data, and shipboard surveys.
The quasi-realistic high-resolution models are forced using boundary conditions generated by existing larger-scale ROMS models, which in turn are forced at the boundary by a global model, creating a global-to-regional-to-local model network. Climatological forcing is done using European Centre for Medium-Range Weather Forecasts (ECMWF) data sets and is consistent over the regional and local models. A series of conceptual models are used to investigate the relative importance of separate physical processes in addition to fully forced quasi-realistic models.
An outline of the research to be undertaken is given:
• Connectivity of Gulf St. Vincent with shelf waters, including seasonal variation due to wind and thermocline patterns;
• The role of winter-time cooling and formation of eddies in flushing the gulf;
• The formation of a temperature front within the gulf during summer time; and
• The connectivity and importance of nutrient-rich, cool water upwelling from the Bonney Coast to the gulf via Backstairs Passage during summer time. 

Modelling Coverage in RNA Sequencing 09:00 Mon 9 Nov, 2015 :: Ingkarni Wardli 5.57 :: Arndt von Haeseler :: Max F Perutz Laboratories, University of Vienna
RNA sequencing (RNA-seq) is the method of choice for measuring the expression of RNAs in a cell population. In an RNA-seq experiment, sequencing the full length of larger RNA molecules requires fragmentation into smaller pieces to be compatible with the limited read lengths of most deep-sequencing technologies. Unfortunately, the issue of non-uniform coverage across a genomic feature has been a concern in RNA-seq and is attributed to preferences for certain fragments in steps of library preparation and sequencing. However, the disparity between the observed non-uniformity of read coverage in RNA-seq data and the assumption of expected uniformity elicits a query on the read coverage profile one should expect across a transcript, if there are no biases in the sequencing protocol. We propose a simple model of unbiased fragmentation where we find that the expected coverage profile is not uniform and, in fact, depends on the ratio of fragment length to transcript length. To compare the non-uniformity proposed by our model with experimental data, we extended this simple model to incorporate empirical attributes matching those of the sequenced transcript in an RNA-seq experiment. In addition, we imposed an experimentally derived distribution on the frequency at which fragment lengths occur.
We used this model to compare our theoretical prediction with experimental data and with the uniform coverage model. If time permits, we will also discuss a potential application of our model. 
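A minimal simulation in the spirit of such an unbiased-fragmentation model (toy lengths, not the authors' parameters) already shows why the expected coverage profile is not flat: positions near the transcript ends are reachable by fewer fragment placements than positions in the middle.

```python
import random

random.seed(1)
L, frag, n = 100, 20, 20000   # transcript length, fixed fragment length, draws

# Place n fragments uniformly at random along the transcript and count
# how many times each base is covered.
cov = [0] * L
for _ in range(n):
    start = random.randrange(L - frag + 1)   # uniform over valid start sites
    for p in range(start, start + frag):
        cov[p] += 1

print(cov[0], cov[L // 2])  # an edge base is covered far less than a middle base
```

A base at position p is covered by min(p+1, frag, L-p, L-frag+1) start sites, so the expected profile ramps up over the first fragment-length of the transcript and down over the last, as the model predicts.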

Weak globularity in homotopy theory and higher category theory 12:10 Thu 12 Nov, 2015 :: Ingkarni Wardli B19 :: Simona Paoli :: University of Leicester
Spaces and homotopy theories are fundamental objects of study of algebraic topology. One way to study these objects is to break them into smaller components with the Postnikov decomposition. To describe such decomposition purely algebraically we need higher categorical structures. We describe one approach to modelling these structures based on a new paradigm to build weak higher categories, which is the notion of weak globularity. We describe some of their connections to both homotopy theory and higher category theory. 

Use of epidemic models in optimal decision making 15:00 Thu 19 Nov, 2015 :: Ingkarni Wardli 5.57 :: Tim Kinyanjui :: School of Mathematics, The University of Manchester
Epidemic models have proved useful in a number of applications in epidemiology. In this work, I will present two areas where we have used modelling to make informed decisions. Firstly, we have used an age-structured mathematical model to describe the transmission of Respiratory Syncytial Virus in a developed country setting and to explore different vaccination strategies. We found that delayed infant vaccination has significant potential in reducing the number of hospitalisations in the most vulnerable group and that most of the reduction is due to indirect protection. It also suggests that marked public health benefit could be achieved through RSV vaccine delivered to age groups not seen as most at risk of severe disease. The second application is in the optimal design of studies aimed at collection of household-stratified infection data. A design decision involves making a trade-off between the number of households to enrol and the sampling frequency. Two commonly used study designs are considered: cross-sectional and cohort. The search for an optimal design uses Bayesian methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information for each design. We found that for the cross-sectional designs, the amount of information increases with the sampling intensity, while the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing data collection studies. 
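The entropy comparison behind such design rankings can be sketched as follows. The two posteriors below are made-up stand-ins for what different designs might yield: a design producing a sharper posterior over the parameter grid has lower Shannon entropy, i.e. it leaves less uncertainty.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Hypothetical posteriors over a 5-point parameter grid under two designs.
posterior_a = [0.05, 0.10, 0.70, 0.10, 0.05]  # concentrated: informative design
posterior_b = [0.20, 0.20, 0.20, 0.20, 0.20]  # uniform: uninformative design

print(entropy(posterior_a), entropy(posterior_b))
```

The uniform posterior attains the maximum entropy log2(5) bits; ranking candidate designs by expected posterior entropy is one way to formalise the information trade-off described above.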

A fixed point theorem on noncompact manifolds 12:10 Fri 12 Feb, 2016 :: Ingkarni Wardli B21 :: Peter Hochs :: University of Adelaide / Radboud University
For an elliptic operator on a compact manifold acted on by a compact Lie group, the Atiyah-Segal-Singer fixed point formula expresses its equivariant index in terms of data on fixed point sets of group elements. This can for example be used to prove Weyl's character formula. We extend the definition of the equivariant index to noncompact manifolds, and prove a generalisation of the Atiyah-Segal-Singer formula, for group elements with compact fixed point sets. In one example, this leads to a relation with characters of discrete series representations of semisimple Lie groups. (This is joint work with Hang Wang.) 

How predictable are you? Information and happiness in social media. 12:10 Mon 21 Mar, 2016 :: Ingkarni Wardli Conference Room 715 :: Dr Lewis Mitchell :: School of Mathematical Sciences
The explosion of ``Big Data'' coming from online social networks and the like has opened up the new field of ``computational social science'', which applies a quantitative lens to problems traditionally in the domain of psychologists, anthropologists and social scientists. What does it mean to be influential? How do ideas propagate amongst populations? Is happiness contagious? For the first time, mathematicians, statisticians, and computer scientists can provide insight into these and other questions. Using data from social networks such as Facebook and Twitter, I will give an overview of recent research trends in computational social science, describe some of my own work using techniques like sentiment analysis and information theory in this realm, and explain how you can get involved with this highly rewarding research field as well.


Connecting within-host and between-host dynamics to understand how pathogens evolve 15:10 Fri 1 Apr, 2016 :: Engineering South S112 :: A/Prof Mark Tanaka :: University of New South Wales
Modern molecular technologies enable a detailed examination of the extent of genetic variation among isolates of bacteria and viruses. Mathematical models can help make inferences about pathogen evolution from such data. Because the evolution of pathogens ultimately occurs within hosts, it is influenced by dynamics within hosts including interactions between pathogens and hosts. Most models of pathogen evolution focus on either the within-host or the between-host level. Here I describe steps towards bridging the two scales. First, I present a model of influenza virus evolution that incorporates within-host dynamics to obtain the between-host rate of molecular substitution as a function of the mutation rate, the within-host reproduction number and other factors. Second, I discuss a model of viral evolution in which some hosts are immunocompromised, thereby extending opportunities for within-host virus evolution which then affects population-level evolution. Finally, I describe a model of Mycobacterium tuberculosis in which multidrug resistance evolves within hosts and spreads by transmission between hosts. 

Geometric analysis of gap-labelling 12:10 Fri 8 Apr, 2016 :: Eng & Maths EM205 :: Mathai Varghese :: University of Adelaide
Using an earlier result, joint with Quillen, I will formulate a gap-labelling conjecture for magnetic Schrödinger operators with smooth aperiodic potentials on Euclidean space. Results in low dimensions will be given, and the formulation of the same problem for certain non-Euclidean spaces will be given if time permits.
This is ongoing joint work with Moulay Benameur.


Sard Theorem for the endpoint map in sub-Riemannian manifolds 12:10 Fri 29 Apr, 2016 :: Eng & Maths EM205 :: Alessandro Ottazzi :: University of New South Wales
Sub-Riemannian geometries occur in several areas of pure and applied mathematics, including harmonic analysis, PDEs, control theory, metric geometry, geometric group theory, and neurobiology. We introduce sub-Riemannian manifolds and give some examples. We then discuss some of the open problems, and in particular we focus on the Sard Theorem for the endpoint map, which is related to the study of length minimizers. Finally, we consider some recent results obtained in collaboration with E. Le Donne, R. Montgomery, P. Pansu and D. Vittone. 

Mathematical modelling of the immune response to influenza 15:00 Thu 12 May, 2016 :: Ingkarni Wardli B20 :: Ada Yan :: University of Melbourne
The immune response plays an important role in the resolution of primary influenza infection and prevention of subsequent infection in an individual. However, the relative roles of each component of the immune response in clearing infection, and the effects of interaction between components, are not well quantified.
We have constructed a model of the immune response to influenza based on data from viral interference experiments, where ferrets were exposed to two influenza strains within a short time period. The changes in viral kinetics of the second virus due to the first virus depend on the strains used as well as the interval between exposures, enabling inference of the timing of innate and adaptive immune response components and the role of cross-reactivity in resolving infection. Our model provides a mechanistic explanation for the observed variation in viruses' abilities to protect against subsequent infection at short inter-exposure intervals, either by delaying the second infection or inducing stochastic extinction of the second virus. It also explains the decrease in recovery time for the second infection when the two strains elicit cross-reactive cellular adaptive immune responses. To account for inter-subject as well as inter-virus variation, the model is formulated using a hierarchical framework. We will fit the model to experimental data using Markov Chain Monte Carlo methods; quantification of the model will enable a deeper understanding of the effects of potential new treatments.


Harmonic analysis of Hodge-Dirac operators 12:10 Fri 13 May, 2016 :: Eng & Maths EM205 :: Pierre Portal :: Australian National University
When the metric on a Riemannian manifold is perturbed in a rough (merely bounded and measurable) manner, do basic estimates involving the Hodge-Dirac operator $D = d+d^*$ remain valid? Even in the model case of a perturbation of the euclidean metric on $\mathbb{R}^n$, this is a difficult question. For instance, the fact that the $L^2$ estimate $\|Du\|_2 \sim \|\sqrt{D^2}u\|_2$ remains valid for perturbed versions of $D$ was a famous conjecture made by Kato in 1961 and solved, positively, in a groundbreaking paper of Auscher, Hofmann, Lacey, McIntosh and Tchamitchian in 2002. In the past fifteen years, a theory has emerged from the solution of this conjecture, making rough perturbation problems much more tractable. In this talk, I will give a general introduction to this theory, and present one of its latest results: a flexible approach to $L^p$ estimates for the holomorphic functional calculus of $D$. This is joint work with D. Frey (Delft) and A. McIntosh (ANU).


Harmonic Analysis in Rough Contexts 15:10 Fri 13 May, 2016 :: Engineering South S112 :: Dr Pierre Portal :: Australian National University
In recent years, perspectives on what constitutes the ``natural" framework within which to conduct various forms of mathematical analysis have shifted substantially. The common theme of these shifts can be described as a move towards roughness, i.e. the elimination of smoothness assumptions that had previously been considered fundamental. Examples include partial differential equations on domains with a boundary that is merely Lipschitz continuous, geometric analysis on metric measure spaces that do not have a smooth structure, and stochastic analysis of dynamical systems that have nowhere differentiable trajectories.
In this talk, aimed at a general mathematical audience, I describe some of these shifts towards roughness, placing an emphasis on harmonic analysis, and on my own contributions. This includes the development of heat kernel methods in situations where such a kernel is merely a distribution, and applications to deterministic and stochastic partial differential equations. 

Smooth mapping orbifolds 12:10 Fri 20 May, 2016 :: Eng & Maths EM205 :: David Roberts :: University of Adelaide
It is well-known that orbifolds can be represented by a special kind of Lie groupoid, namely those that are étale and proper. Lie groupoids themselves are one way of presenting certain nice differentiable stacks.
In joint work with Ray Vozzo we have constructed a presentation of the mapping stack Hom(disc(M),X), for M a compact manifold and X a differentiable stack, by a Fréchet-Lie groupoid. This uses an apparently new result in global analysis about the map C^\infty(K_1,Y) \to C^\infty(K_2,Y) induced by restriction along the inclusion K_2 \to K_1, for certain compact K_1,K_2. We apply this to the case of X being an orbifold to show that the mapping stack is an infinite-dimensional orbifold groupoid. We also present results about mapping groupoids for bundle gerbes. 

Time series analysis of paleoclimate proxies (a mathematical perspective) 15:10 Fri 27 May, 2016 :: Engineering South S112 :: Dr Thomas Stemler :: University of Western Australia
In this talk I will present the work my colleagues from the School of Earth and Environment (UWA), the "transdisciplinary methods" group of the Potsdam Institute for Climate Impact Research, Germany, and I did to explain the dynamics of the Australian-South East Asian monsoon system during the last couple of thousand years.
From a time series perspective, paleoclimate proxy series are more or less the monsters moving under your bed that wake you up in the middle of the night. The data is clearly non-stationary, non-uniformly sampled in time, and the influence of stochastic forcing or the level of measurement noise is more or less unknown. Given these undesirable properties, almost all traditional time series analysis methods fail.
I will highlight two methods that allow us to draw useful conclusions from the data sets. The first one uses Gaussian kernel methods to reconstruct climate networks from multiple proxies. The coupling relationships in these networks change over time and therefore can be used to infer which areas of the monsoon system dominate the complex dynamics of the whole system. Secondly I will introduce the transformation cost time series method, which allows us to detect changes in the dynamics of a non-uniformly sampled time series. Unlike the frequently used interpolation approach, our new method does not corrupt the data and therefore avoids biases in any subsequent analysis. While I will again focus on paleoclimate proxies, the method can be used in other applied areas where regular sampling is not possible.


Student Performance Issues in First Year University Calculus 15:10 Fri 10 Jun, 2016 :: Engineering South S112 :: Dr Christine Mangelsdorf :: University of Melbourne
MAST10006 Calculus 2 is the largest subject in the School of Mathematics and Statistics at the University of Melbourne, accounting for about 2200 out of 7400 first year enrolments. Despite excellent and consistent feedback from students on lectures, tutorials and teaching materials, scaled failure rates in Calculus 2 averaged an unacceptably high 29.4% (with raw failure rates reaching 40%) by the end of 2014. To understand the issues behind the poor student performance, we studied the exam papers of students with grades of 40-49% over a three-year period. In this presentation, I will present data on areas of poor performance in the final exam, show samples of student work, and identify possible causes for their errors. Many of the performance issues are found to relate to basic weaknesses in the students' secondary school mathematical skills that inhibit their ability to successfully complete Calculus 2. Since 2015, we have employed a number of approaches to support students' learning that significantly improved student performance in assessment. I will discuss the changes made to assessment practices and extra support materials provided online and in person, that are driving the improvement. 

Multiscale modeling in biofluids and particle aggregation 15:10 Fri 17 Jun, 2016 :: B17 Ingkarni Wardli :: Dr Sarthok Sircar :: University of Adelaide
In today's seminar I will give 2 examples in mathematical biology which describes the multiscale organization at 2 levels: the meso/micro level and the continuum/macro level. I will then detail suitable tools in statistical mechanics to link these different scales.
The first problem arises in mathematical physiology: swellingdeswelling mechanism of mucus, an ionic gel. Mucus is packaged inside cells at high concentration (volume fraction) and when released into the extracellular environment, it expands in volume by two orders of magnitude in a matter of seconds. This rapid expansion is due to the rapid exchange of calcium and sodium that changes the crosslinked structure of the mucus polymers, thereby causing it to swell. Modeling this problem involves a twophase, polymer/solvent mixture theory (in the continuum level description), together with the chemistry of the polymer, its nearest neighbor interaction and its binding with the dissolved ionic species (in the microscale description). The problem is posed as a freeboundary problem, with the boundary conditions derived from a combination of variational principle and perturbation analysis. The dynamics of neutral gels and the equilibriumstates of the ionic gels are analyzed.
In the second example, we numerically study the adhesion-fragmentation dynamics of rigid, round particle clusters subject to a homogeneous shear flow. At the macro level we describe the dynamics of the number density of these clusters. The description at the microscale includes (a) binding/unbinding of the bonds attached to the particle surface, (b) bond torsion, (c) surface potential due to the ionic medium, and (d) flow hydrodynamics due to the shear flow. 

Chern-Simons invariants of Seifert manifolds via Loop spaces 14:10 Tue 28 Jun, 2016 :: Ingkarni Wardli B17 :: Ryan Mickler :: Northeastern University
Over the past 30 years the Chern-Simons functional for connections on G-bundles over three-manifolds has led to a deep understanding of the geometry of three-manifolds, as well as knot invariants such as the Jones polynomial. Here we study this functional for three-manifolds that are topologically given as the total space of a principal circle bundle over a compact Riemann surface base, which are known as Seifert manifolds. We show that on such manifolds the Chern-Simons functional reduces to a particular gauge-theoretic functional on the 2d base, which describes a gauge theory of connections on an infinite-dimensional bundle over this base with structure group given by the level-k affine central extension of the loop group LG. We show that this formulation gives a new understanding of results of Beasley-Witten on the computability of quantum Chern-Simons invariants of these manifolds as well as knot invariants for knots that wrap a single fiber of the circle bundle. A central tool in our analysis is the caloron correspondence of Murray-Stevenson-Vozzo.


Product Hardy spaces associated to operators with heat kernel bounds on spaces of homogeneous type 12:10 Fri 19 Aug, 2016 :: Ingkarni Wardli B18 :: Lesley Ward :: University of South Australia
Much effort has been devoted to generalizing the Calderón-Zygmund theory in harmonic analysis from Euclidean spaces to metric measure spaces, or spaces of homogeneous type. Here the underlying space R^n with Euclidean metric and Lebesgue measure is replaced by a set X with a general metric or quasi-metric and a doubling measure. Further, one can replace the Laplacian operator that underpins the Calderón-Zygmund theory by more general operators L satisfying heat kernel estimates. I will present recent joint work with P. Chen, X.T. Duong, J. Li and L.X. Yan along these lines. We develop the theory of product Hardy spaces H^p_{L_1,L_2}(X_1 x X_2), for 1

Mathematical modelling of social spreading processes 15:10 Fri 19 Aug, 2016 :: Napier G03 :: Prof Hans De Sterck :: Monash University
Social spreading processes are intriguing manifestations of how humans interact and shape each other's lives. There is great interest in improving our understanding of these processes, and the increasing availability of empirical information in the era of big data and online social networks, combined with mathematical and computational modelling techniques, offers compelling new ways to study these processes.
I will first discuss mathematical models for the spread of political revolutions on social networks. The influence of online social networks and social media on the dynamics of the Arab Spring revolutions of 2011 is of particular interest in our work. I will describe a hierarchy of models, starting from agent-based models realized on empirical social networks, and ending with population-level models that summarize the dynamical behaviour of the spreading process. We seek to understand quantitatively how political revolutions may be facilitated by the modern online social networks of social media.
The second part of the talk will describe a population-level model for the social dynamics that cause cigarette smoking to spread in a population. Our model predicts that more individualistic societies will show faster adoption and cessation of smoking. Evidence from a newly composed century-long composite data set on smoking prevalence in 25 countries supports the model, with potential implications for public health interventions around the world.
Throughout the talk, I will argue that important aspects of social spreading processes can be revealed and understood via quantitative mathematical and computational models matched to empirical data.
This talk describes joint work with John Lang and Danny Abrams. 

A principled experimental design approach to big data analysis 15:10 Fri 23 Sep, 2016 :: Napier G03 :: Prof Kerrie Mengersen :: Queensland University of Technology
Big Datasets are endemic, but they are often notoriously difficult to analyse because of their size, complexity, history and quality. The purpose of this paper is to open a discourse on the use of modern experimental design methods to analyse Big Data in order to answer particular questions of interest. By appeal to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has wide generality and advantageous inferential and computational properties. In particular, the principled experimental design approach is shown to provide a flexible framework for analysis that, for certain classes of objectives and utility functions, delivers equivalent answers compared with analyses of the full dataset. It can also provide a formalised method for iterative parameter estimation, model checking, identification of data gaps and evaluation of data quality. Finally, it has the potential to add value to other Big Data sampling algorithms, in particular divide-and-conquer strategies, by determining efficient subsamples. 

SIR epidemics with stages of infection 12:10 Wed 28 Sep, 2016 :: EM218 :: Matthieu Simon :: Université Libre de Bruxelles
This talk is concerned with a stochastic model for the spread of an epidemic in a closed homogeneously mixing population. The population is subdivided into three classes of individuals: the susceptibles, the infectives and the removed cases. In short, an infective remains infectious during a random period of time. While infected, it can contact all the susceptibles present, independently of the other infectives. At the end of the infectious period, it becomes a removed case and has no further part in the infection process.
We represent an infectious period as a set of different stages that an infective can go through before being removed. The transitions between stages are ruled by either a Markov process or a semi-Markov process. In each stage, an infective makes contaminations at the epochs of a Poisson process with a specific rate.
Our purpose is to derive closed expressions for a transform of different statistics related to the end of the epidemic, such as the final number of susceptibles and the area under the trajectories of all the infectives. The analysis is performed by using simple matrix analytic methods and martingale arguments. Numerical illustrations will be provided at the end of the talk. 
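The staged-infectious-period mechanism described above can be sketched with a small Gillespie-style simulation. This is an illustrative toy only, not the matrix-analytic and martingale machinery of the talk: the function name, the Erlang (identical exponential stages) structure, and the rates are my own choices.

```python
import random

def sir_final_size(n_susceptible, beta, mu, stages, seed=1):
    """Gillespie simulation of a Markovian SIR epidemic whose infectious
    period is a sum of `stages` exponential phases (an Erlang period).
    Returns the number of susceptibles left when the epidemic ends.
    Illustrative sketch; parameters are invented, not from the talk."""
    rng = random.Random(seed)
    s = n_susceptible
    infectives = [0]                       # current stage of each infective
    while infectives:
        i = len(infectives)
        contact_rate = beta * s * i        # infections: rate beta per S-I pair
        progress_rate = mu * stages * i    # stage progression per infective
        total = contact_rate + progress_rate
        if rng.random() < contact_rate / total:
            s -= 1
            infectives.append(0)           # new infective starts in stage 0
        else:
            k = rng.randrange(i)
            infectives[k] += 1
            if infectives[k] == stages:    # left the final stage: removed
                infectives.pop(k)
    return s

print(sir_final_size(n_susceptible=200, beta=0.01, mu=1.0, stages=3))
```

The final number of susceptibles returned here is exactly the kind of end-of-epidemic statistic whose transform the talk computes in closed form.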

Measuring and mapping carbon dioxide from remote sensing satellite data 15:10 Fri 21 Oct, 2016 :: Napier G03 :: Prof Noel Cressie :: University of Wollongong
This talk is about environmental statistics for global remote sensing of atmospheric carbon dioxide, a leading greenhouse gas. An important compartment of the carbon cycle is atmospheric carbon dioxide (CO2), where it (and other gases) contributes to climate change through a greenhouse effect. There are a number of CO2 observational programs where measurements are made around the globe at a small number of ground-based locations at somewhat regular time intervals. In contrast, satellite-based programs are spatially global but give up some of the temporal richness. The most recent satellite launched to measure CO2 was NASA's Orbiting Carbon Observatory-2 (OCO-2), whose principal objective is to retrieve a geographical distribution of CO2 sources and sinks. OCO-2's measurement of column-averaged mole fraction, XCO2, is designed to achieve this, through a data-assimilation procedure that is statistical at its basis. Consequently, uncertainty quantification is key, starting with the spectral radiances from an individual sounding through to borrowing of strength via spatial-statistical modelling. 

Toroidal Soap Bubbles: Constant Mean Curvature Tori in S^3 and R^3 12:10 Fri 28 Oct, 2016 :: Ingkarni Wardli B18 :: Emma Carberry :: University of Sydney
Constant mean curvature (CMC) tori in S^3, R^3 or H^3 are in bijective correspondence with spectral curve data, consisting of a hyperelliptic curve, a line bundle on this curve and some additional data, which in particular determines the relevant space form. This point of view is particularly relevant for considering moduli-space questions, such as the prevalence of tori amongst CMC planes and whether tori can be deformed. I will address these questions for the spherical and Euclidean cases, using Whitham deformations.


Leavitt path algebras 12:10 Fri 2 Dec, 2016 :: Engineering & Math EM213 :: Roozbeh Hazrat :: Western Sydney University
From a directed graph one can generate an algebra which captures the movements along the graph. One such algebra is the Leavitt path algebra.
Despite being introduced only 10 years ago, Leavitt path algebras have arisen in a variety of contexts as diverse as analysis, symbolic dynamics, non-commutative geometry and representation theory. In fact, Leavitt path algebras are the algebraic counterpart of graph C*-algebras, a theory which has become an area of intensive research globally. There are strikingly parallel similarities between the two theories. Even more surprisingly, one cannot (yet) obtain the results of one theory as a consequence of the other; the statements look the same, but the techniques used to prove them are quite different (as the names suggest, one uses algebra and the other analysis). This all suggests that there might be a bridge between algebra and analysis yet to be uncovered.
In this talk, we introduce Leavitt path algebras and try to classify them by means of (graded) Grothendieck groups. We will ask nice questions!


What is index theory? 12:10 Tue 21 Mar, 2017 :: Ingkarni Wardli 5.57 :: Dr Peter Hochs :: School of Mathematical Sciences
Index theory is a link between topology, geometry and analysis. A typical theorem in index theory says that two numbers are equal: an analytic index and a topological index. The first theorem of this kind was the index theorem of Atiyah and Singer, which they proved in 1963. Index theorems have many applications in maths and physics. For example, they can be used to prove that a differential equation must have a solution. Also, they imply that the topology of a space like a sphere or a torus determines in what ways it can be curved. Topology is the study of geometric properties that do not change if we stretch or compress a shape without cutting or gluing. Curvature does change when we stretch something out, so it is surprising that topology can say anything about curvature. Index theory has many surprising consequences like this.


Minimal surfaces and complex analysis 12:10 Fri 24 Mar, 2017 :: Napier 209 :: Antonio Alarcón :: University of Granada
A surface in the Euclidean space R^3 is said to be minimal if it is locally area-minimizing, meaning that every point in the surface admits a compact neighborhood with the least area among all the surfaces with the same boundary. Although the origin of minimal surfaces is in physics, since they can be realized locally as soap films, this family of surfaces lies in the intersection of many fields of mathematics. In particular, complex analysis in one and several variables plays a fundamental role in the theory. In this lecture we will discuss the influence of complex analysis in the study of minimal surfaces. 

K-types of tempered representations 12:10 Fri 7 Apr, 2017 :: Napier 209 :: Peter Hochs :: University of Adelaide
Tempered representations of a reductive Lie group G are the irreducible unitary representations one needs in the Plancherel decomposition of L^2(G). They are relevant to harmonic analysis because of this, and also occur in the Langlands classification of the larger class of admissible representations. If K in G is a maximal compact subgroup, then there is a considerable amount of information in the restriction of a tempered representation to K. In joint work with Yanli Song and Shilin Yu, we give a geometric expression for the decomposition of such a restriction into irreducibles. The multiplicities of these irreducibles are expressed as indices of Dirac operators on reduced spaces of a coadjoint orbit of G corresponding to the representation. These reduced spaces are Spin^c analogues of reduced spaces in symplectic geometry, defined in terms of moment maps that represent conserved quantities. This result involves a Spin^c version of the "quantisation commutes with reduction" principle for non-compact manifolds. For discrete series representations, this was done by Paradan in 2003. 

Algae meet the mathematics of multiplicative multifractals 12:10 Tue 2 May, 2017 :: Ingkarni Wardli Conference Room 715 :: Professor Tony Roberts :: School of Mathematical Sciences
There is much that is fragmented and rough in the world around us: clouds and landscapes are examples, as is algae. We need fractal geometry to encompass these. In practice we need multifractals: a composite of interwoven sets, each with its own fractal structure. Multiplicative multifractals have known properties. Optimising a fit between them and the data then empowers us to quantify subtle details of fractal geometry in applications, such as in algae distribution. 
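The simplest example of the multiplicative construction mentioned here is the binomial cascade, in which mass is repeatedly split between the two halves of each interval. The sketch below is my own illustration (the weight p and the number of levels are arbitrary), not code from the talk:

```python
import numpy as np

def binomial_cascade(p=0.7, levels=12):
    """Binomial multiplicative cascade on [0,1]: at each level every
    interval splits in two, the left half receiving a fraction p of its
    parent's mass and the right half the remaining 1-p."""
    measure = np.array([1.0])
    for _ in range(levels):
        measure = np.repeat(measure, 2)   # split every cell into two children
        measure[0::2] *= p                # left children
        measure[1::2] *= 1.0 - p          # right children
    return measure

m = binomial_cascade()
print(len(m), m.sum(), m.max(), m.min())
```

Total mass is conserved at every level, while the cell masses range from (1-p)^levels to p^levels; it is this spread of local scaling exponents that a multifractal fit quantifies.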

Constructing differential string structures 14:10 Wed 7 Jun, 2017 :: EM213 :: David Roberts :: University of Adelaide
String structures on a manifold are analogous to spin structures, except instead of lifting the structure group through the extension Spin(n) \to SO(n) of Lie groups, we need to lift through the extension String(n) \to Spin(n) of Lie *2-groups*. Such a thing exists if the first fractional Pontryagin class (1/2)p_1 vanishes in cohomology. A differential string structure also lifts connection data, but this is rather complicated, involving a number of locally defined differential forms satisfying cocycle-like conditions. This is an expansion of the geometric string structures of Stolz and Redden, which is, for a given connection A, merely a 3-form R on the frame bundle such that dR = tr(F^2) for F the curvature of A; in other words, a trivialisation of the de Rham class of (1/2)p_1. I will present work in progress on a framework (and specific results) that allows explicit calculation of the differential string structure for a large class of homogeneous spaces, which also yields formulas for the Stolz-Redden form. I will comment on the application to verifying the refined Stolz conjecture for our particular class of homogeneous spaces. Joint work with Ray Vozzo. 

Complex methods in real integral geometry 12:10 Fri 28 Jul, 2017 :: Engineering Sth S111 :: Mike Eastwood :: University of Adelaide
There are well-known analogies between holomorphic integral transforms such as the Penrose transform and real integral transforms such as the Radon, Funk, and John transforms. In fact, one can make a precise connection between them and hence use complex methods to establish results in the real setting. This talk will introduce some simple integral transforms and indicate how complex analysis may be applied. 

In space there is no-one to hear you scream 12:10 Tue 12 Sep, 2017 :: Ingkarni Wardli 5.57 :: A/Prof Gary Glonek :: School of Mathematical Sciences
Modern data problems often involve data in very high dimensions. For example, gene expression profiles, used to develop cancer screening models, typically have at least 30,000 dimensions. When dealing with such data, it is natural to apply intuition from low-dimensional cases. For example, in a sample of normal observations, a typical data point will be near the centre of the distribution, with only a small number of points at the edges.
In this talk, simple probability theory will be used to show that the geometry of data in high-dimensional space is very different from what we can see in one- and two-dimensional examples. We will show that the typical data point is at the edge of the distribution, a long way from its centre and even further from any other points. 
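The phenomenon described in this abstract is easy to demonstrate by simulation. The numpy sketch below is my own illustration (not the speaker's material): for standard normal samples, a typical point sits at distance roughly sqrt(d) from the centre, and in high dimensions even its nearest neighbour is further away than the centre is.

```python
import numpy as np

def distance_summary(dim, n_points=500, seed=0):
    """Mean distance to the centre, and mean distance to the nearest
    neighbour, for points drawn from a standard normal in `dim` dimensions."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_points, dim))
    norms = np.linalg.norm(x, axis=1)            # distances from the centre
    g = x @ x.T                                  # Gram matrix for pairwise distances
    sq = np.diag(g)
    d2 = np.clip(sq[:, None] + sq[None, :] - 2.0 * g, 0.0, None)
    np.fill_diagonal(d2, np.inf)                 # exclude self-distance
    return norms.mean(), np.sqrt(d2.min(axis=1)).mean()

for dim in (2, 100, 10000):
    centre, neighbour = distance_summary(dim)
    print(f"dim={dim}: centre {centre:.1f}, nearest neighbour {neighbour:.1f}")
```

In dimension 2 the nearest neighbour is much closer than the centre; by dimension 100 the ordering has reversed, exactly the "edge of the distribution" effect the talk describes.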

On directions and operators 11:10 Wed 27 Sep, 2017 :: Engineering & Math EM213 :: Malabika Pramanik :: University of British Columbia
Many fundamental operators arising in harmonic analysis are governed by sets of directions that they are naturally associated with. This talk will survey a few representative results in this area, and report on some new developments. 

Understanding burn injuries and first aid treatment using simple mathematical models 15:10 Fri 13 Oct, 2017 :: Ingkarni Wardli B17 :: Prof Mat Simpson :: Queensland University of Technology
Scald burns from accidental exposure to hot liquids are the most common cause of burn injury in children. Over 2000 children are treated for accidental burn injuries in Australia each year. Despite the frequency of these injuries, basic questions about the physics of heat transfer in living tissues remain unanswered. For example, skin thickness varies with age and anatomical location, yet our understanding of how tissue damage from thermal injury is influenced by skin thickness is surprisingly limited. In this presentation we will consider a series of porcine experiments to study heat transfer in living tissues. We consider burning the living tissue, as well as applying various first aid treatment strategies to cool the living tissue after injury. By calibrating solutions of simple mathematical models to match the experimental data we provide insight into how thermal energy propagates through living tissues, as well as exploring different first aid strategies. We conclude by outlining some of our current work that aims to produce more realistic mathematical models. 
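A "simple mathematical model" of the kind mentioned above is the one-dimensional heat equation through a skin layer. The sketch below is my own toy with invented parameter values (grid size, diffusivity, temperatures), not the speaker's calibrated model; it solves u_t = alpha u_xx with an explicit finite-difference scheme.

```python
import numpy as np

def skin_temperature(n=50, dx=1e-4, alpha=1e-7, t_end=5.0,
                     t_surface=60.0, t_body=37.0):
    """Explicit finite-difference solution of the 1-D heat equation across
    a skin layer of n grid points, with the scalded surface held at
    t_surface and the deep tissue at t_body. Toy parameter values."""
    r = 0.4                               # r = alpha*dt/dx^2 <= 0.5 for stability
    dt = r * dx * dx / alpha
    u = np.full(n, t_body)
    u[0] = t_surface                      # hot liquid at the surface
    for _ in range(int(t_end / dt)):
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        u[0], u[-1] = t_surface, t_body   # fixed boundary conditions
    return u

profile = skin_temperature()
print(profile[:5].round(1))               # temperatures near the surface
```

Calibrating such a model to experimental data means tuning alpha (and the boundary conditions, e.g. a cooled surface for first aid) until the computed profile matches measured tissue temperatures.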

The Markovian binary tree applied to demography and conservation biology 15:10 Fri 27 Oct, 2017 :: Ingkarni Wardli B17 :: Dr Sophie Hautphenne :: University of Melbourne
Markovian binary trees form a general and tractable class of continuous-time branching processes, which makes them well-suited for real-world applications. Thanks to their appealing probabilistic and computational features, these processes have proven to be an excellent modelling tool for applications in population biology. Typical performance measures of these models include the extinction probability of a population, the distribution of the population size at a given time, the total progeny size until extinction, and the asymptotic population composition. Besides giving an overview of the main performance measures and the techniques involved to compute them, we discuss recently developed statistical methods to estimate the model parameters, depending on the accuracy of the available data. We illustrate our results in human demography and in conservation biology. 
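For the simplest single-phase case, the extinction probability mentioned above is the minimal fixed point of a quadratic equation, and functional iteration finds it. The snippet below is a scalar caricature (with invented rates) of the matrix fixed-point algorithms used for Markovian binary trees, not the speaker's method:

```python
def extinction_probability(birth_rate, death_rate, tol=1e-12, max_iter=10000):
    """Extinction probability of a linear birth-death process, computed as
    the minimal fixed point of q = (d + b*q^2) / (b + d) by functional
    iteration from q = 0 (a scalar analogue of the matrix iterations used
    for Markovian binary trees)."""
    b, d = birth_rate, death_rate
    q = 0.0
    for _ in range(max_iter):
        q_new = (d + b * q * q) / (b + d)   # condition on the first event
        if abs(q_new - q) < tol:
            return q_new
        q = q_new
    return q

# Supercritical (b > d): extinction probability d/b; subcritical: 1.
print(extinction_probability(2.0, 1.0), extinction_probability(1.0, 2.0))
```

Starting the iteration from 0 guarantees convergence to the *minimal* nonnegative solution, which is the probabilistically correct root.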

Calculating optimal limits for transacting credit card customers 15:10 Fri 2 Mar, 2018 :: Horace Lamb 1022 :: Prof Peter Taylor :: University of Melbourne
Credit card users can roughly be divided into `transactors', who pay off their balance each month, and `revolvers', who maintain an outstanding balance, on which they pay substantial interest.
In this talk, we focus on modelling the behaviour of an individual transactor customer. Our motivation is to calculate an optimal credit limit from the bank's point of view. This requires an expression for the expected outstanding balance at the end of a payment period.
We establish a connection with the classical newsvendor model. Furthermore, we derive the Laplace transform of the outstanding balance, assuming that purchases are made according to a marked point process and that there is a simplified balance control policy which prevents all purchases in the rest of the payment period when the credit limit is exceeded. We then use the newsvendor model and our modified model to calculate bounds on the optimal credit limit for the more realistic balance control policy that accepts all purchases that do not exceed the limit.
We illustrate our analysis using a compound Poisson process example and show that the optimal limit scales with the distribution of the purchasing process, while the probability of exceeding the optimal limit remains constant.
Finally, we apply our model to some real credit card purchase data. 
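The compound Poisson setting with the balance control policy can be illustrated by direct Monte Carlo. This is a rough toy with invented parameters (Poisson purchase rate, exponential purchase amounts), not the authors' Laplace-transform analysis:

```python
import random

def expected_balance(limit, rate, mean_purchase, horizon=1.0,
                     n_sims=2000, seed=0):
    """Monte Carlo estimate of a transactor's expected end-of-period
    balance when purchase epochs form a Poisson process of the given
    rate, amounts are exponential with mean mean_purchase, and any
    purchase that would push the balance over `limit` is declined.
    Illustrative sketch; parameters are invented."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        t, balance = 0.0, 0.0
        while True:
            t += rng.expovariate(rate)       # next purchase epoch
            if t > horizon:
                break
            amount = rng.expovariate(1.0 / mean_purchase)
            if balance + amount <= limit:    # balance control policy
                balance += amount
        total += balance
    return total / n_sims

# With an effectively unlimited card the mean balance approaches
# rate * horizon * mean_purchase.
print(expected_balance(limit=1e9, rate=5.0, mean_purchase=10.0))
```

Sweeping `limit` over a grid of values and trading the expected balance off against the probability of declined purchases gives a crude numerical counterpart of the optimal-limit calculation in the talk.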

Radial Toeplitz operators on bounded symmetric domains 11:10 Fri 9 Mar, 2018 :: Lower Napier LG11 :: Raul Quiroga-Barranco :: CIMAT, Guanajuato, Mexico
The Bergman space on a complex domain is defined as the space of square-integrable holomorphic functions on the domain. In the case of bounded symmetric domains, these spaces carry interesting structures both for analysis and representation theory. On the other hand, these spaces admit bounded operators obtained as the composition of a multiplier operator and a projection. These operators are highly non-commuting with each other. However, there exist large commutative C*-algebras generated by some of these Toeplitz operators, very much related to Lie groups. I will construct an example of such C*-algebras and provide a fairly explicit simultaneous diagonalization of the generating Toeplitz operators. 

Models, machine learning, and robotics: understanding biological networks 15:10 Fri 16 Mar, 2018 :: Horace Lamb 1022 :: Prof Steve Oliver :: University of Cambridge
The availability of complete genome sequences has enabled the construction of computer models of metabolic networks that may be used to predict the impact of genetic mutations on growth and survival. Both logical and constraint-based models of the metabolic network of the model eukaryote, the ale yeast Saccharomyces cerevisiae, have been available for some time and are continually being improved by the research community. While such models are very successful at predicting the impact of deleting single genes, the prediction of the impact of higher-order genetic interactions is a greater challenge. Initial studies of limited gene sets provided encouraging results. However, the availability of comprehensive experimental data for the interactions between genes involved in metabolism demonstrated that, while the models were able to predict the general properties of the genetic interaction network, their ability to predict interactions between specific pairs of metabolic genes was poor. I will examine the reasons for this poor performance and demonstrate ways of improving the accuracy of the models by exploiting the techniques of machine learning and robotics.
The utility of these metabolic models rests on the firm foundations of genome sequencing data. However, there are two major problems with these kinds of network models: they have no dynamics, and they do not deal with the uncertain and incomplete nature of much biological data. To deal with these problems, we have developed the Flexible Nets (FNs) modelling formalism. FNs were inspired by Petri Nets and can deal with missing or uncertain data, incorporate both dynamics and regulation, and also have the potential for model predictive control of biotechnological processes.

News matching "Analysis of categorical data" 
ARC Grant successes The School of Mathematical Sciences has again had outstanding success in the ARC Discovery and Linkage Projects schemes.
Congratulations to the following staff for their success in the Discovery Project scheme:
Prof Nigel Bean, Dr Josh Ross, Prof Phil Pollett, Prof Peter Taylor, New methods for improving active adaptive management in biological systems, $255,000 over 3 years;
Dr Josh Ross, New methods for integrating population structure and stochasticity into models of disease dynamics, $248,000 over three years;
A/Prof Matt Roughan, Dr Walter Willinger, Internet traffic-matrix synthesis, $290,000 over three years;
Prof Patricia Solomon, A/Prof John Moran, Statistical methods for the analysis of critical care data, with application to the Australian and New Zealand Intensive Care Database, $310,000 over 3 years;
Prof Mathai Varghese, Prof Peter Bouwknegt, Supersymmetric quantum field theory, topology and duality, $375,000 over 3 years;
Prof Peter Taylor, Prof Nigel Bean, Dr Sophie Hautphenne, Dr Mark Fackrell, Dr Malgorzata O'Reilly, Prof Guy Latouche, Advanced matrix-analytic methods with applications, $600,000 over 3 years.
Congratulations to the following staff for their success in the Linkage Project scheme:
Prof Simon Beecham, Prof Lee White, A/Prof John Boland, Prof Phil Howlett, Dr Yvonne Stokes, Mr John Wells, Paving the way: an experimental approach to the mathematical modelling and design of permeable pavements, $370,000 over 3 years;
Dr Amie Albrecht, Prof Phil Howlett, Dr Andrew Metcalfe, Dr Peter Pudney, Prof Roderick Smith, Saving energy on trains: demonstration, evaluation, integration, $540,000 over 3 years.
Posted Fri 29 Oct 10. 

New Fellow of the Australian Academy of Science Professor Mathai Varghese, Professor of Pure Mathematics and ARC Professorial Fellow within the School of Mathematical Sciences, was elected to the Australian Academy of Science. Professor Varghese's citation read "for his distinguished work in geometric analysis involving the topology of manifolds, including the Mathai-Quillen formalism in topological field theory". Posted Tue 30 Nov 10. 

ARC Grant Success Congratulations to the following staff who were successful in securing funding from the Australian Research Council Discovery Projects Scheme. Associate Professor Finnur Larusson, awarded $270,000 for his project Flexibility and symmetry in complex geometry; Dr Thomas Leistner, awarded $303,464 for his project Holonomy groups in Lorentzian geometry; Professor Michael Murray and Dr Daniel Stevenson (Glasgow), awarded $270,000 for their project Bundle gerbes: generalisations and applications; Professor Mathai Varghese, awarded $105,000 for his project Advances in index theory; and Professor Anthony Roberts and Professor Ioannis Kevrekidis (Princeton), awarded $330,000 for their project Accurate modelling of large multiscale dynamical systems for engineering and scientific simulation and analysis. Posted Tue 8 Nov 11. 

Elder Professor Mathai Varghese Awarded Australian Laureate Fellowship Professor Mathai Varghese, Elder Professor of Mathematics in the School of Mathematical Sciences, has been awarded an Australian Laureate Fellowship worth $1.64 million to advance Index Theory and its applications. The project is expected to enhance Australia's position at the forefront of international research in geometric analysis. Posted Thu 15 Jun 17. 

10/07/05  Dixmier traces as singular symmetric functionals and applications to measurable operators Lord, Steven; Sedaev, A; Sukochev, F, Journal of Functional Analysis 224 (72–106) 2005  Filtering, smoothing and Mary detection with discrete time poisson observations Elliott, Robert; Malcolm, William; Aggoun, L, Stochastic Analysis and Applications 23 (939–952) 2005  Finitedimensional filtering and control for continuoustime nonlinear systems Elliott, Robert; Aggoun, L; Benmerzouga, A, Stochastic Analysis and Applications 22 (499–505) 2005  Nonlinear analysis of rubberbased polymeric materials with thermal relaxation models Melnik, R; Strunin, D; Roberts, Anthony John, Numerical Heat Transfer Part AApplications 47 (549–569) 2005  Smoothly parameterized Cech cohomology of complex manifolds Bailey, T; Eastwood, Michael; Gindikin, S, Journal of Geometric Analysis 15 (9–23) 2005  Combining routing and traffic data for detection of IP forwarding anomalies Roughan, Matthew; Griffin, T; Mao, M; Greenberg, A; Freeman, B, Sigmetrics  Performance 2004, New York, USA 12/06/04  IP forwarding anomalies and improving their detection using multiple data sources Roughan, Matthew; Griffin, T; Mao, M; Greenberg, A; Freeman, B, SIGCOMM 2004, Oregon, USA 30/08/04  A deterministic discretisationstep upper bound for state estimation via Clark transformations Malcolm, William; Elliott, Robert; Van Der Hoek, John, J.A.M.S.A. 
Journal of Applied Mathematics and Stochastic Analysis 2004 (371–384) 2004  A sufficient condition for the uniform exponential stability of timevarying systems with noise Grammel, G; Maizurna, Isna, Nonlinear AnalysisTheory Methods & Applications 56 (951–960) 2004  Gerbes, Clifford Modules and the index theorem Murray, Michael; Singer, Michael, Annals of Global Analysis and Geometry 26 (355–367) 2004  Reactions to genetically modified food crops and how perception of risks and benefits influences consumers' information gathering Wilson, Carlene; Evans, G; Leppard, Phillip; Syrette, J, Risk Analysis 24 (1311–1321) 2004  The data processing inequality and stochastic resonance McDonnell, Mark; Stocks, N; Pearce, Charles; Abbott, Derek, Noise in Complex Systems and Stochastic Dynamics, Santa Fe, New Mexico, USA 01/06/03  A dualreciprocity boundary element method for a class of elliptic boundary value problems for nonhomogenous anisotropic media Ang, W; Clements, David; Vahdati, N, Engineering Analysis With Boundary Elements 27 (49–55) 2003  Compact Khler surfaces with trivial canonical bundle Buchdahl, Nicholas, Annals of Global Analysis and Geometry 23 (189–204) 2003  Complex analysis and the Funk transform Bailey, T; Eastwood, Michael; Gover, A; Mason, L, Journal of the Korean Mathematical Society 40 (577–593) 2003  Exponential stability and partial averaging Grammel, G; Maizurna, Isna, Journal of Mathematical Analysis and Applications 283 (276–286) 2003  Hyperbolic monopoles and holomorphic spheres Murray, Michael; Norbury, Paul; Singer, Michael, Annals of Global Analysis and Geometry 23 (101–128) 2003  Method of best successive approximations for nonlinear operators Torokhti, Anatoli; Howlett, P; Pearce, Charles, Journal of Computational Analysis and Applications 5 (299–312) 2003  On nonlinear operator approximation with preassigned accuracy Howlett, P; Pearce, Charles; Torokhti, Anatoli, Journal of Computational Analysis and Applications 5 (273–297) 2003  Rumours, 
epidemics, and processes of mass action: Synthesis and analysis Dickinson, Rowland; Pearce, Charles, Mathematical and Computer Modelling 38 (1157–1167) 2003  Stochastic resonance and data processing inequality McDonnell, Mark; Stocks, N; Pearce, Charles; Abbott, Derek, Electronics Letters 39 (1287–1288) 2003  Resamplingbased multiple testing for microarray data analysis (Invited discussion of paper by Ge, Dudoit and Speed) Glonek, Garique; Solomon, Patricia, Test 12 (50–53) 2003  An analysis of noise enhanced information transmission in an array of comparators McDonnell, Mark; Abbott, Derek; Pearce, Charles, Microelectronics Journal 33 (1079–1089) 2002  Approximating spectral invariants of Harper operators on graphs Varghese, Mathai; Yates, Stuart, Journal of Functional Analysis 188 (111–136) 2002  Portfolio optimization, hidden Markov models, and technical analysis of P&Fcharts Elliott, Robert; Hinz, J, International Journal of Theoretical and Applied Finance 5 (385–399) 2002  An edgeofthewedge theorum for hypersurface CR functions Eastwood, Michael; Graham, C, Journal of Geometric Analysis 11 (589–602) 2001  Csiszr fdivergence, Ostrowski's inequality and mutual information Dragomir, S; Gluscevic, Vido; Pearce, Charles, Nonlinear AnalysisTheory Methods & Applications 47 (2375–2386) 2001  Equivariant SeibergWitten Floer homology Marcolli, M; Wang, BaiLing, Communications in Analysis and Geometry 9 (451–639) 2001  On bestapproximation problems for nonlinear operators Howlett, P; Pearce, Charles; Torokhti, Anatoli, Nonlinear Functional Analysis and Applications 6 (351–368) 2001  On the extended reversed Meir inequality Guljas, B; Pearce, Charles; Pecaric, Josip, Journal of Computational Analysis and Applications 3 (243–247) 2001  The Mx/G/1 queue with queue length dependent service times Choi, B; Kim, Y; Shin, Y; Pearce, Charles, J.A.M.S.A. 
Journal of Applied Mathematics and Stochastic Analysis 14 (399–419) 2001  The modelling and numerical simulation of causal nonlinear systems Howlett, P; Torokhti, Anatoli; Pearce, Charles, Nonlinear AnalysisTheory Methods & Applications 47 (5559–5572) 2001  Best estimators of second degree for data analysis Howlett, P; Pearce, Charles; Torokhti, Anatoli, ASMDA 2001, Compiegne, France 12/06/01  Optimal successive estimation of observed data Torokhti, Anatoli; Howlett, P; Pearce, Charles, International Conference on Optimization: Techniques and Applications (5th: 2001), Hong Kong, China 15/12/01  A continuous time kronecker's lemma and martingale convergence Elliott, Robert, Stochastic Analysis and Applications 19 (433–437) 2001  Statistical analysis of medical data: New developments  Book review Solomon, Patricia, Biometrics 57 (327–328) 2001  Metaanalysis, overviews and publication bias Solomon, Patricia; Hutton, Jonathon, Statistical Methods in Medical Research 10 (245–250) 2001  Spectral analysis of heart sounds and vibration analysis of heart valves Mazumdar, Jagan, EMAC 2000, RMIT University, Melbourne, Australia 10/09/00  A martingale analysis of hysteretic overload control Roughan, Matthew; Pearce, Charles, Advances in Performance Analysis 3 (1–30) 2000  A note on higher cohomology groups of Khler quotients Wu, Siye, Annals of Global Analysis and Geometry 18 (569–576) 2000  Disease surveillance and data collection issues in epidemic modelling Solomon, Patricia; Isham, V, Statistical Methods in Medical Research 9 (259–277) 2000  Local Constraints on EinsteinWeyl geometries: The 3dimensional case Eastwood, Michael; Tod, K, Annals of Global Analysis and Geometry 18 (1–27) 2000  On Anastassiou's generalizations of the Ostrowski inequality and related results Pearce, Charles; Pecaric, Josip, Journal of Computational Analysis and Applications 2 (215–276) 2000 
Advanced search options
You may be able to improve your search results by using the following syntax:
Query  Matches the following 

Asymptotic Equation  Anything with "Asymptotic" or "Equation". 
+Asymptotic +Equation  Anything with "Asymptotic" and "Equation". 
+Stokes -"Navier-Stokes"  Anything containing "Stokes" but not "Navier-Stokes". 
Dynam*  Anything containing "Dynamic", "Dynamical", "Dynamicist" etc. 
