
Courses matching "Statistical computing" 
Advanced statistical inference We begin with modern and classical statistical inference and cover cumulants, the cumulant
generating function, natural exponential family models, minimal sufficient statistics, completeness,
and generalised linear models. We then consider conditional and marginal inference including the
concept of ancillary statistics, marginal likelihood and conditional inference. Chapter 2 is about model
choice, in particular Akaike's Information Criterion (AIC), Network Information Criterion (NIC), and
cross-validation (CV). We will explore the theoretical basis of AIC via model misspecification and the
Kullback-Leibler distance. Chapter 3 is devoted to bootstrap methods for assessing statistical
accuracy; we will focus on bootstrap estimation and confidence intervals, and consider the jackknife
and its relationship to the bootstrap. Chapter 4 is on the analysis of missing data; we will study the
different types of missingness and the Expectation-Maximisation (EM) algorithm in particular. Chapter
5 is about survival analysis, and we will cover the Kaplan-Meier estimator, parametric survival models,
and the semiparametric proportional hazards model.
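Chapter 3's percentile bootstrap interval, for instance, can be sketched in a few lines. This is an illustrative sketch only, not course material; the function name and the data are hypothetical.

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic:
    resample with replacement, recompute the statistic, take quantiles."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

sample = [4.1, 5.3, 3.8, 6.0, 5.5, 4.9, 5.1, 4.4, 5.8, 4.7]
ci = bootstrap_ci(sample)
```

The jackknife mentioned alongside it differs only in the resampling scheme: leave one observation out at a time instead of resampling with replacement.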

Mathematical epidemiology: Stochastic models and their statistical calibration Mathematical models are increasingly used to inform governmental policymakers on issues that
threaten human health or which have an adverse impact on the economy. It is this real-world success
combined with the wide variety of interesting mathematical problems which arise that makes
mathematical epidemiology one of the most exciting topics in applied mathematics. During the
summer school, you will be introduced to mathematical epidemiology and some fundamental theory
required for studying and parametrising stochastic models of infection dynamics, which will provide an
ideal basis for addressing key research questions in this area; several such questions will be
introduced and explored in this course. Topics:
An introduction to mathematical epidemiology
Discrete-time and continuous-time discrete-state stochastic infection models
Numerical methods for studying stochastic infection models: EXPOKIT, transforms and their inversion
Methods for simulating stochastic infection models: classical (Gillespie) algorithm, more efficient exact
and approximate algorithms
Methods for parameterising stochastic infection models: frequentist approaches, Bayesian
approaches, approximate Bayesian computation
Optimal observation of stochastic infection models
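The classical (Gillespie) algorithm listed above can be sketched for a simple stochastic SIR infection model: draw an exponential waiting time from the total event rate, then pick infection or recovery in proportion to their rates. A minimal illustration; the parameter values are arbitrary.

```python
import random

def gillespie_sir(beta, gamma, s0, i0, t_max, seed=1):
    """Exact (Gillespie) simulation of a stochastic SIR epidemic."""
    rng = random.Random(seed)
    s, i, t = s0, i0, 0.0
    n = s0 + i0
    history = [(t, s, i)]
    while i > 0 and t < t_max:
        rate_inf = beta * s * i / n   # infection rate
        rate_rec = gamma * i          # recovery rate
        total = rate_inf + rate_rec
        t += rng.expovariate(total)   # exponential waiting time
        if rng.random() < rate_inf / total:
            s, i = s - 1, i + 1       # infection event
        else:
            i -= 1                    # recovery event
        history.append((t, s, i))
    return history

hist = gillespie_sir(beta=1.5, gamma=1.0, s0=99, i0=1, t_max=100.0)
```

The more efficient exact and approximate algorithms mentioned above (e.g. tau-leaping) trade this event-by-event exactness for speed.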

Statistical Analysis and Modelling 1 This is a first course in Statistics for mathematically inclined students. It will address the key principles underlying commonly used statistical methods such as confidence intervals, hypothesis tests, inference for means and proportions, and linear regression. It will develop a deeper mathematical understanding of these ideas, many of which will be familiar from studies in secondary school. The application of basic and more advanced statistical methods will be illustrated on a range of problems from areas such as medicine, science, technology, government, commerce and manufacturing. The use of the statistical package SPSS will be developed through a sequence of computer practicals. Topics covered will include: basic probability and random variables, fundamental distributions, inference for means and proportions, comparison of independent and paired samples, simple linear regression, diagnostics and model checking, multiple linear regression, simple factorial models, models with factors and continuous predictors.
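As one concrete instance of the confidence intervals mentioned above, a large-sample interval for a mean can be computed directly. This is a hypothetical illustration in Python rather than the SPSS used in the course.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def mean_ci(data, confidence=0.95):
    """Large-sample (normal-approximation) confidence interval for a mean."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # about 1.96 for 95%
    m = mean(data)
    se = stdev(data) / sqrt(len(data))
    return m - z * se, m + z * se

lo, hi = mean_ci([4, 5, 6, 5, 4, 6, 5, 5])
```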

Statistical Modelling and Inference Statistical methods are important to all areas that rely on data, including science, technology, government and commerce. Dealing with the complex problems that arise in practice requires a sound understanding of fundamental statistical principles together with a range of suitable modelling techniques. Computing with a high-level statistical package is also an essential element of modern statistical practice. This course provides an introduction to the principles of statistical inference and the development of linear statistical models with the statistical package R. Topics covered are: point estimates, unbiasedness, mean-squared error, confidence intervals, tests of hypotheses, power calculations, derivation of one- and two-sample procedures; simple linear regression, regression diagnostics, prediction; linear models, analysis of variance (ANOVA), multiple regression, factorial experiments, analysis of covariance models, model building; likelihood-based methods for estimation and testing, goodness-of-fit tests; sample surveys, population means, totals and proportions, simple random samples, stratified random samples.

Statistical Modelling III One of the key requirements of an applied statistician is the ability to formulate appropriate statistical models and then apply them to data in order to answer the questions of interest. Most often, such models can be seen as relating a response variable to one or more explanatory variables. For example, in a medical experiment we may seek to evaluate a new treatment by relating patient outcome to treatment received while allowing for background variables such as age, sex and disease severity. In this course, a rigorous discussion of the linear model is given and various extensions are developed. There is a strong practical emphasis and the statistical package R is used extensively. Topics covered are: the linear model, least squares estimation, generalised least squares estimation, properties of estimators, the Gauss-Markov theorem; geometry of least squares, subspace formulation of linear models, orthogonal projections; regression models, factorial experiments, analysis of covariance and model formulae; regression diagnostics, residuals, influence diagnostics, transformations, Box-Cox models, model selection and model building strategies; models with complex error structure, split-plot experiments; logistic regression models.
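In the simple linear case, the least squares estimation at the heart of the course reduces to two closed-form formulae. A hypothetical sketch in Python (the course itself uses R):

```python
def ols_simple(x, y):
    """Least squares estimates (a, b) for the simple linear model y = a + b*x."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sxy / sxx          # slope: Sxy / Sxx
    a = ybar - b * xbar    # intercept: line passes through (xbar, ybar)
    return a, b

a, b = ols_simple([0, 1, 2, 3], [2, 5, 8, 11])  # data lying exactly on y = 2 + 3x
```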

Statistical Practice I Statistical ideas and methods are essential tools in virtually all areas that rely on data to make decisions and reach conclusions. This includes diverse fields such as medicine, science, technology, government, commerce and manufacturing. In broad terms, statistics is about getting information from data. This includes both the important question of how to obtain suitable data for a given purpose and also how best to extract the information, often in the presence of random variability. This course provides an introduction to the contemporary application of statistics to a wide range of real-world situations. It has a strong practical focus using the statistical package SPSS to analyse real data. Topics covered are: organisation, description and presentation of data; design of experiments and surveys; random variables, probability distributions, the binomial distribution and the normal distribution; statistical inference, tests of significance, confidence intervals; inference for means and proportions, one-sample tests, two independent samples, paired data, t-tests, contingency tables; analysis of variance; linear regression, least squares estimation, residuals and transformations, inference for regression coefficients, prediction.
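As a small illustration of the two-independent-samples comparison above, Welch's two-sample t statistic can be computed directly. A hypothetical Python sketch (the course uses SPSS):

```python
from math import sqrt
from statistics import mean, variance

def two_sample_t(x, y):
    """Welch's two-sample t statistic (allows unequal variances)."""
    se = sqrt(variance(x) / len(x) + variance(y) / len(y))
    return (mean(x) - mean(y)) / se

t = two_sample_t([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
```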

Statistical Practice I (Life Sciences) Statistical ideas and methods are essential tools in virtually all areas that rely on data to make decisions and reach conclusions. This includes diverse fields such as science, technology, government, commerce, manufacturing and the life sciences. In broad terms, statistics is about getting information from data. This includes both the important question of how to obtain suitable data for a given purpose and also how best to extract the information, often in the presence of random variability. This course provides an introduction to the contemporary application of statistics to a range of real-world situations. It has a strong practical focus using the statistical package SPSS to analyse real data relevant to the life sciences. Topics covered are: organisation, description and presentation of data in the life sciences; design of experiments and surveys; random variables, probability distributions, the binomial distribution and the normal distribution; statistical inference, tests of significance, confidence intervals; inference for means and proportions, one-sample tests, two independent samples, paired data, t-tests, contingency tables; analysis of variance; linear regression, least squares estimation, residuals and transformations, inference for regression coefficients, prediction.

Statistical Practice I (Life Sciences) (Pre-Vet) Statistical ideas and methods are essential tools in virtually all areas that rely on data to make decisions and reach conclusions. This includes diverse fields such as science, technology, government, commerce, manufacturing and the life sciences. In broad terms, statistics is about getting information from data. This includes both the important question of how to obtain suitable data for a given purpose and also how best to extract the information, often in the presence of random variability. This course provides an introduction to the contemporary application of statistics to a range of real-world situations. It has a strong practical focus using the statistical package SPSS to analyse real data relevant to the life sciences. Topics covered are: organisation, description and presentation of data in the life sciences; design of experiments and surveys; random variables, probability distributions, the binomial distribution and the normal distribution; statistical inference, tests of significance, confidence intervals; inference for means and proportions, one-sample tests, two independent samples, paired data, t-tests, contingency tables; analysis of variance; linear regression, least squares estimation, residuals and transformations, inference for regression coefficients, prediction.
Events matching "Statistical computing" 
Statistical convergence of sequences of complex numbers with application to Fourier series 15:10 Tue 27 Mar, 2007 :: G08 Mathematics Building University of Adelaide :: Prof. Ferenc Móricz
The concept of statistical convergence was introduced by Henryk Fast and Hugo Steinhaus in 1951. But in fact, it was Antoni Zygmund who first proved theorems on the statistical convergence of Fourier series, using the term "almost convergence". A sequence $\{x_k : k=1,2,\ldots\}$ of complex numbers is said to be statistically convergent to $\xi$ if for every $\varepsilon > 0$ we have $$\lim_{n\to\infty} n^{-1} \left|\{1\le k\le n : |x_k - \xi| > \varepsilon\}\right| = 0.$$ We present the basic properties of statistical convergence, and extend it to multiple sequences. We also discuss the convergence behaviour of Fourier series. 
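The definition above can be checked numerically. For the hypothetical sequence $x_k = 1$ when $k$ is a perfect square and $0$ otherwise, the proportion of indices $k \le n$ violating $|x_k - 0| \le \varepsilon$ is $\lfloor\sqrt{n}\rfloor / n$, which tends to 0: the sequence is statistically convergent to 0 even though it does not converge in the ordinary sense.

```python
from math import isqrt

def exception_density(n, xi=0.0, eps=0.5):
    """Proportion of k <= n with |x_k - xi| > eps, for the 0/1 indicator
    sequence of perfect squares (x_k = 1 iff k is a perfect square)."""
    x = lambda k: 1.0 if isqrt(k) ** 2 == k else 0.0
    return sum(abs(x(k) - xi) > eps for k in range(1, n + 1)) / n
```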

Likelihood inference for a problem in particle physics 15:10 Fri 27 Jul, 2007 :: G04 Napier Building University of Adelaide :: Prof. Anthony Davison
The Large Hadron Collider (LHC), a particle accelerator located at CERN, near Geneva, is (currently!) expected to start operation in early 2008. It is located in an underground tunnel 27km in circumference, and when fully operational, will be the world's largest and highest energy particle accelerator. It is hoped that it will provide evidence for the existence of the Higgs boson, the last remaining particle of the so-called Standard Model of particle physics. The quantity of data that will be generated by the LHC is roughly equivalent to that of the European telecommunications network, but this will be boiled down to just a few numbers. After a brief introduction, this talk will outline elements of the statistical problem of detecting the presence of a particle, and then sketch how higher order likelihood asymptotics may be used for signal detection in this context. The work is joint with Nicola Sartori, of the Università Ca' Foscari, in Venice. 

Statistical Critique of the International Panel on Climate Change's work on Climate Change. 18:00 Wed 17 Oct, 2007 :: Union Hall University of Adelaide :: Mr Dennis Trewin
Climate change is one of the most important issues facing us today. Many governments have introduced or are developing appropriate policy interventions to (a) reduce the growth of greenhouse gas emissions in order to mitigate future climate change, or (b) adapt to future climate change.
This important work deserves a high-quality statistical data base, but there are statistical shortcomings in the work of the Intergovernmental Panel on Climate Change (IPCC). There has been very little involvement of qualified statisticians in the very important work of the IPCC, which appears to be scientifically meritorious in most other ways.
Mr Trewin will explain these shortcomings and outline his views on likely future climate change, taking into account the statistical deficiencies.
His conclusions suggest climate change is still an important issue that needs to be addressed but the range of likely outcomes is a lot lower than has been suggested by the IPCC.
This presentation will be based on an invited paper presented at the OECD World Forum.


Moderated Statistical Tests for Digital Gene Expression Technologies 15:10 Fri 19 Oct, 2007 :: G04 Napier Building University of Adelaide :: Dr Gordon Smyth :: Walter and Eliza Hall Institute of Medical Research in Melbourne, Australia
Digital gene expression (DGE) technologies measure gene expression by counting sequence tags. They are sensitive technologies for measuring gene expression on a genomic scale, without the need for prior knowledge of the genome sequence. As the cost of DNA sequencing decreases, the number of DGE datasets is expected to grow dramatically. Various tests of differential expression have been proposed for replicated DGE data using overdispersed binomial or Poisson models for the counts, but none of these are usable when the number of replicates is very small. We develop tests using the negative binomial distribution to model overdispersion relative to the Poisson, and use conditional weighted likelihood to moderate the level of overdispersion across genes. A heuristic empirical Bayes algorithm is developed which is applicable to very general likelihood estimation contexts. Not only is our strategy applicable even with the smallest number of replicates, but it also proves to be more powerful than previous strategies when more replicates are available. The methodology is applicable to other counting technologies, such as proteomic spectral counts.
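The key modelling idea, overdispersion relative to the Poisson, can be illustrated by simulating negative binomial counts as a gamma-mixed Poisson: the sample variance then clearly exceeds the sample mean (for a Poisson they would be equal). This is a hypothetical sketch of the distributional idea only, not the authors' testing method.

```python
import random
from math import exp
from statistics import mean, variance

def neg_binomial(mu, size, rng):
    """One negative binomial count via the gamma-Poisson mixture:
    lam ~ Gamma(shape=size, scale=mu/size), then count ~ Poisson(lam).
    Marginal mean is mu; marginal variance is mu + mu**2 / size."""
    lam = rng.gammavariate(size, mu / size)
    limit, k, p = exp(-lam), 0, 1.0   # Knuth's Poisson sampler
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
counts = [neg_binomial(mu=10.0, size=2.0, rng=rng) for _ in range(2000)]
```

With mu = 10 and size = 2 the theoretical variance is 10 + 100/2 = 60, six times the mean.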


Probabilistic models of human cognition 15:10 Fri 29 Aug, 2008 :: G03 Napier Building University of Adelaide :: Dr Daniel Navarro :: School of Psychology, University of Adelaide
Over the last 15 years a fairly substantial psychological literature has developed in which human reasoning and decision-making is viewed as the solution to a variety of statistical problems posed by the environments in which we operate. In this talk, I briefly outline the general approach to cognitive modelling that is adopted in this literature, which relies heavily on Bayesian statistics, and introduce a little of the current research in this field. In particular, I will discuss work by myself and others on the statistical basis of how people make simple inductive leaps and generalisations, and the links between these generalisations and how people acquire word meanings and learn new concepts. If time permits, the extensions of the work in which complex concepts may be characterised with the aid of nonparametric Bayesian tools such as Dirichlet processes will be briefly mentioned. 

Oceanographic Research at the South Australian Research and Development Institute: opportunities for collaborative research 15:10 Fri 21 Nov, 2008 :: Napier G04 :: Associate Prof John Middleton :: South Australian Research and Development Institute
Increasing threats to S.A.'s fisheries and marine environment have underlined the increasing need for soundly based research into the ocean circulation and ecosystems (phyto/zooplankton) of the shelf and gulfs. With the support of Marine Innovation SA, the Oceanography Program has, within 2 years, grown to include 6 FTEs and a budget of over $4.8M. The program currently leads two major research projects, both of which involve numerical and applied mathematical modelling of oceanic flow and ecosystems as well as statistical techniques for the analysis of data. The first is the implementation of the Southern Australian Integrated Marine Observing System (SAIMOS) that is providing data to understand the dynamics of shelf boundary currents, monitor for climate change and understand the phyto/zooplankton ecosystems that underpin SA's wild fisheries and aquaculture. SAIMOS involves the use of ship-based sampling, the deployment of underwater marine moorings, underwater gliders, HF Ocean RADAR, acoustic tracking of tagged fish and autonomous underwater vehicles.
The second major project involves measuring and modelling the ocean circulation and biological systems within Spencer Gulf and the impact on prawn larval dispersal and on the sustainability of existing and proposed aquaculture sites. The discussion will focus on opportunities for collaborative research with both faculty and students in this exciting growth area of S.A. science.


Statistical analysis for harmonized development of systemic organs in human fetuses 11:00 Thu 17 Sep, 2009 :: School Board Room :: Prof Kanta Naito :: Shimane University
The growth processes of human babies have been studied extensively, but
many issues concerning the development of the human fetus remain
unresolved. The aim of this research is to investigate the developing
process of systemic organs of human fetuses based on a data set of
measurements of fetuses' bodies and organs. Specifically, this talk is
concerned with giving a mathematical understanding of the harmonized
development of the organs of human fetuses. A method to evaluate such
harmony is proposed, based on the maximal dilatation arising in the theory
of quasiconformal mappings. 

Stable commutator length 13:40 Fri 25 Sep, 2009 :: Napier 102 :: Prof Danny Calegari :: California Institute of Technology
Stable commutator length answers the question: "what is the simplest
surface in a given space with prescribed boundary?" where "simplest"
is interpreted in topological terms. This topological definition is
complemented by several equivalent definitions: in group theory, as a
measure of the non-commutativity of a group; and in linear programming, as
the solution of a certain linear optimization problem. On the
topological side, scl is concerned with questions such as computing
the genus of a knot, or finding the simplest 4manifold that bounds a
given 3manifold. On the linear programming side, scl is measured in
terms of certain functions called quasimorphisms, which arise from
hyperbolic geometry (negative curvature) and symplectic geometry
(causal structures). In these talks we will discuss how scl in free
and surface groups is connected to such diverse phenomena as the
existence of closed surface subgroups in graphs of groups, rigidity
and discreteness of symplectic representations, bounding immersed
curves on a surface by immersed subsurfaces, and the theory of
multidimensional continued fractions and Klein polyhedra.
Danny Calegari is the Richard Merkin Professor of Mathematics at the California Institute of Technology, and is one of the recipients of the 2009 Clay Research Award for his work in geometric topology and geometric group theory. He received a B.A. in 1994 from the University of Melbourne, and a Ph.D. in 2000 from the University of California, Berkeley under the joint supervision of Andrew Casson and William Thurston. From 2000 to 2002 he was Benjamin Peirce Assistant Professor at Harvard University, after which he joined the Caltech faculty; he became Richard Merkin Professor in 2007.


Contemporary frontiers in statistics 15:10 Mon 28 Sep, 2009 :: Badger Labs G31 Macbeth Lecture :: Prof. Peter Hall :: University of Melbourne
The availability of powerful computing equipment has had a dramatic impact on statistical methods and thinking, changing forever the way data are analysed. New data types, larger quantities of data, and new classes of research problem are all motivating new statistical methods. We shall give examples of each of these issues, and discuss the current and future directions of frontier problems in statistics. 

Manifold destiny: a talk on water, fire and life 15:10 Fri 6 Nov, 2009 :: MacBeth Lecture Theatre :: Dr Sanjeeva Balasuriya :: University of Adelaide
Manifolds are important entities in dynamical systems, and organise space
into regions in which different motions occur. For example, intersections
between stable and unstable manifolds in discrete systems result in
chaotic motion. This talk will focus on manifolds and their locations in
continuous dynamical systems, and in particular on Melnikov's method and its adaptations for determining the effect of perturbations on manifolds.
The relevance of such adaptations to a surprising range of applications will be shown, in addition to recent theoretical developments inspired by such problems. The applications addressed in this talk include understanding the motion of fluid near oceanic eddies and currents, optimising mixing in nanofluidic devices in order to improve reactions, computing the speed of a flame front, and finding the spreading rate of bacterial colonies. 

Exploratory experimentation and computation 15:10 Fri 16 Apr, 2010 :: Napier LG29 :: Prof Jonathan Borwein :: University of Newcastle
The mathematical research community is facing a great challenge to re-evaluate the role of proof in light of the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet. Add to that the enormous complexity of many modern capstone results such as the Poincaré conjecture, Fermat's last theorem, and the classification of finite simple groups. As the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished. I shall look at the philosophical context with examples and then offer five benchmarking examples of the opportunities and challenges we face. 

The mathematics of theoretical inference in cognitive psychology 15:10 Fri 11 Jun, 2010 :: Napier LG24 :: Prof John Dunn :: University of Adelaide
The aim of psychology in general, and of cognitive psychology in particular, is to construct theoretical accounts of mental processes based on observed changes in performance on one or more cognitive tasks. The fundamental problem faced by the researcher is that these mental processes are not directly observable but must be inferred from changes in performance between different experimental conditions. This inference is further complicated by the fact that performance measures may only be monotonically related to the underlying psychological constructs. State-trace analysis provides an approach to this problem which has gained increasing interest in recent years. In this talk, I explain state-trace analysis and discuss the set of mathematical issues that flow from it. Principal among these are the challenges of statistical inference and an unexpected connection to the mathematics of oriented matroids. 

Mathematica Seminar 15:10 Wed 28 Jul, 2010 :: Engineering Annex 314 :: Kim Schriefer :: Wolfram Research
The Mathematica Seminars 2010 offer an opportunity to experience the applicability and ease of use, as well as the advancements, of Mathematica 7 in education and academic research. These seminars will highlight the latest directions in technical computing with Mathematica, and the impact this technology has across a wide range of academic fields, from maths, physics and biology to finance, economics and business.
Those not yet familiar with Mathematica will gain an overview of the system and discover the breadth of applications it can address, while experts will get first-hand experience with recent advances in Mathematica like parallel computing, digital image processing, point-and-click palettes, built-in curated data, as well as courseware examples. 

A spatial-temporal point process model for fine resolution multi-site rainfall data from Roma, Italy 14:10 Thu 19 Aug, 2010 :: Napier G04 :: A/Prof Paul Cowpertwait :: Auckland University of Technology
A point process rainfall model is further developed that has storm origins occurring in space-time according to a Poisson process. Each storm origin has a random radius so that storms occur as circular regions in two-dimensional
space, where the storm radii are taken to be independent exponential random
variables. Storm origins are of random type z, where z follows a continuous
probability distribution. Cell origins occur in a further spatial Poisson
process and have arrival times that follow a Neyman-Scott point process. Cell
origins have random radii so that cells form discs in two-dimensional space.
Statistical properties up to third order are derived and used to fit the model
to 10 min series taken from 23 sites across the Roma region, Italy.
Distributional properties of the observed annual maxima are compared to
equivalent values sampled from series that are simulated using the fitted
model. The results indicate that the model will be of use in urban drainage
projects for the Roma region.
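The storm-origin layer of a model of this kind can be sketched under simple stated assumptions (homogeneous Poisson origins on a square region, independent exponential radii); all names and parameter values here are hypothetical illustrations, not the fitted model.

```python
import random
from math import exp, hypot

def poisson_draw(lam, rng):
    """Knuth's Poisson sampler (adequate for moderate lam)."""
    limit, k, p = exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_storms(intensity, side, mean_radius, seed=3):
    """Storm origins as a homogeneous Poisson process on a side x side square;
    each storm covers a disc with an independent exponential radius."""
    rng = random.Random(seed)
    n = poisson_draw(intensity * side * side, rng)
    return [(rng.uniform(0, side), rng.uniform(0, side),
             rng.expovariate(1.0 / mean_radius)) for _ in range(n)]

def storms_covering(storms, px, py):
    """Number of storm discs covering the site (px, py)."""
    return sum(1 for x, y, r in storms if hypot(px - x, py - y) <= r)

storms = simulate_storms(intensity=0.05, side=20.0, mean_radius=2.0)
```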


Simultaneous confidence band and hypothesis test in generalised varying-coefficient models 15:05 Fri 10 Sep, 2010 :: Napier LG28 :: Prof Wenyang Zhang :: University of Bath
Generalised varying-coefficient models (GVC) are very important models, and
a considerable literature addresses them. However, most of the existing
literature is devoted to the estimation procedure. In this talk, I will
systematically investigate statistical inference for GVC, including the
confidence band as well as the hypothesis test. I
will show the asymptotic distribution of the maximum discrepancy between the
estimated functional coefficient and the true functional coefficient. I will
compare different approaches for the construction of confidence band and
hypothesis test. Finally, the proposed statistical inference methods are used to
analyse the data from China about contraceptive use there, which leads to some
interesting findings. 

Statistical physics and behavioral adaptation to Creation's main stimuli: sex and food 15:10 Fri 29 Oct, 2010 :: E10 B17 Suite 1 :: Prof Laurent Seuront :: Flinders University and South Australian Research and Development Institute
Animals typically search for food and mates, while avoiding predators. This is particularly critical for keystone organisms such as intertidal gastropods and copepods (i.e. millimeter-scale crustaceans) as they typically rely on non-visual senses for detecting, identifying and locating mates in their two- and three-dimensional environments. Here, using stochastic methods derived from the field of nonlinear physics, we provide new insights into the nature (i.e. innate vs. acquired) of the motion behavior of gastropods and copepods, and demonstrate how changes in their behavioral properties can be used to identify the trade-offs between foraging for food or sex. The gastropod Littorina littorea hence moves according to fractional Brownian motions while foraging for food (in accordance with the fractal nature of food distributions), and switches to Brownian motion while foraging for sex. In contrast, the swimming behavior of the copepod Temora longicornis belongs to the class of multifractal random walks (MRW; i.e. a form of anomalous diffusion), characterized by a nonlinear moment scaling function for distance versus time. This clearly differs from the traditional Brownian and fractional Brownian walks expected or previously detected in animal behaviors. The divergence between MRW and Lévy flight and walk is also discussed, and it is shown how copepod anomalous diffusion is enhanced by the presence and concentration of conspecific waterborne signals, and dramatically increases male-female encounter rates. 

Change detection in rainfall time series for Perth, Western Australia 12:10 Mon 16 May, 2011 :: 5.57 Ingkarni Wardli :: Farah Mohd Isa :: University of Adelaide
There have been numerous reports that the rainfall in south Western Australia,
particularly around Perth, has undergone a step-change decrease, which is
typically attributed to climate change. Four statistical tests are used to
assess the empirical evidence for this claim on time series from five
meteorological stations, all of which exceed 50 years. The tests used in this
study are: the CUSUM; Bayesian change point analysis; the consecutive t-test; and
Hotelling's T² statistic. Results from the multivariate Hotelling's T² analysis are
compared with those from the three univariate analyses. The issue of multiple
comparisons is discussed. A summary of the empirical evidence for the claimed
step change in the Perth area is given. 
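A hypothetical illustration of the first of these tests: a basic CUSUM of deviations from the overall mean, whose largest absolute excursion locates a step change (the synthetic data here are illustrative, not the Perth series).

```python
def cusum(series):
    """Cumulative sums of deviations from the overall mean; the argmax of the
    absolute path is a natural change-point estimate."""
    m = sum(series) / len(series)
    path, s = [], 0.0
    for x in series:
        s += x - m
        path.append(s)
    return path

data = [10.0] * 20 + [7.0] * 20          # synthetic step decrease at index 20
path = cusum(data)
change = max(range(len(path)), key=lambda i: abs(path[i]))
```

The path climbs while observations sit above the overall mean and falls afterwards, so the peak marks the last point of the first regime.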

Statistical challenges in molecular phylogenetics 15:10 Fri 20 May, 2011 :: Mawson Lab G19 lecture theatre :: Dr Barbara Holland :: University of Tasmania
This talk will give an introduction to the ways that mathematics and statistics get used in the inference of evolutionary (phylogenetic) trees. Taking a model-based approach to estimating the relationships between species has proven to be enormously effective; however, some tricky statistical challenges remain. The increasingly plentiful amount of DNA sequence data is a boon, but it is also throwing a spotlight on some of the shortcomings of current best practice, particularly in how we (1) assess the reliability of our phylogenetic estimates, and (2) choose appropriate models. This talk will aim to give a general introduction to this area of research and will also highlight some results from two of my recent PhD students. 

Statistical modelling in economic forecasting: a semiparametric spatio-temporal approach 12:10 Mon 23 May, 2011 :: 5.57 Ingkarni Wardli :: Dawlah Alsulami :: University of Adelaide
How to model the spatio-temporal variation of housing prices is an important and challenging problem, as it is of vital importance for both investors and policy makers to assess any movement in housing prices. In this seminar I will talk about the proposed model to estimate any movement in housing prices and measure the risk more accurately. 

Inference and optimal design for percolation and general random graph models (Part I) 09:30 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge
The problem of optimal arrangement of nodes of a random weighted graph
is discussed in this workshop. The nodes of graphs under study are fixed, but
their edges are random and established according to the so-called
edge-probability function. This function is assumed to depend on the weights
attributed to the pairs of graph nodes (or distances between them) and a
statistical parameter. It is the purpose of experimentation to make inference on
the statistical parameter and thus to extract as much information about it as
possible. We also distinguish between two different experimentation scenarios:
progressive and instructive designs.
We adopt a utilitybased Bayesian framework to tackle the optimal design problem
for random graphs of this kind. Simulation based optimisation methods, mainly
Monte Carlo and Markov Chain Monte Carlo, are used to obtain the solution. We
study optimal design problem for the inference based on partial observations of
random graphs by employing data augmentation technique. We prove that the
infinitely growing or diminishing node configurations asymptotically represent
the worst node arrangements. We also obtain the exact solution to the optimal
design problem for proximity (geometric) graphs and numerical solution for
graphs with threshold edgeprobability functions.
We consider inference and optimal design problems for finite clusters from bond
percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both
numerical and analytical results for these graphs. We introduce innerouter
plots by deleting some of the lattice nodes and show that the ÃÂÃÂ«mostly populatedÃÂÃÂ
designs are not necessarily optimal in the case of incomplete observations under
both progressive and instructive design scenarios. Some of the obtained results
may generalise to other lattices. 

Inference and optimal design for percolation and general random graph models (Part II) 10:50 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge
The problem of optimal arrangement of nodes of a random weighted graph is discussed in this workshop. The nodes of the graphs under study are fixed, but their edges are random and established according to a so-called edge-probability function. This function is assumed to depend on the weights attributed to the pairs of graph nodes (or distances between them) and a statistical parameter. It is the purpose of experimentation to make inference on the statistical parameter and thus to extract as much information about it as possible. We also distinguish between two different experimentation scenarios: progressive and instructive designs.
We adopt a utility-based Bayesian framework to tackle the optimal design problem for random graphs of this kind. Simulation-based optimisation methods, mainly Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We study the optimal design problem for inference based on partial observations of random graphs by employing a data augmentation technique. We prove that infinitely growing or diminishing node configurations asymptotically represent the worst node arrangements. We also obtain the exact solution to the optimal design problem for proximity (geometric) graphs and a numerical solution for graphs with threshold edge-probability functions.
We consider inference and optimal design problems for finite clusters from bond percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both numerical and analytical results for these graphs. We introduce inner-outer plots by deleting some of the lattice nodes and show that the "mostly populated" designs are not necessarily optimal in the case of incomplete observations under both progressive and instructive design scenarios. Some of the obtained results may generalise to other lattices. 

Routing in equilibrium 15:10 Tue 21 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Timothy Griffin :: University of Cambridge
Media...Some path problems cannot be modelled using semirings because the associated algebraic structure is not distributive. Rather than attempting to compute globally optimal paths with such structures, it may be sufficient in some cases to find locally optimal paths: paths that represent a stable local equilibrium. For example, this is the type of routing system that has evolved to connect Internet Service Providers (ISPs), where link weights implement bilateral commercial relationships between them. Previous work has shown that routing equilibria can be computed for some non-distributive algebras using algorithms in the Bellman-Ford family. However, no polynomial time bound was known for such algorithms. In this talk, we show that routing equilibria can be computed using Dijkstra's algorithm for one class of non-distributive structures. This provides the first polynomial time algorithm for computing locally optimal solutions to path problems. 
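For background, the classical Dijkstra algorithm that the talk builds on can be sketched as follows. This is a generic shortest-path implementation over the usual (min, +) semiring; the toy network and its weights are purely illustrative and are not taken from the talk:

```python
import heapq

def dijkstra(graph, source):
    """Classical Dijkstra shortest paths over the (min, +) semiring.
    The talk's contribution is extending this style of algorithm to
    certain non-distributive structures; here we show only the
    standard distributive case."""
    dist = {source: 0}
    heap = [(0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy network of ISP-like nodes with link weights (illustrative only).
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
}
dist = dijkstra(graph, "A")
```

In the equilibrium setting described in the talk, the link "weights" encode commercial relationships rather than distances, and the algebra is no longer distributive, so this classical correctness argument no longer applies directly.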

Quantitative proteomics: data analysis and statistical challenges 10:10 Thu 30 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Peter Hoffmann :: Adelaide Proteomics Centre


Object oriented data analysis 14:10 Thu 30 Jun, 2011 :: 7.15 Ingkarni Wardli :: Prof Steve Marron :: The University of North Carolina at Chapel Hill
Object Oriented Data Analysis is the statistical analysis of populations of complex objects. In the special case of Functional Data Analysis, these data objects are curves, where standard Euclidean approaches, such as principal components analysis, have been very successful. Recent developments in medical image analysis motivate the statistical analysis of populations of more complex data objects which are elements of mildly non-Euclidean spaces, such as Lie Groups and Symmetric Spaces, or of strongly non-Euclidean spaces, such as spaces of tree-structured data objects. These new contexts for Object Oriented Data Analysis create several potentially large new interfaces between mathematics and statistics. Even in situations where Euclidean analysis makes sense, there are statistical challenges because of the High Dimension Low Sample Size problem, which motivates a new type of asymptotics leading to non-standard mathematical statistics. 

Object oriented data analysis of tree-structured data objects 15:10 Fri 1 Jul, 2011 :: 7.15 Ingkarni Wardli :: Prof Steve Marron :: The University of North Carolina at Chapel Hill
The field of Object Oriented Data Analysis has made a lot of progress on the statistical analysis of the variation in populations of complex objects. A particularly challenging example of this type is populations of tree-structured objects. Deep challenges arise, which involve a marriage of ideas from statistics, geometry, and numerical analysis, because the space of trees is strongly non-Euclidean in nature. These challenges, together with three completely different approaches to addressing them, are illustrated using a real data example, where each data point is the tree of blood arteries in one person's brain. 

Statistical analysis of metagenomic data from the microbial community involved in industrial bioleaching 12:10 Mon 19 Sep, 2011 :: 5.57 Ingkarni Wardli :: Ms Susana SotoRojo :: University of Adelaide
In the last two decades heap bioleaching has become established as a successful commercial option for recovering copper from low-grade secondary sulfide ores. Genetics-based approaches have recently been employed in the task of characterizing mineral processing bacteria. Data analysis is a key issue and thus the implementation of adequate mathematical and statistical tools is of fundamental importance to draw reliable conclusions. In this talk I will give a recount of two specific problems that we have been working on: the first concerns experimental design, and the second the modelling of the composition and activity of the microbial consortium. 

Statistical analysis of schoolbased student performance data 12:10 Mon 10 Oct, 2011 :: 5.57 Ingkarni Wardli :: Ms Jessica Tan :: University of Adelaide
Join me in the journey of being a statistician for 15 minutes of your day (if you are not already one) and experience the task of data cleaning without having to get your own hands dirty. Most of you may have sat the Basic Skills Tests when at school or know someone who currently has to do the NAPLAN (National Assessment Program - Literacy and Numeracy) tests. Tests like these assess student progress and can be used to accurately measure school performance. In trying to answer the research question: "what conclusions about student progress and school performance can be drawn from NAPLAN data or data of a similar nature, using mathematical and statistical modelling and analysis techniques?", I have uncovered some interesting results about the data in my initial data analysis which I shall explain in this talk. 

Statistical modelling for some problems in bioinformatics 11:10 Fri 14 Oct, 2011 :: B.17 Ingkarni Wardli :: Professor Geoff McLachlan :: The University of Queensland
Media...In this talk we consider some statistical analyses of data arising in bioinformatics. The problems include the detection of differential expression in microarray gene-expression data, the clustering of time-course gene-expression data and, lastly, the analysis of modern-day cytometric data. Extensions are considered to the procedures proposed for these three problems in McLachlan et al. (Bioinformatics, 2006), Ng et al. (Bioinformatics, 2006), and Pyne et al. (PNAS, 2009), respectively. The latter references are available at http://www.maths.uq.edu.au/~gjm/. 

Likelihood-free Bayesian inference: modelling drug resistance in Mycobacterium tuberculosis 15:10 Fri 21 Oct, 2011 :: 7.15 Ingkarni Wardli :: Dr Scott Sisson :: University of New South Wales
Media...A central pillar of Bayesian statistical inference is Monte Carlo integration, which is based on obtaining random samples from the posterior distribution. There are a number of standard ways to obtain these samples, provided that the likelihood function can be numerically evaluated. In the last 10 years, there has been a substantial push to develop methods that permit Bayesian inference in the presence of computationally intractable likelihood functions. These methods, termed ``likelihood-free'' or approximate Bayesian computation (ABC), are now being applied extensively across many disciplines.
In this talk, I'll present a brief, non-technical overview of the ideas behind likelihood-free methods. I'll motivate and illustrate these ideas through an analysis of the epidemiological fitness cost of drug resistance in Mycobacterium tuberculosis. 
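The rejection-sampling form of ABC described above can be sketched in a few lines. This is a toy illustration only: the normal model, uniform prior, summary statistic and tolerance are all assumptions made for the example, not the speaker's tuberculosis analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: data are draws from Normal(theta, 1); we pretend the
# likelihood is intractable and replace its evaluation by simulation.
observed = rng.normal(2.0, 1.0, size=50)
obs_mean = observed.mean()

def abc_rejection(n_samples, tolerance):
    """Basic ABC rejection sampler: keep prior draws whose simulated
    summary statistic lands within `tolerance` of the observed one."""
    accepted = []
    while len(accepted) < n_samples:
        theta = rng.uniform(-5, 5)             # draw from the prior
        sim = rng.normal(theta, 1.0, size=50)  # simulate a data set
        if abs(sim.mean() - obs_mean) < tolerance:
            accepted.append(theta)
    return np.array(accepted)

posterior = abc_rejection(200, tolerance=0.2)
```

The accepted values approximate draws from the posterior; shrinking the tolerance improves the approximation at the cost of more rejections.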

Financial risk measures - the theory and applications of backward stochastic difference/differential equations with respect to the single jump process 12:10 Mon 26 Mar, 2012 :: 5.57 Ingkarni Wardli :: Mr Bin Shen :: University of Adelaide
Media...This is my PhD thesis, submitted one month ago. Chapter 1 introduces the background of the research fields; each subsequent chapter is a published or accepted paper.
Chapter 2, to appear in Methodology and Computing in Applied Probability, establishes the theory of Backward Stochastic Difference Equations with respect to the single jump process in discrete time.
Chapter 3, published in Stochastic Analysis and Applications, establishes the theory of Backward Stochastic Differential Equations with respect to the single jump process in continuous time.
Chapters 2 and 3 constitute Part I: Theory.
Chapter 4, published in Expert Systems With Applications, gives some examples of how to measure financial risks using the theory established in Chapter 2.
Chapter 5, accepted by Journal of Applied Probability, considers the question of an optimal transaction between two investors to minimise their risks; it is an application of the theory established in Chapter 3.
Chapters 4 and 5 constitute Part II: Applications. 

Change detection in rainfall time series for Perth, Western Australia 12:10 Mon 14 May, 2012 :: 5.57 Ingkarni Wardli :: Ms Farah Mohd Isa :: University of Adelaide
Media...There have been numerous reports that the rainfall in south Western Australia, particularly around Perth, has undergone a step-change decrease, which is typically attributed to climate change. Four statistical tests are used to assess the empirical evidence for this claim on time series from five meteorological stations, all of which exceed 50 years in length. The tests used in this study are: the CUSUM; Bayesian change point analysis; the consecutive t-test; and Hotelling's T^2 statistic. Results from the multivariate Hotelling's T^2 analysis are compared with those from the three univariate analyses. The issue of multiple comparisons is discussed. A summary of the empirical evidence for the claimed step change in the Perth area is given. 
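The CUSUM idea mentioned above is easy to illustrate on synthetic data. This is a toy sketch with an assumed step change (not the Perth rainfall series), locating the change point as the peak of the cumulative-sum statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "rainfall" series: mean drops from 800 to 700 at index 60.
series = np.concatenate([rng.normal(800, 50, 60),
                         rng.normal(700, 50, 40)])

def cusum(x):
    """CUSUM statistic: cumulative sums of deviations from the overall
    mean.  A pronounced peak suggests a change point near its location."""
    return np.cumsum(x - x.mean())

S = cusum(series)
change_point = int(np.argmax(np.abs(S)))  # estimated change location
```

In practice the peak is compared against a critical value (often obtained by resampling) before a change is declared significant.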

Evaluation and comparison of the performance of Australian and New Zealand intensive care units 14:10 Fri 25 May, 2012 :: 7.15 Ingkarni Wardli :: Dr Jessica Kasza :: The University of Adelaide
Media...Recently, the Australian Government has emphasised the need for monitoring and comparing the performance of Australian hospitals. Evaluating the performance of intensive care units (ICUs) is of particular importance, given that the most severe cases are treated in these units. Indeed, ICU performance can be thought of as a proxy for the overall performance of a hospital. We compare the performance of the ICUs contributing to the Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database, the largest of its kind in the world, and identify those ICUs with unusual performance.
It is well-known that there are many statistical issues that must be accounted for in the evaluation of healthcare provider performance. Indicators of performance must be appropriately selected and estimated, investigators must adequately adjust for case-mix, statistical variation must be fully accounted for, and adjustment for multiple comparisons must be made. Our basis for dealing with these issues is the estimation of a hierarchical logistic model for the in-hospital death of each patient, with patients clustered within ICUs. Both patient- and ICU-level covariates are adjusted for, with a random intercept and random coefficient for the APACHE III severity score. Given that we expect most ICUs to have similar performance after adjustment for these covariates, we follow Ohlssen et al., JRSS A (2007), and estimate a null model that we expect the majority of ICUs to follow. This methodology allows us to rigorously account for the aforementioned statistical issues, and accurately identify those ICUs contributing to the ANZICS database that have comparatively unusual performance. This is joint work with Prof. Patty Solomon and Assoc. Prof. John Moran. 

A brief introduction to Support Vector Machines 12:30 Mon 4 Jun, 2012 :: 5.57 Ingkarni Wardli :: Mr Tyman Stanford :: University of Adelaide
Media...Support Vector Machines (SVMs) are used in a variety of contexts for a range of purposes including regression, feature selection and classification. To convey the basic principles of SVMs, this presentation will focus on the application of SVMs to classification. Classification (or discrimination), in a statistical sense, is supervised model creation for the purpose of assigning future observations to a group or class. An example might be assigning healthy or diseased labels to patients from p characteristics obtained from a blood sample.
While SVMs are widely used, they are most successful when the data have one or more of the following properties:
The data are not consistent with a standard probability distribution.
The number of observations, n, used to create the model is less than the number of predictive features, p (the so-called small-n, big-p problem).
The decision boundary between the classes is likely to be nonlinear in the feature space.
I will present a short overview of how SVMs are constructed, keeping in mind their purpose. As this presentation is part of a double postgrad seminar, I will keep it to a maximum of 15 minutes.
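As a minimal sketch of the classification idea described above, a linear soft-margin SVM can be trained by sub-gradient descent on the regularised hinge loss. The synthetic two-class data, learning rate and regularisation constant are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated classes in 2-D (synthetic, for illustration).
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Sub-gradient descent on the regularised hinge loss
    (the primal soft-margin SVM objective)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:          # margin violated
                w -= lr * (lam * w - yi * xi)
                b += lr * yi
            else:                              # only shrink the weights
                w -= lr * lam * w
    return w, b

w, b = train_linear_svm(X, y)
predictions = np.sign(X @ w + b)
accuracy = (predictions == y).mean()
```

Nonlinear decision boundaries of the kind mentioned in the abstract are obtained by replacing inner products with a kernel, which this sketch omits.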


Star Wars Vs The Lord of the Rings: A Survival Analysis 12:10 Mon 27 Aug, 2012 :: B.21 Ingkarni Wardli :: Mr Christopher Davies :: University of Adelaide
Media...Ever wondered whether you are more likely to die in the Galactic Empire or Middle Earth? Well this is the postgraduate seminar for you!
I'll be attempting to answer this question using survival analysis, the statistical method of choice for investigating time-to-event data.
Spoiler Warning: This talk will contain references to the deaths of characters in the above movie sagas. 
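The Kaplan-Meier estimator that underlies this kind of survival analysis can be sketched directly. The "times to death" and censoring indicators below are made up for illustration, not taken from either saga:

```python
import numpy as np

# Toy time-to-event data: observed times and event indicators
# (1 = death observed, 0 = censored, i.e. still alive at last sighting).
times = np.array([2, 3, 3, 5, 6, 7, 8, 8, 9, 10])
events = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 1])

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator of the survival function:
    at each observed death time, multiply by (1 - deaths / at-risk)."""
    survival = 1.0
    curve = []
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)
        deaths = np.sum((times == t) & (events == 1))
        survival *= 1 - deaths / at_risk
        curve.append((int(t), survival))
    return curve

curve = kaplan_meier(times, events)
```

Censored observations still count toward the at-risk set until their censoring time, which is what distinguishes this from a naive empirical survival curve.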

Principal Component Analysis (PCA) 12:30 Mon 3 Sep, 2012 :: B.21 Ingkarni Wardli :: Mr Lyron Winderbaum :: University of Adelaide
Media...Principal Component Analysis (PCA) has become something of a buzzword recently in a number of disciplines, including gene expression analysis and facial recognition. It is a classical, and fundamentally simple, concept that has been around since the early 1900s; its recent popularity is largely due to the need for dimension-reduction techniques in analyzing the high-dimensional data that have become more common in the last decade, and the availability of computing power to implement them. I will explain the concept, prove a result, and give a couple of examples. The talk should be accessible to all disciplines as it (should?) only assume first year linear algebra, the concept of a random variable, and covariance.
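As a minimal sketch of the concept, PCA can be computed from the singular value decomposition of the centred data matrix; the synthetic data below (points lying mostly along one direction) are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 points in 3-D lying mostly along one direction.
direction = np.array([1.0, 2.0, 0.5])
data = rng.normal(size=(200, 1)) * direction + rng.normal(0, 0.1, (200, 3))

def pca(X, k):
    """PCA via the SVD of the centred data matrix: the top-k right
    singular vectors are the principal components, and the squared
    singular values give the variance explained by each."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]
    explained = s[:k] ** 2 / (s ** 2).sum()  # variance-explained ratios
    return components, explained

components, explained = pca(data, k=1)
```

Here the first component recovers the dominant direction and accounts for nearly all the variance, which is exactly the dimension-reduction use case mentioned above.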


Optimal Experimental Design: What Is It? 12:10 Mon 15 Oct, 2012 :: B.21 Ingkarni Wardli :: Mr David Price :: University of Adelaide
Media...Optimal designs are a class of experimental designs that are optimal with respect to some statistical criterion. That answers the question, right? But what do I mean by 'optimal', and which 'statistical criterion' should you use? In this talk I will answer all these questions, and provide an overly simple example to demonstrate how optimal design works. I will then give a brief explanation of how I will use this methodology, and what chickens have to do with it. 

Numerical Free Probability: Computing Eigenvalue Distributions of Algebraic Manipulations of Random Matrices 15:10 Fri 2 Nov, 2012 :: B.20 Ingkarni Wardli :: Dr Sheehan Olver :: The University of Sydney
Media...Suppose that the global eigenvalue distributions of two large random matrices A and B are known. It is a remarkable fact that, generically, the eigenvalue distributions of A + B and (if A and B are positive definite) A*B are uniquely determined from only the eigenvalue distributions of A and B; i.e., no information about eigenvectors is required. These operations on eigenvalue distributions are described by free probability theory. We construct a numerical toolbox that can efficiently and reliably calculate these operations with spectral accuracy, by exploiting the complex analytical framework that underlies free probability theory.
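A quick numerical illustration of the phenomenon described above, in the simplest case: two independent GOE-type matrices each have semicircular spectra on [-2, 2], and their free additive convolution is again a semicircle with doubled variance, so the spectrum of A + B concentrates on roughly [-2*sqrt(2), 2*sqrt(2)]. This is a toy simulation, not the speaker's spectral-accuracy toolbox:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

def goe(n):
    """A random symmetric (GOE-type) matrix, normalised so its
    eigenvalue distribution approaches the semicircle law on [-2, 2]."""
    G = rng.normal(size=(n, n))
    return (G + G.T) / np.sqrt(2 * n)

A, B = goe(n), goe(n)
eig_sum = np.linalg.eigvalsh(A + B)

# Free probability predicts the support of the spectrum of A + B from
# the spectra of A and B alone: approximately [-2*sqrt(2), 2*sqrt(2)].
support = 2 * np.sqrt(2)
```

The prediction uses no eigenvector information at all, matching the "remarkable fact" in the abstract; the talk's toolbox computes such free convolutions to spectral accuracy rather than by Monte Carlo.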


What are fusion categories? 12:10 Fri 6 Sep, 2013 :: Ingkarni Wardli B19 :: Dr Scott Morrison :: Australian National University
Fusion categories are a common generalization of finite groups and quantum groups at roots of unity. I'll explain a little of their structure, mention their applications (to topological field theory and quantum computing), and then explore the ways in which they are in general similar to, or different from, the 'classical' cases. We've only just started exploring, and don't yet know what the exotic examples we've discovered signify about the landscape ahead. 

Random Wanderings on a Sphere... 11:10 Tue 17 Sep, 2013 :: Ingkarni Wardli Level 5 Room 5.57 :: A/Prof Robb Muirhead :: University of Adelaide
This will be a short talk (about 30 minutes) about the following problem. (Even if I tell you all I know about it, it won't take very long!)
Imagine the earth is a unit sphere in 3 dimensions. You're standing at a fixed point, which we may as well take to be the North Pole. Suddenly you get moved to another point on the sphere by a random (uniform) orthogonal transformation. Where are you now? You're not at a point which is uniformly distributed on the surface of the sphere (so, since most of the earth's surface is water, you're probably drowning). But then you get moved again by the same orthogonal transformation. Where are you now? And what happens to your location if this happens repeatedly? I have only a partial answer to this question, for 2 and 3 transformations. (There's nothing special about 3 dimensions here; results hold for all dimensions which are at least 3.)
I don't know of any statistical application for this! This work was motivated by a talk I heard, given by Tom Marzetta (Bell Labs) at a conference at MIT. Although I know virtually nothing about signal processing, I gather Marzetta was trying to encode signals using powers of random orthogonal matrices. After carrying out simulations, I think he decided it wasn't a good idea. 
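The setup is easy to simulate: a sketch using the standard QR construction of a Haar-(uniformly-)distributed random orthogonal matrix, applied repeatedly to the North Pole (illustrative only; the talk's question concerns the resulting distributions, which this does not answer):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_orthogonal(n):
    """Haar-distributed random orthogonal matrix via the QR
    decomposition of a Gaussian matrix, with the usual sign fix
    on the columns so the distribution is exactly uniform."""
    G = rng.normal(size=(n, n))
    Q, R = np.linalg.qr(G)
    return Q * np.sign(np.diag(R))

# Start at the "North Pole" and apply the SAME random rotation repeatedly.
start = np.array([0.0, 0.0, 1.0])
Q = haar_orthogonal(3)
positions = [start]
for _ in range(5):
    positions.append(Q @ positions[-1])
```

Every position stays on the unit sphere, but, as the abstract notes, after the first step the points are no longer uniformly distributed over it.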

A mathematician walks into a bar..... 12:10 Mon 30 Sep, 2013 :: B.19 Ingkarni Wardli :: Ben Rohrlach :: University of Adelaide
Media...Man is by his very nature, inquisitive. Our need to know has been the reason we've always evolved as a species. From discovering fire, to exploring the galaxy with those Vulcan guys in that documentary I saw, knowing the answer to a question has always driven humankind. Clearly then, I had to ask something. Something that by its very nature is a thing. A thing that, specifically, I had to know. That thing that I had to know was this:
Do mathematicians get stupider the more they drink? Is this effect more pronounced than for normal (Gaussian) people?
At the quiz night that AUMS just ran I managed to talk two tables into letting me record some key drinking statistics. I'll be using those statistics to introduce some different statistical tests commonly seen in most analyses you'll see in other fields. Oh, and I'll answer those questions I mentioned earlier too, hopefully. Let's do this thing. 

Stochastic models of evolution: Trees and beyond 15:10 Fri 16 May, 2014 :: B.18 Ingkarni Wardli :: Dr Barbara Holland :: The University of Tasmania
Media...In the first part of the talk I will give a general introduction to phylogenetics, and discuss some of the mathematical and statistical issues that arise in trying to infer evolutionary trees. In particular, I will discuss how we model the evolution of DNA along a phylogenetic tree using a continuous time Markov process.
In the second part of the talk I will discuss how to express the two-state continuous-time Markov model on phylogenetic trees in such a way that allows its extension to more general models. In this framework we can model convergence of species as well as divergence (speciation). I will discuss the identifiability (or otherwise) of the models that arise in some simple cases. Use of a statistical framework means that we can use established techniques such as the AIC or likelihood ratio tests to decide if datasets show evidence of convergent evolution. 

Computing with groups 15:10 Fri 30 May, 2014 :: B.21 Ingkarni Wardli :: Dr Heiko Dietrich :: Monash University
Media...Groups are algebraic structures which show up in many branches of mathematics and other areas of science; Computational Group Theory is on the cutting edge of pure research in group theory and its interplay with computational methods.
In this talk, we consider a practical aspect of Computational Group Theory: how to represent a group in a computer, and how to work with such a description efficiently. We will first recall some well-established methods for permutation groups; we will then discuss some recent progress for matrix groups. 

Fast computation of eigenvalues and eigenfunctions on bounded plane domains 15:10 Fri 1 Aug, 2014 :: B.18 Ingkarni Wardli :: Professor Andrew Hassell :: Australian National University
Media...I will describe a new method for numerically computing eigenfunctions and eigenvalues on certain plane domains, derived from the so-called "scaling method" of Vergini and Saraceno. It is based on properties of the Dirichlet-to-Neumann map on the domain, which relates a function f on the boundary of the domain to the normal derivative (at the boundary) of the eigenfunction with boundary data f. This is a topic of independent interest in pure mathematics. In my talk I will try to emphasize the interplay between theory and applications, which is very rich in this situation. This is joint work with numerical analyst Alex Barnett (Dartmouth). 

Frequentist vs. Bayesian. 12:10 Mon 18 Aug, 2014 :: B.19 Ingkarni Wardli :: David Price :: University of Adelaide
Media...Abstract: There are two frameworks in which we can do statistical analyses. Choosing one framework over the other can be* as controversial as choosing between team Jacob and... that other guy. In this talk, I aim to give a very very simple explanation of the main difference between frequentist and Bayesian methods. I'll probably flip a coin and show you a video too.
* to people who really care. 
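The coin-flip comparison can be made concrete with a toy example contrasting the frequentist maximum-likelihood estimate with a Bayesian posterior mean; the uniform Beta(1, 1) prior and the simulated data are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 flips of a coin whose true probability of heads is 0.7.
true_p = 0.7
flips = rng.random(20) < true_p
heads = int(flips.sum())

# Frequentist answer: the maximum-likelihood point estimate.
p_mle = heads / 20

# Bayesian answer: with a uniform Beta(1, 1) prior the posterior is
# Beta(1 + heads, 1 + tails); summarise it by its mean.
a, b = 1 + heads, 1 + (20 - heads)
p_posterior_mean = a / (a + b)
```

The posterior mean is shrunk toward 0.5 relative to the maximum-likelihood estimate, a small but representative instance of the philosophical difference the talk is about.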

Testing Statistical Association between Genetic Pathways and Disease Susceptibility 12:10 Mon 1 Sep, 2014 :: B.19 Ingkarni Wardli :: Andy Pfieffer :: University of Adelaide
Media...A major research area is the identification of genetic pathways associated with various diseases. However, a detailed comparison of methods that have been designed to ascertain the association between pathways and diseases has not been performed.
I will give the necessary biological background behind Genome-Wide Association Studies (GWAS), and explain the shortfalls in traditional GWAS methodologies. I will then explore various methods that use information about genetic pathways in GWAS, and explain the challenges in comparing these methods. 

Modelling segregation distortion in multi-parent crosses 15:00 Mon 17 Nov, 2014 :: 5.57 Ingkarni Wardli :: Rohan Shah (joint work with B. Emma Huang and Colin R. Cavanagh) :: The University of Queensland
Construction of high-density genetic maps has been made feasible by low-cost high-throughput genotyping technology; however, the process is still complicated by biological, statistical and computational issues. A major challenge is the presence of segregation distortion, which can be caused by selection, difference in fitness, or suppression of recombination due to introgressed segments from other species. Alien introgressions are common in major crop species, where they have often been used to introduce beneficial genes from wild relatives.
Segregation distortion causes problems at many stages of the map construction process, including assignment to linkage groups and estimation of recombination fractions. This can result in incorrect ordering and estimation of map distances. While discarding markers will improve the resulting map, it may result in the loss of genomic regions under selection or containing beneficial genes (in the case of introgression).
To correct for segregation distortion we model it explicitly in the estimation of recombination fractions. Previously proposed methods introduce additional parameters to model the distortion, with a corresponding increase in computing requirements. This poses difficulties for large, densely genotyped experimental populations. We propose a method imposing minimal additional computational burden which is suitable for high-density map construction in large multi-parent crosses. We demonstrate its use modelling the known Sr36 introgression in wheat for an eight-parent complex cross.


Can mathematics help save energy in computing? 15:10 Fri 22 May, 2015 :: Engineering North N132 :: Prof Markus Hegland :: ANU
Media...Recent development of computational hardware is characterised by two trends:
1. High levels of duplication of computational capabilities in multicore, parallel and GPU processing; and
2. Substantially faster development of the speed of computational technology compared to communication technology.
A consequence of these two trends is that the energy costs of modern computing devices, from mobile phones to supercomputers, are increasingly dominated by communication costs. In order to save energy one would thus need to reduce the amount of data movement within the computer. This can be achieved by recomputing results instead of communicating them. The resulting increase in computational redundancy may also be used to make the computations more robust against hardware faults. Paradoxically, by doing more (computations) we use less (energy).
This talk will first discuss, for a simple example, how a mathematical understanding can be applied to improve computational results using extrapolation. Then the problem of energy consumption in computational hardware will be considered. Finally some recent work will be discussed which shows how redundant computing is used to mitigate computational faults and thus to save energy.


Monodromy of the Hitchin system and components of representation varieties 12:10 Fri 29 May, 2015 :: Napier 144 :: David Baraglia :: University of Adelaide
Representations of the fundamental group of a compact Riemann surface into a reductive Lie group form a moduli space, called a representation variety. An outstanding problem in topology is to determine the number of components of these varieties. Through a deep result known as nonabelian Hodge theory, representation varieties are homeomorphic to moduli spaces of certain holomorphic objects called Higgs bundles. In this talk I will describe recent joint work with L. Schaposnik computing the monodromy of the Hitchin fibration for Higgs bundle moduli spaces. Our results give a new unified proof of the number of components of several representation varieties. 

Group Meeting 15:10 Fri 29 May, 2015 :: EM 213 :: Dr Judy Bunder :: University of Adelaide
Talk : Patch dynamics for efficient exascale simulations
Abstract
Massive parallelisation has led to a dramatic increase in available computational power. However, data transfer speeds have failed to keep pace and are the major limiting factor in the development of exascale computing. New algorithms must be developed which minimise the transfer of data. Patch dynamics is a computational macroscale modelling scheme which provides a coarse macroscale solution of a problem defined on a fine microscale by dividing the domain into many non-overlapping, coupled patches. Patch dynamics is readily adaptable to massive parallelisation as each processor core can evaluate the dynamics on one, or a few, patches. However, patch coupling conditions interpolate across the unevaluated parts of the domain between patches and require almost continuous data transfer. We propose a modified patch dynamics scheme which minimises data transfer by only re-evaluating the patch coupling conditions at `mesoscale' time scales which are significantly larger than the microscale time of the microscale problem. We analyse and quantify the error arising from patch dynamics with mesoscale temporal coupling. 

Complex Systems, Chaotic Dynamics and Infectious Diseases 15:10 Fri 5 Jun, 2015 :: Engineering North N132 :: Prof Michael Small :: UWA
Media...In complex systems, the interconnection between the components of the system determines the dynamics. The system is described by a very large and random mathematical graph and it is the topological structure of that graph which is important for understanding the dynamical behaviour of the system. I will talk about two specific examples: (1) spread of infectious disease (where the connection between the agents in a population, rather than epidemic parameters, determines the endemic state); and (2) a transformation to represent a dynamical system as a graph (such that the "statistical mechanics" of the graph characterise the dynamics). 

A relaxed introduction to resamplingbased multiple testing 12:10 Mon 10 Aug, 2015 :: Benham Labs G10 :: Ngoc Vo :: University of Adelaide
Media...P-values and false positives are two phrases that you commonly see thrown around in scientific literature. More often than not, experimenters and analysts are required to quote p-values as a measure of statistical significance: how strongly does your evidence support your hypothesis? But what happens when this "strong evidence" is just a coincidence? What happens if you have lots of these hypotheses, up to tens of thousands, to test all at the same time, and most of your significant findings end up being just "coincidences"? 
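The "coincidences" problem is easy to demonstrate by simulation: a toy sketch in which every null hypothesis is true, comparing raw significance counts with Bonferroni-corrected ones. The Bonferroni correction is used here only because it fits in one line; the talk itself concerns resampling-based corrections:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# 1000 tests in which the null hypothesis is TRUE in every case:
# each "experiment" compares two samples from the same distribution.
n_tests = 1000
pvals = np.empty(n_tests)
for i in range(n_tests):
    a = rng.normal(0, 1, 30)
    b = rng.normal(0, 1, 30)
    # Two-sample statistic with a normal approximation for the p-value.
    t = (a.mean() - b.mean()) / sqrt(a.var(ddof=1) / 30 + b.var(ddof=1) / 30)
    pvals[i] = 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))

raw_hits = int(np.sum(pvals < 0.05))                   # pure coincidences
bonferroni_hits = int(np.sum(pvals < 0.05 / n_tests))  # after correction
```

Roughly 5% of the uncorrected tests come out "significant" despite every null being true; the corrected threshold removes almost all of them, at the cost of power, which is the trade-off resampling-based methods aim to improve.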

Modelling Directionality in Stationary Geophysical Time Series 12:10 Mon 12 Oct, 2015 :: Benham Labs G10 :: Mohd Mahayaudin Mansor :: University of Adelaide
Many time series show directionality inasmuch as plots against time and against time-to-go are qualitatively different, and there is a range of statistical tests to quantify this effect. There are two strategies for allowing for directionality in time series models. Linear models are reversible if and only if the noise terms are Gaussian, so one strategy is to use linear models with non-Gaussian noise. The alternative is to use non-linear models. We investigate how non-Gaussian noise affects directionality in a first-order autoregressive process AR(1) and compare this with a threshold autoregressive model with two thresholds. The findings are used to suggest possible improvements to an AR(9) model, identified by the AIC, for the average yearly sunspot numbers from 1700 to 1900. The improvement is defined in terms of one-step-ahead forecast errors from 1901 to 2014. 
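The irreversibility of a linear AR(1) driven by non-Gaussian noise is easy to see numerically. The sketch below (my own illustration, using the skewness of first differences as a simple directionality statistic, not necessarily one of the tests in the talk) simulates the process and checks that the statistic flips sign under time reversal.

```python
import numpy as np

def ar1(n, phi, noise, seed=0):
    """Simulate x_t = phi * x_{t-1} + e_t with user-supplied noise draws."""
    rng = np.random.default_rng(seed)
    e = noise(rng, n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def diff_skewness(x):
    """Skewness of first differences: a simple directionality statistic.
    It flips sign under time reversal, so a value far from zero indicates
    a directional (time-irreversible) series."""
    d = np.diff(x)
    return float(((d - d.mean()) ** 3).mean() / d.std() ** 3)

# Centred exponential (non-Gaussian) noise makes the linear AR(1) directional;
# with Gaussian noise the same statistic hovers near zero.
expo = lambda rng, n: rng.exponential(1.0, n) - 1.0
x = ar1(50_000, 0.8, expo)
s_forward, s_backward = diff_skewness(x), diff_skewness(x[::-1])
print(s_forward, s_backward)   # roughly equal magnitude, opposite sign
```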

Quasi-isometry classification of certain hyperbolic Coxeter groups 11:00 Fri 23 Oct, 2015 :: Ingkarni Wardli Conference Room 7.15 (Level 7) :: Anne Thomas :: University of Sydney
Let Gamma be a finite simple graph with vertex set S. The associated right-angled Coxeter group W is the group with generating set S, such that s^2 = 1 for all s in S and st = ts if and only if s and t are adjacent vertices in Gamma. Moussong proved that the group W is hyperbolic in the sense of Gromov if and only if Gamma has no "empty squares". We consider the quasi-isometry classification of such Coxeter groups using the local cut point structure of their visual boundaries. In particular, we find an algorithm for computing Bowditch's JSJ tree for a class of these groups, and prove that two such groups are quasi-isometric if and only if their JSJ trees are the same. This is joint work with Pallavi Dani (Louisiana State University). 
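Moussong's criterion is directly checkable by brute force on small defining graphs. The sketch below (illustrative only) searches for an "empty square": an induced 4-cycle, i.e. vertices a, b, c, d with edges ab, bc, cd, da present and both diagonals absent.

```python
from itertools import combinations, permutations

def has_empty_square(vertices, edges):
    """Check for an induced 4-cycle ('empty square'). By Moussong's
    criterion, the right-angled Coxeter group of the graph is hyperbolic
    in the sense of Gromov iff no such square exists."""
    E = {frozenset(e) for e in edges}
    adj = lambda u, v: frozenset((u, v)) in E
    for quad in combinations(vertices, 4):
        for a, b, c, d in permutations(quad):
            if (adj(a, b) and adj(b, c) and adj(c, d) and adj(d, a)
                    and not adj(a, c) and not adj(b, d)):
                return True
    return False

# A 4-cycle is itself an empty square; adding one diagonal destroys it,
# so the corresponding Coxeter group becomes hyperbolic.
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(has_empty_square(range(4), square))             # True
print(has_empty_square(range(4), square + [(0, 2)]))  # False
```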

Group meeting 15:10 Fri 20 Nov, 2015 :: Ingkarni Wardli B17 :: Mr Jack Keeler :: University of East Anglia / University of Adelaide
Title: Stability of free-surface flow over topography
Abstract: The forced KdV equation is used as a model to analyse the wave behaviour on the free surface in response to prescribed topographic forcing. The research involves computing steady solutions using numerical and asymptotic techniques and then analysing the stability of these steady solutions in time-dependent calculations. Stability is analysed by computing the eigenvalue spectra of the linearised fKdV operator and by exploiting the Hamiltonian structure of the fKdV. Future work includes analysing the solution space for a corrugated topography and investigating the three-dimensional problem using the KP equation.
+ Any items for group discussion 

A Semi-Markovian Modeling of Limit Order Markets 13:00 Fri 11 Dec, 2015 :: Ingkarni Wardli 5.57 :: Anatoliy Swishchuk :: University of Calgary
R. Cont and A. de Larrard (SIAM J. Financial Mathematics, 2013) introduced a tractable stochastic model for the dynamics of a limit order book, computing various quantities of interest such as the probability of a price increase or the diffusion limit of the price process. As suggested by empirical observations, we extend their framework to allow 1) arbitrary distributions for the inter-arrival times of book events (possibly non-exponential) and 2) both the nature of a new book event and its corresponding inter-arrival time to depend on the nature of the previous book event. We do so by resorting to Markov renewal processes to model the dynamics of the bid and ask queues. We keep analytical tractability via explicit expressions for the Laplace transforms of various quantities of interest. Our approach is justified and illustrated by calibrating the model to five stocks, Amazon, Apple, Google, Intel and Microsoft, on June 21st, 2012. As in Cont and de Larrard, the bid-ask spread remains constant and equal to one tick, only the bid and ask queues are modelled (they are independent of each other and get re-initialized after a price change), and all orders have the same size. (This talk is based on our joint paper with Nelson Vadori (Morgan Stanley).) 

Multiscale modeling in biofluids and particle aggregation 15:10 Fri 17 Jun, 2016 :: B17 Ingkarni Wardli :: Dr Sarthok Sircar :: University of Adelaide
In today's seminar I will give two examples from mathematical biology which describe multiscale organisation at two levels: the meso/micro level and the continuum/macro level. I will then detail suitable tools from statistical mechanics to link these different scales.
The first problem arises in mathematical physiology: the swelling-deswelling mechanism of mucus, an ionic gel. Mucus is packaged inside cells at high concentration (volume fraction) and, when released into the extracellular environment, it expands in volume by two orders of magnitude in a matter of seconds. This rapid expansion is due to the rapid exchange of calcium and sodium that changes the cross-linked structure of the mucus polymers, thereby causing it to swell. Modelling this problem involves a two-phase, polymer/solvent mixture theory (at the continuum level), together with the chemistry of the polymer, its nearest-neighbour interactions and its binding with the dissolved ionic species (at the microscale). The problem is posed as a free-boundary problem, with the boundary conditions derived from a combination of variational principles and perturbation analysis. The dynamics of neutral gels and the equilibrium states of the ionic gels are analysed.
In the second example, we numerically study the adhesion-fragmentation dynamics of clusters of rigid, round particles subject to a homogeneous shear flow. At the macro level we describe the dynamics of the number density of these clusters. The microscale description includes (a) binding/unbinding of the bonds attached to the particle surface, (b) bond torsion, (c) the surface potential due to the ionic medium, and (d) the hydrodynamics of the shear flow. 

Probabilistic Meshless Methods for Bayesian Inverse Problems 15:10 Fri 5 Aug, 2016 :: Engineering South S112 :: Dr Chris Oates :: University of Technology Sydney
This talk deals with statistical inverse problems that involve partial differential equations (PDEs) with unknown parameters. Our goal is to account, in a rigorous way, for the impact of discretisation error that is introduced at each evaluation of the likelihood due to numerical solution of the PDE. In the context of meshless methods, the proposed, model-based approach to discretisation error encourages statistical inferences to be more conservative in the presence of significant solver error. In addition, (i) a principled learning-theoretic approach to minimise the impact of solver error is developed, and (ii) the challenge of nonlinear PDEs is considered. The method is applied to parameter inference problems in which non-negligible solver error must be accounted for in order to draw valid statistical conclusions. 

Measuring and mapping carbon dioxide from remote sensing satellite data 15:10 Fri 21 Oct, 2016 :: Napier G03 :: Prof Noel Cressie :: University of Wollongong
This talk is about environmental statistics for global remote sensing of atmospheric carbon dioxide, a leading greenhouse gas. An important compartment of the carbon cycle is atmospheric carbon dioxide (CO2), where it (and other gases) contributes to climate change through a greenhouse effect. There are a number of CO2 observational programs where measurements are made around the globe at a small number of ground-based locations at somewhat regular time intervals. In contrast, satellite-based programs are spatially global but give up some of the temporal richness. The most recent satellite launched to measure CO2 was NASA's Orbiting Carbon Observatory-2 (OCO-2), whose principal objective is to retrieve a geographical distribution of CO2 sources and sinks. OCO-2's measurement of column-averaged mole fraction, XCO2, is designed to achieve this through a data-assimilation procedure that is statistical at its basis. Consequently, uncertainty quantification is key, starting with the spectral radiances from an individual sounding through to borrowing of strength via spatial-statistical modelling. 

Fault-tolerant computation of hyperbolic PDEs with the sparse grid combination technique 15:10 Fri 28 Oct, 2016 :: Ingkarni Wardli 5.57 :: Dr Brendan Harding :: University of Adelaide
Computing solutions to high-dimensional problems is challenging because of the curse of dimensionality. The sparse grid combination technique allows one to significantly reduce the cost of computing solutions so that they become manageable on current supercomputers. However, as these supercomputers increase in size the rate of failure also increases. This poses a challenge for our computations. In this talk we look at the problem of computing solutions to hyperbolic partial differential equations with the combination technique in an environment where faults occur. A fault-tolerant generalisation of the combination technique will be presented, along with results that demonstrate its effectiveness. 
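The cost reduction behind the combination technique can be seen just by counting grid points. The 2D sketch below (illustrative, with boundary points included) counts the distinct nodes touched by the classical combination formula and compares them with the full tensor grid.

```python
def grid(ix, iy):
    """Nodes of an anisotropic tensor grid on [0,1]^2 with
    (2**ix + 1) by (2**iy + 1) points."""
    return {(i / 2**ix, j / 2**iy)
            for i in range(2**ix + 1) for j in range(2**iy + 1)}

def sparse_grid_nodes(n):
    """Distinct nodes used by the classical 2D combination technique at
    level n, i.e. the grids entering
    u_c = sum_{i+j=n} u_{i,j} - sum_{i+j=n-1} u_{i,j}."""
    pts = set()
    for s in (n, n - 1):
        for i in range(s + 1):
            pts |= grid(i, s - i)
    return pts

n = 8
n_sparse = len(sparse_grid_nodes(n))
n_full = (2**n + 1) ** 2
print(n_sparse, "combination-technique nodes vs", n_full, "full-grid nodes")
```

Each component grid is small enough to solve cheaply (and, in the fault-tolerant setting, to recompute or replace if the processor holding it fails), while the combination recovers much of the full-grid accuracy.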

Collective and aneural foraging in biological systems 15:10 Fri 3 Mar, 2017 :: Lower Napier LG14 :: Dr Jerome Buhl and Dr David Vogel :: The University of Adelaide
The field of collective behaviour uses concepts originally adapted from statistical physics to study how complex collective phenomena such as mass movement or swarm intelligence emerge from relatively simple interactions between individuals. Here we will focus on two applications of this framework. First, we will have a look at new insights into the evolution of sociality brought by combining models of nutrition and social interactions to explore phenomena such as collective foraging decisions, the emergence of social organisation, and social immunity. Second, we will look at the networks built by slime moulds in exploration and foraging contexts. 

Fast approximate inference for arbitrarily large statistical models via message passing 15:10 Fri 17 Mar, 2017 :: Engineering South S111 :: Prof Matt Wand :: University of Technology Sydney
We explain how the notion of message passing can be used to streamline the algebra and computer coding for fast approximate inference in large Bayesian statistical models. In particular, this approach is amenable to handling arbitrarily large models of particular types once a set of primitive operations is established. The approach is founded upon a message passing formulation of mean field variational Bayes that utilizes factor graph representations of statistical models. The notion of factor graph fragments is introduced and is shown to facilitate compartmentalization of the required algebra and coding. 

Graded K-theory and C*-algebras 11:10 Fri 12 May, 2017 :: Engineering North 218 :: Aidan Sims :: University of Wollongong
C*-algebras can be regarded, in a very natural way, as noncommutative algebras of continuous functions on topological spaces. The analogy is strong enough that topological K-theory in terms of formal differences of vector bundles has a direct analogue for C*-algebras. There is by now a substantial array of tools out there for computing C*-algebraic K-theory. However, when we want to model physical phenomena, like topological phases of matter, we need to take into account various physical symmetries, some of which are encoded by gradings of C*-algebras by the two-element group. Even the definition of graded C*-algebraic K-theory is not entirely settled, and there are relatively few computational tools out there. I will try to outline what a C*-algebra (and a graded C*-algebra) is, indicate what graded K-theory ought to look like, and discuss recent work with Alex Kumjian and David Pask linking this with the deep and powerful work of Kasparov, and using this to develop computational tools. 

The Markovian binary tree applied to demography and conservation biology 15:10 Fri 27 Oct, 2017 :: Ingkarni Wardli B17 :: Dr Sophie Hautphenne :: University of Melbourne
Markovian binary trees form a general and tractable class of continuous-time branching processes, which makes them well-suited for real-world applications. Thanks to their appealing probabilistic and computational features, these processes have proven to be an excellent modelling tool for applications in population biology. Typical performance measures of these models include the extinction probability of a population, the distribution of the population size at a given time, the total progeny size until extinction, and the asymptotic population composition. Besides giving an overview of the main performance measures and the techniques involved to compute them, we discuss recently developed statistical methods to estimate the model parameters, depending on the accuracy of the available data. We illustrate our results in human demography and in conservation biology. 
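The first performance measure listed, the extinction probability, illustrates why branching processes are computationally tractable: for a simple single-type branching process (a special case, not the Markovian binary tree itself) it is the smallest fixed point of the offspring probability generating function, found by plain iteration. The offspring distribution below is made up for illustration.

```python
def extinction_probability(offspring_pmf, tol=1e-12):
    """Smallest fixed point of the offspring probability generating function
    f(s) = sum_k p_k s^k, found by iterating q <- f(q) from q = 0
    (the standard branching-process construction)."""
    def f(s):
        return sum(p * s**k for k, p in enumerate(offspring_pmf))
    q = 0.0
    while True:
        q_new = f(q)
        if abs(q_new - q) < tol:
            return q_new
        q = q_new

# Offspring: 0 children w.p. 1/4, 1 w.p. 1/4, 2 w.p. 1/2 (mean 5/4 > 1).
# Extinction probability solves q = 1/4 + q/4 + q^2/2, giving q = 1/2.
print(extinction_probability([0.25, 0.25, 0.5]))   # 0.5
```

When the mean offspring number is at most one, the same iteration returns 1: extinction is certain.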

Computing trisections of 4-manifolds 13:10 Fri 23 Mar, 2018 :: Barr Smith South Polygon Lecture theatre :: Stephen Tillmann :: University of Sydney
Gay and Kirby recently generalised Heegaard splittings of 3-manifolds to trisections of 4-manifolds. A trisection describes a 4-dimensional manifold as a union of three 4-dimensional handlebodies. The complexity of the 4-manifold is captured in a collection of curves on a surface, which guide the gluing of the handlebodies. The minimal genus of such a surface is the trisection genus of the 4-manifold.
After defining trisections and giving key examples and applications, I will describe an algorithm to compute trisections of 4-manifolds using arbitrary triangulations as input. This results in the first explicit complexity bounds for the trisection genus of a 4-manifold in terms of the number of pentachora (4-simplices) in a triangulation. This is joint work with Mark Bell, Joel Hass and Hyam Rubinstein. I will also describe joint work with Jonathan Spreer that determines the trisection genus for each of the standard simply connected PL 4-manifolds. 

Quantifying language change 15:10 Fri 1 Jun, 2018 :: Horace Lamb 1022 :: A/Prof Eduardo Altmann :: University of Sydney
Mathematical methods to study natural language are increasingly important because of the ubiquity of textual data on the Internet. In this talk I will discuss mathematical models and statistical methods to quantify the variability of language, with a focus on two problems: (i) how has the vocabulary of languages changed over the last centuries? (ii) how do the languages of scientific disciplines relate to each other, and how have they evolved in the last decades? One of the main challenges of these analyses stems from universal properties of word frequencies, which show high temporal variability and are fat-tailed distributed. The latter feature dramatically affects the statistical properties of entropy-based estimators, which motivates us to compare vocabularies using a generalized Jensen-Shannon divergence (obtained from entropies of order alpha). 
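One common way to build such a generalized divergence (an illustration, not necessarily the exact estimator used in the talk) is to replace the Shannon entropy by a Tsallis/Havrda-Charvat entropy of order alpha, which recovers the ordinary Jensen-Shannon divergence as alpha approaches 1. The toy distributions below are made up.

```python
import numpy as np

def entropy_alpha(p, alpha=1):
    """Entropy of order alpha (Tsallis/Havrda-Charvat form);
    alpha -> 1 recovers the Shannon entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if alpha == 1:
        return float(-(p * np.log(p)).sum())
    return float((1.0 - (p**alpha).sum()) / (alpha - 1.0))

def jsd_alpha(p, q, alpha=1):
    """Generalized Jensen-Shannon divergence: entropy of the mixed
    vocabulary minus the mean entropy of the two vocabularies."""
    m = (np.asarray(p, dtype=float) + np.asarray(q, dtype=float)) / 2.0
    return entropy_alpha(m, alpha) - (entropy_alpha(p, alpha)
                                      + entropy_alpha(q, alpha)) / 2.0

# Two toy word-frequency distributions sharing one word out of three.
p = [0.5, 0.5, 0.0]
q = [0.0, 0.5, 0.5]
print(jsd_alpha(p, q))            # Shannon case: exactly (1/2) log 2
print(jsd_alpha(p, q, alpha=2))   # order-2 entropies give a different scale
```

Tuning alpha changes how heavily the rare, fat-tailed part of the vocabulary influences the comparison, which is precisely the estimator-sensitivity issue raised in the abstract.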

Quantifying language change 15:10 Fri 1 Jun, 2018 :: Napier 208 :: A/Prof Eduardo Altmann :: University of Sydney
Mathematical methods to study natural language are increasingly important because of the ubiquity of textual data on the Internet. In this talk I will discuss mathematical models and statistical methods to quantify the variability of language, with a focus on two problems: (i) how has the vocabulary of languages changed over the last centuries? (ii) how do the languages of scientific disciplines relate to each other, and how have they evolved in the last decades? One of the main challenges of these analyses stems from universal properties of word frequencies, which show high temporal variability and are fat-tailed distributed. The latter feature dramatically affects the statistical properties of entropy-based estimators, which motivates us to compare vocabularies using a generalized Jensen-Shannon divergence (obtained from entropies of order alpha). 

Projected Particle Filters 15:10 Fri 24 Aug, 2018 :: Lower Napier LG15 :: Dr John Maclean :: University of Adelaide
Scientific advances owe equally to models and data, and both will remain relevant and key to further understanding. Observations drive model development, and model development often drives data acquisition. It is therefore particularly prudent to have these two sides of the scientific coin work in concert. This is a mathematical and statistical question: how to combine the output of model investigations and observational data. The area dedicated to studying and developing the best approaches to this issue is called Data Assimilation (DA). Perhaps the most crucial modern-day application of DA is numerical weather prediction, but it is also used in GPS systems and studies of atmospheric conditions on other planets.
I will take the probabilistic or Bayesian approach to DA. At a particular time at which data are available, the question of data assimilation is how to approximate the posterior or analysis distribution, that is found by conditioning the "forecast distribution" on the data. A key method under this umbrella is the particle filter, that approximates the forecast and posterior distributions with an ensemble of weighted particles.
The talk will focus on a contribution to particle filtering made from a dynamical systems point of view. I will introduce a framework for Particle Filtering, PFAUS, in which only the components of data corresponding to the unstable and neutral modes of the forecast model are assimilated.
The particle filter is well suited to nonlinear forecast models and non-Gaussian forecast distributions, but would normally require exponentially more computational effort as the dimension of the DA problem increases. The PFAUS implementation is shown to correspond to assimilating observations of a lower dimension, equal to the number of Lyapunov exponents. The dimension of the observations is crucial to the computational cost of the particle filter, and this approach is a framework to drastically lower that cost while preserving as much relevant information as possible, in that the unstable and neutral modes correspond to the most uncertain model predictions.
Particle filters are an active area of research in both the DA and the statistical communities, and there are many competing algorithms. One nice feature of PFAUS is that it is not exactly an algorithm but rather a framework for filtering: any particle filter can be applied in the PFAUS framework. 
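For reference, the baseline bootstrap particle filter that frameworks like PFAUS build on can be sketched in a few lines. This is a toy scalar example with made-up parameters, not the PFAUS implementation: propagate particles through the forecast model, weight by the likelihood of the observation, estimate the posterior, resample.

```python
import numpy as np

def bootstrap_particle_filter(y, f, h, obs_std, n_particles, x0, seed=0):
    """Plain bootstrap particle filter: propagate particles through the
    forecast model f, weight by the Gaussian likelihood of each observation
    under observation operator h, record the posterior mean, resample."""
    rng = np.random.default_rng(seed)
    x = x0 + rng.normal(0.0, 1.0, n_particles)
    means = []
    for yt in y:
        x = f(x, rng)                                     # forecast step
        w = np.exp(-0.5 * ((yt - h(x)) / obs_std) ** 2)   # likelihood weights
        w /= w.sum()
        means.append(float((w * x).sum()))                # posterior mean
        x = rng.choice(x, size=n_particles, p=w)          # resample
    return np.array(means)

# Toy model: x_t = 0.9 x_{t-1} + N(0, 0.5^2), observed as y_t = x_t + N(0, 1).
rng = np.random.default_rng(1)
T = 200
truth = np.zeros(T)
for t in range(1, T):
    truth[t] = 0.9 * truth[t - 1] + rng.normal(0.0, 0.5)
obs = truth + rng.normal(0.0, 1.0, T)
est = bootstrap_particle_filter(obs,
                                lambda x, r: 0.9 * x + r.normal(0.0, 0.5, x.size),
                                lambda x: x, 1.0, 2000, 0.0)
print(np.mean((est - truth) ** 2), np.mean((obs - truth) ** 2))
```

In this scalar example the particle count is ample; the curse of dimensionality that PFAUS targets appears when the state and observation dimensions grow.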

Topological Data Analysis 15:10 Fri 31 Aug, 2018 :: Napier 208 :: Dr Vanessa Robins :: Australian National University
Topological Data Analysis has grown out of work focussed on deriving qualitative and yet quantifiable information about the shape of data. The underlying assumption is that knowledge of shape, the way the data are distributed, permits high-level reasoning and modelling of the processes that created this data. The 0th-order aspect of shape is the number of pieces: "connected components" to a topologist; "clustering" to a statistician. Higher-order topological aspects of shape are holes, quantified as "non-bounding cycles" in homology theory. These signal the existence of some type of constraint on the data-generating process.
Homology lends itself naturally to computer implementation, but its naive application is not robust to noise. This inspired the development of persistent homology: an algebraic topological tool that measures changes in the topology of a growing sequence of spaces (a filtration). Persistent homology provides invariants called barcodes or persistence diagrams, which are sets of intervals recording the birth and death parameter values of each homology class in the filtration. It captures information about the shape of data over a range of length scales, and enables the identification of "noisy" topological structure.
Statistical analysis of persistent homology has been challenging because the raw information (the persistence diagrams) is provided as sets of intervals rather than functions. Various approaches to converting persistence diagrams to functional forms have been developed recently, and have found application to data ranging from the distribution of galaxies, to porous materials, and cancer detection. 
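The 0th-order ("connected components") part of persistent homology is small enough to sketch directly: a union-find pass over a graph filtration produces the barcode. The point set below is made up for illustration.

```python
def persistence_0d(n_vertices, weighted_edges):
    """0-dimensional persistent homology of a graph filtration: every
    vertex is born at parameter 0 and each edge enters at its weight.
    Each merge of two components kills one, giving a (birth, death)
    interval; components that never merge get death = infinity."""
    parent = list(range(n_vertices))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    bars = []
    for w, u, v in sorted(weighted_edges):  # process edges by filtration value
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            bars.append((0.0, float(w)))    # a component dies at value w
    survivors = len({find(i) for i in range(n_vertices)})
    bars += [(0.0, float("inf"))] * survivors
    return sorted(bars, key=lambda b: b[1])

# Two tight clusters {0,1,2} and {3,4} bridged only at scale 5.0: the long
# interval (0, 5.0) is exactly the persistent signal of two-cluster structure,
# while the short intervals are the "noisy" topology.
edges = [(1.0, 0, 1), (1.2, 1, 2), (1.1, 3, 4), (5.0, 2, 3)]
print(persistence_0d(5, edges))
```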

Mathematical modelling of the emergence and spread of antimalarial drug resistance 15:10 Fri 14 Sep, 2018 :: Napier 208 :: Dr Jennifer Flegg :: University of Melbourne
Malaria parasites have repeatedly evolved resistance to antimalarial drugs, thwarting efforts to eliminate the disease and contributing to an increase in mortality. In this talk, I will introduce several statistical and mathematical models for monitoring the emergence and spread of antimalarial drug resistance. For example, results will be presented from Bayesian geostatistical models that have quantified the space-time trends in drug resistance in Africa and Southeast Asia. I will discuss how the results of these models have been used to update public health policy. 
News matching "Statistical computing" 
Usenet Conference Associate Professor Matt Roughan (Applied Mathematics) has been invited to Co-Chair the Association for Computing Machinery Usenet Internet Measurement Conference. Posted Mon 15 Jan 07. 

New Professor of Statistical Bioinformatics Associate Professor Patty Solomon will take up the Chair of Statistical Bioinformatics within the School of Mathematical Sciences effective from 29th of October, 2007. Posted Mon 29 Oct 07. 

ARC Grant successes The School of Mathematical Sciences has again had outstanding success in the ARC Discovery and Linkage Projects schemes.
Congratulations to the following staff for their success in the Discovery Project scheme:
Prof Nigel Bean, Dr Josh Ross, Prof Phil Pollett, Prof Peter Taylor, New methods for improving active adaptive management in biological systems, $255,000 over 3 years;
Dr Josh Ross, New methods for integrating population structure and stochasticity into models of disease dynamics, $248,000 over three years;
A/Prof Matt Roughan, Dr Walter Willinger, Internet traffic-matrix synthesis, $290,000 over three years;
Prof Patricia Solomon, A/Prof John Moran, Statistical methods for the analysis of critical care data, with application to the Australian and New Zealand Intensive Care Database, $310,000 over 3 years;
Prof Mathai Varghese, Prof Peter Bouwknegt, Supersymmetric quantum field theory, topology and duality, $375,000 over 3 years;
Prof Peter Taylor, Prof Nigel Bean, Dr Sophie Hautphenne, Dr Mark Fackrell, Dr Malgorzata O'Reilly, Prof Guy Latouche, Advanced matrixanalytic methods with applications, $600,000 over 3 years.
Congratulations to the following staff for their success in the Linkage Project scheme:
Prof Simon Beecham, Prof Lee White, A/Prof John Boland, Prof Phil Howlett, Dr Yvonne Stokes, Mr John Wells, Paving the way: an experimental approach to the mathematical modelling and design of permeable pavements, $370,000 over 3 years;
Dr Amie Albrecht, Prof Phil Howlett, Dr Andrew Metcalfe, Dr Peter Pudney, Prof Roderick Smith, Saving energy on trains - demonstration, evaluation, integration, $540,000 over 3 years
Posted Fri 29 Oct 10. 
Publications matching "Statistical computing"

Adaptively varying-coefficient spatiotemporal models Lu, Zudi; Steinskog, D; Tjostheim, D; Yao, Q, Journal of the Royal Statistical Society Series B (Statistical Methodology) 71 (859–880) 2009
Algorithms for the Laplace-Stieltjes transforms of first return times for stochastic fluid flows Bean, Nigel; O'Reilly, Malgorzata; Taylor, Peter, Methodology and Computing in Applied Probability 10 (381–408) 2008
Robust Optimal Portfolio Choice Under Markovian Regime-switching Model Elliott, Robert; Siu, T, Methodology and Computing in Applied Probability 11 (145–157) 2008
General tooth boundary conditions for equation-free modeling Roberts, Anthony John; Kevrekidis, I, SIAM Journal on Scientific Computing 29 (1495–1510) 2007
Statistical characteristics of rainstorms derived from weather radar images Qin, J; Leonard, Michael; Kuczera, George; Thyer, M; Lambert, Martin; Metcalfe, Andrew, 30th Hydrology and Water Resources Symposium, Launceston, Tasmania 04/12/06
Diversity sensitivity and multimodal Bayesian statistical analysis by relative entropy Leipnik, R; Pearce, Charles, The ANZIAM Journal 47 (277–287) 2005
Impinging laminar jets at moderate Reynolds numbers and separation distances Bergthorson, J; Sone, K; Mattner, Trent; Dimotakis, P; Goodwin, D; Meiron, D, Physical Review E (Statistical, Nonlinear, and Soft Matter Physics) 72 (066307-1–066307-12) 2005
Class-of-service mapping for QoS: A statistical signature-based approach to IP traffic classification Roughan, Matthew; Sen, S; Spatscheck, O; Duffield, N, ACM SIGCOMM 2004, Taormina, Sicily, Italy 25/10/04
Swift-Hohenberg model for magnetoconvection Cox, Stephen; Matthews, P; Pollicott, S, Physical Review E (Statistical, Nonlinear, and Soft Matter Physics) 69 (066314-1–066314-14) 2004
The Oxford dictionary of statistical terms Dodge, Y; Cox, D; Commenges, D; Solomon, Patricia; Wilson, S
Higher-order statistical moments of wave-induced response of offshore structures via efficient sampling techniques Najafian, G; Burrows, R; Tickell, R; Metcalfe, Andrew, International Offshore and Polar Engineering Conference 3 (465–470) 2002
Statistical modelling and prediction associated with the HIV/AIDS epidemic Solomon, Patricia; Wilson, Susan, The Mathematical Scientist 26 (87–102) 2001
Statistical analysis of medical data: New developments (book review) Solomon, Patricia, Biometrics 57 (327–328) 2001
Meta-analysis, overviews and publication bias Solomon, Patricia; Hutton, Jonathon, Statistical Methods in Medical Research 10 (245–250) 2001
A GUI for computing flows past general airfoils Simakov, Sergey; Dostovalova, Anna; Tuck, Ernest, The MATLAB User Conference 2000, Melbourne, Australia 09/11/00
Disease surveillance and data collection issues in epidemic modelling Solomon, Patricia; Isham, V, Statistical Methods in Medical Research 9 (259–277) 2000
Disease surveillance and intervention studies in developing countries Solomon, Patricia, Statistical Methods in Medical Research 9 (183–184) 2000
Advanced search options
You may be able to improve your search results by using the following syntax:
Query  Matches the following 

Asymptotic Equation  Anything with "Asymptotic" or "Equation". 
+Asymptotic +Equation  Anything with "Asymptotic" and "Equation". 
+Stokes -"Navier-Stokes"  Anything containing "Stokes" but not "Navier-Stokes". 
Dynam*  Anything containing "Dynamic", "Dynamical", "Dynamicist", etc. 
