The University of Adelaide

People matching "Statistical computing"

Associate Professor Gary Glonek
Associate Professor in Statistics


More about Gary Glonek...
Professor Patty Solomon
Professor of Statistical Bioinformatics


More about Patty Solomon...
Dr Simon Tuke
Lecturer in Statistics


More about Simon Tuke...

Courses matching "Statistical computing"

Advanced statistical inference

We begin with modern and classical statistical inference and cover cumulants, the cumulant generating function, natural exponential family models, minimal sufficient statistics, completeness, and generalised linear models. We then consider conditional and marginal inference including the concept of ancillary statistics, marginal likelihood and conditional inference. Chapter 2 is about model choice, in particular Akaike's Information Criterion (AIC), Network Information Criterion (NIC), and cross-validation (CV). We will explore the theoretical basis of AIC via model misspecification and the Kullback-Leibler distance. Chapter 3 is devoted to bootstrap methods for assessing statistical accuracy; we will focus on bootstrap estimation and confidence intervals, and consider the jackknife and its relationship to the bootstrap. Chapter 4 is on the analysis of missing data; we will study the different types of missingness and the Expectation-Maximisation (EM) algorithm in particular. Chapter 5 is about survival analysis, and we will cover the Kaplan-Meier estimator, parametric survival models, and the semi-parametric proportional hazards model.
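
The bootstrap chapter lends itself to a very small illustration. Below is a minimal sketch in base R (our own illustration with simulated data, not course material) of a nonparametric bootstrap percentile confidence interval for a median:

```r
# Nonparametric bootstrap: resample the data with replacement, recompute
# the statistic, and take quantiles of the bootstrap distribution.
set.seed(1)
x <- rexp(50, rate = 1)            # hypothetical sample
B <- 2000                          # number of bootstrap resamples
boot_medians <- replicate(B, median(sample(x, replace = TRUE)))
quantile(boot_medians, c(0.025, 0.975))   # 95% percentile interval
```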

More about this course...

Mathematical epidemiology: Stochastic models and their statistical calibration

Mathematical models are increasingly used to inform governmental policy-makers on issues that threaten human health or which have an adverse impact on the economy. It is this real-world success, combined with the wide variety of interesting mathematical problems that arise, that makes mathematical epidemiology one of the most exciting topics in applied mathematics. During the summer school, you will be introduced to mathematical epidemiology and some fundamental theory required for studying and parametrising stochastic models of infection dynamics, which will provide an ideal basis for addressing key research questions in this area; several such questions will be introduced and explored in this course. Topics covered are: an introduction to mathematical epidemiology; discrete-time and continuous-time discrete-state stochastic infection models; numerical methods for studying stochastic infection models (EXPOKIT, transforms and their inversion); methods for simulating stochastic infection models (the classical Gillespie algorithm, and more efficient exact and approximate algorithms); methods for parameterising stochastic infection models (frequentist approaches, Bayesian approaches, approximate Bayesian computation); and optimal observation of stochastic infection models.
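
As a rough illustration of the classical Gillespie algorithm listed in the topics, here is a minimal stochastic SIR simulation in R; the rates are made up for illustration and this is not the summer school's own code:

```r
# Gillespie simulation of a stochastic SIR epidemic: draw an exponential
# waiting time to the next event, then pick the event type in proportion
# to its rate.
set.seed(1)
S <- 99; I <- 1; R <- 0; t <- 0
beta <- 0.002; gamma <- 0.1               # illustrative rates
while (I > 0) {
  r_inf <- beta * S * I                   # infection rate
  r_rec <- gamma * I                      # recovery rate
  t <- t + rexp(1, rate = r_inf + r_rec)  # time to next event
  if (runif(1) < r_inf / (r_inf + r_rec)) {
    S <- S - 1; I <- I + 1                # infection event
  } else {
    I <- I - 1; R <- R + 1                # recovery event
  }
}
c(final_size = R, duration = t)
```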

More about this course...

Statistical Analysis and Modelling 1

This is a first course in Statistics for mathematically inclined students. It will address the key principles underlying commonly used statistical methods such as confidence intervals, hypothesis tests, inference for means and proportions, and linear regression. It will develop a deeper mathematical understanding of these ideas, many of which will be familiar from studies in secondary school. The application of basic and more advanced statistical methods will be illustrated on a range of problems from areas such as medicine, science, technology, government, commerce and manufacturing. The use of the statistical package SPSS will be developed through a sequence of computer practicals. Topics covered will include: basic probability and random variables, fundamental distributions, inference for means and proportions, comparison of independent and paired samples, simple linear regression, diagnostics and model checking, multiple linear regression, simple factorial models, models with factors and continuous predictors.

More about this course...

Statistical Modelling and Inference

Statistical methods are important to all areas that rely on data, including science, technology, government and commerce. Dealing with the complex problems that arise in practice requires a sound understanding of fundamental statistical principles together with a range of suitable modelling techniques. Computing with a high-level statistical package is also an essential element of modern statistical practice. This course provides an introduction to the principles of statistical inference and the development of linear statistical models, using the statistical package R. Topics covered are: point estimates, unbiasedness, mean-squared error, confidence intervals, tests of hypotheses, power calculations, derivation of one- and two-sample procedures; simple linear regression, regression diagnostics, prediction; linear models, analysis of variance (ANOVA), multiple regression, factorial experiments, analysis of covariance models, model building; likelihood-based methods for estimation and testing, goodness-of-fit tests; sample surveys, population means, totals and proportions, simple random samples, stratified random samples.
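
Several of the listed topics fit in a few lines of R. The sketch below (illustrative simulated data, not course material) shows point estimates, confidence intervals, tests and a diagnostic plot for a simple linear regression:

```r
# Simple linear regression in R: estimates, intervals, and diagnostics.
set.seed(2)
x <- runif(30, 0, 10)
y <- 1 + 0.5 * x + rnorm(30)   # hypothetical data with known truth
fit <- lm(y ~ x)
summary(fit)           # point estimates, standard errors, t-tests
confint(fit)           # 95% confidence intervals for the coefficients
plot(fit, which = 1)   # residuals-vs-fitted diagnostic plot
```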

More about this course...

Statistical Modelling III

One of the key requirements of an applied statistician is the ability to formulate appropriate statistical models and then apply them to data in order to answer the questions of interest. Most often, such models can be seen as relating a response variable to one or more explanatory variables. For example, in a medical experiment we may seek to evaluate a new treatment by relating patient outcome to treatment received while allowing for background variables such as age, sex and disease severity. In this course, a rigorous discussion of the linear model is given and various extensions are developed. There is a strong practical emphasis and the statistical package R is used extensively. Topics covered are: the linear model, least squares estimation, generalised least squares estimation, properties of estimators, the Gauss-Markov theorem; geometry of least squares, subspace formulation of linear models, orthogonal projections; regression models, factorial experiments, analysis of covariance and model formulae; regression diagnostics, residuals, influence diagnostics, transformations, Box-Cox models, model selection and model building strategies; models with complex error structure, split-plot experiments; logistic regression models.
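
The geometric view of least squares mentioned in the topics can be made concrete in a few lines of R: the fitted values are the orthogonal projection of the response onto the column space of the design matrix. This is our own illustrative sketch, not course material:

```r
# Least squares as orthogonal projection onto the column space of X.
set.seed(3)
X <- cbind(1, rnorm(20))                # design matrix with intercept
y <- drop(X %*% c(2, 1)) + rnorm(20)    # hypothetical response
H <- X %*% solve(t(X) %*% X) %*% t(X)   # hat (projection) matrix
fitted_proj <- drop(H %*% y)
all.equal(fitted_proj, unname(fitted(lm(y ~ X[, 2]))))  # TRUE
sum((y - fitted_proj) * fitted_proj)    # residuals orthogonal to fit (~0)
```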

More about this course...

Statistical Practice I

Statistical ideas and methods are essential tools in virtually all areas that rely on data to make decisions and reach conclusions. This includes diverse fields such as medicine, science, technology, government, commerce and manufacturing. In broad terms, statistics is about getting information from data. This includes both the important question of how to obtain suitable data for a given purpose and also how best to extract the information, often in the presence of random variability. This course provides an introduction to the contemporary application of statistics to a wide range of real world situations. It has a strong practical focus using the statistical package SPSS to analyse real data. Topics covered are: organisation, description and presentation of data; design of experiments and surveys; random variables, probability distributions, the binomial distribution and the normal distribution; statistical inference, tests of significance, confidence intervals; inference for means and proportions, one-sample tests, two independent samples, paired data, t-tests, contingency tables; analysis of variance; linear regression, least squares estimation, residuals and transformations, inference for regression coefficients, prediction.
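
The course teaches these methods in SPSS; for readers without SPSS, the same two-sample comparison can be sketched in R (hypothetical data, ours rather than the course's):

```r
# Two-sample comparison of means, as in the "two independent samples"
# and "paired data" topics.
set.seed(4)
control   <- rnorm(20, mean = 5.0)
treatment <- rnorm(20, mean = 5.8)
t.test(treatment, control)                 # Welch two-sample t-test
t.test(treatment, control, paired = TRUE)  # paired version, if matched
```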

More about this course...

Statistical Practice I (Life Sciences)

Statistical ideas and methods are essential tools in virtually all areas that rely on data to make decisions and reach conclusions. This includes diverse fields such as science, technology, government, commerce, manufacturing and the life sciences. In broad terms, statistics is about getting information from data. This includes both the important question of how to obtain suitable data for a given purpose and also how best to extract the information, often in the presence of random variability. This course provides an introduction to the contemporary application of statistics to a range of real world situations. It has a strong practical focus using the statistical package SPSS to analyse real data relevant to the life sciences. Topics covered are: organisation, description and presentation of data in the life sciences; design of experiments and surveys; random variables, probability distributions, the binomial distribution and the normal distribution; statistical inference, tests of significance, confidence intervals; inference for means and proportions, one-sample tests, two independent samples, paired data, t-tests, contingency tables; analysis of variance; linear regression, least squares estimation, residuals and transformations, inference for regression coefficients, prediction.

More about this course...

Statistical Practice I (Life Sciences) (Pre-Vet)

Statistical ideas and methods are essential tools in virtually all areas that rely on data to make decisions and reach conclusions. This includes diverse fields such as science, technology, government, commerce, manufacturing and the life sciences. In broad terms, statistics is about getting information from data. This includes both the important question of how to obtain suitable data for a given purpose and also how best to extract the information, often in the presence of random variability. This course provides an introduction to the contemporary application of statistics to a range of real world situations. It has a strong practical focus using the statistical package SPSS to analyse real data relevant to the life sciences. Topics covered are: organisation, description and presentation of data in the life sciences; design of experiments and surveys; random variables, probability distributions, the binomial distribution and the normal distribution; statistical inference, tests of significance, confidence intervals; inference for means and proportions, one-sample tests, two independent samples, paired data, t-tests, contingency tables; analysis of variance; linear regression, least squares estimation, residuals and transformations, inference for regression coefficients, prediction.

More about this course...

Events matching "Statistical computing"

Statistical convergence of sequences of complex numbers with application to Fourier series
15:10 Tue 27 Mar, 2007 :: G08 Mathematics Building University of Adelaide :: Prof. Ferenc Morics

The concept of statistical convergence was introduced by Henry Fast and Hugo Steinhaus in 1951. But in fact, it was Antoni Zygmund who first proved theorems on the statistical convergence of Fourier series, using the term "almost convergence". A sequence $\{x_k : k=1,2,\ldots\}$ of complex numbers is said to be statistically convergent to $\xi$ if for every $\varepsilon > 0$ we have $$\lim_{n\to \infty} n^{-1} |\{1\le k\le n: |x_k-\xi| > \varepsilon\}| = 0.$$ We present the basic properties of statistical convergence, and extend it to multiple sequences. We also discuss the convergence behaviour of Fourier series.
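
A quick numerical illustration of this definition (our own sketch, not from the talk): take $x_k = k$ on the perfect squares and $x_k = 0$ elsewhere, a sequence that converges statistically to $0$ without converging in the ordinary sense.

```r
# The exceptional set (perfect squares) has density sqrt(n)/n -> 0,
# so the proportion of "bad" indices vanishes as n grows.
n <- 10000
x <- numeric(n)
squares <- (1:floor(sqrt(n)))^2
x[squares] <- squares            # large values on a sparse set
eps <- 0.5
mean(abs(x) > eps)               # proportion of k <= n with |x_k| > eps
```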
Likelihood inference for a problem in particle physics
15:10 Fri 27 Jul, 2007 :: G04 Napier Building University of Adelaide :: Prof. Anthony Davison

The Large Hadron Collider (LHC), a particle accelerator located at CERN, near Geneva, is (currently!) expected to start operation in early 2008. It is located in an underground tunnel 27km in circumference, and when fully operational, will be the world's largest and highest energy particle accelerator. It is hoped that it will provide evidence for the existence of the Higgs boson, the last remaining particle of the so-called Standard Model of particle physics. The quantity of data that will be generated by the LHC is roughly equivalent to that of the European telecommunications network, but this will be boiled down to just a few numbers. After a brief introduction, this talk will outline elements of the statistical problem of detecting the presence of a particle, and then sketch how higher order likelihood asymptotics may be used for signal detection in this context. The work is joint with Nicola Sartori, of the Università Ca' Foscari, in Venice.
Statistical Critique of the Intergovernmental Panel on Climate Change's work on Climate Change.
18:00 Wed 17 Oct, 2007 :: Union Hall University of Adelaide :: Mr Dennis Trewin

Climate change is one of the most important issues facing us today. Many governments have introduced or are developing appropriate policy interventions to (a) reduce the growth of greenhouse gas emissions in order to mitigate future climate change, or (b) adapt to future climate change. This important work deserves a high-quality statistical database, but there are statistical shortcomings in the work of the Intergovernmental Panel on Climate Change (IPCC). There has been very little involvement of qualified statisticians in the very important work of the IPCC, which appears to be scientifically meritorious in most other ways. Mr Trewin will explain these shortcomings and outline his views on likely future climate change, taking into account the statistical deficiencies. His conclusions suggest climate change is still an important issue that needs to be addressed, but that the range of likely outcomes is a lot lower than has been suggested by the IPCC. This presentation will be based on an invited paper presented at the OECD World Forum.
Moderated Statistical Tests for Digital Gene Expression Technologies
15:10 Fri 19 Oct, 2007 :: G04 Napier Building University of Adelaide :: Dr Gordon Smyth :: Walter and Eliza Hall Institute of Medical Research in Melbourne, Australia

Digital gene expression (DGE) technologies measure gene expression by counting sequence tags. They are sensitive technologies for measuring gene expression on a genomic scale, without the need for prior knowledge of the genome sequence. As the cost of DNA sequencing decreases, the number of DGE datasets is expected to grow dramatically. Various tests of differential expression have been proposed for replicated DGE data using over-dispersed binomial or Poisson models for the counts, but none of these is usable when the number of replicates is very small. We develop tests using the negative binomial distribution to model overdispersion relative to the Poisson, and use conditional weighted likelihood to moderate the level of overdispersion across genes. A heuristic empirical Bayes algorithm is developed which is applicable to very general likelihood estimation contexts. Not only is our strategy applicable even with the smallest number of replicates, but it also proves to be more powerful than previous strategies when more replicates are available. The methodology is applicable to other counting technologies, such as proteomic spectral counts.
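
For a flavour of the overdispersion problem, here is a hedged sketch using MASS::glm.nb (our own toy example, not Dr Smyth's moderated methodology): compare Poisson and negative binomial fits to simulated overdispersed tag counts.

```r
# Overdispersed counts: a Poisson GLM understates the variance, while a
# negative binomial GLM models it explicitly.
library(MASS)
set.seed(5)
group  <- gl(2, 5)                          # two conditions, 5 replicates
mu     <- ifelse(group == 1, 20, 60)
counts <- rnbinom(10, mu = mu, size = 2)    # overdispersed tag counts
pois <- glm(counts ~ group, family = poisson)
nb   <- glm.nb(counts ~ group)
c(poisson_AIC = AIC(pois), negbin_AIC = AIC(nb))  # NB typically fits better
```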
Probabilistic models of human cognition
15:10 Fri 29 Aug, 2008 :: G03 Napier Building University of Adelaide :: Dr Daniel Navarro :: School of Psychology, University of Adelaide

Over the last 15 years a fairly substantial psychological literature has developed in which human reasoning and decision-making is viewed as the solution to a variety of statistical problems posed by the environments in which we operate. In this talk, I briefly outline the general approach to cognitive modelling that is adopted in this literature, which relies heavily on Bayesian statistics, and introduce a little of the current research in this field. In particular, I will discuss work by myself and others on the statistical basis of how people make simple inductive leaps and generalisations, and the links between these generalisations and how people acquire word meanings and learn new concepts. If time permits, the extensions of the work in which complex concepts may be characterised with the aid of nonparametric Bayesian tools such as Dirichlet processes will be briefly mentioned.
Oceanographic Research at the South Australian Research and Development Institute: opportunities for collaborative research
15:10 Fri 21 Nov, 2008 :: Napier G04 :: Associate Prof John Middleton :: South Australian Research and Development Institute

Increasing threats to S.A.'s fisheries and marine environment have underlined the need for soundly based research into the ocean circulation and ecosystems (phyto/zooplankton) of the shelf and gulfs. With the support of Marine Innovation SA, the Oceanography Program has, within two years, grown to include 6 FTEs and a budget of over $4.8M. The program currently leads two major research projects, both of which involve numerical and applied mathematical modelling of oceanic flow and ecosystems as well as statistical techniques for the analysis of data. The first is the implementation of the Southern Australian Integrated Marine Observing System (SAIMOS), which is providing data to understand the dynamics of shelf boundary currents, monitor for climate change and understand the phyto/zooplankton ecosystems that underpin SA's wild fisheries and aquaculture. SAIMOS involves the use of ship-based sampling, the deployment of underwater marine moorings, underwater gliders, HF Ocean RADAR, acoustic tracking of tagged fish and autonomous underwater vehicles.

The second major project involves measuring and modelling the ocean circulation and biological systems within Spencer Gulf and the impact on prawn larval dispersal and on the sustainability of existing and proposed aquaculture sites. The discussion will focus on opportunities for collaborative research with both faculty and students in this exciting growth area of S.A. science.

Statistical analysis for harmonized development of systemic organs in human fetuses
11:00 Thu 17 Sep, 2009 :: School Board Room :: Prof Kanta Naito :: Shimane University

The growth processes of human babies have been studied extensively, but many aspects of the development of the human fetus remain unclear. The aim of this research is to investigate the developing process of the systemic organs of human fetuses, based on a data set of measurements of fetuses' bodies and organs. Specifically, this talk is concerned with giving a mathematical understanding of the harmonised development of the organs of human fetuses. A method to evaluate such harmonies is proposed, using the maximal dilatation that appears in the theory of quasi-conformal mappings.
Stable commutator length
13:40 Fri 25 Sep, 2009 :: Napier 102 :: Prof Danny Calegari :: California Institute of Technology

Stable commutator length answers the question: "what is the simplest surface in a given space with prescribed boundary?" where "simplest" is interpreted in topological terms. This topological definition is complemented by several equivalent definitions - in group theory, as a measure of non-commutativity of a group; and in linear programming, as the solution of a certain linear optimization problem. On the topological side, scl is concerned with questions such as computing the genus of a knot, or finding the simplest 4-manifold that bounds a given 3-manifold. On the linear programming side, scl is measured in terms of certain functions called quasimorphisms, which arise from hyperbolic geometry (negative curvature) and symplectic geometry (causal structures). In these talks we will discuss how scl in free and surface groups is connected to such diverse phenomena as the existence of closed surface subgroups in graphs of groups, rigidity and discreteness of symplectic representations, bounding immersed curves on a surface by immersed subsurfaces, and the theory of multi-dimensional continued fractions and Klein polyhedra. Danny Calegari is the Richard Merkin Professor of Mathematics at the California Institute of Technology, and is one of the recipients of the 2009 Clay Research Award for his work in geometric topology and geometric group theory. He received a B.A. in 1994 from the University of Melbourne, and a Ph.D. in 2000 from the University of California, Berkeley under the joint supervision of Andrew Casson and William Thurston. From 2000 to 2002 he was Benjamin Peirce Assistant Professor at Harvard University, after which he joined the Caltech faculty; he became Richard Merkin Professor in 2007.
Contemporary frontiers in statistics
15:10 Mon 28 Sep, 2009 :: Badger Labs G31 Macbeth Lecture Theatre :: Prof. Peter Hall :: University of Melbourne

The availability of powerful computing equipment has had a dramatic impact on statistical methods and thinking, changing forever the way data are analysed. New data types, larger quantities of data, and new classes of research problem are all motivating new statistical methods. We shall give examples of each of these issues, and discuss the current and future directions of frontier problems in statistics.
Manifold destiny: a talk on water, fire and life
15:10 Fri 6 Nov, 2009 :: MacBeth Lecture Theatre :: Dr Sanjeeva Balasuriya :: University of Adelaide

Manifolds are important entities in dynamical systems, and organise space into regions in which different motions occur. For example, intersections between stable and unstable manifolds in discrete systems result in chaotic motion. This talk will focus on manifolds and their locations in continuous dynamical systems, and in particular on Melnikov's method and its adaptations for determining the effect of perturbations on manifolds. The relevance of such adaptations to a surprising range of applications will be shown, in addition to recent theoretical developments inspired by such problems. The applications addressed in this talk include understanding the motion of fluid near oceanic eddies and currents, optimising mixing in nano-fluidic devices in order to improve reactions, computing the speed of a flame front, and finding the spreading rate of bacterial colonies.
Exploratory experimentation and computation
15:10 Fri 16 Apr, 2010 :: Napier LG29 :: Prof Jonathan Borwein :: University of Newcastle

The mathematical research community is facing a great challenge to re-evaluate the role of proof in light of the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet. Add to that the enormous complexity of many modern capstone results such as the Poincaré conjecture, Fermat's last theorem, and the classification of finite simple groups. As the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished. I shall look at the philosophical context with examples and then offer five benchmarking examples of the opportunities and challenges we face.
The mathematics of theoretical inference in cognitive psychology
15:10 Fri 11 Jun, 2010 :: Napier LG24 :: Prof John Dunn :: University of Adelaide

The aim of psychology in general, and of cognitive psychology in particular, is to construct theoretical accounts of mental processes based on observed changes in performance on one or more cognitive tasks. The fundamental problem faced by the researcher is that these mental processes are not directly observable but must be inferred from changes in performance between different experimental conditions. This inference is further complicated by the fact that performance measures may only be monotonically related to the underlying psychological constructs. State-trace analysis provides an approach to this problem which has gained increasing interest in recent years. In this talk, I explain state-trace analysis and discuss the set of mathematical issues that flow from it. Principal among these are the challenges of statistical inference and an unexpected connection to the mathematics of oriented matroids.
Mathematica Seminar
15:10 Wed 28 Jul, 2010 :: Engineering Annex 314 :: Kim Schriefer :: Wolfram Research

The Mathematica Seminars 2010 offer an opportunity to experience the applicability, ease-of-use, as well as the advancements of Mathematica 7 in education and academic research. These seminars will highlight the latest directions in technical computing with Mathematica, and the impact this technology has across a wide range of academic fields, from maths, physics and biology to finance, economics and business. Those not yet familiar with Mathematica will gain an overview of the system and discover the breadth of applications it can address, while experts will get firsthand experience with recent advances in Mathematica like parallel computing, digital image processing, point-and-click palettes, built-in curated data, as well as courseware examples.
A spatial-temporal point process model for fine resolution multisite rainfall data from Roma, Italy
14:10 Thu 19 Aug, 2010 :: Napier G04 :: A/Prof Paul Cowpertwait :: Auckland University of Technology

A point process rainfall model is further developed that has storm origins occurring in space-time according to a Poisson process. Each storm origin has a random radius so that storms occur as circular regions in two-dimensional space, where the storm radii are taken to be independent exponential random variables. Storm origins are of random type z, where z follows a continuous probability distribution. Cell origins occur in a further spatial Poisson process and have arrival times that follow a Neyman-Scott point process. Cell origins have random radii so that cells form discs in two-dimensional space. Statistical properties up to third order are derived and used to fit the model to 10 min series taken from 23 sites across the Roma region, Italy. Distributional properties of the observed annual maxima are compared to equivalent values sampled from series that are simulated using the fitted model. The results indicate that the model will be of use in urban drainage projects for the Roma region.
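
The spatial ingredient of such a model can be sketched in a few lines of R (our own illustration with made-up parameter values, not the fitted Roma model): storm origins as a planar Poisson process with independent exponential radii.

```r
# Storm origins in a unit square: Poisson number of origins, uniform
# locations, exponential radii; then check which storms cover a site.
set.seed(6)
lambda <- 20                           # mean number of storm origins
n  <- rpois(1, lambda)
ox <- runif(n); oy <- runif(n)         # origins, uniform in [0,1]^2
radius <- rexp(n, rate = 10)           # independent exponential radii
hit <- sqrt((ox - 0.5)^2 + (oy - 0.5)^2) < radius
sum(hit)                               # storms covering the site (0.5, 0.5)
```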
Simultaneous confidence band and hypothesis test in generalised varying-coefficient models
15:05 Fri 10 Sep, 2010 :: Napier LG28 :: Prof Wenyang Zhang :: University of Bath

Generalised varying-coefficient models (GVC) are very important models. There is a considerable literature addressing these models; however, most of it is devoted to estimation procedures. In this talk, I will systematically investigate statistical inference for GVC, including confidence bands as well as hypothesis tests. I will show the asymptotic distribution of the maximum discrepancy between the estimated functional coefficient and the true functional coefficient, and I will compare different approaches to the construction of confidence bands and hypothesis tests. Finally, the proposed statistical inference methods are used to analyse data from China about contraceptive use there, which leads to some interesting findings.
Statistical physics and behavioral adaptation to Creation's main stimuli: sex and food
15:10 Fri 29 Oct, 2010 :: E10 B17 Suite 1 :: Prof Laurent Seuront :: Flinders University and South Australian Research and Development Institute

Animals typically search for food and mates, while avoiding predators. This is particularly critical for keystone organisms such as intertidal gastropods and copepods (i.e. millimetre-scale crustaceans), as they typically rely on non-visual senses for detecting, identifying and locating mates in their two- and three-dimensional environments. Here, using stochastic methods derived from the field of nonlinear physics, we provide new insights into the nature (i.e. innate vs. acquired) of the motion behaviour of gastropods and copepods, and demonstrate how changes in their behavioural properties can be used to identify the trade-offs between foraging for food or sex. The gastropod Littorina littorea hence moves according to fractional Brownian motion while foraging for food (in accordance with the fractal nature of food distributions), and switches to Brownian motion while foraging for sex. In contrast, the swimming behaviour of the copepod Temora longicornis belongs to the class of multifractal random walks (MRW; i.e. a form of anomalous diffusion), characterised by a nonlinear moment scaling function for distance versus time. This clearly differs from the traditional Brownian and fractional Brownian walks expected or previously detected in animal behaviour. The divergence between MRW and Lévy flights and walks is also discussed, and it is shown how copepod anomalous diffusion is enhanced by the presence and concentration of conspecific water-borne signals, dramatically increasing male-female encounter rates.
Change detection in rainfall time series for Perth, Western Australia
12:10 Mon 16 May, 2011 :: 5.57 Ingkarni Wardli :: Farah Mohd Isa :: University of Adelaide

There have been numerous reports that the rainfall in south Western Australia, particularly around Perth, has undergone a step-change decrease, which is typically attributed to climate change. Four statistical tests are used to assess the empirical evidence for this claim on time series from five meteorological stations, all of which exceed 50 years in length. The tests used in this study are: the CUSUM; Bayesian change point analysis; the consecutive t-test; and Hotelling's T²-statistic. Results from the multivariate Hotelling's T² analysis are compared with those from the three univariate analyses. The issue of multiple comparisons is discussed. A summary of the empirical evidence for the claimed step change in the Perth area is given.
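
Of the four tests, the CUSUM is the easiest to illustrate. Here is a minimal R sketch on simulated data standing in for the Perth series (illustrative only, not the talk's analysis):

```r
# CUSUM of deviations from the overall mean: a step change shows up as
# a pronounced V- or A-shape, with the turning point at the change.
set.seed(7)
rain <- c(rnorm(30, mean = 800, sd = 60),   # before the change
          rnorm(30, mean = 730, sd = 60))   # after a step decrease
S <- cumsum(rain - mean(rain))              # CUSUM series
plot(S, type = "l", xlab = "year index", ylab = "CUSUM")
which.max(abs(S))                           # candidate change point
```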
Statistical challenges in molecular phylogenetics
15:10 Fri 20 May, 2011 :: Mawson Lab G19 lecture theatre :: Dr Barbara Holland :: University of Tasmania

This talk will give an introduction to the ways that mathematics and statistics get used in the inference of evolutionary (phylogenetic) trees. Taking a model-based approach to estimating the relationships between species has proven to be enormously effective; however, some tricky statistical challenges remain. The increasingly plentiful amount of DNA sequence data is a boon, but it is also throwing a spotlight on some of the shortcomings of current best practice, particularly in how we (1) assess the reliability of our phylogenetic estimates, and (2) choose appropriate models. This talk will aim to give a general introduction to this area of research and will also highlight some results from two of my recent PhD students.
Statistical modelling in economic forecasting: semi-parametrically spatio-temporal approach
12:10 Mon 23 May, 2011 :: 5.57 Ingkarni Wardli :: Dawlah Alsulami :: University of Adelaide

How to model spatio-temporal variation in housing prices is an important and challenging problem, as it is of vital importance for both investors and policy makers to assess any movement in housing prices. In this seminar I will talk about a proposed model to estimate movements in housing prices and measure the associated risk more accurately.
Inference and optimal design for percolation and general random graph models (Part I)
09:30 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge

The problem of optimal arrangement of nodes of a random weighted graph is discussed in this workshop. The nodes of the graphs under study are fixed, but their edges are random and established according to a so-called edge-probability function. This function is assumed to depend on the weights attributed to the pairs of graph nodes (or distances between them) and a statistical parameter. It is the purpose of experimentation to make inference on the statistical parameter and thus to extract as much information about it as possible. We also distinguish between two different experimentation scenarios: progressive and instructive designs.

We adopt a utility-based Bayesian framework to tackle the optimal design problem for random graphs of this kind. Simulation-based optimisation methods, mainly Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We study the optimal design problem for inference based on partial observations of random graphs by employing a data augmentation technique. We prove that infinitely growing or diminishing node configurations asymptotically represent the worst node arrangements. We also obtain the exact solution to the optimal design problem for proximity (geometric) graphs, and a numerical solution for graphs with threshold edge-probability functions.

We consider inference and optimal design problems for finite clusters from bond percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both numerical and analytical results for these graphs. We introduce inner-outer plots by deleting some of the lattice nodes and show that the 'mostly populated' designs are not necessarily optimal in the case of incomplete observations under both progressive and instructive design scenarios. Some of the obtained results may generalise to other lattices.

Inference and optimal design for percolation and general random graph models (Part II)
10:50 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge

The problem of optimal arrangement of nodes of a random weighted graph is discussed in this workshop. The nodes of the graphs under study are fixed, but their edges are random and established according to a so-called edge-probability function. This function is assumed to depend on the weights attributed to the pairs of graph nodes (or distances between them) and a statistical parameter. It is the purpose of experimentation to make inference on the statistical parameter and thus to extract as much information about it as possible. We also distinguish between two different experimentation scenarios: progressive and instructive designs.

We adopt a utility-based Bayesian framework to tackle the optimal design problem for random graphs of this kind. Simulation-based optimisation methods, mainly Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We study the optimal design problem for inference based on partial observations of random graphs by employing a data augmentation technique. We prove that infinitely growing or diminishing node configurations asymptotically represent the worst node arrangements. We also obtain the exact solution to the optimal design problem for proximity (geometric) graphs, and a numerical solution for graphs with threshold edge-probability functions.

We consider inference and optimal design problems for finite clusters from bond percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both numerical and analytical results for these graphs. We introduce inner-outer plots by deleting some of the lattice nodes and show that the 'mostly populated' designs are not necessarily optimal in the case of incomplete observations under both progressive and instructive design scenarios. Some of the obtained results may generalise to other lattices.

Routing in equilibrium
15:10 Tue 21 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Timothy Griffin :: University of Cambridge

Some path problems cannot be modelled using semirings because the associated algebraic structure is not distributive. Rather than attempting to compute globally optimal paths with such structures, it may be sufficient in some cases to find locally optimal paths --- paths that represent a stable local equilibrium. For example, this is the type of routing system that has evolved to connect Internet Service Providers (ISPs) where link weights implement bilateral commercial relationships between them. Previous work has shown that routing equilibria can be computed for some non-distributive algebras using algorithms in the Bellman-Ford family. However, no polynomial time bound was known for such algorithms. In this talk, we show that routing equilibria can be computed using Dijkstra's algorithm for one class of non-distributive structures. This provides the first polynomial time algorithm for computing locally optimal solutions to path problems.
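
For reference, here is a minimal R sketch of the classical min-plus Dijkstra computation that the talk generalises (our own illustration; the non-distributive structures discussed in the talk replace + and min with other operations):

```r
# Dijkstra's algorithm on a small weighted digraph stored as an
# adjacency matrix, with Inf meaning "no edge".
dijkstra <- function(w, source) {
  n <- nrow(w)
  dist <- rep(Inf, n); dist[source] <- 0
  visited <- rep(FALSE, n)
  for (i in 1:n) {
    u <- which.min(ifelse(visited, Inf, dist))  # closest unvisited node
    visited[u] <- TRUE
    for (v in which(w[u, ] < Inf)) {            # relax outgoing edges
      if (dist[u] + w[u, v] < dist[v]) dist[v] <- dist[u] + w[u, v]
    }
  }
  dist
}
w <- matrix(Inf, 4, 4)                          # toy 4-node graph
w[1, 2] <- 1; w[2, 3] <- 2; w[1, 3] <- 5; w[3, 4] <- 1
dijkstra(w, 1)                                  # distances from node 1
```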
Quantitative proteomics: data analysis and statistical challenges
10:10 Thu 30 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Peter Hoffmann :: Adelaide Proteomics Centre

Object oriented data analysis
14:10 Thu 30 Jun, 2011 :: 7.15 Ingkarni Wardli :: Prof Steve Marron :: The University of North Carolina at Chapel Hill

Object Oriented Data Analysis is the statistical analysis of populations of complex objects. In the special case of Functional Data Analysis, these data objects are curves, where standard Euclidean approaches, such as principal components analysis, have been very successful. Recent developments in medical image analysis motivate the statistical analysis of populations of more complex data objects which are elements of mildly non-Euclidean spaces, such as Lie Groups and Symmetric Spaces, or of strongly non-Euclidean spaces, such as spaces of tree-structured data objects. These new contexts for Object Oriented Data Analysis create several potentially large new interfaces between mathematics and statistics. Even in situations where Euclidean analysis makes sense, there are statistical challenges because of the High Dimension Low Sample Size problem, which motivates a new type of asymptotics leading to non-standard mathematical statistics.
Object oriented data analysis of tree-structured data objects
15:10 Fri 1 Jul, 2011 :: 7.15 Ingkarni Wardli :: Prof Steve Marron :: The University of North Carolina at Chapel Hill

The field of Object Oriented Data Analysis has made a lot of progress on the statistical analysis of the variation in populations of complex objects. A particularly challenging example of this type is populations of tree-structured objects. Deep challenges arise, which involve a marriage of ideas from statistics, geometry, and numerical analysis, because the space of trees is strongly non-Euclidean in nature. These challenges, together with three completely different approaches to addressing them, are illustrated using a real data example, where each data point is the tree of blood arteries in one person's brain.
Statistical analysis of metagenomic data from the microbial community involved in industrial bioleaching
12:10 Mon 19 Sep, 2011 :: 5.57 Ingkarni Wardli :: Ms Susana Soto-Rojo :: University of Adelaide

In the last two decades heap bioleaching has become established as a successful commercial option for recovering copper from low-grade secondary sulfide ores. Genetics-based approaches have recently been employed in the task of characterizing mineral processing bacteria. Data analysis is a key issue and thus the implementation of adequate mathematical and statistical tools is of fundamental importance to draw reliable conclusions. In this talk I will give a recount of two specific problems that we have been working on. The first regarding experimental design and the latter on modeling composition and activity of the microbial consortium.
Statistical analysis of school-based student performance data
12:10 Mon 10 Oct, 2011 :: 5.57 Ingkarni Wardli :: Ms Jessica Tan :: University of Adelaide

Join me in the journey of being a statistician for 15 minutes of your day (if you are not already one) and experience the task of data cleaning without having to get your own hands dirty. Most of you may have sat the Basic Skills Tests when at school or know someone who currently has to do the NAPLAN (National Assessment Program - Literacy and Numeracy) tests. Tests like these assess student progress and can be used to accurately measure school performance. In trying to answer the research question: "what conclusions about student progress and school performance can be drawn from NAPLAN data or data of a similar nature, using mathematical and statistical modelling and analysis techniques?", I have uncovered some interesting results about the data in my initial data analysis which I shall explain in this talk.
Statistical modelling for some problems in bioinformatics
11:10 Fri 14 Oct, 2011 :: B.17 Ingkarni Wardli :: Professor Geoff McLachlan :: The University of Queensland

In this talk we consider some statistical analyses of data arising in bioinformatics. The problems include the detection of differential expression in microarray gene-expression data, the clustering of time-course gene-expression data and, lastly, the analysis of modern-day cytometric data. Extensions are considered to the procedures proposed for these three problems in McLachlan et al. (Bioinformatics, 2006), Ng et al. (Bioinformatics, 2006), and Pyne et al. (PNAS, 2009), respectively. The latter references are available at http://www.maths.uq.edu.au/~gjm/.
Likelihood-free Bayesian inference: modelling drug resistance in Mycobacterium tuberculosis
15:10 Fri 21 Oct, 2011 :: 7.15 Ingkarni Wardli :: Dr Scott Sisson :: University of New South Wales

A central pillar of Bayesian statistical inference is Monte Carlo integration, which is based on obtaining random samples from the posterior distribution. There are a number of standard ways to obtain these samples, provided that the likelihood function can be numerically evaluated. In the last 10 years, there has been a substantial push to develop methods that permit Bayesian inference in the presence of computationally intractable likelihood functions. These methods, termed "likelihood-free" or approximate Bayesian computation (ABC), are now being applied extensively across many disciplines. In this talk, I'll present a brief, non-technical overview of the ideas behind likelihood-free methods. I'll motivate and illustrate these ideas through an analysis of the epidemiological fitness cost of drug resistance in Mycobacterium tuberculosis.
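
The core idea of ABC fits in a few lines. Here is a toy rejection sampler in R (our own sketch, not the talk's analysis): replace likelihood evaluation with simulation, and accept parameter draws whose simulated data fall close to the observations.

```r
# ABC rejection sampling for a binomial proportion: no likelihood is
# ever evaluated, only simulated.
set.seed(8)
obs   <- 37                                # observed successes in 100 trials
theta <- runif(50000)                      # draws from a uniform prior
sim   <- rbinom(50000, size = 100, prob = theta)
posterior <- theta[abs(sim - obs) <= 2]    # accept if simulation is close
quantile(posterior, c(0.025, 0.5, 0.975))  # approximate posterior summary
```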
Financial risk measures - the theory and applications of backward stochastic difference/differential equations with respect to the single jump process
12:10 Mon 26 Mar, 2012 :: 5.57 Ingkarni Wardli :: Mr Bin Shen :: University of Adelaide

This is my PhD thesis, submitted one month ago. Chapter 1 introduces the background of the research fields; each subsequent chapter is a published or accepted paper. Chapter 2, to appear in Methodology and Computing in Applied Probability, establishes the theory of backward stochastic difference equations with respect to the single jump process in discrete time. Chapter 3, published in Stochastic Analysis and Applications, establishes the theory of backward stochastic differential equations with respect to the single jump process in continuous time. Chapters 2 and 3 make up Part I, Theory. Chapter 4, published in Expert Systems With Applications, gives some examples of how to measure financial risks using the theory established in Chapter 2. Chapter 5, accepted by the Journal of Applied Probability, considers the question of an optimal transaction between two investors to minimise their risks; it is an application of the theory established in Chapter 3. Chapters 4 and 5 make up Part II, Applications.
Change detection in rainfall times series for Perth, Western Australia
12:10 Mon 14 May, 2012 :: 5.57 Ingkarni Wardli :: Ms Farah Mohd Isa :: University of Adelaide

There have been numerous reports that the rainfall in south Western Australia, particularly around Perth, has undergone a step-change decrease, which is typically attributed to climate change. Four statistical tests are used to assess the empirical evidence for this claim on time series from five meteorological stations, all of which exceed 50 years in length. The tests used in this study are: the CUSUM; Bayesian change point analysis; the consecutive t-test; and Hotelling's T²-statistic. Results from the multivariate Hotelling's T² analysis are compared with those from the three univariate analyses. The issue of multiple comparisons is discussed. A summary of the empirical evidence for the claimed step change in the Perth area is given.
Evaluation and comparison of the performance of Australian and New Zealand intensive care units
14:10 Fri 25 May, 2012 :: 7.15 Ingkarni Wardli :: Dr Jessica Kasza :: The University of Adelaide

Recently, the Australian Government has emphasised the need for monitoring and comparing the performance of Australian hospitals. Evaluating the performance of intensive care units (ICUs) is of particular importance, given that the most severe cases are treated in these units. Indeed, ICU performance can be thought of as a proxy for the overall performance of a hospital. We compare the performance of the ICUs contributing to the Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database, the largest of its kind in the world, and identify those ICUs with unusual performance. It is well-known that there are many statistical issues that must be accounted for in the evaluation of healthcare provider performance. Indicators of performance must be appropriately selected and estimated, investigators must adequately adjust for casemix, statistical variation must be fully accounted for, and adjustment for multiple comparisons must be made. Our basis for dealing with these issues is the estimation of a hierarchical logistic model for the in-hospital death of each patient, with patients clustered within ICUs. Both patient- and ICU-level covariates are adjusted for, with a random intercept and random coefficient for the APACHE III severity score. Given that we expect most ICUs to have similar performance after adjustment for these covariates, we follow Ohlssen et al., JRSS A (2007), and estimate a null model that we expect the majority of ICUs to follow. This methodology allows us to rigorously account for the aforementioned statistical issues, and accurately identify those ICUs contributing to the ANZICS database that have comparatively unusual performance. This is joint work with Prof. Patty Solomon and Assoc. Prof. John Moran.
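
A hedged sketch of the kind of hierarchical model described, using the lme4 package with simulated data and hypothetical variable names (the actual ANZICS analysis adjusts for many more covariates and follows Ohlssen et al.):

```r
# Hierarchical logistic regression: patients clustered within ICUs,
# with an ICU-level random intercept.
library(lme4)
set.seed(11)
icu    <- gl(20, 50)                       # 20 hypothetical ICUs, 50 patients each
apache <- rnorm(1000, 50, 15)              # severity score (illustrative)
u      <- rnorm(20, 0, 0.5)[icu]           # ICU-level random effect
death  <- rbinom(1000, 1, plogis(-3 + 0.04 * apache + u))
fit <- glmer(death ~ apache + (1 | icu), family = binomial)
summary(fit)
# The ANZICS model also has a random APACHE III coefficient per ICU,
# i.e. a (1 + apache | icu) term.
```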
A brief introduction to Support Vector Machines
12:30 Mon 4 Jun, 2012 :: 5.57 Ingkarni Wardli :: Mr Tyman Stanford :: University of Adelaide

Support Vector Machines (SVMs) are used in a variety of contexts for a range of purposes including regression, feature selection and classification. To convey the basic principles of SVMs, this presentation will focus on the application of SVMs to classification. Classification (or discrimination), in a statistical sense, is supervised model creation for the purpose of assigning future observations to a group or class. An example might be assigning healthy or diseased labels to patients based on p characteristics obtained from a blood sample. While SVMs are widely used, they are most successful when the data have one or more of the following properties: the data are not consistent with a standard probability distribution; the number of observations, n, used to create the model is less than the number of predictive features, p (the so-called small-n, big-p problem); or the decision boundary between the classes is likely to be non-linear in the feature space. I will present a short overview of how SVMs are constructed, keeping in mind their purpose. As this presentation is part of a double post-grad seminar, I will keep it to a maximum of 15 minutes.
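
As a concrete illustration (our own sketch with the e1071 package and the built-in iris data, not the speaker's example):

```r
# Train an SVM classifier on a random subset and assess it on the rest.
library(e1071)
set.seed(12)
idx  <- sample(nrow(iris), 100)                       # training rows
fit  <- svm(Species ~ ., data = iris[idx, ], kernel = "radial")
pred <- predict(fit, iris[-idx, ])
table(pred, iris$Species[-idx])   # confusion matrix on held-out data
```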
Star Wars Vs The Lord of the Rings: A Survival Analysis
12:10 Mon 27 Aug, 2012 :: B.21 Ingkarni Wardli :: Mr Christopher Davies :: University of Adelaide

Ever wondered whether you are more likely to die in the Galactic Empire or Middle Earth? Well this is the postgraduate seminar for you! I'll be attempting to answer this question using survival analysis, the statistical method of choice for investigating time to event data. Spoiler Warning: This talk will contain references to the deaths of characters in the above movie sagas.
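
The workhorse here is the Kaplan-Meier estimator. A hedged sketch in R with the survival package, using a simulated stand-in for the movie death data:

```r
# Kaplan-Meier curves by saga, plus a log-rank test for a difference.
library(survival)
set.seed(10)
deaths <- data.frame(                       # hypothetical data
  time   = rexp(40, rate = 0.5),            # time until death (or film end)
  status = rbinom(40, 1, 0.8),              # 1 = died, 0 = censored
  saga   = rep(c("StarWars", "LotR"), each = 20)
)
fit <- survfit(Surv(time, status) ~ saga, data = deaths)
plot(fit, lty = 1:2, xlab = "screen time", ylab = "survival probability")
survdiff(Surv(time, status) ~ saga, data = deaths)   # log-rank test
```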
Principal Component Analysis (PCA)
12:30 Mon 3 Sep, 2012 :: B.21 Ingkarni Wardli :: Mr Lyron Winderbaum :: University of Adelaide

Principal Component Analysis (PCA) has become something of a buzzword recently in a number of disciplines, including gene expression analysis and facial recognition. It is a classical, and fundamentally simple, concept that has been around since the early 1900s; its recent popularity is largely due to the need for dimension-reduction techniques in analysing the high-dimensional data that have become more common in the last decade, and to the availability of the computing power to implement them. I will explain the concept, prove a result, and give a couple of examples. The talk should be accessible to all disciplines as it (should?) only assume first-year linear algebra, the concept of a random variable, and covariance.
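
For the curious, PCA is two lines in base R (our own illustration on the built-in iris measurements):

```r
# Principal components of the four iris measurements, standardised.
pc <- prcomp(iris[, 1:4], scale. = TRUE)
summary(pc)          # proportion of variance explained per component
head(pc$x[, 1:2])    # data projected onto the first two components
```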
Optimal Experimental Design: What Is It?
12:10 Mon 15 Oct, 2012 :: B.21 Ingkarni Wardli :: Mr David Price :: University of Adelaide

Optimal designs are a class of experimental designs that are optimal with respect to some statistical criterion. That answers the question, right? But what do I mean by 'optimal', and which 'statistical criterion' should you use? In this talk I will answer all these questions, and provide an overly simple example to demonstrate how optimal design works. I will then give a brief explanation of how I will use this methodology, and what chickens have to do with it.
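
A toy illustration of one common criterion (D-optimality; our own sketch, not the talk's example): for simple linear regression on [-1, 1], a design maximising det(X'X) puts its points at the extremes.

```r
# D-optimality compares designs by the determinant of the information
# matrix X'X; bigger is better.
d_crit <- function(x) det(crossprod(cbind(1, x)))
d_crit(c(-1, -1, 1, 1))       # points at the extremes: det = 16
d_crit(c(-0.5, 0, 0, 0.5))    # points bunched in the middle: det = 2
```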
Numerical Free Probability: Computing Eigenvalue Distributions of Algebraic Manipulations of Random Matrices
15:10 Fri 2 Nov, 2012 :: B.20 Ingkarni Wardli :: Dr Sheehan Olver :: The University of Sydney

Suppose that the global eigenvalue distributions of two large random matrices A and B are known. It is a remarkable fact that, generically, the eigenvalue distributions of A + B and (if A and B are positive definite) A*B are uniquely determined from only the eigenvalue distributions of A and B; i.e., no information about eigenvectors is required. These operations on eigenvalue distributions are described by free probability theory. We construct a numerical toolbox that can efficiently and reliably calculate these operations with spectral accuracy, by exploiting the complex analytical framework that underlies free probability theory.
What are fusion categories?
12:10 Fri 6 Sep, 2013 :: Ingkarni Wardli B19 :: Dr Scott Morrison :: Australian National University

Fusion categories are a common generalization of finite groups and quantum groups at roots of unity. I'll explain a little of their structure, mention their applications (to topological field theory and quantum computing), and then explore the ways in which they are in general similar to, or different from, the 'classical' cases. We've only just started exploring, and don't yet know what the exotic examples we've discovered signify about the landscape ahead.
Random Wanderings on a Sphere...
11:10 Tue 17 Sep, 2013 :: Ingkarni Wardli Level 5 Room 5.57 :: A/Prof Robb Muirhead :: University of Adelaide

This will be a short talk (about 30 minutes) about the following problem. (Even if I tell you all I know about it, it won't take very long!) Imagine the earth is a unit sphere in 3 dimensions. You're standing at a fixed point, which we may as well take to be the North Pole. Suddenly you get moved to another point on the sphere by a random (uniform) orthogonal transformation. Where are you now? You're not at a point which is uniformly distributed on the surface of the sphere (so, since most of the earth's surface is water, you're probably drowning). But then you get moved again by the same orthogonal transformation. Where are you now? And what happens to your location if this happens repeatedly? I have only a partial answer to this question, for 2 and 3 transformations. (There's nothing special about 3 dimensions here--results hold for all dimensions which are at least 3.) I don't know of any statistical application for this! This work was motivated by a talk I heard, given by Tom Marzetta (Bell Labs) at a conference at MIT. Although I know virtually nothing about signal processing, I gather Marzetta was trying to encode signals using powers of random orthogonal matrices. After carrying out simulations, I think he decided it wasn't a good idea.
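
A hedged numerical sketch of the setup in R (using the QR decomposition of a Gaussian matrix to generate an approximately uniform random orthogonal matrix; our own illustration, not from the talk):

```r
# Repeatedly apply the same random orthogonal matrix to the North Pole
# and track where we land.
set.seed(9)
Q <- qr.Q(qr(matrix(rnorm(9), 3, 3)))   # approximately Haar orthogonal
x <- c(0, 0, 1)                         # the North Pole
positions <- sapply(1:5, function(k) {
  x <<- Q %*% x                         # move by the same Q each time
  x
})
round(positions, 3)          # columns: location after k applications
colSums(positions^2)         # all 1: we stay on the unit sphere
```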
A mathematician walks into a bar.....
12:10 Mon 30 Sep, 2013 :: B.19 Ingkarni Wardli :: Ben Rohrlach :: University of Adelaide

Man is by his very nature inquisitive. Our need to know has been the reason we've always evolved as a species. From discovering fire, to exploring the galaxy with those Vulcan guys in that documentary I saw, knowing the answer to a question has always driven humankind. Clearly then, I had to ask something. Something that by its very nature is a thing. A thing that, specifically, I had to know. That thing I had to know was this: do mathematicians get stupider the more they drink? Is this effect more pronounced than for normal (Gaussian) people? At the quiz night that AUMS just ran I managed to talk two tables into letting me record some key drinking statistics. I'll be using those statistics to introduce some different statistical tests commonly seen in most analyses you'll see in other fields. Oh, and I'll answer those questions I mentioned earlier too, hopefully. Let's do this thing.
Stochastic models of evolution: Trees and beyond
15:10 Fri 16 May, 2014 :: B.18 Ingkarni Wardli :: Dr Barbara Holland :: The University of Tasmania

In the first part of the talk I will give a general introduction to phylogenetics, and discuss some of the mathematical and statistical issues that arise in trying to infer evolutionary trees. In particular, I will discuss how we model the evolution of DNA along a phylogenetic tree using a continuous time Markov process. In the second part of the talk I will discuss how to express the two-state continuous-time Markov model on phylogenetic trees in such a way that allows its extension to more general models. In this framework we can model convergence of species as well as divergence (speciation). I will discuss the identifiability (or otherwise) of the models that arise in some simple cases. Use of a statistical framework means that we can use established techniques such as the AIC or likelihood ratio tests to decide if datasets show evidence of convergent evolution.
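
For the two-state model, the transition probabilities P(t) = exp(Qt) have a well-known closed form. A short R sketch with illustrative rates (our own, not the talk's material):

```r
# Two-state continuous-time Markov chain with rate a (state 1 -> 2)
# and rate b (state 2 -> 1); P(t) in closed form.
a <- 0.3; b <- 0.1                      # illustrative substitution rates
P <- function(t) {
  s <- a + b; e <- exp(-s * t)
  matrix(c(b/s + a/s * e, a/s - a/s * e,
           b/s - b/s * e, a/s + b/s * e),
         nrow = 2, byrow = TRUE)
}
P(1.0)     # substitution probabilities along a branch of length 1
P(1e6)     # rows approach the stationary distribution (b, a)/(a + b)
```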
Computing with groups
15:10 Fri 30 May, 2014 :: B.21 Ingkarni Wardli :: Dr Heiko Dietrich :: Monash University

Groups are algebraic structures which show up in many branches of mathematics and other areas of science; Computational Group Theory is on the cutting edge of pure research in group theory and its interplay with computational methods. In this talk, we consider a practical aspect of Computational Group Theory: how to represent a group in a computer, and how to work with such a description efficiently. We will first recall some well-established methods for permutation groups; we will then discuss some recent progress for matrix groups.
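
One concrete answer to "how do we represent a group in a computer" for permutation groups (a minimal sketch of our own, not from the talk): store permutations as integer vectors and compose by indexing.

```r
# Permutations of {1,...,4} as integer vectors: p[i] is the image of i.
p <- c(2, 3, 1, 4)               # the 3-cycle 1 -> 2 -> 3 -> 1
q <- c(1, 2, 4, 3)               # the transposition swapping 3 and 4
compose <- function(p, q) p[q]   # apply q first, then p
compose(p, q)
# Orbit of the point 1 under repeated application of p:
orbit <- 1
repeat {
  nxt <- p[tail(orbit, 1)]
  if (nxt == 1) break
  orbit <- c(orbit, nxt)
}
orbit                            # the cycle of p containing 1
```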
Fast computation of eigenvalues and eigenfunctions on bounded plane domains
15:10 Fri 1 Aug, 2014 :: B.18 Ingkarni Wardli :: Professor Andrew Hassell :: Australian National University

I will describe a new method for numerically computing eigenfunctions and eigenvalues on certain plane domains, derived from the so-called "scaling method" of Vergini and Saraceno. It is based on properties of the Dirichlet-to-Neumann map on the domain, which relates a function f on the boundary of the domain to the normal derivative (at the boundary) of the eigenfunction with boundary data f. This is a topic of independent interest in pure mathematics. In my talk I will try to emphasize the interplay between theory and applications, which is very rich in this situation. This is joint work with numerical analyst Alex Barnett (Dartmouth).
Frequentist vs. Bayesian.
12:10 Mon 18 Aug, 2014 :: B.19 Ingkarni Wardli :: David Price :: University of Adelaide

Abstract: There are two frameworks in which we can do statistical analyses. Choosing one framework over the other can be* as controversial as choosing between team Jacob and... that other guy. In this talk, I aim to give a very very simple explanation of the main difference between frequentist and Bayesian methods. I'll probably flip a coin and show you a video too. * to people who really care.
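
Since a coin will allegedly be flipped, here is the promised difference in miniature (a toy R sketch of our own): the same 7-heads-in-10 data summarised by a frequentist confidence interval and by a Bayesian credible interval under a flat prior.

```r
# Frequentist vs Bayesian for a coin: 7 heads in 10 flips.
heads <- 7; n <- 10
# Frequentist: exact 95% confidence interval for the probability of heads
binom.test(heads, n)$conf.int
# Bayesian: flat prior gives a Beta(1 + heads, 1 + n - heads) posterior
qbeta(c(0.025, 0.975), 1 + heads, 1 + n - heads)  # 95% credible interval
```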
Testing Statistical Association between Genetic Pathways and Disease Susceptibility
12:10 Mon 1 Sep, 2014 :: B.19 Ingkarni Wardli :: Andy Pfieffer :: University of Adelaide

A major research area is the identification of genetic pathways associated with various diseases. However, a detailed comparison of methods that have been designed to ascertain the association between pathways and diseases has not been performed. I will give the necessary biological background behind Genome-Wide Association Studies (GWAS), and explain the shortfalls in traditional GWAS methodologies. I will then explore various methods that use information about genetic pathways in GWAS, and explain the challenges in comparing these methods.
Modelling segregation distortion in multi-parent crosses
15:00 Mon 17 Nov, 2014 :: 5.57 Ingkarni Wardli :: Rohan Shah (joint work with B. Emma Huang and Colin R. Cavanagh) :: The University of Queensland

Construction of high-density genetic maps has been made feasible by low-cost high-throughput genotyping technology; however, the process is still complicated by biological, statistical and computational issues. A major challenge is the presence of segregation distortion, which can be caused by selection, difference in fitness, or suppression of recombination due to introgressed segments from other species. Alien introgressions are common in major crop species, where they have often been used to introduce beneficial genes from wild relatives. Segregation distortion causes problems at many stages of the map construction process, including assignment to linkage groups and estimation of recombination fractions. This can result in incorrect ordering and estimation of map distances. While discarding markers will improve the resulting map, it may result in the loss of genomic regions under selection or containing beneficial genes (in the case of introgression). To correct for segregation distortion we model it explicitly in the estimation of recombination fractions. Previously proposed methods introduce additional parameters to model the distortion, with a corresponding increase in computing requirements. This poses difficulties for large, densely genotyped experimental populations. We propose a method imposing minimal additional computational burden which is suitable for high-density map construction in large multi-parent crosses. We demonstrate its use modelling the known Sr36 introgression in wheat for an eight-parent complex cross.
Can mathematics help save energy in computing?
15:10 Fri 22 May, 2015 :: Engineering North N132 :: Prof Markus Hegland :: ANU


Recent development of computational hardware is characterised by two trends:
1. High levels of duplication of computational capabilities in multicore, parallel and GPU processing; and
2. Substantially faster growth in the speed of computational technology than in communication technology.

A consequence of these two trends is that the energy costs of modern computing devices, from mobile phones to supercomputers, are increasingly dominated by communication costs. In order to save energy one would thus need to reduce the amount of data movement within the computer. This can be achieved by recomputing results instead of communicating them. The resulting increase in computational redundancy may also be used to make the computations more robust against hardware faults. Paradoxically, by doing more (computation) we use less (energy).

This talk will first discuss, for a simple example, how mathematical understanding can be applied to improve computational results using extrapolation. Then the problem of energy consumption in computational hardware will be considered. Finally, some recent work will be discussed which shows how redundant computing can be used to mitigate computational faults and thus to save energy.
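The extrapolation idea can be illustrated with Richardson extrapolation (a standard textbook sketch, not necessarily the example from the talk): results computed at two step sizes are combined so that the leading error term cancels.

import math

def central_diff(f, x, h):
    # O(h^2) central-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x, h = 1.0, 0.1
d_h = central_diff(math.sin, x, h)
d_h2 = central_diff(math.sin, x, h / 2)

# The error is C*h^2 + O(h^4), so (4*d_h2 - d_h)/3 cancels the h^2 term.
d_extrap = (4 * d_h2 - d_h) / 3
exact = math.cos(x)
print(f"h:            error = {abs(d_h - exact):.2e}")
print(f"h/2:          error = {abs(d_h2 - exact):.2e}")
print(f"extrapolated: error = {abs(d_extrap - exact):.2e}")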

Monodromy of the Hitchin system and components of representation varieties
12:10 Fri 29 May, 2015 :: Napier 144 :: David Baraglia :: University of Adelaide

Representations of the fundamental group of a compact Riemann surface into a reductive Lie group form a moduli space, called a representation variety. An outstanding problem in topology is to determine the number of components of these varieties. Through a deep result known as non-abelian Hodge theory, representation varieties are homeomorphic to moduli spaces of certain holomorphic objects called Higgs bundles. In this talk I will describe recent joint work with L. Schaposnik computing the monodromy of the Hitchin fibration for Higgs bundle moduli spaces. Our results give a new unified proof of the number of components of several representation varieties.
Group Meeting
15:10 Fri 29 May, 2015 :: EM 213 :: Dr Judy Bunder :: University of Adelaide

Talk: Patch dynamics for efficient exascale simulations. Abstract: Massive parallelisation has led to a dramatic increase in available computational power. However, data transfer speeds have failed to keep pace and are the major limiting factor in the development of exascale computing. New algorithms must be developed which minimise the transfer of data. Patch dynamics is a computational macroscale modelling scheme which provides a coarse macroscale solution of a problem defined on a fine microscale by dividing the domain into many non-overlapping, coupled patches. Patch dynamics is readily adaptable to massive parallelisation, as each processor core can evaluate the dynamics on one, or a few, patches. However, the patch coupling conditions interpolate across the unevaluated parts of the domain between patches and require almost continuous data transfer. We propose a modified patch dynamics scheme which minimises data transfer by only re-evaluating the patch coupling conditions at 'mesoscale' time scales which are significantly larger than the natural time scale of the microscale problem. We analyse and quantify the error arising from patch dynamics with mesoscale temporal coupling.
Complex Systems, Chaotic Dynamics and Infectious Diseases
15:10 Fri 5 Jun, 2015 :: Engineering North N132 :: Prof Michael Small :: UWA

In complex systems, the interconnection between the components of the system determines the dynamics. The system is described by a very large and random mathematical graph, and it is the topological structure of that graph which is important for understanding the dynamical behaviour of the system. I will talk about two specific examples: (1) spread of infectious disease (where the connection between the agents in a population, rather than epidemic parameters, determines the endemic state); and (2) a transformation to represent a dynamical system as a graph (such that the "statistical mechanics" of the graph characterise the dynamics).
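A minimal sketch of example (1), with illustrative parameters only: discrete-time SIS (susceptible-infected-susceptible) dynamics on an Erdos-Renyi random graph, where the contact network, not just the epidemic parameters, shapes the endemic level.

import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
G = nx.erdos_renyi_graph(500, 0.02, seed=3)    # random contact network
beta, gamma = 0.05, 0.1                        # per-contact infection, recovery
infected = set(rng.choice(500, size=10, replace=False))

for _ in range(200):
    new_inf = {v for u in infected for v in G.neighbors(u)
               if v not in infected and rng.random() < beta}
    recovered = {u for u in infected if rng.random() < gamma}
    infected = (infected | new_inf) - recovered

print(f"endemic fraction after 200 steps: {len(infected) / 500:.2f}")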
A relaxed introduction to resampling-based multiple testing
12:10 Mon 10 Aug, 2015 :: Benham Labs G10 :: Ngoc Vo :: University of Adelaide

P-values and false positives are two phrases that you commonly see thrown around in the scientific literature. More often than not, experimenters and analysts are required to quote p-values as a measure of statistical significance: how strongly does your evidence support your hypothesis? But what happens when this "strong evidence" is just a coincidence? And what happens if you have lots of these hypotheses (up to tens of thousands) to test all at the same time, and most of your significant findings end up being just coincidences?
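One standard resampling recipe (a sketch on simulated null data, under simplifying assumptions): estimate the null distribution of the maximum test statistic by permuting group labels, so that each finding is judged against the whole family of tests at once.

import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 1000                       # 50 samples, 1000 hypothetical tests
labels = np.repeat([0, 1], n // 2)
data = rng.normal(size=(n, m))        # global null: no test is truly significant

def t_stats(x, g):
    a, b = x[g == 0], x[g == 1]
    return (a.mean(0) - b.mean(0)) / np.sqrt(a.var(0) / len(a) + b.var(0) / len(b))

observed = np.abs(t_stats(data, labels))
max_null = np.array([np.abs(t_stats(data, rng.permutation(labels))).max()
                     for _ in range(200)])   # 200 permutations keeps the sketch fast

adj_p = (max_null[None, :] >= observed[:, None]).mean(axis=1)  # family-wise adjusted
print(f"tests with adjusted p < 0.05: {(adj_p < 0.05).sum()}")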
Modelling Directionality in Stationary Geophysical Time Series
12:10 Mon 12 Oct, 2015 :: Benham Labs G10 :: Mohd Mahayaudin Mansor :: University of Adelaide

Many time series show directionality inasmuch as plots against time and against time-to-go are qualitatively different, and there is a range of statistical tests to quantify this effect. There are two strategies for allowing for directionality in time series models. Linear models are reversible if and only if the noise terms are Gaussian, so one strategy is to use linear models with non-Gaussian noise. The alternative is to use non-linear models. We investigate how non-Gaussian noise affects directionality in a first-order autoregressive process AR(1) and compare this with a threshold autoregressive model with two thresholds. The findings are used to suggest possible improvements to an AR(9) model, identified by the AIC, for the average yearly sunspot numbers from 1700 to 1900. The improvement is measured in terms of one-step-ahead forecast errors from 1901 to 2014.
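A minimal sketch of the phenomenon (illustrative parameters, not the sunspot analysis): an AR(1) series driven by skewed noise has skewed first differences, one simple directionality statistic, whereas a time-reversible Gaussian AR(1) would not.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, phi = 10_000, 0.8
noise = rng.exponential(1.0, size=n) - 1.0   # centred but skewed innovations

x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + noise[t]         # AR(1) with non-Gaussian noise

print(f"skewness of first differences: {stats.skew(np.diff(x)):.3f}")
# For a reversible series (e.g. Gaussian AR(1)) this would be close to zero;
# a clearly non-zero value is evidence of directionality.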
Quasi-isometry classification of certain hyperbolic Coxeter groups
11:00 Fri 23 Oct, 2015 :: Ingkarni Wardli Conference Room 7.15 (Level 7) :: Anne Thomas :: University of Sydney

Let Gamma be a finite simple graph with vertex set S. The associated right-angled Coxeter group W is the group with generating set S, so that s^2 = 1 for all s in S and st = ts if and only if s and t are adjacent vertices in Gamma. Moussong proved that the group W is hyperbolic in the sense of Gromov if and only if Gamma has no "empty squares". We consider the quasi-isometry classification of such Coxeter groups using the local cut point structure of their visual boundaries. In particular, we find an algorithm for computing Bowditch's JSJ tree for a class of these groups, and prove that two such groups are quasi-isometric if and only if their JSJ trees are the same. This is joint work with Pallavi Dani (Louisiana State University).
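Moussong's criterion is easy to check by brute force on small graphs. A sketch (a hypothetical helper, not code from the talk) that looks for an "empty square", i.e. an induced 4-cycle:

from itertools import combinations

def has_empty_square(vertices, edges):
    adj = {v: set() for v in vertices}
    for s, t in edges:
        adj[s].add(t)
        adj[t].add(s)
    for p, q, r, w in combinations(vertices, 4):
        # the three ways to arrange four vertices in a cycle
        for a, b, c, d in ((p, q, r, w), (p, r, q, w), (p, q, w, r)):
            # cycle a-b-c-d-a present, both diagonals a-c and b-d absent
            if (b in adj[a] and c in adj[b] and d in adj[c] and a in adj[d]
                    and c not in adj[a] and d not in adj[b]):
                return True
    return False

square = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(has_empty_square([1, 2, 3, 4], square))  # True, so W is not hyperbolic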
Group meeting
15:10 Fri 20 Nov, 2015 :: Ingkarni Wardli B17 :: Mr Jack Keeler :: University of East Anglia / University of Adelaide

Title: Stability of free-surface flow over topography. Abstract: The forced KdV equation is used as a model to analyse the wave behaviour on the free surface in response to prescribed topographic forcing. The research involves computing steady solutions using numerical and asymptotic techniques and then analysing the stability of these steady solutions in time-dependent calculations. Stability is analysed by computing the eigenvalue spectra of the linearised fKdV operator and by exploiting the Hamiltonian structure of the fKdV. Future work includes analysing the solution space for a corrugated topography and investigating the 3-dimensional problem using the KP equation. + Any items for group discussion.
A Semi-Markovian Modeling of Limit Order Markets
13:00 Fri 11 Dec, 2015 :: Ingkarni Wardli 5.57 :: Anatoliy Swishchuk :: University of Calgary

R. Cont and A. de Larrard (SIAM J. Financial Mathematics, 2013) introduced a tractable stochastic model for the dynamics of a limit order book, computing various quantities of interest such as the probability of a price increase or the diffusion limit of the price process. As suggested by empirical observations, we extend their framework to 1) arbitrary distributions for book events inter-arrival times (possibly non-exponential) and 2) both the nature of a new book event and its corresponding inter-arrival time depending on the nature of the previous book event. We do so by resorting to Markov renewal processes to model the dynamics of the bid and ask queues. We keep analytical tractability via explicit expressions for the Laplace transforms of various quantities of interest. Our approach is justified and illustrated by calibrating the model to the five stocks Amazon, Apple, Google, Intel and Microsoft on June 21st 2012. As in Cont and de Larrard, the bid-ask spread remains constant equal to one tick, only the bid and ask queues are modelled (they are independent from each other and get reinitialized after a price change), and all orders have the same size. (This talk is based on our joint paper with Nelson Vadori (Morgan Stanley)).
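The Markov renewal mechanism can be sketched in a few lines (all numbers below are hypothetical placeholders, not the calibrated values from the paper): the type of the next book event, and the waiting time before it, both depend on the type of the previous event.

import numpy as np

rng = np.random.default_rng(7)
types = ["limit", "market"]            # two kinds of book events
P = np.array([[0.7, 0.3],              # P[i, j] = Pr(next type j | previous type i)
              [0.4, 0.6]])
shape = np.array([[0.8, 1.2],          # Weibull shape for each (previous, next) pair
                  [1.5, 0.9]])         # (non-exponential inter-arrival times)

t, state, events = 0.0, 0, []
for _ in range(5):
    nxt = rng.choice(2, p=P[state])            # next event type depends on current
    t += rng.weibull(shape[state, nxt])        # and so does the inter-arrival time
    events.append((round(t, 3), types[nxt]))
    state = nxt
print(events)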
Multi-scale modeling in biofluids and particle aggregation
15:10 Fri 17 Jun, 2016 :: B17 Ingkarni Wardli :: Dr Sarthok Sircar :: University of Adelaide

In today's seminar I will give two examples in mathematical biology which describe multi-scale organisation at two levels: the meso/micro level and the continuum/macro level. I will then detail suitable tools in statistical mechanics to link these different scales. The first problem arises in mathematical physiology: the swelling-de-swelling mechanism of mucus, an ionic gel. Mucus is packaged inside cells at high concentration (volume fraction) and when released into the extracellular environment, it expands in volume by two orders of magnitude in a matter of seconds. This rapid expansion is due to the rapid exchange of calcium and sodium that changes the cross-linked structure of the mucus polymers, thereby causing it to swell. Modelling this problem involves a two-phase, polymer/solvent mixture theory (in the continuum-level description), together with the chemistry of the polymer, its nearest-neighbour interaction and its binding with the dissolved ionic species (in the micro-scale description). The problem is posed as a free-boundary problem, with the boundary conditions derived from a combination of variational principles and perturbation analysis. The dynamics of neutral gels and the equilibrium states of the ionic gels are analysed. In the second example, we numerically study the adhesion-fragmentation dynamics of clusters of rigid, round particles subject to a homogeneous shear flow. At the macro level we describe the dynamics of the number density of these clusters. The description at the micro-scale includes (a) binding/unbinding of the bonds attached to the particle surface, (b) bond torsion, (c) surface potential due to the ionic medium, and (d) flow hydrodynamics due to the shear flow.
Probabilistic Meshless Methods for Bayesian Inverse Problems
15:10 Fri 5 Aug, 2016 :: Engineering South S112 :: Dr Chris Oates :: University of Technology Sydney

This talk deals with statistical inverse problems that involve partial differential equations (PDEs) with unknown parameters. Our goal is to account, in a rigorous way, for the impact of discretisation error that is introduced at each evaluation of the likelihood due to numerical solution of the PDE. In the context of meshless methods, the proposed model-based approach to discretisation error encourages statistical inferences to be more conservative in the presence of significant solver error. In addition, (i) a principled learning-theoretic approach to minimise the impact of solver error is developed, and (ii) the challenge of non-linear PDEs is considered. The method is applied to parameter inference problems in which non-negligible solver error must be accounted for in order to draw valid statistical conclusions.
Measuring and mapping carbon dioxide from remote sensing satellite data
15:10 Fri 21 Oct, 2016 :: Napier G03 :: Prof Noel Cressie :: University of Wollongong

This talk is about environmental statistics for global remote sensing of atmospheric carbon dioxide, a leading greenhouse gas. An important compartment of the carbon cycle is atmospheric carbon dioxide (CO2), which (along with other gases) contributes to climate change through the greenhouse effect. There are a number of CO2 observational programs where measurements are made around the globe at a small number of ground-based locations at somewhat regular time intervals. In contrast, satellite-based programs are spatially global but give up some of the temporal richness. The most recent satellite launched to measure CO2 was NASA's Orbiting Carbon Observatory-2 (OCO-2), whose principal objective is to retrieve a geographical distribution of CO2 sources and sinks. OCO-2's measurement of column-averaged mole fraction, XCO2, is designed to achieve this through a data-assimilation procedure that is statistical at its basis. Consequently, uncertainty quantification is key, starting with the spectral radiances from an individual sounding through to borrowing of strength via spatial-statistical modelling.
Fault tolerant computation of hyperbolic PDEs with the sparse grid combination technique
15:10 Fri 28 Oct, 2016 :: Ingkarni Wardli 5.57 :: Dr Brendan Harding :: University of Adelaide

Computing solutions to high dimensional problems is challenging because of the curse of dimensionality. The sparse grid combination technique allows one to significantly reduce the cost of computing solutions such that they become manageable on current supercomputers. However, as these supercomputers increase in size the rate of failure also increases. This poses a challenge for our computations. In this talk we look at the problem of computing solutions to hyperbolic partial differential equations with the combination technique in an environment where faults occur. A fault tolerant generalisation of the combination technique will be presented along with results that demonstrate its effectiveness.
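For orientation, a sketch of the classical two-dimensional combination technique on a toy quantity (an integral, not the hyperbolic PDE solver from the talk): solutions on anisotropic grids of level |l| = n enter with coefficient +1 and those of level |l| = n - 1 with coefficient -1.

import numpy as np
from scipy.integrate import trapezoid

def grid_quantity(f, li, lj):
    # trapezoidal-rule integral of f on a (2^li + 1) x (2^lj + 1) grid
    x = np.linspace(0.0, 1.0, 2**li + 1)
    y = np.linspace(0.0, 1.0, 2**lj + 1)
    X, Y = np.meshgrid(x, y, indexing="ij")
    return trapezoid(trapezoid(f(X, Y), y, axis=1), x)

f = lambda x, y: np.exp(x * y)
n = 6
combined = (sum(grid_quantity(f, i, n - i) for i in range(1, n))
            - sum(grid_quantity(f, i, n - 1 - i) for i in range(1, n - 1)))
full = grid_quantity(f, n, n)   # the full fine grid, for comparison
print(f"combination: {combined:.8f}, full grid: {full:.8f}")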
Collective and aneural foraging in biological systems
15:10 Fri 3 Mar, 2017 :: Lower Napier LG14 :: Dr Jerome Buhl and Dr David Vogel :: The University of Adelaide

The field of collective behaviour uses concepts originally adapted from statistical physics to study how complex collective phenomena such as mass movement or swarm intelligence emerge from relatively simple interactions between individuals. Here we will focus on two applications of this framework. First, we will look at new insights into the evolution of sociality brought by combining models of nutrition and social interactions to explore phenomena such as collective foraging decisions, the emergence of social organisation and social immunity. Second, we will look at the networks built by slime molds in exploration and foraging contexts.
Fast approximate inference for arbitrarily large statistical models via message passing
15:10 Fri 17 Mar, 2017 :: Engineering South S111 :: Prof Matt Wand :: University of Technology Sydney

We explain how the notion of message passing can be used to streamline the algebra and computer coding for fast approximate inference in large Bayesian statistical models. In particular, this approach is amenable to handling arbitrarily large models of particular types once a set of primitive operations is established. The approach is founded upon a message passing formulation of mean field variational Bayes that utilizes factor graph representations of statistical models. The notion of factor graph fragments is introduced and is shown to facilitate compartmentalization of the required algebra and coding.
Graded K-theory and C*-algebras
11:10 Fri 12 May, 2017 :: Engineering North 218 :: Aidan Sims :: University of Wollongong

C*-algebras can be regarded, in a very natural way, as noncommutative algebras of continuous functions on topological spaces. The analogy is strong enough that topological K-theory in terms of formal differences of vector bundles has a direct analogue for C*-algebras. There is by now a substantial array of tools out there for computing C*-algebraic K-theory. However, when we want to model physical phenomena, like topological phases of matter, we need to take into account various physical symmetries, some of which are encoded by gradings of C*-algebras by the two-element group. Even the definition of graded C*-algebraic K-theory is not entirely settled, and there are relatively few computational tools out there. I will try to outline what a C*-algebra (and a graded C*-algebra) is, indicate what graded K-theory ought to look like, and discuss recent work with Alex Kumjian and David Pask linking this with the deep and powerful work of Kasparov, and using this to develop computational tools.
The Markovian binary tree applied to demography and conservation biology
15:10 Fri 27 Oct, 2017 :: Ingkarni Wardli B17 :: Dr Sophie Hautphenne :: University of Melbourne

Markovian binary trees form a general and tractable class of continuous-time branching processes, which makes them well-suited for real-world applications. Thanks to their appealing probabilistic and computational features, these processes have proven to be an excellent modelling tool for applications in population biology. Typical performance measures of these models include the extinction probability of a population, the distribution of the population size at a given time, the total progeny size until extinction, and the asymptotic population composition. Besides giving an overview of the main performance measures and the techniques involved to compute them, we discuss recently developed statistical methods to estimate the model parameters, depending on the accuracy of the available data. We illustrate our results in human demography and in conservation biology.
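To illustrate one such performance measure in the simplest possible setting (a discrete-time toy, not the full Markovian binary tree machinery): the extinction probability of a branching process is the minimal fixed point of the progeny generating function, found by functional iteration.

def offspring_pgf(s, p0=0.2, p1=0.3, p2=0.5):
    # G(s) for offspring distribution P(0) = p0, P(1) = p1, P(2) = p2
    return p0 + p1 * s + p2 * s**2

q = 0.0
for _ in range(200):        # q <- G(q) converges to the minimal root of q = G(q)
    q = offspring_pgf(q)
print(f"extinction probability: {q:.6f}")
# Mean offspring is 0.3 + 2 * 0.5 = 1.3 > 1, so extinction is not certain;
# here q = 0.4 exactly.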
Computing trisections of 4-manifolds
13:10 Fri 23 Mar, 2018 :: Barr Smith South Polygon Lecture theatre :: Stephen Tillmann :: University of Sydney

Gay and Kirby recently generalised Heegaard splittings of 3-manifolds to trisections of 4-manifolds. A trisection describes a 4-dimensional manifold as a union of three 4-dimensional handlebodies. The complexity of the 4-manifold is captured in a collection of curves on a surface, which guide the gluing of the handlebodies. The minimal genus of such a surface is the trisection genus of the 4-manifold. After defining trisections and giving key examples and applications, I will describe an algorithm to compute trisections of 4-manifolds using arbitrary triangulations as input. This results in the first explicit complexity bounds for the trisection genus of a 4-manifold in terms of the number of pentachora (4-simplices) in a triangulation. This is joint work with Mark Bell, Joel Hass and Hyam Rubinstein. I will also describe joint work with Jonathan Spreer that determines the trisection genus for each of the standard simply connected PL 4-manifolds.
Quantifying language change
15:10 Fri 1 Jun, 2018 :: Horace Lamb 1022 :: A/Prof Eduardo Altmann :: University of Sydney

Mathematical methods to study natural language are increasingly important because of the ubiquity of textual data on the Internet. In this talk I will discuss mathematical models and statistical methods to quantify the variability of language, with a focus on two problems: (i) how has the vocabulary of languages changed over the last centuries? (ii) how do the languages of scientific disciplines relate to each other, and how have they evolved over the last decades? One of the main challenges of these analyses stems from universal properties of word frequencies, which show high temporal variability and are fat-tailed distributed. The latter feature dramatically affects the statistical properties of entropy-based estimators, which motivates us to compare vocabularies using a generalised Jensen-Shannon divergence (obtained from entropies of order alpha).
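A sketch of such a comparison (hypothetical word frequencies; the Tsallis form of the order-alpha entropy is assumed here, recovering Shannon as alpha -> 1):

import numpy as np

def entropy_alpha(p, alpha):
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))                # Shannon limit
    return (1.0 - np.sum(p**alpha)) / (alpha - 1.0)  # Tsallis entropy of order alpha

def jsd_alpha(p, q, alpha):
    m = 0.5 * (p + q)
    return entropy_alpha(m, alpha) - 0.5 * (entropy_alpha(p, alpha)
                                            + entropy_alpha(q, alpha))

p = np.array([0.5, 0.3, 0.1, 0.1])   # word frequencies, corpus 1
q = np.array([0.4, 0.2, 0.2, 0.2])   # word frequencies, corpus 2
for alpha in (0.5, 1.0, 2.0):
    print(f"alpha = {alpha}: divergence = {jsd_alpha(p, q, alpha):.5f}")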

News matching "Statistical computing"

Internet Measurement Conference
Associate Professor Matt Roughan (Applied Mathematics) has been invited to co-chair the ACM/USENIX Internet Measurement Conference. Posted Mon 15 Jan 07.
New Professor of Statistical Bioinformatics
Associate Professor Patty Solomon will take up the Chair of Statistical Bioinformatics within the School of Mathematical Sciences effective from 29th of October, 2007. Posted Mon 29 Oct 07.
ARC Grant successes
The School of Mathematical Sciences has again had outstanding success in the ARC Discovery and Linkage Projects schemes. Congratulations to the following staff for their success in the Discovery Project scheme:
Prof Nigel Bean, Dr Josh Ross, Prof Phil Pollett, Prof Peter Taylor, New methods for improving active adaptive management in biological systems, $255,000 over 3 years
Dr Josh Ross, New methods for integrating population structure and stochasticity into models of disease dynamics, $248,000 over 3 years
A/Prof Matt Roughan, Dr Walter Willinger, Internet traffic-matrix synthesis, $290,000 over 3 years
Prof Patricia Solomon, A/Prof John Moran, Statistical methods for the analysis of critical care data, with application to the Australian and New Zealand Intensive Care Database, $310,000 over 3 years
Prof Mathai Varghese, Prof Peter Bouwknegt, Supersymmetric quantum field theory, topology and duality, $375,000 over 3 years
Prof Peter Taylor, Prof Nigel Bean, Dr Sophie Hautphenne, Dr Mark Fackrell, Dr Malgorzata O'Reilly, Prof Guy Latouche, Advanced matrix-analytic methods with applications, $600,000 over 3 years
Congratulations to the following staff for their success in the Linkage Project scheme:
Prof Simon Beecham, Prof Lee White, A/Prof John Boland, Prof Phil Howlett, Dr Yvonne Stokes, Mr John Wells, Paving the way: an experimental approach to the mathematical modelling and design of permeable pavements, $370,000 over 3 years
Dr Amie Albrecht, Prof Phil Howlett, Dr Andrew Metcalfe, Dr Peter Pudney, Prof Roderick Smith, Saving energy on trains - demonstration, evaluation, integration, $540,000 over 3 years
Posted Fri 29 Oct 10.

Publications matching "Statistical computing"

Publications
Adaptively varying-coefficient spatiotemporal models
Lu, Zudi; Steinskog, D; Tjostheim, D; Yao, Q, Journal of the Royal Statistical Society Series B-Statistical Methodology 71 (859–880) 2009
Algorithms for the Laplace-Stieltjes transforms of first return times for stochastic fluid flows
Bean, Nigel; O'Reilly, Malgorzata; Taylor, Peter, Methodology and Computing in Applied Probability 10 (381–408) 2008
Robust Optimal Portfolio Choice Under Markovian Regime-switching Model
Elliott, Robert; Siu, T, Methodology and Computing in Applied Probability 11 (145–157) 2008
General tooth boundary conditions for equation free modeling
Roberts, Anthony John; Kevrekidis, I, Siam Journal on Scientific Computing 29 (1495–1510) 2007
Statistical characteristics of rainstorms derived from weather radar images
Qin, J; Leonard, Michael; Kuczera, George; Thyer, M; Lambert, Martin; Metcalfe, Andrew, 30th Hydrology and Water Resources Symposium, Launceston, Tasmania 04/12/06
Diversity sensitivity and multimodal Bayesian statistical analysis by relative entropy
Leipnik, R; Pearce, Charles, The ANZIAM Journal 47 (277–287) 2005
Impinging laminar jets at moderate Reynolds numbers and separation distances
Bergthorson, J; Sone, K; Mattner, Trent; Dimotakis, P; Goodwin, D; Meiron, D, Physical Review E. (Statistical, Nonlinear, and Soft Matter Physics) 72 (066307-1–066307-12) 2005
Class-of-service mapping for QoS: A statistical signature-based approach to IP traffic classification
Roughan, Matthew; Sen, S; Spatscheck, O; Duffield, N, ACM SIG COMM 2004, Taormina, Sicily, Italy 25/10/04
Swift-Hohenberg model for magnetoconvection
Cox, Stephen; Matthews, P; Pollicott, S, Physical Review E. (Statistical, Nonlinear, and Soft Matter Physics) 69 (066314-1–066314-14) 2004
The Oxford dictionary of statistical terms
Dodge, Y; Cox, D; Commenges, D; Solomon, Patricia; Wilson, S,
Higher-order statistical moments of wave-induced response of offshore structures via efficient sampling techniques
Najafian, G; Burrows, R; Tickell, R; Metcalfe, Andrew, International Offshore and Polar Engineering Conference 3 (465–470) 2002
Statistical modelling and prediction associated with the HIV/AIDS epidemic
Solomon, Patricia; Wilson, Susan, The Mathematical Scientist 26 (87–102) 2001
Statistical analysis of medical data: New developments - Book review
Solomon, Patricia, Biometrics 57 (327–328) 2001
Meta-analysis, overviews and publication bias
Solomon, Patricia; Hutton, Jonathon, Statistical Methods in Medical Research 10 (245–250) 2001
A GUI for computing flows past general airfoils
Simakov, Sergey; Dostovalova, Anna; Tuck, Ernest, The MATLAB User Conference 2000, Melbourne, Australia 09/11/00
Disease surveillance and data collection issues in epidemic modelling
Solomon, Patricia; Isham, V, Statistical Methods in Medical Research 9 (259–277) 2000
Disease surveillance and intervention studies in developing countries
Solomon, Patricia, Statistical Methods in Medical Research 9 (183–184) 2000

Advanced search options

You may be able to improve your search results by using the following syntax:

Query                        Matches the following
Asymptotic Equation          Anything with "Asymptotic" or "Equation".
+Asymptotic +Equation        Anything with "Asymptotic" and "Equation".
+Stokes -"Navier-Stokes"     Anything containing "Stokes" but not "Navier-Stokes".
Dynam*                       Anything containing "Dynamic", "Dynamical", "Dynamicist", etc.