
# Search the School of Mathematical Sciences


## People matching "Spatial-point data sets and the Polya distribution"

- Associate Professor Gary Glonek, Associate Professor in Statistics
- Associate Professor Inge Koch, Associate Professor in Statistics
- Associate Professor Zudi Lu, Associate Professor in Statistics and ARC Future Fellow
- Professor Matthew Roughan, Professor of Applied Mathematics
- Professor Patty Solomon, Professor of Statistical Bioinformatics
- Mr Simon Tuke, Associate Lecturer in Statistics

## Courses matching "Spatial-point data sets and the Polya distribution"

Analysis of multivariable and high dimensional data

Multivariate analysis of data is performed with three aims:

1. to understand the structure in data and summarise the data in simpler ways;
2. to understand the relationship of one part of the data to another part; and
3. to make decisions or draw inferences based on data.

The statistical analyses of multivariate data extend those of univariate data, and in doing so require more advanced mathematical theory and computational techniques. The course begins with a discussion of the three classical methods (Principal Component Analysis, Canonical Correlation Analysis and Discriminant Analysis) which correspond to the aims above. We also learn about Cluster Analysis, Factor Analysis and newer methods including Independent Component Analysis. For most real data the underlying distribution is not known, but if the assumptions of multivariate normality of the data hold, extra properties can be derived. Our treatment combines ideas, theoretical properties and a strong computational component for each of the different methods we discuss. For the computational part (in Matlab) we make use of real data and learn to use simulations to assess the performance of different methods in practice.

Topics covered:

1. Introduction to multivariate data, the multivariate normal distribution
2. Principal Component Analysis, theory and practice
3. Canonical Correlation Analysis, theory and practice
4. Discriminant Analysis, Fisher's LDA, linear and quadratic DA
5. Cluster Analysis: hierarchical and k-means methods
6. Factor Analysis and latent variables
7. Independent Component Analysis, including an introduction to Information Theory

The course will be based on my forthcoming monograph, Analysis of Multivariate and High-Dimensional Data: Theory and Practice, to be published by Cambridge University Press.
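The computing in the course is done in Matlab; purely as an illustrative sketch (in Python, on synthetic data, and not course material), the core of Principal Component Analysis is an eigendecomposition of the sample covariance matrix:

```python
import numpy as np

# Illustrative sketch only: PCA via eigendecomposition of the sample
# covariance matrix, on synthetic three-dimensional data.
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0, 0],
                            [[3, 1, 0], [1, 2, 0], [0, 0, 0.1]], size=200)

Xc = X - X.mean(axis=0)                  # centre the data
S = Xc.T @ Xc / (len(X) - 1)             # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)     # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]        # reorder to descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Xc @ eigvecs                    # principal component scores
explained = eigvals / eigvals.sum()      # proportion of variance explained
print(explained)
```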

## Events matching "Spatial-point data sets and the Polya distribution"

Watching evolution in real time: problems and potential research areas :: 15:10 Fri 26 May 06 :: G08 Mathematics Building, University of Adelaide :: Prof Alan Cooper (Federation Fellow)

Abstract: Recent studies (1) have indicated problems with our ability to use the genetic distances between species to estimate the time since their divergence (so-called molecular clocks). An exponential decay curve has been detected in comparisons of closely related taxa in mammal and bird groups, and rough approximations suggest that molecular clock calculations may be problematic for the recent past (e.g. <1 million years). Unfortunately, this period encompasses a number of key evolutionary events where estimates of timing are critical, such as modern human evolutionary history, the domestication of animals and plants, and most issues involved in conservation biology. A solution (formulated at UA) will be briefly outlined. A second area of active interest is the recent suggestion (2) that mitochondrial DNA diversity does not track population size in several groups, in contrast to standard thinking. This finding has been interpreted as showing that mtDNA may not be evolving neutrally, as has long been assumed. Large ancient DNA datasets provide a means to examine these issues, by revealing evolutionary processes in real time (3). The data also provide a rich area for mathematical investigation, as temporal information provides information about several parameters that are unknown in serial coalescent calculations (4).

References:
1. Ho SYW et al. Time dependency of molecular rate estimates and systematic overestimation of recent divergence times. Mol. Biol. Evol. 22, 1561-1568 (2005); Penny D. Nature 436, 183-184 (2005).
2. Bazin E et al. Population size does not influence mitochondrial genetic diversity in animals. Science 312, 570 (2006); Eyre-Walker A. Size does not matter for mitochondrial DNA. Science 312, 537 (2006).
3. Shapiro B et al. Rise and fall of the Beringian steppe bison. Science 306, 1561-1565 (2004); Chan et al. Bayesian estimation of the timing and severity of a population bottleneck from ancient DNA. PLoS Genetics 2, e59 (2006).
4. Drummond et al. Measurably evolving populations. Trends Ecol. Evol. 18, 481-488 (2003); Drummond et al. Bayesian coalescent inference of past population dynamics from molecular sequences. Mol. Biol. Evol. 22, 1185-1192 (2005).
A Bivariate Zero-inflated Poisson Regression Model and application to some Dental Epidemiological data :: 14:10 Fri 27 Oct 06 :: G08 Mathematics Building, University of Adelaide :: University Professor Sudhir Paul

Abstract: Data in the form of paired (pre-treatment, post-treatment) counts arise in the study of the effects of several treatments after accounting for possible covariate effects. An example of such a data set comes from a dental epidemiological study in Belo Horizonte (the Belo Horizonte caries prevention study), which evaluated various programmes for reducing caries. These data may also show more pairs of zeros than can be accounted for by a simpler model, such as a bivariate Poisson regression model. In such situations we propose a zero-inflated bivariate Poisson regression (ZIBPR) model for the paired (pre-treatment, post-treatment) count data. We develop an EM algorithm to obtain maximum likelihood estimates of the parameters of the ZIBPR model. Further, we obtain the exact Fisher information matrix of the maximum likelihood estimates and develop a procedure for testing treatment effects. The procedure to detect treatment effects based on the ZIBPR model is compared, in terms of size, by simulations, with an earlier procedure using a zero-inflated Poisson regression (ZIPR) model of the post-treatment count with the pre-treatment count treated as a covariate. The procedure based on the ZIBPR model holds its level most effectively. A further simulation study indicates the good power properties of the procedure based on the ZIBPR model. We then compare our analysis of the decayed, missing and filled teeth (DMFT) index data from the caries prevention study, based on the ZIBPR model, with the analysis using a zero-inflated Poisson regression model in which the pre-treatment DMFT index is taken to be a covariate.
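The bivariate model and its EM algorithm are beyond the scope of a listing like this; as a hedged univariate sketch (with made-up toy counts, not the Belo Horizonte data), the following shows where the extra zeros enter a zero-inflated Poisson likelihood:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

# Univariate zero-inflated Poisson: P(0) = pi + (1 - pi) e^{-lam},
# P(k) = (1 - pi) * Poisson(k; lam) for k >= 1. (Illustrative only.)
def zip_negloglik(params, y):
    pi = 1 / (1 + np.exp(-params[0]))   # logit parameterisation keeps pi in (0,1)
    lam = np.exp(params[1])             # log parameterisation keeps lam > 0
    p0 = pi + (1 - pi) * np.exp(-lam)
    ll = np.where(y == 0, np.log(p0),
                  np.log(1 - pi) + poisson.logpmf(y, lam))
    return -ll.sum()

y = np.array([0, 0, 0, 0, 1, 2, 0, 3, 0, 1, 0, 4])   # hypothetical counts
fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,))
pi_hat, lam_hat = 1 / (1 + np.exp(-fit.x[0])), np.exp(fit.x[1])
print(pi_hat, lam_hat)
```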
Likelihood inference for a problem in particle physics :: 15:10 Fri 27 Jul 07 :: G04 Napier Building, University of Adelaide :: Prof Anthony Davison

Abstract: The Large Hadron Collider (LHC), a particle accelerator located at CERN, near Geneva, is (currently!) expected to start operation in early 2008. It is located in an underground tunnel 27 km in circumference, and when fully operational will be the world's largest and highest-energy particle accelerator. It is hoped that it will provide evidence for the existence of the Higgs boson, the last remaining particle of the so-called Standard Model of particle physics. The quantity of data that will be generated by the LHC is roughly equivalent to that of the European telecommunications network, but this will be boiled down to just a few numbers. After a brief introduction, this talk will outline elements of the statistical problem of detecting the presence of a particle, and then sketch how higher-order likelihood asymptotics may be used for signal detection in this context. The work is joint with Nicola Sartori, of the Università Ca' Foscari, Venice.
Regression: a backwards step? :: 13:10 Fri 7 Sep 07 :: Maths G08 :: Dr Gary Glonek

Abstract: Most students of high school mathematics will have encountered the technique of fitting a line to data by least squares. Those who have taken a university statistics course will also have heard this method referred to as regression. However, it is not obvious from common dictionary definitions, for example "reversion to an earlier or less advanced state or form", why this should be the case. In this talk, the mathematical phenomenon that gave regression its name will be explained and shown to have implications in some unexpected contexts.
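As a hedged numerical illustration of the phenomenon behind the name (not taken from the talk): for standardised bivariate data the least-squares slope equals the correlation, which is less than one, so predictions "regress" toward the mean:

```python
import numpy as np

# Simulate standardised parent/offspring pairs with correlation rho.
rng = np.random.default_rng(1)
rho = 0.6
n = 100_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

C = np.cov(x, y)
slope = C[0, 1] / C[0, 0]   # least-squares slope of y on x
print(slope)                # close to rho < 1: extreme x predicts less extreme y
```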
Statistical Critique of the Intergovernmental Panel on Climate Change's work on Climate Change :: 18:00 Wed 17 Oct 07 :: Union Hall, University of Adelaide :: Mr Dennis Trewin

Abstract: Climate change is one of the most important issues facing us today. Many governments have introduced or are developing policy interventions to (a) reduce the growth of greenhouse gas emissions in order to mitigate future climate change, or (b) adapt to future climate change. This important work deserves a high-quality statistical data base, but there are statistical shortcomings in the work of the Intergovernmental Panel on Climate Change (IPCC). There has been very little involvement of qualified statisticians in the very important work of the IPCC, which appears to be scientifically meritorious in most other ways. Mr Trewin will explain these shortcomings and outline his views on likely future climate change, taking into account the statistical deficiencies. His conclusions suggest climate change is still an important issue that needs to be addressed, but that the range of likely outcomes is a lot lower than has been suggested by the IPCC. This presentation will be based on an invited paper presented at the OECD World Forum.
Moderated Statistical Tests for Digital Gene Expression Technologies :: 15:10 Fri 19 Oct 07 :: G04 Napier Building, University of Adelaide :: Dr Gordon Smyth :: Walter and Eliza Hall Institute of Medical Research, Melbourne, Australia

Abstract: Digital gene expression (DGE) technologies measure gene expression by counting sequence tags. They are sensitive technologies for measuring gene expression on a genomic scale, without the need for prior knowledge of the genome sequence. As the cost of DNA sequencing decreases, the number of DGE datasets is expected to grow dramatically. Various tests of differential expression have been proposed for replicated DGE data using over-dispersed binomial or Poisson models for the counts, but none of these is usable when the number of replicates is very small. We develop tests using the negative binomial distribution to model overdispersion relative to the Poisson, and use conditional weighted likelihood to moderate the level of overdispersion across genes. A heuristic empirical Bayes algorithm is developed which is applicable to very general likelihood estimation contexts. Not only is our strategy applicable even with the smallest number of replicates, but it also proves to be more powerful than previous strategies when more replicates are available. The methodology is applicable to other counting technologies, such as proteomic spectral counts.
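A hedged sketch of the key modelling ingredient (not the moderated test itself): the negative binomial allows the count variance to exceed the Poisson's, via the mean-variance relation var = mu + mu^2/k for dispersion parameter k:

```python
import numpy as np
from scipy.stats import nbinom, poisson

# Negative binomial with mean mu and dispersion k: var = mu + mu^2 / k.
# scipy parameterises nbinom by (n, p); taking n = k, p = k / (k + mu)
# gives mean mu. (Toy values, not real tag-count data.)
mu, k = 10.0, 2.0
n, p = k, k / (k + mu)

print(poisson(mu).var())   # 10.0: Poisson variance equals the mean
print(nbinom(n, p).var())  # 60.0: overdispersed relative to Poisson

rng = np.random.default_rng(2)
counts = nbinom(n, p).rvs(size=5, random_state=rng)  # simulated replicates
print(counts)
```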
Similarity solutions for surface-tension driven flows :: 15:10 Fri 14 Mar 08 :: LG29 Napier Building, University of Adelaide :: Prof John Lister :: Department of Applied Mathematics and Theoretical Physics, University of Cambridge, UK

Abstract: The breakup of a mass of fluid into drops is a ubiquitous phenomenon in daily life, the natural environment and technology, with common examples including a dripping tap, ocean spray and ink-jet printing. It is a feature of many generic industrial processes such as spraying, emulsification, aeration, mixing and atomisation, and is an undesirable feature in coating and fibre spinning. Surface-tension driven pinch-off and the subsequent recoil are examples of finite-time singularities in which the interfacial curvature becomes infinite at the point of disconnection. As a result, the flow near the point of disconnection becomes self-similar and independent of initial and far-field conditions. Similarity solutions will be presented for the cases of inviscid and very viscous flow, along with comparison to experiments. In each case, a boundary-integral representation can be used both to examine the time-dependent behaviour and as the basis of a modified Newton scheme for direct solution of the similarity equations.
Values of transcendental entire functions at algebraic points :: 15:10 Fri 28 Mar 08 :: LG29 Napier Building, University of Adelaide :: Prof Eugene Poletsky :: Syracuse University, USA

Abstract: Algebraic numbers are roots of polynomials with integer coefficients, so their set is countable. All other numbers are called transcendental. Although most numbers are transcendental, it was only in 1873 that Hermite proved that the base $e$ of natural logarithms is not algebraic. The proof was based on the fact that $e$ is the value at 1 of the exponential function $e^z$, which is entire and does not change under differentiation. This achievement raised two questions: Which entire functions take only transcendental values at algebraic points? And, given a transcendental entire function $f$, describe, or at least find properties of, the set of algebraic numbers where the values of $f$ are also algebraic. The first question, developed by Siegel, Shidlovsky and others, led to the notion of $E$-functions, which have controlled derivatives. Answering the second question, Polya and Gelfond obtained restrictions for entire functions that have integer values at integer points (Polya) or Gaussian integer values at Gaussian integer points (Gelfond). For more general sets of points only counterexamples were known. Recently D. Coman and the speaker developed new tools for the second question, which give an answer, at least partially, for general entire functions and their values at general sets of algebraic points. In my talk we will discuss old and new results in this direction. All relevant definitions will be provided and the talk will be accessible to postgraduates and honours students.
Global and local stationary modelling in finance: theory and empirical evidence :: 14:10 Thu 10 Apr 08 :: G04 Napier Building, University of Adelaide :: Prof Dominique Guégan :: Université Paris 1 Panthéon-Sorbonne

Abstract: Modelling real data sets using second-order stochastic processes requires that the data satisfy the second-order stationarity condition, which concerns the unconditional moments of the process. It is in that context that most of the models developed since the sixties have been studied; we refer to the ARMA processes (Brockwell and Davis, 1988), the ARCH, GARCH and EGARCH models (Engle, 1982; Bollerslev, 1986; Nelson, 1990), the SETAR process (Lim and Tong, 1980; Tong, 1990), the bilinear model (Granger and Andersen, 1978; Guégan, 1994), the EXPAR model (Haggan and Ozaki, 1980), the long memory process (Granger and Joyeux, 1980; Hosking, 1981; Gray, Zang and Woodward, 1989; Beran, 1994; Giraitis and Leipus, 1995; Guégan, 2000) and the switching process (Hamilton, 1988). For all these models we obtain an invertible causal solution under specific conditions on the parameters, and then forecast points and forecast intervals are available. Thus the stationarity assumption is the basis for a general asymptotic theory of identification, estimation and forecasting: it guarantees that increasing the sample size leads to more and more information of the same kind, which is essential for an asymptotic theory to make sense. Non-stationary modelling also has a long tradition in econometrics, based on the conditional moments of the data-generating process. It appears mainly in heteroscedastic and volatility models, such as GARCH and related models and stochastic volatility processes (Ghysels, Harvey and Renault, 1997). Non-stationarity also appears in a different way in structural change models such as the switching models (Hamilton, 1988), the stopbreak model (Diebold and Inoue, 2001; Breidt and Hsu, 2002; Granger and Hyung, 2004) and the SETAR models, and it can be observed in linear models with time-varying coefficients (Nicholls and Quinn, 1982; Tsay, 1987). Thus, using stationary unconditional moments suggests a global stationarity for the model, but using non-stationary unconditional moments, non-stationary conditional moments, or assuming the existence of states suggests that this global stationarity fails and that we only observe locally stationary behaviour. The growing evidence of instability in the stochastic behaviour of stocks, exchange rates and some economic data sets such as growth rates, characterised by volatility or by jumps in the variance or in the levels of prices, forces us to question the assumption of global stationarity and its consequences for modelling, particularly for forecasting. We can therefore address several questions:

1. What kinds of non-stationarity affect the major financial and economic data sets, and how can they be detected?
2. Local and global stationarity: how are they defined?
3. What is the impact of evidence of non-stationarity on the statistics computed from globally non-stationary data sets?
4. How can we analyse data sets in the globally non-stationary framework? Does the asymptotic theory work in a non-stationary framework?
5. What kinds of models create local stationarity instead of global stationarity, and how can we use them to develop modelling and forecasting strategies?

These questions have begun to be discussed in the economic literature. For some of them the answers are known; for others very little work exists. In this talk I will discuss these problems and propose two new strategies and models to address them. Several interesting topics in empirical finance awaiting future research will also be discussed.
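As a loose, hedged illustration of the global-versus-local distinction (not the speaker's methodology): comparing second moments across rolling windows of a series with a volatility break shows how global stationarity can fail while each window looks locally stationary:

```python
import numpy as np

# Crude diagnostic sketch: rolling-window variance of a toy series with a
# volatility break halfway through. Strong variation across windows is
# evidence against global (second-order) stationarity.
rng = np.random.default_rng(3)
x = np.concatenate([rng.standard_normal(500),
                    3.0 * rng.standard_normal(500)])   # variance jumps 1 -> 9

window = 100
rolling_var = np.array([x[i:i + window].var()
                        for i in range(len(x) - window)])
print(rolling_var.min(), rolling_var.max())   # the spread reveals the break
```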
Puzzle-based learning: Introduction to mathematics :: 15:10 Fri 23 May 08 :: LG29 Napier Building, University of Adelaide :: Prof Zbigniew Michalewicz :: School of Computer Science, University of Adelaide

Abstract: The talk addresses a gap in the educational curriculum for first-year students by proposing a new course that aims to get students to think about how to frame and solve unstructured problems. The idea is to increase students' mathematical awareness and problem-solving skills by discussing a variety of puzzles. The talk argues that this approach, called Puzzle-Based Learning, is very beneficial for introducing mathematics, critical thinking and problem-solving skills. The new course has been approved by the University of Adelaide for the Faculty of Engineering, Computer Science, and Mathematics. Many other universities are in the process of introducing such a course. The course will be offered in two versions: (a) a full-semester course, and (b) a unit within a general course (e.g. Introduction to Engineering). All teaching materials (PowerPoint slides, assignments, etc.) are being prepared. The new textbook (Puzzle-Based Learning: Introduction to Critical Thinking, Mathematics, and Problem Solving) will be available from June 2008. The talk provides additional information on this development. For further information see http://www.PuzzleBasedlearning.edu.au/
Elliptic equation for diffusion-advection flows :: 15:10 Fri 15 Aug 08 :: G03 Napier Building, University of Adelaide :: Prof Pavel Bedrikovetsky :: Australian School of Petroleum Science, University of Adelaide

Abstract: The standard diffusion equation is obtained by Einstein's method and its generalisation, the Fokker-Planck-Kolmogorov-Feller theory. In Einstein's derivation the time between jumps is constant. We discuss random walks with a residence time distribution, which occur in flows of solutes and suspensions/colloids in porous media, CO2 sequestration in coal mines, and several processes in chemical, petroleum and environmental engineering. Rigorous application of Einstein's method results in a new equation containing time and mixed dispersion terms that express the dispersion of the particle time steps. Usually, adding a second time derivative requires additional initial data; for the equation derived, the condition that the solution remain bounded as time tends to infinity provides uniqueness of the Cauchy problem solution. The solution of the pulse injection problem, describing a common tracer injection experiment, is studied in greater detail. The new theory predicts a delay of the tracer maximum relative to the velocity of the flow, while its forward "tail" contains many more particles than in the solution of the classical parabolic (advection-dispersion) equation. This is in agreement with experimental observations and predictions of direct simulation.
Mathematical modelling of blood flow in curved arteries :: 15:10 Fri 12 Sep 08 :: G03 Napier Building, University of Adelaide :: Dr Jennifer Siggers :: Imperial College London

Abstract: Atherosclerosis, characterised by plaques, is the most common arterial disease. Plaques tend to develop in regions of low mean wall shear stress, and in regions where the wall shear stress changes direction during the course of the cardiac cycle. To investigate the effect of the arterial geometry and driving pressure gradient on the wall shear stress distribution, we consider an idealised model of a curved artery with uniform curvature. We assume that the flow is fully developed and seek solutions of the governing equations, finding the effect of the parameters on the flow and wall shear stress distribution. Most previous work assumes the curvature ratio is asymptotically small; however, many arteries have significant curvature (e.g. the aortic arch has curvature ratio approximately 0.25), and in this work we consider in particular the effect of finite curvature. We present an extensive analysis of curved-pipe flow driven by steady and unsteady pressure gradients. Increasing the curvature causes the shear stress on the inside of the bend to rise, indicating that the risk of plaque development would be overestimated by considering only the weak-curvature limit.
Symmetry-breaking and the Origin of Species :: 15:10 Fri 24 Oct 08 :: G03 Napier Building, University of Adelaide :: Toby Elmhirst :: ARC Centre of Excellence for Coral Reef Studies, James Cook University

Abstract: The theory of partial differential equations can say much about generic bifurcations from spatially homogeneous steady states, but relatively little about generic bifurcations from unimodal steady states. In many applications, spatially homogeneous steady states correspond to low-energy physical states that are destabilised as energy is fed into the system, and in these cases standard PDE theory can yield some impressive and elegant results. However, for many macroscopic biological systems such results are less useful, because low-energy states do not hold the same privileged position as they do in physical and chemical systems. For example, speciation, the evolutionary process by which new species are formed, can be seen as the destabilisation of a unimodal density distribution over phenotype space. Given the diversity of species and environments, generic results are clearly needed, but cannot be gained from PDE theory; indeed, such questions cannot even be adequately formulated in terms of PDEs. In this talk I will introduce 'pod systems', which can answer the question: what happens, generically, when a unimodal steady state loses stability? In the pod-system formalisation, the answer involves elements of equivariant bifurcation theory and suggests that new species can arise as the result of broken symmetries.
Oceanographic Research at the South Australian Research and Development Institute: opportunities for collaborative research :: 15:10 Fri 21 Nov 08 :: Napier G04 :: Associate Prof John Middleton :: South Australian Research and Development Institute

Abstract: Increasing threats to S.A.'s fisheries and marine environment have underlined the need for soundly based research into the ocean circulation and ecosystems (phyto/zooplankton) of the shelf and gulfs. With the support of Marine Innovation SA, the Oceanography Program has within two years grown to include 6 FTEs and a budget of over $4.8M. The program currently leads two major research projects, both of which involve numerical and applied mathematical modelling of oceanic flow and ecosystems as well as statistical techniques for the analysis of data. The first is the implementation of the Southern Australian Integrated Marine Observing System (SAIMOS), which is providing data to understand the dynamics of shelf boundary currents, monitor for climate change and understand the phyto/zooplankton ecosystems that underpin S.A.'s wild fisheries and aquaculture. SAIMOS involves ship-based sampling, the deployment of underwater marine moorings, underwater gliders, HF ocean radar, acoustic tracking of tagged fish and autonomous underwater vehicles. The second major project involves measuring and modelling the ocean circulation and biological systems within Spencer Gulf and the impact on prawn larval dispersal and on the sustainability of existing and proposed aquaculture sites. The discussion will focus on opportunities for collaborative research with both faculty and students in this exciting growth area of S.A. science.

Key Predistribution in Grid-Based Wireless Sensor Networks :: 15:10 Fri 12 Dec 08 :: Napier G03 :: Dr Maura Paterson :: Information Security Group, Royal Holloway, University of London

Abstract: Wireless sensors are small, battery-powered devices that are deployed to measure quantities such as temperature within a given region, then form a wireless network to transmit and process the data they collect. We discuss the problem of distributing symmetric cryptographic keys to the nodes of a wireless sensor network in the case where the sensors are arranged in a square or hexagonal grid, and we propose a key predistribution scheme for such networks that is based on Costas arrays. We introduce more general structures known as distinct-difference configurations, and show that they provide a flexible choice of parameters in our scheme, leading to more efficient performance than that achieved by prior schemes from the literature.

Boltzmann's Equations for Suspension Flow in Porous Media and Correction of the Classical Model :: 15:10 Fri 13 Mar 09 :: Napier LG29 :: Prof Pavel Bedrikovetsky :: University of Adelaide

Abstract: Suspension/colloid transport in porous media is a basic phenomenon in environmental, petroleum and chemical engineering. A suspension of particles moves through porous media and particles are captured by straining or attraction. We revise the classical equations for particle mass balance and particle capture kinetics and show their unrealistic behaviour in cases of large dispersion and of flow-free filtration. In order to resolve the paradoxes, a pore-scale model is derived. The model can be transformed to a Boltzmann equation with a particle distribution over pores. Introducing sink-source terms into the Boltzmann equation results in much simpler calculations than the traditional Chapman-Enskog averaging procedure. A technique of projecting operators in the Hilbert space of Fourier images is used. The projection subspace is constructed in a way that avoids dependency of the averaged equations on the sink-source terms. The averaging results in explicit expressions for the particle flux and capture rate. The particle flux expression describes the decrease of the advective particle velocity relative to the carrier water velocity, due to preferential capture of "slow" particles in small pores. The capture rate kinetics describes capture from either advective or diffusive fluxes. The equations derived exhibit positive advection velocity for any dispersion, and particle capture in immobile fluid, which resolves the above-mentioned paradox. Finally, we discuss validation of the model for propagation of contaminants in aquifers, for filtration, for potable water production by artesian wells, and for formation damage in oilfields.

From histograms to multivariate polynomial histograms and shape estimation :: 12:10 Thu 19 Mar 09 :: Napier 210 :: A/Prof Inge Koch

Abstract: Histograms are convenient and easy-to-use tools for estimating the shape of data, but they have serious problems which are magnified for multivariate data. We combine classical histograms with shape estimation by polynomials. The new relatives, 'polynomial histograms', have surprisingly nice mathematical properties, which we will explore in this talk. We also show how they can be used for real data of 10-20 dimensions to analyse and understand the shape of these data.

Multi-scale tools for interpreting cell biology data :: 15:10 Fri 17 Apr 09 :: Napier LG29 :: Dr Matthew Simpson :: University of Melbourne

Abstract: Trajectory data from observations of a random walk process are often used to characterise macroscopic transport coefficients and to infer motility mechanisms in cell biology. New continuum equations describing the average moments of the position of an individual agent in a population of interacting agents are derived and validated. Unlike standard noninteracting random walks, the new moment equations explicitly represent the interactions between agents, as they are coupled to the macroscopic agent density. Key issues associated with the validity of the new continuum equations and the interpretation of experimental data will be explored.

Sloshing in tanks of liquefied natural gas (LNG) vessels :: 15:10 Wed 22 Apr 09 :: Napier LG29 :: Prof Frederic Dias :: ENS Cachan

Abstract: The last scientific conversation I had with Ernie Tuck was on liquid impact. As a matter of fact, we discussed the paper by J.H. Milgram, Journal of Fluid Mechanics 37 (1969), entitled "The motion of a fluid in a cylindrical container with a free surface following vertical impact." Liquid impact is a key issue in sloshing, and in particular in sloshing in tanks of LNG vessels. Numerical simulations of sloshing have been performed by various groups, using various types of numerical methods. In terms of the numerical results, the outcome is often impressive, but the question remains of how relevant these results are when it comes to determining impact pressures. The numerical models are too simplified to reproduce the high variability of the measured pressures. In fact, for the time being, it is not possible to simulate accurately both global and local effects. Unfortunately, it appears that local effects predominate over global effects when the behaviour of pressures is considered. Having said this, it is important to point out that numerical studies can be quite useful for performing sensitivity analyses in idealised conditions, such as a liquid mass falling under gravity on top of a horizontal wall and then spreading along the lateral sides. Simple analytical models inspired by numerical results on idealised problems can also be useful for predicting trends. The talk is organised as follows: after a brief introduction to the sloshing problem and to scaling laws, it will be explained to what extent numerical studies can be used to improve our understanding of impact pressures. Results on a liquid mass hitting a wall, obtained by a finite-volume code with interface reconstruction, as well as results obtained by a simple analytical model, will be shown to reproduce the trends of experiments on sloshing. This is joint work with L. Brosset (GazTransport & Technigaz), J.-M. Ghidaglia (ENS Cachan) and J.-P. Braeunig (INRIA).

Statistical analysis for harmonized development of systemic organs in human fetuses :: 11:00 Thu 17 Sep 09 :: School Board Room :: Prof Kanta Naito :: Shimane University

Abstract: The growth processes of human babies have been studied extensively, but many issues about the development of the human fetus remain unresolved. The aim of this research is to investigate the developing process of the systemic organs of human fetuses, based on a data set of measurements of fetuses' bodies and organs. Specifically, this talk is concerned with giving a mathematical understanding of the harmonised development of the organs of human fetuses. A method to evaluate such harmony is proposed, using the maximal dilatation that appears in the theory of quasi-conformal mappings.

Understanding hypersurfaces through tropical geometry :: 12:10 Fri 25 Sep 09 :: Napier 102 :: Dr Mohammed Abouzaid :: Massachusetts Institute of Technology

Abstract: Given a polynomial in two or more variables, one may study the zero locus from the point of view of different mathematical subjects (number theory, algebraic geometry, ...). I will explain how tropical geometry allows one to encode all topological aspects by elementary combinatorial objects called "tropical varieties." Mohammed Abouzaid received a B.S. in 2002 from the University of Richmond, and a Ph.D. in 2007 from the University of Chicago under the supervision of Paul Seidel. He is interested in symplectic topology and its interactions with algebraic geometry and differential topology, in particular the homological mirror symmetry conjecture. Since 2007 he has been a postdoctoral fellow at MIT, and a Clay Mathematics Institute Research Fellow.

Contemporary frontiers in statistics :: 15:10 Mon 28 Sep 09 :: Badger Labs G31 Macbeth Lecture Theatre :: Prof Peter Hall :: University of Melbourne

Abstract: The availability of powerful computing equipment has had a dramatic impact on statistical methods and thinking, changing forever the way data are analysed. New data types, larger quantities of data, and new classes of research problem are all motivating new statistical methods. We shall give examples of each of these issues, and discuss the current and future directions of frontier problems in statistics.

Critical sets of products of linear forms :: 13:10 Mon 14 Dec 09 :: School Board Room :: Dr Graham Denham :: University of Western Ontario, Canada

Abstract:
Suppose $f_1,f_2,\ldots,f_n$ are linear polynomials in $\ell$ variables and $\lambda_1,\lambda_2,\ldots,\lambda_n$ are nonzero complex numbers. The product $$\Phi_\lambda=\prod_{i=1}^n f_i^{\lambda_i},$$ called a master function, defines a (multivalued) function on $\ell$-dimensional complex space, or more precisely, on the complement of a set of hyperplanes. Since $\log\Phi_\lambda=\sum_i \lambda_i \log f_i$, the critical points are the solutions of $\sum_{i=1}^n \lambda_i \nabla f_i/f_i = 0$. It is then easy to ask (but harder to answer) what the set of critical points of a master function looks like, in terms of properties of the input polynomials and the $\lambda_i$'s. In my talk I will describe the motivation for considering such a question, and then indicate how the geometry and combinatorics of hyperplane arrangements can be used to provide at least a partial answer.

Integrable systems: noncommutative versus commutative :: 14:10 Thu 4 Mar 10 :: School Board Room :: Dr Cornelia Schiebold :: Mid Sweden University

Abstract: After a general introduction to integrable systems, we will explain an approach to their solution theory based on Banach space theory. The main point is first to shift attention to noncommutative integrable systems and then to extract information about the original setting via projection techniques. The resulting solution formulas turn out to be particularly well suited to the qualitative study of certain solution classes. We will show how one can obtain a complete asymptotic description of the so-called multiple pole solutions, a problem that was only treated for special cases before.

Exploratory experimentation and computation :: 15:10 Fri 16 Apr 10 :: Napier LG29 :: Prof Jonathan Borwein :: University of Newcastle

Abstract: The mathematical research community is facing a great challenge to re-evaluate the role of proof in light of the growing power of current computer systems, of modern mathematical computing packages, and of the growing capacity to data-mine on the Internet. Add to that the enormous complexity of many modern capstone results such as the Poincaré conjecture, Fermat's last theorem, and the classification of finite simple groups. As the need and prospects for inductive mathematics blossom, the requirement to ensure the role of proof is properly founded remains undiminished. I shall look at the philosophical context with examples and then offer five bench-marking examples of the opportunities and challenges we face.

Estimation of sparse Bayesian networks using a score-based approach :: 15:10 Fri 30 Apr 10 :: School Board Room :: Dr Jessica Kasza :: University of Copenhagen

Abstract: The estimation of Bayesian networks given high-dimensional data sets, with more variables than there are observations, has been the focus of much recent research. These structures provide a flexible framework for the representation of the conditional independence relationships of a set of variables, and can be particularly useful in the estimation of genetic regulatory networks given gene expression data. In this talk, I will discuss some new research on learning sparse networks, that is, networks with many conditional independence restrictions, using a score-based approach. In the case of genetic regulatory networks, such sparsity reflects the view that each gene is regulated by relatively few other genes. The presented approach allows prior information about the overall sparsity of the underlying structure to be included in the analysis, as well as the incorporation of prior knowledge about the connectivity of individual nodes within the network.
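The score-based approach of the talk is not reproduced here; as a hedged illustration of a related but different (undirected) sparse conditional-independence estimator, the graphical lasso penalises entries of the Gaussian precision matrix, whose zeros encode conditional independencies:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

# Not the score-based method of the talk: the graphical lasso estimates a
# sparse *undirected* Gaussian graphical model; zeros in the precision
# matrix correspond to conditional independence between variables.
rng = np.random.default_rng(4)
n, d = 60, 10                       # more observations than variables here
X = rng.standard_normal((n, d))
X[:, 1] += 0.8 * X[:, 0]            # induce one dependency

model = GraphicalLassoCV().fit(X)
precision = model.precision_
print(np.round(precision, 2))       # near-zero off-diagonals ~ independencies
```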
Interpolation of complex data using spatio-temporal compressive sensing :: 13:00 Fri 28 May 10 :: Santos Lecture Theatre :: A/Prof Matthew Roughan :: School of Mathematical Sciences, University of Adelaide

Abstract: Many complex datasets suffer from missing data, and interpolating these missing elements is a key task in data analysis. Moreover, it is often the case that we see only a linear combination of the desired measurements, not the measurements themselves. For instance, in network management it is easy to count the traffic on a link, but harder to measure the end-to-end flows. Additionally, typical interpolation algorithms treat either the spatial or the temporal components of data separately, but many real datasets have strong spatio-temporal structure that we would like to exploit in reconstructing the missing data. In this talk I will describe a novel reconstruction algorithm that exploits concepts from the growing area of compressive sensing to solve all of these problems and more. The approach works so well on Internet traffic matrices that we can obtain a reasonable reconstruction with as much as 98% of the original data missing.
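The spatio-temporal algorithm of the talk is more elaborate; as a hedged sketch of the underlying idea of exploiting low-rank structure, here is a minimal hard-impute style matrix completion on synthetic data (all names and values here are illustrative):

```python
import numpy as np

# Hedged sketch: fill missing entries of a low-rank matrix by alternating
# truncated-SVD projection with re-imposing the observed entries. This is
# not the talk's spatio-temporal algorithm, which also handles linear
# measurements of the data rather than direct observations.
def complete_lowrank(M, mask, rank=2, n_iter=200):
    X = np.where(mask, M, 0.0)                         # initialise missing as 0
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
        X = np.where(mask, M, X_low)                   # keep observed entries fixed
    return X

rng = np.random.default_rng(7)
true = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 40))  # rank 2
mask = rng.random(true.shape) > 0.5                    # observe ~50% of entries
est = complete_lowrank(true, mask)
print(np.abs(est - true)[~mask].max())                 # typically a small error
```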
A variance constraining ensemble Kalman filter: how to improve forecasts using climatic data of unobserved variables :: 15:10 Fri 28 May 10 :: Santos Lecture Theatre :: A/Prof Georg Gottwald :: The University of Sydney

Abstract: Data assimilation aims to solve one of the fundamental problems of numerical weather prediction: estimating the optimal state of the atmosphere given a numerical model of the dynamics and sparse, noisy observations of the system. A standard tool for attacking this filtering problem is the Kalman filter. We consider the problem when only partial observations are available. In particular, we consider the situation where the observational space consists of variables which are directly observable with known observational error, and of variables of which only the climatic variance and mean are given. We derive the corresponding Kalman filter in a variational setting. We analyse the variance constraining Kalman filter (VCKF) for a simple linear toy model and determine its range of optimal performance. We explore the variance constraining Kalman filter in an ensemble transform setting for the Lorenz-96 system, and show that incorporating the information on the variance of some unobservable variables can improve the skill and also increase the stability of the data assimilation procedure. Using methods from dynamical systems theory, we then study systems where the unobserved variables evolve deterministically but chaotically on a fast time scale. This is joint work with Lewis Mitchell and Sebastian Reich.

Meteorological drivers of extreme bushfire events in southern Australia :: 15:10 Fri 2 Jul 10 :: Benham Lecture Theatre :: Prof Graham Mills :: Centre for Australian Weather and Climate Research, Melbourne

Abstract: Bushfires occur regularly during summer in southern Australia, but only a few of these fires become iconic due to their effects, either in terms of loss of life or economic and social cost. Such events include Black Friday (1939), the Hobart fires (1967), Ash Wednesday (1983), the Canberra bushfires (2003), and most recently Black Saturday in February 2009. In most of these events the weather of the day was statistically extreme in terms of heat, (low) humidity and wind speed, and in terms of antecedent drought. There are a number of reasons for conducting post-event analyses of the meteorology of these events. One is to identify any meteorological circulation systems or dynamic processes occurring on those days that might not be widely or hitherto recognised, to document these, and to develop new forecast or guidance products. The understanding and prediction of such features can be used in the short term to assist in effective management of fires and the safety of firefighters, and in the medium range to assist preparedness for the onset of extreme conditions. The results of such studies can also be applied to simulations of future climates to assess likely changes in the frequency of the most extreme fire weather events, and their documentary records provide a resource that can be used for advanced training purposes. In addition, particularly for events further in the past, revisiting these events using reanalysis data sets and contemporary NWP models can provide insights unavailable at the time of the events. Over the past few years the Bushfire CRC's Fire Weather and Fire Danger project in CAWCR has studied the mesoscale meteorology of a number of major fire events, including the days of Ash Wednesday 1983, the Dandenong Ranges fire in January 1997, the Canberra fires and the Alpine breakout fires in January 2003, the Lower Eyre Peninsula fires in January 2005 and the Boorabbin fire in December 2007 - January 2008. Various aspects of these studies are described in this talk, including the structures of dry cold frontal wind changes, the particular character of the cold fronts associated with the most damaging fires in southeastern Australia, and some aspects of how the vertical temperature and humidity structure of the atmosphere may affect the fire weather at the surface. These studies reveal much about these major events, but also suggest future research directions, and some of these will be discussed.

Mathematica Seminar :: 15:10 Wed 28 Jul 10 :: Engineering Annex 314 :: Kim Schriefer :: Wolfram Research

Abstract: The Mathematica Seminars 2010 offer an opportunity to experience the applicability, ease of use and advancements of Mathematica 7 in education and academic research. These seminars will highlight the latest directions in technical computing with Mathematica, and the impact this technology has across a wide range of academic fields, from maths, physics and biology to finance, economics and business. Those not yet familiar with Mathematica will gain an overview of the system and discover the breadth of applications it can address, while experts will get first-hand experience with recent advances in Mathematica such as parallel computing, digital image processing, point-and-click palettes, built-in curated data, and courseware examples.

Counting lattice points in polytopes and geometry :: 15:10 Fri 6 Aug 10 :: Napier G04 :: Dr Paul Norbury :: University of Melbourne

Abstract: Counting lattice points in polytopes arises in many areas of pure and applied mathematics. A basic counting problem is this: how many different ways can one give change of 1 dollar into 5, 10, 20 and 50 cent coins? This problem counts lattice points in a tetrahedron, and if there must also be exactly 10 coins then it counts lattice points in a triangle. The number of lattice points in polytopes can be used to measure the robustness of a computer network, or in statistics to test independence of characteristics of samples. I will describe the general structure of lattice point counts and the difficulty of calculations.
I will then describe a particular lattice point count in which the structure simplifies considerably, allowing one to calculate easily. I will spend a brief time at the end describing how this is related to the moduli space of Riemann surfaces.

A spatial-temporal point process model for fine resolution multisite rainfall data from Roma, Italy :: 14:10 Thu 19 Aug 10 :: Napier G04 :: A/Prof Paul Cowpertwait :: Auckland University of Technology

Abstract: A point process rainfall model is further developed that has storm origins occurring in space-time according to a Poisson process. Each storm origin has a random radius, so that storms occur as circular regions in two-dimensional space, where the storm radii are taken to be independent exponential random variables. Storm origins are of random type z, where z follows a continuous probability distribution. Cell origins occur in a further spatial Poisson process and have arrival times that follow a Neyman-Scott point process. Cell origins have random radii, so that cells form discs in two-dimensional space. Statistical properties up to third order are derived and used to fit the model to 10 min series taken from 23 sites across the Roma region, Italy. Distributional properties of the observed annual maxima are compared to equivalent values sampled from series that are simulated using the fitted model. The results indicate that the model will be of use in urban drainage projects for the Roma region.

A classical construction for simplicial sets revisited :: 13:10 Fri 27 Aug 10 :: Ingkarni Wardli B20 (Suite 4) :: Dr Danny Stevenson :: University of Glasgow

Abstract: Simplicial sets became popular in the 1950s as a combinatorial way to study the homotopy theory of topological spaces. They are more robust than the older notion of simplicial complexes, which were introduced for the same purpose. In this talk, which will be as introductory as possible, we will review some classical functors arising in the theory of simplicial sets, some well known, some not so well known. We will re-examine the proof of an old theorem of Kan in light of these functors. We will try to keep all jargon to a minimum.

Simultaneous confidence band and hypothesis test in generalised varying-coefficient models :: 15:05 Fri 10 Sep 10 :: Napier LG28 :: Prof Wenyang Zhang :: University of Bath

Abstract: Generalised varying-coefficient models (GVC) are very important models. There is a considerable literature addressing these models; however, most of it is devoted to estimation procedures. In this talk, I will systematically investigate statistical inference for GVC, including confidence bands as well as hypothesis tests. I will show the asymptotic distribution of the maximum discrepancy between the estimated functional coefficient and the true functional coefficient. I will compare different approaches for the construction of confidence bands and hypothesis tests. Finally, the proposed statistical inference methods are used to analyse data from China about contraceptive use there, which leads to some interesting findings.

Principal Component Analysis Revisited :: 15:10 Fri 15 Oct 10 :: Napier G04 :: Assoc. Prof Inge Koch :: University of Adelaide

Abstract: Since the beginning of the 20th century, Principal Component Analysis (PCA) has been an important tool in the analysis of multivariate data.
The principal components summarise data in fewer than the original number of variables without losing essential information, and thus allow a split of the data into signal and noise components. PCA is a linear method, based on elegant mathematical theory. The increasing complexity of data, together with the emergence of fast computers in the later parts of the 20th century, has led to a renaissance of PCA. The growing numbers of variables (in particular, high-dimensional low sample size problems), non-Gaussian data, and functional data (where the data are curves) are posing exciting challenges to statisticians, and have resulted in new research which extends the classical theory. I begin with the classical PCA methodology and illustrate the challenges presented by the complex data that we are now able to collect. The main part of the talk focuses on extensions of PCA: the duality of PCA and the Principal Coordinates of Multidimensional Scaling, Sparse PCA, and consistency results relating to principal components as the dimension grows. We will also look at newer developments such as Principal Component Regression and Supervised PCA, nonlinear PCA and Functional PCA.

Queues with skill based routing under FCFS–ALIS regime :: 15:10 Fri 11 Feb 11 :: B17 Ingkarni Wardli :: Prof Gideon Weiss :: The University of Haifa, Israel

Abstract: We consider a system where jobs of several types are served by servers of several types, and a bipartite graph between server types and job types describes feasible assignments. This is a common situation in manufacturing, call centres with skill-based routing, matching of parent and child in adoption, matching in kidney transplants, etc. We consider the case of a first come first served (FCFS) policy: jobs are assigned to the first available feasible server in order of their arrivals. We consider two types of policies for assigning customers to idle servers: random assignment, and assignment to the longest idle server (ALIS). We survey some results for four different situations. For a loss system we find conditions for reversibility and insensitivity. For a manufacturing-type system, in which there is enough capacity to serve all jobs, we discuss a product form solution and waiting times. For an infinite matching model, in which an infinite sequence of customers of i.i.d. types and an infinite sequence of servers of i.i.d. types are matched first come first served, we obtain a product form stationary distribution for this system, which we use to calculate matching rates. For a call centre model with overload and abandonments we make some plausible observations. This talk surveys joint work with Ivo Adan, Rene Caldentey, Cor Hurkens, Ed Kaplan and Damon Wischik, as well as work by Jeremy Visschers, Rishy Talreja and Ward Whitt.

Real analytic sets in complex manifolds I: holomorphic closure dimension :: 13:10 Fri 4 Mar 11 :: Mawson 208 :: Dr Rasul Shafikov :: University of Western Ontario

Abstract: After a quick introduction to real and complex analytic sets, I will discuss possible notions of complex dimension of real sets, and then discuss a structure theorem for the holomorphic closure dimension, which is defined as the dimension of the smallest complex analytic germ containing the real germ.

Real analytic sets in complex manifolds II: complex dimension :: 13:10 Fri 11 Mar 11 :: Mawson 208 :: Dr Rasul Shafikov :: University of Western Ontario

Abstract:
Given a real analytic set R, denote by A the subset of R of points through which there is a nontrivial complex variety contained in R, i.e., A consists of points in R of positive complex dimension. I will discuss the structure of the set A.

Classification for high-dimensional data :: 15:10 Fri 1 Apr 11 :: Conference Room Level 7, Ingkarni Wardli :: Associate Prof Inge Koch :: The University of Adelaide

Abstract: For two-class classification problems, Fisher's discriminant rule performs well in many scenarios provided the dimension, d, is much smaller than the sample size n. As the dimension increases, Fisher's rule may no longer be adequate, and can perform as poorly as random guessing. In this talk we look at new ways of overcoming this poor performance for high-dimensional data by suitably modifying Fisher's rule, and in particular we describe the 'Features Annealed Independence Rule' (FAIR) of Fan and Fan (2008) and a rule based on canonical correlation analysis. I describe some theoretical developments, and also show analyses of data which illustrate the performance of these modified rules.

Modelling of Hydrological Persistence in the Murray-Darling Basin for the Management of Weirs :: 12:10 Mon 4 Apr 11 :: 5.57 Ingkarni Wardli :: Aiden Fisher :: University of Adelaide

Abstract: The lakes and weirs along the lower Murray River in Australia are aggregated and considered as a sequence of five reservoirs. A seasonal Markov chain model for the system will be implemented, and a stochastic dynamic program will be used to find optimal release strategies, in terms of expected monetary value (EMV), for the competing demands on the water resource, given the stochastic nature of inflows. Matrix analytic methods will be used to analyse the system further, and in particular to enable the full distribution of first passage times between any groups of states to be calculated. The full distribution of first passage times can be used to provide a measure of the risk associated with optimal EMV strategies, such as conditional value at risk (CVaR). The sensitivity of the model, and of the risk, to changing rainfall scenarios will be investigated. The effect of decreasing the level of discretisation of the reservoirs will be explored. Also, the use of matrix analytic methods facilitates the use of hidden states to allow for hydrological persistence in the inflows. Evidence for hydrological persistence of inflows to the lower Murray system, and the effect of making allowance for this, will be discussed.

Comparison of Spectral and Wavelet Estimation of the Dynamic Linear System of a Wave Energy Device :: 12:10 Mon 2 May 11 :: 5.57 Ingkarni Wardli :: Mohd Aftar :: University of Adelaide

Abstract: Renewable energy has become one of the main issues of our time. The implications of fossil and nuclear energy, along with their limited sources, have led researchers and industry to seek other sources of renewable energy, for example hydro energy, wind energy and wave energy. In this seminar, I will talk about spectral estimation and wavelet estimation of a linear dynamical system of motion for a heaving buoy wave energy device. The spectral estimates were based on the Fourier transform, while the wavelet estimate was based on the wavelet transform. Comparisons between two spectral estimates and a wavelet estimate of the amplitude response operator (ARO) for the dynamical system of the wave energy device show that the wavelet estimate of the ARO is much better for data with and without noise.
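As a hedged sketch of the Fourier-based half of this comparison (on a synthetic signal, not the buoy data), a periodogram recovers the dominant frequency of a noisy oscillation:

```python
import numpy as np
from scipy.signal import periodogram

# Illustrative spectral estimate: a noisy sinusoid standing in for a
# heaving-buoy displacement record (signal and rates are made up).
rng = np.random.default_rng(5)
fs = 10.0                                   # sampling frequency, Hz
t = np.arange(0, 100, 1 / fs)
x = np.sin(2 * np.pi * 0.5 * t) + 0.5 * rng.standard_normal(t.size)

freqs, psd = periodogram(x, fs=fs)          # Fourier-based power spectrum
print(freqs[np.argmax(psd)])                # peak near the true 0.5 Hz
```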
On parameter estimation in population models :: 15:10 Fri 6 May 11 :: 715 Ingkarni Wardli :: Dr Joshua Ross :: The University of Adelaide

Abstract: Essential to applying a mathematical model to a real-world application is calibrating the model to data. Methods for calibrating population models often become computationally infeasible when the population size (more generally, the size of the state space) becomes large, or when other complexities, such as time-dependent transition rates or sampling error, are present. Here we will discuss the use of diffusion approximations to perform estimation in several scenarios, with successively reduced assumptions: (i) under the assumption of stationarity (the process has been evolving for a very long time with constant parameter values); (ii) transient dynamics (the assumption of stationarity is invalid, and thus only constant parameter values may be assumed); and (iii) time-inhomogeneous chains (the parameters may vary with time) and accounting for observation error (a sample of the true state is observed).

When statistics meets bioinformatics :: 12:10 Wed 11 May 11 :: Napier 210 :: Prof Patty Solomon :: School of Mathematical Sciences

Abstract: Bioinformatics is a new field of research which encompasses mathematics, computer science, biology, medicine and the physical sciences. It has arisen from the need to handle and analyse the vast amounts of data being generated by the new genomics technologies. The interface of these disciplines used to be information-poor, but is now information-mega-rich, and statistics plays a central role in processing this information and making it intelligible. In this talk, I will describe a published bioinformatics study which claimed to have developed a simple test for the early detection of ovarian cancer from a blood sample. The US Food and Drug Administration was on the verge of approving the test kits for market in 2004 when demonstrated flaws in the study design and analysis led to their withdrawal. We are still waiting for an effective early biomarker test for ovarian cancer.

Statistical challenges in molecular phylogenetics :: 15:10 Fri 20 May 11 :: Mawson Lab G19 lecture theatre :: Dr Barbara Holland :: University of Tasmania

Abstract: This talk will give an introduction to the ways that mathematics and statistics are used in the inference of evolutionary (phylogenetic) trees. Taking a model-based approach to estimating the relationships between species has proven to be enormously effective; however, some tricky statistical challenges remain. The increasingly plentiful amount of DNA sequence data is a boon, but it is also throwing a spotlight on some of the shortcomings of current best practice, particularly in how we (1) assess the reliability of our phylogenetic estimates, and (2) choose appropriate models. This talk will give a general introduction to this area of research and will also highlight some results from two of my recent PhD students.

Permeability of heterogeneous porous media - experiments, mathematics and computations :: 15:10 Fri 27 May 11 :: B.21 Ingkarni Wardli :: Prof Patrick Selvadurai :: Department of Civil Engineering and Applied Mechanics, McGill University

Abstract: Permeability is a key parameter important to a variety of applications in geological engineering and in the environmental geosciences. The conventional definition of Darcy flow enables the estimation of permeability at different levels of detail.
This lecture will focus on the measurement of the surface permeability characteristics of a large cuboidal block of Indiana Limestone, using a surface permeameter. The paper discusses the theoretical developments, the solution of the resulting triple integral equations and the associated computational treatments that enable the mapping of the near-surface permeability of the cuboidal region. These data, combined with a kriging procedure, are used to develop results for the permeability distribution at the interior of the cuboidal region. Upon verification of the absence of dominant pathways for fluid flow through the cuboidal region, estimates are obtained for the "Effective Permeability" of the cuboid using estimates proposed by Wiener, Landau and Lifschitz, King, Matheron, Journel et al., Dagan and others. The results of these estimates are compared with the geometric mean, derived from the computational estimates.  Optimal experimental design for stochastic population models 15:00 Wed 1 Jun 11 :: 7.15 Ingkarni Wardli :: Dr Dan Pagendam :: CSIRO, BrisbaneAbstract... Markov population processes are popular models for studying a wide range of phenomena including the spread of disease, the evolution of chemical reactions and the movements of organisms in population networks (metapopulations). Our ability to use these models effectively can be limited by our knowledge about parameters, such as disease transmission and recovery rates in an epidemic. Recently, there has been interest in devising optimal experimental designs for stochastic models, so that practitioners can collect data in a manner that maximises the precision of maximum likelihood estimates of the parameters for these models. I will discuss some recent work on optimal design for a variety of population models, beginning with some simple one-parameter models where the optimal design can be obtained analytically, and moving on to more complicated multi-parameter models in epidemiology that involve latent states and non-exponentially distributed infectious periods. For these more complex models, the optimal design must be arrived at using computational methods, and we rely on a Gaussian diffusion approximation to obtain analytical expressions for Fisher's information matrix, which is at the heart of most optimality criteria in experimental design. I will outline a simple cross-entropy algorithm that can be used to obtain optimal designs for these models. We will also explore the improvements in experimental efficiency when using the optimal design over some simpler designs, such as the design where observations are spaced equidistantly in time.  Inference and optimal design for percolation and general random graph models (Part I) 09:30 Wed 8 Jun 11 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of CambridgeAbstract... The problem of the optimal arrangement of the nodes of a random weighted graph is discussed in this workshop. The nodes of the graphs under study are fixed, but their edges are random and established according to a so-called edge-probability function. This function is assumed to depend on the weights attributed to the pairs of graph nodes (or the distances between them) and a statistical parameter. It is the purpose of experimentation to make inference on the statistical parameter and thus to extract as much information about it as possible. We also distinguish between two different experimentation scenarios: progressive and instructive designs.
We adopt a utility-based Bayesian framework to tackle the optimal design problem for random graphs of this kind. Simulation-based optimisation methods, mainly Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We study the optimal design problem for inference based on partial observations of random graphs by employing a data augmentation technique. We prove that infinitely growing or diminishing node configurations asymptotically represent the worst node arrangements. We also obtain the exact solution to the optimal design problem for proximity (geometric) graphs and a numerical solution for graphs with threshold edge-probability functions. We consider inference and optimal design problems for finite clusters from bond percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both numerical and analytical results for these graphs. We introduce inner-outer plots by deleting some of the lattice nodes and show that the "mostly populated" designs are not necessarily optimal in the case of incomplete observations under both progressive and instructive design scenarios. Some of the obtained results may generalise to other lattices.  Inference and optimal design for percolation and general random graph models (Part II) 10:50 Wed 8 Jun 11 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of CambridgeAbstract... The problem of the optimal arrangement of the nodes of a random weighted graph is discussed in this workshop. The nodes of the graphs under study are fixed, but their edges are random and established according to a so-called edge-probability function. This function is assumed to depend on the weights attributed to the pairs of graph nodes (or the distances between them) and a statistical parameter. It is the purpose of experimentation to make inference on the statistical parameter and thus to extract as much information about it as possible. We also distinguish between two different experimentation scenarios: progressive and instructive designs. We adopt a utility-based Bayesian framework to tackle the optimal design problem for random graphs of this kind. Simulation-based optimisation methods, mainly Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We study the optimal design problem for inference based on partial observations of random graphs by employing a data augmentation technique. We prove that infinitely growing or diminishing node configurations asymptotically represent the worst node arrangements. We also obtain the exact solution to the optimal design problem for proximity (geometric) graphs and a numerical solution for graphs with threshold edge-probability functions. We consider inference and optimal design problems for finite clusters from bond percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both numerical and analytical results for these graphs. We introduce inner-outer plots by deleting some of the lattice nodes and show that the "mostly populated" designs are not necessarily optimal in the case of incomplete observations under both progressive and instructive design scenarios. Some of the obtained results may generalise to other lattices.
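For readers unfamiliar with the bond-percolation setting of the two workshop talks above, here is a minimal illustrative sketch (my own, not the speaker's code) of sampling the open cluster containing the origin on a finite window of $\mathbb{Z}^2$: each nearest-neighbour edge is declared open independently with probability p, and the cluster is recovered by breadth-first search.

```python
# Illustrative sketch only: the open cluster of the origin for bond
# percolation on a finite window of the integer lattice Z^2.
import random
from collections import deque

def origin_cluster(p, half_width=50, seed=1):
    rng = random.Random(seed)
    open_edge = {}  # lazily sampled edge states, keyed by a sorted site pair

    def is_open(u, v):
        key = (u, v) if u <= v else (v, u)
        if key not in open_edge:
            open_edge[key] = rng.random() < p   # edge open with probability p
        return open_edge[key]

    cluster, queue = {(0, 0)}, deque([(0, 0)])
    while queue:                                # breadth-first search
        x, y = queue.popleft()
        for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if max(abs(nbr[0]), abs(nbr[1])) > half_width:
                continue                        # stay inside the window
            if nbr not in cluster and is_open((x, y), nbr):
                cluster.add(nbr)
                queue.append(nbr)
    return cluster

print(len(origin_cluster(p=0.45)))  # p < 1/2 is subcritical on Z^2
```

Below the critical probability 1/2 for the square lattice, the origin's cluster is almost surely finite, which is the finite-cluster regime that the inference problems above concern.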
Quantitative proteomics: data analysis and statistical challenges 10:10 Thu 30 Jun 11 :: 7.15 Ingkarni Wardli :: Dr Peter Hoffmann :: Adelaide Proteomics Centre  Introduction to functional data analysis with applications to proteomics data 11:10 Thu 30 Jun 11 :: 7.15 Ingkarni Wardli :: A/Prof Inge Koch :: School of Mathematical Sciences  Object oriented data analysis 14:10 Thu 30 Jun 11 :: 7.15 Ingkarni Wardli :: Prof Steve Marron :: The University of North Carolina at Chapel HillAbstract... Object Oriented Data Analysis is the statistical analysis of populations of complex objects. In the special case of Functional Data Analysis, these data objects are curves, where standard Euclidean approaches, such as principal components analysis, have been very successful. Recent developments in medical image analysis motivate the statistical analysis of populations of more complex data objects which are elements of mildly non-Euclidean spaces, such as Lie Groups and Symmetric Spaces, or of strongly non-Euclidean spaces, such as spaces of tree-structured data objects. These new contexts for Object Oriented Data Analysis create several potentially large new interfaces between mathematics and statistics. Even in situations where Euclidean analysis makes sense, there are statistical challenges because of the High Dimension Low Sample Size problem, which motivates a new type of asymptotics leading to non-standard mathematical statistics.  Object oriented data analysis of tree-structured data objects 15:10 Fri 1 Jul 11 :: 7.15 Ingkarni Wardli :: Prof Steve Marron :: The University of North Carolina at Chapel HillAbstract... The field of Object Oriented Data Analysis has made a lot of progress on the statistical analysis of the variation in populations of complex objects. A particularly challenging example of this type is populations of tree-structured objects. Deep challenges arise, which involve a marriage of ideas from statistics, geometry, and numerical analysis, because the space of trees is strongly non-Euclidean in nature. These challenges, together with three completely different approaches to addressing them, are illustrated using a real data example, where each data point is the tree of blood arteries in one person's brain.  Modelling computer network topologies through optimisation 12:10 Mon 1 Aug 11 :: 5.57 Ingkarni Wardli :: Mr Rhys Bowden :: University of AdelaideAbstract... The core of the Internet is made up of many different computers (called routers) in many different interconnected networks, owned and operated by many different organisations. A popular and important field of study in the past has been "network topology": for instance, understanding which routers are connected to which other routers, or which networks are connected to which other networks; that is, studying and modelling the connection structure of the Internet. Previous work in this area has been plagued by unreliable or flawed experimental data and debate over appropriate models to use. The Internet Topology Zoo is a new source of network data created from the information that network operators make public. In order to better understand this body of network information we would like the ability to randomly generate network topologies resembling those in the zoo. Leveraging previous wisdom on networks produced as a result of optimisation processes, we propose a simple objective function based on possible economic constraints.
By changing the relative costs in the objective function we can change the form of the resulting networks, and we compare these optimised networks to a variety of networks found in the Internet Topology Zoo.  Spectra alignment/matching for the classification of cancer and control patients 12:10 Mon 8 Aug 11 :: 5.57 Ingkarni Wardli :: Mr Tyman Stanford :: University of AdelaideAbstract... Proteomic time-of-flight mass spectrometry produces a spectrum based on the peptides (chains of amino acids) in each patient's serum sample. The spectra contain data points for an x-axis (peptide weight) and a y-axis (peptide frequency/count/intensity). Our end goal is to differentiate cancer (and sub-types) and control patients using these spectra. Before we can do this, peaks in these data must be found, and peptides common to different spectra must be identified. The data are noisy because of biotechnological variation and calibration error; data points for different peptide weights may in fact represent the same peptide. An algorithm needs to be employed to find common peptides between spectra, as performing alignment 'by hand' is almost infeasible. We borrow methods suggested in the metabolomic gas chromatography-mass spectrometry literature and extend them for our purposes. In this talk I will go over the basic tenets of what we hope to achieve and the process towards this.  Horocycle flows at prime times 13:10 Wed 10 Aug 11 :: B.19 Ingkarni Wardli :: Prof Peter Sarnak :: Institute for Advanced Study, PrincetonAbstract... The distribution of individual orbits of unipotent flows in homogeneous spaces is well understood thanks to the work of Marina Ratner. It is conjectured that this property is preserved on restricting the times from the integers to primes, this being important in the study of prime numbers as well as in such dynamics. We review progress in understanding this conjecture, starting with Dirichlet (a finite system), Vinogradov (rotation of a circle or torus), Green and Tao (translation on a nilmanifold) and Ubis and Sarnak (horocycle flows in the semisimple case).  Dealing with the GC-content bias in second-generation DNA sequence data 15:10 Fri 12 Aug 11 :: Horace Lamb :: Prof Terry Speed :: Walter and Eliza Hall InstituteMedia...Abstract... The field of genomics is currently dealing with an explosion of data from so-called second-generation DNA sequencing machines. This is creating many challenges and opportunities for statisticians interested in the area. In this talk I will outline the technology and the data flood, and move on to one particular problem where the technology is used: copy-number analysis. There we find a novel bias, which, if not dealt with properly, can dominate the signal of interest. I will describe how we think about and summarize it, and go on to identify a plausible source of this bias, leading up to a way of removing it. Our approach makes use of the total variation metric on discrete measures, but apart from this, is largely descriptive.  Laplace's equation on multiply-connected domains 12:10 Mon 29 Aug 11 :: 5.57 Ingkarni Wardli :: Mr Hayden Tronnolone :: University of AdelaideAbstract... Various physical processes take place on multiply-connected domains (domains with some number of 'holes'), such as the stirring of a fluid with paddles or the extrusion of material from a die. These systems may be described by partial differential equations (PDEs).
However, standard numerical methods for solving PDEs are not well-suited to such examples: finite difference methods are difficult to implement on multiply-connected domains, especially when the boundaries are irregular or moving, while finite element methods are computationally expensive. In this talk I will describe a fast and accurate numerical method for solving certain PDEs on two-dimensional multiply-connected domains, considering Laplace's equation as an example. This method takes advantage of complex variable techniques which allow the solution to be found with spectral accuracy provided the boundary data are smooth. Other advantages over traditional numerical methods will also be discussed.  Alignment of time course gene expression data sets using Hidden Markov Models 12:10 Mon 5 Sep 11 :: 5.57 Ingkarni Wardli :: Mr Sean Robinson :: University of AdelaideAbstract... Time course microarray experiments allow for insight into biological processes by measuring gene expression over a time period of interest. This project is concerned with time course data from a microarray experiment conducted on a particular variety of grapevine over the development of the grape berries at a number of different vineyards in South Australia. The aim of the project is to construct a methodology for combining the data from the different vineyards in order to obtain more precise estimates of the underlying behaviour of the genes over the development process. A major issue in doing so is that the rate of development of the grape berries differs between vineyards. Hidden Markov models (HMMs) are a well-established methodology for modelling time series data in a number of domains and have previously been used for gene expression analysis. Modelling the grapevine data presents a unique modelling issue, namely the alignment of the expression profiles needed to combine the data from different vineyards. In this seminar, I will describe our problem, review HMMs, present an extension to HMMs and show some preliminary results from modelling the grapevine data.  Statistical analysis of metagenomic data from the microbial community involved in industrial bioleaching 12:10 Mon 19 Sep 11 :: 5.57 Ingkarni Wardli :: Ms Susana Soto-Rojo :: University of AdelaideAbstract... In the last two decades heap bioleaching has become established as a successful commercial option for recovering copper from low-grade secondary sulfide ores. Genetics-based approaches have recently been employed in the task of characterizing mineral processing bacteria. Data analysis is a key issue, and thus the implementation of adequate mathematical and statistical tools is of fundamental importance for drawing reliable conclusions. In this talk I will give an account of two specific problems that we have been working on: the first concerns experimental design, and the second the modelling of the composition and activity of the microbial consortium.  Can statisticians do better than random guessing? 12:10 Tue 20 Sep 11 :: Napier 210 :: A/Prof Inge Koch :: School of Mathematical SciencesAbstract... In the finance or credit risk area, a bank may want to assess whether a client is going to default, or be able to meet the repayments. In the assessment of benign or malignant tumours, a correct diagnosis is required. In these and similar examples, we make decisions based on data. The classical t-tests provide a tool for making such decisions.
However, many modern data sets have more variables than observations, and the classical rules may not be any better than random guessing. We consider Fisher's rule for classifying data into two groups, and show that it can break down for high-dimensional data. We then look at ways of overcoming some of the weaknesses of the classical rules, and show how these "post-modern" rules perform in practice.  T-duality via bundle gerbes I 13:10 Fri 23 Sep 11 :: B.19 Ingkarni Wardli :: Dr Raymond Vozzo :: University of AdelaideAbstract... In physics T-duality is a phenomenon which relates certain types of string theories to one another. From a topological point of view, one can view string theory as a duality between line bundles carrying a degree three cohomology class (the H-flux). In this talk we will use bundle gerbes to give a geometric realisation of the H-flux and explain how to construct the T-dual of a line bundle together with its T-dual bundle gerbe.  Estimating transmission parameters for the swine flu pandemic 15:10 Fri 23 Sep 11 :: 7.15 Ingkarni Wardli :: Dr Kathryn Glass :: Australian National UniversityMedia...Abstract... Following the onset of a new strain of influenza with pandemic potential, policy makers need specific advice on how fast the disease is spreading, who is at risk, and what interventions are appropriate for slowing transmission. Mathematical models play a key role in comparing interventions and identifying the best response, but models are only as good as the data that inform them. In the early stages of the 2009 swine flu outbreak, many researchers estimated transmission parameters - particularly the reproduction number - from outbreak data. These estimates varied, and were often biased by data collection methods, misclassification of imported cases, or early stochasticity in case numbers. I will discuss a number of the pitfalls in achieving good-quality parameter estimates from early outbreak data, and outline how best to avoid them. One of the early indications from swine flu data was that children were disproportionately responsible for disease spread. I will introduce a new method for estimating age-specific transmission parameters from both outbreak and seroprevalence data. This approach allows us to take account of empirical data on human contact patterns, and highlights the need to allow for asymmetric mixing matrices in modelling disease transmission between age groups. Applied to swine flu data from a number of different countries, it presents a consistent picture of higher transmission from children.  Statistical analysis of school-based student performance data 12:10 Mon 10 Oct 11 :: 5.57 Ingkarni Wardli :: Ms Jessica Tan :: University of AdelaideAbstract... Join me in the journey of being a statistician for 15 minutes of your day (if you are not already one) and experience the task of data cleaning without having to get your own hands dirty. Most of you may have sat the Basic Skills Tests when at school or know someone who currently has to do the NAPLAN (National Assessment Program - Literacy and Numeracy) tests. Tests like these assess student progress and can be used to accurately measure school performance.
In trying to answer the research question: "what conclusions about student progress and school performance can be drawn from NAPLAN data or data of a similar nature, using mathematical and statistical modelling and analysis techniques?", I have uncovered some interesting results about the data in my initial data analysis which I shall explain in this talk.  Statistical modelling for some problems in bioinformatics 11:10 Fri 14 Oct 11 :: B.17 Ingkarni Wardli :: Professor Geoff McLachlan :: The University of QueenslandMedia...Abstract... In this talk we consider some statistical analyses of data arising in bioinformatics. The problems include the detection of differential expression in microarray gene-expression data, the clustering of time-course gene-expression data and, lastly, the analysis of modern-day cytometric data. Extensions are considered to the procedures proposed for these three problems in McLachlan et al. (Bioinformatics, 2006), Ng et al. (Bioinformatics, 2006), and Pyne et al. (PNAS, 2009), respectively. The latter references are available at http://www.maths.uq.edu.au/~gjm/.  On the role of mixture distributions in the modelling of heterogeneous data 15:10 Fri 14 Oct 11 :: 7.15 Ingkarni Wardli :: Prof Geoff McLachlan :: University of QueenslandMedia...Abstract... We consider the role that finite mixture distributions have played in the modelling of heterogeneous data, in particular for clustering continuous data via mixtures of normal distributions. A very brief history is given starting with the seminal papers by Day and Wolfe in the sixties before the appearance of the EM algorithm. It was the publication in 1977 of the latter algorithm by Dempster, Laird, and Rubin that greatly stimulated interest in the use of finite mixture distributions to model heterogeneous data. This is because the fitting of mixture models by maximum likelihood is a classic example of a problem that is simplified considerably by the EM's conceptual unification of maximum likelihood estimation from data that can be viewed as being incomplete. In recent times there has been a proliferation of applications in which the number of experimental units n is comparatively small but the underlying dimension p is extremely large as, for example, in microarray-based genomics and other high-throughput experimental approaches. Hence there has been increasing attention given not only in bioinformatics and machine learning, but also in mainstream statistics, to the analysis of complex data in this situation where n is small relative to p. The latter part of the talk shall focus on the modelling of such high-dimensional data using mixture distributions.  T-duality via bundle gerbes II 13:10 Fri 21 Oct 11 :: B.19 Ingkarni Wardli :: Dr Raymond Vozzo :: University of AdelaideAbstract... In physics T-duality is a phenomenon which relates certain types of string theories to one another. From a topological point of view, one can view string theory as a duality between line bundles carrying a degree three cohomology class (the H-flux). In this talk we will use bundle gerbes to give a geometric realisation of the H-flux and explain how to construct the T-dual of a line bundle together with its T-dual bundle gerbe.  Likelihood-free Bayesian inference: modelling drug resistance in Mycobacterium tuberculosis 15:10 Fri 21 Oct 11 :: 7.15 Ingkarni Wardli :: Dr Scott Sisson :: University of New South WalesMedia...Abstract... 
A central pillar of Bayesian statistical inference is Monte Carlo integration, which is based on obtaining random samples from the posterior distribution. There are a number of standard ways to obtain these samples, provided that the likelihood function can be numerically evaluated. In the last 10 years, there has been a substantial push to develop methods that permit Bayesian inference in the presence of computationally intractable likelihood functions. These methods, termed "likelihood-free" or approximate Bayesian computation (ABC), are now being applied extensively across many disciplines. In this talk, I'll present a brief, non-technical overview of the ideas behind likelihood-free methods (see the minimal rejection-sampling sketch below). I'll motivate and illustrate these ideas through an analysis of the epidemiological fitness cost of drug resistance in Mycobacterium tuberculosis.  Metric geometry in data analysis 13:10 Fri 11 Nov 11 :: B.19 Ingkarni Wardli :: Dr Facundo Memoli :: University of AdelaideAbstract... The problem of object matching under invariances can be studied using certain tools from metric geometry. The central idea is to regard objects as metric spaces (or metric measure spaces). The type of invariance that one wishes to have in the matching is encoded by the choice of the metrics with which one endows the objects. The standard example is matching objects in Euclidean space under rigid isometries: in this situation one would endow the objects with the Euclidean metric. More general scenarios are possible in which the desired invariance cannot be reflected by the preservation of an ambient space metric. Several ideas due to M. Gromov are useful for approaching this problem. The Gromov-Hausdorff distance is a natural candidate for doing this. However, this metric leads to very hard combinatorial optimization problems and it is difficult to relate to previously reported practical approaches to the problem of object matching. I will discuss different variations of these ideas, and in particular will show a construction of an L^p version of the Gromov-Hausdorff metric, called the Gromov-Wasserstein distance, which is based on mass transportation ideas. This new metric directly leads to quadratic optimization problems on continuous variables with linear constraints. As a consequence of establishing several lower bounds, several invariants of metric measure spaces turn out to be quantitatively stable in the GW sense. These invariants provide practical tools for the discrimination of shapes and connect the GW ideas to a number of pre-existing approaches.  Applications of tropical geometry to groups and manifolds 13:10 Mon 21 Nov 11 :: B.19 Ingkarni Wardli :: Dr Stephan Tillmann :: University of QueenslandAbstract... Tropical geometry is a young field with multiple origins. These include the work of Bergman on logarithmic limit sets of algebraic varieties; the work of the Brazilian computer scientist Simon on discrete mathematics; the work of Bieri, Neumann and Strebel on geometric invariants of groups; and, of course, the work of Newton on polynomials. Even though there is still need for a unified foundation of the field, there is an abundance of applications of tropical geometry in group theory, combinatorics, computational algebra and algebraic geometry. In this talk I will give an overview of (what I understand to be) tropical geometry with a bias towards applications to group theory and low-dimensional topology.
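The rejection-sampling sketch promised in the likelihood-free abstract above is given here. It is my own illustration under simple assumptions (a toy Poisson model with a flat prior), not Dr Sisson's code or his tuberculosis analysis: draw parameters from the prior, forward-simulate data, and keep only the draws whose summary statistic lands close to the observed one, so the likelihood is never evaluated.

```python
# Minimal ABC rejection sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
observed = rng.poisson(lam=4.0, size=100)   # stand-in for real data
s_obs = observed.mean()                     # observed summary statistic

accepted = []
for _ in range(50_000):
    lam = rng.uniform(0.0, 10.0)            # draw a rate from a flat prior
    sim = rng.poisson(lam=lam, size=100)    # forward-simulate the model
    if abs(sim.mean() - s_obs) < 0.1:       # accept if summaries are close
        accepted.append(lam)

# The accepted draws approximate the posterior of lam given the summary.
print(np.mean(accepted), np.std(accepted))
```

Tightening the tolerance or choosing more informative summary statistics trades acceptance rate against the quality of the posterior approximation; applications such as the tuberculosis study replace the toy simulator with a mechanistic epidemiological model.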
Fluid flows in microstructured optical fibre fabrication 15:10 Fri 25 Nov 11 :: B.17 Ingkarni Wardli :: Mr Hayden Tronnolone :: University of AdelaideAbstract... Optical fibres are used extensively in modern telecommunications as they allow the transmission of information at high speeds. Microstructured optical fibres are a relatively new fibre design in which a waveguide for light is created by a series of air channels running along the length of the material. The flexibility of this design allows optical fibres to be created with adaptable (and previously unrealised) optical properties. However, the fluid flows that arise during fabrication can greatly distort the geometry, which can reduce the effectiveness of a fibre or render it useless. I will present an overview of the manufacturing process and highlight the difficulties. I will then focus on surface-tension-driven deformation of the macroscopic version of the fibre extruded from a reservoir of molten glass, occurring during fabrication, which will be treated as a two-dimensional Stokes flow problem. I will outline two different complex-variable numerical techniques for solving this problem, along with comparisons of the results, both to other models and to experimental data.  Noncritical holomorphic functions of finite growth on algebraic Riemann surfaces 13:10 Fri 3 Feb 12 :: B.20 Ingkarni Wardli :: Prof Franc Forstneric :: University of LjubljanaAbstract... Given a compact Riemann surface X and a point p in X, we construct a holomorphic function without critical points on the punctured (algebraic) Riemann surface R=X-p which is of finite order at the point p. In the case at hand this improves the 1967 theorem of Gunning and Rossi to the effect that every open Riemann surface admits a noncritical holomorphic function, but without any particular growth condition. (Joint work with Takeo Ohsawa.)  Forecasting electricity demand distributions using a semiparametric additive model 15:10 Fri 16 Mar 12 :: B.21 Ingkarni Wardli :: Prof Rob Hyndman :: Monash UniversityMedia...Abstract... Electricity demand forecasting plays an important role in short-term load allocation and long-term planning for future generation facilities and transmission augmentation. Planners must adopt a probabilistic view of potential peak demand levels; therefore, density forecasts (providing estimates of the full probability distributions of the possible future values of the demand) are more helpful than point forecasts, and are necessary for utilities to evaluate and hedge the financial risk accrued by demand variability and forecasting uncertainty. Electricity demand in a given season is subject to a range of uncertainties, including underlying population growth, changing technology, economic conditions, prevailing weather conditions (and the timing of those conditions), as well as the general randomness inherent in individual usage. It is also subject to some known calendar effects due to the time of day, day of week, time of year, and public holidays. I will describe a comprehensive forecasting solution designed to take all the available information into account, and to provide forecast distributions from a few hours ahead to a few decades ahead. We use semi-parametric additive models to estimate the relationships between demand and the covariates, including temperatures, calendar effects and some demographic and economic variables.
Then we forecast the demand distributions using a mixture of temperature simulation, assumed future economic scenarios, and residual bootstrapping. The temperature simulation is implemented through a new seasonal bootstrapping method with variable blocks. The model is being used by the state energy market operators and some electricity supply companies to forecast the probability distribution of electricity demand in various regions of Australia. It also underpinned the Victorian Vision 2030 energy strategy.  Spatial-point data sets and the Polya distribution 15:10 Fri 27 Apr 12 :: B.21 Ingkarni Wardli :: Dr Benjamin Binder :: The University of AdelaideMedia...Abstract... Spatial-point data sets, generated from a wide range of physical systems and mathematical models, can be analyzed by counting the number of objects in equally sized bins. We find that the bin counts are related to the Polya distribution. New indexes are developed which quantify whether or not a spatial data set is at its most evenly distributed state. Using three case studies (Lagrangian fluid particles in chaotic laminar flows, cellular automata agents in discrete models, and biological cells within colonies), we calculate the indexes and predict the spatial state of the system.  On the full holonomy group of special Lorentzian manifolds 13:10 Fri 25 May 12 :: Napier LG28 :: Dr Thomas Leistner :: University of AdelaideAbstract... The holonomy group of a semi-Riemannian manifold is defined as the group of parallel transports along loops based at a point. Its connected component, the 'restricted holonomy group', is given by restricting in this definition to contractible loops. The restricted holonomy can essentially be described by its Lie algebra, and many classification results are obtained in this way. In contrast, the 'full' holonomy group is a more global object and classification results are out of reach. In the talk I will describe recent results with H. Baum and K. Laerz (both HU Berlin) about the full holonomy group of so-called 'indecomposable' Lorentzian manifolds. I will explain a construction method that arises from analysing the effects on holonomy when dividing the manifold by the action of a properly discontinuous group of isometries, and present several examples of Lorentzian manifolds with disconnected holonomy groups.  A brief introduction to Support Vector Machines 12:30 Mon 4 Jun 12 :: 5.57 Ingkarni Wardli :: Mr Tyman Stanford :: University of AdelaideMedia...Abstract... Support Vector Machines (SVMs) are used in a variety of contexts for a range of purposes including regression, feature selection and classification. To convey the basic principles of SVMs, this presentation will focus on the application of SVMs to classification. Classification (or discrimination), in a statistical sense, is supervised model creation for the purpose of assigning future observations to a group or class. An example might be assigning healthy or diseased labels to patients based on p characteristics obtained from a blood sample. While SVMs are widely used, they are most successful when the data have one or more of the following properties: the data are not consistent with a standard probability distribution; the number of observations, n, used to create the model is less than the number of predictive features, p (the so-called small-n, big-p problem); or the decision boundary between the classes is likely to be non-linear in the feature space.
I will present a short overview of how SVMs are constructed, keeping in mind their purpose. As this presentation is part of a double post-grad seminar, I will keep it to a maximum of 15 minutes.  IGA Workshop: Dendroidal sets 14:00 Tue 12 Jun 12 :: Ingkarni Wardli B17 :: Dr Ittay Weiss :: University of the South PacificMedia...Abstract... A series of four 2-hour lectures by Dr Ittay Weiss. The theory of dendroidal sets was introduced by Moerdijk and Weiss in 2007 in the study of homotopy operads in algebraic topology. In the five years that have passed since then, several fundamental and highly non-trivial results have been established. For instance, it was established that dendroidal sets provide models for homotopy operads in a way that extends the Joyal-Lurie approach to homotopy categories. It can be shown that dendroidal sets provide new models in the study of n-fold loop spaces. And it has very recently been shown that dendroidal sets model all connective spectra in a way that extends the modelling of certain spectra by Picard groupoids. The aim of the lecture series will be to introduce the concepts mentioned above, present the elementary theory, and understand the scope of the results mentioned, as well as to discuss the potential for further applications. Sources for the course will include the article "From Operads to Dendroidal Sets" (in the AMS volume on mathematical foundations of quantum field theory, also on the arXiv) and the lecture notes by Ieke Moerdijk, "Simplicial methods for operads and algebraic geometry", which resulted from an advanced course given in Barcelona 3 years ago. No prior knowledge of operads will be assumed, nor any knowledge of homotopy theory more advanced than what is required for the definition of the fundamental group. The basics of the language of presheaf categories will be recalled quickly and used freely.  Comparison of spectral and wavelet estimators of transfer function for linear systems 12:10 Mon 18 Jun 12 :: B.21 Ingkarni Wardli :: Mr Mohd Aftar Abu Bakar :: University of AdelaideMedia...Abstract... We compare spectral and wavelet estimators of the response amplitude operator (RAO) of a linear system, with various input signals and added noise scenarios. The comparison is based on a model of a heaving buoy wave energy device (HBWED), which oscillates vertically as a single mode of vibration linear system. HBWEDs and other single degree of freedom wave energy devices, such as the oscillating wave surge convertors (OWSC), are currently deployed in the ocean, making single degree of freedom wave energy devices important systems to both model and analyse in some detail. However, the results of the comparison relate to any linear system. It was found that the wavelet estimator of the RAO offers no advantage over the spectral estimators if both input and response time series data are noise-free and long time series are available. If there is noise on only the response time series, only the wavelet estimator or the spectral estimator that uses the cross-spectrum of the input and response signals in the numerator should be used. For the case of noise on only the input time series, only the spectral estimator that uses the cross-spectrum in the denominator gives a sensible estimate of the RAO. If both the input and response signals are corrupted with noise, a modification to both the input and response spectrum estimates can provide a good estimator of the RAO. However, a combination of wavelet and spectral methods is introduced as an alternative RAO estimator.
The conclusions apply for autoregressive emulators of sea surface elevation, impulse, and pseudorandom binary sequence (PRBS) inputs. However, a wavelet estimator is needed in the special case of a chirp input, where the signal has a continuously varying frequency.  Geometry - algebraic to arithmetic to absolute 15:10 Fri 3 Aug 12 :: B.21 Ingkarni Wardli :: Dr James Borger :: Australian National UniversityMedia...Abstract... Classical algebraic geometry is about studying solutions to systems of polynomial equations with complex coefficients. In arithmetic algebraic geometry, one digs deeper and studies the arithmetic properties of the solutions when the coefficients are rational, or even integral. From the usual point of view, it's impossible to go deeper than this for the simple reason that no smaller rings are available - the integers have no proper subrings. In this talk, I will explain how an emerging subject, lambda-algebraic geometry, allows one to do just this, and why one might care.  AFL Tipping isn't all about numbers and stats...or is it..... 12:10 Mon 6 Aug 12 :: B.21 Ingkarni Wardli :: Ms Jessica Tan :: University of AdelaideMedia...Abstract... The result of an AFL game is always unpredictable - we all know that. Hence why we discuss the weekend's upsets and the local tipping competition as part of the "water-cooler and weekend" conversation on a Monday morning. Different people use various weird and wonderful techniques or criteria to predict the winning team. With readily available data, I will investigate and compare various strategies and define a measure of the hardness of a round (full acknowledgements will be made in my presentation). Hopefully this will help me for next year's tipping competition...  Air-cooled binary Rankine cycle performance with varying ambient temperature 12:10 Mon 13 Aug 12 :: B.21 Ingkarni Wardli :: Ms Josephine Varney :: University of AdelaideMedia...Abstract... Next month, I have to give a presentation in Reno, Nevada to a group of geologists, engineers and geophysicists. So, for this talk, I am going to ask you to pretend you know very little about maths (and perhaps a lot about geology) and give me some feedback on my proposed talk. The presentation itself is about the effect of air-cooling on geothermal power plant performance. Air-cooling is necessary for geothermal plays in dry areas, and ambient air temperature significantly affects the power output of air-cooled geothermal power plants. Hence, a method for determining the effect of ambient air temperature on geothermal power plants is presented. Using the ambient air temperature distribution from Leigh Creek, South Australia, this analysis shows that an optimally designed plant produces 6% more energy annually than a plant designed using the mean ambient temperature.  Continuous random walk models for solute transport in porous media 15:10 Fri 17 Aug 12 :: B.21 Ingkarni Wardli :: Prof Pavel Bedrikovetski :: The University of AdelaideMedia...Abstract... The classical diffusion (thermal conductivity) equation was derived from the master random walk equation and is parabolic. The main assumption was a probabilistic distribution of the jump length, while the jump time is constant. Allowing the jump time, along with the jump length, to follow a distribution adds a second time derivative to the averaged equations, but the equation becomes ... elliptic! Where should the extra initial condition come from? We discuss how to pose a well-posed flow problem, an exact 1d solution, and numerous engineering applications.
This is joint work with A. Shapiro and H. Yuan.  Star Wars Vs The Lord of the Rings: A Survival Analysis 12:10 Mon 27 Aug 12 :: B.21 Ingkarni Wardli :: Mr Christopher Davies :: University of AdelaideMedia...Abstract... Ever wondered whether you are more likely to die in the Galactic Empire or Middle Earth? Well this is the postgraduate seminar for you! I'll be attempting to answer this question using survival analysis, the statistical method of choice for investigating time-to-event data. Spoiler Warning: This talk will contain references to the deaths of characters in the above movie sagas.  Principal Component Analysis (PCA) 12:30 Mon 3 Sep 12 :: B.21 Ingkarni Wardli :: Mr Lyron Winderbaum :: University of AdelaideMedia...Abstract... Principal Component Analysis (PCA) has become something of a buzzword recently in a number of disciplines, including gene expression analysis and facial recognition. It is a classical, and fundamentally simple, concept that has been around since the early 1900s; its recent popularity is largely due to the need for dimension-reduction techniques in analysing the high-dimensional data that have become more common in the last decade, and to the availability of the computing power to implement them. I will explain the concept, prove a result, and give a couple of examples. The talk should be accessible to all disciplines as it (should?) only assume first-year linear algebra, the concept of a random variable, and covariance.  Geometric quantisation in the noncompact setting 13:10 Fri 14 Sep 12 :: Engineering North 218 :: Dr Peter Hochs :: Leibniz University, HannoverAbstract... Traditionally, the geometric quantisation of an action by a compact Lie group on a compact symplectic manifold is defined as the equivariant index of a certain Dirac operator. This index is a well-defined formal difference of finite-dimensional representations, since the Dirac operator is elliptic and the manifold and the group in question are compact. From a mathematical and physical point of view, however, it is very desirable to extend geometric quantisation to noncompact groups and manifolds. Defining a suitable index is much harder in the noncompact setting, but several interesting results in this direction have been obtained. I will review the difficulties connected to noncompact geometric quantisation, and some of the solutions that have been proposed so far, mainly in connection to the "quantisation commutes with reduction" principle. (An introduction to this principle will be given in my talk at the Colloquium on the same day.)  Epidemic models in socially structured populations: when are simple models too simple? 14:00 Thu 25 Oct 12 :: 5.56 Ingkarni Wardli :: Dr Lorenzo Pellis :: The University of WarwickAbstract... Both age and household structure are recognised as important heterogeneities affecting the epidemic spread of infectious pathogens, and many models now exist that include either or both forms of heterogeneity. However, different models may fit aggregate epidemic data equally well and nevertheless lead to different predictions of public health interest. I will here present an overview of stochastic epidemic models with increasing complexity in their social structure, focusing in particular on household models. For these models, I will present recent results about the definition and computation of the basic reproduction number R0 and its relationship with other threshold parameters.
Finally, I will use these results to compare models with no, either, or both age and household structure, with the aim of quantifying the conditions under which each form of heterogeneity is relevant, and therefore providing some criteria that can be used to guide model design for real-time predictions.  Numerical Free Probability: Computing Eigenvalue Distributions of Algebraic Manipulations of Random Matrices 15:10 Fri 2 Nov 12 :: B.20 Ingkarni Wardli :: Dr Sheehan Olver :: The University of SydneyMedia...Abstract... Suppose that the global eigenvalue distributions of two large random matrices A and B are known. It is a remarkable fact that, generically, the eigenvalue distributions of A + B and (if A and B are positive definite) A*B are uniquely determined from only the eigenvalue distributions of A and B; i.e., no information about eigenvectors is required. These operations on eigenvalue distributions are described by free probability theory. We construct a numerical toolbox that can efficiently and reliably calculate these operations with spectral accuracy, by exploiting the complex analytical framework that underlies free probability theory.  Spatiotemporally Autoregressive Partially Linear Models with Application to the Housing Price Indexes of the United States 12:10 Mon 12 Nov 12 :: B.21 Ingkarni Wardli :: Ms Dawlah Alsulami :: University of AdelaideMedia...Abstract... We propose a Spatiotemporal Autoregressive Partially Linear Regression (STARPLR) model for data observed irregularly over space and regularly in time. The model is capable of capturing possible nonlinearity and nonstationarity in space by allowing the coefficients to depend on location. We suggest a two-step procedure to estimate both the coefficients and the unknown function, which is readily implemented and can be computed even for large spatio-temporal data sets. As an illustration, we apply our model to analyse the 51 States' House Price Indexes (HPIs) in the USA.  Asymptotic independence of (simple) two-dimensional Markov processes 15:10 Fri 1 Mar 13 :: B.18 Ingkarni Wardli :: Prof Guy Latouche :: Universite Libre de BruxellesMedia...Abstract... The one-dimensional birth-and-death model is one of the basic processes in applied probability, but difficulties appear as one moves to higher dimensions. In the positive recurrent case, the situation is singularly simplified if the stationary distribution has product form. We investigate the conditions under which this property holds, and we show how to use this knowledge to find product-form approximations for otherwise unmanageable random walks. This is joint work with Masakiyo Miyazawa and Peter Taylor.  Twistor space for rolling bodies 12:10 Fri 15 Mar 13 :: Ingkarni Wardli B19 :: Prof Pawel Nurowski :: University of WarsawAbstract... We consider a configuration space of two solids rolling on each other without slipping or twisting, and identify it with an open subset U of R^5, equipped with a generic distribution D of 2-planes. We will discuss symmetry properties of the pair (U,D) and will mention that, in the case of the two solids being balls, when changing the ratio of their radii, the dimension of the group of local symmetries unexpectedly jumps from 6 to 14. This occurs for only one such ratio, and in that case the local group of symmetries of the pair (U,D) is maximal. It is maximal not only among the balls with various radii, but more generally among all (U,D)s corresponding to configuration spaces of two solids rolling on each other without slipping or twisting.
This maximal group is isomorphic to the split real form of the exceptional Lie group G2. In the remaining part of the talk we argue how to identify the space U from the pair (U,D) defined above with the bundle T of totally null real 2-planes over a 4-manifold equipped with a split signature metric. We call T the twistor bundle for rolling bodies. We show that the rolling distribution D can be naturally identified with an appropriately defined twistor distribution on T. We use this formulation of the rolling system to find more surfaces which, when rigidly rolling on each other without slipping or twisting, have a local group of symmetries isomorphic to the exceptional group G2.  How fast? Bounding the mixing time of combinatorial Markov chains 15:10 Fri 22 Mar 13 :: B.18 Ingkarni Wardli :: Dr Catherine Greenhill :: University of New South WalesMedia...Abstract... A Markov chain is a stochastic process which is "memoryless", in that the next state of the chain depends only on the current state, and not on how it got there. It is a classical result that an ergodic Markov chain has a unique stationary distribution. However, classical theory does not provide any information on the rate of convergence to stationarity. Around 30 years ago, the mixing time of a Markov chain was introduced to measure the number of steps required before the distribution of the chain is within some small distance of the stationary distribution. One reason why this is important is that researchers in areas such as physics and biology use Markov chains to sample from large sets of interest. Rigorous bounds on the mixing time of their chain allow these researchers to have confidence in their results. Bounding the mixing time of combinatorial Markov chains can be a challenge, and there are only a few approaches available. I will discuss the main methods and give examples for each (with pretty pictures).  Colour 12:10 Mon 13 May 13 :: B.19 Ingkarni Wardli :: Lyron Winderbaum :: University of AdelaideMedia...Abstract... Colour is a powerful tool in presenting data, but it can be tricky to choose just the right colours to represent your data honestly: do the colours used in your heatmap overemphasise the differences between particular values over others? Does your choice of colours overemphasise one category when they should be represented as equal? And so on. All these questions are fundamentally based in how we perceive colour. There has been a lot of research into how we perceive colour in the past century, with some interesting results. I will explain how a 'standard observer' was found empirically and used to develop an absolute reference standard for colour in 1931; how, although the common Red-Green-Blue representation of colour is useful and intuitive, distances between colours in this space do not reflect our perception of the differences between them; and how alternative, perceptually focused colour spaces were introduced in 1976. I will go on to explain how these results can be used to provide simple mechanisms for choosing colours that satisfy particular properties, such as being equally different from each other, or being linearly more different in sequence, or maintaining such properties when transferred to greyscale or viewed by a colourblind person.  Progress in the prediction of buoyancy-affected turbulence 15:10 Fri 17 May 13 :: B.18 Ingkarni Wardli :: Dr Daniel Chung :: University of MelbourneMedia...Abstract...
Buoyancy-affected turbulence represents a significant challenge to our understanding, yet it dominates many important flows that occur in the ocean and atmosphere. The presentation will highlight some recent progress in the characterisation, modelling and prediction of buoyancy-affected turbulence using direct and large-eddy simulations, along with implications for the characterisation of mixing in the ocean and the low-cloud feedback in the atmosphere. Specifically, direct numerical simulation data of stratified turbulence will be employed to highlight the importance of boundaries in the characterisation of turbulent mixing in the ocean. Then, a subgrid-scale model that captures the anisotropic character of stratified mixing will be developed for large-eddy simulation of buoyancy-affected turbulence. Finally, the subgrid-scale model is utilised to perform a systematic large-eddy simulation investigation of the archetypal low-cloud regimes, from which the link between the lower-tropospheric stability criterion and the cloud fraction is interpreted.  Coincidences 14:10 Mon 20 May 13 :: 7.15 Ingkarni Wardli :: A/Prof. Robb Muirhead :: School of Mathematical SciencesMedia...Abstract... This is a lighthearted (some would say content-free) talk about coincidences: those surprising concurrences of events that are often perceived as meaningfully related, with no apparent causal connection. Time permitting, it will touch on topics like: patterns in data, and the dangers of looking for patterns unspecified ahead of time and trying to "explain" them (e.g. post hoc subgroup analyses, cancer clusters, conspiracy theories); matching problems, e.g. the birthday problem and extensions; people who win a lottery more than once - how surprised should we really be, and what is the question we should be asking? When you become familiar with a new word and see it again soon afterwards, how surprised should you be? Caution: This is a shortened version of a talk that was originally prepared for a group of non-mathematicians and non-statisticians, so it's mostly non-technical. It probably does not contain anything you don't already know - it will be an amazing coincidence if it does! ## News matching "Spatial-point data sets and the Polya distribution"  ARC Grant successes The School of Mathematical Sciences has again had outstanding success in the ARC Discovery and Linkage Projects schemes. Congratulations to the following staff for their success in the Discovery Project scheme: Prof Nigel Bean, Dr Josh Ross, Prof Phil Pollett, Prof Peter Taylor, New methods for improving active adaptive management in biological systems, $255,000 over 3 years; Dr Josh Ross, New methods for integrating population structure and stochasticity into models of disease dynamics, $248,000 over three years; A/Prof Matt Roughan, Dr Walter Willinger, Internet traffic-matrix synthesis, $290,000 over three years; Prof Patricia Solomon, A/Prof John Moran, Statistical methods for the analysis of critical care data, with application to the Australian and New Zealand Intensive Care Database, $310,000 over 3 years; Prof Mathai Varghese, Prof Peter Bouwknegt, Supersymmetric quantum field theory, topology and duality, $375,000 over 3 years; Prof Peter Taylor, Prof Nigel Bean, Dr Sophie Hautphenne, Dr Mark Fackrell, Dr Malgorzata O'Reilly, Prof Guy Latouche, Advanced matrix-analytic methods with applications, $600,000 over 3 years.
Congratulations to the following staff for their success in the Linkage Project scheme: Prof Simon Beecham, Prof Lee White, A/Prof John Boland, Prof Phil Howlett, Dr Yvonne Stokes, Mr John Wells, Paving the way: an experimental approach to the mathematical modelling and design of permeable pavements, $370,000 over 3 years; Dr Amie Albrecht, Prof Phil Howlett, Dr Andrew Metcalfe, Dr Peter Pudney, Prof Roderick Smith, Saving energy on trains - demonstration, evaluation, integration, $540,000 over 3 years. Posted Fri 29 Oct 10.
ARC Future Fellowship success Associate Professor Zudi Lu has been awarded an ARC Future Fellowship. Associate Professor Lu, an Associate Professor in Statistics, will use the support provided by his Future Fellowship to further improve the theory and practice of econometric modelling of nonlinear spatial time series. Congratulations Zudi. Posted Thu 12 May 11.

## Publications matching "Spatial-point data sets and the Polya distribution"

Publications
Model dynamics across multiple length and time scales on a spatial multigrid
Roberts, Anthony John, Multiscale Modeling & Simulation: a SIAM Interdisciplinary Journal 7 (1525–1548) 2009
CleanBGP: Verifying the consistency of BGP data
Flavel, Ashley; Maennel, Olaf; Chiera, Belinda; Roughan, Matthew; Bean, Nigel, International Network Management Workshop, Orlando, Florida 19/10/08
Energy balanced data gathering in WSNs with grid topologies
Chen, J; Shen, Hong; Tian, Hui, 7th International Conference on Grid and Cooperative Computing, China 24/10/08
Data fusion without data fusion: localization and tracking without sharing sensitive information
Roughan, Matthew; Arnold, Jonathan, Information, Decision and Control 2007, Adelaide, Australia 12/02/07
Nonclassical symmetry solutions for reaction-diffusion equations with explicit spatial dependence
Hajek, Bronwyn; Edwards, M; Broadbridge, P; Williams, G, Nonlinear Analysis-Theory Methods & Applications 67 (2541–2552) 2007
Optimal multilinear estimation of a random vector under constraints of causality and limited memory
Howlett, P; Torokhti, Anatoli; Pearce, Charles, Computational Statistics & Data Analysis 52 (869–878) 2007
Statistics in review; Part 1: graphics, data summary and linear models
Moran, John; Solomon, Patricia, Critical Care and Resuscitation 9 (81–90) 2007
Experimental Design and Analysis of Microarray Data
Wilson, C; Tsykin, Anna; Wilkinson, Christopher; Abbott, C, chapter in Bioinformatics (Elsevier Ltd) 1–36, 2006
Is BGP update storm a sign of trouble: Observing the internet control and data planes during internet worms
Roughan, Matthew; Li, J; Bush, R; Mao, Z; Griffin, T, SPECTS 2006, Calgary, Canada 31/07/06
Watching data streams toward a multi-homed sink under routing changes introduced by a BGP beacon
Li, J; Bush, R; Mao, Z; Griffin, T; Roughan, Matthew; Stutzbach, D; Purpus, E, PAM2006, Adelaide, Australia 30/03/06
Data-recursive smoother formulae for partially observed discrete-time Markov chains
Elliott, Robert; Malcolm, William, Stochastic Analysis and Applications 24 (579–597) 2006
Flock generalized quadrangles and tetradic sets of elliptic quadrics of PG(3, q)
Barwick, Susan; Brown, Matthew; Penttila, T, Journal of Combinatorial Theory Series A 113 (273–290) 2006
Optimal linear estimation and data fusion
Elliott, Robert; Van Der Hoek, John, IEEE Transactions on Automatic Control 51 (686–689) 2006
Secure distributed data-mining and its application to large-scale network measurements
Roughan, Matthew; Zhang, Y, Computer Communication Review 36 (7–14) 2006
Optimal estimation of a random signal from partially missed data
Torokhti, Anatoli; Howlett, P; Pearce, Charles, EUSIPCO 2006, Florence, Italy 04/09/06
Estimating point-to-point and point-to-multipoint traffic matrices: An information-theoretic approach
Zhang, Y; Roughan, Matthew; Lund, C; Donoho, D, IEEE/ACM Transactions on Networking 13 (947–960) 2005
Optimal recursive estimation of raw data
Torokhti, Anatoli; Howlett, P; Pearce, Charles, Annals of Operations Research 133 (285–302) 2005
Self-similar "stagnation point" boundary layer flows with suction or injection
King, J; Cox, Stephen, Studies in Applied Mathematics 115 (73–107) 2005
Combining routing and traffic data for detection of IP forwarding anomalies
Roughan, Matthew; Griffin, T; Mao, M; Greenberg, A; Freeman, B, Sigmetrics - Performance 2004, New York, USA 12/06/04
IP forwarding anomalies and improving their detection using multiple data sources
Roughan, Matthew; Griffin, T; Mao, M; Greenberg, A; Freeman, B, SIGCOMM 2004, Oregon, USA 30/08/04
The data processing inequality and stochastic resonance
McDonnell, Mark; Stocks, N; Pearce, Charles; Abbott, Derek, Noise in Complex Systems and Stochastic Dynamics, Santa Fe, New Mexico, USA 01/06/03
Modelling persistence in annual Australian point rainfall
Whiting, Julian; Lambert, Martin; Metcalfe, Andrew, Hydrology and Earth System Sciences 7 (197–211) 2003
Stochastic resonance and data processing inequality
McDonnell, Mark; Stocks, N; Pearce, Charles; Abbott, Derek, Electronics Letters 39 (1287–1288) 2003
Estimation of point-to-multipoint demand matrices from SNMP link traffic
Roughan, Matthew, Internet Traffic Matrices Estimation (Intimate 2003), Paris, France 16/08/03
Evidence for a Differential Cellular Distribution of Inward Rectifier K Channels in the Rat Isolated Mesenteric Artery
Crane, Glenis Jayne; Walker, S; Dora, K; Garland, C, Journal of Vascular Research 40 (159–168) 2003
Resampling-based multiple testing for microarray data analysis (Invited discussion of paper by Ge, Dudoit and Speed)
Glonek, Garique; Solomon, Patricia, Test 12 (50–53) 2003
Inequalities for lattice constrained planar convex sets
Hillock, P; Scott, Paul, Journal of Inequalities in Pure and Applied Mathematics 3 (Art. 23, 1–10) 2002
Two-point formulae of Euler type
Matic, M; Pearce, Charles; Pecaric, Josip, The ANZIAM Journal 44 (221–245) 2002
A goodness-of-fit test for the uniform distribution based on a characterization
Morris, Kerwin; Szynal, D, Journal of Mathematical Sciences 106 (2719–2724) 2001
Best estimators of second degree for data analysis
Howlett, P; Pearce, Charles; Torokhti, Anatoli, ASMDA 2001, Compiegne, France 12/06/01
Modelling Service Time Distribution in Cellular Networks Using Phase-Type Service Distributions
Green, David; Asenstorfer, J; Jayasuriya, A,
Optimal successive estimation of observed data
Torokhti, Anatoli; Howlett, P; Pearce, Charles, International Conference on Optimization: Techniques and Applications (5th: 2001), Hong Kong, China 15/12/01
Statistical analysis of medical data: New developments - Book review
Solomon, Patricia, Biometrics 57 (327–328) 2001
Polya-type inequalities
Pearce, Charles; Pecaric, Josip; Varosanec, S, chapter in Handbook of analytic-computational methods in applied mathematics (Chapman & Hall/CRC) 465–505, 2000
Disease surveillance and data collection issues in epidemic modelling
Solomon, Patricia; Isham, V, Statistical Methods in Medical Research 9 (259–277) 2000
Inequalities for convex sets
Scott, Paul; Awyong, P-W, Journal of Inequalities in Pure and Applied Mathematics 1 (1–6) 2000
On two lemmas of Brown and Shepp having application to sum sets and fractals, III
Elezovic, N; Matic, M; Pearce, Charles; Pecaric, Josip, The ANZIAM Journal 41 (329–337) 2000