
Courses matching "+Operations +research"
Optimisation and Operations Research
Operations Research (OR) is the application of mathematical techniques and analysis to problem solving in business and industry, in particular to carrying out tasks such as scheduling more efficiently, or optimising the provision of services. OR is an interdisciplinary topic drawing on mathematical modelling, optimisation theory, game theory, decision analysis, statistics, and simulation to help make decisions in complex situations. This first course in OR concentrates on mathematical modelling and optimisation: for example, maximising production capacity or minimising risk. It focuses on linear optimisation problems involving both continuous and integer variables. The course covers a variety of mathematical techniques for linear optimisation, and the theory behind them. It will also explore the role of heuristics in such problems. Examples will be presented from important application areas, such as the emergency services, telecommunications, transportation, and manufacturing. Students will undertake a team project based on an actual Adelaide problem. Topics covered are: formulating a linear program; the Simplex Method; duality and complementary slackness; sensitivity analysis; an interior point method; alternative means to solve some linear and integer programs, such as primal-dual approaches; methods from a complete solution (such as Greedy Methods and Simulated Annealing); and methods from a partial solution (such as Dijkstra's shortest path algorithm and branch-and-bound).
More about this course... 
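Several of the topics above, shortest paths in particular, are easy to illustrate in code. Below is a minimal sketch of Dijkstra's shortest path algorithm; the road-network data are invented for illustration and are not part of the course.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a graph with non-negative
    edge weights, given as {node: [(neighbour, weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy network: travel times between four depots
graph = {"A": [("B", 4), ("C", 1)], "C": [("B", 2), ("D", 5)], "B": [("D", 1)]}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```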
Events matching "+Operations +research" 
American option pricing in a Markov chain market model 15:10 Fri 19 Mar, 2010 :: School Board Room :: Prof Robert Elliott :: School of Mathematical Sciences, University of Adelaide
This paper considers a model for asset pricing in a world where
the randomness is modeled by a Markov chain rather than Brownian motion.
In this paper we develop a theory of optimal stopping and related
variational inequalities for American options in this model. A version of
Saigal's Lemma is established and numerical algorithms developed.
This is joint work with John van der Hoek. 

Estimation of sparse Bayesian networks using a score-based approach 15:10 Fri 30 Apr, 2010 :: School Board Room :: Dr Jessica Kasza :: University of Copenhagen
The estimation of Bayesian networks given high-dimensional data sets, with more variables than there are observations, has been the focus of much recent research. These structures provide a flexible framework for the representation of the conditional independence relationships of a set of variables, and can be particularly useful in the estimation of genetic regulatory networks given gene expression data.
In this talk, I will discuss some new research on learning sparse networks, that is, networks with many conditional independence restrictions, using a score-based approach. In the case of genetic regulatory networks, such sparsity reflects the view that each gene is regulated by relatively few other genes. The presented approach allows prior information about the overall sparsity of the underlying structure to be included in the analysis, as well as the incorporation of prior knowledge about the connectivity of individual nodes within the network.


Interpolation of complex data using spatio-temporal compressive sensing 13:00 Fri 28 May, 2010 :: Santos Lecture Theatre :: A/Prof Matthew Roughan :: School of Mathematical Sciences, University of Adelaide
Many complex datasets suffer from missing data, and interpolating these missing elements is a key task in data analysis. Moreover, it is often the case that we see only a linear combination of the desired measurements, not the measurements themselves. For instance, in network management, it is easy to count the traffic on a link, but harder to measure the end-to-end flows. Additionally, typical interpolation algorithms treat either the spatial or the temporal components of the data separately, but many real datasets have strong spatio-temporal structure that we would like to exploit in reconstructing the missing data. In this talk I will describe a novel reconstruction algorithm that exploits concepts from the growing area of compressive sensing to solve all of these problems and more. The approach works so well on Internet traffic matrices that we can obtain a reasonable reconstruction with as much as 98% of the original data missing. 
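The talk's algorithm itself is not reproduced here, but the low-rank intuition behind traffic-matrix interpolation can be sketched in a few lines: if the matrix is (approximately) rank-1, missing entries can be recovered from the observed ones by alternating least squares. The matrix below is invented, and real traffic matrices are only approximately low-rank.

```python
def complete_rank1(m, observed, iters=200):
    """Fill missing entries of a matrix assumed rank-1 (m = u v^T),
    using alternating least squares on the observed entries only."""
    rows, cols = len(m), len(m[0])
    u, v = [1.0] * rows, [1.0] * cols
    for _ in range(iters):
        u = [sum(m[i][j] * v[j] for j in range(cols) if (i, j) in observed) /
             sum(v[j] ** 2 for j in range(cols) if (i, j) in observed)
             for i in range(rows)]
        v = [sum(m[i][j] * u[i] for i in range(rows) if (i, j) in observed) /
             sum(u[i] ** 2 for i in range(rows) if (i, j) in observed)
             for j in range(cols)]
    return [[u[i] * v[j] for j in range(cols)] for i in range(rows)]

# A rank-1 matrix with two hidden entries (None, excluded from 'observed')
m = [[4, 5, None], [8, 10, 12], [None, 15, 18]]
observed = {(i, j) for i in range(3) for j in range(3) if m[i][j] is not None}
filled = complete_rank1([[x if x is not None else 0.0 for x in row] for row in m],
                        observed)
print(round(filled[0][2], 3), round(filled[2][0], 3))  # recovers roughly 6 and 12
```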

Some thoughts on wine production 15:05 Fri 18 Jun, 2010 :: School Board Room :: Prof Zbigniew Michalewicz :: School of Computer Science, University of Adelaide
In the modern information era, managers (e.g. winemakers) recognize the competitive opportunities represented by decision-support tools which can provide significant cost savings and revenue increases for their businesses. Wineries make daily decisions on the processing of grapes, from harvest time (prediction of maturity of grapes, scheduling of equipment and labour, capacity planning, scheduling of crushers) through tank farm activities (planning and scheduling of wine and juice transfers on the tank farm) to packaging processes (bottling and storage activities). As such an operation is quite complex, the whole area is loaded with interesting OR-related issues. These include the issues of global vs. local optimization, the relationship between prediction and optimization, operating in dynamic environments, strategic vs. tactical optimization, and multi-objective optimization and trade-off analysis. During the talk we address the above issues; a few real-world applications will be shown and discussed to emphasize some of the presented material. 

A spatial-temporal point process model for fine resolution multi-site rainfall data from Roma, Italy 14:10 Thu 19 Aug, 2010 :: Napier G04 :: A/Prof Paul Cowpertwait :: Auckland University of Technology
A point process rainfall model is further developed that has storm origins occurring in space-time according to a Poisson process. Each storm origin has a random radius so that storms occur as circular regions in two-dimensional space, where the storm radii are taken to be independent exponential random variables. Storm origins are of random type z, where z follows a continuous probability distribution. Cell origins occur in a further spatial Poisson process and have arrival times that follow a Neyman-Scott point process. Cell origins have random radii so that cells form discs in two-dimensional space. Statistical properties up to third order are derived and used to fit the model to 10 min series taken from 23 sites across the Roma region, Italy. Distributional properties of the observed annual maxima are compared to equivalent values sampled from series that are simulated using the fitted model. The results indicate that the model will be of use in urban drainage projects for the Roma region.
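A toy version of the storm-arrival mechanism described above can be simulated with the standard library alone; the intensity and mean radius below are invented values for illustration, not the parameters fitted to the Roma data.

```python
import math, random

def poisson_count(mean):
    """Sample a Poisson count by summing unit-rate exponential gaps."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(1.0)
        if t > mean:
            return n
        n += 1

def simulate_storms(rate, mean_radius, width, height):
    """Storm origins form a Poisson process with the given intensity per
    unit area; each origin gets an independent exponential radius."""
    n = poisson_count(rate * width * height)
    return [(random.uniform(0, width), random.uniform(0, height),
             random.expovariate(1.0 / mean_radius)) for _ in range(n)]

def covered(x, y, storms):
    """Is site (x, y) inside at least one storm disc?"""
    return any(math.hypot(x - sx, y - sy) <= r for sx, sy, r in storms)

random.seed(42)
storms = simulate_storms(rate=0.05, mean_radius=2.0, width=20, height=20)
print(len(storms), covered(10, 10, storms))
```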


Compound and constrained regression analyses for EIV models 15:05 Fri 27 Aug, 2010 :: Napier LG28 :: Prof Wei Zhu :: State University of New York at Stony Brook
In linear regression analysis, randomness often exists in the independent variables, and the resulting models are referred to as errors-in-variables (EIV) models. The existing general EIV modeling framework, the structural model approach, is parametric and dependent on the usually unknown underlying distributions. In this work, we introduce a general nonparametric EIV modeling framework, the compound regression analysis, featuring an intuitive geometric representation and a one-to-one correspondence to the structural model. Properties, examples and further generalizations of this new modeling approach are discussed in this talk. 

Simultaneous confidence band and hypothesis test in generalised varying-coefficient models 15:05 Fri 10 Sep, 2010 :: Napier LG28 :: Prof Wenyang Zhang :: University of Bath
Generalised varying-coefficient models (GVC) are very important models, and there is a considerable body of literature addressing them. However, most of the existing literature is devoted to the estimation procedure. In this talk, I will systematically investigate statistical inference for GVC, which includes confidence bands as well as hypothesis tests. I will show the asymptotic distribution of the maximum discrepancy between the estimated functional coefficient and the true functional coefficient. I will compare different approaches for the construction of confidence bands and hypothesis tests. Finally, the proposed statistical inference methods are used to analyse data from China about contraceptive use there, which leads to some interesting findings. 

TBA 15:05 Fri 22 Oct, 2010 :: Napier LG28 :: Dr Andy Lian :: University of Adelaide


Arbitrage bounds for weighted variance swap prices 15:05 Fri 3 Dec, 2010 :: Napier LG28 :: Prof Mark Davis :: Imperial College London
This paper builds on earlier work by Davis and Hobson (Mathematical Finance, 2007) giving model-free (except for a 'frictionless markets' assumption) necessary and sufficient conditions for absence of arbitrage given a set of current-time put and call options on some underlying asset. Here we suppose that the prices of a set of put options, all maturing at the same time, are given and satisfy the conditions for consistency with absence of arbitrage. We now add a path-dependent option, specifically a weighted variance swap, to the set of traded assets and ask what are the conditions on its time-0 price under which consistency with absence of arbitrage is maintained. In the present work, we work under the extra modelling assumption that the underlying asset price process has continuous paths. In general, we find that there is always a non-trivial lower bound to the range of arbitrage-free prices, but only in the case of a corridor swap do we obtain a finite upper bound. In the case of, say, the vanilla variance swap, a finite upper bound exists when there are additional traded European options which constrain the left wing of the volatility surface in appropriate ways. 

Queues with skill-based routing under FCFS–ALIS regime 15:10 Fri 11 Feb, 2011 :: B17 Ingkarni Wardli :: Prof Gideon Weiss :: The University of Haifa, Israel
We consider a system where jobs of several types are served by servers of several types, and a bipartite graph between server types and job types describes feasible assignments. This is a common situation in manufacturing, call centers with skill-based routing, matching of parent-child in adoption, or matching in kidney transplants, etc. We consider the case of a first come first served policy: jobs are assigned to the first available feasible server in order of their arrivals. We consider two types of policies for assigning customers to idle servers: a random assignment, and assignment to the longest idle server (ALIS). We survey some results for four different situations:
- For a loss system we find conditions for reversibility and insensitivity.
- For a manufacturing type system, in which there is enough capacity to serve all jobs, we discuss a product form solution and waiting times.
- For an infinite matching model, in which an infinite sequence of customers of IID types and an infinite sequence of servers of IID types are matched according to first come first served, we obtain a product form stationary distribution for this system, which we use to calculate matching rates.
- For a call center model with overload and abandonments we make some plausible observations.
This talk surveys joint work with Ivo Adan, Rene Caldentey, Cor Hurkens, Ed Kaplan and Damon Wischik, as well as work by Jeremy Visschers, Rishy Talreja and Ward Whitt.
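The FCFS-ALIS assignment rule itself is simple to state in code. The sketch below is a deterministic toy (the job types, server names and bipartite compatibility map are invented), not the stationary analysis from the talk.

```python
def assign_fcfs_alis(jobs, idle_servers, feasible):
    """FCFS-ALIS sketch: each arriving job takes the longest-idle feasible
    server; idle_servers is ordered longest-idle first. feasible maps a
    job type to the set of server types that can process it."""
    assignments, idle = [], list(idle_servers)
    for job in jobs:
        for i, (name, stype) in enumerate(idle):
            if stype in feasible[job]:
                assignments.append((job, name))
                del idle[i]
                break
        else:
            assignments.append((job, None))  # no feasible idle server: job waits
    return assignments

feasible = {"x": {"A"}, "y": {"A", "B"}}        # bipartite compatibility
idle = [("s1", "B"), ("s2", "A"), ("s3", "A")]  # ordered longest idle first
print(assign_fcfs_alis(["y", "x", "x", "y"], idle, feasible))
# [('y', 's1'), ('x', 's2'), ('x', 's3'), ('y', None)]
```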


Bioinspired computation in combinatorial optimization: algorithms and their computational complexity 15:10 Fri 11 Mar, 2011 :: 7.15 Ingkarni Wardli :: Dr Frank Neumann :: The University of Adelaide
Bioinspired computation methods, such as evolutionary algorithms and ant colony optimization, are being applied successfully to complex engineering and combinatorial optimization problems. The computational complexity analysis of algorithms of this type has significantly increased the theoretical understanding of these successful algorithms. In this talk, I will give an introduction to this field of research and present some important results that we achieved for problems from combinatorial optimization. These results can also be found in my recent textbook "Bioinspired Computation in Combinatorial Optimization: Algorithms and Their Computational Complexity". 

Classification for high-dimensional data 15:10 Fri 1 Apr, 2011 :: Conference Room Level 7 Ingkarni Wardli :: Associate Prof Inge Koch :: The University of Adelaide
For two-class classification problems Fisher's discriminant rule performs well in many scenarios provided the dimension, d, is much smaller than the sample size n. As the dimension increases, Fisher's rule may no longer be adequate, and can perform as poorly as random guessing. In this talk we look at new ways of overcoming this poor performance for high-dimensional data by suitably modifying Fisher's rule, and in particular we describe the 'Features Annealed Independence Rule' (FAIR) of Fan and Fan (2008) and a rule based on canonical correlation analysis. I describe some theoretical developments, and also show analyses of data which illustrate the performance of these modified rules. 
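A diagonal-covariance ("independence") variant of Fisher's rule, in the spirit of (but much simpler than) the FAIR rule mentioned above, can be sketched as follows. The two-class data are invented, and there is no feature annealing or selection here.

```python
def fit_independence_rule(class0, class1):
    """Fit a diagonal-covariance Fisher-type rule: ignore off-diagonal
    covariance, which is what makes such rules workable when the
    dimension is large relative to the sample size."""
    def mean(rows):
        n = len(rows)
        return [sum(col) / n for col in zip(*rows)]
    def var(rows, mu):
        n = len(rows)
        return [sum((x - m) ** 2 for x in col) / n
                for col, m in zip(zip(*rows), mu)]
    mu0, mu1 = mean(class0), mean(class1)
    pooled = [(a + b) / 2 or 1e-12  # guard against zero variance
              for a, b in zip(var(class0, mu0), var(class1, mu1))]
    def classify(x):
        score = sum((xi - (m0 + m1) / 2) * (m1 - m0) / v
                    for xi, m0, m1, v in zip(x, mu0, mu1, pooled))
        return 1 if score > 0 else 0
    return classify

class0 = [[0, 0], [1, 0], [0, 1], [1, 1]]
class1 = [[4, 4], [5, 4], [4, 5], [5, 5]]
classify = fit_independence_rule(class0, class1)
print(classify([1, 1]), classify([5, 5]))  # 0 1
```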

On parameter estimation in population models 15:10 Fri 6 May, 2011 :: 715 Ingkarni Wardli :: Dr Joshua Ross :: The University of Adelaide
Essential to applying a mathematical model to a realworld application is
calibrating the model to data. Methods for calibrating population models
often become computationally infeasible when the population size (more generally
the size of the state space) becomes large, or other complexities such as
timedependent transition rates, or sampling error, are present. Here we
will discuss the use of diffusion approximations to perform estimation in several
scenarios, with successively reduced assumptions: (i) under the assumption
of stationarity (the process had been evolving for a very long time with
constant parameter values); (ii) transient dynamics (the assumption of stationarity
is invalid, and thus only constant parameter values may be assumed); and, (iii)
timeinhomogeneous chains (the parameters may vary with time) and accounting
for observation error (a sample of the true state is observed). 

Optimal experimental design for stochastic population models 15:00 Wed 1 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Dan Pagendam :: CSIRO, Brisbane
Markov population processes are popular models for studying a wide range of
phenomena including the spread of disease, the evolution of chemical reactions
and the movements of organisms in population networks (metapopulations). Our
ability to use these models effectively can be limited by our knowledge about
parameters, such as disease transmission and recovery rates in an epidemic.
Recently, there has been interest in devising optimal experimental designs for
stochastic models, so that practitioners can collect data in a manner that
maximises the precision of maximum likelihood estimates of the parameters for
these models. I will discuss some recent work on optimal design for a variety
of population models, beginning with some simple one-parameter models where the
optimal design can be obtained analytically and moving on to more complicated
multi-parameter models in epidemiology that involve latent states and
non-exponentially distributed infectious periods. For these more complex
models, the optimal design must be arrived at using computational methods and we
rely on a Gaussian diffusion approximation to obtain analytical expressions for
Fisher's information matrix, which is at the heart of most optimality criteria
in experimental design. I will outline a simple cross-entropy algorithm that
can be used for obtaining optimal designs for these models. We will also
explore the improvements in experimental efficiency when using the optimal
design over some simpler designs, such as the design where observations are
spaced equidistantly in time. 
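The cross-entropy idea mentioned above is generic: sample candidate designs from a parametric distribution, keep the best few, and refit the distribution to them. A minimal sketch, with the talk's Fisher-information objective replaced by an invented toy objective:

```python
import random, statistics

def cross_entropy_maximise(score, dim, iters=40, pop=200, elite=20):
    """Generic cross-entropy search: sample candidates from an independent
    Gaussian, keep the elite fraction, refit the Gaussian to the elite."""
    mu = [0.0] * dim
    sigma = [1.0] * dim
    for _ in range(iters):
        cands = [[random.gauss(m, s) for m, s in zip(mu, sigma)]
                 for _ in range(pop)]
        cands.sort(key=score, reverse=True)
        top = cands[:elite]
        mu = [statistics.mean(c[i] for c in top) for i in range(dim)]
        sigma = [statistics.stdev(c[i] for c in top) + 1e-6 for i in range(dim)]
    return mu

random.seed(0)
# Toy objective standing in for a design criterion: maximised at x = 3
best = cross_entropy_maximise(lambda x: -(x[0] - 3.0) ** 2, dim=1)
print(round(best[0], 2))  # close to the maximiser x = 3
```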

Priority queueing systems with random switchover times and generalisations of the Kendall-Takács equation 16:00 Wed 1 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge
In this talk I will review existing analytical results for priority queueing
systems with Poisson incoming flows, general service times and a single server
which needs some (random) time to switch between requests of different priority.
Specifically, I will discuss analytical results for the busy period and workload
of such systems with a special structure of switchover times.
The results related to the busy period can be seen as generalisations of the
famous Kendall-Tak\'{a}cs functional equation for $M/G/1$:
being formulated in terms of LaplaceStieltjes transform, they represent systems
of functional recurrent equations.
I will present a methodology and algorithms of their numerical solution;
the efficiency of these algorithms is achieved by acceleration of the numerical
procedure of solving the classical Kendall-Tak\'{a}cs equation.
At the end I will identify open problems with regard to such systems; these open
problems are mainly related to the modelling of switchover times.
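For the classical M/G/1 queue, the Kendall-Takács equation for the Laplace-Stieltjes transform B(s) of the busy period, B(s) = S(s + λ(1 - B(s))) with S the service-time LST, can be solved numerically by direct fixed-point iteration. A minimal sketch; exponential service times are chosen purely for illustration.

```python
def busy_period_lst(s, lam, service_lst, iters=200):
    """Solve the Kendall-Takacs equation B(s) = S(s + lam*(1 - B(s)))
    by fixed-point iteration; service_lst is the Laplace-Stieltjes
    transform of the service-time distribution."""
    b = 0.0
    for _ in range(iters):
        b = service_lst(s + lam * (1.0 - b))
    return b

lam, mu = 0.5, 1.0                 # M/M/1 with load rho = 0.5
exp_lst = lambda s: mu / (mu + s)  # LST of Exp(mu) service times

# Mean busy period via a numerical derivative of B at 0:
# E[busy period] = E[S]/(1 - rho) = 2 here
h = 1e-6
mean_busy = (1.0 - busy_period_lst(h, lam, exp_lst)) / h
print(round(mean_busy, 3))  # 2.0
```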


Inference and optimal design for percolation and general random graph models (Part I) 09:30 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge
The problem of optimal arrangement of nodes of a random weighted graph is discussed in this workshop. The nodes of the graphs under study are fixed, but their edges are random and established according to the so-called edge-probability function. This function is assumed to depend on the weights attributed to the pairs of graph nodes (or distances between them) and a statistical parameter. It is the purpose of experimentation to make inference on the statistical parameter and thus to extract as much information about it as possible. We also distinguish between two different experimentation scenarios: progressive and instructive designs.
We adopt a utility-based Bayesian framework to tackle the optimal design problem for random graphs of this kind. Simulation-based optimisation methods, mainly Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We study the optimal design problem for inference based on partial observations of random graphs by employing a data augmentation technique. We prove that infinitely growing or diminishing node configurations asymptotically represent the worst node arrangements. We also obtain the exact solution to the optimal design problem for proximity (geometric) graphs and a numerical solution for graphs with threshold edge-probability functions.
We consider inference and optimal design problems for finite clusters from bond percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both numerical and analytical results for these graphs. We introduce inner-outer plots by deleting some of the lattice nodes and show that the "mostly populated" designs are not necessarily optimal in the case of incomplete observations under both progressive and instructive design scenarios. Some of the obtained results may generalise to other lattices. 

Inference and optimal design for percolation and general random graph models (Part II) 10:50 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge
The problem of optimal arrangement of nodes of a random weighted graph is discussed in this workshop. The nodes of the graphs under study are fixed, but their edges are random and established according to the so-called edge-probability function. This function is assumed to depend on the weights attributed to the pairs of graph nodes (or distances between them) and a statistical parameter. It is the purpose of experimentation to make inference on the statistical parameter and thus to extract as much information about it as possible. We also distinguish between two different experimentation scenarios: progressive and instructive designs.
We adopt a utility-based Bayesian framework to tackle the optimal design problem for random graphs of this kind. Simulation-based optimisation methods, mainly Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We study the optimal design problem for inference based on partial observations of random graphs by employing a data augmentation technique. We prove that infinitely growing or diminishing node configurations asymptotically represent the worst node arrangements. We also obtain the exact solution to the optimal design problem for proximity (geometric) graphs and a numerical solution for graphs with threshold edge-probability functions.
We consider inference and optimal design problems for finite clusters from bond percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both numerical and analytical results for these graphs. We introduce inner-outer plots by deleting some of the lattice nodes and show that the "mostly populated" designs are not necessarily optimal in the case of incomplete observations under both progressive and instructive design scenarios. Some of the obtained results may generalise to other lattices. 

Quantitative proteomics: data analysis and statistical challenges 10:10 Thu 30 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Peter Hoffmann :: Adelaide Proteomics Centre


Introduction to functional data analysis with applications to proteomics data 11:10 Thu 30 Jun, 2011 :: 7.15 Ingkarni Wardli :: A/Prof Inge Koch :: School of Mathematical Sciences


Object oriented data analysis 14:10 Thu 30 Jun, 2011 :: 7.15 Ingkarni Wardli :: Prof Steve Marron :: The University of North Carolina at Chapel Hill
Object Oriented Data Analysis is the statistical analysis of populations of complex objects. In the special case of Functional Data Analysis, these data objects are curves, where standard Euclidean approaches, such as principal components analysis, have been very successful. Recent developments in medical image analysis motivate the statistical analysis of populations of more complex data objects which are elements of mildly non-Euclidean spaces, such as Lie Groups and Symmetric Spaces, or of strongly non-Euclidean spaces, such as spaces of tree-structured data objects. These new contexts for Object Oriented Data Analysis create several potentially large new interfaces between mathematics and statistics. Even in situations where Euclidean analysis makes sense, there are statistical challenges because of the High Dimension Low Sample Size problem, which motivates a new type of asymptotics leading to non-standard mathematical statistics. 

Object oriented data analysis of tree-structured data objects 15:10 Fri 1 Jul, 2011 :: 7.15 Ingkarni Wardli :: Prof Steve Marron :: The University of North Carolina at Chapel Hill
The field of Object Oriented Data Analysis has made a lot of
progress on the statistical analysis of the variation in populations
of complex objects. A particularly challenging example of this type
is populations of tree-structured objects. Deep challenges arise,
which involve a marriage of ideas from statistics, geometry, and
numerical analysis, because the space of trees is strongly
non-Euclidean in nature. These challenges, together with three
completely different approaches to addressing them, are illustrated
using a real data example, where each data point is the tree of blood
arteries in one person's brain. 

Estimating disease prevalence in hidden populations 14:05 Wed 28 Sep, 2011 :: B.18 Ingkarni Wardli :: Dr Amber Tomas :: The University of Oxford
Estimating disease prevalence in "hidden" populations such as injecting
drug users or men who have sex with men is an important public health
issue. However, traditional design-based estimation methods are
inappropriate because they assume that a list of all members of the
population is available from which to select a sample. Respondent Driven
Sampling (RDS) is a method developed over the last 15 years for sampling
from hidden populations. Similarly to snowball sampling, it leverages the
fact that members of hidden populations are often socially connected to
one another. Although RDS is now used around the world, there are several
common population characteristics which are known to cause estimates
calculated from such samples to be significantly biased. In this talk I'll
discuss the motivation for RDS, as well as some of the recent developments
in methods of estimation. 

Understanding the dynamics of event networks 15:00 Wed 28 Sep, 2011 :: B.18 Ingkarni Wardli :: Dr Amber Tomas :: The University of Oxford
Within many populations there are frequent communications between pairs of individuals. Such communications might be emails sent within a company, radio communications in a disaster zone or diplomatic communications between states. Often it is of interest to understand the factors that drive the observed patterns of such communications, or to study how these factors are changing over time. Communications can be thought of as events occurring on the edges of a network which connects individuals in the population. In this talk I'll present a model for such communications which uses ideas from social network theory to account for the complex correlation structure between events. Applications to the Enron email corpus and the dynamics of hospital ward transfer patterns will be discussed. 

Statistical modelling for some problems in bioinformatics 11:10 Fri 14 Oct, 2011 :: B.17 Ingkarni Wardli :: Professor Geoff McLachlan :: The University of Queensland
In this talk we consider some statistical analyses of data arising in
bioinformatics. The problems include the detection of differential
expression in microarray gene-expression data, the clustering of
time-course gene-expression data and, lastly, the analysis of
modern-day cytometric data. Extensions are considered to the procedures
proposed for these three problems in McLachlan et al. (Bioinformatics, 2006),
Ng et al. (Bioinformatics, 2006), and Pyne et al. (PNAS, 2009), respectively.
The latter references are available at http://www.maths.uq.edu.au/~gjm/. 

Evaluation and comparison of the performance of Australian and New Zealand intensive care units 14:10 Fri 25 May, 2012 :: 7.15 Ingkarni Wardli :: Dr Jessica Kasza :: The University of Adelaide
Recently, the Australian Government has emphasised the need for monitoring and comparing the performance of Australian hospitals. Evaluating the performance of intensive care units (ICUs) is of particular importance, given that the most severe cases are treated in these units. Indeed, ICU performance can be thought of as a proxy for the overall performance of a hospital. We compare the performance of the ICUs contributing to the Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database, the largest of its kind in the world, and identify those ICUs with unusual performance.
It is well-known that there are many statistical issues that must be accounted for in the evaluation of healthcare provider performance. Indicators of performance must be appropriately selected and estimated, investigators must adequately adjust for case-mix, statistical variation must be fully accounted for, and adjustment for multiple comparisons must be made. Our basis for dealing with these issues is the estimation of a hierarchical logistic model for the in-hospital death of each patient, with patients clustered within ICUs. Both patient- and ICU-level covariates are adjusted for, with a random intercept and random coefficient for the APACHE III severity score. Given that we expect most ICUs to have similar performance after adjustment for these covariates, we follow Ohlssen et al., JRSS A (2007), and estimate a null model that we expect the majority of ICUs to follow. This methodology allows us to rigorously account for the aforementioned statistical issues, and accurately identify those ICUs contributing to the ANZICS database that have comparatively unusual performance. This is joint work with Prof. Patty Solomon and Assoc. Prof. John Moran. 

Epidemiological consequences of household-based antiviral prophylaxis for pandemic influenza 14:10 Fri 8 Jun, 2012 :: 7.15 Ingkarni Wardli :: Dr Joshua Ross :: The University of Adelaide
Antiviral treatment offers a fast-acting alternative to vaccination. It is viewed as a first line of defence against pandemic influenza, protecting families and household members once infection has been detected. In clinical trials antiviral treatment has been shown to be efficacious in preventing infection, limiting disease and reducing transmission, yet its impact at containing the 2009 influenza A(H1N1)pdm outbreak was limited. I will describe some of our work, which attempts to understand this seeming discrepancy, through the development of a general model and computationally efficient methodology for studying household-based interventions.
This is joint work with Dr Andrew Black (Adelaide), and Prof. Matt Keeling and Dr Thomas House (Warwick, U.K.). 

Multiscale models of evolutionary epidemiology: where is HIV going? 14:00 Fri 19 Oct, 2012 :: Napier 205 :: Dr Lorenzo Pellis :: The University of Warwick
An important component of pathogen evolution at the population level is evolution within hosts, which can alter the composition of genotypes available for transmission as infection progresses. I will present a deterministic multiscale model, linking the within-host competition dynamics with the transmission dynamics at the population level. I will take HIV as an example of how this framework can help clarify the conflicting evolutionary pressures an infectious disease might be subject to. 

Epidemic models in socially structured populations: when are simple models too simple? 14:00 Thu 25 Oct, 2012 :: 5.56 Ingkarni Wardli :: Dr Lorenzo Pellis :: The University of Warwick
Both age and household structure are recognised as important heterogeneities affecting the epidemic spread of infectious pathogens, and many models exist nowadays that include either or both forms of heterogeneity. However, different models may fit aggregate epidemic data equally well and nevertheless lead to different predictions of public health interest. I will here present an overview of stochastic epidemic models with increasing complexity in their social structure, focusing in particular on household models. For these models, I will present recent results about the definition and computation of the basic reproduction number R0 and its relationship with other threshold parameters. Finally, I will use these results to compare models with no, either, or both age and household structure, with the aim of quantifying the conditions under which each form of heterogeneity is relevant and therefore providing some criteria that can be used to guide model design for real-time predictions. 

Exploration vs. Exploitation with Partially Observable Gaussian Autoregressive Arms 15:00 Mon 29 Sep, 2014 :: Engineering North N132 :: Julia Kuhn :: The University of Queensland & The University of Amsterdam
We consider a restless bandit problem with Gaussian autoregressive arms, where the state of an arm is only observed when it is played and the state-dependent reward is collected. Since arms are only partially observable, a good decision policy needs to account for the fact that information about the state of an arm becomes more and more obsolete while the arm is not being played. Thus, the decision maker faces a trade-off between exploiting those arms that are believed to be currently the most rewarding (i.e. those with the largest conditional mean), and exploring arms with a high conditional variance. Moreover, one would like the decision policy to remain tractable despite the infinite state space and also in systems with many arms. A policy that gives some priority to exploration is the Whittle index policy, for which we establish structural properties. These motivate a parametric index policy that is computationally much simpler than the Whittle index but can still outperform the myopic policy. Furthermore, we examine the many-arm behavior of the system under the parametric policy, identifying equations describing its asymptotic dynamics. Based on these insights we provide a simple heuristic algorithm to evaluate the performance of index policies; the latter is used to optimize the parametric index. 
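The trade-off described above can be sketched in a few lines: the belief about an idle AR(1) arm has a conditional mean that shrinks and a conditional variance that grows, and an index of the form mean + lam * std gives priority to exploration when lam > 0 (lam = 0 recovers the myopic policy). The linear index form and all parameter values are illustrative assumptions, not the Whittle index of the talk.

```python
import math
import random

def run_index_policy(phi, sigma, n_arms, horizon, lam, seed=1):
    """Parametric index policy for partially observable AR(1) arms
    (X_next = phi * X + noise, assuming |phi| < 1).  While an arm is idle its
    conditional mean shrinks and its conditional variance grows; the policy
    plays the arm maximising mean + lam * std, so lam = 0 is the myopic
    policy and lam > 0 gives some priority to exploration."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, sigma) for _ in range(n_arms)]   # true (hidden) states
    mean = [0.0] * n_arms
    var = [sigma ** 2 / (1.0 - phi ** 2)] * n_arms       # stationary variance
    total_reward = 0.0
    for _ in range(horizon):
        a = max(range(n_arms), key=lambda i: mean[i] + lam * math.sqrt(var[i]))
        total_reward += x[a]          # reward is the state of the played arm
        mean[a], var[a] = x[a], 0.0   # the played arm is observed exactly
        for i in range(n_arms):       # every arm evolves; idle beliefs go stale
            x[i] = phi * x[i] + rng.gauss(0.0, sigma)
            mean[i] = phi * mean[i]
            var[i] = phi ** 2 * var[i] + sigma ** 2
    return total_reward
```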

Modelling segregation distortion in multiparent crosses 15:00 Mon 17 Nov, 2014 :: 5.57 Ingkarni Wardli :: Rohan Shah (joint work with B. Emma Huang and Colin R. Cavanagh) :: The University of Queensland
Construction of high-density genetic maps has been made feasible by low-cost high-throughput genotyping technology; however, the process is still complicated by biological, statistical and computational issues. A major challenge is the presence of segregation distortion, which can be caused by selection, differences in fitness, or suppression of recombination due to introgressed segments from other species. Alien introgressions are common in major crop species, where they have often been used to introduce beneficial genes from wild relatives.
Segregation distortion causes problems at many stages of the map construction process, including assignment to linkage groups and estimation of recombination fractions. This can result in incorrect ordering and estimation of map distances. While discarding markers will improve the resulting map, it may result in the loss of genomic regions under selection or containing beneficial genes (in the case of introgression).
To correct for segregation distortion we model it explicitly in the estimation of recombination fractions. Previously proposed methods introduce additional parameters to model the distortion, with a corresponding increase in computing requirements. This poses difficulties for large, densely genotyped experimental populations. We propose a method imposing minimal additional computational burden which is suitable for high-density map construction in large multiparent crosses. We demonstrate its use by modelling the known Sr36 introgression in wheat for an eight-parent complex cross.


Topology Tomography with Spatial Dependencies 15:00 Tue 25 Nov, 2014 :: Engineering North N132 :: Darryl Veitch :: The University of Melbourne
There has been quite a lot of tomography inference work on measurement networks with a tree topology. Here observations are made, at the leaves of the tree, of `probes' sent down from the root and copied at each branch point. Inference can be performed based on loss or delay information carried by probes, and used to recover the loss parameters, delay parameters, or topology of the tree. In all of these, a strong assumption of spatial independence between links in the tree has been made in prior work. I will describe recent work on topology inference, based on loss measurement, which breaks that assumption. In particular I will introduce a new model class for loss with non-trivial spatial dependence, the `Jump Independent Models', which are well motivated, and prove that within this class the topology is identifiable. 

Queues and cooperative games 15:00 Fri 18 Sep, 2015 :: Ingkarni Wardli B21 :: Moshe Haviv :: Department of Statistics and the Federmann Center for the Study of Rationality, The Hebrew University
The area of cooperative game theory deals with models in which a number of individuals, called players, can form coalitions so as to improve the utility of their members. In many cases, the formation of the grand coalition is a natural result of some negotiation or bargaining procedure.
The main question then is how the players should split the gains due to their cooperation among themselves. Various solutions have been suggested, among them the Shapley value, the nucleolus and the core.
Servers in a queueing system can also join forces. For example, they can exchange service capacity among themselves or serve customers who originally sought service at their peers. The overall performance improves, and the question is how they should split the gains or, equivalently, how much each one of them needs to pay or be paid in order to cooperate with the others. Our major focus is on the core of the resulting cooperative game and on showing that in many queueing games the core is not empty.
Finally, customers who are served by the same server can also be viewed as players who form a grand coalition, now inflicting damage on each other in the form of additional waiting time. We show how cooperative game theory, specifically the Aumann-Shapley prices, leads to a way in which this damage can be attributed to individual customers or groups of customers. 
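As a concrete illustration of the solution concepts mentioned above, the sketch below computes the Shapley value of a toy cooperative game by averaging marginal contributions over all orders in which the grand coalition can be assembled. The three-server game and its coalition worths are invented for illustration, not taken from the talk.

```python
from itertools import permutations

def shapley_value(players, v):
    """Shapley value: average each player's marginal contribution
    v(S + p) - v(S) over all orders in which the grand coalition can be
    assembled.  v maps a frozenset of players to the worth of that coalition."""
    value = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            value[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: value[p] / len(orders) for p in players}

# Toy three-server game (worths invented for illustration): any pair of
# cooperating servers saves 2, the grand coalition saves 6.
worth_by_size = {0: 0.0, 1: 0.0, 2: 2.0, 3: 6.0}
v = lambda S: worth_by_size[len(S)]
# By symmetry each server's Shapley value is 6 / 3 = 2.0.
```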

Modelling Coverage in RNA Sequencing 09:00 Mon 9 Nov, 2015 :: Ingkarni Wardli 5.57 :: Arndt von Haeseler :: Max F Perutz Laboratories, University of Vienna
RNA sequencing (RNA-seq) is the method of choice for measuring the expression of RNAs in a cell population. In an RNA-seq experiment, sequencing the full length of larger RNA molecules requires fragmentation into smaller pieces to be compatible with the limited read lengths of most deep-sequencing technologies. Unfortunately, the issue of non-uniform coverage across a genomic feature has been a concern in RNA-seq and is attributed to preferences for certain fragments in steps of library preparation and sequencing. However, the disparity between the observed non-uniformity of read coverage in RNA-seq data and the assumption of expected uniformity raises the question of what read coverage profile one should expect across a transcript if there are no biases in the sequencing protocol. We propose a simple model of unbiased fragmentation in which we find that the expected coverage profile is not uniform and, in fact, depends on the ratio of fragment length to transcript length. To compare the non-uniformity proposed by our model with experimental data, we extended this simple model to incorporate empirical attributes matching those of the sequenced transcript in an RNA-seq experiment. In addition, we imposed an experimentally derived distribution on the frequency at which fragment lengths occur.
We used this model to compare our theoretical prediction with experimental data and with the uniform coverage model. If time permits, we will also discuss a potential application of our model. 
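A minimal version of the unbiased-fragmentation argument can be made explicit: if a fragment of length f starts uniformly at one of the L - f + 1 possible positions on a transcript of length L, the expected per-base coverage is flat in the interior but drops within f bases of either end. The function below (with illustrative parameter names, not those of the talk) computes this profile exactly.

```python
def expected_coverage(L, f):
    """Expected per-base coverage when a fragment of length f starts uniformly
    at one of the L - f + 1 possible positions of a transcript of length L.
    Position x is covered by starts s with max(0, x - f + 1) <= s <= min(x, L - f),
    so the profile is flat in the interior and drops within f bases of each end."""
    starts = L - f + 1
    profile = []
    for x in range(L):
        lo = max(0, x - f + 1)
        hi = min(x, starts - 1)
        profile.append((hi - lo + 1) / starts)
    return profile
```

For L = 100 and f = 10 the interior positions are covered with probability 10/91 while the first and last base are covered with probability only 1/91, already showing the predicted non-uniformity without any protocol bias.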

Use of epidemic models in optimal decision making 15:00 Thu 19 Nov, 2015 :: Ingkarni Wardli 5.57 :: Tim Kinyanjui :: School of Mathematics, The University of Manchester
Epidemic models have proved useful in a number of applications in epidemiology. In this work, I will present two areas in which we have used modelling to make informed decisions. Firstly, we have used an age-structured mathematical model to describe the transmission of Respiratory Syncytial Virus in a developed-country setting and to explore different vaccination strategies. We found that delayed infant vaccination has significant potential in reducing the number of hospitalisations in the most vulnerable group, and that most of the reduction is due to indirect protection. It also suggests that marked public health benefit could be achieved through an RSV vaccine delivered to age groups not seen as most at risk of severe disease. The second application is in the optimal design of studies aimed at the collection of household-stratified infection data. A design decision involves making a trade-off between the number of households to enrol and the sampling frequency. Two commonly used study designs are considered: cross-sectional and cohort. The search for an optimal design uses Bayesian methods to explore the joint parameter-design space, combined with the Shannon entropy of the posteriors to estimate the amount of information for each design. We found that for the cross-sectional designs the amount of information increases with the sampling intensity, while the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing data collection studies. 
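As a toy illustration of using Shannon entropy to rank designs (with a single probability parameter and a uniform prior standing in for the talk's joint parameter-design space), the sketch below evaluates posterior entropy on a grid; a more informative design concentrates the posterior and yields a lower entropy.

```python
import math

def posterior_entropy(n_trials, n_successes, grid=400):
    """Shannon (differential) entropy of the posterior over a single probability
    parameter, with a uniform prior and a binomial likelihood, evaluated on a
    grid over (0, 1).  Lower entropy means a more informative design."""
    qs = [(i + 0.5) / grid for i in range(grid)]
    w = [q ** n_successes * (1.0 - q) ** (n_trials - n_successes) for q in qs]
    z = sum(w) / grid                      # normalising constant
    dens = [x / z for x in w]              # posterior density on the grid
    return -sum(d * math.log(d + 1e-300) for d in dens) / grid
```

A design yielding 100 observations produces a lower posterior entropy than one yielding 10, which is the kind of comparison the entropy criterion formalises.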

A SemiMarkovian Modeling of Limit Order Markets 13:00 Fri 11 Dec, 2015 :: Ingkarni Wardli 5.57 :: Anatoliy Swishchuk :: University of Calgary
R. Cont and A. de Larrard (SIAM J. Financial Mathematics, 2013) introduced a tractable stochastic model for the dynamics of a limit order book, computing various quantities of interest such as the probability of a price increase or the diffusion limit of the price process. As suggested by empirical observations, we extend their framework to 1) arbitrary distributions for book event inter-arrival times (possibly non-exponential) and 2) the case where both the nature of a new book event and its corresponding inter-arrival time depend on the nature of the previous book event. We do so by resorting to Markov renewal processes to model the dynamics of the bid and ask queues. We keep analytical tractability via explicit expressions for the Laplace transforms of various quantities of interest. Our approach is justified and illustrated by calibrating the model to the five stocks Amazon, Apple, Google, Intel and Microsoft on June 21st, 2012. As in Cont and de Larrard, the bid-ask spread remains constant and equal to one tick, only the bid and ask queues are modelled (they are independent of each other and are reinitialised after a price change), and all orders have the same size. (This talk is based on joint work with Nelson Vadori (Morgan Stanley).) 
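The Markov renewal idea can be sketched as follows: the direction of each price change depends on the direction of the previous one through a two-state Markov chain, and the sojourn time before each change is drawn from a non-exponential distribution whose shape depends on that direction. The Weibull choice and all parameter values are illustrative assumptions, not the calibrated model of the talk.

```python
import random

def simulate_midprice(p_same, n_events, tick=0.01, seed=2):
    """Markov renewal sketch of a mid-price path: the direction of each price
    change (+1 or -1 tick) depends on the previous direction via a two-state
    Markov chain, and the sojourn time before each change is drawn from a
    non-exponential (Weibull) distribution whose shape depends on that
    direction.  Returns a list of (time, price) points."""
    rng = random.Random(seed)
    t, price, direction = 0.0, 100.0, 1
    path = [(t, price)]
    for _ in range(n_events):
        if rng.random() >= p_same:           # direction flips w.p. 1 - p_same
            direction = -direction
        shape = 1.5 if direction == 1 else 0.8
        t += rng.weibullvariate(1.0, shape)  # non-exponential inter-arrival
        price += direction * tick
        path.append((t, price))
    return path
```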

Mathematical modelling of the immune response to influenza 15:00 Thu 12 May, 2016 :: Ingkarni Wardli B20 :: Ada Yan :: University of Melbourne
The immune response plays an important role in the resolution of primary influenza infection and the prevention of subsequent infection in an individual. However, the relative roles of each component of the immune response in clearing infection, and the effects of interaction between components, are not well quantified.
We have constructed a model of the immune response to influenza based on data from viral interference experiments, where ferrets were exposed to two influenza strains within a short time period. The changes in the viral kinetics of the second virus due to the first virus depend on the strains used as well as the interval between exposures, enabling inference of the timing of innate and adaptive immune response components and of the role of cross-reactivity in resolving infection. Our model provides a mechanistic explanation for the observed variation in viruses' abilities to protect against subsequent infection at short inter-exposure intervals, either by delaying the second infection or inducing stochastic extinction of the second virus. It also explains the decrease in recovery time for the second infection when the two strains elicit cross-reactive cellular adaptive immune responses. To account for inter-subject as well as inter-virus variation, the model is formulated using a hierarchical framework. We will fit the model to experimental data using Markov chain Monte Carlo methods; quantification of the model will enable a deeper understanding of the effects of potential new treatments.


SIR epidemics with stages of infection 12:10 Wed 28 Sep, 2016 :: EM218 :: Matthieu Simon :: Universite Libre de Bruxelles
This talk is concerned with a stochastic model for the spread of an epidemic in a closed, homogeneously mixing population. The population is subdivided into three classes of individuals: the susceptibles, the infectives and the removed cases. In short, an infective remains infectious during a random period of time. While infected, it can contact all the susceptibles present, independently of the other infectives. At the end of the infectious period, it becomes a removed case and has no further part in the infection process.
We represent an infectious period as a set of different stages that an infective can go through before being removed. The transitions between stages are governed by either a Markov process or a semi-Markov process. In each stage, an infective makes contaminations at the epochs of a Poisson process with a stage-specific rate.
Our purpose is to derive closed expressions for a transform of different statistics related to the end of the epidemic, such as the final number of susceptibles and the area under the trajectories of all the infectives. The analysis is performed by using simple matrix analytic methods and martingale arguments. Numerical illustrations will be provided at the end of the talk. 
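In the Markov special case, the model above can be simulated directly with Gillespie's algorithm: each infective occupies one of k exponential infectious stages and, while infectious, contacts each susceptible at a fixed rate. The sketch below uses a single contact rate across stages and illustrative parameter values; the talk's setting is more general (stage-specific contact rates and semi-Markov stage transitions).

```python
import random

def gillespie_sir_stages(n, beta, stage_rates, seed=3):
    """Gillespie simulation of an SIR epidemic in which each infective passes
    through len(stage_rates) exponential infectious stages (stage j has
    progression rate stage_rates[j]) and, while infectious, contacts each
    susceptible at rate beta.  Returns the final number of susceptibles."""
    rng = random.Random(seed)
    k = len(stage_rates)
    S = n - 1
    stages = [0]                 # stage index of each current infective
    while stages:
        rate_inf = beta * S * len(stages)
        rate_prog = sum(stage_rates[j] for j in stages)
        if rng.random() * (rate_inf + rate_prog) < rate_inf:
            S -= 1
            stages.append(0)     # new infective enters the first stage
        else:
            r = rng.random() * rate_prog      # pick who progresses, weighted
            chosen = 0
            for i, j in enumerate(stages):
                r -= stage_rates[j]
                if r <= 0:
                    chosen = i
                    break
            if stages[chosen] == k - 1:
                stages.pop(chosen)            # end of last stage: removed
            else:
                stages[chosen] += 1
    return S
```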

Transmission Dynamics of Visceral Leishmaniasis: designing a test and treat control strategy 12:10 Thu 29 Sep, 2016 :: EM218 :: Graham Medley :: London School of Hygiene & Tropical Medicine
Visceral Leishmaniasis (VL) is targeted for elimination from the Indian subcontinent. Progress has been much better in some areas than others. Current control is based on earlier diagnosis and treatment and on insecticide spraying to reduce the density of the vector. There is a surprising dearth of specific information on the epidemiology of VL, which makes modelling more difficult. In this seminar, I describe a simple framework that gives some insight into the transmission dynamics. We conclude that the majority of infection comes from cases prior to diagnosis. If this is the case, then early diagnosis will be advantageous, but will require a test with high specificity. This is a paradox for many clinicians and public health workers, who tend to prioritise high sensitivity.
Medley, G.F., Hollingsworth, T.D., Olliaro, P.L. & Adams, E.R. (2015) Health-seeking, diagnostics and transmission in the control of visceral leishmaniasis. Nature 528, S102-S108 (3 December 2015). DOI: 10.1038/nature16042 

Stochastic Modelling of Urban Structure 11:10 Mon 20 Nov, 2017 :: Engineering Nth N132 :: Mark Girolami :: Imperial College London, and The Alan Turing Institute
Urban systems are complex in nature and comprise a large number of individuals that act according to utility, a measure of net benefit pertaining to preferences. The actions of individuals give rise to an emergent behaviour, creating the so-called urban structure that we observe. In this talk, I develop a stochastic model of urban structure to formally account for uncertainty arising from the complex behaviour. We further use this stochastic model to infer the components of a utility function from observed urban structure. This is a more powerful modelling framework than the ubiquitous discrete choice models, which are of limited use for complex systems in which the overall preferences of individuals are difficult to ascertain. We model urban structure as a realization of a Boltzmann distribution that is the invariant distribution of a related stochastic differential equation (SDE) describing the dynamics of the urban system. Our specification of the Boltzmann distribution assigns higher probability to stable configurations, in the sense that consumer surplus (demand) is balanced with running costs (supply), as characterized by a potential function. We specify a Bayesian hierarchical model to infer the components of a utility function from observed structure. Our model is doubly intractable and poses significant computational challenges that we overcome using recent advances in Markov chain Monte Carlo (MCMC) methods. We demonstrate our methodology with case studies on the London retail system and airports in England. 
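The link between the SDE and the Boltzmann distribution can be illustrated with the simplest scheme in this family, the unadjusted Langevin algorithm: an Euler discretisation of dX = -grad V(X) dt + sqrt(2) dW, whose invariant law approximates exp(-V(x)). This toy one-dimensional sketch is not the doubly-intractable MCMC of the talk.

```python
import math
import random

def ula_sample(grad_v, x0, step, n_steps, seed=4):
    """Unadjusted Langevin algorithm: Euler discretisation of the SDE
    dX = -grad V(X) dt + sqrt(2) dW, whose invariant law is proportional to
    exp(-V(x)).  Returns the chain's trajectory."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        x = x - step * grad_v(x) + math.sqrt(2.0 * step) * rng.gauss(0.0, 1.0)
        samples.append(x)
    return samples

# With potential V(x) = x^2 / 2 (so grad V = x) the target is, up to
# discretisation bias, the standard normal distribution.
draws = ula_sample(lambda x: x, x0=0.0, step=0.01, n_steps=20000)
```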
News matching "+Operations +research" 
Potts Medal Winner Professor Charles Pearce, the Elder Professor of Mathematics, was awarded the Ren Potts Medal by the Australian Society for Operations Research at its annual meeting in December. This is a national award for outstanding contributions to Operations Research in Australia.
Posted Tue 22 Jan 08. 

Welcome to Dr Joshua Ross We welcome Dr Joshua Ross as a new lecturer in the School of Mathematical Sciences. Joshua has moved to Adelaide from the University of Cambridge. His research interests are mathematical modelling (especially mathematical biology) and operations research. Posted Mon 15 Mar 10. More information... 
Publications matching "+Operations +research"

On risk minimizing portfolios under a Markovian regime-switching Black-Scholes economy, Elliott, Robert; Siu, T., Annals of Operations Research 1 (1-21), 2009
Markovian trees: properties and algorithms, Bean, Nigel; Kontoleon, Nectarios; Taylor, Peter, Annals of Operations Research 160 (31-50), 2008
Performance measures of a multi-layer Markovian fluid model, Bean, Nigel; O'Reilly, Malgorzata, Annals of Operations Research 160 (99-120), 2008
Optimal recursive estimation of raw data, Torokhti, Anatoli; Howlett, P.; Pearce, Charles, Annals of Operations Research 133 (285-302), 2005
The cross-entropy method for network reliability estimation, Hui, Kin-Ping; Bean, Nigel; Kraetzl, Miro; Kroese, D., Annals of Operations Research 134 (101-118), 2005
Arbitrage in a Discrete Version of the Wick-Fractional Black-Scholes Model, Bender, C.; Elliott, Robert, Mathematics of Operations Research 29 (935-945), 2004
Some new bounds for singular values and eigenvalues of matrix products, Lu, L.-Z.; Pearce, Charles, Annals of Operations Research 98 (141-148), 2001
Advanced search options
You may be able to improve your search results by using the following syntax:
Query                      Matches the following
Asymptotic Equation        Anything with "Asymptotic" or "Equation".
+Asymptotic +Equation      Anything with "Asymptotic" and "Equation".
+Stokes -"Navier-Stokes"   Anything containing "Stokes" but not "Navier-Stokes".
Dynam*                     Anything containing "Dynamic", "Dynamical", "Dynamicist", etc.
