The University of Adelaide

Search the School of Mathematical Sciences


People matching "Markov chains"

Professor Nigel Bean
Chair of Applied Mathematics


Professor Robert Elliott
Adjunct Professor


Dr David Green
Lecturer in Applied Mathematics


Associate Professor Joshua Ross
Senior Lecturer in Applied Mathematics



Events matching "Markov chains"

Alberta Power Prices
15:10 Fri 9 Mar, 2007 :: G08 Mathematics Building University of Adelaide :: Prof. Robert Elliott

The pricing of electricity involves several interesting features. Apart from daily, weekly and seasonal fluctuations, power prices often exhibit large spikes. To some extent this is because electricity cannot be stored. We propose a model for power prices in the Alberta market. This involves a diffusion process modified by a factor related to a Markov chain which describes the number of large generators on line. The model is calibrated and futures contracts are priced.
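
As a rough illustration of this kind of model, the sketch below simulates a mean-reverting log-price modulated by a two-state Markov chain standing in for generator outages. All parameters (outage rates, spike multiplier, mean-reversion speed) are invented for illustration and are not the calibrated Alberta values:

```python
import math
import random

def simulate_price(n_steps=1000, dt=1 / 365, seed=0):
    """Toy regime-switching power price: an Ornstein-Uhlenbeck log-price
    scaled by a spike factor driven by a two-state Markov chain
    (all generators online / one large generator offline).
    All parameter values are illustrative only."""
    rng = random.Random(seed)
    kappa, mu, sigma = 5.0, math.log(50.0), 0.5   # OU mean-reversion parameters
    p_fail, p_repair = 0.02, 0.30                 # per-step chain transition probabilities
    spike = {0: 1.0, 1: 3.0}                      # price multiplier in each regime
    state, x = 0, mu
    prices = []
    for _ in range(n_steps):
        # Markov chain step: generator outage / repair
        if state == 0 and rng.random() < p_fail:
            state = 1
        elif state == 1 and rng.random() < p_repair:
            state = 0
        # Euler step for the Ornstein-Uhlenbeck log-price
        x += kappa * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        prices.append(math.exp(x) * spike[state])
    return prices

prices = simulate_price()
```
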
American option pricing in a Markov chain market model
15:10 Fri 19 Mar, 2010 :: School Board Room :: Prof Robert Elliott :: School of Mathematical Sciences, University of Adelaide

This paper considers a model for asset pricing in a world where the randomness is modelled by a Markov chain rather than Brownian motion. We develop a theory of optimal stopping and related variational inequalities for American options in this model. A version of Saigal's Lemma is established and numerical algorithms are developed. This is joint work with John van der Hoek.
Modelling of Hydrological Persistence in the Murray-Darling Basin for the Management of Weirs
12:10 Mon 4 Apr, 2011 :: 5.57 Ingkarni Wardli :: Aiden Fisher :: University of Adelaide

The lakes and weirs along the lower Murray River in Australia are aggregated and considered as a sequence of five reservoirs. A seasonal Markov chain model for the system will be implemented, and a stochastic dynamic program will be used to find optimal release strategies, in terms of expected monetary value (EMV), for the competing demands on the water resource given the stochastic nature of inflows. Matrix analytic methods will be used to analyse the system further, and in particular enable the full distribution of first passage times between any groups of states to be calculated. The full distribution of first passage times can be used to provide a measure of the risk associated with optimum EMV strategies, such as conditional value at risk (CVaR). The sensitivity of the model, and risk, to changing rainfall scenarios will be investigated. The effect of decreasing the level of discretisation of the reservoirs will be explored. Also, the use of matrix analytic methods facilitates the use of hidden states to allow for hydrological persistence in the inflows. Evidence for hydrological persistence of inflows to the lower Murray system, and the effect of making allowance for this, will be discussed.
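
The first-passage-time distributions mentioned here can be illustrated for a small discrete-time chain by iterating the "taboo" transition matrix (the submatrix that avoids the target set). This toy sketch is not the matrix-analytic machinery of the talk, and the three-state chain is invented:

```python
def first_passage_pmf(P, start, target, n_max=50):
    """Pr(T = n) for n = 1..n_max, where T is the first time a
    discrete-time Markov chain with transition matrix P, started in
    `start`, enters the set `target`. Brute-force sketch; the talk
    uses matrix-analytic methods for structured chains."""
    keep = [i for i in range(len(P)) if i not in target]
    v = {i: 1.0 if i == start else 0.0 for i in keep}
    pmf = []
    for _ in range(n_max):
        # mass that jumps into the target set on this step
        pmf.append(sum(v[i] * sum(P[i][j] for j in target) for i in keep))
        # taboo step: redistribute the mass that stays outside the target
        v = {j: sum(v[i] * P[i][j] for i in keep) for j in keep}
    return pmf

# Invented 3-state example with absorbing target state 2
P = [[0.5, 0.5, 0.0],
     [0.2, 0.5, 0.3],
     [0.0, 0.0, 1.0]]
pmf = first_passage_pmf(P, start=0, target={2})
```

Since state 2 is absorbing here, the probabilities sum to (nearly) one once n_max is large enough.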
On parameter estimation in population models
15:10 Fri 6 May, 2011 :: 715 Ingkarni Wardli :: Dr Joshua Ross :: The University of Adelaide

Essential to applying a mathematical model to a real-world application is calibrating the model to data. Methods for calibrating population models often become computationally infeasible when the population size (more generally, the size of the state space) becomes large, or when other complexities, such as time-dependent transition rates or sampling error, are present. Here we will discuss the use of diffusion approximations to perform estimation in several scenarios, with successively weaker assumptions: (i) under the assumption of stationarity (the process has been evolving for a very long time with constant parameter values); (ii) transient dynamics (the assumption of stationarity is invalid, and thus only constant parameter values may be assumed); and (iii) time-inhomogeneous chains (the parameters may vary with time), also accounting for observation error (a sample of the true state is observed).
Optimal experimental design for stochastic population models
15:00 Wed 1 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Dan Pagendam :: CSIRO, Brisbane

Markov population processes are popular models for studying a wide range of phenomena including the spread of disease, the evolution of chemical reactions and the movements of organisms in population networks (metapopulations). Our ability to use these models effectively can be limited by our knowledge about parameters, such as disease transmission and recovery rates in an epidemic. Recently, there has been interest in devising optimal experimental designs for stochastic models, so that practitioners can collect data in a manner that maximises the precision of maximum likelihood estimates of the parameters for these models. I will discuss some recent work on optimal design for a variety of population models, beginning with some simple one-parameter models where the optimal design can be obtained analytically and moving on to more complicated multi-parameter models in epidemiology that involve latent states and non-exponentially distributed infectious periods. For these more complex models, the optimal design must be arrived at using computational methods and we rely on a Gaussian diffusion approximation to obtain analytical expressions for Fisher's information matrix, which is at the heart of most optimality criteria in experimental design. I will outline a simple cross-entropy algorithm that can be used for obtaining optimal designs for these models. We will also explore the improvements in experimental efficiency when using the optimal design over some simpler designs, such as the design where observations are spaced equidistantly in time.
Inference and optimal design for percolation and general random graph models (Part I)
09:30 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge

The problem of optimally arranging the nodes of a random weighted graph is discussed in this workshop. The nodes of the graphs under study are fixed, but their edges are random and established according to a so-called edge-probability function. This function is assumed to depend on the weights attributed to pairs of graph nodes (or the distances between them) and on a statistical parameter. The purpose of experimentation is to make inference on the statistical parameter and thus to extract as much information about it as possible. We also distinguish between two different experimentation scenarios: progressive and instructive designs.

We adopt a utility-based Bayesian framework to tackle the optimal design problem for random graphs of this kind. Simulation-based optimisation methods, mainly Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We study the optimal design problem for inference based on partial observations of random graphs by employing a data augmentation technique. We prove that infinitely growing or diminishing node configurations asymptotically represent the worst node arrangements. We also obtain the exact solution to the optimal design problem for proximity (geometric) graphs and a numerical solution for graphs with threshold edge-probability functions.

We consider inference and optimal design problems for finite clusters from bond percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both numerical and analytical results for these graphs. We introduce inner-outer plots by deleting some of the lattice nodes and show that the 'mostly populated' designs are not necessarily optimal in the case of incomplete observations under both progressive and instructive design scenarios. Some of the obtained results may generalise to other lattices.

Inference and optimal design for percolation and general random graph models (Part II)
10:50 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge


Spectra alignment/matching for the classification of cancer and control patients
12:10 Mon 8 Aug, 2011 :: 5.57 Ingkarni Wardli :: Mr Tyman Stanford :: University of Adelaide

Proteomic time-of-flight mass spectrometry produces a spectrum based on the peptides (chains of amino acids) in each patient’s serum sample. The spectra contain data points for an x-axis (peptide weight) and a y-axis (peptide frequency/count/intensity). Our end goal is to differentiate cancer patients (and sub-types) from control patients using these spectra. Before we can do this, peaks in these data must be found, and peptides common to different spectra must be identified. The data are noisy because of biotechnological variation and calibration error; data points for different peptide weights may in fact be the same peptide. An algorithm needs to be employed to find common peptides between spectra, as performing alignment ‘by hand’ is almost infeasible. We borrow methods suggested in the metabolomics gas chromatography-mass spectrometry literature and extend them for our purposes. In this talk I will go over the basic tenets of what we hope to achieve and the process towards this.
Alignment of time course gene expression data sets using Hidden Markov Models
12:10 Mon 5 Sep, 2011 :: 5.57 Ingkarni Wardli :: Mr Sean Robinson :: University of Adelaide

Time course microarray experiments allow for insight into biological processes by measuring gene expression over a time period of interest. This project is concerned with time course data from a microarray experiment conducted on a particular variety of grapevine over the development of the grape berries at a number of different vineyards in South Australia. The aim of the project is to construct a methodology for combining the data from the different vineyards in order to obtain more precise estimates of the underlying behaviour of the genes over the development process. A major issue in doing so is that the rate of development of the grape berries is different at different vineyards. Hidden Markov models (HMMs) are a well established methodology for modelling time series data in a number of domains and have been previously used for gene expression analysis. Modelling the grapevine data presents a unique modelling issue, namely the alignment of the expression profiles needed to combine the data from different vineyards. In this seminar, I will describe our problem, review HMMs, present an extension to HMMs and show some preliminary results modelling the grapevine data.
Adventures with group theory: counting and constructing polynomial invariants for applications in quantum entanglement and molecular phylogenetics
15:10 Fri 8 Jun, 2012 :: B.21 Ingkarni Wardli :: Dr Peter Jarvis :: The University of Tasmania

In many modelling problems in mathematics and physics, a standard challenge is dealing with several repeated instances of a system under study. If linear transformations are involved, then the machinery of tensor products steps in, and it is the job of group theory to control how the relevant symmetries lift from a single system, to having many copies. At the level of group characters, the construction which does this is called PLETHYSM. In this talk all this will be contextualised via two case studies: entanglement invariants for multipartite quantum systems, and Markov invariants for tree reconstruction in molecular phylogenetics. By the end of the talk, listeners will have understood why Alice, Bob and Charlie love Cayley's hyperdeterminant, and they will know why the three squangles -- polynomial beasts of degree 5 in 256 variables, with a modest 50,000 terms or so -- can tell us a lot about quartet trees!
Asymptotic independence of (simple) two-dimensional Markov processes
15:10 Fri 1 Mar, 2013 :: B.18 Ingkarni Wardli :: Prof Guy Latouche :: Universite Libre de Bruxelles

The one-dimensional birth-and-death model is one of the basic processes of applied probability, but difficulties appear as one moves to higher dimensions. In the positive recurrent case, the situation is singularly simplified if the stationary distribution has product form. We investigate the conditions under which this property holds, and we show how to use this knowledge to find product-form approximations for otherwise unmanageable random walks. This is joint work with Masakiyo Miyazawa and Peter Taylor.
How fast? Bounding the mixing time of combinatorial Markov chains
15:10 Fri 22 Mar, 2013 :: B.18 Ingkarni Wardli :: Dr Catherine Greenhill :: University of New South Wales

A Markov chain is a stochastic process which is "memoryless", in that the next state of the chain depends only on the current state, and not on how it got there. It is a classical result that an ergodic Markov chain has a unique stationary distribution. However, classical theory does not provide any information on the rate of convergence to stationarity. Around 30 years ago, the mixing time of a Markov chain was introduced to measure the number of steps required before the distribution of the chain is within some small distance of the stationary distribution. One reason why this is important is that researchers in areas such as physics and biology use Markov chains to sample from large sets of interest. Rigorous bounds on the mixing time of their chain allow these researchers to have confidence in their results. Bounding the mixing time of combinatorial Markov chains can be a challenge, and there are only a few approaches available. I will discuss the main methods and give examples for each (with pretty pictures).
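
For a chain small enough to iterate by brute force, the mixing time can simply be computed: iterate the transition matrix and record the first step at which the total-variation distance to the stationary distribution drops below a threshold. The three-state lazy walk below is an invented example; the talk, by contrast, is about proving bounds for chains far too large to iterate:

```python
def mixing_time(P, pi, start=0, eps=0.25, n_max=10000):
    """Smallest n such that the total-variation distance between the
    n-step distribution (from `start`) and the stationary distribution
    `pi` is at most eps. Brute-force sketch for a small chain."""
    m = len(P)
    v = [1.0 if i == start else 0.0 for i in range(m)]
    for n in range(1, n_max + 1):
        v = [sum(v[i] * P[i][j] for i in range(m)) for j in range(m)]
        tv = 0.5 * sum(abs(v[j] - pi[j]) for j in range(m))
        if tv <= eps:
            return n
    return None

# Lazy random walk on a triangle: stationary distribution is uniform
P = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
n = mixing_time(P, pi=[1 / 3, 1 / 3, 1 / 3])
```

Tightening eps makes the reported mixing time grow, reflecting the geometric decay of the distance to stationarity.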
Filtering Theory in Modelling the Electricity Market
12:10 Mon 6 May, 2013 :: B.19 Ingkarni Wardli :: Ahmed Hamada :: University of Adelaide

In mathematical finance, as in many other fields where applied mathematics is a powerful tool, we assume that a model is good enough when it captures the different sources of randomness affecting the quantity of interest, which in this case is electricity prices. The power market is very different from other markets in terms of the sources of randomness that can be observed in the features and evolution of prices. We start by suggesting a new model for simulating electricity prices, constructed by adding a periodicity term, a jump term and a positive mean-reverting term. The latter term is driven by a non-observable Markov process, so in order to price financial products we have to use filtering theory to deal with the non-observable process. These techniques are attracting considerable interest from practitioners and researchers in the field of financial mathematics.
Markov decision processes and interval Markov chains: what is the connection?
12:10 Mon 3 Jun, 2013 :: B.19 Ingkarni Wardli :: Mingmei Teo :: University of Adelaide

Markov decision processes are a way to model processes which involve some sort of decision making and interval Markov chains are a way to incorporate uncertainty in the transition probability matrix. How are these two concepts related? In this talk, I will give an overview of these concepts and discuss how they relate to each other.
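
A standard way to solve a Markov decision process is value iteration. This is a textbook sketch with an invented one-state example, not tied to the talk's material:

```python
def value_iteration(P, r, gamma=0.9, tol=1e-8):
    """P[a][i][j]: transition probability from state i to j under
    action a; r[a][i]: expected reward for action a in state i.
    Returns the optimal value function of the discounted MDP."""
    n = len(P[0])
    V = [0.0] * n
    while True:
        # Bellman update: best one-step reward plus discounted continuation
        Vn = [max(r[a][i] + gamma * sum(P[a][i][j] * V[j] for j in range(n))
                  for a in range(len(P)))
              for i in range(n)]
        if max(abs(Vn[i] - V[i]) for i in range(n)) < tol:
            return Vn
        V = Vn

# Invented example: one state, two actions (reward 1 or 0, both stay put)
P = [[[1.0]], [[1.0]]]
r = [[1.0], [0.0]]
V = value_iteration(P, r)   # optimal value is 1/(1 - 0.9) = 10
```

Interval Markov chains replace each fixed P[a][i][j] with an interval; the connection the talk explores is that choosing a transition matrix within those intervals plays the role of choosing an action.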
The Hamiltonian Cycle Problem and Markov Decision Processes
15:10 Fri 2 Aug, 2013 :: B.18 Ingkarni Wardli :: Prof Jerzy Filar :: Flinders University

We consider the famous Hamiltonian cycle problem (HCP) embedded in a Markov decision process (MDP). More specifically, we consider a moving object on a graph G where, at each vertex, a controller may select an arc emanating from that vertex according to a probabilistic decision rule. A stationary policy is simply a control where these decision rules are time invariant. Such a policy induces a Markov chain on the vertices of the graph. Therefore, HCP is equivalent to a search for a stationary policy that induces a 0-1 probability transition matrix whose non-zero entries trace out a Hamiltonian cycle in the graph. A consequence of this embedding is that we may consider the problem over a number of alternative, convex - rather than discrete - domains. These include: (a) the space of stationary policies, (b) the more restricted but very natural space of doubly stochastic matrices induced by the graph, and (c) the associated spaces of so-called "occupational measures". This embedding has led to both theoretical and algorithmic advances on the underlying HCP. In this presentation, we outline a selection of results generated by this line of research.
Modelling and optimisation of group dose-response challenge experiments
12:10 Mon 28 Oct, 2013 :: B.19 Ingkarni Wardli :: David Price :: University of Adelaide

An important component of scientific research is the 'experiment'. Effective design of these experiments is important and, accordingly, has received significant attention under the heading 'optimal experimental design'. However, until recently, little work has been done on optimal experimental design for experiments where the underlying process can be modelled by a Markov chain. In this talk, I will discuss some of the work that has been done in the field of optimal experimental design for Markov chains, and some of the work that I have done in applying this theory to dose-response challenge experiments for the bacterium Campylobacter jejuni in chickens.
A few flavours of optimal control of Markov chains
11:00 Thu 12 Dec, 2013 :: B18 :: Dr Sam Cohen :: Oxford University

In this talk we will outline a general view of optimal control of a continuous-time Markov chain, and how this naturally leads to the theory of Backward Stochastic Differential Equations. We will see how this class of equations gives a natural setting to study these problems, and how we can calculate numerical solutions in many settings. These will include problems with payoffs with memory, with random terminal times, with ergodic and infinite-horizon value functions, and with finite and infinitely many states. Examples will be drawn from finance, networks and electronic engineering.
Weak Stochastic Maximum Principle (SMP) and Applications
15:10 Thu 12 Dec, 2013 :: B.21 Ingkarni Wardli :: Dr Harry Zheng :: Imperial College, London

In this talk we discuss a weak necessary and sufficient SMP for Markov modulated optimal control problems. Instead of insisting on the maximum condition of the Hamiltonian, we show that 0 belongs to the sum of Clarke's generalized gradient of the Hamiltonian and Clarke's normal cone of the control constraint set at the optimal control. Under a joint concavity condition on the Hamiltonian the necessary condition becomes sufficient. We give examples to demonstrate the weak SMP and its applications in quadratic loss minimization.
Ergodicity and loss of capacity: a stochastic horseshoe?
15:10 Fri 9 May, 2014 :: B.21 Ingkarni Wardli :: Professor Ami Radunskaya :: Pomona College, the United States of America

Random fluctuations of an environment are common in ecological and economical settings. The resulting processes can be described by a stochastic dynamical system, where a family of maps parametrized by an independent, identically distributed random variable forms the basis for a Markov chain on a continuous state space. Random dynamical systems are a beautiful combination of deterministic and random processes, and they have received considerable interest since von Neumann and Ulam's seminal work in the 1940s. Key questions in the study of a stochastic dynamical system are: does the system have a well-defined average, i.e. is it ergodic? How does this long-term behavior compare to that of the state variable in a constant environment with the averaged parameter? In this talk we answer these questions for a family of maps on the unit interval that model self-limiting growth. The techniques used can be extended to study other families of concave maps, and so we conjecture the existence of a "stochastic horseshoe".
Stochastic models of evolution: Trees and beyond
15:10 Fri 16 May, 2014 :: B.18 Ingkarni Wardli :: Dr Barbara Holland :: The University of Tasmania

In the first part of the talk I will give a general introduction to phylogenetics, and discuss some of the mathematical and statistical issues that arise in trying to infer evolutionary trees. In particular, I will discuss how we model the evolution of DNA along a phylogenetic tree using a continuous time Markov process. In the second part of the talk I will discuss how to express the two-state continuous-time Markov model on phylogenetic trees in such a way that allows its extension to more general models. In this framework we can model convergence of species as well as divergence (speciation). I will discuss the identifiability (or otherwise) of the models that arise in some simple cases. Use of a statistical framework means that we can use established techniques such as the AIC or likelihood ratio tests to decide if datasets show evidence of convergent evolution.
A Random Walk Through Discrete State Markov Chain Theory
12:10 Mon 22 Sep, 2014 :: B.19 Ingkarni Wardli :: James Walker :: University of Adelaide

This talk will go through the basics of Markov chain theory, including how to construct a continuous-time Markov chain (CTMC), how to adapt a Markov chain to include non-memoryless distributions, how to simulate CTMCs, and some key results.
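
A minimal sketch of CTMC simulation via exponential holding times (the standard construction: wait an exponential time at the total exit rate, then jump in proportion to the off-diagonal rates). The two-state generator matrix below is an invented example:

```python
import random

def simulate_ctmc(Q, start, t_end, seed=1):
    """Simulate one path of a continuous-time Markov chain with
    generator matrix Q up to time t_end. Returns the list of
    (time, state) jump points. Illustrative sketch only."""
    rng = random.Random(seed)
    t, state = 0.0, start
    path = [(t, state)]
    while True:
        rate = -Q[state][state]          # total exit rate of the current state
        if rate <= 0:                    # absorbing state: stop
            break
        t += rng.expovariate(rate)       # exponential holding time
        if t >= t_end:
            break
        # choose the next state in proportion to the off-diagonal rates
        u, acc = rng.random() * rate, 0.0
        for j, q in enumerate(Q[state]):
            if j == state:
                continue
            acc += q
            if u <= acc:
                state = j
                break
        path.append((t, state))
    return path

# Invented two-state on/off chain: rate 1 for 0 -> 1, rate 2 for 1 -> 0
Q = [[-1.0, 1.0],
     [2.0, -2.0]]
path = simulate_ctmc(Q, start=0, t_end=100.0)
```
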
A Hybrid Markov Model for Disease Dynamics
12:35 Mon 29 Sep, 2014 :: B.19 Ingkarni Wardli :: Nicolas Rebuli :: University of Adelaide

Modelling the spread of infectious diseases is fundamental to protecting ourselves from potentially devastating epidemics. Among other factors, two key indicators for the severity of an epidemic are the size of the epidemic and the time until the last infectious individual is removed. To estimate the distribution of the size and duration of an epidemic (within a realistic population) an epidemiologist will typically use Monte Carlo simulations of an appropriate Markov process. However, the number of states in the simplest Markov epidemic model, the SIR model, is quadratic in the population size and so Monte Carlo simulations are computationally expensive. In this talk I will discuss two methods for approximating the SIR Markov process and I will demonstrate the approximation error by comparing probability distributions and estimates of the distributions of the final size and duration of an SIR epidemic.
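
One Monte Carlo realisation of the SIR Markov process can be sketched as follows. The rates follow the standard density-dependent SIR formulation, and the parameter values are illustrative only:

```python
import random

def sir_outbreak(N, beta, gamma, i0=1, seed=2):
    """One realisation of the stochastic SIR (Markov) epidemic in a
    population of size N, with infection rate beta*S*I/N and recovery
    rate gamma*I. Returns (final_size, duration), where final_size is
    the number of initially susceptible individuals ever infected."""
    rng = random.Random(seed)
    S, I, t = N - i0, i0, 0.0
    while I > 0:
        inf_rate = beta * S * I / N
        rec_rate = gamma * I
        total = inf_rate + rec_rate
        t += rng.expovariate(total)          # exponential time to next event
        if rng.random() < inf_rate / total:
            S -= 1; I += 1                   # infection event
        else:
            I -= 1                           # recovery (removal) event
    return (N - i0 - S, t)

final_size, duration = sir_outbreak(N=200, beta=2.0, gamma=1.0)
```

Repeating this over many seeds gives the empirical distributions of final size and duration discussed in the talk; it is exactly this repetition that becomes expensive as N grows.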
Medical Decision Making
12:10 Mon 11 May, 2015 :: Napier LG29 :: Eka Baker :: University of Adelaide

Practicing physicians make treatment decisions based on clinical trial data every day. This data is based on trials primarily conducted on healthy volunteers, or on those with only the disease in question. In reality, patients do have existing conditions that can affect the benefits and risks associated with receiving these treatments. In this talk, I will explain how we modified an already existing Markov model to show the progression of treatment of a single condition over time. I will then explain how we adapted this to a different condition, and then created a combined model, which demonstrated how both diseases and treatments progressed on the same patient over their lifetime.
A Semi-Markovian Modeling of Limit Order Markets
13:00 Fri 11 Dec, 2015 :: Ingkarni Wardli 5.57 :: Anatoliy Swishchuk :: University of Calgary

R. Cont and A. de Larrard (SIAM J. Financial Mathematics, 2013) introduced a tractable stochastic model for the dynamics of a limit order book, computing various quantities of interest such as the probability of a price increase or the diffusion limit of the price process. As suggested by empirical observations, we extend their framework to 1) arbitrary distributions for book events inter-arrival times (possibly non-exponential) and 2) both the nature of a new book event and its corresponding inter-arrival time depend on the nature of the previous book event. We do so by resorting to Markov renewal processes to model the dynamics of the bid and ask queues. We keep analytical tractability via explicit expressions for the Laplace transforms of various quantities of interest. Our approach is justified and illustrated by calibrating the model to the five stocks Amazon, Apple, Google, Intel and Microsoft on June 21st 2012. As in Cont and de Larrard, the bid-ask spread remains constant equal to one tick, only the bid and ask queues are modelled (they are independent from each other and get reinitialized after a price change), and all orders have the same size. (This talk is based on our joint paper with Nelson Vadori (Morgan Stanley)).
Mathematical modelling of the immune response to influenza
15:00 Thu 12 May, 2016 :: Ingkarni Wardli B20 :: Ada Yan :: University of Melbourne

The immune response plays an important role in the resolution of primary influenza infection and prevention of subsequent infection in an individual. However, the relative roles of each component of the immune response in clearing infection, and the effects of interaction between components, are not well quantified.

We have constructed a model of the immune response to influenza based on data from viral interference experiments, where ferrets were exposed to two influenza strains within a short time period. The changes in viral kinetics of the second virus due to the first virus depend on the strains used as well as the interval between exposures, enabling inference of the timing of innate and adaptive immune response components and the role of cross-reactivity in resolving infection. Our model provides a mechanistic explanation for the observed variation in viruses' abilities to protect against subsequent infection at short inter-exposure intervals, either by delaying the second infection or inducing stochastic extinction of the second virus. It also explains the decrease in recovery time for the second infection when the two strains elicit cross-reactive cellular adaptive immune responses. To account for inter-subject as well as inter-virus variation, the model is formulated using a hierarchical framework. We will fit the model to experimental data using Markov Chain Monte Carlo methods; quantification of the model will enable a deeper understanding of the effects of potential new treatments.
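
The Markov chain Monte Carlo machinery referred to here can be sketched with a generic random-walk Metropolis sampler. This is a toy with a standard normal target, not the authors' hierarchical influenza model:

```python
import math
import random

def metropolis(log_post, x0, n_samples=5000, step=0.5, seed=3):
    """Random-walk Metropolis sampler: draws (correlated) samples from
    the distribution with unnormalised log-density `log_post`.
    Generic sketch of the MCMC machinery only."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        y = x + rng.gauss(0, step)              # propose a local move
        lq = log_post(y)
        if math.log(rng.random()) < lq - lp:    # Metropolis accept/reject
            x, lp = y, lq
        samples.append(x)
    return samples

# Toy target: standard normal, so the sample mean should be near 0
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0)
```

In a hierarchical setting the state x would be the full vector of subject-level and population-level parameters, with log_post summing the likelihood and the hierarchical priors.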
SIR epidemics with stages of infection
12:10 Wed 28 Sep, 2016 :: EM218 :: Matthieu Simon :: Universite Libre de Bruxelles

This talk is concerned with a stochastic model for the spread of an epidemic in a closed homogeneously mixing population. The population is subdivided into three classes of individuals: the susceptibles, the infectives and the removed cases. In short, an infective remains infectious during a random period of time. While infected, it can contact all the susceptibles present, independently of the other infectives. At the end of the infectious period, it becomes a removed case and has no further part in the infection process.

We represent an infectious period as a set of different stages that an infective can go through before being removed. The transitions between stages are ruled by either a Markov process or a semi-Markov process. In each stage, an infective makes contaminations at the epochs of a Poisson process with a specific rate.

Our purpose is to derive closed expressions for a transform of different statistics related to the end of the epidemic, such as the final number of susceptibles and the area under the trajectories of all the infectives. The analysis is performed by using simple matrix analytic methods and martingale arguments. Numerical illustrations will be provided at the end of the talk.
Probabilistic approaches to human cognition: What can the math tell us?
15:10 Fri 26 May, 2017 :: Engineering South S111 :: Dr Amy Perfors :: School of Psychology, University of Adelaide

Why do people avoid vaccinating their children? Why, in groups, does it seem like the most extreme positions are weighted more highly? On the surface, both of these examples look like instances of non-optimal or irrational human behaviour. This talk presents preliminary evidence suggesting, however, that in both cases this pattern of behaviour is sensible given certain assumptions about the structure of the world and the nature of beliefs. In the case of vaccination, we model people's choices using expected utility theory. This reveals that their ignorance about the nature of diseases like whooping cough makes them underweight the negative utility attached to contracting such a disease. When that ignorance is addressed, their values and utilities shift. In the case of extreme positions, we use simulations of chains of Bayesian learners to demonstrate that whenever information is propagated in groups, the views of the most extreme learners naturally gain more traction. This effect emerges as the result of basic mathematical assumptions rather than human irrationality.
Stokes' Phenomenon in Translating Bubbles
15:10 Fri 2 Jun, 2017 :: Ingkarni Wardli 5.57 :: Dr Chris Lustri :: Macquarie University

This study of translating air bubbles in a Hele-Shaw cell containing viscous fluid reveals the critical role played by surface tension in these systems. The standard zero-surface-tension model of Hele-Shaw flow predicts that a continuum of bubble solutions exists for arbitrary flow translation velocity. The inclusion of small surface tension, however, eliminates this continuum of solutions, instead producing a discrete, countably infinite family of solutions, each with distinct translation speeds. We are interested in determining this discrete family of solutions, and understanding why only these solutions are permitted. Studying this problem in the asymptotic limit of small surface tension does not seem to give any particular reason why only these solutions should be selected. It is only by using exponential asymptotic methods to study the Stokes’ structure hidden in the problem that we are able to obtain a complete picture of the bubble behaviour, and hence understand the selection mechanism that only permits certain solutions to exist. In the first half of my talk, I will explain the powerful ideas that underpin exponential asymptotic techniques, such as analytic continuation and optimal truncation. I will show how they are able to capture behaviour known as Stokes' Phenomenon, which is typically invisible to classical asymptotic series methods. In the second half of the talk, I will introduce the problem of a translating air bubble in a Hele-Shaw cell, and show that the behaviour can be fully understood by examining the Stokes' structure concealed within the problem. Finally, I will briefly showcase other important physical applications of exponential asymptotic methods, including submarine waves and particle chains.
Stochastic Modelling of Urban Structure
11:10 Mon 20 Nov, 2017 :: Engineering Nth N132 :: Mark Girolami :: Imperial College London, and The Alan Turing Institute


Urban systems are complex in nature and comprise a large number of individuals that act according to utility, a measure of net benefit pertaining to preferences. The actions of individuals give rise to an emergent behaviour, creating the so-called urban structure that we observe. In this talk, I develop a stochastic model of urban structure to formally account for uncertainty arising from the complex behaviour. We further use this stochastic model to infer the components of a utility function from observed urban structure. This is a more powerful modelling framework than the ubiquitous discrete choice models, which are of limited use for complex systems in which the overall preferences of individuals are difficult to ascertain. We model urban structure as a realization of a Boltzmann distribution that is the invariant distribution of a related stochastic differential equation (SDE) describing the dynamics of the urban system. Our specification of the Boltzmann distribution assigns higher probability to stable configurations, in the sense that consumer surplus (demand) is balanced with running costs (supply), as characterized by a potential function. We specify a Bayesian hierarchical model to infer the components of a utility function from observed structure. Our model is doubly intractable and poses significant computational challenges that we overcome using recent advances in Markov chain Monte Carlo (MCMC) methods. We demonstrate our methodology with case studies on the London retail system and airports in England.
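The abstract's core construction, a Boltzmann distribution p(x) ∝ exp(−V(x)) that is invariant for an SDE and is sampled by MCMC, can be illustrated generically. The sketch below uses a Metropolis-adjusted Langevin algorithm (MALA) on an invented one-dimensional double-well potential; it is not the authors' doubly intractable scheme, and every parameter here is a placeholder.

```python
import math
import random

def potential(x):
    """Toy double-well potential V(x); stable configurations sit at the minima x = +-1."""
    return (x * x - 1.0) ** 2

def grad_potential(x):
    """V'(x) for the double well above."""
    return 4.0 * x * (x * x - 1.0)

def mala_step(x, step, rng):
    """One Metropolis-adjusted Langevin step targeting p(x) proportional to exp(-V(x))."""
    prop = x - step * grad_potential(x) + math.sqrt(2 * step) * rng.gauss(0.0, 1.0)

    def log_q(a, b):
        # Log density (up to a constant) of proposing b from a under the Langevin drift.
        mean = a - step * grad_potential(a)
        return -((b - mean) ** 2) / (4 * step)

    # The Metropolis correction keeps the Boltzmann distribution exactly invariant.
    log_alpha = (potential(x) - potential(prop)) + log_q(prop, x) - log_q(x, prop)
    return prop if math.log(rng.random()) < log_alpha else x

def sample(n, step=0.05, seed=1):
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = mala_step(x, step, rng)
        out.append(x)
    return out

xs = sample(20000)
# Probability mass concentrates near the two stable wells at x = +-1.
print(sum(abs(abs(x) - 1.0) < 0.5 for x in xs) / len(xs))
```

The discretized Langevin drift is the SDE dynamics; the accept/reject step is what guarantees the intended Boltzmann distribution is the chain's invariant measure, mirroring the SDE-to-invariant-distribution relationship in the abstract.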

News matching "Markov chains"

Sam Cohen wins prize for best student talk at Aust MS 2009
Congratulations to Mr Sam Cohen, a PhD student within the School, who was awarded the B. H. Neumann Prize for the best student paper at the 2009 meeting of the Australian Mathematical Society for his talk on Dynamic Risk Measures and Nonlinear Expectations with Markov Chain noise. Posted Tue 6 Oct 09.

Publications matching "Markov chains"

On Markov-modulated exponential-affine bond price formulae
Elliott, Robert; Siu, T, Applied Mathematical Finance 16 (1–15) 2009
Discrete-time expectation maximization algorithms for Markov-modulated Poisson processes
Elliott, Robert; Malcolm, William, IEEE Transactions on Automatic Control 53 (247–256) 2008
Pricing Options and Variance Swaps in Markov-Modulated Brownian Markets
Elliott, Robert; Swishchuk, A, chapter in Hidden Markov Models in Finance (Vieweg, Springer Science+Business Media) 45–68, 2007
Smoothed Parameter Estimation for a Hidden Markov Model of Credit Quality
Korolkiewicz, M; Elliott, Robert, chapter in Hidden Markov Models in Finance (Vieweg, Springer Science+Business Media) 69–90, 2007
The Term Structure of Interest Rates in a Hidden Markov Setting
Elliott, Robert; Wilson, C, chapter in Hidden Markov Models in Finance (Vieweg, Springer Science+Business Media) 15–30, 2007
A Markov analysis of social learning and adaptation
Wheeler, Scott; Bean, Nigel; Gaffney, Janice; Taylor, Peter, Journal of Evolutionary Economics 16 (299–319) 2006
A hidden Markov approach to the forward premium puzzle
Elliott, Robert; Han, B, International Journal of Theoretical and Applied Finance 9 (1009–1020) 2006
Data-recursive smoother formulae for partially observed discrete-time Markov chains
Elliott, Robert; Malcolm, William, Stochastic Analysis and Applications 24 (579–597) 2006
Option pricing for GARCH models with Markov switching
Elliott, Robert; Siu, T; Chan, L, International Journal of Theoretical and Applied Finance 9 (825–841) 2006
Option Pricing for Pure Jump Processes with Markov Switching Compensators
Elliott, Robert, Finance and Stochastics 10 (250–275) 2006
New Gaussian mixture state estimation schemes for discrete time hybrid Gauss-Markov systems
Elliott, Robert; Dufour, F; Malcolm, William, The 2005 American Control Conference, Portland, OR, USA 08/06/05
Simulating catchment-scale monthly rainfall with classes of hidden Markov models
Whiting, Julian; Thyer, M; Lambert, Martin; Metcalfe, Andrew, The 29th Hydrology and Water Resources Symposium, Rydges Lakeside, Canberra, Australia 20/02/05
General smoothing formulas for Markov-modulated Poisson observations
Elliott, Robert; Malcolm, William, IEEE Transactions on Automatic Control 50 (1123–1134) 2005
Hidden Markov chain filtering for a jump diffusion model
Wu, P; Elliott, Robert, Stochastic Analysis and Applications 23 (153–163) 2005
Hidden Markov filter estimation of the occurrence time of an event in a financial market
Elliott, Robert; Tsoi, A, Stochastic Analysis and Applications 23 (1165–1177) 2005
Ramaswami's duality and probabilistic algorithms for determining the rate matrix for a structured GI/M/1 Markov chain
Hunt, Emma, The ANZIAM Journal 46 (485–493) 2005
Risk-sensitive filtering and smoothing for continuous-time Markov processes
Malcolm, William; Elliott, Robert; James, M, IEEE Transactions on Information Theory 51 (1731–1738) 2005
State and mode estimation for discrete-time jump Markov systems
Elliott, Robert; Dufour, F; Malcolm, William, SIAM Journal on Control and Optimization 44 (1081–1104) 2005
A probabilistic algorithm for finding the rate matrix of a block-GI/M/1 Markov chain
Hunt, Emma, The ANZIAM Journal 45 (457–475) 2004
Development of Non-Homogeneous and Hierarchical Hidden Markov Models for Modelling Monthly Rainfall and Streamflow Time Series
Whiting, Julian; Lambert, Martin; Metcalfe, Andrew; Kuczera, George, World Water and Environmental Resources Congress (2004), Salt Lake City, Utah, USA 27/06/04
Robust M-ary detection filters and smoothers for continuous-time jump Markov systems
Elliott, Robert; Malcolm, William, IEEE Transactions on Automatic Control 49 (1046–1055) 2004
Arborescences, matrix-trees and the accumulated sojourn time in a Markov process
Pearce, Charles; Falzon, L, chapter in Stochastic analysis and applications Volume 3 (Nova Science Publishers) 147–168, 2003
A Probabilistic algorithm for determining the fundamental matrix of a block M/G/1 Markov chain
Hunt, Emma, Mathematical and Computer Modelling 38 (1203–1209) 2003
A complete yield curve description of a Markov interest rate model
Elliott, Robert; Mamon, R, International Journal of Theoretical and Applied Finance 6 (317–326) 2003
A non-parametric hidden Markov model for climate state identification
Lambert, Martin; Whiting, Julian; Metcalfe, Andrew, Hydrology and Earth System Sciences 7 (652–667) 2003
Robust parameter estimation for asset price models with Markov modulated volatilities
Elliott, Robert; Malcolm, William; Tsoi, A, Journal of Economic Dynamics & Control 27 (1391–1409) 2003
Portfolio optimization, hidden Markov models, and technical analysis of P&F-charts
Elliott, Robert; Hinz, J, International Journal of Theoretical and Applied Finance 5 (385–399) 2002
Supporting maintenance strategies using Markov models
Al-Hassan, K; Swailes, D; Chan, J; Metcalfe, Andrew, IMA Journal of Management Mathematics 13 (17–27) 2002
Hidden Markov chain filtering for generalised Bessel processes
Elliott, Robert; Platen, E, chapter in Stochastics in Finite and Infinite Dimensions - in honor of Gopinath Kallianpur (Birkhauser) 123–143, 2001
Robust M-ary detection filters for continuous-time jump Markov systems
Elliott, Robert; Malcolm, William, The 40th IEEE Conference on Decision and Control (CDC), Orlando, Florida 04/12/01
On the existence of a quasistationary measure for a Markov chain
Lasserre, J; Pearce, Charles, Annals of Probability 29 (437–446) 2001
Hidden state Markov chain time series models for arid zone hydrology
Cigizoglu, K; Adamson, Peter; Lambert, Martin; Metcalfe, Andrew, International Symposium on Water Resources and Environmental Impact Assessment (2001), Istanbul, Turkey 11/07/01
Entropy, Markov information sources and Parrondo games
Pearce, Charles, UPoN'99: Second International Conference, Adelaide, Australia 12/07/99
Level-phase independence for GI/M/1-type Markov chains
Latouche, Guy; Taylor, Peter, Journal of Applied Probability 37 (984–998) 2000

Advanced search options

You may be able to improve your search results by using the following syntax:

Query                      Matches the following
Asymptotic Equation        Anything with "Asymptotic" or "Equation".
+Asymptotic +Equation      Anything with "Asymptotic" and "Equation".
+Stokes -"Navier-Stokes"   Anything containing "Stokes" but not "Navier-Stokes".
Dynam*                     Anything containing "Dynamic", "Dynamical", "Dynamicist" etc.