The University of Adelaide

Search the School of Mathematical Sciences


People matching "+Markov +chains"

Professor Nigel Bean
Chair of Applied Mathematics

Dr David Green
Lecturer in Applied Mathematics

Associate Professor Joshua Ross
Senior Lecturer in Applied Mathematics

Events matching "+Markov +chains"

How fast? Bounding the mixing time of combinatorial Markov chains
15:10 Fri 22 Mar, 2013 :: B.18 Ingkarni Wardli :: Dr Catherine Greenhill :: University of New South Wales

A Markov chain is a stochastic process which is "memoryless", in that the next state of the chain depends only on the current state, and not on how it got there. It is a classical result that an ergodic Markov chain has a unique stationary distribution. However, classical theory does not provide any information on the rate of convergence to stationarity. Around 30 years ago, the mixing time of a Markov chain was introduced to measure the number of steps required before the distribution of the chain is within some small distance of the stationary distribution. One reason why this is important is that researchers in areas such as physics and biology use Markov chains to sample from large sets of interest. Rigorous bounds on the mixing time of their chain allow these researchers to have confidence in their results. Bounding the mixing time of combinatorial Markov chains can be a challenge, and there are only a few approaches available. I will discuss the main methods and give examples for each (with pretty pictures).
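The abstract above defines the mixing time informally: the number of steps until the chain's distribution is within some small distance of the stationary distribution. As a minimal sketch of that idea (not drawn from the talk), the Python snippet below iterates a made-up 3-state transition matrix and reports the first step at which the total variation distance to the stationary distribution drops below a threshold of 1/4; the matrix and the threshold are illustrative assumptions only.

```python
import numpy as np

# Illustrative 3-state ergodic chain (made-up transition probabilities).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution: the left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

def total_variation(p, q):
    """Total variation distance between two probability vectors."""
    return 0.5 * np.abs(p - q).sum()

# Start from a point mass on state 0 and step until within epsilon of pi.
dist = np.array([1.0, 0.0, 0.0])
epsilon = 0.25  # a conventional threshold in mixing-time definitions
t = 0
while total_variation(dist, pi) > epsilon:
    dist = dist @ P
    t += 1
print(f"Distribution is within {epsilon} of stationarity after {t} steps")
```

For a chain this small the stationary distribution and the convergence rate can be computed directly; the point of the rigorous bounds discussed in the talk is to control the mixing time of combinatorial chains whose state spaces are far too large to enumerate.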
Markov decision processes and interval Markov chains: what is the connection?
12:10 Mon 3 Jun, 2013 :: B.19 Ingkarni Wardli :: Mingmei Teo :: University of Adelaide

Markov decision processes are a way to model processes that involve some form of decision making, while interval Markov chains are a way to incorporate uncertainty into the transition probability matrix. How are these two concepts related? In this talk, I will give an overview of these concepts and discuss how they relate to each other.
Modelling and optimisation of group dose-response challenge experiments
12:10 Mon 28 Oct, 2013 :: B.19 Ingkarni Wardli :: David Price :: University of Adelaide

An important component of scientific research is the 'experiment'. Effective design of these experiments is important and, accordingly, has received significant attention under the heading 'optimal experimental design'. However, until recently, little work had been done on optimal experimental design for experiments where the underlying process can be modelled by a Markov chain. In this talk, I will discuss some of the work that has been done in the field of optimal experimental design for Markov chains, and some of the work that I have done in applying this theory to dose-response challenge experiments for the bacterium Campylobacter jejuni in chickens.
A few flavours of optimal control of Markov chains
11:00 Thu 12 Dec, 2013 :: B18 :: Dr Sam Cohen :: Oxford University

In this talk we will outline a general view of optimal control of a continuous-time Markov chain, and how this naturally leads to the theory of Backward Stochastic Differential Equations. We will see how this class of equations gives a natural setting to study these problems, and how we can calculate numerical solutions in many settings. These will include problems with payoffs with memory, with random terminal times, with ergodic and infinite-horizon value functions, and with finite and infinitely many states. Examples will be drawn from finance, networks and electronic engineering.

Publications matching "+Markov +chains"

Data-recursive smoother formulae for partially observed discrete-time Markov chains
Elliott, Robert; Malcolm, William, Stochastic Analysis and Applications 24 (579–597) 2006
Level-phase independence for GI/M/1-type Markov chains
Latouche, Guy; Taylor, Peter, Journal of Applied Probability 37 (984–998) 2000

Advanced search options

You may be able to improve your search results by using the following syntax:

Query                       Matches the following
Asymptotic Equation         Anything with "Asymptotic" or "Equation".
+Asymptotic +Equation       Anything with "Asymptotic" and "Equation".
+Stokes -"Navier-Stokes"    Anything containing "Stokes" but not "Navier-Stokes".
Dynam*                      Anything containing "Dynamic", "Dynamical", "Dynamicist", etc.
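As a rough illustration of the rules in the table above (a sketch of the documented semantics only, not the site's actual search implementation), the snippet below applies the same +, -, phrase, and wildcard conventions to a piece of text; the function name `matches` and the example documents are made up for this illustration.

```python
import re

def matches(query: str, text: str) -> bool:
    """Apply the search syntax in the table above to a piece of text:
    plain terms are OR-ed together, +term must be present, -term must be
    absent, "..." groups a phrase, and a trailing * matches any continuation.
    This is a sketch, not the site's search engine."""
    tokens = re.findall(r'([+-]?)("([^"]*)"|\S+)', query)
    text_lower = text.lower()

    def found(term: str) -> bool:
        term = term.lower()
        if term.endswith('*'):
            return term[:-1] in text_lower   # prefix wildcard
        return term in text_lower

    optional = []
    for prefix, raw, quoted in tokens:
        term = quoted if quoted else raw
        if prefix == '+' and not found(term):
            return False                     # required term missing
        if prefix == '-' and found(term):
            return False                     # excluded term present
        if prefix == '':
            optional.append(found(term))
    return any(optional) if optional else True

# Examples mirroring the rows of the table:
print(matches('Asymptotic Equation', 'an asymptotic result'))            # True  (or)
print(matches('+Asymptotic +Equation', 'an asymptotic result'))          # False (and)
print(matches('+Stokes -"Navier-Stokes"', 'Stokes flow past a sphere'))  # True
print(matches('Dynam*', 'dynamical systems'))                            # True  (wildcard)
```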