
School of Mathematical Sciences: Events matching "Regression: a backwards step?"
A Bivariate Zero-inflated Poisson Regression Model and application to some Dental Epidemiological data 14:10 Fri 27 Oct, 2006 :: G08 Mathematics Building University of Adelaide :: Prof Sudhir Paul
Data in the form of paired (pre-treatment, post-treatment) counts arise in the study of the effects of several treatments after accounting for possible covariate effects. An example of such a data set comes from a dental epidemiological study in Belo Horizonte (the Belo Horizonte caries prevention study), which evaluated various programmes for reducing caries. These data may also show more pairs of zeros than can be accounted for by a simpler model, such as a bivariate Poisson regression model. In such situations we propose to use a zero-inflated bivariate Poisson regression (ZIBPR) model for the paired (pre-treatment, post-treatment) count data. We develop an EM algorithm to obtain maximum likelihood estimates of the parameters of the ZIBPR model. Further, we obtain the exact Fisher information matrix of the maximum likelihood estimates and develop a procedure for testing treatment effects. The procedure to detect treatment effects based on the ZIBPR model is compared, in terms of size, by simulations, with an earlier procedure using a zero-inflated Poisson regression (ZIPR) model of the post-treatment count with the pre-treatment count treated as a covariate. The procedure based on the ZIBPR model holds its level most effectively. A further simulation study indicates the good power properties of the procedure based on the ZIBPR model. We then compare our ZIBPR-based analysis of the decayed, missing and filled teeth (DMFT) index data from the caries prevention study with an analysis using a zero-inflated Poisson regression model in which the pre-treatment DMFT index is taken to be a covariate.
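The zero-inflation and EM ideas above can be illustrated in a much simpler setting. Below is a minimal sketch for a univariate zero-inflated Poisson model with no covariates; the parameter values and sample size are invented, and this is not the bivariate regression model of the talk:

```python
import math, random

random.seed(9)

# Simulated zero-inflated Poisson counts: a structural zero with
# probability pi, otherwise a Poisson(lam) count.
true_pi, true_lam = 0.3, 2.5

def zip_draw():
    if random.random() < true_pi:
        return 0
    L, k, p = math.exp(-true_lam), 0, 1.0   # Knuth's Poisson sampler
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

data = [zip_draw() for _ in range(10_000)]

# EM for (pi, lam): the E-step asks how likely each observed zero is to
# be structural; the M-step re-estimates both parameters.
pi_h, lam_h = 0.5, 1.0
for _ in range(100):
    w0 = pi_h / (pi_h + (1.0 - pi_h) * math.exp(-lam_h))
    resp = [w0 if y == 0 else 0.0 for y in data]          # E-step
    pi_h = sum(resp) / len(data)                          # M-step
    lam_h = sum(data) / sum(1.0 - r for r in resp)

print(pi_h, lam_h)  # close to the true values 0.3 and 2.5
```

The bivariate regression version in the talk replaces the two scalars with regression coefficients, but the E-step/M-step alternation has the same shape.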

Regression: a backwards step? 13:10 Fri 7 Sep, 2007 :: Maths G08 :: Dr Gary Glonek
Most students of high school mathematics will have encountered the technique of fitting a line to data by least squares. Those who have taken a university statistics course will also have heard this method referred to as regression. However, it is not obvious from common dictionary definitions why this should be the case. For example: "reversion to an earlier or less advanced state or form". In this talk, the mathematical phenomenon that gave regression its name will be explained and will be shown to have implications in some unexpected contexts.
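The phenomenon behind the name can be seen in a small simulation. This sketch assumes a standard bivariate normal model with an illustrative correlation of 0.5: conditional on a high first measurement, the second measurement is still above average but has "regressed" part-way back toward the population mean.

```python
import random, statistics

random.seed(0)
rho = 0.5  # illustrative correlation (e.g. parent and offspring heights)

# Standard bivariate-normal construction, so that E[y | x] = rho * x.
pairs = []
for _ in range(100_000):
    x = random.gauss(0, 1)
    y = rho * x + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)
    pairs.append((x, y))

# Select the "tall" cases (x > 1): their second measurement is still
# above average, but closer to the mean than the first one was.
tall = [(x, y) for x, y in pairs if x > 1]
mean_x = statistics.fmean(x for x, _ in tall)
mean_y = statistics.fmean(y for _, y in tall)
print(mean_x, mean_y)  # mean_y is roughly rho * mean_x
```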


Random walk integrals 13:10 Fri 16 Apr, 2010 :: School Board Room :: Prof Jonathan Borwein :: University of Newcastle
Following Pearson in 1905, we study the expected distance from the origin of a two-dimensional walk in the plane with unit steps in random directions, what Pearson called a "ramble". A series evaluation and recursions are obtained, making it possible to determine this distance explicitly for a small number of steps. Closed-form expressions for all the moments of a two-step and a three-step walk are given, and a formula is conjectured for the four-step walk. Heavy use is made of the analytic continuation of the underlying integral.
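For intuition, a Monte Carlo sketch of Pearson's "ramble" (step and trial counts are illustrative). For the two-step walk the defining integral can be evaluated in closed form, giving an expected distance of 4/pi, which the simulation can be checked against:

```python
import math, random

random.seed(1)

def mean_distance(n_steps, trials=100_000):
    """Monte Carlo estimate of the expected distance from the origin
    after n unit steps in uniformly random directions."""
    total = 0.0
    for _ in range(trials):
        x = y = 0.0
        for _ in range(n_steps):
            theta = random.uniform(0.0, 2.0 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        total += math.hypot(x, y)
    return total / trials

# Two-step walk: |1 + exp(i*t)| = 2|cos(t/2)|, whose average over a
# full period is 4/pi.
est = mean_distance(2)
print(est, 4 / math.pi)
```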

Compound and constrained regression analyses for EIV models 15:05 Fri 27 Aug, 2010 :: Napier LG28 :: Prof Wei Zhu :: State University of New York at Stony Brook
In linear regression analysis, randomness often exists in the independent variables, and the resulting models are referred to as errors-in-variables (EIV) models. The existing general EIV modeling framework, the structural model approach, is parametric and depends on the usually unknown underlying distributions. In this work, we introduce a general nonparametric EIV modeling framework, the compound regression analysis, featuring an intuitive geometric representation and a one-to-one correspondence with the structural model. Properties, examples and further generalizations of this new modeling approach are discussed in this talk.

Principal Component Analysis Revisited 15:10 Fri 15 Oct, 2010 :: Napier G04 :: Assoc. Prof Inge Koch :: University of Adelaide
Since the beginning of the 20th century, Principal Component Analysis (PCA) has been an important tool in the analysis of multivariate data. The principal components summarise data in fewer than the original number of variables without losing essential information, and thus allow a split of the data into signal and noise components. PCA is a linear method, based on elegant mathematical theory.
The increasing complexity of data together with the emergence of fast computers in the later parts of the 20th century has led to a renaissance of PCA. The growing numbers of variables (in particular, high-dimensional low sample size problems), non-Gaussian data, and functional data (where the data are curves) are posing exciting challenges to statisticians, and have resulted in new research which extends the classical theory.
I begin with the classical PCA methodology and illustrate the challenges presented by the complex data that we are now able to collect. The main part of the talk focuses on extensions of PCA: the duality of PCA and the Principal Coordinates of Multidimensional Scaling, Sparse PCA, and consistency results relating to principal components as the dimension grows. We will also look at newer developments such as Principal Component Regression and Supervised PCA, non-linear PCA and Functional PCA.
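The signal/noise split that PCA provides can be sketched on synthetic two-dimensional data, where the 2x2 sample covariance matrix diagonalises in closed form; all values below are illustrative:

```python
import math, random

random.seed(2)

# Synthetic 2-D data: a common signal plus independent noise, giving
# two correlated coordinates.
data = []
for _ in range(5_000):
    s = random.gauss(0, 3)          # signal component
    e = random.gauss(0, 1)          # noise component
    data.append((s + e, s - e))

n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n

# Entries of the 2x2 sample covariance matrix.
sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)

# Its eigenvalues in closed form: the principal component variances.
tr, det = sxx + syy, sxx * syy - sxy ** 2
disc = math.sqrt(tr ** 2 / 4.0 - det)
lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc   # lam1 >= lam2

# Proportion of total variance captured by the first principal
# component: the "signal" half of the signal/noise split.
explained = lam1 / (lam1 + lam2)
print(explained)
```

With signal variance 9 and noise variance 1, the population eigenvalues are 18 and 2, so the first component should explain about 90% of the variance.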


Change detection in rainfall time series for Perth, Western Australia 12:10 Mon 16 May, 2011 :: 5.57 Ingkarni Wardli :: Farah Mohd Isa :: University of Adelaide
There have been numerous reports that rainfall in the south-west of Western Australia, particularly around Perth, has undergone a step-change decrease, which is typically attributed to climate change. Four statistical tests are used to assess the empirical evidence for this claim on time series from five meteorological stations, all of which span more than 50 years. The tests used in this study are: the CUSUM; Bayesian change point analysis; the consecutive t-test; and Hotelling's T²-statistic. Results from the multivariate Hotelling's T² analysis are compared with those from the three univariate analyses. The issue of multiple comparisons is discussed. A summary of the empirical evidence for the claimed step change in the Perth area is given.
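As a rough illustration of one of the four tests, here is a minimal CUSUM sketch on synthetic data; the rainfall values and change location below are invented for illustration and are not the Perth data:

```python
import random

random.seed(3)

# Synthetic "annual rainfall" with a step-change decrease after year
# 50: the mean drops from 850 to 750 (values invented).
series = [random.gauss(850, 80) for _ in range(50)] + \
         [random.gauss(750, 80) for _ in range(50)]

# CUSUM of deviations from the overall mean: under a single step
# change, the cumulative sum peaks (in absolute value) near the
# change point.
mean = sum(series) / len(series)
cusum, s = [], 0.0
for x in series:
    s += x - mean
    cusum.append(s)

change_point = max(range(len(cusum)), key=lambda i: abs(cusum[i]))
print(change_point)
```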

Stochastic models of reaction diffusion 15:10 Fri 17 Jun, 2011 :: 7.15 Ingkarni Wardli :: Prof Jon Chapman :: Oxford University
We consider two different position-jump processes: (i) a random walk on a lattice; (ii) the Euler scheme for the Smoluchowski differential equation. Both of these reduce to the diffusion equation as the time step and the size of the jump tend to zero. We consider the problem of adding chemical reactions to these processes, both at a surface and in the bulk. We show how the "microscopic" parameters should be chosen to achieve the correct "macroscopic" reaction rate. This choice is found to depend on which stochastic model for diffusion is used.
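The first limit mentioned above, from a lattice position-jump process to the diffusion equation, can be checked numerically. A minimal sketch with illustrative jump size h and time step dt, holding the diffusivity D = h^2/(2*dt) fixed so that Var[X(t)] = 2*D*t:

```python
import random

random.seed(4)

# Lattice random walk with jump size h and time step dt. As h, dt -> 0
# with D = h**2/(2*dt) held fixed, the walker's density satisfies the
# diffusion equation, so the position variance grows as 2*D*t.
h, dt = 0.1, 0.005            # gives D = h**2 / (2*dt) = 1.0
t_final = 1.0
n_steps = int(t_final / dt)   # 200 jumps

finals = []
for _ in range(5_000):
    x = 0.0
    for _ in range(n_steps):
        x += h if random.random() < 0.5 else -h
    finals.append(x)

var = sum(x * x for x in finals) / len(finals)
D = h * h / (2 * dt)
print(var, 2 * D * t_final)   # the two should agree
```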

Mathematical modelling of lobster populations in South Australia 12:10 Mon 12 Sep, 2011 :: 5.57 Ingkarni Wardli :: Mr John Feenstra :: University of Adelaide
Just how many lobsters are there hanging around the South Australian coastline? How is this number changing over time? What is the demographic breakdown of this number? And why does it matter? Find out the answers to these questions in my upcoming talk. I will provide a brief flavour of the kinds of quantitative methods involved, showcasing relevant applications of regression, population modelling, estimation, as well as simulation. A product of these analyses is a set of biological performance indicators which are used by government to help decide on fishery controls such as yearly total allowable catch quotas. This assists in maintaining the sustainability of the fishery and hence benefits both the fishers and the lobsters they catch.

Are Immigrants Discriminated Against in the Australian Labour Market? 12:10 Mon 7 May, 2012 :: 5.57 Ingkarni Wardli :: Ms Wei Xian Lim :: University of Adelaide
In this talk, I will present what I did in my honours project, which was to determine whether immigrants, categorised as immigrants from English-speaking countries and non-English-speaking countries, are discriminated against in the Australian labour market. To determine whether discrimination exists, a decomposition of the wage function is applied and analysed via regression analysis. Two different methods of estimating the unknown parameters in the wage function will be discussed:
1. the Ordinary Least Squares method,
2. the Quantile Regression method.
This is your rare chance to hear me talk about non-nanomathematics-related stuff!
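The two estimation methods listed above can be sketched on toy data. The model and skewed noise below are invented for illustration, and median (0.5-quantile) regression is approximated here by iteratively reweighted least squares rather than the linear-programming formulation usually used:

```python
import random

random.seed(5)

# Toy "wage equation": y = 2.0 + 0.03*x + right-skewed noise, so the
# conditional mean and conditional median differ in their intercepts.
n = 4_000
xs = [random.uniform(0, 40) for _ in range(n)]
ys = [2.0 + 0.03 * x + random.expovariate(1.0) for x in xs]

def wls(xs, ys, ws):
    """Weighted least squares for y = a + b*x, in closed form."""
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
         / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    return my - b * mx, b

# 1. Ordinary Least Squares = weighted least squares with unit weights.
a_ols, b_ols = wls(xs, ys, [1.0] * n)

# 2. Median regression approximated by iteratively reweighted least
#    squares: weight 1/|residual| mimics the L1 (check-loss) criterion.
a_q, b_q = a_ols, b_ols
for _ in range(30):
    ws = [1.0 / max(abs(y - a_q - b_q * x), 1e-6)
          for x, y in zip(xs, ys)]
    a_q, b_q = wls(xs, ys, ws)

# The skewed noise pulls the OLS line above the median line, since the
# noise has mean 1 but median ln 2.
print(a_ols, b_ols, a_q, b_q)
```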

Change detection in rainfall time series for Perth, Western Australia 12:10 Mon 14 May, 2012 :: 5.57 Ingkarni Wardli :: Ms Farah Mohd Isa :: University of Adelaide
There have been numerous reports that rainfall in the south-west of Western Australia, particularly around Perth, has undergone a step-change decrease, which is typically attributed to climate change. Four statistical tests are used to assess the empirical evidence for this claim on time series from five meteorological stations, all of which span more than 50 years. The tests used in this study are: the CUSUM; Bayesian change point analysis; the consecutive t-test; and Hotelling's T^2-statistic. Results from the multivariate Hotelling's T^2 analysis are compared with those from the three univariate analyses. The issue of multiple comparisons is discussed. A summary of the empirical evidence for the claimed step change in the Perth area is given.

A brief introduction to Support Vector Machines 12:30 Mon 4 Jun, 2012 :: 5.57 Ingkarni Wardli :: Mr Tyman Stanford :: University of Adelaide
Support Vector Machines (SVMs) are used in a variety of contexts for a range of purposes including regression, feature selection and classification. To convey the basic principles of SVMs, this presentation will focus on the application of SVMs to classification. Classification (or discrimination), in a statistical sense, is supervised model creation for the purpose of assigning future observations to a group or class. An example might be assigning healthy or diseased labels to patients based on p characteristics obtained from a blood sample.
While SVMs are widely used, they are most successful when the data have one or more of the following properties:
The data are not consistent with a standard probability distribution.
The number of observations, n, used to create the model is less than the number of predictive features, p. (The so-called small-n, big-p problem.)
The decision boundary between the classes is likely to be non-linear in the feature space.
I will present a short overview of how SVMs are constructed, keeping in mind their purpose. As this presentation is part of a double postgrad seminar, I will keep it to a maximum of 15 minutes.
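A minimal sketch of how a linear SVM can be trained, using stochastic sub-gradient descent on the regularised hinge loss (the Pegasos algorithm) rather than the quadratic-programming formulation usually presented. The two-class data are invented, and since they are centred symmetrically about the origin no bias term is needed:

```python
import random

random.seed(6)

# Toy two-class data in the plane: class +1 centred at (2, 2),
# class -1 centred at (-2, -2). Labels in {-1, +1} as SVMs expect.
data = []
for _ in range(200):
    data.append(((random.gauss(2, 1), random.gauss(2, 1)), +1))
    data.append(((random.gauss(-2, 1), random.gauss(-2, 1)), -1))

# Pegasos: stochastic sub-gradient descent on
#   lam/2 * |w|^2 + mean_i max(0, 1 - y_i * <w, x_i>).
w1 = w2 = 0.0
lam = 0.01
for t in range(1, 20_001):
    (x1, x2), y = random.choice(data)
    eta = 1.0 / (lam * t)           # decreasing step size
    w1 *= 1.0 - eta * lam           # shrinkage from the regulariser
    w2 *= 1.0 - eta * lam
    if y * (w1 * x1 + w2 * x2) < 1.0:   # point inside the margin
        w1 += eta * y * x1
        w2 += eta * y * x2

accuracy = sum(1 for (x1, x2), y in data
               if y * (w1 * x1 + w2 * x2) > 0) / len(data)
print(accuracy)
```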


Spatiotemporally Autoregressive Partially Linear Models with Application to the Housing Price Indexes of the United States 12:10 Mon 12 Nov, 2012 :: B.21 Ingkarni Wardli :: Ms Dawlah Alsulami :: University of Adelaide
We propose a Spatio-temporal Autoregressive Partially Linear Regression (STARPLR) model for data observed irregularly over space and regularly in time. The model is capable of capturing possible non-linearity and non-stationarity in space by allowing the coefficients to depend on location. We suggest a two-step procedure to estimate both the coefficients and the unknown function, which is readily implemented and can be computed even for large spatio-temporal data sets. As an illustration, we apply our model to analyse the 51 states' House Price Indexes (HPIs) in the USA.

Recent developments in special holonomy manifolds 12:10 Fri 1 Nov, 2013 :: Ingkarni Wardli 7.15 :: Prof Robert Bryant :: Duke University
One of the big classification results in differential geometry from the past century has been the classification of the possible holonomies of affine manifolds, with the major first step having been taken by Marcel Berger in his 1954 thesis. However, Berger's classification was only partial, and, in the past 20 years, an extensive research effort has been expended to complete this classification and extend it in a number of ways. In this talk, after recounting the major parts of the history of the subject, I will discuss some of the recent results and surprising new examples discovered as a byproduct of research into Finsler geometry. If time permits, I will also discuss some of the open problems in the subject. 

Viscoelastic fluids: mathematical challenges in determining their relaxation spectra 15:10 Mon 17 Mar, 2014 :: 5.58 Ingkarni Wardli :: Professor Russell Davies :: Cardiff University
Determining the relaxation spectrum of a viscoelastic fluid is a crucial step before a linear or non-linear constitutive model can be applied. Information about the relaxation spectrum is obtained from simple flow experiments such as creep or oscillatory shear. However, the determination process involves the solution of one or more highly ill-posed inverse problems. The availability of only discrete data, the presence of noise in the data, as well as incomplete data, collectively make the problem very hard to solve.
In this talk I will illustrate the mathematical challenges inherent in determining relaxation spectra, and also introduce the method of wavelet regularization which enables the representation of a continuous relaxation spectrum by a set of hyperbolic scaling functions.


Multivariate regression in quantitative finance: sparsity, structure, and robustness 15:10 Fri 1 May, 2015 :: Engineering North N132 :: A/Prof Mark Coates :: McGill University
Many quantitative hedge funds around the world strive to predict future equity and futures returns based on many sources of information, including historical returns and economic data. This leads to a multivariate regression problem. Compared to many regression problems, the signal-to-noise ratio is extremely low, and profits can be realized if even a small fraction of the future returns can be accurately predicted. The returns generally have heavy-tailed distributions, further complicating the regression procedure.
In this talk, I will describe how we can impose structure into the regression problem in order to make detection and estimation of the very weak signals feasible. Some of this structure consists of an assumption of sparsity; some of it involves identification of common factors to reduce the dimension of the problem. I will also describe how we can formulate alternative regression problems that lead to more robust solutions that better match the performance metrics of interest in the finance setting. 
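One standard way to impose the sparsity assumption mentioned above is the lasso. A minimal cyclic coordinate-descent sketch on synthetic data follows; the dimensions, signal sizes and penalty are illustrative and this is not the hedge-fund model from the talk:

```python
import random

random.seed(10)

# Sparse regression: 50 predictors, only two weak true signals.
n, p = 400, 50
beta_true = [0.0] * p
beta_true[3], beta_true[17] = 0.5, -0.5
cols = [[random.gauss(0, 1) for _ in range(n)] for _ in range(p)]
y = [sum(beta_true[j] * cols[j][i] for j in range(p)) + random.gauss(0, 1)
     for i in range(n)]

def soft(z, t):
    """Soft-thresholding: the lasso's coordinate-wise shrinkage step."""
    return (z - t) if z > t else (z + t) if z < -t else 0.0

# Lasso objective: (1/2n)*|y - X beta|^2 + lam*|beta|_1, minimised by
# cyclic coordinate descent with running residuals.
lam = 0.1
beta = [0.0] * p
r = y[:]                                   # current residuals
for _ in range(50):                        # coordinate-descent sweeps
    for j in range(p):
        xj = cols[j]
        rho = sum(x * (ri + beta[j] * x) for x, ri in zip(xj, r)) / n
        norm = sum(x * x for x in xj) / n
        new = soft(rho, lam) / norm
        delta = new - beta[j]
        if delta:
            r = [ri - delta * x for ri, x in zip(r, xj)]
        beta[j] = new

selected = sorted(j for j, b in enumerate(beta) if abs(b) > 1e-8)
print(selected)  # should contain 3 and 17, plus at most a few extras
```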

Non-crossing quantiles 15:10 Fri 14 Aug, 2015 :: Ingkarni Wardli B21 :: Dr Yanan Fan :: UNSW
Quantile regression has received increased attention in the statistics community in recent years. However, since the quantile regression curves are estimated separately, the curves can cross, leading to an invalid response distribution. Many authors have proposed remedies for this in the context of frequentist estimation. In this talk, I will explain some of the existing approaches, and then describe a new Bayesian semiparametric approach for fitting non-crossing quantile regression models simultaneously.

Modelling Directionality in Stationary Geophysical Time Series 12:10 Mon 12 Oct, 2015 :: Benham Labs G10 :: Mohd Mahayaudin Mansor :: University of Adelaide
Many time series show directionality inasmuch as plots against time and against time-to-go are qualitatively different, and there is a range of statistical tests to quantify this effect. There are two strategies for allowing for directionality in time series models. Linear models are reversible if and only if the noise terms are Gaussian, so one strategy is to use linear models with non-Gaussian noise. The alternative is to use non-linear models. We investigate how non-Gaussian noise affects directionality in a first-order autoregressive process AR(1) and compare this with a threshold autoregressive model with two thresholds. The findings are used to suggest possible improvements to an AR(9) model, identified by an AIC criterion, for the average yearly sunspot numbers from 1700 to 1900. The improvement is measured in terms of one-step-ahead forecast errors from 1901 to 2014.
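A minimal sketch of the first strategy: an AR(1) process with centred exponential (non-Gaussian) innovations is directional, which shows up as nonzero skewness in its first differences, a simple directionality statistic. The coefficient and sample size are illustrative:

```python
import random

random.seed(7)

# AR(1) with centred exponential innovations: X_t = 0.7*X_{t-1} + e_t.
# A linear AR process is time-reversible iff the noise is Gaussian,
# so this series should be directional.
n = 50_000
x, series = 0.0, []
for _ in range(n):
    x = 0.7 * x + (random.expovariate(1.0) - 1.0)
    series.append(x)

def diff_skewness(s):
    """Skewness of first differences: zero for a reversible series."""
    d = [b - a for a, b in zip(s, s[1:])]
    m = sum(d) / len(d)
    m2 = sum((v - m) ** 2 for v in d) / len(d)
    m3 = sum((v - m) ** 3 for v in d) / len(d)
    return m3 / m2 ** 1.5

fwd = diff_skewness(series)
rev = diff_skewness(series[::-1])
print(fwd, rev)  # equal magnitude, opposite sign; nonzero => directional
```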

Chern-Simons classes on loop spaces and diffeomorphism groups 12:10 Fri 16 Oct, 2015 :: Ingkarni Wardli B17 :: Steve Rosenberg :: Boston University
Not much is known about the topology of the diffeomorphism group Diff(M) of manifolds M of dimension four and higher. We'll show that for a class of manifolds of dimension 4k+1, Diff(M) has infinite fundamental group. This is proved by translating the problem into a question about Chern-Simons classes on the tangent bundle to the loop space LM. To build the CS classes, we use a family of metrics on LM associated to a Riemannian metric on M. The curvature of these metrics takes values in an algebra of pseudo-differential operators. The main technical step in the CS construction is to replace the ordinary matrix trace in finite dimensions with the Wodzicki residue, the unique trace on this algebra. The moral is that some techniques in finite-dimensional Riemannian geometry can be extended to some examples in infinite-dimensional geometry.


Tales of Multiple Regression: Informative Missingness, Recommender Systems, and R2D2 15:10 Fri 17 Aug, 2018 :: Napier 208 :: Prof Howard Bondell :: University of Melbourne
In this talk, we briefly discuss two projects tangentially related under the umbrella of high-dimensional regression.
The first part of the talk investigates informative missingness in the framework of recommender systems. In this setting, we envision a potential rating for every object-user pair. The goal of a recommender system is to predict the unobserved ratings in order to recommend an object that the user is likely to rate highly. A typically overlooked piece is that the combinations are not missing at random. For example, in movie ratings, a relationship between the user ratings and their viewing history is expected, as human nature dictates that the user would seek out movies that they anticipate enjoying. We model this informative missingness, and place the recommender system in a shared-variable regression framework which can aid in prediction quality.
The second part of the talk deals with a new class of prior distributions for shrinkage regularization in sparse linear regression, particularly the high-dimensional case. Instead of placing a prior on the coefficients themselves, we place a prior on the regression R-squared. This is then distributed to the coefficients by decomposing it via a Dirichlet distribution. We call the new prior R2D2 in light of its R-Squared Dirichlet Decomposition. Compared to existing shrinkage priors, we show that the R2D2 prior can simultaneously achieve both high prior concentration at zero and heavier tails. These two properties combine to provide a higher degree of shrinkage on the irrelevant coefficients, along with less bias in the estimation of the larger signals.
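A hedged sketch of the decomposition idea: draw R-squared from a Beta prior, convert it to a global signal-to-noise scale, then split that scale across the coefficients with a Dirichlet draw. The hyperparameters below are illustrative, not the paper's defaults:

```python
import random

random.seed(8)

p = 10            # number of regression coefficients (illustrative)
a, b = 1.0, 5.0   # Beta hyperparameters favouring small R-squared

def r2d2_variances():
    """One draw of per-coefficient prior variances: a Beta draw on
    R-squared, turned into a global scale, split by a Dirichlet."""
    r2 = random.betavariate(a, b)
    omega = r2 / (1.0 - r2)              # global signal-to-noise scale
    g = [random.gammavariate(0.5, 1.0) for _ in range(p)]
    total = sum(g)
    phi = [v / total for v in g]         # Dirichlet(0.5,...,0.5) weights
    return [omega * f for f in phi]

draws = [r2d2_variances() for _ in range(5_000)]

# The Dirichlet split guarantees each draw's variances sum to its own
# global scale; on average that is E[R2/(1-R2)] = a/(b-1) = 0.25 here.
mean_total = sum(sum(d) for d in draws) / len(draws)
print(mean_total)
```

Normalising independent Gamma(0.5, 1) draws is the standard way to sample a symmetric Dirichlet without third-party libraries.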
Publications matching "Regression: a backwards step?"

Modeling as a necessary step for understanding Internet-wide route propagation, Uhlig, S; Maennel, Olaf; Muehlbauer, W, Wired2006: Workshop on Internet Routing Evolution and Design, Georgia Tech, Atlanta, GA, USA, 2006
A deterministic discretisation-step upper bound for state estimation via Clark transformations, Malcolm, William; Elliott, Robert; Van Der Hoek, John, Journal of Applied Mathematics and Stochastic Analysis 2004 (371–384), 2004
A step towards holistic discretisation of stochastic partial differential equations, Roberts, Anthony John, The ANZIAM Journal 45 (C1–C15), 2003
Advanced search options
You may be able to improve your search results by using the following syntax:
Query :: Matches the following

Asymptotic Equation :: Anything with "Asymptotic" or "Equation".
+Asymptotic +Equation :: Anything with "Asymptotic" and "Equation".
+Stokes -"Navier-Stokes" :: Anything containing "Stokes" but not "Navier-Stokes".
Dynam* :: Anything containing "Dynamic", "Dynamical", "Dynamicist", etc.
