Locally D-optimal designs for generalized linear models

Description
Generalized Linear Models (GLMs) are widely used for modeling responses with non-normal error distributions. When the values of the covariates in such models are controllable, finding an optimal (or at least efficient) design can greatly facilitate the work of collecting and analyzing data. In practice, many theoretical results are obtained on a case-by-case basis, while in other situations researchers rely heavily on computational tools for design selection.

Three topics are investigated in this dissertation, each focusing on one type of GLM. Topic I considers GLMs with factorial effects and one continuous covariate. Factors may interact with one another, and there is no restriction on the possible values of the continuous covariate. The locally D-optimal design structures for such models are identified, and results for obtaining smaller optimal designs using orthogonal arrays (OAs) are presented. Topic II considers GLMs with multiple covariates under the assumption that all covariates but one are bounded within specified intervals; interaction effects among those bounded covariates may also exist. An explicit formula for D-optimal designs is derived, and OA-based smaller D-optimal designs for models with one or two two-factor interactions are also constructed. Topic III considers multiple-covariate logistic models in which all covariates are nonnegative and there is no interaction among them. Two types of D-optimal design structures are identified, and their global D-optimality is proved using the celebrated equivalence theorem.
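The local D-optimality criterion underlying these results can be illustrated numerically: for a logistic model, the D-criterion is the log-determinant of the Fisher information of a candidate design, evaluated at assumed parameter values. The model form, design points, and parameter guess below are hypothetical, not taken from the dissertation:

```python
import numpy as np

def d_criterion(xs, ws, beta):
    """Log-determinant of the Fisher information for a logistic model with
    linear predictor beta[0] + beta[1]*x, at design points xs with
    weights ws (assumed to sum to 1)."""
    M = np.zeros((2, 2))
    for x, w in zip(xs, ws):
        p = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * x)))
        f = np.array([1.0, x])
        M += w * p * (1.0 - p) * np.outer(f, f)   # GLM weight p(1 - p)
    return np.linalg.slogdet(M)[1]

# at this parameter guess, a symmetric two-point design beats a one-sided one
sym = d_criterion([-1.5, 1.5], [0.5, 0.5], beta=(0.0, 1.0))
one_sided = d_criterion([0.5, 1.5], [0.5, 0.5], beta=(0.0, 1.0))
```

Because the information matrix depends on the unknown beta, a design chosen this way is only locally optimal at the assumed parameter values, which is why the dissertation's results are stated as locally D-optimal.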
Date Created
2018

A study of components of Pearson's chi-square based on marginal distributions of cross-classified tables for binary variables

Description
The Pearson and likelihood ratio statistics are well-known in goodness-of-fit testing and are commonly used for models applied to multinomial count data. When data come from a table formed by the cross-classification of a large number of variables, these goodness-of-fit statistics may have lower power and inaccurate Type I error rates due to sparseness. Pearson's statistic can be decomposed into orthogonal components associated with the marginal distributions of the observed variables, and an omnibus fit statistic can be obtained as a sum of these components. When the statistic is a sum of components for lower-order marginals, it has good Type I error rate and statistical power even when applied to a sparse table. In this dissertation, goodness-of-fit statistics using orthogonal components based on second-, third-, and fourth-order marginals were examined. If lack of fit is present in higher-order marginals, then a test that incorporates the higher-order marginals may have higher power than a test that incorporates only first- and/or second-order marginals. To this end, two new statistics based on the orthogonal components of Pearson's chi-square that incorporate third- and fourth-order marginals were developed, and the Type I error, empirical power, and asymptotic power under different sparseness conditions were investigated. Individual orthogonal components as test statistics to identify lack of fit were also studied, and their performance was compared with that of other popular lack-of-fit statistics. When the number of manifest variables exceeds 20, most of the statistics based on marginal distributions have limitations in terms of computer resources and CPU time. Given this limitation, for 20 or more manifest variables, two approaches were investigated: a bootstrap-based method for obtaining p-values for the Pearson-Fisher statistic fit to a confirmatory factor analysis model for dichotomous variables, and the statistic of Tollenaar and Mooijaart (2003).
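As a point of reference, the omnibus Pearson statistic itself is straightforward to compute. The toy 2x2 table and independence-model expected counts below are illustrative only, not data from the dissertation:

```python
import numpy as np

def pearson_chi_square(observed, expected):
    """Omnibus Pearson fit statistic X^2 = sum (O - E)^2 / E."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return float(np.sum((observed - expected) ** 2 / expected))

# toy 2x2 cross-classification of two binary variables; expected counts
# come from the independence model fit to the first-order marginals
obs = np.array([[30.0, 10.0], [20.0, 40.0]])
exp = obs.sum(axis=1, keepdims=True) * obs.sum(axis=0) / obs.sum()
x2 = pearson_chi_square(obs, exp)
```

The decompositions studied in the dissertation split X^2 into orthogonal components tied to marginals of increasing order; the omnibus statistic above is the sum of all such components.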
Date Created
2018

Essays on the identification and modeling of variance

Description
In the presence of correlation, generalized linear models cannot be employed to obtain regression parameter estimates. To appropriately address the extra variation due to correlation, methods to estimate and model the additional variation are investigated. A general form of the mean-variance relationship is proposed which incorporates the canonical parameter. The two variance parameters are estimated using the generalized method of moments, negating the need for a distributional assumption. The mean-variance relation estimates are applied to clustered data and implemented in an adjusted generalized quasi-likelihood approach through an adjustment to the covariance matrix. In the presence of significant correlation in hierarchically structured data, the adjusted generalized quasi-likelihood model shows improved performance for random effect estimates. In addition, submodels addressing deviations in skewness and kurtosis are provided to jointly model the mean, variance, skewness, and kurtosis. These additional models identify covariates influencing the third and fourth moments. A cutoff for trimming the data is provided, which improves parameter estimation and model fit. For each topic, findings are demonstrated through comprehensive simulation studies and numerical examples. Examples evaluated include data on children’s morbidity in the Philippines, adolescent health data from the National Longitudinal Study of Adolescent to Adult Health, and proteomic assays for breast cancer screening.
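A moment-based fit of a power-law mean-variance relationship, Var = phi * mu**theta, can be sketched without distributional assumptions by regressing log sample variances on log sample means across clusters. The simulated data and the power-law form are assumptions for illustration, not the dissertation's exact mean-variance model:

```python
import numpy as np

rng = np.random.default_rng(1)

# simulate clusters whose variance follows a power law: Var = phi * mu**theta
phi_true, theta_true = 2.0, 1.5
mus = rng.uniform(2.0, 20.0, size=200)
data = [rng.normal(m, np.sqrt(phi_true * m ** theta_true), size=50) for m in mus]

# moment-based fit: regress log sample variances on log sample means,
# so only the first two moments of the response are used
m_hat = np.array([d.mean() for d in data])
v_hat = np.array([d.var(ddof=1) for d in data])
A = np.column_stack([np.ones_like(m_hat), np.log(m_hat)])
log_phi, theta = np.linalg.lstsq(A, np.log(v_hat), rcond=None)[0]
```

The recovered (phi, theta) can then feed a working covariance matrix in a quasi-likelihood fit, in the spirit of the adjustment described above.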
Date Created
2018

Three essays on correlated binary outcomes: detection and appropriate models

Description
Correlation is common in many types of data, including those collected through longitudinal studies or in a hierarchical structure. In the case of clustering, or repeated measurements, there is inherent correlation between observations within the same group, or between observations obtained on the same subject. Longitudinal studies also introduce association between the covariates and the outcomes across time. When multiple outcomes are of interest, association may exist between the various models. These correlations can lead to issues in model fitting and inference if not properly accounted for. This dissertation presents three papers discussing appropriate methods to account for different types of association. The first paper introduces an ANOVA-based measure of intraclass correlation for three-level hierarchical data with binary outcomes and derives its properties. This measure is useful for evaluating when the correlation due to clustering warrants a more complex model. The measure is used to investigate AIDS knowledge in a clustered study conducted in Bangladesh. The second paper develops the partitioned generalized method of moments (Partitioned GMM) model for longitudinal studies. This model utilizes valid moment conditions to separately estimate the varying effects of each time-dependent covariate on the outcome over time using multiple coefficients. The model is fit to data from the National Longitudinal Study of Adolescent to Adult Health (Add Health) to investigate risk factors of childhood obesity. In the third paper, the Partitioned GMM model is extended to jointly estimate regression models for multiple outcomes of interest. This approach takes into account both the correlation between the multivariate outcomes and the correlation due to time dependency in longitudinal studies. The model utilizes an expanded weight matrix and an objective function composed of valid moment conditions to simultaneously estimate optimal regression coefficients. This approach is applied to Add Health data to study drivers of outcomes including smoking, social alcohol usage, and obesity in children.
Date Created
2018

Three essays on comparative simulation in three-level hierarchical data structure

Description
Though the likelihood is a useful tool for obtaining estimates of regression parameters, it is not readily available when fitting hierarchical binary data models. Correlated observations preclude a joint likelihood when fitting hierarchical logistic regression models. Inferences for the regression and covariance parameters, as well as the intraclass correlation coefficients, are usually obtained through the conditional likelihood. In those cases, researchers typically resort to the Laplace approximation and large-sample theory for point and interval estimates such as Wald-type and profile likelihood confidence intervals. These methods rely on distributional assumptions, and when dealing with small hierarchical datasets they often result in severe bias or non-convergence. I present a generalized quasi-likelihood approach and a generalized method of moments approach; neither relies on any distributional assumptions, only on moments of the response. As an alternative to the typical large-sample theory approach, I present bootstrapping for hierarchical logistic regression models, which provides more accurate interval estimates for small binary hierarchical data. These bootstrap methods substitute computation for the traditional Wald-type and profile likelihood confidence intervals. I use a latent variable approach with a new split bootstrap method for estimating intraclass correlation coefficients when analyzing binary data obtained from a three-level hierarchical structure. It is especially useful with small sample sizes and is easily extended to more levels. Comparisons are made to existing approaches through both theoretical justification and simulation studies. Further, I demonstrate my findings through an analysis of three numerical examples: one based on cancer-in-remission data, one related to China’s antibiotic abuse study, and a third related to teacher effectiveness in schools from a state in the southwestern US.
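The core idea of bootstrapping hierarchical data, resampling whole clusters so that within-cluster correlation is preserved, can be sketched as follows. The toy data, the statistic, and the percentile interval are illustrative assumptions; the dissertation's split bootstrap method is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(2017)

# toy binary outcomes from 12 clusters of 8 observations each (hypothetical)
clusters = [rng.binomial(1, p, size=8) for p in rng.uniform(0.2, 0.8, size=12)]

def cluster_bootstrap_ci(clusters, stat, n_boot=2000, alpha=0.05):
    """Percentile bootstrap interval that resamples whole clusters,
    preserving the within-cluster correlation structure."""
    k = len(clusters)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, k, size=k)      # sample clusters with replacement
        boots[b] = stat([clusters[i] for i in idx])
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

overall_rate = lambda cs: float(np.concatenate(cs).mean())
lo, hi = cluster_bootstrap_ci(clusters, overall_rate)
```

Resampling at the cluster level (rather than individual observations) is what keeps the interval honest under correlation; any cluster-level statistic, such as an intraclass correlation estimator, can be plugged in for `stat`.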
Date Created
2017

Are NBA Video Games Representing the Real Game? A Statistical Comparison of Phoenix Suns' Shooting Patterns and their Video Game Counterpart

Description
This paper analyzes the Phoenix Suns' shooting patterns in real NBA games and compares them to the "NBA 2k16" Suns' shooting patterns. Data were collected from the first five Suns' games of the 2015-2016 season and the same games played in "NBA 2k16". The findings of this paper indicate that "NBA 2k16" utilizes statistical findings to model its gameplay. It was also determined that "NBA 2k16" modeled the shooting patterns of the Suns in the first five games of the 2015-2016 season very closely. Both the real Suns' games and the "NBA 2k16" games showed a higher probability of success for shots taken in the first eight seconds of the shot clock than in the last eight seconds. Similarly, both game types showed that the probability of success for a shot increases the longer a player holds the ball. This result was not expected for either game type; nevertheless, "NBA 2k16" modeled it consistently with the real Suns' games. The video game modeled the Suns with significantly more passes per possession than the real Suns' games, and it also showed that more passes per possession have a significant effect on the outcome of a shot. This trend was not present in the real Suns' games, although the literature supports the finding. In addition, "NBA 2k16" did not correctly model the allocation of team shots for each player; however, the differences were found only among bench players. Lastly, "NBA 2k16" did not correctly allocate shots across the seven regions for Eric Bledsoe, but there was no evidence that the game incorrectly modeled the allocation of shots for the other starters, or the probability of success across the regions.
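A comparison of shot allocations like the one described can be carried out with a Pearson chi-square test of homogeneity across court regions. The region counts below are hypothetical, not the paper's data:

```python
import numpy as np

def chi2_homogeneity(counts_a, counts_b):
    """Pearson chi-square statistic for testing whether two samples share
    the same distribution of shot counts across court regions."""
    table = np.vstack([counts_a, counts_b]).astype(float)
    expected = table.sum(axis=1, keepdims=True) * table.sum(axis=0) / table.sum()
    return float(((table - expected) ** 2 / expected).sum())

# hypothetical shot counts across the seven court regions
real_suns = np.array([38, 25, 14, 22, 30, 18, 11])
game_suns = np.array([35, 28, 16, 20, 27, 21, 12])
stat = chi2_homogeneity(real_suns, game_suns)
# reject equality of allocations if stat exceeds the chi-square critical
# value with 7 - 1 = 6 degrees of freedom (12.592 at the 5% level)
```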
Date Created
2016-05

The Adaptive Lasso Procedure for Building a Traffic Forecasting Model

Description
This paper begins by discussing the potential uses and challenges of efficient and accurate traffic forecasting. The data we used include traffic volume from seven locations on a busy Athens street in April and May of 2000; these data were used as part of a traffic forecasting competition. Our initial observation was that, due to the volatility and oscillating nature of daily traffic volume, simple linear regression models will not perform well in predicting the time-series data. To address this, we present the harmonic time series model. Such a model (assuming all predictors are significant) includes a sinusoidal term for each time index within a period of data. Our assumption is that traffic volumes have a period of one week (as evidenced by the graphs reproduced in our paper). This leads to a model with 6,720 sine and cosine terms. This is clearly too many coefficients, so to avoid over-fitting and obtain an efficient model, we apply the subset-selection algorithm known as the adaptive lasso.
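A minimal sketch of the approach: build harmonic (sine/cosine) regressors for a weekly period and apply the adaptive lasso, which here reduces to per-coefficient soft-thresholding because the columns are orthonormal at Fourier frequencies. The dimensions, penalty, and simulated data are assumptions for illustration, far smaller than the paper's 6,720-term model:

```python
import numpy as np

# harmonic design over one weekly period sampled hourly: K sine/cosine pairs
n = period = 7 * 24
K = 6
t = np.arange(n)
cols = []
for k in range(1, K + 1):
    cols.append(np.sin(2 * np.pi * k * t / period))
    cols.append(np.cos(2 * np.pi * k * t / period))
X = np.column_stack(cols)
X /= np.linalg.norm(X, axis=0)           # orthonormal at Fourier frequencies

rng = np.random.default_rng(0)
beta = np.zeros(2 * K)
beta[0], beta[3] = 5.0, 3.0              # sparse truth: two active harmonics
y = X @ beta + rng.normal(0.0, 0.2, size=n)

b_ols = X.T @ y                          # OLS solution (orthonormal design)
lam, gamma = 0.5, 1.0
w = 1.0 / np.abs(b_ols) ** gamma         # adaptive weights from the OLS fit
b_alasso = np.sign(b_ols) * np.maximum(np.abs(b_ols) - lam * w, 0.0)
```

The data-driven weights `w` are what distinguish the adaptive lasso from the ordinary lasso: large OLS coefficients are penalized lightly while small ones are pushed to exactly zero, which is the subset-selection behavior exploited in the paper.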
Date Created
2017-05

What Life Tables Can Tell Us about Disease

Description
In this project, we aim to examine the methods used to obtain U.S. mortality rates, as well as differences in mortality rates due to various diseases between subgroups of interest within the population.
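A simplified period life-table computation, from death probabilities q_x to survivorship l_x and life expectancy e_x, can be sketched as follows. The five-age toy cohort and the mid-interval-death assumption are illustrative, not the project's actual methodology:

```python
import numpy as np

def life_table(qx, radix=100_000):
    """Survivorship l_x and life expectancy e_x from single-year death
    probabilities q_x, assuming deaths occur mid-interval."""
    qx = np.asarray(qx, dtype=float)
    lx = radix * np.cumprod(np.concatenate([[1.0], 1.0 - qx[:-1]]))
    dx = lx * qx                       # deaths in each interval
    Lx = lx - 0.5 * dx                 # person-years lived in each interval
    ex = Lx[::-1].cumsum()[::-1] / lx  # remaining person-years per survivor
    return lx, ex

# hypothetical death probabilities for a five-age toy cohort
lx, ex = life_table([0.01, 0.02, 0.05, 0.30, 1.0])
```

Comparing `ex` under cause-specific versus all-cause `qx` is the standard way a life table quantifies a disease's impact on subgroup mortality.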
Date Created
2014-05

Water Temperature Forecasting for Spanish Rivers by Means of Nonlinear Mixed Models

Description

Study Region: 43 rivers in Spain with measurement stations for air and water temperatures.

Study Focus: River water temperatures influence aquatic ecosystem dynamics. This work aims to develop transferable river temperature forecasting models, which are not confined to sites with historical measurements of air and water temperatures. For that purpose, we estimate nonlinear mixed models (NLMM), which are based on site-specific time-series models and account for seasonality and S-shaped air-to-water temperature associations. A detailed evaluation of the short-term forecasting performance of both NLMM and site-specific models is undertaken. Measurements from 31 measurement sites were used to estimate model parameters whereas data from 12 additional sites were used solely for the evaluation of NLMM.

New Hydrological Insights for the Region: Mixed models achieve levels of accuracy analogous to linear site-specific time-series regressions. Nonlinear site-specific models attain 1-day ahead forecasting accuracy close to 1 °C in terms of mean absolute error (MAE) and root mean square error (RMSE). Our results may facilitate adaptive management of freshwater resources in Spain in accordance with European water policy directives.
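The S-shaped air-to-water temperature association mentioned above is commonly represented by a logistic curve (as in the Mohseni-type models often used for river temperatures); the parameter values below are hypothetical:

```python
import numpy as np

def s_curve(t_air, alpha, beta, gamma, mu=0.0):
    """Logistic air-to-water association: predicted water temperature rises
    from mu toward the asymptote alpha as air temperature increases, with
    inflection point beta and steepness gamma."""
    return mu + (alpha - mu) / (1.0 + np.exp(gamma * (beta - t_air)))

# hypothetical parameters: 25 C asymptote, inflection at 12 C air temperature
t_air = np.linspace(-5.0, 35.0, 9)
t_water = s_curve(t_air, alpha=25.0, beta=12.0, gamma=0.2)
```

In a mixed-model formulation, parameters such as alpha and beta receive site-specific random effects, which is what allows forecasts at rivers without historical water temperature records.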

Date Created
2016-02-13

fMRI design under autoregressive model with one type of stimulus

Description
Functional magnetic resonance imaging (fMRI) is used to study brain activity due to stimuli presented to subjects in a scanner. It is important to conduct statistical inference on the time series fMRI data obtained, and it is also important to select optimal designs for practical experiments. Design selection under autoregressive models has not been thoroughly discussed before. This paper derives general information matrices for orthogonal designs under an autoregressive model with an arbitrary number of correlation coefficients. We further provide the minimum trace of orthogonal circulant designs under the AR(1) model, which is used as a criterion to compare practical designs such as M-sequence designs and circulant (almost) orthogonal array designs. We also explore optimal designs under the AR(2) model. In practice, there can be more than one type of stimulus, but in this paper we consider only the simplest situation with a single stimulus type.
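The information matrix at the heart of this derivation can be illustrated for the AR(1) case: with error correlation matrix V where V[i, j] = rho**|i - j|, the information for a design matrix X is X' V^{-1} X. The short stimulus sequence and rho value below are hypothetical:

```python
import numpy as np

def ar1_information(X, rho):
    """Information matrix X' V^{-1} X for a linear model whose errors
    follow an AR(1) process with correlation matrix V[i, j] = rho**|i - j|
    (innovation variance scaled out)."""
    idx = np.arange(X.shape[0])
    V = rho ** np.abs(idx[:, None] - idx[None, :])
    return X.T @ np.linalg.solve(V, X)

# a short hypothetical 0/1 stimulus sequence with an intercept column
seq = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float)
X = np.column_stack([np.ones_like(seq), seq])
M = ar1_information(X, rho=0.3)
```

Design comparison then proceeds by evaluating a scalar criterion of M (such as the trace of its inverse, as in the minimum-trace results above) across candidate stimulus sequences.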
Date Created
2017