Capturing the Fundamental Motives to Use Social Media

Description
The specific, concrete motives for using social media are likely to multiply as social media platforms do. Study 1 was conducted to identify a hierarchical structure of motives for using social media that explains a wide range of previously identified motives from Uses and Gratifications theory (Katz & Blumler, 1974). College students (N = 948) completed previously established measures of social media motives and a range of social media behaviors. Findings revealed two higher-order factors: (1) “Instrumental” motivation captures motives to achieve a specific aim through social media (e.g., information, self-expression, social interaction); these motives were positively correlated with private self-consciousness on social media. (2) “Experiential” motivation captures motives to escape from reality through social media (e.g., entertainment, passing time, convenience); these motives were positively correlated with social media addiction. Study 2 aimed to determine whether the higher-order structure of “Instrumental” and “Experiential” motivation emerges across a wider range of motivations to use social media. College students (N = 216) completed a survey covering the 7 social media motivations from Study 1 and 16 additional motivations identified in two pilot studies. Confirmatory factor analysis revealed that the 23-factor model predicted social media use better than the higher-order factors did. Predictive validity analyses of the higher-order factors suggest that “Instrumental” motivation is the better predictor of personality, while “Experiential” motivation is positively correlated with social media addiction.
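
As a rough illustration of the hypothesized structure, the following Python sketch generates item responses from two higher-order factors, each governing several first-order motive factors. All loadings, item counts, and the simplifying choice of uncorrelated higher-order factors are hypothetical, not estimates from the studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 948  # Study 1 sample size

# Two higher-order factors (Instrumental, Experiential), uncorrelated for simplicity.
higher = rng.standard_normal((n, 2))

# Seven first-order motive factors: four load on Instrumental, three on Experiential.
second_order_loadings = np.array([
    [0.7, 0.0],  # information
    [0.6, 0.0],  # self-expression
    [0.7, 0.0],  # social interaction
    [0.6, 0.0],  # (fourth instrumental motive)
    [0.0, 0.7],  # entertainment
    [0.0, 0.6],  # passing time
    [0.0, 0.7],  # convenience
])
disturbance_sd = np.sqrt(1 - (second_order_loadings ** 2).sum(axis=1))
motives = higher @ second_order_loadings.T + rng.standard_normal((n, 7)) * disturbance_sd

# Each motive measured by 3 items with loading 0.8; error sd 0.6 keeps unit variance.
items = np.repeat(motives, 3, axis=1) * 0.8 + rng.standard_normal((n, 21)) * 0.6

r = np.corrcoef(items, rowvar=False)
print(f"same-motive item r ~ {r[0, 1]:.2f}, cross-higher-order item r ~ {r[0, 20]:.2f}")
```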
Date Created
2024-05

Evaluation of Univariate and Multivariate Dynamic Structural Equation Models with Categorical Outcomes

Description
The proliferation of intensive longitudinal datasets has necessitated the development of analytical techniques that are flexible and accessible to researchers collecting dyadic or individual data. Dynamic structural equation models (DSEMs), as implemented in Mplus, provide the flexibility researchers require by combining components from multilevel modeling, structural equation modeling, and time series analysis. This dissertation presents a simulation study that evaluates the performance of categorical DSEM using a probit link function across different numbers of clusters (N = 50 or 200), timepoints (T = 14, 28, or 56), categories on the outcome (2, 3, or 5), and distributions of responses on the outcome (symmetric/approximately normal, skewed, or uniform) for both univariate and multivariate models (representing individual data and dyadic longitudinal Actor-Partner Interdependence Model data, respectively). The 3- and 5-category conditions were also evaluated as continuous DSEMs across the same cluster, timepoint, and distribution conditions to assess the extent to which ignoring the categorical nature of the outcome affected model performance. Results indicated that minimums for the number of clusters and timepoints previously suggested by studies evaluating DSEM performance with continuous outcomes are not large enough to produce unbiased and adequately powered models in categorical DSEM. The distribution of responses on the outcome did not have a noticeable impact on model performance for categorical DSEM, but did affect performance when a continuous DSEM was fit to the same datasets. Ignoring the categorical nature of the outcome led to underestimated effects across parameters and conditions, and produced large Type I error rates in the N = 200 cluster conditions.
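
For readers unfamiliar with the data-generating process being evaluated, a minimal Python sketch of one univariate condition follows: a latent AR(1) process per cluster whose values are cut at fixed thresholds into ordered categories, the probit-link mechanism described above. The autoregressive parameter, between-cluster variance, and thresholds are hypothetical placeholders, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, T = 200, 56              # one cluster/timepoint condition from the study
phi = 0.3                            # hypothetical within-cluster autoregression
icc = 0.3                            # hypothetical between-cluster variance
thresholds = np.array([-0.5, 0.5])   # cut the latent response into 3 ordered categories

data = np.empty((n_clusters, T), dtype=int)
for i in range(n_clusters):
    mu_i = rng.normal(0.0, np.sqrt(icc))   # cluster-specific latent mean
    y_star = mu_i
    for t in range(T):
        # latent AR(1) process; the standard-normal residual is the probit link
        y_star = mu_i + phi * (y_star - mu_i) + rng.standard_normal()
        data[i, t] = np.searchsorted(thresholds, y_star)   # category 0, 1, or 2

print(np.bincount(data.ravel()) / data.size)   # marginal category proportions
```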
Date Created
2023

Exploring Heterogeneity in Factor Analytic Results

Description
The last two decades have seen growing awareness of and emphasis on the replication of empirical findings. While this literature is large, very little of it has considered the interaction of replication and psychometrics. This is unfortunate given that sound measurement is crucial when studying the complex constructs of psychological research. If the psychometric properties of a scale fail to replicate, then inferences made using scores from that scale are questionable at best. In this dissertation, I begin to address replication issues in factor analysis, a widely used psychometric method in psychology. After noticing inconsistencies across results for studies that factor analyzed the same scale, I sought to gain a better understanding of what replication means in factor analysis and to address issues that affect the replicability of factor analytic models. With this work, I take steps toward integrating factor analysis into the broader replication discussion. Ultimately, the goal of this dissertation is to highlight the importance of psychometric replication and bring attention to its role in fostering a more replicable scientific literature.
Date Created
2022

Evaluating When Subscores Can Have Value in Psychological and Health Applications

Description
Scale scores play a significant role in research and practice across a wide range of areas such as education, psychology, and the health sciences. Although methods of scale scoring have advanced considerably over the last 100 years, researchers and practitioners have generally been slow to adopt these advances. Many topics fall under this umbrella, but the current study focuses on two. The first is the relation between subscores and total scores. Many scales in psychological and health research are designed to yield subscores, yet it is common to see total scores reported instead. Simplifying scores in this way, however, may have important implications for researchers and scale users in terms of interpretation and use. The second topic is subscore augmentation: if there are subscores, how much value is there in using a subscore augmentation method? Most people using psychological assessments are unfamiliar with score augmentation techniques and the potential benefits they may offer over the traditional sum-score approach. The current study borrows methods from education to explore the magnitude of improvement from using augmented scores over observed scores. Data were simulated using the Graded Response Model. Factors controlled in the simulation were the number of subscales, the number of items per subscale, the level of correlation between subscales, and sample size. Four estimates of the true subscore were considered (raw, subscore-adjusted, total-score-adjusted, and joint-score-adjusted). Results suggest that scores adjusted with total-score information may perform poorly when the inter-subscore correlation is 0.3. Joint scores performed well most of the time, and the subscore-adjusted and joint-adjusted scores always outperformed raw scores. Finally, general advice for applied users is provided.
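
A minimal Python sketch of the simulation's building blocks may help: responses are generated from the Graded Response Model, and a univariate Kelley-type regressed score is computed as a simplified relative of the adjusted scores examined in the study. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_items = 1000, 8
theta = rng.standard_normal(n)                    # true subscale ability
a = rng.uniform(1.0, 2.0, n_items)                # discriminations (hypothetical)
b = np.sort(rng.normal(0, 1, (n_items, 4)), 1)    # ordered boundaries -> 5 categories

# Graded Response Model: P(X >= k) = logistic(a * (theta - b_k))
p_ge = 1 / (1 + np.exp(-a[None, :, None] * (theta[:, None, None] - b[None])))
u = rng.uniform(size=(n, n_items))
x = (u[:, :, None] < p_ge).sum(axis=2)            # item scores 0..4

raw = x.sum(axis=1)                               # observed (raw) subscore
# Kelley's regressed estimate: shrink observed scores toward the mean by reliability
r_half = np.corrcoef(x[:, ::2].sum(1), x[:, 1::2].sum(1))[0, 1]
rel = 2 * r_half / (1 + r_half)                   # Spearman-Brown split-half reliability
kelley = rel * raw + (1 - rel) * raw.mean()
print(f"reliability ~ {rel:.2f}; regressed scores: {np.round(kelley[:3], 1)}")
```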
Date Created
2022

Content Agnostic Game Based Stealth Assessment

Description
Serious or educational games have long been a subject of research. They usually tie game mechanics, game content, and content assessment together to make a specialized game intended to impart learning of the associated content to its players. While this approach works well for games that teach highly specific topics, it consumes a great deal of time and money. Being able to reuse the same mechanics and assessment to create games that teach different content would yield substantial savings in both. The Content Agnostic Game Engineering (CAGE) architecture mitigates this problem by disengaging the content from the game mechanics. Moreover, content assessment in games is often so explicit that it disturbs the flow of players and thus hampers the learning process, because it is not integrated into the game flow. Stealth assessment helps alleviate this problem by keeping player engagement intact while assessing players at the same time. Integrating stealth assessment into the CAGE framework in a content-agnostic way increases its usability and further decreases game and assessment development time and cost. This research presents an evaluation of the learning outcomes in content-agnostic game-based assessment developed using the CAGE framework.
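
As a hypothetical illustration of the separation CAGE aims for, the Python sketch below decouples swappable content from content-agnostic mechanics and folds assessment into ordinary play. The interfaces and names are invented for this sketch, not CAGE's actual API.

```python
from abc import ABC, abstractmethod

class ContentModule(ABC):
    """Swappable subject matter; the mechanics below never reference a topic."""
    @abstractmethod
    def next_challenge(self) -> dict: ...
    @abstractmethod
    def check_answer(self, challenge: dict, answer: str) -> bool: ...

class StealthAssessor:
    """Tracks a running proficiency estimate from in-game actions, with no explicit quiz."""
    def __init__(self) -> None:
        self.correct = 0
        self.attempts = 0
    def observe(self, was_correct: bool) -> None:
        self.correct += was_correct
        self.attempts += 1
    @property
    def proficiency(self) -> float:
        return self.correct / self.attempts if self.attempts else 0.5

class GameLoop:
    """Content-agnostic mechanics: accepts any ContentModule and assesses invisibly."""
    def __init__(self, content: ContentModule, assessor: StealthAssessor) -> None:
        self.content = content
        self.assessor = assessor
    def play_turn(self, answer: str) -> None:
        challenge = self.content.next_challenge()
        self.assessor.observe(self.content.check_answer(challenge, answer))
```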
Date Created
2021

Is More Always Better? The Relation Between Socioeconomic Status and Human Development

Description
Socioeconomic status (SES) is one of the most well-researched constructs in developmental science, yet important questions underlie how best to model it. That is, are relations with SES always in the same direction, or does the direction of association change at different levels of SES? In this dissertation, I conducted a meta-analysis using individual participant data (IPD) to examine two questions: (1) Does a nonmonotonic (quadratic) model of the relations between components of SES (i.e., income, years of education, occupational status/prestige), depressive symptoms, and academic achievement fit better than a monotonic (linear) model? and (2) Is the magnitude of the relation moderated by developmental period, gender/sex, or race/ethnicity? I hypothesized that there would be more support for the nonmonotonic model; moderation analyses were exploratory. I identified nationally representative IPD from the Inter-university Consortium for Political and Social Research (ICPSR), including 59 datasets that represent 23 studies (e.g., Add Health) and 1,844,577 participants. Higher income (β = -0.11; β = 0.10), years of education (β = -0.09; β = 0.13), occupational status (β = -0.04; β = 0.04), and occupational prestige (β = -0.03; β = 0.04) were associated with a linear decrease in depressive symptoms and a linear increase in academic achievement, respectively. Higher income (β = 0.05), years of education (β = 0.02), and occupational status/prestige (β = 0.02) were quadratically associated with a decrease in depressive symptoms followed by a slight increase at higher levels of income, and a diminishing association toward higher levels of education and occupational status/prestige. Higher income was also quadratically associated with academic achievement (β = -0.03). These associations varied between developmental periods and racial/ethnic samples, but I found no evidence of variation between females and males. I integrate these findings into three conclusions: (1) more is not always better; (2) unique contexts and resources are associated with different levels of SES; and (3) these operate in a dynamic fashion with other cultural systems (e.g., racism) that affect the integrated actions between individual and context. I conclude by outlining several measurement implications and limitations as directions for future research.
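
The core comparison in question 1 reduces to testing a quadratic term against a linear-only model. Below is a minimal Python sketch of that comparison on simulated data; the generating coefficients loosely echo the betas reported above and are purely illustrative, not the study's estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
income_z = rng.standard_normal(n)   # standardized SES component
# hypothetical nonmonotonic truth: symptoms fall with income, then curve back up
symptoms = -0.11 * income_z + 0.05 * income_z**2 + rng.standard_normal(n)

X_lin = np.column_stack([np.ones(n), income_z])
X_quad = np.column_stack([np.ones(n), income_z, income_z**2])
for name, X in (("linear", X_lin), ("quadratic", X_quad)):
    beta, rss, *_ = np.linalg.lstsq(X, symptoms, rcond=None)
    bic = n * np.log(rss[0] / n) + X.shape[1] * np.log(n)   # fit penalized for complexity
    print(f"{name}: beta = {np.round(beta, 3)}, BIC = {bic:.1f}")
```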
Date Created
2021

A Psychometric Analysis of an Operational ASU Exam

Description
This thesis explored the psychometric properties of an ASU midterm exam. The analyses examined the efficacy of the exam's questions using the item-analysis methods of difficulty and discrimination. The difficulty and discrimination indices, together with the correlations among questions, led to suggestions for questions that may need revision.
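
Both indices are straightforward to compute. A minimal Python sketch on simulated dichotomous responses follows; the flagging cutoffs are conventional rules of thumb, not the thesis's specific criteria.

```python
import numpy as np

rng = np.random.default_rng(4)
# fake 0/1 exam: 200 examinees, 30 items, Rasch-style response probabilities
theta = rng.standard_normal(200)                     # examinee ability
easiness = rng.uniform(-1.5, 1.5, 30)                # item easiness
p = 1 / (1 + np.exp(-(theta[:, None] + easiness)))
scores = (rng.uniform(size=(200, 30)) < p).astype(int)

difficulty = scores.mean(axis=0)   # proportion correct per item (higher = easier)

total = scores.sum(axis=1)
# corrected item-total (point-biserial) discrimination: correlate each item with
# the total score excluding that item, so the item doesn't inflate its own index
discrimination = np.array([
    np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
    for j in range(scores.shape[1])
])

flag = (difficulty > 0.95) | (difficulty < 0.2) | (discrimination < 0.2)
print("items possibly needing revision:", np.flatnonzero(flag))
```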
Date Created
2020-05

Robustness of the General Factor Mean Difference Estimation in Bifactor Ordinal Data

Description
A simulation study was conducted to explore the robustness of general factor mean difference estimation with bifactor ordered-categorical data. In the no differential item functioning (DIF) conditions, the data-generation factors varied were sample size, the number of categories per item, the effect size of the general factor mean difference, and the size of the specific factor loadings; in data analysis, misspecification conditions were introduced in which the generated bifactor data were fit with a unidimensional model and/or the ordered-categorical data were treated as continuous. In the DIF conditions, the data-generation factors varied were sample size, the number of categories per item, the effect size of the latent mean difference on the general factor, the type of item parameter exhibiting DIF, and the magnitude of DIF; the data-analysis conditions varied in whether equality constraints were placed on the noninvariant item parameters.

Results showed that falsely fitting bifactor data with unidimensional models, or failing to account for DIF in item parameters, biased estimates of the general factor mean difference, while treating ordinal data as continuous had little influence on estimation bias as long as there was no severe model misspecification. The extent of the bias produced by fitting unidimensional models to bifactor datasets was mainly determined by the degree of unidimensionality (i.e., the size of the specific factor loadings) and the size of the general factor mean difference. When DIF was present, estimation of the general factor mean difference was completely robust to ignoring noninvariance in specific factor loadings but very sensitive to ignoring DIF in threshold parameters. With respect to ignoring DIF in general factor loadings, the estimation bias was substantial when the DIF was -0.15 and negligible for smaller DIF sizes. Despite the impact of model misspecification on estimation accuracy, the power to detect the general factor mean difference was mainly influenced by sample size and effect size. Serious Type I error rate inflation occurred only when DIF was present in the threshold parameters.
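
A minimal Python sketch of the kind of data generation described above: bifactor ordered-categorical responses for a reference and a focal group, with a general factor mean difference and threshold DIF on one item. All loadings, thresholds, and DIF magnitudes are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
lam_g, lam_s = 0.6, 0.4                  # general and specific loadings (hypothetical)
base_tau = np.array([-0.8, 0.0, 0.8])    # thresholds -> 4 ordered categories

def generate(g_mean, dif_shift=0.0, n_items=9, n_spec=3):
    g = rng.normal(g_mean, 1, n)                        # general factor
    s = rng.standard_normal((n, n_spec))                # specific factors
    which = np.repeat(np.arange(n_spec), n_items // n_spec)
    err_sd = np.sqrt(1 - lam_g**2 - lam_s**2)
    y_star = (lam_g * g[:, None] + lam_s * s[:, which]
              + rng.standard_normal((n, n_items)) * err_sd)
    tau = np.tile(base_tau, (n_items, 1))
    tau[0] += dif_shift                                 # threshold DIF on item 1 only
    return np.stack([np.searchsorted(tau[j], y_star[:, j])
                     for j in range(n_items)], axis=1)

ref = generate(0.0)                       # reference group
focal = generate(0.5, dif_shift=0.15)     # 0.5 general factor mean difference plus DIF
print(ref.mean(), focal.mean())           # focal responses shift upward on average
```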
Date Created
2019

The impact of information quantity and quality on parameter estimation for a selection of dynamic Bayesian network models with latent variables

Description
Dynamic Bayesian networks (DBNs; Reye, 2004) are a promising tool for modeling student proficiency under rich measurement scenarios (Reichenberg, in press). These scenarios often present assessment conditions far more complex than those of traditional assessments and require assessment arguments and psychometric models capable of integrating those complexities. Unfortunately, DBNs remain understudied and their psychometric properties relatively unknown. If the apparent strengths of DBNs are to be leveraged, the body of literature surrounding their properties and use needs to be expanded. To this end, the current work explored the properties of DBNs under a variety of realistic psychometric conditions. A two-phase Monte Carlo simulation study was conducted to evaluate parameter recovery for DBNs using maximum likelihood estimation with the Netica software package. Phase 1 included a limited number of conditions and was exploratory in nature, while Phase 2 included a larger and more targeted complement of conditions. Manipulated factors included sample size, measurement quality, test length, and the number of measurement occasions. Results suggested that measurement quality has the most prominent impact on estimation quality, with more distinct performance categories yielding better estimation. While increasing sample size tended to improve estimation, under a limited number of conditions greater sample size led to more estimation bias; an exploration of this phenomenon is included. From a practical perspective, parameter recovery appeared sufficient with samples as small as N = 400 as long as measurement quality was not poor and at least three items were present at each measurement occasion. Tests consisting of a single item required exceptional measurement quality to adequately recover model parameters. The study was somewhat limited by potentially software-specific issues as well as a non-comprehensive collection of experimental conditions. Further research should replicate and potentially expand the current work using other software packages, including exploring alternative estimation methods (e.g., Markov chain Monte Carlo).
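
As a schematic of what a proficiency DBN encodes, a minimal Python sketch follows: a two-state mastery variable with a learning transition and noisy item observations, filtered with the forward algorithm. All probabilities are hypothetical, and this bears no relation to Netica's API.

```python
import numpy as np

# Hypothetical two-state proficiency DBN: [non-master, master]
p_init = np.array([0.7, 0.3])          # prior before the first occasion
transition = np.array([[0.8, 0.2],     # non-master learns with probability 0.2
                       [0.0, 1.0]])    # masters do not forget
# Measurement quality: P(correct | state); wider separation = better estimation
p_correct = np.array([0.25, 0.85])     # guess- and slip-style emission probabilities

def forward_posterior(responses):
    """Forward algorithm: P(state_t | responses up to t) for one student."""
    belief = p_init.copy()
    for r in responses:                          # r = 1 correct, 0 incorrect
        belief = belief @ transition             # predict the next occasion
        like = np.where(r, p_correct, 1 - p_correct)
        belief = belief * like                   # weight by the observation
        belief /= belief.sum()                   # renormalize to a posterior
    return belief

print(forward_posterior([0, 1, 1, 1]))           # posterior mastery probability rises
```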
Date Created
2018

Assessing measurement invariance and latent mean differences with bifactor multidimensional data in structural equation modeling

Description
Investigations of measurement invariance (MI) commonly assume that dimensionality is correctly specified across multiple groups. Although research shows that violating the dimensionality assumption can bias model parameter estimation in single-group analyses, little research has addressed this issue for multiple-group analyses. This study explored the effects of mismatch in dimensionality between data and analysis models in multiple-group analyses at both the population and sample levels. Datasets were generated using bifactor models with different factor structures and were analyzed with bifactor and single-factor models to assess the effects of misspecification on assessments of MI and latent mean differences. As baseline models, the bifactor models fit the data well and showed minimal bias in latent mean estimation; however, the low convergence rates when fitting bifactor models to data with complex structures and small sample sizes were a concern. The effects of fitting misspecified single-factor models on assessments of MI and latent means differed by the bifactor structure underlying the data. For data following one general factor and one group factor affecting a small set of indicators, the effects of ignoring the group factor in the analysis model on tests of MI and latent mean differences were mild. In contrast, for data following one general factor and several group factors, oversimplifying the analysis model can lead to inaccurate conclusions regarding MI assessment and latent mean estimation.
Date Created
2018