dc.description.abstract | Measurement invariance testing is a prerequisite for meaningful comparisons of latent constructs across groups in social science research. If measurement invariance is rejected, the non-invariance may stem from covariates that are unbalanced across groups. Propensity score adjustment is one approach to correcting unbalanced covariates in the data when those covariates are the source of measurement non-invariance.
The main purpose of this dissertation is to evaluate propensity score adjustment for measurement invariance testing in both an empirical study and a Monte Carlo simulation study. Traditional logistic regression and a machine learning method (random forest) were applied to obtain accurate propensity scores.
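The two estimation methods named above can be sketched as follows. This is a minimal illustration, not the dissertation's actual analysis: the simulated covariates, sample size, and model settings are all assumptions, and the propensity score is simply the predicted probability of group membership given the covariates.

```python
# Sketch: estimating propensity scores for group membership with
# logistic regression vs. random forest. All data here are simulated
# for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Two hypothetical covariates that differ across groups
x = rng.normal(size=(n, 2))
# Group membership depends on the covariates, so the groups are unbalanced
group = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n) > 0).astype(int)

# Propensity score = P(group == 1 | covariates)
ps_logit = LogisticRegression().fit(x, group).predict_proba(x)[:, 1]
ps_rf = (RandomForestClassifier(n_estimators=200, random_state=0)
         .fit(x, group).predict_proba(x)[:, 1])
```

The estimated scores can then be used either as an added covariate in the measurement model or to construct sampling weights.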
In the empirical study, when the propensity score was added as a covariate to adjust for unbalanced covariates across groups, measurement invariance improved from metric invariance to scalar invariance. The weighting-by-odds method with random forest estimation likewise improved metric invariance to scalar invariance, whereas weighting with logistic regression did not.
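The weighting-by-odds method mentioned above can be sketched as below: members of the focal group receive a weight of 1, while members of the reference group are weighted by the odds of focal-group membership, ps / (1 - ps). The function name and example values are illustrative, not from the study.

```python
# Sketch of weighting by the odds, assuming ps = P(group == 1 | covariates).
import numpy as np

def odds_weights(ps, group):
    """Weight 1 for the focal group (group == 1); ps / (1 - ps) otherwise."""
    ps = np.asarray(ps, dtype=float)
    group = np.asarray(group)
    return np.where(group == 1, 1.0, ps / (1.0 - ps))

# Illustrative values: a reference-group member with ps = 0.8 gets weight 4.
w = odds_weights([0.2, 0.5, 0.8], [1, 0, 0])
print(w)  # -> [1. 1. 4.]
```

Reference-group members who resemble the focal group (high ps) are up-weighted, which balances the covariate distributions before the invariance test is rerun.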
The results of the simulation study indicated substantial Type I error rate inflation when the unbalanced covariates were ignored and multiple-group CFA was used to conduct the measurement invariance test. Type I error rate inflation was also observed when logistic regression was applied for the propensity score adjustment. In contrast, using the random forest estimation method to balance covariates across groups yielded accurate measurement invariance test conclusions. | en |