
dc.contributor.advisor: Vannest, Kimberly J
dc.creator: Peltier, Corey James
dc.date.accessioned: 2019-01-16T19:28:01Z
dc.date.available: 2019-12-01T06:35:58Z
dc.date.created: 2017-12
dc.date.issued: 2017-12-01
dc.date.submitted: December 2017
dc.identifier.uri: https://hdl.handle.net/1969.1/173120
dc.description.abstract: The mathematical performance of U.S. students has drawn attention from the field of education as well as the public sector. An integral component of the nationwide initiative to improve mathematics instruction is the use of data for decision-making. However, data are only useful if they are reliable and valid, which requires technically sound measures. This dissertation includes two articles: (a) a literature review on the criterion validity of mathematics curriculum-based measures and (b) a correlational study analyzing the criterion validity of a mathematics curriculum-based measure. The first study is a review of the literature on studies that administered mathematics curriculum-based measures (m-CBMs) and examined the criterion validity of the scores. The review includes 40 articles that met the following criteria: (a) published in a peer-reviewed journal, (b) administered an m-CBM with school-age students, (c) reported quantitative data regarding the validity of scores, and (d) published in English. Variables that may moderate the validity of the scores produced were identified and coded; these variables included the mathematical focus of the measure and the administration protocol (i.e., timing, paper-pencil/computer, proctor, and grouping [i.e., classwide, small group, individual]). Results suggest that concepts-and-applications m-CBMs yielded the strongest validity coefficients with standardized measures of mathematics performance for students in upper elementary and middle school. Scores from numeracy measures showed evidence of criterion validity with standardized measures of mathematics achievement for early elementary students. There was no evidence that the proctor or the grouping moderated the validity; however, a mismatch between the administration format of the m-CBM and that of the criterion measure may affect the validity. The second article analyzes the criterion validity of a computer-adaptive m-CBM used for universal screening.
Data from 1,195 students in third through eighth grade attending four schools in the rural Southern U.S. were included. Correlational analyses were used to estimate the predictive and concurrent validity of the computer-adaptive m-CBM with respect to the end-of-year state assessment. Multiple linear regression analyses were used to identify whether student demographic variables (i.e., gender, race, free and reduced meals, limited English proficiency, special education, Section 504) moderated the validity. Results suggest the m-CBM had strong criterion validity with the end-of-year state assessment across grades. Validity coefficients were strongest for the major content domain and weakest for the additional and supporting content. Moderator analyses revealed that the demographic variables gender, SPED, FARMS, Section 504, and LEP moderated the criterion validity of the m-CBM. (en)
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: curriculum-based measures (en)
dc.subject: validity (en)
dc.subject: mathematics (en)
dc.subject: moderator analysis (en)
dc.subject: systematic review (en)
dc.title: Verifying and Looking into Data: Validity of Mathematics Curriculum Based Measures (en)
dc.type: Thesis (en)
thesis.degree.department: Educational Psychology (en)
thesis.degree.discipline: Special Education (en)
thesis.degree.grantor: Texas A&M University (en)
thesis.degree.name: Doctor of Philosophy (en)
thesis.degree.level: Doctoral (en)
dc.contributor.committeeMember: Ganz, Jennifer
dc.contributor.committeeMember: Kwok, Oi-man
dc.contributor.committeeMember: Lynch, Patricia
dc.contributor.committeeMember: Neshyba, Monica
dc.type.material: text (en)
dc.date.updated: 2019-01-16T19:28:01Z
local.embargo.terms: 2019-12-01
local.etdauthor.orcid: 0000-0003-3138-4126

