Show simple item record

dc.contributor.advisor: Goodson, Patricia
dc.creator: Valdez, Jr., Daniel
dc.date.accessioned: 2019-01-18T15:42:20Z
dc.date.available: 2020-08-01T06:38:58Z
dc.date.created: 2018-08
dc.date.issued: 2018-08-06
dc.date.submitted: August 2018
dc.identifier.uri: https://hdl.handle.net/1969.1/174067
dc.description.abstract: In an environment where one article is published every 20 seconds, we cannot be certain that all studies are held to the same high quality standard. Thus, there is growing speculation that much of what is published today contains embedded biases that detract from the quality of science. Though aware of bias in research, we are ill-equipped to identify, address, and mitigate bias in published literature. Therefore, the purpose of this dissertation is to (1) explore the complexity and salience of bias in published work across two domains: bias in numeric data (numeric bias) and bias embedded in language patterns (language bias), and (2) test technological tools intended to detect bias more objectively, namely the Cochrane Institute's GRADEpro and topic modeling. Numeric bias was defined as bias within numeric data and was detected via the GRADEpro software. To demonstrate the effectiveness of GRADEpro as a valid tool for detecting numeric bias, this study used a heuristic example with currently published manuscripts on Pre-Exposure Prophylaxis (PrEP). Findings indicated, primarily, that evidence quality varied across studies, ranging from Very High to Very Low. Further, the reported efficacy of the medication also varied from study to study. Language bias was defined as bias within written language and was identified more objectively via topic modeling. To demonstrate the effectiveness of topic modeling, I compared corpora of text data across three bias-inducing variables: time, funding source, and nation of origin. For each corpus, language patterns varied with the bias-inducing variables, suggesting, among other considerations, that bias-inducing variables influence the direction of language even when studies test the same hypothesis.
Overall, this dissertation sought to present tools from outside Public Health that can more objectively identify problematic issues within numeric and language data. For both types of bias, language and numeric, bias was identified and distilled more efficiently and effectively. Therefore, issues such as recurrent bias in Public Health should be addressed via these tools, and potentially others, in the continued effort to uphold the integrity of science.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.subject: Public Health
dc.subject: Bias
dc.subject: Topic Modeling
dc.subject: GRADEpro
dc.title: Bias in Public Health Research: Ethical Implications and Objective Assessment Tools
dc.type: Thesis
thesis.degree.department: Health and Kinesiology
thesis.degree.discipline: Health Education
thesis.degree.grantor: Texas A&M University
thesis.degree.name: Doctor of Philosophy
thesis.degree.level: Doctoral
dc.contributor.committeeMember: Barry, Adam E
dc.contributor.committeeMember: Brison, Natasha
dc.contributor.committeeMember: Lightfoot, J Timothy
dc.type.material: text
dc.date.updated: 2019-01-18T15:42:20Z
local.embargo.terms: 2020-08-01
local.etdauthor.orcid: 0000-0002-2355-9881

