
dc.contributor.advisor	Yamauchi, Takashi
dc.creator	Bowman, Casady Diane
dc.date.accessioned	2016-04-06T16:09:27Z
dc.date.available	2017-12-01T06:36:17Z
dc.date.created	2015-12
dc.date.issued	2015-12-11
dc.date.submitted	December 2015
dc.identifier.uri	https://hdl.handle.net/1969.1/156199
dc.description.abstract	Music and speech are both used to express emotion, yet it is unclear how these domains are related. This dissertation addresses three problems in the current literature. First, speech and music have largely been studied separately. Second, studies in these domains are primarily correlational. Third, most studies rely on dimensional models of emotion in which motivational salience has not been considered. A three-part regression study investigated the first problem, examining whether acoustic components explained perceived emotion in instrumental sounds (Experiment 1a), baby sounds (Experiment 1b), and artificial mechanical sounds (Experiment 1c). Participants rated whether stimuli sounded happy, sad, angry, fearful, and disgusting. Eight acoustic components were extracted from the sounds, and a regression analysis revealed that these components explained participants' emotion ratings of instrumental and baby sounds well, but not of artificial mechanical sounds. These results indicate that instrumental and baby sounds were perceived more similarly to each other than to artificial mechanical sounds. To address the second and third problems, I examined the extent to which emotion processing for vocal and instrumental sounds crosses domains and whether similar mechanisms underlie emotion perception in each. In two sets of four-part adaptation experiments, participants heard an angry or fearful sound four times, followed by a test sound drawn from an anger-fear morphed continuum, and judged whether the test sound was angry or fearful. Experiments 2a-2d examined adaptation to instrumental and voice sounds, whereas Experiments 3a-3d used vocal and musical sounds. Results from Experiments 2a, 2b, 3a, and 3b were analogous: aftereffects occurred for the perception of angry but not fearful sounds across domains. Experiments 2c, 2d, 3c, and 3d examined whether adaptation occurred across modalities. Cross-modal aftereffects occurred in only one direction (voice to instrument, and vocal sound to musical sound), and only for angry sounds. These results provide evidence that similar mechanisms support emotion perception in vocal and musical sounds, and that the relationship is more complex than a single shared mechanism. Specifically, there is likely a unidirectional relationship in which vocal sounds encompass musical sounds but not vice versa, and in which motivational aspects of sound (approach vs. avoidance) play a key role.	en
dc.format.mimetype	application/pdf
dc.language.iso	en
dc.subject	emotion perception	en
dc.subject	music	en
dc.subject	speech	en
dc.title	Timbre in Musical and Vocal Sounds: The Link to Shared Emotion Processing Mechanisms	en
dc.type	Thesis	en
thesis.degree.department	Psychology	en
thesis.degree.discipline	Psychology	en
thesis.degree.grantor	Texas A&M University	en
thesis.degree.name	Doctor of Philosophy	en
thesis.degree.level	Doctoral	en
dc.contributor.committeeMember	Vaid, Jyotsna
dc.contributor.committeeMember	Beaster-Jones, Jayson
dc.contributor.committeeMember	Ferris, Thomas
dc.type.material	text	en
dc.date.updated	2016-04-06T16:09:27Z
local.embargo.terms	2017-12-01
local.etdauthor.orcid	0000-0003-3188-8228

