Timbre in Musical and Vocal Sounds: The Link to Shared Emotion Processing Mechanisms
Music and speech are both used to express emotion, yet it is unclear how these domains are related. This dissertation addresses three problems in the current literature. First, speech and music have largely been studied separately. Second, studies in these domains are primarily correlational. Third, most studies use dimensional models of emotion in which motivational salience has not been considered. A three-part regression study investigated the first problem, examining whether acoustic components explained emotion in instrumental sounds (Experiment 1a), baby sounds (Experiment 1b), and artificial mechanical sounds (Experiment 1c). Participants rated whether stimuli sounded happy, sad, angry, fearful, or disgusting. Eight acoustic components were extracted from the sounds, and regression analyses revealed that these components explained participants' emotion ratings of instrumental and baby sounds well, but not of artificial mechanical sounds. These results indicate that instrumental and baby sounds were perceived more similarly to each other than to artificial mechanical sounds. To address the second and third problems, I examined the extent to which emotion processing for vocal and instrumental sounds crossed domains and whether similar mechanisms were used for emotion perception. In two sets of four-part experiments, participants heard an angry or fearful sound four times, followed by a test sound drawn from an anger-fear morphed continuum, and judged whether the test sound was angry or fearful. Experiments 2a-2d examined adaptation with instrumental and voice sounds, whereas Experiments 3a-3d used vocal and musical sounds. Results from Experiments 2a, 2b, 3a, and 3b were analogous: aftereffects occurred for the perception of angry but not fearful sounds across domains. Experiments 2c, 2d, 3c, and 3d examined whether adaptation occurred across modalities.
Cross-modal aftereffects occurred in only one direction (voice to instrument, and vocal sound to musical sound), and only for angry sounds. These results provide evidence that similar mechanisms are used for emotion perception in vocal and musical sounds, and that the nature of this relationship is more complex than a single shared mechanism. Specifically, there is likely a unidirectional relationship in which vocal sounds encompass musical sounds, but not vice versa, and in which motivational aspects of sound (approach vs. avoidance) play a key role.
Bowman, Casady Diane (2015). Timbre in Musical and Vocal Sounds: The Link to Shared Emotion Processing Mechanisms. Doctoral dissertation, Texas A&M University. Available electronically from