
Reading Reflections

Coutinho, Eduardo, and Nicola Dibben. "Psychoacoustic Cues to Emotion in Speech Prosody and Music." Cognition and Emotion 27 (2013).

I found the theory behind this article fascinating, as the subject is one that is easily underappreciated. The idea that 'Music, like speech, can communicate emotions to listeners through the organization of acoustic signals' is by no means a new thought, and is perhaps even understood by most people. It is easy to listen to a piece of music and experience an emotional response, and it is even easier to respond to speech.

I also found it interesting how the article refines this idea by focusing on speech prosody, something that, again, can easily be overlooked as just a part of everyday speech.

The overlap between speech prosody and music is what interested me most about this article. Both can be shaped through rate to create a simple increase or decrease in tension and energy.

After reflecting on this, I have begun to expand on this concept of overlapping influences, and I believe these same influences can apply to text. Emphasizing prosody in text with the aid of musical cues and animations could create a similar effect in a medium such as a game, where characters are not voiced and there is a crucial absence of speech. I have found examples of this in two games, both independently produced: Stardew Valley and Underrail.

Moore, Adrian. Sonic Art: An Introduction to Electroacoustic Music Composition. London: Taylor & Francis Group, 2016. Accessed March 21, 2021. ProQuest Ebook Central.

I found this book to be an excellent resource to refer back to while composing. The language is clear, and the examples do an excellent job of describing techniques and processes for developing any sonic art piece.

The section I found most interesting was the terminology for different listening "states". These states play a big part not only in listening to a piece but also in composing it. While composing, it is easy to fall into the "ouïr", or passive, state of listening, where the listener is detached from the piece. Listening in a more analytic, "écouter", state allows us as composers to think critically about the sounds we are hearing and the events and spaces they represent.

These states will prove useful in defining which sounds I can use alongside narrative in the game space to affect the player's emotional response. If the player is listening in a passive state but is actively engaged in the narrative, how does this affect their emotional response to the game? Does being actively involved in both the sound and the narrative overwhelm the player? These are some questions I would like to explore further, alongside the psychoacoustic concepts mentioned previously.

Roginska, Agnieszka, and Paul Geluso, eds. Immersive Sound: The Art and Science of Binaural and Multi-Channel Audio. Milton: Taylor & Francis Group, 2017. Accessed March 21, 2021. ProQuest Ebook Central.

Immersive Sound:

Immersion is a critical element of any sound piece. It connects the audience to the narrative of the piece, but it also creates a space for them to travel to. Using techniques like reverb, the composer can manipulate sounds to fit into this acoustic space and so immerse the listener. Another way the composer can convey a sense of space is with panning and multichannel sound. By panning sounds from left to right, or, in the case of a multichannel piece, across multiple channels, the composer can move the space around the audience to make it feel more alive and responsive. Overlapping sounds, panning them hard left and right, and manipulating their timbre creates a stereo image from a single mono track and adds texture to the space.
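The panning described above is often implemented with an equal-power pan law, which keeps perceived loudness roughly constant as a sound moves across the stereo field. A minimal NumPy sketch, not drawn from the book; the function name and parameters are my own:

```python
import numpy as np

def equal_power_pan(mono, pan):
    """Place a mono signal in the stereo field.

    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    The cos/sin gains satisfy L^2 + R^2 = 1 at every position,
    so total power stays constant as the sound moves.
    """
    mono = np.asarray(mono, dtype=float)
    angle = (pan + 1.0) * np.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    left = np.cos(angle) * mono
    right = np.sin(angle) * mono
    return np.stack([left, right], axis=-1)   # shape (samples, 2)

# Example: a 440 Hz tone, centred and hard left
tone = np.sin(2 * np.pi * 440 * np.arange(4410) / 44100)
centred = equal_power_pan(tone, 0.0)     # both channels at ~0.707 gain
hard_left = equal_power_pan(tone, -1.0)  # right channel silent
```

A simple linear crossfade would dip in loudness at the centre; the equal-power curve avoids that, which is why it is the common default in mixers and game engines.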

Perception and Spatial Sound:

To create an immersive sound piece, the reflections and timbre of the sounds need to be understood. This text explains how the brain and ears respond to different frequencies of sound, as well as to sounds arriving from different directions and with different reflections. Some of these effects can be created digitally, though not as effectively as recording sounds with a 3D microphone at a particular site. I find reverb and delay to be the most effective tools for immersing the listener: reverb simulates the reflections of sounds within a space, while delay mimics the "interaural time difference" between the ears. In my own work, I attempted to improve my digital guitar sounds by adding reverb and delay to two different-sounding guitars and having them play on the left and right at the same time. The other use of these effects is to modulate digital sounds so that they do not sound identical each time they are played. This is a big part of immersion when working with digital sounds, since real-world sounds are never exactly the same and the way we hear them always varies.
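The delay idea above can be illustrated in a few lines: copying a mono track to two channels and delaying one by a fraction of a millisecond mimics the interaural time difference and widens the image. A minimal NumPy sketch of my own, not taken from the text; the sample rate and delay value are assumptions:

```python
import numpy as np

SR = 44100  # assumed sample rate in Hz

def widen_with_itd(mono, delay_ms=0.5, sr=SR):
    """Turn a mono signal into stereo by delaying the right channel
    by a sub-millisecond amount, mimicking the interaural time
    difference (ITD) between a listener's two ears."""
    mono = np.asarray(mono, dtype=float)
    delay = int(round(sr * delay_ms / 1000.0))  # delay in whole samples
    # Right channel: same signal, shifted later in time
    right = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    return np.stack([mono, right], axis=-1)    # shape (samples, 2)

# Example: widen a short burst of noise (one tenth of a second)
burst = np.random.default_rng(0).standard_normal(SR // 10)
stereo = widen_with_itd(burst, delay_ms=0.5)
```

Delays much beyond about 1 ms start to be heard as an echo rather than a position cue, so the offset is kept very small.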

Cullen, M. "Basics of Sound Design for Video Games." ICS 62: Game Technologies and Interactive Media, class lecture at UC Irvine, California, USA, 2016.

I found this text to be a useful guide to the uses of sound in games, as well as to the mastering techniques most used in the industry. These techniques include randomizing the pitch of sounds like footsteps (as stated above, this modulation improves the immersion of the sound) and dynamically changing the mixing levels during gameplay. What I liked most about this text was the number of examples it uses: examples of good and bad implementation in games, as well as examples of the technical implementation within the game engine.
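The footstep randomization mentioned above can be sketched by resampling each playback at a slightly random rate, which shifts pitch and duration together like a tape-speed change. A hypothetical NumPy sketch of the general technique (the function name and the ±100-cent range are my own choices, not from the lecture):

```python
import random
import numpy as np

def randomize_pitch(sound, cents_range=100):
    """Resample `sound` at a slightly random rate so repeated plays
    (e.g. footsteps) never sound identical. 100 cents = one semitone;
    naive resampling changes pitch and duration together."""
    cents = random.uniform(-cents_range, cents_range)
    rate = 2.0 ** (cents / 1200.0)            # playback-speed factor
    old_idx = np.arange(len(sound))
    new_idx = np.arange(0.0, len(sound) - 1, rate)  # resample positions
    return np.interp(new_idx, old_idx, sound)       # linear interpolation

# Example: each call returns a slightly different-sounding footstep
footstep = np.sin(2 * np.pi * 200 * np.arange(2000) / 44100)
varied = randomize_pitch(footstep)
```

Game engines typically expose this as a per-sound pitch-variance setting; the sketch just shows what that setting does underneath.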
