Poster Presentations

All videos can also be found on our Conference OSF!


Poster Session 1:
Saturday, March 6; 3:00-3:45 PM EST

Chenette, Timothy

Does Gestalt hearing exist?

Karpinski (2000) describes "Gestalt hearing" (identifying chords instantly and holistically) as the ideal endpoint of training in harmonic dictation. Yet the subdominant chord (e.g.) is not a single object but a collection of objects that differ in timbre, texture, inversion, spacing, etc. Is it really possible to become so acquainted with this group that one perceives it as a Gestalt? If so, what experiences and abilities are necessary to develop Gestalt hearing? This article draws on relevant results from an observational, exploratory study of harmonic listening (N=73) to suggest preliminary answers to these questions and paths for future research in this area. Our results suggest that Gestalt hearing, if it exists, may not be attainable within college-level instruction.

Video/Supplementary Materials
Author Manuscript

Delasanta, Lana J.

Information specification during singing: A theoretical approach to music performance

Theories of self-organized systems build on Gibson's (1966) proposal that organisms and the environment form one coupled system. Energy flows throughout this system, allowing its subsystems to use that energy in meaningful ways. Specifically, emergent collective organization provides information about, and specific to, the world around us. During music performance, this is especially important for performers and listeners alike. This theoretical proposal discusses how ecological physics and the theory of the global array can be examined through music performance. If we consider a group of singers as a self-organized system, it opens the door to understanding the dynamics and information flow within and around it. The goal of this paper is to explore an unorthodox approach to examining the perception of music performance.

Video/Supplementary Materials
Author Manuscript

Foley, Liam, Schutz, Mike, & Schachtler, Laura

The role of timbre, envelope, and movement in audio-visual integration of musicians' movements

Specific acoustic properties meaningfully shape the perceptual system's binding of sight and sound (Vatakis & Spence, 2007, 2008). One understudied acoustic property in cross-modal integration is amplitude envelope, the way in which a sound's loudness changes over time. Here we examine the effect of amplitude envelope (flat versus percussive) on binding. Participants completed a temporal order judgement (TOJ) task, indicating which sensory modality was presented first. We hypothesized better binding for natural sounds with percussive envelopes than for flat pure tones. Our preliminary results show a larger binding window for natural percussive tones. Further research into the roles of timbre, temporal variation, and movement in cross-modal integration will further our understanding of how the perceptual system binds natural stimuli.

Video/Supplementary Materials

Gardner, Sammy

The effect of gesture on the perception of linearity in instrumental music

Given that music performances are made up of gestures, we might ask how the movements of an individual can alter how one perceives music. To address this question, this paper examines a hypothesis concerning the gestural priming of melodic events and the role of this priming in the perceived continuation of the melody. We hypothesize that, when primed with a linear gesture, participants will be more likely to select the continuation of a melodic idea, that is, one in which the melody keeps moving in the same direction. Conversely, when primed with circular gestures, participants will be more likely to select musical ideas that reverse and return to the starting pitch. Our results show no significant effect of gesture, but a significant effect of musical scale when diatonic scales were used alongside the gesture. Gestural priming did not predict whether participants selected a musical idea that continued or returned. These results suggest that familiarity with a musical context is perhaps more predictive of melodic expectation than gesture.

This paper tests the hypothesis that movement in either a linear or a cyclical pattern influences whether listeners infer a continued progression of a musical idea (in which new material is presented) or the repetition of a musical idea (in which there is a cyclical return to previous material). We ran two experiments, a response task and a production task, in which trained musicians were asked what comes next in a musical pattern (a sequence or melodic line) while watching cyclic or linear gestures as the music played. This project places new importance on the visual and gestural aspects surrounding music: if non-musical gesture can shape how one perceives musical time, then the role of outside bodies in music perception must be resituated.

Video/Supplementary Materials
Author Manuscript

Lenchitz, Jordan

Partial-oriented listening and the timbre-pitch perceptual continuum

Partial-oriented listening is a mode of listening that entails attending to timbral upper partials as potential pitches, hearing out spectral prominences distinctly from timbral aggregates. Questions such as "What is the highest pitch you hear?" and "How many pitches do you hear?" invite partial-oriented listening, which can be considered a "top-down" listening strategy: attend to high frequencies first and then to low frequencies. I posit the existence of a timbre-pitch continuum of percepts of upper partials arising from the confluence of top-down listening strategies (such as partial-oriented listening) and bottom-up acoustic features (such as spectral fission, my theorization of the counterpart to McAdams's "spectral fusion," which describes situations in which many listeners are likely to perceive a timbral upper partial as a discrete pitch). I argue that, to best account both for the flexibility of listening behaviors within any given listener and for individual differences between listeners, it is most productive to frame variance and variety along this continuum in terms of modes of listening and listening behaviors rather than "types of listeners." In other words, any individual listener is better represented by a band of percepts along the continuum than by any one individual percept on it. I present this theoretical framework to facilitate study of the role of individual differences in timbre and pitch perception and to embrace the diversity of perceptual experiences that can arise from different modes of listening to the same sound.

Video/Supplementary Materials

Miskinis, Alena; Lin, XiangXu; & Kanan, Shadi

Virtual Harmony: Music interaction with virtual reality to reduce stress

This paper presents a demonstration of a newly created device called Virtual Harmony, designed to reduce stress by applying Virtual Reality (VR) in a Music Therapy (MT) environment. The treatment combines VR and MT to stimulate audition through background music, vision through three-dimensional VR, and touch through virtual percussive instruments. A pilot study was conducted among 19 high school and college students, in which participants used Virtual Harmony and provided pre- and post-exposure questionnaire responses about their experiences. 90% of participants reported that Virtual Harmony was effective and worth purchasing, and 32% reported a significant decrease in stress levels after using the device. Only 5% reported that their stress increased, which may be related to past experiences with severe vertigo. Further controlled experimentation is needed, but these early results are consistent with Virtual Harmony being a promising, affordable, and accessible way for users to manage their stress.

Video/Supplementary Materials
Author Manuscript

Norden, Nathalie, VanHandel, Leigh, & McAuley, Devin

Effects of musicianship on hypermetrical interpretation of rhythms

Previous research reaches different conclusions on whether non-musicians or musicians are more likely to perceive the beat of a rhythm at a slower tempo (at a higher level in the metric hierarchy).

To investigate this, participants completed two tasks for a set of thirty monotonic rhythms in either a Fast tempo (150 bpm, 400 ms inter-beat interval) or Slow tempo condition (75 bpm, 800 ms inter-beat interval). Their first task was to adjust the tempo of each rhythm in real time until it was at what they determined to be the best tempo; their second task was to tap along with what they felt was the beat of each rhythm. Participants did this both with and without an isochronous metrical context. Finally, participants' musical background was assessed using the Goldsmiths Musical Sophistication Index (GMSI). The ratio of tapped tempo to determined best tempo was calculated for each participant and rhythm: a participant who tapped the beat at the same tempo as their determined tempo would produce a ratio of 1:1. Of particular interest were ratios of 0.5, where participants tapped at half the tempo of the beat unit, a phenomenon known as a hypermetrical interpretation. Overall, participants produced more hypermetrical interpretations in the Fast tempo condition and for rhythms presented with a metrical context than without. Moreover, participants with higher GMSI scores tended to produce more hypermetrical interpretations than participants with lower GMSI scores, but only in the Fast tempo condition.
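The ratio measure described above is straightforward to illustrate. The sketch below uses invented numbers and a hypothetical tolerance for what counts as a 0.5 ratio; it is not the authors' analysis code.

```python
# Illustration of the tapped/determined tempo ratio (hypothetical values,
# not the study's actual analysis pipeline).

def tempo_ratio(tapped_bpm, determined_bpm):
    """Ratio of a participant's tapped tempo to their determined best tempo."""
    return tapped_bpm / determined_bpm

def is_hypermetrical(ratio, tol=0.05):
    """A ratio near 0.5 means tapping at half the tempo of the beat unit.
    The tolerance here is an invented example value."""
    return abs(ratio - 0.5) <= tol

# A participant sets the best tempo to 150 bpm but taps at 75 bpm:
r = tempo_ratio(75, 150)
print(r)                    # 0.5
print(is_hypermetrical(r))  # True
```

A 1:1 ratio (tapping at the determined tempo) would fall outside the 0.5 band and be counted as a non-hypermetrical interpretation.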

Video/Supplementary Materials

Patrick, Morgan & Ashley, Richard

The effect of pattern deflection on the perception of melodic completeness: Investigating Leonard Meyer’s insight

A fundamental insight of Leonard Meyer's approach to melody perception is his notion of reversal. Defined as a marked deflection in an ongoing pattern, reversal was a syntactic necessity for Meyer, without which ensuing resting points would feel incomplete (Meyer, 1980). In this exploratory study, we frame Meyer's theory as an empirical question: do listeners judge musical phrases with marked reversals to be more complete than those without? In a forced-choice paradigm, music students listened to pairs of synthesized phrases with matched beginnings and endings but different kinds of reversal in their continuations, selecting which phrase felt more complete. A conjoint analysis modeling selection behavior as a function of reversal type (pitch, rhythm, and both pitch and rhythm) compared to the baseline case of no reversal revealed a significant preference for reversal melodies, F(3, 2192) = 6.05, p < .01. Average marginal component effects demonstrated that phrases with pitch reversals, as well as phrases with both pitch and rhythm reversals, significantly increased likelihood of selection relative to no reversal (p < .05 and p < .01, respectively). The effect of rhythm reversal alone approached, but did not reach, significance. These preliminary results are consistent with Meyer's insight that melodic reversals result in feelings of phrase-ending completeness, and future directions are considered.

Video/Supplementary Materials

Reymore, Lindsey & Hansen, Niels Chr.

Towards a theory of instrument-specific absolute pitch: Effects of timbre and motor imagery

While absolute pitch (AP)—the ability to name musical pitches without reference—is rare in expert musicians (Levitin & Rogers, 2005; Ward, 1999), anecdotal evidence suggests that some musicians may better identify pitches played on their primary instrument than pitches played on other instruments. We call this phenomenon “instrument-specific absolute pitch” (ISAP) and offer the first theory of underlying mechanisms (Reymore & Hansen, 2020). This theory is situated in neuroscientific research on the multimodal nature of expertise (e.g., Krishnan et al., 2018; Proverbio & Orlandi, 2016). We propose that informative timbral cues arise from performer- or instrument-specific idiosyncrasies or from timbre-facilitated tonotopic representations and that sounds of one’s primary instrument may activate kinaesthetic memory and motor imagery, aiding pitch identification (Hansen & Reymore, 2021). Hypotheses derived from this theory are tested in two professional oboists. Only one of the two oboists showed an advantage for identifying oboe tones over piano tones. For this oboist, pitch-naming accuracy decreased and variance around the correct pitch value increased as an effect of transposition and motor interference, but not of instrument or performer. These results suggest that some musicians possess instrument-specific absolute pitch while others do not and that candidate mechanisms behind this ability capitalize on timbral cues and motor imagery. In a Registered Report (Hansen & Reymore, 2021), we plan to extend these findings to a larger population of oboists. A deeper understanding of instrument-specific absolute pitch has theoretical implications for research on musical expertise, absolute pitch, timbre and pitch cognition, and musical embodiment, as well as practical implications for musical practice and pedagogy. Finally, the theory offers several directions for future research, employing behavioral, neuroimaging, and brain stimulation methods.

Video/Supplementary Materials

Schmuckler, Mark A.

Musical surface and musical structure: The role of abstraction in musical processing

Because of its temporally ephemeral nature, musical processing presents a unique challenge to the perceptual system. To understand musical passages, one must continuously infer underlying musical structure from surface information that is both constantly changing and physically transitory. Accordingly, a central component of musical behavior involves abstracting the underlying musical structure from the ever-fleeting musical surface. Research across a variety of domains, including both perception and performance, points to two fundamental structures involved in musical abstraction: tonal structure and contour structure and relations. This presentation reviews the importance of these underlying structures across a range of experimental contexts in musical processing. Specifically, research is presented demonstrating that such diverse areas as tonal-metric coherence in melodic processing, melodic memory, piano performance, and melodic prototype abstraction are all driven by the abstraction of musical structure, relative to the processing of musical surface information. Such research supports the idea that musical understanding, and potentially auditory processing more globally, is fundamentally driven by the apprehension of underlying structural patterns, and not necessarily by surface information.

Video/Supplementary Materials
Author Manuscript

Yu Wang, Anna

Stimulus accessibility and music theory/therapy

This paper draws on music therapeutic, neuroscientific, and philosophical literature to posit three aspects of musical engagement that qualify music as an unusually accessible stimulus: 1) audition as a means of self-orientation, 2) music's instigation of self-referential thought, and 3) the lower threshold required for processing musical meaning compared to linguistic meaning. This accessibility renders music a promising therapeutic stimulus for people living with a disorder of consciousness or other cognitive disorders, as clinical studies suggest. Moreover, this paper argues that culturally sensitive music theory and cognition can help maximize music's therapeutic potential by clarifying the variables that influence the accessibility of musical stimuli. Specifically, by complicating research findings from participant cohorts dominated by members of Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies, music theory and cognition can illuminate how cultural context impacts the manner and extent to which listeners derive therapeutic benefit from musical structures. This suggests that there is fertile ground for future collaborative work between music therapists, cognitivists, and theorists.

Video/Supplementary Materials
Author Manuscript

 

Poster Session 2:
Sunday, March 7; 3:00-3:45 PM EST

Anderson, Cameron & Schutz, Mike

Analyzing expressive differences in historic prelude sets using cluster analysis

Diverging patterns in mode's associations with other musical cues carry distinctive expressive connotations. The major and minor modes' relationships with loudness and timing are often understood in absolute terms, with major pieces described as faster and louder than their minor counterparts. However, recent findings suggest that mode's relationship to other cues shifted markedly in the Romantic era (Horn & Huron, 2015). Here we expand on previous work using cluster analysis to track expressive changes in music history, applying this technique to Bach's The Well-Tempered Clavier (1722) and Chopin's Preludes (1839). Analyzing the clusters for each composer reveals empirical support for mode's changing expressive associations. Specifically, Chopin's minor pieces are distinguished by faster attack rates and louder dynamics than Bach's, consistent with research highlighting mode's changing musical meaning. In tandem with our team's perceptual experiments with these pieces, this analysis provides a valuable complement to the small but growing body of research exploring changes in the use of emotive acoustic cues over musical history.

Video/Supplementary Materials

Cumming, Julie & McKay, Cory

Using corpus studies to find the origins of the madrigal

A recurring topic in musicology is the origin of the madrigal. Did it come from the frottola (Einstein 1949), the chanson and motet (Fenlon and Haar 1988), or Florentine song (Cummings 2004)? These scholars discuss few pieces and do minimal musical analysis.

MS Florence, BNC, 164-167 (c. 1520) has 4 sections, each devoted to a different genre: madrigals, other Italian-texted genres, chansons, and motets. These sections provide evidence of genre classification from the period.
We encoded the 82 pieces in the manuscript and used jSymbolic to extract 801 features from each file. We then trained machine learning classifiers (in WEKA) to identify the pieces in the different sections. This allowed us to test the claims of earlier scholars about the similarity or difference between the madrigals and the other genres.

We expected that the madrigals would be more similar to the motets than to the other genres; to our surprise, the classifier was able to distinguish them 99% of the time. The classifiers could distinguish the Italian-texted genres from the madrigals only 65% of the time, suggesting that these early madrigals are more similar to other Italian-texted pieces than to the other genres. Statistical analysis showed that rhythmic features (not considered in the literature) were the most important for distinguishing the genres.
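As an illustration of the classify-and-test workflow described above: the study itself extracted jSymbolic features and trained classifiers in WEKA, but the underlying idea can be sketched with a toy nearest-centroid classifier on invented two-dimensional feature vectors.

```python
# A schematic stand-in for the jSymbolic/WEKA workflow described above.
# Feature values and labels are invented for illustration; the real study
# used 801 jSymbolic features per piece and WEKA's learning algorithms.
import numpy as np

def nearest_centroid_fit(X, y):
    """Return one centroid (mean feature vector) per class label."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def nearest_centroid_predict(centroids, x):
    """Assign x to the class whose centroid is closest (Euclidean)."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Toy data: 'madrigal' vs 'motet' pieces described by two features
# (say, one rhythmic and one melodic statistic) -- values are made up.
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
y = np.array(["madrigal", "madrigal", "motet", "motet"])

cents = nearest_centroid_fit(X, y)
print(nearest_centroid_predict(cents, np.array([0.85, 0.15])))  # madrigal
```

In the actual study, cross-validated classification accuracy between sections (e.g., the 99% madrigal/motet separability reported above) served as the measure of genre similarity.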

Video/Supplementary Materials
Author Manuscript

Merseal, Hannah; Beaty, Roger; Frieler, Klaus; Norgaard, Martin; MacDonald, Maryellen; & Weiss, Daniel

Biases in language production are reflected in musical improvisation: Evidence from large-scale corpus analysis

Producing language involves the real-time sequencing of words into phrases, placing considerable demands on working memory that can be relieved by ordering biases in spoken utterances. One such bias is called easy-first: the tendency for more easily accessible phrases to occur earlier in an utterance, allowing for incremental planning of more complex phrases (MacDonald, 2013). Recent evidence suggests that this bias may extend beyond language to affect other domains involving real-time action sequencing. In the current study, we sought to test for the presence of the easy-first bias in a creative domain that similarly requires real-time action sequencing: musical improvisation (e.g., Pressing, 1987). Using a corpus of 456 transcribed improvisations by eminent jazz musicians (e.g., Charlie Parker, John Coltrane), we tested for easy-first using multiple definitions of easiness over the phrase and over the corpus: interval frequency, interval size, interval variety, pitch variety, and direction changes. As in language production, our findings suggest that expert improvisers consistently retrieve "easier" melodic sequences before generating more complex and novel sequences, indicating a similarity in the domain-general sequencing biases that facilitate the spontaneous production of music and language.

Video/Supplementary Materials

Miyake, Jan

Implications of thematic reuse in Haydn’s sonata forms

Haydn's approach to form is underserved by current theories, as discussed by Burstein (2016), Duncan (2011), Fillion (2012), Korstvedt (2013), Ludwig (2012), Neuwirth (2011, 2013), and Riley (2015). Comparing Haydn to composers a generation younger (Mozart and Beethoven) instead of to his contemporaries (such as Dittersdorf and Vanhal) distorts what is, and what is not, idiosyncratic about his compositional forms. His inclination to reuse the opening theme later in the movement can impact the melody of his S theme, the path to and through his recapitulations, and the construction of his phrases' middles (Miyake, 2011). This compositional feature, however, often leads to forms that do not fit neatly into the theories of Classical Era form forwarded by Caplin (2001) and Hepokoski and Darcy (2006). The concept of thematic saturation provides a window into how Haydn reuses themes: the quantity and density of thematic saturation measure different aspects of thematic reuse and further our understanding of Haydn's approach to form. This study is part of a larger project that investigates whether patterns of thematic returns are independent of traditional formal designations (sonata form, sonata rondo, ABACA).

Video/Supplementary Materials
Author Manuscript

O’Connor, Rachel, Wu Fu, Puo, & Weiss, Susan

Musical instrument, personality, and interpretation: Music cognition at a United States college-conservatory

Orchestral musicians have a tendency to stereotype one another based on their instruments. While research shows that musicians frequently hold these views of other players (Lipton, 1987), there is less research linking personality traits to the instrument played. In large ensembles, instruments often play 'roles': "basses determine rhythmic pulse," or "oboes' solos necessitate high artistic interpretation." Much of this is determined by training, reception history, and instrumental sound. Our research sought to explore the feasibility of examining both personality traits and interpretation among a small sample of musicians, focusing on a comparison of instrumental groups (strings, brass, woodwinds) as the independent variable. Our pilot study explored two primary questions: first, do musicians who play strings, woodwinds, or brass exhibit different personality traits? Second, do musicians who play these instruments interpret music differently? Our study looked at differences in the ways instrumentalists interpreted three musical examples without markings other than time and key signatures. The 40+ students also took the "Big Five" personality test. Preliminary data revealed that the Big Five scores aligned with stereotypes (e.g., brass scoring lowest on neuroticism and woodwinds low on extraversion but high on neuroticism). Groups also displayed consistent differences in their interpretive approaches to the musical examples.

Video/Supplementary Materials
Author Manuscript

Riggle, Mark

Pleasurable music selects for enhanced music memory, hence music emotions: The evolutionary forces laid bare

We have a phenomenal memory for music and seem highly motivated to remember pleasurable music. Since emotions greatly enhance memory, perhaps music-evoked emotions are responsible for our substantial music memory. This makes music an evolutionary puzzle: the memory it consumes, and the resource-consuming behaviors driven by its strong pleasures, appear to offer no survival benefit. Why should music-evoked emotions, which enhance memory, exist and be so pleasurable? We introduce a new evolutionary framework for music in which an important mechanism has been overlooked: trait elaboration for sensory exploitation of a sensory preference. The theory shows that selection for music developed cognitive neural capacities that are directly reusable for language. Music is not a fitness indicator, but it is attractive, and that attraction is what makes evolution run.

Video/Supplementary Materials
Author Manuscript

Sankhe, Pranav M. & Madan, Ritik

Cortical representations of auditory perception using graph independent component analysis on EEG

Recent studies indicate that the neurons involved in a cognitive task are not confined to one local region but span multiple regions of the human brain. We obtain network components and their locations for the task of listening to music. The recorded EEG data is modeled as a graph, and the overall activity is assumed to be a contribution of several independent subnetworks. To identify the intrinsic cognitive subnetworks corresponding to music perception, we propose decomposing the whole-brain graph network into multiple subnetworks. We perform this decomposition on a group of brain networks using Graph Independent Component Analysis (Graph-ICA), a variant of ICA that decomposes the measured graph into independent source graphs. Having obtained independent subnetworks, we compute electrode positions as the local maxima of these subnetwork matrices. We observe that the locations of the computed electrodes correspond to the temporal lobes and Broca's area, which are indeed involved in auditory processing and perception. The computed electrodes also span the frontal lobe, which is involved in attention and in generating stimulus-evoked responses. The weight of the subnetwork corresponding to these brain regions increases with the tempo of the music recording. The results suggest that whole-brain networks can be decomposed into independent subnetworks and used to analyze cognitive responses to musical stimuli.

Video/Supplementary Materials
Author Manuscript

Siu, Joseph Chi-Sing

Phrase rhythmic norms in Classical expositions: A corpus study of Haydn's and Mozart's piano sonatas

Recent research in phrase rhythm and hypermeter has found that some phrase rhythmic patterns, such as the end-accented "closing-theme schema," appear regularly in certain parts of the Classical sonata exposition. These phrase rhythmic norms can, therefore, be regarded as first-level defaults according to the compositional preference hierarchy in Hepokoski & Darcy's Sonata Theory. However, apart from the closing-theme schema, there has been no systematic study of the phrase rhythmic norms in the other locations of the sonata exposition. This study aims to fill that gap by conducting a corpus analysis of phrase rhythmic usage in all the first-movement piano sonata expositions composed by Haydn and Mozart (52 by Haydn and 19 by Mozart). This corpus study can then inform our understanding of phrase rhythmic default levels in Classical sonata form as well as individual differences in the compositional styles of Haydn and Mozart.

In Haydn's and Mozart's piano sonatas, phrase rhythm in the primary themes is generally regular, while in the secondary themes it is mostly irregular. However, in the transitions, Haydn and Mozart have different first-level defaults: regular phrase rhythm occurs more often in Haydn, while irregular phrase rhythm is the norm in Mozart. When irregular phrase rhythms occur, Haydn's sonatas demonstrate a strong preference for a single loosening device, non-quadruple hypermeasures, while Mozart's sonatas tend also to include metrical reinterpretations and end-accented phrases. This study also reports on the phrase-rhythmic norms at the boundaries of the sonata's formal sections and the hypermetric placements of the MCs, the dominant-locks, and the EECs.

Video/Supplementary Materials

Verosky, Niels & Morgan, Emily

Modeling melodic expectations with expectation networks

Expectation networks have been proposed as a computationally simple method for learning tonal expectations associated with individual scale degrees (Verosky, 2019). Using principles of activation and decay, expectation networks infer the expectation of encountering a given event type followed in the near (but not necessarily immediate) future by any other event type. The current work outlines how these learned expectations can be used to predict melody continuations and tests the predictions against listener responses to a melodic cloze task previously used to compare two other models of melodic expectation, IDyOM and Temperley's Gaussian model (Morgan, Fogel, Nair, & Patel, 2019; Pearce, 2005; Temperley, 2008). Results of multinomial logistic regression indicate that all three models account for unique variance in listener predictions, with coefficient estimates highest for expectation networks. Despite expectation networks' computational simplicity relative to IDyOM, direct comparisons between the two models likewise yielded higher coefficient estimates for expectation networks. Although all three models are limited in their ability to incorporate global, hierarchical information about pitch structure, expectation networks seem to benefit from a tendency to predict all three notes of the tonic triad at cadence points while ranking the tonic as the most probable continuation. Our findings suggest that generalized scale-degree expectations as captured by expectation networks, stereotypical pitch sequences as captured by IDyOM, and immediate intervallic expectations as captured by Temperley's model all factor into real-time listener predictions to varying extents, highlighting several possible areas for future work.
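The activation-and-decay principle described above can be sketched as follows. This is an illustrative reading of the mechanism, not Verosky's implementation; the decay rate and the exact update rule are invented for the example.

```python
# Sketch of learning "near-future" expectations via activation and decay
# (hypothetical parameters; not the published expectation-network model).

def learn_expectations(sequence, decay=0.5):
    """Accumulate an expectation weight from each event type to event types
    that occur in the near (not necessarily immediate) future."""
    weights = {}     # (earlier_event, later_event) -> accumulated expectation
    activation = {}  # event type -> current activation level
    for event in sequence:
        # every still-active event type "expects" the new event,
        # in proportion to its remaining activation
        for prior, level in activation.items():
            weights[(prior, event)] = weights.get((prior, event), 0.0) + level
        # decay old activations, then fully activate the current event
        activation = {e: lvl * decay for e, lvl in activation.items()}
        activation[event] = 1.0
    return weights

w = learn_expectations(["C", "D", "C", "D"])
print(w[("C", "D")])  # 2.0: two immediate C->D transitions, full weight each
print(w[("C", "C")])  # 0.5: C recurs two events later, at decayed weight
```

Because decayed activations still contribute, the learned weights capture non-adjacent dependencies that a plain first-order transition count would miss.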

Video/Supplementary Materials

Warrenburg, Lindsay; Centa, Nathan; Li, Xintong; Park, Hansae; Sari, Diana; & Xie, Feiyu

Sonic intimacy in the music of Billie Eilish and recordings that induce ASMR

This article explores similarities between the music of Billie Eilish and recordings that induce the Autonomous Sensory Meridian Response (ASMR). Two complementary approaches are presented. First, the methodology and preliminary results of an empirical study are reported, investigating people's emotional responses to Eilish's music, mouth-related ASMR sounds (oral wetness cues, whispering, breathing sounds), and non-mouth ASMR sounds (tapping, scratching). Second, a new theory of sonic intimacy is presented that draws on similar electroacoustic techniques in the music of Billie Eilish and Bing Crosby and may account for their popularity during times of stress and isolation.

Video/Supplementary Materials
Author Manuscript
