Harmonics co-occurrences bootstrap pitch and tonality perception in music: Evidence from a statistical unsupervised learning model
- Source : Proceedings of the Annual Meeting of the Cognitive Science Society; vol. 37, iss. 0
- Publication Year : 2015
Abstract
- The ability to extract meaningful relationships from sequences is crucial to many aspects of perception and cognition, such as speech and music. This paper explores how leading computational techniques may be used to model how humans learn abstract musical relationships, namely tonality and octave equivalence. Rather than hard-coding musical rules, this model uses an unsupervised learning approach to glean tonal relationships from a musical corpus. We develop and test a novel input representation technique, using a perceptually inspired harmonics-based representation, to bootstrap the model's learning of tonal structure. The results are compared with behavioral data from listeners' performance on a standard music perception task: the model effectively encodes tonal relationships from musical data, simulating expert performance on the listening task. Lastly, the results are contrasted with previous findings from a computational model that uses a simpler symbolic input representation of pitch.
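
To illustrate the general idea of a harmonics-based pitch representation of the kind the abstract describes, the sketch below encodes a single pitch as a pitch-class vector whose energy is spread over its first few harmonics, with weights decaying for higher harmonics. This is a hypothetical, minimal illustration; the function name, the number of harmonics, and the decay factor are assumptions for demonstration and are not taken from the authors' implementation.

```python
# Hypothetical sketch (not the authors' published code): encode a pitch as a
# 12-bin pitch-class vector in which the fundamental and its harmonics each
# contribute energy, with amplitude decaying for higher harmonics.
import numpy as np

def harmonic_pitch_vector(midi_pitch, n_harmonics=6, decay=0.8):
    """Return a normalized 12-bin pitch-class vector for a pitch plus harmonics."""
    f0 = 440.0 * 2.0 ** ((midi_pitch - 69) / 12.0)   # fundamental frequency in Hz
    vec = np.zeros(12)
    for h in range(1, n_harmonics + 1):
        freq = h * f0
        # Convert the harmonic's frequency back to a fractional MIDI number,
        # round to the nearest semitone, and fold into a pitch class (0-11).
        midi = 69 + 12 * np.log2(freq / 440.0)
        pc = int(round(midi)) % 12
        vec[pc] += decay ** (h - 1)                   # weaker weight for higher harmonics
    return vec / vec.sum()                            # normalize to a distribution

# Example: C4 (MIDI 60) activates C most strongly, plus G and E from its
# third and fifth harmonics, unlike a purely symbolic one-hot pitch encoding.
print(np.round(harmonic_pitch_vector(60), 3))
```

Because overlapping harmonics make tonally related pitches (e.g., a pitch and its fifth) yield similar vectors, an unsupervised learner trained on such inputs can, in principle, pick up tonal and octave-equivalence structure without it being hard-coded, which is the contrast the abstract draws with a simpler symbolic pitch representation.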
Details
- Database : OAIster
- Journal : Proceedings of the Annual Meeting of the Cognitive Science Society; vol. 37, iss. 0
- Notes : application/pdf
- Publication Type : Electronic Resource
- Accession number : edsoai.on1449588447
- Document Type : Electronic Resource