
Dr Tejaswinee Kelkar


Researcher in Music Technology, Data Scientist


contact: tejaswinee.kelkar[at]gmail.com



Research



The Virtual Lab for Music is a web-based application that offers anyone interested in Hindustani music a broad, comprehensive resource: live experiments to test and hone their abilities, repositories to refer to and obtain material from, all presented in a way that enhances a multimodal web experience across many domains within music. In this document, we describe the motivation behind setting up such a lab and the model of integrating experiments, repositories, and semantic connections into a complete learning experience, one that benefits not only learners of Hindustani Classical Music (hereafter HCM) but also serves as a resource for other applications such as music computation and cognition.

Published Papers:
Link to Code
Link to NIME Paper


TrAP (TRace-A-Phrase) is a new interface for generating phrases of Hindustani Classical Music (HCM). In this system, the user traces melodic phrases on a tablet interface to create phrases in a raga. We begin by analyzing tracings drawn by 28 participants and train a classifier to categorize them into one of four melodic categories from the theory of Hindustani music. We then build a model of note transitions, derived from the raga grammar, for the notes in the singable octaves of HCM. Given a new tracing, the system segments it and computes a final phrase that best approximates the tracing.
Link to Code
Link to NIME Paper
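
As a rough illustration of the transition-model idea, here is a minimal Python sketch that walks a first-order note-transition table to produce a phrase. The svaras and probabilities are invented for illustration and are not taken from the actual TrAP implementation:

import random

# Hypothetical first-order transition table for an illustrative raga.
# Keys are svaras; values map each possible next svara to a probability.
TRANSITIONS = {
    "Sa": {"Re": 0.5, "Ga": 0.3, "Ni": 0.2},
    "Re": {"Ga": 0.6, "Sa": 0.4},
    "Ga": {"Ma": 0.5, "Re": 0.3, "Sa": 0.2},
    "Ma": {"Pa": 0.7, "Ga": 0.3},
    "Pa": {"Dha": 0.5, "Ma": 0.3, "Sa": 0.2},
    "Dha": {"Ni": 0.6, "Pa": 0.4},
    "Ni": {"Sa": 0.8, "Dha": 0.2},
}

def generate_phrase(start="Sa", length=8):
    """Walk the transition table to produce a note sequence in the grammar."""
    phrase = [start]
    for _ in range(length - 1):
        options = TRANSITIONS[phrase[-1]]
        notes, weights = zip(*options.items())
        phrase.append(random.choices(notes, weights=weights)[0])
    return phrase

print(" ".join(generate_phrase()))

In the full system, the classifier over the tablet tracing constrains which candidate phrases are kept, so the output best approximates the drawn contour.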

The aim of this dissertation is to understand the role of embodiment in melodic contour perception. In other words, it studies how we move our bodies in response to music. Melodies play an important role in both speech and music. This thesis discusses the theoretical motivations and methods used, along with a collection of four articles, each of which explores a dimension of melodic contour: verticality, motion metaphors, body use, and multi-feature correlational analysis. This brings together the multimodal mappings of pitched sound, the gestural imagery evoked by these sounds, and the defining geometries of these contours. Two sound-tracing experiments were conducted, resulting in three datasets that have been used in the analyses. In the experiments, participants listened to 16 melodies from four different genres: operatic vocalise, jazz scatting, North-Indian singing, and Sámi joik. In the second listening, participants 'drew' the sounds in the air. Infrared motion capture was used to record the participants' body movement, and the analysis focuses primarily on the movement of their hands. The sound analysis is based on signal processing algorithms for pitch detection and methods for contour representation. Cross-correlation of the data is performed using a range of methods, from statistical hypothesis testing to canonical correlation analysis.
Link to Accompanying Website
Link to Thesis
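
The pitch-detection step mentioned above could look roughly like this in Python, assuming librosa and a placeholder audio file; pYIN is one plausible choice of pitch tracker, not necessarily the one used in the thesis:

import librosa
import numpy as np

# Load a melody recording (placeholder filename) and estimate its f0 contour.
y, sr = librosa.load("melody.wav")
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)

# A simple contour representation: z-scored log-frequency over voiced frames.
log_f0 = np.log2(f0[voiced_flag])
contour = (log_f0 - np.mean(log_f0)) / np.std(log_f0)
print(f"{contour.size} voiced frames in the contour")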

Melodic contour, the 'shape' of a melody, is a common way to visualize and remember a musical piece. The purpose of this paper is to explore the building blocks of a future 'gesture-based' melody retrieval system. We present a dataset containing 16 melodic phrases from four musical styles, with a large range of contour variability. This is accompanied by full-body motion capture data of 26 participants performing sound-tracing to the melodies. The dataset is analyzed using canonical correlation analysis (CCA), and its neural network variant (Deep CCA), to understand how melodic contours and sound tracings relate to each other. The analyses reveal non-linear relationships between sound and motion. The link between pitch and verticality does not appear strong enough for complex melodies. We also find that descending melodic contours have the least correlation with tracings.

Published Papers:
Link to Code
Link to Paper
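
A minimal sketch of the (linear) CCA step with scikit-learn, using random stand-in arrays where the real melody and motion features would go; the feature dimensions are made up for illustration:

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
# Stand-ins for per-frame features: e.g. pitch and dynamics on the melody
# side, hand position and velocity on the motion side.
melody_features = rng.standard_normal((500, 4))
motion_features = rng.standard_normal((500, 6))

# Find paired projections that maximize correlation between the two views.
cca = CCA(n_components=2)
melody_c, motion_c = cca.fit_transform(melody_features, motion_features)

for i in range(2):
    r = np.corrcoef(melody_c[:, i], motion_c[:, i])[0, 1]
    print(f"canonical component {i}: r = {r:.2f}")

Deep CCA replaces these linear projections with neural networks, which is what lets it capture the non-linear sound-motion relationships reported above.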


A raag is a melodic structure with grammatical rules for improvised phrases. Raags define tonal relationships between various notes. There are hundreds of raags, each with unique descriptors. In this paper, we visualize the tonal spaces of raags by creating a graph with a force-directed layout, and propose a mapping of colour to this tonal space. We derive the graph for the visualization by parameterizing raags as described in the theory of HCM. We compare a radial layout for these tonal spaces to a colour harmony profile and explain some cross-raag relationships using the methods used to derive colour schemes. We discuss the affective implications and empirical verifications of this model. This model has potential applications in sonification and tone-colour mapping. Such a layout for tonal music is also useful for deriving implicit harmonic relationships. The graphical relationships of raags can be accessed via the website below.
Link to Accompanying Website
Link to Thesis
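
A toy version of the force-directed tonal-space visualization, assuming networkx and matplotlib; the svaras and edge weights here are invented, not taken from an actual raag description:

import networkx as nx
import matplotlib.pyplot as plt

# Toy tonal graph: nodes are svaras, edge weights model transition strength.
edges = [("Sa", "Re", 0.8), ("Re", "Ga", 0.6), ("Ga", "Ma", 0.5),
         ("Ma", "Pa", 0.9), ("Pa", "Dha", 0.4), ("Dha", "Ni", 0.6),
         ("Ni", "Sa", 0.9), ("Sa", "Pa", 0.7)]
G = nx.Graph()
G.add_weighted_edges_from(edges)

# Spring (force-directed) layout: strongly linked notes sit closer together.
pos = nx.spring_layout(G, weight="weight", seed=42)
nx.draw_networkx(G, pos, node_color="lightblue")
plt.axis("off")
plt.show()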

Cross-modal integration is ubiquitous within perception and, in humans, the McGurk effect demonstrates that seeing a person articulating speech can change what we hear into a new auditory percept. It remains unclear whether cross-modal integration of sight and sound generalizes to other visible vocal articulations, like those made by singers. We surmise that perceptual integrative effects should involve music deeply, since there is ample indeterminacy and variability in its auditory signals. We show that switching the videos of sung musical intervals systematically changes the estimated distance between the two notes of an interval: pairing the video of a smaller sung interval with a relatively larger auditory interval led to compression effects on rated intervals, whereas the reverse led to a stretching effect. In addition, after seeing a visually switched video of an equally-tempered sung interval and then hearing the same interval played on the piano, the two intervals were often judged different even though they differed only in instrument. These findings reveal spontaneous cross-modal integration of vocal sounds.


Link to Accompanying Website
Link to Thesis




Last updated: May 2022