Check out some of the design, HCI, and psychology research I'm doing at Columbia! On this page I feature papers I've written, research studies I've participated in, and my own ongoing research. 👩🔬
As part of a research group directed by Prof. Lydia Chilton and Prof. Harry West at Columbia University's SEAS, I carried out background research on the use of voice-activated home assistants by designing and conducting a user research study with multiple participants. My findings informed the curriculum for an Advanced Web Design Studio class taught to undergraduate and graduate students and influenced the group's research direction going forward. As part of my study, I analyzed the insights to derive best practices for voice interface design and gave a guest lecture on my results.
Download Lecture

Inspired by thoughts about what our interaction with machines could look like in the future, I looked at application areas where artificially intelligent robots might become commercially available in the near future. With an aging population and a declining birth rate, care robots could become an essential part of elderly care. In my paper, I review which aspects of a care robot are crucial and explore how artificial empathy could be implemented in an assistive care robot. I conclude with design recommendations based on the insights from my literature review.
Download Paper

For my research methods requirement, I submitted a proposal for a small-scale research study on the role of context in the perceived trustworthiness of voice assistants.
Motivated by how closely user acceptance of intelligent voice assistants is tied to their perceived trustworthiness, this study hypothesizes that a voice agent presented in a context framing it as more human will be perceived as more trustworthy. The study measures trustworthiness in a human-context condition and a machine-context condition. A t-test is expected to show a significant difference in perceived trustworthiness between the human-context and non-human-context conditions, which would suggest a significant relationship between a voice agent's context and a subject's reported trustworthiness.
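The planned analysis boils down to an independent-samples t-test on trust ratings from the two conditions. A minimal sketch with SciPy, using made-up 7-point ratings purely for illustration (no real study data):

```python
from scipy import stats

# Hypothetical 7-point trust ratings per condition (illustrative numbers only)
human_context = [6, 5, 7, 6, 5, 6, 7, 5, 6, 6]
machine_context = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]

# Independent-samples t-test comparing mean trust across conditions
t_stat, p_value = stats.ttest_ind(human_context, machine_context)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A positive t-statistic with p below the significance threshold (typically 0.05) would support the hypothesis that the human-framed context increases reported trustworthiness.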
As part of this project, I'm currently using the Librosa library in Python to extract low-level features from the musical emotion stimuli (classical music pieces specifically composed for this study and performed by an orchestra). By doing this, we want to make sure that there are no confounds, such as differing tempo, between the musical pieces. I'm extracting features such as RMS, MFCCs, spectral centroid, and other spectral measurements. My analysis will later serve as input data for the Hidden Markov Model used to predict emotion transitions in the respective cortical regions.
This study will use Hidden Markov Models to test whether the expected brain areas participate in emotion transitions evoked by emotional musical pieces. It aims to answer whether the subjective perception of an emotion (e.g. music making the subject feel joyful or sad) leads to brain activation patterns that are uninfluenced by the order of the emotional contexts in which they are presented.
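To illustrate the core idea of HMM-based emotion-transition decoding, here is a toy Viterbi decoder over two hidden emotion states. Every probability below is made up for illustration; the actual study would fit the model to neural or acoustic feature data rather than hand-set it:

```python
import numpy as np

# Toy HMM: two hidden emotion states and two discrete observation symbols.
# All probabilities are illustrative, not fitted values.
states = ["joyful", "sad"]
start_p = np.array([0.5, 0.5])
trans_p = np.array([[0.8, 0.2],    # P(next state | current = joyful)
                    [0.3, 0.7]])   # P(next state | current = sad)
emit_p = np.array([[0.7, 0.3],     # P(observation | joyful)
                   [0.2, 0.8]])    # P(observation | sad)

def viterbi(obs):
    """Return the most likely hidden emotion sequence for discrete observations."""
    T, n = len(obs), len(states)
    logv = np.log(start_p) + np.log(emit_p[:, obs[0]])
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        scores = logv[:, None] + np.log(trans_p)   # (prev state, next state)
        back[t] = scores.argmax(axis=0)            # best predecessor per state
        logv = scores.max(axis=0) + np.log(emit_p[:, obs[t]])
    path = [int(logv.argmax())]
    for t in range(T - 1, 1 - 1, -1):
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 1, 1, 1]))  # → ['joyful', 'joyful', 'sad', 'sad', 'sad']
```

The decoded state sequence marks where the model places the emotion transition, which is the kind of time-resolved output the study would relate to cortical activation patterns.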