HCI Research
Building on the foundations of my Master's in HCI, I have worked on and published research projects across modalities such as haptics, visuals, and audio, with a focus on accessibility and inclusive design for wearables and VR interfaces.
Jul '23 - Apr '24
We developed SoundHapticVR, a head-mounted system that translates a wide range of spatial audio sources into distinct multi-channel haptic signals on the head, enabling Deaf and Hard-of-Hearing users to perceive sound source location and identity in VR environments.
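As a rough illustration of the idea (not the published SoundHapticVR design), a spatial sound source can be mapped to per-actuator vibration intensities on a head-worn ring of motors; the actuator layout, names, and cosine-gain rule below are all illustrative assumptions.

```python
import math

# Assumed ring of four head-mounted actuators, by azimuth in degrees.
ACTUATORS = {"front": 0, "right": 90, "back": 180, "left": 270}

def haptic_gains(source_azimuth_deg, loudness=1.0):
    """Return per-actuator vibration intensities in [0, loudness]."""
    gains = {}
    for name, az in ACTUATORS.items():
        diff = math.radians(source_azimuth_deg - az)
        # Cosine panning: actuators facing the source vibrate strongest,
        # actuators facing away stay silent.
        gains[name] = loudness * max(0.0, math.cos(diff))
    return gains

# e.g. a source at 45° azimuth excites the front and right actuators equally.
```

With more actuators the same gain rule spreads the signal over finer spatial resolution, which is one way distinct source locations can stay distinguishable on the skin.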
Apr - Jun '23
We presented an experimental study that investigates perceptual errors in audio externalization, examines the impact of personalized HRTFs on spatial audio rendering accuracy, and supports rapid prototyping and user evaluation in realistic listening scenarios.
Apr '23
We presented Haptic-Captioning, a wrist-worn system that translates real-time audio into vibrotactile feedback to complement captions, enhancing speaker identification, emotion perception, and engagement for Deaf and Hard-of-Hearing users in multi-speaker media scenarios.
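The audio-to-vibration step can be sketched as an amplitude envelope follower; the frame-based RMS mapping and smoothing factor below are illustrative assumptions, not the published Haptic-Captioning signal chain.

```python
import math

def vibration_level(samples, prev_level=0.0, smoothing=0.8):
    """Map one frame of audio samples (floats in [-1, 1]) to a wrist-motor
    intensity in [0, 1], smoothed across frames to avoid jittery buzzing."""
    if not samples:
        return smoothing * prev_level
    # RMS of the frame approximates perceived loudness.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    level = min(1.0, rms)
    # Exponential smoothing keeps the motor from switching on/off abruptly.
    return smoothing * prev_level + (1 - smoothing) * level
```

Running this per frame yields a continuous intensity stream that rises with the active speaker's voice, which is the kind of cue that can help with speaker identification alongside captions.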
Nov '24
HealthBench is an evaluation benchmark for AI in healthcare that tests models in realistic clinical scenarios. Built with input from 250+ physicians, it aims to provide a shared standard for measuring AI capabilities in medical contexts.
Sep '23
We presented a two-part study that evaluates the usability of commercial indoor navigation apps from the perspective of Blind and Visually Impaired users, identifies key interface gaps between research and real-world deployment, and offers user-centered design recommendations for improving accessibility and wayfinding support.