Facial Landmark Data Collection
An Optimization to Train Facial Emotion Detection Learning Models
This research addresses a modern challenge in virtual performance delivery: typical audience feedback is not available to performers in an online setting. The team therefore developed software to collect emotional data from online audiences during live performances.
Rather than relying on conventional approaches, the researchers built a system to capture facial expressions and corresponding physical reactions (such as applause or disapproval) in real time. The goal was to establish whether a relationship exists between the face mesh data collected across frames and these reactionary motions, a connection that might not be immediately apparent through standard video observation.
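As a minimal sketch of what per-frame face mesh capture could look like, the snippet below uses MediaPipe's FaceMesh solution; the write-up does not name the library actually used, and the webcam source, single-face limit, and frame cap here are illustrative assumptions:

```python
# Sketch: per-frame face mesh capture from a webcam feed.
# Assumes MediaPipe's FaceMesh solution; the project write-up does not
# specify which face-mesh library was actually used.
import cv2
import mediapipe as mp

def capture_landmark_frames(max_frames=300):
    """Yield one list of normalized (x, y, z) landmark tuples per frame."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(
        max_num_faces=1,        # one audience member per camera (assumption)
        refine_landmarks=True,  # adds iris landmarks (478 points total)
        min_detection_confidence=0.5,
    )
    cap = cv2.VideoCapture(0)   # default webcam
    frames = 0
    try:
        while cap.isOpened() and frames < max_frames:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures frames as BGR.
            results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_face_landmarks:
                mesh = results.multi_face_landmarks[0].landmark
                yield [(lm.x, lm.y, lm.z) for lm in mesh]
            frames += 1
    finally:
        cap.release()
        face_mesh.close()
```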
The project produced a database for training machine learning models that could eventually give performers authentic audience feedback during virtual events.
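One way such a database could be assembled is to pair each captured landmark frame with the reaction observed at that moment. The CSV schema below (a flattened landmark row plus a label column) is a hypothetical illustration, not the project's actual format:

```python
# Sketch: appending one (landmarks, reaction label) training example to a
# CSV dataset. The schema is hypothetical, not the project's actual format.
import csv
from itertools import chain

def append_example(path, landmarks, reaction_label):
    """landmarks: list of (x, y, z) tuples; reaction_label: e.g. 'applause'."""
    row = list(chain.from_iterable(landmarks)) + [reaction_label]
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(row)

# Hypothetical usage: label every captured frame with the concurrent
# reaction, building a supervised training set frame by frame.
# for mesh in capture_landmark_frames():
#     append_example("audience_dataset.csv", mesh, "applause")
```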
Contributors: Roshan Prabhakar, Leevi Symister, Kaiser Williams, with supervision from Professor Tsachy Weissman and Dr. Shubham Chandak.