I am a Senior Scientist at Hyperfine working on human sensing and machine learning. I received my Ph.D. in Computer Science from the Ubicomp Group at Georgia Tech. My thesis was at the intersection of human-computer interaction, machine intelligence, and materials -- self-powered light sensing surfaces that enable implicit activity detection and explicit interactions on everyday surfaces. My research is fueled by novel materials and sensing techniques, intelligent user interfaces, and machine learning.
I have received the ACM IMWUT Distinguished Paper Award (2021) and the CRNCH Ph.D. Research Fellowship (2020). I have also worked as a research intern at Facebook, Disney Research, and Technicolor.
Computational photodetectors use in-sensor computation to extract mid-level vision features in the analog domain, enabling low-power, low-latency ubiquitous sensing applications. We adopt emerging organic semiconductor (OSC) devices to develop privacy-compliant large-scale sensing surfaces for implicit activity detection and explicit user interactions. [Paper]
We harvest energy available in and around the automobile (i.e., wind, light, vibration, and heat) to power intelligent sensors that can be retrofitted to any automobile without wiring. We demonstrated a thermoelectric energy-based parking assistant attached to the exhaust pipe and a wind-powered external pedestrian display anchored to the front bumper of a car. [Paper]
OptoSense is a general-purpose, self-powered sensing system that senses ambient light at the surface level of everyday objects to infer user activities and interactions. We present a design framework for ambient light sensing surfaces that enables implicit activity sensing and explicit interactions across a wide range of use cases, with varying sensing dimensions (0D, 1D, 2D), fields of view (wide, narrow), and perspectives (egocentric, allocentric), supporting applications ranging from object use and indoor traffic detection to liquid sensing and multitouch input. [Video] [Paper]
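To give a flavor of the 2D multitouch case, here is a minimal sketch (not the OptoSense pipeline itself) of how touch locations can be inferred from a grid of ambient-light readings: a touching finger shadows a cell, so cells that darken sharply relative to a calibrated baseline are reported. The grid size, light levels, and drop ratio below are illustrative assumptions.

```python
import numpy as np

def detect_touches(frame, baseline, drop_ratio=0.5):
    """Return (row, col) indices of cells whose light level dropped sharply."""
    shadow = frame < baseline * drop_ratio            # a touching finger casts a shadow
    return [(int(r), int(c)) for r, c in zip(*np.nonzero(shadow))]

baseline = np.full((4, 4), 800.0)                     # calibrated ambient-light levels
frame = baseline.copy()
frame[1, 2] = 150.0                                   # simulated finger shadow
print(detect_touches(frame, baseline))                # -> [(1, 2)]
```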
UbiquiTouch is an ultra-low-power wireless touch interface. With an average power consumption of less than 50 µW, UbiquiTouch can run on energy harvested from ambient light. It encodes touch events on a printable surface and passively communicates with a nearby smartphone using ambient FM backscatter, minimizing the need for additional communication infrastructure. [Video] [Paper]
Serpentine is a self-powered, reversibly deformable cord capable of sensing a variety of human input such as pluck, twirl, stretch, pinch, wiggle, and twist. The sensor operates without an external power source, based on the principle of Triboelectric Nanogenerators (TENG), and can be employed in wearable and playful interfaces. [Video] [Paper]
BlockPrint is a fabrication pipeline that produces high-quality, colored board books with embedded interactivity. Each book page is fabricated with interactive color 3D structures by a commodity paper-based 3D printer, then embedded with a variety of sensing and actuation elements for storytelling, and finally bound into a book as a standalone system requiring no additional digital devices.
Whoosh is an interaction technique that uses non-voice acoustic input, including blows, sip-and-puff, and directional air swipes, to enable low-cost, hands-free, and rapid input on smartwatches. Inspired by the design of musical instruments, we also developed a 3D-printed custom watch case that introduces directional and bezel blows without additional electronics. With this input vocabulary, Whoosh enables real-time, discreet microinteractions on the smartwatch. [Paper]
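For illustration only (this is not the Whoosh implementation): recognizing non-voice acoustic events typically starts from simple per-frame spectral features computed on the watch microphone signal, which a classifier is then trained on. The 16 kHz sampling rate and feature choices below are assumptions.

```python
import numpy as np

def frame_features(frame, sr=16000):
    """Return (energy, spectral centroid) of one short microphone frame."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    energy = float(np.mean(frame ** 2))
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9))
    return energy, centroid

# In a full pipeline, per-frame features like these would feed a classifier
# that separates blows, sips, puffs, and directional air swipes from noise.
```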
I worked in a team to develop projected augmented reality that brings design-studio learning models into STEM classrooms, encouraging creativity and innovation and helping build strong peer learning environments. Students do classwork using an enhanced version of Pythy, a web IDE for Python and Jython, that captures students' work and displays it around the room. We leverage the Microsoft RoomAlive Toolkit to construct a room-scale augmented reality using pairs of projectors and depth cameras. The system "pins" students' work to the walls, where teachers and students can view, interact with, and discuss it. [Poster]
I have been investigating ways to facilitate eyes-free interactions with textile buttons using techniques such as distinct vibration patterns, varied textures, and locking mechanisms. I am also experimenting with different types of textile buttons, including capacitive sensing, resistive sensing, and piezoelectric harvesting buttons.
I worked in a team to develop BeyondTouch, which extends and enriches smartphone input with a wide variety of additional tapping and sliding gestures on the case of, and the surface adjacent to, the smartphone, using only the sensing capabilities already present on a commodity phone. It can be applied to a variety of application scenarios. [Video] [Paper]
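As a rough sketch of the underlying idea (not the BeyondTouch implementation), taps on the phone body show up as short spikes in the built-in accelerometer stream and can be picked out with simple peak detection. The sampling assumptions, threshold, and refractory window below are illustrative.

```python
import numpy as np

def detect_taps(acc, thresh=1.5, refractory=10):
    """acc: (N, 3) accelerometer samples; returns sample indices of detected taps."""
    magnitude = np.linalg.norm(acc, axis=1)
    jerk = np.abs(np.diff(magnitude))                 # sudden change suggests an impact
    taps, last = [], -refractory
    for i, j in enumerate(jerk):
        if j > thresh and i - last >= refractory:     # ignore ringing right after a tap
            taps.append(i)
            last = i
    return taps
```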
I developed assistive technologies that let ALS patients explore their surroundings with wearable technology and a camera-mounted quadcopter. Google Glass creates telepresence by displaying the drone's first-person view and presenting visual stimuli for Steady-State Visually Evoked Potential (SSVEP). OpenBCI, a mobile Brain-Computer Interface, acquires the user's electroencephalogram (EEG) for real-time analysis, so the user's attention to different icons presented on Glass navigates the quadcopter wirelessly. Java, Android Studio, and Matlab were used in the project. [Video]
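A hedged sketch of the SSVEP idea (not the project's Matlab code): each Glass icon flickers at a distinct frequency, and the attended icon is the one whose frequency dominates the spectrum of a short EEG window. The 250 Hz sampling rate and stimulus frequencies below are illustrative assumptions.

```python
import numpy as np

def classify_ssvep(eeg, stim_freqs=(8.0, 10.0, 12.0), sr=250):
    """eeg: 1D window of EEG samples; returns the stimulus frequency being attended."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg))))
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / sr)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
    return stim_freqs[int(np.argmax(powers))]

# The winning frequency maps to a quadcopter command (e.g., forward, left, right).
```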
I worked in a team to develop an unobtrusive wearable noise-canceling system for NASA astronauts onboard the International Space Station (ISS), where life support systems constantly generate very high levels of noise. Together we designed a vest with a 3D-printed adjustable collar integrating circuit boards, speakers, microphones, and power supplies. I implemented anti-noise generation in MAX/MSP, C, and Matlab, which could reduce various kinds of noise by up to 10 dBA SPL. [Poster]
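As a minimal sketch of the feedforward idea behind anti-noise generation (the actual system was implemented in MAX/MSP, C, and Matlab), an LMS adaptive filter can shape a reference microphone signal into anti-noise that drives the residual at the error microphone toward zero. The filter length, step size, and simulated signals below are assumptions for illustration.

```python
import numpy as np

def lms_anc(reference, disturbance, taps=32, mu=0.01):
    """Adapt an FIR filter so the emitted anti-noise cancels the disturbance."""
    w = np.zeros(taps)
    residual = np.zeros(len(reference))
    for n in range(taps - 1, len(reference)):
        x = reference[n - taps + 1:n + 1][::-1]       # most recent reference samples
        anti_noise = w @ x                            # speaker output (anti-noise)
        residual[n] = disturbance[n] - anti_noise     # what the error microphone hears
        w += mu * residual[n] * x                     # LMS update toward smaller residual
    return residual

rng = np.random.default_rng(0)
noise = rng.standard_normal(8000)
print(np.var(lms_anc(noise, noise)[4000:]))           # residual power shrinks as w adapts
```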
To help dressage riders analyze and review their performance, I developed an approach combining wearable technology, data analytics and visualization, and pattern recognition: data are collected from sensors instrumented on the rider and horse; signal processing and visualization techniques reveal insights into the sport; and a machine learning model classifies ten gait classes with an overall accuracy of 97.4%. The development was done with Java, Android Studio, and Bluetooth Low Energy. [Poster]
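A hedged sketch of the classification step (windowed sensor features fed to a standard classifier), not the project's exact pipeline; the window length, feature set, synthetic data, and scikit-learn model below are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(imu, win=100):
    """imu: (N, 3) accelerometer stream; returns per-window mean/std/range features."""
    feats = []
    for start in range(0, len(imu) - win + 1, win):
        w = imu[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.max(0) - w.min(0)]))
    return np.array(feats)

rng = np.random.default_rng(1)
# Toy stand-in for gait data: three synthetic "gaits" with different signal statistics.
X = np.vstack([window_features(rng.normal(g, 1.0, (1000, 3))) for g in range(3)])
y = np.repeat(np.arange(3), len(X) // 3)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))                                # training accuracy on toy data
```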
I worked on Computational Creativity, which investigates ways to make computers generate creative products or use technology to support and enhance human creativity. We developed an artificial intelligence program called Drawing Apprentice, which collaborates with human users as they draw on a digital canvas. The user and the program take turns drawing strokes; the program learns the style of the human user and adds its own creativity. [Paper]