WSSL (pronounced "Whistle") is a research lab at UMass Amherst exploring novel wireless, mobile, and wearable systems for healthcare and environmental monitoring.

Our research interests and topics include:

Internet of Medical Things

Internet of Living Things

Internet of Flying Things

Quantum Computing

Lab News


 Active Projects (will be updated soon)

PROS: an Efficient Pattern-Driven Compressive Sensing Framework for Low-Power Biopotential-based Wearables with On-chip Intelligence
ACM MobiCom 2022.
Nhat Pham, Hong Jia, Minh Tran, Tuan Dinh, Nam Bui, Young Kwon, Dong Ma, Phuc Nguyen, Cecilia Mascolo, and Tam Vu

Abstract

This study proposes PROS, an efficient pattern-driven compressive sensing framework for low-power biopotential-based wearables. PROS eliminates the conventional trade-off between signal quality, response time, and power consumption by introducing tiny pattern recognition primitives and a pattern-driven compressive sensing technique that exploits the sparsity of biosignals. Specifically, we (i) develop tiny machine learning models to eliminate irrelevant biosignal patterns, (ii) efficiently perform compressive sampling of relevant biosignals with appropriate sparse wavelet domains, and (iii) optimize hardware and OS operations to push processing efficiency. PROS also provides an abstraction layer, so the application only needs to care about detected relevant biosignal patterns without knowing the optimizations underneath.
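The core gate-then-compress idea can be sketched in a few lines of Python. This is an illustrative sketch, not the PROS implementation: the window size, random sensing matrix, and energy-based gate below are assumptions standing in for the paper's tiny ML models and wavelet-domain sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

WINDOW = 256          # samples per biosignal window
M = 64                # compressed measurements (4x reduction)
PHI = rng.standard_normal((M, WINDOW)) / np.sqrt(M)   # random sensing matrix

def tiny_pattern_gate(window, energy_thresh=0.25):
    """Stand-in for an on-chip pattern-recognition primitive:
    only pass a window on if it looks like a relevant pattern."""
    return np.mean(window ** 2) > energy_thresh

def process_window(window):
    """Gate first; compressively sample only the relevant windows."""
    if not tiny_pattern_gate(window):
        return None                      # irrelevant: transmit nothing
    return PHI @ window                  # relevant: M measurements, not WINDOW

# Toy stream: nine near-flat noise windows, one burst-like "relevant" window.
stream = [rng.normal(0, 0.1, WINDOW) for _ in range(9)]
stream.append(np.sin(np.linspace(0, 20 * np.pi, WINDOW)) + rng.normal(0, 0.1, WINDOW))

sent = [y for y in map(process_window, stream) if y is not None]
print(f"transmitted {len(sent)} of {len(stream)} windows, {M}/{WINDOW} samples each")
```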


IoTree: A Battery-free Wearable System with Biocompatible Sensors for Continuous Tree Health Monitoring
ACM MobiCom 2022.
Tuan Dang, Trung Tran, Khang Nguyen, Tien Pham, Nhat Pham, Tam Vu, and Phuc Nguyen

Abstract

We present a low-maintenance, wind-powered, battery-free, biocompatible, tree-wearable, and intelligent sensing system, namely IoTree, to monitor water and nutrient levels inside a living tree. The IoTree system includes tiny, biocompatible, implantable sensors that continuously measure impedance variations inside the living tree's xylem, where water and nutrients are transported from the roots to the upper parts. The collected data are then compressed and transmitted to a base station located up to 1.8 kilometers (approximately 1.1 miles) away. The entire IoTree system is powered by wind energy and controlled by an adaptive computing technique called block-based intermittent computing, which ensures forward progress and data consistency under intermittent power and allows the firmware to execute with optimal memory and energy usage. We prototype IoTree, which opportunistically performs sensing, data compression, and long-range communication tasks without batteries. In in-lab experiments, IoTree achieves accuracies of 91.08% and 90.51% in measuring 10 levels of the nutrients NH3 and K2O, respectively. When tested with Burkwood Viburnum and White Bird trees in an indoor environment, IoTree data strongly correlated with multiple watering and fertilizing events. We also deployed IoTree on a grapevine farm for 30 days, where the system provided sufficient measurements every day.
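The block-based intermittent computing idea can be illustrated with a small simulation. The sketch below is not IoTree's firmware; the three-block pipeline, the JSON file standing in for nonvolatile memory, and the random power failures are all illustrative assumptions. It shows the key property: once a block commits, progress survives a power loss and execution resumes at the next block.

```python
import json, os, random

NVM_PATH = "nvm.json"   # stands in for nonvolatile memory (e.g., FRAM)

def nvm_load():
    """Resume point: last committed block index plus task state."""
    if os.path.exists(NVM_PATH):
        with open(NVM_PATH) as f:
            return json.load(f)
    return {"next_block": 0, "state": {}}

def nvm_commit(snapshot):
    # Write-then-rename keeps the checkpoint consistent even if power
    # dies in the middle of the write.
    tmp = NVM_PATH + ".tmp"
    with open(tmp, "w") as f:
        json.dump(snapshot, f)
    os.replace(tmp, NVM_PATH)

def sense(state):    state["raw"] = [random.random() for _ in range(8)]
def compress(state): state["packet"] = sum(state["raw"]) / len(state["raw"])
def transmit(state): print("tx:", round(state["packet"], 3))

BLOCKS = [sense, compress, transmit]

def run_until_power_loss(fail_prob=0.3):
    snap = nvm_load()                    # pick up where the last boot stopped
    while snap["next_block"] < len(BLOCKS):
        if random.random() < fail_prob:  # simulated power failure
            print("power lost before block", snap["next_block"])
            return False                 # only uncommitted work is redone
        BLOCKS[snap["next_block"]](snap["state"])
        snap["next_block"] += 1
        nvm_commit(snap)                 # forward progress is now durable
    os.remove(NVM_PATH)                  # pipeline finished; clear checkpoint
    return True

while not run_until_power_loss():
    pass                                 # each "reboot" resumes mid-pipeline
```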


MuteIt: Jaw Motion based Unvoiced Command Recognition using Earable
ACM UbiComp/IMWUT 2022.
Tanmay Srivastava, Prerna Khanna, Shijia Pan, Phuc Nguyen, Shubham Jain

Abstract

We present MuteIt, an ear-worn system for recognizing unvoiced human commands. MuteIt presents an intuitive alternative to voice-based interactions, which can be unreliable in noisy environments, disruptive to those around us, and a risk to our privacy. We propose a twin-IMU setup to track the user's jaw motion and cancel motion artifacts caused by head and body movements. MuteIt processes jaw motion during word articulation to break each word signal into its constituent syllables, and further each syllable into phonemes (vowels, visemes, and plosives). Recognizing unvoiced commands by tracking jaw motion alone is challenging: as a secondary articulator, the jaw does not move distinctively enough for unvoiced speech recognition. MuteIt combines IMU data with the anatomy of jaw movement as well as principles from linguistics to model word recognition as an estimation problem. Rather than employing machine learning to train a word classifier, we reconstruct each word as a sequence of phonemes using a bi-directional particle filter, enabling the system to be easily scaled to a large set of words.
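As a rough illustration of phoneme-sequence estimation with a particle filter, the Python sketch below runs a forward-only (not bi-directional) filter over a one-dimensional "jaw opening" feature; the phoneme templates, noise levels, and observation model are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy phoneme inventory with 1-D "jaw opening" template features.
TEMPLATES = {"m": 0.0, "iy": 0.2, "p": 0.6, "aa": 1.0, "s": 1.4}
PHONEMES = list(TEMPLATES)
N = 500  # particles per segment

def filter_segment(obs, particles, sigma=0.15):
    """One forward step: weight phoneme hypotheses by how well their
    templates explain the observed feature, then resample."""
    w = np.array([np.exp(-(obs - TEMPLATES[p]) ** 2 / (2 * sigma ** 2))
                  for p in particles])
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)
    return [particles[i] for i in idx]

# Noisy observations for the word "map" -> segments m, aa, p.
decoded = []
for true_phoneme in ["m", "aa", "p"]:
    obs = TEMPLATES[true_phoneme] + rng.normal(0, 0.05)
    particles = filter_segment(obs, list(rng.choice(PHONEMES, N)))
    decoded.append(max(set(particles), key=particles.count))  # per-segment MAP
print("decoded:", decoded)   # typically ['m', 'aa', 'p']
```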


FaceSense: Sensing Face Touch with an Ear-worn System
ACM MobiCom 2021.
Vimal Kakaraparthi, Qijia Shao, Charles J. Carver, Tien Pham, Nam Bui, Phuc Nguyen, Xia Zhou, Tam Vu

Abstract

Face touch is an unconscious human habit. Frequent touching of sensitive/mucosal facial zones (eyes, nose, and mouth) increases health risks by passing pathogens into the body and spreading diseases. Furthermore, accurate monitoring of face touch is critical for behavioral intervention. Existing monitoring systems only capture objects approaching the face rather than detecting actual touches, so they are prone to false positives when a hand or object moves near the face (e.g., picking up a phone). We present FaceSense, an ear-worn system capable of identifying actual touches and differentiating touches on sensitive/mucosal areas from those on other facial areas. Following a multimodal approach, FaceSense integrates low-resolution thermal images and physiological signals. Thermal sensors sense the thermal infrared signal emitted by an approaching hand, while physiological sensors monitor impedance changes caused by skin deformation during a touch. Processed thermal and physiological signals are fed into a deep learning model (TouchNet) to detect touches and identify the facial zone of the touch.
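TouchNet's exact architecture is not described here, but a minimal two-branch fusion model conveys the idea: one branch encodes a low-resolution thermal frame, the other an impedance window, and a shared head predicts touch presence and facial zone. All layer sizes in this PyTorch sketch are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FusionSketch(nn.Module):
    """Two-branch fusion in the spirit of TouchNet (layer sizes invented):
    a CNN for low-resolution thermal frames and an MLP for the impedance
    window, fused into touch-detection and facial-zone outputs."""
    def __init__(self, n_zones=3):
        super().__init__()
        self.thermal = nn.Sequential(               # 1x8x8 thermal frame
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())  # -> 8*4*4 = 128 features
        self.physio = nn.Sequential(                # 64-sample impedance window
            nn.Linear(64, 32), nn.ReLU())
        self.head = nn.Linear(128 + 32, 1 + n_zones)

    def forward(self, thermal_frame, impedance):
        z = torch.cat([self.thermal(thermal_frame),
                       self.physio(impedance)], dim=-1)
        out = self.head(z)
        return torch.sigmoid(out[:, :1]), out[:, 1:]  # P(touch), zone logits

net = FusionSketch()
p_touch, zone_logits = net(torch.randn(2, 1, 8, 8), torch.randn(2, 64))
print(p_touch.shape, zone_logits.shape)   # torch.Size([2, 1]) torch.Size([2, 3])
```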


BioFace-3D: Continuous 3D Facial Reconstruction Through Lightweight Single-ear Biosensors
ACM MobiCom 2021.
Yi Wu, Vimal Kakaraparthi, Zhuohang Li, Tien Pham, Jian Liu, Phuc Nguyen

Abstract

We propose the first single-earpiece lightweight biosensing system, BioFace-3D, that can unobtrusively, continuously, and reliably sense entire facial movements, track 2D facial landmarks, and further render 3D facial animations. Our single-earpiece biosensing system takes advantage of a cross-modal transfer learning model to transfer the knowledge embodied in a high-grade visual facial landmark detection model to the low-grade biosignal domain. After training, BioFace-3D can directly perform continuous 3D facial reconstruction from the biosignals, without any visual input. By removing the need for a camera positioned in front of the user, this paradigm shift from visual sensing to biosensing introduces new opportunities in many emerging mobile and IoT applications.
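The cross-modal transfer recipe, in caricature: while a camera is available, a pretrained visual landmark detector labels synchronized biosignal windows, and a student network learns to regress those landmarks from biosignals alone. The PyTorch sketch below uses random tensors and a dummy teacher purely for illustration; it is not the BioFace-3D model.

```python
import torch
import torch.nn as nn

N_LANDMARKS = 68
student = nn.Sequential(                 # biosignal window -> 2D landmarks
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, N_LANDMARKS * 2))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def teacher_landmarks(frames):
    """Stand-in for a pretrained visual landmark detector; a real system
    would run the camera frames through a face-alignment network."""
    return torch.randn(frames.shape[0], N_LANDMARKS * 2)

# Training phase: camera and biosignals are captured simultaneously,
# and the teacher's landmark predictions supervise the student.
for step in range(50):
    frames = torch.randn(16, 3, 64, 64)   # synchronized video (dummy)
    biosig = torch.randn(16, 256)         # synchronized biosignals (dummy)
    loss = nn.functional.mse_loss(student(biosig), teacher_landmarks(frames))
    opt.zero_grad(); loss.backward(); opt.step()

# Deployment phase: no camera; landmarks come from biosignals alone.
landmarks = student(torch.randn(1, 256)).view(N_LANDMARKS, 2)
print(landmarks.shape)
```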


DroneScale: Drone Load Estimation via Remote Passive RF Sensing
ACM MobiCom 2021.
Phuc Nguyen, Vimal Kakaraparthi, Nam Bui, Nikshep Umamahesh, Nhat Pham, Hoang Truong, Yeswanth Guddeti, Dinesh Bharadia, Richard Han, Eric Frew, Daniel Massey, and Tam Vu

Abstract

Drones have carried weapons, drugs, explosives, and illegal packages in the recent past, raising strong concerns from public authorities. While existing drone monitoring systems focus only on detecting drone presence, or localizing or fingerprinting the drone, there is no solution for estimating the additional load carried by a drone. In this paper, we present a novel passive RF system, namely DroneScale, to monitor the wireless signals transmitted by commercial drones and then confirm their models and loads. Our key technical contribution is a technique to passively capture vibration at high resolution (i.e., 1 Hz vibration resolution) from afar, which was not possible before. We prototype DroneScale using COTS RF components and illustrate that it can monitor the body vibration of a drone at the targeted resolution. In addition, we develop learning algorithms to extract the physical vibration of the drone from the transmitted signal to infer the drone's model and the load it carries. We evaluate the DroneScale system using 5 different drone models, which carry external loads of up to 400 g.
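To see how body vibration could be read out of a transmitted RF signal, consider the toy model below: vibration weakly amplitude-modulates the signal's envelope, so a long capture plus an FFT reveals the vibration frequency (10 s of envelope gives 0.1 Hz bins here). The modulation model and all numbers are illustrative assumptions, not DroneScale's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
FS = 1000                   # envelope sample rate (Hz) after downconversion
T = 10                      # seconds of capture -> 0.1 Hz FFT resolution

def received_envelope(vibration_hz, t):
    """Toy model: body vibration weakly amplitude-modulates the drone's
    transmitted signal; a real capture would come from an SDR."""
    return 1 + 0.05 * np.sin(2 * np.pi * vibration_hz * t) \
             + rng.normal(0, 0.01, t.size)

t = np.arange(0, T, 1 / FS)
env = received_envelope(vibration_hz=143.3, t=t)

# The vibration shows up as a spectral peak in the demodulated envelope.
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, 1 / FS)
print(f"estimated body vibration: {freqs[np.argmax(spec)]:.1f} Hz")  # ~143.3
```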


Painometry: Wearable and Objective Quantification System for Acute Postoperative Pain
ACM MobiSys 2020 (34 out of 175 submissions, acceptance ratio: 19.4%).
H. Truong, N. Bui, Z. Raghebi, M. Ceko, N. Pham, P. Nguyen, A. Nguyen, T. Kim, K. Siegfried, E. Stene, T. Tvrdy, L. Weinman, T. Payne, D. Burke, T. Dinh, S. D’Mello, F. Banaei-Kashani, T. Wager, P. Goldstein, and T. Vu

Abstract

This paper explores a wearable system, named Painometry, which objectively quantifies users' pain perception based on multiple physiological signals and facial expressions of pain. We propose a sensing technique, called sweep impedance profiling (SIP), to capture the movement of the facial muscle corrugator supercilii, one of the important physiological expressions of pain. We deploy SIP together with other biosignals, including electroencephalography (EEG), photoplethysmogram (PPG), and galvanic skin response (GSR), for pain quantification.
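A minimal sketch of the sweep idea behind SIP: measure impedance magnitude across a frequency sweep and compare profiles before and after muscle contraction. The series-R plus parallel-RC skin model and all component values below are textbook-style assumptions, not parameters from the paper.

```python
import numpy as np

FREQS = np.logspace(2, 5, 30)            # 100 Hz .. 100 kHz sweep

def skin_impedance(freqs, r_s=500.0, r_p=20e3, c_p=50e-9):
    """Series resistance plus a parallel RC: a simple electrode/skin model."""
    w = 2 * np.pi * freqs
    return r_s + r_p / (1 + 1j * w * r_p * c_p)

# Muscle contraction deforms the tissue; model it as a drop in r_p.
relaxed = np.abs(skin_impedance(FREQS))
contracted = np.abs(skin_impedance(FREQS, r_p=14e3))
delta = np.abs(contracted - relaxed)
print(f"largest profile shift at {FREQS[np.argmax(delta)]:.0f} Hz "
      f"({delta.max():.0f} ohm)")
```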


WAKE: A Behind-the-ear Wearable System for Microsleep Detection
ACM MobiSys 2020 (34 out of 175 submissions, acceptance ratio: 19.4%).
N. Pham, T. Dinh, Z. Raghebi, T. Kim, N. Bui, P. Nguyen, H. Truong, F. Banaei-Kashani, A. Halbower, T. Dinh, and T. Vu

Abstract

We propose a novel behind-the-ear wearable device for microsleep detection, called WAKE. WAKE detects microsleep by monitoring biosignals from the brain, eye movements, facial muscle contractions, and sweat gland activities from behind the user's ears. In particular, we introduce a Three-fold Cascaded Amplifying (3CA) technique to tame motion artifacts and environmental noise while capturing high-fidelity signals. The behind-the-ear form factor is motivated by the fact that bone-conduction headphones, which are worn around the ear, are becoming widely used. This technology trend gives us an opportunity to enable a wide range of cognitive monitoring and improvement applications by integrating more sensing and actuating functionality into earphones, making them smarter.
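The motivation for cascading amplification can be simulated digitally: a microvolt-level biosignal rides on a millivolt-scale motion artifact, so applying the full gain in one stage would saturate the amplifier, while splitting the gain into stages with a high-pass filter in front of each keeps every stage within its rails. The first-order filter, gains, and signal levels below are illustrative; 3CA itself is an analog front-end technique.

```python
import numpy as np

FS = 250                                        # sample rate (Hz)
t = np.arange(0, 4, 1 / FS)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t)        # ~20 uV alpha-band signal
artifact = 5e-3 * np.sin(2 * np.pi * 0.3 * t)   # large, slow motion drift
x = eeg + artifact

def stage(sig, gain, alpha=0.99, rail=3.3):
    """One amplification stage with a first-order high-pass in front, so
    slow drift is removed before it can saturate the gain that follows."""
    hp = np.empty_like(sig)
    prev_x = prev_y = 0.0
    for i, v in enumerate(sig):
        prev_y = alpha * (prev_y + v - prev_x)
        prev_x = v
        hp[i] = prev_y
    return np.clip(gain * hp, -rail, rail)       # rail-limited amplifier

y = x
for g in (10, 10, 10):       # three cascaded stages: 1000x total gain
    y = stage(y, g)
# One 1000x stage would clip: the raw artifact alone maps to +/-5 V.
print(f"output swing: {y.max() - y.min():.2f} V (within +/-3.3 V rails)")
```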


Please check our Publications page for projects before 2020.


Sponsors