
Technical Program

October 17, 2022 (Australian Eastern Daylight Time, AEDT)

Join Zoom Meeting: https://cuboulder.zoom.us/j/3521084554

Join Slack Channel: https://tinyurl.com/4bz9yvkh

------------//------------

Welcome from Chairs 8:00–8:15

Lecture 1: "Wear-to-compute? Challenges of earable computing for health" 8:15–9:00

Speaker: Prof. Cecilia Mascolo (University of Cambridge, UK)

Abstract

Earable devices are pervasive in our lives, accompanying us in every virtual meeting or while we are running. These devices are, in theory, becoming very good proxies for human behaviour. Yet making the inference from raw sensor data to individuals’ behaviour remains difficult. In this talk I will discuss and highlight the open challenges these technologies still face before they can be trusted as health measurement proxies: namely, the ability to work in the wild, the sensitivity of the data versus the centralisation of computation, and the uncertainty of predictions over the data. I will use examples from my group's ongoing research on on-device machine learning, earable sensing, and uncertainty estimation for health applications, in collaboration with epidemiologists and clinicians.

Speaker Bio

Cecilia Mascolo is the mother of a teenage daughter but also a Full Professor of Mobile Systems in the Department of Computer Science and Technology, University of Cambridge, UK. She is director of the Centre for Mobile, Wearable Systems and Augmented Intelligence. She is also a Fellow of Jesus College Cambridge and the recipient of an ERC Advanced Research Grant. Prior to joining Cambridge in 2008, she was a faculty member in the Department of Computer Science at University College London. She holds a PhD from the University of Bologna. Her research interests are in mobile systems and machine learning for mobile health. She has published in a number of top-tier conferences and journals in the area, and her investigator experience spans projects funded by Research Councils and industry. She has served as a steering, organizing, and programme committee member of mobile and sensor systems, data science, and machine learning conferences. More details at www.cl.cam.ac.uk/users/cm542.

Lecture 2: "Sensing and Stimulation with Earable Devices" 9:00–10:00

Speaker: Prof. Tam Vu (CEO, Earable Inc. and University of Colorado Boulder, USA)

Abstract

This talk discusses earable computers as sensing and actuating devices that are worn inside, behind, around, or on users' ears, sensing many important physiological signals from the brain, eyes, and facial muscles, as well as heart rate, blood pressure, core body temperature, and more. These signals could enable a wide range of applications, from human-computer interaction to health care, attention/focus monitoring, and opioid use reduction. Drawing an analogy from the evolution of mobile and wearable systems, I will discuss the opportunities that earable systems could bring. I will share our experience and lessons learned in realizing such earable systems in the healthcare context, from research to commercialization.

Speaker Bio

Tam Vu is the CEO of Earable Inc., a neuroscience company developing wearables that detect real-time neuro signals to stimulate the brain to improve sleep, focus, and other cognitive functions; he is on leave from the University of Colorado Boulder. He was an associate professor at Oxford University in the UK before leaving to focus on the company. He leads the Mobile and Networked Systems (MNS) Lab, where he and his team conduct systems research on wearable and mobile systems, exploring users' physiological signals and using them to invent new human-computer interaction techniques and healthcare solutions. Together with his teams, his work has resulted in a CES Innovation Award 2023, a MUSE Design Award, a VMark Design Award, a Sloan Fellowship, an NSF CAREER award, two Google Faculty Awards, 12 best paper awards, best paper nominations, and research highlights in flagship venues in mobile systems research. He also actively pushes his research outcomes into practice through technology transfer, with 35 patents filed, and has attracted external investment for two venture-backed companies that he co-founded to commercialize them. One of these companies, Earable Inc., has raised $10M from leading investors to launch its product to the market later this year to help the world sleep better.

Coffee Break 10:00–10:15

Lecture 3: "Leveraging Earables for Unvoiced Speech and Beyond" 10:15–11:00

Speaker: Prof. Shubham Jain (Stony Brook University, USA)

Abstract

Modern environments are alive with sensors, such as smartphones, watches, earphones, and other wearables. Despite this plethora of sensors, however, technologies supporting the disabled population are still scarce, primarily due to the difficulty of directly sensing the desired events in the real world. We believe that earables are the next frontier in passive sensing: they offer a novel interaction modality and can provide a range of services to a diverse population. This talk will explore our research efforts in leveraging the inertial sensors on earables to interpret unvoiced, or silent, speech. Not only is this a privacy-preserving interaction modality, it also holds great promise for those with speech disabilities.

Speaker Bio

Shubham Jain is an Assistant Professor in the Department of Computer Science at Stony Brook University, where she leads the PiCASSo (Pervasive Computing and Smart Sensing) Lab. Her research interests lie in cyber-physical systems, mobile health, and data analytics for wearable sensing. Her work on pedestrian safety has been featured in several media outlets, including The Wall Street Journal. She received her PhD in Electrical & Computer Engineering from Rutgers University in 2017.

Lecture 4: "Revisiting a Few Classical Problems in the Context of Earable Computing" 11:00–12:30

Speakers: Prof. Romit Roy Choudhury and Yu-Lin Wei (University of Illinois Urbana-Champaign, USA)

Abstract

The rapid advances in speech recognition, NLP, and human-centric sensing make the earable platform a launching pad for many new technologies and applications. On the other hand, earables also encapsulate a much higher information density, complexity, and set of constraints compared to other mobile devices. In this talk, we will revisit classical problems under the new challenges brought by the earable computing platform. For instance, how could we estimate multiple angles of arrival (AoAs) in a heavily reverberant environment? How could we separate multiple ambient signals with only a few microphones? How could we locate a user and calibrate the system without help from other infrastructure? This talk will take quick glimpses at classical approaches to each of these topics and then discuss some new ideas (both algorithmic and deep-learning-based) for the special case of earable computing.
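As a concrete reminder of what a "classical approach" to the first of these questions looks like, the Python sketch below estimates a single angle of arrival from two microphones using GCC-PHAT on synthetic data. It is a generic textbook method, not the speakers' technique, and all parameters (microphone spacing, sampling rate, source angle) are illustrative assumptions.

    import numpy as np

    fs = 16_000           # sampling rate (Hz), assumed
    c = 343.0             # speed of sound (m/s)
    d = 0.1715            # microphone spacing (m), roughly ear to ear, assumed
    true_angle = 30.0     # ground-truth AoA (degrees), assumed

    # Synthesize a far-field broadband source arriving at mic 2 with the
    # geometric inter-microphone delay (here exactly 4 samples).
    rng = np.random.default_rng(0)
    n = 4096
    s = rng.standard_normal(n)
    tau = d * np.sin(np.radians(true_angle)) / c          # delay in seconds
    freqs = np.fft.rfftfreq(n, 1 / fs)
    x1 = s
    x2 = np.fft.irfft(np.fft.rfft(s) * np.exp(-2j * np.pi * freqs * tau), n)

    # GCC-PHAT: whiten the cross-spectrum, then find the lag of the peak.
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    cross = X2 * np.conj(X1)
    cc = np.fft.fftshift(np.fft.irfft(cross / (np.abs(cross) + 1e-12), n))
    lag = np.argmax(cc) - n // 2                          # delay in samples
    tau_hat = lag / fs
    angle_hat = np.degrees(np.arcsin(np.clip(tau_hat * c / d, -1.0, 1.0)))
    print(f"estimated AoA: {angle_hat:.1f} degrees (ground truth {true_angle})")

With a single clean source this single-peak estimate works well; in a heavily reverberant room with several ambient sources and only a few microphones, it breaks down, which is exactly the regime the talk revisits.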

Speaker Bio

Yu-Lin Wei (Wally) is a fifth-year Ph.D. student in the Electrical and Computer Engineering department at UIUC. He received his bachelor's and master's degrees from the Department of Computer Science and Information Engineering at National Taiwan University. He received the ECE Rambus fellowship and was selected as a young researcher at the Heidelberg Laureate Forum. His research interests include acoustic signal processing, earable computing, wireless and visible light communication, and indoor positioning.

Romit Roy Choudhury is a Jerry Sanders III AMD Scholar and Professor of ECE and CS at the University of Illinois at Urbana-Champaign (UIUC). He joined UIUC in Fall 2013, prior to which he was an Associate Professor at Duke University. Romit received his PhD from the CS department at UIUC in Fall 2006. His research interests are in applied signal processing, with a focus on audio sensing and inference. For more information, visit Romit's Systems Networking Research Group (SyNRG) at http://synrg.csl.illinois.edu

Lunch Break 12:30–1:15

Lecture 5: "Multimodal and Transfer Learning for Fine-Grained Human Facial Sensing Using Earables" 1:15–2:00

Speaker: Prof. VP Nguyen (University of Texas at Arlington, USA)

Abstract

In this talk, I will discuss an important emerging research topic: developing a socially acceptable earable system that can unobtrusively, continuously, and reliably sense human facial activities at a fine-grained level. Building such a system requires answering many important questions. For example, where do we place the sensors or electrodes? How many sensing modalities, electrodes, or channels are sufficient? How do we select the best combination of sensors, electrodes, and channels for the desired applications? How can we adaptively optimize the contribution of each sensing input? Can we allow sensing modalities to learn from each other to enhance accuracy? Can knowledge be transferred from high-dimensional data to one- or few-dimensional data? This talk will describe the roles of multimodal and transfer learning in addressing these questions. I will then discuss our experiences and lessons learned in developing novel algorithms that allow earables to reliably track 3D facial activities and face-hand interactions. I will close the talk by presenting other opportunities that the proposed techniques could bring to earable research, along with challenges that need to be addressed to unleash their potential.

Speaker Bio

VP Nguyen is an Assistant Professor of Computer Science and Engineering at the University of Texas at Arlington, where he directs the Wireless and Sensor Systems Laboratory (WSSL). His lab focuses on building low-power, battery-free, and intelligent cyber-physical systems for smart health, precision agriculture, 5G/6G communication, and UAV security and privacy. He is the recipient of a SONY Faculty Innovation Award 2021, a University of Texas System Rising Stars Award, a CACM Research Highlight 2021, ACM SIGMOBILE Research Highlights in 2017, 2020, and 2022, the Best Paper Award at ACM MobiCom 2019, the Best Poster Award at IPSN 2022, the Best Paper Runner-up Award at ACM SenSys 2018, a Best Paper Nomination at ACM SenSys 2017, and Best Paper Awards at ACM MobiCom-S3 2016-2017. He obtained his Ph.D. in Computer Science from the University of Colorado Boulder.

Lecture 6: "Robust Sensing and Efficient Computing for Intelligent Biosignal-based Earables" 2:00–2:45

Speaker: Nhat Pham (University of Oxford, UK)

Abstract

Earables have great potential for enabling various healthcare applications due to their close proximity to multiple sources of critical head-based biosignals such as EEG, EOG, EMG, and EDA, originating from the brain, eyes, facial muscles, and sweat glands, respectively. However, there are several fundamental challenges to realizing this potential, such as (1) how we can capture biosignals reliably in noisy environments and (2) how we can optimize the system to ensure high-fidelity signals while remaining energy-efficient for prolonged usage. This talk will discuss our recent research efforts in hardware and software techniques to overcome these challenges. We will also take a glimpse at potential use cases such as microsleep detection and long-term epileptic seizure monitoring.

Speaker Bio

Nhat (Nick) Pham is a final-year Ph.D. candidate at the Department of Computer Science, University of Oxford. His Ph.D. study is supervised by Prof. Niki Trigoni, Prof. Andrew Markham, and Prof. Tam Vu, and is funded by a University of Oxford scholarship. He received his M.Sc. in Computer Science from the Korea Advanced Institute of Science and Technology (KAIST) in 2018 and his B.Eng. in Computer Engineering from Vietnam National University - Ho Chi Minh City University of Technology (HCMUT) in 2015. He is the recipient of a CACM Research Highlight 2021, an ACM SIGMOBILE Research Highlight 2020, an ACM GetMobile Research Highlight 2019, the Best Paper Award at ACM MobiCom 2019, a KAIST Graduate Scholarship, the HCMUT Silver Graduation Medal, and an Odon Vallet Scholarship. His current research interests include intelligent wearables for healthcare and well-being, edge AI and on-chip computing, and quantum-based human sensing.

Lecture 7: "Programming Earable BrainBand Made Easy" 2:45–4:30

Speakers: Galen Pogoncheff (galen@earable.ai) and Vinh Nguyen (vinhnx@earable.ai)

Abstract

This talk presents the Earable BrainBand (Frenz) from a researcher's perspective, together with a live-coding demo. We present the technical details of the BrainBand, the data it provides, how it can be used in research, the current set of available features, and more. We then give a live implementation of three simple demos that highlight ideas relevant to researchers. In this live-coding session, we will show a basic setup for connecting to a band over Bluetooth, starting data streaming, preprocessing the raw signals, and visualizing them in real time. Since many prospective researchers want to collect signal data directly from the device, this is an important feature of our SDK to show. We will then expand upon this code, demonstrating facial gesture detection and focus tracking. Each of these features will be accompanied by a visual that keeps the audience engaged.
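The BrainBand SDK's actual API is not reproduced here, and none of the names below come from it. As a rough, hypothetical sketch of the stream-preprocess-visualize loop the demo walks through, the Python snippet below substitutes a synthetic signal generator (read_chunk) for the device's Bluetooth stream; the sampling rate, chunk size, and 1-40 Hz band-pass filter are illustrative assumptions.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.signal import butter, lfilter

    FS = 250                       # assumed sampling rate (Hz)
    CHUNK = 50                     # samples delivered per streaming read, assumed
    WINDOW = 5 * FS                # show the last 5 seconds

    def read_chunk(rng=np.random.default_rng(0)):
        """Stand-in for the device stream: returns CHUNK synthetic EEG-like samples."""
        t0 = read_chunk.t
        read_chunk.t += CHUNK / FS
        x = t0 + np.arange(CHUNK) / FS
        return 20 * np.sin(2 * np.pi * 10 * x) + 5 * rng.standard_normal(CHUNK)
    read_chunk.t = 0.0

    # Example preprocessing step: 1-40 Hz band-pass, applied chunk by chunk
    # with filter state carried between chunks.
    b, a = butter(4, [1, 40], btype="band", fs=FS)
    zi = np.zeros(max(len(a), len(b)) - 1)

    buffer = np.zeros(WINDOW)
    plt.ion()
    fig, ax = plt.subplots()
    (line,) = ax.plot(np.arange(WINDOW) / FS, buffer)
    ax.set_xlabel("time (s)")
    ax.set_ylabel("amplitude (a.u.)")

    for _ in range(100):                     # ~20 s of simulated streaming
        raw = read_chunk()
        filtered, zi = lfilter(b, a, raw, zi=zi)
        buffer = np.roll(buffer, -CHUNK)     # scrolling display buffer
        buffer[-CHUNK:] = filtered
        line.set_ydata(buffer)
        ax.relim()
        ax.autoscale_view()
        plt.pause(CHUNK / FS)

In the live demo, read_chunk would be replaced by the SDK's Bluetooth streaming call, and the same filtering-and-plotting loop would run on the real signal channels.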

Speaker Bio

Galen Pogoncheff and Vinh Nguyen are data scientists at Earable Inc.
