As COVID-19 continues to ravage communities, telehealth has emerged as a critical way to deliver healthcare while minimizing in-person contact with patients. However, over a phone call or Zoom appointment, it is difficult for doctors to monitor a patient's important vital signs, such as respiration rate or pulse, in real time.
In an effort to advance telehealth, a team of researchers at the University of Washington has developed a method that uses the camera on a person's smartphone or computer to capture their respiration and pulse signals from a real-time video of their face. This state-of-the-art system was presented at the Neural Information Processing Systems conference in December.
The team is now proposing an improved system to measure these physiological signals. The new system is less likely to be tripped up by different cameras, facial features, or lighting conditions. The details of the improved system will be presented at the ACM Conference on Health, Inference, and Learning in April.
Machine learning is quite good at classifying images: show it a series of photos of cats and then ask it to find cats in other images, and it can do so. But for machine learning to be helpful in remote health sensing, the system must identify the region of interest in a video that holds the strongest source of a physiological signal, such as pulse, and then measure that signal over time, the lead author of the study explained.
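To make the idea concrete, here is a minimal sketch (not the authors' published method) of how a pulse could in principle be recovered from a face video: average the pixel brightness over a face region in each frame to get a one-dimensional trace, then find the dominant frequency in the human heart-rate band. The function name, the band limits, and the synthetic input below are all illustrative assumptions.

```python
import numpy as np

def estimate_pulse_bpm(roi_means, fps):
    """Estimate pulse (beats per minute) from a per-frame brightness trace.

    roi_means: 1-D array of mean brightness over a face region, one value per frame.
    fps: video frame rate in Hz.
    """
    signal = roi_means - np.mean(roi_means)            # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict to a plausible heart-rate band: 0.7-4 Hz (42-240 bpm).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0

# Synthetic demo: a 1.2 Hz (72 bpm) pulse riding on noise, "filmed" at 30 fps.
fps = 30.0
t = np.arange(0, 20, 1.0 / fps)                        # 20 seconds of video
rng = np.random.default_rng(0)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(len(t))
print(round(estimate_pulse_bpm(trace, fps)))           # prints 72
```

In a real pipeline, the hard part is everything this sketch skips: locating the face, choosing a region whose brightness actually reflects blood volume changes, and staying robust to motion and lighting, which is where the learned models come in.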
However, every person is physiologically and anatomically different, so the system must adapt quickly to each person's unique physiological signature.
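One way to picture such per-person adaptation (a simplified assumption, not the researchers' actual algorithm) is to start from a generic, population-level model and refine it with a few seconds of calibration data from the new user, for example against a brief pulse-oximeter reading. The toy example below adapts a single scalar gain by gradient descent on the calibration error.

```python
import numpy as np

def adapt_gain(generic_gain, calib_raw, calib_reference, steps=100, lr=0.1):
    """Refine a scalar gain so predictions match a short reference recording.

    generic_gain: population-level starting point (hypothetical).
    calib_raw: raw camera-derived signal for the new user.
    calib_reference: ground-truth signal, e.g. from a pulse oximeter.
    """
    gain = generic_gain
    for _ in range(steps):
        error = gain * calib_raw - calib_reference     # prediction residual
        gain -= lr * np.mean(error * calib_raw)        # gradient step on MSE
    return gain

# Demo: this user's true scaling is 2.0; adaptation recovers it from 30 samples.
rng = np.random.default_rng(0)
calib_raw = rng.standard_normal(30)
calib_reference = 2.0 * calib_raw
adapted = adapt_gain(1.0, calib_raw, calib_reference)  # converges near 2.0
```

A real system would adapt many model parameters rather than one gain, but the principle is the same: a small amount of person-specific data pulls a generic model toward that individual's physiology.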