
PhD Defence Muhammad Shoaib

Sitting is the new smoking: online complex human activity recognition with smartphones and wearables

Human activity recognition plays an important role in the development of applications for fitness tracking, health monitoring, context-aware feedback, and self-management of a smartphone or wearable device. For example, users can monitor their activities in real-time as well as their history over longer periods of time. Health professionals can monitor the daily routine (activities) of patients and detect deviations from that routine. Activity recognition can also be used to give feedback at the right time: the device can, for instance, disable incoming calls while the user is jogging or in a meeting. In terms of self-management, the device can dynamically turn various sensors, data features, and classifiers on and off depending on the current activity in order to save resources. For example, a smartphone or smartwatch can turn off the gyroscope and rely on the accelerometer alone while the user is sitting, or disable WiFi while the user is jogging, thereby saving battery life.
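The self-management idea above can be sketched as a simple policy that maps the currently recognized activity to the set of sensors worth keeping on. This is a minimal illustrative sketch: the names and the activity-to-sensor mapping are hypothetical, not taken from the thesis.

```python
# Hypothetical context-aware sensor policy: for each recognized activity,
# list the sensors that are still needed; everything else can be disabled.
SENSOR_POLICY = {
    "sitting": {"accelerometer"},               # gyroscope adds little while static
    "jogging": {"accelerometer", "gyroscope"},  # WiFi scanning off to save power
    "walking": {"accelerometer", "gyroscope"},
}

ALL_SENSORS = {"accelerometer", "gyroscope", "magnetometer", "wifi"}


def sensors_to_disable(current_activity: str) -> set:
    """Return the sensors that can be switched off for this activity.

    Unknown activities conservatively keep all sensors on.
    """
    keep = SENSOR_POLICY.get(current_activity, ALL_SENSORS)
    return ALL_SENSORS - keep
```

In a real system this decision would be re-evaluated whenever the recognized activity changes, trading a small recognition delay for battery life.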


Many people already use smartphones in their daily life, and the use of smart wearables, such as smartwatches, has grown in recent years. These devices are equipped with sensors, such as an accelerometer, a gyroscope, and a magnetometer, which provide data that can be used to recognize various human activities. Therefore, we mainly use these devices for human activity recognition in our research.

A significant amount of work has been done on human activity recognition by different researchers. However, most of it focuses on simple physical activities. Simple activities are periodic in nature and can be easily recognized; examples are walking, jogging, biking, writing, typing, sitting, and standing. Complex activities may involve hand gestures that are not periodic; examples are eating, drinking coffee, smoking, and giving a talk. Moreover, most of the existing work has been performed offline: data is collected on a smartphone or wearable device, but the activity recognition itself is carried out later on a desktop machine. In the online approach, this process runs in real-time on the device itself. In this context, we investigate the recognition of both simple and complex activities using different sensors from smartphones and wearables, in offline as well as online mode. To this end, we address the following questions:
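Whether performed offline or online, activity recognition typically starts by cutting the continuous sensor stream into fixed-size, overlapping windows and computing features per window. The sketch below shows this common preprocessing step for a single sensor axis; the window size and the specific features are illustrative choices, not the thesis's exact configuration.

```python
import statistics


def segment(signal, window_size, overlap=0.5):
    """Split a 1-D sensor signal into fixed-size, overlapping windows."""
    step = max(1, int(window_size * (1 - overlap)))
    return [signal[i:i + window_size]
            for i in range(0, len(signal) - window_size + 1, step)]


def features(window):
    """Simple time-domain features often used in activity recognition."""
    return {
        "mean": statistics.fmean(window),
        "std": statistics.pstdev(window),
        "min": min(window),
        "max": max(window),
    }
```

Each window's feature vector is then fed to a classifier; larger windows capture slower gestures but increase recognition latency, which is one of the tradeoffs studied in this thesis.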

How can various human activities be recognized using different sensors from wearable devices and smartphones, both offline and online (on the device)? What are the tradeoffs between different data features, sampling rates, segmentation window sizes, sensor positions on the human body, sensors, and classification methods, and what are their consequences for resource consumption and recognition performance?

To address these questions, we started with data collection experiments in which we collected multiple datasets for various human activities over time. We use these datasets to investigate different aspects of human activity recognition with smartphones and wearables. The main contributions of this thesis are as follows:

  • We investigate the recognition of both simple and complex human activities using various machine learning algorithms. Based on this analysis, we provide recommendations on how and when to use certain sensors, classifiers, and body positions for the recognition of a specific activity.
  • We propose a hierarchical lazy classification approach for the recognition of complex activities involving hand gestures, such as smoking and similar activities. On top of a classifier, it uses information from neighboring data segments to correct misclassified segments. We show that this algorithm improves on a single-layer classification approach for smoking, eating, and drinking activities.
  • We developed an online activity recognition framework for smartphones and smartwatches and, based on it, implemented a prototype application that recognizes various human activities in real-time. As an example use case, the smartphone recognizes seven physical activities, whereas the smartwatch is used for smoking recognition. We also implemented a smoking session detection algorithm using a hierarchical approach. We investigated the resource consumption (CPU, memory, and power) of our online activity recognition system on a mobile phone and a smartwatch. We also tested the system for three weeks of real-time activity recognition, and the observed recognition results are encouraging. Based on our offline and online analysis, we propose a context-aware activity recognition (AR) algorithm that can adapt different aspects of the AR process in real-time to save resources.
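The second-layer correction step in the hierarchical approach can be illustrated with a simple neighbor-based smoothing pass over the per-segment predictions of the first-layer classifier. The majority-vote rule below is an illustrative stand-in, not necessarily the exact correction rule used in the thesis.

```python
from collections import Counter


def smooth_labels(labels, k=2):
    """Second-layer correction: relabel each segment by majority vote over
    the k neighboring segments on each side.

    Isolated misclassifications (e.g. one "eat" segment inside a run of
    "smoke" segments) get corrected, exploiting the fact that complex
    activities span many consecutive segments.
    """
    smoothed = []
    for i in range(len(labels)):
        lo, hi = max(0, i - k), min(len(labels), i + k + 1)
        neighborhood = labels[lo:hi]
        smoothed.append(Counter(neighborhood).most_common(1)[0][0])
    return smoothed
```

For example, a lone misclassified segment in a smoking bout would be overruled by its neighbors, which is why such a second layer can outperform a single-layer classifier on gesture-based activities.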