PhD Thesis

Title: Context-based Reasoning in Ambient Intelligence

– CoReAmI –

[pdf] | [presentation]

The research for my PhD focused on context-based reasoning applied to data from wearable sensors. The idea is to include context in the reasoning process and thereby improve the reasoning performance. For example, if an activity-recognition system recognizes that the user is sitting and the heart rate is high (e.g., 130 beats per minute), this might appear to be an alarming situation regarding the user's health status. However, it is not necessarily alarming. If the system is aware of the context, e.g., the user's previous activity, and that activity was running, it can infer that the situation is normal and that any alarm should be disregarded.
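The heart-rate example above can be sketched as a simple context-aware rule. This is a minimal illustration, not the thesis's method: the 120 bpm threshold and the 5-minute recovery window are assumed values chosen for the sketch.

```python
# Minimal sketch of a context-aware alarm check.
# ASSUMPTIONS: the 120 bpm threshold and 5-minute recovery window
# are illustrative, not values from the thesis.

HR_ALARM_BPM = 120          # assumed resting heart-rate alarm threshold
RECOVERY_WINDOW_S = 5 * 60  # assumed recovery period after exercise

def should_raise_alarm(activity, heart_rate_bpm, seconds_since_exercise):
    """Return True only when a high heart rate while sitting is NOT
    explained by the context (recent exercise)."""
    if activity != "sitting" or heart_rate_bpm < HR_ALARM_BPM:
        return False
    # Context: recent running explains the elevated heart rate.
    if seconds_since_exercise is not None and seconds_since_exercise < RECOVERY_WINDOW_S:
        return False
    return True
```

With this rule, a user sitting at 130 bpm one minute after a run triggers no alarm, while the same reading with no recent exercise does.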


The availability of small, wearable, low-cost, power-efficient sensors, combined with advanced signal processing and information extraction, is driving a revolution in the ambient intelligence (AmI) domain. This revolution has enabled novel approaches and technologies for accurate measurements in the area of healthcare, enhanced sports and fitness training, and life-style monitoring.

Early AmI systems included a single type of sensor, which made it possible to develop the first proof-of-concept applications. As the field has matured, these systems have gained additional sensors, resulting in the development of more advanced and more accurate multi-sensor techniques and applications. However, combining information from multiple sensors is a challenging task. The first issue is that each sensor has its own technical configuration (for example, the data sampling rate) and requires different data-processing techniques, first to align the different sensor data and later to extract useful information. The second issue is that, even if the multi-source data is aligned, it can be challenging to find an intelligent way to combine the multi-source information in order to reason about the user or the environment. While several approaches for combining multiple sources of information and knowledge have been developed (such as Kalman filters, ensemble learning, and co-training), these approaches have not been specialized for AmI tasks.
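The alignment step mentioned above can be illustrated with a small sketch: two sensor streams sampled at different rates are resampled onto a common time grid by linear interpolation. The 10 Hz target grid is an assumption for the example, not a value from the thesis.

```python
# Sketch: aligning two sensor streams with different sampling rates
# by linearly interpolating both onto a shared time grid.
# ASSUMPTION: the 10 Hz target rate is illustrative only.
import numpy as np

def align(t_a, x_a, t_b, x_b, rate_hz=10.0):
    """Resample two (timestamps, values) streams onto a common grid
    spanning the interval where both streams have data."""
    start = max(t_a[0], t_b[0])
    stop = min(t_a[-1], t_b[-1])
    grid = np.arange(start, stop, 1.0 / rate_hz)
    return grid, np.interp(grid, t_a, x_a), np.interp(grid, t_b, x_b)
```

For instance, a 50 Hz accelerometer stream and a 1 Hz heart-rate stream can be aligned this way before features are extracted from the common grid.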

This thesis addresses the problem of combining multiple sources of information extracted from sensor data by proposing a novel context-based approach called CoReAmI (Context-based Reasoning in Ambient Intelligence). The CoReAmI approach consists of three phases: context extraction, context modeling, and context aggregation. In the first phase, multiple contexts are extracted from the sensor data. In the second phase, the problem is modeled using the already extracted contexts. In the third phase, when evaluating a data sample, the models that correspond to the current context are invoked, and their outputs are aggregated into the final decision.

The feasibility of this approach is shown in the three domains that have emerged as essential building blocks in AmI: activity recognition, energy-expenditure estimation, and fall detection. For each of these domains, the thesis offers an appropriate description of the domain, its relevance, and its most relevant related work. The application of the CoReAmI approach to each problem domain is then described, followed by a thorough evaluation of the approach. The results show that CoReAmI significantly outperforms the competing approaches in each of the domains. This is mainly due to the fact that, by extracting multiple sources of information and combining them by using each source of information as a context, a multi-view perspective is created, which leads to better performance than with conventional approaches.

CoReAmI Overview


This thesis proposes a novel context-based reasoning approach for ambient intelligence, called CoReAmI. It is based on two principles: (i) using context and (ii) using multiple points of view on the same situation.

In order to explain the first principle, consider an example of a user whose heart rate and activities are monitored by an AmI system. Suppose the system observes that the user is sitting and has a relatively high heart rate. This could be an alarming situation, but not if the user was exercising a few moments earlier. Therefore, a system that is aware of the context – that is, the previous activity – should reason better than one that reasons without context.

The second principle is related to using multiple views in order to reason about a user, or about the environment in general. An intuitive example of this concept is sensing food: the process of forming a decision (a "complete picture") about the food that we eat. When we eat, multiple senses contribute to forming this picture. First, we use sight to collect information about the appearance of the food. Then, we usually smell the food, and finally we taste it. Each sense provides unique information about the food, and when all three inputs are combined, the "complete picture" of the food is formed. However, the three inputs are combined in an intelligent way, not independently: if one sense is missing, the other two are affected as well. A typical example is when we have a cold and most food tastes the same, simply because we cannot smell properly.

Using these two principles, we developed CoReAmI, which reasons about the situation from different points of view created by using each source of information as a context individually.

     Context-based multi-view perspective

CoReAmI first extracts multiple contexts from multiple sources of information (sensor data). In this way, a dataset containing all the contexts and their values is created. CoReAmI then partitions the dataset into multiple subsets according to the values of the extracted contexts (context-based data partitioning). An example dataset with context-based data partitioning is shown in Figure 1. The dataset consists of three contexts – A, B and C – and a class (decision). In this particular example, B is chosen as the context, and the dataset is divided into three subsets, each corresponding to one value of B, i.e., B1, B2 and B3. Each row in the dataset is called a data instance (a vector containing the extracted context values and the class value); therefore, the subset for B1 contains only those data instances (examples) whose B value is B1. The same procedure is performed for each of the contexts individually, resulting in multiple views of the dataset, i.e., context-based views. In the next step, a model that reasons about the user is constructed for each of the subsets. Finally, when evaluating a data instance, the decisions provided by each model are aggregated (by an aggregation function) into the final decision.
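The partitioning and aggregation steps described above can be sketched in a few lines. This is a simplified illustration, not the thesis's implementation: data instances are represented as dictionaries, a trivial majority-class predictor stands in for the per-subset models, and majority voting stands in for the aggregation function.

```python
# Sketch of context-based partitioning and aggregation.
# ASSUMPTIONS: dict-based instances, a majority-class stand-in model,
# and majority voting as the aggregation function.
from collections import Counter, defaultdict

def partition_by_context(dataset, context):
    """Split the dataset into subsets, one per value of the context."""
    subsets = defaultdict(list)
    for instance in dataset:
        subsets[instance[context]].append(instance)
    return subsets

def majority_class(subset):
    """A trivial stand-in 'model': predict the most frequent class."""
    return Counter(inst["class"] for inst in subset).most_common(1)[0][0]

def coreami_predict(dataset, contexts, query):
    """For each context, invoke the model matching the query's context
    value, then aggregate the per-context decisions by majority vote."""
    decisions = []
    for context in contexts:
        subsets = partition_by_context(dataset, context)
        subset = subsets.get(query[context])
        if subset:  # a model exists for this context value
            decisions.append(majority_class(subset))
    return Counter(decisions).most_common(1)[0][0]
```

For example, with a toy dataset of three instances over contexts A, B and C, `coreami_predict(data, ["A", "B", "C"], query)` produces one decision per context view and returns their majority vote.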


    Figure 1.

     CoReAmI Flowchart

The CoReAmI approach is a general approach for context-based reasoning in AmI. At the top level are the sensors, which provide the raw data. The data from the multiple sensors are usually represented as multivariate time series with mixed sampling rates. These time series are the input to the CoReAmI approach. Figure 2 shows an overview of the approach, which consists of three phases marked with A, B and C, where sensors are marked with {s1, … , sm}, contexts with {c1, … , cn}, context values with vc, reasoning data with Rvc, and models with mc.



 Figure 2