Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, 2018
DOI: 10.1145/3267305.3274178
Lightweight Modeling of User Context Combining Physical and Virtual Sensor Data

Cited by 5 publications (10 citation statements)
References 41 publications
“…The main goal of COMPASS is to find similarities in the mobile sensors data to recognize the situation in which a mobile user is involved. In order to evaluate its performance in the reference scenario, in this section we perform a set of experiments by using two real-world datasets: ContextLabeler, presented in our previous work (Campana et al, 2018), and ExtraSensory, proposed in (Vaizman et al, 2018a). Both datasets have been collected from real mobile devices and in the same experimental setup: data has been collected from users that were engaged in their regular natural behavior (i.e., in the wild).…”
Section: Real-world Context Datasets
Mentioning (confidence: 99%)
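The passage above describes COMPASS as finding similarities in mobile sensor data in order to recognize the situation a user is involved in. A minimal sketch of that idea, assuming cosine similarity over per-window feature vectors and a greedy grouping threshold (the function names and the threshold are illustrative, not COMPASS's actual algorithm):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def group_by_situation(samples, threshold=0.9):
    """Greedy grouping: assign each feature vector to the first group whose
    running centroid is sufficiently similar, otherwise start a new group."""
    groups = []  # each group: {"centroid": [...], "members": [...]}
    for vec in samples:
        for g in groups:
            if cosine_similarity(g["centroid"], vec) >= threshold:
                g["members"].append(vec)
                n = len(g["members"])
                # update centroid as a running mean
                g["centroid"] = [c + (v - c) / n
                                 for c, v in zip(g["centroid"], vec)]
                break
        else:
            groups.append({"centroid": list(vec), "members": [vec]})
    return groups
```

With three toy two-dimensional vectors, the first two (nearly collinear) fall into one group and the orthogonal third starts a second group.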
“…As a first step we are planning to test and evaluate the proposed solution in real mobile environments. To this aim, we are currently integrating COMPASS in ContextKit, a prototype mobile framework we developed to provide contextaware features to third-party mobile applications, presented in our previous work (Campana et al, 2018). Currently, ContextKit implements a sensing layer that is able to collect data generated by a heterogeneous set of smartphone sensors, including both physical and virtual sensors.…”
Section: Time Performances Evaluation
Mentioning (confidence: 99%)
“…The crossover with affective computing, to recognise and interpret human emotion, further highlights the complexity and challenges in creating context-aware systems [37]. However, improvements in machine learning [39], increased availability of relevant data [9], an enhanced battery and network performance mean that efficacious context models are increasingly practicable [11]. Although research has attempted to encourage behaviour change through recommendations [18,20,21,38,42,49], machine learning models [28,41] and timely interventions [30,32,39], CoCo is the first system, that we know of, which combines alcohol, caffeine and cortisol sensors with a functional context model in order to encourage specific user behaviours.…”
Section: Background and Related Work
Mentioning (confidence: 99%)
“…Context-awareness represents the key feature of such applications. Specifically, the great variety of sensors embedded in modern mobile devices (e.g., smartphones and wearables) provides essential information to recognize different aspects of the user's daily life, including, for example, movements and body postures [12] and daily life situations [13]. According to Rawassizadeh et al [3], processing user's data directly on the local device provides several advantages.…”
Section: Introduction
Mentioning (confidence: 99%)
“…In this case social context and location information are based on data derived from smartphone-embedded sensors and from the user interaction with her personal mobile device. As shown in Figure 1, we envision the implementation of the proposed models as part of a pre-existing middleware architecture [13], which includes all the necessary components to perform the context-recognition task directly on the user's device: a Sensing Manager (SM) layer that unobtrusively collects context data from both physical and virtual sensors available on mobile devices, a Context Modeling (CM) layer aimed at processing raw sensor data to extract meaningful features to characterize the user's context; and a Context Reasoning (CR) layer that relies on such features to recognize the user's context by using machine learning models, including both unsupervised clustering solutions [24,25] and pre-trained supervised classifiers (e.g., Random Forest and Artificial Neural Networks [26,27]).…”
Section: Introduction
Mentioning (confidence: 99%)
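The three-layer middleware quoted above (Sensing Manager, Context Modeling, Context Reasoning) can be sketched as a minimal pipeline. The class names mirror the description, but the stubbed sensor readings, the statistical features, and the toy rule standing in for a trained classifier (e.g., Random Forest) are all assumptions for illustration:

```python
from statistics import mean, pstdev

class SensingManager:
    """SM layer: collects raw readings from physical and virtual sensors
    (stubbed here with fixed values)."""
    def collect(self):
        return {"accelerometer": [0.1, 0.3, 0.2], "battery": [0.80]}

class ContextModeling:
    """CM layer: turns raw sensor data into simple statistical features."""
    def extract_features(self, raw):
        feats = {}
        for sensor, values in raw.items():
            feats[f"{sensor}_mean"] = mean(values)
            feats[f"{sensor}_std"] = pstdev(values)
        return feats

class ContextReasoning:
    """CR layer: maps features to a context label; a pre-trained
    classifier's inference would replace this toy threshold rule."""
    def recognize(self, feats):
        return "moving" if feats.get("accelerometer_std", 0.0) > 0.05 else "still"

def run_pipeline():
    """On-device flow: SM -> CM -> CR, with no data leaving the device."""
    raw = SensingManager().collect()
    feats = ContextModeling().extract_features(raw)
    return ContextReasoning().recognize(feats)
```

The layering keeps each concern replaceable: the same CM/CR code runs whether the SM layer reads physical sensors (accelerometer) or virtual ones (calendar, app usage).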