
Journal of Artificial Intelligence

Year: 2018 | Volume: 11 | Issue: 2 | Page No.: 71-78
DOI: 10.3923/jai.2018.71.78
Multi-resident Activity Recognition Method Based in Deep Belief Network
Nadia Oukrich , El Bouazaoui Cherraqi, Abdelilah Maach and Driss Elghanami

Abstract: Background and Objective: Existing work on human activity recognition mainly focuses on recognizing the activities of a single resident. In real life, however, activities are often performed by multiple residents. This study aimed to recognize multiple-resident activities inside the home using deep neural networks and an ontological approach for feature selection. Materials and Methods: The model comprised an ontological approach for robust feature extraction and selection and a Deep Belief Network (DBN) algorithm for recognizing three categories of multiple-resident activities inside the home. A simulated experiment was conducted using two publicly available multi-resident CASAS datasets collected at Washington State University (WSU) and the proposed approach was compared with traditional recognition approaches such as Support Vector Machine (SVM) and Artificial Neural Network (ANN). Results: The results showed that the proposed approach based on DBN and ontology produced better accuracy than SVM and ANN. Conclusion: In this research, a deep neural network algorithm was successfully developed to recognize daily-life human activities using manually extracted features.


How to cite this article
Nadia Oukrich, El Bouazaoui Cherraqi, Abdelilah Maach and Driss Elghanami, 2018. Multi-resident Activity Recognition Method Based in Deep Belief Network. Journal of Artificial Intelligence, 11: 71-78.

Keywords: ontological approach, multiple residents, activity recognition, feature extraction, deep belief network

INTRODUCTION

The key point in the development of a smart home is the recognition of the normal, daily routine activities of its residents. This recognition can reduce the costs of health and elderly care, which exceed $7 trillion annually worldwide and are rising1,2. It helps ensure comfort, homecare3 and safety and reduces energy consumption. For these reasons, human activity recognition has been the focus of much research for nearly two decades. In fact, a large amount of research treats the recognition of Activities of Daily Living4 (ADLs), i.e., activities performed in a resident's daily routine, such as eating, cooking, sleeping and toileting. There are several reasons why ADLs are the most covered in the literature. They are pertinent examples of general activities common to both young and old people. ADLs are the most used in standard tests of resident autonomy; disability with ADLs is the most common reason that older people live in nursing facilities5. Finally, ADLs are the best suited as inputs to different home applications. For these reasons, this research focused on recognizing ADLs.

Most existing work on activity recognition focuses on recognizing the activities of one resident in a smart home. In real life, however, multiple inhabitants often live in the same house and perform ADLs together or concurrently6. Recognizing multiple residents’ activities is challenging for several reasons: it requires an appropriate number of sensors, suitable methods to model multiple residents’ interactions and the filtering of noise from the raw data. Carrying out sensor data fusion in such settings to achieve sufficient accuracy for multiple-resident activity recognition is still an open research issue7.

In this study, multiple residents’ activities are classified into three broad categories:

A single resident performs activities one by one: a single resident performs an activity sequentially and independently (e.g., personal hygiene or a bed-to-toilet transition). Most work in the literature has focused on this type of ADL
Multiple residents perform the same activity together: two or more residents do an activity in a cooperative or participatory manner (e.g., two or more residents eat a meal or watch TV together)
Multiple residents perform different activities independently8: two or more residents perform different activities simultaneously (e.g., one resident watches TV while another prepares a meal)

In this study, the recognition of these three categories of multiple residents’ activities has been treated. As far as multi-resident activity recognition is concerned, few articles have been published on the subject and few experiments have been conducted in real conditions. Many studies address only simple scenarios in which multiple residents perform different activities independently9,10, although parallel, exclusive and cooperative activities are the most frequent in practice. To the best of our knowledge, no work has addressed all types of activities. Some existing works have recognized multi-resident or group activities in pervasive computing9-12, but these works remain at a development stage because of the complexity of multi-resident states and activities.

Due to their low cost, low power consumption and respect for privacy, sensor-based approaches became a centre of interest over the last decade. Researchers have commonly applied machine learning to recognize activities from sensor readings. Typical static approaches include Naive Bayes (NB)13, decision trees14 and Support Vector Machine (SVM)15. Temporal approaches include the Hidden Markov Model (HMM)16, the knowledge-driven approach (KDA)17, Conditional Random Field (CRF)18 and the Evolutionary Ensembles Model (EEM)19. Researchers have also trained neural network algorithms inspired by the architecture of the brain. Neural networks have been used in various recent studies to recognize human activities and actions20-23 and appear more successful and efficient than other machine learning algorithms.

The Deep Belief Network (DBN) has attracted many activity recognition researchers because of two major advantages: it can learn many more parameters than discriminative models without overfitting and it is easy to see what the network has learned by generating from its model24. For instance, Fang and Hu25 used a DBN with 4 hidden layers to recognize human activities; the results were compared with a hidden Markov model and Naive Bayes and the highest accuracy obtained was 79.32%. Hassan et al.26 used a DBN with two hidden layers and 100 inputs for activity recognition and compared it with Support Vector Machine (SVM) and Artificial Neural Network (ANN), both of which it outperformed; the highest accuracy obtained was 89.61%.

This study proposes an ontological approach for feature extraction, combined with a deep belief network trained in a supervised manner, to recognize the three types of multiple-resident activities described above. The combination of these two approaches not only increases the accuracy of the results compared to the literature but also reduces the training complexity27. Oukrich et al.28 explored activity recognition using a back-propagation algorithm with auto-encoder feature selection to recognize multi-resident activities; the method described in this study outperformed that previous work and gave pertinent results.

MATERIALS AND METHODS

The proposed system is composed of three main parts: sensing, feature extraction and recognition. The first part was data collection, used as input to the Human Activity Recognition (HAR) system. For this study, the emergent sensors selected for data collection in the two smart homes were motion sensors and door sensors. The sensors provided continuous data over a long period. The second part was feature extraction, in which an ontological approach was used to extract the features most relevant and adequate for the learning part. The third part modelled activities from the features via deep learning, where a DBN was adopted.

This study took more than six months of tests and research to carry out the proposed system and all research related to this study was done in the IT laboratory of Mohammed V University in Rabat, Morocco.

Data collection: For data collection, two multiple-resident datasets collected by the Centre for Advanced Studies in Adaptive Systems (CASAS)29 were used to evaluate the approach:

The Tulum dataset was collected from April to July 2009. The apartment housed two married residents, who performed 10 normal daily activities. This dataset contains two categories of activities: a single resident performing activities one by one and multiple residents performing the same activity together. It contains 1513 samples
The Twor dataset was collected in the WSU smart apartment test bed during the 2009-2010 academic year. The apartment housed two residents, R1 and R2, who performed 26 normal daily activities. The multi-resident activity category in this database represents activities performed by several residents independently but not concurrently. It contains 3896 samples
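The CASAS datasets are distributed as plain-text event streams. A minimal parser sketch is shown below; the field layout (date, time, sensor ID, sensor value, optional activity annotation) follows the common CASAS release format and is an assumption here, not a detail taken from this paper:

```python
from datetime import datetime

def parse_casas_line(line):
    """Parse one CASAS event line into (timestamp, sensor_id, value, annotation).

    Assumes the typical CASAS layout: date, time, sensor ID, sensor value
    and an optional activity annotation such as 'Sleeping begin'.
    """
    parts = line.split()
    # Drop fractional seconds before parsing the timestamp.
    ts = datetime.strptime(parts[0] + " " + parts[1].split(".")[0],
                           "%Y-%m-%d %H:%M:%S")
    sensor_id, value = parts[2], parts[3]
    annotation = " ".join(parts[4:]) if len(parts) > 4 else None
    return ts, sensor_id, value, annotation

event = parse_casas_line("2009-06-10 03:20:59.08 M014 ON Sleeping begin")
```

Grouping the parsed events by their begin/end annotations then yields the per-activity sequences used for feature extraction.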

Activity details are presented in Table 1.

Fig. 1: Ontological representation of activity

Table 1: ADLs activities of Tulum and Twor datasets

Proposed approach to extract features: To achieve a better representation of ADLs, extracting a maximum of relevant features seemed essential. To that end, an ontological approach was used as the feature space to represent the training dataset and extract information from the raw data. As shown in Fig. 1, a relationship was established between an activity and other entities. Based on this ontological approach, 17 features were extracted:

The mean of the sensor IDs triggered by the current activity, computed as Si = (1/ni) Σk=1..ni Sik, where Si is the mean of the sensor IDs of activity i, ni is the number of motion and door sensor events recorded in the dataset between the beginning and end of the activity and Sik is the kth sensor ID
The logical value of the first Sensor ID triggered by the current activity
The logical value of the second Sensor ID triggered by the current activity
The logical value of the last Sensor ID triggered by the current activity
The logical value of the second-to-last Sensor ID triggered by the current activity
The name of the first sensor triggered by the current activity
The name of the last sensor triggered by the current activity
The variance of all Sensor IDs triggered by the current activity
The beginning time of the current activity
The ending time of the current activity
The duration of the current activity
Day of week, which is converted into a value in the range of 0-6
Previous activity, which represents the activity that occurred before the current activity
Activity length, which is the number of instances between the beginning and the end of current activity
The name of the dominant sensor during the current activity
Location of the dominant sensor
Frequency of the dominant sensor

Based on the features described above, an algorithm was developed in C++ to extract the features from the raw data.
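As an illustration, a subset of the listed features (mean and variance of sensor IDs, first/last sensor, begin/end time, duration, activity length and dominant sensor) can be sketched in Python. The integer sensor encoding and the function name are hypothetical, since the authors' C++ implementation is not shown:

```python
import statistics

def extract_features(events):
    """Compute a few of the 17 ontology-based features for one activity instance.

    `events` is a chronologically ordered list of (timestamp, sensor_num) pairs
    for a single activity; sensor IDs are assumed pre-mapped to integers.
    """
    nums = [n for _, n in events]
    times = [t for t, _ in events]
    return {
        "mean_sensor_id": sum(nums) / len(nums),            # Si, the mean sensor ID
        "var_sensor_id": statistics.pvariance(nums),        # variance of sensor IDs
        "first_sensor": nums[0],
        "last_sensor": nums[-1],
        "begin_time": times[0],
        "end_time": times[-1],
        "duration": times[-1] - times[0],
        "activity_length": len(events),                     # number of event instances
        "dominant_sensor": max(set(nums), key=nums.count),  # most frequent sensor
    }

feats = extract_features([(0, 14), (5, 14), (9, 7)])
```

The remaining features (logical sensor values, sensor names and locations, day of week, previous activity) follow the same per-activity pattern with lookups into the sensor metadata.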

DBN used in the proposed work: A Deep Belief Network (DBN) is stacked and trained in a greedy manner using Restricted Boltzmann Machines (RBM)30. A DBN has two basic phases: pre-training and fine-tuning. Once the network was pre-trained with RBMs, fine-tuning was performed using supervised gradient descent. Specifically, a logistic regression classifier was used to classify the input based on the output of the last hidden layer of the DBN.
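The greedy RBM pre-training can be summarized by the standard formulation of Hinton et al.24: each RBM assigns an energy to a joint visible/hidden configuration, and the CD-1 update approximates the log-likelihood gradient from a single reconstruction step:

```latex
E(\mathbf{v}, \mathbf{h}) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i w_{ij} h_j
\qquad
\Delta w_{ij} = \epsilon \left( \langle v_i h_j \rangle_{\text{data}} - \langle v_i h_j \rangle_{\text{recon}} \right)
```

where $a_i$, $b_j$ are the visible and hidden biases, $w_{ij}$ the weights and $\epsilon$ the learning rate.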

Fig. 2: Structure of a DBN used in this work with 17 neurons in input layer and 3 hidden layers

That is, once the RBM of the first hidden layer was trained, its hidden-layer activations were used as inputs to the second hidden layer. Figure 2 shows the three hidden layers used in this study. Following the proposal of Hinton et al.24, this work used the contrastive divergence (CD) algorithm to train the RBMs in a supervised scenario.
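A DBN-like stack can be approximated with scikit-learn's BernoulliRBM: three RBMs are pre-trained layer by layer and a logistic regression classifies the top-layer activations, mirroring the three-hidden-layer architecture of Fig. 2. This is only a sketch: the data are random stand-ins for the 17 features, the layer sizes are arbitrary and, unlike a full DBN, the pipeline performs no joint fine-tuning of the RBM weights:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Random stand-ins for 200 scaled 17-dimensional feature vectors and
# 3 activity-category labels (illustrative only).
rng = np.random.default_rng(0)
X = rng.random((200, 17))
y = rng.integers(0, 3, size=200)

# Three stacked RBMs, pre-trained greedily layer by layer, with a logistic
# regression classifier on the last hidden layer's activations.
dbn_like = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=10, random_state=0)),
    ("rbm3", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=10, random_state=0)),
    ("clf", LogisticRegression(max_iter=500)),
])
dbn_like.fit(X, y)
preds = dbn_like.predict(X)
```

BernoulliRBM expects inputs in [0, 1], which is why the feature vectors would be scaled before training.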

RESULTS

For the experiments, as described above, two databases were used to validate the proposed approach. The Twor database had 3896 events and the Tulum database had 1367 events, with 70% used for training and 30% for testing. It should be noted that in the databases used in this work, the numbers of training and testing samples were not evenly distributed across activities: some activities contained a huge number of samples, whereas others had very few. The number of inputs for the three algorithms was fixed to ensure a fair comparison. Several numbers of hidden layers were tested for both the BPA and DBN algorithms and the best results were kept.

The experiments started with the Back-Propagation Algorithm (BPA). This algorithm was run several times using different numbers of hidden layers, with the numbers of input and output units fixed according to the dataset. The best mean recognition rate was 88.75% on the Twor dataset and 76.79% on the Tulum dataset. The back-propagation experimental results are shown in Table 2. Next, Support Vector Machines (SVM) were applied, yielding a best mean recognition rate of 87.42% on the Twor dataset and 73.52% on the Tulum dataset. The SVM experimental results are reported in Table 3. Finally, the proposed approach was tested and yielded the highest recognition rates: 90.23% on the Twor dataset and 78.49% on the Tulum dataset.
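The 70/30 evaluation protocol can be sketched with scikit-learn. The feature matrix below is a random placeholder for the extracted feature vectors and SVC and MLPClassifier stand in for the SVM and BPA baselines; hyperparameters are illustrative, not those of the paper:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for 300 samples of 17-dimensional feature vectors
# and 5 activity labels (illustrative only).
rng = np.random.default_rng(1)
X = rng.random((300, 17))
y = rng.integers(0, 5, size=300)

# 70% training / 30% testing, as in the experiments.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "SVM": SVC(kernel="rbf"),
    "BPA (MLP)": MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```

Per-activity accuracies, as reported in Tables 2-4, would then come from a confusion matrix on the held-out 30%.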

Table 2: HAR-experiment results using back-propagation based approach

Table 3: HAR-experiment results using traditional SVM-based approach

Thus, the proposed approach showed superiority over the others. Table 4 exhibits the experimental results using the proposed approach. Figures 3 and 4 compare, for the three models BPA, SVM and DBN, the accuracies on the different activities of the Twor and Tulum datasets.

DISCUSSION

The experimental results confirmed that the DBN was superior in accuracy to the other algorithms; this finding is consistent with other recent research in the field of human activity recognition25,26,31.

Fig. 3: Comparison of BPA, SVM and DBN of Twor dataset

Table 4: HAR-experiment results using DBN-based approach

However, this result does not mean that DBN is superior to all other deep learning algorithms in the activity recognition field. Technically, no model outperforms all others in all situations32, so it is recommended to choose models based on the criteria explained in detail in the survey by Wang et al.32.

Despite the different numbers of samples across the tested activities, a weak mean recognition rate does not necessarily indicate poor accuracy26,33,34. For instance, the activity Group_Meeting, in which the two married residents performed the same activity together, was difficult to recognize because it occurred in very few instances. Moreover, the meeting place was not fixed and, with emerging sensors, the system cannot detect the presence of two residents in the same place.

Recognition of activities in which multiple residents performed different activities independently is comparatively easy. In general, some activities were specific to the woman and others were generally performed by the man, and the way the woman performed an activity differed from the man's. From this difference, the proposed algorithm learns to detect both the activity and the person who carried it out.

Fig. 4: Comparison of BPA, SVM and DBN of Tulum dataset

CONCLUSION

This study applied three machine learning algorithms to represent and recognize human activities. From the results, it can be concluded that the DBN algorithm is better than SVM and BPA. The main reasons are that the DBN was the most suitable for ADL activities and that it had a strong ability to learn to interpret complex sensor events in smart home environments. Furthermore, the robust feature set, manually extracted feature by feature, yields higher human activity recognition accuracy since it considers the specificity of the database.

SIGNIFICANCE STATEMENT

This research paper studies the efficacy of using a deep neural network for human activity recognition based on efficient, manually extracted features and compares it with traditional recognition approaches such as Support Vector Machine (SVM) and the Back-Propagation Algorithm (BPA). Additionally, it highlights the recognition of multiple residents' activities inside the home so as to come nearer to real life.

REFERENCES

  • Rodriguez, E. and K. Chan, 2016. Smart interactions for home healthcare: A semantic shift. Int. J. Arts Technol., 9: 299-319.


  • Anonymous, 2018. Healthcare, pharmaceuticals and medical devices industry analysis and data from The EIU. http://www.eiu.com/industry/Healthcare.


  • Moreno, L.V., M.L.M. Ruiz, J.M. Hernandez, M.A.V. Duboy and M. Linden, 2017. The Role of Smart Homes in Intelligent Homecare and Healthcare Environments. In: Ambient Assisted Living and Enhanced Living Environments, Dobre, C., C. Mavromoustakis, N. Garcia, R. Goleva and G. Mastorakis (Eds.)., Butterworth-Heinemann, France, pp: 345-394


  • Merrilees, J., 2014. Activities of Daily Living. In: Encyclopedia of the Neurological Sciences, Daroff, R. and M.J. Aminoff (Eds.)., 2nd Edn., Academic Press, USA., pp: 47-48


  • Fried, L.P. and J.M. Guralnik, 1997. Disability in older adults: Evidence regarding significance, etiology and risk. J. Am. Geriat. Soc., 45: 92-100.


  • Benmansour, A., A. Bouchachia and M. Feham, 2016. Multioccupant activity recognition in pervasive smart home environments. ACM Comput. Surv., Vol. 48.


  • Alemdar, H. and C. Ersoy, 2017. Multi-resident activity tracking and recognition in smart environments. J. Ambient Intell. Humanized Comput., 8: 513-529.


  • Gu, T., Z. Wu, L. Wang, X. Tao and J. Lu, 2009. Mining emerging patterns for recognizing activities of multiple users in pervasive computing. Proceedings of the 6th Annual International Mobile and Ubiquitous Systems: Networking and Services, July 13-16, 2009, Toronto, ON., Canada, pp: 1-10.


  • Rashidi, P., G.M. Youngblood, D.J. Cook and S.K. Das, 2007. Inhabitant Guidance of Smart Environments. In: Human-Computer Interaction. Interaction Platforms and Techniques, Rashidi, P., G.M. Youngblood, D.J. Cook and S.K. Das (Eds.)., Springer, Berlin, Heidelberg, pp: 910-919


  • Lin, Z.H. and L.C. Fu, 2007. Multi-user preference model and service provision in a smart home environment. Proceedings of the IEEE International Conference on Automation Science and Engineering, September 22-25, 2007, Scottsdale, AZ., USA., pp: 759-764.


  • Wang, L., T. Gu, X. Tao, H. Chen and J. Lu, 2011. Recognizing multi-user activities using wearable sensors in a smart home. Pervas. Mobile Comput., 7: 287-298.


  • Hsu, K.C., Y.T. Chiang, G.Y. Lin, C.H. Lu, J.Y.J. Hsu and L.C. Fu, 2010. Strategies for Inference Mechanism of Conditional Random Fields for Multiple-Resident Activity Recognition in a Smart Home. In: Trends in Applied Intelligent Systems, Garcia-Pedrajas, N., F. Herrera, C. Fyfe, J.M. Benitez and M. Ali (Eds.)., Springer, Berlin, Heidelberg, pp: 417-426


  • Borges, V. and W. Jeberson, 2015. Fortune at the bottom of the classifier pyramid: A novel approach to human activity recognition. Proc. Comput. Sci., 46: 37-44.


  • Logan, B., J. Healey, M. Philipose, E.M. Tapia and S. Intille, 2007. A Long-Term Evaluation of Sensing Modalities for Activity Recognition. In: UbiComp 2007: Ubiquitous Computing, Krumm, J., G.D. Abowd, A. Seneviratne and T. Strang (Eds.)., Springer, Berlin, Heidelberg, pp: 483-500


  • Fahad, L.G. and M. Rajarajan, 2015. Integration of discriminative and generative models for activity recognition in smart homes. Applied Soft Comput., 37: 992-1001.


  • Crandall, A.S. and D.J. Cook, 2010. Using a hidden markov model for resident identification. Proceedings of the 6th International Conference on Intelligent Environments, July 19-21, 2010, Kuala Lumpur, Malaysia, pp: 74-79.


  • Chen, L., C.D. Nugent and H. Wang, 2012. A knowledge-driven approach to activity recognition in smart homes. IEEE Trans. Knowledge Data Eng., 24: 961-974.


  • Zhan, K., S. Faux and F. Ramos, 2015. Multi-scale conditional random fields for first-person activity recognition on elders and disabled patients. Pervas. Mobile Comput., 16: 251-267.


  • Fahim, M., I. Fatima, S. Lee and Y.K. Lee, 2013. EEM: Evolutionary ensembles model for activity recognition in smart homes. Applied Intelli., 38: 88-98.


  • Mehr, H.D., H. Polat and A. Cetin, 2016. Resident activity recognition in smart homes by using artificial neural networks. Proceedings of the 4th International Istanbul Smart Grid Congress and Fair, April 20-21, 2016, Istanbul, Turkey, pp: 1-5.


  • Bourobou, S.T.M. and Y. Yoo, 2015. User activity recognition in smart homes using pattern clustering applied to temporal ANN algorithm. Sensors, 15: 11953-11971.


  • Cherraqi, E.B. and A. Maach, 2017. Load Signatures Identification Based on Real Power Fluctuations. In: International Conference on Information Technology and Communication Systems, Noreddine, G. and J. Kacprzyk (Eds.)., Springer, Cham, pp: 143-152


  • Wang, A., G. Chen, C. Shang, M. Zhang and L. Liu, 2016. Human Activity Recognition in a Smart Home Environment with Stacked Denoising Autoencoders. In: Web-Age Information Management, Song, S. and Y. Tong (Eds.)., Springer, Cham, pp: 29-40


  • Hinton, G.E., S. Osindero and Y.W. Teh, 2006. A fast learning algorithm for deep belief nets. Neural Comput., 18: 1527-1554.


  • Fang, H. and C. Hu, 2014. Recognizing human activity in smart home using deep learning algorithm. Proceedings of the 33rd Chinese Control Conference, July 28-30, 2014, Nanjing, China, pp: 4716-4720.


  • Hassan, M.M., M.Z. Uddin, A. Mohamed and A. Almogren, 2018. A robust human activity recognition system using smartphone sensors and deep learning. Future Generat. Comput. Syst., 81: 307-313.


  • Rizk, Y., N. Hajj, N. Mitri and M. Awad, 2018. Deep belief networks and cortical algorithms: A comparative study for supervised classification. Applied Comput. Infor.


  • Oukrich, N., E.B. Cherraqi and A. Maach, 2017. Human Daily Activity Recognition Using Neural Networks and Ontology-Based Activity Representation. In: Innovations in Smart Cities and Applications, Ben Ahmed, M. and A. Boudhir (Eds.)., Springer, Germany, pp: 622-633


  • Cook, D.J. and M. Schmitter-Edgecombe, 2009. Assessing the quality of activities in a smart environment. Meth. Infor. Med., 48: 480-485.


  • Hinton, G.E. and R.R. Salakhutdinov, 2006. Reducing the dimensionality of data with neural networks. Science, 313: 504-507.


  • Zhang, L., X. Wu and D. Luo, 2015. Recognizing human activities from raw accelerometer data using deep neural networks. Proceedings of the IEEE 14th International Conference on Machine Learning and Applications (ICMLA), December 9-11, 2015, Miami, FL., USA., pp: 865-870.


  • Wang, J., Y. Chen, S. Hao, X. Peng and L. Hu, 2018. Deep learning for sensor-based activity recognition: A survey. Pattern Recog. Lett.,


  • Tapia, E.M., S.S. Intille and K. Larson, 2004. Activity Recognition in the Home Using Simple and Ubiquitous Sensors. In: International Conference on Pervasive Computing, Ferscha, A. and M. Friedemann (Eds.). Springer, Berlin, Germany, ISBN:978-3-540-24646-6, pp: 158-175


  • Fang, H., L. He, H. Si, P. Liu and X. Xie, 2014. Human activity recognition based on feature selection in smart home using back-propagation algorithm. ISA Trans., 53: 1629-1638.
