46 results for "human behaviour analysis"
Search Results
2. A Survey on Few-Shot Techniques in the Context of Computer Vision Applications Based on Deep Learning
- Author
- San-Emeterio, Miguel G., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Mazzeo, Pier Luigi, editor, Frontoni, Emanuele, editor, Sclaroff, Stan, editor, and Distante, Cosimo, editor
- Published
- 2022
- Full Text
- View/download PDF
3. CamNuvem: A Robbery Dataset for Video Anomaly Detection.
- Author
- de Paula, Davi D., Salvadeo, Denis H. P., and de Araujo, Darlan M. N.
- Subjects
- VIDEO surveillance; ANOMALY detection (Computer security); ROBBERY; CRIMINAL investigation; CAMCORDERS; HUMAN behavior
- Abstract
(1) Background: The research area of video surveillance anomaly detection aims to automatically detect the moment when a video surveillance camera captures something that does not fit the normal pattern. This is a difficult task, but it is important to automate, improve, and lower the cost of the detection of crimes and other incidents. The UCF-Crime dataset is currently the most realistic crime dataset, and it contains hundreds of videos distributed in several categories; it includes a robbery category, which contains videos of people stealing material goods using violence, but this category only includes a few videos. (2) Methods: This work focuses only on the robbery category, presenting a new weakly labelled dataset that contains 486 new real-world robbery surveillance videos acquired from public sources. (3) Results: We have modified and applied three state-of-the-art video surveillance anomaly detection methods to create a benchmark for future studies. We showed that in the best scenario, taking into account only the anomaly videos in our dataset, the best method achieved an AUC of 66.35%. When all anomaly and normal videos were taken into account, the best method achieved an AUC of 88.75%. (4) Conclusion: This result shows that there is a huge research opportunity to create new methods and approaches that can improve robbery detection in video surveillance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. Embedding AI ethics into the design and use of computer vision technology for consumer's behaviour understanding.
- Author
- Tiribelli, Simona, Giovanola, Benedetta, Pietrini, Rocco, Frontoni, Emanuele, and Paolanti, Marina
- Subjects
- ARTIFICIAL intelligence; HUMAN behavior; COMPUTER engineering; BEHAVIORAL assessment; CONSUMER behavior; DEEP learning
- Abstract
Artificial Intelligence (AI) techniques are becoming more and more sophisticated, showing the potential to deeply understand and predict consumer behaviour in ways that could boost the retail sector; however, the retail-sensitive considerations underpinning their deployment have been poorly explored to date. This paper explores the application of AI technologies in the retail sector, focusing on their potential to enhance decision-making processes by preventing major ethical risks inherent to them, such as the propagation of bias and systems' lack of explainability. Drawing on recent literature on AI ethics, this study proposes a methodological path for the design and development of trustworthy, unbiased, and more explainable AI systems in the retail sector. The framework is grounded in European (EU) AI ethics principles and addresses the specific nuances of retail applications. To do this, we first examine the VRAI framework, a deep learning model used to analyse shopper interactions, people counting and re-identification, to highlight the critical need for transparency and fairness in AI operations. Second, the paper proposes actionable strategies for integrating high-level ethical guidelines into practical settings and, particularly, for mitigating biases that lead to unfair outcomes in AI systems and improving their explainability. By doing so, the paper aims to show the key added value of embedding AI ethics requirements into AI practices and computer vision technology to truly promote technically and ethically robust AI in the retail domain.
• Establishment of AI ethics guidelines for the retail domain.
• Design of a detailed framework for AI ethics.
• Evaluation of the VRAI framework on new explainability metrics.
• Recommendations for AI transparency improvements.
• Integration of ethics into the design of AI systems.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. All signals point to personality: A dual-pipeline LSTM-attention and symbolic dynamics framework for predicting personality traits from Bio-Electrical signals.
- Author
- Kumar, Deepak, Singh, Pradeep, and Raman, Balasubramanian
- Subjects
- FIVE-factor model of personality; HUMAN behavior; PERSONALITY; BEHAVIORAL assessment; SYMBOLIC dynamics
- Abstract
The prediction of personality traits offers valuable insights into human behaviour, more specifically in psychology, healthcare, and social science. In this paper, we present a novel methodology for personality trait prediction using a dual-pipeline architecture. The model leverages Long Short-Term Memory (LSTM) networks with batch normalization to capture sequential dependencies in the data and incorporates temporal attention heads for feature extraction. By combining these parallel pipelines, our network effectively utilizes both LSTM and attention mechanisms to create a comprehensive representation of the input data. The network's goal is to predict the OCEAN traits (openness, conscientiousness, extraversion, agreeableness, and neuroticism) using physiological signals, including EEG, ECG, and GSR. Including attention mechanisms enables the model to focus on critical moments in these signals, resulting in significantly improved prediction accuracy. Experimental evaluations demonstrate the superior performance of our method compared to traditional machine learning methods on two publicly available datasets: ASCERTAIN and AMIGOS. Our source code is accessible at https://github.com/deepakkumar-iitr/AT3NET. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. A Deep Learning Architecture for Recognizing Abnormal Activities of Groups Using Context and Motion Information
- Author
- Borja-Borja, Luis Felipe, Azorín-López, Jorge, Saval-Calvo, Marcelo, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Herrero, Álvaro, editor, Cambra, Carlos, editor, Urda, Daniel, editor, Sedano, Javier, editor, Quintián, Héctor, editor, and Corchado, Emilio, editor
- Published
- 2021
- Full Text
- View/download PDF
7. Deciphering the Code: Evidence for a Sociometric DNA in Design Thinking Meetings
- Author
- Kohl, Steffi, Graus, Mark P., Lemmink, Jos G. A. M., Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Stephanidis, Constantine, editor, Antona, Margherita, editor, and Ntoa, Stavroula, editor
- Published
- 2020
- Full Text
- View/download PDF
8. CamNuvem: A Robbery Dataset for Video Anomaly Detection
- Author
- Davi D. de Paula, Denis H. P. Salvadeo, and Darlan M. N. de Araujo
- Subjects
- video anomaly detection; dataset; human behaviour analysis; weakly supervised; activity recognition; video surveillance; Chemical technology; TP1-1185
- Abstract
(1) Background: The research area of video surveillance anomaly detection aims to automatically detect the moment when a video surveillance camera captures something that does not fit the normal pattern. This is a difficult task, but it is important to automate, improve, and lower the cost of the detection of crimes and other incidents. The UCF-Crime dataset is currently the most realistic crime dataset, and it contains hundreds of videos distributed in several categories; it includes a robbery category, which contains videos of people stealing material goods using violence, but this category only includes a few videos. (2) Methods: This work focuses only on the robbery category, presenting a new weakly labelled dataset that contains 486 new real-world robbery surveillance videos acquired from public sources. (3) Results: We have modified and applied three state-of-the-art video surveillance anomaly detection methods to create a benchmark for future studies. We showed that in the best scenario, taking into account only the anomaly videos in our dataset, the best method achieved an AUC of 66.35%. When all anomaly and normal videos were taken into account, the best method achieved an AUC of 88.75%. (4) Conclusion: This result shows that there is a huge research opportunity to create new methods and approaches that can improve robbery detection in video surveillance.
- Published
- 2022
- Full Text
- View/download PDF
9. Classifying Human Body Postures by a Support Vector Machine with Two Simple Features
- Author
- Van Tao, Nguyen, Hoa, Nong Thi, Truong, Quach Xuan, Kacprzyk, Janusz, Series editor, Pal, Nikhil R., Advisory editor, Bello Perez, Rafael, Advisory editor, Corchado, Emilio, Advisory editor, Hagras, Hani, Advisory editor, Kóczy, László T., Advisory editor, Kreinovich, Vladik, Advisory editor, Lin, Chin-Teng, Advisory editor, Lu, Jie, Advisory editor, Melin, Patricia, Advisory editor, Nedjah, Nadia, Advisory editor, Nguyen, Ngoc Thanh, Advisory editor, Wang, Jun, Advisory editor, Akagi, Masato, editor, Nguyen, Thanh-Thuy, editor, Vu, Duc-Thai, editor, Phung, Trung-Nghia, editor, and Huynh, Van-Nam, editor
- Published
- 2017
- Full Text
- View/download PDF
10. Automatic Recognition of Facial Displays of Unfelt Emotions.
- Author
- Kulkarni, Kaustubh, Corneanu, Ciprian Adrian, Ofodile, Ikechukwu, Escalera, Sergio, Baro, Xavier, Hyniewska, Sylwia, Allik, Juri, and Anbarjafari, Gholamreza
- Abstract
Humans modify their facial expressions in order to communicate their internal states and sometimes to mislead observers regarding their true emotional states. Evidence in experimental psychology shows that discriminative facial responses are short and subtle. This suggests that such behavior would be easier to distinguish when captured in high resolution at an increased frame rate. We are proposing SASE-FE, the first dataset of facial expressions that are either congruent or incongruent with underlying emotion states. We show that overall the problem of recognizing whether facial movements are expressions of authentic emotions or not can be successfully addressed by learning spatio-temporal representations of the data. For this purpose, we propose a method that aggregates features along fiducial trajectories in a deeply learnt space. Performance of the proposed model shows that on average, it is easier to distinguish among genuine facial expressions of emotion than among unfelt facial expressions of emotion and that certain emotion pairs such as contempt and disgust are more difficult to distinguish than the rest. Furthermore, the proposed methodology improves state-of-the-art results on the CK+ and OULU-CASIA datasets for video emotion recognition, and achieves competitive results when classifying facial action units on the BP4D dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
11. On Social Involvement in Mingling Scenarios: Detecting Associates of F-Formations in Still Images.
- Author
- Zhang, Lu and Hung, Hayley
- Abstract
In this paper, we carry out an extensive study of social involvement in free standing conversing groups (the so-called F-formations) from static images. By introducing a novel feature representation, we show that the standard features which have been used to represent full membership in an F-formation cannot be applied to the detection of so-called associates of F-formations due to their sparser nature. We also enrich state-of-the-art F-formation modelling by learning a frustum of attention that accounts for the spatial context. That is, F-formation configurations vary with respect to the arrangement of furniture and the non-uniform crowdedness in the space during mingling scenarios. Moreover, the majority of prior works have considered the labelling of conversing groups as an objective task, requiring only a single annotator. However, we show that by embracing the subjectivity of social involvement, we not only generate a richer model of the social interactions in a scene but can use the detected associates to improve initial estimates of the full members of an F-formation. We carry out extensive experimental validation of our proposed approach by collecting a novel set of multi-annotator labels of involvement on two publicly available datasets: the Idiap Poster Data and the SALSA data set. Moreover, we show that parameters learned from the Idiap Poster Data can be transferred to the SALSA data, showing the power of our proposed representation in generalising over new unseen data from a different environment. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
12. Deep understanding of shopper behaviours and interactions using RGB-D vision.
- Author
- Paolanti, Marina, Pietrini, Rocco, Mancini, Adriano, Frontoni, Emanuele, and Zingaretti, Primo
- Abstract
In retail environments, understanding how shoppers move about in a store’s spaces and interact with products is very valuable. While the retail environment has several favourable characteristics that support computer vision, such as reasonable lighting, the large number and diversity of products sold, as well as the potential ambiguity of shoppers’ movements, mean that accurately measuring shopper behaviour is still challenging. Over the past years, machine-learning and feature-based tools for people counting, interaction analytics and re-identification were developed with the aim of learning shopper skills based on occlusion-free RGB-D cameras in a top-view configuration. However, after moving into the era of multimedia big data, machine-learning approaches evolved into deep learning approaches, which are a more powerful and efficient way of dealing with the complexities of human behaviour. In this paper, a novel VRAI deep learning application is introduced that uses three convolutional neural networks to count the number of people passing or stopping in the camera area, perform top-view re-identification and measure shopper–shelf interactions from a single RGB-D video flow with near real-time performance. The framework is evaluated on the following three new datasets that are publicly available: TVHeads for people counting, HaDa for shopper–shelf interactions and TVPR2 for people re-identification. The experimental results show that the proposed methods significantly outperform all competitive state-of-the-art methods (accuracy of 99.5% on people counting, 92.6% on interaction classification and 74.5% on re-id), yielding distinct and significant insights for implicit and extensive shopper behaviour analysis for marketing applications. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
13. A Framework of Matching Algorithm for Influencer Marketing.
- Author
- Iwashita, M.
- Abstract
Modern lifestyles have been altered significantly by recent developments in information and communications technology. In the marketing field, enterprises often use video advertising because of its effectiveness. The number of content producers is increasing, and YouTube is an effective medium for advertising. This advertising method is called 'influencer marketing.' Therefore, both the demand and the supply are increasing in the field of video advertising. In the present study, this trend was analysed from the viewpoints of an enterprise and a content producer. Then, a new business model was developed for increasing the demand and supply. The business model includes a matching function between enterprises and content producers for video advertising to achieve such increases. Second, a matching algorithm based on the calculation of the relativity between enterprises and content producers was proposed. Because the inputs of an enterprise and content producer include both numerical and textual data, a relativity-value calculation algorithm using these inputs was developed. Moreover, the feasibility of the proposed algorithm was evaluated. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
14. Fuzzy Logic Based Human Activity Recognition in Video Surveillance Applications
- Author
- Abdelhedi, Slim, Wali, Ali, Alimi, Adel M., Kacprzyk, Janusz, Series editor, Abraham, Ajith, editor, Wegrzyn-Wolska, Katarzyna, editor, Hassanien, Aboul Ella, editor, Snasel, Vaclav, editor, and Alimi, Adel M., editor
- Published
- 2016
- Full Text
- View/download PDF
15. Shape-Based Eye Blinking Detection and Analysis
- Author
- Boukhers, Zeyd, Jarzyński, Tomasz, Schmidt, Florian, Tiebe, Oliver, Grzegorzek, Marcin, Kacprzyk, Janusz, Series editor, Burduk, Robert, editor, Jackowski, Konrad, editor, Kurzyński, Marek, editor, Woźniak, Michał, editor, and Żołnierek, Andrzej, editor
- Published
- 2016
- Full Text
- View/download PDF
16. Using Social Signals to Predict Shoplifting: A Transparent Approach to a Sensitive Activity Analysis Problem
- Author
- Shane Reid, Sonya Coleman, Philip Vance, Dermot Kerr, and Siobhan O’Neill
- Subjects
- human behaviour analysis; social signal processing; video processing; bias detection; ethical AI; machine learning; Chemical technology; TP1-1185
- Abstract
Retail shoplifting is one of the most prevalent forms of theft and accounted for over one billion GBP in losses for UK retailers in 2018. An automated approach to detecting behaviours associated with shoplifting using surveillance footage could help reduce these losses. Until recently, most state-of-the-art vision-based approaches to this problem have relied heavily on the use of black box deep learning models. While these models have been shown to achieve very high accuracy, this lack of understanding of how decisions are made raises concerns about potential bias in the models. This limits the ability of retailers to implement these solutions, as several high-profile legal cases have recently ruled that evidence taken from these black box methods is inadmissible in court. There is an urgent need to develop models which can achieve high accuracy while providing the necessary transparency. One way to alleviate this problem is through the use of social signal processing to add a layer of understanding in the development of transparent models for this task. To this end, we present a social signal processing model for the problem of shoplifting prediction which has been trained and validated using a novel dataset of manually annotated shoplifting videos. The resulting model provides a high degree of understanding and achieves accuracy comparable with current state-of-the-art black box methods.
- Published
- 2021
- Full Text
- View/download PDF
17. Human behavior analysis in the production and consumption of scientific knowledge across regions: A case study on publications in Scopus
- Author
- Qasim, Muhammad Awais, Ul Hassan, Saeed, Aljohani, Naif Radi, and Lytras, Miltiadis D.
- Published
- 2017
- Full Text
- View/download PDF
18. Regional adaptive affinitive patterns (RADAP) with logical operators for facial expression recognition.
- Author
- Mandal, Murari, Verma, Monu, Mathur, Sonakshi, Vipparthi, Santosh Kumar, Murala, Subrahmanyam, and Kranthi Kumar, Deveerasetty
- Abstract
Automated facial expression recognition plays a significant role in the study of human behaviour analysis. In this study, the authors propose a robust feature descriptor named regional adaptive affinitive patterns (RADAP) for facial expression recognition. RADAP computes positional adaptive thresholds in the local neighbourhood and encodes multi-distance magnitude features which are robust to intra-class variations and irregular illumination variation in an image. Furthermore, they established cross-distance co-occurrence relations in RADAP by using logical operators. They proposed XRADAP, ARADAP, and DRADAP using xor, adder and decoder, respectively. XRADAP engrains the quality of robustness to intra-class variations in RADAP features using pairwise co-occurrence. Similarly, ARADAP and DRADAP extract more stable and illumination-invariant features and capture the minute expression features which are usually missed by regular descriptors. The performance of the proposed methods is evaluated by conducting experiments on nine benchmark datasets: Cohn-Kanade+ (CK+), Japanese female facial expression (JAFFE), Multimedia Understanding Group (MUG), MMI, OULU-CASIA, Indian spontaneous expression database, DISFA, AFEW and a Combined (CK+, JAFFE, MUG, MMI & GEMEP-FERA) database, in both person-dependent and person-independent setups. The experimental results demonstrate the effectiveness of the proposed method over state-of-the-art approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
19. Constrained self-organizing feature map to preserve feature extraction topology.
- Author
- Azorin-Lopez, Jorge, Saval-Calvo, Marcelo, Fuster-Guillo, Andres, Garcia-Rodriguez, Jose, and Mora-Mora, Higinio
- Subjects
- SELF-organizing maps; VECTOR spaces; VIDEO surveillance; ARTIFICIAL neural networks; FEATURE extraction
- Abstract
In many classification problems, it is necessary to consider the specific location in an n-dimensional space from which features have been calculated. For example, considering the location of features extracted from specific areas of a two-dimensional space, such as an image, could improve the understanding of a scene for a video surveillance system. In the same way, the same features extracted from different locations could mean different actions for a 3D HCI system. In this paper, we present a self-organizing feature map able to preserve the topology of the locations in an n-dimensional space from which the feature vectors have been extracted. The main contribution is implicitly preserving the topology of the original space, because considering the locations of the extracted features and their topology could ease the solution to certain problems. Specifically, the paper proposes the n-dimensional constrained self-organizing map preserving the input topology (nD-SOM-PINT). Features in adjacent areas of the n-dimensional space used to extract the feature vectors are explicitly in adjacent areas of the nD-SOM-PINT, constraining the neural network structure and learning. As a case study, the neural network has been instantiated to represent and classify features as trajectories extracted from a sequence of images into a high level of semantic understanding. Experiments have been thoroughly carried out using the CAVIAR datasets (Corridor, Frontal and Inria), taking into account the global behaviour of an individual, in order to validate the ability to preserve the topology of the two-dimensional space and to obtain high-performance classification of trajectories, in contrast to not considering the location of features. Moreover, a brief example has been included to validate the nD-SOM-PINT proposal in a domain other than individual trajectories. Results confirm the high accuracy of the nD-SOM-PINT, outperforming previous methods aimed at classifying the same datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
20. A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context
- Author
- Alexandros Andre Chaaraoui, José Ramón Padilla-López, Francisco Javier Ferrández-Pastor, Mario Nieto-Hidalgo, and Francisco Flórez-Revuelta
- Subjects
- intelligent monitoring; vision system; ambient-assisted living; human behaviour analysis; human action recognition; multi-view recognition; telecare monitoring; privacy preservation; privacy by context; Chemical technology; TP1-1185
- Abstract
Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people’s behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services.
- Published
- 2014
- Full Text
- View/download PDF
21. Computer-Aided Experimentation for Human Behaviour Analysis
- Author
- Grübel, Jascha, Sumner, Robert W., Hölscher, Christoph, and Giannopoulos, Ioannis
- Subjects
- Human Behaviour Analysis; Reproducibility; Computer-Aided Experimentation; Computer-Aided Experiments; FOS: Psychology; Digital Twin; Generalities, science; Data processing, computer science; ddc:690; ddc:150; ddc:000; Psychology; ddc:004; Buildings
- Abstract
The implementation of behavioural experiments in various disciplines has relied on computers for the last five decades. Interestingly, the use of computers has both facilitated and hindered different aspects of the experimental process. Over time, frameworks have been developed that make it easier to design experiments and conduct them. At the same time, the multiplicity of frameworks and the lack of documentation have made the replication of experiments a difficult endeavour. The reproducibility crisis is not unique to computer-aided experiments but is severely aggravated by them. Most experimental research focuses on translating a research question into an actionable hypothesis that can be falsified through experiments. However, the process of conducting the experiment (experimentation) is often left to the researchers' whims. While method sections in papers are meant to ameliorate this by giving an overview of how the research was conducted, they often omit several steps, supposedly for clarity. This approach follows in the footsteps of Karl Popper's advice to "omit with advantage", but the reproducibility crisis has shown that it is not easy to tell a priori what actually can be omitted with advantage. In this dissertation, I address this issue by formalising Computer-Aided Experimentation (CAE). The focus of previous research has been on reproducing experimental outcomes instead of creating a theoretical foundation for reproducible experimentation. Here, I rely on concepts from industrial research such as "Digital Twins", cloud research such as the "_ as Code" revolution, and insights from behavioural research on reproducibility to define Computer-Aided Experimentation for Human Behaviour Analysis. This thesis proposes a three-fold approach of theories, systems and applications to define CAE. 
First, I outline my theoretical framework behind CAE, which relies on three concepts: the "Experiments as Digital Twins" perspective, the "Experiments as Code" paradigm, and the "Design, Experiment, Analyse, and Reproduce" (DEAR) principle. Subsequently, I present five systems that employ the theoretical concepts of CAE. Each system addresses human behaviour analysis across a wide range of disciplines. Lastly, I report how these five systems are put to the test in domain science applications that have advanced their respective research fields. I conclude this dissertation with a discussion of the importance of these new theorisations of experimentation and point towards future tasks in Computer-Aided Experimentation for Human Behaviour Analysis.
- Published
- 2022
- Full Text
- View/download PDF
22. ARCHITECTURAL DESIGN VALIDATION BASED ON HUMAN BEHAVIOR ANALYSIS IN A VIRTUAL REALITY ENVIRONMENT.
- Author
-
HADI, SARMAD M. and HUSSEIN, ALI A.
- Subjects
ARCHITECTURAL design ,VIRTUAL reality ,HUMAN behavior ,DATA visualization ,INFORMATION design - Abstract
The authors introduce the idea of architectural design validation based on human behavior analysis in a virtual reality environment. At present, most architectural designs are not validated for clarity of navigation from the customer's perspective, and where verification is obtained, testing is conducted only after construction of the facility is finished, when design modification is far more expensive and time-consuming. The authors instead apply design validation in the early stages of a project and have achieved good results with this approach. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
23. On Social Involvement in Mingling Scenarios: Detecting Associates of F-formations in Still Images
- Author
-
Zhang, L. (author) and Hung, H.S. (author)
- Abstract
In this paper, we carry out an extensive study of social involvement in free-standing conversing groups (the so-called F-formations) from static images. By introducing a novel feature representation, we show that the standard features which have been used to represent full membership in an F-formation cannot be applied to the detection of so-called associates of F-formations due to their sparser nature. We also enrich state-of-the-art F-formation modelling by learning a frustum of attention that accounts for the spatial context. That is, F-formation configurations vary with respect to the arrangement of furniture and the non-uniform crowdedness in the space during mingling scenarios. Moreover, the majority of prior works have considered the labelling of conversing groups as an objective task, requiring only a single annotator. However, we show that by embracing the subjectivity of social involvement, we not only generate a richer model of the social interactions in a scene but can use the detected associates to improve initial estimates of the full members of an F-formation. We carry out extensive experimental validation of our proposed approach by collecting a novel set of multi-annotator labels of involvement on two publicly available datasets: the Idiap Poster Data and the SALSA dataset. Moreover, we show that parameters learned from the Idiap Poster Data can be transferred to the SALSA data, showing the power of our proposed representation in generalising over new unseen data from a different environment., Pattern Recognition and Bioinformatics
- Published
- 2020
- Full Text
- View/download PDF
24. A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context.
- Author
-
Chaaraoui, Alexandros Andre, Padilla-López, José Ramón, Ferrández-Pastor, Francisco Javier, Nieto-Hidalgo, Mario, and Flórez-Revuelta, Francisco
- Subjects
- *
LIFE expectancy , *CAMCORDERS , *VIDEO surveillance , *AUTOMATION , *TECHNOLOGY , *AMBIENT intelligence - Abstract
Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people's behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
25. On Social Involvement in Mingling Scenarios: Detecting Associates of F-formations in Still Images
- Author
-
Lu Zhang and Hayley Hung
- Subjects
0209 industrial biotechnology ,Spatial contextual awareness ,Information retrieval ,Computer science ,Feature extraction ,F-formations detection ,02 engineering and technology ,Semantics ,Visualization ,Human-Computer Interaction ,Data set ,020901 industrial engineering & automation ,social group detection ,0202 electrical engineering, electronic engineering, information engineering ,Feature (machine learning) ,Task analysis ,020201 artificial intelligence & image processing ,Set (psychology) ,human behaviour analysis ,Software - Abstract
In this paper, we carry out an extensive study of social involvement in free-standing conversing groups (the so-called F-formations) from static images. By introducing a novel feature representation, we show that the standard features which have been used to represent full membership in an F-formation cannot be applied to the detection of so-called associates of F-formations due to their sparser nature. We also enrich state-of-the-art F-formation modelling by learning a frustum of attention that accounts for the spatial context. That is, F-formation configurations vary with respect to the arrangement of furniture and the non-uniform crowdedness in the space during mingling scenarios. Moreover, the majority of prior works have considered the labelling of conversing groups as an objective task, requiring only a single annotator. However, we show that by embracing the subjectivity of social involvement, we not only generate a richer model of the social interactions in a scene but can use the detected associates to improve initial estimates of the full members of an F-formation. We carry out extensive experimental validation of our proposed approach by collecting a novel set of multi-annotator labels of involvement on two publicly available datasets: the Idiap Poster Data and the SALSA dataset. Moreover, we show that parameters learned from the Idiap Poster Data can be transferred to the SALSA data, showing the power of our proposed representation in generalising over new unseen data from a different environment.
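The frustum-of-attention cue described above can be made concrete with a little ground-plane geometry: each person attends to a wedge of space in front of them, and mutual containment in each other's frustum is a crude indicator of F-formation membership. A minimal sketch in NumPy; the aperture and depth values below are illustrative assumptions, not the parameters the paper learns.

```python
import numpy as np

def in_frustum(pos_a, theta_a, pos_b, fov=np.pi / 2, max_dist=2.0):
    """Return True if person B falls inside person A's frustum of attention.

    pos_a, pos_b : 2-D ground-plane positions (metres).
    theta_a      : A's body/head orientation in radians.
    fov, max_dist: aperture and depth of the frustum (hypothetical values).
    """
    d = np.asarray(pos_b, float) - np.asarray(pos_a, float)
    dist = np.linalg.norm(d)
    if dist == 0 or dist > max_dist:
        return False
    bearing = np.arctan2(d[1], d[0])
    # smallest signed angle between A's gaze direction and the bearing to B
    diff = np.arctan2(np.sin(bearing - theta_a), np.cos(bearing - theta_a))
    return bool(abs(diff) <= fov / 2)

def mutually_facing(pos_a, theta_a, pos_b, theta_b, **kw):
    # A crude F-formation cue: each person lies in the other's frustum.
    return in_frustum(pos_a, theta_a, pos_b, **kw) and \
           in_frustum(pos_b, theta_b, pos_a, **kw)
```

For example, two people one metre apart and facing each other are flagged as mutually engaged, while a person looking away is not.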
- Published
- 2020
26. Discriminative functional analysis of human movements.
- Author
-
Samadani, Ali-Akbar, Ghodsi, Ali, and Kulić, Dana
- Subjects
- *
HUMAN activity recognition , *FUNCTIONAL analysis , *DIMENSION reduction (Statistics) , *EMBEDDED computer systems , *SEQUENCES (Motion pictures) , *PRINCIPAL components analysis , *SPLINE theory , *DISCRIMINANT analysis - Abstract
Abstract: This paper investigates the use of statistical dimensionality reduction (DR) techniques for discriminative low-dimensional embedding to enable affective movement recognition. Human movements are defined by a collection of sequential observations (time-series features) representing body joint angle or joint Cartesian trajectories. In this work, these sequential observations are modelled as temporal functions using B-spline basis function expansion, and dimensionality reduction techniques are adapted to enable application to the functional observations. The DR techniques adapted here are: Fisher discriminant analysis (FDA), supervised principal component analysis (PCA), and Isomap. These functional DR techniques, along with functional PCA, are applied to affective human movement datasets and their performance is evaluated using leave-one-out cross-validation with a one-nearest-neighbour classifier in the corresponding low-dimensional subspaces. The results show that functional supervised PCA outperforms the other DR techniques examined in terms of classification accuracy and time resource requirements.
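The evaluation pipeline described above (a low-dimensional embedding followed by leave-one-out cross-validation with a one-nearest-neighbour classifier) can be sketched on toy data. This sketch uses plain, unsupervised PCA as a stand-in for the paper's supervised PCA and FDA variants, and synthetic sine-wave "movements" in place of joint-angle recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for joint-angle time series: two movement classes, 20 samples
# of 50 frames each; class 1 is phase-shifted relative to class 0.
t = np.linspace(0, 1, 50)
X = np.vstack(
    [np.sin(2 * np.pi * (t + 0.05 * rng.standard_normal())) for _ in range(20)]
    + [np.sin(2 * np.pi * (t + 0.25 + 0.05 * rng.standard_normal())) for _ in range(20)]
)
y = np.array([0] * 20 + [1] * 20)

# Unsupervised PCA embedding (the paper's supervised PCA/FDA variants also use labels).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T  # 2-D "functional" embedding

# Leave-one-out cross-validation with a one-nearest-neighbour classifier.
correct = 0
for i in range(len(Z)):
    d = np.linalg.norm(Z - Z[i], axis=1)
    d[i] = np.inf  # exclude the held-out sample itself
    correct += y[d.argmin()] == y[i]
accuracy = correct / len(Z)
```

On this well-separated toy data the LOOCV accuracy is high; the interesting comparisons in the paper come from running the same protocol with different (supervised) embeddings on real movement data.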
- Published
- 2013
- Full Text
- View/download PDF
27. Deep learning for human action understanding
- Author
-
Gammulle, Pranali Harshala
- Abstract
This thesis addresses the problem of understanding human behaviour in videos in multiple problem settings including, recognition, segmentation, and prediction. Considering the complex nature of human behaviour, we propose to capture both short-term and long-term context in the given videos and propose novel multitask learning-based approaches to solve the action prediction task, as well as an adversarially-trained approach to action recognition. We demonstrate the efficacy of these techniques by applying them to multiple real-world human behaviour understanding settings including, security surveillance, sports action recognition, group activity recognition and recognition of cooking activities.
- Published
- 2019
28. Social signal processing: Survey of an emerging domain
- Author
-
Vinciarelli, Alessandro, Pantic, Maja, and Bourlard, Hervé
- Subjects
- *
COMPUTER vision , *SOCIAL intelligence , *SOCIAL interaction , *SPEECH processing systems , *SIGNAL processing , *INTERPERSONAL relations , *HUMAN behavior , *SURVEYS - Abstract
Abstract: The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence – the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement – in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and similar, design and development of automated systems for social signal processing (SSP) are rather difficult. This paper surveys the past efforts in solving these problems by a computer, it summarizes the relevant findings in social psychology, and it proposes a set of recommendations for enabling the development of the next generation of socially aware computing.
- Published
- 2009
- Full Text
- View/download PDF
29. Computer-Aided Experimentation for Human Behaviour Analysis
- Author
-
Grübel, Jascha; id_orcid 0000-0002-6428-4685
- Subjects
- Computer-Aided Experimentation, Computer-Aided Experiments, Human Behaviour Analysis, Digital Twin, Reproducibility, Generalities, science, Data processing, computer science, Psychology, Buildings
- Abstract
The implementation of behavioural experiments in various disciplines has relied on computers for the last five decades. Interestingly, the use of computers has both facilitated and hindered different aspects of the experimental process. Over time, frameworks have been developed that make it easier to design experiments and conduct them. At the same time, the multiplicity of frameworks and the lack of documentation have made the replication of experiments a difficult endeavour. The reproducibility crisis is not unique to computer-aided experiments but is severely aggravated by them. Most experimental research focuses on translating a research question into an actionable hypothesis that can be falsified through experiments. However, the process of conducting the experiment—experimentation—is often left to the researchers' whims. While method sections in papers are meant to ameliorate this by giving an overview of how the research was conducted, they often omit several steps, supposedly for clarity. This approach follows in the footsteps of Karl Popper's advice to "omit with advantage", but the reproducibility crisis has shown that it is not easy to tell a priori what can actually be omitted with advantage. In this dissertation, I address this issue by formalising Computer-Aided Experimentation (CAE). The focus of previous research has been on reproducing experimental outcomes instead of creating a theoretical foundation for reproducible experimentation. Here, I rely on concepts from industrial research such as "Digital Twins", cloud research such as the "_ as Code" revolution, and insights from behavioural research on reproducibility to define Computer-Aided Experimentation for Human Behaviour Analysis. This thesis proposes a three-fold approach of theories, systems and applications to define CAE.
First, I outline my theoretical framework behind CAE, which relies on three concepts: the "Experiments as Digital Twins" perspective, the "Experiments as Code" paradigm, and the "Design, Experiment, Analyse, and Reproduce" (DEAR) principle. Subsequently, I present five systems that employ the theoretical concepts of CAE. Each system addresses human behaviour analysis across a wide range of disciplines. Lastly, I report how these five systems are put to the test in domain science applications that have advanced their respective research fields. I conclude this dissertation with a discussion of the importance of these new theorisations of experimentation and point towards future tasks in Computer-Aided Experimentation for Human Behaviour Analysis.
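To illustrate the "Experiments as Code" paradigm described above, the sketch below treats an experiment's setup as a single serialisable record rather than prose in a methods section, so the DEAR "Reproduce" step becomes a file diff rather than archaeology. All field names and values here are hypothetical illustrations, not part of the dissertation's actual framework.

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Experiment:
    """Hypothetical 'Experiments as Code' record: the whole setup is a
    versionable artefact instead of an informal methods-section summary."""
    name: str
    design: dict      # conditions, counterbalancing, trial counts, ...
    apparatus: dict   # hardware/software versions pinned for reproduction
    seed: int         # fixed randomisation seed

exp = Experiment(
    name="wayfinding-vr-pilot",
    design={"conditions": ["map", "no-map"], "trials_per_condition": 10},
    apparatus={"engine": "Unity 2021.3", "hmd": "generic-6dof"},
    seed=42,
)

# Serialise the full definition; a second lab can restore it exactly.
serialized = json.dumps(asdict(exp), sort_keys=True)
restored = Experiment(**json.loads(serialized))
assert restored == exp
```

The point of the sketch is only that nothing about the setup is left implicit: everything a replication needs travels with the experiment definition itself.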
- Published
- 2022
30. Using Social Signals to Predict Shoplifting: A Transparent Approach to a Sensitive Activity Analysis Problem.
- Author
-
Reid, Shane, Coleman, Sonya, Vance, Philip, Kerr, Dermot, and O'Neill, Siobhan
- Subjects
- *
SHOPLIFTING , *DEEP learning , *SIGNAL processing , *SOCIAL processes , *BLACK art , *VIDEO processing - Abstract
Retail shoplifting is one of the most prevalent forms of theft and accounted for over one billion GBP in losses for UK retailers in 2018. An automated approach to detecting behaviours associated with shoplifting using surveillance footage could help reduce these losses. Until recently, most state-of-the-art vision-based approaches to this problem have relied heavily on the use of black box deep learning models. While these models have been shown to achieve very high accuracy, the lack of understanding of how decisions are made raises concerns about potential bias in the models. This limits the ability of retailers to implement these solutions, as several high-profile legal cases have recently ruled that evidence taken from these black box methods is inadmissible in court. There is an urgent need to develop models which can achieve high accuracy while providing the necessary transparency. One way to alleviate this problem is through the use of social signal processing to add a layer of understanding in the development of transparent models for this task. To this end, we present a social signal processing model for the problem of shoplifting prediction which has been trained and validated using a novel dataset of manually annotated shoplifting videos. The resulting model provides a high degree of understanding and achieves accuracy comparable with current state-of-the-art black box methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
31. Pedestrian intention prediction: A convolutional bottom-up multi-task approach.
- Author
-
Razali, Haziq, Mordan, Taylor, and Alahi, Alexandre
- Subjects
- *
DRIVER assistance systems , *PEDESTRIANS , *INTENTION , *SOURCE code - Abstract
• A bottom-up convolutional multi-task network for pedestrian intention prediction.
• A runtime that is nearly independent of the number of pedestrians.
• Can be easily extended to perform a wide variety of vision-related tasks.
The ability to predict pedestrian behaviour is crucial for road safety, traffic management systems, Advanced Driver Assistance Systems (ADAS), and more broadly autonomous vehicles. We present a vision-based system that simultaneously locates where pedestrians are in the scene, estimates their body pose and predicts their intention to cross the road. Given a single image, our proposed neural network is designed using a bottom-up approach and thus runs at nearly constant time without relying on a pedestrian detector. Our method jointly detects human body poses and predicts their intention in a multitask framework. Experimental results show that the proposed model outperforms the precision scores of the state-of-the-art for the task of intention prediction by approximately 20% while running in real-time (5 fps). The source code is publicly available so that it can be easily integrated into an ADAS or into any traffic light management systems. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
32. Constrained self-organizing feature map to preserve feature extraction topology
- Author
-
Jose Garcia-Rodriguez, Jorge Azorin-Lopez, Andres Fuster-Guillo, Higinio Mora-Mora, Marcelo Saval-Calvo, Universidad de Alicante. Departamento de Tecnología Informática y Computación, and Informática Industrial y Redes de Computadores
- Subjects
Self-organizing feature map ,0209 industrial biotechnology ,Sequence ,Artificial neural network ,Computer science ,Topology preservation ,Feature vector ,Feature extraction ,02 engineering and technology ,Topology ,computer.software_genre ,Human behaviour analysis ,020901 industrial engineering & automation ,Artificial Intelligence ,Feature (computer vision) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Data mining ,Focus (optics) ,Arquitectura y Tecnología de Computadores ,computer ,Software ,Topology (chemistry) - Abstract
In many classification problems, it is necessary to consider the specific location of an n-dimensional space from which features have been calculated. For example, considering the location of features extracted from specific areas of a two-dimensional space, such as an image, could improve the understanding of a scene for a video surveillance system. In the same way, the same features extracted from different locations could mean different actions for a 3D HCI system. In this paper, we present a self-organizing feature map able to preserve the topology of locations of an n-dimensional space in which the vector of features has been extracted. The main contribution is to implicitly preserve the topology of the original space, because considering the locations of the extracted features and their topology could ease the solution to certain problems. Specifically, the paper proposes the n-dimensional constrained self-organizing map preserving the input topology (nD-SOM-PINT). Features in adjacent areas of the n-dimensional space, used to extract the feature vectors, are explicitly in adjacent areas of the nD-SOM-PINT, constraining the neural network structure and learning. As a case study, the neural network has been instantiated to represent and classify features as trajectories extracted from a sequence of images into a high level of semantic understanding. Experiments have been thoroughly carried out using the CAVIAR datasets (Corridor, Frontal and Inria), taking into account the global behaviour of an individual, in order to validate the ability to preserve the topology of the two-dimensional space and obtain high-performance classification for trajectory classification, in contrast to not considering the location of features. Moreover, a brief example has been included to validate the nD-SOM-PINT proposal in a domain other than individual trajectories.
Results confirm the high accuracy of the nD-SOM-PINT, which outperforms previous methods aimed at classifying the same datasets. This study was supported in part by the University of Alicante, Valencian Government and Spanish government under grants GRE11-01, GV/2013/005 and DPI2013-40534-R.
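For reference, the sketch below implements a plain two-dimensional self-organizing map, the base model that nD-SOM-PINT extends; the paper's actual contribution, constraining node positions to the regions of the input space the features came from, is not reproduced here, and all hyperparameter values are illustrative.

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal plain 2-D self-organizing map (illustration only; nD-SOM-PINT
    additionally constrains nodes to the input-space regions of the features)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    gy, gx = np.mgrid[0:h, 0:w]            # grid coordinates for the neighbourhood
    for it in range(iters):
        x = data[rng.integers(len(data))]  # random training sample
        d = ((weights - x) ** 2).sum(axis=2)
        by, bx = np.unravel_index(d.argmin(), d.shape)  # best-matching unit
        frac = it / iters                  # linearly decaying schedules
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 0.5
        nb = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        weights += lr * nb[..., None] * (x - weights)
    return weights

# Train on points from the unit square; the map should unfold to cover it,
# with grid-adjacent nodes ending up close together in input space.
rng = np.random.default_rng(1)
points = rng.random((500, 2))
som = train_som(points, grid=(6, 6), iters=1500)
```

Topology preservation shows up as grid-adjacent nodes being much closer in input space than arbitrary node pairs, which is the property the paper's constrained variant enforces by construction.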
- Published
- 2016
- Full Text
- View/download PDF
33. Technologies and paradigms for natural human-robot collaboration
- Author
-
Shukla, Dadhichi Bhadresh
- Abstract
As humans, we are good at conveying intentions and working on shared goals. We coordinate actions and interpret contextual information from behavioural cues. We can perform all of the above while carrying out a collaborative task in a team. The research in human-human interaction (HHI) has set a precedent for human-robot interaction (HRI) to develop frameworks facilitating teamwork among humans and robots. To design frameworks where collaborative robots can work alongside humans, it is essential to endow the robots with communication abilities. With the advancements in the fields of computer vision and robotics, it is now possible to integrate natural ways of human communication into HRI frameworks. In the context of human-robot collaboration, this dissertation presents technologies developed to facilitate a natural, human-like collaboration with robots. The technologies allow naive users to work with the collaborative robot in close proximity. During a task, the user instructs the robot using static hand gestures, to which it responds with a gaze movement indicating the action it is going to perform. Once the user approves of the indicated action, the robot proceeds to execute it. The dissertation also presents an analysis of human behaviour towards a change in the behaviour of the robot. The work presented in the dissertation focuses on three central topics related to HRI: (1) a vocabulary of the static hand gestures performed by humans in a collaborative task, (2) teaching semantics of the gestures to the robot during the task, and (3) understanding human behaviour in response to the ability of the robot to execute the task. Concerning the first topic, a gesture detection and pose estimation method was developed to detect the static hand gestures independent of the upper/full-body pose.
The method was evaluated on two datasets, the Innsbruck Pointing at Objects (IPO) dataset and the Innsbruck Multi-view Hand Gesture (IMHG) dataset, consisting of the gestures., Dadhichi Shukla, cumulative dissertation comprising five articles, Universität Innsbruck, dissertation, 2018, OeBB, (VLID)2758513
- Published
- 2018
34. Automatic recognition of facial displays of unfelt emotions
- Author
-
Xavier Baró, Ikechukwu Ofodile, Gholamreza Anbarjafari, Jüri Allik, Ciprian A. Corneanu, Sergio Escalera, Sylwia Hyniewska, Kaustubh Kulkarni, Universitat Autònoma de Barcelona, University of Tartu, Institute of Physiology and Pathology of Hearing, Hasan Kalyoncu University, Universitat Oberta de Catalunya (UOC), HKÜ, Mühendislik Fakültesi, Elektrik Elektronik Mühendisliği Bölümü, and HKÜ, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü
- Subjects
FOS: Computer and information sciences ,Emotion recognition , Face recognition , Feature extraction , Face , Psychology , Observers , Trajectory ,expresión facial sin emoción ,Experimental psychology ,Computer science ,Speech recognition ,Contempt ,Computer Vision and Pattern Recognition (cs.CV) ,Feature extraction ,Computer Science - Computer Vision and Pattern Recognition ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,expressió facial sense emoció ,02 engineering and technology ,computació afectiva ,Facial recognition system ,análisis del comportamiento humano ,050105 experimental psychology ,Human face recognition (Computer science) ,Discriminative model ,0202 electrical engineering, electronic engineering, information engineering ,facial expression recognition ,0501 psychology and cognitive sciences ,Face recognition ,Affective computing ,Observers ,affective computing ,TrAffective computing ,anàlisi del comportament humà ,Facial expression ,Reconeixement facial (Informàtica) ,reconeixement d'expressió facial ,05 social sciences ,Disgust ,Human-Computer Interaction ,computación afectiva ,020201 artificial intelligence & image processing ,Reconocimiento facial (Informática) ,Emotion recognition ,human behaviour analysis ,unfelt facial expression of emotion ,reconocimiento de la expresión facial ,Software - Abstract
Humans modify their facial expressions in order to communicate their internal states and sometimes to mislead observers regarding their true emotional states. Evidence in experimental psychology shows that discriminative facial responses are short and subtle. This suggests that such behavior would be easier to distinguish when captured in high resolution at an increased frame rate. We propose SASE-FE, the first dataset of facial expressions that are either congruent or incongruent with underlying emotion states. We show that overall the problem of recognizing whether facial movements are expressions of authentic emotions or not can be successfully addressed by learning spatio-temporal representations of the data. For this purpose, we propose a method that aggregates features along fiducial trajectories in a deeply learnt space. Performance of the proposed model shows that on average, it is easier to distinguish among genuine facial expressions of emotion than among unfelt facial expressions of emotion, and that certain emotion pairs, such as contempt and disgust, are more difficult to distinguish than the rest. Furthermore, the proposed methodology improves state-of-the-art results on the CK+ and OULU-CASIA datasets for video emotion recognition, and achieves competitive results when classifying facial action units on the BP4D dataset.
- Published
- 2018
35. Constrained self-organizing feature map to preserve feature extraction topology
- Author
-
Universidad de Alicante. Departamento de Tecnología Informática y Computación, Azorin-Lopez, Jorge, Saval-Calvo, Marcelo, Fuster-Guilló, Andrés, Garcia-Rodriguez, Jose, and Mora, Higinio
- Abstract
In many classification problems, it is necessary to consider the specific location of an n-dimensional space from which features have been calculated. For example, considering the location of features extracted from specific areas of a two-dimensional space, such as an image, could improve the understanding of a scene for a video surveillance system. In the same way, the same features extracted from different locations could mean different actions for a 3D HCI system. In this paper, we present a self-organizing feature map able to preserve the topology of locations of an n-dimensional space in which the vector of features has been extracted. The main contribution is to implicitly preserve the topology of the original space, because considering the locations of the extracted features and their topology could ease the solution to certain problems. Specifically, the paper proposes the n-dimensional constrained self-organizing map preserving the input topology (nD-SOM-PINT). Features in adjacent areas of the n-dimensional space, used to extract the feature vectors, are explicitly in adjacent areas of the nD-SOM-PINT, constraining the neural network structure and learning. As a case study, the neural network has been instantiated to represent and classify features as trajectories extracted from a sequence of images into a high level of semantic understanding. Experiments have been thoroughly carried out using the CAVIAR datasets (Corridor, Frontal and Inria), taking into account the global behaviour of an individual, in order to validate the ability to preserve the topology of the two-dimensional space and obtain high-performance classification for trajectory classification, in contrast to not considering the location of features. Moreover, a brief example has been included to validate the nD-SOM-PINT proposal in a domain other than individual trajectories. Results confirm the high accuracy of the nD-SOM-PINT, which outperforms previous methods aimed at classifying the same datasets.
- Published
- 2016
36. Social signal processing: Survey of an emerging domain
- Author
-
Alessandro Vinciarelli, Maja Pantic, and Hervé Bourlard
- Subjects
HMI-HF: Human Factors ,EWI-17123 ,Computer Vision ,HMI-MI: MULTIMODAL INTERACTIONS ,020207 software engineering ,02 engineering and technology ,EC Grant Agreement nr.: FP7/211486 ,EC Grant Agreement nr.: FP7/231287 ,METIS-264460 ,Signal Processing ,IR-69475 ,0202 electrical engineering, electronic engineering, information engineering ,Social Interactions ,020201 artificial intelligence & image processing ,Social signals ,Computer Vision and Pattern Recognition ,human behaviour analysis ,speech processing - Abstract
The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence – the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement – in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and similar, design and development of automated systems for social signal processing (SSP) are rather difficult. This paper surveys the past efforts in solving these problems by a computer, it summarizes the relevant findings in social psychology, and it proposes a set of recommendations for enabling the development of the next generation of socially aware computing.
- Published
- 2009
- Full Text
- View/download PDF
37. A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context
- Author
-
José Ramón Padilla-López, Mario Nieto-Hidalgo, Francisco Flórez-Revuelta, Francisco-Javier Ferrandez-Pastor, Alexandros Andre Chaaraoui, Universidad de Alicante. Departamento de Tecnología Informática y Computación, Informática Industrial y Redes de Computadores, and Domótica y Ambientes Inteligentes
- Subjects
Engineering ,telecare monitoring ,Multi-view recognition ,Databases, Factual ,Video Recording ,Poison control ,lcsh:Chemical technology ,computer.software_genre ,Biochemistry ,Analytical Chemistry ,Privacy preservation ,Vision system ,lcsh:TP1-1185 ,Instrumentation ,Home for elderly ,computer ,intelligent monitoring ,vision system ,ambient-assisted living ,human behaviour analysis ,human action recognition ,multi-view recognition ,privacy preservation ,privacy by context ,Telecare ,Ambient-assisted living ,Home Care Services ,Telemedicine ,Atomic and Molecular Physics, and Optics ,Privacy ,The Right to Privacy ,Independent Living ,Arquitectura y Tecnología de Computadores ,Context (language use) ,Computer security ,Article ,Human behaviour analysis ,Humans ,Electrical and Electronic Engineering ,Human action recognition ,Telecare monitoring ,Behavior ,business.industry ,Privacy by context ,Intelligent monitoring ,Personal Autonomy ,business ,Independent living ,Dependency (project management) - Abstract
Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents, to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people’s behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services. This work has been partially supported by the Spanish Ministry of Science and Innovation under project ‘Sistema de visión para la monitorización de la actividad de la vida diaria en el hogar’ (TIN2010-20510-C04-02). Alexandros Andre Chaaraoui and José Ramón Padilla-López acknowledge financial support by the Conselleria d’Educació, Formació i Ocupació of the Generalitat Valenciana (fellowships ACIF/2011/160 and ACIF/2012/064, respectively).
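The multi-view recognition described above uses a weighted feature fusion scheme to learn from several cameras; the sketch below shows the simpler score-level analogue, where per-camera classifier scores are combined with per-view reliability weights. All numbers are made up for illustration, and this is not the paper's learned feature-fusion method.

```python
import numpy as np

def fuse_views(scores, weights):
    """Late fusion of per-view classification scores.

    scores  : (n_views, n_classes) array of per-camera class scores.
    weights : per-view reliability weights (hypothetical; the paper instead
              learns a weighted *feature* fusion across views).
    """
    w = np.asarray(weights, float)
    w = w / w.sum()                      # normalise so fused scores stay a distribution
    return w @ np.asarray(scores, float)

# Three cameras voting over four actions; the occluded third view (low weight)
# is overruled by the two reliable views.
scores = np.array([[0.70, 0.10, 0.10, 0.10],
                   [0.60, 0.20, 0.10, 0.10],
                   [0.10, 0.80, 0.05, 0.05]])
fused = fuse_views(scores, weights=[1.0, 1.0, 0.2])
predicted = int(fused.argmax())
```

Down-weighting an unreliable view is the intuition behind any weighted fusion scheme: the fused decision follows the views the system trusts most.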
- Published
- 2014
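The abstract above mentions a weighted feature fusion scheme for learning from multiple camera views, but does not detail it. A minimal sketch of one plausible such fusion, assuming each view yields a fixed-length descriptor and carries a per-view reliability weight (the function name, weights, and normalisation are illustrative, not from the paper):

```python
import numpy as np

def fuse_views(features, weights):
    """Fuse per-view feature vectors with a normalised weighted sum.

    features: list of 1-D arrays, one descriptor per camera view.
    weights:  per-view reliability weights (need not be normalised).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise so weights sum to 1
    stacked = np.stack(features)         # shape: (n_views, dim)
    return (w[:, None] * stacked).sum(axis=0)

# Example: three views, the second camera deemed most reliable
views = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
fused = fuse_views(views, weights=[1, 2, 1])
# 0.25*[1,0] + 0.5*[0,1] + 0.25*[1,1] = [0.5, 0.75]
```

The fused descriptor can then feed a single-view action classifier unchanged, which is one simple way a multi-view setup can reuse single-view recognition machinery.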
38. Vision-based Recognition of Human Behaviour for Intelligent Environments
- Author
-
Chaaraoui, Alexandros Andre, Universidad de Alicante. Departamento de Tecnología Informática y Computación, and Flórez Revuelta, Francisco
- Subjects
Computer vision ,Information fusion ,Human action recognition ,Evolutionary algorithms ,Arquitectura y Tecnología de Computadores ,Human behaviour analysis - Abstract
A critical requirement for achieving ubiquity of artificial intelligence is to provide intelligent environments with the ability to recognize and understand human behaviour. If this is achieved, proactive interaction can occur and, more interestingly, a great variety of services can be developed. In this thesis we aim to support the development of ambient-assisted living services with advances in human behaviour analysis. Specifically, visual data analysis is considered in order to detect and understand human activity at home. As part of an intelligent monitoring system, single- and multi-view recognition of human actions is performed, along with several optimizations and extensions. The present work may pave the way for more advanced human behaviour analysis techniques, such as the recognition of activities of daily living, personal routines and abnormal behaviour detection.
- Published
- 2014
39. A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context
- Author
-
Universidad de Alicante. Departamento de Tecnología Informática y Computación, Chaaraoui, Alexandros Andre, Padilla López, José Ramón, Ferrandez-Pastor, Francisco-Javier, Nieto-Hidalgo, Mario, and Flórez-Revuelta, Francisco
- Abstract
Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view setup of cameras, people’s behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the right to privacy of the inhabitants when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services.
- Published
- 2014
40. Vision-based Recognition of Human Behaviour for Intelligent Environments
- Author
-
Flórez Revuelta, Francisco, Universidad de Alicante. Departamento de Tecnología Informática y Computación, and Chaaraoui, Alexandros Andre
- Abstract
A critical requirement for achieving ubiquity of artificial intelligence is to provide intelligent environments with the ability to recognize and understand human behaviour. If this is achieved, proactive interaction can occur and, more interestingly, a great variety of services can be developed. In this thesis we aim to support the development of ambient-assisted living services with advances in human behaviour analysis. Specifically, visual data analysis is considered in order to detect and understand human activity at home. As part of an intelligent monitoring system, single- and multi-view recognition of human actions is performed, along with several optimizations and extensions. The present work may pave the way for more advanced human behaviour analysis techniques, such as the recognition of activities of daily living, personal routines and abnormal behaviour detection.
- Published
- 2014
41. Human behaviour recognition based on trajectory analysis using neural networks
- Author
-
Andres Fuster-Guillo, Marcelo Saval-Calvo, Jose Garcia-Rodriguez, Jorge Azorin-Lopez, Universidad de Alicante. Departamento de Tecnología Informática y Computación, and Informática Industrial y Redes de Computadores
- Subjects
Artificial neural network ,business.industry ,Computer science ,020206 networking & telecommunications ,02 engineering and technology ,Machine learning ,computer.software_genre ,Human behaviour analysis ,Activity description vector ,Simple (abstract algebra) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Trajectory analysis ,Point (geometry) ,Artificial intelligence ,business ,Representation (mathematics) ,Cluster analysis ,Arquitectura y Tecnología de Computadores ,computer - Abstract
Automated human behaviour analysis has been, and still remains, a challenging problem. It has been addressed from different points of view: from primitive actions to human interaction recognition. This paper focuses on trajectory analysis, which allows a simple high-level understanding of complex human behaviour. A novel representation method for trajectory data, called the Activity Description Vector (ADV), is proposed, based on the number of occurrences of a person at a specific point of the scenario and the local movements performed there. The ADV is calculated for each cell into which the scenario is spatially sampled, obtaining a cue for different clustering methods. The ADV representation has been tested as the input of several classic classifiers and compared to other approaches using CAVIAR dataset sequences, obtaining high accuracy in the recognition of the behaviour of people in a shopping centre. This work was supported in part by the University of Alicante under Grant GRE11-01.
- Published
- 2013
- Full Text
- View/download PDF
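The Activity Description Vector described in the abstract above counts, per grid cell of the sampled scene, how often a person occupies the cell and which local movements occur there. A minimal sketch under assumed simplifications (five movement channels per cell and a uniform grid; the exact channels and sampling in the paper may differ):

```python
import numpy as np

def activity_description_vector(trajectory, grid_shape, scene_size):
    """Build a toy ADV: per grid cell, count the local movement
    direction (stay/E/W/N/S) observed at each trajectory step.

    trajectory: sequence of (x, y) positions in scene coordinates.
    grid_shape: (rows, cols) spatial sampling of the scene.
    scene_size: (width, height) of the scene.
    """
    rows, cols = grid_shape
    w, h = scene_size
    # 5 movement channels per cell: stay, east, west, north, south
    adv = np.zeros((rows, cols, 5), dtype=int)

    def cell(p):
        c = min(int(p[0] / w * cols), cols - 1)
        r = min(int(p[1] / h * rows), rows - 1)
        return r, c

    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        r, c = cell((x0, y0))
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) < 1e-9 and abs(dy) < 1e-9:
            ch = 0                       # stayed in place
        elif abs(dx) >= abs(dy):
            ch = 1 if dx > 0 else 2      # east / west
        else:
            ch = 3 if dy > 0 else 4      # north / south
        adv[r, c, ch] += 1
    return adv.reshape(-1)               # flat vector, one cue per cell
```

The flattened vector can then serve as the fixed-length input to a clustering method or classic classifier, which matches the role the abstract assigns to the ADV.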
42. Social Signal Processing: The Research Agenda
- Author
-
Pantic, Maja, Cowie, Roderick, D'Errico, Francesca, Heylen, Dirk K.J., Mehu, Marc, Pelachaud, Catherine, Poggi, Isabella, Schroeder, Marc, Vinciarelli, Alessandro, Moeslund, Thomas B., Hilton, Adrain, Krüger, Volker, and Sigal, Leonid
- Subjects
METIS-284956 ,Social intelligence ,media_common.quotation_subject ,Face (sociological concept) ,HMI-MI: MULTIMODAL INTERACTIONS ,Empathy ,02 engineering and technology ,Social Signal Processing ,Formative assessment ,Trends in Cognitive Sciences ,Order (exchange) ,0202 electrical engineering, electronic engineering, information engineering ,media_common ,Cognitive science ,Politeness ,Field (Bourdieu) ,IR-79322 ,020207 software engineering ,EC Grant Agreement nr.: FP7/231287 ,human behaviour synthesis ,020201 artificial intelligence & image processing ,Human computer interaction ,EWI-21150 ,human behaviour analysis ,Psychology ,Social psychology - Abstract
The exploration of how we react to the world and interact with it and each other remains one of the greatest scientific challenges. The latest research trends in the cognitive sciences argue that our common view of intelligence is too narrow, ignoring a crucial range of abilities that matter immensely for how people do in life. This range of abilities is called social intelligence and includes the ability to express and recognise social signals produced during social interactions like agreement, politeness, empathy, friendliness, conflict, etc., coupled with the ability to manage them in order to get along well with others while winning their cooperation. Social Signal Processing (SSP) is the new research domain that aims at understanding and modelling social interactions (human-science goals), and at providing computers with similar abilities in human-computer interaction scenarios (technological goals). SSP is in its infancy, and the journey towards artificial social intelligence and socially-aware computing is still long. This research agenda is twofold: a discussion about how the field is understood by people who are currently active in it, and a discussion about the issues that researchers in this formative field face.
- Published
- 2011
- Full Text
- View/download PDF
43. Human Behaviour Recognition based on Trajectory Analysis using Neural Networks
- Author
-
Universidad de Alicante. Departamento de Tecnología Informática y Computación, Azorin-Lopez, Jorge, Saval-Calvo, Marcelo, Fuster-Guilló, Andrés, and Garcia-Rodriguez, Jose
- Abstract
Automated human behaviour analysis has been, and still remains, a challenging problem. It has been addressed from different points of view: from primitive actions to human interaction recognition. This paper focuses on trajectory analysis, which allows a simple high-level understanding of complex human behaviour. A novel representation method for trajectory data, called the Activity Description Vector (ADV), is proposed, based on the number of occurrences of a person at a specific point of the scenario and the local movements performed there. The ADV is calculated for each cell into which the scenario is spatially sampled, obtaining a cue for different clustering methods. The ADV representation has been tested as the input of several classic classifiers and compared to other approaches using CAVIAR dataset sequences, obtaining high accuracy in the recognition of the behaviour of people in a shopping centre.
- Published
- 2013
44. Pedestrian intention prediction: A convolutional bottom-up multi-task approach
- Author
-
Alexandre Alahi, Taylor Mordan, and Haziq Razali
- Subjects
Human Pose Estimation ,Source code ,Computer science ,Machine vision ,media_common.quotation_subject ,Pedestrian Intention Prediction ,Transportation ,Advanced driver assistance systems ,Pedestrian ,Machine learning ,computer.software_genre ,Autonomous Vehicles ,Advanced Traffic Management System ,Task (project management) ,Advanced Driver Assistance Systems ,Civil and Structural Engineering ,media_common ,Traffic Management Systems ,Artificial neural network ,business.industry ,Human Behaviour Analysis ,Computer Science Applications ,Automotive Engineering ,Management system ,Artificial intelligence ,business ,computer - Abstract
The ability to predict pedestrian behaviour is crucial for road safety, traffic management systems, Advanced Driver Assistance Systems (ADAS), and more broadly autonomous vehicles. We present a vision-based system that simultaneously locates where pedestrians are in the scene, estimates their body pose and predicts their intention to cross the road. Given a single image, our proposed neural network is designed using a bottom-up approach and thus runs in nearly constant time without relying on a pedestrian detector. Our method jointly detects human body poses and predicts their intention in a multitask framework. Experimental results show that the proposed model outperforms the precision scores of the state-of-the-art for the task of intention prediction by approximately 20% while running in real-time (5 fps). The source code is publicly available so that it can be easily integrated into an ADAS or any traffic light management system.
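The multitask framework in the abstract above shares one backbone between a pose head and an intention head. A toy sketch of that wiring (layer sizes, the class name, and the random weights are illustrative; the actual model is convolutional and operates on full images):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class BottomUpMultiTask:
    """Toy shared-backbone, two-head model: from one image feature
    vector, predict (a) body-pose keypoint scores and (b) crossing
    intention. Weights are random; this only shows the multitask wiring."""

    def __init__(self, in_dim=64, hidden=32, n_keypoints=17):
        self.w_shared = rng.standard_normal((in_dim, hidden)) * 0.1
        self.w_pose = rng.standard_normal((hidden, n_keypoints)) * 0.1
        self.w_intent = rng.standard_normal((hidden, 1)) * 0.1

    def forward(self, image_features):
        shared = relu(image_features @ self.w_shared)          # shared backbone
        pose = shared @ self.w_pose                            # keypoint scores
        intent = 1 / (1 + np.exp(-(shared @ self.w_intent)))   # P(cross)
        return pose, intent

model = BottomUpMultiTask()
pose, p_cross = model.forward(rng.standard_normal(64))
```

Because both heads read the same shared representation, one forward pass serves both tasks, which is what lets a bottom-up design avoid a separate per-pedestrian detector pass.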
45. 3D Gaze Tracking and Automatic Gaze Coding from RGB-D Cameras
- Author
-
Funes Mora, Kenneth Alberto and Odobez, Jean-Marc
- Subjects
cognition ,Gaze estimation ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,RGB-D ,human behaviour analysis - Abstract
Gaze is recognised as one of the most important cues for analysing the cognitive behaviours of a person, such as the attention displayed towards objects or people, their interactions, and functionality and causality patterns. In this short paper, we present our investigations towards the development of 3D gaze-sensing solutions based on consumer RGB-D sensors, including their use for inferring visual attention in natural dyadic interactions, and the resources we have made or will make available to the community.
46. A filtering mechanism for normal fish trajectories
- Author
-
Cigdem Beyan and Fisher, R. B.
- Subjects
rule-based trajectory filtering mechanism ,filtering mechanism ,knowledge based systems ,Trajectory ,environmental change effects ,natural habitat applications ,Aquaculture ,abnormal behaviors identification ,image motion analysis ,traffic surveillance ,nursing home surveillance ,Videos ,normal motion pattern extraction ,normal-abnormal fish behavior detection ,environmental factors ,Marine animals ,Educational institutions ,filtering theory ,Computer vision ,Filtering ,human behaviour analysis ,normal fish trajectories - Abstract
Understanding fish behavior by extracting normal motion patterns and then identifying abnormal behaviors is important for understanding the effects of environmental change. In the literature, there are many studies on normal/abnormal behavior detection in areas such as human behaviour analysis, traffic surveillance, and nursing home surveillance. However, the literature is very limited in terms of normal/abnormal fish behavior understanding, especially when natural habitat applications are considered. In this study, we present a rule-based trajectory filtering mechanism to extract normal fish trajectories, which can potentially increase the accuracy of abnormal fish behavior detection systems. It can also be used as a preliminary method, especially when the number of abnormal fish behaviors is very small (e.g. 40-50 times smaller) compared to the number of normal fish behaviors and/or when the number of trajectories is huge.
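The abstract above describes a rule-based filter that passes only trajectories matching normal motion patterns. A minimal sketch of the idea with two invented rules (the function name and both thresholds are hypothetical, not the paper's actual rules):

```python
import math

def is_normal_trajectory(traj, min_len=5, max_speed=4.0):
    """Hypothetical rule-based filter: a trajectory counts as 'normal'
    if it is long enough and no frame-to-frame step exceeds max_speed.
    Both thresholds are illustrative, not from the paper."""
    if len(traj) < min_len:
        return False
    for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
        if math.hypot(x1 - x0, y1 - y0) > max_speed:
            return False
    return True

smooth = [(i, 0.5 * i) for i in range(10)]       # slow, steady swim
erratic = [(0, 0), (10, 10), (0, 0), (10, 10), (0, 0)]
# is_normal_trajectory(smooth) -> True; is_normal_trajectory(erratic) -> False
```

Trajectories rejected by such rules would then be the candidates handed to an abnormal behavior detector, which is how a cheap pre-filter can help when abnormal examples are 40-50 times rarer than normal ones.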