16 results for "Timothy Callemein"
Search Results
2. Real-Time Embedded Person Detection and Tracking for Shopping Behaviour Analysis.
- Author
- Robin Schrijvers, Steven Puttemans, Timothy Callemein, and Toon Goedemé
- Published
- 2020
3. How Low Can You Go? Privacy-preserving People Detection with an Omni-directional Camera.
- Author
- Timothy Callemein, Kristof Van Beeck, and Toon Goedemé
- Published
- 2019
4. Anyone here? Smart Embedded Low-Resolution Omnidirectional Video Sensor to Measure Room Occupancy.
- Author
- Timothy Callemein, Kristof Van Beeck, and Toon Goedemé
- Published
- 2019
5. Automated Analysis of Eye-Tracker-Based Human-Human Interaction Studies.
- Author
- Timothy Callemein, Kristof Van Beeck, Geert Brône, and Toon Goedemé
- Published
- 2018
6. Building Robust Industrial Applicable Object Detection Models using Transfer Learning and Single Pass Deep Learning Architectures.
- Author
- Steven Puttemans, Timothy Callemein, and Toon Goedemé
- Published
- 2018
7. The autonomous hidden camera crew.
- Author
- Timothy Callemein, Wiebe Van Ranst, and Toon Goedemé
- Published
- 2017
8. Automatic Camera Selection and PTZ Canvas Steering for Autonomous Filming of Reality TV.
- Author
- Timothy Callemein, Wiebe Van Ranst, and Toon Goedemé
- Published
- 2017
9. Eye-tracking analyses of physician face gaze patterns in consultations
- Author
- N. T. Boeske, Marij A. Hillen, Johannes A. Romijn, H. G. van den Boorn, Ellen M. A. Smets, Chiara Jongerius, and Timothy Callemein
- Subjects
Adult, Data Analysis, Male, Eye Movements, Internal medicine physicians, Physicians, Fixation (Ocular), Human behaviour, Humans, Psychology, Eye-Tracking Technology, Eye Movement Measurements, Referral and Consultation, Gaze, Health Care Surveys, Eye tracking, Medicine, Female, Cognitive psychology
- Abstract
Face gaze is a fundamental non-verbal behaviour and can be assessed using eye-tracking glasses. Methodological guidelines are lacking on which measure to use to determine face gaze. To evaluate face gaze patterns we compared three measures: duration, frequency and dwell time. Furthermore, state-of-the-art face gaze analysis requires time and manual effort. We tested whether face gaze patterns in the first 30, 60 and 120 s predict face gaze patterns in the remaining interaction. We performed secondary analyses of mobile eye-tracking data of 16 internal medicine physicians in consultation with 100 of their patients. Duration and frequency of face gaze were unrelated. The lack of association between duration and frequency suggests that research may yield different results depending on which measure of face gaze is used. Dwell time correlated with both duration and frequency. Face gaze during the first seconds of the consultations predicted face gaze patterns of the remaining consultation time (R² = 0.26 to 0.73). Therefore, face gaze during the first minutes of the consultations can be used to predict face gaze patterns over the complete interaction. Researchers interested in studying face gaze may use these findings to make optimal methodological choices. (A minimal illustrative sketch of this prediction analysis follows this record.)
- Published
- 2021
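The headline result above, predicting whole-consultation face gaze from the opening window with R² between 0.26 and 0.73, can be sketched in a few lines. This is a hedged illustration only: the 10 Hz sampling rate, the synthetic gaze traces, and the helper face_gaze_fraction are all assumptions, not the authors' actual data or pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def face_gaze_fraction(samples, t_start, t_end):
    """Fraction of gaze samples in [t_start, t_end) that land on the face.
    samples: array of (timestamp_s, is_on_face) rows (hypothetical layout)."""
    mask = (samples[:, 0] >= t_start) & (samples[:, 0] < t_end)
    return samples[mask, 1].mean()

# Synthetic stand-in for 16 physicians' consultations (not real data).
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(16):
    t = np.arange(0, 600, 0.1)                     # 10 Hz, 10-minute consult
    base_rate = rng.uniform(0.2, 0.8)              # person-specific gaze level
    on_face = (rng.random(t.size) < base_rate).astype(float)
    samples = np.column_stack([t, on_face])
    X.append([face_gaze_fraction(samples, 0, 60)])    # predictor: first 60 s
    y.append(face_gaze_fraction(samples, 60, 600))    # target: the remainder

model = LinearRegression().fit(X, y)
print("R^2:", r2_score(y, model.predict(X)))
```

On real recordings each row would come from annotated eye-tracker output rather than a random generator; only the regression-and-R² step mirrors the reported analysis.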
10. Eye-tracking glasses in face-to-face interactions: Manual versus automated assessment of areas-of-interest
- Author
- K. Van Beeck, Johannes A. Romijn, Timothy Callemein, Ellen M. A. Smets, Chiara Jongerius, Toon Goedemé, and Marij A. Hillen
- Subjects
Eye Movements, Computer science, Experimental and Cognitive Psychology, Computer vision algorithms, Face-to-face, Person re-identification, Humans, Gaze behaviour, Eye-Tracking Technology, Pose estimation, Vision (Ocular), Reproducibility of Results, Gaze, Inter-rater reliability, Eye-tracking glasses, Manual annotation, Eye tracking, Areas-of-interest, Artificial intelligence, Algorithms
- Abstract
The assessment of gaze behaviour is essential for understanding the psychology of communication. Mobile eye-tracking glasses are useful to measure gaze behaviour during dynamic interactions. Eye-tracking data can be analysed by using manually annotated areas-of-interest. Computer vision algorithms may alternatively be used to reduce the amount of manual effort, as well as the subjectivity and complexity of these analyses. Using additional re-identification (Re-ID) algorithms, different participants in the interaction can be distinguished. The aim of this study was to compare the results of manual annotation of mobile eye-tracking data with the results of a computer vision algorithm. We selected the first minute of seven randomly selected eye-tracking videos of consultations between physicians and patients in a Dutch Internal Medicine out-patient clinic. Three human annotators and a computer vision algorithm annotated mobile eye-tracking data, after which interrater reliability was assessed between the areas-of-interest annotated by the annotators and the computer vision algorithm. Additionally, we explored interrater reliability when using lengthy videos and different area-of-interest shapes. In total, we analysed more than 65 minutes of eye-tracking videos manually and with the algorithm. Overall, the absolute normalized difference between the manual and the algorithm annotations of face-gaze was less than 2%. Our results show high interrater agreement between the human annotators and the algorithm, with Cohen's kappa ranging from 0.85 to 0.98. We conclude that computer vision algorithms produce results comparable to those of human annotators. Analyses by the algorithm are not subject to annotator fatigue or subjectivity and can therefore advance eye-tracking analyses. (A minimal sketch of this agreement computation follows this record.)
- Published
- 2021
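The interrater agreement reported above can be reproduced in miniature. A minimal sketch, assuming frame-level area-of-interest labels; the label sequences below are invented for illustration, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Per-frame area-of-interest labels for the same clip (hypothetical data).
human     = ["face", "face", "hands", "bg", "face", "face", "bg", "face"]
algorithm = ["face", "face", "hands", "bg", "face", "bg",   "bg", "face"]

# Cohen's kappa: agreement corrected for chance. The paper reports values
# between 0.85 and 0.98 on real annotator-vs-algorithm comparisons.
print(f"Cohen's kappa: {cohen_kappa_score(human, algorithm):.2f}")
```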
11. Show me where the action is! Automatic capturing and timeline generation for reality TV
- Author
- Tinne Tuytelaars, Hugo Van hamme, Timothy Callemein, Toon Goedemé, Luc Van Gool, Ali Diba, Luc Van Eycken, Tom Roussel, Wim Boes, and Floris De Feyter
- Subjects
Reality TV, Computer Networks and Communications, Computer science, Facial recognition, Action recognition, Autonomous PTZ steering, Human–computer interaction, Media Technology, Event timeline, Audio signal processing, Sound recognition, Timeline, Video processing, Hardware and Architecture, Software
- Abstract
Reality TV shows have gained popularity, motivating many production houses to bring new variants for us to watch. Compared to traditional TV shows, reality TV shows have spontaneous unscripted footage. Computer vision techniques could partially replace the manual labour needed to record and process this spontaneity. However, automated real-world video recording and editing is a challenging topic. In this paper, we propose a system that utilises state-of-the-art video and audio processing algorithms to, on the one hand, automatically steer cameras, replacing camera operators, and, on the other hand, detect all audiovisual action cues in the recorded video, to ease the job of the film editor. This publication hence has two main contributions. The first is automating the steering of multiple pan-tilt-zoom (PTZ) cameras to take aesthetically pleasing medium shots of all the people present. These shots need to comply with cinematographic rules and are based on poses acquired by a pose detector. Secondly, when a huge amount of audio-visual data has been collected, it becomes labour-intensive for a human editor to retrieve the relevant fragments. As a second contribution, we combine state-of-the-art audio and video processing techniques for sound activity detection, action recognition, face recognition, and pose detection to decrease the required manual labour during and after recording. These techniques, used during post-processing, produce meta-data allowing for footage filtering, decreasing the search space. We extended our system further by producing timelines uniting the generated meta-data, allowing the editor to have a quick overview. We evaluated our system on three in-the-wild reality TV recording sessions of 24 hours each (× 8 cameras), taken in real households. (A minimal framing sketch follows this record.)
- Published
- 2021
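One way to picture the pose-driven medium-shot framing described above is to derive a pan/tilt/zoom target from two body keypoints. A hedged sketch only: the keypoint names, the normalisation, and the headroom rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def medium_shot_target(keypoints, frame_w, frame_h, headroom=0.15):
    """Normalised (pan, tilt, zoom) so the subject fills a medium shot.
    keypoints: dict with 'nose' and 'hips' pixel coordinates (hypothetical)."""
    nose = np.asarray(keypoints["nose"], dtype=float)
    hips = np.asarray(keypoints["hips"], dtype=float)
    centre = (nose + hips) / 2.0
    # Medium shot: roughly head to hips, plus some headroom above the head.
    subject_h = (hips[1] - nose[1]) * (1.0 + 2 * headroom)
    pan = centre[0] / frame_w - 0.5       # -0.5..0.5, 0 = horizontally centred
    tilt = centre[1] / frame_h - 0.5
    zoom = frame_h / max(subject_h, 1.0)  # >1 means zoom in on the subject
    return pan, tilt, zoom

print(medium_shot_target({"nose": (900, 300), "hips": (920, 700)}, 1920, 1080))
```

A real controller would feed these targets through the PTZ camera's own protocol and smooth them over time; that plumbing is omitted here.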
12. Building Robust Industrial Applicable Object Detection Models Using Transfer Learning and Single Pass Deep Learning Architectures
- Author
- Toon Goedemé, Steven Puttemans, and Timothy Callemein (volume editors: F. Imai, A. Tremeau, and J. Braz)
- Subjects
Computer Vision and Pattern Recognition (cs.CV), Computer science, Darknet, Deep learning, Machine learning, Convolutional neural network, Object detection, Eye tracking, Segmentation, Artificial intelligence, Transfer learning, Industrial Specific Solutions
- Abstract
The rising trend of deep learning in computer vision and artificial intelligence can simply not be ignored. On the most diverse tasks, from recognition and detection to segmentation, deep learning is able to obtain state-of-the-art results, reaching top-notch performance. In this paper we explore how deep convolutional neural networks dedicated to the task of object detection can improve our industrial-oriented object detection pipelines, using state-of-the-art open source deep learning frameworks, like Darknet. By using a deep learning architecture that integrates region proposals, classification and probability estimation in a single run, we aim at obtaining real-time performance. We focus on reducing the needed amount of training data drastically by exploring transfer learning, while still maintaining a high average precision. Furthermore, we apply these algorithms to two industrially relevant applications, one being the detection of promotion boards in eye tracking data and the other detecting and recognizing packages of warehouse products for augmented advertisements. In: VISIGRAPP 2018 - Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, vol. 5, pp. 209-217, Madeira, Portugal, 27-29 January 2018. (A minimal transfer-learning sketch follows this record.)
- Published
- 2020
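The transfer-learning recipe the abstract describes, reusing pretrained weights and retraining only the task-specific layers on a small dataset, can be sketched as follows. Note the paper itself works with single-pass YOLO-style networks in Darknet; this sketch swaps in torchvision's detection API purely as a Python illustration of the same idea, and the two-class setup is an assumption.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detector pretrained on COCO (the "transfer" in transfer
# learning); the authors start from Darknet weights instead.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Freeze the backbone so a small industrial dataset only trains the head,
# drastically reducing the amount of annotated data needed.
for p in model.backbone.parameters():
    p.requires_grad = False

# Replace the classification head: background + 1 class, e.g. a promotion
# board (the class count here is an illustrative assumption).
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
```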
13. The autonomous hidden camera crew
- Author
- Wiebe Van Ranst, Timothy Callemein, and Toon Goedemé
- Subjects
Computer Vision and Pattern Recognition (cs.CV), Computer science, Cinematography, Human–computer interaction, Reality TV, Multiple Cameras, Switch matrix, Artificial intelligence, Virtual camera
- Abstract
Reality TV shows that follow people in their day-to-day lives are not a new concept. However, the traditional methods used in the industry require a lot of manual labour and need the presence of at least one physical cameraman. Because of this, the subjects tend to behave differently when they are aware of being recorded. This paper presents an approach to follow people in their day-to-day lives, for long periods of time (months to years), while being as unobtrusive as possible. To do this, we use unmanned cinematographically-aware cameras hidden in people's houses. Our contribution in this paper is twofold: First, we create a system to limit the amount of recorded data by intelligently controlling a video switch matrix, in combination with a multi-channel recorder. Second, we create a virtual cameraman by controlling a PTZ camera to automatically make cinematographically pleasing shots. Throughout this paper, we worked closely with a real camera crew. This enabled us to compare the results of our system to the work of trained professionals. (A minimal channel-selection sketch follows this record.)
- Published
- 2020
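The first contribution, limiting recorded data by routing only interesting feeds through the switch matrix, reduces to a per-camera relevance test. A minimal sketch under stated assumptions: the person_score callable stands in for whatever person detector drives the real matrix, whose interface the abstract does not describe.

```python
from typing import Callable, Sequence

def select_channels(feeds: Sequence[int],
                    person_score: Callable[[int], float],
                    thresh: float = 0.5) -> list[int]:
    """Indices of camera feeds worth routing to the multi-channel recorder."""
    return [i for i in feeds if person_score(i) >= thresh]

# Dummy scorer: pretend cameras 0 and 2 currently see a person.
scores = {0: 0.9, 1: 0.1, 2: 0.7}
print("route to recorder:", select_channels(range(3), scores.get))  # [0, 2]
```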
14. Real-Time Embedded Person Detection and Tracking for Shopping Behaviour Analysis
- Author
- Toon Goedemé, Timothy Callemein, Robin Schrijvers, and Steven Puttemans
- Subjects
Computer Vision and Pattern Recognition (cs.CV), Person detection, Computer science, Real-time computing, Embedded hardware, Optical flow, Pedestrian, Tracking, Artificial intelligence
- Abstract
Shopping behaviour analysis through counting and tracking of people in shop-like environments offers valuable information for store operators and provides key insights into the store's layout (e.g. frequently visited spots). Instead of using extra staff for this, automated on-premise solutions are preferred. These automated systems should be cost-effective, preferably run on lightweight embedded hardware, work in very challenging situations (e.g. handling occlusions) and preferably work in real-time. We solve this challenge by implementing a real-time TensorRT-optimized YOLOv3-based pedestrian detector on a Jetson TX2 hardware platform. By combining the detector with a sparse optical flow tracker we assign a unique ID to each customer and tackle the problem of losing partially occluded customers. Our detector-tracker-based solution achieves an average precision of 81.59% at a processing speed of 10 FPS. Besides valuable statistics, heat maps of frequently visited spots are extracted and used as an overlay on the video stream. In: Advanced Concepts for Intelligent Vision Systems (ACIVS 2020), pp. 541-553, Auckland, New Zealand, 10-14 February 2020. (A minimal detect-and-track sketch follows this record.)
- Published
- 2020
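The detector-plus-sparse-optical-flow combination above follows a common pattern: run the (expensive) detector every few frames, and track feature points in between so customer IDs survive partial occlusion. A hedged OpenCV sketch; the box format, the Lucas-Kanade parameters, and the omitted detector call (a placeholder for the TensorRT-optimised YOLOv3 the paper deploys) are assumptions.

```python
import cv2
import numpy as np

LK_PARAMS = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                           30, 0.01))

def seed_points(gray, box, max_corners=30):
    """Seed trackable corners inside a person detection box (x, y, w, h)."""
    x, y, w, h = box
    mask = np.zeros_like(gray)          # restrict corners to the detection
    mask[y:y + h, x:x + w] = 255
    return cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                   qualityLevel=0.01, minDistance=7, mask=mask)

def track_points(prev_gray, gray, points):
    """Propagate points one frame with pyramidal Lucas-Kanade optical flow."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points,
                                                  None, **LK_PARAMS)
    return new_pts[status.flatten() == 1]  # keep successfully tracked points

# Typical loop: detect every N frames to (re)seed points per customer ID,
# then call track_points() on every intermediate frame.
```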
15. Anyone here? Smart Embedded Low-Resolution Omnidirectional Video Sensor to Measure Room Occupancy
- Author
- Toon Goedemé, Timothy Callemein, and Kristof Van Beeck
- Subjects
Computer Vision and Pattern Recognition (cs.CV), Artificial neural network, Occupancy, Computer science, Deep learning, Real-time computing, Omnidirectional camera, Artificial intelligence, Image resolution
- Abstract
In this paper, we present a room occupancy sensing solution with unique properties: (i) It is based on an omnidirectional vision camera, capturing rich scene info over a wide angle, enabling it to count the number of people in a room and even their position. (ii) Although it uses a camera input, no privacy issues arise because of its extremely low image resolution, which renders people unrecognisable. (iii) The neural network inference runs entirely on a low-cost processing platform embedded in the sensor, reducing the privacy risk even further. (iv) Limited manual data annotation is needed, because of the self-training scheme we propose. Such a smart room occupancy rate sensor can be used in e.g. meeting rooms and flex-desks. Indeed, by encouraging flex-desking, the required office space can be reduced significantly. In some cases, however, a flex-desk that has been reserved remains unoccupied without an update in the reservation system. A similar problem occurs with meeting rooms, which are often under-occupied. By optimising the occupancy rate a huge reduction in costs can be achieved. Therefore, in this paper, we develop such a system, which determines the number of people present in office flex-desks and meeting rooms. Using an omnidirectional camera mounted in the ceiling, combined with a person detector, the company can intelligently update the reservation system based on the measured occupancy. Next to the optimisation and embedded implementation of such a self-training omnidirectional people detection algorithm, in this work we propose a novel approach that combines spatial and temporal image data, improving the performance of our system on extremely low-resolution images. (A minimal preprocessing sketch follows this record.)
- Published
- 2019
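Two ingredients of the sensor above are easy to picture: aggressive downscaling for privacy, and stacking consecutive frames to give the detector the temporal cue the paper exploits. A minimal sketch; the target resolution, channel-wise stacking, and occupancy helper are illustrative assumptions, not the published configuration.

```python
import cv2
import numpy as np

def privacy_downscale(frame, size=(32, 24)):
    """Reduce a frame to extreme low resolution before any inference,
    so people are unrecognisable in everything that leaves the sensor."""
    return cv2.resize(frame, size, interpolation=cv2.INTER_AREA)

def temporal_stack(low_res_frames):
    """Stack consecutive low-res frames channel-wise: one simple way of
    combining spatial and temporal data for the person detector."""
    return np.concatenate(low_res_frames, axis=-1)

def occupancy(detections):
    """Room occupancy = number of person detections in the current stack."""
    return len(detections)
```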
16. Automated Analysis of Eye-Tracker-Based Human-Human Interaction Studies
- Author
- Geert Brône, Timothy Callemein, Toon Goedemé, and Kristof Van Beeck (volume editors: K. J. Kim and N. Baek)
- Subjects
Computer Vision and Pattern Recognition (cs.CV), Computer science, Usability, Gaze, Mobile eye-trackers, Software, Human–computer interaction, Human-human interaction studies, Robustness, Computer vision algorithms, Eye tracking
- Abstract
Mobile eye-tracking systems have been available for about a decade now and are becoming increasingly popular in different fields of application, including marketing, sociology, usability studies and linguistics. While the user-friendliness and ergonomics of the hardware are developing at a rapid pace, the software for the analysis of mobile eye-tracking data in some respects still lacks robustness and functionality. With this paper, we investigate which state-of-the-art computer vision algorithms may be used to automate the post-analysis of mobile eye-tracking data. For the case study in this paper, we focus on mobile eye-tracker recordings made during human-human face-to-face interactions. We compared two recent publicly available frameworks (YOLOv2 and OpenPose) to relate the gaze location generated by the eye-tracker to the head and hands visible in the scene camera data. In this paper we show that the use of such a single-pipeline framework provides robust results, which are both more accurate and faster than previous work in the field. Moreover, our approach does not rely on manual interventions during this process. In: Information Science and Applications 2018 (ICISA 2018), vol. 514, pp. 499-509, Hong Kong, 25-27 June 2018. (A minimal gaze-to-AOI sketch follows this record.)
- Published
- 2018
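The core post-analysis step above, relating the eye-tracker's gaze point to detected heads and hands in the scene camera, amounts to a point-in-box lookup. A minimal sketch assuming a simple (label, x1, y1, x2, y2) box format from the detector; the coordinates below are invented.

```python
def gaze_to_aoi(gaze_xy, boxes):
    """Label of the first area-of-interest box containing the gaze point.
    boxes: (label, x1, y1, x2, y2) tuples from a detector on the scene frame."""
    gx, gy = gaze_xy
    for label, x1, y1, x2, y2 in boxes:
        if x1 <= gx <= x2 and y1 <= gy <= y2:
            return label
    return "background"

# Hypothetical detections for one scene frame:
boxes = [("head", 380, 250, 520, 400), ("hand", 100, 500, 180, 560)]
print(gaze_to_aoi((420, 310), boxes))  # -> "head"
```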