689 results for "Temporal consistency"
Search Results
2. Cascaded Sliding-Window-Based Relativistic GAN Fusion for Perceptual and Consistent Video Super-Resolution
- Author
- Li, Dingyi (volume editors: Shi, Zhongzhi; Witbrock, Michael; Tian, Qi)
- Published
- 2025
- Full Text
- View/download PDF
3. Temporally Consistent Enhancement of Low-Light Videos via Spatial-Temporal Compatible Learning.
- Author
- Zhu, Lingyu, Yang, Wenhan, Chen, Baoliang, Zhu, Hanwei, Meng, Xiandong, and Wang, Shiqi
- Subjects
- MATHEMATICAL optimization, INFORMATION networks, DATABASES, VIDEOS, LIGHTING
- Abstract
Temporal inconsistency is the annoying artifact that has been commonly introduced in low-light video enhancement, but current methods tend to overlook the significance of utilizing both data-centric clues and model-centric design to tackle this problem. In this context, our work makes a comprehensive exploration from the following three aspects. First, to enrich the scene diversity and motion flexibility, we construct a synthetic diverse low/normal-light paired video dataset with a carefully designed low-light simulation strategy, which can effectively complement existing real captured datasets. Second, for better temporal dependency utilization, we develop a Temporally Consistent Enhancer Network (TCE-Net) that consists of stacked 3D convolutions and 2D convolutions to exploit spatial-temporal clues in videos. Last, the temporal dynamic feature dependencies are exploited to obtain consistency constraints for different frame indexes. All these efforts are powered by a Spatial-Temporal Compatible Learning (STCL) optimization technique, which dynamically constructs specific training loss functions adaptively on different datasets. As such, multiple-frame information can be effectively utilized and different levels of information from the network can be feasibly integrated, thus expanding the synergies on different kinds of data and offering visually better results in terms of illumination distribution, color consistency, texture details, and temporal coherence. Extensive experimental results on various real-world low-light video datasets clearly demonstrate the proposed method achieves superior performance to state-of-the-art methods. Our code and synthesized low-light video database will be publicly available at https://github.com/lingyzhu0101/low-light-video-enhancement.git. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Row–Column Separated Attention Based Low‐Light Image/Video Enhancement.
- Author
- Dong, Chengqi, Cao, Zhiyuan, Qi, Tuoshi, Wu, Kexin, Gao, Yixing, and Tang, Fan
- Subjects
- VIDEOS, NOISE
- Abstract
U‐Net structure is widely used for low‐light image/video enhancement. The enhanced images result in areas with large local noise and loss of more details without proper guidance for global information. Attention mechanisms can better focus on and use global information. However, attention to images could significantly increase the number of parameters and computations. We propose a Row–Column Separated Attention module (RCSA) inserted after an improved U‐Net. The RCSA module's input is the mean and maximum of the row and column of the feature map, which utilizes global information to guide local information with fewer parameters. We propose two temporal loss functions to apply the method to low‐light video enhancement and maintain temporal consistency. Extensive experiments on the LOL, MIT Adobe FiveK image, and SDSD video datasets demonstrate the effectiveness of our approach. [ABSTRACT FROM AUTHOR]
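The abstract above does not spell out its two temporal loss functions; a common baseline for low-light video enhancement penalizes only the change in the enhanced output that is not already present in the input video. A minimal NumPy sketch under that assumption (the function name and formulation are illustrative, not taken from the paper):

```python
import numpy as np

def temporal_consistency_loss(out_prev, out_curr, in_prev, in_curr):
    """Penalize output flicker that is not explained by input motion.

    out_*: enhanced frames, in_*: original low-light frames,
    all float arrays of shape (H, W, C) with values in [0, 1].
    """
    out_diff = np.abs(out_curr - out_prev)
    in_diff = np.abs(in_curr - in_prev)
    # Only output change exceeding the input's own change counts as flicker.
    flicker = np.maximum(out_diff - in_diff, 0.0)
    return float(flicker.mean())

# A static scene enhanced identically in both frames has zero loss.
a = np.full((4, 4, 3), 0.5)
assert temporal_consistency_loss(a, a, a * 0.1, a * 0.1) == 0.0
```

The `maximum(..., 0.0)` clamp is what keeps legitimate motion (already visible in the input pair) from being penalized as flicker.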
- Published
- 2024
- Full Text
- View/download PDF
5. Machine learning-based estimation of land surface temperature variability over a large region: a temporally consistent approach using single-day satellite imagery.
- Author
- Rengma, Nyenshu Seb and Yadav, Manohar
- Subjects
- LAND surface temperature, URBAN heat islands, MACHINE learning, REMOTE-sensing images, RANDOM forest algorithms
- Abstract
Accurate retrieval of land surface temperature (LST) is crucial for understanding and mitigating the effects of urban heat islands, and ultimately for addressing the broader challenge of global warming. This study emphasizes the importance of single-day satellite imagery for large-scale LST retrieval. It explores the impact of spectral indices of surface parameters, using machine learning algorithms to enhance accuracy. The research proposes a novel approach of capturing satellite data on a single day to reduce uncertainties in LST estimation. A case study over Chandigarh city using Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine, and Random Forest (RF) reveals RF's superior performance in LST estimation during both summer and winter seasons. All the ML models gave an R-squared above 0.8, with RF slightly higher in both summer (0.93) and winter (0.85). Building on these findings, the study extends its focus to Ranchi, demonstrating RF's robustness with impressive accuracy in capturing LST variations. The research contributes to bridging existing gaps in large-scale LST estimation methodologies, offering valuable insights for its diverse applications in understanding Earth's dynamic systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Video Colorization Based on Variational Autoencoder.
- Author
- Zhang, Guangzi, Hong, Xiaolin, Liu, Yan, Qian, Yulin, and Cai, Xingquan
- Subjects
- GRAYSCALE model, VIDEO processing, VIDEOS, VIDEO surveillance
- Abstract
This paper introduces a variational autoencoder network designed for video colorization using reference images, addressing the challenge of colorizing black-and-white videos. Although recent techniques perform well in some scenarios, they often struggle with color inconsistencies and artifacts in videos that feature complex scenes and long durations. To tackle this, we propose a variational autoencoder framework that incorporates spatio-temporal information for efficient video colorization. To improve temporal consistency, we unify semantic correspondence with color propagation, allowing for simultaneous guidance in colorizing grayscale video frames. Additionally, the variational autoencoder learns spatio-temporal feature representations by mapping video frames into a latent space through an encoder network. The decoder network then transforms these latent features back into color images. Compared to traditional coloring methods, our approach accurately captures temporal relationships between video frames, providing precise colorization while ensuring video consistency. To further enhance video quality, we apply a specialized loss function that constrains the generated output, ensuring that the colorized video remains spatio-temporally consistent and natural. Experimental results demonstrate that our method significantly improves the video colorization process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Consistent Panoramic Video Style Transfer via Temporal-Spatial Cross Perception
- Author
- Wang, Weiyu, Qing, Chunmei, Tan, Junpeng, and Xu, Xiangmin (volume editors: Huang, De-Shuang; Si, Zhanjun; Guo, Jiayang)
- Published
- 2024
- Full Text
- View/download PDF
8. Single-Video Temporal Consistency Enhancement with Rolling Guidance
- Author
- Fang, Xiaonan and Zhang, Song-Hai (volume editors: Zhang, Fang-Lue; Sharf, Andrei)
- Published
- 2024
- Full Text
- View/download PDF
9. Temporal stability of behavior, temporal cue-behavior associations, and physical activity habit strength among mothers with school-aged children.
- Author
- Maher, Jaclyn P., Wang, Wei-Lin, Hedeker, Donald, and Dunton, Genevieve F.
- Subjects
- PSYCHOLOGY of mothers, MULTIPLE regression analysis, BEHAVIOR, PHYSICAL activity, DESCRIPTIVE statistics, RESEARCH funding, PROMPTS (Psychology)
- Abstract
Objective: PA habits reflect stable, consistent patterns in behaviours that are performed automatically in response to temporal or contextual cues. Mothers face multiple demands and complex schedules related to parenting. This study examined how subject-level mean, variability, and slopes in device-measured moderate-to-vigorous physical activity (MVPA) over three different timescales were associated with mothers' PA habits. Methods and Measures: Mothers (n = 125; Mage=41.4 years) completed six measurement periods across three years. Each measurement period consisted of seven days of accelerometry. MVPA minutes were processed across hours, days, and measurement periods. PA habits were assessed in the last measurement period. Results: Subject-level means of MVPA at all timescales were positively associated with stronger PA habits (βs = 0.42-0.48, ps<.01). Subject-level variability in day-level MVPA was positively associated with habits (β = 0.39, p=.01). Furthermore, mothers who engaged in higher mean day-level MVPA had a more positive association between subject-level variability in day-level MVPA and habit strength compared to mothers with lower mean day-level MVPA overall (β = 0.28, p=.04). Mothers who had steeper increases in MVPA across measurement periods (i.e. subject-level slope) reported stronger habits (β = 0.43; p = 0.03). Conclusion: Flexibly adjusting daily PA levels may be a necessary strategy to maintain habits in the face of parenting demands. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Temporally consistent video colorization with deep feature propagation and self-regularization learning.
- Author
- Liu, Yihao, Zhao, Hengyuan, Chan, Kelvin C. K., Wang, Xintao, Loy, Chen Change, Qiao, Yu, and Dong, Chao
- Subjects
- TIME management
- Abstract
Video colorization is a challenging and highly ill-posed problem. Although recent years have witnessed remarkable progress in single image colorization, there is relatively less research effort on video colorization, and existing methods always suffer from severe flickering artifacts (temporal inconsistency) or unsatisfactory colorization. We address this problem from a new perspective, by jointly considering colorization and temporal consistency in a unified framework. Specifically, we propose a novel temporally consistent video colorization (TCVC) framework. TCVC effectively propagates frame-level deep features in a bidirectional way to enhance the temporal consistency of colorization. Furthermore, TCVC introduces a self-regularization learning (SRL) scheme to minimize the differences in predictions obtained using different time steps. SRL does not require any ground-truth color videos for training and can further improve temporal consistency. Experiments demonstrate that our method can not only provide visually pleasing colorized video, but also with clearly better temporal consistency than state-of-the-art methods. A video demo is provided at https://www.youtube.com/watch?v=c7dczMs-olE, while code is available at https://github.com/lyh-18/TCVC-Temporally-Consistent-Video-Colorization. [ABSTRACT FROM AUTHOR]
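One hedged reading of the SRL idea (the exact objective is not given in this abstract) is to run the same model over a clip at two different time steps and penalize disagreement on the frames both passes cover, with no ground-truth color involved. A toy sketch where `predict` is a stand-in for the TCVC network, not the paper's actual interface:

```python
import numpy as np

def srl_loss(predict, frames):
    """Self-regularization sketch: predictions for the same frames obtained
    using different time steps should agree; no ground truth is required.

    predict: maps a list of frames to a list of per-frame predictions.
    frames: list of NumPy arrays (one per frame).
    """
    full = predict(frames)         # pass over every frame
    coarse = predict(frames[::2])  # pass using a doubled time step
    # Compare the frames both passes predicted (indices 0, 2, 4, ...).
    diffs = [np.mean((f - c) ** 2) for f, c in zip(full[::2], coarse)]
    return float(np.mean(diffs))

# A per-frame model that ignores temporal context is perfectly consistent.
identity = lambda fs: [f.copy() for f in fs]
clip = [np.full((8, 8), i, dtype=float) for i in range(6)]
assert srl_loss(identity, clip) == 0.0
```

A model whose output depends on the temporal stride (e.g. through recurrent state) would incur a nonzero loss here, which is exactly the behavior such a scheme discourages.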
- Published
- 2024
- Full Text
- View/download PDF
12. Reliable interconnected channels for dynamic DCF based visual tracking.
- Author
- Gopal, Goutam Yelluru and Amer, Maria
- Abstract
In Discriminative Correlation Filter (DCF) based trackers, individual channel responses are aggregated to compute the overall DCF response during target localization. Due to the effect of various external factors, several non-discriminative or unreliable channels display ambiguous filter responses. The reliability-based methods mitigate this problem by computing channel weights based on estimated scores from filter responses. However, this approach is prone to false suppression of discriminative channels due to noisy reliability scores. In this paper, we address this problem by proposing a three-fold objective function that accounts for the relationship between channels along with per-channel reliability scores during channel weight learning and a temporal prior. In addition, our paper presents an algorithm to compute channel weights efficiently and maintain the tracking speed. We show that our channel adaptation method can be seamlessly integrated into a multi-channel DCF-based tracking framework. Results on OTB2015, TC128, and VOT2018 datasets show that the proposed method improves the performance of baseline DCF trackers; for example, our dca-ECO is on average 12.7% better than ECO and outperforms related channel-adaptive Convolutional Neural Network-based trackers CGRCF, ACSDCF, and GFSDCF by 17.1% in terms of failure rate, also the recent trackers E.T.Track and FEAR by 4% and 19.1%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Motion-Inspired Real-Time Garment Synthesis with Temporal-Consistency.
- Author
- Wei, Yu-Kun, Shi, Min, Feng, Wen-Ke, Zhu, Deng-Ming, and Mao, Tian-Lu
- Subjects
- CLOTHING & dress, MOTION, COMPUTER graphics
- Abstract
Synthesizing garment dynamics according to body motions is a vital technique in computer graphics. Physics-based simulation depends on an accurate model of the law of kinetics of cloth, which is time-consuming, hard to implement, and complex to control. Existing data-driven approaches either lack temporal consistency, or fail to handle garments that are different from body topology. In this paper, we present a motion-inspired real-time garment synthesis workflow that enables high-level control of garment shape. Given a sequence of body motions, our workflow is able to generate corresponding garment dynamics with both spatial and temporal coherence. To that end, we develop a transformer-based garment synthesis network to learn the mapping from body motions to garment dynamics. Frame-level attention is employed to capture the dependency of garments and body motions. Moreover, a post-processing procedure is further taken to perform penetration removal and auto-texturing. Then, textured clothing animation that is collision-free and temporally-consistent is generated. We quantitatively and qualitatively evaluated our proposed workflow from different aspects. Extensive experiments demonstrate that our network is able to deliver clothing dynamics which retain the wrinkles from the physics-based simulation, while running 1000 times faster. Besides, our workflow achieved superior synthesis performance compared with alternative approaches. To stimulate further research in this direction, our code will be publicly available soon. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. Does the Time-of-Day of Exercise Influence the Total Volume of Exercise? A Cross-Sectional Analysis of Objectively Monitored Physical Activity Among Active Individuals.
- Author
- Brooker, Paige G., Jung, Mary E., Kelly-Bowers, Dominic, Morlotti, Veronica, Gomersall, Sjaan R., King, Neil A., and Leveritt, Michael D.
- Subjects
- EXERCISE, WORKING class, PHYSICAL activity, COMPLIANT behavior, ACCELEROMETRY
- Abstract
Background: To improve compliance and adherence to exercise, the concept of temporal consistency has been proposed. Before- and after-work are periods when most working adults may reasonably incorporate exercise into their schedule. However, it is unknown if there is an association between the time-of-day that exercise is performed and overall physical activity levels. Methods: Activity was assessed over 1 week in a sample of 69 active adults (n = 41 females; mean age = 34.9 [12.3] y). At the end of the study, participants completed an interviewer-assisted questionnaire detailing their motivation to exercise and their exercise time-of-day preferences. Results: Participants were classified as "temporally consistent" (n = 37) or "temporally inconsistent" (n = 32) exercisers based on their accelerometry data. The "temporally consistent" group was further analyzed to compare exercise volume between "morning-exercisers" (n = 16) and "evening-exercisers" (n = 21). "Morning-exercisers" performed a greater volume of exercise than "evening-exercisers" (419 [178] vs 330 [233] min by self-report; 368 [224] vs 325 [156] min actigraph-derived moderate to vigorous physical activity, respectively). Conclusions: Our findings suggest that active individuals use a mixture of temporal patterns to meet PA guidelines. Time-of-day of exercise should be reported in intervention studies so the relationship between exercise time-of-day, exercise behavior, and associated outcomes can be better understood. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
15. A Temporal Consistency Enhancement Algorithm Based on Pixel Flicker Correction
- Author
- Meng, Junfeng, Shen, Qiwei, He, Yangliu, and Liao, Jianxin (volume editors: Tanveer, Mohammad; Agarwal, Sonali; Ozawa, Seiichi; Ekbal, Asif; Jatowt, Adam)
- Published
- 2023
- Full Text
- View/download PDF
16. Virtual Try-On Considering Temporal Consistency for Videoconferencing
- Author
- Shimizu, Daiki and Yanai, Keiji (volume editors: Dang-Nguyen, Duc-Tien; Gurrin, Cathal; Larson, Martha; Smeaton, Alan F.; Rudinac, Stevan; Dao, Minh-Son; Trattner, Christoph; Chen, Phoebe)
- Published
- 2023
- Full Text
- View/download PDF
17. Deep Video Matting with Temporal Consistency
- Author
- Li, Yanzhuo, Fang, Li, Ye, Long, and Yang, Xinyan (volume editors: Zhai, Guangtao; Zhou, Jun; Yang, Hua; Yang, Xiaokang; An, Ping; Wang, Jia)
- Published
- 2023
- Full Text
- View/download PDF
18. Scene-Adaptive Temporal Stabilisation for Video Colourisation Using Deep Video Priors
- Author
- Blanch, Marc Gorriz, O’Connor, Noel, and Mrak, Marta (volume editors: Karlinsky, Leonid; Michaeli, Tomer; Nishino, Ko)
- Published
- 2023
- Full Text
- View/download PDF
19. STC: Spatio-Temporal Contrastive Learning for Video Instance Segmentation
- Author
- Jiang, Zhengkai, Gu, Zhangxuan, Peng, Jinlong, Zhou, Hang, Liu, Liang, Wang, Yabiao, Tai, Ying, Wang, Chengjie, and Zhang, Liqing (volume editors: Karlinsky, Leonid; Michaeli, Tomer; Nishino, Ko)
- Published
- 2023
- Full Text
- View/download PDF
20. Deep Video Harmonization by Improving Spatial-temporal Consistency
- Author
- Chen, Xiuwen, Fang, Li, Ye, Long, and Zhang, Qin
- Published
- 2024
- Full Text
- View/download PDF
21. Temporally consistent sequence-to-sequence translation of cataract surgeries.
- Author
- Frisch, Yannik, Fuchs, Moritz, and Mukhopadhyay, Anirban
- Abstract
Purpose: Image-to-image translation methods can address the lack of diversity in publicly available cataract surgery data. However, applying image-to-image translation to videos—which are frequently used in medical downstream applications—induces artifacts. Additional spatio-temporal constraints are needed to produce realistic translations and improve the temporal consistency of translated image sequences. Methods: We introduce a motion-translation module that translates optical flows between domains to impose such constraints. We combine it with a shared latent space translation model to improve image quality. Evaluations are conducted regarding translated sequences' image quality and temporal consistency, where we propose novel quantitative metrics for the latter. Finally, the downstream task of surgical phase classification is evaluated when retraining it with additional synthetic translated data. Results: Our proposed method produces more consistent translations than state-of-the-art baselines. Moreover, it stays competitive in terms of the per-image translation quality. We further show the benefit of consistently translated cataract surgery sequences for improving the downstream task of surgical phase prediction. Conclusion: The proposed module increases the temporal consistency of translated sequences. Furthermore, imposed temporal constraints increase the usability of translated data in downstream tasks. This allows overcoming some of the hurdles of surgical data acquisition and annotation and enables improving models' performance by translating between existing datasets of sequential frames. [ABSTRACT FROM AUTHOR]
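The paper's proposed consistency metrics are not detailed in this abstract; a widely used stand-in for measuring temporal consistency is flow-warping error: warp frame t-1 toward frame t using optical flow and average the residual. A simplified sketch that assumes a constant integer flow (real metrics use dense per-pixel flow and occlusion masks):

```python
import numpy as np

def warping_error(prev, curr, flow):
    """Temporal-consistency proxy: mean absolute difference between frame t
    and frame t-1 shifted by a constant integer flow (dy, dx). A toy
    stand-in for dense-flow warping metrics.
    """
    dy, dx = flow
    # Shift the previous frame by the flow (wrap-around at borders).
    warped = np.roll(prev, shift=(dy, dx), axis=(0, 1))
    return float(np.abs(curr - warped).mean())

# A frame translated by exactly the given flow has zero warping error.
frame = np.arange(16.0).reshape(4, 4)
moved = np.roll(frame, shift=(1, 0), axis=(0, 1))
assert warping_error(frame, moved, (1, 0)) == 0.0
```

Lower warping error means the translated sequence changes between frames only where motion explains it, which is the property the paper's module is designed to improve.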
- Published
- 2023
- Full Text
- View/download PDF
22. SOR-TC: Self-attentive octave ResNet with temporal consistency for compressed video action recognition.
- Author
- Zhang, Junsan, Wang, Xiaomin, Wan, Yao, Wang, Leiquan, Wang, Jian, and Yu, Philip S.
- Subjects
- VIDEO summarization, VIDEO surveillance, RECOGNITION (Psychology), VIDEOS
- Abstract
Modeling and recognizing video activities from videos are key parts of many promising applications such as visual surveillance, human–computer interaction, and video summarization. However, current approaches mainly suffer from two issues: (a) Short-term local and global spatial features are not well represented. The spatial redundancy and dependency have not been well considered in CNN-based action recognition, which may result in a further increase in both memory and computation cost. (b) Long-term temporal consistency is not well captured. The action consistency across multiple clips has been ignored in video-level action recognition approaches. To address these two issues, we propose a Self-Attentive Octave ResNet with Temporal Consistency (SOR-TC) for compressed video action recognition to better capture the short-term and long-term features in video and improve the efficiency and effectiveness of action recognition. In addition, this paper introduces a consistency hypothesis that adjacent clips should predict similar actions. So the consistency loss function is designed to learn the correlation of clips. Finally, extensive experimental results on two benchmark datasets HMDB-51 and UCF-101 verify the effectiveness of our proposed method. [ABSTRACT FROM AUTHOR]
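The clip-level consistency hypothesis above lends itself to a simple regularizer: adjacent clips of the same video should produce similar action distributions. A hedged sketch (softmax over per-clip logits; the paper's actual loss may differ):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def clip_consistency_loss(clip_logits):
    """Mean squared difference between action distributions of adjacent
    clips sampled from one video.

    clip_logits: array of shape (num_clips, num_classes).
    """
    probs = softmax(np.asarray(clip_logits, dtype=float))
    return float(np.mean((probs[1:] - probs[:-1]) ** 2))

# Identical per-clip predictions are perfectly consistent.
logits = np.array([[2.0, 0.5, -1.0]] * 3)
assert clip_consistency_loss(logits) == 0.0
```

Comparing distributions rather than raw logits makes the penalty invariant to per-clip logit offsets, so only genuine disagreement between adjacent clips is punished.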
- Published
- 2023
- Full Text
- View/download PDF
23. TempFormer: Temporally Consistent Transformer for Video Denoising
- Author
- Song, Mingyang, Zhang, Yang, and Aydın, Tunç O. (volume editors: Avidan, Shai; Brostow, Gabriel; Cissé, Moustapha; Farinella, Giovanni Maria; Hassner, Tal)
- Published
- 2022
- Full Text
- View/download PDF
24. Tackling Background Distraction in Video Object Segmentation
- Author
- Cho, Suhwan, Lee, Heansung, Lee, Minhyeok, Park, Chaewon, Jang, Sungjun, Kim, Minjung, and Lee, Sangyoun (volume editors: Avidan, Shai; Brostow, Gabriel; Cissé, Moustapha; Farinella, Giovanni Maria; Hassner, Tal)
- Published
- 2022
- Full Text
- View/download PDF
25. Source-Free Video Domain Adaptation by Learning Temporal Consistency for Action Recognition
- Author
- Xu, Yuecong, Yang, Jianfei, Cao, Haozhi, Wu, Keyu, Wu, Min, and Chen, Zhenghua (volume editors: Avidan, Shai; Brostow, Gabriel; Cissé, Moustapha; Farinella, Giovanni Maria; Hassner, Tal)
- Published
- 2022
- Full Text
- View/download PDF
26. CCPL: Contrastive Coherence Preserving Loss for Versatile Style Transfer
- Author
- Wu, Zijie, Zhu, Zhen, Du, Junping, and Bai, Xiang (volume editors: Avidan, Shai; Brostow, Gabriel; Cissé, Moustapha; Farinella, Giovanni Maria; Hassner, Tal)
- Published
- 2022
- Full Text
- View/download PDF
27. Depth-Aware Neural Style Transfer for Videos.
- Author
- Ioannou, Eleftherios and Maddock, Steve
- Subjects
- DIGITAL preservation, ARTISTIC style, STREAMING video & television, VIDEOS, VIDEO coding
- Abstract
Temporal consistency and content preservation are the prominent challenges in artistic video style transfer. To address these challenges, we present a technique that utilizes depth data and we demonstrate this on real-world videos from the web, as well as on a standard video dataset of three-dimensional computer-generated content. Our algorithm employs an image-transformation network combined with a depth encoder network for stylizing video sequences. For improved global structure preservation and temporal stability, the depth encoder network encodes ground-truth depth information which is fused into the stylization network. To further enforce temporal coherence, we employ ConvLSTM layers in the encoder, and a loss function based on calculated depth information for the output frames is also used. We show that our approach is capable of producing stylized videos with improved temporal consistency compared to state-of-the-art methods whilst also successfully transferring the artistic style of a target painting. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. Spatio-temporal matching for siamese visual tracking.
- Author
- Zhang, Jinpu, Dai, Kaiheng, Li, Ziwen, Wei, Ruonan, and Wang, Yuehuan
- Subjects
- SPATIOTEMPORAL processes, CROSS correlation, TIME-varying networks, TRACKING radar
- Abstract
Siamese trackers formulate the visual tracking task as a similarity matching problem through cross correlation. It is arduous for such methods to track targets with the presence of distractors. We suspect the reasons are twofold: 1) The irrelevant activated channels in the correlation map will produce ambiguous matching results. 2) The pipeline is a per-frame matching process and cannot handle the response aberrance caused by temporal context variation. In this paper, we propose a spatio-temporal matching process to thoroughly explore the capability of 4-D matching in space (height, width and channel) and time. In spatial matching, we introduce a space-variant instance-aware correlation (SI-Corr) to implement different channel-wise response recalibration for each matching position. SI-Corr can guide the generation of instance-aware features and distinguish the target and distractors at the instance level. In temporal matching, we design an aberrance repressed module (ARM) to investigate the short-term positional relationship between the target and distractors. ARM utilizes a simple optimization method to restrict the abrupt alteration of the interframe response maps, which allows the network to learn a temporal consistency of context structure distribution. Moreover, we efficiently embed temporal consistency into the inference process. Experiments on six benchmarks, including OTB100, VOT2018, VOT2020, GOT-10k, LaSOT and TrackingNet, demonstrate the state-of-the-art performance of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. Testing for personality consistency across naturally occurring behavioral contexts in sanctuary chimpanzees (Pan troglodytes).
- Author
- Chotard, Hélène, Bard, Kim A., Micheletta, Jérôme, and Davila‐Ross, Marina
- Subjects
- CHIMPANZEES, PERSONALITY tests, PERSONALITY studies, PERSONALITY, INDIVIDUAL differences, HOMINIDS
- Abstract
Personality is both a reflection of the bio‐behavioral profile of individuals and a summary of how they typically interact with their physical and social world. Personality is usually defined as having distinct behavioral characteristics, which are assumed to be consistent over time and across contexts. Like other mammals, primates have individual differences in personality. Although temporal consistency is sometimes measured in primates, and contextual consistency is sometimes measured across experimental contexts, it is rare to measure both in the same individuals and outside of experimental settings. Here, we aim to measure both temporal and contextual consistency in chimpanzees, assessing their personality with behavioral observations from naturally occurring contexts (i.e., real‐life settings). We measured personality‐based behaviors in 22 sanctuary chimpanzees, in the contexts of feeding, affiliation, resting, and solitude, across two time periods, spanning 4 years. Of the 22 behaviors recorded, about 64% were consistent across two to four contexts and 50% were consistent over time. Ten behaviors loaded significantly onto three trait components: explorativeness, boldness‐sociability, and anxiety‐sociability, as revealed by factor analysis. Like others, we documented individual differences in the personality of chimpanzees based on reliably measured observations in real‐life contexts. Furthermore, we demonstrated relatively strong, but not absolute, temporal, and contextual consistency in personality‐based behaviors. We also found another aspect of individual differences in personality, specifically, the extent to which individual chimpanzees show consistency. Some individuals showed contextual and temporal consistency, whereas others show significant variation across behaviors, contexts, and/or time. We speculate that the relative degree of consistency in personality may vary within chimpanzees. 
It may be that different primate species vary in the extent to which individuals show consistency of personality traits. Our behavioral‐based assessment can be used with wild populations, increasing the validity of personality studies, facilitating comparative studies and potentially being applicable to conservation efforts. Research Highlights: Chimpanzees show consistency in their personality‐based behaviors across naturally occurring contexts and over time.Individuals show different contextual and temporal consistency of their personality‐based behavior profile. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
30. Counting People by Estimating People Flows.
- Author
-
Liu, Weizhe, Salzmann, Mathieu, and Fua, Pascal
- Subjects
- *
OPTICAL flow , *ACTIVE learning , *COUNTING , *DEEP learning , *VIDEO compression - Abstract
Modern methods for counting people in crowded scenes rely on deep networks to estimate people densities in individual images. As such, only very few take advantage of temporal consistency in video sequences, and those that do only impose weak smoothness constraints across consecutive frames. In this paper, we advocate estimating people flows across image locations between consecutive images and inferring the people densities from these flows instead of directly regressing them. This enables us to impose much stronger constraints encoding the conservation of the number of people. As a result, it significantly boosts performance without requiring a more complex architecture. Furthermore, it allows us to exploit the correlation between people flow and optical flow to further improve the results. We also show that leveraging people conservation constraints in both a spatial and temporal manner makes it possible to train a deep crowd counting model in an active learning setting with far fewer annotations. This significantly reduces the annotation cost while still achieving performance similar to the fully supervised case. [ABSTRACT FROM AUTHOR]
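The conservation constraint described in this abstract can be made concrete with a small sketch (my own illustration, not the authors' code; `density_from_flows` is a hypothetical name): if a network predicts the flows of people between grid cells rather than the densities directly, the densities recovered from those flows conserve the total number of people by construction.

```python
import numpy as np

def density_from_flows(flows):
    """Reconstruct per-cell people densities from predicted flows.

    flows[i, j] = number of people moving from cell i (frame t-1)
    to cell j (frame t); staying put is flows[i, i].
    The frame t-1 density is the row sum and the frame t density is
    the column sum, so the total count is conserved by construction.
    """
    density_prev = flows.sum(axis=1)  # people leaving each cell at t-1
    density_curr = flows.sum(axis=0)  # people arriving in each cell at t
    return density_prev, density_curr

# Toy example: 3 cells, 5 people total.
flows = np.array([
    [1.0, 1.0, 0.0],   # cell 0: 1 stays, 1 moves to cell 1
    [0.0, 2.0, 0.0],   # cell 1: 2 stay
    [0.0, 0.0, 1.0],   # cell 2: 1 stays
])
d_prev, d_curr = density_from_flows(flows)
assert d_prev.sum() == d_curr.sum() == 5.0  # conservation of people
```

Regressing flows and summing them, instead of regressing densities per frame, is what lets such a model encode this hard conservation constraint.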
- Published
- 2022
- Full Text
- View/download PDF
31. Temporal-Consistency-Aware Video Color Transfer
- Author
-
Liu, Shiguang, Zhang, Yu, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Magnenat-Thalmann, Nadia, editor, Interrante, Victoria, editor, Thalmann, Daniel, editor, Papagiannakis, George, editor, Sheng, Bin, editor, Kim, Jinman, editor, and Gavrilova, Marina, editor
- Published
- 2021
- Full Text
- View/download PDF
32. Background Modeling and Elimination Using Differential Block-Based Quantized Histogram (D-BBQH)
- Author
-
Maity, Satyabrata, Ghosh, Nimisha, Maity, Krishanu, Saha, Sayantan, Howlett, Robert J., Series Editor, Jain, Lakhmi C., Series Editor, Mishra, Debahuti, editor, Buyya, Rajkumar, editor, Mohapatra, Prasant, editor, and Patnaik, Srikanta, editor
- Published
- 2021
- Full Text
- View/download PDF
33. Depth Estimation for Colonoscopy Images with Self-supervised Learning from Videos
- Author
-
Cheng, Kai, Ma, Yiting, Sun, Bin, Li, Yang, Chen, Xuejin, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, de Bruijne, Marleen, editor, Cattin, Philippe C., editor, Cotin, Stéphane, editor, Padoy, Nicolas, editor, Speidel, Stefanie, editor, Zheng, Yefeng, editor, and Essert, Caroline, editor
- Published
- 2021
- Full Text
- View/download PDF
34. Consistency Assessments of the Land Cover Products on the Tibetan Plateau
- Author
-
Liping Cai, Shanshan Wang, Lizhi Jia, Yijia Wang, Hui Wang, Donglin Fan, and Lin Zhao
- Subjects
Land cover (LC) pattern ,LC product ,spatial consistency ,temporal consistency ,Tibetan Plateau ,Ocean engineering ,TC1501-1800 ,Geophysics. Cosmic physics ,QC801-809 - Abstract
Land cover (LC) and LC change information are essential data in terrestrial surface research. However, LC products are highly inconsistent, especially in mountainous areas. Most LC assessment studies have focused on the consistency of spatial patterns while ignoring the consistency of change and elevation heterogeneity. In this article, four LC products were assessed for spatiotemporal consistency on the Tibetan Plateau (TP): the moderate resolution imaging spectroradiometer LC (MCD12Q1), the climate change initiative LC (CCI-LC), the 30-meter global land cover (Globeland30), and the multiperiod land use/LC remote sensing monitoring database in China. The impact of elevation on the consistency of multiple LC products across spatiotemporal scales was further analyzed. The spatial consistency of three or more products was about 70%, with higher consistency for grassland and bare land and lower consistency for wetland and shrubland on the TP. Globeland30 and CCI-LC were better than the others for overall monitoring, with an inconsistency of less than 45% by Google Earth dataset validation. The temporal change consistency of the four LC products was less than 10%. With increasing elevation, the average spatial consistency decreased and the LC change area and temporal change consistency increased. There is high inconsistency of LC changes on the TP in existing commonly used products, demonstrating the need to develop high-quality LC products in long time series.
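The spatial-consistency figure quoted above (agreement of three or more of the four products) can be illustrated with a hedged sketch; `spatial_consistency` and the toy maps below are my own illustrative stand-ins, assuming the products have been resampled to a common grid and harmonized legend:

```python
import numpy as np

def spatial_consistency(maps, min_agree=3):
    """Fraction of pixels where at least `min_agree` of the stacked
    land-cover maps assign the same class.

    maps: list of (H, W) integer class-label arrays on a shared grid.
    """
    flat = np.stack(maps).reshape(len(maps), -1)
    # For each pixel, size of the largest group of agreeing products.
    agree = np.array([np.bincount(flat[:, p]).max()
                      for p in range(flat.shape[1])])
    return float((agree >= min_agree).mean())

# Toy 2x2 example, 4 products, classes {0: grassland, 1: wetland}.
maps = [
    np.array([[0, 1], [0, 0]]),
    np.array([[0, 1], [0, 1]]),
    np.array([[0, 0], [0, 0]]),
    np.array([[0, 1], [1, 1]]),
]
print(spatial_consistency(maps))  # 3 of 4 pixels reach 3-product agreement
```

The same per-pixel agreement map, stratified by an elevation raster, would support the article's analysis of how consistency degrades with altitude.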
- Published
- 2022
- Full Text
- View/download PDF
35. Revisit the Performance of MODIS and VIIRS Leaf Area Index Products from the Perspective of Time-Series Stability
- Author
-
Dongxiao Zou, Kai Yan, Jiabin Pu, Si Gao, Wenjuan Li, Xihan Mu, Yuri Knyazikhin, and Ranga B. Myneni
- Subjects
Leaf area index (LAI) ,MODerate resolution Imaging Spectroradiometer (MODIS) ,temporal consistency ,Tibet Plateau (TP) ,time-series anomaly (TSA) ,time-series stability (TSS) ,Ocean engineering ,TC1501-1800 ,Geophysics. Cosmic physics ,QC801-809 - Abstract
As an essential vegetation structural parameter, leaf area index (LAI) is involved in many critical biochemical processes, such as photosynthesis, respiration, and precipitation interception. The MODerate resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imager Radiometer Suite (VIIRS) LAI sequence products have long supported various global climate, biogeochemistry, and energy flux research. These applications all rely on the accuracy of the products' long time series. However, uncontrolled interferences (e.g., adverse observation conditions and sensor uncertainties) potentially introduce substantial uncertainties into the time series in product applications. As one of the most sensitive areas in response to global climate change, the Tibet Plateau (TP) has been treated as a crucial testing ground for thousands of studies on vegetation. To ensure the credibility of studies arising from MODIS/VIIRS LAI products, the temporal quality uncertainties of the data need to be clarified. This article proposes a method to revisit the temporal stability of the MODIS (MOD and MYD) and VIIRS (VNP) LAI in the TP, expecting to provide useful information for better accounting for the uncertainties in this area. Results show that the MODIS and VIIRS LAI were relatively stable over time and suitable for continued use, among which the temporal quality of the MODIS LAI was the most stable. Moreover, the MODIS and VIIRS LAI products performed similarly in both time-series stability and the distribution, magnitude, and fluctuation of time-series anomalies. The time-series stability evaluation strategy applied to the MODIS and VIIRS LAI can also be applied to other remote sensing products.
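As a rough illustration of time-series anomaly screening in this spirit (a deliberately simplified stand-in for the article's TSS/TSA metrics, with hypothetical function names), one can flag values that deviate from the per-epoch climatology of a periodic LAI series:

```python
import numpy as np

def time_series_anomalies(lai, period=12, k=2.0):
    """Flag anomalies in a periodic (e.g. monthly) LAI series.

    A value is anomalous when it deviates from the multi-year mean of
    its position in the cycle by more than k standard deviations.
    Simplified sketch, not the paper's exact stability metrics.
    """
    years = np.asarray(lai, dtype=float).reshape(-1, period)
    clim_mean = years.mean(axis=0)
    clim_std = years.std(axis=0) + 1e-9   # avoid division by zero
    z = (years - clim_mean) / clim_std
    return np.abs(z).reshape(-1) > k

# Three synthetic years of a smooth seasonal cycle, with one spike.
base = 1.0 + np.sin(np.linspace(0, 2 * np.pi, 12, endpoint=False))
series = np.concatenate([base, base, base])
series[17] += 1.5                          # inject an anomaly in year 2
flags = time_series_anomalies(series, k=1.2)
print(int(flags.sum()))  # one anomalous epoch flagged
```

A product whose anomaly rate stays low and unclustered across years would count as temporally stable in this simplified sense.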
- Published
- 2022
- Full Text
- View/download PDF
36. Analysis of temporal consistency management protocols for resource-constrained multi-agent systems
- Author
-
Mohamed Limame, Julien Henriet, Christophe Lang, and Nicolas Marilleau
- Subjects
consistency management ,temporal consistency ,shared data ,synchronization ,multi-agent systems ,multi-agent systems with limited resources and connectivity ,Technology ,Technology (General) ,T1-995 - Abstract
Multi-Agent Systems (MAS) are distributed systems composed of a set of connected nodes that communicate and share data. Since nodes do not have access to a central clock, the consistency of shared data is one of the important issues raised by this type of system. Approaches to establishing temporal consistency of these shared data have been designed and implemented in the past. This paper presents and analyses existing protocols for managing temporal consistency in order to identify those that are in line with the specificities of weakly adjoining MAS with limited resources and connectivity. We propose a classification and a comparative analysis of these protocols for use within this category of MAS.
- Published
- 2022
37. Preserving Temporal Consistency in Videos Through Adaptive SLIC
- Author
-
Zhang, Han, Ali, Riaz, Sheng, Bin, Li, Ping, Kim, Jinman, Wang, Jihong, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Magnenat-Thalmann, Nadia, editor, Stephanidis, Constantine, editor, Wu, Enhua, editor, Thalmann, Daniel, editor, Sheng, Bin, editor, Kim, Jinman, editor, Papagiannakis, George, editor, and Gavrilova, Marina, editor
- Published
- 2020
- Full Text
- View/download PDF
38. Efficient Semantic Video Segmentation with Per-Frame Inference
- Author
-
Liu, Yifan, Shen, Chunhua, Yu, Changqian, Wang, Jingdong, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Vedaldi, Andrea, editor, Bischof, Horst, editor, Brox, Thomas, editor, and Frahm, Jan-Michael, editor
- Published
- 2020
- Full Text
- View/download PDF
39. Exploiting Temporal Coherence for Self-Supervised One-Shot Video Re-identification
- Author
-
Raychaudhuri, Dripta S., Roy-Chowdhury, Amit K., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Vedaldi, Andrea, editor, Bischof, Horst, editor, Brox, Thomas, editor, and Frahm, Jan-Michael, editor
- Published
- 2020
- Full Text
- View/download PDF
40. Estimating People Flows to Better Count Them in Crowded Scenes
- Author
-
Liu, Weizhe, Salzmann, Mathieu, Fua, Pascal, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Vedaldi, Andrea, editor, Bischof, Horst, editor, Brox, Thomas, editor, and Frahm, Jan-Michael, editor
- Published
- 2020
- Full Text
- View/download PDF
41. Shape-Matching GAN++: Scale Controllable Dynamic Artistic Text Style Transfer.
- Author
-
Yang, Shuai, Wang, Zhangyang, and Liu, Jiaying
- Subjects
- *
ARTISTIC style , *GALLIUM nitride , *DEFORMATION of surfaces - Abstract
Dynamic artistic text style transfer aims to migrate the style, in terms of both appearance and motion patterns, from a reference style video to the target text to create artistic text animation. Recent research has improved the usability of transfer models by introducing texture control. However, it remains an important open challenge to investigate the control of the stylistic degree with respect to shape deformation. In this paper, we explore a new problem of dynamic artistic text style transfer with glyph stylistic degree control. The key idea is to build multi-scale glyph-style shape mappings through a novel bidirectional shape matching framework. Following this idea, we first introduce a scale-aware Shape-Matching GAN to learn such mappings, simultaneously modeling the style shape features at multiple scales and transferring them onto the target glyph. Furthermore, an advanced Shape-Matching GAN++ is proposed to animate a static text image based on the reference style video. Our Shape-Matching GAN++ characterizes the short-term consistency of motion patterns via shape matchings within consecutive frames, which are propagated to achieve effective long-term consistency. Experiments show that the proposed method outperforms previous state-of-the-art methods both qualitatively and quantitatively, and generates high-quality and controllable artistic text. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
42. Relation-Based Associative Joint Location for Human Pose Estimation in Videos.
- Author
-
Dang, Yonghao, Yin, Jianqin, and Zhang, Shaojie
- Subjects
- *
DEEP learning , *JOINTS (Anatomy) , *MACHINE learning , *VIDEOS , *FEATURE extraction - Abstract
Video-based human pose estimation (VHPE) is a vital yet challenging task. While deep learning algorithms have made tremendous progress in VHPE, many of these approaches implicitly model the long-range interaction between joints by expanding the receptive field of the convolution or designing a graph manually. Unlike prior methods, we design a lightweight and plug-and-play joint relation extractor (JRE) to explicitly and automatically model the associative relationship between joints. The JRE takes the pseudo heatmaps of joints as input and calculates their similarity. In this way, the JRE can flexibly learn the correlation between any two joints, allowing it to learn the rich spatial configuration of human poses. Furthermore, the JRE can infer invisible joints according to the correlation between joints, which is beneficial for locating occluded joints. Then, combined with temporal semantic continuity modeling, we propose a Relation-based Pose Semantics Transfer Network (RPSTN) for video-based human pose estimation. Specifically, to capture the temporal dynamics of poses, the pose semantic information of the current frame is transferred to the next with a joint relation guided pose semantics propagator (JRPSP). The JRPSP can transfer the pose semantic features from a non-occluded frame to an occluded frame. The proposed RPSTN achieves state-of-the-art or competitive results on the video-based Penn Action, Sub-JHMDB, PoseTrack2018, and HiEve datasets. Moreover, the proposed JRE improves the performance of backbones on the image-based COCO2017 dataset. Code is available at https://github.com/YHDang/pose-estimation. [ABSTRACT FROM AUTHOR]
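The JRE's core operation, similarity between per-joint pseudo heatmaps, can be sketched without any learning (illustrative only: `joint_relation_matrix` is my own name, and the paper's extractor is learned rather than a fixed cosine similarity):

```python
import numpy as np

def joint_relation_matrix(heatmaps):
    """Pairwise cosine similarity between per-joint pseudo heatmaps.

    heatmaps: (J, H, W) array, one heatmap per joint.
    Returns a (J, J) similarity matrix, a fixed stand-in for the
    relations a learned extractor would produce.
    """
    J = heatmaps.shape[0]
    flat = heatmaps.reshape(J, -1).astype(float)
    unit = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-9)
    return unit @ unit.T

def gauss(h, w, cy, cx, s=1.5):
    """Gaussian blob heatmap centered at (cy, cx)."""
    y, x = np.mgrid[0:h, 0:w]
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * s ** 2))

# Two nearby joint heatmaps and one far-away joint heatmap.
maps = np.stack([gauss(16, 16, 4, 4),
                 gauss(16, 16, 5, 4),
                 gauss(16, 16, 12, 12)])
R = joint_relation_matrix(maps)
assert R[0, 1] > R[0, 2]   # nearby joints are more strongly related
```

A high entry in such a matrix is what lets correlated, visible joints vote for the location of an occluded one.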
- Published
- 2022
- Full Text
- View/download PDF
43. Video Super-Resolution via a Spatio-Temporal Alignment Network.
- Author
-
Wen, Weilei, Ren, Wenqi, Shi, Yinghuan, Nie, Yunfeng, Zhang, Jingang, and Cao, Xiaochun
- Subjects
- *
CONVOLUTIONAL neural networks , *ADAPTIVE filters , *OPTICAL flow - Abstract
Deep convolutional neural network based video super-resolution (SR) models have achieved significant progress in recent years. Existing deep video SR methods usually employ optical flow to warp the neighboring frames for temporal alignment. However, accurate estimation of optical flow is quite difficult, which tends to produce artifacts in the super-resolved results. To address this problem, we propose a novel end-to-end deep convolutional network that dynamically generates spatially adaptive filters for alignment, which are constituted by the local spatio-temporal channels of each pixel. Our method avoids explicit motion compensation and utilizes spatio-temporal adaptive filters to perform alignment, which effectively fuses multi-frame information and improves the temporal consistency of the video. Capitalizing on the proposed adaptive filters, we develop a reconstruction network that takes the aligned frames as input to restore the high-resolution frames. In addition, we employ residual modules embedded with channel attention as the basic unit to extract more informative features for video SR. Both quantitative and qualitative evaluation results on three public video datasets demonstrate that the proposed method performs favorably against state-of-the-art super-resolution methods in terms of clearness and texture details. [ABSTRACT FROM AUTHOR]
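The idea of replacing flow-based warping with per-pixel dynamic filters can be sketched as follows (an illustrative stand-in, not the paper's network: here the filters are specified by hand, whereas in the method they are predicted by the network from spatio-temporal features):

```python
import numpy as np

def apply_adaptive_filters(frame, filters, k=3):
    """Apply a different k x k filter at every pixel (dynamic filtering).

    frame:   (H, W) single-channel neighboring frame to align.
    filters: (H, W, k*k) per-pixel filter weights; in the paper these
             are network outputs that replace explicit motion compensation.
    """
    H, W = frame.shape
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.zeros((H, W), dtype=float)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k].reshape(-1)
            out[y, x] = patch @ filters[y, x]
    return out

# A filter bank that selects each pixel's right-hand neighbor realigns
# a frame shifted one pixel to the right, without estimating any flow.
H = W = 8
frame = np.zeros((H, W))
frame[:, 3] = 1.0                            # vertical edge at column 3
shift_left = np.zeros((H, W, 9))
shift_left[..., 5] = 1.0                     # index 5 = (dy=0, dx=+1)
aligned = apply_adaptive_filters(frame, shift_left)
assert aligned[:, 2].sum() == H              # edge moved to column 2
```

Because the filter can differ per pixel, the same mechanism can express locally varying motion that a single global warp cannot.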
- Published
- 2022
- Full Text
- View/download PDF
44. Spatial and temporal consistency learning for monocular 6D pose estimation.
- Author
-
Zhang, Hong-Bo, Liang, Jia-Yu, Hong, Jia-Xin, Lei, Qing, Liu, Jing-Hua, and Du, Ji-Xiang
- Subjects
- *
MONOCULARS , *POSE estimation (Computer vision) , *VISUAL fields , *COMPUTER vision , *LEARNING strategies - Abstract
Monocular 6D pose estimation is a challenging task in the fields of computer vision and robotics. Many previous works only input the cropped image of a single object during training and inference, aiming to remove the noise from non-object regions. However, most of these methods ignore the viewpoint and spatial relationships of objects in the scene, which are crucial for accurate pose estimation of the camera. To address this issue, this paper proposes a novel multi-view and multi-object based learning strategy for monocular 6D pose estimation, which involves the consistency of object coordinates for the same object at different viewpoints and the consistency of world coordinates for different objects in the same space. In the proposed method, spatial and temporal groups are generated to train the monocular 6D pose estimation network. Due to camera motion, scene images taken at different times can be regarded as images captured from different viewpoints. Therefore, a temporal consistency loss is designed to constrain the relationship of the same object at different viewpoints, while a spatial consistency loss is designed to constrain the relationship of different objects in the same space. Finally, the proposed method is verified on public datasets. Experimental results show that the proposed method is accurate, robust, and outperforms similar state-of-the-art approaches. • A novel multi-view and multi-object framework for monocular 6D pose estimation. • Spatial and temporal groups are generated to train the monocular 6D pose estimation network. • A pairwise learning strategy is proposed for the MVMO-based monocular 6D pose estimation methods. [ABSTRACT FROM AUTHOR]
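A temporal consistency constraint of this kind, agreement between the poses predicted for the same static object in two views related by a known camera motion, might be sketched like this (a hypothetical form of the loss, not necessarily the paper's exact formulation; all names are illustrative):

```python
import numpy as np

def temporal_consistency_loss(R1, t1, R2, t2, R_rel, t_rel, pts_obj):
    """Penalize disagreement between object poses predicted in two views.

    (R1, t1), (R2, t2): predicted object-to-camera poses in views 1 and 2.
    (R_rel, t_rel):     known camera motion mapping view-1 to view-2 coords.
    pts_obj:            (N, 3) sample points in the object frame.
    """
    in_cam1 = pts_obj @ R1.T + t1            # object points seen by cam 1
    via_motion = in_cam1 @ R_rel.T + t_rel   # carried into the cam-2 frame
    in_cam2 = pts_obj @ R2.T + t2            # directly predicted in cam 2
    return float(np.mean(np.sum((via_motion - in_cam2) ** 2, axis=1)))

# If the two predictions agree with the camera motion, the loss is zero.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])            # 90-degree rotation about z
R1, t1 = np.eye(3), np.array([0.0, 0.0, 5.0])
R_rel, t_rel = Rz, np.array([1.0, 0.0, 0.0])
R2, t2 = Rz @ R1, Rz @ t1 + t_rel            # consistent second prediction
pts = np.random.default_rng(0).normal(size=(10, 3))
print(temporal_consistency_loss(R1, t1, R2, t2, R_rel, t_rel, pts))  # → 0.0
```

During training, minimizing such a term over temporal groups pushes per-frame pose predictions toward a single, motion-consistent explanation of the scene.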
- Published
- 2024
- Full Text
- View/download PDF
45. Depth-Aware Neural Style Transfer for Videos
- Author
-
Eleftherios Ioannou and Steve Maddock
- Subjects
neural style transfer ,deep learning ,depth estimation ,temporal consistency ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
Temporal consistency and content preservation are the prominent challenges in artistic video style transfer. To address these challenges, we present a technique that utilizes depth data and we demonstrate this on real-world videos from the web, as well as on a standard video dataset of three-dimensional computer-generated content. Our algorithm employs an image-transformation network combined with a depth encoder network for stylizing video sequences. For improved global structure preservation and temporal stability, the depth encoder network encodes ground-truth depth information which is fused into the stylization network. To further enforce temporal coherence, we employ ConvLSTM layers in the encoder, and a loss function based on calculated depth information for the output frames is also used. We show that our approach is capable of producing stylized videos with improved temporal consistency compared to state-of-the-art methods whilst also successfully transferring the artistic style of a target painting.
- Published
- 2023
- Full Text
- View/download PDF
46. Context-Aware Temporal Knowledge Graph Embedding
- Author
-
Liu, Yu, Hua, Wen, Xin, Kexuan, Zhou, Xiaofang, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Cheng, Reynold, editor, Mamoulis, Nikos, editor, Sun, Yizhou, editor, and Huang, Xin, editor
- Published
- 2019
- Full Text
- View/download PDF
47. Learning for Video Super-Resolution Through HR Optical Flow Estimation
- Author
-
Wang, Longguang, Guo, Yulan, Lin, Zaiping, Deng, Xinpu, An, Wei, Hutchison, David, Editorial Board Member, Kanade, Takeo, Editorial Board Member, Kittler, Josef, Editorial Board Member, Kleinberg, Jon M., Editorial Board Member, Mattern, Friedemann, Editorial Board Member, Mitchell, John C., Editorial Board Member, Naor, Moni, Editorial Board Member, Pandu Rangan, C., Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Terzopoulos, Demetri, Editorial Board Member, Tygar, Doug, Editorial Board Member, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Jawahar, C. V., editor, Li, Hongdong, editor, Mori, Greg, editor, and Schindler, Konrad, editor
- Published
- 2019
- Full Text
- View/download PDF
48. Myocardial Segmentation of Cardiac MRI Sequences With Temporal Consistency for Coronary Artery Disease Diagnosis
- Author
-
Yutian Chen, Wen Xie, Jiawei Zhang, Hailong Qiu, Dewen Zeng, Yiyu Shi, Haiyun Yuan, Jian Zhuang, Qianjun Jia, Yanchun Zhang, Yuhao Dong, Meiping Huang, and Xiaowei Xu
- Subjects
myocardial segmentation ,MRI ,cardiac sequences ,temporal consistency ,coronary artery disease ,diagnosis ,Diseases of the circulatory (Cardiovascular) system ,RC666-701 - Abstract
Coronary artery disease (CAD) is the most common cause of death globally, and its diagnosis is usually based on manual myocardial (MYO) segmentation of MRI sequences. As manual segmentation is tedious, time-consuming, and has low replicability, automatic MYO segmentation using machine learning techniques has been widely explored recently. However, almost all existing methods treat the input MRI sequences independently, which fails to capture the temporal information between sequences, e.g., the shape and location of the myocardium over time. In this article, we propose a MYO segmentation framework for sequences of cardiac MRI (CMR) images of the left ventricular (LV) cavity, right ventricular (RV) cavity, and myocardium. Specifically, we propose to combine convolutional neural networks and recurrent neural networks to incorporate temporal information between sequences and ensure temporal consistency. We evaluated our framework on the automated cardiac diagnosis challenge (ACDC) dataset. The experimental results demonstrate that our framework can improve segmentation accuracy by up to 2% in the Dice coefficient.
- Published
- 2022
- Full Text
- View/download PDF
49. Analysis of temporal consistency management protocols for resource-constrained multi-agent systems.
- Author
-
Limame, Mohamed, Henriet, Julien, Lang, Christophe, and Marilleau, Nicolas
- Subjects
INFORMATION sharing ,MULTIAGENT systems ,COMPARATIVE studies - Abstract
Multi-Agent Systems (MAS) are distributed systems composed of a set of connected nodes that communicate and share data. Since nodes do not have access to a central clock, the consistency of shared data is one of the important issues raised by this type of system. Approaches to establishing temporal consistency of these shared data have been designed and implemented in the past. This paper presents and analyses existing protocols for managing temporal consistency in order to identify those that are in line with the specificities of weakly adjoining MAS with limited resources and connectivity. We propose a classification and a comparative analysis of these protocols for use within this category of MAS. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
50. When Do Offenders Commit Crime? An Analysis of Temporal Consistency in Individual Offending Patterns.
- Author
-
van Sleeuwen, Sabine E. M., Steenbeek, Wouter, and Ruiter, Stijn
- Subjects
- *
CRIMINALS , *RECIDIVISTS , *CRIME , *RECIDIVISM , *PREDICTIVE policing , *CRIMINAL methods , *CRIME analysis - Abstract
Objectives: Building on Hägerstrand's time geography, we expect temporal consistency in individual offending behavior. We hypothesize that repeat offenders commit offenses at similar times of day and week. In addition, we expect stronger temporal consistency for crimes of the same type and for crimes committed within a shorter time span. Method: We use police-recorded crime data on 28,274 repeat offenders who committed 152,180 offenses between 1996 and 2009 in the greater The Hague area in the Netherlands. We use a Monte Carlo permutation procedure to compare the overall level of temporal consistency observed in the data to the temporal consistency that is to be expected given the overall temporal distribution of crime. Results: Repeat offenders show strong temporal consistency: they commit their crimes at more similar hours of day and week than expected. Moreover, the observed temporal consistency patterns are indeed stronger for offenses of the same type of crime and when less time has elapsed between the offenses, especially for offenses committed within a month after the prior offense. Discussion: The results are consistent with offenders having recurring rhythms that shape their temporal crime pattern. These findings might prove valuable for improving predictive policing methods and crime linkage analysis as well as interventions to reduce recidivism. [ABSTRACT FROM AUTHOR]
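The Monte Carlo permutation logic described in the Method section can be sketched as follows (illustrative code with hypothetical function names, using circular hour-of-day gaps as a simple within-offender consistency measure rather than the authors' exact statistic):

```python
import numpy as np

def circular_hour_gap(h1, h2):
    """Smallest distance between two hours on a 24-hour clock."""
    d = abs(h1 - h2) % 24
    return min(d, 24 - d)

def mean_within_offender_gap(offense_hours):
    """Average circular gap between consecutive offenses per offender."""
    gaps = [circular_hour_gap(a, b)
            for hours in offense_hours
            for a, b in zip(hours, hours[1:])]
    return float(np.mean(gaps))

def permutation_p_value(offense_hours, n_perm=2000, seed=0):
    """Monte Carlo test: is the observed within-offender gap smaller
    than expected if offense hours were exchangeable across offenders?"""
    rng = np.random.default_rng(seed)
    observed = mean_within_offender_gap(offense_hours)
    sizes = [len(h) for h in offense_hours]
    pool = np.concatenate([np.asarray(h) for h in offense_hours])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pool)                       # break offender identity
        shuffled, start = [], 0
        for s in sizes:
            shuffled.append(pool[start:start + s].tolist())
            start += s
        if mean_within_offender_gap(shuffled) <= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Two "night" offenders and two "afternoon" offenders: strong consistency.
hours = [[23, 1, 0, 2], [22, 23, 1, 0], [14, 15, 13, 14], [15, 14, 16, 15]]
p = permutation_p_value(hours)
print(p)  # small p: within-offender consistency beyond chance
```

As in the study, the permutation distribution respects the overall temporal distribution of crime, so a small p-value isolates offender-level consistency rather than population-level rhythms.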
- Published
- 2021
- Full Text
- View/download PDF