Search Results
1,136 results for "Changsheng Xu"
52. Weakly-supervised Video Scene Graph Generation via Unbiased Cross-modal Learning.
53. Client-Adaptive Cross-Model Reconstruction Network for Modality-Incomplete Multimodal Federated Learning.
54. AffectFAL: Federated Active Affective Computing with Non-IID Data.
55. C2MR: Continual Cross-Modal Retrieval for Streaming Multi-modal Data.
56. mPLUG-Octopus: The Versatile Assistant Empowered by A Modularized End-to-End Multimodal LLM.
57. Quantification of Artist Representativity within an Art Movement.
58. Fine-grained Temporal Contrastive Learning for Weakly-supervised Temporal Action Localization.
59. StyTr²: Image Style Transfer with Transformers.
60. Dynamic Scene Graph Generation via Anticipatory Pre-training.
61. DF-GAN: A Simple and Effective Baseline for Text-to-Image Synthesis.
62. Relative Alignment Network for Source-Free Multimodal Video Domain Adaptation.
63. Comprehensive Relationship Reasoning for Composed Query Based Image Retrieval.
64. Feeling Without Sharing: A Federated Video Emotion Recognition Framework Via Privacy-Agnostic Hybrid Aggregation.
65. Adaptive Anti-Bottleneck Multi-Modal Graph Learning Network for Personalized Micro-video Recommendation.
66. Attribute-guided Dynamic Routing Graph Network for Transductive Few-shot Learning.
67. Draw Your Art Dream: Diverse Digital Art Synthesis with Multimodal Guided Diffusion.
68. Adaptive Transformer-Based Conditioned Variational Autoencoder for Incomplete Social Event Classification.
69. MMT: Image-guided Story Ending Generation with Multimodal Memory Transformer.
70. Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning.
71. Multi-Modal Learning with Text Merging for TextVQA.
72. Dual-Evidential Learning for Weakly-supervised Temporal Action Localization.
73. Cross-Modal Federated Human Activity Recognition via Modality-Agnostic and Modality-Specific Representation Learning.
74. Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer.
75. Robustly Recognizing Irregular Scene Text by Rectifying Principle Irregularities.
76. Multi-modal Queried Object Detection in the Wild.
77. Diving Into The Relations: Leveraging Semantic and Visual Structures For Video Moment Retrieval.
78. Meta-Learning Causal Feature Selection for Stable Prediction.
79. Fast Video Moment Retrieval.
80. Active Universal Domain Adaptation.
81. Text Style Transfer With Decorative Elements.
82. Global Relation-Aware Attention Network for Image-Text Retrieval.
83. ECKPN: Explicit Class Knowledge Propagation Network for Transductive Few-Shot Learning.
84. Unveiling the Potential of Structure Preserving for Weakly Supervised Object Localization.
85. Hierarchical Multi-Task Learning for Diagram Question Answering with Multi-Modal Transformer.
86. Weakly-Supervised Video Object Grounding via Stable Context Learning.
87. Multi-Level Counterfactual Contrast for Visual Commonsense Reasoning.
88. Efficient Graph Deep Learning in TensorFlow with tf_geometric.
89. Zero-shot Video Emotion Recognition via Multimodal Protagonist-aware Transformer Network.
90. Multimodal Global Relation Knowledge Distillation for Egocentric Action Anticipation.
91. Few-shot Egocentric Multimodal Activity Recognition.
92. Arbitrary Video Style Transfer via Multi-Channel Correlation.
93. Dual Adversarial Graph Neural Networks for Multi-label Cross-modal Retrieval.
94. Hierarchical Multi-modal Contextual Attention Network for Fake News Detection.
95. Category-Level Adversarial Self-Ensembling for Domain Adaptation.
96. Multi-attribute Guided Painting Generation.
97. Fake News Detection via Knowledge-driven Multimodal Graph Convolutional Networks.
98. Dynamic Refinement Network for Oriented and Densely Packed Object Detection.
99. Joint Attribute Manipulation and Modality Alignment Learning for Composing Text and Image to Image Retrieval.
100. Multi-modal Attentive Graph Pooling Model for Community Question Answer Matching.