347 results for "Adversarial"
Search Results
52. TAC-GAIL: A Multi-modal Imitation Learning Method
- Author
-
Zhu, Jiacheng, Jiang, Chong, Yang, Haiqin, editor, Pasupa, Kitsuchart, editor, Leung, Andrew Chi-Sing, editor, Kwok, James T., editor, Chan, Jonathan H., editor, and King, Irwin, editor
- Published
- 2020
- Full Text
- View/download PDF
53. Generative Adversarial-Synergetic Networks for Anomaly Detection
- Author
-
Li, Hongjun, Li, Chaobo, Zhou, Ze, Lu, Yue, editor, Vincent, Nicole, editor, Yuen, Pong Chi, editor, Zheng, Wei-Shi, editor, Cheriet, Farida, editor, and Suen, Ching Y., editor
- Published
- 2020
- Full Text
- View/download PDF
54. Conditional Image Repainting via Semantic Bridge and Piecewise Value Function
- Author
-
Weng, Shuchen, Li, Wenbo, Li, Dawei, Jin, Hongxia, Shi, Boxin, Vedaldi, Andrea, editor, Bischof, Horst, editor, Brox, Thomas, editor, and Frahm, Jan-Michael, editor
- Published
- 2020
- Full Text
- View/download PDF
55. Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers.
- Author
-
Korkmaz, Yilmaz, Dar, Salman U. H., Yurt, Mahmut, Ozbey, Muzaffer, and Cukur, Tolga
- Subjects
MAGNETIC resonance imaging, LATENT variables
- Abstract
Supervised reconstruction models are characteristically trained on matched pairs of undersampled and fully-sampled data to capture an MRI prior, along with supervision regarding the imaging operator to enforce data consistency. To reduce supervision requirements, the recent deep image prior framework instead conjoins untrained MRI priors with the imaging operator during inference. Yet, canonical convolutional architectures are suboptimal in capturing long-range relationships, and priors based on randomly initialized networks may yield suboptimal performance. To address these limitations, here we introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER). SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images. During pre-training, this unconditional network learns a high-quality MRI prior in an unsupervised generative modeling task. During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency to undersampled data. Comprehensive experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against state-of-the-art unsupervised methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
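The zero-shot reconstruction step described in the abstract above (a pretrained adversarial prior combined with the imaging operator only at inference time) can be illustrated with a minimal sketch. This is not the SLATER implementation: the generator interface (`generator.latent_dim`, a coil-combined image output), the single-coil FFT imaging operator, and the optimizer settings are assumptions for illustration, and only the latent inputs are optimized here, whereas the abstract also describes optimizing the prior itself.

```python
# Hedged sketch of zero-shot reconstruction with a pretrained generative prior.
import torch

def zero_shot_reconstruct(generator, y_us, mask, n_iters=500, lr=1e-2):
    """Optimize the latent input of a frozen pretrained generator so that the
    generated image agrees with the undersampled k-space measurements y_us."""
    device = y_us.device
    z = torch.randn(1, generator.latent_dim, device=device, requires_grad=True)  # assumed attribute
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        x = generator(z)                              # coil-combined image estimate
        k = torch.fft.fft2(x)                         # assumed imaging operator: 2D FFT ...
        loss = ((mask * k - y_us).abs() ** 2).mean()  # ... masked to the acquired samples
        loss.backward()
        opt.step()
    return generator(z).detach()
```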
56. Crafting Adversarial Perturbations via Transformed Image Component Swapping.
- Author
-
Agarwal, Akshay, Ratha, Nalini, Vatsa, Mayank, and Singh, Richa
- Subjects
IMAGE recognition (Computer vision), DATABASES, DEEP learning
- Abstract
Adversarial attacks have been demonstrated to fool deep classification networks. These attacks have two key characteristics: first, the perturbations are mostly additive noise carefully crafted using the deep neural network itself; second, the noise is added to the whole image rather than treating the image as a combination of the components from which it is made. Motivated by these observations, in this research we first study the role of various image components and their impact on image classification. These manipulations require neither knowledge of the network nor external noise to function effectively and hence are among the most practical options for real-world attacks. Based on the significance of particular image components, we also propose a transferable adversarial attack against unseen deep networks. The proposed attack uses the projected gradient descent strategy to add the adversarial perturbation to the manipulated component image. Experiments are conducted on a wide range of networks and four databases, including ImageNet and CIFAR-100, and show that the proposed attack achieves better transferability, giving the attacker an upper hand. On the ImageNet database, the success rate of the proposed attack is up to 88.5%, while the current state-of-the-art attack success rate on the database is 53.8%. We further test the resiliency of the attack against one of the most successful defenses, namely adversarial training, to measure its strength. The comparison with several challenging attacks shows that (i) the proposed attack has a higher transferability rate against multiple unseen networks and (ii) its impact is hard to mitigate. We claim that, based on this understanding of image components, the proposed research identifies a new adversarial attack that is not handled by current defense mechanisms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
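The abstract above states that the attack adds its perturbation to a manipulated component image via projected gradient descent. The component-manipulation stage is specific to the paper and not reproduced here; the following is a minimal sketch of the standard L-infinity PGD loop such an attack could build on, with illustrative step size and budget.

```python
# Hedged sketch of an L-infinity PGD inner loop (not the paper's full attack).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iteratively ascend the classification loss and project back into the eps-ball."""
    x_adv = (x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # gradient-ascent step
            x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)  # project to the eps-ball
            x_adv = x_adv.clamp(0, 1)                         # keep a valid pixel range
    return x_adv.detach()
```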
57. Intelligent networking in adversarial environment: challenges and opportunities.
- Author
-
Zhao, Yi, Xu, Ke, Li, Qi, Wang, Haiyang, Wang, Dan, and Zhu, Min
- Abstract
Although deep learning technologies have been widely exploited in many fields, they are vulnerable to adversarial attacks by adding small perturbations to legitimate inputs to fool targeted models. However, few studies have focused on intelligent networking in such an adversarial environment, which can pose serious security threats. In fact, while challenging intelligent networking, adversarial environments also bring about opportunities. In this paper, we, for the first time, simultaneously analyze the challenges and opportunities that the adversarial environment brings to intelligent networking. Specifically, we focus on challenges that the adversarial environment will pose on the existing intelligent networking. Furthermore, we investigate frameworks and approaches that combine adversarial machine learning with intelligent networking to solve the existing deficiencies of intelligent networking. Finally, we summarize the issues, including opportunities and challenges, which can allow researchers to focus on intelligent networking in adversarial environments. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
58. Learning time-aware features for action quality assessment.
- Author
-
Zhang, Yu, Xiong, Wei, and Mi, Siya
- Subjects
VIDEO excerpts, HUMAN behavior, SPORTS events, TASK performance, REGRESSION analysis
- Abstract
• We propose a time-aware (TA) attention module to capture the relationship between different video clips. • We introduce an adversarial loss to ensure stable and effective model learning. • Our method achieves competitive action evaluation results on the MTL-AQA dataset. Action quality assessment (AQA) is the task of assessing the performance of a human action, which can be widely used in many real-world scenarios such as sports events. Current AQA methods generally extract features from the video and perform regression analysis to obtain the action quality score. In this process, aggregated video features may not reflect the different stages of an action, which are important for judging whether an action is good or not. To address this issue, we propose to divide the video into clips and learn the relationship between them, which may capture the action changes needed for accurate assessment. A time-aware (TA) attention mechanism is used to model this relationship. In the experiments, our proposed method achieves promising results on the MTL-AQA dataset compared with existing AQA methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
59. Optimal Transport-Based Distributionally Robust Optimization: Structural Properties and Iterative Schemes.
- Author
-
Blanchet, Jose, Murthy, Karthyek, and Zhang, Fan
- Subjects
ROBUST optimization, STRUCTURAL optimization, COST functions, TRANSPORTATION costs
- Abstract
We consider optimal transport-based distributionally robust optimization (DRO) problems with locally strongly convex transport cost functions and affine decision rules. Under conventional convexity assumptions on the underlying loss function, we obtain structural results about the value function, the optimal policy, and the worst-case optimal transport adversarial model. These results expose a rich structure embedded in the DRO problem (e.g., strong convexity even if the non-DRO problem is not strongly convex, a suitable scaling of the Lagrangian for the DRO constraint, etc., which are crucial for the design of efficient algorithms). As a consequence of these results, one can develop efficient optimization procedures that have the same sample and iteration complexity as a natural non-DRO benchmark algorithm, such as stochastic gradient descent. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
60. Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT
- Author
-
Pavlos Papadopoulos, Oliver Thornewill von Essen, Nikolaos Pitropakis, Christos Chrysoulas, Alexios Mylonas, and William J. Buchanan
- Subjects
adversarial, machine learning, network IDS, Internet of Things, Technology (General), T1-995
- Abstract
As the internet continues to be populated with new devices and emerging technologies, the attack surface grows exponentially. Technology is shifting towards a profit-driven Internet of Things market where security is an afterthought. Traditional defense approaches are no longer sufficient to detect both known and unknown attacks with high accuracy. Machine learning intrusion detection systems have proven successful in identifying unknown attacks with high precision. Nevertheless, machine learning models are also vulnerable to attacks. Adversarial examples can be used to evaluate the robustness of a designed model before it is deployed, and using them is critical to creating a robust model designed for an adversarial environment. Our work evaluates the robustness of both traditional machine learning and deep learning models using the Bot-IoT dataset. Our methodology included two main approaches: first, label poisoning, used to cause incorrect classification by the model; and second, the fast gradient sign method, used to evade detection measures. The experiments demonstrated that an attacker could manipulate or circumvent detection with significant probability.
- Published
- 2021
- Full Text
- View/download PDF
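As a companion to the abstract above, here is a minimal sketch of the two ingredients it names: the fast gradient sign method for evasion and label poisoning. The IDS model interface, binary 0/1 labels, and the eps and flip-fraction values are assumptions, not the paper's settings.

```python
# Hedged sketch of FGSM evasion and label poisoning against an ML-based IDS.
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, eps=0.1):
    """Perturb flow features in the direction that increases the IDS loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def poison_labels(y, flip_fraction=0.2):
    """Label poisoning: randomly flip a fraction of binary attack/benign labels."""
    y = y.clone()
    idx = torch.randperm(len(y))[: int(flip_fraction * len(y))]
    y[idx] = 1 - y[idx]
    return y
```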
61. Investigating strategies towards adversarially robust time series classification.
- Author
-
Abdu-Aguye, Mubarak G., Gomaa, Walid, Makihara, Yasushi, and Yagi, Yasushi
- Subjects
COMPUTER vision, CLASSIFICATION, AXIOMS, TIME series analysis
- Abstract
• Classifying time series with Euclidean distance is robust against adversarial attacks. • Time series classifiers using fixed kernels are robust against adversarial attacks. • This is empirically proven on 85 datasets for 2 state-of-the-art adversarial attacks. Deep neural networks have been shown to be vulnerable to specifically-crafted perturbations designed to affect their predictive performance. Such perturbations, formally termed 'adversarial attacks', have been designed for various domains in the literature, most prominently in computer vision and, more recently, in time series classification. Therefore, there is a need to derive robust strategies to defend deep networks from such attacks. In this work, we propose to establish axioms of robustness against adversarial attacks in time series classification. We subsequently design a suitable experimental methodology and empirically validate the hypotheses put forth. Results obtained from our investigations confirm the proposed hypotheses and provide a strong empirical baseline with a view to mitigating the effects of adversarial attacks in deep time series classification. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
62. Journalists' Adversarial Questions in Iran and the United States Political Interviews
- Author
-
Maryam Farnia and Nasrin Abedian
- Subjects
journalists’ questions, adversarial, political interview, press conference, Language and Literature, Language. Linguistic theory. Comparative grammar, P101-410
- Abstract
The present study, using a descriptive approach and quantitative analysis, analyzed questions in political interviews in Iran and the United States in order to show what types of adversarial questions journalists use and whether Iranian and American journalists differ in their use of adversarialness. To this end, questions addressed to the presidents of Iran (i.e. Presidents AhmadiNejad and Roohani) and of the US (i.e. Presidents Obama and Trump), posed by around 70 journalists (35 in each corpus) at political press conferences, were randomly collected from 2012 to 2017. The data were then analyzed based on Clayman et al.’s (2006) framework to examine how language is used to express adversarial questions. The findings showed that preface tilt was used significantly more in the American corpus, while other-referencing frames and global adversarialness were used significantly more in the Iranian corpus. Moreover, in both corpora, negative questions were the least frequently used type of question, and declarative questions were absent from the American corpus.
- Published
- 2020
- Full Text
- View/download PDF
63. Court Activity in the Process of Proof under Adversarial Conditions in Civil Proceedings
- Author
-
A. D. Dzumatov
- Subjects
court, adversarial, proving, reclamation of evidence, collection of evidence, Law
- Abstract
The article gives a brief historical description of the role of the court in the collection of evidence and presents arguments in favor of strengthening the court's role in the recovery of evidence. In particular, the author substantiates the conclusion that it is necessary to impose on the court the obligation to independently demand evidence in civil proceedings. The author concludes that strengthening the role of the court in the process of proof in civil proceedings is dictated by the many flaws in civil procedural law, the unequal social status of participants in the process, and decisions made not in accordance with the actual circumstances of the case.
- Published
- 2020
- Full Text
- View/download PDF
64. Achieving Fairness with Decision Trees: An Adversarial Approach
- Author
-
Vincent Grari, Boris Ruf, Sylvain Lamprier, and Marcin Detyniecki
- Subjects
Fair machine learning, Adversarial, Gradient boosting, Information technology, T58.5-58.64, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Abstract Fair classification has become an important topic in machine learning research. While most bias mitigation strategies focus on neural networks, we noticed a lack of work on fair classifiers based on decision trees even though they have proven very efficient. In an up-to-date comparison of state-of-the-art classification algorithms in tabular data, tree boosting outperforms deep learning (Zhang et al. in Expert Syst Appl 82:128–150, 2017). For this reason, we have developed a novel approach of adversarial gradient tree boosting. The objective of the algorithm is to predict the output Y with gradient tree boosting while minimizing the ability of an adversarial neural network to predict the sensitive attribute S. The approach incorporates at each iteration the gradient of the neural network directly in the gradient tree boosting. We empirically assess our approach on four popular data sets and compare against state-of-the-art algorithms. The results show that our algorithm achieves a higher accuracy while obtaining the same level of fairness, as measured using a set of different common fairness definitions.
- Published
- 2020
- Full Text
- View/download PDF
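A simplified sketch of the adversarial gradient tree boosting idea described above: each boosting round fits a tree to a pseudo-residual that combines the task gradient with the gradient of an adversary trying to predict the sensitive attribute S from the boosted score. The paper uses a neural-network adversary whose gradient enters each iteration; the logistic-regression adversary, binary targets, and hyperparameters below are stand-in assumptions.

```python
# Hedged, simplified sketch of adversarial (fairness-aware) gradient tree boosting.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_gradient_boosting(X, y, s, n_rounds=100, lr=0.1, lam=1.0, max_depth=3):
    """X: features, y: binary target, s: binary sensitive attribute."""
    F = np.zeros(len(y))                                        # boosted score for Y
    trees = []
    for _ in range(n_rounds):
        adv = LogisticRegression().fit(F.reshape(-1, 1), s)     # adversary: predict S from F
        a = adv.coef_[0, 0]
        s_hat = adv.predict_proba(F.reshape(-1, 1))[:, 1]
        # negative gradient of  L_task(F) - lam * L_adv(F)
        residual = (y - sigmoid(F)) + lam * a * (s_hat - s)
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        F += lr * tree.predict(X)
        trees.append(tree)
    return trees
```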
65. Overcoming Long-Term Catastrophic Forgetting Through Adversarial Neural Pruning and Synaptic Consolidation.
- Author
-
Peng, Jian, Tang, Bo, Jiang, Hao, Li, Zhuo, Lei, Yinjie, Lin, Tao, and Li, Haifeng
- Subjects
GENERATIVE adversarial networks, CONVOLUTIONAL neural networks, ARTIFICIAL neural networks, LONG-term potentiation, RIGHT to be forgotten, NEUROPLASTICITY
- Abstract
Enabling a neural network to sequentially learn multiple tasks is of great significance for expanding the applicability of neural networks in real-world applications. However, artificial neural networks face the well-known problem of catastrophic forgetting. What is worse, the degradation of previously learned skills becomes more severe as the task sequence grows, known as long-term catastrophic forgetting. This is due to two facts: first, as the model learns more tasks, the intersection of the low-error parameter subspaces for these tasks becomes smaller or may not even exist; second, when the model learns a new task, the cumulative error keeps increasing as the model tries to protect the parameter configuration of previous tasks from interference. Inspired by the memory consolidation mechanism in mammalian brains with synaptic plasticity, we propose a confrontation mechanism in which Adversarial Neural Pruning and synaptic Consolidation (ANPyC) is used to overcome the long-term catastrophic forgetting issue. The neural pruning acts as long-term depression to prune task-irrelevant parameters, while the novel synaptic consolidation acts as long-term potentiation to strengthen task-relevant parameters. During training, this confrontation achieves a balance in that only crucial parameters remain, and non-significant parameters are freed to learn subsequent tasks. ANPyC avoids forgetting important information and makes the model efficient at learning a large number of tasks. Specifically, the neural pruning iteratively relaxes the current task’s parameter conditions to expand the common parameter subspace of the tasks; the synaptic consolidation strategy, which consists of a structure-aware parameter-importance measurement and an element-wise parameter updating strategy, decreases the cumulative error when learning new tasks. Our approach encourages the synapses to be sparse and polarized, which enables long-term learning and memory. ANPyC exhibits effectiveness and generalization on both image classification and generation tasks with multilayer perceptrons, convolutional neural networks, generative adversarial networks, and variational autoencoders. The full source code is available at https://github.com/GeoX-Lab/ANPyC. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
66. Topic Aware Context Modelling for Dialogue Response Generation
- Author
-
Chen, Dali, Rong, Wenge, Ma, Zhiyuan, Ouyang, Yuanxin, Xiong, Zhang, Gedeon, Tom, editor, Wong, Kok Wai, editor, and Lee, Minho, editor
- Published
- 2019
- Full Text
- View/download PDF
67. Gender Prediction Through Synthetic Resampling of User Profiles Using SeqGANs
- Author
-
Syed, Munira, Marshall, Jermaine, Nigam, Aastha, Chawla, Nitesh V., Tagarelli, Andrea, editor, and Tong, Hanghang, editor
- Published
- 2019
- Full Text
- View/download PDF
68. Non-deterministic Behavior of Ranking-Based Metrics When Evaluating Embeddings
- Author
-
Nicolaou, Anguelos, Dey, Sounak, Christlein, Vincent, Maier, Andreas, Karatzas, Dimosthenis, Kerautret, Bertrand, editor, Colom, Miguel, editor, Lopresti, Daniel, editor, Monasse, Pascal, editor, and Talbot, Hugues, editor
- Published
- 2019
- Full Text
- View/download PDF
69. Group Anomaly Detection Using Deep Generative Models
- Author
-
Chalapathy, Raghavendra, Toth, Edward, Chawla, Sanjay, Berlingerio, Michele, editor, Bonchi, Francesco, editor, Gärtner, Thomas, editor, Hurley, Neil, editor, and Ifrim, Georgiana, editor
- Published
- 2019
- Full Text
- View/download PDF
70. Adversariality in the criminal investigation phase. The lawyer's assistance in the performance of criminal investigation acts.
- Author
-
CĂLIN, Radu Bogdan
- Abstract
The adversarial islands encountered in the criminal investigation phase give evidence administered under adversarial conditions additional reliability and are an expression of the guarantees afforded to the accused person. The study is divided into three chapters, beginning with a general presentation of the adversarial principle, which characterizes the trial phase and is found only sporadically in the criminal investigation phase; in the following section, various concepts and theories regarding the procedures in the criminal investigation phase in which the adversarial principle is applicable are debated. In the last part of the paper, an analysis is performed of the effects of the adversarial principle on the reliability of evidence obtained with its application, the exclusion of evidence obtained in violation of the adversarial principle, and the author's lex ferenda proposals. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
71. Explicit feature disentanglement for visual place recognition across appearance changes.
- Author
-
Tang, Li, Wang, Yue, Tan, Qimeng, and Xiong, Rong
- Subjects
OBJECT recognition (Computer vision), MOBILE robots
- Abstract
In the long-term deployment of mobile robots, changing appearance brings challenges for localization. When a robot travels to the same place or restarts from an existing map, global localization is needed, where place recognition provides coarse position information. For visual sensors, changing appearances such as the transition from day to night and seasonal variation can reduce the performance of a visual place recognition system. To address this problem, we propose to learn domain-unrelated features across extreme changing appearance, where a domain denotes a specific appearance condition, such as a season or a kind of weather. We use an adversarial network with two discriminators to disentangle domain-related features and domain-unrelated features from images, and the domain-unrelated features are used as descriptors in place recognition. Provided images from different domains, our network is trained in a self-supervised manner which does not require correspondences between these domains. Besides, our feature extractors are shared among all domains, making it possible to contain more appearance without increasing model complexity. Qualitative and quantitative results on two toy cases are presented to show that our network can disentangle domain-related and domain-unrelated features from given data. Experiments on three public datasets and one proposed dataset for visual place recognition are conducted to illustrate the performance of our method compared with several typical algorithms. Besides, an ablation study is designed to validate the effectiveness of the introduced discriminators in our network. Additionally, we use a four-domain dataset to verify that the network can extend to multiple domains with one model while achieving similar performance. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
72. Multi-Dimensional Disentangled Representation Learning for Emotion Embedding Generation
- Author
-
Czyzycki, Evan
- Subjects
Artificial intelligence, adversarial, disentanglement, embedding, emotion detection, NLP, style transfer
- Abstract
In the natural language processing (NLP) research community, disentangled representation learning has become commonplace in text style transfer and sentiment analysis. Previous studies have demonstrated the utility of extracting style from text corpora in order to augment context-dependent downstream tasks such as text generation. Within sentiment analysis specifically, disentangled representation learning has been shown to produce latent representations that can be used to improve downstream classification tasks. In this study, we build upon this existing framework by (1) investigating disentangled representation learning in the multidimensional task of emotion detection, (2) testing the robustness of this methodology over varying datasets, and (3) exploring the interpretability of the produced latent representations. We discover that closely following existing disentangled representation learning methods for sentiment analysis in a multi-class setting, performance decreases significantly, and we are unable to effectively distinguish content and style in our learned latent representations. Further work is necessary to determine the effectiveness of style disentanglement for text in multi-class settings using adversarial training.
- Published
- 2022
73. Adversarial Attacks and Defense using Energy-Based Image Models
- Author
-
Mitchell, Jonathan Craig
- Subjects
Computer science, Adversarial, Defense, Energy-Based Model, GAN, Generative Model, MCMC
- Abstract
In this article we briefly review current research in adversarial attacks and defenses and form a basis for a theoretical explanation as to why a generative energy model is the solution to the defense problem as it exists for securing naturally trained classifiers. We further expand on this topic and discuss future efforts toward the use of a generalized adversarial defense framework based on Stochastic Security to defend against the strongest known adversarial attacks. We further expand on this idea and demonstrate that Energy-based models can be extended towards multiple tasks and datasets. Furthermore, we discuss some architectural improvements to the framework that lead to improvements in synthesis and defense (The Hat-EBM and the Fixer). This work lies at the intersection of generative modeling, adversarial defense, and chaotic dynamics.
- Published
- 2022
74. Improving Cross-Corpus Speech Emotion Recognition with Adversarial Discriminative Domain Generalization (ADDoG).
- Author
-
Gideon, John, McInnis, Melvin G, and Provost, Emily Mower
- Abstract
Automatic speech emotion recognition provides computers with critical context to enable user understanding. While methods trained and tested within the same dataset have been shown successful, they often fail when applied to unseen datasets. To address this, recent work has focused on adversarial methods to find more generalized representations of emotional speech. However, many of these methods have issues converging, and only involve datasets collected in laboratory conditions. In this paper, we introduce Adversarial Discriminative Domain Generalization (ADDoG), which follows an easier to train “meet in the middle” approach. The model iteratively moves representations learned for each dataset closer to one another, improving cross-dataset generalization. We also introduce Multiclass ADDoG, or MADDoG, which is able to extend the proposed method to more than two datasets, simultaneously. Our results show consistent convergence for the introduced methods, with significantly improved results when not using labels from the target dataset. We also show how, in most cases, ADDoG and MADDoG can be used to improve upon baseline state-of-the-art methods when target dataset labels are added and in-the-wild data are considered. Even though our experiments focus on cross-corpus speech emotion, these methods could be used to remove unwanted factors of variation in other settings. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
75. SPLASH: Learnable activation functions for improving accuracy and adversarial robustness.
- Author
-
Tavakoli, Mohammadamin, Agostinelli, Forest, and Baldi, Pierre
- Subjects
NONLINEAR functions, ARCHITECTURAL design, HINGES
- Abstract
We introduce SPLASH units, a class of learnable activation functions shown to simultaneously improve the accuracy of deep neural networks while also improving their robustness to adversarial attacks. SPLASH units have both a simple parameterization and maintain the ability to approximate a wide range of non-linear functions. SPLASH units are: (1) continuous; (2) grounded (f (0) = 0); (3) use symmetric hinges; and (4) their hinges are placed at fixed locations which are derived from the data (i.e. no learning required). Compared to nine other learned and fixed activation functions, including ReLU and its variants, SPLASH units show superior performance across three datasets (MNIST, CIFAR-10, and CIFAR-100) and four architectures (LeNet5, All-CNN, ResNet-20, and Network-in-Network). Furthermore, we show that SPLASH units significantly increase the robustness of deep neural networks to adversarial attacks. Our experiments on both black-box and white-box adversarial attacks show that commonly-used architectures, namely LeNet5, All-CNN, Network-in-Network, and ResNet-20, can be up to 31% more robust to adversarial attacks by simply using SPLASH units instead of ReLUs. Finally, we show the benefits of using SPLASH activation functions in bigger architectures designed for non-trivial datasets such as ImageNet. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
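A hedged sketch of a SPLASH-style activation as characterized in the abstract above: continuous, grounded (f(0) = 0), built from symmetric hinges at fixed non-negative locations with learnable coefficients. The hinge locations, initialization, and class name below are illustrative assumptions; the paper derives its hinge placement from the data.

```python
# Hedged sketch of a learnable activation with fixed symmetric hinges.
import torch
import torch.nn as nn

class SplashLike(nn.Module):
    def __init__(self, hinges=(0.0, 1.0, 2.0)):
        super().__init__()
        self.register_buffer("b", torch.tensor(hinges))       # fixed hinge locations (>= 0)
        self.a_pos = nn.Parameter(torch.zeros(len(hinges)))   # coefficients, positive side
        self.a_neg = nn.Parameter(torch.zeros(len(hinges)))   # coefficients, negative side
        with torch.no_grad():
            self.a_pos[0] = 1.0                                # start close to ReLU

    def forward(self, x):
        xe = x.unsqueeze(-1)                                   # broadcast over hinges
        pos = torch.relu(xe - self.b) * self.a_pos
        neg = torch.relu(-xe - self.b) * self.a_neg
        return (pos + neg).sum(dim=-1)                         # f(0) = 0 by construction
```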
76. Domain distribution variation learning via adversarial adaption for helicopter transmission system fault diagnosis.
- Author
-
Sun, Kuangchi, Yin, Aijun, and Lu, Shiao
- Subjects
FAULT diagnosis, HELICOPTERS, DIAGNOSIS methods
- Abstract
Deep learning-based fault diagnosis has attracted widespread attention in machine fault diagnosis. The helicopter is an important means of transport for special purposes, and ensuring its normal operation is a challenging task. Nevertheless, existing research mainly focuses on a single bearing or gear of the gearbox, while there are few reports on intelligent fault diagnosis of the bearings and shafts in a helicopter transmission system. Furthermore, traditional domain adaptation-based fault diagnosis methods assume that the source machine and target machine have the same class distribution. Besides, the latent distribution features of the target-domain data are rarely exploited, and the distributional discrepancy of shared-class samples during domain adaptation with outlier classes is rarely considered. To address these issues, we propose domain distribution variation learning (DDVL) via adversarial adaptation for helicopter transmission system fault diagnosis in this paper. Here, the distributional discrepancy of the partially shared classes is measured by adversarial training during open-set domain adaptation. In particular, a self-supervised learning framework based on pseudo-labels and weight normalization is proposed to exploit the latent distribution features of target data with unknown labels. A case study from a simulated helicopter transmission system is used to verify the effectiveness of DDVL. Our method outperforms other comparison methods in different case studies for helicopter transmission system fault diagnosis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
77. View Decomposition and Adversarial for Semantic Segmentation
- Author
-
Guan, He, Zhang, Zhaoxiang, Geng, Xin, editor, and Kang, Byeong-Ho, editor
- Published
- 2018
- Full Text
- View/download PDF
78. A Step Beyond Generative Multi-adversarial Networks
- Author
-
Singh, Aman, Basu, Anup, editor, and Berretti, Stefano, editor
- Published
- 2018
- Full Text
- View/download PDF
79. Using LIP to Gloss Over Faces in Single-Stage Face Detection Networks
- Author
-
Yang, Siqi, Wiliem, Arnold, Chen, Shaokang, Lovell, Brian C., Ferrari, Vittorio, editor, Hebert, Martial, editor, Sminchisescu, Cristian, editor, and Weiss, Yair, editor
- Published
- 2018
- Full Text
- View/download PDF
80. The principle of equality of arms: a conceptual analysis
- Author
-
Simón Moratto
- Subjects
principle, equality, arms, disadvantage, adversarial, Law in general. Comparative and uniform law. Jurisprudence, K1-7720
- Abstract
The principle of equality of arms is an essential mandate whereby "each party must have a reasonable opportunity to present its case under conditions that do not place it at a disadvantage with respect to its opponent". Despite its well-known and evident importance, this figure has not been the object of deep and serious study in Colombian doctrine and case law. This has allowed an irresponsible and mistaken use of the concept, and hence the need for this article, which aims, without claiming to be exhaustive, to provide greater clarity on this optimization mandate: beginning with a review of its background; continuing with an analysis of each of its essential elements; referring to some categories developed in international jurisdictions that are of great importance for a complete understanding of the principle; setting out the different positions on its foundation, definition, and scope; and differentiating it from other rights and principles with which it is often confused. In this way, the groundwork is laid so that an organized debate about this figure can later be opened, making its optimized and responsible exercise viable.
- Published
- 2021
- Full Text
- View/download PDF
81. Privacy-Net: An Adversarial Approach for Identity-Obfuscated Segmentation of Medical Images.
- Author
-
Kim, Bach Ngoc, Dolz, Jose, Jodoin, Pierre-Marc, and Desrosiers, Christian
- Subjects
COMPUTER-assisted image analysis (Medicine), DIAGNOSTIC imaging, IMAGE analysis, DEEP learning, IMAGE segmentation, MAGNETIC resonance imaging
- Abstract
This paper presents a client/server privacy-preserving network in the context of multicentric medical image analysis. Our approach is based on adversarial learning which encodes images to obfuscate the patient identity while preserving enough information for a target task. Our novel architecture is composed of three components: 1) an encoder network which removes identity-specific features from input medical images, 2) a discriminator network that attempts to identify the subject from the encoded images, 3) a medical image analysis network which analyzes the content of the encoded images (segmentation in our case). By simultaneously fooling the discriminator and optimizing the medical analysis network, the encoder learns to remove privacy-specific features while keeping those essentials for the target task. Our approach is illustrated on the problem of segmenting brain MRI from the large-scale Parkinson Progression Marker Initiative (PPMI) dataset. Using longitudinal data from PPMI, we show that the discriminator learns to heavily distort input images while allowing for highly accurate segmentation results. Our results also demonstrate that an encoder trained on the PPMI dataset can be used for segmenting other datasets, without the need for retraining. The code is made available at: https://github.com/bachkimn/Privacy-Net-An-Adversarial-Approach-forIdentity-Obfuscated-Segmentation-of-MedicalImages [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
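The three-component architecture described above (encoder, identity discriminator, segmentation network) can be summarized with a generic adversarial training step. This is a sketch rather than the paper's code: the module interfaces, the particular adversarial term (a negated cross-entropy), and the weight lam are assumptions.

```python
# Hedged sketch of one adversarial training step for an identity-obfuscating encoder.
import torch
import torch.nn.functional as F

def privacy_preserving_step(encoder, discriminator, segmenter,
                            opt_enc_seg, opt_disc,
                            img, subject_id, seg_target, lam=1.0):
    # 1) update the discriminator: try to identify the subject from the encoded image
    with torch.no_grad():
        code = encoder(img)
    d_loss = F.cross_entropy(discriminator(code), subject_id)
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # 2) update encoder + segmenter: segment well while fooling the discriminator
    code = encoder(img)
    seg_loss = F.cross_entropy(segmenter(code), seg_target)
    adv_loss = -F.cross_entropy(discriminator(code), subject_id)  # maximize D's error
    loss = seg_loss + lam * adv_loss
    opt_enc_seg.zero_grad()
    loss.backward()
    opt_enc_seg.step()
    return seg_loss.item(), d_loss.item()
```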
82. An Adversarial Smart Contract Honeypot in Ethereum.
- Author
-
Yu Han, Tiantian Ji, Zhongru Wang, Hao Liu, Hai Jiang, Wendi Wang, and Xiang Cui
- Subjects
CONTRACTS, BLOCKCHAINS
- Abstract
A smart contract honeypot is a special type of smart contract. This type of contract seems to have an obvious vulnerability in its design: if a user transfers a certain amount of funds to the contract, then the user can withdraw the funds in the contract. However, once users try to take advantage of this seemingly obvious vulnerability, they fall into a real trap, and the user's investment in the contract cannot be retrieved. The honeypot induces other accounts to transfer funds to it, which seriously threatens the security of property on the blockchain. Detection methods for honeypots are available. However, studying how to defend against existing honeypots is insufficient; the new honeypots that may appear in the future must also be anticipated from the perspective of an attacker. Therefore, we propose a type of adversarial honeypot. The code and behavioral features of honeypots are obtained through a comparative analysis of 158,568 non-honeypots and 352 honeypots. To build an adversarial honeypot, we try to hide these features separately and make the honeypot bypass existing detection technology. We construct 18 instances on the basis of the proposed adversarial honeypot and use an open-source honeypot detection tool to detect these instances. The experimental results show that the proposed honeypot bypasses the detection tool with a 100% success ratio. Therefore, this type of honeypot should be given attention, and defensive measures should be proposed as soon as possible. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
83. The principle of adversarial proceedings in preparatory criminal proceedings.
- Author
-
Briatková, Marika
- Subjects
CRIMINAL procedure, CRIMINAL justice system, CRIMINAL law, DECISION making
- Abstract
In the article, the author deals with the issue of the adversariality of criminal proceedings as one of the basic principles of criminal procedure, with a focus on the application of this principle in criminal proceedings. The author concentrates on clarifying the concept of adversariality, following up on its general definition as a principle of criminal procedure. Given that this is a category of broad meaning, the author tries to define it as precisely as possible and does not compare it with the other principles of criminal proceedings, focusing only on the principle of adversariality. Furthermore, the author examines the application of the adversarial principle in selected post-clerical acts of criminal proceedings, with a focus on the exercise of the rights of the accused and on exceptions to the principle of adversariality in preparatory proceedings. Attention is also paid, in the fourth chapter, to violations of the adversarial principle in the preparatory proceedings. [ABSTRACT FROM AUTHOR]
- Published
- 2021
84. Stopping the course of the judicial litigation and dropping it, a study in the Jordanian Civil Procedures Law.
- Author
-
Badran, Feton Ali and Jarrah, Mashal Mufleh
- Subjects
CIVIL procedure, LEGAL procedure, CIVIL law, COMPARATIVE method, ACTIONS & defenses (Law)
- Abstract
This study explains the cases of stopping and dropping litigation and their reasons in the Jordanian Code of Procedure. Its central problem is the position of the Jordanian legislator on the cases of stopping and dropping litigation and the adequacy of the legal texts regulating them in Jordanian legislation. The two researchers followed a descriptive and analytical approach in analyzing the legal texts, particularly the texts of the Jordanian Code of Civil Procedure, the jurisprudence, and the opinions of jurists, as well as a comparative approach whenever necessary in order to enrich the study. The researchers reached a number of findings and recommendations, the most important being that litigation may be intercepted by situations that affect the course of its outcome, such as stopping and dropping. The researchers recommended expanding the study of judicial litigation and of what influences it in the cases of stopping and abrogation, due to its importance in determining the legal positions of the parties. [ABSTRACT FROM AUTHOR]
- Published
- 2021
85. Learning credible DNNs via incorporating prior knowledge and model local explanation.
- Author
-
Du, Mengnan, Liu, Ninghao, Yang, Fan, and Hu, Xia
- Subjects
PRIOR learning, LOCAL knowledge, EXPLANATION, GENERALIZATION
- Abstract
Recent studies have shown that state-of-the-art DNNs are not always credible, despite their impressive performance on the hold-out test set of a variety of tasks. These models tend to exploit dataset shortcuts to make predictions, rather than learn the underlying task. The non-credibility could lead to low generalization, adversarial vulnerability, as well as algorithmic discrimination of the DNN models. In this paper, we propose CREX in order to develop more credible DNNs. The high-level idea of CREX is to encourage DNN models to focus more on evidences that actually matter for the task at hand and to avoid overfitting to data-dependent shortcuts. Specifically, in the DNN training process, CREX directly regularizes the local explanation with expert rationales, i.e., a subset of features highlighted by domain experts as justifications for predictions, to enforce the alignment between local explanations and rationales. Even when rationales are not available, CREX still could be useful by requiring the generated explanations to be sparse. In addition, CREX is widely applicable to different network architectures, including CNN, LSTM and attention model. Experimental results on several text classification datasets demonstrate that CREX could increase the credibility of DNNs. Comprehensive analysis further shows three meaningful improvements of CREX: (1) it significantly increases DNN accuracy on new and previously unseen data beyond test set, (2) it enhances fairness of DNNs in terms of equality of opportunity metric and reduce models' discrimination toward certain demographic group, and (3) it promotes the robustness of DNN models with respect to adversarial attack. These experimental results highlight the advantages of the increased credibility by CREX. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
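A minimal sketch of the explanation-regularization idea described in the abstract above: penalize the part of a local explanation that falls outside the expert rationales. CREX as published may use a different explanation method and penalty; the simple input-gradient attribution, the mask convention, and lam below are assumptions for illustration.

```python
# Hedged sketch of aligning local explanations with expert rationales.
import torch
import torch.nn.functional as F

def crex_style_loss(model, x, y, rationale_mask, lam=0.1):
    """rationale_mask: 1 on features experts marked as relevant, 0 elsewhere (assumed)."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    # local explanation: gradient of the labeled-class score w.r.t. the input
    score = logits.gather(1, y.unsqueeze(1)).sum()
    attribution = torch.autograd.grad(score, x, create_graph=True)[0].abs()
    penalty = (attribution * (1 - rationale_mask)).sum()   # attribution mass outside rationales
    return task_loss + lam * penalty
```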
86. BBAS: Towards large scale effective ensemble adversarial attacks against deep neural network learning.
- Author
-
Shen, Jialie and Robertson, Neil
- Subjects
VIDEO surveillance, BIG data, ENSEMBLE music, NEURAL development, ARTIFICIAL neural networks
- Abstract
Recent decades have witnessed the rapid development of deep neural networks (DNN). As DNN learning becomes more and more important to numerous intelligent systems, ranging from self-driving cars to video surveillance systems, significant research effort has been devoted to exploring how to improve DNN models’ robustness and reliability against adversarial example attacks. Distinct from previous studies, we address the problem of adversarial training with an ensemble-based approach and propose a novel boosting-based black-box attack scheme called BBAS to facilitate highly diverse adversarial example generation. BBAS not only separates example generation from the settings of the trained model but also enhances the diversity of perturbations over the class distribution through the seamless integration of stratified sampling and ensemble adversarial training. This leads to reliable and effective training example selection. To validate and evaluate the scheme from different perspectives, a set of comprehensive tests has been carried out on two large open data sets. Experimental results demonstrate the superiority of our method in terms of effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
87. Combating Adversarial Inputs Using a Predictive-Estimator Network
- Author
-
Orchard, Jeff, Castricato, Louis, Liu, Derong, editor, Xie, Shengli, editor, Li, Yuanqing, editor, Zhao, Dongbin, editor, and El-Alfy, El-Sayed M., editor
- Published
- 2017
- Full Text
- View/download PDF
88. Tsallis-INF: An Optimal Algorithm for Stochastic and Adversarial Bandits.
- Author
-
Zimmert, Julian and Seldin, Yevgeny
- Subjects
STOCHASTIC analysis, ALGORITHMS, BAYES' estimation, PRIOR learning, ENTROPY
- Abstract
We derive an algorithm that achieves the optimal (within constants) pseudo-regret in both adversarial and stochastic multi-armed bandits without prior knowledge of the regime and time horizon. The algorithm is based on online mirror descent (OMD) with Tsallis entropy regularization with power α = 1/2 and reduced-variance loss estimators. More generally, we define an adversarial regime with a self-bounding constraint, which includes the stochastic regime, the stochastically constrained adversarial regime (Wei and Luo, 2018), and the stochastic regime with adversarial corruptions (Lykouris et al., 2018) as special cases, and show that the algorithm achieves a logarithmic regret guarantee in this regime and all of its special cases simultaneously with the optimal regret guarantee in the adversarial regime. The algorithm also achieves adversarial and stochastic optimality in the utility-based dueling bandit setting. We provide an empirical evaluation of the algorithm demonstrating that it significantly outperforms Ucb1 and Exp3 in stochastic environments. We also provide examples of adversarial environments where Ucb1 and Thompson Sampling exhibit almost linear regret, whereas our algorithm suffers only logarithmic regret. To the best of our knowledge, this is the first example demonstrating the vulnerability of Thompson Sampling in adversarial environments. Last but not least, we present a general stochastic analysis and a general adversarial analysis of OMD algorithms with Tsallis entropy regularization for α ∈ [0, 1] and explain the reason why α = 1/2 works best. [ABSTRACT FROM AUTHOR]
- Published
- 2021
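For reference, the abstract above can be read against the generic online-mirror-descent update it builds on, written here with the (negative) Tsallis entropy as regularizer. This is a sketch: the exact learning-rate schedule, constants, and the reduced-variance loss estimator are as specified in the paper, while the plain importance-weighted estimator below is a simplification.

```latex
% Hedged sketch of the OMD update with Tsallis entropy regularization.
\[
  w_{t+1} \;=\; \arg\min_{w \in \Delta_{K-1}}
      \big\langle w,\; \hat{\ell}_t \big\rangle
      \;+\; \frac{1}{\eta_t}\, D_{\Psi_\alpha}\!\big(w,\, w_t\big),
  \qquad
  \Psi_\alpha(w) \;=\; \frac{1}{1-\alpha}\Big(1 - \sum_{i=1}^{K} w_i^{\alpha}\Big),
\]
\[
  \hat{\ell}_{t,i} \;=\; \frac{\ell_{t,i}\,\mathbf{1}\{A_t = i\}}{w_{t,i}},
  \qquad \alpha = \tfrac12 \text{ in Tsallis-INF},
\]
where $\Delta_{K-1}$ is the probability simplex over the $K$ arms, $A_t$ the arm played
at round $t$, $\eta_t$ the learning rate, and $D_{\Psi_\alpha}$ the Bregman divergence
induced by the negative Tsallis entropy $\Psi_\alpha$.
```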
89. Person re-identification from virtuality to reality via modality invariant adversarial mechanism.
- Author
-
Chen, Lin, Yang, Hua, and Gao, Zhiyong
- Subjects
CRIME scene searches, MODAL logic, IMAGE registration, LEARNING modules
- Abstract
• A modality invariant adversarial mechanism for improving the multi-style Re-ID task. • Two new datasets from virtuality to reality for the multi-style Re-ID task. • Space transformation and different category classifiers for performance improvement. Person re-identification based on multi-style images helps in crime scene investigation, where only a virtual image (sketch or portrait) of the suspect is available for retrieving possible identities. However, due to the modality gap between multi-style images, standard person re-identification models cannot achieve satisfactory performance when directly applied to match the virtual images with the real photographs. To address this problem, we propose a modality invariant adversarial mechanism (MIAM) to remove the modality gap between multi-style images. Specifically, MIAM consists of two parts: a space transformation module to transfer the multi-style person images to a modality-invariant space, and an adversarial learning module "played" between the category classifier and modality classifier to steer the representation learning. The modality classifier discriminates between the real and virtual images while the category classifier predicts the identities of the input transformed images. We explore the space transformation for data augmentation to further bridge the modality gap and facilitate the performance. Furthermore, we build two new datasets for the multi-style Re-ID to evaluate the performance. Extensive experimental results demonstrate the effectiveness of the proposed method on improving the performance against the existing feature learning networks. Further comparison results conducted on different modules in MIAM show that our approach is of favorable generalization ability on alleviating the modality gap to improve the multi-style Re-ID. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
90. X-ModalNet: A semi-supervised deep cross-modal network for classification of remote sensing data.
- Author
-
Hong, Danfeng, Yokoya, Naoto, Xia, Gui-Song, Chanussot, Jocelyn, and Zhu, Xiao Xiang
- Subjects
REMOTE sensing, SYNTHETIC aperture radar, INTERACTIVE learning, GRAPH labelings, LEARNING modules
- Abstract
This paper addresses the problem of semi-supervised transfer learning with limited cross-modality data in remote sensing. A large amount of multi-modal earth observation images, such as multispectral imagery (MSI) or synthetic aperture radar (SAR) data, are openly available on a global scale, enabling parsing global urban scenes through remote sensing imagery. However, their ability in identifying materials (pixel-wise classification) remains limited, due to the noisy collection environment and poor discriminative information as well as limited number of well-annotated training images. To this end, we propose a novel cross-modal deep-learning framework, called X-ModalNet, with three well-designed modules: self-adversarial module, interactive learning module, and label propagation module, by learning to transfer more discriminative information from a small-scale hyperspectral image (HSI) into the classification task using a large-scale MSI or SAR data. Significantly, X-ModalNet generalizes well, owing to propagating labels on an updatable graph constructed by high-level features on the top of the network, yielding semi-supervised cross-modality learning. We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement in comparison with several state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
91. Multi-Task Consistency-Preserving Adversarial Hashing for Cross-Modal Retrieval.
- Author
-
Xie, De, Deng, Cheng, Li, Chao, Liu, Xianglong, and Tao, Dacheng
- Subjects
LEARNING modules
- Abstract
Owing to the advantages of low storage cost and high query efficiency, cross-modal hashing has received increasing attention recently. Because they fail to bridge the inherent modality gap, most existing cross-modal hashing methods have limited capability to explore the semantic consistency information between different modality data, leading to unsatisfactory search performance. To address this problem, we propose a novel deep hashing method named Multi-Task Consistency-Preserving Adversarial Hashing (CPAH) to fully explore the semantic consistency and correlation between different modalities for efficient cross-modal retrieval. First, we design a consistency refined module (CR) to divide the representations of different modalities into two irrelevant parts, i.e., modality-common and modality-private representations. Then, a multi-task adversarial learning module (MA) is presented, which can make the modality-common representations of different modalities close to each other in feature distribution and semantic consistency. Finally, compact and powerful hash codes can be generated from the modality-common representation. Comprehensive evaluations conducted on three representative cross-modal benchmark datasets illustrate that our method is superior to state-of-the-art cross-modal hashing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
92. Achieving Fairness with Decision Trees: An Adversarial Approach.
- Author
-
Grari, Vincent, Ruf, Boris, Lamprier, Sylvain, and Detyniecki, Marcin
- Subjects
DECISION trees, FAIRNESS, DEEP learning, CLASSIFICATION algorithms, MACHINE learning, DEFINITIONS
- Abstract
Fair classification has become an important topic in machine learning research. While most bias mitigation strategies focus on neural networks, we noticed a lack of work on fair classifiers based on decision trees even though they have proven very efficient. In an up-to-date comparison of state-of-the-art classification algorithms in tabular data, tree boosting outperforms deep learning (Zhang et al. in Expert Syst Appl 82:128–150, 2017). For this reason, we have developed a novel approach of adversarial gradient tree boosting. The objective of the algorithm is to predict the output Y with gradient tree boosting while minimizing the ability of an adversarial neural network to predict the sensitive attribute S. The approach incorporates at each iteration the gradient of the neural network directly in the gradient tree boosting. We empirically assess our approach on four popular data sets and compare against state-of-the-art algorithms. The results show that our algorithm achieves a higher accuracy while obtaining the same level of fairness, as measured using a set of different common fairness definitions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
93. Masked Linear Regression for Learning Local Receptive Fields for Facial Expression Synthesis.
- Author
-
Khan, Nazar, Akram, Arbish, Mahmood, Arif, Ashraf, Sania, and Murtaza, Kashif
- Subjects
- *
FACIAL expression , *COMPUTATIONAL complexity - Abstract
Compared to facial expression recognition, expression synthesis requires a very high-dimensional mapping. This problem is exacerbated by increasing image sizes and limits existing expression synthesis approaches to relatively small images. We observe that facial expressions often constitute sparsely distributed and locally correlated changes from one expression to another. By exploiting this observation, the number of parameters in an expression synthesis model can be reduced significantly. We therefore propose a constrained version of ridge regression that exploits the local and sparse structure of facial expressions, and refer to this model as masked regression for learning local receptive fields. In contrast to existing approaches, the proposed model can be trained efficiently on larger image sizes. Experiments using three publicly available datasets demonstrate that our model is significantly better than ℓ0-, ℓ1-, and ℓ2-regression, SVD-based approaches, and kernelized regression in terms of mean squared error, visual quality, and computational and spatial complexity. The reduction in the number of parameters allows our method to generalize better even after training on smaller datasets. The proposed algorithm is also compared with state-of-the-art GANs including Pix2Pix, CycleGAN, StarGAN and GANimation. These GANs produce photo-realistic results as long as the testing and training distributions are similar. In contrast, our results demonstrate significant generalization of the proposed algorithm to out-of-dataset human photographs, pencil sketches and even animal faces. [ABSTRACT FROM AUTHOR] A minimal closed-form sketch of masked ridge regression follows this record.
- Published
- 2020
- Full Text
- View/download PDF
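Masked regression as summarised above amounts to ridge regression constrained by a sparsity mask that gives each output pixel a local receptive field over the input. The snippet below is a minimal closed-form sketch of such a constrained estimator, written for clarity rather than efficiency and not the paper's exact formulation; the window radius, the regularisation weight `lam`, and the helper names are illustrative assumptions.

```python
import numpy as np

def masked_ridge(X, Y, mask, lam=1.0):
    """Ridge regression with a fixed sparsity mask on the weight matrix.

    X    : (N, D_in)  flattened input images (e.g. neutral faces)
    Y    : (N, D_out) flattened target images (e.g. a given expression)
    mask : (D_in, D_out) boolean local-receptive-field mask; W is zero where mask is False
    Each output dimension is solved independently using only its allowed inputs.
    """
    D_in, D_out = mask.shape
    W = np.zeros((D_in, D_out))
    for j in range(D_out):
        idx = np.flatnonzero(mask[:, j])          # local receptive field of output pixel j
        Xj = X[:, idx]
        A = Xj.T @ Xj + lam * np.eye(len(idx))    # ridge-regularised normal equations
        W[idx, j] = np.linalg.solve(A, Xj.T @ Y[:, j])
    return W

def local_mask(h, w, radius=3):
    """Boolean mask letting each output pixel see only a (2*radius+1)^2 input window."""
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)
    d = np.abs(coords[:, None, :] - coords[None, :, :]).max(-1)
    return d <= radius
```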
94. The David Eastman case: The use of inquiries to investigate miscarriages of justice in Australia.
- Author
-
Fuller, Jacqueline
- Abstract
The wrongful conviction of David Harold Eastman in the Australian Capital Territory represents one of Australia's most recent and high-profile public failures of the criminal justice system and highlights the limits of the Australian legal system. Further, the Eastman case calls into question the use of inquiries into miscarriages of justice, particularly when an inquiry's recommendations can be disregarded by governments (as they were in this instance). This article provides an overview of the Eastman case and critically evaluates how it sheds light on the use of inquiries as an avenue to investigate and correct wrongful convictions more broadly in Australia. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
95. Collateral Damage
- Author
-
McQuillan, Dan, author
- Published
- 2022
- Full Text
- View/download PDF
96. Learning beyond sensations: How dreams organize neuronal representations.
- Author
-
Deperrois, Nicolas, Petrovici, Mihai A., Senn, Walter, and Jordan, Jakob
- Subjects
- *
DREAMS , *SENSES , *SLEEP - Abstract
Semantic representations in higher sensory cortices form the basis for robust, yet flexible behavior. These representations are acquired over the course of development in an unsupervised fashion and are continuously maintained over an organism's lifespan. Predictive processing theories propose that these representations emerge from predicting or reconstructing sensory inputs. However, brains are known to generate virtual experiences, such as during imagination and dreaming, that go beyond previously experienced inputs. Here, we suggest that virtual experiences may be just as relevant as actual sensory inputs in shaping cortical representations. In particular, we discuss two complementary learning principles that organize representations through the generation of virtual experiences. First, "adversarial dreaming" proposes that creative dreams support a cortical implementation of adversarial learning in which feedback and feedforward pathways engage in a productive game of trying to fool each other. Second, "contrastive dreaming" proposes that the invariance of neuronal representations to irrelevant factors of variation is acquired by trying to map similar virtual experiences together via a contrastive learning process. These principles are compatible with known cortical structure and dynamics and with the phenomenology of sleep, thus providing promising directions to explain cortical learning beyond the classical predictive processing paradigm. [ABSTRACT FROM AUTHOR] A generic illustration of a contrastive objective follows this record.
- Published
- 2024
- Full Text
- View/download PDF
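The "contrastive dreaming" principle maps similar virtual experiences together via contrastive learning. Purely as an illustrative bridge to machine-learning practice, and not as a claim about the authors' model, the following shows a standard InfoNCE-style loss in which the representations of two virtual experiences generated from the same latent content are pulled together while others in the batch are pushed apart; the function name and `temperature` value are assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_dream_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style loss: z_a[i] and z_b[i] encode two 'virtual experiences'
    sharing the same latent content; mismatched pairs act as negatives."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0))           # matching pairs lie on the diagonal
    return F.cross_entropy(logits, targets)
```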
97. Semi-Supervised Learning of MRI Synthesis Without Fully-Sampled Ground Truths
- Author
-
Yurt, Mahmut, Dalmaz, Onat, Dar, Salman, Özbey, Muzaffer, Tinaz, Berk, Oguz, Kader, and Çukur, Tolga
- Subjects
Radiological and Ultrasound Technology ,Image synthesis ,Magnetic Resonance Imaging ,Adversarial ,Computer Science Applications ,Magnetic resonance imaging ,Image Processing, Computer-Assisted ,Supervised Machine Learning ,Undersampled ,Electrical and Electronic Engineering ,Semi-supervised ,Algorithms ,Software ,Retrospective Studies - Abstract
Learning-based translation between MRI contrasts involves supervised deep models trained on high-quality source- and target-contrast images derived from fully-sampled acquisitions, which can be difficult to collect under limitations on scan cost or time. To facilitate curation of training sets, here we introduce the first semi-supervised model for MRI contrast translation (ssGAN) that can be trained directly on undersampled k-space data. To enable semi-supervised learning on undersampled data, ssGAN introduces novel multi-coil losses in the image, k-space, and adversarial domains. Unlike the traditional losses in single-coil synthesis models, the multi-coil losses are selectively enforced on acquired k-space samples. Comprehensive experiments on retrospectively undersampled multi-contrast brain MRI datasets are provided. Our results demonstrate that ssGAN performs on par with a supervised model while outperforming single-coil models trained on coil-combined magnitude images. It also outperforms cascaded reconstruction-synthesis models in which a supervised synthesis model is trained after self-supervised reconstruction of the undersampled data. Thus, ssGAN holds great promise for improving the feasibility of learning-based multi-contrast MRI synthesis. A hedged sketch of a selectively enforced k-space loss follows this record.
- Published
- 2022
- Full Text
- View/download PDF
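ssGAN's defining ingredient, as stated above, is a set of multi-coil losses enforced selectively on acquired k-space samples. The snippet below is a hedged sketch of one such masked k-space term only, under assumed tensor shapes; it is not the paper's full image/k-space/adversarial loss set, and the function and argument names are illustrative.

```python
import torch

def masked_kspace_loss(synth_images, target_kspace, coil_maps, mask):
    """Multi-coil k-space loss enforced only where samples were acquired.

    synth_images  : (B, H, W)    complex synthesised target-contrast images
    target_kspace : (B, C, H, W) complex undersampled multi-coil k-space
    coil_maps     : (B, C, H, W) complex coil sensitivity maps
    mask          : (B, 1, H, W) binary float sampling mask (1 = acquired location)
    """
    # Project the synthesised image through each coil and into k-space
    coil_images = coil_maps * synth_images.unsqueeze(1)
    synth_kspace = torch.fft.fft2(coil_images, norm="ortho")
    # Penalise disagreement only at acquired k-space locations
    diff = mask * (synth_kspace - target_kspace)
    return diff.abs().pow(2).sum() / mask.sum().clamp(min=1.0)
```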
98. A Systematic Approach for Evaluating Artificial Intelligence Models in Industrial Settings
- Author
-
Paul-Lou Benedick, Jérémy Robert, and Yves Le Traon
- Subjects
time series classification ,artificial intelligence robustness ,industrial internet of things ,adversarial ,Chemical technology ,TP1-1185 - Abstract
Artificial Intelligence (AI) is one of the hottest topics in our society, especially when it comes to solving data-analysis problems. Industries are carrying out their digital shift, and AI is becoming a cornerstone technology for making decisions from the huge amount of sensor-based data available on the production floor. However, such technology may be disappointing when deployed in real conditions. Despite good theoretical performance and high accuracy when trained and tested in isolation, a Machine-Learning (M-L) model may degrade in real conditions. One reason may be fragility in properly handling unexpected or perturbed data. The objective of this paper is therefore to study the robustness of seven M-L and Deep-Learning (D-L) algorithms when classifying univariate time series under perturbations. A systematic approach is proposed for artificially injecting perturbations into the data and for evaluating the robustness of the models. This approach focuses on two perturbations that are likely to happen during data collection. Our experimental study, conducted on twenty sensor datasets from the public University of California Riverside (UCR) repository, shows a great disparity in the models' robustness under data quality degradation. These results are used to analyse whether the impact of such perturbations on robustness can be predicted using decision trees, which would spare us from testing every perturbation scenario. Our study shows that building such a predictor is not straightforward and suggests that a systematic approach of this kind is needed for evaluating the robustness of AI models. A hedged sketch of such a perturbation-injection protocol follows this record.
- Published
- 2021
- Full Text
- View/download PDF
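The abstract states that two perturbations likely to occur during data collection are injected into univariate time series before re-evaluating the classifiers, but does not name them here. As a hedged illustration of such a protocol, the sketch below assumes two plausible perturbations (additive sensor noise and lost readings held at the previous value) and a scikit-learn-style `predict` interface; all function names and severity levels are assumptions, not the paper's setup.

```python
import numpy as np

def add_gaussian_noise(series, severity=0.1, rng=None):
    """Perturbation 1 (assumed): additive sensor noise scaled by the series' own std."""
    if rng is None:
        rng = np.random.default_rng(0)
    return series + rng.normal(0.0, severity * series.std(), size=series.shape)

def drop_samples(series, drop_ratio=0.1, rng=None):
    """Perturbation 2 (assumed): randomly lost readings, held at the previous value."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = series.copy()
    lost = rng.random(series.shape[0]) < drop_ratio
    for t in np.flatnonzero(lost):
        if t > 0:
            out[t] = out[t - 1]
    return out

def robustness_drop(model, X_test, y_test, perturb, **kwargs):
    """Accuracy on clean test series minus accuracy on perturbed test series."""
    clean_acc = (model.predict(X_test) == y_test).mean()
    X_pert = np.stack([perturb(x, **kwargs) for x in X_test])
    pert_acc = (model.predict(X_pert) == y_test).mean()
    return clean_acc - pert_acc
```

Sweeping `severity` or `drop_ratio` over several levels and recording the resulting drop per model and per dataset would reproduce the kind of disparity analysis the abstract reports.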
99. Legal and jurisprudential approaches to court-ordered (ex officio) evidence in the adversarial, accusatory-leaning criminal procedure of the Colombian legal system
- Author
-
Jimmy Patiño García, Gabriel Alberto Ospina Herrera, and Isabel Indira Molina Ariza
- Subjects
evidence, ex officio (court-ordered) evidence, adversarial, accusatory, impartiality, legality, Law, Political science
The purpose of this paper is to examine, from a jurisprudential and legal standpoint, the development of court-ordered (ex officio) evidence in the adversarial, accusatory-leaning criminal justice system implemented in Colombia by Law 906 of 2004, in order to establish the tension between the principles of impartiality, equality of arms, and legality, on the one hand, and the principle of substantive justice, on the other, in the context of the judge's role in the proceedings. Indeed, this is one of the debates facing those who administer justice and those who turn to it to resolve social conflicts, and it shows that the construction of an accusatory system is an unfinished process in constant evolution. Addressing it requires going beyond a merely legalistic concept and delving into the very philosophy of the accusatory criminal system and the realities in which it is applied, so that it can fulfil the social purposes for which it was created.
- Published
- 2017
- Full Text
- View/download PDF
100. REMOVING THE MASK: VIDEO FINGERPRINTING ATTACKS OVER TOR
- Author
-
Barton, Armon C., Singh, Gurminder, Computer Science (CS), and Duhe', Paul H., III
- Abstract
The Onion Router (Tor) is used by adversaries and warfighters alike to encrypt session information and gain anonymity on the internet. Since its creation in 2002, Tor has gained popularity among terrorist organizations, human traffickers, and illegal drug distributors who wish to use Tor services to mask their identity while engaging in illegal activities. Fingerprinting attacks assist in thwarting these attempts. Website fingerprinting (WF) attacks have been proven successful at linking a user to the website they have viewed over an encrypted Tor connection. With consumer video streaming making up a large majority of internet traffic and sites like YouTube remaining among the most visited sites in the world, it is just as likely that adversaries are using videos to spread misinformation, illegal content, and terrorist propaganda. Video fingerprinting (VF) attacks aim to use encrypted network traffic to predict the content of encrypted video sessions in closed- and open-world scenarios. This research builds upon an existing dataset of encrypted video session data and uses statistical analysis to train a machine-learning classifier, using deep fingerprinting (DF), to predict videos viewed over Tor. DF is a machine-learning technique that relies on convolutional neural networks (CNN) and can be used to conduct VF attacks against Tor. By analyzing the results of these experiments, we can more accurately identify malicious video streaming activity over Tor. Approved for public release. Distribution is unlimited. A generic CNN-classifier sketch in the spirit of deep fingerprinting follows this record.
- Published
- 2023
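The thesis abstract describes deep fingerprinting as a CNN-based technique for classifying encrypted Tor traffic traces. The model below is a minimal, generic 1D-CNN sketch of that kind of closed-world video classifier, not the architecture used in the thesis; the trace length, channel widths, and class count are placeholder assumptions.

```python
import torch
import torch.nn as nn

class VideoFingerprintCNN(nn.Module):
    """Minimal deep-fingerprinting-style 1D CNN: classifies a fixed-length trace
    of signed packet sizes from an encrypted Tor session into one of n_videos
    classes (closed-world setting)."""
    def __init__(self, trace_len=5000, n_videos=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=8, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=8, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=8, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, n_videos)

    def forward(self, traces):           # traces: (batch, trace_len) of +/- packet sizes
        x = traces.unsqueeze(1).float()  # add a channel dimension for Conv1d
        return self.classifier(self.features(x).squeeze(-1))

# Example: model = VideoFingerprintCNN(); logits = model(torch.randn(8, 5000))
```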