Showing 539 results in total

Search Constraints: Topic: adversarial attacks; Publication Year Range: last 50 years

Search Results

251. Sound classification using wavelet transformation and deep learning methods

252. On the vulnerability of deep learning to adversarial attacks for camera model identification.

253. Model-Agnostic Meta-Learning for Resilience Optimization of Artificial Intelligence System.

254. Evil vs evil: using adversarial examples to against backdoor attack in federated learning.

257. Are Graph Neural Network Explainers Robust to Graph Noises?

258. Adversarial Robustness of MR Image Reconstruction Under Realistic Perturbations

260. Consistency Regularization Helps Mitigate Robust Overfitting in Adversarial Training

261. Addressing Adversarial Machine Learning Attacks in Smart Healthcare Perspectives

262. AGS: Attribution Guided Sharpening as a Defense Against Adversarial Attacks

264. Robustness Against Adversarial Attacks Using Dimensionality

267. BreakingBED: Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks

269. Adversarial robustness and attacks for multi-view deep models.

270. Detect Adversarial Attacks Against Deep Neural Networks With GPU Monitoring

271. Adversarial Scratches: Deployable Attacks to CNN Classifiers

272. Towards robust rain removal against adversarial attacks: a comprehensive benchmark analysis and beyond

273. Detection of SQL Injection Attack Using Machine Learning Techniques: A Systematic Literature Review

274. Transferability analysis of adversarial attacks on gender classification to face recognition: Fixed and variable attack perturbation

275. Collaborative Defense-GAN for protecting adversarial attacks on classification system.

276. Adversarial attacks and active defense on deep learning based identification of GaN power amplifiers under physical perturbation.

277. DDSG-GAN: Generative Adversarial Network with Dual Discriminators and Single Generator for Black-Box Attacks.

278. Defending Adversarial Examples by a Clipped Residual U-Net Model.

279. Mitigating Malicious Adversaries Evasion Attacks in Industrial Internet of Things.

280. SIEMS: A Secure Intelligent Energy Management System for Industrial IoT Applications.

281. Adversarial Machine Learning in Image Classification: A Survey Toward the Defender’s Perspective.

282. Minimally Distorted Structured Adversarial Attacks.

283. Generate adversarial examples by adaptive moment iterative fast gradient sign method.

284. Revisiting model's uncertainty and confidences for adversarial example detection.

285. An Adversarial Approach for Intrusion Detection Systems Using Jacobian Saliency Map Attacks (JSMA) Algorithm.

286. Effectiveness of the Execution and Prevention of Metric-Based Adversarial Attacks on Social Network Data †.

287. Low-Pass Image Filtering to Achieve Adversarial Robustness

288. Improving Adversarial Robustness via Distillation-Based Purification

289. On the Robustness of ML-Based Network Intrusion Detection Systems: An Adversarial and Distribution Shift Perspective

290. Structure Estimation of Adversarial Distributions for Enhancing Model Robustness: A Clustering-Based Approach

291. A Pornographic Images Recognition Model based on Deep One-Class Classification With Visual Attention Mechanism

293. Robust transformer with locality inductive bias and feature normalization

294. Resilience enhancement of multi-agent reinforcement learning-based demand response against adversarial attacks.

295. Understanding deep learning defenses against adversarial examples through visualizations for dynamic risk assessment.

296. A Two Stream Fusion Assisted Deep Learning Framework for Stomach Diseases Classification.

297. Distributed Attack-Robust Submodular Maximization for Multirobot Planning.

298. Turning Federated Learning Systems Into Covert Channels

299. A Methodology for Evaluating the Robustness of Anomaly Detectors to Adversarial Attacks in Industrial Scenarios

300. A Highly Stealthy Adaptive Decay Attack Against Speaker Recognition