7 results for "Soman, K. P."
Search Results
2. Dimensionality Reduced Recursive Filter Features for Hyperspectral Image Classification
- Author
- Lekshmi Kiran, S., Sowmya, V., Soman, K. P., Kacprzyk, Janusz, Series editor, Satapathy, Suresh Chandra, editor, Raju, K. Srujan, editor, Mandal, Jyotsna Kumar, editor, and Bhateja, Vikrant, editor
- Published
- 2016
- Full Text
- View/download PDF
3. Synergistic Detection of Multimodal Fake News Leveraging TextGCN and Vision Transformer.
- Author
- M, Visweswaran, Mohan, Jayanth, Sachin Kumar, S, and Soman, K P
- Subjects
- TRANSFORMER models, FAKE news, CONVOLUTIONAL neural networks, FEATURE extraction, MULTIMODAL user interfaces, DIGITAL technology, CELL fusion
- Abstract
In today's digital age, the rapid spread of fake news is a pressing concern. Fake news, whether intentional or inadvertent, manipulates public sentiment and threatens the integrity of online information. To address this, effective detection and prevention methods are vital. Detecting and addressing multimodal fake news is an intricate challenge: unlike traditional news articles that rely predominantly on textual content, multimodal fake news leverages the persuasive power of visual elements, making its identification a formidable task. Manipulated images can significantly sway individuals' perceptions and beliefs, making the detection of such deceptive content complex. Our research introduces an innovative approach to multimodal fake news identification by presenting a fusion-based methodology that harnesses the capabilities of Text Graph Convolutional Neural Networks (TextGCN) and Vision Transformers (ViT) to effectively utilise both text and image modalities. The proposed methodology starts by preprocessing textual content with TextGCN, allowing it to capture intricate structural dependencies among words and phrases. Simultaneously, visual features are extracted from the associated images using ViT. Through a fusion mechanism, these modalities are seamlessly integrated, yielding superior embeddings. The primary contributions encompass an in-depth exploration of multimodal fake news detection through a fusion-based approach. What sets our approach apart from existing techniques is its integration of graph-based feature extraction through TextGCN. While previous methods predominantly rely on text or image features alone, our approach harnesses the additional semantic information and intricate relationships within a graph structure, in addition to image embeddings. This enables our method to capture a more comprehensive understanding of the data, resulting in increased accuracy and reliability. Our experiments demonstrate the exceptional performance of our fusion-based approach, which leverages multiple modalities and incorporates graph-based representations and semantic relationships. This method outperformed single modalities of text or image, achieving an accuracy of 94.17% using a neural network after fusion. By seamlessly integrating graph-based representations and semantic relationships, our fusion-based technique represents a significant stride in addressing the challenges posed by multimodal fake news. [ABSTRACT FROM AUTHOR]
A minimal code sketch of this fusion pipeline follows this record.
- Published
- 2024
- Full Text
- View/download PDF
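As a rough illustration of the fusion approach described in the abstract above, the sketch below concatenates precomputed text and image embeddings and feeds them to a small classifier. The TextGCN and ViT encoders are represented only by stand-in tensors; the class name, dimensions, and layer sizes are illustrative assumptions, not details from the paper.

```python
# Hypothetical late-fusion sketch: concatenate text and image embeddings,
# then classify. Random tensors stand in for real TextGCN/ViT outputs.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, text_dim=200, image_dim=768, hidden=256, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),  # fused embedding -> hidden
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, n_classes),             # real vs. fake logits
        )

    def forward(self, text_emb, image_emb):
        fused = torch.cat([text_emb, image_emb], dim=-1)  # concatenation fusion
        return self.net(fused)

# Toy usage with random stand-ins for the two modality embeddings.
text_emb = torch.randn(4, 200)    # would come from a TextGCN-style text encoder
image_emb = torch.randn(4, 768)   # would come from a ViT image encoder
logits = FusionClassifier()(text_emb, image_emb)
print(logits.shape)               # torch.Size([4, 2])
```

Concatenation is the simplest possible fusion mechanism; the abstract reports 94.17% accuracy with a neural network applied after fusion but does not specify the exact fusion strategy.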
4. Adversarial Defense: DGA-Based Botnets and DNS Homographs Detection Through Integrated Deep Learning.
- Author
- Ravi, Vinayakumar, Alazab, Mamoun, Srinivasan, Sriram, Arunachalam, Ajay, and Soman, K. P.
- Subjects
- INTERNET domain naming system, DEEP learning, BOTNETS, REVERSE engineering, HOMONYMS, CYBERTERRORISM, HUMAN error
- Abstract
Cybercriminals use domain generation algorithms (DGAs) to prevent their servers from being blacklisted or shut down. Existing reverse engineering techniques for DGA detection are labor intensive, extremely time-consuming, prone to human error, and have significant limitations. Hence, an automated real-time technique with a high detection rate is warranted in such applications. In this article, we present a novel technique to detect randomly generated domain names and domain name system (DNS) homograph attacks using deep learning, without the need for any reverse engineering or nonexistent domain (NXDomain) inspection. We provide an extensive evaluation of our model over four large, real-world, publicly available datasets. We further investigate the robustness of our model against three different adversarial attacks: DeepDGA, CharBot, and MaskDGA. Our evaluation demonstrates that our method effectively identifies DNS homograph attacks and DGAs and is also resilient to common evasion attacks. Promising results show that our approach provides a more effective detection rate with an accuracy of 0.99. Additionally, the performance of our model is compared against the most popular deep learning architectures. Our findings highlight the essential need for more robust detection models to counter adversarial learning. [ABSTRACT FROM AUTHOR]
A minimal character-level classification sketch in this spirit follows this record.
- Published
- 2023
- Full Text
- View/download PDF
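The record above describes detecting algorithmically generated and homograph domains directly from the domain string with deep learning. The sketch below shows one way such a character-level classifier might look; the character vocabulary, LSTM architecture, and sizes are assumptions made for illustration, not the authors' model.

```python
# Hypothetical character-level domain classifier: embed characters,
# run an LSTM, and classify benign vs. DGA/homograph.
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789-._"
CHAR2IDX = {c: i + 1 for i, c in enumerate(CHARS)}  # index 0 reserved for padding
MAX_LEN = 63                                        # maximum DNS label length

def encode(domain: str) -> torch.Tensor:
    """Map a domain string to a fixed-length tensor of character indices."""
    ids = [CHAR2IDX.get(c, 0) for c in domain.lower()[:MAX_LEN]]
    ids += [0] * (MAX_LEN - len(ids))
    return torch.tensor(ids)

class DomainClassifier(nn.Module):
    def __init__(self, vocab=len(CHARS) + 1, emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 2)  # benign vs. malicious logits

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.fc(h[:, -1])        # classify from the last time step

batch = torch.stack([encode("example.com"), encode("xj3k9q2v7p.net")])
print(DomainClassifier()(batch).shape)  # torch.Size([2, 2])
```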
5. Significance of processing chrominance information for scene classification: a review.
- Author
- Sowmya, V., Govind, D., and Soman, K. P.
- Subjects
- ARTIFICIAL neural networks, INFORMATION processing, SUPPORT vector machines, EXTRACTION techniques, FEATURE extraction
- Abstract
The primary objective of this paper is to provide a detailed review of various works showing the role of processing chrominance information in color-to-grayscale conversion. The usefulness of perceptually improved color-to-grayscale converted images for scene classification is then studied as part of the present work. Various issues identified in color-to-grayscale conversion and improved scene classification are presented in this paper. The review covers existing feature extraction techniques for scene classification, existing scene classification systems, methods available in the literature for color-to-grayscale image conversion, benchmark datasets for scene classification and color-to-grayscale image conversion, and subjective evaluation and objective quality assessment for image decolorization. In the present work, a scene classification system is proposed using a pre-trained convolutional neural network and support vector machines trained on grayscale images produced by the image decolorization methods. The experimental analysis on the Oliva Torralba scene dataset shows that the color-to-grayscale image conversion technique has a positive impact on the performance of scene classification systems. [ABSTRACT FROM AUTHOR]
A brief code sketch of the decolorization step follows this record.
- Published
- 2020
- Full Text
- View/download PDF
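To make the decolorization step surveyed above concrete, here is the simplest fixed-weight color-to-grayscale mapping, using the ITU-R BT.601 luma weights. The review covers far more sophisticated chrominance-aware methods; this is only a baseline sketch.

```python
# Baseline color-to-grayscale conversion using fixed luminance weights.
import numpy as np

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """rgb: H x W x 3 array in [0, 1]; returns an H x W luminance image."""
    weights = np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luma weights
    return rgb @ weights

img = np.random.rand(4, 4, 3)   # stand-in for a color scene image
print(to_grayscale(img).shape)  # (4, 4)
```

Perceptually improved decolorization methods replace these fixed weights with image-dependent mappings that better preserve chrominance contrast, which is the focus of the review.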
6. Effect of Decolorized Images In Scene Classification Using Deep Convolution Features.
- Author
- Damodaran, Nikhil, Sowmya, V, Govind, D, and Soman, K P
- Subjects
- DEEP learning, ALGORITHMS, FEATURE extraction, ARTIFICIAL neural networks, GRAPHICS processing units, COMPUTER vision
- Abstract
Scene classification is considered an imperative problem in computer vision and has received extensive attention in the recent past. Owing to recent developments in high-performance computing units such as GPUs, a widely used deep learning algorithm, the convolutional neural network (CNN), can exploit huge datasets to produce powerful models. The paper proposes the use of a transfer learning technique, by which a pre-trained model known as Places-CNN is used to generate feature vectors for each scene image in the dataset. The scene-classification experiments are conducted on the Oliva Torralba (OT) scene dataset, which consists of eight outdoor scene categories. The features were extracted from the fully connected layer of the pre-trained Places-CNN architecture. The deep features were extracted from the input color images and from grayscale images converted using two different techniques based on singular value decomposition (SVD). The results obtained from the classification experiments show that models trained on SVD-Decolorized and Modified-SVD decolorized images give performance comparable to the input color images. Unlike the color images, which use three planes (RGB) of information, the grayscale images use only one plane of information. The grayscale images were able to retain the required shape and texture information from the original RGB images and were thus sufficient to categorize the scene classes. [ABSTRACT FROM AUTHOR]
A minimal code sketch of the SVD-based decolorization idea follows this record.
- Published
- 2018
- Full Text
- View/download PDF
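The record above compares color images with two SVD-based decolorizations as input to Places-CNN features. The sketch below shows one simplified reading of SVD-based decolorization: project the RGB pixels onto their dominant singular direction. It is a hypothetical illustration, not the authors' exact SVD-Decolorized or Modified-SVD algorithms.

```python
# Hypothetical SVD-based decolorization: project RGB pixels onto the
# dominant right singular vector of the mean-centered pixel matrix.
import numpy as np

def svd_decolorize(rgb: np.ndarray) -> np.ndarray:
    """rgb: H x W x 3 array in [0, 1]; returns an H x W grayscale image."""
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3)
    # The first right singular vector gives the color direction of
    # maximum variance across the three channels.
    _, _, vt = np.linalg.svd(pixels - pixels.mean(axis=0), full_matrices=False)
    gray = pixels @ vt[0]
    gray = (gray - gray.min()) / (gray.max() - gray.min() + 1e-8)  # rescale to [0, 1]
    return gray.reshape(h, w)

img = np.random.rand(8, 8, 3)       # stand-in for a color scene image
print(svd_decolorize(img).shape)    # (8, 8)
```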
7. Reduced Scattering Representation for Malayalam Character Recognition.
- Author
- Manjusha, K., Anand Kumar, M., and Soman, K. P.
- Subjects
- PATTERN recognition systems, WAVELET transforms, SUPPORT vector machines
- Abstract
A scattering convolution network generates stable feature representations by applying a sequence of wavelet decomposition operations to input signals. The feature representation in the higher layers of the network forms a high-dimensional feature vector, which is undesirable in most applications. Dimensionality reduction techniques can be applied to these high-dimensional feature descriptors to produce an informative representation. In this paper, singular value decomposition is applied to the higher-layer scattering representation to generate informative feature descriptors. The effectiveness of the reduced scattering representation is evaluated on Malayalam printed and handwritten character recognition using a support vector machine classifier. The reduced scattering representation improves recognition performance when combined with lower-layer scattering network features. [ABSTRACT FROM AUTHOR]
A minimal code sketch of the reduction-plus-classification stage follows this record.
- Published
- 2018
- Full Text
- View/download PDF
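The pipeline described above reduces high-dimensional scattering features with SVD and classifies them with an SVM. The sketch below shows that reduction-plus-classification stage with scikit-learn, using random vectors as stand-ins for higher-layer scattering coefficients; the feature sizes and number of components are illustrative assumptions.

```python
# SVD-based dimensionality reduction followed by an SVM classifier.
# Random features stand in for high-dimensional scattering coefficients.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 2048))   # stand-in for scattering feature vectors
y = rng.integers(0, 10, size=600)  # stand-in for character class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(TruncatedSVD(n_components=64, random_state=0),
                      LinearSVC(max_iter=5000))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```

With real scattering features, the reduced representation would be combined with the lower-layer scattering features before classification, as the abstract describes.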