585,833 results for "RAHMAN, A."
Search Results
2. Production losses associated with natural concomitant infection of Infectious bronchitis virus and Mycoplasma in a broiler breeder farm: A case study
- Author
- Sharma, Megha, Rahman, A.T. Faslu, Dhama, Kuldeep, and Mariappan, Asok Kumar
- Published
- 2023
- Full Text
3. Scientific-Technological Revolution: Social Aspects by Ralf Dahrendorf, et al (review)
- Author
- Rahman, A.
- Published
- 2023
4. Homo Faber: Technology and Culture in India, China and the West from 1500 to the Present Day by Claude Alvares (review)
- Author
- Rahman, A.
- Published
- 2023
5. AI-Driven Smartphone Solution for Digitizing Rapid Diagnostic Test Kits and Enhancing Accessibility for the Visually Impaired
- Author
- Dastagir, R. B., Jami, J. T., Chanda, S., Hafiz, F., Rahman, M., Dey, K., Rahman, M. M., Qureshi, M., and Chowdhury, M. M.
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
Rapid diagnostic tests are crucial for timely disease detection and management, yet accurate interpretation of test results remains challenging. In this study, we propose a novel approach to enhance the accuracy and reliability of rapid diagnostic test result interpretation by integrating artificial intelligence (AI) algorithms, including convolutional neural networks (CNN), within a smartphone-based application. The app enables users to take pictures of their test kits, which YOLOv8 then processes to precisely crop and extract the membrane region, even if the test kit is not centered in the frame or is positioned at the very edge of the image. This capability offers greater accessibility, allowing even visually impaired individuals to capture test images without needing perfect alignment, thus promoting user independence and inclusivity. The extracted image is analyzed by an additional CNN classifier that determines if the results are positive, negative, or invalid, providing users with the results and a confidence level. Through validation experiments with commonly used rapid test kits across various diagnostic applications, our results demonstrate that the synergistic integration of AI significantly improves sensitivity and specificity in test result interpretation. This improvement can be attributed to the extraction of the membrane zones from the test kit images using the state-of-the-art YOLO algorithm. Additionally, we performed SHapley Additive exPlanations (SHAP) analysis to investigate the factors influencing the model's decisions, identifying reasons behind both correct and incorrect classifications. By facilitating the differentiation of genuine test lines from background noise and providing valuable insights into test line intensity and uniformity, our approach offers a robust solution to challenges in rapid test interpretation.
- Published
- 2024
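The abstract above describes a two-stage pipeline: a detector (YOLOv8 in the paper) crops the membrane region, and a classifier then reads the result with a confidence score. A minimal sketch of that flow is shown below; the hard-coded bounding box standing in for the detector, the intensity thresholds, and the column convention are all illustrative assumptions, not details from the paper.

```python
# Illustrative two-stage rapid-test pipeline: a detector supplies a membrane
# bounding box (hard-coded here in place of YOLOv8), then a classifier reads
# the control/test line intensities to decide positive / negative / invalid.
# All thresholds and conventions are hypothetical.

def crop_membrane(image, box):
    """Crop the membrane region given a (top, left, bottom, right) box."""
    top, left, bottom, right = box
    return [row[left:right] for row in image[top:bottom]]

def classify_membrane(membrane, line_threshold=0.5):
    """Classify a cropped membrane from its line darkness.

    Hypothetical convention: column 0 holds the control line, the last
    column holds the test line; higher value = darker line.
    """
    control = sum(row[0] for row in membrane) / len(membrane)
    test = sum(row[-1] for row in membrane) / len(membrane)
    if control < line_threshold:
        return "invalid", 1.0 - control   # no control line -> invalid kit
    label = "positive" if test >= line_threshold else "negative"
    confidence = min(abs(test - line_threshold) + 0.5, 1.0)
    return label, confidence

# A toy 4x4 grayscale "image"; the detector found the membrane at rows 1:3,
# columns 1:3.
image = [
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.9, 0.8, 0.0],
    [0.0, 0.9, 0.8, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
membrane = crop_membrane(image, (1, 1, 3, 3))
label, conf = classify_membrane(membrane)
```

The separation matters: because the crop normalizes away the kit's position in the frame, the classifier only ever sees the membrane zone, which is what the paper credits for the sensitivity gains.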
6. BanglaDialecto: An End-to-End AI-Powered Regional Speech Standardization
- Author
- Samin, Md. Nazmus Sadat, Ahad, Jawad Ibn, Medha, Tanjila Ahmed, Rahman, Fuad, Amin, Mohammad Ruhul, Mohammed, Nabeel, and Rahman, Shafin
- Subjects
- Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Computer Science - Sound, Electrical Engineering and Systems Science - Audio and Speech Processing
- Abstract
This study focuses on recognizing Bangladeshi dialects and converting diverse Bengali accents into standardized formal Bengali speech. Dialects, often referred to as regional languages, are distinctive variations of a language spoken in a particular location and are identified by their phonetics, pronunciations, and lexicon. Subtle changes in pronunciation and intonation are also influenced by geographic location, educational attainment, and socioeconomic status. Dialect standardization is needed to ensure effective communication, educational consistency, access to technology, economic opportunities, and the preservation of linguistic resources while respecting cultural diversity. Bangla is the fifth most spoken language, with around 55 distinct dialects spoken by 160 million people, so addressing its dialects is crucial for developing inclusive communication tools. However, limited research exists due to a lack of comprehensive datasets and the challenges of handling diverse dialects. With the advancement of multilingual Large Language Models (mLLMs), emerging possibilities have been created to address the challenges of dialectal Automated Speech Recognition (ASR) and Machine Translation (MT). This study presents an end-to-end pipeline for converting dialectal Noakhali speech to standard Bangla speech. The investigation includes constructing a large-scale, diverse dataset of dialectal speech signals, which tailored the fine-tuning of ASR and LLM models for transcribing dialect speech to dialect text and translating the dialect text to standard Bangla text. Our experiments demonstrated that fine-tuning the Whisper ASR model achieved a CER of 0.8% and WER of 1.5%, while the BanglaT5 model attained a BLEU score of 41.6% for dialect-to-standard text translation., Comment: Accepted in 2024 IEEE International Conference on Big Data (IEEE BigData)
- Published
- 2024
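The CER and WER figures reported above are standard edit-distance metrics, and computing them needs nothing beyond the standard library. A self-contained sketch:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (insert/delete/substitute)."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def wer(reference, hypothesis):
    """Word Error Rate: word-level edit distance / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character Error Rate: character-level edit distance / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

A WER of 1.5% thus means roughly one word-level edit per 67 reference words, which gives a sense of how close the fine-tuned Whisper transcriptions are.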
7. Empowering Meta-Analysis: Leveraging Large Language Models for Scientific Synthesis
- Author
- Ahad, Jawad Ibn, Sultan, Rafeed Mohammad, Kaikobad, Abraham, Rahman, Fuad, Amin, Mohammad Ruhul, Mohammed, Nabeel, and Rahman, Shafin
- Subjects
- Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Information Retrieval
- Abstract
This study investigates the automation of meta-analysis in scientific documents using large language models (LLMs). Meta-analysis is a robust statistical method that synthesizes the findings of multiple supporting studies to provide a comprehensive understanding; a meta-analysis article provides a structured analysis of several articles. However, conducting meta-analysis by hand is labor-intensive, time-consuming, and susceptible to human error, highlighting the need for automated pipelines to streamline the process. Our research introduces a novel approach that fine-tunes the LLM on extensive scientific datasets to address challenges in big data handling and structured data extraction. We automate and optimize the meta-analysis process by integrating Retrieval Augmented Generation (RAG). Tailored through prompt engineering and a new loss metric, Inverse Cosine Distance (ICD), designed for fine-tuning on large contextual datasets, LLMs efficiently generate structured meta-analysis content. Human evaluation then assesses relevance and reports model performance on key metrics. This research demonstrates that fine-tuned models outperform non-fine-tuned models, with fine-tuned LLMs generating 87.6% relevant meta-analysis abstracts. The relevance of the context, based on human evaluation, shows a reduction in irrelevancy from 4.56% to 1.9%. These experiments were conducted in a low-resource environment, highlighting the study's contribution to enhancing the efficiency and reliability of meta-analysis automation., Comment: Accepted in 2024 IEEE International Conference on Big Data (IEEE BigData)
- Published
- 2024
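The abstract names a new loss metric, Inverse Cosine Distance (ICD), without defining it; the exact formulation is in the paper. One plausible reading, shown purely as an assumption for illustration, is the reciprocal of the cosine distance between a generated and a reference embedding (with an epsilon guard):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def inverse_cosine_distance(u, v, eps=1e-8):
    """Hypothetical ICD: reciprocal of cosine distance (1 - cosine similarity).

    NOTE: this is one guess at the paper's metric, not its actual definition.
    It grows large as the vectors align and approaches 1 when orthogonal.
    """
    return 1.0 / (1.0 - cosine_similarity(u, v) + eps)
```

Under this reading, maximizing ICD during fine-tuning pushes generated meta-analysis embeddings toward the reference embeddings; the paper should be consulted for the true form.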
8. Hierarchical Sentiment Analysis Framework for Hate Speech Detection: Implementing Binary and Multiclass Classification Strategy
- Author
- Naznin, Faria, Rahman, Md Touhidur, and Alve, Shahran Rahman
- Subjects
- Computer Science - Computation and Language, Computer Science - Artificial Intelligence
- Abstract
A significant challenge in automating hate speech detection on social media is distinguishing hate speech from merely offensive or ordinary language. Hate speech is an essential category of content that web filters seek to remove, and only automated methods can manage the volume of data posted daily. To address this problem, the Natural Language Processing community is investigating different approaches to hate speech detection. Previous approaches (e.g., Convolutional Neural Networks, multi-channel BERT models, and lexical detection) have achieved low precision because they do not carefully treat related tasks such as sentiment analysis and emotion classification; they tend to label any message containing certain words as hate speech simply because those terms often appear alongside hateful rhetoric. In this research, we present a hate speech text classification system built on deep learning and machine learning: a new multitask model with shared emotional representations for detecting hate speech in English. A Transformer-based model from Hugging Face, combined with sentiment analysis, helped us prevent false positives. We conclude that utilizing sentiment analysis together with a trained Transformer-based model considerably improves hate speech detection across multiple datasets., Comment: 20 Pages
- Published
- 2024
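The false-positive suppression described above, gating a hate-speech prediction on an auxiliary sentiment signal, can be sketched as a simple decision rule. The function below is an illustrative toy, not the paper's multitask architecture; the thresholds and label names are assumptions.

```python
def gated_hate_label(hate_prob, sentiment_score,
                     hate_threshold=0.5, negativity_threshold=0.0):
    """Combine a hate classifier's probability with a sentiment score.

    Hypothetical rule: flag a message as hate speech only when the hate
    classifier fires AND the sentiment model agrees the message is negative.
    This suppresses false positives where a slur-like term appears in a
    neutral or positive context (quotes, reclaimed usage, news reports).
    sentiment_score < 0 means negative sentiment.
    """
    if hate_prob >= hate_threshold and sentiment_score < negativity_threshold:
        return "hate"
    if hate_prob >= hate_threshold:
        # Lexically hateful terms, but no negative sentiment backing them up.
        return "offensive-but-ambiguous"
    return "not-hate"
```

The point of the gate is exactly the failure mode the abstract criticizes: a purely lexical detector would return "hate" for all three cases below.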
9. Privacy-Preserving Customer Churn Prediction Model in the Context of Telecommunication Industry
- Author
- Sana, Joydeb Kumar, Rahman, M Sohel, and Rahman, M Saifur
- Subjects
- Computer Science - Machine Learning, Computer Science - Cryptography and Security
- Abstract
Data is the main fuel of a successful machine learning model. A dataset may contain sensitive individual records, e.g., personal health records, financial data, and industrial information. Training a model using this sensitive data has become a new privacy concern when third-party cloud computing is used, and trained models can also suffer privacy attacks that leak sensitive information from the training data. This study is conducted to preserve the privacy of training data in the context of customer churn prediction modeling for the telecommunications industry (TCI). In this work, we propose a framework for a privacy-preserving customer churn prediction (PPCCP) model in the cloud environment. Our novel approach combines Generative Adversarial Networks (GANs) and adaptive Weight-of-Evidence (aWOE): synthetic data is generated from GANs, and aWOE is applied to the synthetic training dataset before feeding the data to the classification algorithms. Our experiments were carried out using eight different machine learning (ML) classifiers on three openly accessible datasets from the telecommunication sector. We then evaluated the performance using six commonly employed evaluation metrics. In addition to presenting a data privacy analysis, we also performed a statistical significance test. The training and prediction processes achieve data privacy, and the prediction classifiers achieve high prediction performance (87.1% in terms of F-Measure for the GANs-aWOE-based Naïve Bayes model). In contrast to earlier studies, our suggested approach demonstrates a prediction enhancement of up to 28.9% and 27.9% in terms of accuracy and F-measure, respectively., Comment: 26 pages, 14 tables, 13 figures
- Published
- 2024
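Weight-of-Evidence, the encoding the aWOE step adapts, maps each category of a feature to the log-ratio of its frequency among positive versus negative examples. A minimal sketch of plain (non-adaptive) WOE follows; the smoothing scheme is an assumption, and the paper's adaptive variant differs in details not given in the abstract.

```python
import math

def woe_table(values, labels, eps=0.5):
    """Plain Weight-of-Evidence per category.

    WOE(c) = ln( P(value = c | positive) / P(value = c | negative) ),
    with additive smoothing eps to avoid log(0). `labels` are 0/1.
    """
    pos_total = sum(labels)
    neg_total = len(labels) - pos_total
    categories = sorted(set(values))
    table = {}
    for c in categories:
        pos_c = sum(1 for v, y in zip(values, labels) if v == c and y == 1)
        neg_c = sum(1 for v, y in zip(values, labels) if v == c and y == 0)
        p_pos = (pos_c + eps) / (pos_total + eps * len(categories))
        p_neg = (neg_c + eps) / (neg_total + eps * len(categories))
        table[c] = math.log(p_pos / p_neg)
    return table

# Categories seen mostly with churners get positive WOE, others negative.
woe = woe_table(["a", "a", "b", "b"], [1, 1, 0, 0])
```

Applying such a table to GAN-generated synthetic rows, as the framework does, turns raw categories into churn-predictive numeric features without ever exposing the real records to the classifiers.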
10. Embedding with Large Language Models for Classification of HIPAA Safeguard Compliance Rules
- Author
- Rahman, Md Abdur, Barek, Md Abdul, Riad, ABM Kamrul Islam, Rahman, Md Mostafizur, Rashid, Md Bajlur, Ambedkar, Smita, Miaa, Md Raihan, Wu, Fan, Cuzzocrea, Alfredo, and Ahamed, Sheikh Iqbal
- Subjects
- Computer Science - Cryptography and Security, Computer Science - Artificial Intelligence
- Abstract
Although software developers of mHealth apps are responsible for protecting patient data and adhering to strict privacy and security requirements, many of them lack awareness of HIPAA regulations and struggle to distinguish between HIPAA rule categories. Providing guidance on classifying HIPAA rule patterns is therefore essential for developing secure applications for the Google Play Store. In this work, we identified the limitations of traditional Word2Vec embeddings in processing code patterns. To address this, we adopted multilingual BERT (Bidirectional Encoder Representations from Transformers), which offers contextualized embeddings of the dataset's attributes. We applied BERT to our dataset to embed code patterns and then fed these embeddings to various machine learning approaches. Our results demonstrate that the models significantly enhance classification performance, with Logistic Regression achieving a remarkable accuracy of 99.95%. Additionally, we obtained high accuracy from Support Vector Machine (99.79%), Random Forest (99.73%), and Naive Bayes (95.93%), outperforming existing approaches. This work underscores the effectiveness of the approach and showcases its potential for secure application development., Comment: I am requesting the withdrawal of my paper due to critical issues identified in the methodology/results that may impact its accuracy and reliability. I also plan to make substantial revisions that go beyond minor corrections
- Published
- 2024
11. Self-DenseMobileNet: A Robust Framework for Lung Nodule Classification using Self-ONN and Stacking-based Meta-Classifier
- Author
- Rahman, Md. Sohanur, Chowdhury, Muhammad E. H., Rahman, Hasib Ryan, Ahmed, Mosabber Uddin, Kabir, Muhammad Ashad, Roy, Sanjiban Sekhar, and Sarmun, Rusab
- Subjects
- Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning
- Abstract
In this study, we propose a novel and robust framework, Self-DenseMobileNet, designed to enhance the classification of nodules and non-nodules in chest radiographs (CXRs). Our approach integrates advanced image standardization and enhancement techniques to optimize the input quality, thereby improving classification accuracy. To enhance predictive accuracy and leverage the strengths of multiple models, the prediction probabilities from Self-DenseMobileNet were transformed into tabular data and used to train eight classical machine learning (ML) models; the top three performers were then combined via a stacking algorithm, creating a robust meta-classifier that integrates their collective insights for superior classification performance. To enhance the interpretability of our results, we employed class activation mapping (CAM) to visualize the decision-making process of the best-performing model. Our proposed framework demonstrated remarkable performance on internal validation data, achieving an accuracy of 99.28\% using a Meta-Random Forest Classifier. When tested on an external dataset, the framework maintained strong generalizability with an accuracy of 89.40\%. These results highlight a significant improvement in the classification of CXRs with lung nodules., Comment: 31 pages
- Published
- 2024
12. Seminar on Alternative Technology: Simla, India, September 1975
- Author
- Rahman, A.
- Published
- 2023
13. Management of white grubs through a novel technology in Uttarakhand hills of North-West Himalayas
- Author
- Sushil, S. N., Stanley, J., Mohan, M., Selvakumar, G., Rai, Deepak, Rahman, A., Ramkewal, Pandey, Sunita, Bhatt, J. C., and Gupta, H. S.
- Published
- 2022
- Full Text
14. Technology in Education through Mobile Learning Application (MLA) and Its Impact on Learning Outcomes: Literature Review
- Author
- Mochamad Kamil Budiarto, Gunarhadi Gunarhadi, and Abdul Rahman
- Abstract
Integration of information and communication technology (ICT) in teacher education is a means to support the teaching and learning process. Good teaching that utilizes technology requires changes, especially in the realm of pedagogy, but teachers often do not have enough ability to optimize ICT in the learning process. In fact, ICT has the potential to provide various benefits for teachers and students, including shared learning spaces and cooperative and collaborative learning opportunities. Therefore, this research aims to identify the use of mobile learning applications (MLA) and their impact as a form of ICT integration in learning. The method used is a literature study, taking data from various relevant scientific articles and books. Data analysis uses descriptive analysis of the synthesized literature reviews. The results, drawn from 10 main articles and 15 relevant supporting articles as well as several book sources, show that mobile-based learning with smartphone devices is becoming a trend at various levels of education, both academic and vocational.
- Published
- 2024
15. The Effectiveness of Case Method in Developing Intrapreneurship among Business Students
- Author
- Khairuddin E. Tambunan, Ali Fikri Hasibuan, Rangga Restu Prayogo, Faisal Rahman Dongoran, Dedy Husrizal Syah, and Gaffar Hafiz Sagala
- Abstract
Intrapreneurship skill has been considered an alternative learning outcome of entrepreneurship education. However, entrepreneurship teachers need a complex learning program to develop intrapreneurship among business students. At the same time, the Ministry of Education and Culture of the Republic of Indonesia recommends that university teachers implement case methods to deliver complex learning environments and build critical skills among students. Therefore, this study aims to i) examine the effect of micro, small, and medium enterprise (MSME) cases on the intrapreneurship of business students, ii) investigate the influence of MSME cases on flow experience in entrepreneurship education, and iii) investigate the effect of flow experience during entrepreneurship education on intrapreneurship skill. We used field experiments with entrepreneurship and digital business students in business development courses. The results indicate that the case method effectively developed students' intrapreneurship skill, and flow experience during the course positively impacted students' intrapreneurship skill.
- Published
- 2024
16. Enhancement of the superconducting transition temperature due to multiband effect in the topological nodal-line semimetal Pb$_{1-x}$Sn$_{x}$TaSe$_{2}$
- Author
- Kumarasinghe, K., Rahman, A., Tomlinson, M., and Nakajima, Y.
- Subjects
- Condensed Matter - Superconductivity
- Abstract
We report a systematic study of the normal-state and superconducting properties of single crystal Pb$_{1-x}$Sn$_{x}$TaSe$_{2}$ $(0\leq x \leq 0.23)$. Sn doping enhances the superconducting transition temperature $T_{c}$ up to 5.1 K, while also significantly increasing impurity scattering in the crystals. For $x=0$, the specific heat jump at $T_{c}$ exceeds the Bardeen-Cooper-Schrieffer (BCS) weak-coupling value of 1.43, indicating the realization of strong-coupling superconductivity in PbTaSe$_{2}$. In contrast, substituting Pb with Sn lowers the specific heat jump at $T_{c}$ below the BCS value of 1.43, which cannot be explained by a single-gap model. Rather, the observed specific heat of Sn-doped PbTaSe$_{2}$ is reproduced by a two-gap model. Our observations suggest that additional Fermi pockets appear due to a reduction of the spin-orbit gap with Sn doping, and the multiband effect arising from these emergent Fermi pockets enhances the effective electron-phonon coupling strength, leading to the increase in $T_{c}$., Comment: 6 pages, 4 figures
- Published
- 2024
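Two-gap fits of the electronic specific heat, as invoked in this abstract, are conventionally done with a weighted sum of two single-gap BCS-like contributions (the standard "alpha model"). The generic form below is that textbook parametrization, not the paper's exact fit:

```latex
% Two-gap (alpha-model) parametrization: the electronic specific heat is a
% weighted sum of two single-gap BCS-like terms with gaps \Delta_1, \Delta_2
% and mixing weight x. A single-gap model is recovered at x = 0 or x = 1.
C_{es}(T) = x\, C_{\mathrm{BCS}}(T;\, \Delta_1)
          + (1 - x)\, C_{\mathrm{BCS}}(T;\, \Delta_2),
\qquad 0 \le x \le 1
```

A reduced jump $\Delta C / \gamma T_c < 1.43$, as reported for the Sn-doped crystals, is the classic signature that a second, smaller gap is soaking up part of the condensation entropy.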
17. Pralekha: An Indic Document Alignment Evaluation Benchmark
- Author
- Suryanarayanan, Sanjay, Song, Haiyue, Khan, Mohammed Safi Ur Rahman, Kunchukuttan, Anoop, Khapra, Mitesh M., and Dabre, Raj
- Subjects
- Computer Science - Computation and Language
- Abstract
Mining parallel document pairs poses a significant challenge because existing sentence embedding models often have limited context windows, preventing them from effectively capturing document-level information. Another overlooked issue is the lack of concrete evaluation benchmarks comprising high-quality parallel document pairs for assessing document-level mining approaches, particularly for Indic languages. In this study, we introduce Pralekha, a large-scale benchmark for document-level alignment evaluation. Pralekha includes over 2 million documents, with a 1:2 ratio of unaligned to aligned pairs, covering 11 Indic languages and English. Using Pralekha, we evaluate various document-level mining approaches across three dimensions: the embedding models, the granularity levels, and the alignment algorithm. To address the challenge of aligning documents using sentence and chunk-level alignments, we propose a novel scoring method, Document Alignment Coefficient (DAC). DAC demonstrates substantial improvements over baseline pooling approaches, particularly in noisy scenarios, achieving average gains of 20-30% in precision and 15-20% in F1 score. These results highlight DAC's effectiveness in parallel document mining for Indic languages., Comment: Work in Progress
- Published
- 2024
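The precision and F1 gains quoted for DAC are computed over mined document pairs versus gold alignments. The paper's DAC scoring itself is not spelled out in the abstract, but the evaluation side is standard set arithmetic, sketched here:

```python
def alignment_scores(predicted_pairs, gold_pairs):
    """Precision / recall / F1 for mined document pairs against gold pairs.

    Each pair identifies one source document and its claimed parallel
    counterpart, e.g. (doc_id_en, doc_id_hi).
    """
    predicted, gold = set(predicted_pairs), set(gold_pairs)
    tp = len(predicted & gold)                      # correctly mined pairs
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f1 = alignment_scores(
    predicted_pairs=[(1, "a"), (2, "b"), (3, "c"), (4, "d")],
    gold_pairs=[(1, "a"), (2, "b")],
)
```

Note the benchmark's 1:2 aligned-to-unaligned ratio is what makes precision informative here: a miner that pairs everything would score high recall but poor precision.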
18. A Novel Word Pair-based Gaussian Sentence Similarity Algorithm For Bengali Extractive Text Summarization
- Author
- Morshed, Fahim, Rahman, Md. Abdur, and Ahmed, Sumon
- Subjects
- Computer Science - Computation and Language, I.2.7
- Abstract
Extractive Text Summarization is the process of selecting the most representative parts of a larger text without losing any key information. Recent attempts at extractive text summarization in Bengali either relied on statistical techniques like TF-IDF or used naive sentence similarity measures like the word averaging technique. All of these strategies fail to express semantic relationships correctly. Here, we propose a novel Word pair-based Gaussian Sentence Similarity (WGSS) algorithm for calculating the semantic relation between two sentences. WGSS takes the geometric mean of individual Gaussian similarity values of word embedding vectors to get the semantic relationship between sentences. It compares two sentences on a word-to-word basis, which rectifies the sentence representation problem faced by the word averaging method. The summarization process extracts key sentences by grouping semantically similar sentences into clusters using the Spectral Clustering algorithm. After clustering, we use TF-IDF ranking to pick the best sentence from each cluster. The proposed method is validated using four different datasets, and it outperformed other recent models by 43.2% on average ROUGE scores (ranging from 2.5% to 95.4%). We also experimented on other low-resource languages, i.e., Turkish, Marathi, and Hindi, and found that the proposed method performs similarly to Bengali for these languages. In addition, a new high-quality Bengali dataset is curated which contains 250 articles and a pair of summaries for each of them. We believe this research is a crucial addition to Bengali Natural Language Processing (NLP) research and it can easily be extended into other low-resource languages. We made the implementation of the proposed model and data public at https://github.com/FMOpee/WGSS.
- Published
- 2024
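The core WGSS computation the abstract describes, Gaussian similarities of word-vector pairs combined by a geometric mean, can be sketched compactly. The positional word pairing and the value of sigma below are simplifying assumptions; the paper's exact pairing scheme may differ:

```python
import math

def gaussian_word_similarity(u, v, sigma=1.0):
    """Gaussian similarity of two word vectors: exp(-||u - v||^2 / (2 sigma^2))."""
    squared_dist = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-squared_dist / (2 * sigma ** 2))

def wgss(sentence_a, sentence_b, sigma=1.0):
    """Word pair-based Gaussian Sentence Similarity.

    Geometric mean of per-word-pair Gaussian similarities. Words are paired
    by position here for simplicity (an assumption); identical sentences
    score exactly 1.0, and any single very dissimilar pair drags the whole
    score down, unlike vector averaging.
    """
    sims = [gaussian_word_similarity(u, v, sigma)
            for u, v in zip(sentence_a, sentence_b)]
    return math.prod(sims) ** (1 / len(sims))
```

The geometric mean is the interesting design choice: averaging word vectors first can cancel out differences, whereas multiplying per-pair similarities preserves them.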
19. LLM-Powered Approximate Intermittent Computing
- Author
- Sayyid-Ali, Abdur-Rahman Ibrahim, Rafay, Abdul, Soomro, Muhammad Abdullah, Alizai, Muhammad Hamad, and Bhatti, Naveed Anwar
- Subjects
- Computer Science - Distributed, Parallel, and Cluster Computing
- Abstract
Batteryless IoT systems face energy constraints exacerbated by checkpointing overhead. Approximate computing offers solutions but demands manual expertise, limiting scalability. This paper presents CheckMate, an automated framework leveraging LLMs for context-aware code approximations. CheckMate integrates validation of LLM-generated approximations to ensure correct execution and employs Bayesian optimization to fine-tune approximation parameters autonomously, eliminating the need for developer input. Tested across six IoT applications, it reduces power cycles by up to 60% with an accuracy loss of just 8%, outperforming semi-automated tools like ACCEPT in speedup and accuracy. CheckMate's results establish it as a robust, user-friendly tool and a foundational step toward automated approximation frameworks for intermittent computing.
- Published
- 2024
20. Soil Characterization of Watermelon Field through Internet of Things: A New Approach to Soil Salinity Measurement
- Author
- Rahman, Md. Naimur, Sozol, Shafak Shahriar, Samsuzzaman, Md., Hossin, Md. Shahin, Islam, Mohammad Tariqul, Islam, S. M. Taohidul, and Maniruzzaman, Md.
- Subjects
- Electrical Engineering and Systems Science - Signal Processing, Computer Science - Artificial Intelligence, Computer Science - Machine Learning
- Abstract
In the modern agricultural industry, technology plays a crucial role in the advancement of cultivation. To increase crop productivity, soil requires specific characteristics; for watermelon cultivation, soil needs to be sandy and warm, with proper irrigation. This research aims to design and implement an intelligent IoT-based soil characterization system for watermelon fields. The developed IoT-based system measures the moisture, temperature, and pH of soil using different sensors, and the sensor data is uploaded to the cloud via Arduino and Raspberry Pi, from where users can obtain it using the mobile application and webpage developed for this system. To ensure the precision of the framework, this study compares the readings of soil parameters from existing field soil meters, the values obtained from the sensor-integrated IoT system, and data obtained from a soil science laboratory. Excessive salinity in soil reduces watermelon yield. This paper proposes a model for measuring soil salinity based on soil resistivity, establishing a relationship between soil salinity and soil resistivity from laboratory data using an artificial neural network (ANN).
- Published
- 2024
21. BanglaEmbed: Efficient Sentence Embedding Models for a Low-Resource Language Using Cross-Lingual Distillation Techniques
- Author
- Kabir, Muhammad Rafsan, Nabil, Md. Mohibur Rahman, and Khan, Mohammad Ashrafuzzaman
- Subjects
- Computer Science - Computation and Language, Computer Science - Machine Learning
- Abstract
Sentence-level embedding is essential for various tasks that require understanding natural language. Many studies have explored such embeddings for high-resource languages like English. However, low-resource languages like Bengali (a language spoken by almost two hundred and thirty million people) are still under-explored. This work introduces two lightweight sentence transformers for the Bangla language, leveraging a novel cross-lingual knowledge distillation approach. This method distills knowledge from a pre-trained, high-performing English sentence transformer. The proposed models are evaluated across multiple downstream tasks, including paraphrase detection, semantic textual similarity (STS), and Bangla hate speech detection. The new method consistently outperformed existing Bangla sentence transformers. Moreover, the lightweight architecture and shorter inference time make the models highly suitable for deployment in resource-constrained environments and valuable for practical NLP applications in low-resource languages., Comment: Accepted in ACAI 2024
- Published
- 2024
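Cross-lingual distillation of the kind described above is typically trained with a mean-squared-error objective: the student's embedding of a Bangla sentence is pulled toward the frozen teacher's embedding of the English translation. A minimal sketch of that objective (the standard formulation, assumed rather than taken from the paper):

```python
def distillation_loss(student_embeddings, teacher_embeddings):
    """Mean squared error between student embeddings (e.g. of Bangla
    sentences) and frozen-teacher embeddings (of the parallel English
    sentences). Minimizing this aligns the two embedding spaces.

    Both arguments are lists of equal-length float vectors.
    """
    total, count = 0.0, 0
    for s, t in zip(student_embeddings, teacher_embeddings):
        total += sum((a - b) ** 2 for a, b in zip(s, t))
        count += len(s)
    return total / count
```

Because the teacher never sees Bangla, all of the cross-lingual signal comes from the parallel sentence pairs, which is why the approach suits languages with little monolingual training data.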
22. Deep Learning Approach for Enhancing Oral Squamous Cell Carcinoma with LIME Explainable AI Technique
- Author
- Islam, Samiha, Mahmud, Muhammad Zawad, Alve, Shahran Rahman, and Chowdhury, Md. Mejbah Ullah
- Subjects
- Electrical Engineering and Systems Science - Image and Video Processing, Computer Science - Computer Vision and Pattern Recognition
- Abstract
The goal of the present study is to analyze the application of deep learning models to augment the diagnostic performance for oral squamous cell carcinoma (OSCC) in a longitudinal cohort study using the Histopathological Imaging Database for oral cancer analysis. The dataset consisted of 5192 images (2435 Normal and 2511 OSCC), which were allocated between training, testing, and validation sets using a stratified splitting technique that preserved the near-even class balance (about 52% OSCC). We selected four deep-learning architectures for evaluation: ResNet101, DenseNet121, VGG16, and EfficientNetB3. EfficientNetB3 was found to be the best, with an accuracy of 98.33% and an F1 score of 0.9844, while requiring remarkably less computing power than the other models. The next best was DenseNet121, with 90.24% accuracy and an F1 score of 90.45%. Moreover, we employed the Local Interpretable Model-agnostic Explanations (LIME) method to clarify why EfficientNetB3 made certain predictions, improving the explainability and trustworthiness of the results. This work provides evidence of potentially superior OSCC diagnosis with the EfficientNetB3 model, explained with AI techniques such as LIME, and lays important groundwork toward clinical usage., Comment: Under Review at an IEEE conference
- Published
- 2024
23. Atomistic investigation of irradiation-induced defect dynamics in FeNiCu medium-entropy alloy: effect of local chemical order
- Author
- Rahman, Kazi Tawseef, Shahriar, Mustofa Sakif, Ehsan, Mashaekh Tausif, and Hasan, Mohammad Nasim
- Subjects
- Condensed Matter - Materials Science
- Abstract
Medium- and high-entropy alloys (M/HEAs) have garnered significant attention as potential nuclear structural materials due to their excellent stability at high temperatures and resistance to radiation. However, the common use of Co in M/HEAs, which exhibits high radioactivity under radiation, has prompted the development of Co-free M/HEAs for nuclear applications. In this study, we investigate the irradiation behavior of FeNiCu, a promising Co-free medium-entropy alloy (MEA), with a focus on the effect of local chemical order (LCO), using hybrid molecular dynamics and Monte Carlo simulations. Considerable LCO in Cu-Cu and Fe-Ni pairs was observed in the thermodynamically stable ordered system. To conduct a comprehensive comparative study of irradiation-induced defect formation and dynamics, up to 500 cumulative displacement cascades were performed in random and ordered configurations of the MEA as well as in pure Ni as a benchmark. Our study revealed the LCO configuration as the most radiation-resistant structure among the three. The complex potential energy landscape (PEL) in MEAs disrupts dislocation growth, resulting in a dispersed dislocation distribution. The Cu-rich uniform regions in the ordered system act as defect traps, enabling faster diffusion and high defect recombination and resulting in the formation of dislocation networks in and near these regions. The lower stair-rod dislocation density in the ordered system revealed its high resistance to irradiation swelling, signifying the effect of LCO even more and positioning FeNiCu MEA as a strong candidate for future nuclear application. Additionally, the theoretical insights into defect evolution, covering formation and diffusion in both random and ordered structures, enhance our understanding of LCO's impact, offering a solid foundation for the future development of radiation-resistant M/HEAs for nuclear applications., Comment: 32 pages, 10 figures
- Published
- 2024
24. DIS-Mine: Instance Segmentation for Disaster-Awareness in Poor-Light Condition in Underground Mines
- Author
- Jewel, Mizanur Rahman, Elmahallawy, Mohamed, Madria, Sanjay, and Frimpong, Samuel
- Subjects
- Computer Science - Computer Vision and Pattern Recognition
- Abstract
Detecting disasters in underground mining, such as explosions and structural damage, has been a persistent challenge over the years. This problem is compounded for first responders, who often have no clear information about the extent or nature of the damage within the mine. The poor-light or even total darkness inside the mines makes rescue efforts incredibly difficult, leading to a tragic loss of life. In this paper, we propose a novel instance segmentation method called DIS-Mine, specifically designed to identify disaster-affected areas within underground mines under low-light or poor visibility conditions, aiding first responders in rescue efforts. DIS-Mine is capable of detecting objects in images, even in complete darkness, by addressing challenges such as high noise, color distortions, and reduced contrast. The key innovations of DIS-Mine are built upon four core components: i) Image brightness improvement, ii) Instance segmentation with SAM integration, iii) Mask R-CNN-based segmentation, and iv) Mask alignment with feature matching. On top of that, we have collected real-world images from an experimental underground mine, introducing a new dataset named ImageMine, specifically gathered in low-visibility conditions. This dataset serves to validate the performance of DIS-Mine in realistic, challenging environments. Our comprehensive experiments on the ImageMine dataset, as well as on various other datasets demonstrate that DIS-Mine achieves a superior F1 score of 86.0% and mIoU of 72.0%, outperforming state-of-the-art instance segmentation methods, with at least 15x improvement and up to 80% higher precision in object detection.
- Published
- 2024
25. Test-Time Adaptation of 3D Point Clouds via Denoising Diffusion Models
- Author
-
Dastmalchi, Hamidreza, An, Aijun, Cheraghian, Ali, Rahman, Shafin, and Ramasinghe, Sameera
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
Test-time adaptation (TTA) of 3D point clouds is crucial for mitigating discrepancies between training and testing samples in real-world scenarios, particularly when handling corrupted point clouds. LiDAR data, for instance, can be affected by sensor failures or environmental factors, causing domain gaps. Adapting models to these distribution shifts online is crucial, as training for every possible variation is impractical. Existing methods often focus on fine-tuning pre-trained models based on self-supervised learning or pseudo-labeling, which can lead to forgetting valuable source domain knowledge over time and reduce generalization on future tests. In this paper, we introduce a novel 3D test-time adaptation method, termed 3DD-TTA, which stands for 3D Denoising Diffusion Test-Time Adaptation. This method uses a diffusion strategy that adapts input point cloud samples to the source domain while keeping the source model parameters intact. The approach uses a Variational Autoencoder (VAE) to encode the corrupted point cloud into a shape latent and latent points. These latent points are corrupted with Gaussian noise and subjected to a denoising diffusion process. During this process, both the shape latent and latent points are updated to preserve fidelity, guiding the denoising toward generating consistent samples that align more closely with the source domain. We conduct extensive experiments on the ShapeNet dataset and investigate its generalizability on ModelNet40 and ScanObjectNN, achieving state-of-the-art results. The code has been released at \url{https://github.com/hamidreza-dastmalchi/3DD-TTA}., Comment: Accepted to WACV 2025 (Winter Conference on Applications of Computer Vision)
- Published
- 2024
26. Enhancing Transportation Cyber-Physical Systems Security: A Shift to Post-Quantum Cryptography
- Author
-
Mamun, Abdullah Al, Abrar, Akid, Rahman, Mizanur, Salek, M Sabbir, and Chowdhury, Mashrur
- Subjects
Computer Science - Cryptography and Security - Abstract
The rise of quantum computing threatens traditional cryptographic algorithms that secure Transportation Cyber-Physical Systems (TCPS). Shor's algorithm poses a significant threat to RSA and ECC, while Grover's algorithm reduces the security of symmetric encryption schemes, such as AES. The objective of this paper is to underscore the urgency of transitioning to post-quantum cryptography (PQC) to mitigate these risks in TCPS by analyzing the vulnerabilities of traditional cryptographic schemes and the applicability of standardized PQC schemes in TCPS. We analyzed vulnerabilities in traditional cryptography against quantum attacks and reviewed the applicability of NIST-standardized PQC schemes, including CRYSTALS-Kyber, CRYSTALS-Dilithium, and SPHINCS+, in TCPS. We conducted a case study to analyze the vulnerabilities of a TCPS application from the Architecture Reference for Cooperative and Intelligent Transportation (ARC-IT) service package, i.e., Electronic Toll Collection, leveraging the Microsoft Threat Modeling tool. This case study highlights the cryptographic vulnerabilities of a TCPS application and presents how PQC can effectively counter these threats. Additionally, we evaluated CRYSTALS-Kyber's performance across wired and wireless TCPS data communication scenarios. While CRYSTALS-Kyber proves effective in securing TCPS applications over high-bandwidth, low-latency Ethernet networks, our analysis highlights challenges in meeting the stringent latency requirements of safety-critical wireless applications within TCPS. 
Future research should focus on developing lightweight PQC solutions and hybrid schemes that integrate traditional and PQC algorithms, to enhance compatibility, scalability, and real-time performance, ensuring robust protection against emerging quantum threats in TCPS., Comment: This version has been submitted to ACM Transactions on Cyber-Physical Systems (Special Issue on Security and Privacy in Safety-Critical Cyber-Physical Systems) and is currently under peer review. Please note that the abstract in this version has been revised from the ACM-submitted version to comply with arXiv's 1920-character limit
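As a rough rule of thumb (an illustration, not taken from the paper), the quantum threat model described above can be summarized per primitive: Shor's algorithm collapses RSA/ECC security entirely, while Grover's quadratic search speedup roughly halves the effective key strength of symmetric ciphers such as AES. The scheme names and logic below are deliberate simplifications:

```python
def effective_security_bits(scheme, classical_bits):
    """Back-of-envelope post-quantum security of common primitives.

    Shor's algorithm breaks RSA/ECC outright; Grover's quadratic speedup
    roughly halves the effective strength of symmetric ciphers like AES.
    PQC schemes (e.g. lattice-based KEMs such as Kyber) are modeled here
    as keeping their classical security level.
    """
    if scheme in ("RSA", "ECC"):
        return 0                    # Shor: polynomial-time break
    if scheme == "AES":
        return classical_bits // 2  # Grover: quadratic search speedup
    return classical_bits           # assumed quantum-resistant
```

Under this rule, AES-256 retains 128-bit security against a quantum adversary, which is why symmetric schemes survive with larger keys while RSA/ECC must be replaced.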
- Published
- 2024
27. Freezing of Gait Detection Using Gramian Angular Fields and Federated Learning from Wearable Sensors
- Author
-
Soumma, Shovito Barua, Alam, S M Raihanul, Rahman, Rudmila, Mahi, Umme Niraj, Mamun, Abdullah, Mostafavi, Sayyed Mostafa, and Ghasemzadeh, Hassan
- Subjects
Computer Science - Machine Learning ,Electrical Engineering and Systems Science - Signal Processing - Abstract
Freezing of gait (FOG) is a debilitating symptom of Parkinson's disease (PD) that impairs mobility and safety. Traditional detection methods face challenges due to intra- and inter-patient variability, and most systems are tested in controlled settings, limiting their real-world applicability. Addressing these gaps, we present FOGSense, a novel FOG detection system designed for uncontrolled, free-living conditions. It uses Gramian Angular Field (GAF) transformations and federated deep learning to capture temporal and spatial gait patterns missed by traditional methods. We evaluated our FOGSense system using a public PD dataset, 'tdcsfog'. FOGSense improves accuracy by 10.4% over a single-axis accelerometer, reduces failure points compared to multi-sensor systems, and demonstrates robustness to missing values. The federated architecture allows personalized model adaptation and efficient smartphone synchronization during off-peak hours, making it effective for long-term monitoring as symptoms evolve. Overall, FOGSense achieves a 22.2% improvement in F1-score compared to state-of-the-art methods, along with enhanced sensitivity for FOG episode detection. Code is available: https://github.com/shovito66/FOGSense.
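The GAF transformation at the heart of FOGSense can be sketched in a few lines. This is the generic Gramian Angular Summation Field (GASF), assuming a 1-D sensor window; the paper's exact preprocessing may differ:

```python
import math

def gramian_angular_field(series):
    """Gramian Angular Summation Field (GASF) of a 1-D signal.

    Steps: rescale the series to [-1, 1], map each value to an angle
    via arccos, then build the matrix G[i][j] = cos(phi_i + phi_j).
    The result encodes temporal correlations as a 2-D image suitable
    for a CNN.
    """
    lo, hi = min(series), max(series)
    if hi == lo:
        raise ValueError("constant series has no angular encoding")
    scaled = [2.0 * (x - lo) / (hi - lo) - 1.0 for x in series]
    # clamp guards against tiny floating-point excursions outside [-1, 1]
    phi = [math.acos(max(-1.0, min(1.0, x))) for x in scaled]
    return [[math.cos(a + b) for b in phi] for a in phi]

# A short accelerometer-like window becomes a 5x5 angular image.
gaf = gramian_angular_field([0.1, 0.5, 0.9, 0.4, 0.2])
```

The matrix is symmetric by construction, and the diagonal entry for the window's maximum value is exactly 1 (its angle is zero after rescaling).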
- Published
- 2024
28. Towards Understanding the Impact of Data Bugs on Deep Learning Models in Software Engineering
- Author
-
Shah, Mehil B, Rahman, Mohammad Masudur, and Khomh, Foutse
- Subjects
Computer Science - Software Engineering - Abstract
Deep learning (DL) techniques have achieved significant success in various software engineering tasks (e.g., code completion by Copilot). However, DL systems are prone to bugs from many sources, including training data. Existing literature suggests that bugs in training data are highly prevalent, but little research has focused on understanding their impacts on the models used in software engineering tasks. In this paper, we address this research gap through a comprehensive empirical investigation focused on three types of data prevalent in software engineering tasks: code-based, text-based, and metric-based. Using state-of-the-art baselines, we compare the models trained on clean datasets with those trained on datasets with quality issues and without proper preprocessing. By analysing the gradients, weights, and biases from neural networks under training, we identify the symptoms of data quality and preprocessing issues. Our analysis reveals that quality issues in code data cause biased learning and gradient instability, whereas problems in text data lead to overfitting and poor generalisation of models. On the other hand, quality issues in metric data result in exploding gradients and model overfitting, and inadequate preprocessing exacerbates these effects across all three data types. Finally, we demonstrate the validity and generalizability of our findings using six new datasets. Our research provides a better understanding of the impact and symptoms of data bugs in software engineering datasets. Practitioners and researchers can leverage these findings to develop better monitoring systems and data-cleaning methods to help detect and resolve data bugs in deep learning systems., Comment: Submitted to Empirical Software Engineering Journal
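The monitoring systems the authors envision could start from a simple per-step check on gradient norms, since exploding gradients and instability are among the symptoms the study identifies. The thresholds below are illustrative assumptions, not values from the paper:

```python
def gradient_health(grad_norms, explode_thresh=1e3, swing_ratio=100.0):
    """Classify a training run from its per-step gradient norms.

    Flags exploding gradients (any norm above explode_thresh) and
    instability (max/min norm ratio above swing_ratio). Both thresholds
    are hypothetical defaults for illustration.
    """
    if any(g > explode_thresh for g in grad_norms):
        return "exploding"
    lo, hi = min(grad_norms), max(grad_norms)
    if lo > 0 and hi / lo > swing_ratio:
        return "unstable"
    return "ok"
```

A real monitor would track these statistics per layer and over sliding windows, but even this coarse check distinguishes the three regimes the paper associates with data-quality issues.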
- Published
- 2024
29. Early Adoption of Generative Artificial Intelligence in Computing Education: Emergent Student Use Cases and Perspectives in 2023
- Author
-
Smith, C. Estelle, Shiekh, Kylee, Cooreman, Hayden, Rahman, Sharfi, Zhu, Yifei, Siam, Md Kamrul, Ivanitskiy, Michael, Ahmed, Ahmed M., Hallinan, Michael, Grisak, Alexander, and Fierro, Gabe
- Subjects
Computer Science - Computers and Society - Abstract
Because of the rapid development and increasing public availability of Generative Artificial Intelligence (GenAI) models and tools, educational institutions and educators must immediately reckon with the impact of students using GenAI. There is limited prior research on computing students' use and perceptions of GenAI. In anticipation of future advances and evolutions of GenAI, we capture a snapshot of student attitudes towards and uses of yet emerging GenAI, in a period of time before university policies had reacted to these technologies. We surveyed all computer science majors in a small engineering-focused R1 university in order to: (1) capture a baseline assessment of how GenAI has been immediately adopted by aspiring computer scientists; (2) describe computing students' GenAI-related needs and concerns for their education and careers; and (3) discuss GenAI influences on CS pedagogy, curriculum, culture, and policy. We present an exploratory qualitative analysis of this data and discuss the impact of our findings on the emerging conversation around GenAI and education., Comment: 7 pages
- Published
- 2024
- Full Text
- View/download PDF
30. Exploring the effects of diameter and volume fraction of quantum dots on photocarrier generation rate in solar cells
- Author
-
Hafiz, F., Rafi, M. R. I., Tasfia, M., Rahman, M. M., and Chowdhury, M. M.
- Subjects
Physics - Applied Physics - Abstract
This paper extends a previous model for p-i-n GaAs quantum dot solar cells (QDSC) by revising the equation of photocarrier generation rate in quantum dots (QDs) inside the intrinsic region. In our model, we address a notable discrepancy that arose from the previous model where they did not consider the volume of QDs within the intrinsic region, leading to an overestimation of the photocarrier generation rate. Our present model rectifies this by incorporating the volume of quantum dots, resulting in adjustments to the photocarrier generation rate. Additionally, we determine the absorption coefficient of the QDs based on Mie theory for different diameter sizes considering the constant volume fraction of the total number of QDs in the intrinsic region. We observe in our analysis that the absorption spectra of the QDs and host material may overlap in certain cases, although the previous model assumed no overlap. This finding suggests the need for caution when evaluating spectral overlap: if the spectra do not overlap, both the previous and current modified models can be reliably applied. However, in cases of overlap, careful consideration is required to ensure accurate predictions of photocarrier generation. Furthermore, investigating the effect of QD diameter size on the photocarrier generation rate in the intrinsic region, we find that smaller QD sizes result in a higher absorption coefficient as well as a higher generation rate for a constant volume of QDs in the region. Moreover, we establish the optimization of the QDs array size by varying the size and the total volume of QDs to improve the generation rate. Our analysis reveals that a higher volume of QDs and a smaller size of QDs result in the maximum generation rate. From an experimental perspective, we propose that the optimal arrangement of QDs in such solar cells is a 0.5 volume fraction with a QD diameter of 2 nm., Comment: 22 pages, 6 figures
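The proposed optimum (0.5 volume fraction with 2 nm dots) implies a concrete dot count per unit volume. Assuming spherical QDs (an assumption; the abstract does not restate the geometry), the count follows from dividing the occupied volume by the volume of one dot:

```python
import math

def qd_count(volume_fraction, region_volume_nm3, diameter_nm):
    """Number of spherical quantum dots of a given diameter needed to
    occupy a target volume fraction of the intrinsic region.

    Spherical-QD geometry is an assumption made for illustration.
    """
    single_qd_volume = math.pi * diameter_nm ** 3 / 6.0  # (4/3)pi r^3
    return volume_fraction * region_volume_nm3 / single_qd_volume

# At the proposed optimum, roughly 119 dots fill each 1000 nm^3.
n_dots = qd_count(0.5, 1000.0, 2.0)
```

Because the count scales as d^-3 at fixed volume fraction, halving the diameter multiplies the number of dots by eight, which is the geometric reason smaller dots at constant total QD volume change the absorption and generation picture.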
- Published
- 2024
31. Dialectal Toxicity Detection: Evaluating LLM-as-a-Judge Consistency Across Language Varieties
- Author
-
Faisal, Fahim, Rahman, Md Mushfiqur, and Anastasopoulos, Antonios
- Subjects
Computer Science - Computation and Language - Abstract
There has been little systematic study of how dialectal differences affect toxicity detection by modern LLMs. Furthermore, although using LLMs as evaluators ("LLM-as-a-judge") is a growing research area, their sensitivity to dialectal nuances is still underexplored and requires more focused attention. In this paper, we address these gaps through a comprehensive toxicity evaluation of LLMs across diverse dialects. We create a multi-dialect dataset through synthetic transformations and human-assisted translations, covering 10 language clusters and 60 varieties. We then evaluate three LLMs on their ability to assess toxicity along three axes of consistency: multilingual, dialectal, and LLM-human. Our findings show that LLMs handle both multilingual and dialectal variations with reasonable sensitivity; however, ranking the three axes, the weakest is LLM-human agreement, followed by dialectal consistency. Code repository: \url{https://github.com/ffaisal93/dialect_toxicity_llm_judge}
- Published
- 2024
32. Molecular Dynamics Study of Liquid Condensation on Nano-structured Sinusoidal Hybrid Wetting Surfaces
- Author
-
Mehereen, Taskin, Chanda, Shorup, Nitu, Afrina Ayrin, Jami, Jubaer Tanjil, Rahim, Rafia Rizwana, and Rahman, Md Ashiqur
- Subjects
Condensed Matter - Materials Science ,Electrical Engineering and Systems Science - Systems and Control - Abstract
Although real surfaces exhibit intricate topologies at the nanoscale, surface roughness is often overlooked in nanoscale heat transfer studies. Superimposed sinusoidal functions effectively model the complexity of these surfaces. This study investigates the impact of sinusoidal roughness on liquid argon condensation over a functional gradient wetting (FGW) surface with 84% hydrophilic content using molecular dynamics simulations. Argon atoms are confined between two platinum substrates: a flat lower substrate heated to 130 K and a rough upper substrate at 90 K. Key metrics of the nanoscale condensation process, such as nucleation, surface heat flux, and total energy per atom, are analyzed. Rough surfaces significantly enhance nucleation, nearly doubling cluster counts compared to smooth surfaces, and yield a more extended atomic density profile with a higher peak and improved heat flux. Stronger atom-surface interactions also lead to more efficient energy dissipation. These findings underscore the importance of surface roughness in optimizing condensation and heat transfer, offering a more accurate representation of surface textures and a basis for designing surfaces that achieve superior heat transfer performance., Comment: 9 pages, 7 figures, conference
- Published
- 2024
33. A Regularized LSTM Method for Detecting Fake News Articles
- Author
-
Camelia, Tanjina Sultana, Fahim, Faizur Rahman, and Anwar, Md. Musfique
- Subjects
Computer Science - Machine Learning ,Computer Science - Computation and Language ,Computer Science - Computers and Society - Abstract
Nowadays, the rapid diffusion of fake news poses a significant problem, as it can spread misinformation and confusion. This paper aims to develop an advanced machine learning solution for detecting fake news articles. Leveraging a comprehensive dataset of news articles, including 23,502 fake news articles and 21,417 accurate news articles, we implemented and evaluated three machine-learning models. Our dataset, curated from diverse sources, provides rich textual content categorized into title, text, subject, and date features. These features are essential for training robust classification models to distinguish between fake and authentic news articles. The initial model employed a Long Short-Term Memory (LSTM) network, achieving an accuracy of 94%. The second model improved upon this by incorporating additional regularization techniques and fine-tuning hyperparameters, resulting in 97% accuracy. The final model combined the strengths of previous architectures with advanced optimization strategies, achieving a peak accuracy of 98%. These results demonstrate the effectiveness of our approach in identifying fake news with high precision. Implementing these models showcases significant advancements in natural language processing and machine learning techniques, contributing valuable tools for combating misinformation. Our work highlights the potential for deploying such models in real-world applications, providing a reliable method for automated fake news detection and enhancing the credibility of news dissemination., Comment: 6 pages, 7 figures, 2024 IEEE International Conference on Signal Processing, Information, Communication and Systems (SPICSCON)
- Published
- 2024
34. Electrical Load Forecasting in Smart Grid: A Personalized Federated Learning Approach
- Author
-
Rahman, Ratun, Kumar, Neeraj, and Nguyen, Dinh C.
- Subjects
Computer Science - Machine Learning ,Electrical Engineering and Systems Science - Signal Processing - Abstract
Electric load forecasting is essential for power management and stability in smart grids. This is mainly achieved via advanced metering infrastructure, where smart meters (SMs) record household energy consumption. Traditional machine learning (ML) methods are often employed for load forecasting but require data sharing, which raises data privacy concerns. Federated learning (FL) can address this issue by running distributed ML models at local SMs without data exchange. However, current FL-based approaches struggle to achieve efficient load forecasting due to imbalanced data distribution across heterogeneous SMs. This paper presents a novel personalized federated learning (PFL) method for load prediction under non-independent and identically distributed (non-IID) metering data settings. Specifically, we introduce meta-learning, in which the learning rates are adapted to maximize the gradient for each client in each global round. Clients with varying processing capacities, data sizes, and batch sizes can participate in global model aggregation and improve their local load forecasting via personalized learning. Simulation results show that our approach outperforms state-of-the-art ML and FL methods in terms of load forecasting accuracy., Comment: This paper has been accepted by the IEEE Consumer Communications \& Networking Conference (CCNC), Jan. 2025
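For context, the plain (non-personalized) FL aggregation step this kind of work builds on is size-weighted averaging of client models (FedAvg). The sketch below treats models as flat parameter vectors; the paper's PFL method additionally meta-learns per-client learning rates, which is not shown:

```python
def fedavg(client_weights, client_sizes):
    """Size-weighted federated averaging of flat parameter vectors.

    Each client i contributes its parameters weighted by its local
    dataset size, so clients with more metering data pull the global
    model harder. This is the baseline aggregation step, not the
    paper's personalized variant.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

For example, averaging two equal-sized clients gives the midpoint of their parameters, while a 3:1 data imbalance pulls the global model three-quarters of the way toward the larger client.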
- Published
- 2024
35. Nitrogen vacancy center in diamond-based Faraday magnetometer
- Author
-
Kashtiban, Reza, Morley, Gavin W., Newton, Mark E., and Rahman, A T M Anishur
- Subjects
Physics - Applied Physics ,Quantum Physics - Abstract
The nitrogen vacancy center in diamond is a versatile color center widely used for magnetometry, quantum computing, and quantum communications. In this article, we develop a new magnetometer using an ensemble of nitrogen vacancy centers and the Faraday effect. The sensitivity of our magnetometer is $300~\mathrm{nT}/\sqrt{\mathrm{Hz}}$. We argue that by using an optical cavity and a high-purity diamond, sensitivities at the femtotesla level can be achieved., Comment: 5 pages, 4 figures
- Published
- 2024
36. The Effect of Galaxy Interactions on Starbursts in Milky Way-Mass Galaxies in FIRE Simulations
- Author
-
Li, Fei, Rahman, Mubdi, Murray, Norman, Kereš, Dušan, Wetzel, Andrew, Faucher-Giguère, Claude-André, Hopkins, Philip F., and Moreno, Jorge
- Subjects
Astrophysics - Astrophysics of Galaxies - Abstract
Simulations and observations suggest that galaxy interactions may enhance the star formation rate (SFR) in merging galaxies. One proposed mechanism is the torque exerted on the gas and stars in the larger galaxy by the smaller galaxy. We analyze the interaction torques and star formation activity in six galaxies from the FIRE-2 simulation suite with masses comparable to the Milky Way galaxy at redshift $z=0$. We trace the halos from $z = 3.6$ to $z=0$, calculating the torque exerted by the nearby galaxies on the gas in the central galaxy. We calculate the correlation between the torque and the SFR across the simulations for various mass ratios. For near-equal-stellar-mass-ratio interactions in the galaxy sample, occurring between $z=1.2-3.6$, there is a positive and statistically significant correlation between the torque from nearby galaxies on the gas of the central galaxies and the SFR. For all other samples, no statistically significant correlation is found between the torque and the SFR. Our analysis shows that some, but not all, major interactions cause starbursts in the simulated Milky Way-mass galaxies, and that most starbursts are not caused by galaxy interactions. The transition from `bursty' at high redshift ($z\gtrsim1$) to `steady' star-formation state at later times is independent of the interaction history of the galaxies, and most of the interactions do not leave significant imprints on the overall trend of the star formation history of the galaxies., Comment: Submitted to ApJ
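The torque-SFR correlation can be illustrated with a minimal Pearson coefficient. Whether the authors use Pearson, Spearman, or another estimator (and how they assess significance) is not restated in the abstract, so treat this as a generic sketch:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples,
    e.g. per-snapshot torque values and star formation rates.

    Returns a value in [-1, 1]: +1 for a perfect positive linear
    relation, -1 for a perfect negative one, near 0 for no linear trend.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

In practice one would also attach a p-value (e.g. via a t-test on r) to decide statistical significance, which is the criterion the abstract's conclusions rest on.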
- Published
- 2024
37. Hybrid Vector Auto Regression and Neural Network Model for Order Flow Imbalance Prediction in High Frequency Trading
- Author
-
Rahman, Abdul and Upadhye, Neelesh
- Subjects
Quantitative Finance - Computational Finance ,Quantitative Finance - Statistical Finance ,Quantitative Finance - Trading and Market Microstructure - Abstract
In high frequency trading, accurate prediction of Order Flow Imbalance (OFI) is crucial for understanding market dynamics and maintaining liquidity. This paper introduces a hybrid predictive model that combines Vector Auto Regression (VAR) with a simple feedforward neural network (FNN) to forecast OFI and assess trading intensity. The VAR component captures linear dependencies, while residuals are fed into the FNN to model non-linear patterns, enabling a comprehensive approach to OFI prediction. Additionally, the model calculates the intensity on the buy or sell side, providing insights into which side holds greater trading pressure. These insights facilitate the development of trading strategies by identifying periods of high buy or sell intensity. Using both synthetic and real trading data from Binance, we demonstrate that the hybrid model offers significant improvements in predictive accuracy and enhances strategic decision-making based on OFI dynamics. Furthermore, we compare the hybrid model's performance with standalone FNN and VAR models, showing that the hybrid approach achieves superior forecasting accuracy across both synthetic and real datasets, making it the most effective model for OFI prediction in high frequency trading., Comment: 17 pages, 9 figures
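The hybrid idea, a linear stage plus a neural correction fitted to its residuals, can be sketched in miniature. The code below substitutes a univariate AR(1) fit for the full VAR and a single tanh unit trained by gradient descent for the FNN; both simplifications are assumptions for illustration, not the paper's architecture:

```python
import math

def fit_ar1(series):
    """Least-squares AR(1) fit x_t ~ a * x_{t-1} + c (linear stage,
    a one-variable stand-in for the paper's VAR component)."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def fit_residual_net(resid, lr=0.05, epochs=500):
    """Fit r_t ~ w2*tanh(w1*r_{t-1} + b1) + b2 by gradient descent:
    a one-hidden-unit stand-in for the FNN that models the non-linear
    structure left over by the linear stage."""
    w1, b1, w2, b2 = 0.5, 0.0, 0.5, 0.0
    pairs = list(zip(resid[:-1], resid[1:]))
    for _ in range(epochs):
        for x, y in pairs:
            h = math.tanh(w1 * x + b1)
            e = (w2 * h + b2) - y          # prediction error
            w2 -= lr * e * h               # backprop through the
            b2 -= lr * e                   # single hidden unit
            g = e * w2 * (1 - h * h)
            w1 -= lr * g * x
            b1 -= lr * g
    return w1, b1, w2, b2
```

The hybrid forecast is then `a * x_last + c` plus the net's prediction on the last residual, mirroring the VAR-then-FNN composition described above.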
- Published
- 2024
38. PyGen: A Collaborative Human-AI Approach to Python Package Creation
- Author
-
Barua, Saikat, Rahman, Mostafizur, Sadek, Md Jafor, Islam, Rafiul, Khaled, Shehnaz, and Hossain, Md. Shohrab
- Subjects
Computer Science - Software Engineering ,Computer Science - Artificial Intelligence - Abstract
The principles of automation and innovation serve as foundational elements for advancement in contemporary science and technology. Here, we introduce Pygen, an automation platform designed to empower researchers, technologists, and hobbyists to bring abstract ideas to life as core, usable software tools written in Python. Pygen leverages the immense power of autoregressive large language models to augment human creativity during the ideation, iteration, and innovation process. By combining state-of-the-art language models with open-source code generation technologies, Pygen has significantly reduced the manual overhead of tool development. From a user prompt, Pygen automatically generates Python packages for a complete workflow from concept to package generation and documentation. The findings of our work show that Pygen considerably enhances the researcher's productivity by enabling the creation of resilient, modular, and well-documented packages for various specialized purposes. We employ a prompt enhancement approach to distill the user's package description into increasingly specific and actionable prompts. While being inherently an open-ended task, we have evaluated the generated packages and the documentation using human evaluation, LLM-based evaluation, and CodeBLEU, with detailed results in the results section. Furthermore, we documented our results, analyzed the limitations, and suggested strategies to alleviate them. Pygen is our vision of ethical automation, a framework that promotes inclusivity, accessibility, and collaborative development. This project marks the beginning of a large-scale effort towards creating tools where intelligent agents collaborate with humans to improve scientific and technological development substantially. Our code and generated examples are open-sourced at [https://github.com/GitsSaikat/Pygen], Comment: 33 pages, 13 figures
- Published
- 2024
39. Combining Entangled and Non-Entangled Based Quantum Key Distribution Protocol With GHZ State
- Author
-
Sykot, Arman, Rahman, Mohammad Hasibur, Anannya, Rifat Tasnim, Upoma, Khan Shariya Hasan, and Mahdy, M. R. C.
- Subjects
Quantum Physics ,Computer Science - Cryptography and Security ,Physics - Applied Physics - Abstract
This paper presents a novel hybrid Quantum Key Distribution (QKD) protocol that combines entanglement-based and non-entanglement-based approaches to optimize security and the number of generated keys. We introduce a dynamic system that integrates a three-particle GHZ state method with the two-state B92 protocol, using a quantum superposition state to probabilistically switch between them. The GHZ state component leverages strong three-particle entanglement correlations for enhanced security, while the B92 component offers simplicity and potentially higher key generation rates. Implemented and simulated using Qiskit, our approach demonstrates a higher number of generated keys compared to standalone protocols while maintaining robust security. We present a comprehensive analysis of the security properties and performance characteristics of the proposed protocol. The results show that this combined method effectively balances the trade-offs inherent in QKD systems, offering a flexible framework adaptable to varying channel conditions and security requirements. This research contributes to ongoing efforts to make QKD more practical and efficient, potentially advancing the development of large-scale, secure quantum networks., Comment: 14 pages, 24 equations, 9 figures
- Published
- 2024
40. CineXDrama: Relevance Detection and Sentiment Analysis of Bangla YouTube Comments on Movie-Drama using Transformers: Insights from Interpretability Tool
- Author
-
Rifa, Usafa Akther, Debnath, Pronay, Rafa, Busra Kamal, Hridi, Shamaun Safa, and Rahman, Md. Aminur
- Subjects
Computer Science - Computation and Language - Abstract
In recent years, YouTube has become the leading platform for Bangla movies and dramas, where viewers express their opinions in comments that convey their sentiments about the content. However, not all comments are relevant for sentiment analysis, necessitating a filtering mechanism. We propose a system that first assesses the relevance of comments and then analyzes the sentiment of those deemed relevant. We introduce a dataset of 14,000 manually collected and preprocessed comments, annotated for relevance (relevant or irrelevant) and sentiment (positive or negative). Eight transformer models, including BanglaBERT, were used for classification tasks, with BanglaBERT achieving the highest accuracy (83.99% for relevance detection and 93.3% for sentiment analysis). The study also integrates LIME to interpret model decisions, enhancing transparency.
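The two-stage design (relevance filtering before sentiment analysis) reduces to a short control-flow skeleton. The keyword classifiers below are hypothetical stand-ins, just to exercise the flow; in the paper both stages are fine-tuned transformer heads such as BanglaBERT:

```python
def two_stage_label(comment, relevance_clf, sentiment_clf):
    """CineXDrama-style pipeline: sentiment runs only on comments the
    first stage deems relevant; everything else is filtered out.

    Both classifiers are callables returning label strings, so any
    model (keyword rule, transformer head) can be plugged in.
    """
    if relevance_clf(comment) != "relevant":
        return "irrelevant"
    return sentiment_clf(comment)

# Hypothetical keyword stand-ins for the paper's transformer classifiers.
is_relevant = lambda c: "relevant" if ("drama" in c or "movie" in c) else "irrelevant"
polarity = lambda c: "positive" if "great" in c else "negative"
```

The benefit of staging is that the (often noisier) sentiment model never sees off-topic comments, which is exactly the filtering mechanism the abstract motivates.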
- Published
- 2024
41. FactLens: Benchmarking Fine-Grained Fact Verification
- Author
-
Mitra, Kushan, Zhang, Dan, Rahman, Sajjadur, and Hruschka, Estevam
- Subjects
Computer Science - Computation and Language ,Computer Science - Artificial Intelligence ,Computer Science - Machine Learning - Abstract
Large Language Models (LLMs) have shown impressive capability in language generation and understanding, but their tendency to hallucinate and produce factually incorrect information remains a key limitation. To verify LLM-generated contents and claims from other sources, traditional verification approaches often rely on holistic models that assign a single factuality label to complex claims, potentially obscuring nuanced errors. In this paper, we advocate for a shift toward fine-grained verification, where complex claims are broken down into smaller sub-claims for individual verification, allowing for more precise identification of inaccuracies, improved transparency, and reduced ambiguity in evidence retrieval. However, generating sub-claims poses challenges, such as maintaining context and ensuring semantic equivalence with respect to the original claim. We introduce FactLens, a benchmark for evaluating fine-grained fact verification, with metrics and automated evaluators of sub-claim quality. The benchmark data is manually curated to ensure high-quality ground truth. Our results show alignment between automated FactLens evaluators and human judgments, and we discuss the impact of sub-claim characteristics on the overall verification performance., Comment: 12 pages, under review
- Published
- 2024
42. Sdn Intrusion Detection Using Machine Learning Method
- Author
-
Mahmud, Muhammad Zawad, Alve, Shahran Rahman, Islam, Samiha, and Khan, Mohammad Monirujjaman
- Subjects
Computer Science - Cryptography and Security ,Computer Science - Machine Learning ,Computer Science - Networking and Internet Architecture - Abstract
Software-defined networking (SDN) is an approach that makes network control directly programmable by separating the control plane from the underlying infrastructure, which can then be abstracted from applications and network services. When it comes to security, however, the centralization this demands is ripe for a variety of cyber threats that are not typically seen in other network architectures. The authors in this research developed a novel machine-learning method to detect intrusions in networks. We applied the classifiers to the UNSW-NB15 intrusion detection benchmark and trained models on this data. Random Forest and Decision Tree were assessed alongside Gradient Boosting and AdaBoost. The best-performing model was Gradient Boosting, with accuracy, recall, and F1 score of 99.87%, 100%, and 99.85%, respectively, which makes it reliable for detecting intrusions in SDN networks. The second best-performing classifier was Random Forest with 99.38% accuracy, followed by AdaBoost and Decision Tree. The research shows that Gradient Boosting is so effective in this task because it combines weak learners into a strong ensemble model that can predict with high accuracy whether traffic is normal or malicious. This paper indicates that the GBDT-IDS model is able to improve network security significantly, with better real-time detection accuracy and low false-positive rates. In future work, we will integrate this model into a live SDN environment to observe its behavior and scalability. This research serves as an initial base from which further strides can be made to enhance security in SDN using ML techniques and build more secure, resilient networks., Comment: 15 Pages, 14 Figures
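The ensemble intuition cited above, weak learners combined into a strong model, can be shown with a toy gradient-boosting loop over decision stumps (squared loss, one feature). This illustrates the principle only, not the paper's GBDT-IDS:

```python
def fit_stump(xs, residuals):
    """Best single-threshold split with constant leaves under squared error."""
    best = None
    for t in sorted(set(xs))[:-1]:  # splitting above the max is degenerate
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    if best is None:
        raise ValueError("need at least two distinct feature values")
    _, t, lm, rm = best
    return lambda x, t=t, lm=lm, rm=rm: lm if x <= t else rm

def boost(xs, ys, rounds=20, lr=0.3):
    """Gradient boosting for squared loss: each stump (a weak learner)
    fits the current residuals, and the shrunken sum of stumps forms
    the strong ensemble."""
    stumps, pred = [], [0.0] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)
```

On a step-function target the ensemble's score converges toward 0 on one side and 1 on the other, so thresholding at 0.5 recovers the normal/malicious split even though each individual stump is weak.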
- Published
- 2024
43. Data-Driven Distributed Common Operational Picture from Heterogeneous Platforms using Multi-Agent Reinforcement Learning
- Author
-
Sur, Indranil, Raghavan, Aswin, Rahman, Abrar, Hare, James Z, Cassenti, Daniel, and Busart, Carl
- Subjects
Computer Science - Multiagent Systems ,Computer Science - Artificial Intelligence - Abstract
The integration of unmanned platforms equipped with advanced sensors promises to enhance situational awareness and mitigate the "fog of war" in military operations. However, managing the vast influx of data from these platforms poses a significant challenge for Command and Control (C2) systems. This study presents a novel multi-agent learning framework to address this challenge. Our method enables autonomous and secure communication between agents and humans, which in turn enables real-time formation of an interpretable Common Operational Picture (COP). Each agent encodes its perceptions and actions into compact vectors, which are then transmitted, received and decoded to form a COP encompassing the current state of all agents (friendly and enemy) on the battlefield. Using Deep Reinforcement Learning (DRL), we jointly train COP models and agent's action selection policies. We demonstrate resilience to degraded conditions such as denied GPS and disrupted communications. Experimental validation is performed in the Starcraft-2 simulation environment to evaluate the precision of the COPs and robustness of policies. We report less than 5% error in COPs and policies resilient to various adversarial conditions. In summary, our contributions include a method for autonomous COP formation, increased resilience through distributed prediction, and joint training of COP models and multi-agent RL policies. This research advances adaptive and resilient C2, facilitating effective control of heterogeneous unmanned platforms., Comment: 29th International Command and Control Research & Technology Symposium
- Published
- 2024
44. Martin's Axiom and Weak Kurepa Hypothesis
- Author
-
Mohammadpour, Rahman
- Subjects
Mathematics - Logic ,03E35, 03E50, 03E57 - Abstract
I show that it is consistent relative to the consistency of a Mahlo cardinal that Martin's axiom holds at $\omega_2$, but the weak Kurepa Hypothesis fails. This answers a question posed by Honzik, Lambie-Hanson and Stejskalová. The consistency result is obtained by constructing a model where the weak Kurepa Hypothesis fails in any c.c.c. forcing extension., Comment: It is withdrawn due to a gap in the proof of the main theorem which was pointed out by John Krueger, to whom the author is grateful
- Published
- 2024
45. MISGUIDE: Security-Aware Attack Analytics for Smart Grid Load Frequency Control
- Author
-
Haque, Nur Imtiazul, Mali, Prabin, Haider, Mohammad Zakaria, Rahman, Mohammad Ashiqur, and Paudyal, Sumit
- Subjects
Computer Science - Computational Engineering, Finance, and Science - Abstract
Incorporating advanced information and communication technologies into smart grids (SGs) offers substantial operational benefits while increasing vulnerability to cyber threats like false data injection (FDI) attacks. Current SG attack analysis tools predominantly employ formal methods or adversarial machine learning (ML) techniques with rule-based bad data detectors to analyze the attack space. However, these attack analytics either generate simplistic attack vectors detectable by the ML-based anomaly detection models (ADMs) or fail to identify critical attack vectors from complex controller dynamics in a feasible time. This paper introduces MISGUIDE, a novel defense-aware attack analytics designed to extract verifiable multi-time slot-based FDI attack vectors from complex SG load frequency control dynamics and ADMs, utilizing the Gurobi optimizer. MISGUIDE can identify optimal (maliciously triggering under/over frequency relays in minimal time) and stealthy attack vectors. Using real-world load data, we validate the MISGUIDE-identified attack vectors through real-time hardware-in-the-loop (OPALRT) simulations of the IEEE 39-bus system., Comment: 12 page journal
- Published
- 2024
46. BhasaAnuvaad: A Speech Translation Dataset for 13 Indian Languages
- Author
-
Jain, Sparsh, Sankar, Ashwin, Choudhary, Devilal, Suman, Dhairya, Narasimhan, Nikhil, Khan, Mohammed Safi Ur Rahman, Kunchukuttan, Anoop, Khapra, Mitesh M, and Dabre, Raj
- Subjects
Computer Science - Computation and Language - Abstract
Automatic Speech Translation (AST) datasets for Indian languages remain critically scarce, with public resources covering fewer than 10 of the 22 official languages. This scarcity has resulted in AST systems for Indian languages lagging far behind those available for high-resource languages like English. In this paper, we first evaluate the performance of widely-used AST systems on Indian languages, identifying notable performance gaps and challenges. Our findings show that while these systems perform adequately on read speech, they struggle significantly with spontaneous speech, including disfluencies like pauses and hesitations. Additionally, there is a striking absence of systems capable of accurately translating colloquial and informal language, a key aspect of everyday communication. To this end, we introduce BhasaAnuvaad, the largest publicly available dataset for AST involving 13 out of 22 scheduled Indian languages and English spanning over 44,400 hours and 17M text segments. BhasaAnuvaad contains data for English speech to Indic text, as well as Indic speech to English text. This dataset comprises three key categories: (1) Curated datasets from existing resources, (2) Large-scale web mining, and (3) Synthetic data generation. By offering this diverse and expansive dataset, we aim to bridge the resource gap and promote advancements in AST for Indian languages., Comment: Work in Progress
- Published
- 2024
47. Ultrasound-Based AI for COVID-19 Detection: A Comprehensive Review of Public and Private Lung Ultrasound Datasets and Studies
- Author
-
Morshed, Abrar, Shihab, Abdulla Al, Jahin, Md Abrar, Nahian, Md Jaber Al, Sarker, Md Murad Hossain, Wadud, Md Sharjis Ibne, Uddin, Mohammad Istiaq, Siraji, Muntequa Imtiaz, Anjum, Nafisa, Shristy, Sumiya Rajjab, Rahman, Tanvin, Khatun, Mahmuda, Dewan, Md Rubel, Hossain, Mosaddeq, Sultana, Razia, Chakma, Ripel, Emon, Sonet Barua, Islam, Towhidul, and Hussain, Mohammad Arafat
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence - Abstract
The COVID-19 pandemic has affected millions of people globally, with respiratory organs being strongly affected in individuals with comorbidities. Medical imaging-based diagnosis and prognosis have become increasingly popular in clinical settings for detecting COVID-19 lung infections. Among various medical imaging modalities, ultrasound stands out as a low-cost, mobile, and radiation-safe imaging technology. In this comprehensive review, we focus on AI-driven studies utilizing lung ultrasound (LUS) for COVID-19 detection and analysis. We provide a detailed overview of both publicly available and private LUS datasets and categorize the AI studies according to the dataset they used. Additionally, we systematically analyzed and tabulated the studies across various dimensions, including data preprocessing methods, AI models, cross-validation techniques, and evaluation metrics. In total, we reviewed 60 articles, 41 of which utilized public datasets, while the remaining employed private data. Our findings suggest that ultrasound-based AI studies for COVID-19 detection have great potential for clinical use, especially for children and pregnant women. Our review also provides a useful summary for future researchers and clinicians who may be interested in the field.
- Published
- 2024
48. Hybrid Attention for Robust RGB-T Pedestrian Detection in Real-World Conditions
- Author
-
Rathinam, Arunkumar, Pauly, Leo, Shabayek, Abd El Rahman, Rharbaoui, Wassim, Kacem, Anis, Gaudillière, Vincent, and Aouada, Djamila
- Subjects
Computer Science - Computer Vision and Pattern Recognition ,Computer Science - Artificial Intelligence - Abstract
Multispectral pedestrian detection has gained significant attention in recent years, particularly in autonomous driving applications. To address the challenges posed by adversarial illumination conditions, the combination of thermal and visible images has demonstrated its advantages. However, existing fusion methods rely on the critical assumption that the RGB-Thermal (RGB-T) image pairs are fully overlapping. This assumption often does not hold in real-world applications, where only partial overlap between images can occur due to sensor configuration. Moreover, sensor failure can cause loss of information in one modality. In this paper, we propose a novel module called the Hybrid Attention (HA) mechanism as our main contribution to mitigate performance degradation caused by partial overlap and sensor failure, i.e. when at least part of the scene is acquired by only one sensor. We propose an improved RGB-T fusion algorithm, robust against partial overlap and sensor failure encountered during inference in real-world applications. We also leverage a mobile-friendly backbone to cope with resource constraints in embedded systems. We conducted experiments by simulating various partial overlap and sensor failure scenarios to evaluate the performance of our proposed method. The results demonstrate that our approach outperforms state-of-the-art methods, showcasing its superiority in handling real-world challenges., Comment: Accepted for publication in IEEE Robotics and Automation Letters, October 2024
- Published
- 2024
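An illustrative sketch of attention-weighted RGB-thermal feature fusion under partial overlap, in the spirit of this abstract; the energy-based gating, validity masks, and tensor shapes are assumptions for illustration, not the paper's Hybrid Attention module.

```python
import numpy as np

def fuse(rgb_feat, thermal_feat, rgb_valid, thermal_valid):
    """Fuse per-pixel features from two modalities, down-weighting
    regions where a modality is invalid (non-overlapping field of view
    or failed sensor) via a per-modality validity mask."""
    # Simple energy-based attention: stronger activations get larger
    # weight; the mask zeroes a modality where it has no data.
    w_rgb = np.exp(np.abs(rgb_feat).mean(axis=-1, keepdims=True)) * rgb_valid
    w_th = np.exp(np.abs(thermal_feat).mean(axis=-1, keepdims=True)) * thermal_valid
    total = w_rgb + w_th + 1e-8
    return (w_rgb * rgb_feat + w_th * thermal_feat) / total

# Example: the thermal sensor fails on the right half of a 4x4 feature map.
H, W, C = 4, 4, 8
rng = np.random.default_rng(0)
rgb = rng.random((H, W, C))
th = rng.random((H, W, C))
th_valid = np.ones((H, W, 1))
th_valid[:, W // 2:] = 0.0
fused = fuse(rgb, th, np.ones((H, W, 1)), th_valid)
# Where thermal is invalid, the fused features fall back to RGB alone.
assert np.allclose(fused[:, W // 2:], rgb[:, W // 2:])
```

The point of the masking is graceful degradation: fusion never divides attention mass between a present and an absent modality, which is the failure mode the abstract attributes to fully-overlapping fusion assumptions.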
49. ChatGPT in Research and Education: Exploring Benefits and Threats
- Author
-
Miah, Abu Saleh Musa, Tusher, Md Mahbubur Rahman, Hossain, Md. Moazzem, Hossain, Md Mamun, Rahim, Md Abdur, Hamid, Md Ekramul, Islam, Md. Saiful, and Shin, Jungpil
- Subjects
Computer Science - Computer Vision and Pattern Recognition - Abstract
In recent years, advanced artificial intelligence technologies, such as ChatGPT, have significantly impacted various fields, including education and research. Developed by OpenAI, ChatGPT is a powerful language model that presents numerous opportunities for students and educators. It offers personalized feedback, enhances accessibility, enables interactive conversations, assists with lesson preparation and evaluation, and introduces new methods for teaching complex subjects. However, ChatGPT also poses challenges to traditional education and research systems. These challenges include the risk of cheating on online exams, the generation of human-like text that may compromise academic integrity, a potential decline in critical thinking skills, and difficulties in assessing the reliability of information generated by AI. This study examines both the opportunities and challenges ChatGPT brings to education from the perspectives of students and educators. Specifically, it explores the role of ChatGPT in helping students develop their subjective skills. To demonstrate its effectiveness, we conducted several subjective experiments using ChatGPT, such as generating solutions from subjective problem descriptions. Additionally, surveys were conducted with students and teachers to gather insights into how ChatGPT supports subjective learning and teaching. The results and analysis of these surveys are presented to highlight the impact of ChatGPT in this context.
- Published
- 2024
50. MILU: A Multi-task Indic Language Understanding Benchmark
- Author
-
Verma, Sshubam, Khan, Mohammed Safi Ur Rahman, Kumar, Vishwajeet, Murthy, Rudra, and Sen, Jaydeep
- Subjects
Computer Science - Computation and Language - Abstract
Evaluating Large Language Models (LLMs) in low-resource and linguistically diverse languages remains a significant challenge in NLP, particularly for languages using non-Latin scripts like those spoken in India. Existing benchmarks predominantly focus on English, leaving substantial gaps in assessing LLM capabilities in these languages. We introduce MILU, a Multi-task Indic Language Understanding Benchmark, a comprehensive evaluation benchmark designed to address this gap. MILU spans 8 domains and 42 subjects across 11 Indic languages, reflecting both general and culturally specific knowledge. With an India-centric design, MILU incorporates material from regional and state-level examinations, covering topics such as local history, arts, festivals, and laws, alongside standard subjects like science and mathematics. We evaluate over 45 LLMs, and find that current LLMs struggle with MILU, with GPT-4o achieving the highest average accuracy at 72 percent. Open multilingual models outperform language-specific fine-tuned models, which perform only slightly better than random baselines. Models also perform better in high-resource languages as compared to low-resource ones. Domain-wise analysis indicates that models perform poorly in culturally relevant areas like Arts and Humanities, Law and Governance compared to general fields like STEM. To the best of our knowledge, MILU is the first of its kind benchmark focused on Indic languages, serving as a crucial step towards comprehensive cultural evaluation. All code, benchmarks, and artifacts are publicly available to foster open research.
- Published
- 2024
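A hypothetical sketch of the domain-wise accuracy breakdown this abstract reports; the record fields and the sample rows are illustrative assumptions, not the MILU data format.

```python
from collections import defaultdict

def domain_accuracy(records):
    """records: iterable of dicts with 'domain' and boolean 'correct'."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["domain"]] += 1
        hits[r["domain"]] += bool(r["correct"])
    return {d: hits[d] / totals[d] for d in totals}

# Toy per-question results for one model (values are made up).
results = [
    {"domain": "STEM", "correct": True},
    {"domain": "STEM", "correct": True},
    {"domain": "Arts and Humanities", "correct": True},
    {"domain": "Arts and Humanities", "correct": False},
]
print(domain_accuracy(results))
# → {'STEM': 1.0, 'Arts and Humanities': 0.5}
```

Aggregating per-domain rather than overall is what surfaces the gap the abstract notes between culturally specific areas and general fields like STEM.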