6,089 results for "False positive rate"
Search Results
2. Enabling space-time efficient range queries with REncoder.
- Author
-
Fan, Zhuochen, Ye, Bowen, Wang, Ziwei, Zhong, Zheng, Guo, Jiarui, Wu, Yuhan, Li, Haoyu, Yang, Tong, Tu, Yaofeng, Liu, Zirui, and Cui, Bin
- Abstract
A range filter is a data structure that answers range membership queries. Range queries are common in modern applications, and range filters have gained rising attention for improving the performance of range queries by ruling out empty ones. However, state-of-the-art range filters, such as SuRF and Rosetta, suffer from either a high false positive rate or low throughput. In this paper, we propose a novel range filter, called REncoder. It organizes all prefixes of keys into a segment tree and locally encodes the segment tree into a Bloom filter to accelerate queries. REncoder supports diverse workloads by adaptively choosing how many levels of the segment tree to store. In addition, we propose a customized blacklist optimization to further improve the accuracy of multi-round queries. We theoretically prove that the error of REncoder is bounded and derive the asymptotic space complexity under the bounded error. We conduct extensive experiments on both synthetic and real datasets. The experimental results show that REncoder outperforms all state-of-the-art range filters, and that the proposed blacklist optimization effectively reduces the false positive rate further. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
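The prefix-encoding idea summarized in the abstract above can be sketched in a few lines: store every binary prefix of each key in a Bloom filter, then answer a range query by probing the dyadic intervals that cover the range. This is an illustrative toy, not REncoder's actual segment-tree encoding; the 8-bit key width, filter size, and SHA-256-based hashing below are arbitrary assumptions.

```python
import hashlib

W = 8  # key width in bits (illustrative assumption)

class PrefixBloomRangeFilter:
    """Toy range filter: stores every binary prefix of each inserted key."""

    def __init__(self, m=4096, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, prefix):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{prefix}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def insert(self, key):
        b = format(key, f"0{W}b")
        for i in range(1, W + 1):          # store every prefix of the key
            for pos in self._positions(b[:i]):
                self.bits[pos] = 1

    def _maybe_contains(self, prefix):
        return all(self.bits[p] for p in self._positions(prefix))

    def query_range(self, lo, hi):
        """May return a false positive, but never a false negative."""
        def rec(prefix, lo_, hi_, depth):
            hi_bound = 2 ** (W - depth) - 1
            if hi_ < 0 or lo_ > hi_bound:
                return False               # query misses this subtree
            if lo_ <= 0 and hi_bound <= hi_ and depth > 0:
                return self._maybe_contains(prefix)  # dyadic cover probe
            mid = 2 ** (W - depth - 1)
            return (rec(prefix + "0", lo_, min(hi_, mid - 1), depth + 1)
                    or rec(prefix + "1", lo_ - mid, hi_ - mid, depth + 1))
        return rec("", lo, hi, 0)
```

Because every stored key contributes all of its prefixes, a range query can miss only if the Bloom filter lies, so there are no false negatives; the false positive rate is governed by the filter's load, which is exactly the trade-off the paper analyzes.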
3. Efficient hashing technique for malicious profile detection at hypervisor environment.
- Author
-
Kumar, Anumukonda Naga Seshu, Yadav, Rajesh Kumar, and Raghava, Nallanthighal Srinivasa
- Subjects
- VIRTUAL machine systems, HYPERVISOR (Computer software), INTERNET security, SECURITY systems, CYBERTERRORISM
- Abstract
Attack detection in cyber security systems is a complex task that requires domain-specific knowledge and cognitive intelligence to detect novel and unknown attacks in large-scale network data. This research explores how network operations and network security affect the detection of unknown attacks in network systems. A hash-based profile matching technique is presented in this paper for attack detection. The main objective of this work is to detect unknown attacks using a profile matching approach in hypervisors. Hypervisors are characterized by their versatile nature, since they allow the utilization of available system resources. The virtual machines (VMs) in a hypervisor are not dependent on the host hardware, and as a result, hypervisors are considered advantageous. In addition, hypervisors have direct access to hardware resources such as memory, storage and processors. However, hypervisors are more susceptible to security threats, which can attack each and every VM. A SHA3-512 hashing algorithm is used for generating hash values in the hypervisor, and the proposed model is used to verify whether a profile is malicious or benign. The performance of the hash-based profile matching technique is compared with traditional hash techniques, namely the SHA-256 and MD5 algorithms. Results show that the proposed SHA3-512 algorithm achieves phenomenal accuracy and a zero false positive rate. Simulation results also show that the computation time required by the SHA3-512 algorithm is lower than that of the SHA-256 and MD5 algorithms. The performance analysis validates that the hash-based approach achieves reliable performance for attack detection. The effectiveness of the hashing technique was determined using three different evaluation metrics, namely attack detection rate (DR), false positive rate (FPR), and computational time.
Simulation results show that the proposed SHA3-512 algorithm achieves a detection rate of 97.24% with a zero false positive rate and faster computational time compared to the SHA-256 and MD5 algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
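The core of the hash-based profile matching described above can be sketched with Python's standard `hashlib`, which provides SHA3-512. The class and method names below are hypothetical, not from the paper; this is only the exact-digest blacklist idea.

```python
import hashlib

class ProfileMatcher:
    """Minimal sketch of hash-based profile matching: a profile's SHA3-512
    digest is compared against a blacklist of digests of known-malicious
    profiles. An exact digest match has a zero false positive rate up to
    hash collisions, but any byte-level change to a malicious profile
    evades an exact match."""

    def __init__(self):
        self._malicious_digests = set()

    @staticmethod
    def digest(profile: bytes) -> str:
        # SHA3-512 produces a 512-bit (128 hex character) digest
        return hashlib.sha3_512(profile).hexdigest()

    def register_malicious(self, profile: bytes) -> None:
        self._malicious_digests.add(self.digest(profile))

    def is_malicious(self, profile: bytes) -> bool:
        return self.digest(profile) in self._malicious_digests
```

Comparing digests rather than raw profiles keeps each blacklist entry a fixed 64 bytes regardless of profile size, which is what makes hash-based matching cheap at hypervisor scale.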
4. Noninvasive prenatal testing (NIPT) results are less accurate the later applied during pregnancy
- Author
-
Thomas Liehr
- Subjects
Noninvasive prenatal testing (NIPT), Pregnancy age, False positive rate, Gynecology and obstetrics, RG1-991
- Abstract
Objective: Noninvasive prenatal testing (NIPT) has recently been introduced in prenatal genetics. Even though it is connected with biological, technical, medical and ethical issues, also reviewed here, it is meanwhile applied as a standard screening test. One of the obvious, but as yet not further reviewed, peculiarities of NIPT is that the reported false positive rates vary, specifically in European compared with Chinese publications. Materials and methods: Here, the only 15 suitable studies, covering >600,000 cases, were identified in which at least the average pregnancy age at the time NIPT was done was reported. Results and conclusion: It could be shown that NIPT is done in China in later weeks of gestation than in other countries. Besides, it is highlighted here for the first time that false positive NIPT results are less frequent the earlier the screening is performed. Most likely this is related to two biological phenomena: loss of trisomic pregnancies, and preferential survival of fetuses which underwent trisomic rescue but retain major trisomic populations in the placenta. This as-yet unconsidered aspect needs to be kept in mind, especially in late-stage high-risk pregnancies.
- Published
- 2024
- Full Text
- View/download PDF
5. Statistical analysis and comparison of deep convolutional neural network models for the identification and classification of maize leaf diseases.
- Author
-
Dash, Arabinda and Sethy, Prabira Kumar
- Subjects
CONVOLUTIONAL neural networks, ARTIFICIAL neural networks, PLANT diseases, PLANT identification, LEAF spots
- Abstract
Maize, like other plants, is particularly susceptible to various diseases. Therefore, one of the most crucial ways for farmers to prevent crop loss is early diagnosis of plant diseases. The application of deep convolutional neural networks (CNNs) for disease identification in maize plants can aid farmers in quickly and reliably detecting the presence of disease. In this regard, we have taken 4,988 images belonging to three distinct but widespread maize diseases: Leaf Blight, Common Rust, and Gray Leaf Spot. All these images are fed to 13 different pre-trained CNN models, namely AlexNet, DenseNet201, GoogLeNet, InceptionResNetV2, InceptionV3, MobileNetV2, VGG-16, VGG-19, ResNet-18, ResNet-50, ResNet-101, Xception and ShuffleNet, for training as well as testing. The performance of all these CNN models is recorded with reference to accuracy, specificity, precision, false positive rate, F1 score, MCC, and Kappa. The effectiveness of each model is then evaluated through multiclass statistical analysis with the IBM SPSS Statistics tool to select the most efficient classification model among the 13 models above. The comparison results show that, among all the models, DenseNet201 has the highest accuracy with the lowest false positive rate (FPR), whereas VGG-19 has the lowest accuracy with the highest false positive rate for the identification of the mentioned maize diseases. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
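The metrics this entry compares (accuracy, specificity, precision, false positive rate, F1 score, MCC, and Kappa) are all functions of the four confusion-matrix counts; a minimal sketch for the binary, one-vs-rest case:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Standard classification metrics from confusion-matrix counts."""
    n = tp + fp + tn + fn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # sensitivity / true positive rate
    specificity = tn / (tn + fp)
    fpr = fp / (fp + tn)               # false positive rate = 1 - specificity
    f1 = 2 * precision * recall / (precision + recall)
    mcc = ((tp * tn - fp * fn) /
           math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    p_observed = accuracy              # Cohen's kappa: observed vs. chance
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "fpr": fpr, "f1": f1,
            "mcc": mcc, "kappa": kappa}
```

Note that the false positive rate is simply 1 − specificity, which is why the highest-accuracy model having the lowest FPR in the comparison above is unsurprising: the two metrics move together on balanced data.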
6. Deep Learning-Based ECG Classification for Arterial Fibrillation Detection.
- Author
-
Irshad, Muhammad Sohail, Masood, Tehreem, Jaffar, Arfan, Rashid, Muhammad, Akram, Sheeraz, and Aljohani, Abeer
- Subjects
DEEP learning, ELECTROCARDIOGRAPHY, ATRIAL fibrillation, CONVOLUTIONAL neural networks
- Abstract
The application of deep learning techniques in the medical field, specifically for Atrial Fibrillation (AFib) detection through Electrocardiogram (ECG) signals, has attracted significant interest. Accurate and timely diagnosis increases the patient's chances of recovery. However, issues like overfitting and inconsistent accuracy across datasets remain challenges. To address these challenges, this study presents two prominent deep learning architectures, ResNet-50 and DenseNet-121, and evaluates their effectiveness in AFib detection. The aim was to create a robust detection mechanism that performs consistently well. Metrics such as loss, accuracy, precision, sensitivity, and Area Under the Curve (AUC) were utilized for evaluation. The findings revealed that ResNet-50 surpassed DenseNet-121 in all evaluated categories. It demonstrated lower loss rates of 0.0315 and 0.0305, superior accuracy of 98.77% and 98.88%, precision of 98.78% and 98.89%, and sensitivity of 98.76% and 98.86% for training and validation, respectively, hinting at its advanced capability for AFib detection. These insights offer a substantial contribution to the existing literature on deep learning applications for AFib detection from ECG signals. The comparative performance data assists future researchers in selecting suitable deep learning architectures for AFib detection. Moreover, the outcomes of this study are anticipated to stimulate the development of more advanced and efficient ECG-based AFib detection methodologies for more accurate and earlier detection of AFib, thereby fostering improved patient care and outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Anticipating Graduate Program Admission Through Implementation of Deep Learning Models
- Author
-
Shaik, Nazeer, Singh, Jagendra, Gupta, Ankur, Hasan, Dler Salih, Manikandan, N., Chandan, Radha Raman, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Shaw, Rabindra Nath, editor, Siano, Pierluigi, editor, Makhilef, Saad, editor, Ghosh, Ankush, editor, and Shimi, S. L., editor
- Published
- 2024
- Full Text
- View/download PDF
8. The heterogeneity effect of surveillance intervals on progression free survival.
- Author
-
Zhong, Zihang, Yang, Min, Ni, Senmiao, Cai, Lixin, Wu, Jingwei, Bai, Jianling, and Yu, Hao
- Subjects
- HETEROGENEITY, RATE setting, FALSE positive error, COLORECTAL cancer, LIFE expectancy, PROGRESSION-free survival, CLINICAL trials
- Abstract
Progression-free survival (PFS) is an increasingly important surrogate endpoint in cancer clinical trials. However, the true time of progression is typically unknown if the evaluation of progression status is only scheduled at given surveillance intervals. In addition, comparison between treatment arms under different surveillance schemes is not uncommon. Our aim is to explore whether heterogeneity of the surveillance intervals may interfere with the validity of conclusions of efficacy based on PFS, and the extent to which the variation would bias the results. We conduct comprehensive simulation studies to explore the aforementioned goals in a two-arm randomized controlled trial. We introduce three steps to simulate survival data with predefined surveillance intervals under different censoring-rate considerations. We report the estimated hazard ratios and examine false positive rate, power and bias under different surveillance intervals, given different baseline median PFS, hazard ratio and censoring rate settings. Results show that more heterogeneous lengths of surveillance intervals lead to a higher false positive rate and overestimated power, and that the effect of heterogeneous surveillance intervals may depend upon both the life expectancy of the tumor prognoses and the censoring proportion of the survival data. We also demonstrate this heterogeneity effect of surveillance intervals on PFS in a phase III metastatic colorectal cancer trial. In our opinion, adherence to consistent surveillance intervals should be favored in designing comparative trials. Otherwise, heterogeneity needs to be appropriately taken into account when analyzing the data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
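The mechanism the authors study can be illustrated with a small simulation: progression is only observed at scheduled scans, so each observed PFS time is the true time rounded up to the next scan, and a coarser schedule inflates the observed endpoint. This is a deliberately simplified sketch (exponential progression times, no censoring), not the paper's three-step simulation design.

```python
import math
import random
import statistics

def simulate_true_pfs(n, median, rng):
    """Exponential progression times (months) with the given median."""
    rate = math.log(2) / median
    return [rng.expovariate(rate) for _ in range(n)]

def observed_pfs(true_times, interval):
    """Progression is only seen at scans every `interval` months, so each
    observed time is the true time rounded up to the next scheduled scan."""
    return [math.ceil(t / interval) * interval for t in true_times]

rng = random.Random(42)
true_times = simulate_true_pfs(5000, median=6.0, rng=rng)
every_2mo = statistics.median(observed_pfs(true_times, 2.0))
every_4mo = statistics.median(observed_pfs(true_times, 4.0))
```

In a two-arm comparison, scanning one arm every 2 months and the other every 4 months would therefore build a spurious between-arm difference into observed PFS even under identical true progression, which is the kind of false positive the study quantifies.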
9. Advancing face detection efficiency: Utilizing classification networks for lowering false positive incidences
- Author
-
Jianlin Zhang, Chen Hou, Xu Yang, Xuechao Yang, Wencheng Yang, and Hui Cui
- Subjects
Convolutional Neural Networks (CNNs), Face detection, Pseudo-face image, False positive rate, Object detection, Computer engineering. Computer hardware, TK7885-7895, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Advances in convolutional neural networks (CNNs) have markedly progressed the field of face detection, significantly enhancing accuracy and recall metrics. Precision and recall remain pivotal for evaluating CNN-based detection models; however, there is a prevalent inclination to focus on improving true positive rates at the expense of addressing false positives. A critical issue contributing to this discrepancy is the lack of pseudo-face images within training and evaluation datasets. This deficiency impairs the regression capabilities of detection models, leading to numerous erroneous detections and inadequate localization. To address this gap, we introduce an enriched WIDERFACE dataset containing a considerable number of pseudo-face images created by amalgamating human and animal facial features. This dataset aims to bolster the detection of false positives during training. Furthermore, we propose a new face detection architecture that incorporates a classification model into the conventional face detection model to diminish the false positive rate and augment detection precision. Our comparative analysis on WIDERFACE and other renowned datasets reveals that our architecture secures a lower false positive rate while preserving the true positive rate in comparison to existing top-tier face detection models.
- Published
- 2024
- Full Text
- View/download PDF
10. A Signature Recognition Technique With a Powerful Verification Mechanism Based on CNN and PCA
- Author
-
Gibrael Abosamra and Hadi Oqaibi
- Subjects
Convolutional neural networks, cosine distance, false negative rate, false positive rate, k-Nearest neighbor algorithm, out-of-distribution detection, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In this paper, we embed a signature verification mechanism in a previously introduced architecture for signature recognition to detect in-distribution and out-of-distribution random forgeries. In the previous architecture, a CNN was trained on the genuine user training dataset and then used as a feature extraction module. A k-NN algorithm with cosine distance was then used to classify the unknown signatures based on the nearest cosine distance neighbor. This architecture led to higher than 99% accuracy, but without verification, because any unknown signature will converge to one of the identities of the training dataset’s users. To add a verification mechanism that differentiates between genuine and random forgeries, we use PCA to select the most discriminating features used in calculating the cosine distance between the training and testing signatures. A fixed parameter thresholding technique based on the training distances is introduced that best differentiates between the genuine and random-user signatures. Moreover, enhancement of the technique is carried out by combining the output of the Softmax layer and the last convolution layer of the ResNet18 model to get a highly discriminative representation of the handwritten signatures. Accordingly, the introduced verification mechanism resulted in very low false positive and negative rates for test signatures from inside and outside the main dataset, with an insignificant decrease in the high identification accuracy. The complete architecture has been tested on three publicly available datasets, showing superior results.
- Published
- 2024
- Full Text
- View/download PDF
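The identification-plus-verification step described above (nearest cosine-distance neighbor, with rejection when the distance exceeds a threshold calibrated on training distances) can be sketched as follows. The PCA feature-selection stage is omitted, and the function names, gallery layout, and threshold are illustrative assumptions rather than the paper's implementation.

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / norm

def identify_and_verify(query, gallery, threshold):
    """1-NN identification over per-user feature vectors, with open-set
    rejection: if the nearest cosine distance exceeds the calibrated
    threshold, the signature is flagged as a likely (random) forgery."""
    best_user, best_dist = None, float("inf")
    for user, vectors in gallery.items():
        for v in vectors:
            d = cosine_distance(query, v)
            if d < best_dist:
                best_user, best_dist = user, d
    if best_dist > threshold:
        return None, best_dist      # rejected: not close to any enrolled user
    return best_user, best_dist
```

Without the threshold, every query converges to some enrolled identity, which is exactly the failure mode the paper's verification mechanism is designed to remove.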
11. Clinico-biological-radiomics (CBR) based machine learning for improving the diagnostic accuracy of FDG-PET false-positive lymph nodes in lung cancer
- Author
-
Caiyue Ren, Fuquan Zhang, Jiangang Zhang, Shaoli Song, Yun Sun, and Jingyi Cheng
- Subjects
Clinico-biological-radiomics, Machine learning, [18F]FDG-PET/CT, False positive rate, Mediastinal–hilar lymph nodes, Medicine
- Abstract
Abstract Background The main problem of positron emission tomography/computed tomography (PET/CT) for lymph node (LN) staging is the high false positive rate (FPR). Thus, we aimed to explore a clinico-biological-radiomics (CBR) model via machine learning (ML) to reduce the FPR and improve the accuracy of predicting hypermetabolic mediastinal–hilar LN status in lung cancer compared with conventional PET/CT. Methods A total of 260 lung cancer patients with hypermetabolic mediastinal–hilar LNs (SUVmax ≥ 2.5) were retrospectively reviewed. Patients were treated with surgery with systematic LN resection, pathologically divided into the LN negative (LN-) and positive (LN+) groups, and randomly assigned into the training (n = 182) and test (n = 78) sets. A preoperative CBR dataset containing 1738 multi-scale features was constructed for all patients. Prediction models for hypermetabolic LN status were developed using the features selected by the supervised ML algorithms, and evaluated using the classical diagnostic indicators. Then, a nomogram was developed based on the model with the highest area under the curve (AUC) and the lowest FPR, and validated by the calibration plots. Results In total, 109 LN− and 151 LN+ patients were enrolled in this study. Six independent prediction models were developed to differentiate LN− from LN+ patients using the features selected from the clinico-biological-image dataset, the radiomics dataset, and their combined CBR dataset, respectively. The DeLong test showed that the CBR Model containing all-scale features held the highest predictive efficiency and the lowest FPR among all established models (p
- Published
- 2023
- Full Text
- View/download PDF
12. Rethinking False Positive Exercise Electrocardiographic Stress Tests by Assessing Coronary Microvascular Function.
- Author
-
Sinha, Aish, Dutta, Utkarsh, Demir, Ozan M., De Silva, Kalpa, Ellis, Howard, Belford, Samuel, Ogden, Mark, Li Kam Wa, Matthew, Morgan, Holly P., Shah, Ajay M., Chiribiri, Amedeo, Webb, Andrew J., Marber, Michael, Rahman, Haseeb, and Perera, Divaka
- Subjects
- MICROCIRCULATION disorders, MYOCARDIAL ischemia, CORONARY artery disease, CORONARY arteries, BLOOD flow measurement
- Abstract
Exercise electrocardiographic stress testing (EST) has historically been validated against the demonstration of obstructive coronary artery disease. However, myocardial ischemia can occur because of coronary microvascular dysfunction (CMD) in the absence of obstructive coronary artery disease. The aim of this study was to assess the specificity of EST to detect an ischemic substrate against the reference standard of coronary endothelium-independent and endothelium-dependent microvascular function in patients with angina with nonobstructive coronary arteries (ANOCA). Patients with ANOCA underwent invasive coronary physiological assessment using adenosine and acetylcholine. CMD was defined as impaired endothelium-independent and/or endothelium-dependent function. EST was performed using a standard Bruce treadmill protocol, with ischemia defined as the appearance of ≥0.1-mV ST-segment depression 80 ms from the J-point on electrocardiography. The study was powered to detect specificity of ≥91%. A total of 102 patients were enrolled (65% women, mean age 60 ± 8 years). Thirty-two patients developed ischemia (ischemic group) during EST, whereas 70 patients did not (nonischemic group); both groups were phenotypically similar. Ischemia during EST was 100% specific for CMD. Acetylcholine flow reserve was the strongest predictor of ischemia during exercise. Using endothelium-independent and endothelium-dependent microvascular dysfunction as the reference standard, the false positive rate of EST dropped to 0%. In patients with ANOCA, ischemia on EST was highly specific of an underlying ischemic substrate. These findings challenge the traditional belief that EST has a high false positive rate. [Display omitted] [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Preoperative predictors of successful tumour localization by intraoperative molecular imaging with pafolacianine in lung cancer to create predictive nomogram.
- Author
-
Bou-Samra, Patrick, Joffe, Jonah, Chang, Austin, Guo, Emily, Segil, Alix, Azari, Feredun, Kennedy, Gregory, Din, Azra, Hwang, Wei-Ting, and Singhal, Sunil
- Subjects
- NOMOGRAPHY (Mathematics), RECEIVER operating characteristic curves, LUNG cancer, POSITRON emission tomography, LUNG diseases
- Abstract
OBJECTIVES Intraoperative molecular imaging (IMI) uses a cancer-targeted fluorescent probe to locate nodules. Pafolacianine is a Food and Drug Administration-approved fluorescent probe for lung cancer. However, it has an 8–12% false negative rate for localization. Our goal is to define preoperative predictors of tumour localization by IMI. METHODS We performed a retrospective review of patients who underwent IMI using pafolacianine for lung lesions from June 2015 to August 2019. Candidate predictors including sex, age, body mass index, smoking history, tumour size, distance of tumour from surface, use of neoadjuvant therapy and positron emission tomography avidity were included. The outcome was fluorescence in vivo, comprehensively including true and false positives and negatives. Multiple imputation was used to handle the missing data. The final model was evaluated using the area under the receiver operating characteristic curve. RESULTS Three hundred nine patients were included in our study. The mean age was 64 (standard deviation 13) and 68% had a smoking history. The mean distance of the tumours from the pleural surface was 0.4 cm (standard deviation 0.6). Smoking in pack-years and distance from pleura had odds ratios of 0.99 [95% confidence interval: 0.98–0.99; P = 0.03] and 0.46 [95% confidence interval: 0.27–0.78; P = 0.004], respectively. The final model had an area under the receiver operating characteristic curve of 0.68 and was used to create a nomogram that gives the probability of fluorescence in vivo. CONCLUSIONS Primary tumours that are deeper from the pleural surface, especially in patients with higher pack-years, are associated with a decreased likelihood of intraoperative localization. We created a nomogram to predict the likelihood of tumour localization with IMI with pafolacianine. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Clinico-biological-radiomics (CBR) based machine learning for improving the diagnostic accuracy of FDG-PET false-positive lymph nodes in lung cancer.
- Author
-
Ren, Caiyue, Zhang, Fuquan, Zhang, Jiangang, Song, Shaoli, Sun, Yun, and Cheng, Jingyi
- Subjects
LYMPH node cancer, MACHINE learning, POSITRON emission tomography, COMPUTED tomography, LUNG cancer
- Abstract
Background: The main problem of positron emission tomography/computed tomography (PET/CT) for lymph node (LN) staging is the high false positive rate (FPR). Thus, we aimed to explore a clinico-biological-radiomics (CBR) model via machine learning (ML) to reduce the FPR and improve the accuracy of predicting hypermetabolic mediastinal–hilar LN status in lung cancer compared with conventional PET/CT. Methods: A total of 260 lung cancer patients with hypermetabolic mediastinal–hilar LNs (SUVmax ≥ 2.5) were retrospectively reviewed. Patients were treated with surgery with systematic LN resection, pathologically divided into the LN negative (LN-) and positive (LN+) groups, and randomly assigned into the training (n = 182) and test (n = 78) sets. A preoperative CBR dataset containing 1738 multi-scale features was constructed for all patients. Prediction models for hypermetabolic LN status were developed using the features selected by the supervised ML algorithms, and evaluated using the classical diagnostic indicators. Then, a nomogram was developed based on the model with the highest area under the curve (AUC) and the lowest FPR, and validated by the calibration plots. Results: In total, 109 LN− and 151 LN+ patients were enrolled in this study. Six independent prediction models were developed to differentiate LN− from LN+ patients using the features selected from the clinico-biological-image dataset, the radiomics dataset, and their combined CBR dataset, respectively. The DeLong test showed that the CBR Model containing all-scale features held the highest predictive efficiency and the lowest FPR among all established models in both the training and test sets (AUCs of 0.90 and 0.89, FPRs of 12.82% and 6.45%, respectively) (p < 0.05). The quantitative nomogram based on the CBR Model was validated to have good consistency with actual observations.
Conclusion: This study presents an integrated CBR nomogram that can further reduce the FPR and improve the accuracy of hypermetabolic mediastinal–hilar LNs evaluation than conventional PET/CT in lung cancer, thereby greatly reducing the risk of overestimation and assisting for precision treatment. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
15. Evaluating Binary Classification Algorithms on Data Lakes Using Machine Learning.
- Author
-
Boyko, Nataliya
- Subjects
CLASSIFICATION algorithms, RECEIVER operating characteristic curves, RESEARCH methodology evaluation, MACHINE learning, LAKES, ERROR functions, MATHEMATICAL optimization
- Abstract
The objective of this study was to conduct a comprehensive evaluation of binary classification algorithms within data lakes, employing a diverse array of metrics. Binary classification algorithms, which categorize inputs into one of two distinct classes, were scrutinized to determine their efficacy. The research focused on the evaluation techniques applicable to these algorithms. Methods for assessing algorithmic efficiency were investigated, including logistic regression, error function, regularization, and ancillary training tools within the dataset. A detailed analysis of the parameters pertinent to classifier evaluation was performed, encompassing accuracy, confusion matrix, precision, recall, decision threshold, F1 score, and the Receiver Operating Characteristic (ROC) curve. A critical comparison between the ROC and Precision-Recall (PR) curves was conducted, with particular attention to the Area Under the Curve (AUC) metric. The study's methodology involved training a classifier on the UCI Machine Learning Repository's Breast Cancer Wisconsin dataset, followed by the calibration of the precision/recall ratio. The findings of this study offer an in-depth examination of various evaluation metrics and threshold optimization techniques, thereby augmenting the comprehension of binary classifier performance. Practitioners are provided with guidance to select suitable metrics and thresholds tailored to specific contexts. Furthermore, the study's insights into the strengths and limitations of these metrics across heterogeneous datasets promote refined practices in machine learning and data analysis, facilitating more strategic model selection and deployment. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
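Of the metrics the study compares, the AUC has a convenient rank-based reading: it equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one (the Mann-Whitney formulation), which can be computed without plotting the ROC curve at all:

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly, with ties counting
    one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise view also explains why AUC is threshold-free, whereas precision, recall, and the F1 score all depend on the decision threshold that the study calibrates.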
16. Evaluating false positive rates of standard and hierarchical measures of metacognitive accuracy.
- Author
-
Rausch, Manuel and Zehetleitner, Michael
- Subjects
METACOGNITION, INDEPENDENT variables, RECEIVER operating characteristic curves, REGRESSION analysis, DATABASES, RESEARCH personnel
- Abstract
A key aspect of metacognition is metacognitive accuracy, i.e., the degree to which confidence judgments differentiate between correct and incorrect trials. To quantify metacognitive accuracy, researchers are faced with an increasing number of different methods. The present study investigated false positive rates associated with various measures of metacognitive accuracy by hierarchical resampling from the Confidence Database to accurately represent the statistical properties of confidence judgments. We found that most measures based on the computation of summary statistics separately for each participant and subsequent group-level analysis performed adequately in terms of false positive rate, including gamma correlations, meta-d′, and the area under type 2 ROC curves. Meta-d′/d′ is associated with a false positive rate even below 5%, but log-transformed meta-d′/d′ performs adequately. The false positive rate of HMeta-d depends on the study design and on prior specification: For group designs, the false positive rate is above 5% when independent priors are placed on both groups, but the false positive rate is adequate when a prior was placed on the difference between groups. For continuous predictor variables, default priors resulted in a false positive rate below 5%, but the false positive rate was not distinguishable from 5% when close-to-flat priors were used. Logistic mixed model regression analysis is associated with dramatically inflated false positive rates when random slopes are omitted from model specification. In general, we argue that no measure of metacognitive accuracy should be used unless the false positive rate has been demonstrated to be adequate. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
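One of the summary-statistic measures evaluated above, the gamma correlation, is the Goodman-Kruskal rank association between trial-wise confidence and accuracy: concordant pairs minus discordant pairs over their sum, with pairs tied on either variable skipped. A minimal sketch:

```python
def gamma_correlation(confidence, accuracy):
    """Goodman-Kruskal gamma between trial-wise confidence ratings and
    accuracy (1 = correct trials always held with higher confidence,
    -1 = the reverse)."""
    concordant = discordant = 0
    n = len(confidence)
    for i in range(n):
        for j in range(i + 1, n):
            s = (confidence[i] - confidence[j]) * (accuracy[i] - accuracy[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
            # pairs tied on confidence or accuracy contribute nothing
    return (concordant - discordant) / (concordant + discordant)
```

In the study's setup, such a per-participant summary statistic is computed first and the group-level test is run on the resulting values, which is the two-step scheme the authors found to have adequate false positive rates.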
17. An Intrusion Detection System for Securing IoT Based Sensor Networks from Routing Attacks
- Author
-
Subramani, Shalini, Selvi, M., Kumar, S. V. N. Santhosh, Thangaramya, K., Anand, M., Kannan, A., Rannenberg, Kai, Editor-in-Chief, Soares Barbosa, Luís, Editorial Board Member, Goedicke, Michael, Editorial Board Member, Tatnall, Arthur, Editorial Board Member, Neuhold, Erich J., Editorial Board Member, Stiller, Burkhard, Editorial Board Member, Stettner, Lukasz, Editorial Board Member, Pries-Heje, Jan, Editorial Board Member, Kreps, David, Editorial Board Member, Rettberg, Achim, Editorial Board Member, Furnell, Steven, Editorial Board Member, Mercier-Laurent, Eunika, Editorial Board Member, Winckler, Marco, Editorial Board Member, Malaka, Rainer, Editorial Board Member, Fernando, Xavier, editor, and Chandrabose, Aravindan, editor
- Published
- 2023
- Full Text
- View/download PDF
18. A novel principal component based method for identifying differentially methylated regions in Illumina Infinium MethylationEPIC BeadChip data
- Author
-
Yuanchao Zheng, Kathryn L. Lunetta, Chunyu Liu, Alicia K. Smith, Richard Sherva, Mark W. Miller, and Mark W. Logue
- Subjects
differentially methylated region, false positive rate, principal components, Genetics, QH426-470
- Abstract
Differentially methylated regions (DMRs) are genomic regions with methylation patterns across multiple CpG sites that are associated with a phenotype. In this study, we proposed a Principal Component (PC) based DMR analysis method for use with data generated using the Illumina Infinium MethylationEPIC BeadChip (EPIC) array. We obtained methylation residuals by regressing the M-values of CpGs within a region on covariates, extracted PCs of the residuals, and then combined association information across PCs to obtain regional significance. Simulation-based genome-wide false positive (GFP) rates and true positive rates were estimated under a variety of conditions before determining the final version of our method, which we have named DMRPC. Then, DMRPC and another DMR method, coMethDMR, were used to perform epigenome-wide analyses of several phenotypes known to have multiple associated methylation loci (age, sex, and smoking) in a discovery and a replication cohort. Among regions that were analysed by both methods, DMRPC identified 50% more genome-wide significant age-associated DMRs than coMethDMR. The replication rate for the loci that were identified by only DMRPC was higher than the rate for those that were identified by only coMethDMR (90% for DMRPC vs. 76% for coMethDMR). Furthermore, DMRPC identified replicable associations in regions of moderate between-CpG correlation which are typically not analysed by coMethDMR. For the analyses of sex and smoking, the advantage of DMRPC was less clear. In conclusion, DMRPC is a new powerful DMR discovery tool that retains power in genomic regions with moderate correlation across CpGs.
- Published
- 2023
- Full Text
- View/download PDF
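The abstract above outlines a three-step regional test: residualize CpG M-values on covariates, extract principal components of the residuals, and combine association signal across PCs. The sketch below is an illustrative pipeline of that general idea, not the actual DMRPC implementation; it assumes NumPy/SciPy, a continuous phenotype, and Fisher's method for the combination step (the abstract does not specify the combination rule).

```python
import numpy as np
from scipy import stats

def pc_region_test(M, covariates, phenotype, n_pcs=3):
    """Sketch of a PC-based regional test: residualize CpG M-values on
    covariates, extract PCs of the residuals, test each PC against the
    phenotype, and combine the per-PC p-values (Fisher's method)."""
    # Residualize each CpG on the covariates (ordinary least squares).
    X = np.column_stack([np.ones(len(phenotype)), covariates])
    beta, *_ = np.linalg.lstsq(X, M, rcond=None)
    resid = M - X @ beta
    # Principal components of the residual matrix (samples x CpGs).
    U, s, Vt = np.linalg.svd(resid - resid.mean(0), full_matrices=False)
    pcs = U[:, :n_pcs] * s[:n_pcs]
    # Per-PC association with the phenotype, combined across PCs.
    pvals = [stats.pearsonr(pcs[:, j], phenotype)[1]
             for j in range(pcs.shape[1])]
    return stats.combine_pvalues(pvals, method='fisher')[1]
```

With a region whose CpGs all shift with the phenotype, the combined p-value becomes small; on pure noise it stays roughly uniform.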
19. Accuracy of placental growth factor alone or in combination with soluble fms-like tyrosine kinase-1 or maternal factors in detecting preeclampsia in asymptomatic women in the second and third trimesters: a systematic review and meta-analysis.
- Author
-
Chaemsaithong, Piya, Gil, María M., Chaiyasit, Noppadol, Cuenca-Gomez, Diana, Plasencia, Walter, Rolle, Valeria, and Poon, Liona C.
- Subjects
PLACENTAL growth factor ,ASYMPTOMATIC patients ,PREECLAMPSIA ,THIRD trimester of pregnancy ,SECOND trimester of pregnancy - Abstract
This study aimed to: (1) identify all relevant studies reporting on the diagnostic accuracy of maternal circulating placental growth factor alone or as a ratio with soluble fms-like tyrosine kinase-1, and of placental growth factor-based models (placental growth factor combined with maternal factors±other biomarkers) in the second or third trimester to predict subsequent development of preeclampsia in asymptomatic women; (2) estimate a hierarchical summary receiver-operating characteristic curve for studies reporting on the same test but different thresholds, gestational ages, and populations; and (3) select the best method to screen for preeclampsia in asymptomatic women during the second and third trimester of pregnancy by comparing the diagnostic accuracy of each method. A systematic search was performed through MEDLINE, Embase, CENTRAL, ClinicalTrials.gov, and the World Health Organization International Clinical Trials Registry Platform databases from January 1, 1985 to April 15, 2021. Studies including asymptomatic singleton pregnant women at >18 weeks' gestation with risk of developing preeclampsia were evaluated. We included only cohort or cross-sectional test accuracy studies reporting on preeclampsia outcome, allowing tabulation of 2×2 tables, with follow-up available for >85%, and evaluating performance of placental growth factor alone, soluble fms-like tyrosine kinase-1–placental growth factor ratio, or placental growth factor-based models. The study protocol was registered on the International Prospective Register of Systematic Reviews (CRD 42020162460). Because of considerable intra- and interstudy heterogeneity, we computed the hierarchical summary receiver-operating characteristic plots and derived diagnostic odds ratios, β, θi, and Λ for each method to compare performances. The quality of the included studies was evaluated by the QUADAS-2 tool.
The search identified 2028 citations, from which we selected 474 studies for detailed assessment of the full texts. Finally, 100 published studies met the eligibility criteria for qualitative and 32 for quantitative syntheses. Twenty-three studies reported on performance of placental growth factor testing for the prediction of preeclampsia in the second trimester, including 16 (with 27 entries) that reported on placental growth factor test alone, 9 (with 19 entries) that reported on the soluble fms-like tyrosine kinase-1–placental growth factor ratio, and 6 (with 16 entries) that reported on placental growth factor-based models. Fourteen studies reported on performance of placental growth factor testing for the prediction of preeclampsia in the third trimester, including 10 (with 18 entries) that reported on placental growth factor test alone, 8 (with 12 entries) that reported on soluble fms-like tyrosine kinase-1–placental growth factor ratio, and 7 (with 12 entries) that reported on placental growth factor-based models. For the second trimester, placental growth factor-based models achieved the highest diagnostic odds ratio for the prediction of early preeclampsia in the total population compared with placental growth factor alone and soluble fms-like tyrosine kinase-1–placental growth factor ratio (placental growth factor-based models, 63.20; 95% confidence interval, 37.62–106.16 vs soluble fms-like tyrosine kinase-1–placental growth factor ratio, 6.96; 95% confidence interval, 1.76–27.61 vs placental growth factor alone, 5.62; 95% confidence interval, 3.04–10.38); placental growth factor-based models had a higher diagnostic odds ratio than placental growth factor alone for the identification of any-onset preeclampsia in the unselected population (28.45; 95% confidence interval, 13.52–59.85 vs 7.09; 95% confidence interval, 3.74–13.41).
For the third trimester, placental growth factor-based models achieved prediction for any-onset preeclampsia that was significantly better than that of placental growth factor alone but similar to that of soluble fms-like tyrosine kinase-1–placental growth factor ratio (placental growth factor-based models, 27.12; 95% confidence interval, 21.67–33.94 vs placental growth factor alone, 10.31; 95% confidence interval, 7.41–14.35 vs soluble fms-like tyrosine kinase-1–placental growth factor ratio, 14.94; 95% confidence interval, 9.42–23.70). Placental growth factor with maternal factors ± other biomarkers determined in the second trimester achieved the best predictive performance for early preeclampsia in the total population. However, in the third trimester, placental growth factor-based models had predictive performance for any-onset preeclampsia that was better than that of placental growth factor alone but similar to that of soluble fms-like tyrosine kinase-1–placental growth factor ratio. Through this meta-analysis, we have identified a large number of very heterogeneous studies. Therefore, there is an urgent need to develop standardized research using the same models that combine serum placental growth factor with maternal factors ± other biomarkers to accurately predict preeclampsia. Identification of patients at risk might be beneficial for intensive monitoring and timing delivery. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. General and patient-specific seizure classification using deep neural networks.
- Author
-
Massoud, Yasmin M., Abdelzaher, Mennatallah, Kuhlmann, Levin, and Abd El Ghany, Mohamed A.
- Subjects
ARTIFICIAL neural networks ,MACHINE learning ,EPILEPSY ,DEEP learning ,SEIZURES (Medicine) - Abstract
Seizure prediction algorithms have been central in the field of data analysis for the improvement of epileptic patients' lives. The most recent advancements include the use of deep neural networks to present an optimized, accurate seizure prediction system. This work puts forth deep learning methods to automate the process of epileptic seizure detection with electroencephalogram (EEG) signals as input; both a patient-specific and a general approach are followed. EEG signals are time-structured series, motivating the use of sequence algorithms such as temporal convolutional neural networks (TCNNs) and long short-term memory networks. We then compare this methodology to other previously implemented structures, including our previous work for seizure prediction using the machine learning approaches support vector machine and random under-sampling boost. Moreover, patient-specific and general seizure prediction approaches are used to evaluate the performance of the best algorithms. Area under curve (AUC) is used to select the best performing algorithm, to account for the imbalanced dataset. The presented TCNN model achieved better patient-specific results than the general approach, with an AUC of 0.73, while the ML model had the best results for general classification, with an AUC of 0.75. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
21. Timber Knot Detector with Low False-Positive Results by Integrating an Overlapping Bounding Box Filter with Faster R-CNN Algorithm
- Author
-
Wenping Chen, Jing Liu, Yiming Fang, and Jianyong Zhao
- Subjects
timber knot detection ,faster r-cnn ,false positive rate ,overlapping bounding box filter ,Biotechnology ,TP248.13-248.65 - Abstract
Knot detection is an important aspect of timber grading. Reducing the false-positive frequency of knot detection will improve the accuracy of the predicted grade, as well as the utilization of the graded timber. In this study, a framework for timber knot detection was proposed. Faster R-CNN, a state-of-the-art defect identification algorithm, was first employed to detect timber knots because of its high true-positive frequency. Then, an overlapping bounding box filter was proposed to lower the false positive frequency achieved by Faster R-CNN, where a single knot is sometimes marked several times. The filter merges the overlapping bounding boxes for one actual knot into one box and ensures that each knot is marked only once. The main advantage of this framework is that it reduces the false positive frequency with a small computational cost and a small impact on the true positive frequency. The experimental results showed that the detection precision improved from 90.9% to 97.5% by filtering the overlapping bounding box. The framework proposed in this study is competitive and has potential applications for detecting timber knots for timber grading.
- Published
- 2023
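The filter described above merges the multiple overlapping detections Faster R-CNN sometimes produces for one knot into a single box. One simple way to realize such a filter is greedy IoU-based merging; the sketch below is illustrative only, not the authors' algorithm, and the IoU threshold of 0.3 is an assumed value.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_overlapping(boxes, thresh=0.3):
    """Greedily merge boxes whose IoU exceeds `thresh` into their
    enclosing box, so each detected object keeps exactly one box."""
    merged = []
    for box in boxes:
        for i, m in enumerate(merged):
            if iou(box, m) > thresh:
                merged[i] = (min(box[0], m[0]), min(box[1], m[1]),
                             max(box[2], m[2]), max(box[3], m[3]))
                break
        else:
            merged.append(tuple(box))
    return merged
```

The computational cost is a single pass over the detections per image, which matches the abstract's point that the false positive rate is reduced cheaply.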
22. Reduction of False Positives for Runtime Errors in C/C++ Software: A Comparative Study.
- Author
-
Park, Jihyun, Shin, Jaeyoung, and Choi, Byoungju
- Subjects
DEEP learning ,FALSE positive error ,MACHINE learning ,COMPUTER software ,SOURCE code ,COMPUTER software development - Abstract
In software development, early defect detection using static analysis can be performed without executing the source code. However, defects are detected on a non-execution basis, thus resulting in a higher ratio of false positives. Recently, studies have been conducted to effectively perform static analyses using machine learning (ML) and deep learning (DL) technologies. This study examines the techniques for detecting runtime errors used in existing static analysis tools and the causes and rates of false positives. It analyzes the latest static analysis technologies that apply machine learning/deep learning to decrease false positives and compares them with existing technologies in terms of effectiveness and performance. In addition, machine-learning/deep-learning-based defect detection techniques were implemented in experimental environments and real-world software to determine their effectiveness in real-world software. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
23. A New Deep Neural Network Architecture for Personal Face Identification Systems.
- Author
-
반태원
- Subjects
DEEP learning ,SYSTEM identification ,PROBABILITY theory - Abstract
We investigated personal face identification systems using deep learning networks and proposed a deep neural network architecture for improving the false positive rate. Most conventional face identification systems have the same number of output nodes as the number of faces that can be identified, and each node is trained to identify one registered face. In this paper, we added an extra node to identify unregistered faces, improving the false positive rate (FPR) and accuracy by reducing the propagation of matching probabilities toward the output nodes identifying registered faces when unregistered faces are input. The proposed model was trained on the VGGFace2 dataset. Its performance was analyzed in terms of accuracy, precision, FPR, and false negative rate (FNR), and compared to that of the existing model. According to the performance analysis results, at an FNR of 5%, the FPR of the proposed model is improved by about 83% compared to the existing model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
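The extra-node idea above can be illustrated with a small decision rule: an (N+1)-way softmax whose last node absorbs unregistered faces, so a registered identity is reported only when it both wins the softmax and clears a confidence threshold. This is a hedged sketch of the decision logic, not the paper's network; the 0.5 acceptance threshold is an assumption.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def identify(logits, threshold=0.5):
    """Decision rule for an N-identity classifier with one extra
    'unregistered' output node (last index): return a registered
    identity index only when it wins the softmax and its probability
    exceeds `threshold`; otherwise reject the face as unregistered."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    unregistered = len(probs) - 1
    if best == unregistered or probs[best] < threshold:
        return None  # reject: treat as an unregistered face
    return best
```

Because the extra node soaks up probability mass for unknown inputs, registered-face nodes are less likely to cross the threshold spuriously, which is the FPR mechanism the abstract describes.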
24. A Review of Cuckoo Filters for Privacy Protection and Their Applications.
- Author
-
Zhao, Yekang, Dai, Wangchen, Wang, Shiren, Xi, Liang, Wang, Shenqing, and Zhang, Feng
- Subjects
CUCKOOS ,DATA structures ,PRIVACY ,DATA privacy - Abstract
As the global digitalization process continues, information is transformed into data and widely used, while the data are also at risk of serious privacy breaches. The Cuckoo filter is a data structure based on Cuckoo hashing. It encrypts data as it is used and can achieve privacy protection to a certain extent. The Cuckoo filter is an alternative to the Bloom filter, with advantages such as support for deleting elements and efficient space utilization. Cuckoo filters are widely used and developed in the fields of network engineering, storage systems, databases, file systems, distributed systems, etc., because they are often used to solve set-membership query problems. In recent years, many variants of the Cuckoo filter have emerged, based on ideas such as improving the structure and introducing new technologies, in order to accommodate a variety of different scenarios as well as very large collections. Improving the structure and operational logic of the Cuckoo filter itself has become an important direction for set-membership query research. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
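As a concrete illustration of the data structure this review covers, here is a minimal cuckoo filter sketch using partial-key cuckoo hashing: each item is reduced to a short fingerprint that can live in one of two candidate buckets, and deletion simply removes the fingerprint. All parameters (1-byte fingerprints, 128 buckets of 4 slots) are illustrative choices, not from any particular paper.

```python
import hashlib
import random

class CuckooFilter:
    """Minimal cuckoo filter: supports insert, lookup, and delete.
    `n_buckets` must be a power of two for the xor-based alternate
    bucket computation below."""
    def __init__(self, n_buckets=128, bucket_size=4, max_kicks=500):
        self.buckets = [[] for _ in range(n_buckets)]
        self.n, self.bucket_size, self.max_kicks = n_buckets, bucket_size, max_kicks

    def _fp(self, item):
        return hashlib.sha256(item.encode()).digest()[0] or 1  # non-zero byte

    def _i1(self, item):
        return int.from_bytes(hashlib.md5(item.encode()).digest()[:4], 'big') % self.n

    def _i2(self, i1, fp):
        # Partial-key cuckoo hashing: the alternate bucket depends only on
        # the current index and the fingerprint, so it is an involution.
        h = int.from_bytes(hashlib.md5(bytes([fp])).digest()[:4], 'big') % self.n
        return i1 ^ h

    def insert(self, item):
        fp, i1 = self._fp(item), self._i1(item)
        i2 = self._i2(i1, fp)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        i = random.choice((i1, i2))
        for _ in range(self.max_kicks):  # evict a resident and relocate it
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = self._i2(i, fp)
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False  # filter is considered full

    def __contains__(self, item):
        fp, i1 = self._fp(item), self._i1(item)
        return fp in self.buckets[i1] or fp in self.buckets[self._i2(i1, fp)]

    def delete(self, item):
        fp, i1 = self._fp(item), self._i1(item)
        for i in (i1, self._i2(i1, fp)):
            if fp in self.buckets[i]:
                self.buckets[i].remove(fp)
                return True
        return False
```

Deletion is what distinguishes this from a Bloom filter: because a stored fingerprint can be located and removed, membership state can shrink as well as grow.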
25. Gaussian Mixture Modeling Extensions for Improved False Discovery Rate Estimation in GC–MS Metabolomics.
- Author
-
Flores, Javier E., Bramer, Lisa M., Degnan, David J., Paurus, Vanessa L., Corilo, Yuri E., and Clendinen, Chaevien S.
- Abstract
The ability to reliably identify small molecules (e.g., metabolites) is key toward driving scientific advancement in metabolomics. Gas chromatography–mass spectrometry (GC–MS) is an analytic method that may be applied to facilitate this process. The typical GC–MS identification workflow involves quantifying the similarity of an observed sample spectrum and other features (e.g., retention index) to that of several references, noting the compound of the best-matching reference spectrum as the identified metabolite. While a deluge of similarity metrics exist, none quantify the error rate of generated identifications, thereby presenting an unknown risk of false identification or discovery. To quantify this unknown risk, we propose a model-based framework for estimating the false discovery rate (FDR) among a set of identifications. Extending a traditional mixture modeling framework, our method incorporates both similarity score and experimental information in estimating the FDR. We apply these models to identification lists derived from across 548 samples of varying complexity and sample type (e.g., fungal species, standard mixtures, etc.), comparing their performance to that of the traditional Gaussian mixture model (GMM). Through simulation, we additionally assess the impact of reference library size on the accuracy of FDR estimates. In comparing the best performing model extensions to the GMM, our results indicate relative decreases in median absolute estimation error (MAE) ranging from 12% to 70%, based on comparisons of the median MAEs across all hit-lists. Results indicate that these relative performance improvements generally hold despite library size; however FDR estimation error typically worsens as the set of reference compounds diminishes. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. Korean Red Ginseng slows coreceptor switch in HIV-1 infected patients
- Author
-
Young-Keol Cho, Jung-Eun Kim, and Jinny Lee
- Subjects
coreceptor tropism switch ,HIV-1 env gene ,False positive rate ,Korean Red Ginseng ,Botany ,QK1-989 - Abstract
Background: Human immunodeficiency virus-1 (HIV-1) that binds to the coreceptor CCR5 (R5 viruses) can evolve into viruses that bind to the coreceptor CXCR4 (X4 viruses), with high viral replication rates governing this coreceptor switch. Korean Red Ginseng (KRG) treatment of HIV-1 infected patients has been found to slow the depletion of CD4+ T cells. This study assessed whether the KRG-associated slow depletion of CD4+ T cells was associated with coreceptor switching. Methods: This study included 146 HIV-1-infected patients naïve to antiretroviral therapy (ART) and seven patients receiving ART. A total of 540 blood samples were obtained from these patients over 122 ± 129 months. Their env genes were amplified by nested PCR or RT-PCR and subjected to direct sequencing. Tropism was determined with a 10% false positive rate (FPR) cutoff. Results: Of the 146 patients naïve to ART, 102 were KRG-naïve, and 44 had been treated with KRG. Evaluation of initial samples showed that coreceptor switch had occurred in 19 patients, later occurring in 38 additional patients. There was a significant correlation between the amount of KRG and FPR. Based on initial samples, the R5 maintenance period was extended 2.35-fold, with the coreceptor switch being delayed 2.42-fold in KRG-treated compared with KRG-naïve patients. The coreceptor switch occurred in 85% of a homogeneous cohort. The proportion of patients who maintained R5 for ≥10 years was significantly higher in long-term slow progressors than in typical progressors. Conclusion: KRG therapy extends R5 maintenance period by increasing FPR, thereby slowing the coreceptor switch.
- Published
- 2023
- Full Text
- View/download PDF
27. Intrusion Detection Based on PCA with Improved K-Means
- Author
-
Chapagain, Pralhad, Timalsina, Arun, Bhandari, Mohan, Chitrakar, Roshan, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Mekhilef, Saad, editor, Shaw, Rabindra Nath, editor, and Siano, Pierluigi, editor
- Published
- 2022
- Full Text
- View/download PDF
28. Performance Measures
- Author
-
Marcus, Pamela M. and Marcus, Pamela M.
- Published
- 2022
- Full Text
- View/download PDF
29. Artificial Neural Network Approach for Multimodal Biometric Authentication System
- Author
-
Sudhamani, M. J., Sanyal, Ipsita, Venkatesha, M. K., Xhafa, Fatos, Series Editor, Gupta, Deepak, editor, Polkowski, Zdzislaw, editor, Khanna, Ashish, editor, Bhattacharyya, Siddhartha, editor, and Castillo, Oscar, editor
- Published
- 2022
- Full Text
- View/download PDF
30. Optimization of Support Vector Machine for Classification of Spyware Using Symbiotic Organism Search for Features Selection
- Author
-
Gana, Noah Ndakotsu, Abdulhamid, Shafi’i Muhammad, Misra, Sanjay, Garg, Lalit, Ayeni, Foluso, Azeta, Ambrose, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Garg, Lalit, editor, Kesswani, Nishtha, editor, Vella, Joseph G., editor, Xuereb, Peter A., editor, Lo, Man Fung, editor, Diaz, Rowell, editor, Misra, Sanjay, editor, Gupta, Vipul, editor, and Randhawa, Princy, editor
- Published
- 2022
- Full Text
- View/download PDF
31. Intrusion Detection Using Federated Learning for Computing.
- Author
-
Aashmi, R. S. and Jaya, T.
- Subjects
INTRUSION detection systems (Computer security) ,MACHINE learning ,ELECTRONIC equipment ,MALWARE ,DEEP learning - Abstract
The integration of clusters, grids, clouds, edges, and other computing platforms results in the contemporary technology of jungle computing. This novel technique can tackle high-performance computation systems and manages the usage of all computing platforms at once. Federated learning is a collaborative machine learning approach without centralized training data. The proposed system effectively detects intrusion attacks without human intervention, subsequently detects anomalous deviations in device communication behavior potentially caused by malicious adversaries, and can cope with new and unknown attacks. The main objective is to learn the overall behavior of an intruder while attacks are performed on the assumed target service. Moreover, the updated system model is sent to the centralized server in jungle computing to detect attack patterns. Federated learning helps the machine learn the type of attack from each device, paving the way toward control over all malicious behaviors. In our proposed work, we have implemented an intrusion detection system that has high accuracy and a low False Positive Rate (FPR), and is scalable and versatile for the jungle computing environment. The execution time taken to complete a round is less than two seconds, with an accuracy rate of 96%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
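The aggregation step described above, where each device sends its updated model to a centralized server, is commonly implemented as federated averaging (FedAvg): the server combines per-device weights, weighting each client by its local sample count. The sketch below is a generic FedAvg illustration, not the paper's system.

```python
def fed_avg(client_updates):
    """Federated averaging: combine per-device model weights into a
    global model, weighting each client by its local sample count.
    `client_updates` is a list of (weights, n_samples) pairs, where
    `weights` is a flat list of model parameters."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    global_w = [0.0] * dim
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            global_w[i] += w * n / total
    return global_w
```

For example, averaging a client with weights [1.0, 2.0] over 10 samples and one with [3.0, 4.0] over 30 samples weights the second client three times as heavily. Raw training data never leaves the devices; only the weight vectors are shared.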
32. Allergy to Local Anesthetics is a Rarity: Review of Diagnostics and Strategies for Clinical Management.
- Author
-
Jiang, Shirley and Tang, Monica
- Abstract
Local anesthetics (LA) are commonly used in procedures and in topical agents for pain management. With the increasing use of LA drugs, the management of LA reactions is more frequently encountered in the office and in operating rooms. True allergic reactions involving IgE-mediated reactions and anaphylaxis are rare; they have only been identified in case reports and account for less than 1% of adverse LA reactions. Most reactions are non-allergic or are a result of hypersensitivity to other culprits such as preservatives, excipients, or other exposures. LA reactions that are misclassified as true allergies can lead to unnecessary avoidance of LA drugs or delays in surgical procedures that require their use. A detailed history of prior LA reactions is the first and most crucial step for understanding the nature of the reaction. Reactions that are suspicious for an immediate hypersensitivity reaction can be evaluated with skin prick and intradermal testing with subsequent graded challenge. Reactions that are suspicious for a delayed hypersensitivity reaction can be evaluated with patch testing. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
33. Newborn Screening for Krabbe Disease: Status Quo and Recommendations for Improvements
- Author
-
Dietrich Matern, Khaja Basheeruddin, Tracy L. Klug, Gwendolyn McKee, Patricia U. Edge, Patricia L. Hall, Joanne Kurtzberg, and Joseph J. Orsini
- Subjects
Krabbe disease ,newborn screening ,globoid cell leukodystrophy ,galactocerebrosidase ,psychosine ,false positive rate ,Pediatrics ,RJ1-570 - Abstract
Krabbe disease (KD) is part of newborn screening (NBS) in 11 states with at least one additional state preparing to screen. In July 2021, KD was re-nominated for addition to the federal Recommended Uniform Screening Panel (RUSP) in the USA with a two-tiered strategy based on psychosine (PSY) as the determinant if an NBS result is positive or negative after a first-tier test revealed decreased galactocerebrosidase activity. Nine states currently screening for KD include PSY analysis in their screening strategy. However, the nomination was rejected in February 2023 because of perceived concerns about a high false positive rate, potential harm to newborns with an uncertain prognosis, and inadequate data on presymptomatic treatment benefit or harm. To address the concern about false positive NBS results, a survey was conducted of the eight NBS programs that use PSY and have been screening for KD for at least 1 year. Seven of eight states responded. We found that: (1) the use of PSY is variable; (2) when modeling the data based on the recommended screening strategy for KD, and applying different cutoffs for PSY, each state could virtually eliminate false positive results without major impact on sensitivity; (3) the reason for the diverse strategies appears to be primarily the difficulty of state programs to adjust screening algorithms due to the concern of possibly missing even an adult-onset case following a change that focuses on infantile and early infantile KD. Contracts with outside vendors and the effort/cost of making changes to a program’s information systems can be additional obstacles. We recommend that programs review their historical NBS outcomes for KD with their advisory committees and make transparent decisions on whether to accept false positive results for such a devastating condition or to adjust their procedures to ensure an efficient, effective, and manageable NBS program for KD.
- Published
- 2024
- Full Text
- View/download PDF
34. Develop the hybrid Adadelta Stochastic Gradient Classifier with optimized feature selection algorithm to predict the heart disease at earlier stage
- Author
-
R. Senthil, B. Narayanan, and K. Velmurugan
- Subjects
HADSGC-HHBS ,Big data ,Machine learning ,Health care ,Performance ,False positive rate ,Electric apparatus and materials. Electric circuits. Electric networks ,TK452-454.4 - Abstract
Medical big data analysis involves collecting and analyzing a massive quantity of patient data to obtain meaningful information. In many fields, including cloud-based medical systems, there are many barriers to big data analysis. The healthcare industry generates a significant amount of heart disease detail for various patients. Most recent research focuses on models based on big data analysis to improve predictive performance on heart attack data and reduce risk levels for patients. Data storage, however, has been a major challenge; data must be accessed efficiently in multiple locations in a decentralized context. The objective is to develop a Hybrid Adadelta Stochastic Gradient Classifier-based Healthcare Hash Big Data Storage (HADSGC-HHBS) method for storing and managing clinical information from many places in a distributed setting with the least amount of space and in the shortest amount of time. After vast amounts of information have been collected, data are categorized using the HADSGC-HHBS technique based on certain characteristics. The Stochastic Gradient Descent (SGD) algorithm classifies patient information using a non-convex risk objective, rather than the Support Vector Machine (SVM) algorithm. A range of data documents is used to assess the proposed HADSGC-HHBS process. Compared to previous approaches, the proposed HADSGC-HHBS process was productive in terms of classification, false positives, and reduced computing complexity.
- Published
- 2023
- Full Text
- View/download PDF
35. Racial differences in positive findings on embedded performance validity tests.
- Author
-
Hromas, Gabrielle, Rolin, Summer, and Davis, Jeremy J.
- Abstract
Introduction: Embedded performance validity tests (PVTs) may show increased positive findings in racially diverse examinees. This study examined positive findings in an older adult sample of African American (AA) and European American (EA) individuals recruited as part of a study on aging and cognition. Method: The project involved secondary analysis of deidentified National Alzheimer's Coordinating Center data (N = 22,688). Exclusion criteria included diagnosis of dementia (n = 5,550), mild cognitive impairment (MCI; n = 5,160), impaired but not MCI (n = 1,126), other race (n = 864), and abnormal Mini Mental State Examination (MMSE < 25; n = 135). The initial sample included 9,853 participants (16.4% AA). Propensity score matching matched AA and EA participants on age, education, sex, and MMSE score. The final sample included 3,024 individuals with 50% of participants identifying as AA. Premorbid ability estimates were calculated based on demographics. Failure rates on five raw score and six age-adjusted scaled score PVTs were examined by race. Results: Age, education, sex, MMSE, and premorbid ability estimate were not significantly different by race. Thirteen percent of AA and 3.8% of EA participants failed two or more raw score PVTs (p < .0001). On age-adjusted PVTs, 20.6% of AA and 5.9% of EA participants failed two or more (p < .0001). Conclusions: PVT failure rates were significantly higher among AA participants. Findings indicate a need for cautious interpretation of embedded PVTs with underrepresented groups. Adjustments to embedded PVT cutoffs may need to be considered to improve diagnostic accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
36. They are not destined to fail: a systematic examination of scores on embedded performance validity indicators in patients with intellectual disability.
- Author
-
Messa, Isabelle, Holcomb, Matthew, Lichtenstein, Jonathan D, Tyson, Brad T, Roth, Robert M, and Erdodi, Laszlo A
- Subjects
- *
TEST validity , *INTELLECTUAL disabilities , *NEUROPSYCHOLOGICAL tests , *FALSE positive error , *TEST scoring , *RECOGNITION (Psychology) - Abstract
This study was designed to determine the clinical utility of embedded performance validity indicators (EVIs) in adults with intellectual disability (ID) during neuropsychological assessment. Based on previous research, unacceptably high (>16%) base rates of failure (BRFail) were predicted on EVIs using the method of threshold, but not on EVIs based on alternative detection methods. A comprehensive battery of neuropsychological tests was administered to 23 adults with ID (mean age = 37.7 years, mean FSIQ = 64.9). BRFail were computed at two levels of cut-offs for 32 EVIs. Patients produced very high BRFail on 22 EVIs (18.2%-100%), indicating unacceptable levels of false positive errors. However, on the remaining ten EVIs, BRFail was <16%. Moreover, six of the EVIs had a zero BRFail, indicating perfect specificity. Consistent with previous research, individuals with ID failed the majority of EVIs at high BRFail. However, they produced BRFail similar to cognitively higher functioning patients on select EVIs based on recognition memory and unusual patterns of performance, suggesting that the high BRFail reported in the literature may reflect instrumentation artefacts. The implications of these findings for clinical and forensic assessment are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
37. Noninvasive prenatal testing (NIPT) results are less accurate the later applied during pregnancy.
- Author
-
Liehr T
- Subjects
- Humans, Female, Pregnancy, False Positive Reactions, China, Noninvasive Prenatal Testing methods, Gestational Age
- Abstract
Objective: Noninvasive prenatal testing (NIPT) has recently been introduced in prenatal genetics. Even though it is connected with biological, technical, medical, and ethical issues, also reviewed here, it is meanwhile applied as a standard screening test. One obvious but not yet further reviewed peculiarity of NIPT is that the reported false positive rates differ, specifically in European compared with Chinese publications. Materials and Methods: The only 15 suitable studies, covering >600,000 cases, in which at least the average gestational age at the time NIPT was done was reported, were identified. Results and Conclusion: It could be shown that NIPT is done in China in later weeks of gestation than in other countries. Besides, here for the first time it is highlighted that false positive NIPT results are less frequent the earlier the screening is performed. Most likely this is related to two biological phenomena: loss of trisomic pregnancies and preferential survival of fetuses which underwent trisomic rescue but retain major trisomic populations in the placenta. This as-yet unconsidered aspect needs to be kept in mind, especially in late-stage high-risk pregnancies. (Copyright © 2024. Published by Elsevier B.V.)
- Published
- 2024
- Full Text
- View/download PDF
38. Cooperative Detection Model for Phishing Websites Based on Approach
- Author
-
Wang, He, Lin, Guoyuan, Fang, Menghua, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Sun, Xingming, editor, Wang, Jinwei, editor, and Bertino, Elisa, editor
- Published
- 2020
- Full Text
- View/download PDF
39. A Multi-objective Bat Algorithm for Software Defect Prediction
- Author
-
Wu, Di, Zhang, Jiangjiang, Geng, Shaojin, Cai, Xingjuan, Zhang, Guoyou, Barbosa, Simone Diniz Junqueira, Editorial Board Member, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Kotenko, Igor, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Pan, Linqiang, editor, Liang, Jing, editor, and Qu, Boyang, editor
- Published
- 2020
- Full Text
- View/download PDF
40. A Holistic Approach for Detecting DDoS Attacks by Using Ensemble Unsupervised Machine Learning
- Author
-
Das, Saikat, Venugopal, Deepak, Shiva, Sajjan, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Arai, Kohei, editor, Kapoor, Supriya, editor, and Bhatia, Rahul, editor
- Published
- 2020
- Full Text
- View/download PDF
41. Robust Adaptive Cloud Intrusion Detection System Using Advanced Deep Reinforcement Learning
- Author
-
Sethi, Kamalakanta, Kumar, Rahul, Mohanty, Dinesh, Bera, Padmalochan, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Batina, Lejla, editor, Picek, Stjepan, editor, and Mondal, Mainack, editor
- Published
- 2020
- Full Text
- View/download PDF
42. Reverse Bayesian Implications of p-Values Reported in Critical Care Randomized Trials.
- Author
-
Nostedt, Sarah and Joffe, Ari R.
- Subjects
- *
CRITICAL care medicine , *RANDOMIZED controlled trials , *MORTALITY , *HYPOTHESIS , *CONTINUOUS ambulatory peritoneal dialysis - Abstract
Background: Misinterpretations of the p-value in null-hypothesis statistical testing are common. We aimed to determine the implications of observed p-values in critical care randomized controlled trials (RCTs). Methods: We included three cohorts of published RCTs: Adult-RCTs reporting a mortality outcome, Pediatric-RCTs reporting a mortality outcome, and recent Consecutive-RCTs reporting p-value ≤.10 in six higher-impact journals. We recorded descriptive information from RCTs. Reverse Bayesian implications of obtained p-values were calculated, reported as percentages with inter-quartile ranges. Results: Obtained p-value was ≤.005 in 11/216 (5.1%) Adult-RCTs, 2/120 (1.7%) Pediatric-RCTs, and 37/90 (41.1%) Consecutive-RCTs. An obtained p-value .05-.0051 had high False Positive Rates; in Adult-RCTs, minimum (assuming prior probability of the alternative hypothesis was 50%) and realistic (assuming prior probability of the alternative hypothesis was 10%) False Positive Rates were 16.7% [11.2, 21.8] and 64.3% [53.2, 71.4]. An obtained p-value ≤.005 had lower False Positive Rates; in Adult-RCTs the realistic False Positive Rate was 7.7% [7.7, 16.0]. The realistic probability of the alternative hypothesis for obtained p-value .05-.0051 (ie, Positive Predictive Value) was 28.0% [24.1, 34.8], 30.6% [27.7, 48.5], 29.3% [24.3, 41.0], and 32.7% [24.1, 43.5] for Adult-RCTs, Pediatric-RCTs, Consecutive-RCTs primary and secondary outcome, respectively. The maximum Positive Predictive Value for p-value category .05-.0051 was median 77.8%, 79.8%, 78.8%, and 81.4% respectively. To have maximum or realistic Positive Predictive Value >90% or >80%, RCTs needed to have obtained p-value ≤.005. The credibility of p-value .05-.0051 findings was easy to challenge, and the credibility to rule out an effect with p-value >.05 to .10 was low. The probability that a replication study would obtain p-value ≤.05 did not approach 90% unless the obtained p-value was ≤.005.
Conclusions: Unless the obtained p-value was ≤.005, the False Positive Rate was high, and the Positive Predictive Value and probability of replication of "statistically significant" findings were low. [ABSTRACT FROM AUTHOR]
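This kind of reverse-Bayesian calculation is commonly done with the Sellke-Berger-Bayarri bound on the Bayes factor; the sketch below uses that bound under stated assumptions and is not the authors' exact procedure (their medians are computed over many trials with varying p-values, so the single-p-value figures here will not match the abstract's numbers).

```python
import math

def min_false_positive_risk(p, prior_h1):
    """Lower bound on the false positive risk of a 'significant' result,
    using the Sellke-Berger-Bayarri bound BF01 >= -e * p * ln(p),
    valid for p < 1/e. `prior_h1` is the prior probability that the
    alternative hypothesis is true."""
    assert 0 < p < 1 / math.e
    bf01 = -math.e * p * math.log(p)           # minimum Bayes factor for H0
    prior_odds_h0 = (1 - prior_h1) / prior_h1  # prior odds H0 : H1
    post_odds_h0 = prior_odds_h0 * bf01        # posterior odds lower bound
    return post_odds_h0 / (1 + post_odds_h0)   # P(H0 | observed p-value)

# A p-value of exactly .05 under a 50/50 prior already carries a false
# positive risk of roughly 29%; under a 10% prior on the alternative
# it rises to roughly 79%.
print(min_false_positive_risk(0.05, 0.5))
print(min_false_positive_risk(0.05, 0.1))
```

The qualitative message matches the abstract: only p-values well below .05 (the authors argue ≤.005) keep the false positive risk low under realistic priors.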
- Published
- 2022
- Full Text
- View/download PDF
43. R-ratio test threshold selection in GNSS integer ambiguity resolution.
- Author
-
Wu, Yanze, Yu, Xianwen, and Wang, Jiafu
- Subjects
- *
AMBIGUITY , *FALSE positive error , *INTEREST rates - Abstract
Ensuring that ambiguity cycles are correctly fixed as integers is a critical prerequisite for the reliability of GNSS high-precision carrier positioning results. It is therefore both theoretically and practically important to investigate the performance of the ambiguity validation test and to select an appropriate threshold. First, two statistics are proposed in this paper to quantitatively describe the performance of the validation test: the true negative rate, the percentage of Type I errors (a correct solution discarded) among all failed tests, and the false positive rate, the percentage of Type II errors (a wrong solution accepted) among all passed tests. This paper then employs the false positive rate and the true negative rate as the primary and secondary criteria, respectively, for evaluating the performance of the R-ratio test, and develops simulation experiments to evaluate different thresholds under different ambiguity dimensions and data accuracies, finally providing decisions for test threshold selection: (1) For ambiguities with 4 to 9 dimensions, a reference table for the selection of thresholds is given. (2) For ambiguities of 10 dimensions or more, the threshold value should be no less than 2.0 (and data with a mean value of more than 3.7 for the main diagonal elements of the variance matrix should not be fixed). [ABSTRACT FROM AUTHOR]
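The two statistics defined in the abstract are simple conditional rates over test outcomes. A minimal sketch, with invented tallies (the paper derives its counts from simulation experiments):

```python
# Sketch of the two validation-test statistics described in the abstract.
# The counts passed in below are invented for illustration.

def validation_rates(type1_failed, total_failed, type2_passed, total_passed):
    """true_negative_rate: share of Type I errors (correct integer
    solutions discarded) among all failed tests.
    false_positive_rate: share of Type II errors (wrong integer
    solutions accepted) among all passed tests."""
    tnr = type1_failed / total_failed
    fpr = type2_passed / total_passed
    return tnr, fpr

tnr, fpr = validation_rates(type1_failed=30, total_failed=200,
                            type2_passed=5, total_passed=800)
print(f"true negative rate = {tnr:.2%}, false positive rate = {fpr:.3%}")
```

A higher R-ratio threshold generally trades a lower false positive rate (fewer wrong fixes accepted) for a higher true negative rate (more correct fixes discarded), which is why the paper treats the two as primary and secondary criteria.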
- Published
- 2022
- Full Text
- View/download PDF
44. Type I error rates of multi-arm multi-stage clinical trials: strong control and impact of intermediate outcomes
- Author
-
Bratton, Daniel J, Parmar, Mahesh KB, Phillips, Patrick PJ, and Choodari-Oskooei, Babak
- Subjects
Epidemiology ,Health Sciences ,Clinical Trials and Supportive Activities ,Clinical Research ,Clinical Trials as Topic ,Data Interpretation ,Statistical ,Endpoint Determination ,Humans ,Models ,Statistical ,Research Design ,Time Factors ,Multi-arm ,Multi-stage ,False positive rate ,Familywise error rate ,MAMS ,Cardiorespiratory Medicine and Haematology ,Clinical Sciences ,Cardiovascular System & Hematology ,General & Internal Medicine ,Clinical sciences ,Health services and systems - Abstract
Background: The multi-arm multi-stage (MAMS) design described by Royston et al. [Stat Med. 2003;22(14):2239-56 and Trials. 2011;12:81] can accelerate treatment evaluation by comparing multiple treatments with a control in a single trial and stopping recruitment to arms not showing sufficient promise during the course of the study. To increase efficiency further, interim assessments can be based on an intermediate outcome (I) that is observed earlier than the definitive outcome (D) of the study. Two measures of type I error rate are often of interest in a MAMS trial. The pairwise type I error rate (PWER) is the probability of recommending an ineffective treatment at the end of the study, regardless of the other experimental arms in the trial. The familywise type I error rate (FWER) is the probability of recommending at least one ineffective treatment and is often of greater interest in a study with more than one experimental arm. Methods: We demonstrate how to calculate the PWER and FWER when the I and D outcomes in a MAMS design differ. We explore how each measure varies with respect to the underlying treatment effect on I and show how to control the type I error rate under any scenario. We conclude by applying the methods to estimate the maximum type I error rate of an ongoing MAMS study and show how the design might have looked had it controlled the FWER under any scenario. Results: The PWER and FWER converge to their maximum values as the effectiveness of the experimental arms on I increases. We show that both measures can be controlled under any scenario by setting the pairwise significance level in the final stage of the study to the target level. In an example, controlling the FWER is shown to increase the size of the trial considerably, although it remains substantially more efficient than evaluating each new treatment in separate trials. Conclusions: The proposed methods allow the PWER and FWER to be controlled in various MAMS designs, potentially increasing the uptake of the MAMS design in practice. The methods are also applicable in cases where the I and D outcomes are identical.
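The relation between PWER and FWER can be illustrated with the standard Šidák-style independence approximation; this is only a rough sketch of why FWER exceeds PWER, not the paper's calculation, which accounts for correlation between the interim (I) and definitive (D) outcomes across stages.

```python
# Rough independence approximation of the familywise error rate from the
# pairwise error rate. The real MAMS calculation (Royston et al.) models
# correlated stage-wise outcomes, so treat this only as an illustration.

def fwer_independent(pwer, n_arms):
    """FWER if each of n_arms pairwise comparisons with control were an
    independent test conducted at level `pwer`."""
    return 1 - (1 - pwer) ** n_arms

# Three experimental arms, each compared to control at one-sided 2.5%:
print(fwer_independent(0.025, 3))
```

Even under this simplification, adding arms inflates the familywise rate well above the pairwise level, which is why a multi-arm trial that must control FWER needs a stricter pairwise significance level, and hence a larger sample size, as the abstract notes.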
- Published
- 2016
45. Considerations on the region of interest in the ROC space.
- Author
-
Lavazza, Luigi and Morasca, Sandro
- Subjects
- *
RECEIVER operating characteristic curves , *PYTHON programming language , *BORDERLANDS , *STATISTICAL correlation , *SOFTWARE engineers , *SOFTWARE engineering - Abstract
Receiver Operating Characteristic (ROC) curves have been widely used to represent the performance of diagnostic tests. The corresponding area under the curve, widely used to evaluate their performance quantitatively, has been criticized in several respects. Several proposals have been introduced to improve on the area under the curve by taking into account only specific regions of the ROC space, that is, the plane to which ROC curves belong. For instance, a region of interest can be delimited by setting specific thresholds for the true positive rate or the false positive rate. Different ways of setting the borders of the region of interest may result in completely different, even opposing, evaluations. In this paper, we present a method to define a region of interest in a rigorous and objective way, and to compute a partial area under the curve that can be used to evaluate the performance of diagnostic tests. The method was originally conceived in the software engineering domain to evaluate the performance of methods that estimate the defectiveness of software modules. We compare this method with previous proposals. Our method allows the definition of regions of interest by setting acceptability thresholds on any kind of performance metric, not just the false positive rate and true positive rate: for instance, the region of interest can be determined by imposing that ϕ (also known as the Matthews Correlation Coefficient) is above a given threshold. We also show how to delimit the region of interest corresponding to acceptable costs, whenever the individual cost of false positives and false negatives is known. Finally, we demonstrate the effectiveness of the method by applying it to the Wisconsin Breast Cancer Data. We provide Python and R packages supporting the presented method. [ABSTRACT FROM AUTHOR]
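The simplest region of interest the abstract mentions, a cap on the acceptable false positive rate, yields a partial AUC that can be computed by trapezoidal integration. A minimal sketch with invented ROC points (the paper's method generalizes to regions defined by other metrics, such as ϕ, which this cut-off does not capture):

```python
# Partial AUC restricted to a region of interest fpr <= fpr_max.
# ROC points below are invented for illustration.

def partial_auc(fpr_points, tpr_points, fpr_max):
    """Trapezoidal area under the ROC curve for fpr <= fpr_max.
    fpr_points must be sorted ascending, starting at 0."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(zip(fpr_points, tpr_points),
                                  zip(fpr_points[1:], tpr_points[1:])):
        if x0 >= fpr_max:
            break
        if x1 > fpr_max:  # linearly interpolate the curve at the boundary
            y1 = y0 + (y1 - y0) * (fpr_max - x0) / (x1 - x0)
            x1 = fpr_max
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

fpr = [0.0, 0.1, 0.3, 1.0]
tpr = [0.0, 0.6, 0.8, 1.0]
print(partial_auc(fpr, tpr, fpr_max=0.3))
```

Because the full AUC rewards the high-FPR portion of the curve that is operationally irrelevant for many diagnostic tests, restricting the integral to the region of interest gives a fairer comparison, which is the motivation the abstract builds on.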
- Published
- 2022
- Full Text
- View/download PDF
46. Evaluation for Two Bloom Filters’ Configuration
- Author
-
Luo, Chenxi, Wang, Zhu, Luo, Tiejian, Barbosa, Simone Diniz Junqueira, Series Editor, Filipe, Joaquim, Series Editor, Kotenko, Igor, Series Editor, Washio, Takashi, Series Editor, Yuan, Junsong, Series Editor, Zhou, Lizhu, Series Editor, Ghosh, Ashish, Series Editor, Park, Jong Hyuk, editor, Shen, Hong, editor, Sung, Yunsick, editor, and Tian, Hui, editor
- Published
- 2019
- Full Text
- View/download PDF
47. An Introduction to Biostatistics
- Author
-
Cunanan, Kristen M., Gönen, Mithat, Lewis, Jason S., editor, Windhorst, Albert D., editor, and Zeglis, Brian M., editor
- Published
- 2019
- Full Text
- View/download PDF
48. A study of boosted evolutionary classifiers for detecting spam
- Author
-
Trivedi, Shrawan Kumar and Dey, Shubhamoy
- Published
- 2020
- Full Text
- View/download PDF
49. Diagnostic tests: how to estimate the positive predictive value
- Author
-
Molinaro, Annette M
- Subjects
Biomedical and Clinical Sciences ,Clinical Sciences ,4.2 Evaluation of markers and technologies ,Detection ,screening and diagnosis ,diagnostic tests ,false positive rate ,positive predictive value ,sensitivity ,statistics ,Neurosciences ,Oncology and carcinogenesis - Abstract
When a patient receives a positive result from a diagnostic test, they assume they have the disease. However, the positive predictive value (PPV), i.e., the probability that they have the disease given a positive test result, is rarely equal to one. To assist their patients, doctors must explain the chance that they do in fact have the disease. However, physicians frequently miscalculate the PPV as the sensitivity and/or misinterpret the PPV, which results in increased anxiety in patients and generates unnecessary tests and consultations. The reasons for this miscalculation, as well as three ways to calculate the PPV, are reviewed here.
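The calculation the abstract refers to follows directly from Bayes' rule; the sensitivity, specificity, and prevalence figures below are illustrative, not from the article.

```python
# PPV via Bayes' rule. The specific numbers are illustrative only.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive test)."""
    true_pos = sensitivity * prevalence              # diseased and positive
    false_pos = (1 - specificity) * (1 - prevalence)  # healthy and positive
    return true_pos / (true_pos + false_pos)

# A test with 90% sensitivity and 95% specificity, applied where the
# disease prevalence is 1%: most positive results are false positives.
ppv = positive_predictive_value(0.90, 0.95, 0.01)
print(f"PPV = {ppv:.1%}")
```

This illustrates the mistake the abstract describes: conflating the PPV with the 90% sensitivity badly overstates the chance of disease when prevalence is low, since here fewer than one in six positive results is a true positive.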
- Published
- 2015
50. Accuracy evaluation of the results of rapid detection reagents for seven sulfonamide veterinary drug residues based on immunoassay principles.
- Author
-
顾 晔, 张 爽, 王成军, 李 悦, and 杨雨柔
- Abstract
Copyright of Journal of Food Safety & Quality is the property of Journal of Food Safety & Quality Editorial Department and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2022