10,331 results on '"RAW DATA"'
Search Results
2. Importance of powder diffraction raw data archival in a curated database for materials science applications.
- Author
-
Kabekkodu, Soorya and Blanton, Thomas
- Subjects
- *
SCIENCE databases , *DIFFRACTION patterns , *MATERIALS science , *DATABASES , *SINGLE crystals , *NEUTRON diffraction - Abstract
In recent years, there has been significant interest from the crystallographic and materials science communities in having access to raw diffraction data. The effort in archiving raw data for access by the user community is spearheaded by the International Union of Crystallography (IUCr) Committee on Data. In materials science, where powder diffraction is extensively used, the challenge in archiving raw data is different from that for single-crystal data, owing to the very nature of the contributions involved. Powder diffraction (X‐ray or neutron) data consist of contributions from the material under study as well as instrument-specific parameters. Having raw powder diffraction data can be essential in cases of analysing materials with poor crystallinity, disorder, microstructure (size/strain) etc. Here, the initiative and progress made by the International Centre for Diffraction Data (ICDD®) in archiving powder X‐ray diffraction raw data in the Powder Diffraction File™ (PDF®) database is outlined. The upcoming 2025 release of the PDF‐5+ database will have more than 20,800 raw powder diffraction patterns that are available for reference. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Best practices for data management and sharing in experimental biomedical research.
- Author
-
Cunha-Oliveira, Teresa, Ioannidis, John P. A., and Oliveira, Paulo J.
- Subjects
- *
DATA management , *INFORMATION sharing , *BEST practices , *MEDICAL research , *RESEARCH personnel - Abstract
Effective data management is crucial for scientific integrity and reproducibility, a cornerstone of scientific progress. Well-organized and well-documented data enable validation and building on results. Data management encompasses activities including organization, documentation, storage, sharing, and preservation. Robust data management establishes credibility, fostering trust within the scientific community and benefiting researchers' careers. In experimental biomedicine, comprehensive data management is vital due to the typically intricate protocols, extensive metadata, and large datasets. Low-throughput experiments, in particular, require careful management to address variations and errors in protocols and raw data quality. Transparent and accountable research practices rely on accurate documentation of procedures, data collection, and analysis methods. Proper data management ensures long-term preservation and accessibility of valuable datasets. Well-managed data can be revisited, contributing to cumulative knowledge and potential new discoveries. Publicly funded research has an added responsibility for transparency, resource allocation, and avoiding redundancy. Meeting funding agency expectations increasingly requires rigorous methodologies, adherence to standards, comprehensive documentation, and widespread sharing of data, code, and other auxiliary resources. This review provides critical insights into raw and processed data, metadata, high-throughput versus low-throughput datasets, a common language for documentation, experimental and reporting guidelines, efficient data management systems, sharing practices, and relevant repositories. We systematically present available resources and optimal practices for wide use by experimental biomedical researchers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. An efficient defogging network for RAW image sequences with high viewpoint.
- Author
-
Liu, Yan, Qi, Wenting, Wang, Jingwen, Xiao, Yanqiu, Cui, Guangzhen, and Han, Li
- Published
- 2024
- Full Text
- View/download PDF
5. Influence of helical pitch and gantry rotation time on image quality and file size in ultrahigh-resolution photon-counting detector CT
- Author
-
Philipp Feldle, Jan-Peter Grunz, Henner Huflage, Andreas Steven Kunz, Süleyman Ergün, Saif Afat, Philipp Gruschwitz, Lukas Görtz, Lenhard Pennig, Thorsten Alexander Bley, and Nora Conrads
- Subjects
Photon-counting ,Tomography, x-ray computed ,Helical pitch factor ,Gantry rotation time ,Raw data ,Medicine ,Science - Abstract
Abstract The goal of this experimental study was to quantify the influence of helical pitch and gantry rotation time on image quality and file size in ultrahigh-resolution photon-counting CT (UHR-PCCT). Cervical and lumbar spine, pelvis, and upper legs of two fresh-frozen cadaveric specimens were subjected to nine dose-matched UHR-PCCT scan protocols employing a collimation of 120 × 0.2 mm with varying pitch (0.3/1.0/1.2) and rotation time (0.25/0.5/1.0 s). Image quality was analyzed independently by five radiologists and further substantiated by placing normed regions of interest to record mean signal attenuation and noise. Effective mAs, CT dose index (CTDIvol), size-specific dose estimate (SSDE), scan duration, and raw data file size were compared. Regardless of anatomical region, no significant difference was ascertained for CTDIvol (p ≥ 0.204) and SSDE (p ≥ 0.240) among protocols. While exam duration differed substantially (all p ≤ 0.016), the lowest scan time was recorded for high-pitch protocols (4.3 ± 1.0 s) and the highest for low-pitch protocols (43.6 ± 15.4 s). The combination of high helical pitch and short gantry rotation times produced the lowest perceived image quality (intraclass correlation coefficient 0.866; 95% confidence interval 0.807–0.910; p < 0.001) and highest noise. Raw data size increased with acquisition time (15.4 ± 5.0 to 235.0 ± 83.5 GByte; p ≤ 0.013). Rotation time and pitch factor have considerable influence on image quality in UHR-PCCT and must therefore be chosen deliberately for different musculoskeletal imaging tasks. In examinations with long acquisition times, raw data size increases considerably, consequently limiting clinical applicability for larger scan volumes.
- Published
- 2024
- Full Text
- View/download PDF
6. Influence of helical pitch and gantry rotation time on image quality and file size in ultrahigh-resolution photon-counting detector CT.
- Author
-
Feldle, Philipp, Grunz, Jan-Peter, Huflage, Henner, Kunz, Andreas Steven, Ergün, Süleyman, Afat, Saif, Gruschwitz, Philipp, Görtz, Lukas, Pennig, Lenhard, Bley, Thorsten Alexander, and Conrads, Nora
- Subjects
- *
PHOTON counting , *INTRACLASS correlation , *ROTATIONAL motion , *THIGH , *CERVICAL vertebrae , *DETECTORS , *LUMBAR vertebrae - Abstract
The goal of this experimental study was to quantify the influence of helical pitch and gantry rotation time on image quality and file size in ultrahigh-resolution photon-counting CT (UHR-PCCT). Cervical and lumbar spine, pelvis, and upper legs of two fresh-frozen cadaveric specimens were subjected to nine dose-matched UHR-PCCT scan protocols employing a collimation of 120 × 0.2 mm with varying pitch (0.3/1.0/1.2) and rotation time (0.25/0.5/1.0 s). Image quality was analyzed independently by five radiologists and further substantiated by placing normed regions of interest to record mean signal attenuation and noise. Effective mAs, CT dose index (CTDIvol), size-specific dose estimate (SSDE), scan duration, and raw data file size were compared. Regardless of anatomical region, no significant difference was ascertained for CTDIvol (p ≥ 0.204) and SSDE (p ≥ 0.240) among protocols. While exam duration differed substantially (all p ≤ 0.016), the lowest scan time was recorded for high-pitch protocols (4.3 ± 1.0 s) and the highest for low-pitch protocols (43.6 ± 15.4 s). The combination of high helical pitch and short gantry rotation times produced the lowest perceived image quality (intraclass correlation coefficient 0.866; 95% confidence interval 0.807–0.910; p < 0.001) and highest noise. Raw data size increased with acquisition time (15.4 ± 5.0 to 235.0 ± 83.5 GByte; p ≤ 0.013). Rotation time and pitch factor have considerable influence on image quality in UHR-PCCT and must therefore be chosen deliberately for different musculoskeletal imaging tasks. In examinations with long acquisition times, raw data size increases considerably, consequently limiting clinical applicability for larger scan volumes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
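The scan-time differences reported above follow directly from helical CT geometry: the table advances by pitch × collimation width per rotation, so coverage divided by table speed gives the exam duration. A minimal sketch, assuming a hypothetical 450 mm scan range (the study does not state one); the 24 mm total collimation follows from the reported 120 × 0.2 mm:

```python
def helical_scan_time(coverage_mm, pitch, collimation_mm, rotation_time_s):
    """Approximate helical CT scan duration.

    Table feed per rotation = pitch * collimation width, so the number
    of rotations needed to cover the scan range fixes the duration.
    """
    table_feed_per_rotation = pitch * collimation_mm   # mm advanced per rotation
    rotations = coverage_mm / table_feed_per_rotation
    return rotations * rotation_time_s

# Compare the study's protocol extremes over an assumed 450 mm range
fast = helical_scan_time(450, pitch=1.2, collimation_mm=24, rotation_time_s=0.25)
slow = helical_scan_time(450, pitch=0.3, collimation_mm=24, rotation_time_s=1.0)
```

The ~16x spread between `fast` and `slow` mirrors the order-of-magnitude gap between the 4.3 s high-pitch and 43.6 s low-pitch protocols in the abstract.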
7. SFL-MDrone: Synchronous federated learning enabled multi drones.
- Author
-
Sharma, Itika and Gupta, Sachin Kumar
- Subjects
- *
FEDERATED learning , *SERVER farms (Computer network management) , *DATA packeting , *INTERACTIVE multimedia , *MACHINE learning , *DATA modeling , *WIRELESS communications - Abstract
UAVs or drones can be used to support wireless communication by acting as flying or mobile base stations that accumulate different types of data to train models. In traditional or DL-based UAVs, however, the raw data is sent from the devices to a centralized server, which raises security and privacy concerns and strains the UAVs' limited communication and processing resources. The transmission of distributed, unprocessed data from the drones to the cloud, including interactive-media data types, requires a significant amount of network bandwidth and energy, which has an enormous effect on several trade-offs, including communication rates and computation latencies. Data packet loss caused by asynchronous transmission, which does not prevent peer-to-peer communication, is a concern with AFL-based UAVs. To address these issues, we introduce SFL-based UAVs and focus on algorithms in which model updates reach the server simultaneously once all of the chosen devices have communicated. The proposed framework enables a variety of devices, including mobile and UAV devices, to train their machine learning algorithms locally and then update the models and parameters simultaneously to servers or manned aerial data centers for model build-up, without transferring their original private information. This decreases packet loss and privacy threats while also enhancing round effectiveness and model accuracy. A comparative analysis of AFL and SFL techniques in terms of accuracy, global rounds, and communication rounds is offered. Simulation findings suggest that the proposed methodology improves global rounds and accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
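The synchronous update the abstract describes (the server waits for every selected client before aggregating, and raw data never leaves a device) can be sketched as a FedAvg-style round. The toy 1-D linear model, learning rate, and client data below are illustrative assumptions, not the paper's algorithm:

```python
import random

def local_train(w, data, lr=0.1, epochs=5):
    """One client's local gradient-descent pass on a 1-D linear model y = w*x."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of the squared error (w*x - y)^2
            w -= lr * grad
    return w

def synchronous_round(global_w, client_datasets):
    """Server collects updates from *all* selected clients, then averages.

    Only the trained weight is uploaded; the raw samples stay on-device.
    """
    updates = [local_train(global_w, d) for d in client_datasets]  # wait for all
    return sum(updates) / len(updates)                             # FedAvg step

random.seed(0)
# Each drone observes noisy samples of the same relation y = 3x
clients = [[(x, 3 * x + random.gauss(0, 0.01)) for x in (0.5, 1.0, 1.5)]
           for _ in range(4)]
w = 0.0
for _ in range(10):
    w = synchronous_round(w, clients)   # w converges toward 3
```

Because every round blocks until all clients report, no update is lost to asynchrony, which is the packet-loss advantage the abstract attributes to SFL over AFL.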
8. Python toolbox for android GNSS raw data to RINEX conversion.
- Author
-
Hernández Olcina, Jorge, Anquela Julián, Ana B., and Martín Furones, Ángel E.
- Abstract
Global navigation satellite system (GNSS) data collected from Android devices have gained increasing importance in various applications, ranging from geospatial positioning to environmental monitoring. However, the lack of standardized tools for converting Android GNSS raw data into receiver independent exchange (RINEX) format poses a significant challenge for researchers and practitioners. In response to this need, we present a comprehensive Python toolbox designed to streamline the conversion process and enhance the usability of Android GNSS data. The proposed toolbox leverages Python’s versatility to provide a user-friendly interface for converting Android GNSS raw data into the widely adopted RINEX format. Key features include robust data parsing algorithms, support for multiple GNSS constellations, and compatibility with diverse Android device configurations. Furthermore, the toolbox’s open-source nature encourages community collaboration and allows for continual improvement and adaptation to emerging GNSS technologies. We anticipate that this Python toolbox will serve as a valuable resource for researchers and practitioners working with Android GNSS data, facilitating standardized data interchange and promoting reproducibility in GNSS-based studies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
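The central step any such converter must perform is forming a pseudorange from the Android `GnssMeasurement` fields (`TimeNanos`, `FullBiasNanos`, `BiasNanos`, `ReceivedSvTimeNanos`). A sketch of the standard computation for a GPS L1 measurement follows; the field values are synthetic, and a production toolbox must additionally handle week rollover, clock discontinuities, and constellation-specific time bases:

```python
C = 299_792_458.0                       # speed of light, m/s
WEEK_NS = 604_800 * 1_000_000_000       # nanoseconds in one GPS week

def gps_pseudorange(time_nanos, full_bias_nanos, bias_nanos,
                    received_sv_time_nanos):
    """Pseudorange (metres) from Android GnssMeasurement fields, GPS signal.

    GPS receive time = TimeNanos - (FullBiasNanos + BiasNanos); reducing it
    modulo one week gives a time-of-week comparable to ReceivedSvTimeNanos.
    """
    t_rx_gps = time_nanos - (full_bias_nanos + bias_nanos)
    t_rx_tow = t_rx_gps % WEEK_NS                       # receive time of week, ns
    tau = (t_rx_tow - received_sv_time_nanos) * 1e-9    # signal travel time, s
    return tau * C

# Synthetic fields chosen so the travel time comes out at exactly 70 ms,
# giving a plausible ~21,000 km GPS range
pr = gps_pseudorange(time_nanos=100_000_000_000,
                     full_bias_nanos=-604_800_000_000_000,
                     bias_nanos=0.0,
                     received_sv_time_nanos=99_930_000_000)
```

RINEX observation records are then just these pseudoranges (plus carrier phase and Doppler) written per epoch and per satellite in the fixed-width RINEX layout.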
9. From Signal to Knowledge: The Diagnostic Value of Raw Data in the Artificial Intelligence Prediction of Human Data for the First Time
- Author
-
Bingxi He, Yu Guo, Yongbei Zhu, Lixia Tong, Boyu Kong, Kun Wang, Caixia Sun, Hailin Li, Feng Huang, Liwei Wu, Meng Wang, Fanyang Meng, Le Dou, Kai Sun, Tong Tong, Zhenyu Liu, Ziqi Wei, Wei Mu, Shuo Wang, Zhenchao Tang, Shuaitong Zhang, Jingwei Wei, Lizhi Shao, Mengjie Fang, Juntao Li, Shouping Zhu, Lili Zhou, Di Dong, Huimao Zhang, and Jie Tian
- Subjects
Computed tomography ,Diagnosis ,Deep learning ,Lung cancer ,Raw data ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
Encouraging and astonishing developments have recently been achieved in image-based diagnostic technology. Modern medical care and imaging technology are becoming increasingly inseparable. However, the current diagnosis pattern of signal to image to knowledge inevitably leads to information distortion and noise introduction in the procedure of image reconstruction (from signal to image). Artificial intelligence (AI) technologies that can mine knowledge from vast amounts of data offer opportunities to disrupt established workflows. In this prospective study, for the first time, we develop an AI-based signal-to-knowledge diagnostic scheme for lung nodule classification directly from the computed tomography (CT) raw data (the signal). We find that the raw data achieves almost comparable performance with CT, indicating that it is possible to diagnose diseases without reconstructing images. Moreover, the incorporation of raw data through three common convolutional network structures greatly improves the performance of the CT models in all cohorts (with a gain ranging from 0.01 to 0.12), demonstrating that raw data contains diagnostic information that CT does not possess. Our results break new ground and demonstrate the potential for direct signal-to-knowledge domain analysis.
- Published
- 2024
- Full Text
- View/download PDF
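The "signal" this study classifies is the CT projection (sinogram) data that exists before image reconstruction. A toy illustration of how such raw data arises, using parallel-beam forward projection of a 2-D grid at just two angles (row and column sums); the object and angles are mine, not the study's:

```python
def project(image, axis):
    """Sum a 2-D grid of attenuation values along one axis: one
    parallel-beam projection (a single sinogram row) at 0 or 90 degrees."""
    if axis == 0:   # vertical rays -> column sums
        return [sum(row[j] for row in image) for j in range(len(image[0]))]
    return [sum(row) for row in image]  # horizontal rays -> row sums

# Toy object: a bright 'nodule' in an otherwise empty field
obj = [[0, 0, 0, 0],
       [0, 5, 5, 0],
       [0, 5, 5, 0],
       [0, 0, 0, 0]]

sinogram = {0: project(obj, 0), 90: project(obj, 1)}
```

A signal-to-knowledge model consumes rows like these directly, which is why it can sidestep the information distortion that filtered back-projection introduces.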
10. Ship Detection From Raw SAR Echoes Using Convolutional Neural Networks
- Author
-
Kevin De Sousa, Georgios Pilikos, Mario Azcueta, and Nicolas Floury
- Subjects
Deep learning ,raw data ,ship detection ,synthetic aperture radar (SAR) ,Ocean engineering ,TC1501-1800 ,Geophysics. Cosmic physics ,QC801-809 - Abstract
Synthetic aperture radar (SAR) is an indispensable tool for marine monitoring. Conventional data processing involves data down-linking and on-ground operations for image focusing, analysis, and ship detection. These steps take a significant amount of time, resulting in potentially critical delays. In this work, we propose a ship detection algorithm, based on convolutional neural networks, that operates directly on raw SAR echoes. To evaluate our approach, we performed experiments using raw data simulations and real raw SAR data from Sentinel-1 stripmap mode scenes. Preliminary results on this set show the capability of detecting multiple ships from raw data with accuracy similar to that obtained using single-look-complex images as input. At the same time, running time is reduced significantly by bypassing the image focusing step. This illustrates the great potential of deep learning, moving toward more intelligent SAR systems.
- Published
- 2024
- Full Text
- View/download PDF
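The focusing step the authors bypass is, at its core, matched filtering: correlating each raw echo with a replica of the transmitted chirp (range compression), which concentrates a target's energy at its range delay. A stdlib-only sketch with a synthetic linear-FM pulse; the chirp rate and target delay are illustrative:

```python
import cmath

def chirp(n, rate=0.02):
    """Synthetic linear-FM pulse replica: exp(j*pi*rate*t^2)."""
    return [cmath.exp(1j * cmath.pi * rate * t * t) for t in range(n)]

def range_compress(echo, replica):
    """Matched filter: correlate the raw echo with the conjugate replica.
    The compressed output peaks at the target's range delay."""
    n, m = len(echo), len(replica)
    out = []
    for lag in range(n - m + 1):
        acc = sum(echo[lag + k] * replica[k].conjugate() for k in range(m))
        out.append(abs(acc))
    return out

pulse = chirp(64)
delay = 40                                  # target range bin
echo = [0j] * delay + pulse + [0j] * 64     # noiseless single-target raw echo
profile = range_compress(echo, pulse)
peak = max(range(len(profile)), key=profile.__getitem__)   # recovers `delay`
```

A CNN detector on raw echoes must learn to find targets in the unfocused `echo` signal, where the energy is still spread over the full pulse length; skipping the correlation above is where the reported run-time saving comes from.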
11. INTELLIGENT DATA ANALYSIS ON AN ANALYTICAL PLATFORM.
- Author
-
Darkenbayev, Dauren, Altybay, Arshyn, Darkenbayeva, Zhaidargul, and Mekebayev, Nurbapa
- Abstract
This article explores methods for processing unstructured data using an analytical platform. The authors analyze existing methods and propose new approaches for data processing. They discuss the use of spectral processing and noise removal techniques to improve the accuracy of statistical analysis. The authors emphasize the importance of processing raw data and highlight the need for analytics platforms to handle large volumes of data. Overall, this paper contributes to the development of raw data processing technologies and emphasizes the importance of obtaining accurate results from any data source. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
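The article does not specify its noise-removal algorithm, but the kind of smoothing step such a platform applies to raw series can be sketched as a centered moving average (a minimal sketch, assuming a 3-sample window):

```python
def moving_average(series, window=3):
    """Centered moving average; window must be odd. Edges shrink the
    window rather than padding, so output length equals input length."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

noisy = [1.0, 9.0, 1.0, 9.0, 1.0]   # raw measurements with alternating spikes
smooth = moving_average(noisy)      # spikes are pulled toward the local mean
```

Statistical analysis downstream then operates on `smooth`, which is the accuracy improvement the authors attribute to preprocessing raw data before analysis.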
12. Dataset on a reliability generalization meta-analysis of the Oxford COVID-19 vaccine hesitancy scale
- Author
-
Kabiru Maitama Kura and Ramatu Abdulkareem Abubakar
- Subjects
Heath data ,Coronavirus ,Public health ,Data sharing ,Raw data ,Pandemic ,Computer applications to medicine. Medical informatics ,R858-859.7 ,Science (General) ,Q1-390 - Abstract
The Oxford COVID-19 Vaccine Hesitancy Scale is a 7-item psychometric scale developed by Freeman and colleagues a year after the first case of the disease was detected in 2019. The scale assesses people's thoughts, feelings, and behavior toward COVID-19 vaccines. A comprehensive search of major electronic databases, including Scopus, Clarivate Analytics, and PubMed, was conducted to extract eligible articles for inclusion in this meta-analysis. This paper reports information on data collected for a reliability generalization meta-analysis of the Oxford COVID-19 Vaccine Hesitancy Scale. The dataset incorporates information on the average reliability of the scale as measured with Cronbach's alpha in the 20 studies included in the meta-analysis. Several benefits can be derived from the dataset. In particular, the research community would find this dataset beneficial as it can enhance their understanding of the health challenges of COVID-19, helping them come up with better solutions to eradicate the disease.
- Published
- 2024
- Full Text
- View/download PDF
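The reliability statistic this dataset aggregates, Cronbach's alpha, is computed from item variances and the variance of the total score: alpha = k/(k−1) × (1 − Σ var_item / var_total). A stdlib sketch on hypothetical 7-item Likert responses (the response matrix is invented for illustration):

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of respondent scores per scale item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent totals
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 7-item responses from five respondents (Likert 1-5);
# items move together across respondents, so alpha should be high
responses = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 1],
    [4, 4, 3, 4, 2],
    [3, 5, 2, 4, 2],
    [4, 4, 3, 5, 1],
    [5, 5, 3, 4, 2],
]
alpha = cronbach_alpha(responses)
```

A reliability generalization meta-analysis then pools alpha values like this one across the 20 included studies.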
13. Raw GNSS data collected using smartphones and low-cost receiver under optimal and sub-optimal conditions
- Author
-
Julián Tomaštík, Matej Varga, and Tim Everett
- Subjects
Raw data ,Low-cost receivers ,Global navigation satellite systems ,Adverse conditions ,Computer applications to medicine. Medical informatics ,R858-859.7 ,Science (General) ,Q1-390 - Abstract
Miniaturisation and falling prices are among the main current trends in Global Navigation Satellite System (GNSS) receivers. Besides standalone receivers, receivers incorporated into Android devices can also provide raw GNSS measurements, enabling much wider options formerly restricted to devices of much higher price. The article describes two datasets. The first was collected using a Xiaomi Mi 8 smartphone with and without a simple ground plane. In the second, we compared a smartphone receiver (Google Pixel 5) with a standalone low-cost receiver (u-Blox ZED F9P). In both cases the datasets consist of multiple measurement sessions, including conditions where the reception of GNSS signals was obstructed by tree canopy. The datasets focus on repeatability (multiple measurements), the influence of external conditions (canopy and foliage state), and the devices used.
- Published
- 2024
- Full Text
- View/download PDF
14. Artificial Intelligence for SAR Focusing
- Author
-
Trematerra, Oreste, Morante, Quirino, Biancucci, Federica, Kacprzyk, Janusz, Series Editor, Ieracitano, Cosimo, editor, Mammone, Nadia, editor, Di Clemente, Marco, editor, Mahmud, Mufti, editor, Furfaro, Roberto, editor, and Morabito, Francesco Carlo, editor
- Published
- 2023
- Full Text
- View/download PDF
15. INTELLIGENT DATA ANALYSIS ON AN ANALYTICAL PLATFORM
- Author
-
Dauren Darkenbayev, Arshyn Altybay, Zhaidargul Darkenbayeva, and Nurbapa Mekebayev
- Subjects
raw data ,processing ,analytical platform ,technology ,analysis ,Environmental engineering ,TA170-171 ,Environmental sciences ,GE1-350 - Abstract
The article discusses methods for processing unstructured data using an analytical platform. The authors analyze existing data-processing methods and technologies and propose new approaches to the problem. The possibilities of using analytical platforms to process source data are considered. The purpose of the article is to explore data import, partial preprocessing, missing-data recovery, anomaly removal, spectral processing, and noise removal. The authors examine how analytics platforms can function without a data warehouse, obtaining information from other sources (although using the two together is most effective), and how big and unstructured data can be processed on such a platform. They solve a specific processing problem and propose solutions using an analytical platform. Particular attention is paid to a complete set of mechanisms that makes it possible to obtain information from any data source, carry out the entire processing cycle, and display the results. Overall, the paper represents a contribution to the development of raw data processing technologies. The authors plan to continue research in the field of processing big unstructured data.
- Published
- 2024
- Full Text
- View/download PDF
16. Unveiling the Feasibility of Coalbed Methane Production Adjustment in Area L through Native Data Reproduction Technology: A Study.
- Author
-
Chang, Qifan, Fan, Likun, Zheng, Lihui, Yang, Xumin, Fu, Yun, Kan, Zixuan, and Pan, Xiaoqing
- Subjects
- *
COALBED methane , *PRAGMATICS , *REPRODUCTIVE technology , *DATA scrubbing - Abstract
In the L Area, big data techniques are employed to manage the principal controlling factors of coalbed methane (CBM) production, thereby regulating single-well output. Nonetheless, conventional data cleansing and the use of arbitrary thresholds may result in an overemphasis on certain controlling factors, compromising the design and feasibility of optimization schemes. This study introduces a novel approach that leverages raw data without data cleaning and eschews artificial threshold setting for controlling factor identification. The methodology supplements previously overlooked controlling factors, proposing a more pragmatic CBM production adjustment scheme. In addition to the initial five controlling factors, this approach incorporates three additional ones, namely, dynamic fluid level state, drainage velocity, and fracturing displacement. This study presents a practical application case study of the proposed approach, demonstrating its ability to reduce reservoir damage during the coal fracturing process and enhance output through seal adjustments. Utilizing the full spectrum of original data and minimizing human intervention thresholds enriches the information available for model training, thereby facilitating the development of a more efficacious model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
17. Raw data and its implications in exegesis of Daniel 11,2b-12,3.
- Author
-
Gane, Roy E.
- Subjects
- *
DISCOURSE , *STANDARD language , *LITERARY interpretation , *POWER (Social sciences) , *SECTARIAN conflict , *ISLAM ,BIBLICAL hermeneutics - Abstract
Three kinds of raw data in Daniel 11,2b-12,3 carry crucial implications for interpretation. First, these verses comprise one discourse unit beginning with literal language. Therefore, the entire unit is basically literal. Accordingly, 11,2b-12,3 is the third angelic explanation (with elaboration) of the symbolic vision in 8,3-14, following explanations in 8,17; 19-26 and 9,24-27. Second, these three parallel explanations share intratextual terminological points of contact. So contexts of words in chapters 8 and 9 reappearing in chapter 11 illuminate similar contexts in chapter 11. Third, matching language of literary profiles in Daniel 11 to historical events requires accurate identification of raw historical data. Thus, verse 40 does not predict the "mortal wound" of the church of Rome inflicted by atheistic France in A.D. 1798. In verses 40-43, the religious-political church, the "king of the north," defeats the religious-political "king of the south": Islamic power. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. A Comprehensive Review of Conventional and Deep Learning Approaches for Ground-Penetrating Radar Detection of Raw Data.
- Author
-
Bai, Xu, Yang, Yu, Wei, Shouming, Chen, Guanyi, Li, Hongrui, Li, Yuhao, Tian, Haoxiang, Zhang, Tianxiang, and Cui, Haitao
- Subjects
DEEP learning ,MACHINE learning ,ELECTRONIC data processing ,GROUND penetrating radar ,IMAGE processing ,UNDERGROUND construction ,NONDESTRUCTIVE testing - Abstract
Ground-penetrating radar (GPR) is a nondestructive testing technology that is widely applied in infrastructure maintenance, archaeological research, military operations, and other geological studies. A crucial step in GPR data processing is the detection and classification of underground structures and buried objects, including reinforcement bars, landmines, pipelines, bedrock, and underground cavities. With the development of machine learning algorithms, traditional methods such as SVM, K-NN, ANN, and HMM, as well as deep learning algorithms, have gradually been incorporated into A-scan, B-scan, and C-scan GPR image processing. This paper provides a summary of the typical machine learning and deep learning algorithms employed in the field of GPR and categorizes them based on the feature extraction method or classifier used. Additionally, this work discusses the sources and forms of data utilized in these studies. Finally, potential future development directions are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
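Of the classical methods the review surveys (SVM, K-NN, ANN, HMM), K-NN is the simplest to illustrate on A-scan data, which are 1-D amplitude traces. A minimal sketch using Euclidean distance and majority vote; the synthetic traces, labels, and the bump feature below are illustrative assumptions:

```python
import math

def knn_predict(train, trace, k=3):
    """Classify a 1-D A-scan by majority vote of its k nearest
    training traces under Euclidean distance."""
    dists = sorted((math.dist(x, trace), label) for x, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Synthetic A-scans: a buried target adds a reflection bump mid-trace
background = [[0, 0, 1, 0, 0, 0], [0, 1, 0, 0, 1, 0], [1, 0, 0, 0, 0, 1]]
target     = [[0, 0, 5, 6, 5, 0], [0, 1, 6, 5, 4, 0], [0, 0, 4, 6, 6, 1]]
train = ([(t, "background") for t in background] +
         [(t, "target") for t in target])

label = knn_predict(train, [0, 0, 5, 5, 5, 0])
```

The deep-learning approaches the review covers replace this hand-picked distance on raw amplitudes with learned features over B-scan and C-scan images, but the classification framing is the same.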
19. Reference values for wrist-worn accelerometer physical activity metrics in England children and adolescents
- Author
-
Stuart J. Fairclough, Alex V. Rowlands, Borja del Pozo Cruz, Matteo Crotti, Lawrence Foweather, Lee E. F. Graves, Liezel Hurter, Owen Jones, Mhairi MacDonald, Deborah A. McCann, Caitlin Miller, Robert J. Noonan, Michael B. Owen, James R. Rudd, Sarah L. Taylor, Richard Tyler, and Lynne M. Boddy
- Subjects
ENMO ,MAD ,Youth ,Raw data ,Average acceleration ,Intensity gradient ,Nutritional diseases. Deficiency diseases ,RC620-627 ,Public aspects of medicine ,RA1-1270 - Abstract
Abstract Background Over the last decade use of raw acceleration metrics to assess physical activity has increased. Metrics such as Euclidean Norm Minus One (ENMO), and Mean Amplitude Deviation (MAD) can be used to generate metrics which describe physical activity volume (average acceleration), intensity distribution (intensity gradient), and intensity of the most active periods (MX metrics) of the day. Presently, relatively little comparative data for these metrics exists in youth. To address this need, this study presents age- and sex-specific reference percentile values in England youth and compares physical activity volume and intensity profiles by age and sex. Methods Wrist-worn accelerometer data from 10 studies involving youth aged 5 to 15 y were pooled. Weekday and weekend waking hours were first calculated for youth in school Years (Y) 1&2, Y4&5, Y6&7, and Y8&9 to determine waking hours durations by age-groups and day types. A valid waking hours day was defined as accelerometer wear for ≥ 600 min·d−1 and participants with ≥ 3 valid weekdays and ≥ 1 valid weekend day were included. Mean ENMO- and MAD-generated average acceleration, intensity gradient, and MX metrics were calculated and summarised as weighted week averages. Sex-specific smoothed percentile curves were generated for each metric using Generalized Additive Models for Location Scale and Shape. Linear mixed models examined age and sex differences. Results The analytical sample included 1250 participants. Physical activity peaked between ages 6.5–10.5 y, depending on metric. For all metrics the highest activity levels occurred in less active participants (3rd-50th percentile) and girls, 0.5 to 1.5 y earlier than more active peers, and boys, respectively. Irrespective of metric, boys were more active than girls (p
- Published
- 2023
- Full Text
- View/download PDF
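The two raw-acceleration metrics named above have simple definitions over the vector magnitude r = sqrt(x² + y² + z²) of each sample (in g): ENMO subtracts 1 g of gravity and clips negatives to zero before averaging, while MAD is the mean absolute deviation of r around its epoch mean. A sketch on a hypothetical 4-sample epoch:

```python
import math

def vector_magnitudes(samples):
    """Euclidean norm of each (x, y, z) raw acceleration sample, in g."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

def enmo(samples):
    """Euclidean Norm Minus One: subtract 1 g of gravity, clip negative
    values to 0, then average over the epoch."""
    r = vector_magnitudes(samples)
    return sum(max(m - 1.0, 0.0) for m in r) / len(r)

def mad(samples):
    """Mean Amplitude Deviation: mean absolute deviation of the vector
    magnitude around its epoch mean."""
    r = vector_magnitudes(samples)
    mean_r = sum(r) / len(r)
    return sum(abs(m - mean_r) for m in r) / len(r)

# A stationary epoch (pure gravity) versus an epoch with movement
still  = [(0.0, 0.0, 1.0)] * 4
moving = [(0.0, 0.0, 1.0), (0.6, 0.0, 0.8), (0.0, 0.8, 0.6), (0.0, 0.0, 1.4)]
```

Average acceleration is then the mean of such epoch values over the day, and the intensity gradient describes how time is distributed across epoch-intensity bins.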
20. Exploring the Viability of Bypassing the Image Signal Processor for CNN-Based Object Detection in Autonomous Vehicles
- Author
-
Jordan Cahill, Ashkan Parsi, Darragh Mullins, Jonathan Horgan, Enda Ward, Ciaran Eising, Patrick Denny, Brian Deegan, Martin Glavin, and Edward Jones
- Subjects
Object detection ,image signal processor ,autonomous vehicles ,neural networks ,raw data ,Bayer filter ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
In the field of autonomous driving, cameras are crucial sensors for providing information about a vehicle’s environment. Image quality refers to a camera system’s ability to capture, process, and display signals to form an image. Historically, “good quality” in this context refers to images that have been processed by an Image Signal Processor (ISP) designed with the goal of providing the optimal experience for human consumption. However, image quality perceived by humans may not always result in optimal conditions for computer vision. In the context of human consumption, image quality is well documented and understood. Image quality for computer vision applications, such as those in the autonomous vehicle industry, requires more research. Fully autonomous vehicles inevitably encounter constraints concerning data storage, transmission speed, and energy consumption. This is a result of enormous amounts of data being generated by the vehicle from suites made up of multiple different sensors. We propose a potential optimization along the computer vision pipeline, by completely bypassing the ISP block for a class of applications. We demonstrate that doing so has a negligible impact on the performance of Convolutional Neural Network (CNN) object detectors. The results also highlight the benefits of using raw pre-ISP data, in the context of computation and energy savings achieved by removing the ISP.
- Published
- 2023
- Full Text
- View/download PDF
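Bypassing the ISP in practice usually means feeding the CNN the Bayer mosaic directly, split into four half-resolution colour planes rather than a demosaiced RGB image. A stdlib sketch assuming an RGGB layout (the paper's exact packing is not given here):

```python
def pack_bayer_rggb(raw):
    """Split an RGGB Bayer mosaic (2-D list, even dimensions) into four
    half-resolution planes: R, G on red rows, G on blue rows, and B.
    Raw-data object detectors commonly take this 4-channel stack in
    place of an ISP-processed image."""
    h, w = len(raw), len(raw[0])
    planes = {"R": [], "G1": [], "G2": [], "B": []}
    for i in range(0, h, 2):
        planes["R"].append(raw[i][0:w:2])        # even row, even col
        planes["G1"].append(raw[i][1:w:2])       # even row, odd col
        planes["G2"].append(raw[i + 1][0:w:2])   # odd row, even col
        planes["B"].append(raw[i + 1][1:w:2])    # odd row, odd col
    return planes

mosaic = [[10, 20, 11, 21],
          [30, 40, 31, 41],
          [12, 22, 13, 23],
          [32, 42, 33, 43]]
planes = pack_bayer_rggb(mosaic)
```

Because this packing is a pure rearrangement, it costs almost nothing at runtime, which is where the computation and energy savings over a full ISP pipeline come from.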
21. Workload Aware Cost-Based Partial Loading of Raw Data for Limited Storage Resources
- Author
-
Patel, Mayank, Yadav, Nitish, Bhise, Minal, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Singh, Pradeep Kumar, editor, Wierzchoń, Sławomir T., editor, Chhabra, Jitender Kumar, editor, and Tanwar, Sudeep, editor
- Published
- 2022
- Full Text
- View/download PDF
22. Metric Analysis of Big Data of Payment System Based on Kolmogorov–Shannon Coding Methods
- Author
-
Kamenev, Ivan G., Andrianova, Daria A., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Yang, Xin-She, editor, Sherratt, Simon, editor, Dey, Nilanjan, editor, and Joshi, Amit, editor
- Published
- 2022
- Full Text
- View/download PDF
23. Smartphone-Based Gait Cadence to Identify Older Adults with Decreased Functional Capacity
- Author
-
Daniel S. Rubin, Sylvia L. Ranjeva, Jacek K. Urbanek, Marta Karas, Maria Lucia L. Madariaga, and Megan Huisingh-Scheetz
- Subjects
triaxial accelerometer ,gait ,mobile technology ,raw data ,wearable physical activity monitoring ,Biology (General) ,QH301-705.5 - Abstract
Background: Functional capacity assessment is a critical step in the preoperative evaluation to identify patients at increased risk of cardiac complications and disability after major noncardiac surgery. Smartphones offer the potential to objectively measure functional capacity but are limited by inaccuracy in patients with poor functional capacity. Open-source methods exist to analyze accelerometer data to estimate gait cadence (steps/min), which is directly associated with activity intensity. Here, we used an updated Step Test smartphone application with an open-source method to analyze accelerometer data to estimate gait cadence and functional capacity in older adults. Methods: We performed a prospective observational cohort study within the Frailty, Activity, Body Composition and Energy Expenditure in Aging study at the University of Chicago. Participants completed the Duke Activity Status Index (DASI) and performed an in-clinic 6-min walk test (6MWT) while using the Step Test application on a study smartphone. Gait cadence was measured from the raw accelerometer data using an adaptive empirical pattern transformation method, which has been previously validated. A 6MWT distance of 370 m was used as an objective threshold to identify patients at high risk. We performed multivariable logistic regression to predict walking distance using a priori explanatory variables. Results: Sixty patients were enrolled in the study. Thirty-seven patients completed the protocol and were included in the final data analysis. The median (IQR) age of the overall cohort was 71 (69–74) years, with a body mass index of 31 (27–32). There were no differences in any clinical characteristics or functional measures between participants that were able to walk 370 m during the 6MWT and those that could not walk that distance. Median (IQR) gait cadence for the entire cohort was 110 (102–114) steps/min during the 6MWT. 
Median (IQR) gait cadence was higher in participants that walked more than 370 m during the 6MWT (112 (108–118) versus 106 (96–114) steps/min; p = 0.0157). The final multivariable model to identify participants that could not walk 370 m included only median gait cadence. The Youden's index cut-point was 107 steps/min, with a sensitivity of 0.81 (95% CI: 0.77, 0.85), a specificity of 0.57 (95% CI: 0.55, 0.59) and an AUCROC of 0.69 (95% CI: 0.51, 0.87). Conclusions: Our pilot study demonstrates the feasibility of using gait cadence as a measure to estimate functional capacity. Our study was limited by a smaller-than-expected sample size due to COVID-19; thus, a prospective study with preoperative patients that measures outcomes is necessary to validate our findings.
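The cut-point selection described in this abstract can be illustrated with a minimal Python sketch of Youden's index maximization (J = sensitivity + specificity − 1). The data and function name below are synthetic and illustrative, not the study's code (the study's actual cut-point was 107 steps/min):

```python
def youden_cutpoint(cadences, outcomes):
    """Return (best_threshold, best_J) maximizing sensitivity + specificity - 1.

    outcomes[i] is True when participant i walked at least 370 m.
    A participant is predicted positive when cadence >= threshold.
    """
    best_t, best_j = None, -1.0
    for t in sorted(set(cadences)):
        tp = sum(1 for c, o in zip(cadences, outcomes) if c >= t and o)
        fn = sum(1 for c, o in zip(cadences, outcomes) if c < t and o)
        tn = sum(1 for c, o in zip(cadences, outcomes) if c < t and not o)
        fp = sum(1 for c, o in zip(cadences, outcomes) if c >= t and not o)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Synthetic, perfectly separable cohort: the optimal cut lands at 108 steps/min.
cadences = [96, 100, 104, 106, 108, 110, 112, 114, 116, 118]
outcomes = [False, False, False, False, True, True, True, True, True, True]
t, j = youden_cutpoint(cadences, outcomes)
```

On real data the threshold search is usually done over the ROC curve of a fitted model rather than raw values, but the J-maximization step is the same.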
- Published
- 2022
- Full Text
- View/download PDF
24. Reference values for wrist-worn accelerometer physical activity metrics in England children and adolescents.
- Author
-
Fairclough, Stuart J., Rowlands, Alex V., del Pozo Cruz, Borja, Crotti, Matteo, Foweather, Lawrence, Graves, Lee E. F., Hurter, Liezel, Jones, Owen, MacDonald, Mhairi, McCann, Deborah A., Miller, Caitlin, Noonan, Robert J., Owen, Michael B., Rudd, James R., Taylor, Sarah L., Tyler, Richard, and Boddy, Lynne M.
- Subjects
- *
REFERENCE values , *AGE distribution , *ACCELEROMETERS , *PHYSICAL activity , *ACCELEROMETRY , *SEX distribution , *EXERCISE intensity , *BODY movement , *RESEARCH funding , *WRIST - Abstract
Background: Over the last decade, use of raw acceleration metrics to assess physical activity has increased. Metrics such as Euclidean Norm Minus One (ENMO) and Mean Amplitude Deviation (MAD) can be used to describe physical activity volume (average acceleration), intensity distribution (intensity gradient), and intensity of the most active periods (MX metrics) of the day. Presently, relatively little comparative data for these metrics exist in youth. To address this need, this study presents age- and sex-specific reference percentile values in England youth and compares physical activity volume and intensity profiles by age and sex. Methods: Wrist-worn accelerometer data from 10 studies involving youth aged 5 to 15 y were pooled. Weekday and weekend waking hours were first calculated for youth in school Years (Y) 1&2, Y4&5, Y6&7, and Y8&9 to determine waking hours durations by age-group and day type. A valid waking hours day was defined as accelerometer wear for ≥ 600 min·d−1, and participants with ≥ 3 valid weekdays and ≥ 1 valid weekend day were included. Mean ENMO- and MAD-generated average acceleration, intensity gradient, and MX metrics were calculated and summarised as weighted week averages. Sex-specific smoothed percentile curves were generated for each metric using Generalized Additive Models for Location Scale and Shape. Linear mixed models examined age and sex differences. Results: The analytical sample included 1250 participants. Physical activity peaked between ages 6.5–10.5 y, depending on the metric. For all metrics, activity peaked 0.5 to 1.5 y earlier in less active participants (3rd–50th percentile) and in girls than in their more active peers and in boys, respectively. Irrespective of metric, boys were more active than girls (p <.001) and physical activity was lowest in the Y8&9 group, particularly when compared to the Y1&2 group (p <.001). Conclusions: Percentile reference values for average acceleration, intensity gradient, and MX metrics have utility in describing age- and sex-specific values for physical activity volume and intensity in youth. There is a need to generate nationally-representative wrist-acceleration population-referenced norms for these metrics to further facilitate health-related physical activity research and promotion. [ABSTRACT FROM AUTHOR]
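As a companion to the ENMO and MAD definitions referenced in this abstract, here is a minimal sketch of both metrics computed over one epoch of triaxial samples in g units; the function names and sample values are illustrative assumptions, not the study's implementation:

```python
import math

def enmo(samples):
    """ENMO: mean of max(|a| - 1 g, 0) over the epoch, acceleration in g units."""
    vals = [max(math.sqrt(x*x + y*y + z*z) - 1.0, 0.0) for x, y, z in samples]
    return sum(vals) / len(vals)

def mad(samples):
    """MAD: mean absolute deviation of the vector magnitude from its epoch mean."""
    mags = [math.sqrt(x*x + y*y + z*z) for x, y, z in samples]
    m = sum(mags) / len(mags)
    return sum(abs(v - m) for v in mags) / len(mags)

still = [(0.0, 0.0, 1.0)] * 4                    # device at rest: gravity only
moving = [(0.0, 0.0, 1.2), (0.0, 0.0, 0.8)] * 2  # alternating acceleration
```

Both metrics remove the static gravity component differently: ENMO subtracts 1 g from the magnitude, MAD subtracts the epoch-mean magnitude, which is why they yield different reference values for the same raw data.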
- Published
- 2023
- Full Text
- View/download PDF
25. Cyber Attacker Profiling for Risk Analysis Based on Machine Learning.
- Author
-
Kotenko, Igor, Fedorchenko, Elena, Novikova, Evgenia, and Jha, Ashish
- Subjects
- *
RISK assessment , *MACHINE learning , *TASK analysis , *CYBERTERRORISM , *DATA security - Abstract
The notion of the attacker profile is often used in risk analysis tasks such as cyber attack forecasting, security incident investigations and security decision support. The attacker profile is a set of attributes characterising an attacker and their behaviour. This paper analyzes the research in the area of attacker modelling and presents the analysis results as a classification of attacker models, attributes and risk analysis techniques that are used to construct the attacker models. The authors introduce a formal two-level attacker model that consists of high-level attributes calculated using low-level attributes that are in turn calculated on the basis of the raw security data. To specify the low-level attributes, the authors performed a series of experiments with datasets of attacks. Firstly, the requirements of the datasets for the experiments were specified in order to select the appropriate datasets, and, afterwards, the applicability of the attributes formed on the basis of such nominal parameters as bash commands and event logs to calculate high-level attributes was evaluated. The results allow us to conclude that attack team profiles can be differentiated using nominal parameters such as bash history logs. At the same time, accurate attacker profiling requires the extension of the low-level attributes list. [ABSTRACT FROM AUTHOR]
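The paper's finding that attack-team profiles can be differentiated using bash history logs can be illustrated with a hedged sketch (not the authors' model): build command-frequency vectors as low-level attributes and compare them by cosine similarity. Team names and command lists are invented:

```python
import math
from collections import Counter

def profile(commands):
    """Low-level attribute: L2-normalized command-frequency vector."""
    counts = Counter(commands)
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {cmd: c / norm for cmd, c in counts.items()}

def similarity(p, q):
    """Cosine similarity between two normalized profiles."""
    return sum(v * q.get(k, 0.0) for k, v in p.items())

team_a = profile(["nmap", "nmap", "hydra", "ssh"])      # scanning-heavy session
team_a2 = profile(["nmap", "hydra", "hydra", "ssh"])    # similar toolset
team_b = profile(["curl", "wget", "sqlmap", "sqlmap"])  # web-attack session
```

Sessions from the same team share a toolset and score high; disjoint toolsets score zero, which is the intuition behind differentiating teams from nominal parameters.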
- Published
- 2023
- Full Text
- View/download PDF
26. Input Shape Effect on Classification Performance of Raw EEG Motor Imagery Signals with Convolutional Neural Networks for Use in Brain–Computer Interfaces.
- Author
-
Arı, Emre and Taçgın, Ertuğrul
- Subjects
- *
CONVOLUTIONAL neural networks , *COMPUTER interfaces , *MOTOR imagery (Cognition) , *ARTIFICIAL neural networks , *ELECTROENCEPHALOGRAPHY - Abstract
EEG signals are interpreted, analyzed and classified by many researchers for use in brain–computer interfaces. Although there are many different EEG signal acquisition methods, one of the most interesting is motor imagery signals. Many different signal processing methods, machine learning and deep learning models have been developed for the classification of motor imagery signals. Among these, Convolutional Neural Network models generally achieve better results than other models. Because the size and shape of the data are important for training Convolutional Neural Network models and discovering the right relationships, researchers have designed and experimented with many different input shape structures. However, no study has been found in the literature evaluating the effect of different input shapes on model performance and accuracy. In this study, the effects of different input shapes on model performance and accuracy in the classification of EEG motor imagery signals were investigated. In addition, signal preprocessing methods, which take a long time before classification, were not used; rather, two CNN models were developed for training and classification using raw data. Two different datasets, BCI Competition IV 2A and 2B, were used in the classification processes. Across the different input shapes, classification accuracies of 53.03–89.29% and epoch times of 2–23 s were obtained for the 2A dataset, and classification accuracies of 64.84–84.94% and epoch times of 4–10 s were obtained for the 2B dataset. This study showed that the input shape has a significant effect on classification performance, and that when the correct input shape is selected and the correct CNN architecture is developed, feature extraction and classification can be done well by the CNN architecture without any signal preprocessing. [ABSTRACT FROM AUTHOR]
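The idea of feeding the same raw trials to a CNN under different input shapes can be sketched with plain NumPy reshapes; the dimensions below are loosely modeled on the BCI Competition IV 2a format (22 channels, 4 s at 250 Hz) and are assumptions, not the authors' exact configurations:

```python
import numpy as np

# Synthetic stand-in for raw motor imagery trials: (trials, channels, samples).
n_trials, n_channels, n_samples = 8, 22, 1000
raw = np.random.randn(n_trials, n_channels, n_samples).astype(np.float32)

# Shape A: one channels-x-time "image" per trial, a single input plane.
shape_a = raw.reshape(n_trials, 1, n_channels, n_samples)

# Shape B: each channel as its own input plane holding a 1-D time series.
shape_b = raw.reshape(n_trials, n_channels, 1, n_samples)

# Shape C: time axis split into 4 non-overlapping windows stacked per channel.
shape_c = raw.reshape(n_trials, n_channels, 4, n_samples // 4)
```

The underlying values are identical in all three tensors; only the spatial arrangement the convolution kernels see changes, which is what drives the accuracy and epoch-time differences the study reports.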
- Published
- 2023
- Full Text
- View/download PDF
27. UTMInDualSymFi: A Dual-Band Wi-Fi Dataset for Fingerprinting Positioning in Symmetric Indoor Environments.
- Author
-
Abdullah, Asim, Haris, Muhammad, Aziz, Omar Abdul, Rashid, Rozeha A., and Abdullah, Ahmad Shahidan
- Subjects
HUMAN fingerprints ,INDOOR positioning systems ,WIRELESS Internet ,MACHINE learning ,ACQUISITION of data - Abstract
Recent studies on indoor positioning using Wi-Fi fingerprinting are motivated by the ubiquity of Wi-Fi networks and their promising positioning accuracy. Machine learning algorithms are commonly leveraged in indoor positioning work. The performance of machine-learning-based solutions is dependent on the availability, volume, quality, and diversity of related data. Several public datasets have been published in order to foster advancements in Wi-Fi-based fingerprinting indoor positioning solutions. These datasets, however, lack dual-band Wi-Fi data within symmetric indoor environments. To fill this gap, this research work presents the UTMInDualSymFi dataset as a source of dual-band Wi-Fi data, acquired within multiple residential buildings with symmetric deployment of access points. UTMInDualSymFi comprises the recorded dual-band raw data, training and test datasets, radio maps and supporting metadata. Additionally, a statistical radio map construction algorithm is presented. Benchmark performance was evaluated by implementing a machine-learning-based positioning algorithm on the dataset. In general, higher accuracy was observed on the 5 GHz data scenarios. This systematically collected dataset enables the development and validation of future comprehensive solutions, inclusive of novel preprocessing, radio map construction, and positioning algorithms. Dataset: https://doi.org/10.5281/zenodo.7260097 Dataset License: Creative Commons Attribution 4.0 International [ABSTRACT FROM AUTHOR]
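A minimal sketch of the fingerprinting idea behind such datasets: match an observed RSSI vector against a radio map with k-nearest neighbours. The radio map values, coordinates, and function name are invented for illustration and are not drawn from UTMInDualSymFi:

```python
import math

# Radio map: reference-point coordinates -> mean RSSI (dBm) per access point.
radio_map = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-70, -42, -75],
    (0.0, 5.0): [-75, -72, -45],
}

def locate(observed, k=2):
    """Estimate (x, y) as the centroid of the k fingerprints closest in RSSI space."""
    ranked = sorted(radio_map.items(),
                    key=lambda item: math.dist(item[1], observed))
    nearest = ranked[:k]
    x = sum(p[0] for p, _ in nearest) / k
    y = sum(p[1] for p, _ in nearest) / k
    return x, y
```

A dual-band dataset simply doubles the fingerprint vector (2.4 GHz and 5 GHz readings per access point); the matching step is unchanged.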
- Published
- 2023
- Full Text
- View/download PDF
28. The ideal repository for hosting data from clinical trials: blueprint using business process management [version 2; peer review: 1 approved with reservations, 1 not approved]
- Author
-
Mirko Gabelica, Damir Sapunar, Matko Marušić, and Livia Puljak
- Subjects
Opinion Article ,Articles ,repository ,business process management ,clinical trials ,data sharing ,raw data ,individual patient data - Abstract
In this article, we suggest a blueprint for an ideal open-access repository for clinical trial data, with a description of a model of such a repository using a business process analysis approach. Firstly, we suggested which features an ideal repository should have. Secondly, we used business process management software to describe the whole process, from the decision to share clinical trial data to either publication of the data in a repository or discarding the data. The research community, legislators and society at large should be interested in a transparent open-access repository that will host clinical trial data. We hope this work can inspire relevant stakeholders to engage in discussion about the necessity of creating such a repository, and that we will witness the creation of such a repository in the near future.
- Published
- 2023
- Full Text
- View/download PDF
29. Predicting Bike Rental Based on Environment and Seasons
- Author
-
Gopi, Nettem, Reddy, Nare Thoshan Kumar, Sasikala, T., Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zhang, Junjie James, Series Editor, Sherpa, Karma Sonam, editor, Bhoi, Akash Kumar, editor, Kalam, Akhtar, editor, and Mishra, Manoj Kumar, editor
- Published
- 2021
- Full Text
- View/download PDF
30. Development of Unit Process Datasets
- Author
-
Zhang, Xiaojin, Wang, Hongtao, Treyer, Karin, Klöpffer, Walter, Series Editor, Curran, Mary Ann, Series Editor, Ciroth, Andreas, editor, and Arvidsson, Rickard, editor
- Published
- 2021
- Full Text
- View/download PDF
31. Interpretation of a Humphrey Single Visual Field Printout
- Author
-
Patyal, Sagarika, Gandhi, Monica, Patyal, Sagarika, editor, and Gandhi, Monica, editor
- Published
- 2021
- Full Text
- View/download PDF
32. Data Presentation in Qualitative Research: The Outcomes of the Pattern of Ideas with the Raw Data
- Author
-
Aisha Ibrahim Ningi
- Subjects
credibility of the data ,data presentation ,qualitative research ,raw data ,Social Sciences - Abstract
Data presentation is one of the segments of the methodology in any research project, depending on the approach; the methodology refers to the design and the theory that underpin the research. The paper contains a detailed explanation of the steps taken from the researcher's interactions with the raw data to how such data were presented and analyzed. The data processing used observations, interviews, and audio recordings to achieve a balanced presentation. The pattern of ideas in data presentation involved familiarization with the data, generating initial codes, searching for themes, defining themes, and producing the report. Thus, the paper gives the reader a clear view of how qualitative data are presented and discussed with credibility. It is recommended that a researcher rely on a wide range of data sources to produce an in-depth and holistic portrait of the participants' experiences.
- Published
- 2022
- Full Text
- View/download PDF
33. Validation of the Trackhealth Physical Activity Monitor
- Author
-
Ricardo Ansaloni de Oliveira, Jeffer Eidi Sasaki, Natália Lujan Ferraz, Álvaro Ribeiro Gomes de Oliveira, Pedro Henrique Ataides de Moraes, and Jair Sindra Virtuoso Júnior
- Subjects
Comparison ,Accelerometry ,Motion ,Raw data ,Sports ,GV557-1198.995 ,Medicine (General) ,R5-920 - Abstract
This is a quantitative methodological study for the validation of a research instrument. It aimed to validate data from the TrackHealth accelerometry device. The sample consisted of 30 adult individuals of both sexes, selected by convenience, who met the inclusion and exclusion criteria. The physical activity monitors used for the research protocol were the ActiGraph® wGT3X-BT triaxial accelerometer and the TrackHealth accelerometer (TH). The activity protocol consisted of four activities (walking at 4.8 and 6.4 km·h−1 and running at 9.7 and 12 km·h−1) performed in the laboratory on an Ibramed treadmill, lasting five minutes at each stage. A difference was found between the raw acceleration data of the two devices; however, the TrackHealth device showed higher sensitivity at speeds of 4.8 and 6.4 km/h and a high level of agreement (2.7–2.8%) for the magnitude vectors at the initial speeds. However, there is still a need for improvement in the functioning of the device before TrackHealth can be commercialized.
- Published
- 2023
- Full Text
- View/download PDF
34. Reducing Latency of Object Detection Systems Using Bayer Filters
- Author
-
Delli Santi, Angelo and Delli Santi, Angelo
- Abstract
Latency is a crucial aspect of object recognition systems: being fast enough to recognize objects allows these algorithms to be used in industrial, security, or robotics applications that require real-time detection. A typical example of a real-time application of object detection is self-driving cars, where algorithms need to recognize obstacles as quickly as possible to avoid collisions. This thesis investigates how to reduce the inference latency of neural-network-based object detection systems deployed on an embedded system. It focuses on measuring the latency of the various steps required to run an object detection algorithm on a security camera, exploring an approach to reduce the latency, and evaluating at what cost in accuracy these improvements in speed come. The main approach proposed in this thesis is to skip the Image Processing Pipeline (IPP), the process that transforms the raw data obtained from the sensor into a more common RGB image, and instead train the object detection system to work directly on the raw sensor data. Skipping the processing pipeline entirely spares that computational time and lets the network learn to deal directly with raw images. The final results are promising; however, the reduction in latency greatly depends on the hardware used to capture the images. A security RGB camera produced by Axis is used as an example for the latency evaluation, and the Coco [1] dataset to evaluate the accuracy.
- Published
- 2024
35. Visualization of Structural Dependencies Hidden in a Large Data Set
- Author
-
Hnatkowska, Bogumila, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Hernes, Marcin, editor, Wojtkiewicz, Krystian, editor, and Szczerbicki, Edward, editor
- Published
- 2020
- Full Text
- View/download PDF
36. Malicious Network Traffic Recognition Method Based on Deep Learning
- Author
-
Song, Yi, Sun, Xuebin, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zhang, Junjie James, Series Editor, Wang, Yue, editor, Fu, Meixia, editor, Xu, Lexi, editor, and Zou, Jiaqi, editor
- Published
- 2020
- Full Text
- View/download PDF
37. Raw Data Redundancy Elimination on Cloud Database
- Author
-
Mohapatra, Subhashree, Bajpai, Namita, Swarnkar, Tripti, Mishra, Manohar, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Das, Asit Kumar, editor, Nayak, Janmenjoy, editor, Naik, Bighnaraj, editor, Dutta, Soumi, editor, and Pelusi, Danilo, editor
- Published
- 2020
- Full Text
- View/download PDF
38. Accurate intensity integration in the twinned γ-form of o-nitroaniline
- Author
-
Martin Lutz and Loes Kroon-Batenburg
- Subjects
twinning ,scattering ,twin interface ,raw data ,Crystallography ,QD901-999 - Abstract
o-Nitroaniline, C6H6N2O3, is known to be polymorphic. The α-form is probably amorphous, while the β- and γ-forms are crystalline. Difficulties with the unit-cell determination of the γ-form were reported as a consequence of twinning. In this paper, newly recorded diffraction data of the γ-form of o-nitroaniline are described that were processed taking into account the two twin lattices. Data were partly deconvoluted and much better agreement was obtained in terms of R1 values and C—C bond precision. The availability of raw data and proper reprocessing using twin lattices is by far superior to efforts to de-twin processed structure factors.
- Published
- 2022
- Full Text
- View/download PDF
39. Unveiling the Feasibility of Coalbed Methane Production Adjustment in Area L through Native Data Reproduction Technology: A Study
- Author
-
Qifan Chang, Likun Fan, Lihui Zheng, Xumin Yang, Yun Fu, Zixuan Kan, and Xiaoqing Pan
- Subjects
coalbed recover ,yield optimization scheme ,raw data ,coalbed methane mining ,native data reproduction technology ,Technology - Abstract
In the L Area, big data techniques are employed to manage the principal controlling factors of coalbed methane (CBM) production, thereby regulating single-well output. Nonetheless, conventional data cleansing and the use of arbitrary thresholds may result in an overemphasis on certain controlling factors, compromising the design and feasibility of optimization schemes. This study introduces a novel approach that leverages raw data without data cleaning and eschews artificial threshold setting for controlling factor identification. The methodology supplements previously overlooked controlling factors, proposing a more pragmatic CBM production adjustment scheme. In addition to the initial five controlling factors, this approach incorporates three additional ones, namely, dynamic fluid level state, drainage velocity, and fracturing displacement. This study presents a practical application case study of the proposed approach, demonstrating its ability to reduce reservoir damage during the coal fracturing process and enhance output through seal adjustments. Utilizing the full spectrum of original data and minimizing human intervention thresholds enriches the information available for model training, thereby facilitating the development of a more efficacious model.
- Published
- 2023
- Full Text
- View/download PDF
40. A Comprehensive Review of Conventional and Deep Learning Approaches for Ground-Penetrating Radar Detection of Raw Data
- Author
-
Xu Bai, Yu Yang, Shouming Wei, Guanyi Chen, Hongrui Li, Yuhao Li, Haoxiang Tian, Tianxiang Zhang, and Haitao Cui
- Subjects
ground-penetrating radar ,detection ,classification ,machine learning ,deep learning ,raw data ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Biology (General) ,QH301-705.5 ,Physics ,QC1-999 ,Chemistry ,QD1-999 - Abstract
Ground-penetrating radar (GPR) is a nondestructive testing technology that is widely applied in infrastructure maintenance, archaeological research, military operations, and other geological studies. A crucial step in GPR data processing is the detection and classification of underground structures and buried objects, including reinforcement bars, landmines, pipelines, bedrock, and underground cavities. With the development of machine learning algorithms, traditional methods such as SVM, K-NN, ANN, and HMM, as well as deep learning algorithms, have gradually been incorporated into A-scan, B-scan, and C-scan GPR image processing. This paper provides a summary of the typical machine learning and deep learning algorithms employed in the field of GPR and categorizes them based on the feature extraction method or classifier used. Additionally, this work discusses the sources and forms of data utilized in these studies. Finally, potential future development directions are presented.
- Published
- 2023
- Full Text
- View/download PDF
41. A Flexible Data Evaluation System for Improving the Quality and Efficiency of Laboratory Analysis and Testing.
- Author
-
Tu, Yonghui, Tang, Haoye, Gong, Hua, and Hu, Wenyou
- Subjects
- *
LABORATORY management , *TESTING laboratories , *CHEMICAL laboratories , *ANALYTICAL chemistry , *SYSTEMS software , *SYSTEM integration , *DATA editing - Abstract
In a chemical analysis laboratory, most analytical devices produce raw data during sample detection, and this raw data must be processed into validated data reports through raw data filtering, editing, effectiveness evaluation, error correction, etc. This processing is usually carried out manually by analysts. When the sample detection volume is large, the data processing involved becomes time-consuming and laborious, and manual errors may be introduced. In addition, analytical laboratories typically use a variety of analytical devices with different measurement principles, leading to the use of heterogeneous control software systems from different vendors with different export data formats; these differing formats complicate laboratory automation. This paper proposes a modular data evaluation system that uses a globally unified management and maintenance mode and can automatically filter data, evaluate quality, generate valid reports, and distribute reports. The modular software design allows the proposed system to be applied to different analytical devices; its integration into existing laboratory information management systems (LIMS) could maximise automation and improve analysis and testing quality and efficiency in a chemical analysis laboratory, while meeting the analysis and testing requirements. [ABSTRACT FROM AUTHOR]
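One evaluation step of the kind such a modular system automates, filtering raw readings against quality-control limits and summarising the result, can be sketched as follows; the limits, field names, and function are hypothetical, not the paper's software:

```python
def evaluate(raw_values, lower, upper):
    """Split raw readings into accepted values and flagged outliers,
    returning a minimal validated-report record."""
    accepted = [v for v in raw_values if lower <= v <= upper]
    flagged = [v for v in raw_values if not (lower <= v <= upper)]
    return {
        "n_raw": len(raw_values),
        "n_accepted": len(accepted),
        "mean": sum(accepted) / len(accepted) if accepted else None,
        "flagged": flagged,
    }

# Five replicate readings, one gross outlier outside the 9.0-11.0 QC window.
report = evaluate([9.8, 10.1, 10.0, 42.0, 9.9], lower=9.0, upper=11.0)
```

In a full system each such step would be one pluggable module, with the report record handed on to quality evaluation, error correction, and distribution modules.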
- Published
- 2022
- Full Text
- View/download PDF
42. Risk prediction model and classification of various hazards in automobile industry using HAO based deep CNN.
- Author
-
Jaganathan, Anbarasu and Mathesan, Karthikeyan
- Abstract
The automobile industry and the manufacturing revolution expose automobile workers to extensive risks and hazards. Modern automobile plants are complicated because of various breakthroughs in technological innovation and, by the nature of their operations, involve hazards of assorted degrees at every stage. This paper aims to identify these hazards in order to evaluate and compare the rate of risk. The proposed framework comprises three levels: a data collection phase, a data processing phase and a risk prediction phase. In the data collection phase, data gathered from an automobile manufacturing plant located in the Tamil Nadu region are taken for investigation. In the data processing phase, raw data obtained from cameras and sensors are processed, and features are then extracted and selected for prediction purposes. In the risk prediction phase, the risk is predicted and classified into four levels, namely level 1 (extreme risk), level 2 (high risk), level 3 (medium risk) and level 4 (low risk), using a deep convolutional neural network-hybrid Aquila optimizer (DCNN-HAO) approach. To evaluate the proposed model, performance measures namely the area under the curve (AUC), the area under the receiver operating characteristic (ROC) curve, the false positive rate (FPR) and the true positive rate (TPR) are computed. The experimental investigations and graphical evaluation revealed that the proposed technique achieves better performance than other techniques, confirming the superiority and feasibility of the proposed DCNN-HAO approach, which accurately predicts the risk. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
43. Crystal structure of the second extracellular domain of human tetraspanin CD9: twinning and diffuse scattering
- Author
-
Viviana Neviani, Martin Lutz, Wout Oosterheert, Piet Gros, and Loes Kroon-Batenburg
- Subjects
twinning ,diffuse scattering ,tetraspanin cd9ec2 ,raw data ,Crystallography ,QD901-999 - Abstract
Remarkable features are reported in the diffraction pattern produced by a crystal of the second extracellular domain of tetraspanin CD9 (termed CD9EC2), whose structure has been described previously [Oosterheert et al. (2020), Life Sci. Alliance, 3, e202000883]. CD9EC2 crystallized in space group P1 and was twinned. Two types of diffuse streaks are observed. The stronger diffuse streaks are related to the twinning and occur in the direction perpendicular to the twinning interface. It is concluded that the twin domains scatter coherently, as both Bragg reflections and diffuse streaks are seen. The weaker streaks along c* are unrelated to the twinning but are caused by intermittent layers of molecules related by non-crystallographic symmetry. It is envisaged that the raw diffraction images could be very useful for methods developers trying to remove the diffuse scattering to extract accurate Bragg intensities, or using them to model the effect of packing disorder on the molecular structure.
- Published
- 2022
- Full Text
- View/download PDF
44. Data Availability of Open T-Cell Receptor Repertoire Data, a Systematic Assessment.
- Author
-
Yu-Ning Huang, Patel, Naresh Amrat, Mehta, Jay Himanshu, Ginjala, Srishti, Brodin, Petter, Gray, Clive M., Patel, Yesha M., Cowell, Lindsay G., Burkhardt, Amanda M., and Mangul, Serghei
- Subjects
- *
SECONDARY analysis , *T cell receptors , *REPRODUCIBLE research , *SCIENTIFIC community , *REPORT writing , *TECHNICAL reports , *T cells - Abstract
Modern data-driven research has the power to promote novel biomedical discoveries through secondary analyses of raw data. It is therefore important to ensure that data-driven research is reproducible and robust, so that immunogenomics data can be re-analysed precisely and accurately. Scientific research requires rigorous conduct in designing and performing experiments, and specifically in scientific writing and the reporting of results. It is also crucial to make raw data available, discoverable, and well described or annotated in order to promote future re-analysis of the data. To assess the data availability of published T cell receptor (TCR) repertoire data, we examined 11,918 TCR-Seq samples corresponding to 134 TCR-Seq studies published between 2006 and 2022. Among the 134 studies, only 38.1% shared raw TCR-Seq data in public repositories. We also found a statistically significant association between the presence of a data availability statement and increased raw data availability (p = 0.014). Yet 46.8% of studies with data availability statements failed to share the raw TCR-Seq data. There is a pressing need for the biomedical community to raise awareness of the importance of raw data availability in scientific research and to take immediate action to improve it, enabling cost-effective secondary analysis of existing immunogenomics data by the larger scientific community. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
45. neatStats: An R Package for a Neat Pipeline From Raw Data to Reportable Statistics in Psychological Science
- Author
-
Lukács, Gáspár
- Subjects
data processing ,raw data ,analysis ,statistics ,reporting ,Psychology ,BF1-990 - Abstract
Performing the entire transition from raw data to reportable statistics can pose difficulties: it takes time, it allows various mistakes (that may or may not go unnoticed), and there are no general guidelines on how to proceed with this task. One particularly useful tool for this transition is the R programming language. However, using R for this purpose is not trivial, especially for novices. The present paper serves as a step-by-step yet fast tutorial on how to take all the steps from raw data files to all the statistics normally needed in a conventional psychological experiment (including ANOVA and t-tests). At the same time, it also introduces the R package neatStats, which was created for the very purpose of making these steps as easy and straightforward as possible.
- Published
- 2021
- Full Text
- View/download PDF
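neatStats itself is an R package, so as a hedged, language-neutral illustration of the raw-data-to-reportable-statistics step the abstract describes, here is a minimal Python sketch of an independent-samples t statistic with pooled variance. The sample values are invented; this does not reproduce the package's API.

```python
# Minimal sketch of one "reportable statistic" from raw group data
# (Student's t for two independent samples, pooled variance).
import statistics

def independent_t(a, b):
    na, nb = len(a), len(b)
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    # Pooled variance across the two groups
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5

group_a = [510, 480, 495, 520]   # e.g. invented reaction times, condition A
group_b = [560, 545, 570, 530]   # condition B
print(f"t = {independent_t(group_a, group_b):.3f}")
```

In practice the appeal of a pipeline like neatStats is that reading the raw files, aggregating per subject, and producing such statistics in publication-ready form happens in one scripted, repeatable pass.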
46. Modeling of a Simple and Efficient Cascaded FPGA-Based Digital Band-Pass FIR Filter for Raw Ultrasound Data
- Author
-
Assef, Amauri Amorin, de Oliveira, Jonathan, Scherbaty, Lucas, Maia, Joaquim Miguel, Zimbico, Acácio, Ferreira, Breno Mendes, Costa, Eduardo Tavares, Magjarevic, Ratko, Series Editor, Ładyżyński, Piotr, Associate Editor, Ibrahim, Fatimah, Associate Editor, Lackovic, Igor, Associate Editor, Rock, Emilio Sacristan, Associate Editor, Costa-Felix, Rodrigo, editor, Machado, João Carlos, editor, and Alvarenga, André Victor, editor
- Published
- 2019
- Full Text
- View/download PDF
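This entry carries no abstract here, but as a generic illustration of the title's subject, a band-pass FIR filter can be sketched in software via a windowed-sinc design. The tap count and cutoff frequencies below are invented, and the paper's FPGA architecture is not reproduced.

```python
# Hedged sketch: windowed-sinc band-pass FIR taps plus direct-form filtering.
# Normalized cutoffs f_lo/f_hi are fractions of the sampling rate (0..0.5).
import math

def bandpass_fir(num_taps, f_lo, f_hi):
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        k = n - m / 2
        if k == 0:
            h = 2 * (f_hi - f_lo)  # limit of the sinc difference at k = 0
        else:
            h = (math.sin(2 * math.pi * f_hi * k)
                 - math.sin(2 * math.pi * f_lo * k)) / (math.pi * k)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)  # Hamming window
        taps.append(h * w)
    return taps

def fir_filter(taps, signal):
    # Direct-form convolution, zero-padded at the start
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, t in enumerate(taps):
            if i - j >= 0:
                acc += t * signal[i - j]
        out.append(acc)
    return out
```

Cascading two such sections, as the title suggests, amounts to feeding the output of the first filter into the second, which deepens the stopband attenuation at the cost of a longer overall impulse response.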
47. Research on Fault Diagnosis Method Based on RSAPSO-DBN
- Author
-
Yang, Jianjian, Wang, Xiaolin, Zhang, Qiang, Wang, Chao, Zhang, Zhihua, Liu, Yang, Gong, Dunwei, Wu, Miao, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Tan, Ying, editor, Shi, Yuhui, editor, and Niu, Ben, editor
- Published
- 2019
- Full Text
- View/download PDF
48. Progress in Biomedical Sciences and Raw Data: Ethical Dilemmas
- Author
-
Casacuberta, David, Tassani, Simone, Vallverdú, Jordi, editor, Puyol, Angel, editor, and Estany, Anna, editor
- Published
- 2019
- Full Text
- View/download PDF
49. Technical note: Partitioning of gated single photon emission computed tomography raw data for protocols optimization.
- Author
-
Queiroz, Cleiton Cavalcante, Machado, Marcos Antonio Dorea, Ximenes, Antonio Augusto Brito, Pino, Andre Gustavo Silva, and Netto, Eduardo Martins
- Subjects
PHOTON emission ,COLLIMATORS ,SINGLE-photon emission computed tomography - Abstract
Purpose: Methodologies for optimization of SPECT image acquisition can be challenging because of imaging throughput, physiological bias, and patient comfort constraints. We evaluated a vendor-independent method for simulating lower-count image acquisitions. Methods: We developed an algorithm that recombines the ECG-gated raw data into reduced-count acquisitions. We then tested the algorithm by simulating a reduction of counting statistics from a phantom SPECT image acquisition synchronized with an ECG simulator. The datasets were reconstructed with a resolution recovery algorithm and the summed stress score (SSS) was assessed by three readers (two experts and one automatic). Results: The algorithm generated varying count levels, simulating multiple examinations at the same time. The error between the expected and the simulated counts ranged from approximately 5% to 10% for the ungated simulations and was 0% for the gated simulations. Conclusions: The vendor-independent algorithm successfully generated datasets with lower counting statistics from single-gated SPECT raw data. This method can be readily implemented for SPECT optimization research aiming to lower the injected activity and/or to shorten the acquisition time. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
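The recombination idea the abstract describes can be sketched as follows, under the assumption that a reduced-count dataset is formed by summing only a subset of the ECG gate frames. The gate layout and counts below are invented, and the published algorithm's details are not reproduced.

```python
# Hedged sketch: sum a subset of gated frames to simulate a
# lower-count acquisition from the same raw data.

def recombine_gates(gated, keep):
    # gated: list of gate frames, each a list of per-bin photon counts
    # keep: indices of the gates to sum into the reduced-count dataset
    nbins = len(gated[0])
    return [sum(gated[g][b] for g in keep) for b in range(nbins)]

# 4 gates x 3 bins (made-up counts); keeping 2 of 4 gates halves the statistics
gated = [[10, 20, 30], [12, 18, 28], [9, 21, 31], [11, 19, 29]]
reduced = recombine_gates(gated, [0, 1])
print(reduced)  # [22, 38, 58]
```

Because the recombined frames come from the same physical acquisition, the simulated low-count datasets share the phantom geometry and physiology of the full scan, which is what makes the approach vendor-independent.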
50. CONTRIBUTIONS OF BIBLIOMETRICS TO THE TREATMENT OF NON-SYSTEMATIZED INSTITUTIONAL SCIENTIFIC PRODUCTION DATA: the case of the Instituto Federal de Educação, Ciência e Tecnologia de São Paulo (IFSP).
- Author
-
Galdino, Rosangela, Guimarães Garcia, Leonardo, and Morato do Amaral, Roniberto
- Subjects
- *
BIBLIOTHERAPY , *BIBLIOMETRICS , *LITERATURE - Abstract
According to the literature, one of the means that institutions can use to gain an in-depth understanding of their results, as well as of the causes that enhance (or limit) them, is bibliometrics and the construction of bibliometric research indicators. However, few studies illustrate, based on data from a real research institution, the benefits that bibliometrics can provide when the institution's scientific production data are not treated systematically. Accordingly, the objective of this work is to illustrate the potential of bibliometric indicators in the treatment of scientific production data from a research institution that does not carry out this treatment systematically. The research is classified as descriptive with an analytical approach, and bibliometrics was used as the technique for analyzing information. As an example, the research was applied to the Federal Institute of Education, Science and Technology of São Paulo. The results present the creation and interpretation, from the raw data, of several bibliometric indicators. It is concluded that the use of bibliometrics can convert untreated data into an understanding of the research carried out by the institution, as well as generate useful insights to expand its results. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF