750 results for "DATA binning"
Search Results
102. Parameter Estimation of Binned Hawkes Processes.
- Author
-
Shlomovich, Leigh, Cohen, Edward A. K., Adams, Niall, and Patel, Lekha
- Subjects
- *
PARAMETER estimation , *DATA binning , *POINT processes , *EXPECTATION-maximization algorithms , *COMPUTER networks , *STOCK exchanges - Abstract
A key difficulty that arises from real event data is imprecision in the recording of event time-stamps. In many cases, retaining event times with a high precision is expensive due to the sheer volume of activity. Combined with practical limits on the accuracy of measurements, binned data is common. In order to use point processes to model such event data, tools for handling parameter estimation are essential. Here we consider parameter estimation of the Hawkes process, a type of self-exciting point process that has found application in the modeling of financial stock markets, earthquakes and social media cascades. We develop a novel optimization approach to parameter estimation of binned Hawkes processes using a modified Expectation-Maximization algorithm, referred to as Binned Hawkes Expectation Maximization (BH-EM). Through a detailed simulation study, we demonstrate that existing methods are capable of producing severely biased and highly variable parameter estimates and that our novel BH-EM method significantly outperforms them in all studied circumstances. We further illustrate the performance on network flow (NetFlow) data between devices in a real large-scale computer network, to characterize triggering behavior. These results highlight the importance of correct handling of binned data. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
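The preprocessing step that entry 102 starts from, reducing precisely recorded event time-stamps to counts per bin, can be shown in a few lines. This is a minimal sketch of time binning with NumPy, not the BH-EM estimator itself; the 1-second bin width and the synthetic event times are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic event time-stamps on [0, 100) seconds (stand-in for, e.g., NetFlow events).
event_times = np.sort(rng.uniform(0.0, 100.0, size=500))

# Bin the events into fixed-width intervals, discarding sub-bin timing precision.
bin_width = 1.0  # seconds (assumed)
edges = np.arange(0.0, 100.0 + bin_width, bin_width)
counts, _ = np.histogram(event_times, bins=edges)

# A binned Hawkes analysis would start from (edges, counts) rather than event_times.
print(counts[:10], counts.sum())
```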
103. Tensor-based reconstruction applied to regularized time-lapse data.
- Author
-
Popa, Jonathan, Minkoff, Susan E, and Lou, Yifei
- Subjects
- *
DATA binning , *SINGULAR value decomposition , *DATA structures , *UNDERGROUND storage , *GEOPHONE , *CARTESIAN coordinates , *DATA recorders & recording - Abstract
Repeatedly recording seismic data over a period of months or years is one way to identify trapped oil and gas and to monitor CO2 injection in underground storage reservoirs and saline aquifers. This process of recording data over time and then differencing the images assumes the recording of the data over a particular subsurface region is repeatable. In other words, the hope is that one can recover changes in the Earth when the survey parameters are held fixed between data collection times. Unfortunately, perfect experimental repeatability almost never occurs. Acquisition inconsistencies such as changes in weather (currents, wind) for marine seismic data are inevitable, resulting in source and receiver location differences between surveys at the very least. Thus, data processing aimed at improving repeatability between baseline and monitor surveys is extremely useful. One such processing tool is regularization (or binning) that aligns multiple surveys with different source or receiver configurations onto a common grid. Data binned onto a regular grid can be stored in a high-dimensional data structure called a tensor with, for example, x and y receiver coordinates and time as indices of the tensor. Such a higher-order data structure describing a subsection of the Earth often exhibits redundancies which one can exploit to fill in gaps caused by sampling the surveys onto the common grid. In fact, since data gaps and noise increase the rank of the tensor, seeking to recover the original data by reducing the rank (low-rank tensor-based completion) successfully fills in gaps caused by binning. The tensor nuclear norm (TNN) is defined by the tensor singular value decomposition (tSVD) which generalizes the matrix SVD. In this work we complete missing time-lapse data caused by binning using the alternating direction method of multipliers (or ADMM) to minimize the TNN. For a synthetic experiment with three parabolic events in which the time-lapse difference involves an amplitude increase in one of these events between baseline and monitor data sets, the binning and reconstruction algorithm (TNN-ADMM) correctly recovers this time-lapse change. We also apply this workflow of binning and TNN-ADMM reconstruction to a real marine survey from offshore Western Australia in which the binning onto a regular grid results in significant data gaps. The data after reconstruction varies continuously without the large gaps caused by the binning process. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
104. Indications of the Preponderance (Tarjih) of Hadiths According to Imam Muhammad ibn al-Hasan al-Shaybani through His Book "Al-Hujjah 'ala Ahl al-Madinah".
- Author
-
حمد ظاهر ستوبل, باسم فيصل الجواب, and كوسوفو- بريشتين
- Subjects
HADITH ,NARRATORS ,SCHOLARS ,CONCRETE ,TERMS & phrases ,DATA binning ,DATA harmonization - Abstract
Copyright of IUG Journal of Islamic Studies is the property of Islamic University of Gaza and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2022
- Full Text
- View/download PDF
105. A Computationally Efficient Method for Estimating Multi‐Model Process Sensitivity Index.
- Author
-
Dai, Heng, Zhang, Fangqiang, Ye, Ming, Guadagnini, Alberto, Liu, Qi, Hu, Bill, and Yuan, Songhu
- Subjects
DATA binning ,GROUNDWATER flow ,PARAMETRIC modeling ,HYDROLOGIC models ,EMPIRICAL research - Abstract
Identification of important processes of a hydrologic system is critical for improving process-based hydrologic modeling. To identify important processes while jointly considering parametric and model uncertainty, Dai et al. (2017), https://doi.org/10.1002/2016WR019715, developed a multi-model process sensitivity index. Numerical evaluation of the index using a brute force Monte Carlo (MC) simulation is computationally expensive, because it requires a nested structure of parameter sampling and the number of model simulations is on the order of N² (N being the number of parameter samples). To reduce computational cost, we develop a new method (here denoted as quasi-MC for brevity) that uses triple sets of parameter samples (generated using a quasi-MC sequence) to remove the nested structure of parameter sampling in a theoretically rigorous way. The quasi-MC method reduces the number of model simulations from the order of N² to 2N. The performance of the method is assessed against the brute force MC approach and the recent binning method developed by Dai et al. (2017), https://doi.org/10.1002/2016WR019715, through two synthetic cases of groundwater flow and solute transport modeling. Due to its rigorous theoretical foundation, the quasi-MC method overcomes the limitations imposed by the inherently empirical nature of the binning method. We find that the quasi-MC method outperforms both the brute force MC and the binning method in terms of computational requirements and theoretical aspects, thus strengthening its potential for the assessment of process sensitivity indices subject to various sources of uncertainty. Key Points: (1) A new quasi-Monte Carlo (MC) method is developed for more efficiently estimating the multi-model process sensitivity index. (2) The new method reduces the number of MC model simulations for estimating the process sensitivity index from the order of N² to 2N. (3) This mathematically rigorous new method also outperforms the empirical binning method in terms of accuracy and convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
106. Metric learning for comparing genomic data with triplet network.
- Author
-
Ma, Zhi, Lu, Yang Young, Wang, Yiwen, Lin, Renhao, Yang, Zizi, Zhang, Fang, and Wang, Ying
- Subjects
- *
GENE expression profiling , *METAGENOMICS , *DATA binning - Abstract
Many biological applications are essentially pairwise comparison problems, such as evolutionary relationships on genomic sequences, contig binning on metagenomic data, cell type identification on gene expression profiles of single cells, etc. To make pairwise comparisons, it is necessary to adopt a suitable dissimilarity metric. However, not all metrics can be fully adapted to all possible biological applications. It is necessary to employ metric learning based on data adaptive to the application of interest. Therefore, in this study, we proposed MEtric Learning with Triplet network (MELT), which learns a nonlinear mapping from the original space to the embedding space in order to keep similar data closer and dissimilar data far apart. MELT is a weakly supervised and data-driven comparison framework that offers a more adaptive and accurate dissimilarity learned in the absence of label information when supervised methods are not applicable. We applied MELT in three typical applications of genomic data comparison, including hierarchical genomic sequences, longitudinal microbiome samples and longitudinal single-cell gene expression profiles, which have no distinctive grouping information. In the experiments, MELT demonstrated its empirical utility in comparison to many widely used dissimilarity metrics, and it is expected to accommodate a more extensive set of applications in large-scale genomic comparisons. MELT is available at https://github.com/Ying-Lab/MELT. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
107. The k-NN Classification of Histogram- and Trapezoid-Valued Data.
- Author
-
Al-Ma'shumah, Fathimah, Razmkhah, Mostafa, and Effati, Sohrab
- Subjects
DATA binning ,HISTOGRAMS ,CLASSIFICATION ,COMPUTATIONAL complexity ,NEAREST neighbor analysis (Statistics) - Abstract
A histogram-valued observation is a specific type of symbolic object that represents its value by a list of bins (intervals) along with their corresponding relative frequencies or probabilities. In the literature, the raw data in the bins of histogram-valued data have been assumed to be uniformly distributed. A new representation of such observations is proposed in this paper by assuming that the raw data in each bin are linearly distributed; these are called trapezoid-valued data. Moreover, new definitions of union and intersection between trapezoid-valued observations are given. This study proposes the k-nearest neighbour technique for classifying histogram-valued data using various dissimilarity measures. Further, the limiting behaviour of the computational complexity of the considered dissimilarity measures is compared. To study the effect of using a distance instead of a dissimilarity measure, the Wasserstein distance is also used and the accuracy of the classification is compared. Some simulations are carried out to study the performance of the proposed procedures, and the results are applied to three different real data sets. Finally, some conclusions are stated. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
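Entry 107 classifies histogram-valued observations with k-NN under a distribution distance. Below is a minimal sketch of that idea using the 1D Wasserstein distance, approximating each histogram by point masses at its bin midpoints; the shared bin edges, the toy two-class data, and the choice of k are assumptions, and this is not the authors' implementation (which also covers trapezoid-valued data).

```python
import numpy as np
from scipy.stats import wasserstein_distance

def hist_wasserstein(h1, h2, edges):
    """Wasserstein distance between two histograms that share bin edges,
    approximating each histogram by point masses at the bin midpoints."""
    mid = 0.5 * (edges[:-1] + edges[1:])
    return wasserstein_distance(mid, mid, u_weights=h1, v_weights=h2)

def knn_predict(query_hist, train_hists, train_labels, edges, k=3):
    """Majority vote among the k nearest histogram-valued observations."""
    d = np.array([hist_wasserstein(query_hist, h, edges) for h in train_hists])
    nearest = np.argsort(d)[:k]
    values, counts = np.unique(np.asarray(train_labels)[nearest], return_counts=True)
    return values[np.argmax(counts)]

# Toy data: two classes of histogram-valued observations on shared edges.
rng = np.random.default_rng(1)
edges = np.linspace(0, 10, 11)
class0 = [np.histogram(rng.normal(3, 1, 200), bins=edges)[0] for _ in range(20)]
class1 = [np.histogram(rng.normal(6, 1, 200), bins=edges)[0] for _ in range(20)]
train = class0 + class1
labels = [0] * 20 + [1] * 20
query = np.histogram(rng.normal(5.8, 1, 200), bins=edges)[0]
print(knn_predict(query, train, labels, edges, k=5))  # expected: 1
```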
108. Industrial Growth.
- Subjects
- *
PRICES , *STATE taxation , *BUSINESS revenue , *SALES statistics , *MANUFACTURING processes , *DATA binning - Abstract
According to an article in Beijing Review, the industrial sector in China experienced steady revenue growth in July. Tax data showed a 6% year-on-year increase in sales revenue for industrial companies nationwide, with the mining sector seeing a 9.7% growth and the manufacturing sector experiencing a 5.7% growth. The electricity generation and supply sector also saw a 5.6% increase in sales revenue. These positive trends were attributed to factors such as increased bulk commodity prices and China's peak season of power consumption. [Extracted from the article]
- Published
- 2024
109. HOW HEALTHY ARE YOUR LEFTOVERS?
- Author
-
WOOD, SAMANTHA
- Subjects
LEFTOVERS ,FOOD storage ,FOOD containers ,PERISHABLE foods ,GLASS containers ,DATA binning - Abstract
This article from Woman's Own discusses the importance of properly handling and storing leftovers to avoid sickness and allergic reactions. The rising cost of food has led many people to rely on batch cooking and leftovers to save money. However, as food ages, it can produce compounds that can make us unwell. It is recommended to only keep leftovers in the fridge for a maximum of three days to prevent the development of bad bacteria and potential allergic reactions. Additionally, proper storage and reheating techniques are important to avoid ingesting toxins. The article provides tips on when to discard leftovers, the impact of leftovers on hormones, the importance of keeping perishable food chilled, the use of appropriate storage containers, and the safe reheating of food. It also suggests freezing leftovers if they need to be kept for longer periods. Finally, the article offers tips for using the fridge effectively, such as maintaining the proper temperature, avoiding overpacking, and storing raw meat and fish separately. [Extracted from the article]
- Published
- 2024
110. Top jobs for... OCTOBER.
- Author
-
HAYES, RUTH
- Subjects
PINE cones ,DATA binning ,MYCOSES ,SPRING - Abstract
This article from Country Homes & Interiors provides a list of top jobs to do in October for garden maintenance. Suggestions include building a bug hotel using natural materials from the garden, forcing bulbs indoors for Christmas blooms, deadheading roses and removing black spot, harvesting winter squashes, planting tulip bulbs, cleaning garden tools, ordering vegetable seeds and fruit plants for next year, planting new bare root trees and shrubs, protecting patio pots from waterlogging, and growing winter salad indoors. The article also provides tips on sowing calendula, aquilegia, and poppy seeds. [Extracted from the article]
- Published
- 2024
111. Day-Ahead Load Forecasting Based on Conditional Linear Predictions with Smoothed Daily Profile
- Author
-
Park, Sunme, Park, Kanggu, Hwang, Euiseok, Chlamtac, Imrich, Series Editor, José, Rui, editor, Van Laerhoven, Kristof, editor, and Rodrigues, Helena, editor
- Published
- 2020
- Full Text
- View/download PDF
112. A Novel Variable Selection Method Based on Binning-Normalized Mutual Information for Multivariate Calibration
- Author
-
Liang Zhong, Ruiqi Huang, Lele Gao, Jianan Yue, Bing Zhao, Lei Nie, Lian Li, Aoli Wu, Kefan Zhang, Zhaoqing Meng, Guiyun Cao, Hui Zhang, and Hengchang Zang
- Subjects
variable selection ,near-infrared spectroscopy ,data binning ,normalized mutual information ,Organic chemistry ,QD241-441 - Abstract
Variable (wavelength) selection is essential in the multivariate analysis of near-infrared spectra to improve model performance and provide a more straightforward interpretation. This paper proposed a new variable selection method named binning-normalized mutual information (B-NMI) based on information entropy theory. “Data binning” was applied to reduce the effects of minor measurement errors and increase the features of near-infrared spectra. “Normalized mutual information” was employed to calculate the correlation between each wavelength and the reference values. The performance of B-NMI was evaluated by two experimental datasets (ideal ternary solvent mixture dataset, fluidized bed granulation dataset) and two public datasets (gasoline octane dataset, corn protein dataset). Compared with classic methods of backward and interval PLS (BIPLS), variable importance projection (VIP), correlation coefficient (CC), uninformative variables elimination (UVE), and competitive adaptive reweighted sampling (CARS), B-NMI not only selected the most featured wavelengths from the spectra of complex real-world samples but also improved the stability and robustness of variable selection results.
- Published
- 2023
- Full Text
- View/download PDF
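Entry 112 combines two simple ingredients: binning each wavelength's measurements and scoring the binned variable against the reference values with normalized mutual information. The sketch below is a rough illustration of those two steps only; the bin counts, normalization by mean entropy, and the toy spectra are assumptions, and this is not the published B-NMI code.

```python
import numpy as np

def normalized_mutual_information(x, y, bins=10):
    """NMI between two 1D arrays after equal-width binning (normalized by mean entropy)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return mi / (0.5 * (hx + hy))

# Toy "spectra": 200 samples x 50 wavelengths, where only wavelength 10 carries
# information about the reference value y.
rng = np.random.default_rng(2)
y = rng.uniform(0, 1, 200)
X = rng.normal(0, 1, (200, 50))
X[:, 10] = 2.0 * y + rng.normal(0, 0.1, 200)

scores = np.array([normalized_mutual_information(X[:, j], y, bins=8) for j in range(X.shape[1])])
print("top-ranked wavelength index:", int(np.argmax(scores)))  # expected: 10
```

Ranking wavelengths by this score and keeping the highest-scoring ones is the variable-selection step the abstract describes; the binning makes the mutual information estimate robust to minor measurement errors.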
113. socialh: An R package for determining the social hierarchy of animals using data from individual electronic bins.
- Author
-
Valente, Júlia de Paula Soares, Deniz, Matheus, de Sousa, Karolini Tenffen, Mercadante, Maria Eugênia Zerlotti, and Dias, Laila Talarico
- Subjects
- *
SOCIAL hierarchy in animals , *BEEF cattle , *DOMESTIC animals , *BEHAVIORAL assessment , *ANIMAL welfare , *DATA binning , *DRUNK driving - Abstract
Cattle have a complex social organization, with negative (agonistic) and positive (affiliative) interactions that affect access to environmental resources. Thus, social behaviour has a major impact on animal production, and it is an important factor in improving farm animal welfare. The use of data from electronic bins to determine social competition has already been validated; however, previous studies used non-free software or did not make the code available. With data from electronic bins it is possible to identify when one animal takes the place of another animal, i.e. a replacement occurs, at the feeders or drinkers. However, there has been no package for the R environment to detect competitive replacements from electronic bin data. Our general approach consisted of creating a user-friendly R package for social behaviour analysis. The workflow of the socialh package comprises several steps that can be used sequentially or separately, allowing data input from electronic systems or from observation of the animals. We provide an overview of all functions of the socialh package and demonstrate how this package can be applied using data from electronic feed bins of beef cattle. The socialh package provides support for researchers to determine the social hierarchy of gregarious animals through the synthesis of agonistic interactions (or replacements) in a friendly, versatile, and open-access system, thus contributing to scientific research. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
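socialh itself is an R package; the sketch below is only a language-neutral illustration (in Python, to keep one language across these notes) of the replacement rule the abstract describes: animal B "replaces" animal A at a feed bin when B enters shortly after A leaves. The 20-second gap threshold and the toy visit log are assumptions, not values from the package.

```python
from datetime import datetime, timedelta

# Toy electronic feed-bin log: (bin_id, animal_id, entry_time, exit_time).
fmt = "%H:%M:%S"
visits = [
    ("bin1", "A", "08:00:00", "08:05:00"),
    ("bin1", "B", "08:05:10", "08:09:00"),   # B enters 10 s after A leaves -> replacement
    ("bin1", "C", "08:20:00", "08:22:00"),   # long gap -> no replacement
    ("bin2", "A", "09:00:00", "09:01:00"),
    ("bin2", "C", "09:01:05", "09:02:00"),   # C replaces A
]
visits = [(b, a, datetime.strptime(t1, fmt), datetime.strptime(t2, fmt)) for b, a, t1, t2 in visits]

def replacements(visits, max_gap=timedelta(seconds=20)):
    """Detect (actor, reactor) replacement events per bin from ordered visits."""
    by_bin = {}
    for b, animal, entry, exit_ in sorted(visits, key=lambda v: (v[0], v[2])):
        by_bin.setdefault(b, []).append((animal, entry, exit_))
    events = []
    for b, rows in by_bin.items():
        for (a1, _, exit1), (a2, entry2, _) in zip(rows, rows[1:]):
            if a1 != a2 and timedelta(0) <= entry2 - exit1 <= max_gap:
                events.append((a2, a1))  # a2 (actor) replaced a1 (reactor)
    return events

print(replacements(visits))  # [('B', 'A'), ('C', 'A')]
```

The list of (actor, reactor) pairs is the input from which a dominance matrix and social hierarchy can then be computed, which is what the package automates.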
114. REDI for Binned Data: A Random Empirical Distribution Imputation Method for Estimating Continuous Incomes.
- Author
-
King, Molly M.
- Subjects
- *
DATA binning , *AMERICAN Community Survey , *INCOME distribution , *DEMOGRAPHIC surveys - Abstract
Researchers often need to work with categorical income data. The typical nonparametric (including midpoint) and parametric estimation methods used to estimate summary statistics both have advantages, but they carry assumptions that cause them to deviate in important ways from real-world income distributions. The method introduced here, random empirical distribution imputation (REDI), imputes discrete observations using binned income data, while also calculating summary statistics. REDI achieves this through random cold-deck imputation from a real-world reference data set (demonstrated here using the Current Population Survey Annual Social and Economic Supplement). This method can be used to reconcile bins between data sets or across years and handle top incomes. REDI has other advantages for computing values of an income distribution that is nonparametric, bin consistent, area and variance preserving, continuous, and computationally fast. The author provides proof of concept using two years of the American Community Survey. The method is available as the redi command for Stata. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
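Entry 114's core idea is random cold-deck imputation within bins: each respondent known only by an income bracket is assigned an observed income drawn at random from reference-survey incomes falling in the same bracket. The sketch below illustrates that step under assumed bracket edges and synthetic reference incomes; the published redi command for Stata additionally handles top incomes and bin reconciliation, which are not shown.

```python
import numpy as np

rng = np.random.default_rng(3)

# Binned target data: each respondent only has an income-bracket index.
edges = np.array([0, 10_000, 25_000, 50_000, 100_000, 250_000])   # assumed brackets
target_bins = rng.integers(0, len(edges) - 1, size=1_000)

# Reference data with continuous incomes (a stand-in for CPS ASEC microdata).
reference_income = rng.lognormal(mean=10.5, sigma=0.8, size=50_000)
reference_income = reference_income[reference_income < edges[-1]]
ref_bin = np.digitize(reference_income, edges) - 1

def redi_impute(target_bins, reference_income, ref_bin, rng):
    """Random cold-deck imputation: sample a donor income from the same bin."""
    imputed = np.empty(len(target_bins))
    for b in np.unique(target_bins):
        donors = reference_income[ref_bin == b]
        mask = target_bins == b
        imputed[mask] = rng.choice(donors, size=mask.sum(), replace=True)
    return imputed

incomes = redi_impute(target_bins, reference_income, ref_bin, rng)
print(round(float(np.median(incomes)), 1), bool(incomes.min() >= 0))
```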
115. Ground-based Ku-band microwave observations of ozone in the polar middle atmosphere.
- Author
-
Newnham, David A., Clilverd, Mark A., Clark, William D. J., Kosch, Michael, Verronen, Pekka T., and Rogers, Alan E. E.
- Subjects
- *
MIDDLE atmosphere , *OZONE , *ZENITH distance , *DATA binning , *MICROWAVES , *OZONE layer , *ATMOSPHERIC ozone - Abstract
Ground-based observations of 11.072 GHz atmospheric ozone (O3) emission have been made using the Ny-Ålesund Ozone in the Mesosphere Instrument (NAOMI) at the UK Arctic Research Station (latitude 78°55′0″ N, longitude 11°55′59″ E), Spitsbergen. Seasonally averaged O3 vertical profiles in the Arctic polar mesosphere–lower thermosphere region for night-time and twilight conditions in the period 15 August 2017 to 15 March 2020 have been retrieved over the altitude range 62–98 km. NAOMI measurements are compared with corresponding, overlapping observations by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) satellite instrument. The NAOMI and SABER version 2.0 data are binned according to the SABER instrument 60 d yaw cycles into nominal 3-month "winter" (15 December–15 March), "autumn" (15 August–15 November), and "summer" (15 April–15 July) periods. The NAOMI observations show the same year-to-year and seasonal variabilities as the SABER 9.6 µm O3 data. The winter night-time (solar zenith angle, SZA ≥ 110°) and twilight (75° ≤ SZA ≤ 110°) NAOMI and SABER 9.6 µm O3 volume mixing ratio (VMR) profiles agree to within the measurement uncertainties. However, for autumn twilight conditions the SABER 9.6 µm O3 secondary maximum VMR values are higher than NAOMI over altitudes 88–97 km by 47 % and 59 %, respectively in 2017 and 2018. Comparing the two SABER channels which measure O3 at different wavelengths and use different processing schemes, the 9.6 µm O3 autumn twilight VMR data for the three years 2017–2019 are higher than the corresponding 1.27 µm measurements, with the largest difference (58 %) in the 65–95 km altitude range, similar to the NAOMI observation. The SABER 9.6 µm O3 summer daytime (SZA < 75°) mesospheric O3 VMR is also consistently higher than the 1.27 µm measurement, confirming previously reported differences between the SABER 9.6 µm channel and measurements of mesospheric O3 by other satellite instruments. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
116. Programmed for Automatic Bone Disorder Clustering Based on Cumulative Calcium Prediction for Feature Extraction.
- Author
-
Ramkumar, S., Kumar, M. Rajeev, and Sasi, G.
- Subjects
DATA binning ,CALCIUM ,SUPPORT vector machines ,FEATURE extraction - Abstract
Background: The prediction of bone disorders varies between ortho-physicians. A precise bone disorder cataloguing system is proposed based on a new method for estimating the calcium value from a radiological image of the bone. Methods: A binning technique was employed that divides the input image into non-overlapping blocks to obtain an accurate calcium volume estimation. In this proposed approach, the input image undergoes a two-stage process. In stage 1, preprocessing is accomplished with median filtering, which eliminates unwanted noise and improves image quality. The processed image is then fed to Otsu-thresholding segmentation to highlight the affected regions of the bone image. The LBP (Local Binary Pattern) technique is used to extract the feature vector from the input image. The calcium value is estimated from the abnormal regions of the segmented bone image, and the calcium concentration is obtained with the help of the extracted texture features. An MSVM (Multi-class Support Vector Machine) is applied to categorize images as normal, osteoporosis, or osteopenia. In stage 2, the entire input is divided into 4 × 4 bins, and preprocessing, segmentation, feature extraction, and calcium estimation are applied to each bin separately as in stage 1; the calcium values of all bins are then added together. Results: The stage 1 and stage 2 calcium values are summed to obtain a more precise calcium estimate for the input image. The results show that the proposed binning technique is well suited to the bone disorder classification system, attaining a higher accuracy of 97.4% and sensitivity of 98.3% compared with the approach without binning. Conclusions: Validation was performed with bone images declared by a physician to be bone-disorder-affected images. The success rate of bone disorder prediction is 80%. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
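Stage 2 of entry 116 rests on a simple spatial binning: splitting the image into a 4 × 4 grid of non-overlapping blocks and estimating a value per block. The sketch below shows only that splitting and the stage-1/stage-2 combination, with a trivial placeholder (mean intensity) standing in for the segmentation, LBP and calcium estimation described in the abstract; the image and the placeholder statistic are assumptions.

```python
import numpy as np

def split_into_bins(image, grid=(4, 4)):
    """Split a 2D image into grid[0] x grid[1] non-overlapping blocks."""
    h, w = image.shape
    bh, bw = h // grid[0], w // grid[1]
    blocks = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            blocks.append(image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw])
    return blocks

rng = np.random.default_rng(4)
radiograph = rng.integers(0, 256, size=(256, 256)).astype(float)  # stand-in image

# Placeholder per-block estimate; the paper's pipeline would segment each block,
# extract LBP texture features and estimate calcium from the abnormal regions.
per_block = [block.mean() for block in split_into_bins(radiograph)]
stage1 = radiograph.mean()          # whole-image ("stage 1") estimate
stage2 = sum(per_block)             # per-bin estimates added together ("stage 2")
combined = stage1 + stage2          # the two stages summed, as in the abstract
print(len(per_block), round(combined, 2))
```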
117. OceanSODA-MDB: a standardised surface ocean carbonate system dataset for model-data intercomparisons.
- Author
-
Land, Peter E., Findlay, Helen S., Shutler, Jamie D., Piolle, Jean-Francois, Sims, Richard, Green, Hannah, Kitidis, Vassilis, Polukhin, Alexander, and Pipko, Irina I.
- Subjects
- *
SURFACE of the earth , *CARBONATES , *PARTIAL pressure , *DATA binning , *DATA distribution - Abstract
In recent years, large datasets of in situ marine carbonate system parameters (partial pressure of CO2 (pCO2), total alkalinity, dissolved inorganic carbon and pH) have been collated, quality controlled and made publicly available. These carbonate system datasets have highly variable data density in both space and time, especially in the case of pCO2, which is routinely measured at high frequency using underway measuring systems. This variation in data density can create biases when the data are used, for example for algorithm assessment, favouring datasets or regions with high data density. A common way to overcome data density issues is to bin the data into cells of equal latitude and longitude extent. This leads to bins with spatial areas that are latitude and projection dependent (e.g. they become smaller and more elongated as the poles are approached). Additionally, as bin boundaries are defined without reference to the spatial distribution of the data or to geographical features, data clusters may be divided sub-optimally (e.g. a bin covering a region with a strong gradient). To overcome these problems and to provide a tool for matching surface in situ data with satellite, model and climatological data, which often have very different spatiotemporal scales both from the in situ data and from each other, a methodology has been created to group in situ data into 'regions of interest': spatiotemporal cylinders consisting of circles on the Earth's surface extending over a period of time. These regions of interest are optimally adjusted to contain as many in situ measurements as possible. All surface in situ measurements of the same parameter contained in a region of interest are collated, including estimated uncertainties and regional summary statistics. The same grouping is applied to each of the non-in situ datasets in turn, producing a dataset of coincident matchups that are consistent in space and time. About 35 million in situ data points were matched with data from five satellite sources and five model and re-analysis datasets to produce a global matchup dataset of carbonate system data, consisting of ~286,000 regions of interest spanning 54 years from 1957 to 2020. Each region of interest is 100 km in diameter and 10 days in duration. An example application, the reparameterisation of a global total alkalinity algorithm, is shown. This matchup dataset can be updated as and when in situ and other datasets are updated, and similar datasets at finer spatiotemporal scale can be constructed, for example to enable regional studies. The matchup dataset provides users with a large multi-parameter carbonate system dataset containing data from different sources, in one consistent, collated and standardised format suitable for model-data intercomparisons and model evaluations. The OceanSODA-MDB data can be downloaded from https://doi.org/10.12770/0dc16d62-05f6-4bbe-9dc4-6d47825a5931 (Land and Piollé, 2022). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
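The "region of interest" in entry 117 is a spatiotemporal cylinder, 100 km in diameter and 10 days long. A minimal sketch of collecting the in situ measurements that fall inside one such cylinder, using a haversine great-circle distance, is shown below; the centre point, start day and toy pCO2 records are assumptions, and the real OceanSODA-MDB pipeline additionally optimises where the regions are placed.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def in_region(lat, lon, day, centre_lat, centre_lon, start_day,
              diameter_km=100.0, duration_days=10.0):
    """True where a measurement falls inside the spatiotemporal cylinder."""
    spatial = haversine_km(lat, lon, centre_lat, centre_lon) <= diameter_km / 2
    temporal = (day >= start_day) & (day < start_day + duration_days)
    return spatial & temporal

# Toy underway pCO2 records: (lat, lon, decimal day of year, pCO2 in microatm).
rng = np.random.default_rng(5)
lat = rng.uniform(49, 51, 1000)
lon = rng.uniform(-5, -3, 1000)
day = rng.uniform(0, 60, 1000)
pco2 = rng.normal(400, 15, 1000)

mask = in_region(lat, lon, day, centre_lat=50.0, centre_lon=-4.0, start_day=20.0)
print(int(mask.sum()), round(float(pco2[mask].mean()), 1), round(float(pco2[mask].std(ddof=1)), 1))
```

The per-region mean and spread computed here are the kind of regional summary statistics that the matchup dataset stores alongside the collated measurements.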
118. COMAP Early Science. III. CO Data Processing.
- Author
-
Foss, Marie K., Ihle, HĂĄvard T., Borowska, Jowita, Cleary, Kieran A., Eriksen, Hans Kristian, Harper, Stuart E., Kim, Junhan, Lamb, James W., Lunde, Jonas G. S., Philip, Liju, Rasmussen, Maren, Stutzer, Nils-Ole, Uzgil, Bade D., Watts, Duncan J., Wehus, Ingunn K., Woody, David P., Bond, J. Richard, Breysse, Patrick C., Catha, Morgan, and Church, Sarah E.
- Subjects
- *
ELECTRONIC data processing , *STAR maps (Astronomy) , *WHITE noise , *POWER spectra , *DATA binning , *CARTOGRAPHY - Abstract
We describe the first-season CO Mapping Array Project (COMAP) analysis pipeline that converts raw detector readouts to calibrated sky maps. This pipeline implements four main steps: gain calibration, filtering, data selection, and mapmaking. Absolute gain calibration relies on a combination of instrumental and astrophysical sources, while relative gain calibration exploits real-time total-power variations. High-efficiency filtering is achieved through spectroscopic common-mode rejection within and across receivers, resulting in nearly uncorrelated white noise within single-frequency channels. Consequently, near-optimal but biased maps are produced by binning the filtered time stream into pixelized maps; the corresponding signal bias transfer function is estimated through simulations. Data selection is performed automatically through a series of goodness-of-fit statistics, including χ² and multiscale correlation tests. Applying this pipeline to the first-season COMAP data, we produce a data set with very low levels of correlated noise. We find that one of our two scanning strategies (the Lissajous type) is sensitive to residual instrumental systematics. As a result, we no longer use this type of scan and exclude data taken this way from our Season 1 power spectrum estimates. We perform a careful analysis of our data processing and observing efficiencies and take account of planned improvements to estimate our future performance. Power spectrum results derived from the first-season COMAP maps are presented and discussed in companion papers. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
119. Locality-Aware Crowd Counting.
- Author
-
Zhou, Joey Tianyi, Zhang, Le, Du, Jiawei, Peng, Xi, Fang, Zhiwen, Xiao, Zhe, and Zhu, Hongyuan
- Subjects
- *
DATA augmentation , *DATA binning , *DATA distribution , *CROWDS , *COUNTING - Abstract
Imbalanced data distribution in crowd counting datasets leads to severe under-estimation and over-estimation problems, which has been less investigated in existing works. In this paper, we tackle this challenging problem by proposing a simple but effective locality-based learning paradigm to produce generalizable features by alleviating sample bias. Our proposed method is locality-aware in two aspects. First, we introduce a locality-aware data partition (LADP) approach to group the training data into different bins via locality-sensitive hashing. As a result, a more balanced data batch is then constructed by LADP. To further reduce the training bias and enhance the collaboration with LADP, a new data augmentation method called locality-aware data augmentation (LADA) is proposed where the image patches are adaptively augmented based on the loss. The proposed method is independent of the backbone network architectures, and thus could be smoothly integrated with most existing deep crowd counting approaches in an end-to-end paradigm to boost their performance. We also demonstrate the versatility of the proposed method by applying it for adversarial defense. Extensive experiments verify the superiority of the proposed method over the state of the art. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
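The LADP step in entry 119 groups training samples into bins via locality-sensitive hashing and then draws batches evenly across bins. The sketch below is a rough, generic illustration of that pattern with sign-random-projection hashing on plain feature vectors; the feature dimension, number of hash bits and batch policy are assumptions, and the learned crowd-counting features and the LADA augmentation of the paper are not shown.

```python
import numpy as np

rng = np.random.default_rng(6)

def lsh_bins(features, n_bits=4, seed=0):
    """Assign each sample to a bucket via random-hyperplane (sign) hashing."""
    g = np.random.default_rng(seed)
    planes = g.normal(size=(features.shape[1], n_bits))
    bits = (features @ planes > 0).astype(int)
    return bits @ (1 << np.arange(n_bits))          # integer bucket id per sample

def balanced_batch(bucket_ids, batch_size, rng):
    """Draw a batch with (roughly) equal representation from every non-empty bucket."""
    buckets = [np.flatnonzero(bucket_ids == b) for b in np.unique(bucket_ids)]
    per_bucket = max(1, batch_size // len(buckets))
    picked = [rng.choice(idx, size=min(per_bucket, len(idx)), replace=False) for idx in buckets]
    return np.concatenate(picked)[:batch_size]

features = rng.normal(size=(500, 32))               # stand-in sample features
bucket_ids = lsh_bins(features, n_bits=3)
batch = balanced_batch(bucket_ids, batch_size=32, rng=rng)
print(np.unique(bucket_ids[batch], return_counts=True))
```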
120. Evaluating Ecohydrological Model Sensitivity to Input Variability with an Information-Theory-Based Approach.
- Author
-
Farahani, Mozhgan A., Vahid, Alireza, and Goodwell, Allison E.
- Subjects
- *
RATE distortion theory , *EDDY flux , *HEAT flux , *VAPOR pressure , *WIND speed , *DATA binning - Abstract
Ecohydrological models vary in their sensitivity to forcing data and use available information to different extents. We focus on the impact of forcing precision on ecohydrological model behavior particularly by quantizing, or binning, time-series forcing variables. We use rate-distortion theory to quantize time-series forcing variables to different precisions. We evaluate the effect of different combinations of quantized shortwave radiation, air temperature, vapor pressure deficit, and wind speed on simulated heat and carbon fluxes for a multi-layer canopy model, which is forced and validated with eddy covariance flux tower observation data. We find that the model is more sensitive to radiation than meteorological forcing input, but model responses also vary with seasonal conditions and different combinations of quantized inputs. While any level of quantization impacts carbon flux similarly, specific levels of quantization influence heat fluxes to different degrees. This study introduces a method to optimally simplify forcing time series, often without significantly decreasing model performance, and could be applied within a sensitivity analysis framework to better understand how models use available information. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
121. Dynamics retrieval from stochastically weighted incomplete data by low-pass spectral analysis.
- Author
-
Casadei, Cecilia M., Hosseinizadeh, Ahmad, Schertler, Gebhard F. X., Ourmazd, Abbas, and Santra, Robin
- Subjects
DATA binning ,SET functions ,DATA analysis ,CRYSTALLOGRAPHY - Abstract
Time-resolved serial femtosecond crystallography (TR-SFX) provides access to protein dynamics on sub-picosecond timescales, and with atomic resolution. Due to the nature of the experiment, these datasets are often highly incomplete and the measured diffracted intensities are affected by partiality. To tackle these issues, one established procedure is that of splitting the data into time bins, and averaging the multiple measurements of equivalent reflections within each bin. This binning and averaging often involve a loss of information. Here, we propose an alternative approach, which we call low-pass spectral analysis (LPSA). In this method, the data are projected onto the subspace defined by a set of trigonometric functions, with frequencies up to a certain cutoff. This approach attenuates undesirable high-frequency features and facilitates retrieving the underlying dynamics. A time-lagged embedding step can be included prior to subspace projection to improve the stability of the results with respect to the parameters involved. Subsequent modal decomposition allows a low-rank description of the system's evolution to be produced. Using a synthetic time-evolving model with incomplete and partial observations, we analyze the LPSA results in terms of the quality of the retrieved signal, as a function of the parameters involved. We compare the performance of LPSA to that of a range of other sophisticated data analysis techniques. We show that LPSA achieves excellent dynamics reconstruction at modest computational cost. Finally, we demonstrate the superiority of dynamics retrieval by LPSA compared to time binning and merging, which is, to date, the most commonly used method to extract dynamical information from TR-SFX data. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
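The projection at the heart of entry 121 is a low-pass fit: the observed series is projected onto a truncated trigonometric basis (a constant plus sines and cosines up to a cutoff frequency). Below is a minimal sketch of that step on a synthetic, stochastically weighted signal; the cutoff, the signal model and the plain least-squares fit are assumptions, and the time-lagged embedding and modal decomposition of LPSA are not shown.

```python
import numpy as np

def trig_basis(t, period, n_freq):
    """Design matrix [1, cos(k w t), sin(k w t)] for k = 1..n_freq."""
    w = 2 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_freq + 1):
        cols.append(np.cos(k * w * t))
        cols.append(np.sin(k * w * t))
    return np.column_stack(cols)

rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0, 100, 400))                       # irregular time points
signal = np.sin(2 * np.pi * t / 50) + 0.3 * np.cos(2 * np.pi * t / 20)
weights = rng.uniform(0.2, 1.0, t.size)                      # stochastic weighting / partiality
observed = weights * signal + rng.normal(0, 0.2, t.size)     # incomplete, noisy observations

# Project onto the low-frequency subspace (cutoff chosen here as 6 harmonics).
B = trig_basis(t, period=100.0, n_freq=6)
coeffs, *_ = np.linalg.lstsq(B, observed, rcond=None)
low_pass = B @ coeffs
print(coeffs.shape, round(float(np.std(observed - low_pass)), 3))
```

Because the basis contains no frequencies above the cutoff, the reconstruction is smooth by construction, which is how the method attenuates high-frequency artefacts without averaging within time bins.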
122. Dynamic pulmonary MRI using motion‐state weighted motion‐compensation (MostMoCo) reconstruction with ultrashort TE: A structural and functional study.
- Author
-
Ding, Zekang, Cheng, Zenghui, She, Huajun, Liu, Bei, Yin, Yongfang, and Du, Yiping P.
- Subjects
MAGNETIC resonance imaging ,DATA binning ,VENTILATION ,LUNGS - Abstract
Purpose: To improve the quality of structural images and the quantification of ventilation in free‐breathing dynamic pulmonary MRI. Methods: A 3D radial ultrashort TE (UTE) sequence with superior–inferior navigators was used to acquire pulmonary data during free breathing. All acquired data were binned into different motion states according to the respiratory signal extracted from superior–inferior navigators. Motion‐resolved images were reconstructed using eXtra‐Dimensional (XD) UTE reconstruction. The initial motion fields were generated by registering images at each motion state to other motion states in motion‐resolved images. A motion‐state weighted motion‐compensation (MostMoCo) reconstruction algorithm was proposed to reconstruct the dynamic UTE images. This technique, termed as MostMoCo‐UTE, was compared with XD‐UTE and iterative motion‐compensation (iMoCo) on a porcine lung and 10 subjects. Results: MostMoCo reconstruction provides higher peak SNR (37.0 vs. 35.4 and 34.2) and structural similarity (0.964 vs. 0.931 and 0.947) compared to XD‐UTE and iMoCo in the porcine lung experiment. Higher apparent SNR and contrast‐to‐noise ratio are achieved using MostMoCo in the human experiment. MostMoCo reconstruction better preserves the temporal variations of signal intensity of parenchyma compared to iMoCo, shows reduced random noise and improved sharpness of anatomical structures compared to XD‐UTE. In the porcine lung experiment, the quantification of ventilation using MostMoCo images is more accurate than that using XD‐UTE and iMoCo images. Conclusion: The proposed MostMoCo‐UTE provides improved quality of structural images and quantification of ventilation for free‐breathing pulmonary MRI. It has the potential for the detection of structural and functional disorders of the lung in clinical settings. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
123. Knowledge of Time-bin Data Selection using Gini Index based Type Classification in GitHub.
- Author
-
Masuda, Ayako and Matsuodani, Tohru
- Subjects
LORENZ curve ,CLASSIFICATION ,DATA binning ,TIME series analysis ,VALUE engineering - Abstract
The number of references to knowledge is considered one of the indicators for evaluating the usefulness of knowledge in open science. However, the value of knowledge is diverse and cannot be evaluated only by the number of references. We seek to determine indices of how engineers learn and grow from knowledge on GitHub, based on knowledge references and their impact as characteristics of usefulness. Dynamic analysis requires the conversion of a variety of recorded data into time-series data and the selection of an appropriate method of time-series analysis. When selecting a method of time-series analysis, data classification is necessary. In this study, we focused on differences in the type of time series to capture characteristics of the usefulness of knowledge. We attempted to classify data by focusing on the bias of bin values at the stage of converting time-stamped recorded data to time bins. The data classification was based on the Lorenz curve, which is a measure of distortion from a uniform distribution, and used the Gini index and a concentration ratio that we defined. The concentration ratio is an index of the partial features of the Lorenz curve. The results of applying this method confirmed that the usefulness of knowledge falls into distinct types. In this study, we present classification methods for these types as knowledge. The value sought by engineers appears to depend on the respective type of usefulness of the knowledge. The results of this study should serve as a basis for future analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
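Entry 123's binning-then-classification step can be illustrated compactly: convert time-stamped reference events into counts per time bin, then measure how concentrated those counts are with a Lorenz-curve-based Gini index. The weekly bin width, the toy event streams and this particular Gini formula are assumptions; the authors' additional concentration ratio is not reproduced here.

```python
import numpy as np

def gini(values):
    """Gini index of non-negative bin values via the Lorenz curve (0 = uniform)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    lorenz = np.cumsum(v) / v.sum()              # Lorenz curve evaluated at i/n
    # Twice the area between the equality line and the Lorenz curve.
    return 1.0 - 2.0 * np.sum(lorenz) / n + 1.0 / n

rng = np.random.default_rng(8)
# Time stamps (days) of references to a piece of knowledge over one year.
steady = rng.uniform(0, 365, 200)                # referenced evenly all year
bursty = rng.normal(30, 5, 200).clip(0, 365)     # referenced in one early burst

edges = np.arange(0, 365 + 7, 7)                 # weekly time bins (assumed width)
for name, stamps in [("steady", steady), ("bursty", bursty)]:
    counts, _ = np.histogram(stamps, bins=edges)
    print(name, round(gini(counts), 3))          # bursty usage gives a higher Gini index
```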
124. Binned scatterplots with marginal histograms: binscatterhist.
- Author
-
Pinna, Matteo
- Subjects
- *
SCATTER diagrams , *HISTOGRAMS , *STATISTICAL software , *DATA binning , *SAMPLE size (Statistics) - Abstract
I introduce binscatterhist, a command that extends the functionality of the popular binscatter command (Stepner, 2013, Statistical Software Components S457709, Department of Economics, Boston College). binscatter allows researchers to summarize the relationship between two variables in an informative and versatile way by collapsing scattered points into bins. However, information about the variables' frequencies gets lost in the process. binscatterhist solves this issue by allowing the user to further enrich the graphs by plotting the variables' underlying distribution. The binscatterhist command includes options for different regression methods, including reghdfe (Correia, 2014, Statistical Software Components S457874, Department of Economics, Boston College) and areg, and robust and clustered standard errors, with automatic reporting of estimation results and sample size. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
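binscatterhist is a Stata command; the sketch below is only a language-neutral illustration (again in Python) of the underlying binned-scatterplot computation: cut x into equal-count quantile bins and plot the within-bin means of x and y, with a marginal histogram above. The number of bins and the synthetic data are assumptions, and the regression options and reporting of the actual command are not reproduced.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(9)
x = rng.normal(size=5_000)
y = 0.5 * x + rng.normal(scale=1.0, size=x.size)

def binscatter(x, y, n_bins=20):
    """Within-bin means of x and y over equal-count (quantile) bins of x."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    xm = np.array([x[idx == b].mean() for b in range(n_bins)])
    ym = np.array([y[idx == b].mean() for b in range(n_bins)])
    return xm, ym

xm, ym = binscatter(x, y)
fig, (ax_top, ax_main) = plt.subplots(2, 1, sharex=True,
                                      gridspec_kw={"height_ratios": [1, 3]})
ax_top.hist(x, bins=40)          # marginal histogram of x, as in binscatterhist
ax_main.scatter(xm, ym)          # binned scatterplot of within-bin means
ax_main.set_xlabel("x (bin means)")
ax_main.set_ylabel("y (bin means)")
plt.show()
```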
125. A Permutation Entropy Analysis of Voyager Interplanetary Magnetic Field Observations.
- Author
-
Raath, J. L., Olivier, C. P., and Engelbrecht, N. E.
- Subjects
INTERPLANETARY magnetic fields ,SOLAR magnetic fields ,ENTROPY ,PERMUTATIONS ,DATA binning ,MAGNETIC entropy ,WIENER processes ,BROWNIAN motion - Abstract
The permutation entropy analysis technique is here employed to study Voyager 2 observations of heliospheric field fluctuations from ∼6 AU to ∼34 AU for the first time. The properties of the technique, especially regarding the classification of a given process as either chaotic or stochastic, are illustrated in some detail, and it is indicated how the technique is best applied to, and interpreted for, the data set in question. Proceeding from this, conclusions are made regarding the stochasticity of the processes driving turbulence as a function of radial distance from the Sun, which is here found to increase with distance toward a value theoretically associated with Brownian motion, exceeding that value depending on the data bin size considered. At larger radial distances, however, it is argued that this trend may be influenced by a strongly declining signal-to-noise ratio. Intriguingly, this technique also serves to identify intervals of anomalously moderate to low stochasticity, which are briefly investigated here. Plain Language Summary: Analyzing spacecraft data measuring the Sun's magnetic field using the permutation entropy analysis technique allows one to ascertain whether the processes generating fluctuations in the magnetic field are chaotic (more deterministic) or stochastic (more random). This study presents for the first time the results of such an analysis of Voyager spacecraft data, and shows that fluctuations in the Sun's magnetic field, out to a distance of approximately 34 AU (near Pluto's orbit), appear to be primarily stochastic in nature. However, we identify a fair number of intervals with low permutation entropy that could correspond to interesting phenomena requiring further analysis. This raises the intriguing possibility that the permutation entropy analysis technique may serve as a useful tool in identifying intervals of interest to scientists in large data sets. Key Points: (1) Permutation entropy analysis is here, for the first time, applied to Voyager data. (2) Permutation entropy is found to imply that the driving processes for heliospheric magnetic field (HMF) turbulence are predominantly stochastic. (3) Permutation entropy is found to increase with heliocentric radial distance, although this effect may be influenced by instrumental uncertainties at the largest radial distances considered here. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
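The statistic used in entry 125 is the Bandt-Pompe permutation entropy: count ordinal patterns of a chosen embedding dimension in a time series and normalise the Shannon entropy of their distribution, so white noise gives values near 1 and strongly deterministic signals give lower values. Below is a minimal sketch; the embedding dimension, delay and toy signals are assumptions, and this is not the authors' processing chain for Voyager 2 magnetometer data.

```python
import numpy as np
from itertools import permutations
from math import log, factorial

def permutation_entropy(x, order=4, delay=1):
    """Normalised permutation entropy (Bandt-Pompe) of a 1D series, in [0, 1]."""
    x = np.asarray(x)
    n_windows = len(x) - (order - 1) * delay
    pattern_counts = {p: 0 for p in permutations(range(order))}
    for i in range(n_windows):
        window = x[i:i + order * delay:delay]
        pattern_counts[tuple(np.argsort(window))] += 1
    counts = np.array([c for c in pattern_counts.values() if c > 0], dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / log(factorial(order)))

rng = np.random.default_rng(10)
noise = rng.normal(size=5_000)                       # stochastic (Gaussian white noise)
t = np.arange(5_000)
deterministic = np.sin(0.05 * t) + 0.01 * rng.normal(size=t.size)  # mostly deterministic

print(round(permutation_entropy(noise), 3))          # close to 1
print(round(permutation_entropy(deterministic), 3))  # noticeably lower
```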
126. New Diabetic Retinopathy Study Findings Recently Were Reported by Researchers at University of Rovira and Virgili (Multivariate Data Binning and Examples Generation To Build a Diabetic Retinopathy Classifier Based On Temporal Clinical and...).
- Subjects
DIABETIC angiopathies ,EYE diseases ,DATA binning ,RETINAL diseases ,INFORMATION technology ,DIABETIC retinopathy - Abstract
Researchers at the University of Rovira and Virgili in Catalonia, Spain have conducted a study on diabetic retinopathy, a condition that affects the eyes of individuals with diabetes. The study explores the use of retrospective clinical data from Electronic Health Records (EHR) for classification tasks in chronic patients. The researchers propose a preprocessing method to construct a multivariate time series dataset using EHR data and address the problem of class imbalance. The study concludes that the proposed data preparation methods can be applied to other diseases with similar characteristics. [Extracted from the article]
- Published
- 2024
127. Citigroup's New Banking Chief Must Relish a Challenge.
- Author
-
Davies, Paul J.
- Subjects
BANK service charges ,INVESTMENT banking ,BANKING industry ,CORPORATE banking ,CHIEF executive officers ,DATA binning - Published
- 2024
128. My towing mirror tale.
- Author
-
Youngman, Anthony
- Subjects
MIRRORS ,TOWING ,AUTOMOBILES ,BINS ,DATA binning - Abstract
The article discusses the author's experience with towing mirrors and the lack of truly universal options. The author shares their personal experiences with finding mirrors for their Vauxhall Vectra and VW Tiguan, highlighting the need for car brand-specific mirrors. They found success with a company in the Netherlands that sells dedicated, car brand-specific mirrors. The article also includes feedback from readers requesting more content for older vans and tow cars, as well as a warning about certain Škoda Octavias that are not built to tow. [Extracted from the article]
- Published
- 2024
129. Page 10.
- Author
-
Davis, DeeDee
- Subjects
LABOR Day ,AUTUMN ,RETAIL industry ,TREE houses ,JACK-o-lanterns ,DATA binning - Published
- 2024
130. Enhanced data preprocessing with novel window function in Raman spectroscopy: Leveraging feature selection and machine learning for raspberry origin identification.
- Author
-
Zhao, Yaju, Lv, Wei, Zhang, Yinsheng, Tang, Minmin, and Wang, Haiyan
- Subjects
- *
FISHER discriminant analysis , *MACHINE learning , *FARM produce , *RAMAN spectroscopy , *AGRICULTURAL implements , *DATA binning , *FEATURE selection - Abstract
Highlights:
• Proposed method combines Raman preprocessing, feature selection, and machine learning for origin identification.
• Innovative Raman spectral preprocessing techniques improve data quality and reduce dimensionality.
• Optimized window function with a binning width of 5 achieves the highest accuracy in preprocessing.
• Information gain feature selection extracts discriminative spectral features effectively.
• LinearSVC, MLPClassifier, LDA, and RVFLClassifier provide robust performance.
In this study, a simple and accurate approach is proposed for enhancing the origin identification of raspberry samples using a combination of innovative Raman spectral preprocessing techniques, feature selection, and machine learning algorithms. A window function was creatively introduced and combined with a baseline removal technique to preprocess the Raman spectral data, reducing the dimensionality of the raw data and ensuring the quality of the processed data. An optimization process was conducted to determine the optimal parameter for the window function, resulting in a binning window width of 5 that yielded the highest accuracy. After applying three feature selection techniques, it was found that the information gain model had the best performance in extracting discriminative spectral features. Finally, ten different machine learning algorithms were employed to construct predictive models, and the optimal models were selected. Linear Support Vector Classifier (LinearSVC), Multi-Layer Perceptron Classifier (MLPClassifier), and Linear Discriminant Analysis (LDA) achieve accuracy, precision, recall, and F1 values above 0.96, while the Random Vector Functional Link Network Classifier (RVFLClassifier) surpasses 0.93 for these performance metrics. These results demonstrate the effectiveness of the proposed approach in identifying the origin of raspberry samples with high accuracy and robustness, providing a valuable tool for agricultural product authentication and quality control. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
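The spectral binning step in entry 130 is a window-function average over 5 adjacent channels (the width the authors found optimal), which absorbs small wavenumber misalignments and divides the dimensionality by 5. The sketch below shows only that step on a synthetic spectrum; the baseline removal, information-gain feature selection and classifiers of the paper are not shown, and the Raman-shift axis is an assumption.

```python
import numpy as np

def bin_spectrum(values, width=5):
    """Average non-overlapping windows of `width` adjacent channels."""
    n = (len(values) // width) * width               # drop the ragged tail, if any
    return values[:n].reshape(-1, width).mean(axis=1)

rng = np.random.default_rng(11)
wavenumbers = np.linspace(400, 1800, 1400)           # assumed Raman shift axis (cm^-1)
spectrum = (np.exp(-0.5 * ((wavenumbers - 1000) / 15) ** 2)
            + 0.02 * rng.normal(size=wavenumbers.size))

binned = bin_spectrum(spectrum, width=5)
binned_axis = bin_spectrum(wavenumbers, width=5)     # matching binned wavenumber axis
print(spectrum.size, "->", binned.size)              # 1400 -> 280
```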
131. METHODOLOGY FOR MATCHING LEGACY ACCELERATIVE EXPOSURES ACROSS MULTIPLE SUBJECT TYPES.
- Author
-
McGovern, Shannon, Olszko, Ardyn, Abraczinskas, Alicia, Beltran, Christine, Vasquez, Kimberly, and Chancey, Valeta
- Subjects
DATA binning ,DISTRIBUTION (Probability theory) ,RESEARCH questions ,HUMAN experimentation - Abstract
INTRODUCTION: The Biodynamics Data Resource (BDR) at the U.S. Army Aeromedical Research Laboratory houses data for ~7,000 non-contact inertial loading exposures (non-injurious and injurious) from vertical and horizontal sled runs previously conducted at the Naval Biodynamics Laboratory (1971-1996). Kinematic and physiologic responses were measured from human research volunteers (HRVs) (only non-injurious), anthropomorphic test devices (ATDs), and non-human primates (NHPs). The data can therefore be used to develop human injury criteria; however, a methodology is first needed to match non-injurious parameters across subject types in order to extrapolate the non-injurious human responses to injurious ranges. METHODS: Twenty-six parameters (peak sled acceleration, impact direction, etc.) were scored as high, medium, low, or negligible and classified as numeric or categorical. Classifications were made using multiple statistical assessments. Two processes were used to determine tolerances on numeric parameters: classifying parameters into statistical distributions, and equal-frequency varying-bins histograms. Tolerances were selected from the equal-frequency varying-bins histograms based on the largest bin size. Data were matched based on exact categorical parameters and on numeric parameters within a tolerance range. RESULTS: Fifteen parameters had a high-priority classification: categorical (6) and numeric (9). The data spread for most numeric parameters was right-skewed, and none fit a known statistical distribution. All HRV exposures fell within the range of the ATD and NHP exposures for all parameters. Equal-frequency varying-bins histograms determined a static frequency per parameter, allowing the bin sizes and number of bins (16 to 80) to vary. The largest and smallest tolerances of all parameters encompassed 94.79% and 15.94% of the range, respectively. All parameters were matched between subject types but not across all three subject types. DISCUSSION: The right-skewness of the numeric parameters was due to higher ATD and NHP (injurious) exposures, causing the selected tolerances to encompass a large percentage of each parameter range. While this generated matches, no matched group contained all three subject types despite the large tolerances. While future work could apply these methods solely over the HRV range and decrease bin sizes to optimize groups with matched parameters, real-world data and subject variability may not be suitable for such statistical binning to determine matched pairs. Learning Objectives: (1) Learn a methodology for matching datasets across multiple subject types for a variety of matching parameters, where the parameters included are both numeric and categorical. (2) Learn that when assessing exposures across multiple subject types and intensities, the parameters chosen for matching must be limited by relevance to the research question in order to make comparisons. [ABSTRACT FROM AUTHOR]
- Published
- 2024
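The equal-frequency varying-bins histogram used in entry 131 fixes the count per bin and lets the bin edges fall on quantiles, so each bin holds roughly the same number of exposures and the widest bin suggests a matching tolerance. The sketch below illustrates that construction on toy right-skewed data; the fixed count per bin and the gamma-distributed stand-in for peak sled acceleration are assumptions.

```python
import numpy as np

def equal_frequency_edges(values, per_bin=50):
    """Quantile-based bin edges so that each bin holds ~per_bin observations."""
    values = np.sort(np.asarray(values, dtype=float))
    n_bins = max(1, len(values) // per_bin)
    return np.quantile(values, np.linspace(0, 1, n_bins + 1))

rng = np.random.default_rng(12)
peak_accel = rng.gamma(shape=2.0, scale=4.0, size=2_000)   # right-skewed, like the legacy exposures

edges = equal_frequency_edges(peak_accel, per_bin=50)
widths = np.diff(edges)
tolerance = widths.max()                                   # tolerance read off the largest bin
counts, _ = np.histogram(peak_accel, bins=edges)
print(len(widths), "bins; counts range", counts.min(), "-", counts.max(),
      "; tolerance =", round(float(tolerance), 2))
```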
132. Robust point cloud registration using Hough voting-based correspondence outlier rejection.
- Author
-
Han, Jihoon, Shin, Minwoo, and Paik, Joonki
- Subjects
- *
POINT cloud , *HOUGH transforms , *DATA binning , *TRANSFORMER models , *SEISMIC networks , *SAMPLING (Process) - Abstract
In this paper, we present a novel method for point cloud registration in large-scale 3D scenes. Our approach is accurate and robust, and does not rely on unrealistic assumptions. We address the challenges posed by scanning equipment like LiDAR, which often produces dense point clouds. Additionally, our method is effective even in scenes with low overlap rates, specifically less than 30%. Our approach begins by computing overlap region-based correspondences. This involves extracting deep geometric features from point cloud pairs, which is especially beneficial in enhancing registration performance in cases with low overlap ratios. We then construct efficient triplets that vote in the 6D Hough space, representing the transformation parameters. This process involves creating a quartet from overlap region-based correspondences and then forming a final triplet following a sampling process. To mitigate ambiguity during training, we use similarity values of the triplet as features of each vote when configuring votes for network input. Our framework incorporates the architecture of the Fully Convolutional Geometric Features (FCGF) network, augmented with a transformer's attention mechanism, to reduce noise in the voting process. The final stage involves identifying the consensus of correspondences in the Hough space using a binning approach, which enables us to predict the final transformation parameters. Our method has demonstrated state-of-the-art performance on indoor datasets, including high overlap ratio data like 3DMatch and low overlap ratio data like 3DLoMatch. It has also shown comparable performance to leading methods on outdoor datasets like KITTI. The proposed method employs a local feature-based matching technique between pairs of point clouds, utilizing a set of triplets. Initially, we assign each triplet to a specific bin in a sparsely divided Hough space. Following this initial step, we employ a fully sparse convolutional network to refine the Hough space, effectively removing noise from the voting process. The final stage involves selecting the bin that receives the maximum number of votes, which we then use to determine the final transformation parameter. This process ensures both accuracy and precision in our point cloud matching methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
133. Continuous-Variable Nonlocality and Contextuality.
- Author
-
Barbosa, Rui Soares, Douce, Tom, Emeriau, Pierre-Emmanuel, Kashefi, Elham, and Mansfield, Shane
- Subjects
- *
BELL'S theorem , *QUANTUM computing , *DATA binning - Abstract
Contextuality is a non-classical behaviour that can be exhibited by quantum systems. It is increasingly studied for its relationship to quantum-over-classical advantages in informatic tasks. To date, it has largely been studied in discrete-variable scenarios, where observables take values in discrete and usually finite sets. Practically, on the other hand, continuous-variable scenarios offer some of the most promising candidates for implementing quantum computations and informatic protocols. Here we set out a framework for treating contextuality in continuous-variable scenarios. It is shown that the Fine–Abramsky–Brandenburger theorem extends to this setting, an important consequence of which is that Bell nonlocality can be viewed as a special case of contextuality, as in the discrete case. The contextual fraction, a quantifiable measure of contextuality that bears a precise relationship to Bell inequality violations and quantum advantages, is also defined in this setting. It is shown to be a non-increasing monotone with respect to classical operations that include binning to discretise data. Finally, we consider how the contextual fraction can be formulated as an infinite linear program. Through Lasserre relaxations, we are able to express this infinite linear program as a hierarchy of semi-definite programs that allow the contextual fraction to be calculated with increasing accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
134. Correction: Collins et al. Altimeter Observations of Tropical Cyclone-Generated Sea States: Spatial Analysis and Operational Hindcast Evaluation. J. Mar. Sci. Eng. 2021, 9 , 216.
- Author
-
Collins, Clarence, Hesser, Tyler, Rogowski, Peter, and Merrifield, Sophia
- Subjects
OCEAN waves ,ALTIMETERS ,CYCLONES ,TROPICAL cyclones ,WIND speed ,DATA binning - Abstract
The correct version now states: Near the eye in the TC-centered reference frame, we find a pattern of model underestimation in the left sector and overestimation in the right sector, except near the eye, where wave height remains underestimated. Indeed, Rogowski et al. (this issue) show that while there is a trend towards underestimation of peak Hs values, bias patterns vary from storm to storm. The correct text now states: For all subplots, there is a relative minimum in bias around the eye, and this bias is negative (model underestimation) for all but the fastest storms. There is a general pattern of underestimation of wave height in the left sector (~-4%) and overestimation of wave height in the right sector (~+5%). [Extracted from the article]
- Published
- 2022
- Full Text
- View/download PDF
135. Using ANPR data to create an anonymized linked open dataset on urban bustle.
- Author
-
Van de Vyvere, Brecht and Colpaert, Pieter
- Subjects
- *
DATA binning , *AUTOMOBILE license plates , *DIGITAL twins , *DECISION making , *BUSTLES , *STATISTICS , *MACHINE learning - Abstract
ANPR cameras allow the automatic detection of vehicle license plates and are increasingly used for law enforcement. However, the statistical data generated by ANPR cameras are also a potential source of urban insights. In order for these data to reach their full potential for policy-making, we research how they can be shared in digital twins, with researchers, with a diverse set of machine learning models, and even on Open Data portals. This article's key objective is to find a way to anonymize and aggregate ANPR data so that it can still provide useful visualizations for local decision making. We introduce an approach to aggregate the data with geotemporal binning and publish it by combining nine existing data specifications. We implemented the approach for the city of Kortrijk (Belgium) with 43 ANPR cameras, developed the ANPR Metrics tool to generate the statistical data and dashboards on top of the data, and tested whether mobility experts from the city could deduce valuable insights. We present a couple of insights that were found as a result, as proof that anonymized ANPR data complements their currently used traffic analysis tools, providing a valuable source for data-driven policy-making. [ABSTRACT FROM AUTHOR]
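A minimal Python sketch of geotemporal binning as an aggregation idea (grid-cell size, interval length, and the detection tuple layout are assumptions, not the ANPR Metrics implementation):

    # Count detections per spatial grid cell and per time interval, so no
    # individual plate observation ever leaves the aggregation.
    from collections import Counter

    def geotemporal_bin(detections, cell_deg=0.01, interval_s=3600):
        """detections: iterable of (lat, lon, unix_timestamp).
        Returns counts per (lat_cell, lon_cell, interval) bin."""
        counts = Counter()
        for lat, lon, ts in detections:
            key = (int(lat // cell_deg), int(lon // cell_deg), int(ts // interval_s))
            counts[key] += 1
        return counts

    # toy usage: two detections in the same cell and hour, one elsewhere
    dets = [(50.828, 3.265, 1650000000), (50.829, 3.266, 1650001000), (50.900, 3.300, 1650000500)]
    print(geotemporal_bin(dets))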
- Published
- 2022
- Full Text
- View/download PDF
136. Condition-Based Monitoring on High-Precision Gearbox for Robotic Applications.
- Author
-
Amin Al Hajj, Mohamad, Quaglia, Giuseppe, and Schulz, Ingo
- Subjects
- *
GEARBOXES , *DATA binning , *INDUSTRIAL robots , *ROBOTICS , *PRINCIPAL components analysis , *PLANT maintenance , *SIGNAL processing - Abstract
This work presents a theoretical and experimental study of defect detection in a robotic gearbox using vibration signals in both cyclostationary and noncyclostationary conditions. Existing work focuses on inferring the health of the robot during operation with little regard for which element of the components is defective. This article illustrates the detection of damage to specific elements of a robotic gearbox during a robotic cycle based on domain knowledge and presents a novel data-driven method for asset health. This starts by studying the robotic gearbox, specifically its kinematics as a planetary two-stage reduction gearbox, to determine the rotations of each component. The signals acquired from a test bench with four sensors undergo different acquisition methods and signal processing techniques to correlate the elements' frequencies. The work shows the detection of artificially created defects from the acquired vibration data, verifying the kinematic methodology and identifying the root cause of failure of such gearboxes. A novel resampling method, Binning, is presented and compared with traditional signal processing techniques. Binning combined with Principal Component Analysis (PCA) as a data-driven method to infer the state of the gearbox is presented, tested, and validated. This work presents these methods as a step toward automated predictive maintenance on robots in industrial applications. [ABSTRACT FROM AUTHOR]
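The abstract does not spell out the Binning resampling method in detail; the sketch below assumes the simplest reading, averaging the vibration signal over fixed-width windows before PCA, and should be taken as a generic illustration rather than the authors' algorithm.

    # Generic sketch (assumption: "Binning" averages the vibration signal over
    # fixed-width windows before feeding the reduced vectors to PCA).
    import numpy as np
    from sklearn.decomposition import PCA

    def bin_signal(x, n_bins):
        """Average a 1-D signal into n_bins equal-width bins."""
        x = x[: len(x) // n_bins * n_bins]           # trim so the length divides evenly
        return x.reshape(n_bins, -1).mean(axis=1)

    rng = np.random.default_rng(0)
    signals = rng.normal(size=(20, 10_000))           # 20 synthetic acquisitions (rows)
    features = np.vstack([bin_signal(s, 100) for s in signals])
    scores = PCA(n_components=3).fit_transform(features)   # low-dimensional health indicators
    print(scores.shape)                                # (20, 3)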
- Published
- 2022
- Full Text
- View/download PDF
137. LSR‐forest: A locality sensitive hashing‐based approximate k‐nearest neighbor query algorithm on high‐dimensional uncertain data.
- Author
-
Wang, Jiagang, Qian, Tu, Yang, Anbang, Wang, Hui, and Qian, Jiangbo
- Subjects
K-nearest neighbor classification ,NEAREST neighbor analysis (Statistics) ,DATA binning ,DATA scrubbing ,LOCATION-based services ,ALGORITHMS ,HIGH-dimensional model representation - Abstract
Summary: Uncertain data is widely used in many practical applications, such as data cleaning, location‐based services, and privacy protection. With the development of technology, data is tending toward high dimensionality. The most common indexes for nearest neighbor search on uncertain data are the R‐Tree and the KD‐Tree. These indexes inevitably suffer from the "curse of dimensionality." Focusing on this problem, this article proposes a new hash algorithm, called the LSR‐forest, based on locality sensitive hashing and the R‐Tree, to solve the approximate nearest neighbor search problem on high‐dimensional uncertain data. The LSR‐forest hashes similar high‐dimensional uncertain data into the same bucket with high probability, and then constructs multiple R‐Tree‐based indexes for the hashed buckets. When querying, neighbors can be judged by checking the data in the hypercube in which the query point lies. The query range can also be adjusted automatically through the parameter k. Many experiments on different datasets are presented in this article. The results show that the LSR‐forest has better effectiveness and efficiency than the R‐Tree on high‐dimensional datasets. [ABSTRACT FROM AUTHOR]
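A sketch of the bucketing idea using standard random-hyperplane LSH (the LSR-forest additionally builds an R-Tree per bucket and models data uncertainty; both are omitted here, and the class below is purely illustrative):

    # Random-hyperplane LSH: similar points share a sign pattern, hence a bucket.
    import numpy as np

    class HyperplaneLSH:
        def __init__(self, dim, n_planes, seed=0):
            self.planes = np.random.default_rng(seed).normal(size=(n_planes, dim))
            self.buckets = {}

        def _key(self, x):
            return tuple((self.planes @ x > 0).astype(int))    # sign pattern = bucket id

        def insert(self, x):
            self.buckets.setdefault(self._key(x), []).append(x)

        def query(self, q, k=1):
            cand = self.buckets.get(self._key(q), [])           # search only the query's bucket
            return sorted(cand, key=lambda x: np.linalg.norm(x - q))[:k]

    lsh = HyperplaneLSH(dim=64, n_planes=12)
    data = np.random.default_rng(1).normal(size=(1000, 64))
    for row in data:
        lsh.insert(row)
    print(len(lsh.query(data[0], k=3)))    # approximate neighbours of the first point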
- Published
- 2022
- Full Text
- View/download PDF
138. Volumetric dose extension for isodose tuning.
- Author
-
Ma, Lin, Chen, Mingli, Gu, Xuejun, and Lu, Weiguo
- Subjects
- *
VOLUMETRIC-modulated arc therapy , *DATA binning , *PROSTATE , *STANDARD deviations , *THREE-dimensional modeling - Abstract
Purpose: To develop a method that can extend dose from two isodose surfaces (isosurfaces) to the entire patient volume, and to demonstrate its application in radiotherapy plan isodose tuning. Methods: We hypothesized that a volumetric dose distribution can be extended from two isosurfaces—the 100% isosurface and a reference isosurface—with the distances to these two surfaces ($L_{100}$ and $L_{\rm ref}$) as extension variables. The extension function is modeled by a three‐dimensional lookup table (LUT), where voxel dose values from clinical plans are binned by three indexes: $L_{100}$, $L_{\rm ref}$, and $D_{\rm ref}$ (reference dose level). The mean and standard deviation of the voxel doses in each bin are calculated and stored in the LUT. Volumetric dose extension is performed voxel‐wise by indexing the LUT with the $L_{100}$, $L_{\rm ref}$, and $D_{\rm ref}$ of each query voxel. The mean dose stored in the corresponding bin is filled into the query voxel as the extended dose, and the standard deviation is filled voxel‐wise as the uncertainty of the extension result. We applied dose extension in isodose tuning, which aims to tune a volumetric dose distribution by isosurface dragging. We adopted the extended dose as an approximate dose estimate and combined it with a dose correction strategy to achieve accurate dose tuning. Results: We collected 32 post‐operative prostate volumetric‐modulated arc therapy (VMAT) cases and built the LUT and its associated uncertainties from the doses of 27 cases. The dose extension method was tested on five cases, whose dose distributions were defined as ground truth (GT). We extended the doses from the 100% and 50% GT isosurfaces to the entire volume and evaluated the accuracy of the extended doses. The 5 mm/5% gamma passing rate (GPR) of the extended doses is 92.0%. The mean error is 4.5%, which is consistent with the uncertainty estimated by the LUT. The dose difference is within two sigma in 90.5% of voxels and within three sigma in 97.5%. The calculation time is less than 2 seconds. To simulate plan isodose tuning, we optimized a dose with less sparing of the rectum (than the GT dose) and defined it as a "base dose"—the dose awaiting isosurface dragging. On the front end, the simulated isodose tuning is conducted such that the base dose is given to the plan tuner, and its 50% isosurface is dragged to the desired position (the position of the 50% isosurface in the GT dose). On the back end, the output of isodose tuning is obtained by (1) extending the dose from the desired isosurfaces and viewing the extended dose as an approximate dose, (2) obtaining a correction map from the base dose, and (3) applying the correction map to the extended dose. The accuracy of the output—the extended dose with correction—was 97.2% in GPR (3 mm/3%) and less than 1% in mean dose difference. The total calculation time is less than 2 seconds, which allows for interactive isodose tuning. Conclusions: We developed a dose extension method that generates a volumetric dose distribution from two isosurfaces. The application of dose extension is in interactive isodose tuning. The distance‐based LUT design and correction strategy guarantee computational efficiency and accuracy in isodose tuning. [ABSTRACT FROM AUTHOR]
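A simplified sketch of the lookup-table construction described above, binning voxel doses by their distances to the two isosurfaces and storing the per-bin mean and standard deviation; it is reduced to a 2-D table (the reference dose level index and the clinical data are omitted) and run on synthetic inputs:

    # Bin voxel doses by (L100, Lref) and store per-bin mean and std (the LUT).
    import numpy as np

    def build_lut(l100, lref, dose, edges_100, edges_ref):
        i = np.digitize(l100, edges_100)
        j = np.digitize(lref, edges_ref)
        n100, nref = len(edges_100) + 1, len(edges_ref) + 1
        mean = np.full((n100, nref), np.nan)
        std = np.full((n100, nref), np.nan)
        for a in range(n100):
            for b in range(nref):
                sel = dose[(i == a) & (j == b)]
                if sel.size:
                    mean[a, b], std[a, b] = sel.mean(), sel.std()
        return mean, std

    # toy usage with synthetic voxel data (distances in mm, dose in %)
    rng = np.random.default_rng(0)
    l100, lref = rng.uniform(0, 50, 5000), rng.uniform(0, 50, 5000)
    dose = 100 - l100 + rng.normal(0, 2, 5000)
    mean, std = build_lut(l100, lref, dose, np.arange(0, 50, 5), np.arange(0, 50, 5))
    print(mean.shape)

Extension would then amount to indexing this table with the distances of each query voxel and filling in the stored mean (dose) and standard deviation (uncertainty).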
- Published
- 2022
- Full Text
- View/download PDF
139. The medieval cell doctrine: Foundations, development, evolution, and graphic representations in printed books from 1490 to 1630.
- Author
-
Lanska, Douglas J.
- Subjects
- *
FATHERS of the church , *CEREBRAL ventricles , *MIDDLE Ages , *DATA binning , *SIXTEENTH century , *RELIGIOUS doctrines - Abstract
The medieval cell doctrine was a series of related psychological models based on ancient Greco-Roman ideas in which cognitive faculties were assigned to "cells," typically corresponding to the cerebral ventricles. During Late Antiquity and continuing during the Early Middle Ages, Christian philosophers attempted to reinterpret Aristotle's De Anima, along with later modifications by Herophilos and Galen, in a manner consistent with religious doctrine. The resulting medieval cell doctrine was formulated by the fathers of the early Christian Church in the fourth and fifth centuries. Printed images of the doctrine that appeared in medical, philosophical, and religious works, beginning with "graphic incunabula" at the end of the fifteenth century, extended and evolved a manuscript tradition that had been in place since at least the eleventh century. Some of these early psychological models just pigeonholed the various cognitive faculties in different non-overlapping bins within the brain (albeit without any clinicopathologic evidence supporting such localizations), while others specifically promoted or implied a linear sequence of events, resembling the process of digestion. By the sixteenth century, printed images of the doctrine were usually linear three-cell versions with few exceptions having four or five cells. Despite direct challenges by Massa and Vesalius in the sixteenth century, and Willis in the seventeenth century, the doctrine saw its most elaborate formulations in the late-sixteenth and early-seventeenth centuries with illustrations by the Paracelsan physicians Bacci and Fludd. Overthrow of the doctrine had to await abandonment of Galenic cardiovascular physiology from the late-seventeenth to early-eighteenth centuries. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
140. The Imperative Style in the Diwan of al-Hasan ibn Rashid al-Hilli: A Grammatical Study.
- Author
-
د أميه عبيد جيجان and عامر عبد وعمة الط
- Subjects
LINGUISTIC context ,DATA binning ,POETS ,COLLECTIONS - Abstract
Copyright of Journal of Human Sciences (19922876) is the property of Republic of Iraq Ministry of Higher Education & Scientific Research (MOHESR) and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2022
141. On the correction of multiple minute sampling rainfall data of tipping bucket rainfall recorders.
- Author
-
Rácz, Tibor
- Subjects
DATA binning ,CORRECTION factors ,DATA recorders & recording ,DATA transmission systems ,TIME measurements - Abstract
In the last decades of the 1900s, tipping bucket rainfall gauges (TBGs) were used to record sub-daily rainfall data. In the first period of rainfall data recording, as a result of the lack of efficient data storage and data transmission, the sampling period of TBG devices was chosen to be on the order of 10-20 minutes. Consequently, there are historical datasets characterized by sampling periods several minutes long. Since the turn of the 2000s, data handling has been revolutionized and the sampling period has diminished to one minute. There is a systematic error of the TBG technique which has been investigated since the middle of the 1900s. Between 2004 and 2008, comprehensive research was performed to determine the correction equation for several TBG devices. These results can be utilized for short-sampling-period measurements (one-minute sampling), but for longer-sampling-period data, further corrections are needed. In this paper, a supplementary correction is presented. On the basis of the mathematical determination of the correction factor, a simple estimation is proposed for executing the necessary correction. After the presentation of the correction factor, a general correction factor is proposed for larger geographical regions and a wide time span of measurements. The revision of historical rainfall data recorded by TBG devices can be important for several purposes, such as the re-evaluation of intensity-duration-frequency (IDF) curves, and in other fields. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
142. Noise-Based Image Harmonization Significantly Increases Repeatability and Reproducibility of Radiomics Features in PET Images: A Phantom Study.
- Author
-
Keller, Harald, Shek, Tina, Driscoll, Brandon, Xu, Yiwen, Nghiem, Brian, Nehmeh, Sadek, Grkovski, Milan, Schmidtlein, Charles Ross, Budzevich, Mikalai, Balagurunathan, Yoganand, Sunderland, John J., Beichel, Reinhard R., Uribe, Carlos, Lee, Ting-Yim, Li, Fiona, Jaffray, David A., and Yeung, Ivan
- Subjects
POSITRON emission tomography ,RADIOMICS ,STATISTICAL reliability ,INTRACLASS correlation ,IMAGE reconstruction ,DATA binning - Abstract
For multicenter clinical studies, characterizing the robustness of image-derived radiomics features is essential. Features calculated on PET images have been shown to be very sensitive to image noise. The purpose of this work was to investigate the efficacy of a relatively simple harmonization strategy on feature robustness and agreement. A purpose-built texture pattern phantom was scanned on 10 different PET scanners in 7 institutions with a variety of image acquisition and reconstruction protocols. An image harmonization technique based on equalizing a contrast-to-noise ratio was employed to generate a "harmonized" dataset alongside a "standard" dataset for a reproducibility study. In addition, a repeatability study was performed with images from a single PET scanner at variable image noise, obtained by varying the binning time of the reconstruction. Feature agreement was measured using the intraclass correlation coefficient (ICC). In the repeatability study, 81 of 93 features had a lower ICC on the images with the highest image noise than on the images with the lowest image noise. Using the harmonized dataset significantly improved the feature agreement for five of the six investigated feature classes over the standard dataset. For three feature classes, high feature agreement corresponded with higher sensitivity to the different patterns, suggesting a way to select suitable features for predictive models. [ABSTRACT FROM AUTHOR]
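The ICC can be computed in several ways; the sketch below uses the one-way random-effects ICC(1,1) form as one common choice, without implying that this is the exact variant used in the study. All data are synthetic.

    # One-way random-effects ICC(1,1) for one feature measured repeatedly.
    import numpy as np

    def icc_oneway(x):
        """x: (n_subjects, k_repeats) array of one radiomics feature."""
        n, k = x.shape
        grand = x.mean()
        msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)                  # between-subject MS
        msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))     # within-subject MS
        return (msb - msw) / (msb + (k - 1) * msw)

    rng = np.random.default_rng(0)
    truth = rng.normal(10, 3, size=(30, 1))                 # 30 phantom regions
    repeats = truth + rng.normal(0, 0.5, size=(30, 4))      # 4 noisy repeat measurements
    print(round(icc_oneway(repeats), 3))                    # close to 1 => good agreement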
- Published
- 2022
- Full Text
- View/download PDF
143. Self‐gated 3D stack‐of‐spirals UTE pulmonary imaging at 0.55T.
- Author
-
Javed, Ahsan, Ramasawmy, Rajiv, O'Brien, Kendall, Mancini, Christine, Su, Pan, Majeed, Waqas, Benkert, Thomas, Bhat, Himanshu, Suffredini, Anthony F., Malayeri, Ashkan, and Campbell‐Washburn, Adrienne E.
- Subjects
COVID-19 ,IMAGE reconstruction ,DATA binning ,PULMONARY nodules ,DIAGNOSTIC imaging - Abstract
Purpose: To develop an isotropic high‐resolution stack‐of‐spirals UTE sequence for pulmonary imaging at 0.55 Tesla by leveraging a combination of robust respiratory binning, trajectory correction, and concomitant‐field corrections. Methods: A stack‐of‐spirals golden‐angle UTE sequence was used to continuously acquire data for 15.5 minutes. The data were binned to a stable respiratory phase based on superoinferior readout self‐navigator signals. Corrections for trajectory errors and concomitant field artifacts, along with image reconstruction with conjugate gradient SENSE, were performed inline within the Gadgetron framework. Finally, data were retrospectively reconstructed to simulate scan times of 5, 8.5, and 12 minutes. Image quality was assessed using signal‐to‐noise ratio, image sharpness, and qualitative reader scores. The technique was evaluated in healthy volunteers, patients with coronavirus disease 2019 infection, and patients with lung nodules. Results: The technique provided diagnostic quality images with parenchymal lung SNR of 3.18 ± 0.60, 4.57 ± 0.87, 5.45 ± 1.02, and 5.89 ± 1.28 for scan times of 5, 8.5, 12, and 15.5 minutes, respectively. The respiratory binning technique resulted in significantly sharper images (p < 0.001) as measured with the relative maximum derivative at the diaphragm. Concomitant field corrections visibly improved the sharpness of anatomical structures away from iso‐center. Image quality was maintained with a slight loss in SNR for simulated scan times down to 8.5 minutes. Inline image reconstruction and artifact correction were achieved in <5 minutes. Conclusion: The proposed pulmonary imaging technique combined efficient stack‐of‐spirals imaging with robust respiratory binning, concomitant field correction, and trajectory correction to generate diagnostic quality images with 1.75 mm isotropic resolution in 8.5 minutes on a high‐performance 0.55 Tesla system. [ABSTRACT FROM AUTHOR]
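A much-simplified sketch of amplitude-based respiratory binning: readouts whose self-navigator value falls in the most-populated amplitude bin (a stable respiratory phase) are kept. The actual inline Gadgetron pipeline is considerably more elaborate; the breathing trace below is synthetic.

    # Keep only the readouts in the dominant respiratory-amplitude bin.
    import numpy as np

    def respiratory_bin(navigator, n_bins=20):
        """navigator: 1-D self-navigator amplitude per readout. Returns a boolean
        mask selecting readouts in the most-populated amplitude bin."""
        counts, edges = np.histogram(navigator, bins=n_bins)
        b = np.argmax(counts)
        return (navigator >= edges[b]) & (navigator < edges[b + 1])

    # toy usage: a sinusoidal breathing trace spends most time near its turning points
    t = np.linspace(0, 60, 6000)
    nav = np.sin(2 * np.pi * t / 5) + 0.05 * np.random.default_rng(0).normal(size=t.size)
    mask = respiratory_bin(nav)
    print(mask.sum(), "of", mask.size, "readouts accepted")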
- Published
- 2022
- Full Text
- View/download PDF
144. Large CO2 Emitters as Seen From Satellite: Comparison to a Gridded Global Emission Inventory.
- Author
-
Chevallier, Frédéric, Broquet, Grégoire, Zheng, Bo, Ciais, Philippe, and Eldering, Annmarie
- Subjects
- *
EMISSION inventories , *CARBON emissions , *DATA binning , *FOSSIL fuels , *GRID cells , *TELECOMMUNICATION satellites ,PARIS Agreement (2016) - Abstract
Using the multiyear archive of the two Orbiting Carbon Observatories (OCO) of NASA, we have retrieved large fossil fuel CO2 emissions (larger than 1.0 ktCO2 h⁻¹ per 10⁻² square degree grid cell) over the globe with a simple plume cross‐sectional inversion approach. We have compared our results with a global gridded and hourly inventory. The corresponding OCO emission retrievals explain more than one third of the inventory variance at the corresponding cells and hours. We have binned the data at diverse time scales, from the year (with OCO‐2) to the average morning and afternoon (with OCO‐3). We see consistent variations of the median emissions, indicating that the retrieval‐inventory differences (with standard deviations of a few tens of percent) are mostly random and that trends can be calculated robustly in areas of favorable observing conditions once future satellite CO2 imagers provide an order of magnitude more data. Plain Language Summary: In the wake of the Paris Climate Agreement, there is an increasing need to monitor emissions from fossil fuel combustion around the world. For CO2 in particular, satellite imagers are being designed to observe the emission plumes from large point sources and intense urban area sources. In order to assess their potential, we have tested a simple emission retrieval scheme on the multi‐year archive of the two NASA Orbiting Carbon Observatories, which provide dense observations along the orbit line but with a narrow swath. We have compared our results with a global gridded and hourly inventory. The corresponding emission retrievals explain a large part of the inventory variability, despite uncertainty in both datasets. We also see consistent variations in the median emission values at different time scales. These results suggest that the differences between retrievals and inventory are mostly random and that trends can be calculated robustly in areas of favorable observation conditions, when future satellite CO2 imagers provide an order of magnitude more data. Key Points: We have retrieved instantaneous CO2 emissions for one third of the large emission cells of a global high‐resolution hourly inventory. The emission retrievals explain more than one third of the inventory variance at the corresponding cells and hours. Consistent temporal variations of the median emissions suggest that trends can be robustly calculated when more data become available. [ABSTRACT FROM AUTHOR]
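A small sketch of the temporal-binning step, grouping retrievals by local time of day and comparing per-bin medians, with synthetic values and illustrative column names (not the study's data files):

    # Bin emission retrievals by local time of day and compare per-bin medians.
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "local_hour": np.random.default_rng(0).integers(0, 24, 500),
        "retrieved":  np.random.default_rng(1).gamma(2.0, 1.5, 500),   # ktCO2/h, synthetic
        "inventory":  np.random.default_rng(2).gamma(2.0, 1.5, 500),
    })
    bins = [0, 6, 12, 18, 24]                                 # night / morning / afternoon / evening
    df["period"] = pd.cut(df["local_hour"], bins, right=False)
    print(df.groupby("period", observed=True)[["retrieved", "inventory"]].median())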
- Published
- 2022
- Full Text
- View/download PDF
145. Filtering Data Bins of UWB Radars for Activity Recognition with Random Forest.
- Author
-
Imbeault-Nepton, Thomas, Maitre, Julien, Bouchard, Kévin, and Gaboury, Sébastien
- Subjects
DATA binning ,RANDOM forest algorithms ,ULTRA-wideband radar ,PRINCIPAL components analysis ,REDUCTION potential ,BIN packing problem - Abstract
The world's population is rapidly aging, leading to an increase in the number of people who need care and a reduction in the number of potential workers who can give that care. Hence, in order to fight this worker shortage, scientific researchers have proposed solutions, mainly prototypes, to help people remain at home. These solutions monitor the activities performed by people and can detect anomalies in people's behavior in order to assist them appropriately and at the right time. In this paper, we propose a solution based on three ultra-wideband radars to recognize activities in a prototype apartment. More precisely, we processed the data provided by the radars with a conventional band-pass filter applied to each bin independently. Then, we extracted several features and performed dimensionality reduction with the help of the SelectKBest algorithm and principal component analysis. Finally, we tested the proposed approach with the Random Forest algorithm and the leave-one-subject-out strategy. The results obtained show an average improvement of approximately 13% in accuracy compared to our previous work. [ABSTRACT FROM AUTHOR]
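A sketch of the processing chain described above, with per-bin band-pass filtering, simple per-bin statistics as features, SelectKBest, PCA, Random Forest, and leave-one-subject-out evaluation; the filter band, feature choices, and all data are illustrative assumptions rather than the paper's exact configuration.

    # Band-pass filter each UWB range bin, summarize it, then select / reduce /
    # classify with leave-one-subject-out cross-validation.
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    def bin_features(frames, fs=100.0, low=0.2, high=5.0):
        """frames: (n_time_samples, n_range_bins) radar matrix for one window.
        Band-pass filter each range bin independently, then summarize it."""
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, frames, axis=0)
        return np.concatenate([filtered.mean(axis=0), filtered.std(axis=0)])

    rng = np.random.default_rng(0)
    X = np.vstack([bin_features(rng.normal(size=(500, 32))) for _ in range(60)])
    y = rng.integers(0, 4, 60)             # 4 activity classes (synthetic labels)
    groups = np.repeat(np.arange(10), 6)   # 10 subjects for leave-one-subject-out
    clf = make_pipeline(SelectKBest(f_classif, k=30), PCA(n_components=10),
                        RandomForestClassifier(n_estimators=200, random_state=0))
    print(cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=groups).mean())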
- Published
- 2022
- Full Text
- View/download PDF
146. Errors in aerial survey count data: Identifying pitfalls and solutions.
- Author
-
Davis, Kayla L., Silverman, Emily D., Sussman, Allison L., Wilson, R. Randy, and Zipkin, Elise F.
- Subjects
- *
AERIAL surveys , *ECOLOGICAL surveys , *SPECIES pools , *DATA binning , *INTERNET surveys , *ACQUISITION of data , *WATER birds , *COUNTING - Abstract
Accurate estimates of animal abundance are essential for guiding effective management, and poor survey data can produce misleading inferences. Aerial surveys are an efficient survey platform, capable of collecting wildlife data across large spatial extents in short timeframes. However, these surveys can yield unreliable data if not carefully executed. Despite a long history of aerial survey use in ecological research, problems common to aerial surveys have not yet been adequately resolved. Through an extensive review of the aerial survey literature over the last 50 years, we evaluated how common problems encountered in the data (including nondetection, counting error, and species misidentification) can manifest, the potential difficulties conferred, and the history of how these challenges have been addressed. Additionally, we used a double‐observer case study focused on waterbird data collected via aerial surveys and an online group (flock) counting quiz to explore the potential extent of each challenge and possible resolutions. We found that nearly three quarters of the aerial survey methodology literature focused on accounting for nondetection errors, while issues of counting error and misidentification were less commonly addressed. Through our case study, we demonstrated how these challenges can prove problematic by detailing the extent and magnitude of potential errors. Using our online quiz, we showed that aerial observers typically undercount group size and that the magnitude of counting errors increases with group size. Our results illustrate how each issue can act to bias inferences, highlighting the importance of considering individual methods for mitigating potential problems separately during survey design and analysis. We synthesized the information gained from our analyses to evaluate strategies for overcoming the challenges of using aerial survey data to estimate wildlife abundance, such as digital data collection methods, pooling species records by family, and ordinal modeling using binned data. Recognizing conditions that can lead to data collection errors and having reasonable solutions for addressing errors can allow researchers to allocate resources effectively to mitigate the most significant challenges for obtaining reliable aerial survey data. [ABSTRACT FROM AUTHOR]
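The "ordinal modeling using binned data" idea can be illustrated in a few lines: raw group counts are mapped to ordered size classes, so that modest counting errors no longer change the response category (the bin edges below are illustrative).

    # Convert raw flock counts into ordinal size classes.
    import numpy as np

    counts = np.array([3, 12, 47, 180, 950, 15, 60])
    edges = np.array([1, 10, 50, 250, 1000])       # classes: 1-9, 10-49, 50-249, 250-999, 1000+
    ordinal_class = np.digitize(counts, edges)     # labels 1..5
    print(list(zip(counts, ordinal_class)))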
- Published
- 2022
- Full Text
- View/download PDF
147. An early prediction of lung cancer, solid, liquid and semi-liquid deposition and its classification through measurement of physical characteristics using CT scan images.
- Author
-
Karthika, K. and Jothilakshmi, G. R.
- Subjects
- *
LUNG cancer , *PHYSICAL measurements , *COMPUTED tomography , *DATA binning , *SUPPORT vector machines , *REFLECTANCE - Abstract
The analysis of lung diseases at an early stage is a major need in the medical field to limit severity for patients. Thus, in the proposed work, a Support Vector Machine (SVM)-based classification method is adopted for precise classification of lung cancer and of solid (aerosol), liquid, and semi-liquid deposition. Initially, the gathered data are pre-processed for image transformation. Image binning is performed to divide the images into several sub-regions, and threshold setting is done for effective segmentation. To evaluate the cell size and infected areas, Region of Interest (ROI) extraction is performed. The physical characteristics, encompassing reflection coefficient, mass density, and impedance, are estimated to achieve effective performance. The four lung conditions are then predicted and classified using the SVM classifier. The simulation tool used for evaluating the performance is MATLAB. The performance across the different physical characteristics shows that the proposed method classifies effectively. [ABSTRACT FROM AUTHOR]
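An illustrative sketch only, on synthetic data with assumed parameters: each image is divided into sub-region bins, thresholded, summarized by simple per-bin statistics, and classified with an SVM (scikit-learn is used here for brevity even though the paper evaluates in MATLAB).

    # Divide each image into a grid of bins, compute simple per-bin features,
    # and classify with an SVM.
    import numpy as np
    from sklearn.svm import SVC

    def bin_image_features(img, grid=8, thresh=0.5):
        h, w = img.shape
        bh, bw = h // grid, w // grid
        feats = []
        for i in range(grid):
            for j in range(grid):
                block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                feats += [block.mean(), (block > thresh).mean()]   # intensity + fraction above threshold
        return np.array(feats)

    rng = np.random.default_rng(0)
    images = rng.random((40, 64, 64))                   # synthetic stand-ins for CT slices
    X = np.vstack([bin_image_features(im) for im in images])
    y = rng.integers(0, 4, 40)                          # 4 synthetic classes
    print(SVC(kernel="rbf").fit(X[:30], y[:30]).score(X[30:], y[30:]))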
- Published
- 2022
- Full Text
- View/download PDF
148. Neural network identification of water pipe blockage from smart embedded passive acoustic measurements.
- Author
-
Baronti, Luca, Castellani, Marco, Hefft, Daniel, and Alberini, Federico
- Subjects
ACOUSTIC measurements ,DATA binning ,FOURIER analysis ,FOURIER transforms ,WATER supply ,ACOUSTIC emission ,HEAT pipes ,PRESSURE drop (Fluid dynamics) - Abstract
This study presents a new neural network approach to identify the presence and type of obstruction in pipes from measurements of passive acoustic emissions. Inserts were used in a fluid re‐circulation loop to simulate different types of blockage at various flow rates within the turbulent regime, generating patterns of acoustic emissions. The data were pre‐processed using Fourier analysis, and two candidate sets of statistical descriptors were extracted for each measurement. The first set used average and spread of the Fourier transform amplitudes, the second used data binning to obtain a concise representation of the spectrum of amplitudes. Experimental evidence showed the second set of descriptors was the most suitable to train the neural network to recognize with accuracy the presence and type of blockage. The obtained results compare favourably with the literature, indicating that the approach provides a tool to enhance process monitoring in water supply systems, in particular early detection of upstream blockages. [ABSTRACT FROM AUTHOR]
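A short sketch of the second descriptor set: the Fourier amplitude spectrum of an acoustic record is split into a fixed number of bins, and the per-bin mean amplitudes form a concise feature vector (the sampling rate, tone, and bin count below are illustrative, not the study's settings).

    # Concise spectral descriptor: per-bin mean of the Fourier amplitude spectrum.
    import numpy as np

    def binned_spectrum(signal, n_bins=16):
        amps = np.abs(np.fft.rfft(signal))
        amps = amps[: len(amps) // n_bins * n_bins]     # trim so the bins are equal width
        return amps.reshape(n_bins, -1).mean(axis=1)

    fs = 20_000
    t = np.arange(0, 1, 1 / fs)
    record = np.sin(2 * np.pi * 1200 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
    print(binned_spectrum(record))                      # 16-value descriptor of the spectrum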
- Published
- 2022
- Full Text
- View/download PDF
149. Assessing Data Independence and Normality for SPC Charts.
- Author
-
VALENTINE, PATRICK
- Subjects
QUALITY control charts ,TIME series analysis ,STATISTICAL process control ,DATA binning ,CIRCUIT board manufacturing ,MANUFACTURING processes
- Published
- 2023
150. Probing the non-thermal emission geometry of AR Sco via optical phase-resolved polarimetry.
- Author
-
du Plessis, Louis, Venter, Christo, Wadiasingh, Zorawar, Harding, Alice K, Buckley, David A H, Potter, Stephen B, and Meintjes, P J
- Subjects
- *
POLARIMETRY , *SYNCHROTRON radiation , *DATA binning , *GEOMETRY , *PULSARS - Abstract
AR Sco is a binary system that contains a white dwarf and a red dwarf. The rotation rate of the white dwarf (WD) has been observed to slow down, analogous to rotation-powered radio pulsars; it has thus been dubbed a 'white dwarf pulsar'. We previously fit the traditional radio pulsar rotating vector model to the linearly polarized optical data from this source, constraining the system geometry as well as the WD mass. Using a much more extensive data set, we now explore the application of the same model to binary phase-resolved optical polarimetric data, thought to be the result of non-thermal synchrotron radiation, and derive the magnetic inclination angle α and the observer angle ζ at different orbital phases. We obtain an ∼10° variation in α and an ∼30° variation in ζ over the orbital period. The variation patterns in these two parameters are robust, regardless of the binning and epoch of data used. We speculate that the observer is detecting radiation from an asymmetric emission region that is a stable structure over several orbital periods. The success of this simple model lastly implies that the pitch angles of the particles are small and that the pulsed, non-thermal emission originates relatively close to the WD surface. [ABSTRACT FROM AUTHOR]
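For reference, the standard rotating vector model expression that such fits rely on relates the polarization position angle ψ to the rotational phase φ through the magnetic inclination α and observer angle ζ; the sketch below evaluates the generic formula and is not the authors' fitting code.

    # Standard rotating vector model: position angle versus rotational phase.
    import numpy as np

    def rvm_psi(phi, alpha, zeta, phi0=0.0, psi0=0.0):
        """All angles in radians; phi is the rotational phase."""
        num = np.sin(alpha) * np.sin(phi - phi0)
        den = np.sin(zeta) * np.cos(alpha) - np.cos(zeta) * np.sin(alpha) * np.cos(phi - phi0)
        return psi0 + np.arctan2(num, den)

    phase = np.linspace(0, 2 * np.pi, 8)
    print(np.degrees(rvm_psi(phase, alpha=np.radians(80), zeta=np.radians(60))))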
- Published
- 2022
- Full Text
- View/download PDF