9,791 results for "Filtering"
Search Results
2. Multi-filter-Based Image Pre-processing on Face Mask Detection Using Custom CNN Architecture
- Author
-
Kayali, Devrim, Dimililer, Kamil, Howlett, Robert J., Series Editor, Jain, Lakhmi C., Series Editor, Pal, Sankar K., editor, Thampi, Sabu M., editor, and Abraham, Ajith, editor
- Published
- 2025
- Full Text
- View/download PDF
3. Discrete Wavelet Transformation for the Sensitive Detection of Ultrashort Radiation Pulse With Radiation-Induced Acoustics
- Author
-
Van Bergen, Rick, Sun, Leshan, Pandey, Prabodh Kumar, Wang, Siqi, Bjegovic, Kristina, Gonzalez, Gilberto, Chen, Yong, Lopata, Richard, and Xiang, Liangzhong
- Subjects
Biomedical and Clinical Sciences ,Oncology and Carcinogenesis ,Biomedical Imaging ,Radiation Oncology ,Cancer ,Affordable and Clean Energy ,Discrete wavelet transforms ,Filtering ,Low-pass filters ,X-rays ,Transducers ,Wavelet analysis ,Plasmas ,Discrete wavelet filtering ,radiation induced acoustics ,radiation monitoring - Abstract
Radiation-induced acoustics (RIA) shows promise in advancing radiological imaging and radiotherapy dosimetry methods. However, RIA signals often require extensive averaging to achieve reasonable signal-to-noise ratios, which increases patient radiation exposure and limits real-time applications. Therefore, this paper proposes a discrete wavelet transform (DWT) based filtering approach to denoise the RIA signals and avoid extensive averaging. The algorithm was benchmarked against low-pass filters and tested on various types of RIA sources, including low-energy X-rays, high-energy X-rays, and protons. The proposed method significantly reduced the required averages (1000 times less averaging for low-energy X-ray RIA, 32 times less averaging for high-energy X-ray RIA, and 4 times less averaging for proton RIA) and demonstrated robustness in filtering signals from different sources of radiation. The coif5 wavelet in conjunction with the sqtwolog threshold selection algorithm yielded the best results. The proposed DWT filtering method enables high-quality, automated, and robust filtering of RIA signals, with a performance similar to low-pass filtering, aiding in the clinical translation of radiation-based acoustic imaging for radiology and radiation oncology.
- Published
- 2024
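As an illustration of the denoising scheme described in entry 3, the sketch below applies wavelet shrinkage with the coif5 wavelet and the universal ("sqtwolog") threshold using PyWavelets. The synthetic trace and all parameters are invented for the example; the paper's actual pipeline and tuning may differ.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_denoise(signal, wavelet="coif5", level=None):
    """Wavelet-shrinkage denoising with the universal ('sqtwolog') threshold."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Universal threshold: sigma * sqrt(2 * log(n)), i.e. the 'sqtwolog' rule.
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Hypothetical usage: a noisy single-shot RIA-like trace (1000 samples over 20 us).
t = np.linspace(0.0, 2e-5, 1000)
trace = np.exp(-((t - 1e-5) ** 2) / (2 * (5e-7) ** 2)) + 0.3 * np.random.randn(t.size)
clean = dwt_denoise(trace)
```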
4. TargetCall: eliminating the wasted computation in basecalling via pre-basecalling filtering.
- Author
-
Cavlak, Meryem Banu, Singh, Gagandeep, Alser, Mohammed, Firtina, Can, Lindegger, Joël, Sadrosadati, Mohammad, Mansouri Ghiasi, Nika, Alkan, Can, and Mutlu, Onur
- Subjects
DEEP learning ,SEQUENCE analysis ,GENOMICS ,GENOMES - Abstract
Basecalling is an essential step in nanopore sequencing analysis where the raw signals of nanopore sequencers are converted into nucleotide sequences, that is, reads. State-of-the-art basecallers use complex deep learning models to achieve high basecalling accuracy. This makes basecalling computationally inefficient and memory-hungry, bottlenecking the entire genome analysis pipeline. However, for many applications, most reads do not match the reference genome of interest (i.e., target reference) and thus are discarded in later steps in the genomics pipeline, wasting the basecalling computation. To overcome this issue, we propose TargetCall, the first pre-basecalling filter to eliminate the wasted computation in basecalling. TargetCall's key idea is to discard reads that will not match the target reference (i.e., off-target reads) prior to basecalling. TargetCall consists of two main components: (1) LightCall, a lightweight neural network basecaller that produces noisy reads, and (2) Similarity Check, which labels each of these noisy reads as on-target or off-target by matching them to the target reference. Our thorough experimental evaluations show that TargetCall 1) improves the end-to-end basecalling runtime performance of the state-of-the-art basecaller by 3.31 × while maintaining high (98.88 %) recall in keeping on-target reads, 2) maintains high accuracy in downstream analysis, and 3) achieves better runtime performance, throughput, recall, precision, and generality than prior works. TargetCall is available at https://github.com/CMU-SAFARI/TargetCall. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Macrofaunal Biodiversity and Patch Mosaics on the Deep Gulf of Mexico Seafloor.
- Author
-
McClain, Craig R., Hanks, Granger, Copley, Samuel, Bryant, S. River D., and Schubert, Brian A.
- Subjects
- *
SPATIAL arrangement , *GRAIN size , *SPECIES distribution , *BENTHOS , *ENVIRONMENTAL sciences - Abstract
The deep‐sea benthos often exhibit exceptional biodiversity. The patch‐mosaic hypothesis proposes that this deep‐sea diversity arises from varied microhabitats with prolonged temporal persistence filtering for distinctive communities, thereby increasing beta‐diversity. This study investigated environmental, community, and macrofaunal species turnover at four deep‐sea sites (~2000 m) in the northern Gulf of Mexico. Using precise small‐scale sampling with a ROV, we analyzed patterns across spatial scales from centimeters to approximately 400 km among 67 sediment cores. We examined the relationships between sedimentary carbon, sediment grain size, and macrofaunal alpha‐ and beta‐diversity. Subsequently, we explored the role of these environmental properties and their spatial arrangement in shaping communities and species distributions. We observed a consistent trend where the overall abundance and diversity of a community increased with higher carbon but decreased with increasing grain size. Substantial faunal turnover was observed among cores, even at centimeter scales, with the contribution of centimeter‐scale spatial distance rivaling that of 100‐km scales in faunal dissimilarity. Similar to alpha‐diversity, beta‐diversity exhibited strong correlations with sediment carbon and grain size. The observed random spatial structure in grain size and carbon appears to translate into randomness in both community and species distribution. These findings align with the patch‐mosaic model, underscoring the complexity of deep‐sea ecosystems, and suggest an intricate relationship between sedimentary attributes, faunal composition, and spatial arrangement in the deep‐sea benthos, shedding light on the mechanisms driving biodiversity in seemingly homogeneous environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Developed Internet of Vehicles Architecture: Communication, Big Data and Route Determination Perspectives.
- Author
-
Alblehai, Fahad and Said, Omar
- Subjects
- *
INTELLIGENT transportation systems , *BIG data , *DATA transmission systems , *INTERNET of things , *INTERNET - Abstract
Internet of vehicles (IoV) has become an important research topic due to its direct effect on the development of intelligent transportation systems (ITS). There are many challenges in the IoV environment, such as communication, big data, and best-route assignment. In this paper, an effective IoV architecture is proposed. This architecture has four main objectives. The first objective is to utilize a powerful communication scheme in which three tiers of coverage tools (Internet, satellite, and high-altitude platform, HAP) are utilized, so that vehicles maintain a continuous connection to the IoV environment everywhere. The second objective is to apply filtering and prioritization mechanisms to reduce the detrimental effects of IoV big data. The third objective is to assign the best route for a vehicle after determining its real-time priority. The fourth objective is to analyze the IoV data. The performance of the proposed architecture is measured in a simulation environment created with the NS-3 package. The simulation results show that the proposed IoV architecture has a positive impact on the IoV environment according to the performance metrics: energy, success rate of route assignment, filtering effect, data loss, delay, usage of coverage tools, and throughput. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. A Ramanujan subspace and dynamic time warping and adaptive singular value decomposition combined denoising method for low signal‐to‐noise ratio surface microseismic monitoring data in hydraulic fracturing.
- Author
-
Wang, Xu‐Lin, Zhang, Jian‐Zhong, and Huang, Zhong‐Lai
- Subjects
- *
HILBERT-Huang transform , *HYDRAULIC fracturing , *BANDPASS filters , *ELECTRONIC data processing , *DATA quality , *RANDOM noise theory , *SINGULAR value decomposition - Abstract
Surface microseismic monitoring is widely used in hydraulic fracturing. Real‐time monitoring data collected during fracturing can be used to perform surface‐microseismic localization, which aids in assessing the effects of fracturing and provides guidance for the process. The accuracy of localization critically depends on the quality of monitoring data. However, the signal‐to‐noise ratio of the data is often low due to strong coherent and random noise, making denoising essential for processing surface monitoring data. To suppress noise more effectively, this paper introduces a novel denoising method that integrates the Ramanujan subspace with dynamic time warping and adaptive singular value decomposition. The new method consists of two steps: First, a Ramanujan subspace is constructed to suppress periodic noise. Then, dynamic time warping and adaptive singular value decomposition are applied to eliminate remaining coherent and random noise. The method has been evaluated using both synthetic and field data, and its performance is compared with traditional microseismic denoising techniques, including bandpass filtering and empirical mode decomposition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. How to Find Accurate Terrain and Canopy Height GEDI Footprints in Temperate Forests and Grasslands?
- Author
-
Moudrý, Vítězslav, Prošek, Jiří, Marselis, Suzanne, Marešová, Jana, Šárovcová, Eliška, Gdulová, Kateřina, Kozhoridze, Giorgi, Torresani, Michele, Rocchini, Duccio, Eltner, Anette, Liu, Xiao, Potůčková, Markéta, Šedová, Adéla, Crespo‐Peremarch, Pablo, Torralba, Jesús, Ruiz, Luis A., Perrone, Michela, Špatenková, Olga, and Wild, Jan
- Subjects
- *
ECOSYSTEM dynamics , *ECOLOGICAL disturbances , *TEMPERATE forests , *TERRAIN mapping , *LAND cover - Abstract
Filtering approaches on Global Ecosystem Dynamics Investigation (GEDI) data differ considerably across existing studies and it is yet unclear which method is the most effective. We conducted an in‐depth analysis of GEDI's vertical accuracy in mapping terrain and canopy heights across three study sites in temperate forests and grasslands in Spain, California, and New Zealand. We started with unfiltered data (2,081,108 footprints) and describe a workflow for data filtering using Level 2A parameters and for geolocation error mitigation. We found that retaining observations with at least one detected mode eliminates noise more effectively than sensitivity. The accuracy of terrain and canopy height observations depended considerably on the number of modes, beam sensitivity, landcover, and terrain slope. In dense forests, a minimum sensitivity of 0.9 was required, while in areas with sparse vegetation, sensitivity of 0.5 sufficed. Sensitivity greater than 0.9 resulted in an overestimation of canopy height in grasslands, especially on steep slopes, where high sensitivity led to the detection of multiple modes. We suggest excluding observations with more than five modes in grasslands. We found that the most effective strategy for filtering low‐quality observations was to combine the quality flag and difference from TanDEM‐X, striking an optimal balance between eliminating poor‐quality data and preserving a maximum number of high‐quality observations. Positional shifts improved the accuracy of GEDI terrain estimates but not of vegetation height estimates. Our findings guide users to an easy way of processing GEDI footprints, enabling the use of the most accurate data and leading to more reliable applications. Plain Language Summary: The Global Ecosystem Dynamics Investigation (GEDI) collected terrain and canopy observations using laser altimetry. The quality of terrain and canopy observations is influenced by acquisition conditions and land (cover) characteristics. Consequently, a considerable amount of GEDI observations is discarded as noise, and further filtering is necessary to retain only high‐quality observations. Our objective was to assess how environmental and acquisition characteristics influence the accuracy of terrain and canopy height of GEDI observations. Although the main objective of the GEDI mission was to map forests, we also focused on grasslands. GEDI serves not only as an essential source of information on canopy height but also provides accurate terrain observations. Furthermore, it is important to know that GEDI does not overestimate the height of low vegetation as this can result in an overestimation of carbon storage. We distinguished four steps in the GEDI data processing: (a) removal of noise observations, (b) removal of low‐quality data, (c) effect of additional acquisition characteristics, and (d) mitigation of geolocation error. We found that the accuracy of terrain and canopy height observations depended considerably on the number of detected modes, beam sensitivity, landcover, and terrain slope. Key Points: (1) Terrain is crucial for estimates of canopy height; however, only 20%–30% of footprints have an absolute error of terrain estimates <3 m. (2) The quality of terrain and canopy height estimates depends on the interplay of number of modes, sensitivity, land cover, and terrain slope. (3) Noise and low‐quality footprints can be successfully removed using number of modes, sensitivity, quality flag and difference from TanDEM‐X. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
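The filtering rules reported in entry 8 translate naturally into a tabular workflow. The sketch below is a rough pandas rendering of those rules; the column names and the 50 m DEM-difference cutoff are illustrative assumptions, not the study's exact Level 2A field names or thresholds.

```python
import pandas as pd

def filter_gedi(df: pd.DataFrame, landcover: str, max_dem_diff_m: float = 50.0) -> pd.DataFrame:
    """Filter a table of GEDI L2A footprints following the rules in entry 8."""
    out = df[df["num_detected_modes"] >= 1]          # drop noise-only returns
    out = out[out["quality_flag"] == 1]              # keep flagged-good footprints
    # Plausibility check against a reference DEM (TanDEM-X in the study).
    out = out[(out["elev_lowestmode"] - out["tandemx_elev"]).abs() < max_dem_diff_m]
    if landcover == "forest":
        out = out[out["beam_sensitivity"] >= 0.9]    # dense canopy needs high sensitivity
    else:  # grassland / sparse vegetation
        out = out[out["beam_sensitivity"] >= 0.5]
        out = out[out["num_detected_modes"] <= 5]    # many modes on grass suggest noise
    return out
```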
9. Directing Attention Shapes Learning in Adults but Not Children.
- Author
-
Tandoc, Marlie C., Nadendla, Bharat, Pham, Theresa, and Finn, Amy S.
- Subjects
- *
SELECTIVITY (Psychology) , *COGNITIVE development , *CHILDREN'S drawings , *ADULTS , *LEARNING , *INCIDENTAL learning - Abstract
Children sometimes learn distracting information better than adults do, perhaps because of the development of selective attention. To understand this potential link, we ask how the learning of children (aged 7–9 years) and the learning of adults differ when information is the directed focus of attention versus when it is not. Participants viewed drawings of common objects and were told to attend to the drawings (Experiment 1: 42 children, 35 adults) or indicate when shapes (overlaid on the drawings) repeated (Experiment 2: 53 children, 60 adults). Afterward, participants identified fragments of these drawings as quickly as possible. Adults learned better than children when directed to attend to the drawings; however, when drawings were task irrelevant, children showed better learning than adults in the first half of the test. And although directing attention to the drawings improved learning in adults, children learned the drawings similarly across experiments regardless of whether the drawings were the focus of the task or entirely irrelevant. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Enhancing visual seismocardiography in noisy environments with adaptive bidirectional filtering for Cardiac Health Monitoring.
- Author
-
N, Geetha, Bhat, C. Rohith, TR, Mahesh, and Yimer, Temesgen Engida
- Subjects
- *
ADAPTIVE filters , *MATRIX decomposition , *NOISE control , *NONNEGATIVE matrices , *ARTIFICIAL implants - Abstract
Background: Wearable sensors have revolutionized cardiac health monitoring, with seismocardiography (SCG) at the forefront due to its non-invasive nature. However, substantial motion artefacts, primarily induced by walking, have hindered the translation of SCG-based medical applications. Our technique, Adaptive Bidirectional Filtering (ABF), overcomes these challenges by removing motion-induced noise from SCG signals. ABF combines a noise-cancellation algorithm built on Redundant Multi-Scale Wavelet Decomposition (RMWD) with a bidirectional filtering framework to achieve optimal signal quality. Methodology: The ABF technique is a two-stage process that diminishes motion artefacts. In the first stage, RMWD identifies heart-associated signals and isolates samples at the related frequencies. Subsequently, the adaptive bidirectional filter operates in two dimensions: it applies time-frequency masking to eliminate temporal noise while using non-negative matrix decomposition to jointly enforce spatial correlation and reduce dorsoventral vibration. The main component that distinguishes it from other filters is a recursive structure that adapts to motion, using vertical-axis accelerometer data to better differentiate genuine SCG signals from motion artefacts. Outcome: Our empirical tests demonstrate exceptional signal improvement with the ABF approach. Heart rate estimation reached an r-squared value of 0.95 at −20 dB SNR, significantly outperforming the baseline, which ranged from 0.1 to 0.85. The motion-artifact reduction also remains effective at an SNR of −22 dB, and no ECG input is required. The method can be seamlessly integrated into noisy environments, enhancing ECG filtering, automatic beat detection, and rhythm interpretation even in highly variable conditions. ABF filters out up to 97% of motion-related noise components within the SCG signal from implantable devices. This advancement is poised to become an integral part of routine patient monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
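One ingredient of the ABF pipeline in entry 10 is time-frequency masking. The sketch below shows a generic magnitude-threshold STFT mask in SciPy as a stand-in for that stage only; ABF's actual masking, wavelet decomposition, and NMF steps are not reproduced, and all signal parameters are invented.

```python
import numpy as np
from scipy.signal import stft, istft

def tf_mask_denoise(x, fs, keep_db=-30.0):
    """Crude time-frequency masking: zero STFT bins far below the peak magnitude.
    This is only one ingredient of ABF, which also uses wavelets and NMF."""
    f, t, Z = stft(x, fs=fs, nperseg=256)
    mag = np.abs(Z)
    mask = mag > mag.max() * 10.0 ** (keep_db / 20.0)
    _, x_hat = istft(Z * mask, fs=fs)
    return x_hat[: len(x)]

# Hypothetical usage: an SCG-like trace at 500 Hz with broadband motion noise.
fs = 500.0
t = np.arange(0, 10, 1 / fs)
scg = np.sin(2 * np.pi * 1.2 * t) * np.exp(-((t % 0.833) ** 2) / 0.01)
noisy = scg + 0.5 * np.random.randn(t.size)
clean = tf_mask_denoise(noisy, fs)
```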
11. Interpolation-Filtering Method for Image Improvement in Digital Holography.
- Author
-
Kozlov, Alexander V., Cheremkhin, Pavel A., Svistunov, Andrey S., Rodin, Vladislav G., Starikov, Rostislav S., and Evtikhiev, Nikolay N.
- Subjects
SPECKLE interference ,DIGITAL image processing ,IMAGE reconstruction ,HOLOGRAPHY ,INTERPOLATION - Abstract
Digital holography is actively used for the characterization of objects and 3D-scenes, tracking changes in medium parameters, 3D shape reconstruction, detection of micro-object positions, etc. To obtain high-quality images of objects, it is often necessary to register a set of holograms or to select a noise suppression method for specific experimental conditions. In this paper, we propose a method to improve filtering in digital holography. The method requires only a single hologram. It utilizes interpolation upscaling of the reconstructed image size, filtering (e.g., median, BM3D, or NLM), and interpolation back to the original image size. The method is validated on computer-generated and experimentally registered digital holograms. Interpolation method coefficients and filter parameters were analyzed. Compared with direct digital image filtering, quality is improved by up to 1.4 times in speckle contrast on the registered holograms and by up to 17% and 29% in SSIM and NSTD values on the computer-generated holograms. The proposed method is convenient in practice since it requires only small changes to standard filters while improving the quality of the reconstructed image. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
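Entry 11's interpolation-filtering idea (upscale, filter, downscale) can be prototyped in a few lines. The following sketch uses SciPy with a median filter as the middle step; the scale factor, filter choice, and the random stand-in image are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import zoom, median_filter

def interp_filter(img, scale=2, size=3, order=3):
    """Upscale -> filter -> downscale, per the interpolation-filtering idea.
    Median is used here; the paper also considers BM3D and NLM filters."""
    up = zoom(img, scale, order=order)               # interpolation upscaling
    filtered = median_filter(up, size=size)          # noise suppression at the finer grid
    return zoom(filtered, 1.0 / scale, order=order)  # back to the original size

# Hypothetical usage on a speckled amplitude reconstruction.
rec = np.random.rand(256, 256)  # stand-in for a reconstructed hologram image
clean = interp_filter(rec)
```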
12. Reinforcement learning-based estimation for spatio-temporal systems.
- Author
-
Mowlavi, Saviz and Benosman, Mouhacine
- Subjects
- *
REINFORCEMENT learning , *PARTIAL differential equations , *PARAMETRIC equations , *NAVIER-Stokes equations , *BURGERS' equation - Abstract
State estimators such as Kalman filters compute an estimate of the instantaneous state of a dynamical system from sparse sensor measurements. For spatio-temporal systems, whose dynamics are governed by partial differential equations (PDEs), state estimators are typically designed based on a reduced-order model (ROM) that projects the original high-dimensional PDE onto a computationally tractable low-dimensional space. However, ROMs are prone to large errors, which negatively affects the performance of the estimator. Here, we introduce the reinforcement learning reduced-order estimator (RL-ROE), a ROM-based estimator in which the correction term that takes in the measurements is given by a nonlinear policy trained through reinforcement learning. The nonlinearity of the policy enables the RL-ROE to compensate efficiently for errors of the ROM, while still taking advantage of the imperfect knowledge of the dynamics. Using examples involving the Burgers and Navier-Stokes equations with parametric uncertainties, we show that in the limit of very few sensors, the trained RL-ROE outperforms a Kalman filter designed using the same ROM and yields accurate instantaneous estimates of high-dimensional states corresponding to unknown initial conditions and physical parameter values. The RL-ROE opens the door to lightweight real-time sensing of systems governed by parametric PDEs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
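The RL-ROE of entry 12 keeps the familiar observer structure but swaps the Kalman gain for a learned policy. The sketch below shows that structure generically: with a linear policy it reduces to a steady-state Kalman-style filter, and a trained nonlinear policy would simply replace the lambda. The two-state ROM and gain values are made up.

```python
import numpy as np

def rollout_estimator(A, C, ys, policy, x0):
    """Generic ROM-based estimator: propagate the ROM and add a correction
    computed from the innovation. With policy = lambda v: K @ v this is a
    steady-state Kalman-style filter; the RL-ROE replaces it with a
    nonlinear policy trained through reinforcement learning."""
    x = x0.copy()
    estimates = []
    for y in ys:
        innovation = y - C @ x          # measurement mismatch
        x = A @ x + policy(innovation)  # ROM dynamics + learned correction
        estimates.append(x.copy())
    return np.array(estimates)

# Hypothetical 2-state ROM with one sensor and a linear 'policy' as baseline.
A = np.array([[0.99, 0.10], [0.0, 0.95]])
C = np.array([[1.0, 0.0]])
K = np.array([[0.5], [0.1]])
ys = [np.array([1.0]), np.array([0.9]), np.array([0.8])]
xs = rollout_estimator(A, C, ys, lambda v: K @ v, np.zeros(2))
```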
13. An intelligent white blood cell detection and multi-class classification using fine optimal DCRNet.
- Author
-
Krishna Prasad, P. R., Reddy, Edara Sreenivasa, and Chandra Sekharaiah, K.
- Subjects
LEUCOCYTES ,FEATURE selection ,SUPPORT vector machines ,DEEP learning ,K-nearest neighbor classification ,BIONICS - Abstract
The major goal of this research is to develop a Deep Learning (DL) based automatic identification and classification of white blood cells (WBCs) with high accuracy and efficiency. The first phase of the research is pre-processing, accomplished by the Improved Median Wiener Filter (IMWF), which effectively eliminates noise. The image is resized to a standard size before filtering. Segmentation is performed using the Color Balancing Binary Threshold (CBBT) algorithm to separate the WBCs from the non-relevant background and improve classification performance. Features such as shape, texture, and color of the WBCs are extracted from the segmented images. Finally, classification is performed by a fine optimal deep convolution residual network (Fine Optimal DCRNet). In addition, a bionic model is introduced to improve classification accuracy. The datasets used in this research are BCCD and LISC. The performance of the proposed model is validated against existing methods, including Support Vector Machine (SVM), K-Nearest Neighbor (KNN), VGG-16, VGG-19, ResNet-50, DenseNet-121, DenseNet-169, Inception-V3, InceptionResNet-V2, Xception, MobileNet-224, Mobile NasNet, Tree, Naive Bayes, Ensemble active contour model, k-means clustering, and handcraft and deep learned features-scale-invariant feature transform (HCDL-SIFT), in terms of Accuracy, Precision, Recall, Specificity, F-score, Relative Distance Error (RDE), Over-Segmentation Rate (OSR), Under-Segmentation Rate (USR) and Overall Error Rate (OER). For the LISC dataset, the detection model attains 99%, 98%, 98%, 99%, 98%, 1.143, 0.0125, 0.056 and 0.125, respectively. For the BCCD dataset, apart from the RDE, OSR, USR and OER metrics, the performance is evaluated as 98%, 96%, 98%, 99% and 97%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Optimized Deep LSTM-Based Multi-Object Tracking with Occlusion Handling Mechanism.
- Author
-
Sokashe-Ghorpade, Shital V. and Pardeshi, Sanjay Arjunsing
- Subjects
- *
OPTIMIZATION algorithms , *VIDEO surveillance , *VIDEO monitors , *COMPUTER vision , *PATIENT monitoring , *TRACKING algorithms , *OBJECT tracking (Computer vision) - Abstract
Multi-object tracking is a basic computer vision task with a huge range of real-life applications, from medical video monitoring to surveillance. The goal is to locate multiple objects in a scene, maintain their identities over time, and construct trajectories for analysis. However, this is a complex task because of issues like occlusions, complicated object dynamics, and variations in object appearance. In this research, a new technique named TPRO-based Deep LSTM is developed for tracking multiple objects with occlusion handling. Videos are taken as input and frames are extracted from each video. Each frame undergoes pre-processing with filtering to eliminate noise. Using sparse Fuzzy c-Means (FCM) and Local Optimal-Oriented Pattern (LOOP) features, objects are localized. Visual and spatial tracking are then combined for hybrid tracking. A second-derivative model and a neighborhood search model are used to perform visual tracking, followed by occlusion handling. Concurrently, spatial tracking is performed with a Deep Long Short-Term Memory (Deep LSTM) network, whose weights and biases are assigned by the Taylor Poor Rich Optimization (TPRO) algorithm. TPRO is obtained by unifying the Taylor series with the Poor and Rich Optimization algorithm. By combining visual and spatial tracking, the final tracked output is generated. The devised method achieves the highest Multiple Object Tracking Precision (MOTP) of 88.9%, the smallest tracking distance (TD) of 4.185, an average MOTP of 0.889, an average TD of 4.201, and the highest tracking number (TN) of 14. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Mix‐Max: A Content‐Aware Operator for Real‐Time Texture Transitions.
- Author
-
Fournier, Romain and Sauvage, Basile
- Subjects
- *
DISTRIBUTION (Probability theory) , *VIDEO processing , *ALGORITHMS - Abstract
Mixing textures is a basic and ubiquitous operation in data‐driven algorithms for real‐time texture generation and rendering. It is usually performed either by linear blending, or by cutting. We propose a new mixing operator which encompasses and extends both, creating more complex transitions that adapt to the texture's contents. Our mixing operator takes as input two or more textures along with two or more priority maps, which encode how the texture patterns should interact. The resulting mixed texture is defined pixel‐wise by selecting the maximum of both priorities. We show that it integrates smoothly into two widespread applications: transition between two different textures, and texture synthesis that mixes pieces of the same texture. We provide constant‐time and parallel evaluation of the resulting mix over square footprints of MIP‐maps, making our operator suitable for real‐time rendering. We also develop a micro‐priority model, inspired by micro‐geometry models in rendering, which represents sub‐pixel priorities by a statistical distribution, and which allows for tuning between sharp cuts and smooth blend. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
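The core of the Mix-Max operator in entry 15, selecting at each pixel the input with the maximum priority, is nearly a one-liner in NumPy. The sketch below covers only that pixel-wise selection for grayscale inputs; the paper's MIP-map footprint evaluation and micro-priority model are not shown.

```python
import numpy as np

def mix_max(textures, priorities):
    """Pixel-wise mix: at each pixel, output the texture whose priority map
    attains the maximum (works for two or more inputs)."""
    pri = np.stack(priorities)       # (n, H, W)
    tex = np.stack(textures)         # (n, H, W)
    winner = np.argmax(pri, axis=0)  # per-pixel index of the highest priority
    return np.take_along_axis(tex, winner[None], axis=0)[0]

# Hypothetical usage: mix two grayscale textures with random priority maps.
t1, t2 = np.random.rand(64, 64), np.random.rand(64, 64)
p1, p2 = np.random.rand(64, 64), np.random.rand(64, 64)
mixed = mix_max([t1, t2], [p1, p2])
```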
16. KISS—Keep It Static SLAMMOT—The Cost of Integrating Moving Object Tracking into an EKF-SLAM Algorithm.
- Author
-
Mandel, Nicolas, Kompe, Nils, Gerwin, Moritz, and Ernst, Floris
- Subjects
- *
TRACKING algorithms , *RESEARCH personnel , *DYNAMIC models , *ROBOTICS , *ALGORITHMS - Abstract
The treatment of moving objects in simultaneous localization and mapping (SLAM) is a key challenge in contemporary robotics. In this paper, we propose an extension of the EKF-SLAM algorithm that incorporates moving objects into the estimation process, which we term KISS. We have extended the robotic vision toolbox to analyze the influence of moving objects in simulations. Two linear and one nonlinear motion models are used to represent the moving objects. The observation model remains the same for all objects. The proposed model is evaluated against an implementation of the state-of-the-art formulation for moving object tracking, DATMO. We investigate increasing numbers of static landmarks and dynamic objects to demonstrate the impact on the algorithm and compare it with cases where a moving object is mistakenly integrated as a static landmark (false negative) and a static landmark as a moving object (false positive). In practice, distances to dynamic objects are important, and we propose the safety–distance–error metric to evaluate the difference between the true and estimated distances to a dynamic object. The results show that false positives have a negligible impact on map distortion and ATE with increasing static landmarks, while false negatives significantly distort maps and degrade performance metrics. Explicitly modeling dynamic objects not only performs comparably in terms of map distortion and ATE but also enables more accurate tracking of dynamic objects with a lower safety–distance–error than DATMO. We recommend that researchers model objects with uncertain motion using a simple constant position model, hence we name our contribution Keep it Static SLAMMOT. We hope this work will provide valuable data points and insights for future research into integrating moving objects into SLAM algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
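Entry 16's safety-distance-error metric compares true and estimated distances to a dynamic object. A minimal per-timestep version might look like the following; how the paper aggregates it over a trajectory is not specified here, so this is only a plausible reading.

```python
import numpy as np

def safety_distance_error(robot_xy, true_obj_xy, est_obj_xy):
    """Absolute difference between the true and the estimated
    robot-to-object distances at one timestep."""
    d_true = np.linalg.norm(true_obj_xy - robot_xy)
    d_est = np.linalg.norm(est_obj_xy - robot_xy)
    return abs(d_true - d_est)

# Hypothetical usage:
err = safety_distance_error(np.array([0.0, 0.0]),
                            np.array([3.0, 4.0]),   # true object at 5 m
                            np.array([2.8, 4.1]))   # estimated position
```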
17. Management strategy evaluation operating model conditioning: a swordfish case study.
- Author
-
Rosa, Daniela, Mosqueira, Iago, Fu, Dan, and Coelho, Rui
- Subjects
- *
SWORDFISH , *FISHERY management , *TUNA fisheries , *FISH populations , *FACTORIAL experiment designs - Abstract
Evaluation of fish stock status is a key step for fisheries management. Tuna Regional Fisheries Management Organizations (t-RFMOs) are moving towards management strategy evaluation (MSE), a process that combines science and policy and depends on technical aspects, developed by scientists, designed to meet management objectives established by managers and other stakeholders. In the Indian Ocean, the current management advice for swordfish (Xiphias gladius) is based on an ensemble of 24 models considering four areas of uncertainty about the stock dynamics. There is an ongoing MSE process for swordfish, and this paper describes the methodology being applied for the conditioning of the operating model (OM), including model selection and validation. In the MSE, nine sources of uncertainty were considered, each being characterized by 2–3 levels. A partial factorial design was employed to reduce the number of models from a full factorial design to those needed to encompass the overall uncertainty. A selection and validation process was carried out, filtering models that converged, showed good predictive skills, and provided plausible estimates. Overall, the estimated spawning stock biomass (SSB) relative to SSB at maximum sustainable yield (MSY), and fishing mortality (F) relative to FMSY encompasses the estimates of the stock assessment ensemble at the most optimist area of the distribution. The MSE for swordfish is an ongoing process that is expected to provide more robust management advice in the future. Further developments to the OM can still occur, but the methods presented herein can be applied to this, or other species, MSE processes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Open benchmark for filtering techniques in entity resolution.
- Author
-
Neuhof, Franziska, Fisichella, Marco, Papadakis, George, Nikoletos, Konstantinos, Augsten, Nikolaus, Nejdl, Wolfgang, and Koubarakis, Manolis
- Abstract
Entity Resolution identifies entity profiles that represent the same real-world object. A brute-force approach that considers all pairs of entities suffers from quadratic time complexity. To ameliorate this issue, filtering techniques reduce the search space to highly similar and, thus, highly likely matches. Such techniques come in two forms: (i) blocking workflows group together entity profiles with identical or similar signatures, and (ii) nearest-neighbor workflows convert all entity profiles into vectors and detect the ones closest to every query entity. The main techniques of these two types have never been juxtaposed in a systematic way and, thus, their relative performance is unknown. To cover this gap, we perform an extensive experimental study that investigates the relative performance of the main representatives per type over numerous established datasets. Comparing techniques of different types in a fair way is a non-trivial task, because the configuration parameters of each approach have a significant impact on its performance, but are hard to fine-tune. We consider a plethora of parameter configurations per method, optimizing each workflow with respect to recall and precision in both schema-agnostic and schema-aware settings. The experimental results provide novel insights into the effectiveness, the time efficiency, the memory footprint, and the scalability of the considered techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
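Of the two filtering families surveyed in entry 18, blocking is the simpler to sketch. The toy token-blocking function below groups profiles that share any token and emits the resulting candidate pairs; real blocking workflows add signature schemes, block purging, and meta-blocking on top.

```python
from collections import defaultdict

def token_blocking(profiles):
    """Toy schema-agnostic token blocking: profiles sharing any token land in
    the same block, shrinking the quadratic comparison space."""
    blocks = defaultdict(set)
    for pid, text in profiles.items():
        for token in set(text.lower().split()):
            blocks[token].add(pid)
    # Candidate pairs: all pairs co-occurring in some block (deduplicated).
    pairs = set()
    for ids in blocks.values():
        ids = sorted(ids)
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                pairs.add((ids[i], ids[j]))
    return pairs

profiles = {1: "John Smith NYC", 2: "J. Smith New York", 3: "Alice Jones Paris"}
print(token_blocking(profiles))  # {(1, 2)}: profiles 1 and 2 share 'smith'
```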
19. Evaluating the Impact of Filtering Techniques on Deep Learning-Based Brain Tumour Segmentation.
- Author
-
Rosa, Sofia, Vasconcelos, Verónica, and Caridade, Pedro J. S. B.
- Subjects
CONTRAST-enhanced magnetic resonance imaging ,GREENHOUSE gases ,BRAIN tumors ,CONVOLUTIONAL neural networks ,SYMPTOMS - Abstract
Gliomas are a common and aggressive kind of brain tumour that is difficult to diagnose due to their infiltrative development, variable clinical presentation, and complex behaviour, making them an important focus in neuro-oncology. Segmentation of brain tumour images is critical for improving diagnosis, prognosis, and treatment options. Manually segmenting brain tumours is time-consuming and challenging, and automatic segmentation algorithms can significantly improve the accuracy and efficiency of tumour identification, thus improving treatment planning and outcomes. Deep learning-based tumour segmentation has advanced significantly in the last few years. This study evaluates the impact of four denoising filters, namely median, Gaussian, anisotropic diffusion, and bilateral, on tumour detection and segmentation. The U-Net architecture is applied for the segmentation of 3064 contrast-enhanced magnetic resonance images from 233 patients diagnosed with meningiomas, gliomas, and pituitary tumours. The results of this work demonstrate that bilateral filtering yields superior outcomes, proving to be a robust and computationally efficient approach in brain tumour segmentation. This method reduces the processing time by 12 epochs, which in turn contributes to lowering greenhouse gas emissions by optimizing computational resources and minimizing energy consumption. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
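The four denoising filters compared in entry 19 are all available off the shelf except anisotropic diffusion, which is short enough to write directly. The sketch below applies all four to a stand-in image; the Perona-Malik parameters and the sigmas are illustrative, not the study's settings.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter
from skimage.restoration import denoise_bilateral

def perona_malik(img, n_iter=10, kappa=0.1, gamma=0.2):
    """Minimal anisotropic (Perona-Malik) diffusion with exponential conductance."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences towards the four neighbours.
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # The edge-stopping function suppresses diffusion across strong edges.
        u += gamma * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u

img = np.random.rand(128, 128)  # stand-in for a normalized MRI slice
med = median_filter(img, size=3)
gau = gaussian_filter(img, sigma=1.0)
ani = perona_malik(img)
bil = denoise_bilateral(img, sigma_color=0.1, sigma_spatial=2)
```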
20. Enhancing visual seismocardiography in noisy environments with adaptive bidirectional filtering for Cardiac Health Monitoring
- Author
-
Geetha N, C. Rohith Bhat, Mahesh TR, and Temesgen Engida Yimer
- Subjects
Visual seismocardiography ,Motion ,Filtering ,Decomposition ,Signal ,Reduction ,Computer applications to medicine. Medical informatics ,R858-859.7 - Abstract
Background: Wearable sensors have revolutionized cardiac health monitoring, with seismocardiography (SCG) at the forefront due to its non-invasive nature. However, substantial motion artefacts, primarily induced by walking, have hindered the translation of SCG-based medical applications. Our technique, Adaptive Bidirectional Filtering (ABF), overcomes these challenges by removing motion-induced noise from SCG signals. ABF combines a noise-cancellation algorithm built on Redundant Multi-Scale Wavelet Decomposition (RMWD) with a bidirectional filtering framework to achieve optimal signal quality. Methodology: The ABF technique is a two-stage process that diminishes motion artefacts. In the first stage, RMWD identifies heart-associated signals and isolates samples at the related frequencies. Subsequently, the adaptive bidirectional filter operates in two dimensions: it applies time-frequency masking to eliminate temporal noise while using non-negative matrix decomposition to jointly enforce spatial correlation and reduce dorsoventral vibration. The main component that distinguishes it from other filters is a recursive structure that adapts to motion, using vertical-axis accelerometer data to better differentiate genuine SCG signals from motion artefacts. Outcome: Our empirical tests demonstrate exceptional signal improvement with the ABF approach. Heart rate estimation reached an r-squared value of 0.95 at −20 dB SNR, significantly outperforming the baseline, which ranged from 0.1 to 0.85. The motion-artifact reduction also remains effective at an SNR of −22 dB, and no ECG input is required. The method can be seamlessly integrated into noisy environments, enhancing ECG filtering, automatic beat detection, and rhythm interpretation even in highly variable conditions. ABF filters out up to 97% of motion-related noise components within the SCG signal from implantable devices. This advancement is poised to become an integral part of routine patient monitoring.
- Published
- 2024
- Full Text
- View/download PDF
21. Reinforcement learning-based estimation for spatio-temporal systems
- Author
-
Saviz Mowlavi and Mouhacine Benosman
- Subjects
Estimation ,Filtering ,Partial differential equations ,Model reduction ,Reinforcement learning ,Medicine ,Science - Abstract
State estimators such as Kalman filters compute an estimate of the instantaneous state of a dynamical system from sparse sensor measurements. For spatio-temporal systems, whose dynamics are governed by partial differential equations (PDEs), state estimators are typically designed based on a reduced-order model (ROM) that projects the original high-dimensional PDE onto a computationally tractable low-dimensional space. However, ROMs are prone to large errors, which negatively affects the performance of the estimator. Here, we introduce the reinforcement learning reduced-order estimator (RL-ROE), a ROM-based estimator in which the correction term that takes in the measurements is given by a nonlinear policy trained through reinforcement learning. The nonlinearity of the policy enables the RL-ROE to compensate efficiently for errors of the ROM, while still taking advantage of the imperfect knowledge of the dynamics. Using examples involving the Burgers and Navier-Stokes equations with parametric uncertainties, we show that in the limit of very few sensors, the trained RL-ROE outperforms a Kalman filter designed using the same ROM and yields accurate instantaneous estimates of high-dimensional states corresponding to unknown initial conditions and physical parameter values. The RL-ROE opens the door to lightweight real-time sensing of systems governed by parametric PDEs.
- Published
- 2024
- Full Text
- View/download PDF
22. Multimaterial filtering applied to the topology optimization of a permanent magnet synchronous machine
- Author
-
Cherrière, Théodore, Hlioui, Sami, Louf, François, and Laurent, Luc
- Published
- 2024
- Full Text
- View/download PDF
23. Records of the 5 March 2021 Raoul Island transoceanic tsunami around the Pacific Ocean.
- Author
-
Roger, Jean
- Abstract
At 19:28 on 4 March 2021 (UTC), a Mw 8.1 megathrust earthquake occurred close to Raoul Island, New Zealand, on the Kermadec Subduction Zone. Closely following two other tsunamigenic ruptures, it triggered a tsunami that was quickly recorded by oceanic and coastal gauges. Analysis of 158 filtered sea-level records revealed that the tsunami had not only a regional impact but was recorded by most gauges in the Pacific Ocean and eight gauges in the Indian and southern Atlantic Oceans, in good agreement with modelling results of tsunami propagation. Careful determination of tsunami arrival times and of the first and largest wave amplitudes at each station supports the finding that the first wave is almost never the highest one. The maximum amplitude is about three times higher than the first wave amplitude, and the largest wave arrives a median of ∼3 h after the first arrival. In addition, the recorded height of the largest wave of this small but transoceanic tsunami is not related to the distance from the earthquake epicentre. Finally, this tsunami, which occurred simultaneously with several localised storms in the Pacific Ocean, offers an opportunity to highlight the difficulty of distinguishing between the different types of waves. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Generation and tuning of notched bands in wideband THz antennas using graphene/metal strips.
- Author
-
Ali, Mohd Farman, Sahu, Shitij, Singh, Ravinder, and Varshney, Gaurav
- Subjects
- *
MONOPOLE antennas , *ANTENNAS (Electronics) , *GRAPHENE , *RADIATORS , *ENGRAVING - Abstract
A technique is implemented for obtaining tunable band-notch/filtering features in terahertz (THz) monopole wideband antennas. A parasitic metal strip is engraved in the antenna radiator, which creates the band-notch characteristics in the wideband response. Tunability of the created notched band can be obtained by engraving a non-resonant U-shaped graphene strip in the radiator. Changing the material properties of the graphene strip alters the nature of field confinement in the antenna structure, enabling tunability of the filtering frequency band. Moreover, multiple notch bands can be obtained by carefully selecting the materials of the parasitic element and the U-shaped graphene strip. Thus, the antenna response can be configured for dual- and triple-band filtering operation by varying the materials used for the band-notch strips and the U-shaped graphene sheet. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Mathematical Morphology on Directional Data.
- Author
-
Hauch, Konstantin and Redenbach, Claudia
- Abstract
We define morphological operators and filters for directional images whose pixel values are unit vectors. This requires an ordering relation for unit vectors which is obtained by using depth functions. They provide a centre-outward ordering with respect to a specified centre vector. We apply our operators on synthetic directional images and compare them with classical morphological operators for grey-scale images. As application examples, we enhance the fault region in a compressed glass foam and segment misaligned fibre regions of glass fibre-reinforced polymers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
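Entry 25's morphology for unit-vector images needs only an ordering of directions. The sketch below uses cosine similarity to a chosen centre vector as a simple centre-outward depth, which is a deliberately crude stand-in for the depth functions in the paper; dilation picks the deepest vector in each window (erosion would take the argmin).

```python
import numpy as np

def directional_dilation(field, centre, size=3):
    """Morphological dilation for a unit-vector image: within each window,
    output the vector with the greatest depth, here measured as cosine
    similarity to a chosen centre direction (a simple depth function)."""
    H, W, _ = field.shape
    depth = field @ centre                 # (H, W) centre-outward ordering
    pad = size // 2
    out = np.empty_like(field)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(i - pad, 0), min(i + pad + 1, H)
            j0, j1 = max(j - pad, 0), min(j + pad + 1, W)
            win = depth[i0:i1, j0:j1]
            k = np.unravel_index(np.argmax(win), win.shape)  # argmin for erosion
            out[i, j] = field[i0 + k[0], j0 + k[1]]
    return out

# Hypothetical usage: noisy 2D unit vectors mostly pointing along +x.
v = np.random.randn(32, 32, 2) * 0.2 + np.array([1.0, 0.0])
v /= np.linalg.norm(v, axis=-1, keepdims=True)
smoothed = directional_dilation(v, centre=np.array([1.0, 0.0]))
```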
26. Antithetic multilevel particle filters.
- Author
-
Jasra, Ajay, Maama, Mohamed, and Ombao, Hernando
- Subjects
EULER method ,DISCRETIZATION methods ,DIFFUSION coefficients ,COST - Abstract
In this paper we consider the filtering of partially observed multidimensional diffusion processes that are observed regularly at discrete times. This is a challenging problem which requires the use of advanced numerical schemes based upon time-discretization of the diffusion process and then the application of particle filters. Perhaps the state-of-the-art method for moderate-dimensional problems is the multilevel particle filter of Jasra et al. (SIAM J. Numer. Anal. 55 (2017), 3068–3096). This is a method that combines multilevel Monte Carlo and particle filters. The approach in that article is based intrinsically upon an Euler discretization method. We develop a new particle filter based upon the antithetic truncated Milstein scheme of Giles and Szpruch (Ann. Appl. Prob. 24 (2014), 1585–1620). We show empirically for a class of diffusion problems that, for $\epsilon>0$ given, the cost to produce a mean squared error (MSE) of $\mathcal{O}(\epsilon^2)$ in the estimation of the filter is $\mathcal{O}(\epsilon^{-2}\log(\epsilon)^2)$. In the case of multidimensional diffusions with non-constant diffusion coefficient, the method of Jasra et al. (2017) requires a cost of $\mathcal{O}(\epsilon^{-2.5})$ to achieve the same MSE. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Denoiseit: denoising gene expression data using rank based isolation trees
- Author
-
Jaemin Jeon, Youjeong Suk, Sang Cheol Kim, Hye-Yeong Jo, Kwangsoo Kim, and Inuk Jung
- Subjects
Gene ,Noise ,Filtering ,Matrix factorization ,Computer applications to medicine. Medical informatics ,R858-859.7 ,Biology (General) ,QH301-705.5 - Abstract
Background: Selecting informative genes or eliminating uninformative ones before any downstream gene expression analysis is a standard task with great impact on the results. A carefully curated gene set significantly enhances the likelihood of identifying meaningful biomarkers. Method: In contrast to conventional forward gene search methods that focus on selecting highly informative genes, we propose a backward search method, DenoiseIt, that aims to remove potential outlier genes, yielding a robust gene set with reduced noise. The gene set constructed by DenoiseIt is expected to capture biologically significant genes while pruning irrelevant ones to the greatest extent possible, thereby also enhancing the quality of downstream comparative gene expression analysis. DenoiseIt utilizes non-negative matrix factorization in conjunction with isolation forests to identify outlier rank features and remove their associated genes. Results: DenoiseIt was applied to both bulk and single-cell RNA-seq data collected from TCGA and a COVID-19 cohort to show that it proficiently identified and removed genes exhibiting expression anomalies confined to specific samples rather than a known group. DenoiseIt also reduced the level of technical noise while preserving a higher proportion of biologically relevant genes compared to existing methods. The DenoiseIt software is publicly available on GitHub at https://github.com/cobi-git/DenoiseIt
- Published
- 2024
- Full Text
- View/download PDF
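DenoiseIt (entry 27) couples NMF rank features with isolation forests. The sketch below shows one plausible reading with scikit-learn: factorize, flag outlier rank features by their sample-activity profiles, and drop genes dominated by them. The random data, the rank, and the "dominant loading" association rule are assumptions; the paper's actual criterion is more involved.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.ensemble import IsolationForest

# Hypothetical expression matrix: 500 genes x 40 samples, non-negative.
X = np.abs(np.random.randn(500, 40))

rank = 10
nmf = NMF(n_components=rank, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)   # gene loadings per rank feature (500 x 10)
H = nmf.components_        # rank-feature activity per sample (10 x 40)

# Flag outlier rank features by their sample-activity profiles (-1 = outlier).
flags = IsolationForest(random_state=0).fit_predict(H)
bad = np.where(flags == -1)[0]

# Remove genes dominated by outlier rank features (a rough association rule).
dominant = W.argmax(axis=1)
keep = ~np.isin(dominant, bad)
X_denoised = X[keep]
```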
28. A New Performance Metric to Evaluate Filter Feature Selection Methods in Text Classification
- Author
-
Rasim Çekik and Mahmut Kaya
- Subjects
Text classification ,feature selection ,filtering ,Electronic computers. Computer science ,QA75.5-76.95 - Abstract
High dimensionality and sparsity are the primary issues in text classification. The most effective way to address them is to select a subset of features using feature selection approaches, the most common and effective of which are filter techniques. Performance metrics such as Micro-F1, Macro-F1, and Accuracy are used to evaluate filter methods on datasets; such metrics, however, depend on a classification algorithm. Moreover, filter techniques evaluate the information of individual features without considering the relationships between features, so the actual performance of the filter technique may not be determined, and existing metrics can be insufficient for testing the validity of a proposed method. This study therefore proposes a novel performance metric called Selection Error (SE) to determine the actual performance of filter techniques. The Selection Error metric allows the information value of the selected features to be analyzed more accurately than existing methods, without relying on a classifier. The feature selection performance of the filtering approaches was evaluated on six different datasets with both the Selection Error and traditional performance metrics. The results show a strong relationship between the proposed performance metric and the classification performance metrics. The Selection Error aims to contribute significantly to the literature by demonstrating the success of filter feature selection methods independently of classifier performance.
- Published
- 2024
- Full Text
- View/download PDF
29. Analysis of an ambient seismic noise source in Tianjin
- Author
-
Ye Li, Wei Guo, Ke Xu, Fuyang Cao, Chaoqun Ma, and Xiaoyuan Xu
- Subjects
noise source ,time frequency domain characteristics ,influence scope ,polarization analysis ,filtering ,Geology ,QE1-996.5 ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
While checking continuous waveform records, it was found that some stations in Tianjin, such as EWZ, were significantly affected by an unknown noise, with EWZ station the most seriously affected. Long-term observation also showed that the noise disappears for several days each year at irregular intervals. The noise is present year-round, does not vary between day and night, and has existed for many years. It differs from typical environmental background noise and has some fixed characteristics. This article studies the noise from several aspects: its intrinsic characteristics, its spatial extent, its spectral characteristics, and its impact on earthquake records. The research shows that the noise is not affected by natural factors such as weather and season, and that its amplitude decreases with increasing distance from EWZ station. The dominant frequency of the noise ranges from 1 to 2 Hz. The noise has a relatively wide footprint and is recorded by 12 surrounding stations, the farthest being YGZ station, approximately 58 km from EWZ station. Polarization analysis shows that the noise propagates with a specific directionality, so it is preliminarily judged to originate from a fixed source. The noise degrades seismic records, reducing the seismic phase pick rate and sometimes completely submerging phases; frequency-domain filtering can mitigate it.
- Published
- 2024
- Full Text
- View/download PDF
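Since the noise in entry 29 is concentrated at 1-2 Hz, the frequency filtering the authors mention can be sketched as a zero-phase Butterworth band-stop in SciPy. The sampling rate, filter order, and synthetic record are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandstop_1_2hz(trace, fs):
    """Zero-phase Butterworth band-stop over the noise's 1-2 Hz dominant band."""
    sos = butter(4, [1.0, 2.0], btype="bandstop", fs=fs, output="sos")
    return sosfiltfilt(sos, trace)

# Hypothetical usage: a 100 Hz seismogram contaminated by the fixed-source noise.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
rec = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)  # noise at 1.5 Hz
filtered = bandstop_1_2hz(rec, fs)
```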
30. The Filter and the Viewer: On Audience Discretion in Film Noir
- Author
-
Steven G. Smith
- Subjects
Film noir ,filtering ,genre ,Romanticism ,style ,viewing ,Motion pictures ,PN1993-1999 ,Philosophy (General) ,B1-5802 - Abstract
To the French critics who originally labelled certain films noir it seemed that a class of Hollywood products had gone darker during the war years – as though a dark filter had been placed over the lens. Films were not designed or marketed as noir, and retrospectively noir's status as a genre is still unsettled. Yet there is widespread interest today in experiencing diverse films as noir, and even in using a Noir Filter in Instagram and video games. Pursuing the filter clue, the noir experience can be thought of as subjection to a dark filtering of narrative. Image filtering styles can help to resolve not only the puzzle of noir's quasi-genre status but also an issue of general interest in aesthetic experience. The use of image filters makes a distinctively powerful contribution to the experience of a work or genre, or to the cultural dominance of an aesthetic regime, due to the unobtrusive formative role it plays in the economy of experience.
- Published
- 2024
- Full Text
- View/download PDF
31. Multi-Person Action Recognition Based on Millimeter-Wave Radar Point Cloud.
- Author
-
Dang, Xiaochao, Fan, Kai, Li, Fenfang, Tang, Yangyang, Gao, Yifei, and Wang, Yue
- Subjects
DEEP learning ,HUMAN-computer interaction ,POINT cloud ,LEARNING ,CORPORATE bonds - Abstract
Featured Application: This research has important applications in areas such as smart furniture and human-computer interaction. It will bring people a more efficient and comfortable living experience as well as a new smart experience. Human action recognition has many application prospects in human-computer interaction, smart furniture, healthcare, and other fields. Traditional human motion recognition methods have limitations in privacy protection, complex environments, and multi-person scenarios. Millimeter-wave radar has attracted attention due to its ultra-high resolution and all-weather operation. Many existing studies have discussed the application of millimeter-wave radar in single-person scenarios, but only some have addressed action recognition in multi-person scenarios. This paper uses a commercial millimeter-wave radar device for human action recognition in multi-person scenarios. To address the problems of severe interference and complex target segmentation in multi-person scenarios, we propose a filtering method based on millimeter-wave inter-frame differences to filter the collected human point cloud data. We then use the DBSCAN algorithm and the Hungarian algorithm to segment the targets, and finally input the data into a neural network for classification. In experimental tests with the five actions we defined, the classification accuracy of the proposed system reaches 92.2% in multi-person scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
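The segmentation-and-association stage of entry 31 maps onto standard library calls: DBSCAN for clustering the point cloud and the Hungarian algorithm for frame-to-frame assignment. The sketch below shows only that stage; the radar-specific inter-frame difference filter and the classification network are omitted, and all parameters are invented.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.optimize import linear_sum_assignment

def segment_targets(points, eps=0.5, min_samples=10):
    """Cluster one radar frame into per-person centroids; label -1 is noise."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return [points[labels == k].mean(axis=0) for k in set(labels) if k != -1]

def associate(prev_centroids, curr_centroids):
    """Hungarian assignment of current clusters to previous tracks by distance."""
    cost = np.linalg.norm(np.array(prev_centroids)[:, None, :]
                          - np.array(curr_centroids)[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))

# Hypothetical usage: two consecutive frames of (x, y, z) points around one person.
f0 = np.random.randn(200, 3) * 0.2 + np.array([[1.0, 0.0, 0.0]])
f1 = np.random.randn(200, 3) * 0.2 + np.array([[1.1, 0.0, 0.0]])
matches = associate(segment_targets(f0), segment_targets(f1))
```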
32. A Defect Detection Method of Mixed Wafer Map Using Neighborhood Path Filtering Clustering Algorithm.
- Author
-
Hou, Xingna, Qin, Guanxiang, Lu, Ying, Yi, Mulan, and Chen, Shouhong
- Subjects
- *
ALGORITHMS , *SILHOUETTES , *PROBABILITY theory - Abstract
As the wafer fabrication process becomes more complex, the probability of mixed-type defective wafer maps is constantly increasing. It is therefore necessary to perform effective filtering and denoising of mixed-type defective wafer maps to facilitate identification. We propose a Neighborhood Path Filtering Clustering (NPFC) algorithm in this paper. To determine the number of clusters in the wafer map, we combine the Silhouette coefficient clustering index with a Mahalanobis-distance similarity index to propose a clustering similarity index, SD. A compactness index between clusters is then calculated to judge whether clusters should be merged, yielding the final clustering result. The greatest advantage of this method is that it avoids the influence of algorithm parameter settings on filtering effectiveness, while still effectively identifying clusters of arbitrary shapes. The experimental results show that, compared with median filtering, K-means clustering, and adjacency clustering, the NPFC algorithm achieves good filtering and clustering results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. A mathematical characterization of minimally sufficient robot brains.
- Author
-
Sakcak, Basak, Timperi, Kalle G, Weinstein, Vadim, and LaValle, Steven M
- Subjects
- *
MACHINE learning , *INFORMATION theory , *RECOMMENDER systems , *INFORMATION filtering , *INFORMATION storage & retrieval systems - Abstract
This paper addresses the lower limits of encoding and processing the information acquired through interactions between an internal system (robot algorithms or software) and an external system (robot body and its environment) in terms of action and observation histories. Both are modeled as transition systems. We want to know the weakest internal system that is sufficient for achieving passive (filtering) and active (planning) tasks. We introduce the notion of an information transition system (ITS) for the internal system which is a transition system over a space of information states that reflect a robot's or other observer's perspective based on limited sensing, memory, computation, and actuation. An ITS is viewed as a filter and a policy or plan is viewed as a function that labels the states of this ITS. Regardless of whether internal systems are obtained by learning algorithms, planning algorithms, or human insight, we want to know the limits of feasibility for given robot hardware and tasks. We establish, in a general setting, that minimal information transition systems (ITSs) exist up to reasonable equivalence assumptions, and are unique under some general conditions. We then apply the theory to generate new insights into several problems, including optimal sensor fusion/filtering, solving basic planning tasks, and finding minimal representations for modeling a system given input-output relations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Sensorless HOT-FSMO Control Based on an Improved Tracking Differentiator.
- Author
-
陈志鹏, 张会林, 程文彬, and 王忠洋
- Subjects
- *
SLIDING mode control , *DYNAMICAL systems , *PERMANENT magnet motors , *SPEED , *ROTORS - Abstract
To address the severe sliding mode chattering and phase delay of the traditional first-order flux sliding mode observer, a HOT-FSMO (High-Order Terminal Flux Sliding Mode Observer) based on an improved tracking differentiator is proposed in this study. A high-order sliding mode control law is designed to suppress chattering in the flux linkage estimation, and a new low-chattering tracking differentiator containing a terminal attractor replaces the conventional low-pass filter, achieving accurate tracking of the flux linkage and rotor position as well as the filtering function. The experimental results show that, compared with the traditional linear flux linkage sliding mode observer, the HOT-FSMO based on the novel tracking differentiator effectively reduces sliding mode chattering and position estimation error, while speeding up the system response and improving the system's dynamic performance under different speed and load conditions. [ABSTRACT FROM AUTHOR]
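For reference, a classical Han-style discrete tracking differentiator is sketched below; the paper's improved variant adds a terminal attractor whose exact form is not given in the abstract, so this is background rather than the authors' design:

```python
# Sketch of a classical Han-style discrete tracking differentiator. The
# paper's improved version adds a terminal attractor not specified in the
# abstract. r is the speed factor and h the integration step (assumed values).
import numpy as np

def fhan(x1, x2, r, h):
    """Time-optimal control function used by Han's tracking differentiator."""
    d, d0 = r * h, r * h * h
    y = x1 + h * x2
    a0 = np.sqrt(d * d + 8.0 * r * abs(y))
    a = x2 + 0.5 * (a0 - d) * np.sign(y) if abs(y) > d0 else x2 + y / h
    return -r * np.sign(a) if abs(a) > d else -r * a / d

def track(signal, r=100.0, h=1e-3):
    """Return (filtered value, derivative estimate) pairs for each sample."""
    x1 = x2 = 0.0
    out = []
    for v in signal:
        x1, x2 = x1 + h * x2, x2 + h * fhan(x1 - v, x2, r, h)
        out.append((x1, x2))
    return out
```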
- Published
- 2024
- Full Text
- View/download PDF
35. Sliding and Adaptive Windows to Improve Change Mining in Process Variability.
- Author
-
Hmami, Asmae, Sbai, Hanae, Baina, Karim, and Fredj, Mounia
- Subjects
- *
ALGORITHMS , *COLLECTIONS - Abstract
A configurable process Change Mining approach can detect changes from a collection of event logs and provide details on the unexpected behavior of all process variants of a configurable process. The strength of Change Mining lies in its ability to serve both conformance checking and enhancement purposes; users can simultaneously detect changes and ensure process conformance within a single, integrated framework. In prior research, we introduced a configurable process Change Mining algorithm which, combined with our proposed preprocessing and change log generation methods, forms a complete framework for detecting and recording changes in a collection of event logs. Testing the framework on synthetic data revealed limitations in detecting changes in different types of variable fragments. Consequently, we enhanced the preprocessing approach by applying a filtering algorithm based on sliding and adaptive windows. Our improved approach has been tested on various types of variable fragments to demonstrate its efficacy in enhancing Change Mining performance. [ABSTRACT FROM AUTHOR]
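A minimal sketch of the sliding-window idea follows, assuming change is flagged when the activity-frequency distributions of adjacent windows diverge; the actual algorithm and its adaptive windowing are in the paper:

```python
# Generic illustration (an assumption, not the authors' algorithm) of
# sliding-window change detection over an event log: compare activity
# distributions in adjacent windows and flag a change when they diverge.
from collections import Counter

def _dist(window):
    """Relative activity frequencies within one window."""
    counts = Counter(window)
    total = len(window)
    return {a: c / total for a, c in counts.items()}

def detect_changes(trace, window=50, threshold=0.3):
    """Return indices where the activity mix shifts between adjacent windows."""
    changes = []
    for i in range(window, len(trace) - window, window):
        left, right = _dist(trace[i - window:i]), _dist(trace[i:i + window])
        keys = set(left) | set(right)
        l1 = sum(abs(left.get(k, 0.0) - right.get(k, 0.0)) for k in keys)
        if l1 > threshold:
            changes.append(i)
    return changes
```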
- Published
- 2024
- Full Text
- View/download PDF
36. A continuation method for fitting a bandlimited curve to points in the plane.
- Author
-
Zhao, Mohan and Serkh, Kirill
- Abstract
In this paper, we describe an algorithm for fitting an analytic and bandlimited closed or open curve to interpolate an arbitrary collection of points in ℝ². The main idea is to smooth the parametrization of the curve by iteratively filtering the Fourier or Chebyshev coefficients of both the derivative of the arc-length function and the tangential angle of the curve and applying smooth perturbations, after each filtering step, until the curve is represented by a reasonably small number of coefficients. The algorithm produces a curve passing through the set of points to an accuracy of machine precision, after a limited number of iterations. It costs O(N log N) operations at each iteration, provided that the number of discretization nodes is N. The resulting curves are smooth, affine invariant, and visually appealing and do not exhibit any ringing artifacts. The bandwidths of the constructed curves are much smaller than those of curves constructed by previous methods. We demonstrate the performance of our algorithm with several numerical experiments. [ABSTRACT FROM AUTHOR]
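The core smoothing step can be illustrated by truncating the Fourier coefficients of a closed curve's parametrization; this is a simplified sketch, since the actual algorithm filters the arc-length derivative and tangential angle and applies smooth perturbations after each filtering step so the curve still interpolates the given points:

```python
# Simplified sketch (an assumed reading of the abstract): smooth a closed
# curve by truncating the Fourier coefficients of its parametrization. The
# interpolation-restoring perturbation step of the real algorithm is omitted.
import numpy as np

def lowpass_closed_curve(points, keep=16):
    """points: (N, 2) samples of a closed curve; returns a smoothed (N, 2)."""
    z = points[:, 0] + 1j * points[:, 1]            # complex representation
    c = np.fft.fft(z)
    freqs = np.fft.fftfreq(len(z), d=1.0 / len(z))  # integer wavenumbers
    c[np.abs(freqs) > keep] = 0.0                   # discard high frequencies
    z_smooth = np.fft.ifft(c)
    return np.column_stack([z_smooth.real, z_smooth.imag])
```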
- Published
- 2024
- Full Text
- View/download PDF
37. Optimising occurrence data in species distribution models: sample size, positional uncertainty, and sampling bias matter.
- Author
-
Moudrý, Vítězslav, Bazzichetto, Manuele, Remelgado, Ruben, Devillers, Rodolphe, Lenoir, Jonathan, Mateo, Rubén G., Lembrechts, Jonas J., Sillero, Neftalí, Lecours, Vincent, Cord, Anna F., Barták, Vojtěch, Balej, Petr, Rocchini, Duccio, Torresani, Michele, Arenas‐Castro, Salvador, Man, Matěj, Prajzlerová, Dominika, Gdulová, Kateřina, Prošek, Jiří, and Marchetto, Elisa
- Subjects
- *
SPECIES distribution , *ECOLOGICAL niche , *SAMPLE size (Statistics) , *ECOLOGICAL models , *SPATIAL filters - Abstract
Species distribution models (SDMs) have proven valuable in filling gaps in our knowledge of species occurrences. However, despite their broad applicability, SDMs exhibit critical shortcomings due to limitations in species occurrence data. These limitations include, in particular, issues related to sample size, positional uncertainty, and sampling bias. In addition, it is widely recognised that the quality of SDMs as well as the approaches used to mitigate the impact of the aforementioned data limitations depend on species ecology. While numerous studies have evaluated the effects of these data limitations on SDM performance, a synthesis of their results is lacking. However, without a comprehensive understanding of their individual and combined effects, our ability to predict the influence of these issues on the quality of modelled species–environment associations remains largely uncertain, limiting the value of model outputs. In this paper, we review studies that have evaluated the effects of sample size, positional uncertainty, sampling bias, and species ecology on SDMs outputs. We build upon their findings to provide recommendations for the critical assessment of species data intended for use in SDMs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. A Visible Light Positioning Technique Based on Artificial Neural Network.
- Author
-
do Nascimento, Mateus Rabelo Fonseca, Coutinho, Olange Guerson Gonçalves, Olivi, Leonardo Rocha, and Soares, Guilherme Marcio
- Subjects
ARTIFICIAL neural networks ,VISIBLE spectra ,DAYLIGHT ,OPTICAL communications ,LUMINOUS flux ,LED lamps ,NANOPOSITIONING systems - Abstract
This paper presents an indoor positioning strategy based on Visible Light Communication that relies on LED luminaires as transmitters, whose luminous flux is modulated at different frequencies, and a light sensor as a receiver. Then, a previously trained Artificial Neural Network (ANN) uses the illuminance signal gathered by the receiver as input to estimate the sensor's position. The main contribution of the technique is that the ANN is trained by using an illuminance estimator based on the lighting distribution of the luminaires, which is obtained through the IES file provided by the luminaire's manufacturer, without the need to collect data from the environment. In this work, the designed illuminance estimator is validated by comparing it to the well-known commercial software DIALux and Relux. The algorithm's setting and the performance evaluation of the ANN are explained. Then, the impact of the lighting uniformity and the environment's number of divisions on the accuracy of the results is analyzed. Error analyses are also made by adding uncertainty to the illuminance measurements obtained by the sensor. Finally, the work is compared to several papers in the field. [ABSTRACT FROM AUTHOR]
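A minimal sketch of the positioning stage follows, with a toy inverse-square illuminance model standing in for the paper's IES-file-based estimator; the luminaire positions, room size, and mounting height are assumptions:

```python
# Hypothetical sketch of the positioning stage: an MLP maps per-luminaire
# illuminance features to a 2D position. A toy inverse-square light model
# replaces the paper's IES-file-based illuminance estimator; the luminaire
# layout, room size, and mounting height are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
lum_xy = np.array([[1.0, 1.0], [1.0, 4.0], [4.0, 1.0], [4.0, 4.0]])  # assumed
pos = rng.uniform(0.0, 5.0, size=(2000, 2))           # training positions (m)
d2 = ((pos[:, None, :] - lum_xy[None, :, :]) ** 2).sum(-1) + 4.0  # + height^2
X = 1000.0 / d2                                       # toy illuminance (lux)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
model.fit(X, pos)
print(model.predict(X[:1]), pos[0])                   # estimate vs. ground truth
```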
- Published
- 2024
- Full Text
- View/download PDF
39. Wideband Filtering Dielectric Resonator Antenna Based on Dual Mode Slotline Resonator.
- Author
-
Wang Chuanyun, Jiang Xiaofeng, Zhang Yonghua, Hu Weikang, and Fan Qilei
- Subjects
DIELECTRIC resonator antennas ,RESONATOR filters ,ANTENNA design ,WIRELESS communications ,ANTENNAS (Electronics) ,ANECHOIC chambers - Abstract
[Objective] To meet the application requirements of miniaturized and multifunctional dielectric resonator antennas (DRA) in wireless communication systems, such as the Internet of Vehicles (IoV). [Method] A wideband filtering dielectric resonator antenna based on a microstrip-dual mode slotline resonator feeding structure is proposed. In the antenna design, the traditional slotline of the microstrip-slotline coupled feeding structure is replaced by a dual mode slotline resonator, forming a novel microstrip-dual mode slotline resonator coupling feeding network. The dual mode slotline resonator serves as an energy coupler in the feeding network, effectively exciting the TE111 resonant mode of the DRA. At the same time, two slotline resonant modes are also generated to participate in the antenna resonance, broadening the antenna bandwidth. Additionally, by introducing a spur line on the microstrip feedline and exploiting the intrinsic resonance characteristics of the slotline resonator, a radiation null is produced on each side of the antenna passband, yielding a quasi-elliptical filter response in the gain curve. [Result] To verify the performance of the design, a prototype FDRA was fabricated and measured, and the measurement and simulation results were generally consistent. The central frequency of the antenna is 4.12 GHz, the impedance bandwidth is 53.40% (3.02–5.22 GHz), and the in-band flat gain is 5.7 dBi. [Conclusion] The antenna meets the application requirements of wireless communication systems, such as 5G and telematics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
40. High Density 3D Carbon Tube Nanoarray Electrode Boosting the Capacitance of Filter Capacitor.
- Author
-
Chen, Gan, Han, Fangming, Ma, Huachun, Li, Pei, Zhou, Ziyan, Wang, Pengxiang, Li, Xiaoyan, Meng, Guowen, and Wei, Bingqing
- Subjects
- *
CAPACITORS , *ELECTROLYTIC capacitors , *SUPERCAPACITORS , *CHEMICAL vapor deposition , *ELECTRODES , *FREQUENCY response , *ELECTRODE potential - Abstract
Highlights: A novel method is developed for precise control over the structure of 3D anodic aluminum oxide templates, enabling fine-tuning of both the vertical pore diameter and interspace within the templates. 3D carbon tube nanoarrays featuring significantly thinner and denser tubes are constructed as high-quality electrodes for miniaturized filter capacitors. The 3D compactly arranged carbon tube-based capacitor achieves a remarkable specific areal capacitance of 3.23 mF cm⁻² with a phase angle of −80.2° at 120 Hz. Electric double-layer capacitors (EDLCs) with fast frequency response are regarded as small-scale alternatives to the commercial bulky aluminum electrolytic capacitors. Creating carbon-based nanoarray electrodes with precise alignment and smooth ion channels is crucial for enhancing EDLCs' performance. However, controlling the density of macropore-dominated nanoarray electrodes poses challenges in boosting the capacitance of line-filtering EDLCs. Herein, a simple technique to finely adjust the vertical-pore diameter and inter-spacing in three-dimensional nanoporous anodic aluminum oxide (3D-AAO) templates is achieved, and 3D compactly arranged carbon tube (3D-CACT) nanoarrays are created as electrodes for symmetrical EDLCs using nanoporous 3D-AAO template-assisted chemical vapor deposition of carbon. The 3D-CACT electrodes demonstrate a high surface area of 253.0 m² g⁻¹, a D/G band intensity ratio of 0.94, and a C/O atomic ratio of 8. As a result, the high-density 3D-CT nanoarray-based sandwich-type EDLCs demonstrate a record-high specific areal capacitance of 3.23 mF cm⁻² at 120 Hz and an exceptionally fast frequency response due to the vertically aligned and highly ordered nanoarray of closely packed CT units. The 3D-CT nanoarray electrode-based EDLCs could serve as line filters in integrated circuits, aiding power system miniaturization. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. MULTILEVEL PARTICLE FILTERS FOR A CLASS OF PARTIALLY OBSERVED PIECEWISE DETERMINISTIC MARKOV PROCESSES.
- Author
-
JASRA, AJAY, KAMATANI, KENGO, and MAAMA, MOHAMED
- Subjects
- *
MARKOV processes , *ORDINARY differential equations , *DETERMINISTIC processes , *ALGORITHMS - Abstract
In this paper we consider the filtering of a class of partially observed piecewise deterministic Markov processes. In particular, we assume that an ordinary differential equation (ODE) drives the deterministic element and can only be solved numerically via a time discretization. We develop, based upon the approach in Lemaire, Thieullen, and Thomas [Adv. Appl. Probab., 52 (2020), pp. 138--172], a new particle and multilevel particle filter (MLPF) in order to approximate the filter associated with the discretized ODE. We provide a bound on the mean square error associated with the MLPF, which provides guidance on setting the simulation parameters of the algorithm and implies that significant computational gains can be obtained versus using a particle filter. Our theoretical claims are confirmed in several numerical examples. [ABSTRACT FROM AUTHOR]
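For background, multilevel estimators of this kind rest on the standard Monte Carlo telescoping decomposition across discretization levels; this is a generic identity, not the paper's specific construction:

```latex
% Telescoping sum over discretization levels l = 0, ..., L; each correction
% term is estimated with a coupled pair of particle filters.
\mathbb{E}\big[\varphi(X^{(L)})\big]
= \mathbb{E}\big[\varphi(X^{(0)})\big]
+ \sum_{l=1}^{L} \mathbb{E}\big[\varphi(X^{(l)}) - \varphi(X^{(l-1)})\big]
```

Because the coupled differences have small variance at fine levels, fewer samples are needed there, which is the source of the computational gain over a single-level particle filter.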
- Published
- 2024
- Full Text
- View/download PDF
42. Fast Bayesian Record Linkage for Streaming Data Contexts.
- Author
-
Taylor, Ian, Kaplan, Andee, and Betancourt, Brenda
- Subjects
- *
ELECTRONIC health records , *PANEL analysis , *ONLINE education , *ONLINE databases - Abstract
Record linkage is the task of combining records from multiple files which refer to overlapping sets of entities when there is no unique identifying field. In streaming record linkage, files arrive sequentially in time and estimates of links are updated after the arrival of each file. This problem arises in settings such as longitudinal surveys, electronic health records, and online events databases, among others. The challenge in streaming record linkage is to efficiently update parameter estimates as new data arrive. We approach the problem from a Bayesian perspective with estimates calculated from posterior samples of parameters and present methods for updating link estimates after the arrival of a new file that are faster than fitting a joint model with each new data file. In this article, we generalize a two-file Bayesian Fellegi-Sunter model to the multi-file case and propose two methods to perform streaming updates. We examine the effect of the prior distribution on the resulting linkage accuracy as well as the computational tradeoffs between the methods when compared to a Gibbs sampler through simulated and real-world survey panel data. We achieve near-equivalent posterior inference at a small fraction of the compute time. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
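For orientation, the classical Fellegi-Sunter scoring idea that the Bayesian model generalizes can be sketched as per-field log-likelihood-ratio weights; the fields and m/u probabilities below are assumptions for illustration:

```python
# Minimal sketch of Fellegi-Sunter-style scoring: each field comparison
# contributes log(m/u) when it agrees and log((1-m)/(1-u)) when it disagrees.
# The fields and m/u probabilities here are illustrative assumptions.
import math

M = {"name": 0.95, "dob": 0.90, "zip": 0.85}   # P(field agrees | true match)
U = {"name": 0.05, "dob": 0.01, "zip": 0.10}   # P(field agrees | non-match)

def match_weight(rec_a, rec_b):
    """Sum per-field log-likelihood-ratio weights for a record pair."""
    w = 0.0
    for f in M:
        if rec_a[f] == rec_b[f]:
            w += math.log(M[f] / U[f])
        else:
            w += math.log((1 - M[f]) / (1 - U[f]))
    return w  # link the pair when w exceeds a chosen threshold

print(match_weight({"name": "ann", "dob": "1990", "zip": "80521"},
                   {"name": "ann", "dob": "1990", "zip": "80524"}))
```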
- Published
- 2024
- Full Text
- View/download PDF
43. Learning Closed‐Form Equations for Subgrid‐Scale Closures From High‐Fidelity Data: Promises and Challenges.
- Author
-
Jakhar, Karan, Guan, Yifei, Mojgani, Rambod, Chattopadhyay, Ashesh, and Hassanzadeh, Pedram
- Subjects
- *
ARTIFICIAL neural networks , *PARAMETERIZATION , *RAYLEIGH-Benard convection , *LARGE eddy simulation models , *HEAT flux , *MACHINE learning , *EQUATIONS - Abstract
There is growing interest in discovering interpretable, closed‐form equations for subgrid‐scale (SGS) closures/parameterizations of complex processes in Earth systems. Here, we apply a common equation‐discovery technique with expansive libraries to learn closures from filtered direct numerical simulations of 2D turbulence and Rayleigh‐Bénard convection (RBC). Across common filters (e.g., Gaussian, box), we robustly discover closures of the same form for momentum and heat fluxes. These closures depend on nonlinear combinations of gradients of filtered variables, with constants that are independent of the fluid/flow properties and only depend on filter type/size. We show that these closures are the nonlinear gradient model (NGM), which is derivable analytically using Taylor‐series. Indeed, we suggest that with common (physics‐free) equation‐discovery algorithms, for many common systems/physics, discovered closures are consistent with the leading term of the Taylor‐series (except when cutoff filters are used). Like previous studies, we find that large‐eddy simulations with NGM closures are unstable, despite significant similarities between the true and NGM‐predicted fluxes (correlations >0.95). We identify two shortcomings as reasons for these instabilities: in 2D, NGM produces zero kinetic energy transfer between resolved and subgrid scales, lacking both diffusion and backscattering. In RBC, potential energy backscattering is poorly predicted. Moreover, we show that SGS fluxes diagnosed from data, presumed the "truth" for discovery, depend on filtering procedures and are not unique. Accordingly, to learn accurate, stable closures in future work, we propose several ideas around using physics‐informed libraries, loss functions, and metrics. These findings are relevant to closure modeling of any multi‐scale system. Plain Language Summary: Even in state‐of‐the‐art climate models, the effects of many important small‐scale processes cannot be directly simulated due to limited computing power. Thus, these effects are represented using functions called parameterizations. However, many of the current physics‐based parameterizations have major shortcomings, leading to biases and uncertainties in the models' predictions. Recently, there has been substantial interest in learning such parameterizations directly from short but very high‐resolution simulations. Most studies have focused on using deep neural networks, which while leading to successful parameterizations in some cases, are hard to interpret and explain. A few more recent studies have focused on another class of machine‐learning methods that discover equations. This approach has resulted in fully interpretable but unsuccessful parameterizations that produce unphysical results. Here, using widely used test cases, we (a) explain the reasons for these unphysical results, (b) connect the discovered equations to well‐known mathematically derived parameterizations, and (c) present ideas for learning successful parameterizations using equation‐discovery methods. Our main finding is that the common loss functions that match patterns representing effects of small‐scale processes are not enough, as important physical phenomena are not properly learned. Based on this, we have proposed a number of physics‐aware metrics and loss functions for future work. 
Key Points: Subgrid‐scale momentum/heat flux closures discovered using common algorithms are the analytically derivable nonlinear gradient model (NGM). In 2D turbulence/convection, NGM leads to unstable online simulations due to its inability to fully capture key inter‐scale energy transfers. We suggest that physics‐informed loss functions, libraries, metrics, and sparsity selections are needed to discover accurate/stable closures. [ABSTRACT FROM AUTHOR]
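For reference, the nonlinear gradient model named in the abstract takes the following leading-order form under a Taylor expansion for Gaussian and box filters of width Δ̄ (a standard result from the closure literature, not quoted from the paper):

```latex
% Leading-order nonlinear gradient model (Clark-type) for the SGS stress,
% valid for Gaussian/box filters of width \bar{\Delta}:
\tau_{ij} \;\equiv\; \overline{u_i u_j} - \bar{u}_i\,\bar{u}_j
\;\approx\; \frac{\bar{\Delta}^{2}}{12}\,
\frac{\partial \bar{u}_i}{\partial x_k}\,
\frac{\partial \bar{u}_j}{\partial x_k}
```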
- Published
- 2024
- Full Text
- View/download PDF
44. Enhancing ECG readability in LVAD patients: A comparative analysis of Denoising techniques with an emphasis on discrete wavelet transform.
- Author
-
Khalil, Jalkh, Kayla, Shahbazian, Phong, Nguyen, Wael, AlJaroudi, Evan, Hiner, Adnan, AlJaroudi, and Haitham, Hreibe
- Abstract
Electrocardiograms (ECGs) are vital for diagnosing cardiac conditions, but obtaining clean signals in Left Ventricular Assist Device (LVAD) patients is hindered by electromagnetic interference (EMI). Traditional filters have limited efficacy, so there is a current need for an easy and effective method. Raw ECG data were obtained from 5 patients with LVADs. LVAD types included HeartMate II and III at multiple impeller speeds, and a case with HeartMate III and a ProtekDuo. ECG spectral profiles were examined to ensure the presence of diverse types of EMI in the study. ECGs were then processed with four denoising techniques: Moving Average Filter, Finite Impulse Response Filter, Fast Fourier Transform, and Discrete Wavelet Transform. The Discrete Wavelet Transform proved to be the most promising method. It offered a one-size-fits-all solution, enabling automatic processing with minimal user input while preserving crucial high-frequency components and reducing LVAD EMI artifacts. Our study demonstrates the practicality and efficiency of the Discrete Wavelet Transform in obtaining high-fidelity ECGs in LVAD patients. This method could enhance clinical diagnosis and monitoring. [ABSTRACT FROM AUTHOR]
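A minimal sketch of DWT denoising in this spirit, using PyWavelets with assumed settings (db4 wavelet, six levels, soft universal threshold; the study's own parameters are not stated in the abstract):

```python
# Sketch of DWT denoising with assumed settings (db4, 6 levels, soft
# universal threshold); the paper's actual configuration is not given here.
import numpy as np
import pywt

def dwt_denoise(ecg, wavelet="db4", level=6):
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    # noise level estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(ecg)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(ecg)]
```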
- Published
- 2024
- Full Text
- View/download PDF
45. Inertial Methodology for the Monitoring of Structures in Motion Caused by Seismic Vibrations.
- Author
-
Rodríguez-Quiñonez, Julio C., Valdez-Rodríguez, Jorge Alejandro, Castro-Toscano, Moises J., Flores-Fuentes, Wendy, and Sergiyenko, Oleg
- Subjects
STRUCTURAL health monitoring ,KALMAN filtering ,WHITE noise ,UNITS of measurement ,ACCELEROMETERS ,GYROSCOPES - Abstract
This paper presents a non-invasive methodology for structural health monitoring (SHM) integrated with inertial sensors and signal conditioning techniques. The proposal uses the signals of an IMU (inertial measurement unit) tri-axial accelerometer and gyroscope to continuously measure the displacements of a structure in motion due to seismic vibrations. A system called the "Inertial Displacement Monitoring System" or "IDMS" is implemented to attenuate the signal error of the IMU with methodologies such as a Kalman filter to diminish the influence of white noise, a Chebyshev filter to isolate the frequency values of a seismic motion, and a correction algorithm called zero velocity observation update (ZVOB) to detect seismic vibrations and diminish the influence of external perturbations. As a result, the IDMS is a methodology developed to measure displacements when a structure is in motion due to seismic vibration and provides information to detect failures in a timely manner. [ABSTRACT FROM AUTHOR]
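The band-isolation step might look like the following sketch, assuming a type-II Chebyshev band-pass over a typical seismic band of roughly 0.1-20 Hz; the paper's actual filter order, attenuation, sample rate, and band edges are not specified in the abstract:

```python
# Sketch of the band-isolation step: a zero-phase Chebyshev type-II band-pass.
# Filter order, stopband attenuation, sample rate, and band edges are
# assumptions; the abstract does not state the authors' design values.
import numpy as np
from scipy.signal import cheby2, filtfilt

def seismic_band(accel, fs=100.0, low=0.1, high=20.0):
    """Zero-phase Chebyshev II band-pass of a raw accelerometer channel."""
    b, a = cheby2(N=4, rs=40, Wn=[low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, accel)

accel = np.random.default_rng(2).normal(size=1024)  # placeholder IMU samples
filtered = seismic_band(accel)
```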
- Published
- 2024
- Full Text
- View/download PDF
46. The effect of low-abundance OTU filtering methods on the reliability and variability of microbial composition assessed by 16S rRNA amplicon sequencing.
- Author
-
Nikodemova, Maria, Holzhausen, Elizabeth A., Deblois, Courtney L., Barnet, Jodi H., Peppard, Paul E., Suen, Garret, and Malecki, Kristen M.
- Subjects
RIBOSOMAL RNA ,MICROBIAL diversity ,ENDANGERED species ,GUT microbiome - Abstract
PCR amplicon sequencing may lead to detection of spurious operational taxonomic units (OTUs), inflating estimates of gut microbial diversity. There is no consensus in the analytical approach as to what filtering methods should be applied to remove low-abundance OTUs; moreover, few studies have investigated the reliability of OTU detection within replicates. Here, we investigated the reliability of OTU detection (% agreement in detecting an OTU in triplicates) and the accuracy of their quantification (assessed by the coefficient of variation (CV)) in human stool specimens. Stool samples were collected from 12 participants aged 22-55 years. We applied several methods for filtering low-abundance OTUs and determined their impact on alpha-diversity and beta-diversity metrics. The reliability of OTU detection without any filtering was only 44.1% (SE=0.9) but increased after filtering low-abundance OTUs. After filtering OTUs with <0.1% abundance in the dataset, the reliability increased to 87.7% (SE=0.6), but at the expense of removing 6.97% of reads from the dataset. When filtering was based on the individual sample, the reliability increased to 73.1% after filtering OTUs with <10 copies while removing only 1.12% of reads. High-abundance OTUs (>10 copies in a sample) had a lower CV, indicating better accuracy of quantification than low-abundance OTUs. Excluding very low-abundance OTUs had a significant impact on alpha-diversity metrics sensitive to the presence of rare species (observed OTUs, Chao1) but had little impact on the relative abundance of major phyla and families and on alpha-diversity metrics accounting for both richness and evenness (Shannon, Inverse Simpson). To increase the reliability of microbial composition, we advise removing OTUs with <10 copies in individual samples, particularly in studies where only one subsample per specimen is available for analysis. [ABSTRACT FROM AUTHOR]
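The recommended per-sample rule translates directly into a small filtering step; the table layout below (rows = OTUs, columns = samples) and the toy counts are assumptions for illustration:

```python
# Sketch of the per-sample filter recommended above: zero out OTU counts
# below 10 copies in each individual sample, then drop OTUs that vanish
# everywhere. The table layout and counts are illustrative assumptions.
import pandas as pd

counts = pd.DataFrame(
    {"sample_1": [0, 3, 250, 12], "sample_2": [8, 40, 190, 2]},
    index=["otu_a", "otu_b", "otu_c", "otu_d"],
)

filtered = counts.where(counts >= 10, 0)           # zero low-abundance calls
filtered = filtered.loc[filtered.sum(axis=1) > 0]  # drop OTUs absent everywhere
print(filtered)
```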
- Published
- 2024
- Full Text
- View/download PDF
47. Nonfragile Filtering under Bounded Exogenous Disturbances.
- Author
-
Khlebnikov, M. V.
- Subjects
- *
LINEAR matrix inequalities , *LINEAR control systems , *SEMIDEFINITE programming , *LINEAR systems , *ELLIPSOIDS - Abstract
This paper considers filtering for linear systems subjected to persistent exogenous disturbances. The filtering quality is characterized by the size of the bounding ellipsoid that contains the estimated output of the system. A regular approach is proposed to solve the nonfragile filtering problem, which consists in designing a filter matrix that withstands admissible variations of its coefficients. The concept of invariant ellipsoids is applied to reformulate the original problem in terms of linear matrix inequalities and reduce it to a parametric semidefinite programming problem that is easily solved numerically. This paper continues the series of the author's research works devoted to filtering under nonrandom bounded exogenous disturbances and measurement errors. [ABSTRACT FROM AUTHOR]
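For context, a common statement of the invariant (bounding) ellipsoid machinery used in this line of work is sketched below; the LMI shown is the standard one from the invariant-ellipsoid literature and is not quoted from the paper:

```latex
% Invariant ellipsoid E_P for \dot{x} = Ax + Dw with \|w(t)\| \le 1:
\mathcal{E}_P = \{\, x \in \mathbb{R}^n : x^{\top} P^{-1} x \le 1 \,\},
\qquad P \succ 0,
% with a standard sufficient LMI condition for invariance, for some \alpha > 0:
\quad A P + P A^{\top} + \alpha P + \tfrac{1}{\alpha}\, D D^{\top} \preceq 0
```

Minimizing a size criterion such as tr(P) over this family of constraints then yields the tightest bounding ellipsoid, a problem solvable by semidefinite programming.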
- Published
- 2024
- Full Text
- View/download PDF
48. Bragg Spot Finder (BSF): a new machine‐learning‐aided approach to deal with spot finding for rapidly filtering diffraction pattern images.
- Author
-
Dong, Jianxiang, Yin, Zhaozheng, Kreitler, Dale, Bernstein, Herbert J., and Jakoncic, Jean
- Subjects
- *
DIFFRACTION patterns , *X-ray diffraction , *CRYSTALLOIDS (Botany) , *X-ray imaging , *PROTEIN structure , *XBRL (Document markup language) - Abstract
Macromolecular crystallography contributes significantly to understanding diseases and, more importantly, how to treat them by providing atomic resolution 3D structures of proteins. This is achieved by collecting X‐ray diffraction images of protein crystals from important biological pathways. Spotfinders are used to detect the presence of crystals with usable data, and the spots from such crystals are the primary data used to solve the relevant structures. Having fast and accurate spot finding is essential, but recent advances in synchrotron beamlines used to generate X‐ray diffraction images have brought us to the limits of what the best existing spotfinders can do. This bottleneck must be removed so spotfinder software can keep pace with the X‐ray beamline hardware improvements and be able to see the weak or diffuse spots required to solve the most challenging problems encountered when working with diffraction images. In this paper, we first present Bragg Spot Detection (BSD), a large benchmark Bragg spot image dataset that contains 304 images with more than 66000 spots. We then discuss the open source extensible U‐Net‐based spotfinder Bragg Spot Finder (BSF), with image pre‐processing, a U‐Net segmentation backbone, and post‐processing that includes artifact removal and watershed segmentation. Finally, we perform experiments on the BSD benchmark and obtain results that are (in terms of accuracy) comparable to or better than those obtained with two popular spotfinder software packages (Dozor and DIALS), demonstrating that this is an appropriate framework to support future extensions and improvements. [ABSTRACT FROM AUTHOR]
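A sketch of the kind of post-processing described (threshold the network's probability map, then split touching spots with a distance-transform watershed) follows; the probability map and parameters here are placeholders, not BSF's actual pipeline:

```python
# Sketch of U-Net-style post-processing: threshold a predicted probability
# map, then separate touching spots with a distance-transform watershed.
# The threshold and peak spacing are illustrative assumptions.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_spots(prob_map, thresh=0.5):
    """Return a labeled image of spot regions from a probability map."""
    mask = prob_map > thresh
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, labels=mask, min_distance=3)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)
```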
- Published
- 2024
- Full Text
- View/download PDF
49. An improved unbiased particle filter.
- Author
-
Jasra, Ajay, Maama, Mohamed, and Ombao, Hernando
- Subjects
- *
FILTERS & filtration , *COMPUTER simulation - Abstract
In this paper, we consider the filtering of partially observed multi-dimensional diffusion processes that are observed regularly at discrete times. We assume that, for numerical reasons, one has to time-discretize the diffusion process, which typically leads to filtering that is subject to discretization bias. The approach in [A. Jasra, K. J. H. Law and F. Yu, Unbiased filtering of a class of partially observed diffusions, Adv. Appl. Probab. 54 (2022), 3, 661–687] establishes that, when only having access to the time-discretized diffusion, it is possible to remove the discretization bias with an estimator of finite variance. We improve on this method by introducing a modified estimator based on the recent work [A. Jasra, M. Maama and H. Ombao, Antithetic multilevel particle filters, preprint (2023), https://arxiv.org/abs/2301.12371]. We show that this new estimator is unbiased and has finite variance. Moreover, we conjecture and verify in numerical simulations that substantial gains are obtained. That is, for a given mean square error (MSE) and a particular class of multi-dimensional diffusion, the cost to achieve the said MSE falls. [ABSTRACT FROM AUTHOR]
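For orientation, a plain bootstrap particle filter for an Euler-discretized diffusion, the baseline object such unbiased estimators build on, might look like this; the drift, noise levels, and observation model are illustrative assumptions:

```python
# Plain bootstrap particle filter for an Euler-discretized scalar diffusion
# dX = -X dt + sigma_x dW, observed with Gaussian noise. All model choices
# here are illustrative assumptions, not the paper's setting.
import numpy as np

def bootstrap_pf(obs, n=500, dt=0.01, sigma_x=0.5, sigma_y=0.2, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n)                      # initial particle cloud
    means = []
    for y in obs:
        x = x - x * dt + sigma_x * np.sqrt(dt) * rng.normal(size=n)  # Euler step
        w = np.exp(-0.5 * ((y - x) / sigma_y) ** 2)  # observation likelihood
        w /= w.sum()
        means.append(float(np.sum(w * x)))           # filtering mean estimate
        x = rng.choice(x, size=n, p=w)               # multinomial resampling
    return means
```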
- Published
- 2024
- Full Text
- View/download PDF
50. Developing Riemann–Liouville-Fractional Masks for Image Enhancement.
- Author
-
Miah, Bapan Ali, Sen, Mausumi, Murugan, R., and Gupta, Damini
- Subjects
- *
IMAGE intensifiers , *DIFFERENTIAL calculus , *FRACTIONAL calculus , *INTEGRAL operators , *EULER method , *FRACTIONAL integrals - Abstract
In this article, we focus on the application of fractional differential calculus in image processing, specifically in the context of image denoising techniques. The article presents different methods of image denoising based on Riemann–Liouville fractional integration and compares their performance with established methods in the denoising of noisy images. Here we create new fractional masks by using the Middle Point approach method, the Grünwald–Letnikov scheme, the Toufik–Riemann scheme, and the Euler method with the help of the Riemann–Liouville fractional integral operator. The results demonstrate that the suggested algorithms outperform the compared techniques. The paper also suggests that the concepts presented can be applied to other definitions of fractional calculus. A future research goal is to determine the optimal fractional order for a given set of algorithms. [ABSTRACT FROM AUTHOR]
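The Grünwald–Letnikov weights used to build such masks follow a simple recurrence; the sketch below computes them for an arbitrary order, where the mask size and the use of a negative order for fractional integration are illustrative choices:

```python
# Sketch of Grünwald-Letnikov mask coefficients: w_0 = 1 and
# w_k = w_{k-1} * (1 - (alpha + 1) / k), i.e. (-1)^k * binom(alpha, k).
# A negative alpha gives fractional-integral (Riemann-Liouville-flavoured)
# weights; the mask size here is an illustrative assumption.
import numpy as np

def gl_weights(alpha, size=5):
    w = np.empty(size)
    w[0] = 1.0
    for k in range(1, size):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

print(gl_weights(0.5))    # fractional-derivative weights
print(gl_weights(-0.5))   # fractional-integral weights
```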
- Published
- 2024
- Full Text
- View/download PDF