30,533 results for "ERROR rates"
Search Results
2. Dynamics of measurement-induced state transitions in superconducting qubits.
- Author
Hirasaki, Yuta, Daimon, Shunsuke, Kanazawa, Naoki, Itoko, Toshinari, Tokunari, Masao, and Saitoh, Eiji
- Subjects
*SUPERCONDUCTING transitions, *QUANTUM measurement, *QUBITS, *TIME-resolved measurements, *ERROR rates
- Abstract
We have investigated temporal fluctuations of superconducting qubits via time-resolved measurements on an IBM Quantum system. We found that the qubit error rate changes abruptly during specific time intervals. Each high-error state persists for several tens of seconds and exhibits an on-off behavior. The observed temporal instability can be attributed to qubit transitions induced by a measurement stimulus. Resonant transitions between fluctuating dressed states of the qubits coupled to high-frequency resonators may be responsible for the error-rate change. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. The optimization of evaporation rate in graphene-water system by machine learning algorithm.
- Author
Qiao, Degao, Yang, Ming, Gao, Yin, Hou, Jue, Zhang, Xingli, and Zhang, Hang
- Subjects
*RANDOM forest algorithms, *INTERFACIAL bonding, *PRODUCTION methods, *INSTRUCTIONAL systems, *PREDICTION models, *ERROR rates, *MACHINE learning, *PEARSON correlation (Statistics), *DATA extraction
- Abstract
Solar interfacial evaporation, a novel practical freshwater production method, requires continuous research on how to improve evaporation rates to increase water production. In this study, datasets were obtained from molecular dynamics simulations and the literature; the parameters included height, diameter, height–radius ratio, evaporation efficiency, and evaporation rate. Initially, the correlation between the four input parameters and the output evaporation rate was examined through traditional pairwise plots and Pearson correlation analysis, revealing weak correlations. Subsequently, the accuracy and generalization performance of evaporation-rate prediction models built with a neural network and a random forest were compared, with the latter demonstrating superior performance and reliability, as confirmed via random data extraction. Furthermore, the impact of different test-set percentages (10%, 20%, and 30%) on model performance was explored; the results indicated that performance is best when the test set is 20% and that all constructed models converge. The mean absolute error and mean squared error of the evaporation-rate prediction model were calculated for the three ratios to evaluate their performance. In addition, the relationship between the height–radius ratio and the optimal evaporation rate was investigated using an enumeration method, which determined that evaporation efficiency is optimal when the height–radius ratio is 6. Finally, the importance of height, diameter, height–radius ratio, and evaporation efficiency was calculated to optimize evaporator structure, increase the evaporation rate, and facilitate the application of interfacial evaporation in solar desalination. [ABSTRACT FROM AUTHOR] (A hedged sketch of this train/evaluate workflow follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
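The workflow in the abstract above is a standard regression pipeline: an 80/20 train/test split, a random-forest model scored with MAE and MSE, and feature importances read off the fitted model. Below is a minimal scikit-learn sketch of that pipeline; the synthetic data, feature weights, and hyperparameters are invented stand-ins, not values from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the simulation/literature dataset; the four columns
# mirror the paper's inputs: height, diameter, height-radius ratio, efficiency.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))
y = X @ np.array([0.5, 0.2, 1.0, 0.8]) + rng.normal(scale=0.05, size=200)

# 20% held out for testing, the split the abstract reports as best-performing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

print("MAE:", mean_absolute_error(y_test, pred))
print("MSE:", mean_squared_error(y_test, pred))
print("feature importances:", model.feature_importances_)
```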
4. Designing Efficient Stratified Mean-Per-Unit Sampling Applications in Accounting and Auditing.
- Author
Hall, Thomas W., Hoogduin, Lucas A., Pierce, Bethane Jo, and Tsay, Jeffrey J.
- Subjects
CONFIDENCE intervals, SAMPLE size (Statistics), ERROR rates, TRUST, SAMPLING (Process), AUDITING
- Abstract
Despite technological advances in accounting systems and audit techniques, sampling remains a commonly used audit tool. For critical estimation applications involving low-error-rate populations, stratified mean-per-unit sampling (SMPU) has the unique advantage of producing trustworthy confidence intervals. However, SMPU is less efficient than other classical sampling techniques because it requires a larger sample size to achieve comparable precision. To address this weakness, we investigated how SMPU efficiency can be improved via three key design choices: (a) stratum boundary selection method, (b) number of sampling strata, and (c) minimum stratum sample size. Our tests disclosed that SMPU efficiency varies significantly with stratum boundary selection method. An iterative search-based method yielded the best efficiency, followed by the Dalenius–Hodges and Equal-Value-Per-Stratum methods. We also found that variations in Dalenius–Hodges implementation procedures yielded meaningful differences in efficiency. Regardless of boundary selection method, increasing the number of sampling strata beyond levels recommended in the professional literature yielded significant improvements in SMPU efficiency. Although a minor factor, smaller values of minimum stratum sample size were found to yield better SMPU efficiency. Based on these findings, suggestions for improving SMPU efficiency are provided. We also present the first known equations for planning the number of sampling strata given various application-specific parameters. [ABSTRACT FROM AUTHOR] (A generic sketch of stratified mean-per-unit estimation follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
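For readers unfamiliar with the technique discussed above, the sketch below shows the core of stratified mean-per-unit estimation: sample each stratum, project the stratum sample means to population totals, and build a confidence interval from the stratified standard error. It is a generic textbook construction with invented data, not the authors' boundary-selection procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical population of book values, pre-split into 3 size strata.
strata = [rng.gamma(2.0, scale, size=n)
          for scale, n in [(50, 4000), (200, 800), (1000, 200)]]
sample_sizes = [100, 60, 40]          # per-stratum sample sizes (illustrative)

total_est, var_est = 0.0, 0.0
for pop, n in zip(strata, sample_sizes):
    N = len(pop)
    sample = rng.choice(pop, size=n, replace=False)
    total_est += N * sample.mean()                    # mean-per-unit projection
    fpc = 1 - n / N                                   # finite population correction
    var_est += N**2 * fpc * sample.var(ddof=1) / n    # stratum variance term

z = stats.norm.ppf(0.975)
half_width = z * np.sqrt(var_est)
print(f"estimated total: {total_est:,.0f} +/- {half_width:,.0f} (95% CI)")
```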
5. Design a body mass index monitoring with telegram notifications based on IoT.
- Author
Rosiana, Elfirza, Abdurahman, Abdurahman, Setiawan, Jan, and Ramadhan, Rahmat Gilang
- Subjects
*BODY mass index, *MEASUREMENT errors, *BODY weight, *HEIGHT measurement, *ERROR rates
- Abstract
Obesity has been identified as a cause for concern in a range of conditions, including hypertension and diabetes. Obesity indicators can be ascertained from body mass index measurements. This study involves the construction of a device capable of measuring an individual's height and weight in order to calculate their body mass index. The device presents the body mass index value in categories denoting lean, normal, fat, or obese. Furthermore, the measurement data and body mass index categories shown on the device's display are also transmitted to the Telegram application. Ultrasonic sensors are employed for height measurement, load cells for body weight measurement, and a NodeMCU serves as the controlling microcontroller. The device that has been designed and manufactured can be utilized effectively. Testing yielded height measurements with an average error rate in the range of 0.20% to 0.40%, weight measurements with an error rate in the range of 0.32% to 1.06%, and an average delay of 6.61 seconds in receiving messages through the Telegram application. [ABSTRACT FROM AUTHOR] (A sketch of the BMI and error-rate arithmetic follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
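The arithmetic behind the device above is compact: BMI is weight divided by height squared, and the reported error rate is the relative deviation of a sensor reading from a reference instrument. A minimal sketch with invented readings; the category cut-offs are common WHO-style values, since the paper's exact thresholds are not given in the abstract.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight divided by height squared."""
    return weight_kg / height_m ** 2

def category(b: float) -> str:
    # Illustrative cut-offs mapped onto the paper's four labels.
    if b < 18.5:
        return "lean"
    if b < 25.0:
        return "normal"
    if b < 30.0:
        return "fat"
    return "obese"

def error_rate(measured: float, reference: float) -> float:
    """Relative error of a sensor reading, in percent."""
    return abs(measured - reference) / reference * 100

# Ultrasonic height and load-cell weight vs. reference instruments.
h_meas, h_ref = 1.702, 1.700    # metres
w_meas, w_ref = 65.4, 65.0      # kilograms
print(category(bmi(w_meas, h_meas)))                       # -> normal
print(f"height error: {error_rate(h_meas, h_ref):.2f}%")   # ~0.12%
print(f"weight error: {error_rate(w_meas, w_ref):.2f}%")   # ~0.62%
```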
6. Advances toward high-accuracy operation of tunable-barrier single-hole pumps in silicon.
- Author
Yamahata, Gento and Fujiwara, Akira
- Subjects
*ERROR rates, *SILICON, *TUNABLE lasers, *PUMPING machinery, *TEMPERATURE measurements, *CHARGE transfer, *OPTICAL pumping
- Abstract
Precise and reproducible current generation is the key to realizing quantum current standards in metrology. A promising candidate is the tunable-barrier single-charge pump, which can accurately transfer single charges one by one with an error rate below the ppm level. Although several measurements have shown such levels of accuracy, it is necessary to further pursue the possibility of high-precision operation toward reproducible generation of the pumping current in many devices. Here, we investigated silicon single-hole pumps, which may have the potential to outperform single-electron pumps because of the heavy effective mass of holes. Measurements of the temperature dependence of the current generated by the single-hole pump revealed that the tunnel barrier had high energy selectivity, a critical parameter for high-accuracy operation. In addition, we applied the dynamic gate-compensation technique to the single-hole pump and confirmed that it yielded a further performance improvement. Finally, we demonstrated gigahertz operation of a single-hole pump in which the estimated lower bound of the pump error rate was around 0.01 ppm. These results imply that single-hole pumps in silicon are capable of high-accuracy, high-speed, and stable single-charge pumping in metrological and quantum-device applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Sequential Monitoring Using the Second Generation P-Value with Type I Error Controlled by Monitoring Frequency.
- Author
Chipman, Jonathan J., Greevy Jr., Robert A., Mayberry, Lindsay S., and Blume, Jeffrey D.
- Subjects
*FALSE positive error, *ERROR rates, *EXPERIMENTAL design, *PRISMS, *APATHY
- Abstract
The Second Generation P-Value (SGPV) measures the overlap between an estimated interval and a composite hypothesis of parameter values. We develop a sequential monitoring scheme for the SGPV (SeqSGPV) to connect study design intentions with end-of-study inference anchored on scientific relevance. We build upon Freedman's "Region of Equivalence" (ROE) in specifying scientifically meaningful hypotheses called Pre-specified Regions Indicating Scientific Merit (PRISM). We compare PRISM monitoring against monitoring alternative ROE specifications. Error rates are controlled through the PRISM's indifference zone around the point null and through monitoring frequency strategies. Because the former is fixed by scientific relevance, the latter is a targetable means of designing studies with desirable operating characteristics. Adding an affirmation step to stopping rules improves frequency properties, including the error rate, the risk of reversing conclusions under delayed outcomes, and bias. [ABSTRACT FROM AUTHOR] (A sketch of the SGPV overlap computation follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
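The SGPV at the heart of the scheme above has a simple closed form: the fraction of the estimated interval that overlaps the null (indifference) region, with a correction that caps the value at one half when the interval is more than twice as wide as the null region. A minimal sketch of that overlap computation under the standard published SGPV definition; the interval and zone bounds are invented, and the SeqSGPV monitoring logic is not reproduced.

```python
def sgpv(est_lo: float, est_hi: float, null_lo: float, null_hi: float) -> float:
    """Second Generation P-Value: interval/null-region overlap fraction,
    with the small-interval correction capping wide intervals at 1/2."""
    overlap = max(0.0, min(est_hi, null_hi) - max(est_lo, null_lo))
    est_len = est_hi - est_lo
    null_len = null_hi - null_lo
    correction = max(est_len / (2.0 * null_len), 1.0)
    return (overlap / est_len) * correction

# A 95% CI checked against an indifference zone of (-0.5, 0.5):
print(sgpv(-0.2, 1.4, -0.5, 0.5))   # ~0.44, partial overlap: inconclusive
print(sgpv(0.6, 1.4, -0.5, 0.5))    # 0.0, interval excludes the null region
print(sgpv(-0.3, 0.3, -0.5, 0.5))   # 1.0, interval fully inside the null region
```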
8. Benchmarking the Mantel test and derived methods for testing association between distance matrices.
- Author
Quilodrán, Claudio S., Currat, Mathias, and Montoya‐Burgos, Juan I.
- Subjects
*FALSE positive error, *COMPUTATIONAL statistics, *TEST methods, *ERROR rates, *GENETIC variation, *STATISTICAL power analysis
- Abstract
Testing the association between objects is central in ecology, evolution, and quantitative sciences in general. Two types of variables can describe the relationships between objects: point variables (measured on individual objects) and distance variables (measured between pairs of objects). The Mantel test and derived methods have been extensively used for distance variables, yet these methods have been criticized for low statistical power and inflated type I error when spatial autocorrelation is present. Here, we assessed statistical power for different types of tested variables, and the type I error rate over a wider range of autocorrelation intensities than previously assessed, on both univariate and multivariate data. We also illustrated the performance of distance matrix statistics through computational simulations of genetic diversity. We show that the Mantel test and derived methods are not affected by inflated type I error when spatial autocorrelation affects only one variable when investigating correlations, or when either the response or the explanatory variable(s) is affected by spatial autocorrelation when investigating causal relationships. As previously noted, when autocorrelation affects more variables, inflated type I error can be reduced by modifying the significance threshold. Additionally, the Mantel test has no statistical power problem when the hypothesis is formulated in terms of distance variables. We highlight that transformation of variable types should be avoided because of the potential information loss and modification of the tested hypothesis, and we propose a set of guidelines to help choose the appropriate method according to the type of variables and the defined hypothesis. [ABSTRACT FROM AUTHOR] (A minimal Mantel permutation-test sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
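The Mantel statistic referenced above is simply the correlation between corresponding off-diagonal entries of two distance matrices, with significance assessed by permuting the rows and columns of one matrix in tandem. A minimal permutation-test sketch on toy data (not the benchmark simulations from the paper):

```python
import numpy as np

def mantel(d1: np.ndarray, d2: np.ndarray, n_perm: int = 999, seed: int = 0):
    """Mantel test: correlation of off-diagonal distances + permutation p-value."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)            # upper-triangle entries
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(d1.shape[0])
        d1p = d1[np.ix_(p, p)]                    # permute rows/columns together
        if abs(np.corrcoef(d1p[iu], d2[iu])[0, 1]) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Toy "geographic" and "genetic" distance matrices for 20 objects.
rng = np.random.default_rng(42)
pts = rng.normal(size=(20, 2))
d_geo = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
noise = rng.normal(scale=0.5, size=d_geo.shape)
d_gen = d_geo + (noise + noise.T) / 2             # correlated, kept symmetric
np.fill_diagonal(d_gen, 0.0)
print(mantel(d_geo, d_gen))
```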
9. Post-error slowing during motor sequence learning under extrinsic and intrinsic error feedback conditions.
- Author
Ali, Hassan, Chatburn, Alex, and Immink, Maarten A.
- Subjects
*ERROR rates, *MOTOR learning, *AWARENESS, *FEMALES
- Abstract
Post-error slowing, described as an error-corrective index of response binding during motor sequence learning, has only been demonstrated in the serial reaction time task under conditions where extrinsic error feedback is presented. The present experiment investigated whether post-error slowing is dependent on, or is influenced by, extrinsic error feedback. Thirty participants (14 females; mean age 21.9 ± 1.8 years) completed the serial reaction time task with or without presentation of extrinsic error feedback. Post-error slowing was observed following response errors whether feedback was presented or not. However, presentation of extrinsic error feedback increased post-error slowing across practice and extended the number of responses that were slowed following an error. There was no evidence of feedback effects on motor sequence learning or explicit awareness. Instead, feedback appeared to function as a performance factor that reduced response error rates relative to no-feedback conditions. These findings illustrate that post-error slowing in motor sequence learning is not reliant on, or a result of, the presentation of extrinsic error information. More specific to the serial reaction time task paradigm, the present findings demonstrate that the common practice of presenting error feedback is not necessary for investigating motor sequence learning unless the aim is to maintain a low error rate; however, doing so might inflate reaction time in later training blocks. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
10. Prediction intervals with controlled length in the heteroscedastic Gaussian regression.
- Author
Denis, Christophe, Hebiri, Mohamed, and Zaoui, Ahmed
- Subjects
*ERROR rates, *NUMERICAL analysis, *PREDICTION models, *FORECASTING, *NOISE
- Abstract
We tackle the problem of building a prediction interval in heteroscedastic Gaussian regression. We focus on prediction intervals with constrained expected length in order to guarantee the interpretability of the output. In this framework, we derive a closed-form expression for the optimal prediction interval that allows the development of a data-driven, plug-in prediction interval. The construction of the proposed algorithm is based on two samples, one labelled and the other unlabelled. Under mild conditions, we show that our procedure is asymptotically as good as the optimal prediction interval in terms of both expected length and error rate; in particular, the control of the expected length is distribution-free. We also derive rates of convergence under smoothness and the Tsybakov noise conditions. We conduct a numerical analysis that exhibits the good performance of our method; it also indicates that even with a small amount of unlabelled data, our method is very effective in enforcing the length constraint. [ABSTRACT FROM AUTHOR] (A minimal plug-in interval sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
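In heteroscedastic Gaussian regression, Y given X = x is N(mu(x), sigma^2(x)), so the plug-in idea above amounts to intervals of the form mu(x) ± q·sigma(x), with q calibrated so the intervals meet a coverage or expected-length target. A minimal sketch of that plug-in construction with known mu and sigma; the paper's actual estimators and length-constrained calibration are more involved.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

mu = lambda x: np.sin(2 * np.pi * x)     # conditional mean
sigma = lambda x: 0.1 + 0.4 * x          # heteroscedastic noise level

# Sample used to check the empirical behavior of the plug-in interval.
x = rng.uniform(size=5000)
y = mu(x) + sigma(x) * rng.normal(size=x.size)

q = stats.norm.ppf(0.975)                # 95% nominal coverage
lo, hi = mu(x) - q * sigma(x), mu(x) + q * sigma(x)

print("empirical coverage:", np.mean((y >= lo) & (y <= hi)))
print("mean interval length:", np.mean(hi - lo))   # grows with sigma(x)
```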
11. Spherical Penetration Grouting Model for Bingham Fluids Considering Gravity and Time-Varying Slurry Viscosity.
- Author
Yang, Cheng, Zhang, Shize, Liu, Deren, Wang, Xu, Zhang, Jiyuan, and Xiong, Zhibin
- Subjects
*DARCY'S law, *GROUTING, *GEOTECHNICAL engineering, *ERROR rates, *PERMEABILITY, *SLURRY
- Abstract
As an effective reinforcement technology for seepage prevention, penetration grouting has been widely used in geotechnical and underground engineering. Because grouting is concealed work, the extent of slurry spread is often estimated theoretically and through experience, so it is important to understand the diffusion pattern and scope of penetration grouting in reinforcement engineering. Based on the generalized Darcy's law, a penetration grouting model considering gravity and the time-varying nature of slurry viscosity is proposed in this study. Its validity and effectiveness are verified through comparison with existing penetration grouting tests. Based on the established model, the effects of grouting pressure, permeability coefficient, water–cement ratio, and other factors on penetration grouting are analyzed. The penetration and diffusion process of a Bingham fluid considering gravity and time-varying slurry viscosity is computationally simulated using finite-element software. The results show that the proposed model is more accurate than the traditional one that neglects the two aforementioned factors, and its results agree more closely with the experimental ones; the error rate relative to the experimental values is about 11%. The diffusion radius of the slurry increases with increasing grouting pressure, permeability coefficient, and water–cement ratio, and decreases with increasing groundwater pressure. As grouting time elapses, the growth rate of the diffusion radius first increases, then decreases and levels off. These results can provide theoretical support for penetration grouting research in geotechnical and underground engineering. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
12. Derivation of full range vapor pressure equation from an arbitrary point.
- Author
Lee, Jai-Yeop
- Subjects
*VAPOR pressure, *CRITICAL temperature, *REAL numbers, *BOILING-points, *ERROR rates
- Abstract
An equation is proposed to obtain the full range of vapor pressures (VPs) for a substance from an arbitrary point of VP data. The basic model is a vapor pressure equation in the form of a reduced Antoine equation derived from the van der Waals equation. Here, the Antoine constant C is recast as a dimensionless constant c by dividing by the gas constant and the critical temperature, and c is expressed as a polynomial function of reduced temperature; previously the exponents were integers, but they have been generalized to real numbers. Edmister's formula is the special case in which the Antoine constant C divided by the critical temperature in Chen's equation is 0; since this is equivalent to the polynomial in c being zero, the intercept can be set to zero. The results estimated using the derived equation were compared with VP data for 76 substances. The error rate of the equation using the acentric factor as a variable was 0.49%, lower than the 0.50% error rate of the Lee-Kesler method. Meanwhile, by using the correlation between the coefficients, the VP equation can be derived from any single point of VP data other than the acentric factor. For the 76 substances, the equation derived from the corresponding VPs at reduced temperatures of 0.25 to 0.95, in increments of 0.05, showed a mean error rate of 0.68%. This accuracy is equivalent to that of the Lee-Kesler method, which is limited to deriving a VP equation only at the reduced temperature of 0.7. [ABSTRACT FROM AUTHOR] (A hedged sketch of the reduced Antoine form and the error-rate bookkeeping follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
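The working object above is a reduced Antoine equation, generically ln(Pr) = A - B/(Tr + c) in reduced coordinates Tr = T/Tc and Pr = P/Pc. The sketch below shows only the error-rate bookkeeping the abstract reports (mean relative deviation against reference vapor pressures); all constants are invented, not the paper's fitted coefficients.

```python
import numpy as np

def vp_reduced_antoine(Tr, A, B, c):
    """Reduced Antoine form: ln(Pr) = A - B / (Tr + c)."""
    return np.exp(A - B / (Tr + c))

def mean_error_rate(pred, ref):
    """Mean relative deviation in percent, the style of the 0.49%/0.68% figures."""
    return np.mean(np.abs(pred - ref) / ref) * 100

# Invented reference data and coefficients, purely to show the bookkeeping.
Tr = np.arange(0.25, 0.96, 0.05)                  # reduced temperatures 0.25..0.95
ref = vp_reduced_antoine(Tr, 5.30, 5.50, 0.020)   # stand-in "experimental" VP data
pred = vp_reduced_antoine(Tr, 5.25, 5.45, 0.021)  # stand-in fitted equation
print(f"mean error rate: {mean_error_rate(pred, ref):.2f}%")
```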
13. An efficient snow flake schema with hash map using SHA-256 based on data masking for securing employee data.
- Author
Bharath, Tumkur Shankaregowda and Channakrishnaraju
- Subjects
DATA encryption, WAVELET transforms, DATABASES, SIGNAL-to-noise ratio, ERROR rates
- Abstract
In various organizations and enterprises, data masking is used to store sensitive data efficiently and securely. Data-encryption and secret-sharing-based data deployment strategies secure the privacy of sensitive attributes but not their secrecy. To solve this problem, a novel snowflake schema with a hash map using the secure hash algorithm-256 (SHA-256) is proposed for data masking. The SHA-256 approach combines data masking with secret sharing for relational databases to secure both the privacy and the confidentiality of secret employee data. The data masking approach supports preserving and protecting the privacy of sensitive and complex employee data. Masking is applied to selected database fields to conceal sensitive data in query results. The proposed method embeds one or more secret attributes across multiple cover attributes in the same relational database. The method is validated through performance metrics such as peak signal-to-noise ratio (PSNR) and error rate (ER), achieving values of 50.084 dB and 0.0281, respectively, when compared with existing methods such as Huffman-based lossless image coding with quad-tree partitioning and the integer wavelet transform (IWT). [ABSTRACT FROM AUTHOR] (A generic sketch of salted SHA-256 field masking follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
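A minimal illustration of hash-based field masking with Python's standard hashlib: each sensitive field is replaced by a salted SHA-256 digest, and a hash map from digest to original value is kept by the trusted party so masked records can still be joined. This is a generic sketch of the idea, not the authors' snowflake-schema construction.

```python
import hashlib
import os

def mask_field(value: str, salt: bytes) -> str:
    """Replace a sensitive value with a salted SHA-256 digest."""
    return hashlib.sha256(salt + value.encode("utf-8")).hexdigest()

salt = os.urandom(16)   # per-table salt; stored separately from the data

employees = [
    {"id": "E1001", "name": "A. Example", "salary": "74,000"},
    {"id": "E1002", "name": "B. Example", "salary": "81,500"},
]

# Hash map from masked id back to the original, held only by the trusted party.
hash_map = {}
for row in employees:
    masked = mask_field(row["id"], salt)
    hash_map[masked] = row["id"]
    row["id"] = masked              # the shared table carries only the digest

print(employees[0]["id"][:16], "...")   # masked identifier prefix
```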
14. Analysis of the Influence of Modulation Index on PCM-FM Telemetry System Performance.
- Author
粟登银, 韩 勇, and 陈秋丰
- Subjects
DETECTION algorithms, POWER transmission, ERROR rates, PHASE diagrams, CROSS correlation, TELEMETRY
- Abstract
Copyright of Journal of Test & Measurement Technology is the property of Publishing Center of North University of China and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2025
- Full Text
- View/download PDF
15. Performance of 5G and Wi-Fi 6 coexistence: spectrum sharing based on optimized duty cycle.
- Author
Zaid, Asmaa Helmy, Fayez Wanis Zaki, and Nafea, Hala Bahy-Eldeen
- Subjects
WIRELESS communications, INTERNET protocols, STREAMING video & television, WIRELESS Internet, ERROR rates
- Abstract
Smart mobile device usage is increasing rapidly; hence, cellular operators face the challenge of spectrum resource shortage. To address this issue, researchers have explored several approaches to achieving a highly efficient utilization of wireless communication network resources. One promising solution lies in the fair coexistence of 5G/Wi-Fi 6 in the unlicensed 5 GHz band. This research investigates a duty cycle mechanism to perform fair spectrum sharing between these two wireless technologies, intending to optimize performance metrics such as throughput, capacity, bit error rate (BER), and latency. The results of this study demonstrate a significant improvement in system performance when employing the proposed coexistence method compared to using 5G alone in a single cell. Specifically, a 40% increase in throughput and a 14% improvement in capacity are reported. Moreover, for a single cell using Wi-Fi 6 only, the BER was reduced by 19%, and the latency was less than one millisecond. Additionally, the duty cycle mechanism reported here is used to prioritize call services, with the blocking probability for voice-over internet protocol (VoIP) and video stream calls being improved. Furthermore, the adaptive bandwidth reservation reduced the blocking probability of video calls from 21.8% to 0.9% compared to the fixed method; no VoIP calls were blocked. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
16. Electrocardiogram features detection using stationary wavelet transform.
- Author
Aqil, Mounaim and Jbari, Atman
- Subjects
WAVELETS (Mathematics), ERROR rates, ELECTROCARDIOGRAPHY, APNEA, TEST methods
- Abstract
The main objective of this paper is to provide a novel stationary wavelet transform (SWT) based method for electrocardiogram (ECG) feature detection. The proposed technique decomposes the ECG signal by SWT and selects the appropriate detail coefficient to detect each specific wave. Temporal and frequency analysis of these coefficients led to choosing the level-2 detail coefficient (Cd2) to detect the R peaks, while the level-3 coefficient (Cd3) is used to extract the Q, S, P, and T waves. The method was tested on recordings from the apnea and Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) databases, with excellent performance: a sensitivity of 99.83%, a predictivity of 99.72%, and an error rate of 0.44%. A further advantage of the method is its ability to detect the different waves even in the presence of baseline wander (BLW) in the ECG signal, which makes it possible to bypass BLW filtering. [ABSTRACT FROM AUTHOR] (A sketch of the SWT decomposition step follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
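A minimal sketch of the decomposition step above using the PyWavelets library: run a level-3 stationary wavelet transform, take the level-2 detail coefficients (Cd2), and find R peaks in their magnitude. The wavelet choice, synthetic signal, and threshold are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

fs = 360                                    # Hz, the MIT-BIH sampling rate
t = np.arange(8 * fs) / fs
# Crude synthetic ECG: one sharp spike per second, plus baseline wander and noise.
ecg = (np.exp(-((t % 1.0 - 0.3) / 0.012) ** 2)
       + 0.3 * np.sin(2 * np.pi * 0.25 * t)          # baseline wander (BLW)
       + 0.02 * np.random.default_rng(0).normal(size=t.size))

# SWT needs a length divisible by 2**level; trim if necessary.
level = 3
n = (ecg.size // 2**level) * 2**level
coeffs = pywt.swt(ecg[:n], "db4", level=level)   # [(cA3, cD3), (cA2, cD2), (cA1, cD1)]
cd2 = coeffs[1][1]                               # level-2 detail coefficients

# R peaks: large |Cd2| excursions at least 0.4 s apart (illustrative threshold).
peaks, _ = find_peaks(np.abs(cd2), height=0.4 * np.abs(cd2).max(),
                      distance=int(0.4 * fs))
print("detected R peaks:", len(peaks))
```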
17. Bayesian design of clinical trials with multiple time-to-event outcomes subject to functional cure.
- Author
Cho, Seoyoon, Psioda, Matthew A., and Ibrahim, Joseph G.
- Subjects
*FALSE positive error, *EXPERIMENTAL design, *STATISTICAL power analysis, *ENDOMETRIAL cancer, *ERROR rates
- Abstract
With the continuous advancement of medical treatments, there is an increasing demand for clinical trial designs and analyses using cure rate models to accommodate a plateau in the survival curve. This is especially pertinent in oncology, where high proportions of patients, such as those with melanoma, lung cancer, and endometrial cancer, exhibit usual life spans post-cancer detection. A Bayesian clinical trial design methodology for multivariate time-to-event outcomes with cured fractions is developed. This approach employs a copula to jointly model the multivariate time-to-event outcomes. We propose a model that uses a Gaussian copula on the population survival function, irrespective of cure status. The minimum sample size required to achieve high statistical power while maintaining reasonable control over the type I error rate from a Bayesian perspective is identified using point-mass sampling priors. The methodology is demonstrated in simulation studies inspired by an endometrial cancer trial. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
18. Iterative Interference-State Estimation and Decoding Algorithm for Polar-Coded Slow Frequency Hopping.
- Author
高露琴, 吴晓富, 张索非, and 胡海峰
- Subjects
LOW density parity check codes, ITERATIVE decoding, CHANNEL estimation, ERROR rates, BURST noise
- Abstract
Copyright of Telecommunication Engineering is the property of Telecommunication Engineering and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2025
- Full Text
- View/download PDF
19. Quantum error correction with Goppa codes from maximal curves: Design, simulation, and performance.
- Author
Nourozi, Vahid
- Subjects
*ALGEBRAIC codes, *FINITE fields, *ALGEBRAIC geometry, *FINITE geometries, *ERROR rates
- Abstract
This paper characterizes Goppa codes of certain maximal curves over finite fields defined by equations of the form y^n = x^m + x. We investigate algebraic geometric and quantum stabilizer codes associated with these maximal curves and propose modifications to improve their parameters. The theoretical analysis is complemented by extensive simulation results, which validate the performance of these codes under various error rates. We provide concrete examples of the constructed codes, comparing them with known results to highlight their strengths and trade-offs. The simulation data, presented through detailed graphs and tables, offers insights into the practical behavior of these codes in noisy environments. Our findings demonstrate that while the constructed codes may not always achieve optimal minimum distances, they offer systematic construction methods and interesting parameter trade-offs that could be valuable in specific applications or for further theoretical study. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
20. The effect of reward and voluntary choice on the motor learning of serial reaction time task.
- Author
Quan, Yanghui, Wang, Jiayue, Wang, Yandong, and Kang, Guanlan
- Subjects
MOTOR ability, ERROR rates, MOTOR learning, IMPLICIT learning, MOTIVATION (Psychology)
- Abstract
Objective: Reward and voluntary choice facilitate motor skill learning through motivation. However, it remains unclear how their combination influences motor skill learning. The purpose of the present study is to investigate the effects of reward and voluntary choice on motor skill learning in a serial reaction time task (SRTT). Methods: Participants completed six parts of SRTT, including pre-test, training phase, immediate post-test, a random session, delayed post-test, and retention test on the following day. During the training phase, participants were divided into four groups (reward_choice, reward_no-choice, no-reward_choice, no-reward_no-choice). In the reward condition, participants received reward for correct and faster (than a baseline) responses while those in the no-reward groups did not. For the choice manipulation, participants in the voluntary choice groups chose the color of the target, whereas in the forced choice groups, the same color was assigned by the computer. Results: The results showed that the four groups did not exhibit any significant differences in reaction time and error rate in the pre-test phase. Importantly, both reward and voluntary choice significantly enhanced sequence-specific learning effects, while no interaction was found. No significant effects of reward and voluntary choice were observed in the retention test. Conclusions: These findings suggest that reward and voluntary choice enhance motor skill performance and training independently, potentially at the action-selection level, which implies different mechanisms underlying the influences of reward and voluntary choice. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
21. Task-related errors as a catalyst for empathy towards embodied pedagogical agents.
- Author
Rehren, Oliver, Jansen, Sebastian, Seemann, Martina, and Ohler, Peter
- Subjects
STUDENT engagement, HUMAN-robot interaction, ERROR rates, ROBOT design & construction, EMPATHY
- Abstract
Introduction: The increasing integration of digital tools in education highlights the potential of embodied pedagogical agents. This study investigates how task-related errors and language cues from a robot influence human perception, specifically examining their impact on anthropomorphism and subsequent empathy, and whether these perceptions affect persuasion. Methods: Thirty-nine participants interacted with a NAO robot during a quiz. Employing a 3 × 2 mixed design, we manipulated the robot's error rate (above average, human-like, below average) between subjects and language style (humble, dominant) within subjects. We measured perceived anthropomorphism, empathy, sympathy, and persuasion. Data were analyzed using multilevel modeling to assess the relationships between manipulated variables and outcomes. Results: Our findings indicate that human-like error rates significantly increased perceived anthropomorphism in the robot, which in turn led to higher levels of empathy and sympathy towards it. However, perceived anthropomorphism did not directly influence persuasion. Furthermore, the manipulated language styles did not show a significant direct effect on perceived anthropomorphism, empathy, sympathy, or persuasion in the main experiment, despite pretest results indicating differences in perceived personality based on language cues. Discussion: These results have important implications for the design of embodied pedagogical agents. While strategic implementation of human-like error rates can foster empathy and enhance the perception of humanness, this alone may not directly translate to greater persuasiveness. The study highlights the complex interplay between perceived competence, likability, and empathy in human-robot interaction, particularly within educational contexts. Future research should explore these dynamics further, utilizing larger samples, diverse robot designs, and immersive environments to better understand the nuances of how errors and communication styles shape learner engagement with pedagogical agents. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
22. Bayesian hierarchical model with adaptive similarity evaluation of treatment effects in oncology basket trials.
- Author
Kitabayashi, Ryo, Sato, Hiroyuki, Nomura, Shogo, and Hirakawa, Akihiro
- Subjects
*FALSE positive error, *CHI-squared test, *ERROR rates, *STANDARDIZED tests, *COMPARATIVE method
- Abstract
We developed a novel Bayesian hierarchical model (BHM) that incorporates a similarity measure calculated using the standardized chi-square test statistic to evaluate the heterogeneity of response rates between two cancer types of interest in oncology basket trials. Our proposed design uses the response rates not only of the two cancer types of interest but of all cancer types in the trial when estimating the similarity between two cancer types. Simulation studies revealed that, compared to the existing BHM using a similarity measure of response rate among the cancer types, the proposed method had a comparable type I error rate and power, improved accuracy of the posterior estimation of response rates, and, in many cases, a reduced number of patients in trials with interim analysis. We applied the proposed method to real data from two basket trials and determined that its operating characteristics, in terms of the posterior probability of the response rate among cancer types, differed from those of existing designs. Overall, our proposed method is an alternative to the existing BHM that provides a more effective and efficient means of evaluating the heterogeneity of response rates between different cancer types and estimating response rates. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
23. Quantum secure direct communication based on quantum error correction code.
- Author
Ding, Chao-Wei, Wang, Wen-Yang, Zhang, Wen-Da, Zhou, Lan, and Sheng, Yu-Bo
- Subjects
*ERROR functions, *ERROR rates, *QUANTUM communication, *PHOTONS, *NOISE
- Abstract
Quantum secure direct communication (QSDC) enables the message sender to transmit messages directly to the receiver through a quantum channel, without keys. Environmental noise is the main obstacle to QSDC's practicality. To enhance QSDC's noise robustness, we introduce quantum error correction (QEC) codes into QSDC and propose a QSDC protocol based on the repetition code. The protocol correlates the atomic state with electron–photon entangled pairs and transmits photons through quantum channels in two rounds. The parties can construct a remote atomic logical entanglement channel and decode messages with heralded photonic Bell state measurement (BSM) and single-electron measurement. The protocol is unconditionally secure in theory and has several advantages. First, benefiting from the heralded photonic BSM, it can eliminate the influence of photon transmission loss and has the potential to realize long-distance secure message transmission. Second, using the error-correction function of the repetition code, the error rate caused by decoherence during the second round of photon transmission can be reduced, which lowers the message error and increases the secret message capacity. Third, the whole protocol is feasible under current experimental conditions. Our QSDC protocol can be extended to other, stronger QEC codes, providing a promising route to QSDC's practicality. [ABSTRACT FROM AUTHOR] (A sketch of the classical repetition-code skeleton follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
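The error-correcting ingredient named above is the repetition code, whose classical skeleton is easy to demonstrate: encode each bit as n copies and decode by majority vote, suppressing the error rate from p to roughly 3p^2 for n = 3. A sketch of that classical analog; the quantum protocol itself operates on entangled states, not classical bits.

```python
import numpy as np

def encode(bits: np.ndarray, n: int = 3) -> np.ndarray:
    """Repetition code: each bit becomes n identical copies."""
    return np.repeat(bits, n)

def decode(received: np.ndarray, n: int = 3) -> np.ndarray:
    """Majority vote over each block of n copies."""
    return (received.reshape(-1, n).sum(axis=1) > n // 2).astype(int)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=100_000)
p = 0.05                                        # physical bit-flip rate

noisy = encode(bits) ^ (rng.random(bits.size * 3) < p)
err = np.mean(decode(noisy) != bits)
print(f"raw error rate {p:.3f} -> decoded error rate {err:.5f}")
# Theory for n = 3: 3p^2 - 2p^3 = 0.00725 at p = 0.05
```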
24. Write error reduction in magnetic tunnel junctions for voltage-controlled magnetoresistive random access memory by using exchange coupled free layer.
- Author
Sakai, Lui, Higo, Yutaka, Hosomi, Masanori, Matsumoto, Rie, Nozaki, Takayuki, Yuasa, Shinji, and Imamura, Hiroshi
- Subjects
*MAGNETIC tunnelling, *MAGNETIC anisotropy, *NONVOLATILE memory, *ANCHORING effect, *ERROR rates, *RANDOM access memory
- Abstract
Voltage-controlled magnetoresistive random access memory (VC-MRAM) is an emerging nonvolatile memory based on the voltage-controlled magnetic anisotropy (VCMA) effect. It has been garnering considerable attention because of its fast and low-power operation. However, two major issues must be addressed for practical applications. First, the voltage-induced switching of the free layer magnetization is sensitive to the ultrashort voltage pulse duration. Second, the write error rate (WER) of voltage-induced switching is high. To address these issues, a magnetic tunnel junction (MTJ) structure with an exchange coupled free layer, consisting of a precession layer with the VCMA effect and an anchor layer without the VCMA effect, is proposed. The anchor layer prevents the precession layer from returning to its initial direction, thereby reducing the WER without requiring the voltage pulse duration to be precisely controlled. The write operation of the proposed MTJ with an exchange coupled free layer was analyzed using the macrospin model. Using optimized MTJ parameters, a low WER of approximately 10⁻⁶ was obtained for an 80 nm MTJ without requiring the pulse duration to be precisely controlled. These results facilitate the reduction of the WER for VC-MRAM and improve its usability, thereby expanding its range of applications. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
25. Design of Four-Plate Parallel Dynamic Capacitive Wireless Power Transfer Coupler for Mobile Robot Wireless-Charging Applications.
- Author
Bae, Hongguk and Park, Sangwook
- Subjects
EQUIVALENT electric circuits, S-matrix theory, ELECTRIC circuit networks, ERROR rates, ELECTRIC capacity, MOBILE robots
- Abstract
A detailed theoretical design of an electric resonance-based coupler for dynamic wireless power transfer (DWPT) at the mobile-robot level is presented. The scattering matrix of the coupler was derived by transforming and multiplying the transmission matrices of each circuit network in a practical equivalent circuit that accounts for loss resistance. This theoretical approach was validated through equivalent circuit models, yielding results consistent with 3D full-wave simulations and an error rate of less than 1%. Additionally, a null-power-point characteristic, where efficiency sharply decreases when the receiver moves outside the transmitter's range, was observed. The detailed theoretical design of the practical equivalent circuit for electric resonance-based DWPT couplers is expected to contribute to the design of couplers for various specifications in future applications. [ABSTRACT FROM AUTHOR] (A sketch of the ABCD-matrix cascade and S-parameter conversion follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
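The derivation route above (multiply the ABCD transmission matrices of the cascaded circuit sections, then convert the product to scattering parameters) is a standard two-port technique. A minimal sketch for a series-L / shunt-C section with invented component values; the S-parameter conversion assumes equal reference impedances Z0 at both ports and a reciprocal network (AD - BC = 1).

```python
import numpy as np

Z0 = 50.0                        # reference impedance, ohms
f = 6.78e6                       # a common WPT frequency, Hz (illustrative)
w = 2 * np.pi * f

def series(Z):                   # ABCD matrix of a series impedance
    return np.array([[1, Z], [0, 1]], dtype=complex)

def shunt(Y):                    # ABCD matrix of a shunt admittance
    return np.array([[1, 0], [Y, 1]], dtype=complex)

# Cascade: series inductor then shunt capacitor (invented values near resonance).
L, C = 10e-6, 55e-12
abcd = series(1j * w * L) @ shunt(1j * w * C)

A, B, Cp, D = abcd.ravel()
den = A + B / Z0 + Cp * Z0 + D
S21 = 2 / den                    # valid for reciprocal networks (AD - BC = 1)
S11 = (A + B / Z0 - Cp * Z0 - D) / den
print(f"|S21| = {abs(S21):.3f}, |S11| = {abs(S11):.3f}")
```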
26. Comparison of precision of a paperless electronic input method versus the conventional paper form in an andrology laboratory: a prospective study.
- Author
Lam, Kevin K. W., Tsang, Percy C. K., Chan, Connie C. Y., Ng, Evans P. K., Cheung, Tak-Ming, Li, Raymond H. W., Ng, Ernest H. Y., and Yeung, William S. B.
- Subjects
SEMEN analysis, DATA entry, ERROR rates, TABLET computers, INFORMATION storage & retrieval systems
- Abstract
Copyright of Basic & Clinical Andrology is the property of BioMed Central and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2025
- Full Text
- View/download PDF
27. Harnessing spatiotemporal transformation in magnetic domains for nonvolatile physical reservoir computing.
- Author
Jing Zhou, Jikang Xu, Lisen Huang, Lee Koon Yap, Sherry, Shaohai Chen, Xiaobing Yan, and Sze Ter Lim
- Subjects
*MAGNETIC domain, *PRINTED circuits, *COMPUTATIONAL physics, *ERROR rates, *ENERGY consumption
- Abstract
Combining physics with computational models is increasingly recognized as a way to enhance the performance and energy efficiency of neural networks. Physical reservoir computing uses the material dynamics of physical substrates for temporal data processing. Despite the ease of training, building an efficient reservoir remains challenging. Here, we go beyond conventional delay-based reservoirs by exploiting the spatiotemporal transformation in all-electric spintronic devices. Our nonvolatile spintronic reservoir effectively transforms the history dependence of reservoir states into the path dependence of domains. We configure devices triggered by different pulse widths as neurons, creating a reservoir featuring strong nonlinearity and rich interconnections. Using a small reservoir of merely 14 physical nodes, we achieved a high recognition rate of 0.903 in handwritten digit recognition and a low error rate of 0.076 in Mackey-Glass time series prediction on a proof-of-concept printed circuit board. This work presents a promising route to nonvolatile physical reservoir computing that is adaptable to the larger memristor family and broader physical neural networks. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
28. A quantification method of dynamic miscommunication based on the Markov process within subway teams.
- Author
Wang, Pei, Jie, Wang, and Jie-qiong, Zhou
- Subjects
*MARKOV processes, *TWO-way communication, *COGNITIVE psychology, *HUMAN error, *ERROR rates
- Abstract
A qualitative analysis of the communication process of subway teams was performed from the perspective of cognitive psychology. The findings revealed that the communication process exhibits characteristics consistent with a Markov process, and a model was then proposed to calculate the probability of miscommunication based on Markov processes. As communication within subway teams is a two-way process, a model for the dynamic probability of miscommunication based on the Markov two-way communication mode was derived. The model was subsequently validated through a case study involving a screen-door fault event. The results indicate that the actual cognitive communication process yields a significantly lower probability of miscommunication than the values obtained from the technique for human error rate prediction (THERP) manual. This demonstrates the effectiveness of dynamic two-way communication as a means of reducing miscommunication and confirms the feasibility and effectiveness of the proposed calculation model. [ABSTRACT FROM AUTHOR] (A toy Markov-chain sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
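To make the modeling idea above concrete: represent each round of a two-way exchange as a transition among states such as "understood", "misunderstood", and "clarification requested", and read the miscommunication probability off repeated applications of the transition matrix. The states and numbers below are invented for illustration; the paper derives its probabilities from cognitive analysis of subway-team communication.

```python
import numpy as np

# States: 0 = message understood, 1 = misunderstood (absorbing failure),
#         2 = clarification requested (loops back into the exchange).
P = np.array([
    [0.90, 0.02, 0.08],   # from "understood"
    [0.00, 1.00, 0.00],   # a misunderstanding persists if never caught
    [0.70, 0.05, 0.25],   # clarification usually repairs the exchange
])

state = np.array([0.0, 0.0, 1.0])   # the exchange starts with a query
for step in range(1, 6):
    state = state @ P               # one more round of two-way communication
    print(f"after round {step}: P(miscommunication) = {state[1]:.4f}")
```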
29. Arithmetic Optimization Algorithm Quantization Optimized With Energy Detection Using Nonparametric Amplitude.
- Author
A, Darwin Nesakumar, S, Rukmani Devi, T M, Inbamalar, and K N, Pavithra
- Subjects
*OPTIMIZATION algorithms, *ERROR probability, *COGNITIVE radio, *RADIO networks, *ERROR rates
- Abstract
Spectrum sensing is a significant task in cognitive radio networks (CRNs), needed to avoid unacceptable interference to primary users (PUs). The threshold value determines the effectiveness of spectrum sensing. The fixed threshold used by current energy detection-based spectrum sensing (SS) techniques does not provide sufficient protection for primary users; instead, the threshold should be determined by minimizing the total probability of decision error. Therefore, energy detection using nonparametric amplitude quantization optimized with an arithmetic optimization algorithm for enhanced spectrum sensing in CRNs (ED‐NAQ‐AOA‐SS CRN) is proposed in this paper to obtain the ideal threshold that decreases the total error probability. The proposed method achieves a greater probability of detection of 99.67%, 98.38%, 92.34%, and 97.45%, a lower settling time of 98.33%, 89.34%, 83.12%, and 88.96%, and a lower error rate of 93.15%, 91.25%, 79.90%, and 92.88% compared with existing techniques such as intelligent spectrum sharing and sensing in CRN with an adaptive rider optimization algorithm (AROA), a spectrum sensing technique for CRN utilizing fractional gray wolf optimization with cuckoo search optimization (GWOCS), and an adaptive neuro‐fuzzy inference scheme based on cooperative spectrum sensing optimization in CRNs (ANFIS). [ABSTRACT FROM AUTHOR] (A sketch of the classical energy-detection baseline follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
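The baseline being improved on above is classical energy detection: average the received energy over N samples and declare the primary user present when the statistic crosses a threshold, trading false alarms against missed detections. A minimal Monte Carlo sketch of that baseline; the paper's contribution (nonparametric amplitude quantization with an AOA threshold search) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 1000, 2000
noise_var, snr = 1.0, 0.1               # illustrative values

def energy(x):
    return np.mean(np.abs(x) ** 2)      # test statistic: average sample energy

threshold = noise_var * (1 + 3 / np.sqrt(N))   # illustrative fixed threshold

fa = det = 0
for _ in range(trials):
    noise = rng.normal(scale=np.sqrt(noise_var), size=N)
    signal = rng.normal(scale=np.sqrt(snr * noise_var), size=N)
    fa += energy(noise) > threshold             # H0: noise only -> false alarm
    det += energy(noise + signal) > threshold   # H1: PU active  -> detection

print(f"P(false alarm) ~ {fa / trials:.3f}, P(detection) ~ {det / trials:.3f}")
```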
30. On context specificity and management reasoning: moving beyond diagnosis.
- Author
Boyle, James G., Walters, Matthew R., Burton, Fiona M., Paton, Catherine, Hughes, Martin, Jamieson, Susan, and Durning, Steven J.
- Subjects
*MEDICAL students, *DIAGNOSTIC errors, *MEDICAL logic, *ERROR rates, *MEDICAL care
- Abstract
Diagnostic error is a global emergency. Context specificity is likely a source of the alarming rate of error and refers to the vexing phenomenon whereby a physician can see two patients with the same presenting complaint and identical history and examination findings but, due to the presence of contextual factors, decides on two different diagnoses. Studies have not empirically addressed the potential role of context specificity in management reasoning, and errors with a diagnosis may not consistently translate to actual patient care. We investigated the effect of context specificity on management reasoning in individuals working within a simulated internal medicine environment. Participants completed two ten-minute back-to-back common encounters. The clinical content of each encounter was identical; one encounter featured carefully controlled contextual factors (CF+ vs. CF−) designed to distract from the correct diagnosis and management. Immediately after each encounter, participants completed a post-encounter form. Twenty senior medical students participated. The leading diagnosis score was higher (mean 0.88; SEM 0.07) for the CF− encounter than for the CF+ encounter (0.58; 0.1; 95% CI 0.04–0.56; p=0.02). Management reasoning scores were higher (mean 5.48; SEM 0.66) for the CF− encounter than for the CF+ encounter (3.5; 0.56; 95% CI 0.69–3.26; p=0.01). We demonstrated context specificity in both diagnostic and management reasoning. This study is the first to empirically demonstrate that management reasoning, which directly impacts the patient, is also influenced by context specificity, providing additional evidence of context specificity's role in unwanted variance in health care. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
31. Does a priming warm-up influence the incidence of V̇O2pl during a ramp test and verification phase?
- Author
Qiao, JianBo, Rosbrook, Paul, Sweet, Daniel K., Pryor, Riana R., Hostler, David, Looney, David, and Pryor, J. Luke
- Subjects
*AEROBIC capacity, *WARMUP, *ERROR rates, *CYCLING, *CYCLISTS
- Abstract
Objective: This study compared the effects of two different warm-up protocols (normal vs. priming) on the incidence rate of the oxygen uptake plateau (V̇O2pl) during a ramp test, and compared the cardiopulmonary responses during the ramp test and subsequent verification phase. Methods: Eleven recreational cyclists performed two experimental visits. The first visit comprised a normal warm-up (cycling at 50 W for 10 min) followed by a ramp test (30 W·min⁻¹) and a supramaximal verification phase, with 30 min of rest between tests. The second visit comprised a priming warm-up (cycling at 50 W for 4 min, increasing to 70% of the difference between the gas exchange threshold [GET] and maximum work rate [WRmax] for 6 min) followed by the same protocol as the first visit. Physiological responses were collected during exercise and compared, and oxygen uptake kinetics (V̇O2 kinetics) and the V̇O2pl incidence rate were determined during the ramp tests of both visits. Results: As planned, the priming visit elicited a greater physiological response following the warm-up. However, the incidence rate of V̇O2pl during the ramp test was the same between visits (73%), and maximal oxygen uptake did not differ between visits after the ramp test (normal, 4.0 ± 0.8; primed, 4.0 ± 0.7 L·min⁻¹, p = 0.230) or the verification phase (normal, 3.8 ± 0.6; primed, 3.8 ± 0.7 L·min⁻¹, p = 0.924), using a Holm-Bonferroni correction to control the family-wise error rate. V̇O2 kinetics did not differ between visits during the ramp test (normal, 10.8 ± 1.1; primed, 10.8 ± 1.2 mL·min⁻¹·W⁻¹, p = 0.407). The verification phase confirmed V̇O2max in 100% of cases for both visits. Conclusion: Our hypothesis that a priming warm-up increases the incidence rate of V̇O2pl during a ramp test was not supported by the results. The verification phase remains a prudent option when determining a 'true' V̇O2max is required. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
32. Multi-Condition Remaining Useful Life Prediction Based on Mixture of Encoders.
- Author
Liu, Yang, Xu, Bihe, and Geng, Yangli-ao
- Subjects
*REMAINING useful life, *DEEP learning, *TRANSFORMER models, *FEATURE extraction, *ERROR rates
- Abstract
Accurate Remaining Useful Life (RUL) prediction is vital for effective prognostics and health management of industrial equipment, particularly under varying operational conditions. Existing approaches to multi-condition RUL prediction often treat each working condition independently, failing to effectively exploit cross-condition knowledge. To address this limitation, this paper introduces MoEFormer, a novel framework that combines a Mixture of Encoders (MoE) with a Transformer-based architecture to achieve precise multi-condition RUL prediction. The core innovation lies in the MoE architecture, where each encoder specializes in feature extraction for a specific operational condition; these features are then dynamically integrated through a gated mixture module, enabling the model to effectively leverage cross-condition knowledge. A Transformer layer is subsequently employed to capture temporal dependencies within the input sequence, followed by a fully connected layer that produces the final prediction. Additionally, we provide a theoretical performance guarantee for MoEFormer by deriving a lower bound on its error rate. Extensive experiments on the widely used C-MAPSS dataset demonstrate that MoEFormer outperforms several state-of-the-art methods for multi-condition RUL prediction. [ABSTRACT FROM AUTHOR] (A toy sketch of the gated mixture step follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
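The core mechanism above, a gated mixture over condition-specialized encoders, can be sketched compactly: each encoder embeds the input, a gating network produces softmax weights, and the combined feature is the weighted sum. A toy numpy version with invented dimensions and random weights; the paper builds this from learned Transformer-based encoders.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_feat, n_experts = 24, 32, 3     # invented sizes

# One linear "encoder" per operating condition (stand-ins for learned encoders).
W_experts = 0.1 * rng.normal(size=(n_experts, d_in, d_feat))
W_gate = 0.1 * rng.normal(size=(d_in, n_experts))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_features(x):
    expert_out = np.stack([np.tanh(x @ W) for W in W_experts])  # (n_experts, d_feat)
    gate = softmax(x @ W_gate)                                  # mixture weights
    return gate @ expert_out                                    # weighted combination

x = rng.normal(size=d_in)               # one sensor snapshot
print(moe_features(x).shape)            # -> (32,)
```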
33. Isfahan Artificial Intelligence Event 2023: Drug Demand Forecasting.
- Author
Jahani, Meysam, Zojaji, Zahra, Montazerolghaem, AhmadReza, Palhang, Maziar, Ramezani, Reza, Golkarnoor, Ahmadreza, Safaei, Alireza Akhavan, Bahak, Hossein, Saboori, Pegah, Halaj, Behnam Soufi, Naghsh-Nilchi, Ahmad R., Mohamadpoor, Fatemeh, and Jafarizadeh, Saeid
- Subjects
*ARTIFICIAL intelligence, *SUPPLY chain management, *MACHINE learning, *ERROR rates, *PRODUCTION management (Manufacturing), *DEMAND forecasting
- Abstract
Background: The pharmaceutical industry has seen increased drug production by different manufacturers. Failure to recognize future needs has caused improper production and distribution of drugs throughout the industry's supply chain. Forecasting demand is one of the basic requirements to overcome these challenges: it helps drug quantities to be well estimated and produced at the right time. Methods: Artificial intelligence (AI) technologies are suitable methods for forecasting demand; the more accurate the forecast, the better the decisions about managing drug production and distribution. The Isfahan AI competitions 2023 organized a challenge to provide models for accurately predicting drug demand. In this article, we introduce this challenge and describe the approaches that led to the most successful results. Results: A dataset of drug sales was collected in 12 pharmacies of Hamadan University of Medical Sciences. The dataset contains 8 features, including sales amount and date of purchase. Competitors compete on this dataset to forecast the volume of demand accurately. The purpose of the challenge is to provide a model with a minimum error rate while addressing some qualitative scientific metrics. Conclusions: In this competition, AI-based methods were investigated. The results showed that machine learning methods are particularly useful in drug demand forecasting. Furthermore, changing the dimensions of the data features by adding geographic features helps increase model accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
34. Delamination Prediction in Layered Composites Using Optimized ANN Algorithms: A Comparative Analysis.
- Author
Balkan, Demet
- Subjects
*ARTIFICIAL neural networks, *ERROR rates, *CRACK propagation (Fracture mechanics), *COMPOSITE materials, *MANUFACTURING processes
- Abstract
This study investigates the effectiveness of Artificial Neural Networks (ANNs) in predicting the outcomes of Double Cantilever Beam (DCB) tests, focusing on time and force as input variables and displacement as the predicted output. Three ANN training algorithms were evaluated based on prediction accuracy and computational efficiency: Scaled Conjugate Gradient (SCG), Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton, and Levenberg-Marquardt (LM). A parametric study was performed by varying the number of neurons (from 10 to 100) in a single hidden layer to optimize the network structure. Among the evaluated algorithms, LM demonstrated superior performance, achieving prediction accuracies of 99.6% for force and 99.3% for displacement. In contrast, SCG exhibited the fastest convergence but a significantly higher error rate of 8.6%. The BFGS algorithm offered a compromise between accuracy and speed but was ultimately outperformed by LM in overall precision. Configurations with up to 100 neurons were also tested, indicating that although slightly lower error rates could be achieved, the increase in computation time was substantial. Consequently, the LM algorithm with 50 neurons delivered the best balance between accuracy and computational cost. These findings underscore the potential of ANNs, particularly LM-based models, to enhance material design processes by providing reliable predictions from limited experimental data, thereby reducing both resource utilization and testing time. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
35. Universal Image Vaccine Against Steganography.
- Author
Wei, Shiyu, Wang, Zichi, and Zhang, Xinpeng
- Subjects
*INFORMATION technology security, *ERROR rates, *VACCINES, *HISTOGRAMS, *ALGORITHMS
- Abstract
In the past decade, the diversification of steganographic techniques has posed significant threats to information security, necessitating effective countermeasures. Current defenses, mainly reliant on steganalysis, struggle with detection accuracy, and while "image vaccines" have been proposed, they often target specific methodologies. This paper introduces a universal steganographic vaccine to enhance steganalysis accuracy. Our symmetric approach integrates with existing methods to protect images before online dissemination using the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm. Experimental results show significant accuracy improvements across traditional and deep learning-based steganalysis, especially at medium-to-high payloads. Specifically, for payloads of 0.1–0.5 bpp, the original detection error rate was reduced from 0.3429 to 0.2346, an overall average reduction of 31.57% for traditional algorithms, while the detection success rate of deep learning-based algorithms can reach 100%. Overall, integrating CLAHE as a universal vaccine significantly advances steganalysis. [ABSTRACT FROM AUTHOR] (A minimal CLAHE preprocessing sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
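Applying CLAHE as a preprocessing "vaccine" is a one-call operation in OpenCV. A minimal sketch; the file path is illustrative, and the clip limit and tile grid are OpenCV's common defaults rather than parameters reported in the paper.

```python
import cv2

# Load the cover image in grayscale (path is illustrative).
img = cv2.imread("cover.png", cv2.IMREAD_GRAYSCALE)

# Contrast Limited Adaptive Histogram Equalization: per-tile equalization
# with histogram clipping to limit noise amplification.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
vaccinated = clahe.apply(img)

# The vaccinated image is the one disseminated online; per the abstract,
# later steganographic embedding becomes easier for a steganalyzer to flag.
cv2.imwrite("cover_vaccinated.png", vaccinated)
```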
36. Spectral Content Effects Study in Non-Contact Resonance Ultrasound Spectroscopy.
- Author
Tayyib, Muhammad and Svilainis, Linas
- Subjects
*RESONANT ultrasound spectroscopy, *SIGNAL-to-noise ratio, *MEASUREMENT errors, *ERROR rates, *RESONANCE
- Abstract
The application of spread-spectrum signals (arbitrary pulse width and position (APWP) sequences) in air-coupled resonant ultrasound spectroscopy is studied. It was hypothesized that spread-spectrum signal optimization should be based on the signal-to-noise ratio (SNR), and six APWP signal optimization criteria were proposed for this purpose. Experimental measurements were conducted on a thin polycarbonate sample using two standard spread-spectrum signals, linear and nonlinear frequency modulation, together with the six optimized APWP signals. The performance of APWP signals derived from linear frequency modulation was better. The two best-performing optimization criteria were SNR improvement on a linear scale with the SNR as an additional weight, and energy improvement on a dB scale. The influence of spectral coverage on measurement errors was evaluated, and it was found to be sufficient to cover the sample resonance peak and the valley. The lowest error rates for density (3%) and thickness (3.5%) were achieved when the upper valley was covered; for velocity, the best result (5%) was achieved when the lower valley was covered; and the lowest error rate for attenuation (3.8%) was achieved when both valleys were covered. Yet no significant performance degradation was noted when the whole −30 dB passband was covered. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
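The optimization criteria above are built from per-frequency SNR estimates of the excitation. A minimal sketch, with illustrative signal parameters rather than the paper's, of a per-bin SNR spectrum for a linear-FM excitation:

```python
import numpy as np
from scipy.signal import chirp

fs = 10e6                                    # sampling rate, Hz (assumed)
t = np.arange(0, 50e-6, 1 / fs)
tx = chirp(t, f0=0.5e6, t1=t[-1], f1=2.5e6)  # linear frequency modulation
rng = np.random.default_rng(0)
rx = tx + rng.normal(0, 0.3, t.size)         # received signal plus noise

S = np.abs(np.fft.rfft(tx)) ** 2             # signal power spectrum
N = np.abs(np.fft.rfft(rx - tx)) ** 2        # noise power spectrum
snr_db = 10 * np.log10(S / (N + 1e-12))      # per-bin SNR on a dB scale
f = np.fft.rfftfreq(t.size, 1 / fs)
print(f"peak SNR {snr_db.max():.1f} dB at {f[np.argmax(snr_db)] / 1e6:.2f} MHz")
```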
37. Pediatric asthma inhaler technique: quality and content analysis of YouTube videos.
- Author
-
Akca Sumengen, Aylin, Simsek, Enes, Ozcevik Subasi, Damla, Cakir, Gokce Naz, Semerci, Remziye, and Gregory, Karen L.
- Subjects
- *
METERED-dose inhalers , *INHALERS , *CHILD patients , *ERROR rates , *CAREGIVERS - Abstract
Background: Proper inhaler technique is crucial in treating pediatric asthma. YouTube offers a wide range of videos on pediatric inhaler technique, but the quality, reliability, and content of these resources need analysis. Aims: This study analyzes the quality, reliability, and content of YouTube videos on pediatric asthma inhaler techniques. Methods: The study has a descriptive, retrospective, cross-sectional design. YouTube was searched using the terms "Pediatric Metered Dose Inhaler," "Pediatric Accuhaler," and "Pediatric Diskus." Each video's popularity was measured using the Video Power Index, and quality and reliability were evaluated using the modified DISCERN and the Global Quality Scale (GQS). Results: This study analyzed 55 YouTube videos on pediatric inhaler technique: 19 demonstrated a pMDI with a spacer for tidal breathing, 14 a pMDI with a spacer for single breath, and 22 a Diskus device. Videos demonstrating single-breath pMDI use had more reliable modified DISCERN scores, although tidal-breathing videos were more popular than single-breath and Diskus videos. On the checklist for Diskus videos, the steps with the highest error rates were 'Check dose counter' (72.7%) and 'Breathe out gently, away from the inhaler' (63.6%). A moderate correlation was observed between the modified DISCERN score and the GQS. Conclusions: While YouTube videos on the pMDI single-breath technique may be useful for pediatric patients and caregivers, it is crucial that they receive inhaler technique education from their healthcare provider. These findings hold great significance for pediatric patients and caregivers, particularly those who rely on YouTube for health-related information. [ABSTRACT FROM AUTHOR] (A minimal checklist-scoring sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
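Two of the computations above, per-step checklist error rates and the DISCERN-GQS correlation, are simple to express in code. The sketch below uses small synthetic arrays (the study's 55-video ratings are not reproduced) and a Spearman rank correlation, one common choice for ordinal scales.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
# Synthetic pass/fail results for two checklist steps across 10 videos.
steps = {"Check dose counter": rng.random(10) > 0.7,
         "Breathe out gently, away from the inhaler": rng.random(10) > 0.6}
for name, passed in steps.items():
    print(f"{name}: error rate {100 * (1 - passed.mean()):.0f}%")

# Synthetic modified DISCERN and GQS scores for the same videos.
discern = rng.integers(1, 6, 10)
gqs = np.clip(discern + rng.integers(-1, 2, 10), 1, 5)
rho, p = spearmanr(discern, gqs)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```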
38. Asymmetric Error Correction in the Synchronization Tapping Task.
- Author
-
Tomyta, Kenta, Ohira, Hideki, and Katahira, Kentaro
- Subjects
- *
ERROR rates , *SYNCHRONIZATION , *MATHEMATICAL models , *METRONOME , *RHYTHM - Abstract
In synchronization tapping tasks, tapping onset often precedes the metronome onset by a few tens of milliseconds, a phenomenon known as negative mean asynchrony. However, the mechanism by which negative mean asynchrony arises remains incompletely understood. This study hypothesized that one such mechanism is an asymmetric error correction process for asynchrony, and examined this hypothesis using a generalized linear mixed model. The results suggested that the error correction rate for positive asynchrony is larger than that for negative asynchrony. This finding may contribute to improving mathematical models of the synchronization tapping task. [ABSTRACT FROM AUTHOR] (A minimal simulation sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
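The qualitative effect is easy to reproduce in simulation: if positive asynchronies are corrected more strongly than negative ones, the long-run mean asynchrony settles below zero. The correction gains and noise level below are assumptions for illustration, not the paper's fitted GLMM estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_pos, alpha_neg = 0.5, 0.2   # assumed: stronger correction when tap is late
e, history = 0.0, []
for _ in range(20000):
    gain = alpha_pos if e > 0 else alpha_neg
    e = e - gain * e + rng.normal(0, 20)   # asynchrony update, ms timing noise
    history.append(e)
print(np.mean(history))   # settles below zero: negative mean asynchrony
```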
39. False Data Injection Attack Detection for Virtual Coupling Systems of Heavy-Haul Trains: A Deep Learning Approach.
- Author
-
Yu, Xiaoquan, Li, Wei, Li, Shuo, Yang, Yingze, and Peng, Jun
- Subjects
- *
AUTOENCODER , *DEEP learning , *ERROR rates , *VIRTUAL design , *VELOCITY - Abstract
Cooperative control for virtual coupling systems of multiple heavy-haul trains can improve the safety and efficiency of heavy-haul railway transportation. However, false data injection attacks on the virtual coupling system are a serious obstacle that leads to imprecise train operation control. To address this issue, a deep learning-based false data injection attack (FDIA) detection method for virtual coupling systems of heavy-haul trains is proposed. First, the cyber-physical model of the virtual coupling system is established. Second, a cooperative control law is designed for the virtual coupling system, and the effects of the FDIA on the virtual coupling system are analyzed. Then, an unsupervised autoencoder is introduced to detect false data injection attacks. The autoencoder network is trained with normal operation data and tested with abnormal operation data. The performance of the proposed method is verified in four simulation scenarios: normal case, velocity attack case, position attack case, and joint attack case. Simulation results show that the proposed method effectively increases detection accuracy and reduces the error rate compared with supervised methods. [ABSTRACT FROM AUTHOR] (A minimal autoencoder-detector sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
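The detection recipe above is the standard autoencoder anomaly detector: train on normal operation data only, set an alarm threshold on reconstruction error, and flag samples exceeding it. A minimal sketch using scikit-learn's MLPRegressor as a toy autoencoder on synthetic correlated features; the paper's architecture and train-simulation data are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 2))                     # latent operating factors
A = rng.normal(size=(2, 4))
normal = z @ A + 0.1 * rng.normal(size=(1000, 4))  # stand-in state features

ae = MLPRegressor(hidden_layer_sizes=(2,), max_iter=4000, random_state=0)
ae.fit(normal, normal)                             # reconstruct normal data only

def recon_err(X):
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

thresh = np.quantile(recon_err(normal), 0.99)      # alarm threshold
attacked = normal.copy()
attacked[:, 1] += 3.0                              # injected false "velocity" data
print((recon_err(attacked) > thresh).mean())       # fraction flagged as attacks
```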
40. Identifying and Reducing Insulin Errors in the Simulated Military Critical Care Air Transport Environment: A Human Factors Approach.
- Author
-
Frasier, Lane L, Cheney, Mark, Burkhardt, Joshua, Alderman, Mark, Nelson, Eric, Proctor, Melissa, Brown, Daniel, Davis, William T, Smith, Maia P, and Strilka, Richard
- Subjects
- *
MEDICATION errors , *ERGONOMICS , *INTENSIVE care patients , *AIR travel , *ERROR rates - Abstract
Introduction: During high-fidelity simulations in the Critical Care Air Transport (CCAT) Advanced course, we identified a high frequency of insulin medication errors and sought strategies to reduce them using a human factors approach. Materials and Methods: Of 169 eligible CCAT simulations, 22 were randomly selected for retrospective audio–video review to establish a baseline frequency of insulin medication errors. Using the Human Factors Analysis and Classification System, dosing errors, defined as a physician ordering an inappropriate dose, were categorized as decision-based; administration errors, defined as a clinician preparing and administering a dose different from the one ordered, were categorized as skill-based. Next, 3 a priori interventions were developed to decrease the frequency of insulin medication errors, grouped into 2 study arms. Arm 1 included a didactic session reviewing a sliding-scale insulin (SSI) dosing protocol and a hands-on exercise requiring all CCAT teams to practice preparing 10 units of insulin, including a 2-person check. Arm 2 contained the arm 1 interventions and added an SSI cognitive aid available to students during simulation. Frequency and type of insulin medication errors were collected for both arms, with 93 simulations for arm 1 (January–August 2021) and 139 for arm 2 (August 2021–July 2022). The frequency of decision-based and skill-based errors was compared across control and intervention arms. Results: At baseline, decision-based errors occurred in 6/22 (27.3%) simulations and skill-based errors in 6/22 (27.3%); 5 of the 6 skill-based errors resulted in administration of a 10-fold higher dose than ordered. Post-intervention decision-based error rates were 9/93 (9.7%) for arm 1 and 3/139 (2.2%) for arm 2. Compared to baseline, both arm 1 (P = .04) and arm 2 (P < .001) had significantly lower decision-based error rates, and arm 2 was significantly lower than arm 1 (P = .015). Skill-based preparation errors occurred in 1/93 (1.1%) of arm 1 and 4/139 (2.9%) of arm 2 simulations, a significant decrease from baseline in both arms (P < .001), with no significant difference between arms. Conclusions: This study demonstrates the value of descriptive error analysis during high-fidelity simulation using audio–video review, and of risk mitigation using training and cognitive aids, to reduce medication errors in CCAT. As demonstrated by post-intervention observations, a human factors approach reduced decision-based error through didactic training and cognitive aids, and reduced skill-based error through hands-on training. We recommend developing a Clinical Practice Guideline including an SSI protocol, guidelines for a 2-person check, and a cognitive aid for deployed CCAT teams. Furthermore, hands-on training for insulin preparation and administration should be incorporated into home-station sustainment training to reduce medication errors in the operational environment. [ABSTRACT FROM AUTHOR] (A minimal rate-comparison sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
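The arm-versus-baseline comparisons above are comparisons of proportions, so a two-sided Fisher's exact test is one reasonable way to reproduce them; the abstract does not state which test the authors used, so the choice here is an assumption.

```python
from scipy.stats import fisher_exact

# Decision-based error counts from the abstract:
# baseline 6/22, arm 1 9/93, arm 2 3/139.
baseline, arm1, arm2 = (6, 22), (9, 93), (3, 139)

def compare(a, b):
    # 2x2 table of (errors, error-free simulations) for the two groups.
    table = [[a[0], a[1] - a[0]], [b[0], b[1] - b[0]]]
    return fisher_exact(table)[1]   # two-sided p-value

print(compare(baseline, arm1))
print(compare(baseline, arm2))
print(compare(arm1, arm2))
```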
41. Significance of image brightness levels for PRNU camera identification.
- Author
-
Martin, Abby and Newman, Jennifer
- Subjects
- *
DIGITAL cameras , *ERROR rates , *SPATIAL variation , *CAMERAS , *DETECTORS - Abstract
A forensic investigator performing source identification on a questioned image from a crime aims to identify the unknown camera that acquired the image. On the camera sensor, minute spatial variations in intensity between pixels, called photo response non‐uniformity (PRNU), provide a unique and persistent artifact appearing in every image acquired by the digital camera. This camera fingerprint is used to produce a score between the questioned image and an unknown camera using a court‐approved camera identification algorithm, and the score is compared to a fixed threshold to determine a match or no match. Error rates for the court‐approved PRNU camera-identification algorithm were established on a very large set of image data, making no distinction between images with different brightness levels. Camera exposure settings and in‐camera processing strive to produce a visually pleasing image, but images that are too dark or too bright are not uncommon. While prior work has shown that exposure settings can impact the accuracy of the court‐approved algorithm, these settings are often unreliable in the image metadata. In this work, we apply the court‐approved PRNU algorithm to a large data set in which images are assigned a brightness level, as a proxy for exposure settings, using a novel classification method, and then analyze error rates. We find statistically significant differences between error rates for nominal images and for images labeled dark or bright. Our result suggests that in court, the error rate of the PRNU algorithm for a questioned image may be more accurately characterized when considering image brightness. [ABSTRACT FROM AUTHOR] (A minimal brightness-labeling sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
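The paper's brightness classification method is novel and not reproduced here; the sketch below only shows the general shape of the idea, labeling each image dark, nominal, or bright by mean intensity with assumed cutoffs so that error rates can be tallied per label instead of pooled.

```python
import numpy as np

def brightness_label(img, dark=60, bright=190):
    # Label an 8-bit grayscale image by mean intensity. The cutoffs are
    # illustrative assumptions, not the paper's classification method.
    m = float(np.mean(img))
    return "dark" if m < dark else ("bright" if m > bright else "nominal")

# Error rates would then be tallied per label rather than pooled.
images = {"q1": np.full((8, 8), 40), "q2": np.full((8, 8), 128)}
print({name: brightness_label(img) for name, img in images.items()})
```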
42. Quantifying the strength of firearms comparisons based on error rate studies.
- Author
-
Aggadi, Nada, Zeller, Kimberley, and Busey, Tom
- Subjects
- *
ERROR rates , *FIREARMS , *QUANTITATIVE research , *BULLETS , *CALIBRATION - Abstract
Forensic firearms and toolmark examiners compare bullets and cartridge cases to assess whether they originate from the same source or different sources. To communicate their observations, they rely on predefined conclusion scales ranging from Identification to Elimination. However, these terms have not been calibrated against the actual strength of the evidence except indirectly through error rate studies. The present research reanalyzes the findings of firearm and cartridge case comparisons from error rate studies to generate a quantitative measure of the strength of the evidence for each comparison. We use an ordered probit model to summarize the distribution of examiner responses and aggregate the data across all comparisons to produce a set of likelihood ratios. Some of the resulting likelihood ratios are below 10, which does not seem to justify the current articulation scale, whose terms may imply a strength of evidence of 10,000 or greater. This suggests that examiners are using language that overstates the strength of the evidence by several orders of magnitude. [ABSTRACT FROM AUTHOR] (A minimal likelihood-ratio sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
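The core computation can be illustrated without the ordered probit machinery: from counts of examiner conclusions on the five-step scale, tallied separately for same-source and different-source comparisons, the likelihood ratio for each conclusion is the ratio of its two response proportions. The counts below are hypothetical, and raw proportions stand in for the paper's probit-smoothed estimates.

```python
import numpy as np

scale = ["Identification", "Inc-A", "Inc-B", "Inc-C", "Elimination"]
same = np.array([800, 120, 50, 20, 10], dtype=float)   # same-source counts
diff = np.array([15, 60, 100, 200, 600], dtype=float)  # different-source counts

lr = (same / same.sum()) / (diff / diff.sum())  # P(response|same) / P(response|diff)
for label, value in zip(scale, lr):
    print(f"{label}: LR = {value:.1f}")
```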
43. Exploring dual-lidar mean and turbulence measurements over Perdigão's complex terrain.
- Author
-
Coimbra, Isadora L., Mann, Jakob, Palma, José M. L. M., and Batista, Vasco T. P.
- Subjects
- *
TURBULENT flow , *TURBULENCE , *HEIGHT measurement , *ERROR rates , *ANEMOMETER - Abstract
To assess the accuracy of lidars in measuring mean wind speed and turbulence at large distances above the ground, as an alternative to tall and expensive meteorological towers, we evaluated three dual-lidar measurements in virtual-mast (VM) mode over the complex terrain of the Perdigão-2017 campaign. The VMs were obtained by overlapping two coordinated range height indicator scans, prioritising continuous vertical measurements at multiple heights at the expense of high temporal and spatial synchronisation. Forty-six days of results from three VMs (VM1 on the SW ridge, VM2 in the valley, and VM3 on the NE ridge) were compared against sonic readings (at 80 and 100 m a.g.l.) in terms of 10 min means and variances to assess accuracy and the influence of atmospheric stability, vertical velocity, and sampling rate on VM measurements. For mean flow quantities – wind speed (Vh) and the u and v velocity components – the r² values were close to 1 at all VMs, with the lowest equal to 0.948, whereas for turbulence measurements (u′u′ and v′v′) the lowest was 0.809. Concerning differences between ridge and valley measurements, the average RMSE for the wind variances was 0.295 m² s⁻² at the VMs on the ridges. In the valley, with more complex and turbulent flow, a smaller between-beam angle, and lower lidar synchronisation, VM2 presented the highest variance RMSE, 0.600 m² s⁻² for u′u′. The impact of atmospheric stability on VM measurements also varied by location, especially for the turbulence variables. VM1 and VM3 exhibited better statistical metrics of the mean and turbulent wind under stable conditions, whereas at VM2 the better results with a stable atmosphere were restricted to the wind variances. We suspect that with a stable and less turbulent atmosphere, the scan synchronisation in the dual-lidar systems had a lower impact on measurement accuracy. The impact of the zero-vertical-velocity assumption on dual-lidar retrievals at 80 and 100 m a.g.l. in Perdigão was minimal, confirming the validity of the VM results at these heights. Lastly, the VMs' low sampling rate contributed 33% of the overall RMSE for mean quantities and 78% for variances at 100 m a.g.l., under the assumption of a linear influence of the sampling rate on the dual-lidar error. Overall, the VM results showed the ability of this measurement methodology to capture mean and turbulent wind characteristics under different flow conditions and over mountainous terrain. Upon appraisal of the VM accuracy based on sonic anemometer measurements at 80 and 100 m a.g.l., we obtained vertical profiles of the wind up to 430 m a.g.l. To ensure dual-lidar measurement reliability, we recommend a 90° angle between beams and a sampling rate of at least 0.05 Hz for mean and 0.2 Hz for turbulent flow variables. [ABSTRACT FROM AUTHOR] (A minimal two-beam retrieval sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
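Under the zero-vertical-velocity assumption discussed above (and neglecting beam elevation), a dual-lidar retrieval at a virtual-mast point reduces to a 2x2 linear system for the horizontal wind components given two radial velocities. The azimuths and radial velocities below are assumed values; the near-90° between-beam angle keeps the system well conditioned, which is why the authors recommend it.

```python
import numpy as np

# Beam azimuths (from north) and measured radial velocities (assumed values).
az = np.radians([30.0, 120.0])
vr = np.array([4.2, -1.3])   # m/s

# With w = 0 and horizontal beams: vr_i = u * sin(az_i) + v * cos(az_i).
A = np.column_stack([np.sin(az), np.cos(az)])
u, v = np.linalg.solve(A, vr)
print(u, v, np.hypot(u, v))  # u, v components and horizontal speed Vh
```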
44. Genotyping Error Detection and Customised Filtration for SNP Datasets.
- Author
-
Kan‐Lingwood, Noa Yaffa, Sagi, Liran, Mazie, Shahar, Shahar, Naama, Zecherle Bitton, Lilith, Templeton, Alan, Rubenstein, Daniel, Bouskila, Amos, and Bar‐David, Shirli
- Subjects
- *
ERROR rates , *GENETIC polymorphisms , *MISSING data (Statistics) , *GENETIC distance , *SINGLE nucleotide polymorphisms - Abstract
A major challenge in analysing single‐nucleotide polymorphism (SNP) genotype datasets is detecting and filtering errors that bias analyses and lead to misinterpretation of ecological and evolutionary processes. Here, we present a comprehensive method to estimate and minimise genotyping error rates (deviations from the 'true' genotype) in any SNP dataset using triplicates (three repeats of the same sample) in a four‐step filtration pipeline. The approach involves: (1) SNP filtering by missing data; (2) SNP filtering by error rates; (3) sample filtering by missing data and (4) detection of recaptured individuals using the estimated SNP error rates. The modular pipeline is provided in an R script that allows customised adjustments. We demonstrate the applicability of the method using non‐invasive sampling from the Asiatic wild ass (Equus hemionus) population in Israel. We genotyped 756 samples using 625 SNPs, of which 255 samples were triplicates of 85 samples. The average SNP error rate, calculated from the number of mismatching genotypes across triplicates before filtration, was 0.0034 and was reduced to 0.00174 following filtration. Evaluating genetic distance (GD) and relatedness (r) between triplicates before and after filtration (expected to be at the minimum and maximum, respectively) showed a significant reduction in average GD, from 58.1 to 25.3 (p = 0.0002), and a significant increase in relatedness, from r = 0.98 to r = 0.991 (p = 0.00587). We demonstrate how error rate estimation enhances recapture detection and improves genotype quality. [ABSTRACT FROM AUTHOR] (A minimal triplicate-mismatch sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
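The authors provide their pipeline as an R script; the Python sketch below illustrates only its error-rate ingredient, the per-SNP mismatch rate across one sample's triplicate after dropping SNPs with missing calls. The genotype coding (0/1/2, with -1 for missing) is an assumption.

```python
import numpy as np

# Genotype calls for one sample's triplicate: 3 repeats x 5 SNPs,
# coded 0/1/2 with -1 for a missing call (coding is an assumption).
geno = np.array([[0, 1, 2, 1, -1],
                 [0, 1, 2, 2, 1],
                 [0, 1, 2, 1, 1]])

valid = (geno >= 0).all(axis=0)                      # drop SNPs with missing calls
agree = (geno[0] == geno[1]) & (geno[1] == geno[2])  # unanimous triplicate calls
error_rate = 1 - agree[valid].mean()                 # mismatch rate across triplicates
print(error_rate)                                    # 0.25 here (1 of 4 valid SNPs)
```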
45. Can Artificial Intelligence Identify Reading Fluency and Level? Comparison of Human and Machine Performance.
- Author
-
Yıldız, Mustafa, Keskin, Hasan Kağan, Oyucu, Saadin, Hartman, Douglas K., Temur, Murat, and Aydoğmuş, Mücahit
- Subjects
- *
ARTIFICIAL intelligence , *AUTOMATIC speech recognition , *LOGISTIC regression analysis , *MACHINE performance , *ERROR rates , *SPEECH perception - Abstract
This study examined whether an artificial intelligence-based automatic speech recognition (ASR) system can accurately assess students' reading fluency and reading level. Participants were 120 fourth-grade students attending public schools in Türkiye. Students read a grade-level text aloud while their voice was recorded. Two experts and the ASR system analyzed the recordings for reading errors. A word error rate was then calculated for both the experts and the ASR system and converted into reading accuracy scores. Inter-rater agreement and linear regression analyses were used to compare the raters' reading fluency scores, and logistic regression analyses were used to compare the classification of readers by reading level. Results showed that the difference between the ASR system's scores and the experts' scores was minimal, reflecting a very high level of agreement between them. Linear regression analyses showed that the ASR system's scores significantly predicted the experts' scores. According to the logistic regression results, the ASR system was at least 93% as successful as human raters in classifying readers as poor or good. These results give us hope that reading assessments at classroom, school, regional, national, and even international levels can be conducted more accurately and economically using artificial intelligence-based systems in the coming years. [ABSTRACT FROM AUTHOR] (A minimal word-error-rate sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
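The word error rate at the center of this comparison is the word-level Levenshtein distance between a reference transcript and the recognizer's (or rater's) transcript, normalized by reference length; reading accuracy is then 1 − WER. A minimal self-contained implementation:

```python
def wer(ref, hyp):
    # Word-level Levenshtein distance, normalized by reference length.
    ref, hyp = ref.split(), hyp.split()
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1] / len(ref)

print(wer("the quick brown fox", "the quik brown fox"))      # 0.25
print(1 - wer("the quick brown fox", "the quik brown fox"))  # accuracy 0.75
```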
46. Personalized assessment of eating disorder cognitions during treatment: A new measure of cognitive pathology change.
- Author
-
Ortiz, Anna Marie L., Butler, Rachel M., and Levinson, Cheri A.
- Subjects
- *
EATING disorders , *BECK Depression Inventory , *MULTIDIMENSIONAL scaling , *ERROR rates , *INVENTORIES - Abstract
Modifying cognitive distortions, or thinking errors, is crucial in eating disorder (ED) treatment. To address the lack of a personalized measure of ED cognitions, the Thought Inventory was developed. The study aimed to establish its feasibility and validity, identify thinking-error contents and types, examine changes in belief in irrational thoughts, and investigate associations with change in ED symptoms. Hypotheses, procedure, and planned analyses were pre-registered to ensure transparency. Participants (N = 55) completed the Thought Inventory, the Eating Disorder Examination Questionnaire, the Eating Pathology Symptom Inventory, the Frost Multidimensional Perfectionism Scale, the Beck Depression Inventory, and the Penn State Worry Questionnaire before and after ten weeks of treatment. Using the Thought Inventory, participants collaborated with study therapists to identify ED-related thinking errors and rate their degree of belief in these thoughts on a scale of 0 to 100%. Cognitions primarily contained self-judgments, food rules, and concern over shape, while catastrophizing/fortune telling, emotional reasoning, and should/must statements were the most common types of thinking errors. Belief in cognitions significantly decreased over treatment, and change in thought belief was positively associated with change in ED symptoms. The Thought Inventory shows promise as a personalized measure. Future research should explore whether ED cognitions, assessed in this manner, are a mechanism of change in ED treatment. • Cognitive distortions play a critical role in eating disorders. • Currently, there is no personalized assessment of eating disorder cognitions. • We therefore developed the Thought Inventory and established its initial feasibility and validity. • Results demonstrate initial promise for this new personalized measure. • Future research is needed to offer further support for the utility of the Thought Inventory. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
47. Biomathematical enzyme kinetics model of prebiotic autocatalytic RNA networks: degenerating parasite-specific hyperparasite catalysts confer parasite resistance and herald the birth of molecular immunity.
- Author
-
Pirovino, Magnus, Iseli, Christian, Curran, Joseph A., and Conrad, Bernard
- Subjects
- *
ENZYME kinetics , *CHEMICAL reactions , *ARMS race , *ERROR rates , *AUTOCATALYSIS - Abstract
Catalysis and specifically autocatalysis are the quintessential building blocks of life. Yet, although autocatalytic networks are necessary, they are not sufficient for the emergence of life-like properties, such as replication and adaptation. The ultimate and potentially fatal threat faced by molecular replicators is parasitism; if the polymerase error rate exceeds a critical threshold, even the fittest molecular species will disappear. Here we have developed an autocatalytic RNA early life mathematical network model based on enzyme kinetics, specifically the steady-state approximation. We confirm previous models showing that these second-order autocatalytic cycles are sustainable, provided there is a sufficient nucleotide pool. However, molecular parasites become untenable unless they sequentially degenerate to hyperparasites (i.e. parasites of parasites). Parasite resistance, a parasite-specific host response that decreases parasite fitness, is acquired gradually, and eventually involves an increased binding affinity of hyperparasites for parasites. Our model is supported at three levels. Firstly, ribozyme polymerases display Michaelis-Menten saturation kinetics and comply with the steady-state approximation. Secondly, ribozyme polymerases are capable of sustainable auto-amplification and of surmounting the fatal error threshold. Thirdly, with growing sequence divergence of host and parasite catalysts, the probability of self-binding is expected to increase and the trend towards cross-reactivity to diminish. Our model predicts that primordial host-RNA populations evolved via an arms race towards a host-parasite-hyperparasite catalyst trio that conferred parasite resistance within an RNA replicator niche. While molecular parasites have traditionally been viewed as a nuisance, our model argues for their integration into the host habitat rather than their separation. It adds another mechanism, with biochemical precision, by which parasitism can be tamed, and offers an attractive explanation for the universal coexistence of catalyst trios within prokaryotes and the virosphere, heralding the birth of a primitive molecular immunity. Author summary: The quintessential components of life comprise a potent mixture of naturally occurring but improbable chemical reactions (catalysis), and the arrangement of such accelerated chemical reactions into closed loops (autocatalytic sets). This is required but is not sufficient for such networks to self-propagate (amplification of the information carrier = host polymerization) and adapt (Darwinian evolution). As soon as self-propagation is attained, the next hurdle is parasitism. This typically involves shorter molecules (the products of replicative errors) that hitchhike on the replicative potential of the host. They will invariably outcompete the regular amplification process unless a solution is found. We have addressed this problem using a new model based on the mathematics of catalysis. It confirms previous studies demonstrating that autocatalytic sets become self-sustaining, assuming that a sufficient pool of molecular building blocks is available. However, molecular parasitism is pervasive and potentially fatal for both host and parasite. In our model, we allow these parasites to degenerate in a controlled fashion, giving rise to parasites of parasites (hyperparasites). As long as these hyperparasites acquire binding specificity for parasites, an attenuation of parasitism is observed.
These parasite-hyperparasite cycles stabilize the host cycle, explaining why they are conserved, and why all cellular hosts are associated with parasites (e.g. bacteria) and hyperparasites (e.g. viruses) across all kingdoms of life. Moreover, it provides a novel solution to the usually intractable problem of parasitism. [ABSTRACT FROM AUTHOR] (A minimal kinetic sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
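The model's kinetic building block is replication with Michaelis-Menten (steady-state) saturation. The sketch below integrates a single saturating replicator drawing on a finite nucleotide pool; it is a drastic simplification of the paper's host-parasite-hyperparasite network, with illustrative rate constants.

```python
import numpy as np
from scipy.integrate import solve_ivp

k, Km = 1.0, 0.5   # illustrative rate constant and Michaelis constant

def rhs(t, y):
    H, N = y                      # host replicators, nucleotide pool
    v = k * H * N / (Km + N)      # Michaelis-Menten-saturated replication rate
    return [v, -v]                # growth consumes the pool 1:1 (assumed)

sol = solve_ivp(rhs, [0, 20], [1e-3, 10.0])
print(sol.y[0, -1], sol.y[1, -1])  # host grows until the pool is exhausted
```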
48. An improved group quantum key distribution protocol with multi-party collaboration.
- Author
-
Yuan, Qi, Yuan, Hao, Zhou, MeiTong, Wen, JingJing, Li, JuYan, and Hao, Bing
- Subjects
- *
QUANTUM groups , *QUANTUM states , *ERROR rates , *PHOTON counting , *EAVESDROPPING - Abstract
The rapid advancement of quantum key distribution technology in recent years has spurred significant innovation within the field. Nevertheless, a crucial yet frequently underexplored challenge is the comprehensive security evaluation of quantum state modulation. To address this issue, we propose a novel framework for quantum group key distribution. In the setup phase, preprocessing is introduced to monitor photon intensity and count, ensuring the secure initialization of the protocol. During the measurement phase, signal consistency checks verify that the intensity of the signal received by the measurement device corresponds precisely to the transmitted signal. In the key generation phase, error correction mitigates errors induced by noise or external interference, effectively reducing the error margin and restricting the information available to potential eavesdroppers. This systematic, multi-phase approach significantly enhances the framework's robustness. Experimental results demonstrate that the proposed protocol not only substantially reduces the error rate under adversarial eavesdropping but also improves the efficiency and security of the key distribution process. [ABSTRACT FROM AUTHOR] (A minimal error-rate estimation sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
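One generic step behind abstracts like this one, estimating the error rate to bound an eavesdropper's information, can be sketched simply: publicly compare a random sample of sifted key bits, then discard them and abort if the observed rate exceeds a threshold. The 5% channel error and the 11% cutoff (a commonly cited BB84-style figure) are assumptions; the paper's group protocol is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
alice = rng.integers(0, 2, n)           # sifted key bits
bob = alice ^ (rng.random(n) < 0.05)    # 5% errors from noise or eavesdropping

sample = rng.choice(n, 1_000, replace=False)  # publicly compared, then discarded
qber = np.mean(alice[sample] != bob[sample])
print(qber, "abort" if qber > 0.11 else "proceed to error correction")
```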
49. Green communication systems via a wavefront multiplexing technique.
- Author
-
Yeh, Hen‐Geul and Lee, Joe
- Subjects
- *
ADDITIVE white Gaussian noise , *MULTIPLEXING , *TELECOMMUNICATION systems , *ERROR rates - Abstract
A green communication scheme using an orthogonal wavefront (WF) multiplexing (Muxing) scheme spatially combined with orthogonal frequency‐division multiplexing (OFDM) techniques is proposed, forming a spatial WF OFDM transceiver. The WF Muxing technique serves as the preprocessing and postprocessing method of the WF OFDM transceiver. With coordinated multiple-point forward transmission, this spatial WF OFDM system establishes a communication network and can be applied to multiple base stations (BSs) with downlinks to single or multiple mobile units (MUs). Although signals are received non‐coherently due to different distances between BSs and MUs, they can be compensated and coherently combined via adaptive equalizers at the MUs, using pilot signals with an optimization method at the MU receivers. Simulation results demonstrate that the WF OFDM scheme attains the same bit error rate (BER) as predicted by theory in an additive white Gaussian noise (AWGN) channel. Moreover, the required effective equivalent isotropically radiated power (EIRP) from BSs to MUs is significantly reduced due to multiple non‐coherent transmissions; accordingly, interference to signals in adjacent frequency bands will be low. This green communication network is achieved through the combination of WF Muxing, OFDM, and optimization at the receiver. Further investigation is needed to show that this WF OFDM transceiver can be applied to frequency-selective mobile fading channels. [ABSTRACT FROM AUTHOR] (A minimal WF-plus-OFDM sketch follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
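The transceiver structure, WF preprocessing across streams followed by per-stream OFDM, can be sketched with a small orthogonal matrix standing in for the WF muxing transform. The 2x2 matrix, subcarrier count, and noise level below are assumptions, and the coordinated multi-BS links and adaptive equalization are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # assumed 2x2 orthogonal WF matrix
nsc = 64                                       # OFDM subcarriers (assumed)

bits = rng.integers(0, 2, (2, nsc))            # two streams of BPSK bits
sym = 2 * bits - 1.0

muxed = W @ sym                                # WF preprocessing across streams
tx = np.fft.ifft(muxed, axis=1)                # per-stream OFDM modulation
noise = rng.normal(0, 0.05, tx.shape) + 1j * rng.normal(0, 0.05, tx.shape)
rx = tx + noise                                # AWGN channel

demux = W.T @ np.fft.fft(rx, axis=1)           # OFDM demodulation + WF postprocessing
print("BER:", np.mean((demux.real > 0) != bits))
```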
50. Efficient Adjusted Joint Significance Test and Sobel-Type Confidence Interval for Mediation Effect.
- Author
-
Zhang, Haixiang
- Subjects
- *
FALSE positive error , *STATISTICAL power analysis , *STATISTICAL hypothesis testing , *CONFIDENCE intervals , *ERROR rates , *MEDIATION (Statistics) - Abstract
Mediation analysis is an important statistical tool in many research fields, and the joint significance test is widely used to examine mediation effects. However, this test has a conservative Type I error rate, which reduces its statistical power and limits its utility. To address this gap, we propose the adjusted joint significance test for one mediator, a novel data-adjusted approach for assessing mediation effects. The method is specifically designed to be user-friendly, eliminating the need for intricate procedures. We further extend the adjusted joint significance test to small-scale mediation hypotheses with family-wise error rate (FWER) control. Additionally, a novel adjusted Sobel-type confidence interval is proposed for mediation effects, demonstrating significant advances over the conventional Sobel method. The effectiveness of our mediation testing and confidence interval estimation is assessed through extensive simulations and compared against a multitude of existing approaches. Finally, we apply the method to three substantive datasets with continuous, binary, and time-to-event outcomes, respectively. [ABSTRACT FROM AUTHOR] (A minimal sketch of the classic joint test and Sobel interval follows this entry.)
- Published
- 2025
- Full Text
- View/download PDF
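For orientation, the classic constructions the paper adjusts are short: the joint significance test rejects when both path p-values fall below the level, and the Sobel interval uses the delta-method standard error of the product a·b. The sketch implements the unadjusted versions only; the paper's adjusted variants modify both.

```python
import numpy as np
from scipy.stats import norm

def sobel_ci(a, se_a, b, se_b, level=0.95):
    # Classic (unadjusted) Sobel confidence interval for the mediated effect a*b.
    est = a * b
    se = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
    z = norm.ppf(0.5 + level / 2)
    return est - z * se, est + z * se

def joint_significance(p_a, p_b, alpha=0.05):
    # Classic joint significance test: both paths must be individually significant.
    return max(p_a, p_b) < alpha

print(sobel_ci(0.4, 0.1, 0.3, 0.08))   # illustrative estimates and standard errors
print(joint_significance(0.01, 0.03))  # True: mediation declared at alpha = .05
```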