1,190 results for "Performance estimation"
Search Results
2. The exact worst-case convergence rate of the alternating direction method of multipliers.
- Author
-
Zamani, Moslem, Abbaszadehpeivasti, Hadi, and de Klerk, Etienne
- Subjects
- *
SEMIDEFINITE programming - Abstract
Recently, semidefinite programming performance estimation has been employed as a strong tool for the worst-case performance analysis of first order methods. In this paper, we derive new non-ergodic convergence rates for the alternating direction method of multipliers (ADMM) by using performance estimation. We give some examples which show the exactness of the given bounds. We also study the linear and R-linear convergence of ADMM in terms of dual objective. We establish that ADMM enjoys a global linear convergence rate if and only if the dual objective satisfies the Polyak–Łojasiewicz (PŁ) inequality in the presence of strong convexity. In addition, we give an explicit formula for the linear convergence rate factor. Moreover, we study the R-linear convergence of ADMM under two scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
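A minimal LaTeX sketch, not taken from the record above, of the ADMM iteration and the Polyak–Łojasiewicz condition its abstract refers to; the splitting, penalty ρ, and scaled dual u follow standard textbook notation and are assumptions here.

```latex
% Scaled-form ADMM for  min_{x,z} f(x)+g(z)  s.t.  Ax+Bz=c,  with penalty rho>0:
\[
\begin{aligned}
x_{k+1} &= \arg\min_x\; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz_k - c + u_k\rVert^2,\\
z_{k+1} &= \arg\min_z\; g(z) + \tfrac{\rho}{2}\,\lVert Ax_{k+1} + Bz - c + u_k\rVert^2,\\
u_{k+1} &= u_k + Ax_{k+1} + Bz_{k+1} - c.
\end{aligned}
\]
% Polyak--Lojasiewicz (PL) inequality for the (concave, maximised) dual objective d, modulus mu>0:
\[
\tfrac{1}{2}\,\lVert \nabla d(\lambda)\rVert^2 \;\ge\; \mu\,\bigl(\max_{\lambda'} d(\lambda') - d(\lambda)\bigr)
\qquad \text{for all } \lambda .
\]
```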
3. Convergence rate analysis of the gradient descent–ascent method for convex–concave saddle-point problems.
- Author
-
Zamani, Moslem, Abbaszadehpeivasti, Hadi, and de Klerk, Etienne
- Subjects
- *
SEMIDEFINITE programming , *ALGORITHMS - Abstract
In this paper, we study the gradient descent–ascent method for convex–concave saddle-point problems. We derive a new non-asymptotic global convergence rate in terms of distance to the solution set by using the semidefinite programming performance estimation method. The given convergence rate incorporates most parameters of the problem and it is exact for a large class of strongly convex-strongly concave saddle-point problems for one iteration. We also investigate the algorithm without strong convexity and we provide some necessary and sufficient conditions under which the gradient descent–ascent enjoys linear convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
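For the record above, a generic statement, not quoted from the paper, of the simultaneous gradient descent–ascent update and the distance-to-solution-set measure mentioned in the abstract; the stepsizes α, β and the simultaneous (rather than alternating) form are assumptions.

```latex
% Simultaneous gradient descent--ascent for  min_x max_y F(x,y)  with stepsizes alpha, beta > 0:
\[
x_{k+1} = x_k - \alpha\,\nabla_x F(x_k, y_k),
\qquad
y_{k+1} = y_k + \beta\,\nabla_y F(x_k, y_k),
\]
% with progress measured, as in the abstract, by the distance to the solution (saddle-point) set S:
\[
\operatorname{dist}\bigl((x_k, y_k),\, \mathcal{S}\bigr).
\]
```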
4. INTERPOLATION CONDITIONS FOR LINEAR OPERATORS AND APPLICATIONS TO PERFORMANCE ESTIMATION PROBLEMS.
- Author
-
BOUSSELMI, NIZAR, HENDRICKX, JULIEN M., and GLINEUR, FRANÇOIS
- Subjects
- *
LINEAR operators , *INTERPOLATION algorithms , *CONVEX functions , *INTERPOLATION , *EIGENVALUES - Abstract
The performance estimation problem methodology makes it possible to determine the exact worst-case performance of an optimization method. In this work, we generalize this framework to first-order methods involving linear operators. This extension requires an explicit formulation of interpolation conditions for those linear operators. We consider the class of linear operators M : x → Mx, where matrix M has bounded singular values, and the class of linear operators where M is symmetric and has bounded eigenvalues. We describe interpolation conditions for these classes, i.e., necessary and sufficient conditions that, given a list of pairs {(x_i, y_i)}, characterize the existence of a linear operator mapping x_i to y_i for all i. Using these conditions, we first identify the exact worst-case behavior of the gradient method applied to the composed objective h∘M, and observe that it always corresponds to M being a scaling operator. We then investigate the Chambolle–Pock method applied to f + g∘M, and improve the existing analysis to obtain a proof of the exact convergence rate of the primal-dual gap. In addition, we study how this method behaves on Lipschitz convex functions, and obtain a numerical convergence rate for the primal accuracy of the last iterate. We also show numerically that averaging iterates is beneficial in this setting. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
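A hedged illustration of the flavor of an interpolation condition for the operator class with bounded norm, written as a paraphrase rather than a quotation of the paper's theorem; only the easy necessity direction is derived below.

```latex
% Data {(x_i, y_i)}_{i=1}^n stacked as columns  X = [x_1, ..., x_n],  Y = [y_1, ..., y_n].
% If  y_i = M x_i  for all i with  \lVert M \rVert \le L,  then for every coefficient vector c,
\[
\lVert Y c\rVert = \lVert M X c\rVert \le L\,\lVert X c\rVert
\quad\Longrightarrow\quad
Y^{\top} Y \preceq L^{2}\, X^{\top} X .
\]
% An interpolation condition certifies when a Gram-matrix inequality of this kind is also
% sufficient for some operator of the class to exist, which is what makes it usable inside an SDP.
```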
5. PROVABLY FASTER GRADIENT DESCENT VIA LONG STEPS.
- Author
-
GRIMMER, BENJAMIN
- Subjects
- *
SEMIDEFINITE programming , *CONVEX programming , *MOTIVATION (Psychology) , *LOGICAL prediction - Abstract
This work establishes new convergence guarantees for gradient descent in smooth convex optimization via a computer-assisted analysis technique. Our theory allows nonconstant stepsize policies with frequent long steps potentially violating descent by analyzing the overall effect of many iterations at once rather than the typical one-iteration inductions used in most first-order method analyses. We show that long steps, which may increase the objective value in the short term, lead to provably faster convergence in the long term. A conjecture towards proving a faster O(1/(T log T)) rate for gradient descent is also motivated along with simple numerical validation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
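A hedged Python sketch of the general idea of running gradient descent with a repeated nonconstant stepsize pattern that contains occasional long steps. The quadratic objective, the cycle [1.5, 1.5, 1.5, 6.0], and all parameter values are illustrative assumptions; they are not the certified stepsize patterns from the paper.

```python
import numpy as np

def gd_with_stepsize_cycle(grad_f, x0, cycle, n_iters, L=1.0):
    """Run gradient descent x_{k+1} = x_k - (h_k / L) * grad_f(x_k),
    where h_k cycles through a repeated stepsize pattern."""
    x = np.asarray(x0, dtype=float)
    trajectory = [x.copy()]
    for k in range(n_iters):
        h = cycle[k % len(cycle)]          # nonconstant stepsize from the pattern
        x = x - (h / L) * grad_f(x)
        trajectory.append(x.copy())
    return np.array(trajectory)

# Toy smooth convex objective f(x) = 0.5 * x^T A x with L = lambda_max(A).
rng = np.random.default_rng(0)
Q = rng.standard_normal((20, 20))
A = Q.T @ Q / 20.0
L = np.linalg.eigvalsh(A).max()
grad_f = lambda x: A @ x

# Hypothetical pattern: mostly short steps plus one "long" step per cycle
# (purely illustrative; the certified patterns in the paper are specific).
cycle = [1.5, 1.5, 1.5, 6.0]

xs = gd_with_stepsize_cycle(grad_f, rng.standard_normal(20), cycle, n_iters=200, L=L)
f_vals = 0.5 * np.einsum('ij,jk,ik->i', xs, A, xs)
print("objective after 200 steps:", f_vals[-1])  # may rise on long steps, falls overall
```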
6. On the Rate of Convergence of the Difference-of-Convex Algorithm (DCA).
- Author
-
Abbaszadehpeivasti, Hadi, de Klerk, Etienne, and Zamani, Moslem
- Subjects
- *
SEMIDEFINITE programming , *ALGORITHMS , *SOCIAL norms , *NONSMOOTH optimization - Abstract
In this paper, we study the non-asymptotic convergence rate of the DCA (difference-of-convex algorithm), also known as the convex–concave procedure, with two different termination criteria that are suitable for smooth and non-smooth decompositions, respectively. The DCA is a popular algorithm for difference-of-convex (DC) problems and is known to converge to a stationary point of the objective under some assumptions. We derive a worst-case convergence rate of O(1/N) for the objective gradient norm after N iterations for certain classes of DC problems, without assuming strong convexity in the DC decomposition, and give an example which shows the convergence rate is exact. We also provide a new convergence rate of O(1/N) for the DCA with the second termination criterion. Moreover, we derive a new linear convergence rate result for the DCA under the assumption of the Polyak–Łojasiewicz inequality. The novel aspect of our analysis is that it employs semidefinite programming performance estimation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
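A short LaTeX sketch, not drawn from the paper, of the DCA step and a gradient-based termination quantity of the kind the abstract mentions; the exact norm and power used in the paper's bounds may differ.

```latex
% DC problem  min_x f(x) = g(x) - h(x)  with g, h convex; one DCA step linearises h at x_k:
\[
s_k \in \partial h(x_k),
\qquad
x_{k+1} \in \arg\min_x\;\bigl\{\, g(x) - \langle s_k,\, x - x_k\rangle \,\bigr\}.
\]
% For a smooth decomposition, a natural termination criterion monitors the objective gradient,
\[
\lVert \nabla f(x_k)\rVert = \lVert \nabla g(x_k) - \nabla h(x_k)\rVert ,
\]
% which is the type of quantity the worst-case bounds in the abstract are stated for.
```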
7. Tight Ergodic Sublinear Convergence Rate of the Relaxed Proximal Point Algorithm for Monotone Variational Inequalities.
- Author
-
Gu, Guoyong and Yang, Junfeng
- Subjects
- *
ALGORITHMS , *LOGICAL prediction , *CONCRETE , *MOTIVATION (Psychology) - Abstract
This paper considers the relaxed proximal point algorithm for solving monotone variational inequality problems, and our main contribution is the establishment of a tight ergodic sublinear convergence rate. First, the tight or exact worst-case convergence rate is computed using the performance estimation framework. It is observed that this numerical bound asymptotically coincides with the best-known existing rate, whose tightness is not clear. This implies that, without further assumptions, sublinear convergence rate is likely the best achievable rate for the relaxed proximal point algorithm. Motivated by the numerical result, a concrete example is constructed, which provides a lower bound on the exact worst-case convergence rate. This lower bound coincides with the numerical bound computed via the performance estimation framework, leading us to conjecture that the lower bound provided by the example is exactly the tight worst-case rate, which is then verified theoretically. We thus have established an ergodic sublinear complexity rate that is tight in terms of both the sublinear order and the constants involved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
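A brief reminder, in standard notation and not quoted from the paper, of the relaxed proximal point update and the averaged ("ergodic") iterate that the rate in the abstract refers to; the symbols λ and γ are notational assumptions.

```latex
% Maximally monotone operator F, resolvent  J_{\lambda F} = (I + \lambda F)^{-1},  relaxation  gamma in (0,2):
\[
x_{k+1} = (1-\gamma)\,x_k + \gamma\, J_{\lambda F}(x_k),
\qquad
\bar{x}_N = \frac{1}{N}\sum_{k=1}^{N} x_k ,
\]
% where an "ergodic" rate, as in the abstract, is a worst-case bound stated for the averaged iterate.
```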
8. A novel approach to node coverage enhancement in wireless sensor networks using walrus optimization algorithm
- Author
-
V. Saravanan, Indhumathi G, Ramya Palaniappan, Narayanasamy P, M. Hema Kumar, K. Sreekanth, and Navaneethan S
- Subjects
Node coverage ,Optimization problem ,Wireless sensor network ,Performance estimation ,Walrus optimization algorithm ,Technology - Abstract
Wireless Sensor Networks (WSNs) are crucial components of modern technology, supporting applications like healthcare, industrial automation, and environmental monitoring. This research aims to design intelligent and adaptive sensor networks by integrating metaheuristics with node coverage optimization in WSNs. By incorporating metaheuristics and optimizing node coverage, WSNs can become more resilient and robust, leading to the development of self-adapting, self-organizing networks capable of efficiently covering dynamic and diverse environments. This research introduces the Walrus Optimization Algorithm for Node Coverage Enhancement in WSNs, called the WaOA-NCEWSN technique. The primary goal of this technique is to optimize the coverage of a target region using a limited number of Sensor Nodes (SNs) and by improving their placement. The WaOA is inspired by walrus behaviours like feeding, migrating, breeding, escaping, roosting, and gathering in response to environmental signals. The WaOA-NCEWSN technique uses an objective function that defines the coverage ratio, representing the maximum probability of coverage in a 2D-WSN monitoring area. Comparative analysis with other models using 50, 75, 100, and 200 nodes shows that the WaOA-NCEWSN technique performs better. The compilation times for the WaOA-NCEWSN technique are 5.14s, 6.48s, 6.54s, and 7.47s for 50, 75, 100, and 200 nodes, respectively. Experimental results indicate that the WaOA-NCEWSN technique offers superior coverage performance compared to other models.
- Published
- 2024
- Full Text
- View/download PDF
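A minimal Python sketch of a node-coverage objective of the kind a metaheuristic such as WaOA would maximise over sensor placements. The binary disc sensing model, the area size, radius, and grid resolution are simplifying assumptions; the paper's objective is described as a probabilistic coverage ratio and may differ in form.

```python
import numpy as np

def coverage_ratio(sensor_xy, area=(100.0, 100.0), r_sense=10.0, grid=100):
    """Fraction of grid points in a rectangular area covered by at least one
    sensor, under a binary disc sensing model of radius r_sense."""
    xs = np.linspace(0.0, area[0], grid)
    ys = np.linspace(0.0, area[1], grid)
    gx, gy = np.meshgrid(xs, ys)                        # (grid, grid) monitoring points
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)    # (grid*grid, 2)
    d = np.linalg.norm(pts[:, None, :] - sensor_xy[None, :, :], axis=2)  # point-to-node distances
    covered = (d <= r_sense).any(axis=1)
    return covered.mean()

# Example: score a random placement of 50 sensor nodes; an optimiser would search
# over sensor_xy to maximise this objective.
rng = np.random.default_rng(1)
sensor_xy = rng.uniform(0.0, 100.0, size=(50, 2))
print(f"coverage ratio: {coverage_ratio(sensor_xy):.3f}")
```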
9. Safety Performance of Neural Networks in the Presence of Covariate Shift
- Author
-
Cheng, Chih-Hong, Ruess, Harald, Theodorou, Konstantinos, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Reynolds, Andrew, editor, and Tasiran, Serdar, editor
- Published
- 2024
- Full Text
- View/download PDF
10. Strategies for maintaining efficiency of edge services
- Author
-
Abdul Majeed, Ayesha and Spence, Ivor
- Subjects
Edge computing ,distributed DNN ,failures ,service downtime ,offloading ,containers ,performance estimation ,fog computing - Abstract
Cloud-centric systems face challenges in meeting the requirements of real-time Internet of Things (IoT) applications. IoT applications are integrated with Artificial Intelligence to facilitate real-time analytics. An Edge computing paradigm is established that improves the performance of applications in a decentralised approach by moving services closer to the end user. However, resources at the edge are heterogeneous, resource-limited, unreliable, and have intermittent availability. As a result, the application's performance may fluctuate at runtime, affecting the service availability. Therefore, services assigned to edge resources should adapt to the runtime environment to maintain the application's performance. However, it may take time for services to adapt to varying runtime conditions. Another problem is that the availability of edge services is impacted by factors such as system overload, network failure, or user mobility, which often occur at the edge at runtime. To ensure the availability of edge services, the impact of service interruptions caused by failures must be minimised. This thesis aims to develop strategies to cope with the above-mentioned issues for deploying and managing edge services by adapting to runtime conditions and failures. This thesis introduces a framework, EDGESER, that presents three strategies to improve the performance efficiency of edge services. First, this thesis presents a strategy that models and measures the service delay of available offloading approaches at the edge. Furthermore, the strategy identifies the parameters that impact the service time of different offloading approaches by considering computation, communication and offline parameters such as the size of the data. The second strategy focuses on efficiently adapting to runtime conditions by minimising the downtime incurred while redeploying edge services. The strategy is based on an approach that uses secondary edge-cloud pipelines to process user requests when reassigning edge services. The proposed approach minimises downtime compared to an existing baseline approach, making it suitable for latency-constrained applications. This strategy also considers the trade-off between the downtime of an edge service and the amount of memory needed by the proposed and baseline approaches. The third strategy rapidly responds to service failures at the edge by redeploying services on a different edge server. The salient feature of the proposed strategy is that it investigates three techniques, based on the characteristics of Deep Neural Networks (DNNs), for adapting to service failures. If edge services are recovered and redeployed on edge servers selected solely to mitigate failures, QoS violations, such as service latency violations, may occur. This highlights the significance of optimising service failures and user requirements jointly. The strategy also considers the trade-offs of the techniques, such as accuracy and latency, to choose the best technique based on user-defined objectives (accuracy, latency, and downtime thresholds) when a service failure happens. This enables edge service providers to balance the trade-off between service quality and service failures.
- Published
- 2023
11. Estimation of full-scale performance of energy-saving devices using Boundary Layer Similarity model.
- Author
-
Sadakata, Katsuaki, Hino, Takanori, and Takagi, Youhei
- Subjects
- *
BOUNDARY layer (Aerodynamics) , *COMPUTATIONAL fluid dynamics , *ENERGY consumption , *SHIP fuel - Abstract
The recent global trend toward decarbonization is also occurring in the maritime industry and there is an urgent need to improve the fuel efficiency of ships. Various energy-saving devices (ESDs) are adopted for this purpose. However, because of the difference in the boundary layer thickness between full-scale and model-scale, it is difficult to estimate the actual performance of ESDs in a tank test. As a solution to overcome this difficulty, the Boundary Layer Similarity model (BLS model) is proposed in which only the stern part of the hull is extracted to shorten the model length and to make its boundary layer thickness equivalent to that of a full-scale ship. By using this BLS model, the performance of ESDs on a full-scale can be estimated in a model test. In the present paper, the applicability of the BLS model is investigated by CFD (Computational Fluid Dynamics) simulations of performance estimation of an energy-saving fin in different locations for a bulk carrier. It is found that the BLS model is capable of predicting the full-scale performance of the fins by the model-scale simulations. Moreover, the flow fields affected by the fins with the BLS model are similar to those of full-scale. It is expected that the use of the BLS model enables the prediction of the full-scale performance of ESDs in a model test. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Hybridhadoop: CPU-GPU hybrid scheduling in hadoop.
- Author
-
Oh, Chanyoung, Yi, Saehanseul, Seok, Jongkyu, Jung, Hyeonjin, Yoon, Illo, and Yi, Youngmin
- Subjects
- *
HIGH performance computing , *GRAPHICS processing units , *SCHEDULING , *HETEROGENEOUS computing - Abstract
As the GPU has become an essential component in high performance computing, many works have attempted to leverage GPU computing in Hadoop. However, few works have considered fully utilizing the GPU in Hadoop, and only a few have studied utilizing both CPU and GPU at the same time. In this paper, we propose a CPU-GPU hybrid scheduling in Hadoop, where both CPUs and GPUs in a node are exploited as much as possible in an adaptive manner. The technical barrier is that the optimal number of GPU tasks is not known in advance, and the total number of Containers in a node cannot be changed once a Hadoop job starts. In the proposed approach, we first determine the initial number of Containers as well as the hybrid execution mode, then the proposed dynamic scheduler adjusts the number of Containers for a GPU and a CPU with the help of a GPU monitor during the job execution. It also employs a load-balancing algorithm for the tail. The experiments with various benchmarks show that the proposed CPU-GPU hybrid scheduling achieves a 3.87× speedup on average over the 12-core CPU-only Hadoop. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Performance Estimation of a Medium-Resolution Earth Observation Sensor Using Nanosatellite Replica.
- Author
-
Colodro-Conde, Carlos
- Subjects
- *
MICROSPACECRAFT , *DETECTORS , *COMPLIANT mechanisms , *LABORATORY equipment & supplies - Abstract
In many areas of engineering, the design of a new system usually involves estimating performance-related parameters from early stages of the project to determine whether a given solution will be compliant with the defined requirements. This aspect is particularly relevant during the design of satellite payloads, where the target environment is not easily accessible in most cases. In the context of Earth observation sensors, this problem has been typically solved with the help of a set of complex pseudo-empirical models and/or expensive laboratory equipment. This paper describes a more practical approach: the illumination conditions measured by an in-orbit payload are recreated on ground with the help of a replica of the same payload so the performance of another Earth observation sensor in development can be evaluated. The proposed method is especially relevant in the context of small satellites, as the possibility of having extra units devoted to these tasks becomes greater as costs are reduced. The results obtained using this method in an actual space mission are presented in this paper, giving valuable information that will help in further stages of the project. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Derin öğrenme teknikleri kullanılarak üretim sistemlerinde KPI tabanlı performans tahminleme [KPI-based performance estimation in production systems using deep learning techniques].
- Author
-
Akkurt, Taha and Sarıçiçek, İnci
- Abstract
Firms in the manufacturing sector need to constantly monitor their performance in order to maintain their development under competitive conditions in the market. In this study, eleven KPIs are determined to measure the production performance by taking into account the factory assets. The proposed system is designed in which the relevant KPIs are obtained via the instantaneous data received from the CNC machines in a production system. The main objective of this study is to measure and estimate production performance. In this way, it is aimed to provide a proactive approach for the assets whose performance is monitored by the decision-makers. LSTM and LightGBM models which are deep learning techniques are proposed for the estimation of performance indicators. The approximately three-month time series OEE (Overall Equipment Effectiveness) values of the sample CNC machine are used for estimation. The estimation performance of methods is compared over performance metrics (MSE, MAE, etc.). The results indicated that LightGBM outperforms LSTM for all performance metrics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
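A hedged Python sketch of one of the two estimators named in the abstract above (LightGBM) applied to a synthetic stand-in for an OEE time series via lag features. The data generator, lag count, hyperparameters, and train/test split are illustrative assumptions, and the LSTM branch of the comparison is omitted.

```python
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Synthetic stand-in for ~3 months of hourly OEE values (the study uses real CNC data).
rng = np.random.default_rng(0)
t = np.arange(24 * 90)
oee = 0.75 + 0.05 * np.sin(2 * np.pi * t / 24) + 0.02 * rng.standard_normal(t.size)
oee = np.clip(oee, 0.0, 1.0)

def make_lag_features(series, n_lags=24):
    """Supervised framing: predict the next value from the previous n_lags values."""
    X = np.stack([series[i:i + n_lags] for i in range(series.size - n_lags)])
    y = series[n_lags:]
    return X, y

X, y = make_lag_features(oee)
split = int(0.8 * len(y))                       # chronological train/test split
model = LGBMRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

print("MAE:", mean_absolute_error(y[split:], pred))
print("MSE:", mean_squared_error(y[split:], pred))
```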
15. Employing machine learning algorithm for properties of wood ceramics prediction: A case study of ammonia nitrogen adsorption capacity, apparent porosity, surface hardness and burn-off for wood ceramics.
- Author
-
Jiang, Wenjun, Guo, Xiurong, Guan, Qi, Zhang, Yanlin, and Du, Danfeng
- Subjects
- *
MACHINE learning , *WOOD , *ATMOSPHERIC ammonia , *ADSORPTION capacity , *BOOSTING algorithms , *RANDOM forest algorithms , *CERAMICS - Abstract
The estimation of material performance plays a crucial role in practical life, enabling the rational allocation of time and resources while enhancing the practical application of materials. Therefore, this study investigates the analytical and predictive capabilities of five machine learning (ML) algorithms (the Random Forest algorithm – RF, the Adaboost algorithm – AB, the Gradient Boosting algorithm – GB, the Extra Trees algorithm – ET and linear models – LM) for comprehensive performance parameters of wood ceramics (ammonia nitrogen adsorption quantity – Q, open porosity – P, burn-off, and hardness). In the analysis of model prediction capabilities, five key statistical parameters (Root Mean Square Error – RMSE, Mean Square Error – MSE, Coefficient of Correlation – R, Determination Coefficient – R², and Mean Absolute Relative Error – MARE) were calculated. The results indicate the following: (1) Among the various ML models, the GB model exhibits the best performance, with an R² ≥ 0.9594. (2) In predicting the four performance parameters of wood ceramics, however, the AB model, with an R² ranging from 0.0078 to 0.45, shows notably poor predictive capabilities; despite integrating the Support Vector Regression (SVR) module, no enhancement in predictive accuracy was observed. (3) While forecasting the four performance parameters of wood ceramics, the LM model demonstrates predictive accuracy relatively akin to that of the RF and ET models. (4) Though the feature importance scores vary across distinct input variable models, there is a consistent trend of change. (5) Variables B and D exhibit some level of correlation with other variables. In summary, the results of this study suggest that, for the analysis and prediction of wood ceramic performance, the GB model among the five regression models demonstrates outstanding simulation performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
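A small Python helper, not taken from the study, that computes the five statistics listed in the abstract above for any model's predictions using their standard definitions; the MARE convention (relative to the true value) is an assumption and may differ slightly from the paper's.

```python
import numpy as np

def regression_report(y_true, y_pred):
    """Standard definitions of the five statistics listed in the abstract."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    r = np.corrcoef(y_true, y_pred)[0, 1]                 # coefficient of correlation
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                            # determination coefficient
    mare = np.mean(np.abs(err) / np.abs(y_true))          # mean absolute relative error
    return {"RMSE": rmse, "MSE": mse, "R": r, "R2": r2, "MARE": mare}

# Example on made-up adsorption-capacity values:
print(regression_report([4.1, 3.8, 5.2, 4.7], [4.0, 4.0, 5.0, 4.9]))
```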
16. Viper: Utilizing Hierarchical Program Structure to Accelerate Multi-Core Simulation
- Author
-
Alen Sabu, Changxi Liu, and Trevor E. Carlson
- Subjects
Multi-core simulation ,performance estimation ,RTL evaluation ,workload sampling ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Pre-silicon performance evaluation is a crucial component of computer systems research and development. While simulation has long been the de facto standard in this context, it can be prohibitively time-consuming for long-running, realistic workloads. To expedite this process, researchers have traditionally turned to sampling techniques. However, these techniques typically rely on fixed-length intervals for analysis, which can often be out of sync with the periodicity of program execution. Additionally, since an application's phase behavior is strongly correlated to the code it executes, it can exhibit a hierarchy of phase behaviors that can be observed at various interval lengths, rendering conventional sampling techniques inadequate. To address these limitations, we propose Viper, a novel sampled simulation methodology that applies to single-threaded and multi-threaded workloads by leveraging the hierarchical structure of program execution. Viper takes into account both application periodicity and inter-thread synchronization in order to achieve better sampling accuracy and smaller regions, which enables faster register-transfer level (RTL) simulations. We evaluate Viper with the multi-threaded SPEC CPU2017 benchmarks and demonstrate a significant simulation speedup (up to 2,710×, and 358× on average for the train input set) while maintaining an average sampling error of just 1.32%. The source code of Viper is available at https://github.com/nus-comparch/viper.
- Published
- 2024
- Full Text
- View/download PDF
17. A Longitudinal Study of the Development of Executive Function and Calibration Accuracy.
- Author
-
Goudas, Marios, Samara, Evdoxia, and Kolovelonis, Athanasios
- Subjects
STATISTICAL models ,STATISTICAL correlation ,TASK performance ,ELEMENTARY schools ,RESEARCH funding ,EXECUTIVE function ,DESCRIPTIVE statistics ,LONGITUDINAL method ,LATENT structure analysis ,CHILD development ,STATISTICAL reliability ,RESEARCH ,CALIBRATION ,BASKETBALL ,COMPARATIVE studies ,DATA analysis software ,COGNITION ,EVALUATION ,CHILDREN - Abstract
This longitudinal study examined the development of executive function and calibration accuracy in preadolescents. This study's sample consisted of 262 students (127 females) from grades 4 (n = 91), 5 (n = 89), and 6 (n = 82) who took measures of executive function and performance calibration in a sport task three times over 20 months. A latent growth-curve modeling analysis showed a significant relationship between the rates of change of executive function and calibration accuracy. The results also showed a dynamic interplay in the development of executive function and calibration accuracy. There were significant interindividual differences in the estimated population means both in executive function and calibration accuracy and in the rate of change of executive function, but not in the rate of change of calibration accuracy. The age of the participants had a positive effect only on the estimated population mean of executive function. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Analysis of the Performance of YOLO Models for Tomato Plant Diseases Identification
- Author
-
Ahmed, Shakil, Bansal, Jagdish Chand, Series Editor, Deep, Kusum, Series Editor, Nagar, Atulya K., Series Editor, and Uddin, Mohammad Shorif, editor
- Published
- 2023
- Full Text
- View/download PDF
19. Feature Selection for Performance Estimation of Machine Learning Workflows
- Author
-
Neruda, Roman, Figueroa-García, Juan Carlos, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Rocha, Álvaro, editor, Ferrás, Carlos, editor, and Ibarra, Waldo, editor
- Published
- 2023
- Full Text
- View/download PDF
20. Seismic Performance of a Masonry Infilled Reinforced Concrete Frame Building Designed as per Indian Codes
- Author
-
Sharma, M., Singh, Y., Burton, H., Chen, Sheng-Hong, Series Editor, di Prisco, Marco, Series Editor, Vayas, Ioannis, Series Editor, Jakka, Ravi S., editor, Singh, Yogendra, editor, Sitharam, T. G., editor, and Maheshwari, Bal Krishna, editor
- Published
- 2023
- Full Text
- View/download PDF
21. Machine Learning Assessment: Implications to Cybersecurity
- Author
-
Yousef, Waleed A., Xhafa, Fatos, Series Editor, Traore, Issa, editor, Woungang, Isaac, editor, and Saad, Sherif, editor
- Published
- 2023
- Full Text
- View/download PDF
22. Optimal resource optimisation based on multi‐layer monitoring
- Author
-
Dimitrios Uzunidis, Panagiotis Karkazis, and Helen C. Leligou
- Subjects
machine learning ,next generation networking ,performance estimation ,software defined networking ,Telecommunication ,TK5101-6720 - Abstract
Abstract The satisfaction of the Quality of Service (QoS) levels during an entire service life‐cycle is one of the key targets for Service Providers (SP). To achieve this in an optimal way, it is required to predict the exact amount of the needed physical and virtual resources, for example, CPU and memory usage, for any possible combination of parameters that affect the system workload, such as number of users, duration of each request, etc. To solve this problem, the authors introduce a novel architecture and its open‐source implementation that a) monitors and collects data from heterogeneous resources, b) uses them to train machine learning models and c) tailors them to each particular service type. The candidate solution is validated in two real‐life services showing very good accuracy in predicting the required resources for a large number of operational configurations where a data augmentation method is also applied to further decrease the estimation error up to 32%.
- Published
- 2023
- Full Text
- View/download PDF
23. Model Selection for Time Series Forecasting An Empirical Analysis of Multiple Estimators.
- Author
-
Cerqueira, Vitor, Torgo, Luis, and Soares, Carlos
- Subjects
FORECASTING ,PREDICTION models ,SAMPLE size (Statistics) - Abstract
Evaluating predictive models is a crucial task in predictive analytics. This process is especially challenging with time series data because observations are not independent. Several studies have analyzed how different performance estimation methods compare with each other for approximating the true loss incurred by a given forecasting model. However, these studies do not address how the estimators behave for model selection: the ability to select the best solution among a set of alternatives. This paper addresses this issue. The goal of this work is to compare a set of estimation methods for model selection in time series forecasting tasks. This objective is split into two main questions: (i) analyze how often a given estimation method selects the best possible model; and (ii) analyze what is the performance loss when the best model is not selected. Experiments were carried out using a case study that contains 3111 time series. The accuracy of the estimators for selecting the best solution is low, despite being significantly better than random selection. Moreover, the overall forecasting performance loss associated with the model selection process ranges from 0.28 to 0.58%. Yet, no considerable differences between different approaches were found. Besides, the sample size of the time series is an important factor in the relative performance of the estimators. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
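A hedged Python sketch contrasting two common performance-estimation procedures for model selection on a time series: a single chronological holdout and expanding-window cross-validation. The synthetic series, the Ridge/KNN candidates, and the split settings are illustrative assumptions, not the paper's experimental design.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(600)) + 10 * np.sin(np.arange(600) / 20)

def lagged(series, n_lags=12):
    X = np.stack([series[i:i + n_lags] for i in range(series.size - n_lags)])
    return X, series[n_lags:]

X, target = lagged(y)
candidates = {"ridge": Ridge(alpha=1.0), "knn": KNeighborsRegressor(n_neighbors=5)}

def holdout_estimate(model, X, target, frac=0.7):
    """Single chronological split: fit on the past, score on the future."""
    s = int(frac * len(target))
    return mean_absolute_error(target[s:], model.fit(X[:s], target[:s]).predict(X[s:]))

def expanding_cv_estimate(model, X, target, n_splits=5):
    """Expanding-window CV: average the error over successive forward splits."""
    errs = []
    for tr, te in TimeSeriesSplit(n_splits=n_splits).split(X):
        errs.append(mean_absolute_error(target[te], model.fit(X[tr], target[tr]).predict(X[te])))
    return float(np.mean(errs))

for name, est in candidates.items():
    print(name, "holdout:", round(holdout_estimate(est, X, target), 3),
          "expanding CV:", round(expanding_cv_estimate(est, X, target), 3))
# Model selection picks the candidate with the lowest estimated error; the paper asks how
# often such estimators pick the truly best model and how much is lost when they do not.
```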
24. A Parameterized Modeling Method for Magnetic Circuits of Adjustable Permanent Magnet Couplers.
- Author
-
Wang, Dazhi, Li, Wenhui, Wang, Jiaxing, Song, Keling, Ni, Yongliang, and Li, Yanming
- Subjects
- *
MAGNETIC circuits , *PERMANENT magnets , *EDDY current losses , *PERMANENT magnet motors , *EDDY current testing , *MAGNETIC flux , *ELECTRIC inductance - Abstract
The contactless transmission between the conductor rotor and the permanent magnet (PM) rotor of an adjustable permanent magnet coupler (APMC) provides the device with significant tolerance for alignment errors, making the performance estimation complicated and inaccurate. The first proposal of an edge coefficient in this paper helps to describe the edge effect with better accuracy. Accurate equivalent magnetic circuit (EMC) models of the APMC are established for each region. Models of magnetic flux, magnetic resistance, and eddy current density are established by defining the equivalent dimensional parameters of the eddy current circuit. Furthermore, the concept of magnetic inductance is proposed for the first time, parameterizing eddy currents that are difficult to describe with physical models and achieving the modeling of the dynamic eddy current circuit. The magnetic resistance is subdivided into two parts corresponding to the output and slip according to the power relationship. Furthermore, eddy current loss and dynamic torque models are further derived. The method proposed in this paper enables the APMC to be modeled and calculated in a completely new way. The correctness and accuracy of the model have been fully demonstrated using finite element simulation and an experimental prototype. In addition, the limitations of the proposed method and the reasons are fully discussed and investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
25. Design and Analysis of High Performance Heterogeneous Block-based Approximate Adders.
- Author
-
FARAHMAND, EBRAHIM, MAHANI, ALI, HANIF, MUHAMMAD ABDULLAH, and SHAFIQUE, MUHAMMAD
- Subjects
ERROR rates ,APPROXIMATION error ,FIGURATIVE art - Abstract
Approximate computing is an emerging paradigm to improve the power and performance efficiency of error-resilient applications. As adders are one of the key components in almost all processing systems, a significant amount of research has been carried out toward designing approximate adders that can offer better efficiency than conventional designs; however, at the cost of some accuracy loss. In this article, we highlight a new class of energy-efficient approximate adders, namely, Heterogeneous Block-based Approximate Adders (HBAAs), and propose a generic configurable adder model that can be configured to represent a particular HBAA configuration. An HBAA, in general, is composed of heterogeneous sub-adder blocks of equal length, where each sub-adder can be an approximate sub-adder and have a different configuration. The sub-adders are mainly approximated through inexact logic and carry truncation. Compared to the existing design space, HBAAs provide additional design points that fall on the Pareto-front and offer a better quality-efficiency tradeoff in certain scenarios. Furthermore, to enable efficient design space exploration based on user-defined constraints, we propose an analytical model to efficiently evaluate the Probability Mass Function (PMF) of approximation error and other error metrics, such as Mean Error Distance (MED), Normalized Mean Error Distance (NMED), and Error Rate (ER) of HBAAs. The results show that HBAA configurations can provide around 15% reduction in area and up to 17% reduction in energy compared to state-of-the-art approximate adders. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
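A small Python sketch, not from the article, that evaluates the error metrics named in the abstract (ER, MED, NMED) by exhaustive enumeration for a simple two-block adder whose low-block carry-out is truncated; the 8-bit width and 4/4 block split are a made-up configuration, not a specific HBAA design.

```python
from itertools import product

def approx_add(a, b, n_bits=8, low_bits=4):
    """Two blocks; the carry out of the low block is truncated, so the high
    block never sees it (an illustrative carry-truncation approximation)."""
    mask = (1 << low_bits) - 1
    low_sum = ((a & mask) + (b & mask)) & mask        # carry out of low block dropped
    high = ((a >> low_bits) + (b >> low_bits)) << low_bits
    return high + low_sum

def error_metrics(n_bits=8, low_bits=4):
    max_out = (1 << (n_bits + 1)) - 1                 # normalisation for NMED
    n_err, total_ed, count = 0, 0, 0
    for a, b in product(range(1 << n_bits), repeat=2):
        ed = abs((a + b) - approx_add(a, b, n_bits, low_bits))
        n_err += ed != 0
        total_ed += ed
        count += 1
    med = total_ed / count
    return {"ER": n_err / count, "MED": med, "NMED": med / max_out}

print(error_metrics())
```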
26. Engineering Estimation Method of Large Expansion Ratio Nozzle Performance
- Author
-
Rong HUANG, Yao-qiang JIA, and Bin LI
- Subjects
large expansion ratio ,nozzle ,separation ,performance estimation ,numerical simulation ,Astrophysics ,QB460-466 - Abstract
Based on the isentropic assumption, mass conservation and a separation prediction formula, an engineering estimation method of nozzle performance considering flow separation was developed for large expansion ratio nozzles, and the performance evaluation method was verified against numerical simulation in this paper. The variation trend of thrust performance with nozzle pressure ratio (NPR) estimated by the present method is in good agreement with simulation results. The results indicate that the present method predicts well under highly over-expanded flow conditions and gives accurate estimates at large NPR. It can provide direct guidance for nozzle design.
- Published
- 2023
- Full Text
- View/download PDF
27. Performance Estimation of a Medium-Resolution Earth Observation Sensor Using Nanosatellite Replica
- Author
-
Carlos Colodro-Conde
- Subjects
performance estimation ,earth observation ,nanosatellite ,multispectral ,short-wave infrared (SWIR) ,InGaAs ,Chemical technology ,TP1-1185 - Abstract
In many areas of engineering, the design of a new system usually involves estimating performance-related parameters from early stages of the project to determine whether a given solution will be compliant with the defined requirements. This aspect is particularly relevant during the design of satellite payloads, where the target environment is not easily accessible in most cases. In the context of Earth observation sensors, this problem has been typically solved with the help of a set of complex pseudo-empirical models and/or expensive laboratory equipment. This paper describes a more practical approach: the illumination conditions measured by an in-orbit payload are recreated on ground with the help of a replica of the same payload so the performance of another Earth observation sensor in development can be evaluated. The proposed method is especially relevant in the context of small satellites, as the possibility of having extra units devoted to these tasks becomes greater as costs are reduced. The results obtained using this method in an actual space mission are presented in this paper, giving valuable information that will help in further stages of the project.
- Published
- 2024
- Full Text
- View/download PDF
28. Optimal resource optimisation based on multi‐layer monitoring.
- Author
-
Uzunidis, Dimitrios, Karkazis, Panagiotis, and Leligou, Helen C.
- Subjects
MACHINE learning ,DATA augmentation ,SOFTWARE-defined networking ,QUALITY of service ,SATISFACTION - Abstract
The satisfaction of the Quality of Service (QoS) levels during an entire service life‐cycle is one of the key targets for Service Providers (SP). To achieve this in an optimal way, it is required to predict the exact amount of the needed physical and virtual resources, for example, CPU and memory usage, for any possible combination of parameters that affect the system workload, such as number of users, duration of each request, etc. To solve this problem, the authors introduce a novel architecture and its open‐source implementation that a) monitors and collects data from heterogeneous resources, b) uses them to train machine learning models and c) tailors them to each particular service type. The candidate solution is validated in two real‐life services showing very good accuracy in predicting the required resources for a large number of operational configurations where a data augmentation method is also applied to further decrease the estimation error up to 32%. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
29. A SYSTEMATIC APPROACH TO LYAPUNOV ANALYSES OF CONTINUOUS-TIME MODELS IN CONVEX OPTIMIZATION.
- Author
-
MOUCER, CÉLINE, TAYLOR, ADRIEN, and BACH, FRANCIS
- Subjects
- *
ORDINARY differential equations , *STOCHASTIC differential equations , *LYAPUNOV functions , *CONTINUOUS time models - Abstract
First-order methods are often analyzed via their continuous-time models, where their worst-case convergence properties are usually approached via Lyapunov functions. In this work, we provide a systematic and principled approach to finding and verifying Lyapunov functions for classes of ordinary and stochastic differential equations. More precisely, we extend the performance estimation framework, originally proposed by Drori and Teboulle [Math. Program., 145 (2014), pp. 451-482], to continuous-time models. We retrieve convergence results comparable to those of discrete-time methods using fewer assumptions and inequalities and provide new results for a family of stochastic accelerated gradient flows. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
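A classical worked example, not taken from the paper, of the kind of continuous-time Lyapunov argument the abstract refers to, for the gradient flow of a smooth convex function.

```latex
% Gradient flow  \dot{x}(t) = -\nabla f(x(t))  for smooth convex f with minimiser x_*, and the Lyapunov function
\[
V(t) = t\,\bigl(f(x(t)) - f_\star\bigr) + \tfrac{1}{2}\,\lVert x(t) - x_\star\rVert^2 .
\]
% Differentiating and using convexity,  f(x) - f_\star \le \langle \nabla f(x),\, x - x_\star\rangle :
\[
\dot{V}(t)
= f(x(t)) - f_\star - t\,\lVert\nabla f(x(t))\rVert^2 - \langle \nabla f(x(t)),\, x(t) - x_\star\rangle
\;\le\; -\,t\,\lVert\nabla f(x(t))\rVert^2 \;\le\; 0 ,
\]
% so  f(x(t)) - f_\star \le V(t)/t \le V(0)/t = \lVert x_0 - x_\star\rVert^2 / (2t).  The performance
% estimation approach extended in the paper searches for, and certifies, such Lyapunov functions.
```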
30. Automated tight Lyapunov analysis for first-order methods
- Author
-
Upadhyaya, Manu, Banert, Sebastian, Taylor, Adrien B., and Giselsson, Pontus
- Published
- 2024
- Full Text
- View/download PDF
31. Application Runtime Estimation for AURIX Embedded MCU Using Deep Learning
- Author
-
Fricke, Florian, Scharoba, Stefan, Rachuj, Sebastian, Konopik, Andreas, Kluge, Florian, Hofstetter, Georg, Reichenbach, Marc, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Orailoglu, Alex, editor, Reichenbach, Marc, editor, and Jung, Matthias, editor
- Published
- 2022
- Full Text
- View/download PDF
32. Curious Properties of Latency Distributions
- Author
-
Gajda, Michał J., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, and Arai, Kohei, editor
- Published
- 2022
- Full Text
- View/download PDF
33. Accurate LLVM IR to Binary CFGs Mapping for Simulation of Optimized Embedded Software
- Author
-
Cornaglia, Alessandro, Viehl, Alexander, Bringmann, Oliver, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Orailoglu, Alex, editor, Jung, Matthias, editor, and Reichenbach, Marc, editor
- Published
- 2022
- Full Text
- View/download PDF
34. Evolution of the shape parameters of photovoltaic module as a function of temperature and irradiance: New method of performance prediction without setting reference conditions
- Author
-
Hao Lu, Yunpeng Zhang, Peng Hao, Jiao Ma, Li Zhang, Tingkun Gu, and Ming Yang
- Subjects
Photovoltaic modules ,Performance estimation ,Power law model ,Optimization algorithm ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
The accurate characterization and prediction of current–voltage (I–V) characteristics of photovoltaic (PV) modules under different weather conditions are essential for solar power forecasting and ensuring grid stability. This paper proposes a novel method based on the power-law model (PLM) for estimating PV module performance under different weather conditions without setting reference conditions. The effects of solar irradiance, ambient temperature, and module types are all fully considered. The dependence of the shape parameters in the PLM on irradiance and temperature is thoroughly investigated, and the effect of the selection of reference conditions is eliminated by modifying a set of new transforming equations. The parameters of the new transforming equations can be extracted from experimental data by an optimization algorithm. Due to the advantages of the PLM, the proposed method applies to any type of PV module, and the I–V characteristics can be expressed explicitly without the Lambert W-function or an iterative solution. The effectiveness and accuracy of the proposed method are verified and tested on large datasets of eighteen PV modules in three locations. Compared with existing methods, the proposed method shows higher accuracy and better performance in the estimation of I–V characteristics and maximum power under different weather conditions.
- Published
- 2022
- Full Text
- View/download PDF
35. Generalised Performance Estimation in Novel Hybrid MPC Architectures: Modeling the CONWIP Flow-Shop System.
- Author
-
Vespoli, Silvestro, Grassi, Andrea, Guizzi, Guido, and Popolo, Valentina
- Subjects
DEEP learning ,PRODUCTION control ,PRODUCTION planning ,DEVELOPED countries ,INDUSTRY 4.0 ,STOCHASTIC processes - Abstract
The ability to supply increasingly individualized market demand in a short period of time while maintaining costs to a bare minimum might be considered a vital factor for industrialized countries' competitive revival. Despite significant advances in the field of Industry 4.0, there is still an open gap in the literature regarding advanced methodologies for production planning and control. Among different production and control approaches, hybrid architectures are gaining huge interest in the literature. For such architectures to operate at their best, reliable models for performance prediction of the supervised production system are required. In an effort to advance the development of hybrid architecture, this paper develops a model able to predict the performance of the controlled system when it is structured as a controlled work-in-progress (CONWIP) flow-shop with generalized stochastic processing times. To achieve this, we employed a simulation tool using both discrete-event and agent-based simulation techniques, which was then utilized to generate data for training a deep learning neural network. This network was proposed for estimating the throughput of a balanced system, together with a normalization method to generalize the approach. The results showed that the developed estimation tool outperforms the best-known approximated mathematical models while allowing one-shot training of the network. Finally, the paper develops preliminary insights about generalized performance estimation for unbalanced lines. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
36. A Longitudinal Study of the Development of Executive Function and Calibration Accuracy
- Author
-
Marios Goudas, Evdoxia Samara, and Athanasios Kolovelonis
- Subjects
latent growth-curve modeling ,metacognition ,development ,preadolescence ,basketball shooting ,performance estimation ,Pediatrics ,RJ1-570 - Abstract
This longitudinal study examined the development of executive function and calibration accuracy in preadolescents. This study’s sample consisted of 262 students (127 females) from grades 4 (n = 91), 5 (n = 89), and 6 (n = 82) who took measures of executive function and performance calibration in a sport task three times over 20 months. A latent growth-curve modeling analysis showed a significant relationship between the rates of change of executive function and calibration accuracy. The results also showed a dynamic interplay in the development of executive function and calibration accuracy. There were significant interindividual differences in the estimated population means both in executive function and calibration accuracy and in the rate of change of executive function, but not in the rate of change of calibration accuracy. The age of the participants had a positive effect only on the estimated population mean of executive function.
- Published
- 2024
- Full Text
- View/download PDF
37. Battling Unawareness of One's Test Performance: Do Practice, Self-Efficacy, and Emotional Intelligence Matter?
- Author
-
Pilotti, Maura A. E., El Alaoui, Khadija, and Waked, Arifi N.
- Subjects
- *
EMOTIONAL intelligence , *SELF-efficacy , *AT-risk students , *ESTIMATION bias , *INDIVIDUAL differences - Abstract
The "Dunning–Kruger effect" refers to the tendency of poor performers to overestimate test outcomes. Although a widespread phenomenon, questions exist regarding its source and sensitivity to countermeasures. The present field study aimed to (a) examine whether practice with tests used in previous classes can enhance students' ability to estimate test outcomes, (b) determine the main source of the effect (i.e., is it unawareness of one's readiness or wishful thinking?), and (c) assess the extent to which particular individual differences can be used as predictors of test performance. In this study, participants practiced with old tests and then completed the final exam. Before and after the exam, they predicted their grades and indicated their subjective confidence in the predictions made. Furthermore, participants' emotional intelligence and self-efficacy about their academic abilities were surveyed. Results suggested that poor performers were not unaware of their test preparation, but rather engaged in wishful thinking. In fact, although they overestimated their test grades, their estimates not only improved after completing the final test but also were regarded with little confidence. Overall, estimation bias was a good predictor of students' final test performance, whereas subjective confidence and emotional intelligence only weakly predicted such performance. Thus, if proactive interventions are to be developed for at-risk students, performance-estimation tasks may offer valuable information regarding such students' future performance in a course much more than emotional intelligence and self-efficacy measures. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
38. Comparison of Single-Lane Roundabout Entry Degree of Saturation Estimations from Analytical and Regression Models.
- Author
-
Čudina Ivančev, Ana, Ahac, Maja, Ahac, Saša, and Dragčević, Vesna
- Subjects
- *
REGRESSION analysis , *PATH analysis (Statistics) , *TRAFFIC flow , *LEG - Abstract
Roundabout design is an iterative process consisting of a preliminary geometry design, geometry performance checks, and the estimation of intersection functionality (based on the results of analytical or regression models). Since both roundabout geometry design procedures and traffic characteristics vary around the world, the discussion on which functionality estimation model is more appropriate is ongoing. This research aims to reduce the uncertainty in decision-making during this final roundabout design stage. Its two objectives were to analyze and compare the results of roundabout performance estimations derived from one analytical and one regression model, and to quantify the model results' susceptibility to changes in roundabout geometric parameters. For this, 60 four-legged single-lane roundabout schemes were created, varying in size and leg alignment. Their geometric parameters resulted from the assumption of their location in a suburban environment and chosen design vehicle swept path analysis. To compare the models' results, the degree of saturation of roundabout entries was calculated based on presumed traffic flows. The results showed that the regression model estimates higher functionality and that this difference (both between the two models and regression models applied on different schemes) is more pronounced as the outer radius and angle between the legs increase. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
39. Modeling and performance estimation for L-shaped OWC wave energy converters with a theoretical correction for spring-like air compressibility.
- Author
-
Nguyen, Duy Tong, Chow, Yi-Chih, and Lin, Chen-Chou
- Subjects
- *
COLUMNS , *COMPRESSIBILITY , *POTENTIAL energy , *RESILIENT design , *STRUCTURAL design - Abstract
The investigation of Oscillating Water Columns (OWC) has gained significant attention in recent years thanks to their resilient structural design, allowing them to withstand harsh environmental conditions. This study focuses on the L-shaped OWC (L-OWC) due to its excellent energy capture efficiency. It should be noted that air compressibility plays a significant role in the plenum chamber of the OWC, especially at full scale. The present paper proposes a scaling-rematched approach, facilitating the evaluation of hydrodynamic coefficients and the performance estimation with a theoretical correction for the unmatched scaling problem of spring-like air compressibility. The methodology is applied successfully to a model-scale L-OWC design, employing three-dimensional, incompressible-flow simulations combined with an impeller model. The present study reveals the hydrodynamic characteristics (advantages) of the L-OWC: small fluid damping coefficient and large added mass, and hence the possibility that the spring-like air compressibility can be used to raise the efficiency of power capturing. Furthermore, there exists an interval where the air compressibility has a positive effect on the performance, not only having a significantly longer span than that of the conventional OWC but also much more directly matching the period range of high energy potential found in the wave climate of Northeastern Taiwan waters. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. A Parameterized Modeling Method for Magnetic Circuits of Adjustable Permanent Magnet Couplers
- Author
-
Dazhi Wang, Wenhui Li, Jiaxing Wang, Keling Song, Yongliang Ni, and Yanming Li
- Subjects
adjustable permanent magnet coupler ,equivalent magnetic circuit ,electromagnetic field analytical modeling ,parametric expression of eddy current circuit ,performance estimation ,finite element analysis ,Mathematics ,QA1-939 - Abstract
The contactless transmission between the conductor rotor and the permanent magnet (PM) rotor of an adjustable permanent magnet coupler (APMC) provides the device with significant tolerance for alignment errors, making the performance estimation complicated and inaccurate. The first proposal of an edge coefficient in this paper helps to describe the edge effect with better accuracy. Accurate equivalent magnetic circuit (EMC) models of the APMC are established for each region. Models of magnetic flux, magnetic resistance, and eddy current density are established by defining the equivalent dimensional parameters of the eddy current circuit. Furthermore, the concept of magnetic inductance is proposed for the first time, parameterizing eddy currents that are difficult to describe with physical models and achieving the modeling of the dynamic eddy current circuit. The magnetic resistance is subdivided into two parts corresponding to the output and slip according to the power relationship. Furthermore, eddy current loss and dynamic torque models are further derived. The method proposed in this paper enables the APMC to be modeled and calculated in a completely new way. The correctness and accuracy of the model have been fully demonstrated using finite element simulation and an experimental prototype. In addition, the limitations of the proposed method and the reasons are fully discussed and investigated.
- Published
- 2023
- Full Text
- View/download PDF
41. Experimental Evaluation of Train and Test Split Strategies in Link Prediction
- Author
-
de Bruin, Gerrit Jan, Veenman, Cor J., van den Herik, H. Jaap, Takes, Frank W., Kacprzyk, Janusz, Series Editor, Benito, Rosa M., editor, Cherifi, Chantal, editor, Cherifi, Hocine, editor, Moro, Esteban, editor, Rocha, Luis Mateus, editor, and Sales-Pardo, Marta, editor
- Published
- 2021
- Full Text
- View/download PDF
42. EPE-NAS: Efficient Performance Estimation Without Training for Neural Architecture Search
- Author
-
Lopes, Vasco, Alirezazadeh, Saeid, Alexandre, Luís A., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Farkaš, Igor, editor, Masulli, Paolo, editor, Otte, Sebastian, editor, and Wermter, Stefan, editor
- Published
- 2021
- Full Text
- View/download PDF
43. Multi-objective Neural Architecture Search with Almost No Training
- Author
-
Hu, Shengran, Cheng, Ran, He, Cheng, Lu, Zhichao, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Ishibuchi, Hisao, editor, Zhang, Qingfu, editor, Cheng, Ran, editor, Li, Ke, editor, Li, Hui, editor, Wang, Handing, editor, and Zhou, Aimin, editor
- Published
- 2021
- Full Text
- View/download PDF
44. Optimization of Multi-Core Accelerator Performance Based on Accurate Performance Estimation
- Author
-
Sunwoo Kim, Youngho Seo, Sungkyung Park, and Chester Sungchung Park
- Subjects
Communication bandwidth ,direct memory access ,multicore accelerator ,performance estimation ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Multicore accelerators have emerged to efficiently execute recent applications with complex computational dimensions. Compared to a single-core accelerator, a multicore accelerator handles a larger amount of communication and computation simultaneously. Since the conventional performance estimation algorithm tailored to single-core accelerators cannot estimate the performance of multicore accelerators accurately, we propose a novel performance estimation algorithm for a multicore accelerator. The proposed algorithm predicts the dynamic communication bandwidth of each direct memory access controller (DMAC) based on the runtime state of DMACs, making it possible to estimate the communication amounts handled by DMACs accurately by taking into account the temporal intervals. The proposed algorithm is evaluated for convolutional neural networks and wireless communications. The experimental results using a pre-register transfer level (RTL) simulator show that the proposed algorithm can estimate the performance of a multicore accelerator with an estimation error of up to 2.8%, regardless of the system communication bandwidth. These results were also verified by the hardware implementations on Xilinx ZYNQ. In addition, the proposed algorithm is used to explore a design space of accelerator core dimensions, and the resulting optimal core dimension provides performance gains of 10.8% and 31.2%, compared to the conventional multicore accelerator and single-core accelerator, respectively. The source code is available on the GitHub repository: https://github.com/SDL-KU/OptAccTile.
- Published
- 2022
- Full Text
- View/download PDF
45. Representative random sampling: an empirical evaluation of a novel bin stratification method for model performance estimation.
- Author
-
Rendleman, Michael C., Smith, Brian J., Canahuate, Guadalupe, Braun, Terry A., Buatti, John M., and Casavant, Thomas L.
- Abstract
High-dimensional cancer data can be burdensome to analyze, with complex relationships between molecular measurements, clinical diagnostics, and treatment outcomes. Data-driven computational approaches may be key to identifying relationships with potential clinical or research use. To this end, reliable comparison of feature engineering approaches in their ability to support machine learning survival modeling is crucial. With the limited number of cases often present in multi-omics datasets (“big p, little n,” or many features, few subjects), a resampling approach such as cross validation (CV) would provide robust model performance estimates at the cost of flexibility in intermediate assessments and exploration in feature engineering approaches. A holdout (HO) estimation approach, however, would permit this flexibility at the expense of reliability. To provide more reliable HO-based model performance estimates, we propose a novel sampling procedure: representative random sampling (RRS). RRS is a special case of continuous bin stratification which minimizes significant relationships between random HO groupings (or CV folds) and a continuous outcome. Monte Carlo simulations used to evaluate RRS on synthetic molecular data indicated that RRS-based HO (RRHO) yields statistically significant reductions in error and bias when compared with standard HO. Similarly, more consistent reductions are observed with RRS-based CV. While resampling approaches are the ideal choice for performance estimation with limited data, RRHO can enable more reliable exploratory feature engineering than standard HO. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
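The record above describes RRS as a form of continuous bin stratification that minimizes the association between holdout groupings and a continuous outcome. The Python sketch below shows one plausible variant of this idea: quantile-binned stratified splits, with the candidate split chosen to minimize a two-sample Kolmogorov-Smirnov statistic between train and test outcomes. The number of bins, the number of candidate splits, and the association statistic are illustrative assumptions, not the published RRS procedure.

# Hedged sketch of bin-stratified holdout splitting for a continuous outcome.
import numpy as np
from scipy import stats

def bin_stratified_holdout(y, test_frac=0.2, n_bins=10, n_candidates=100, seed=0):
    """Return (train_idx, test_idx) whose outcome distributions are similar.

    Among n_candidates random bin-stratified splits, keep the one whose
    train/test outcome distributions differ least (two-sample KS statistic)."""
    rng = np.random.default_rng(seed)
    edges = np.quantile(y, np.linspace(0, 1, n_bins + 1)[1:-1])
    bin_id = np.digitize(y, edges)
    best_split, best_stat = None, np.inf
    for _ in range(n_candidates):
        test_idx = []
        for b in np.unique(bin_id):
            members = np.flatnonzero(bin_id == b)
            k = max(1, int(round(test_frac * members.size)))
            test_idx.extend(rng.choice(members, size=k, replace=False))
        test_idx = np.array(sorted(test_idx))
        train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
        stat = stats.ks_2samp(y[train_idx], y[test_idx]).statistic
        if stat < best_stat:
            best_stat, best_split = stat, (train_idx, test_idx)
    return best_split

# Example: split a synthetic continuous outcome.
y = np.random.default_rng(1).normal(size=200)
train_idx, test_idx = bin_stratified_holdout(y)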
46. A passive Brain-Computer Interface for operator mental fatigue estimation in monotonous surveillance operations: time-on-task and performance labeling issues.
- Author
-
Hinss MF, Jahanpour ES, Brock AM, and Roy RN
- Abstract
A central component of search and rescue missions is the visual search for survivors. In large part, this depends on human operators and is therefore subject to the constraints of human cognition, such as mental fatigue. This makes detecting mental fatigue a critical step to be implemented in future systems. However, to the best of our knowledge, it has seldom been evaluated using a realistic visual search task. In addition, an accuracy discrepancy exists between studies that use time-on-task (TOT), the more popular method, and those that use performance metrics for labels; yet, to our knowledge, the two have never been directly compared. This study was designed to address both issues: the use of a realistic task to elicit mental fatigue during a monotonous visual search task, and the labeling type used for intra-participant fatigue estimation. Over four blocks of 15 minutes, participants had to identify targets on a computer while their cardiac, cerebral (EEG), and eye-movement activities were recorded. The recorded data were then fed into several physiological computing pipelines. The results show that the capability of a machine learning algorithm to detect mental fatigue depends less on the input data than on how mental fatigue is defined. Using TOT, very high classification accuracies are obtained (e.g., 99.3%). On the other hand, if mental fatigue is estimated from behavioural performance, a metric with much greater operational value, classification accuracies return to chance level (i.e., 52.2%). TOT-based mental fatigue estimation is popular because strong classification accuracies can be achieved with a multitude of sensors, but both its usability and its relation to the concept of mental fatigue are neglected. (© 2024 IOP Publishing Ltd. All rights, including for text and data mining, AI training, and similar technologies, are reserved.) (An illustrative label-comparison sketch appears after this record.)
- Published
- 2024
- Full Text
- View/download PDF
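To make the labeling comparison above concrete, the sketch below contrasts time-on-task labels (first versus second half of the experimental blocks) with performance-based labels (below-median detection performance) on a matrix of pre-extracted features. The synthetic features, thresholds, and choice of classifier are assumptions for demonstration only and do not reproduce the authors' physiological computing pipelines.

# Illustrative comparison of TOT-based versus performance-based fatigue labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def tot_labels(block_index, n_blocks=4):
    """Time-on-task labels: first half of blocks = 0 (fresh), second half = 1 (fatigued)."""
    return (block_index >= n_blocks // 2).astype(int)

def performance_labels(hit_rate):
    """Performance labels: below-median detection performance = 1 (fatigued)."""
    return (hit_rate < np.median(hit_rate)).astype(int)

def compare_label_schemes(X, block_index, hit_rate):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    for name, y in [("TOT", tot_labels(block_index)),
                    ("performance", performance_labels(hit_rate))]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}-based labels: mean CV accuracy = {acc:.3f}")

# Example with synthetic features (EEG/cardiac/eye-movement features in practice).
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 20))
block_index = np.repeat(np.arange(4), 60)
hit_rate = rng.uniform(0.5, 1.0, size=240)
compare_label_schemes(X, block_index, hit_rate)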
47. CALIPER: A Coarse Grain Parallel Performance Estimator and Predictor
- Author
-
Kalyur, Sesha, Nagaraja, G.S, Akan, Ozgur, Editorial Board Member, Bellavista, Paolo, Editorial Board Member, Cao, Jiannong, Editorial Board Member, Coulson, Geoffrey, Editorial Board Member, Dressler, Falko, Editorial Board Member, Ferrari, Domenico, Editorial Board Member, Gerla, Mario, Editorial Board Member, Kobayashi, Hisashi, Editorial Board Member, Palazzo, Sergio, Editorial Board Member, Sahni, Sartaj, Editorial Board Member, Shen, Xuemin (Sherman), Editorial Board Member, Stan, Mircea, Editorial Board Member, Jia, Xiaohua, Editorial Board Member, Zomaya, Albert Y., Editorial Board Member, Miraz, Mahdi H., editor, Excell, Peter S., editor, Ware, Andrew, editor, Soomro, Safeeullah, editor, and Ali, Maaruf, editor
- Published
- 2020
- Full Text
- View/download PDF
48. Fast Performance Estimation and Design Space Exploration of SSD Using AI Techniques
- Author
-
Kim, Jangryul, Ha, Soonhoi, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Orailoglu, Alex, editor, Jung, Matthias, editor, and Reichenbach, Marc, editor
- Published
- 2020
- Full Text
- View/download PDF
49. Improving Performance Estimation for FPGA-Based Accelerators for Convolutional Neural Networks
- Author
-
Ferianc, Martin, Fan, Hongxiang, Chu, Ringo S. W., Stano, Jakub, Luk, Wayne, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Rincón, Fernando, editor, Barba, Jesús, editor, So, Hayden K. H., editor, Diniz, Pedro, editor, and Caba, Julián, editor
- Published
- 2020
- Full Text
- View/download PDF
50. Application of the singular value and pivoted QR decompositions to reduce experimental efforts in compressor characterization
- Author
-
Andrés Tiseira, Benjamín Pla, Pau Bares, and Alexandra Aramburu
- Subjects
Centrifugal compressor map ,Performance estimation ,Optimal placement ,Singular value decomposition ,QR decomposition ,Science (General) ,Q1-390 ,Social sciences (General) ,H1-99 - Abstract
Compressor characterization, either by running experiments on a turbocharger test rig or by detailed CFD modelling, can be expensive and time-consuming. In this work, a novel method is proposed to build a complete compressor map from a reduced number of measured operating points combined with a previously collected database. The methodology applies the Singular Value Decomposition (SVD) to obtain orthonormal bases of a matrix containing the information of previous compressor observations. These bases are used together with a pivoted QR decomposition to determine the minimum number of measurement points required by the technique, as well as their optimal placement within the map. Two different compressor maps were reconstructed to validate the method. The results show a substantially better trade-off between the number of testing points and accuracy than standard equidistributed sampling. (An illustrative SVD/pivoted-QR selection sketch appears after this record.)
- Published
- 2022
- Full Text
- View/download PDF
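The abstract above combines an SVD basis of previously measured compressor maps with a column-pivoted QR factorization to choose where to measure. The sketch below follows that general recipe, in the spirit of QR-based sparse sensor placement; the matrix layout, the number of retained modes, and the least-squares reconstruction step are illustrative assumptions rather than the paper's exact implementation.

# Hedged sketch: choose measurement points with SVD + column-pivoted QR,
# then reconstruct a new map from measurements at those points only.
import numpy as np
from scipy.linalg import qr

def select_measurement_points(database, r):
    """database: (n_points, n_maps) matrix, each column a previously measured
    compressor map sampled on a common grid of n_points operating points.
    Returns (indices of the r grid points to measure, the rank-r basis U_r)."""
    U, _, _ = np.linalg.svd(database, full_matrices=False)
    U_r = U[:, :r]                          # orthonormal basis of past observations
    # Column-pivoted QR of U_r^T ranks grid points by how informative they are.
    _, _, piv = qr(U_r.T, pivoting=True)
    return piv[:r], U_r

def reconstruct_map(measurements, indices, U_r):
    """Least-squares fit of the measured values at `indices` in the basis U_r,
    then expansion back to the full grid."""
    coeffs, *_ = np.linalg.lstsq(U_r[indices, :], measurements, rcond=None)
    return U_r @ coeffs

# Example with a synthetic database of 30 maps on a 500-point grid.
rng = np.random.default_rng(0)
database = rng.normal(size=(500, 30))
idx, U_r = select_measurement_points(database, r=8)
new_map_true = database @ rng.normal(size=30)   # a map in the span of the database
new_map_est = reconstruct_map(new_map_true[idx], idx, U_r)

When the new map lies close to the span of the retained modes, the few points picked by the QR pivots suffice to recover the full map with good accuracy, which mirrors the trade-off reported in the abstract.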