Search Results
2. Phishlimiter: A Phishing Detection and Mitigation Approach Using Software-Defined Networking
- Author
-
Tommy Chin, Kaiqi Xiong, and Chengbin Hu
- Subjects
Artificial neural network (ANN), General Computer Science, Computer science, Phishing attack, Testbed, General Engineering, Networking & telecommunications, Security, Software-defined networking (SDN), Hyperlink, Computer security, Phishing, Electronic mail, General Materials Science - Abstract
Phishing is one of the most harmful social engineering techniques to subdue end users, where threat actors find a chance to gain access to critical information systems. A common approach in phishing is the use of e-mail communication with an embedded hyperlink. The detection and mitigation of phishing attacks are a grand challenge due to the complexity of current phishing attacks. Existing techniques are often too time-consuming to be used in the real world in terms of detection and mitigation time. Likewise, they employ static detection rules that are not effective in the real world due to the dynamics of phishing attacks. In this paper, we present PhishLimiter, a new detection and mitigation approach, where we first propose a new technique for deep packet inspection (DPI) and then leverage it with software-defined networking (SDN) to identify phishing activities through e-mail and web-based communication. The proposed DPI approach consists of two components: phishing signature classification and real-time DPI. Based on the programmability of SDN, we develop a store-and-forward mode and a forward-and-inspect mode to direct network traffic, using an artificial neural network model to classify phishing attack signatures, and design the real-time DPI so that PhishLimiter can flexibly address the dynamics of phishing attacks in the real world. PhishLimiter also provides better network traffic management for containing phishing attacks since it has a global view of the network through SDN. Furthermore, we evaluate PhishLimiter using a real-world testbed environment and data sets consisting of real-world e-mail with embedded links. Our extensive experimental study shows that PhishLimiter provides an effective and efficient solution to deter malicious activities.
- Published
- 2018
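The embedded-hyperlink signatures discussed in the abstract above can be made concrete with a small sketch. This is not PhishLimiter's DPI pipeline or its ANN classifier; it only illustrates, under assumed heuristics (IP-address hosts, '@' signs in URLs, URL length), the kind of per-link features a phishing signature classifier might consume:

```python
import re

def hyperlink_features(email_body):
    """Extract simple per-link features from an e-mail body.

    Illustrative only: the features below (IP-based host, '@' in the
    URL, length, subdomain count) are common phishing heuristics,
    not the feature set used by the paper.
    """
    urls = re.findall(r'https?://\S+', email_body)
    feats = []
    for url in urls:
        host = re.sub(r'^https?://', '', url).split('/')[0]
        feats.append({
            'url': url,
            'uses_ip_host': bool(re.fullmatch(r'[\d.]+(:\d+)?', host)),
            'has_at_sign': '@' in url,
            'length': len(url),
            'num_subdomains': host.count('.'),
        })
    return feats

print(hyperlink_features('Verify your account at http://192.168.10.5/login now.'))
```

A real deployment would feed vectors like these into a trained classifier; here the dictionary is printed only to show the shape of the feature record.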
3. Integrated Topology Management in Flying Ad Hoc Networks: Topology Construction and Adjustment
- Author
-
Do-Yup Kim and Jang-Won Lee
- Subjects
Routing protocol, General Computer Science, Wireless ad hoc network, Computer science, Distributed computing, relay deployment, topology edit distance, Topology (electrical circuits), Network topology, General Materials Science, Network performance, gradient descent, particle swarm optimization (PSO), General Engineering, Initial topology, topology management, Flying ad hoc network (FANET), Routing (electronic design automation) - Abstract
Flying ad hoc networks (FANETs) that consist of multiple unmanned aerial vehicles (UAVs) are promising technologies for future networked systems due to the versatility of UAVs. One of the most distinguishing features of FANETs is frequent and rapid topological fluctuation due to the high mobility of UAVs. Hence, topology management that adapts to the movements of UAVs is one of the most critical issues in FANETs. In this paper, we study a FANET topology management problem that optimizes the locations and movements of UAVs to maximize network performance, adapting to topological changes while the UAVs carry out their missions. When formulating the problem, we treat the routing protocol as an arbitrary function since network performance is inseparably linked with the routing protocol in use. We first develop two algorithms. One is the topology construction algorithm, which constructs a FANET topology from scratch without any given initial topology, based on particle swarm optimization. The other is the topology adjustment algorithm, which incrementally adjusts the FANET topology to the movements of UAVs with low computational cost, based on gradient descent. Then, by defining a logical distance (the so-called topology edit distance) that measures the degree of change in the FANET topology, we develop an integrated topology management algorithm that combines the topology construction and adjustment algorithms. Simulation results show that our algorithm achieves good network performance with low computational overhead, which is one of the most essential virtues in FANETs with rapidly varying topology.
- Published
- 2018
4. Analyses of Tabular AlphaZero on Strongly-Solved Stochastic Games
- Author
-
Chu-Hsuan Hsueh, Kokolo Ikeda, I-Chen Wu, Jr-Chang Chen, and Tsan-Sheng Hsu
- Subjects
reinforcement learning, tabular, General Computer Science, General Engineering, AlphaZero, EinStein würfelt nicht!, General Materials Science, stochastic games, Electrical and Electronic Engineering, board games, Chinese dark chess - Abstract
The AlphaZero algorithm achieved superhuman levels of play in chess, shogi, and Go by learning without domain-specific knowledge except for game rules. This paper targets stochastic games and investigates whether AlphaZero can learn theoretical values and optimal play. Since the theoretical values of stochastic games are expected win rates, not a simple win, loss, or draw, it is worth investigating the ability of AlphaZero to approximate the expected win rates of positions. This paper also thoroughly studies how AlphaZero is influenced by hyper-parameters and some implementation details. The analyses are mainly based on AlphaZero learning with lookup tables. Deep neural networks (DNNs) like the ones in the original AlphaZero are also evaluated and compared. The tested stochastic games include reduced and strongly-solved variants of Chinese dark chess and EinStein würfelt nicht!. The experiments showed that AlphaZero could learn policies that play almost optimally against the optimal player and could learn values accurately. In more detail, such good results were achieved by different hyper-parameter settings across a wide range, though games on larger scales tended to have a slightly narrower range of proper hyper-parameters. In addition, the results of learning with DNNs were similar to those with lookup tables.
- Published
- 2023
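The "theoretical values" the abstract above refers to, expected win rates rather than win/loss/draw labels, can be computed exactly for tiny stochastic games by expectimax, which is what a strongly-solved benchmark provides and what a tabular learner should approximate. The race game below is invented for illustration (it is neither Chinese dark chess nor EinStein würfelt nicht!): players alternate turns, and the mover either WALKs (+2 squares, certain) or SPRINTs (+3 or +1 squares, each with probability 1/2); the first to reach square n wins.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def win_prob(a, b, n):
    """Optimal win probability for the player to move, who sits at
    square a while the opponent sits at square b; target square n.
    Expectimax: maximize over actions, average over chance outcomes."""
    # WALK: deterministic +2.
    walk = 1.0 if a + 2 >= n else 1.0 - win_prob(b, a + 2, n)
    # SPRINT: +3 or +1, each with probability 1/2.
    sprint = 0.0
    for step in (3, 1):
        nxt = a + step
        sprint += 0.5 * (1.0 if nxt >= n else 1.0 - win_prob(b, nxt, n))
    return max(walk, sprint)

print(win_prob(0, 0, 3))  # the first mover's exact value for a tiny game
```

Near the goal the choice is real: within two squares, WALK wins immediately, while three squares out, SPRINT wins immediately half the time, so the state values are genuine probabilities rather than 0/1 outcomes.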
5. Decoding Reinforcement Learning for newcomers
- Author
-
Andry Maykol Pinto, A. Pedro Aguiar, Gustavo Andrade, Matheus F. Reis, and Francisco Neves
- Subjects
General Computer Science, General Engineering, General Materials Science, Electrical and Electronic Engineering - Abstract
This paper provides an intelligible step-by-step Reinforcement Learning (RL) problem formulation and an easy-to-use demonstrative toolbox for students at various levels (e.g., undergraduate, bachelor, master, doctorate), researchers, and educators. This tool facilitates familiarization with the key concepts of RL, its problem formulation, and its implementation. The results demonstrated in this paper are produced by a Python program that is released open-source, along with other lecture materials, to reduce the learning barriers in this innovative research topic in robotics. The RL paradigm is showing promising results as a general-purpose framework for solving decision-making problems (e.g., robotics, games, finance). In this work, RL is used to solve a 2D robot navigation problem where the robot needs to avoid collisions with obstacles while aiming to reach a goal point. A navigation problem is simple and convenient for educational purposes, since the outcome is unambiguous (e.g., the goal is reached or not, a collision happened or not). Thus, the intent is to accelerate the adoption of RL techniques in the field of mobile robotics, and to motivate and promote the use of RL to solve decision-making problems, specifically in robotics. Due to a lack of accessible educational and demonstrative toolboxes in the field of RL, this work combines theoretical exposition with an accessible open-source graphical interactive toolbox to facilitate comprehension. This study aims to reduce the learning barriers and inspire young students, researchers, and educators to use RL as a natural tool to solve robotics problems.
- Published
- 2023
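The 2D obstacle-avoidance navigation task described above maps naturally onto tabular Q-learning, the usual entry point for newcomers. The sketch below is not the released toolbox; the grid, rewards, and hyper-parameters are illustrative choices. A robot on a 4x4 grid learns to reach a goal cell while avoiding obstacle cells, matching the abstract's unambiguous outcomes (goal reached or collision):

```python
import random

GRID = 4
GOAL = (3, 3)
OBSTACLES = {(1, 1), (2, 2)}
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Deterministic transition; an episode ends at the goal or a crash."""
    r, c = state
    dr, dc = action
    nxt = (max(0, min(GRID - 1, r + dr)), max(0, min(GRID - 1, c + dc)))
    if nxt == GOAL:
        return nxt, 1.0, True       # reached the goal
    if nxt in OBSTACLES:
        return nxt, -1.0, True      # collision
    return nxt, -0.01, False        # small step cost favors short paths

def train(episodes=5000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with an epsilon-greedy behavior policy."""
    rng = random.Random(seed)
    q = {(r, c): [0.0] * 4 for r in range(GRID) for c in range(GRID)}
    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            a = rng.randrange(4) if rng.random() < eps else max(range(4), key=lambda i: q[s][i])
            s2, reward, done = step(s, ACTIONS[a])
            target = reward if done else reward + gamma * max(q[s2])
            q[s][a] += alpha * (target - q[s][a])
            s = s2
    return q

def greedy_path(q, start=(0, 0), limit=20):
    """Follow the learned greedy policy from the start cell."""
    s, path = start, [start]
    for _ in range(limit):
        s, _, done = step(s, ACTIONS[max(range(4), key=lambda i: q[s][i])])
        path.append(s)
        if done:
            break
    return path

print(greedy_path(train()))
```

After training, the greedy policy traces a collision-free route from (0, 0) to (3, 3); changing `OBSTACLES` or the rewards is a convenient way to explore the concepts the toolbox teaches.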
6. Passing Vehicle Road Occupancy Detection Using the Magnetic Sensor Array
- Author
-
Juozas Balamutas, Dangirutis Navikas, Vytautas Markevičius, Mindaugas Čepenas, Algimantas Valinevičius, Mindaugas Žilys, Michal Frivaldsky, Zhixiong Li, and Darius Andriukaitis
- Subjects
magnetic signature, General Computer Science, magnetic field measurement, intelligent transportation systems, General Engineering, General Materials Science, Electrical and Electronic Engineering, vehicle re-identification - Abstract
The increasing presence of vehicles on roads necessitates intelligent traffic management solutions in areas where video cameras cannot be utilized. Currently, there are limited choices for depersonalized vehicle re-identification systems. This paper introduces a system that will later be used for vehicle re-identification. The system uses anisotropic magnetoresistive sensors and is based on the hypothesis that each vehicle leaves a unique magnetic signature that can be used for comparison and matching. A methodology for detecting a vehicle's position on the road, perpendicular to the sensor array, is presented in this work. An array of magnetic sensors is installed in the asphalt across the vehicle's driving direction. The system continuously measures Earth's natural magnetic field, detects distortions when a vehicle passes the sensor array, and logs the magnetic signatures. Useful parameters – modules and derivatives – are calculated from the raw sensor axes. A signal-to-noise ratio computed on the module derivatives, between ambient noise and the signal, provides important features for the neural network input. Different types of neural network architectures and output interpretation techniques are investigated. Further, after evaluating the network output, it is possible to label the sensor nodes that are directly beneath the vehicle. Experimental results show that the implemented algorithm reliably selects the valid sensors under the vehicle. Correct sensor selection is important for subsequent re-identification algorithms.
- Published
- 2023
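The feature chain sketched in the abstract above (field module from raw axes, discrete derivative, then a signal-to-noise ratio between a vehicle-passage window and an ambient-noise window) is simple enough to write out. The functions below are an illustrative reconstruction under those assumptions, not the paper's implementation, and the window values in the usage are invented:

```python
import math

def module(samples):
    """Per-sample magnitude of (x, y, z) magnetometer readings."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

def derivative(series):
    """First difference of the module series."""
    return [b - a for a, b in zip(series, series[1:])]

def snr_db(signal_window, noise_window):
    """Ratio of mean squared derivative in the two windows, in dB."""
    power = lambda w: sum(v * v for v in w) / len(w)
    return 10.0 * math.log10(power(signal_window) / power(noise_window))

# Invented derivative windows: large swings during a passage,
# small jitter in the ambient-noise window.
print(snr_db([1.0, -2.0, 1.5], [0.05, -0.04, 0.06]))
```

A high SNR on a node's module derivative is then exactly the kind of feature that lets the network flag that node as lying directly beneath the vehicle.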
7. A Parallel Corpus-Based Approach to the Crime Event Extraction for Low-Resource Languages
- Author
-
N. Khairova, O. Mamyrbayev, N. Rizun, M. Razno, and G. Ybytayeva
- Subjects
event extraction, Cross-lingual transfer, General Computer Science, Computer Sciences, crime analysis, parallel corpus, General Engineering, General Materials Science, natural language processing, Electrical and Electronic Engineering, low-resource language, semantic annotation - Abstract
These days, many crime-related events take place all over the world. Most of them are reported in news portals and social media. Crime-related event extraction from the published texts allows monitoring, analysis, and comparison of police or criminal activities in different countries or regions. Existing approaches to event extraction mainly address texts in English, French, Chinese, and other resource-rich and well-annotated languages. This paper presents a parallel corpus-based approach that follows a closed-domain event extraction methodology for event extraction from web news articles in low-resource languages. To identify the event, its arguments, and the arguments' roles in the source-language part of the corpus, we utilize an enhanced pattern-based method that involves a multilingual synonyms dictionary with knowledge about crime-related concepts and logic-linguistic equations. The event extraction from the target-language part of the corpus uses a cross-lingual crime-related event extraction transfer technique that is based on supplementary knowledge about the semantic similarity patterns of the considered pair of languages. The presented approach does not require a preliminarily annotated corpus for training, making it more attractive for low-resource languages, and allows extracting TRANSFER, CRIME, and POLICE types of events and their seven subtypes from various topics of news articles simultaneously. Applying our approach to the Russian-Kazakh parallel corpus of news portal articles yielded an F1-measure for crime-related event extraction of over 82% for the source language and 63% for the target language.
- Published
- 2023
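The pattern-based side of the closed-domain approach above can be illustrated with a toy extractor. The single regex, the trigger lexicon, and the agent/trigger/target roles below are invented for illustration; the paper's method relies on logic-linguistic equations and a multilingual crime-concept dictionary, not this sketch:

```python
import re

# Invented trigger lexicon; the paper uses a multilingual
# crime-concept dictionary instead.
CRIME_VERBS = r"(?P<trigger>stole|robbed|arrested)"

PATTERN = re.compile(
    r"(?P<agent>[A-Z][a-z]+)\s+" + CRIME_VERBS + r"\s+(?P<target>[\w\s]+?)(?:\.|$)"
)

def extract_event(sentence):
    """Return the event trigger and its argument roles, or None."""
    m = PATTERN.search(sentence)
    if not m:
        return None
    return {"agent": m.group("agent"),
            "trigger": m.group("trigger"),
            "target": m.group("target").strip()}

print(extract_event("Police arrested the suspect."))
```

In the cross-lingual setting, matches like this on the source-language half of a parallel corpus are what get projected onto the aligned target-language sentences.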
8. Deep Unfolding of Chebyshev Accelerated Iterative method for Massive MIMO Detection
- Author
-
Salah Berra, Sourav Chakraborty, Rui Dinis, and Shahriar Shahabuddin
- Subjects
overrelaxation, signal detection, General Computer Science, massive MIMO, General Engineering, accelerated Chebyshev, iterative methods, General Materials Science, deep unfolding, Electrical and Electronic Engineering, matrix inversion - Abstract
The zero-forcing (ZF) and minimum mean square error (MMSE) based detectors can approach optimal performance in the uplink of massive multiple-input multiple-output (MIMO) systems. However, they require inverting a matrix whose complexity is cubic in the matrix dimension. This can lead to high computational effort, especially in massive MIMO systems. To mitigate this, several iterative methods have been proposed in the literature. In this paper, we consider accelerated Chebyshev SOR (AC-SOR) and accelerated Chebyshev AOR (AC-AOR) algorithms, which improve the detection performance of the conventional Successive Over-Relaxation (SOR) and Accelerated Over-Relaxation (AOR) methods, respectively. Additionally, we propose using a deep unfolding network (DUN) to optimize the parameters of the iterative AC-SOR and AC-AOR algorithms, leading to the AC-SORNet and AC-AORNet methods, respectively. The proposed DUN-based method leads to significant performance improvements over conventional iterative detectors for various massive MIMO channels. The results demonstrate that the AC-AORNet and AC-SORNet are effective, outperforming other state-of-the-art algorithms, particularly for high-order modulations such as 256-QAM (Quadrature Amplitude Modulation). Moreover, the proposed AC-AORNet and AC-SORNet require almost the same number of computations as the AC-AOR and AC-SOR methods, respectively, since the use of deep unfolding has a negligible impact on the system's detection complexity. Furthermore, the proposed DUN features a fast and stable training scheme due to its small number of trainable parameters.
- Published
- 2023
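The inversion-free idea behind the detectors above is easiest to see in plain successive over-relaxation. The sketch below is a minimal stand-in, not the paper's accelerated Chebyshev variants or their deep-unfolded parameters: it iteratively solves a tiny real-valued symmetric positive-definite system of the kind a regularized MMSE Gram matrix produces, never forming the cubic-cost inverse; the matrix, right-hand side, and relaxation factor are illustrative choices:

```python
def sor_solve(A, b, omega=1.1, iters=50):
    """Solve A x = b by successive over-relaxation.

    A is a list of rows; omega is the relaxation factor (omega = 1
    reduces to Gauss-Seidel). Each sweep costs O(n^2), versus O(n^3)
    for a direct inverse.
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / A[i][i]
    return x

# Diagonally dominant SPD example (e.g. a small regularized Gram matrix).
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
x = sor_solve(A, b)
print(x)
```

The accelerated Chebyshev variants and the deep unfolding in the paper tune exactly the kind of free parameters seen here (omega and the per-iteration step) to cut the iteration count.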
9. Robust Cross Directional Controller Design for Paper Machine Spatial Distributed System
- Author
-
Sanjeev Kumar, Subhash Chander Sharma, Rajesh Mahadeva, Janaka Alawatugoda, and Vinay Gupta
- Subjects
General Computer Science, General Engineering, General Materials Science, Electrical and Electronic Engineering - Published
- 2023
10. Effect of Electric-Field Components on the Flashover Characteristics of Oil-Paper Insulation Under Combined AC-DC Voltage
- Author
-
Jin Fubao and Zhou Yuanxiang
- Subjects
General Computer Science, General Engineering, General Materials Science, Electrical and Electronic Engineering - Published
- 2023
11. Annotated Open Corpus Construction and BERT-Based Approach for Automatic Metadata Extraction From Korean Academic Papers
- Author
-
Hyesoo Kong, Hwamook Yoon, Jaewook Seol, Mihwan Hyun, Hyejin Lee, Soonyoung Kim, and Wonjun Choi
- Subjects
General Computer Science, General Engineering, General Materials Science, Electrical and Electronic Engineering - Published
- 2023
12. Self-Adaptive Overtemperature Protection Materials for Safety-Centric Domestic Induction Heating Applications
- Author
-
A. Pascual, J. Acero, S. Llorente, C. Carretero, and J. M. Burdio
- Subjects
General Computer Science, General Engineering, General Materials Science, Electrical and Electronic Engineering - Abstract
Security aspects in the household sphere have become a major concern in modern societies. In particular, regardless of the technology used, users increasingly appreciate a protection system to prevent material damage in the case of human errors or distractions during the cooking process. This paper presents a sensorless method for detecting and limiting overtemperature, unique to induction cooktops, based on their specific features, such as automatic pot detection and load power factor estimation. The protection system exploits the change in the load material properties at certain temperatures, the effect of which may be enhanced by arranging a multilayer structure comprising a low Curie temperature alloy and an aluminum layer. The proposed multilayer load exhibits two differentiated states: a normal state, where the cookware is efficiently heated, and a protection state, above the safety temperature, where the power factor abruptly decreases, limiting the overheating and making the state easily detectable by the cooktop. This method of overtemperature self-protection uses the electronics of conventional induction cooktops; therefore, no other sensors or systems are required, reducing its complexity and costs. Simulation and experimental results are provided for several cookware designs, thereby proving the feasibility of this proposal.
- Published
- 2023
13. Adaptive Fuzzy Supplementary Controller for SSR Damping in a Series-Compensated DFIG-Based Wind Farm
- Author
-
Mohamed Abdeen, Sayed Hosny Ahmed El-Banna, Sara Elgohary, Hend Mostafa, Nora Ghaly, Nourhan Adel, Zeinab Elkhwas, Mohamed Alahmady, Hossam M. Zawbaa, and Salah Kamel
- Subjects
General Computer Science, Electrical and Electronics, Sub-synchronous resonance (SSR), General Engineering, adaptive fuzzy supplementary controller, General Materials Science, gate-controlled series capacitor (GCSC), stability, Electrical and Computer Engineering, Electrical and Electronic Engineering - Abstract
Although using a series compensation technique in a long transmission line effectively increases the transmittable power, it may cause a sub-synchronous resonance (SSR) phenomenon. The gate-controlled series capacitor (GCSC) is an effective method for SSR damping by controlling the turn-off angle. In previous studies, a constant supplementary damping controller (SDC) was used for controlling the turn-off angle, which can mitigate the SSR phenomenon. However, these methods cannot capture the maximum transmittable power at different operating points. In this paper, a fuzzy logic controller (FLC) is proposed to compute the gain of the SDC based on the wind speed and the error between the measured and reference line currents, for transferring as much power as possible and damping the SSR phenomenon simultaneously. Using the MATLAB/SIMULINK program, the proposed method is tested at different operating points to validate its effectiveness and robustness. Compared to the traditional method (constant SDC), the maximum transmittable power, as well as SSR damping, is achieved in all studied cases by the proposed method (variable SDC).
- Published
- 2023
14. Rule-Based Design for Low-Cost Double-Node Upset Tolerant Self-Recoverable D-Latch
- Author
-
Seyedehsomayeh Hatefinasab, Alfredo Medina-Garcia, Diego P. Morales, Encarnacion Castillo, and Noel Rodriguez
- Subjects
Double node upset (DNU), Low cost single event double node upset tolerant (LSEDUT), General Computer Science, High impedance state (HIS), Delay-power-area product (DPAP), Soft error (SE), General Engineering, General Materials Science, Electrical and Electronic Engineering, Single node upset (SNU), Power-delay product (PDP) - Abstract
This paper presents a low-cost, self-recoverable, double-node upset tolerant latch, addressing the scarcity of such devices in the state of the art, especially ones featuring self-recoverability while maintaining a low-cost profile. This D-latch may be useful for high-reliability, high-performance safety-critical applications, as it can detect and recover from faults occurring during holding time in harsh radiation environments. The proposed D-latch design is based on a low-cost single event double-node upset tolerant latch and a rule-based double-node upset (DNU) tolerant latch, which provide it with self-recoverability against DNUs paired with a low transistor count and high performance. Simulation waveforms support these achievements and demonstrate that this new D-latch is fully self-recoverable against double-node upsets. In addition, the minimum improvement of the delay-power-area product of the proposed rule-based design for the low-cost DNU tolerant self-recoverable latch (RB-LDNUR) is 59%, compared with the latest DNU self-recoverable latch in the literature.
- Published
- 2023
15. Abnormality Detection and Localization Schemes Using Molecular Communication Systems: A Survey
- Author
-
Ali Etemadi, Maryam Farahnak-Ghazani, Hamidreza Arjmandi, Mahtab Mirmohseni, and Masoumeh Nasiri-Kenari
- Subjects
General Computer Science, Information Theory (cs.IT), Signal Processing (eess.SP), General Engineering, General Materials Science, Electrical and Electronic Engineering - Abstract
Abnormality detection and localization (ADL) have been studied widely in the wireless sensor network (WSN) literature, where the sensors use electromagnetic waves for communication. Molecular communication (MC) has been introduced as an alternative approach for ADL in particular areas such as healthcare, being able to tackle the shortcomings of conventional WSNs, such as invasiveness, bio-incompatibility, and high energy consumption. In this paper, we introduce a general framework for MC-based ADL, which consists of multiple tiers for sensing the abnormality and communication between different agents, including the sensors, the fusion center (FC), the gateway (GW), and the external node (e.g., a local cloud), and describe each tier and the agents in this framework. We classify and explain different abnormality recognition methods, the functional units of the sensors, and different sensor features. Further, we describe different types of interfaces required for converting the internal and external signals at the FC and GW. Moreover, we present a unified channel model for the sensing and communication links. We categorize the MC-based abnormality detection schemes based on sensor mobility, cooperative detection, and cooperative sensing/activation. We also classify the localization approaches based on sensor mobility and propulsion mechanisms and present a general framework for externally-controllable localization systems. Finally, we present some challenges and future research directions for realizing and developing MC-based systems for ADL. The important challenges in MC-based systems lie in four main directions: implementation, system design, modeling, and methods, which need considerable attention from multidisciplinary perspectives.
- Published
- 2023
16. Framework for Illumination Estimation and Segmentation in Multi-Illuminant Scenes
- Author
-
Donik Vrsnak, Ilija Domislovic, Marko Subasic, and Sven Loncaric
- Subjects
General Computer Science, Color Constancy, Segmentation, Multi-Illuminant, Illumination Estimation, Deep Learning, Framework, General Engineering, General Materials Science, Electrical and Electronic Engineering - Abstract
Color constancy is an important part of the human visual system, as it allows us to perceive the colors of objects invariant to the color of the illumination that is illuminating them. Modern digital cameras have to recreate this property computationally. However, this is not a simple task, as the response of each pixel on the camera sensor is the product of the combination of the spectral characteristics of the illumination, the object, and the sensor. Therefore, many assumptions have to be made to approximately solve this problem. One common procedure is to assume only one global source of illumination. However, this assumption is often broken in real-world scenes. Thus, multi-illuminant estimation and segmentation is still a mostly unsolved problem. In this paper, we address this problem by proposing a novel framework capable of estimating the per-pixel illumination of any scene with two sources of illumination. The framework consists of a deep-learning model capable of segmenting an image into regions with uniform illumination and models capable of single-illuminant estimation. First, a global estimation of the illumination is produced and used as input to the segmentation model along with the original image, which segments the image into regions where that illuminant is dominant. The output of the segmentation is used to mask the input, and the masked images are given to the estimation models, which produce the final estimates of the illuminations. The models comprising the framework are first trained separately, then combined and fine-tuned jointly. This allows us to utilize well-researched single-illuminant estimation models in a multi-illuminant scenario. We show that such an approach improves both segmentation and estimation capabilities. We tested different configurations of the proposed framework against other single- and multi-illuminant estimation and segmentation models on a large dataset of multi-illuminant images.
On this dataset, the proposed framework achieves the best results in both the multi-illuminant estimation and segmentation problems. Furthermore, the generalization properties of the framework were tested on commonly used single-illuminant datasets. There, it achieved performance comparable with state-of-the-art single-illuminant models, even though it was trained only on multi-illuminant images.
- Published
- 2023
17. A Deep Learning Receiver for Non-Linear Transmitter
- Author
-
Hamed Farhadi, Johan Haraldson, and Mårten Sundberg
- Subjects
General Computer Science, 6G, AI, ML, PA nonlinearity, Hexa-X, General Engineering, General Materials Science, Electrical and Electronic Engineering - Abstract
Non-linearity of wireless transceivers, specifically power amplifier (PA) non-linearity, can pose major limitations on achieving high-throughput, cost- and energy-efficient wireless communication systems. Such limitations from the PA are typically compensated in the transmitter, e.g., by applying power back-off or performing digital pre-distortion (DPD) aiming to linearize the transmitter. However, applying PA power back-off leads to lower energy efficiency and lower output power, and hence lower coverage, while performing DPD results in higher transmitter complexity. This paper presents an alternative approach based on a receiver method to perform signal detection in the presence of distortions due to PA non-linearity. We propose a receiver technique using artificial neural networks (ANNs) to compensate for the PA non-linearity at the receiver side. The paper presents link-level simulation results using pre-trained neural network models based on synthesized training data. The simulation results confirm that the designed receiver can tolerate higher distortions and hence allows the PA output power back-off to be reduced, leading to higher output power and improving coverage, spectral efficiency, energy efficiency, and throughput.
- Published
- 2023
18. Novel Rotor Structure Employing Large Flux Barrier and Disproportional Airgap for Enhancing Efficiency of IPMSM Adopting Concentrated Winding Structure
- Author
-
Xianji Tao, Masatsugu Takemoto, Ren Tsunata, and Satoshi Ogasawara
- Subjects
General Computer Science, Iron, concentrated winding structure, General Engineering, disproportional airgap, flux barrier, Costs, IPMSM, high efficiency, Torque, Rotors, General Materials Science, Electrical and Electronic Engineering, Magnetic flux, Copper - Abstract
Interior permanent magnet synchronous motors (IPMSMs) adopting concentrated windings have been widely used in industrial applications. To reduce operating costs, it is important to enhance the efficiency of an IPMSM as much as possible while maintaining manufacturing costs. In general, an IPMSM used in an industrial application always operates in a specific operating area according to the required load. Therefore, this paper has two purposes. The first is to propose a novel rotor structure which can enhance efficiency in the target wide-speed middle-torque operating area without additional manufacturing costs. The second is to clarify the design method for a suitable rotor structure depending on its target operating area. Reducing losses is the key to enhancing efficiency. This paper first examines the effects of adopting large flux barriers and a disproportional airgap on copper and iron losses, and clarifies their merits and respective high-efficiency operating areas. Furthermore, to take advantage of both rotor structures, a novel rotor structure which employs both large flux barriers and a disproportional airgap is proposed. 2D-FEM (finite-element method) analysis is used for discussion first, and a prototype machine is manufactured to verify the 2D-FEM results. Both 2D-FEM and experimental results show that the proposed rotor structure enhances the efficiency of an IPMSM most effectively in the target operating area. Moreover, for a low-speed high-torque operating area, adopting only large flux barriers is most suitable, while for a high-speed low-torque operating area, adopting only a disproportional airgap is most suitable.
- Published
- 2023
19. Enabling All In-Edge Deep Learning: A Literature Review
- Author
-
Praveen Joshi, Mohammed Hasanuzzaman, Chandra Thapa, Haithem Afli, and Ted Scully
- Subjects
General Computer Science, Machine Learning (cs.LG), Distributed, Parallel, and Cluster Computing (cs.DC), General Engineering, General Materials Science, Electrical and Electronic Engineering - Abstract
In recent years, deep learning (DL) models have demonstrated remarkable achievements on non-trivial tasks such as speech recognition and natural language understanding. One of the significant contributors to its success is the proliferation of end devices that acted as a catalyst to provide data for data-hungry DL models. However, computing DL training and inference is the main challenge. Usually, central cloud servers are used for the computation, but it opens up other significant challenges, such as high latency, increased communication costs, and privacy concerns. To mitigate these drawbacks, considerable efforts have been made to push the processing of DL models to edge servers. Moreover, the confluence point of DL and edge has given rise to edge intelligence (EI). This survey paper focuses primarily on the fifth level of EI, called all in-edge level, where DL training and inference (deployment) are performed solely by edge servers. All in-edge is suitable when the end devices have low computing resources, e.g., Internet-of-Things, and other requirements such as latency and communication cost are important in mission-critical applications, e.g., health care. Firstly, this paper presents all in-edge computing architectures, including centralized, decentralized, and distributed. Secondly, this paper presents enabling technologies, such as model parallelism and split learning, which facilitate DL training and deployment at edge servers. Thirdly, model adaptation techniques based on model compression and conditional computation are described because the standard cloud-based DL deployment cannot be directly applied to all in-edge due to its limited computational resources. Fourthly, this paper discusses eleven key performance metrics to evaluate the performance of DL at all in-edge efficiently. Finally, several open research challenges in the area of all in-edge are presented.
- Published
- 2023
20. HEDF: A Method for Early Forecasting Software Defects Based on Human Error Mechanisms
- Author
-
Fuqun Huang and Lorenzo Strigini
- Subjects
FOS: Computer and information sciences ,General Computer Science ,Computer Science - Artificial Intelligence ,General Engineering ,Computational Complexity (cs.CC) ,D.2 ,K.4 ,Software Engineering (cs.SE) ,Computer Science - Software Engineering ,Computer Science - Computational Complexity ,Computer Science - Computers and Society ,Artificial Intelligence (cs.AI) ,Computers and Society (cs.CY) ,General Materials Science ,Electrical and Electronic Engineering - Abstract
As the primary cause of software defects, human error is the key to understanding, and perhaps to predicting and avoiding them. Little research has been done to predict defects on the basis of the cognitive errors that cause them. This paper proposes an approach to predicting software defects through knowledge about the cognitive mechanisms of human errors. Our theory is that the main process behind a software defect is that an error-prone scenario triggers human error modes, which psychologists have observed to recur across diverse activities. Software defects can then be predicted by identifying such scenarios, guided by this knowledge of typical error modes. The proposed idea emphasizes predicting the exact location and form of a possible defect. We conducted two case studies to demonstrate and validate this approach, with 55 programmers in a programming competition and 5 analysts serving as the users of the approach. We found it impressive that the approach was able to predict, at the requirement phase, the exact locations and forms of 7 out of the 22 (31.8%) specific types of defects that were found in the code. The defects predicted tended to be common defects: their occurrences constituted 75.7% of the total number of defects in the 55 developed programs; each of them was introduced by at least two persons. The fraction of the defects introduced by a programmer that were predicted was on average (over all programmers) 75%. Furthermore, these predicted defects were highly persistent through the debugging process. If the prediction had been used to successfully prevent these defects, this could have saved 46.2% of the debugging iterations. This excellent capability of forecasting the exact locations and forms of possible defects at the early phases of software development recommends the approach for substantial benefits to defect prevention and early detection., 30 pages, 5 figures, and 17 tables
- Published
- 2023
21. Selective Harmonic Elimination in a Multilevel Inverter Using Multi-Criteria Search Enhanced Firefly Algorithm
- Author
-
Muhammad Khizer, Sheroze Liaquat, Muhammad Fahad Zia, Saikrishna Kanukollu, Ahmed Al-Durra, and S. M. Muyeen
- Subjects
Cascaded H-bridge multilevel inverter ,General Computer Science ,firefly algorithm ,General Engineering ,General Materials Science ,multi-criteria search ,Electrical and Electronic Engineering ,selective harmonic elimination - Abstract
This research paper proposes a new multi-criteria search based enhanced firefly algorithm for solving the selective harmonic elimination problem in a multilevel inverter. The enhanced firefly algorithm utilizes the adaptive nature of social and cognitive components to find the global optimum. To demonstrate the effectiveness of the proposed algorithm and to evaluate its results, a three-phase nine-level cascaded multilevel inverter is used. The algorithm is compared with existing meta-heuristic algorithms, namely particle swarm optimization and the standard firefly algorithm, to validate its effectiveness. Crucial optimization parameters, including population size and number of iterations, are kept the same for comparison. Total harmonic distortion and the convergence behaviour of the algorithms against various modulation index values are considered. The results clearly indicate that the proposed algorithm surpasses particle swarm optimization and the firefly algorithm in terms of convergence behaviour, attaining a lower fitness value in fewer iterations. Finally, the experimental validation of selective harmonic elimination in a multilevel inverter is also performed and analyzed. This work was supported by Khalifa University.
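The core firefly move rule that the paper enhances with adaptive social and cognitive components can be sketched on a generic minimization problem. The enhanced variant, the THD-based SHE fitness function, and all parameter values below are not from the paper; this is the standard firefly update on a simple sphere function:

```python
import math
import random

def firefly_minimize(f, dim=2, n=15, iters=50, beta0=1.0, gamma=1.0, alpha=0.2, seed=1):
    """Standard firefly algorithm: dimmer fireflies move toward brighter ones."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    cost = [f(x) for x in pop]
    best = min(cost)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:  # j is "brighter" (lower cost)
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
                    pop[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
                    cost[i] = f(pop[i])
                    best = min(best, cost[i])
    return best

sphere = lambda x: sum(v * v for v in x)  # stand-in fitness; the paper minimizes SHE equations
```

In the paper's setting, `f` would evaluate the selective-harmonic-elimination equations for a candidate set of switching angles rather than this stand-in.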
- Published
- 2023
22. Secure, ID Privacy and Inference Threat Prevention Mechanisms for Distributed Systems
- Author
-
Tahani Hamad Aljohani and Ning Zhang
- Subjects
Authentication ,General Computer Science ,Inference attack ,Shannon entropy ,General Engineering ,Public key ,Encryption ,Pseudonym ,ID Privacy ,Security ,Elliptic curves ,General Materials Science ,Electrical and Electronic Engineering ,Data privacy - Abstract
This paper investigates the remote collection of a patient’s data in a distributed system while protecting the security of the data, preserving the privacy of the patient’s ID, and preventing inference attacks. The paper presents a novel framework called SPID, which stands for Secure, ID Privacy, and Inference Threat Prevention Mechanisms for Distributed Systems. In designing this framework, we make the following novel contributions. SPID presents a novel architecture that supports the use of a distributed set of servers owned by different service providers. SPID allows the patient to access these servers using certificates generated by the patient, to select one server to be the home server, and to select a number of servers to be foreign servers. The patient uses the foreign servers to upload data; the home server is responsible for collecting the patient’s data from the foreign servers and sending them to the healthcare provider. SPID also proposes a method for efficiently verifying each request from the patient without searching the server’s database for the verification key, by exploiting properties of Elliptic Curve Cryptography (ECC). SPID has been analyzed using a benchmarking tool and evaluated using queuing theory. The evaluation results indicate efficient performance as the number of servers increases. We use the Shannon entropy method to measure the likelihood of an inference attack.
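The Shannon entropy measure used to quantify inference-attack likelihood can be illustrated directly. The distributions below are hypothetical; the paper applies the measure to its own anonymity sets:

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)); higher entropy means an observer learns less
    about which patient is behind a given request."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# If a request is equally likely to come from any of 8 patients,
# the attacker faces maximum uncertainty: log2(8) = 3 bits.
uniform = [1 / 8] * 8
# A skewed distribution leaks information: entropy drops below 3 bits.
skewed = [0.5, 0.3, 0.1, 0.1]
```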
- Published
- 2023
23. Studying Direct Lightning Stroke Impact on Human Safety Near HVTL Towers Considering Two Layer Soils and Ionization Influence
- Author
-
Osama E. Gouda, Adel Z. El Dein, Sara Yassin Omar, Matti Lehtonen, Mohamed M. F. Darwish, Cairo University, Aswan University, Department of Electrical Engineering and Automation, Aalto-yliopisto, and Aalto University
- Subjects
ATP program ,Ionization ,General Computer Science ,Human body safety ,General Engineering ,High-voltage techniques ,High voltage transmission lines ,Lightning ,Soil ionization ,Electric potential ,Poles and towers ,Human heart ,Lightning strokes ,Power transmission lines ,General Materials Science ,Soil moisture ,Safety ,Electrical and Electronic Engineering ,Electrodes ,Human factors ,non-homogenous soil ,Grounding - Abstract
A lightning strike is considered one of the riskiest natural phenomena and can cause harm both to humans and to the surrounding soil layers. To tackle this issue, this article investigates the influence of direct lightning characteristics on human body safety. Specifically, the investigation examines the effect of the resistivities of two-layer soils on human safety when a lightning stroke hits a tower of a high-voltage transmission line (HVTL). The merit of the proposed study is that the soil ionization phenomenon is taken into consideration. Further, the study focuses on the current passing through the human heart when step and touch (contact) voltages are generated by the grounding potential rise caused by a direct lightning strike to a transmission tower, and on the resulting potential rise to which a person could be exposed. The effects of the peak current and the timing of lightning strokes are also investigated. Additionally, the paper presents the effect of different reflection factors on human safety. For validation purposes, the ATP program is used in the simulation of the grounding system as well as of the human body model. Numerous simulations were carried out to examine the behavior of the current passing through the human heart. Based on the simulation results, it was concluded that the soil characteristics have a strong influence on the contact and step potentials and, accordingly, on the survival threshold.
- Published
- 2023
24. Experimental Evaluation of Quantum Machine Learning Algorithms
- Author
-
Ricardo Daniel Monteiro Simoes, Patrick Huber, Nicola Meier, Nikita Smailov, Rudolf M. Fuchslin, and Kurt Stockinger
- Subjects
General Computer Science ,Machine learning ,General Engineering ,General Materials Science ,Quantum computing ,006: Spezielle Computerverfahren ,Electrical and Electronic Engineering ,Neural network - Abstract
Machine learning and quantum computing are both areas with considerable progress in recent years. The combination of these disciplines holds great promise for both research and practical applications. Recently there have also been many theoretical contributions of quantum machine learning algorithms, with experiments performed on quantum simulators. However, most questions concerning the potential of machine learning on quantum computers are still unanswered, such as: How well do current quantum machine learning algorithms work in practice? How do they compare with classical approaches? Moreover, most experiments use different datasets, and hence it is currently not possible to systematically compare different approaches. In this paper we analyze how quantum machine learning can be used for solving small, yet practical problems. In particular, we perform an experimental analysis of kernel-based quantum support vector machines and quantum neural networks. We evaluate these algorithms on 5 different datasets using different combinations of quantum feature maps. Our experimental results show that quantum support vector machines outperform their classical counterparts on average by 3 to 4% in accuracy, both on a quantum simulator and on a real quantum computer. Moreover, quantum neural networks executed on a quantum computer further outperform quantum support vector machines on average by up to 5%, and classical neural networks by 7%.
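A kernel-based quantum SVM replaces the classical kernel with state overlaps induced by a quantum feature map. For a simple angle-encoding map that overlap has a closed form that can be checked classically; the one-qubit-per-feature product map below is a common textbook choice, not necessarily one of the feature maps evaluated in the paper:

```python
import math

def quantum_kernel(x, y):
    """Overlap kernel |<phi(x)|phi(y)>|^2 for an angle-encoding feature map
    sending each feature x_k to the single-qubit state (cos(x_k/2), sin(x_k/2));
    the product state gives a product of per-feature overlaps cos^2((x_k-y_k)/2)."""
    k = 1.0
    for xk, yk in zip(x, y):
        k *= math.cos((xk - yk) / 2) ** 2
    return k

# The Gram matrix that would be fed to a classical SVM solver (precomputed kernel):
data = [[0.1, 0.5], [1.2, -0.3], [2.0, 0.9]]
gram = [[quantum_kernel(a, b) for b in data] for a in data]
```

On a real device the entries of `gram` would instead be estimated by running the feature-map circuit and measuring state overlaps.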
- Published
- 2023
25. Shortest Path Finding in Quantum Networks With Quasi-Linear Complexity
- Author
-
Sara Santos, Francisco A. Monteiro, Bruno C. Coutinho, and Yasser Omar
- Subjects
Path-finding algorithm ,Multiobjective routing ,General Computer Science ,General Engineering ,Ciências Naturais::Ciências da Computação e da Informação [Domínio/Área Científica] ,Engenharia e Tecnologia::Engenharia Eletrotécnica, Eletrónica e Informática [Domínio/Área Científica] ,General Materials Science ,Quantum repeaters ,Electrical and Electronic Engineering ,End-to-end fidelity ,Quantum networks - Abstract
A fully-quantum network implies the creation of quantum entanglement between a given source node and some other destination node, with a number of quantum repeaters in between. This paper tackles the problem of quantum entanglement distribution by solving the routing problem over an infrastructure based on quantum repeaters and with a finite number of pairs of entangled qubits available in each link. The network model considers that link purification is available such that a nested purification protocol can be applied at each link to generate entangled qubits with higher fidelity than the original ones. A low-complexity multi-objective routing algorithm to find the shortest path between any two given nodes is proposed and assessed for random networks, using a fairly general path extension mechanism that can fit a large family of particular technological requirements. Different types of quantum protocols require different levels of fidelity for the entangled qubit pairs. For that reason, the proposed algorithm identifies the shortest path between two nodes that assures an end-to-end fidelity above a specified threshold. The minimum requirements for the end-to-end entanglement fidelity depend on the whole extension of the paths, and cannot be looked at as a local property of each link. Moreover, one needs to keep track not only of the shortest path, but also of longer paths holding more entangled qubits than the shorter paths in order to satisfy the fidelity criterion. Thus, standard single parameter shortest-path algorithms do not necessarily converge to the optimal solution. The problem of finding the best path in a network subject to multiple criteria (known as multi-objective routing) is, in general, an NP-hard problem due to the rapid growth of the number of stored paths. This work proposes a metric that identifies and discards paths that are objectively worse than others. 
By doing so, the time complexity of the proposed algorithm scales near-to-linearly with respect to the number of nodes in the network, showing that the shortest path problem in quantum networks can be solved with a complexity very close to the one of the classical counterparts. That is analytically proved for the case where all the links of a path have the same fidelity (homogeneous model). The algorithm is also adapted to a particular type of path extension, where different links along a path can be purified to different degrees, asserting its flexibility and near-to-linearity even when heterogeneous fidelities along the sections of a path are considered.
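The need to keep longer-but-higher-fidelity paths alongside the hop-shortest one can be sketched with a label-based Dijkstra variant. As a simplification, end-to-end fidelity is modeled here as the product of link fidelities, which is not the exact composition law under entanglement swapping and purification; the graph and threshold are illustrative:

```python
import heapq

def shortest_feasible_path(graph, src, dst, f_min):
    """Return the hop-shortest path whose (multiplicative) end-to-end fidelity
    stays at or above f_min. Per-node labels (hops, fidelity) are kept only if
    no existing label dominates them, which bounds the path explosion."""
    labels = {src: [(0, 1.0)]}
    heap = [(0, -1.0, src, [src])]
    while heap:
        hops, neg_f, node, path = heapq.heappop(heap)
        f = -neg_f
        if node == dst:
            return path, hops, f
        for nxt, link_f in graph.get(node, []):
            nf, nh = f * link_f, hops + 1
            if nf < f_min:
                continue  # this prefix can never satisfy the fidelity threshold
            if any(h <= nh and g >= nf for h, g in labels.get(nxt, [])):
                continue  # dominated by an existing label: drop it
            labels.setdefault(nxt, []).append((nh, nf))
            heapq.heappush(heap, (nh, -nf, nxt, path + [nxt]))
    return None  # no path meets the fidelity requirement

# Toy network: a direct low-fidelity link versus a two-hop high-fidelity route.
net = {'s': [('t', 0.7), ('a', 0.9)], 'a': [('t', 0.9)]}
```

With `f_min = 0.8` the direct link is pruned and the two-hop route wins; with `f_min = 0.6` the direct link is the hop-shortest feasible answer.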
- Published
- 2023
26. Operation and Planning of Energy Hubs Under Uncertainty—A Review of Mathematical Optimization Approaches
- Author
-
Michal Jasinski, Arsalan Najafi, Omid Homaee, Mostafa Kermani, Georgios Tsaousoglou, Zbigniew Leonowicz, and Tomas Novak
- Subjects
IGDT ,General Computer Science ,Mathematical optimization ,Uncertainty ,General Engineering ,Multi-carrier energy systems ,Stochastic programming ,Chance constrained ,Energy hub ,General Materials Science ,SDG 7 - Affordable and Clean Energy ,Electrical and Electronic Engineering ,Robust optimization - Abstract
Co-designing energy systems across multiple energy carriers is increasingly attracting the attention of researchers and policy makers, since it is a prominent means of increasing the overall efficiency of the energy sector. Special attention is given to so-called energy hubs, i.e., clusters of energy communities featuring electricity, gas, heat, hydrogen, and also water generation and consumption facilities. Managing an energy hub entails dealing with multiple sources of uncertainty, such as renewable generation, energy demands, and wholesale market prices. Such uncertainties call for sophisticated decision-making techniques, with mathematical optimization being the predominant family of decision-making methods proposed in the literature of recent years. In this paper, we summarize, review, and categorize research studies that have applied mathematical optimization approaches to making operational and planning decisions for energy hubs. Relevant methods include robust optimization, information gap decision theory, stochastic programming, and chance-constrained optimization. The results of the review indicate the increasing adoption of robust and, more recently, hybrid methods to deal with the multi-dimensional uncertainties of energy hubs.
- Published
- 2023
27. Development and Analysis of a Detail Model for Steer-by-Wire Systems
- Author
-
Marcus Irmer, Rene Degen, Alexander Nubgen, Karin Thomas, Hermann Henrichfreise, and Margot Ruschitzka
- Subjects
General Computer Science ,Teknik och teknologier ,General Engineering ,Modellierung ,Engineering and Technology ,General Materials Science ,ddc:500 ,Electrical and Electronic Engineering ,Steer-by-wire - Abstract
Steer-by-wire systems represent a key technology for highly automated and autonomous driving. In this context, robust steering control is a fundamental precondition for automated vehicle lateral control. However, there is a need for improvement due to degrees of freedom, signal delays, and nonlinear characteristics of the plant which are unconsidered in the design models for the design of current steering controls. To be able to design an extremely robust steering control, suitable optimal models of a steer-by-wire system are required. Therefore, this paper presents an innovative nonlinear detail model of a steer-by-wire system. The detail model represents all characteristics of a real steer-by-wire system. In the context of a dominance analysis of the detail model, all dominant characteristics of a steer-by-wire system, including parameter dependencies, are identified. Through model reduction, a reduced model of the steer-by-wire system is then developed that can be used for a subsequent robust control design. Furthermore, this paper compares the steer-by-wire system with a conventional electromechanical power steering and shows similarities as well as differences.
- Published
- 2023
28. Utilization of EV Charging Station in Demand Side Management Using Deep Learning Method
- Author
-
Abdul Hafeez, Rashid Alammari, and Atif Iqbal
- Subjects
General Computer Science ,electric vehicle charging station ,CO2 emission ,General Engineering ,deep learning ,data-driven approach ,demand-side management ,General Materials Science ,peak clipping ,Electrical and Electronic Engineering - Abstract
Conventional energy sources are a major source of pollution, and major efforts are being made by global organizations to reduce CO2 emissions. Research shows that by 2030, EVs can reduce CO2 emissions by 28%. However, two major obstacles affect the widespread adoption of electric vehicles: the high cost of EVs and the lack of charging stations. This paper presents a comprehensive data-driven, demand-side management approach for a solar-powered electric vehicle charging station connected to a microgrid. The proposed approach utilizes the solar-powered charging station to supply the energy required during peak demand, which reduces the utilization of conventional energy sources and mitigates the current shortage of EV charging stations. PV power stations, commercial loads, residential loads, and electric vehicle charging stations were simulated using collected real-time data. Furthermore, a deep learning approach was developed to control the energy supply to the microgrid and to charge the electric vehicles from the grid during off-peak hours. Two different machine learning approaches were also compared for estimating the state of charge of the energy storage system. Finally, the proposed demand management framework was executed for a 24-hour case study. The results show that peak demand was compensated with the help of the electric vehicle charging station during peak hours. This publication was made possible by Qatar University Research grant# [QUCP-CENG-2020-2] from Qatar University, Qatar. The APC for the article is funded by the Qatar National Library, Doha, Qatar.
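The peak-clipping role of the charging station can be illustrated with a rule-based sketch. The paper's controller is a learned deep model; the demand profile, threshold, and storage capacity below are made-up numbers:

```python
def clip_peaks(grid_demand_kw, storage_kwh, threshold_kw):
    """Discharge the EVCS storage whenever hourly demand exceeds the threshold,
    so the grid sees at most the threshold until the storage is exhausted."""
    grid_load = []
    for demand in grid_demand_kw:
        discharge = min(max(demand - threshold_kw, 0.0), storage_kwh)
        storage_kwh -= discharge
        grid_load.append(demand - discharge)
    return grid_load, storage_kwh

# Hypothetical 4-hour window (kW) with a 30 kWh reserve and a 100 kW target peak:
load, remaining = clip_peaks([50, 80, 120, 90], storage_kwh=30, threshold_kw=100)
```

The learned controller in the paper additionally decides *when* to recharge the storage (off-peak hours), which this rule-based sketch leaves out.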
- Published
- 2023
29. Fighting Money Laundering With Statistics and Machine Learning
- Author
-
Rasmus Ingemann Tuffveson Jensen and Alexandros Iosifidis
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,General Computer Science ,Statistics - Machine Learning ,General Engineering ,Machine Learning (stat.ML) ,General Materials Science ,Electrical and Electronic Engineering ,Machine Learning (cs.LG) - Abstract
Money laundering is a profound global problem. Nonetheless, there is little scientific literature on statistical and machine learning methods for anti-money laundering. In this paper, we focus on anti-money laundering in banks and provide an introduction and review of the literature. We propose a unifying terminology with two central elements: (i) client risk profiling and (ii) suspicious behavior flagging. We find that client risk profiling is characterized by diagnostics, i.e., efforts to find and explain risk factors. On the other hand, suspicious behavior flagging is characterized by non-disclosed features and hand-crafted risk indices. Finally, we discuss directions for future research. One major challenge is the need for more public data sets. This may potentially be addressed by synthetic data generation. Other possible research directions include semi-supervised and deep learning, interpretability, and fairness of the results., Comment: Accepted for publication in IEEE Access, vol. 11, pp. 8889-8903, doi:10.1109/ACCESS.2023.3239549
- Published
- 2023
30. Adaptive Congestion Control Mechanism to Enhance TCP Performance in Cooperative IoV
- Author
-
Tapas Kumar Mishra, Kshira Sagar Sahoo, Muhammad Bilal, Sayed Chhattan Shah, and Manas Kumar Mishra
- Subjects
IoT ,IoV ,General Computer Science ,Communication Systems ,General Engineering ,congestion control ,flow control ,Telekommunikation ,Telecommunications ,AIMD ,General Materials Science ,Electrical and Electronic Engineering ,TCP ,energy efficiency ,Kommunikationssystem - Abstract
One of the main causes of energy consumption in Internet of Vehicles (IoV) networks is an ill-designed network congestion control protocol, which results in numerous packet drops, lower throughput, and increased packet retransmissions. In an IoV network, the objective of increasing network throughput can be achieved by minimizing packet retransmissions and optimizing bandwidth utilization. It has been observed that the congestion control mechanism (i.e., the congestion window) can play a vital role in mitigating the aforementioned challenges. Thus, this paper presents a cross-layer technique for controlling congestion in an IoV network based on throughput and buffer use. In the proposed approach, the receiver appends two bits to the acknowledgment (ACK) packet that describe the status of the buffer space and link utilization. The sender then uses this information to monitor congestion and limit the transmission of packets. The proposed model has been experimented with extensively, and the results demonstrate significantly higher network performance in terms of buffer utilization, link utilization, throughput, and packet loss.
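The two feedback bits can drive a simple AIMD-style window rule. The mapping from bit combinations to sender actions below is an assumption for illustration; the paper defines its own cross-layer policy:

```python
def next_cwnd(cwnd, buffer_high, link_busy, inc=1, beta=0.5):
    """Adjust the congestion window from the two ACK feedback bits:
    both bits clear -> additive increase,
    one bit set     -> hold the current window,
    both bits set   -> multiplicative decrease (classic AIMD backoff)."""
    if buffer_high and link_busy:
        return max(1, int(cwnd * beta))
    if buffer_high or link_busy:
        return cwnd
    return cwnd + inc

# One congestion episode followed by recovery:
w = 10
w = next_cwnd(w, buffer_high=True, link_busy=True)    # back off to 5
w = next_cwnd(w, buffer_high=False, link_busy=False)  # probe again
```

The "hold" state is what distinguishes this 2-bit scheme from single-bit ECN-style feedback: the sender can stop growing the window before any loss occurs.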
- Published
- 2023
31. Mitigation of Voltage and Frequency Excursions in Low-Inertia Microgrids
- Author
-
Ivan Todorovic, Ivana Isakov, Dejan Reljic, Dejan G. Jerkan, and Drazen Dujic
- Subjects
General Computer Science ,General Engineering ,General Materials Science ,Electrical and Electronic Engineering - Abstract
Power systems proliferated by distributed generation sources are becoming increasingly prone to frequency and voltage disturbances. These problems are exacerbated in microgrids since they have fewer intrinsic disturbance-rejecting measures and features. To increase the reliability and stability of emerging power systems, advanced control structures for the distributed generation sources based on power electronics devices must be deployed during suboptimal operating conditions. The aggravating circumstance is that both voltage and frequency excursions can be transient or long-lasting, and consequently can occur simultaneously. The algorithm proposed in this paper integrates voltage support (nominal voltage restoration) and inertia emulation features with a comprehensive current reference management scheme, thus securing improved grid operating conditions during several different faults and occurrences. The control algorithm is developed and tested in the context of a small microgrid, but it can also be applied with minimal alterations in traditional grids. To prove that it is possible to decrease voltage unbalances and frequency deviations simultaneously, a test microgrid consisting of a synchronous generator, a photovoltaic system, a battery storage system, and controllable balanced and unbalanced loads was developed in a hardware-in-the-loop environment.
- Published
- 2023
32. Robust and Optimal Control Designed for Autonomous Surface Vessel Prototypes
- Author
-
Murillo Ferreira Dos Santos, Accacio Ferreira Dos Santos Neto, Leonardo De Mello Honorio, Mathaus Ferreira Da Silva, and Paolo Mercorelli
- Subjects
Control systems ,General Computer Science ,Optimal Control ,Robust control ,Uncertainty ,General Engineering ,Vehicle dynamics ,Tuning ,Topology ,Autonomous Surface Vehicles ,Successive Loop Closure ,Engineering ,Robust Control Design ,pid Controller ,General Materials Science ,Electrical and Electronic Engineering - Abstract
It is well known that operation in running water, wind, and waves exposes Autonomous Surface Vessels (ASVs) to considerable challenges. Under these conditions, it is essential to develop a robust control system that can meet the requirements and ensure the safe and accurate execution of missions. In this context, this paper presents a new topology for controller design based on a combination of the Successive Loop Closure (SLC) method and optimal control. This topology enables the design of robust autopilots based on the Proportional-Integral-Derivative (PID) controller. The controllers are tuned from the solution of an optimal control problem that aims to minimize the effects of model uncertainties. To verify the effectiveness of the proposed controller, a numerical case study of a real ASV with 3 degrees of freedom (DoF) is investigated. The results show that the methodology enables the tuning of a PID controller capable of dealing with different parametric uncertainties, demonstrating robustness and applicability across different prototype scenarios.
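Tuning PID gains by minimizing a cost over simulated trajectories, as the SLC-plus-optimal-control scheme does per loop, can be miniaturized. The first-order plant, the squared-error cost, and the gain grid below are illustrative stand-ins for the vessel dynamics and the paper's actual optimization:

```python
def pid_cost(kp, ki, kd, steps=200, dt=0.05):
    """Integral of squared tracking error for a unit step on the toy plant
    dy/dt = -y + u, controlled by a discrete PID loop."""
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += dt * (-y + u)   # toy plant standing in for the 3-DoF vessel model
        cost += e * e * dt
    return cost

# Crude "optimal" tuning: pick the gains with the lowest simulated cost.
grid = [(kp, ki, kd) for kp in (1.0, 2.0, 4.0)
                     for ki in (0.0, 0.5, 1.0)
                     for kd in (0.0, 0.1)]
best_gains = min(grid, key=lambda g: pid_cost(*g))
```

The paper solves a proper optimal control problem that also penalizes sensitivity to model uncertainties; the grid search here only captures the "tune by minimizing a simulated cost" idea.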
- Published
- 2023
33. Understanding the Ageing Performance of Alternative Dielectric Fluids
- Author
-
Cristina Mendez Gutierrez, Alfredo Ortiz Fernandez, Carlos Javier Renedo Estebanez, Cristian Olmo Salas, Riccardo Maina, and Universidad de Cantabria
- Subjects
General Computer Science ,General Engineering ,General Materials Science ,Transformers ,Electrical and Electronic Engineering ,Natural esters ,Insulating paper ,Thermal ageing - Abstract
Mineral oil has traditionally been used as a cooling fluid in power transformers, but its low biodegradability and low fire point have motivated the search for alternatives. In this work, six different dielectric fluids have been studied: four vegetable liquids, from sunflower, rapeseed, soybean, and palm; one synthetic ester; and a mineral oil used for comparison. These oils were subjected to accelerated thermal ageing in glass vessels at 150°C for four weeks (672 hours) in the presence of Kraft insulating paper. Several oil parameters were measured during the ageing, i.e. breakdown voltage, dielectric dissipation factor, permittivity, DC resistivity, density, kinematic viscosity, flash and fire points, interfacial tension, acidity, and dissolved gases; additionally, the degree of polymerisation (DP) of the paper was measured. Results showed that the changes in the properties of the natural esters, except for the palm oil, were similar over the ageing time. Palm oil results were similar to those of the mineral oil, whereas the synthetic ester showed behaviour similar to that of the natural esters. The Kraft paper degradation was highest in the mineral oil, followed by the synthetic ester and the palm oil. No significant differences were found in the ageing with the natural esters. This work was supported in part by the European Union’s Horizon 2020 Research and Innovation Program, through the Marie Sklodowska-Curie, under Grant 823969, and in part by the Ministry of Economy, through the National Research Project: Gestión del Ciclo de Vida de Transformadores Aislados con Fluidos Biodegradables, under Grant PID 2019-107126RBC22.
- Published
- 2023
34. Attribute-Based Approaches for Secure Data Sharing in Industrial Contexts
- Author
-
Ulf Bodin, Olov Schelén, and Alex Chiquito
- Subjects
IoT ,NGAC ,Datavetenskap (datalogi) ,General Computer Science ,Computer Sciences ,Attribute-based ,Access Control ,General Engineering ,Encryption ,Fine-grained ,General Materials Science ,Electrical and Electronic Engineering - Abstract
The sharing of data is becoming increasingly important for the process and manufacturing industries, which use data-driven models and advanced analysis to assess production performance and make predictions, e.g., on wear and tear. In such environments, access to data needs to be accurately controlled to prevent leakage to unauthorized users while providing easy-to-manage policies. Data should further be shared with users outside trusted domains using encryption. Finally, means for revoking access to data are needed. This paper provides a survey on attribute-based approaches for access control to data, focusing on policy management and enforcement. We aim to identify key properties provided by attribute-based access control (ABAC) and attribute-based encryption (ABE) that can be combined and used to meet the abovementioned needs. We describe such possible combinations in the context of a proposed architecture for secure data sharing. The paper concludes by identifying knowledge gaps to provide direction for future research on attribute-based approaches for secure data sharing in industrial contexts. Funder: Arrowhead Tools Research Project (826452)
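The policy-enforcement core of ABAC can be sketched as attribute matching. The attribute names and the policy below are invented; real systems such as XACML or NGAC evaluate far richer policy structures, and ABE would additionally enforce the policy cryptographically rather than in application code:

```python
def permits(policy, user_attrs, resource_attrs, action):
    """Grant access only if user and resource attributes match the policy
    and the requested action is allowed; deny by default otherwise."""
    return (all(user_attrs.get(k) == v for k, v in policy["user"].items())
            and all(resource_attrs.get(k) == v for k, v in policy["resource"].items())
            and action in policy["actions"])

policy = {
    "user": {"role": "analyst", "org": "plant-a"},  # who may access
    "resource": {"type": "sensor-data"},            # what may be accessed
    "actions": {"read"},                            # how it may be accessed
}
alice = {"role": "analyst", "org": "plant-a"}
sensor_logs = {"type": "sensor-data"}
```

Revocation in this model is cheap: changing one attribute (e.g. Alice's `role`) immediately cuts her access without touching the policy, which is one of the ABAC properties the survey highlights.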
- Published
- 2023
35. Drivers’ Warning Application Through Personalized DSSS-CDMA Data Transmission by Using the FM Radio Broadcasting Infrastructure
- Author
-
Florin Doru HUTU, Radu Gabriel Bozomitu, 'Gheorghe Asachi' Technical University of Iasi (TUIASI), Software and Cognitive radio for telecommunications (SOCRATE), Inria Grenoble - Rhône-Alpes, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-CITI Centre of Innovation in Telecommunications and Integration of services (CITI), Institut National des Sciences Appliquées de Lyon (INSA Lyon), Université de Lyon-Institut National des Sciences Appliquées (INSA)-Université de Lyon-Institut National des Sciences Appliquées (INSA)-Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National des Sciences Appliquées de Lyon (INSA Lyon), Université de Lyon-Institut National des Sciences Appliquées (INSA)-Université de Lyon-Institut National des Sciences Appliquées (INSA), CITI Centre of Innovation in Telecommunications and Integration of services (CITI), Université de Lyon-Institut National des Sciences Appliquées (INSA)-Université de Lyon-Institut National des Sciences Appliquées (INSA)-Institut National de Recherche en Informatique et en Automatique (Inria), Université de Lyon-Institut National des Sciences Appliquées (INSA), Inria Lyon, and Institut National de Recherche en Informatique et en Automatique (Inria)
- Subjects
[SPI]Engineering Sciences [physics] ,General Computer Science ,General Engineering ,General Materials Science ,Electrical and Electronic Engineering - Abstract
In this paper, a new drivers’ warning application through personalized Direct-Sequence Spread Spectrum Code-Division Multiple Access (DSSS-CDMA) transmissions, performed using the FM radio broadcasting infrastructure is presented. The proposed application is designed to simultaneously transmit a maximum of 15 low-resolution image notifications together with standard FM radio broadcasting in different geographical areas. The application is intended to warn drivers of significant driving events in major traffic areas of a country. The proposed solution is low-cost and rapid to put into practice because the modifications, both on the transmission infrastructure and on the receiver’s side, are relatively easy to implement. The transmission of image notifications is performed in subsidiary bands of commercial FM radio systems. The receiver is implemented using the Software-Defined Radio (SDR) paradigm and is able to extract the audio signal (corresponding to the usual FM transmission) and the data component, depending on the geographical area in which the vehicle is located. The receiver is based on a novel implementation of a modified Costas loop using two new nonlinear limiters. Appropriate image notification is selected using a specific decoding key, generated according to the geographical position of the vehicle. The performance of the data transmission system is analyzed by plotting the Bit Error Rate (BER) versus the signal-to-noise ratio of the data signal for different numbers of image notifications simultaneously transmitted. The proposed radio communication system was validated through an experimental setup based on Universal Software Radio Peripheral (USRP) devices driven by MATLAB/Simulink software.
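Spreading several notifications onto one channel and recovering only the one matching a local key code can be shown with a toy baseband model: 4-chip Walsh codes, ±1 bits, no noise and no FM modulation. The real system uses much longer codes keyed to the vehicle's geographical area:

```python
def spread(bits, code):
    """DSSS: replace each data bit (+1/-1) by the bit times the chip sequence."""
    return [b * c for b in bits for c in code]

def despread(rx, code):
    """Correlate received chips with the local code to recover one user's bits."""
    n = len(code)
    return [1 if sum(rx[k + i] * code[i] for i in range(n)) > 0 else -1
            for k in range(0, len(rx), n)]

area_1 = [1, 1, 1, 1]     # orthogonal Walsh codes acting as decoding keys
area_2 = [1, -1, 1, -1]
msg_1, msg_2 = [1, -1], [-1, -1]

# Both notifications share the channel; each receiver despreads with its own key.
tx = [a + b for a, b in zip(spread(msg_1, area_1), spread(msg_2, area_2))]
```

Because the codes are orthogonal, correlating with `area_1` nulls out `msg_2` entirely, which is the CDMA property that lets one FM subsidiary band carry multiple area-specific notifications.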
- Published
- 2023
36. Empirical Study: How Issue Classification Influences Software Defect Prediction
- Author
-
Petar Afric, Davor Vukadin, Marin Silic, and Goran Delac
- Subjects
General Computer Science ,General Engineering ,General Materials Science ,Electrical and Electronic Engineering ,Issue tracking ,Version Control Systems ,Natural language processing ,Issue classification ,Software defect prediction ,RoBERTa - Abstract
Software defect prediction aims to identify potentially defective software modules to better allocate limited quality assurance resources. Practitioners often do this by utilizing supervised models trained on historical data. This data is gathered by mining version control and issue tracking systems. Version control commits are linked to the issues they address. If the linked issue is classified as a bug report, the change is considered bug fixing. The problem arises from the fact that issues are often incorrectly classified within issue tracking systems, which introduces noise into the gathered datasets. In this paper, we investigate the influence issue classification has on software defect prediction (SDP) dataset quality and resulting model performance. To do this, we mine data from 7 popular open-source repositories and create issue classification and software defect prediction datasets for each of them. We investigate issue classification using four different methods: a simple keyword heuristic, an improved keyword heuristic, the FastText model, and the RoBERTa model. Our results show that using the RoBERTa model for issue classification produces the best software defect prediction datasets, containing on average 14.3641% mislabeled instances. SDP models trained on such datasets achieve superior performance to those trained on SDP datasets created using other issue classification methods in 65 out of 84 experiments, with 55 of them being statistically significant. Furthermore, in 17 out of 28 experiments we could not show a statistically significant performance difference between SDP models trained on RoBERTa-derived software defect prediction datasets and those created using manually labeled issues.
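The "simple keyword heuristic" baseline mentioned in the abstract can be sketched as follows; the keyword list and data shapes are assumptions for illustration, not the study's exact implementation.

```python
# Minimal keyword-heuristic issue classifier: an issue counts as a bug
# report if its title or body contains a bug-like keyword, and a commit
# is labeled bug-fixing if its linked issue classifies as a bug.

BUG_KEYWORDS = ("bug", "crash", "error", "fail", "exception", "defect")

def is_bug_report(title, body=""):
    text = (title + " " + body).lower()
    return any(kw in text for kw in BUG_KEYWORDS)

def label_commits(commits, issues):
    """Label commits as bug-fixing via their linked issues.

    `commits` maps commit id -> linked issue id; `issues` maps issue id
    -> (title, body). Commits with no known issue are treated as non-fixing.
    """
    labels = {}
    for cid, iid in commits.items():
        title, body = issues.get(iid, ("", ""))
        labels[cid] = is_bug_report(title, body)
    return labels
```

The noise the paper measures comes precisely from cases where such keyword matching disagrees with the issue's true class, which the learned FastText and RoBERTa classifiers reduce.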
- Published
- 2023
37. Enhanced Signal Detection and Constellation Design for Massive SIMO Communications With 1-Bit ADCs
- Author
-
Doaa Abdelhameed, Kenta Umebayashi, Italo Atzeni, and Antti Tolli
- Subjects
General Computer Science ,massive MIMO ,transmit constellation design ,General Engineering ,General Materials Science ,Electrical and Electronic Engineering ,1-bit ADCs ,data detection ,upper bound on the SER - Abstract
In this paper, we investigate a transmitter and receiver design for a single-user massive SIMO (single-input multiple-output) system with 1-bit analog-to-digital converters (ADCs) at the base station (BS), where the user adopts higher-order modulation, e.g., 16-quadrature amplitude modulation (16-QAM), for the data transmission. For channel estimation and signal detection, linear least-squares (LS) estimation and maximum ratio combining (MRC) are respectively employed. In this context, we first introduce closed-form formulas for the mean of the estimated symbols and for the correlation matrix between their real and imaginary parts, considering the effect of 1-bit quantization. The study of the distribution of the estimated symbols indicates that, in the presence of 1-bit ADCs, the conventional 16-QAM detector and the typical square 16-QAM modulation are not adequate. In light of this, we propose three novel symbol detectors and re-design the 16-QAM modulation in order to improve the symbol error rate (SER). Furthermore, an upper bound on the SER is analyzed based on the pairwise error probability, and the boundary equation between two decision regions is also studied. Through numerical results, the proposed framework, i.e., the symbol detector and the transmit constellation design, shows a significant enhancement in SER performance over the conventional detector and the typical square 16-QAM modulation.
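The 1-bit receive chain the abstract describes can be sketched in a few lines: each antenna's complex sample is quantized to the signs of its I and Q rails before maximum ratio combining. Channel model, noise level, and antenna count below are illustrative assumptions, and a simple QPSK point stands in for the paper's re-designed 16-QAM.

```python
# 1-bit quantization per antenna followed by MRC over the (known) channel.
import random

def quantize_1bit(x):
    """1-bit ADC per I/Q rail: keep only the signs of real and imag parts."""
    sgn = lambda v: 1.0 if v >= 0 else -1.0
    return complex(sgn(x.real), sgn(x.imag))

def mrc_detect(received, channel):
    """Maximum ratio combining: conjugate-weighted sum over antennas."""
    return sum(h.conjugate() * r for h, r in zip(channel, received))

random.seed(0)
n_antennas = 64
symbol = complex(1, 1) / abs(complex(1, 1))          # unit-energy QPSK point
channel = [complex(random.gauss(0, 1), random.gauss(0, 1))
           for _ in range(n_antennas)]
rx = [quantize_1bit(h * symbol + complex(random.gauss(0, 0.3),
                                         random.gauss(0, 0.3)))
      for h in channel]
z = mrc_detect(rx, channel)
detected = complex(1 if z.real >= 0 else -1, 1 if z.imag >= 0 else -1)
```

With many antennas, the combiner output still points toward the transmitted quadrant despite the severe quantization, which is the effect the paper's detectors and constellation re-design exploit and refine.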
- Published
- 2023
38. A New Optimization Method for Gapped and Distributed Core Magnetics in LLC Converter
- Author
-
Abdulsamed Lordoglu, Mehmet Onur Gulbahce, Derya Ahmet Kocabas, and Serkan Dusmez
- Subjects
power electronics ,General Computer Science ,General Engineering ,power converter ,General Materials Science ,resonant converters ,Electrical and Electronic Engineering - Abstract
LLC converter design optimization remains a challenging task under varying loads, such as in battery chargers. There are numerous L-L-C combinations to choose from a design space that can satisfy the required voltage gains of the application. An accurate magnetic model is essential to optimally size the passive components according to the application needs. This paper provides a new design tool for gapped core magnetics to optimize the transformer and resonant inductor in LLC converters. Unlike conventional design algorithms, the proposed algorithm considers multiple distributed cores and selects the optimal magnetic flux density by minimizing a penalty function that includes power loss, cost, and volume of the magnetic components using the Big-Bang Big-Crunch algorithm. The gapped core transformer and inductor design equations have been verified in the Ansys Maxwell and Simplorer co-simulation environment for a 3700 W, 48 V LLC converter, and the calculated power losses have been compared with experimental results. For the given Lm and Lr pair of 37.52 μH and 9.38 μH, the proposed magnetic model produced a transformer design with two distributed cores, each exhibiting a magnetizing inductance of 17.73 μH and a leakage inductance of 0.7 μH on an EE422120 core with 3F36 material. The total power loss of the transformers is measured as 12.44 W on a 3700 W prototype switched at 350 kHz.
- Published
- 2023
39. DuctiLoc: Energy-Efficient Location Sampling With Configurable Accuracy
- Author
-
Panagiota Katsikouli, Diego Madariaga, Aline Carneiro Viana, Alberto Tarable, and Marco Fiore
- Subjects
General Computer Science ,General Engineering ,General Materials Science ,Electrical and Electronic Engineering - Abstract
Mobile device tracking technologies based on various positioning systems have made location data collection ubiquitous. The frequency at which location samples are recorded varies across applications, yet it is usually pre-defined and fixed, resulting in redundant information and draining the battery of mobile devices. In this paper, we first answer the question “at what frequency should individual human movements be sampled so that they can be reconstructed with minimum loss of information?”. Our analysis unveils a novel linear scaling law of the localization error with respect to the sampling interval. We then present DuctiLoc, a location sampling mechanism that utilises the law above to profile users and adapt the position tracking frequency to their mobility. DuctiLoc is energy efficient, as it does not rely on power-hungry sensors or expensive computations; moreover, it provides a handy knob to control energy usage by configuring the target positioning accuracy. Controlling the trade-off between accuracy and sampling rate of human movement is useful in a number of contexts, including mobile computing and cellular networks. Real-world experiments with an Android implementation show that DuctiLoc can effectively adjust the sampling frequency to individual mobility habits and the target accuracy level, reducing energy consumption by 60% to 98% with respect to baseline periodic sampling.
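A linear scaling law of localization error versus sampling interval gives exactly the "handy knob" described: fit a per-user line, then invert it for a target accuracy. The sketch below assumes a simple least-squares fit and made-up sample numbers; the per-user profiling in DuctiLoc is more involved.

```python
# Fit error = slope * interval + intercept per user, then invert the law
# to choose a sampling interval for a requested positioning accuracy.

def fit_line(xs, ys):
    """Ordinary least-squares fit of a line through (interval, error) pairs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def interval_for_accuracy(target_error_m, slope, intercept, lo=1.0, hi=600.0):
    """Invert the linear law and clamp to a feasible interval range (s)."""
    raw = (target_error_m - intercept) / slope
    return max(lo, min(hi, raw))

# Hypothetical per-user profile: observed error (m) at sampling intervals (s).
intervals = [10, 30, 60, 120, 240]
errors = [12, 28, 55, 110, 220]
slope, intercept = fit_line(intervals, errors)
```

A target accuracy of 50 m then maps to roughly a one-minute sampling interval for this hypothetical user, while an unreachably tight target clamps to the minimum interval.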
- Published
- 2023
40. Investigating the Potential of Flexible Links for Increased Payload to Mass Ratios for Collaborative Robotics
- Author
-
Greet Van de Perre, Thierry Hubert, Tom Verstraten, Bram Vanderborght, Applied Mechanics, and Faculty of Engineering
- Subjects
General Computer Science ,General Engineering ,General Materials Science ,Electrical and Electronic Engineering - Abstract
One of the main restrictions of commercial cobots can be found in their limited payload to mass ratios. Flexible link manipulators seem to offer interesting advantages over traditional rigid robotics in terms of lower self-weight, lower energy consumption, and safer operation. However, the design and loading specifications in the general literature on flexible link manipulators differ from those expected in a collaborative industrial setting. In this paper, we investigate whether the use of flexible links can be truly beneficial for collaborative robotics. First, the theoretical potential of flexible links to increase the payload to mass ratio is investigated. The feasibility of designing a cylindrical flexible link for specific, realistic loading conditions is investigated, and the effect of link flexibility on the demanded motor torque and maximum reachable payload is visualised for cylindrical links. Subsequently, to gain insights into the accuracy and usability of such a manipulator, we experimentally quantify to what extent the undesired side effects of the flexible design can be counteracted using an appropriate controller. To comply with the envisioned application of collaborative robotics, a control strategy based on strain measurements along the link and robust to payload changes is proposed. The obtained accuracy was measured by tracking the end effector position using a Vicon motion capture system, considering two types of single link designs: first, a very flexible link setup with a rectangular cross section, and second, a cylindrical flexible link, loaded to reach a payload to mass ratio of 1.
- Published
- 2023
41. Radial Velocity Estimation for Multiple Objects Using In-Air Sonar With MIMO Virtual Arrays
- Author
-
Robin Kerstens, Wouter Jansen, and Jan Steckel
- Subjects
Computer. Automation ,General Computer Science ,Mass communications ,General Engineering ,General Materials Science ,Electrical and Electronic Engineering ,Engineering sciences. Technology - Abstract
As autonomous platforms become more advanced, it is useful to obtain more information about the environment. Classic vision sensors such as RGB-D cameras, LiDAR, or acoustical imaging cameras are capable of accurately displaying the location of an object in 3D space and are widely used for this purpose. However, a problem arises when the measurement environment becomes more complex and measurement data might begin to suffer in terms of accuracy. Imaging sonar sensors are developed specifically for these environments but are limited in terms of frame rate due to the inherently slow nature of sound. This paper proposes a method to increase, in post-processing, the amount of useful information obtained from a single measurement. By processing the received signal with a Doppler velocity-tuned matched-filter bank, it is possible to estimate the radial velocity of an object with respect to the sensor's location. This allows the robot to make better decisions when it comes to path planning and collision avoidance, as a rapidly approaching object requires a different action than a stationary object or one moving away from the sensor. Another advantage of this system is that a single seemingly large object might be identified as two close-lying objects with different speeds. This paper serves as a proof-of-concept with results from a realistic simulation environment using an imaging sonar sensor and shows that the proposed method is well suited for making accurate estimations of an object's radial velocity.
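The matched-filter-bank idea can be shown with a toy example: the echo of a pure tone from a moving target returns frequency-shifted, and correlating it against replicas tuned to candidate radial velocities peaks at the true velocity. A single tone stands in for the sonar's broadband call, and all numeric values are invented.

```python
# Doppler velocity-tuned matched-filter bank over candidate radial velocities.
import math

C = 343.0            # speed of sound in air, m/s
F0 = 40000.0         # emitted frequency, Hz
FS = 400000.0        # sample rate, Hz
N = 4000             # samples per block (10 ms)

def tone(freq):
    return [math.sin(2 * math.pi * freq * n / FS) for n in range(N)]

def doppler_freq(v_radial):
    """Two-way Doppler-shifted echo frequency for a target closing at v_radial."""
    return F0 * (C + v_radial) / (C - v_radial)

def estimate_velocity(echo, candidates):
    """Pick the candidate velocity whose tuned replica correlates best."""
    def score(v):
        replica = tone(doppler_freq(v))
        return abs(sum(e * r for e, r in zip(echo, replica)))
    return max(candidates, key=score)

echo = tone(doppler_freq(3.0))                 # target closing at 3 m/s
candidates = [-6, -3, 0, 3, 6]
v_hat = estimate_velocity(echo, candidates)
```

The velocity grid spacing trades resolution against the size of the filter bank, the same trade-off a real implementation faces per measurement.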
- Published
- 2023
42. Anticancer Peptides Classification Using Kernel Sparse Representation Classifier
- Author
-
Ehtisham Fazal, Muhammad Sohail Ibrahim, Seongyong Park, Imran Naseem, and Abdul Wahab
- Subjects
Signal Processing (eess.SP) ,FOS: Computer and information sciences ,Computer Science - Machine Learning ,General Computer Science ,FOS: Biological sciences ,FOS: Electrical engineering, electronic engineering, information engineering ,General Engineering ,General Materials Science ,Electrical Engineering and Systems Science - Signal Processing ,Electrical and Electronic Engineering ,Quantitative Biology - Quantitative Methods ,Quantitative Methods (q-bio.QM) ,Machine Learning (cs.LG) - Abstract
Cancer is one of the most challenging diseases because of its complexity, variability, and diversity of causes. It has been one of the major research topics over the past decades, yet it is still poorly understood. To this end, multifaceted therapeutic frameworks are indispensable. Anticancer peptides (ACPs) are the most promising treatment option, but their large-scale identification and synthesis require reliable prediction methods, which is still a problem. In this paper, we present an intuitive classification strategy that differs from the traditional black-box methods and is based on the well-known statistical theory of sparse-representation classification (SRC). Specifically, we create over-complete dictionary matrices by embedding the composition of the K-spaced amino acid pairs (CKSAAP). Unlike the traditional SRC frameworks, we use an efficient matching pursuit solver instead of the computationally expensive basis pursuit solver in this strategy. Furthermore, kernel principal component analysis (KPCA) is employed to cope with non-linearity and dimension reduction of the feature space, whereas the synthetic minority oversampling technique (SMOTE) is used to balance the dictionary. The proposed method is evaluated on two benchmark datasets for well-known statistical parameters and is found to outperform the existing methods. The results show the highest sensitivity with the most balanced accuracy, which might be beneficial in understanding structural and chemical aspects and developing new ACPs. A Google Colab implementation of the proposed method is available at the author's GitHub page (https://github.com/ehtisham-Fazal/ACP-Kernel-SRC).
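The CKSAAP embedding named in the abstract counts, for each gap size k, how often every amino-acid pair occurs separated by exactly k residues, normalized by the number of such positions. The sketch below follows the usual CKSAAP convention and is not the authors' exact code.

```python
# CKSAAP feature extraction: (k, pair) -> normalized occurrence count.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def cksaap(sequence, k_max=2):
    """Return a dict mapping (k, (aa1, aa2)) to normalized pair counts."""
    feats = {}
    for k in range(k_max + 1):
        n_windows = len(sequence) - k - 1
        counts = {(a, b): 0 for a in AMINO_ACIDS for b in AMINO_ACIDS}
        for i in range(max(n_windows, 0)):
            pair = (sequence[i], sequence[i + k + 1])
            if pair in counts:
                counts[pair] += 1
        for pair, c in counts.items():
            feats[(k, pair)] = c / n_windows if n_windows > 0 else 0.0
    return feats

f = cksaap("ACACA", k_max=1)
```

Each peptide thus becomes a fixed-length vector of 400 features per gap size, which is what the over-complete SRC dictionary is built from.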
- Published
- 2023
43. Correction of Satellite Sea Surface Salinity Products Using Ensemble Learning Method
- Author
-
Jian Chen, Yangjun Wang, Senliang Bao, Hengqian Yan, Huizan Wang, and Ren Zhang
- Subjects
General Computer Science ,Correlation coefficient ,General Engineering ,Ensemble learning ,Random forest ,Salinity ,Indian ocean ,Climatology ,Satellite data ,Environmental science ,General Materials Science ,Satellite ,Sea surface salinity ,Electrical and Electronic Engineering - Abstract
Although salinity satellites can provide high-resolution global sea surface salinity (SSS) data, the satellite data still display large errors close to the coast. In this paper, a nonlinear empirical method based on random forest is proposed to correct two Soil Moisture and Ocean Salinity (SMOS) L3 products in the tropical Indian Ocean, the SMOS BEC and SMOS CATDS data. The agreement between in-situ data and the corrected SMOS data is better than that between in-situ data and the original satellite data. The root-mean-square deviation (RMSD) of the satellite SSS data decreased from 0.366 to 0.275 and from 0.367 to 0.255 for SMOS BEC and SMOS CATDS, respectively. The effect of the correction model was better in the Arabian Sea than in the Bay of Bengal. The RMSD of the corrected BEC (CATDS) SSS was reduced from 0.44 (0.48) to 0.276 (0.269), and the correlation coefficient increased to 0.915 from 0.741 (0.801) in the Arabian Sea, while the correlation coefficient improved by less than 0.02 in the Bay of Bengal. The cross-validation results highlight the robustness and effectiveness of the correction model. Additionally, the effects of different features on the correction model are discussed to demonstrate the vital role of geographical information in the correction of satellite SSS data. The proposed method outperformed other machine-learning methods with respect to the RMSD and correlation coefficient.
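The two figures of merit quoted throughout the abstract, RMSD and the correlation coefficient between in-situ and satellite SSS, are compact enough to restate directly. The sample values below are invented, not SMOS data.

```python
# RMSD and Pearson correlation, the evaluation metrics used above.
import math

def rmsd(obs, ref):
    """Root-mean-square deviation between paired observations."""
    return math.sqrt(sum((o - r) ** 2 for o, r in zip(obs, ref)) / len(obs))

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

in_situ = [35.1, 34.8, 35.5, 34.2, 35.0]      # hypothetical Argo-style SSS
satellite = [35.4, 34.5, 35.9, 34.0, 35.3]    # hypothetical satellite SSS
```

A correction model such as the paper's random forest succeeds when it lowers `rmsd(satellite, in_situ)` and raises `pearson(satellite, in_situ)` on held-out data.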
- Published
- 2023
44. Aerial Image Color Balancing Based on Rank-Deficient Free Network
- Author
-
Minglu Wei, Manzhu Yu, Li Zheng, Wenjie Jian, and Zongqian Zhan
- Subjects
General Computer Science ,Rank (linear algebra) ,business.industry ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,General Engineering ,Orthophoto ,Equalization (audio) ,Digital imaging ,Process (computing) ,Color balance ,General Materials Science ,Computer vision ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Aerial image ,Block (data storage) - Abstract
In the process of digital image acquisition, color inconsistency is a common issue due to the influence of photographic conditions. The inconsistency of color significantly affects the quality of digital orthophoto products and subsequent applications, and this issue remains challenging in both academia and industry. Based on an analysis of the causes of color deviation, this paper proposes a color equalization method for aerial images based on block adjustment and an improved color balance model. Two nonlinear functions are used to simulate the process of color deviation and correct the color of images. Due to the existence of error, the color-corrected images still have minor deviations in the color of the overlapping regions. Therefore, this research introduces an error equation to address this issue and uses the least-squares principle to calculate the adjustment. Simulation experiments and real experiments are carried out on several groups of aerial image data. The results show that the method not only achieves a better visual effect but also has a higher adjustment precision. Finally, the rationality and parameters of the proposed model are discussed.
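The block-adjustment step can be illustrated with a much simpler stand-in model: solve per-image log-gain offsets so that overlapping regions agree in mean brightness, with the first image fixed as reference. The tiny normal-equation solver and the sample overlap means are illustrative assumptions, not the paper's full nonlinear model.

```python
# Least-squares block adjustment of per-image gains from overlap statistics.
import math

def solve_gains(n_images, overlaps):
    """overlaps: list of (i, j, mean_i, mean_j) for each overlapping pair.

    Minimizes sum over pairs of (x_i - x_j - log(mean_j/mean_i))^2 with
    x_0 = 0 (image 0 is the anchor), via the normal equations; returns
    multiplicative gains e^x for every image.
    """
    size = n_images - 1
    A = [[0.0] * size for _ in range(size)]
    b = [0.0] * size
    for i, j, mi, mj in overlaps:
        d = math.log(mj / mi)                 # desired x_i - x_j
        for (p, s) in ((i, 1.0), (j, -1.0)):
            if p == 0:
                continue
            b[p - 1] += s * d
            for (q, t) in ((i, 1.0), (j, -1.0)):
                if q != 0:
                    A[p - 1][q - 1] += s * t
    # Gaussian elimination (fine for a handful of images).
    for c in range(size):
        for c2 in range(c + 1, size):
            f = A[c2][c] / A[c][c]
            A[c2] = [a - f * ac for a, ac in zip(A[c2], A[c])]
            b[c2] -= f * b[c]
    x = [0.0] * size
    for c in reversed(range(size)):
        x[c] = (b[c] - sum(A[c][k] * x[k] for k in range(c + 1, size))) / A[c][c]
    return [1.0] + [math.exp(v) for v in x]

# Three images in a strip: image 1 is 20% darker than 0, image 2 matches 1.
gains = solve_gains(3, [(0, 1, 100.0, 80.0), (1, 2, 80.0, 80.0)])
```

Here the adjustment brightens images 1 and 2 by a factor of 1.25 so that the whole strip matches the reference image.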
- Published
- 2023
45. Autoencoder-Based Iterative Modeling and Multivariate Time-Series Subsequence Clustering Algorithm
- Author
-
Jonas Köhne, Lars Henning, and Clemens Gühmann
- Subjects
autoencoder ,Signal Processing (eess.SP) ,FOS: Computer and information sciences ,Computer Science - Machine Learning ,General Computer Science ,segmentation ,General Engineering ,unsupervised clustering ,Machine Learning (cs.LG) ,change point detection ,FOS: Electrical engineering, electronic engineering, information engineering ,Condition-based maintenance ,multivariate time-series data ,General Materials Science ,000 Informatik, Informationswissenschaft, allgemeine Werke::000 Informatik, Wissen, Systeme::004 Datenverarbeitung ,Informatik ,Electrical Engineering and Systems Science - Signal Processing ,Electrical and Electronic Engineering ,subsequence ,clustering - Abstract
This paper introduces an algorithm for the detection of change-points and the identification of the corresponding subsequences in transient multivariate time-series data (MTSD). The analysis of such data has become more and more important due to its increasing availability in many industrial fields. Labeling, sorting, or filtering highly transient measurement data for training condition-based maintenance (CbM) models is cumbersome and error-prone. For some applications it can be sufficient to filter measurements by simple thresholds or to find change-points based on changes in mean value and variation. But for a robust diagnosis of, for example, a component within a component group that has complex non-linear correlations between multiple sensor values, such a simple approach would not be feasible: no meaningful and coherent measurement data usable for training a CbM model would emerge. Therefore, we introduce an algorithm which uses a recurrent neural network (RNN) based autoencoder (AE) that is iteratively trained on incoming data. The scoring function uses the reconstruction error and latent space information. A model of the identified subsequence is saved and used for recognition of repeating subsequences as well as fast offline clustering. For evaluation, we propose a new similarity measure based on curvature for a more intuitive time-series subsequence clustering metric. A comparison with seven other state-of-the-art algorithms and eight datasets shows the capability and the increased performance of our algorithm in clustering MTSD online and offline in conjunction with mechatronic systems.
- Published
- 2023
46. Intelligent Prediction Method for Heat Dissipation State of Converter Heatsink
- Author
-
Hao Jia, Jie Chen, Heping Fu, Ruichang Qiu, and Zhigang Liu
- Subjects
Thermal force ,intelligent prediction ,General Computer Science ,General Engineering ,power electronic converters ,heatsink ,Gauss-Newton iteration method ,Maintenance engineering ,Insulated gate bipolar transistors ,Heat transfer ,General Materials Science ,Thermal analysis ,Resistance heating ,Thermal resistance ,Electrical and Electronic Engineering - Abstract
Currently, offline manual periodic detection is the well-established practice for detecting the thermal state of the converter heatsink. This method, however, incurs high maintenance costs. To lower maintenance costs and improve maintenance efficiency in detecting the heat dissipation state, this paper proposes an intelligent online prediction scheme based on the Gauss-Newton iteration method. Firstly, the power loss model of the power module is established according to the characteristics of the IGBT and FWD. The power loss of the power device is then calculated in real time from the voltage and current parameters of the converter. Next, the transient thermal model of the heatsink is established based on thermodynamics theory, and a calculation method for the steady-state thermal resistance of the heatsink, based on the Gauss-Newton iteration method, is proposed according to this model. The transient thermal impedance data allow for timely prediction of the thermal resistance of the heatsink and characterize its thermal state. Finally, with the help of a DSP28377D, an experimental platform is built to verify the scheme. Results show that this method can realize intelligent online prediction of the thermal state.
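The Gauss-Newton step can be sketched on a first-order stand-in for the heatsink's transient thermal model, T(t) = P·Rth·(1 − e^(−t/τ)), fitting Rth and τ from sampled temperature rises. The model order, starting values, and numbers are illustrative assumptions, not the paper's thermal network.

```python
# Gauss-Newton fit of (Rth, tau) to noiseless synthetic heatsink data.
import math

def model(t, rth, tau, p):
    """First-order thermal model: temperature rise for a power step p."""
    return p * rth * (1.0 - math.exp(-t / tau))

def gauss_newton(ts, temps, p, rth=0.08, tau=20.0, iters=60):
    for _ in range(iters):
        r, j = [], []
        for t, y in zip(ts, temps):
            e = math.exp(-t / tau)
            r.append(y - model(t, rth, tau, p))
            # Partial derivatives of the model w.r.t. rth and tau.
            j.append((p * (1.0 - e), -p * rth * e * t / tau ** 2))
        # Solve the 2x2 normal equations (J^T J) d = J^T r in closed form.
        a = sum(x * x for x, _ in j)
        b = sum(x * y for x, y in j)
        c = sum(y * y for _, y in j)
        g0 = sum(x * ri for (x, _), ri in zip(j, r))
        g1 = sum(y * ri for (_, y), ri in zip(j, r))
        det = a * c - b * b
        rth += (c * g0 - b * g1) / det
        tau = max(tau + (a * g1 - b * g0) / det, 1.0)  # keep tau positive
    return rth, tau

true_rth, true_tau, power = 0.05, 30.0, 400.0   # K/W, s, W
ts = [5.0 * k for k in range(1, 25)]
temps = [model(t, true_rth, true_tau, power) for t in ts]
rth_hat, tau_hat = gauss_newton(ts, temps, power)
```

An increase of the fitted Rth over time would indicate degrading heat dissipation, which is the condition the online scheme is meant to flag.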
- Published
- 2023
47. Path Planning for Mobile Robot Considering Turnabouts on Narrow Road by Deep Q-Network
- Author
-
Naoki Motoi, Masato Kobayashi, and Tomoaki Nakamura
- Subjects
reinforcement learning ,General Computer Science ,Mobile robot ,General Engineering ,turnabout ,General Materials Science ,Electrical and Electronic Engineering ,path planning - Abstract
This paper proposes a path planning method for a nonholonomic mobile robot that performs turnabouts on a narrow road. A narrow road is any space in which the robot cannot move without turning around. Conventional path planning techniques ignore turnabout points and directions determined by environmental data, which might result in collisions or deadlocks on a narrow road. The proposed method uses a Deep Q-network (DQN) to obtain a control strategy for path planning on narrow roads. In the simulation, the robot learned the optimal velocity commands that maximized the long-term reward. The reward is designed to reach a target with smaller changes in robot velocity and fewer turnabouts. The success rate and the number of turnabouts in the simulation and experiment were used to evaluate the trained model. According to the simulation and experimental results, the proposed strategy enables the robot to travel on narrow roads. Additionally, these outcomes demonstrate comparable performance on a number of roads that are not part of the learning environments, supporting the robustness of the trained model.
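The reward shaping described, a goal bonus with penalties on velocity changes and turnabouts, can be sketched as below. All weights are assumed values for illustration, not those tuned in the paper.

```python
# Reward shaping for DQN path planning with turnabouts.

def reward(reached_goal, dv_linear, dv_angular, turnabout,
           w_goal=100.0, w_dv=1.0, w_turn=5.0):
    """Per-step reward: smoothness and turnabout penalties plus goal bonus."""
    r = -w_dv * (abs(dv_linear) + abs(dv_angular))   # penalize velocity change
    if turnabout:
        r -= w_turn                                  # discourage reversals
    if reached_goal:
        r += w_goal
    return r

def count_turnabouts(velocities):
    """A turnabout is a sign change of the commanded linear velocity."""
    signs = [1 if v > 0 else -1 for v in velocities if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)
```

Maximizing the discounted sum of this reward pushes the learned policy toward exactly the evaluation criteria mentioned above: high success rate with few turnabouts.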
- Published
- 2023
48. An Ageing-Aware and Temperature Mapping Algorithm for Multilevel Cache Nodes
- Author
-
Emmanuel Ofori-Attah and Michael Opoku Agyeman
- Subjects
General Computer Science ,General Engineering ,General Materials Science ,Electrical and Electronic Engineering - Abstract
Increasing chip inactivity threatens the performance of future many-core systems, so efficient techniques are required for the continued scaling of transistors. As a result of this challenge, future many-core system designs must account for the possibility of only 50% of the chip functioning at a time while still maintaining performance. Fortunately, this 50% inactivity can be managed by controlling the temperature of active nodes and the placement of dark nodes to keep the chip balanced and working, while considering the lifetime of the nodes. However, allocating dark nodes inefficiently can increase the temperature of the chip and increase the waiting time of applications. Consequently, due to stochastic application characteristics, a dynamic rescheduling technique is more desirable than a fixed design-time mapping. In this paper, we propose Ageing Before Temperature Electromigration-Aware, Negative Bias Temperature Instability (NBTI) & Time-Dependent Dielectric Breakdown (TDDB) Neighbour Allocation (ABENA 2.0), a dynamic rescheduling management system which considers ageing and temperature before mapping applications. ABENA also considers the locations of active and dark nodes and migrates tasks based on the characteristics of the nodes. Our proposed algorithm employs Dynamic Voltage and Frequency Scaling (DVFS) to reduce the voltage and frequency (VF) of the nodes. Results show that our proposed method improves on a conventional round-robin management system by 10% in temperature and 10% in ageing.
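The "ageing before temperature" ordering the scheme's name implies can be sketched as a node-selection policy: rank candidate nodes first by accumulated ageing, then by current temperature, and let DVFS lower the V/F step of any node that runs too hot. All fields, thresholds, and the tie-breaking order below are assumptions, not ABENA's actual policy.

```python
# Ageing- and temperature-aware node selection with a DVFS safety valve.
from dataclasses import dataclass

@dataclass
class Node:
    nid: int
    ageing: float        # accumulated wear, arbitrary units
    temp_c: float        # current temperature, Celsius
    dark: bool = False   # inactive (dark-silicon) node
    vf_level: int = 3    # DVFS step, 0 = slowest

def pick_node(nodes):
    """Choose the least-aged, then coolest, active node for the next task."""
    active = [n for n in nodes if not n.dark]
    return min(active, key=lambda n: (n.ageing, n.temp_c))

def apply_dvfs(node, temp_limit_c=80.0):
    """Drop one voltage/frequency step when the node exceeds the limit."""
    if node.temp_c > temp_limit_c and node.vf_level > 0:
        node.vf_level -= 1
    return node.vf_level

nodes = [Node(0, ageing=0.7, temp_c=65.0),
         Node(1, ageing=0.2, temp_c=85.0),
         Node(2, ageing=0.2, temp_c=60.0),
         Node(3, ageing=0.1, temp_c=70.0, dark=True)]
chosen = pick_node(nodes)
```

Putting ageing ahead of temperature in the sort key is what spreads wear across the chip's lifetime instead of repeatedly loading the coolest node.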
- Published
- 2023
49. EduQG: A Multi-Format Multiple-Choice Dataset for the Educational Domain
- Author
-
Amir Hadifar, Semere Kiros Bitew, Johannes Deleu, Chris Develder, and Thomas Demeester
- Subjects
FOS: Computer and information sciences ,Internet ,Computer Science - Computation and Language ,Technology and Engineering ,learning ,General Computer Science ,Natural language processing ,General Engineering ,question generation ,transfer learning ,Online services ,Annotations ,QUESTION ,TESTS ,Training ,General Materials Science ,Syntactics ,multiple-choice questions ,Electrical and Electronic Engineering ,Question answering (information retrieval) ,Computation and Language (cs.CL) - Abstract
Natural language processing technology has made significant progress in recent years, fuelled by increasingly powerful general language models. This has also inspired a sizeable body of work targeted specifically towards the educational domain, where the creation of questions (both for assessment and practice) is a laborious/expensive effort. Thus, automatic Question-Generation (QG) solutions have been proposed and studied. Yet, according to a recent survey of the educational QG community's progress, a common baseline dataset unifying multiple domains and question forms (e.g., multiple choice vs. fill-the-gap), including readily available baseline models to compare against, is largely missing. This is the gap we aim to fill with this paper. In particular, we introduce a high-quality dataset in the educational domain, containing over 3,000 entries, comprising (i) multiple-choice questions, (ii) the corresponding answers (including distractors), and (iii) associated passages from the course material used as sources for the questions. Each question is phrased in two forms, normal and cloze (i.e., fill-the-gap), and correct answers are linked to source documents with sentence-level annotations. Thus, our versatile dataset can be used for both question and distractor generation, as well as to explore new challenges such as question format conversion. Furthermore, 903 questions are accompanied by their cognitive complexity level as per Bloom's taxonomy. All questions have been generated by educational experts rather than crowd workers to ensure they maintain educational and learning standards. Our analysis and experiments suggest distinguishable differences between our dataset and commonly used ones for question generation for educational purposes. We believe this new dataset can serve as a valuable resource for research and evaluation in the educational domain.
The dataset and baselines are made available to support further research in question generation for education (https://github.com/hadifar/question-generation).
- Published
- 2023
50. Correct and Crisp Edge Detection Approach Based on Dense Network
- Author
-
Xiangxiang Li, Gang Shi, Xiaoli Wang, Xiaohua Li, and Guangxiao Niu
- Subjects
Correctness ,General Computer Science ,Computer science ,General Engineering ,Process (computing) ,Convolutional neural network ,Edge detection ,Digital image ,Feature (computer vision) ,Path (graph theory) ,Convergence (routing) ,General Materials Science ,Electrical and Electronic Engineering ,Algorithm - Abstract
Edge detection is a basic problem in computer vision and image processing. The main purpose of edge detection is to identify points with obvious brightness changes in digital images. At present, there are many good detection methods, but most of them do not consider the correctness and crispness of edges at the same time. In order to address this problem, this paper proposes a method based on a deep convolution neural network. The method is mainly based on a dense network that is then combined with the single network structure of a backward refinement path module. The former can detect and retain the feature information between different layers in an image. The latter makes full use of the extracted information so that the low-level detail features and high-level abstract features can be better integrated in the final output. We tested the method on the BIPED data set. The results show that the correctness and crispness of the edges can be balanced in the detection process, and the ODS, OIS and AP of this method reach 0.888, 0.893 and 0.916, respectively. Compared with the state-of-the-art approaches, the proposed method improves the standard evaluation by 3%-5%, and the convergence speed is also significantly improved.
- Published
- 2023