69 results for "Gelenbe, Erol"
Search Results
2. An OpenNCP-based Solution for Secure eHealth Data Exchange
- Author
-
Staffa, Mariacarla, Sgaglione, Luigi, Mazzeo, Giovanni, Coppolino, Luigi, D'Antonio, Salvatore, Romano, Luigi, Gelenbe, Erol, Stan, Oana, Carpov, Sergiu, Grivas, Evangelos, Campegiani, Paolo, Castaldo, Luigi, Votis, Konstantinos, Koutkias, Vassilis, and Komnios, Ioannis
- Published
- 2018
- Full Text
- View/download PDF
3. Deep Learning with Dense Random Neural Network for Detecting Attacks against IoT-connected Home Environments
- Author
-
Brun, Olivier, Yin, Yonghua, and Gelenbe, Erol
- Published
- 2018
- Full Text
- View/download PDF
4. IoT Network Cybersecurity Assessment with the Associated Random Neural Network
- Author
-
Gelenbe, Erol and Nakip, Mert
- Subjects
Machine Learning, Cybersecurity, Associated Random Neural Network, MIRAI Attacks, Botnets, Internet of Things (IoT)
- Abstract
This paper proposes a method to assess the security of an IoT network of n devices or IP addresses by simultaneously identifying all the compromised IoT devices and IP addresses. It uses a specific Random Neural Network (RNN) architecture composed of two mutually interconnected sub-networks that complement each other in a recurrent structure, called the Associated RNN (ARNN). For each of the n devices or IP addresses in the IoT network, two distinct neurons of the ARNN advocate opposite views: compromised or not compromised. The fully interconnected 2n-neuron ARNN structure of paired neurons learns offline from ground truth data. Thus, rather than requiring a separate attack detector at each network node, the ARNN offers a single overall attack detector that observes the incoming traffic at each node, learns about the interdependencies between network nodes, and formulates a recommendation for each device or IP address in an IoT network. The ARNN weight initialization and learning algorithm are discussed, and the ARNN performance is evaluated using real attack data and compared against several learning and testing techniques. Results are obtained both for off-line learning with ground truth data and for on-line incremental learning using a simplified average metric measured from incoming packet traffic. Comparisons with the best state-of-the-art techniques show that the ARNN significantly outperforms previously known approaches.
- Published
- 2023
- Full Text
- View/download PDF
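The ARNN's paired-neuron decision described above can be illustrated with a toy sketch; this is not the actual ARNN equations (which compute neuron excitation probabilities from traffic and learned interconnection weights), and all scores below are invented:

```python
# Toy illustration of the paired "compromised vs. not compromised" readout.
# Each device is represented by two competing evidence values, standing in
# for the excitation levels of its two paired ARNN neurons.
scores = [
    (0.9, 0.1),  # device 0: (compromised, not compromised) evidence
    (0.2, 0.8),  # device 1
    (0.6, 0.4),  # device 2
    (0.1, 0.9),  # device 3
]

# One decision per device: the neuron with the larger value "wins".
compromised = [c > nc for (c, nc) in scores]
print(compromised)  # → [True, False, True, False]
```

The point of the pairing is that a single network produces a per-device verdict for all n devices at once, rather than running n separate detectors.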
5. Measurement Based Evaluation and Mitigation of Flood Attacks on a LAN Test-Bed
- Author
-
Nasereddin, Mohammed, Nakip, Mert, and Gelenbe, Erol
- Subjects
Networking and Internet Architecture (cs.NI), FOS: Computer and information sciences, Computer Science - Networking and Internet Architecture, Computer Science - Cryptography and Security, Cryptography and Security (cs.CR), Internet of Things, Local Area Networks, Cybersecurity, Random Neural Networks, G-Networks, UDP Flood Attacks, Intrusion Detection and Mitigation
- Abstract
The IoT's vulnerability to network attacks has motivated the design of intrusion detection schemes (IDS) using Machine Learning (ML), with a low computational cost for online detection but intensive offline learning. Such IDS can have high attack detection accuracy and are easily installed on servers that communicate with IoT devices. However, they are seldom evaluated in realistic operational conditions where IDS processing may be held up by the system overload created by attacks. Thus we first present an experimental study of UDP Flood Attacks on a Local Area Network Test-Bed, where the first line of defence is an accurate IDS using an Auto-Associative Dense Random Neural Network. The experiments reveal that during severe attacks, the packet and protocol management software overloads the multi-core server and paralyses IDS detection. We therefore propose and experimentally evaluate an IDS design where decisions are made from a very small number of incoming packets, so that attacking traffic is dropped within milliseconds after an attack begins and the paralysing effect of congestion is avoided. (8 pages, 11 figures)
- Published
- 2023
- Full Text
- View/download PDF
6. Impact of Network Delay and Decision Imperfections in IoT Assisted Cruise Ship Evacuation
- Author
-
Ma, Yuting, Gelenbe, Erol, and Liu, Kezhong
- Subjects
Networking and Internet Architecture (cs.NI), FOS: Computer and information sciences, Computer Science - Networking and Internet Architecture
- Abstract
Major challenges of assisting passengers to safely and quickly escape from ships when an emergency occurs include complex realistic features such as human behavior uncertainty, dynamic human traversal times, and the computation and communication delays in the systems that offer advice to users during an emergency. In this paper, we present simulations that examine the influence of these key features on evacuation performance in terms of evacuation time. The approach is based on our previously proposed lookup table-based ship passenger evacuation method, i.e., ANT. The simulation results we present show that delays in the users' reception of instructions significantly impair the effectiveness of the evacuation service. In contrast, behavior uncertainty has a weaker influence on the performance of the navigation method. In addition, these effects also vary with the extent of the behavior uncertainty, the dynamics of the traversal time distributions, and the delay in receiving directions. These findings demonstrate the importance of carefully designing evacuation systems for passenger ships in a way that takes into account all realistic features of the ship's indoor evacuation environment, including the crucial role of information technology. (6 pages, 9 figures, WF-IoT conference)
- Published
- 2023
7. Protecting IoT Servers Against Flood Attacks with the Quasi Deterministic Transmission Policy
- Author
-
Gelenbe, Erol and Nasereddin, Mohammed
- Subjects
Networking and Internet Architecture (cs.NI), FOS: Computer and information sciences, Computer Science - Networking and Internet Architecture, Computer Science - Cryptography and Security, Cryptography and Security (cs.CR)
- Abstract
IoT Servers that receive and process packets from IoT devices should meet the QoS needs of incoming packets, and support Attack Detection software that analyzes the incoming traffic to identify and discard packets that may be part of a Cyberattack. Since UDP Flood Attacks can overwhelm IoT Servers by creating congestion that paralyzes their operation and limits their ability to conduct timely Attack Detection, this paper proposes and evaluates a simple architecture to protect a Server that is connected to a Local Area Network, using a Quasi Deterministic Transmission Policy Forwarder (SQF) at its input port. This Forwarder shapes the incoming traffic, sends it to the Server in a manner which does not modify the overall delay of the packets, and avoids congestion inside the Server. The relevant theoretical background is briefly reviewed, and measurements during a UDP Flood Attack are provided to compare the Server performance, with and without the Forwarder. It is seen that during a UDP Flood Attack, the Forwarder protects the Server from congestion allowing it to effectively identify Attack Packets. On the other hand, the resulting Forwarder congestion can also be eliminated at the Forwarder with "drop" commands generated by the Forwarder itself, or sent by the Server to the Forwarder. (8 pages, 13 figures)
- Published
- 2023
8. Towards low-GHG emissions from energy use in selected sectors - CAETS Energy report 2022
- Author
-
Adesina, Adejosi A., Albarran-Nunez, Jose Francisco, Alvarez Pelegry, Eloy, Anyaeji, Otis, Avidan, Amos, Bamberger, Yves, Bandyopadhyay, Bibek, Behrendt, Frank, Bertero, Raúl, Bravo López, Manuel, Cai, Rui, Carnicer, Roberto S., Caron, Patrick, Cataldo, José, Chakraborty, Sudhansu Shakhar, Chang, Woong-Seong, Chaturvedi, Pradeeep, Coker, Olufunmi, Dominguez Abascal, Jaime, Domínguez Abascal, José, Duggan, Gerry, Duic, Neven, Evans, Robert, Ferreño, Oscar, Finch, Nigel, Fredenberg, Lennart, Fritz de-la-orta, Erwin, Fu, Lin, Gao, Kunlun, Gelenbe, Erol, Gehrisch, Wolf, Giovambattista, Alberto, Godefroy, Julie, Haslett, Andrew, Hefft, Daniel, Hofmann-Sievert, Rita, Holzner, Christian, Hu, Shan, Igwe, Godwin, Imasogie, Benjamin, Jiang, Yi, Kearsley, Elsabe, Langlais, Catherine, Lieuwen, Timothy, Matlosz, Michael, Meisen, Axel, Melvin, Christopher, Mesarovic, Miodrag, Morillón, David, Moullec, Gaël-Georges, O’Brien, Kieran, Oke, Clement, Olivier-Bourbigou, Hélène, Park, Chinho, Pátzay, György, Reinders, Felix, Sanso, Brunilde, Scott, Norman Roy, Sohn, Il, Speer, John, Tanguy, Philippe A., Vignart, Oscar, Wagner, Ulrich, Wang, Yishen, Wright, Dave, and Wu, Yanting
- Published
- 2023
9. Traffic Based Sequential Learning During Botnet Attacks to Identify Compromised IoT Devices
- Author
-
Gelenbe, Erol and Nakip, Mert
- Subjects
Internet of Things (IoT), compromised device identification, random neural network, auto-associative deep random neural network, botnets, Mirai, attack detection and prevention
- Abstract
A novel online Compromised Device Identification System (CDIS) is presented to identify IoT devices and/or IP addresses that are compromised by a Botnet attack, within a set of sources and destinations that transmit packets. The method uses specific metrics that are selected for this purpose and which are easily extracted from network traffic, and trains itself online during normal operation with an Auto-Associative Dense Random Neural Network (AADRNN) using traffic metrics measured as traffic arrives. As it operates, the AADRNN is trained with auto-associative learning using only traffic that it estimates as being benign, without prior collection of different attack data. The experimental evaluation on publicly available Mirai Botnet attack data shows that CDIS achieves high performance with a Balanced Accuracy of 97%, despite its low online training and execution time. Experimental comparisons show that the AADRNN with sequential (online) auto-associative learning provides the best performance among six different state-of-the-art machine learning models. Thus CDIS can provide crucial effective information to prevent the spread of Botnet attacks in IoT networks having multiple devices and IP addresses.
- Published
- 2022
10. Computer and Information Sciences: 31st International Symposium, ISCIS 2016, Kraków, Poland, October 27–28, 2016, Proceedings (Volume 659)
- Author
-
Grochla, Krzysztof, Czachórski, Tadeusz, and Gelenbe, Erol
- Subjects
Business & Economics / Management, Business & Economics / Industries / Manufacturing, Business & Economics / Industries / Energy
- Abstract
Information Systems and Communication Service; Artificial Intelligence (incl. Robotics); Computer Communication Networks; Software Engineering/Programming and Operating Systems; Probability and Statistics in Computer Science; Computer Imaging, Vision, Pattern Recognition and Graphics
- Published
- 2016
11. Modelling energy changes in the energy harvesting battery of an IoT device
- Author
-
Czachórski, Tadeusz, Gelenbe, Erol, and Kuaban, Godlove Suila
- Subjects
Energy Harvesting, IoT, Diffusion models, Markovian Models
- Abstract
The complexity of battery-powered autonomous devices such as Internet of Things (IoT) nodes or Unmanned Aerial Vehicles (UAV) and the necessity to ensure an acceptable quality of service, reliability, and security, have significantly increased their energy demand. In this paper, we discuss using a diffusion approximation process to approximate the dynamic changes in the energy content of a battery. We consider the case when energy harvesting sources are constantly charging the battery. The model assumes a probabilistic consumption and delivery of energy, giving the time-dependent distributions of the energy at the battery, of the time remaining until it becomes empty, the time required to charge the battery to its total capacity, or the time it is operational between two moments of complete depletion. When possible, we compare the diffusion approximation results with corresponding Markovian models.
- Published
- 2022
- Full Text
- View/download PDF
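The diffusion view of battery depletion described above can be sketched with a small Monte Carlo experiment; the rates below are invented, and the check relies on the standard closed-form result that Brownian motion with drift -mu starting at x0 hits zero after x0/mu on average:

```python
import numpy as np

rng = np.random.default_rng(42)

x0 = 20.0        # initial battery energy (illustrative units)
consume = 2.0    # mean energy consumption rate (illustrative)
harvest = 0.5    # mean energy harvesting rate (illustrative)
sigma = 1.0      # fluctuation (diffusion) coefficient
mu = consume - harvest   # net drift towards an empty battery

dt = 0.01

def lifetime():
    """One simulated depletion trajectory: return the first time the
    battery energy reaches zero (discretised Brownian motion with drift)."""
    x, t = x0, 0.0
    while x > 0.0:
        x += -mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

sims = [lifetime() for _ in range(100)]
# For Brownian motion with drift -mu, the mean first-passage time
# from x0 down to 0 is x0/mu, independently of sigma.
print(np.mean(sims), x0 / mu)  # both close to 13.3
```

The same machinery extends to the time-dependent distributions the abstract mentions (time to full charge, time between depletions) by changing the boundary and drift.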
12. G-Networks that Detect Different Types of Cyberattacks
- Author
-
Gelenbe, Erol and Nakip, Mert
- Subjects
Cyberattacks, G-Networks, Auto-Associative Learning, Auto-Associative Deep Random Neural Network, The Random Neural Network
- Abstract
Malicious network attacks are a serious source of concern, and machine learning techniques have been widely used to build Attack Detectors. In particular, network-based attacks have been widely studied, since attacks typically try to compromise systems via network packets that enter through network ports. Attack Detectors are trained off-line with real attack data as well as with real non-attack data, and used online to monitor system entry points connected to networks, so that an alarm is raised when the arrival of attack traffic is detected. Many machine learning based Attack Detectors are typically trained to identify certain specific attacks, and the training of such algorithms to cover many different types of attacks may be excessively time consuming. G-networks are queueing networks with product form solution, which were proven to be universal approximators for continuous and bounded functions. In this paper a specific instance of the “G-network with triggers” is organized as a multilayer network, then trained with “normal” (non-attack) traffic from a well known DARPA attack traffic data repository. It is then shown to accurately detect several different attack types contained in the same DARPA traffic repository.
- Published
- 2022
- Full Text
- View/download PDF
13. Modelling of the Energy Depletion Process and Battery Depletion Attacks for Battery-Powered Internet of Things (IoT) Devices.
- Author
-
Kuaban, Godlove Suila, Gelenbe, Erol, Czachórski, Tadeusz, Czekalski, Piotr, and Tangka, Julius Kewir
- Subjects
Internet of Things, Wiener processes, Probability density function, Threshold energy
- Abstract
The Internet of Things (IoT) is transforming almost every industry, including agriculture, food processing, health care, oil and gas, environmental protection, transportation and logistics, manufacturing, home automation, and safety. Cost-effective, small-sized batteries are often used to power IoT devices being deployed with limited energy capacity. The limited energy capacity of IoT devices makes them vulnerable to battery depletion attacks designed to exhaust the energy stored in the battery rapidly and eventually shut down the device. In designing and deploying IoT devices, the battery and device specifications should be chosen in such a way as to ensure a long lifetime of the device. This paper proposes diffusion approximation as a mathematical framework for modelling the energy depletion process in IoT batteries. We applied diffusion or Brownian motion processes to model the energy depletion of a battery of an IoT device. We used this model to obtain the probability density function, mean, variance, and probability of the lifetime of an IoT device. Furthermore, we studied the influence of active power consumption, sleep time, and battery capacity on the probability density function, mean, and probability of the lifetime of an IoT device. We modelled ghost energy depletion attacks and their impact on the lifetime of IoT devices. We used numerical examples to study the influence of battery depletion attacks on the distribution of the lifetime of an IoT device. We also introduced an energy threshold after which the device's battery should be replaced in order to ensure that the battery is not completely drained before it is replaced.
- Published
- 2023
- Full Text
- View/download PDF
14. Comprehensive user requirements engineering methodology for secure and interoperable health data exchange
- Author
-
Natsiavas, Pantelis, Rasmussen, Janne, Voss-Knude, Maja, Votis, Κostas, Coppolino, Luigi, Campegiani, Paolo, Cano, Isaac, Marí, David, Faiella, Giuliana, Clemente, Fabrizio, Nalin, Marco, Grivas, Evangelos, Stan, Oana, Gelenbe, Erol, Dumortier, Jos, Petersen, Jan, Tzovaras, Dimitrios, Romano, Luigi, Komnios, Ioannis, and Koutkias, Vassilis
- Published
- 2018
- Full Text
- View/download PDF
15. Challenges for European Science and Technology Driven Innovation in Europe
- Author
-
Gelenbe, Erol, Brasseur, Guy, Cheffneux, Luc, Dehant, Veronique, Fabjanska, Anna, Halloin, Veronique, Judkiewicz, Michel, Mrsa, Vladimir, and Perez-Arriaga, Ignacio J.
- Subjects
Europe, Open Science, Research, EU H2020 Programs, Innovation, European Universities
- Abstract
In 2020 the GNP of the United States, the European Union (EU) and China amounted to $20.8tn, $15.3tn and $14.7tn, respectively. Europe, with 7% of the world's population, produced 21% of the world's scientific publications, ahead of the USA and matching China. But in 2020, among the twenty largest technology brands in terms of capitalization, one was European, alongside four Chinese companies and one Korean company. Among technology companies ranked by revenue, of the top twenty, only one was European. In 2020, among the top 500 companies by revenue, only 86 were European, while European venture capital deals accounted for 13% of an estimated $270 billion total of venture capital deals across the world, against 50% for the USA. These indicators, which included the UK as being within the EU for 2020, warn us that despite its history, culture, quality of life and the size of its population, Europe must make further efforts to transfer science into technology, innovation and business. This report identifies some of Europe's challenges in this respect, and recommends new initiatives to improve science and technology driven innovation in Europe.
- Published
- 2022
- Full Text
- View/download PDF
16. IoT Traffic Shaping and the Massive Access Problem
- Author
-
Gelenbe, Erol and Sigman, Karl
- Subjects
Internet of Things (IoT), Traffic Shaping, Quasi-Deterministic Transmission Policy (QDTP), Adaptive Non-Deterministic Transmission Policy (ANTP), Quality of Service, Massive Access Problem, Queueing Analysis
- Abstract
IoT gateways aim to meet the deadlines and QoS needs of packets from as many IoT devices as possible, though this can lead to a form of congestion known as the Massive Access Problem (MAP). While much work was conducted on predictive or reactive scheduling schemes to match the arrival process of packets to the service capabilities of IoT gateways, such schemes may use substantial computation and communication between gateways and IoT devices. This paper proves that the recently proposed “Quasi-Deterministic-Transmission-Policy (QDTP)” traffic shaping approach, which delays packets at IoT devices, substantially alleviates the MAP: QDTP does not increase overall end-to-end delay and reduces gateway queue length. We then introduce the Adaptive Non-Deterministic Transmission Policy (ANTP) that requires only one packet buffer at the gateway, offering substantial QoS improvement over FIFO scheduling.
- Published
- 2022
- Full Text
- View/download PDF
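The core of quasi-deterministic shaping can be sketched as a release-time recursion with a fixed minimum spacing between departures; this is a minimal illustration of the idea, assuming spacing parameter `spacing`, not the paper's full QDTP analysis or its delay-preservation proof:

```python
def qdtp_release_times(arrivals, spacing):
    """Quasi-deterministic shaping sketch: release packet k at
    max(arrival_k, previous_release + spacing), so that successive
    departures towards the gateway are at least `spacing` apart."""
    releases = []
    prev = float("-inf")
    for a in arrivals:
        r = max(a, prev + spacing)
        releases.append(r)
        prev = r
    return releases

# A bursty arrival pattern from an IoT device (time units illustrative):
arrivals = [0.0, 0.1, 0.2, 5.0, 5.1]
out = qdtp_release_times(arrivals, spacing=1.0)
print(out)  # → [0.0, 1.0, 2.0, 5.0, 6.0]
```

The burst at times 0.0-0.2 is spread out by the device itself, so queueing is shifted from the shared gateway back to the individual devices.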
17. Security in Computer and Information Sciences
- Author
-
Gelenbe, Erol, Jankovic, Marija, Kehagias, Dionysios, Marton, Anna, and Vilmos, Andras
- Subjects
architecture types, artificial intelligence, communication systems, computer crime, computer hardware, computer networks, computer security, computer systems, cryptography, data security, Internet of Things (IoT), network protocols, network security, signal processing, software architecture, software design, software engineering, telecommunication networks, telecommunication systems, bic Book Industry Communication::U Computing & information technology::UR Computer security, bic Book Industry Communication::U Computing & information technology::UN Databases::UNH Information retrieval, bic Book Industry Communication::U Computing & information technology::UK Computer hardware::UKN Network hardware, bic Book Industry Communication::U Computing & information technology::UM Computer programming / software development::UMZ Software Engineering, bic Book Industry Communication::U Computing & information technology::UB Information technology: general issues::UBL Legal aspects of IT, bic Book Industry Communication::G Reference, information & interdisciplinary subjects::GP Research & information: general::GPJ Coding theory & cryptology
- Abstract
This open access book constitutes the thoroughly refereed proceedings of the Second International Symposium on Computer and Information Sciences, EuroCybersec 2021, held in Nice, France, in October 2021. The 9 papers presented together with 1 invited paper were carefully reviewed and selected from 21 submissions. The papers focus on topics of security of distributed interconnected systems, software systems, Internet of Things, health informatics systems, energy systems, digital cities, digital economy, mobile networks, and the underlying physical and network infrastructures. This is an open access book.
- Published
- 2022
- Full Text
- View/download PDF
18. Random Quantum Neural Networks (RQNN) for Noisy Image Recognition
- Author
-
Konar, Debanjan, Gelenbe, Erol, Bhandary, Soham, Sarma, Aditya Das, and Cangi, Attila
- Subjects
FOS: Computer and information sciences, Quantum Physics, machine learning, image recognition, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), Computer Science - Computer Vision and Pattern Recognition, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Physical sciences, Electrical Engineering and Systems Science - Image and Video Processing, Quantum Physics (quant-ph), neural networks, quantum computing
- Abstract
Classical Random Neural Networks (RNNs) have demonstrated effective applications in decision making, signal processing, and image recognition tasks. However, their implementation has been limited to deterministic digital systems that output probability distributions in lieu of stochastic behaviors of random spiking signals. We introduce the novel class of supervised Random Quantum Neural Networks (RQNNs) with a robust training strategy to better exploit the random nature of the spiking RNN. The proposed RQNN employs hybrid classical-quantum algorithms with superposition state and amplitude encoding features, inspired by quantum information theory and the brain's spatial-temporal stochastic spiking property of neuron information encoding. We have extensively validated our proposed RQNN model, relying on hybrid classical-quantum algorithms via the PennyLane Quantum simulator with a limited number of qubits. Experiments on the MNIST, FashionMNIST, and KMNIST datasets demonstrate that the proposed RQNN model achieves an average classification accuracy of 94.9%. Additionally, the experimental findings illustrate the proposed RQNN's effectiveness and resilience in noisy settings, with enhanced image classification accuracy when compared to the classical counterparts (RNNs), classical Spiking Neural Networks (SNNs), and the classical convolutional neural network (AlexNet). Furthermore, the RQNN can deal with noise, which is useful for various applications, including computer vision in NISQ devices. The PyTorch code (https://github.com/darthsimpus/RQN) is made available on GitHub to reproduce the results reported in this manuscript.
- Published
- 2022
19. Energy, QoS and Security Aware Edge Services
- Author
-
Gelenbe, Erol, Nowak, Mateusz, Frohlich, Piotr, Fiolka, Jerzy, and Checinski, Jacek
- Subjects
SDN, Random Neural Networks, Green Computing, Edge Computing, Energy-awareness, Green Networking, Security, IoT, QoS
- Abstract
With the development of communication technologies and the increasing bandwidth of optical fibres and transmission speeds in current 5G and future 6G wireless networks, there is a growing demand for solutions organising traffic in such networks, taking into account both end-to-end transmissions and the possibility of data processing by edge services. The most pressing problems of today's computer networks are not only bandwidth and transmission delays, but also security and energy consumption, which is becoming increasingly important in today's climate. This paper presents a solution based on neural networks that organises network traffic taking into account the above criteria - quality of service (QoS), energy consumption and security.
- Published
- 2021
- Full Text
- View/download PDF
20. Randomization of Data Generation Times Improves Performance of Predictive IoT Networks
- Author
-
Nakip, Mert and Gelenbe, Erol
- Subjects
Earliest deadline first scheduling, Scheduling, Massive Access Problem, business.industry, Computer science, Test data generation, Network packet, Quality of service, Throughput, Internet of Things (IoT), Predictive networks, Packet loss, Logic gate, Performance improvement, business, Computer network
- Abstract
Input traffic from Internet of Things (IoT) devices is often both periodic and required to be received by a given deadline. This can create congestion at instants of time when traffic flowing from multiple devices arrives at a shared input port or gateway, resulting in missed deadlines at the receiver. As a consequence, scheduling techniques such as “Earliest Deadline First” (EDF) and “Priority based on Average Load” (PAL) are used to schedule the flows from different devices so as to try to satisfy the needs of the largest number of traffic flows in a timely fashion. In this paper, we propose the Randomization of flow Generation Times (RGT) in order to smooth the total incoming traffic at the input port or gateway, on top of the use of EDF and PAL. We then evaluate the performance of RGT together with PAL and EDF, for traffic loads with a varying number of up to 6400 IoT devices. Our simulation results show that RGT provides significantly better performance when added to EDF and PAL. Also, the additional computation required by RGT at each device can be quite small, suggesting that RGT is a very useful addition for improving the performance of IoT networks.
- Published
- 2021
- Full Text
- View/download PDF
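The benefit of spreading generation times can be sketched deterministically; the fixed offsets below stand in for RGT's randomized phases, and all numbers are illustrative rather than taken from the paper's simulations:

```python
def peak_arrivals(n_devices, period, offsets):
    """Largest number of packets landing in the same time slot over one
    period, for periodic per-device traffic with the given phase offsets."""
    counts = {}
    for i in range(n_devices):
        slot = offsets[i] % period
        counts[slot] = counts.get(slot, 0) + 1
    return max(counts.values())

n, period = 8, 10
aligned = peak_arrivals(n, period, offsets=[0] * n)            # all devices fire together
staggered = peak_arrivals(n, period, offsets=list(range(n)))   # phases spread out
print(aligned, staggered)  # → 8 1
```

With aligned phases every device contributes to the same burst; spreading the phases flattens the peak that the gateway (and then EDF or PAL) has to absorb.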
21. 10.1109/HEALTHCOM49281.2021.9398997
- Author
-
Li, Nan, Hu, Xiping, Ngai, Edith, and Gelenbe, Erol
- Subjects
Computer Science::Networking and Internet Architecture, Computer Science::Information Theory
- Abstract
With the expansion of the IoT, it is important to optimize available bandwidth to reliably support edge to device communications. Thus we propose a wireless network where each edge server communicates with its end devices using its wireless band as a primary channel, assisted by a secondary edge server that can relay communications via its own wireless band as a secondary channel. The network can optimize capacity by balancing load between primary and secondary wireless bands, and we analyze the geometry of achievable rate regions, depending on the state of bands modeled as Rayleigh fading channels. The allocation of a connection to the primary or secondary band is formulated as an optimization problem which is then solved, and illustrated with numerical examples.
- Published
- 2021
- Full Text
- View/download PDF
22. Time Dependent Diffusion Model for Security Driven Software Defined Networks
- Author
-
Czachórski, Tadeusz, Gelenbe, Erol, Kuaban, Godlove Suila, and Marek, Dariusz
- Subjects
SDN, Transients, Security, QoS, IoT Networks, Routing
- Abstract
We present a model of a Software Defined Network (SDN) where frequent changes in routing and traffic rates at routers are needed to respond to the security, quality of service (QoS), and energy savings requirements of applications such as the Internet of Things. Such frequent path and traffic changes introduce time-dependent network behaviours, and since standard queueing models are not well adapted to analyse the transient regime, we propose a tractable diffusion approximation for both the transient and steady-state behaviour. Our model can represent any network topology transmitting time-dependent flows with routing changes, and computes queue length and delay distributions at each network node and along complete paths between senders and receivers. Using realistic router parameters, we show that transients occupy a significant fraction of system time, so that the optimisation conducted with SDN controllers needs to include the effect of time-dependent behaviours.
- Published
- 2021
- Full Text
- View/download PDF
23. The European cross-border health data exchange roadmap: Case study in the Italian setting
- Author
-
Nalin, Marco, Baroni, Ilaria, Faiella, Giuliana, Romano, Maria, Matrisciano, Flavia, Gelenbe, Erol, Martinez, David Mari, Dumortier, Jos, Natsiavas, Pantelis, Votis, Kostas, Koutkias, Vassilis, Tzovaras, Dimitrios, and Clemente, Fabrizio
- Published
- 2019
- Full Text
- View/download PDF
24. Optimum Checkpointing for Long-running Programs
- Author
-
Siavvas, Miltiadis and Gelenbe, Erol
- Subjects
roll-back recovery, application-level checkpoints, optimum checkpoints, software reliability, program loops
- Abstract
Checkpoints are widely used to improve the performance of computer systems and programs in the presence of failures, and significantly reduce the cost of restarting a program each time that it fails. Application level checkpointing has been proposed for programs which may execute on platforms that are prone to failures, and also to reduce the execution time of programs that are prone to internal failures. Thus we propose a mathematical model to estimate the average execution time of a program that operates in the presence of dependability failures, without and with application level checkpointing, and use it to estimate the optimum interval, in number of instructions executed, between successive checkpoints. Specific emphasis is given to programs with loops, and the results are illustrated through simulation.
- Published
- 2019
- Full Text
- View/download PDF
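For comparison, the classic first-order answer to the same question is Young's (1974) approximation sqrt(2·C·MTBF) for the optimum time between checkpoints; the paper above derives its own instruction-count model, so this is only a baseline sketch with made-up numbers:

```python
import math

def young_interval(checkpoint_cost, mtbf):
    """Young's (1974) first-order approximation of the optimum
    checkpoint interval: sqrt(2 * C * MTBF), where C is the cost of
    taking one checkpoint and MTBF is the mean time between failures."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

# Hypothetical numbers: a checkpoint costs 10 s, and failures arrive
# about once every 2 hours on average.
tau = young_interval(10.0, 2 * 3600.0)
print(round(tau, 1))  # → 379.5  (seconds between checkpoints)
```

The square-root shape captures the trade-off in the abstract: checkpointing too often wastes checkpoint overhead, too rarely wastes re-execution after a failure.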
25. Emergency Management Systems and Algorithms: a Comprehensive Survey
- Author
-
Bi, Huibo and Gelenbe, Erol
- Subjects
FOS: Computer and information sciences, Computer Science - Other Computer Science, Other Computer Science (cs.OH), FOS: Electrical engineering, electronic engineering, information engineering, Systems and Control (eess.SY), Electrical Engineering and Systems Science - Systems and Control
- Abstract
Owing to the increasing frequency and destructiveness of natural and man-made disasters in modern highly-populated societies, emergency management, which provides solutions to prevent or address disasters, has drawn considerable research over the last few decades and has become a multidisciplinary area. Because of its open and inclusive nature, new technologies always tend to influence, change or even revolutionise this research area. Hence, it is imperative to consolidate the state-of-the-art studies and knowledge to meet the research needs and identify the future research directions. The paper presents a comprehensive and systemic review of the existing research in the field of emergency management from both the system design aspect and the algorithm engineering aspect. We begin with the history and evolution of emergency management research. Then the two main research topics of this area, "emergency navigation" and "emergency search and rescue planning", are introduced and discussed. Finally, we suggest the emerging challenges and opportunities from the system optimisation, evacuee behaviour modelling and optimisation, computing patterns, data analysis, energy and cyber security aspects. (33 pages, 3 figures)
- Published
- 2019
26. An Empirical Evaluation of the Relationship between Technical Debt and Software Security
- Author
-
Siavvas, Miltiadis, Tsoukalas, Dimitrios, Janković, Marija, Kehagias, Dionysios, Chatzigeorgiou, Alexander, Tzovaras, Dimitrios, Aničić, Nenad, and Gelenbe, Erol
- Subjects
empirical study ,software security ,technical debt ,static analysis ,vulnerability prediction - Abstract
Technical Debt (TD) is commonly used in practice as a measure of software quality. Due to the potential overlap between software quality and software security, an interesting question is whether TD can also be used as a software security indicator. However, although some software-related factors (e.g. software metrics) have been studied for their ability to indicate security risk in software products, no research attempts exist that specifically focus on TD. To this end, in the present study, we empirically evaluate the ability of TD to indicate security risks in software products. For this purpose, a relatively large code repository comprising 50 open-source software applications was constructed and analyzed using popular open-source static analysis tools, in order to calculate their TD and security level (i.e. vulnerability density). Subsequently, statistical analysis was employed to assess the relationship between TD and software security. The results of the empirical study revealed a statistically significant, positive and strong correlation between the TD and the vulnerability densities of the studied software products. This provides preliminary evidence for the ability of TD to serve as an indicator of software security. To the best of our knowledge, this is the first study that empirically evaluates the relationship between TD and software security.
- Published
- 2019
- Full Text
- View/download PDF
27. Security in Computer and Information Sciences
- Author
-
Gelenbe, Erol, Campegiani, Paolo, Czachórski, Tadeusz, Katsikas, Sokratis K., Komnios, Ioannis, Romano, Luigi, and Tzovaras, Dimitrios
- Subjects
Computer science ,Data protection ,Application software ,Data encryption (Computer science) ,Special purpose computers ,Computer communication systems ,thema EDItEUR::G Reference, Information and Interdisciplinary subjects::GP Research and information: general::GPJ Coding theory and cryptology ,thema EDItEUR::U Computing and Information Technology::UK Computer hardware::UKN Network hardware ,thema EDItEUR::U Computing and Information Technology::UN Databases::UNH Information retrieval ,thema EDItEUR::U Computing and Information Technology::UR Computer security ,thema EDItEUR::U Computing and Information Technology::UY Computer science::UYQ Artificial intelligence::UYQE Expert systems / knowledge-based systems - Abstract
This open access book constitutes the thoroughly refereed proceedings of the First International ISCIS Security Workshop 2018, Euro-CYBERSEC 2018, held in London, UK, in February 2018. The 12 full papers presented together with an overview paper were carefully reviewed and selected from 31 submissions. Security of distributed interconnected systems, software systems, and the Internet of Things has become a crucial aspect of the performance of computer systems. The papers deal with these issues, with a specific focus on societally critical systems such as health informatics systems, the Internet of Things, energy systems, digital cities, digital economy, mobile networks, and the underlying physical and network infrastructures.
- Published
- 2018
- Full Text
- View/download PDF
28. Time-Dependent Performance of a Multi-Hop Software Defined Network.
- Author
-
Czachórski, Tadeusz, Gelenbe, Erol, Kuaban, Godlove Suila, Marek, Dariusz, and Herencsar, Norbert
- Subjects
SOFTWARE-defined networking ,NETWORK performance ,STATISTICAL accuracy ,TRANSIENT analysis - Abstract
It has been recently observed that Software Defined Networks (SDN) can change the paths of different connections in the network at a relatively frequent pace to improve the overall network performance, including delay and packet loss, or to respond to other needs such as security. These changes mean that a network that SDN controls will seldom operate in steady state; rather, the network may often be in transient mode, especially when the network is heavily loaded and path changes are critically important. Hence, we propose a transient analysis of such networks to better understand how frequent changes in paths and the switches' workloads may affect multi-hop networks' performance. Since conventional queueing models are difficult to solve for transient behaviour and simulations take excessive computation time due to the need for statistical accuracy, we use a diffusion approximation to study a multi-hop network controlled by SDN. The results show that network optimization should consider the transient effects of SDN and that transients need to be included in the design of algorithms for SDN controllers that optimize network performance. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
29. Minimizing Energy and Computation in Long-Running Software.
- Author
-
Gelenbe, Erol, Siavvas, Miltiadis, and Rojas-Cessa, Roberto
- Subjects
ENERGY consumption ,POWER resources ,WEBSITES ,COST functions ,COMPUTER software - Abstract
Long-running software may operate on hardware platforms with limited energy resources, such as batteries or photovoltaic sources, or on high-performance platforms that consume a large amount of energy. Since such systems may be subject to hardware failures, checkpointing is often used to assure the reliability of the application. Since checkpointing introduces additional computation time and energy consumption, we study how checkpoint intervals should be selected so as to minimize a cost function that includes both execution time and energy. Expressions for the program's energy consumption and execution time are derived as functions of the failure probability per instruction. A first-principles analysis yields the checkpoint interval that minimizes a linear combination of the average energy consumption and execution time of the program, in terms of the classical "Lambert function". The sensitivity of the checkpoint interval to the importance attributed to energy consumption is also derived. The results are illustrated with numerical examples for programs of various lengths, showing the relation between the checkpoint interval that minimizes energy consumption and execution time, and the one that minimizes a weighted sum of the two. In addition, our results are applied to a popular software benchmark and posted on a publicly accessible web site, together with the optimization software that we have developed. [ABSTRACT FROM AUTHOR]
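The closed-form interval is expressed through the Lambert W function; since many standard libraries lack it, a minimal Newton-iteration evaluator for the principal branch (a generic numerical helper, not code from the paper) can be sketched as:

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch W(x) for x > 0, i.e. the solution of w * exp(w) = x,
    computed by Newton's method. A generic numerical helper for evaluating
    closed forms expressed via the Lambert function."""
    w = math.log(1.0 + x)                     # starting guess, adequate for x > 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            return w
    return w
```

For example, lambert_w(1.0) converges to the omega constant, W(1) ≈ 0.567143.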
- Published
- 2021
- Full Text
- View/download PDF
30. Mobile Communications with Intermittent and Renewable Energy
- Author
-
Abdelrahman, Omer H. and Gelenbe, Erol
- Subjects
Energy Packets ,Renewable energy ,Backhaul networks ,Performance analysis ,Mobile communications ,Intermittent energy - Abstract
Mobile communications are a powerful contributor to social and economic development worldwide, and in particular in less developed parts of the world. However, the extension and penetration of mobile communications are often hampered by the state of the electrical grid, which may not cover all areas and which may also be intermittent and unreliable. We therefore review some of the economic and technological challenges for mobile telecommunications that integrate and exploit potentially plentiful renewable energy sources, such as photovoltaics, in developing areas of the world. Such energy sources for communication networks are also useful for mitigating the environmental impact of the energy consumption of Information and Communication Technologies (ICT) worldwide. However, renewable energy sources are themselves intermittent, which raises new challenges for network design. Hence, this paper also develops an analytical approach that can be used to evaluate the quality of service of networks that operate with intermittent sources of energy.
- Published
- 2016
- Full Text
- View/download PDF
31. Nonnegative autoencoder with simplified random neural network
- Author
-
Yin, Yonghua and Gelenbe, Erol
- Subjects
FOS: Computer and information sciences ,Computer Science::Machine Learning ,Statistics::Machine Learning ,Computer Science - Learning ,Computer Science::Computer Vision and Pattern Recognition ,Computer Science::Neural and Evolutionary Computation ,Machine Learning (cs.LG) - Abstract
This paper proposes new nonnegative (shallow and multi-layer) autoencoders by combining the spiking Random Neural Network (RNN) model, the network architecture typically used in the deep-learning area, and a training technique inspired by nonnegative matrix factorization (NMF). The shallow autoencoder is a simplified RNN model, which is then stacked into a multi-layer architecture. The learning algorithm is based on the weight-update rules of NMF, subject to the nonnegative probability constraints of the RNN. The autoencoders equipped with this learning algorithm are tested on typical image datasets, including the MNIST, Yale face and CIFAR-10 datasets, and on 16 real-world datasets from different areas. The results obtained through these tests yield the desired high learning and recognition accuracy. Moreover, numerical simulations of the stochastic spiking behaviour of this RNN autoencoder show that it can be implemented in a highly distributed manner., 10 pages (a small edit to the abstract)
- Published
- 2016
32. Data Driven SMART Intercontinental Overlay Networks
- Author
-
Brun, Olivier, Wang, Lan, and Gelenbe, Erol
- Subjects
Networking and Internet Architecture (cs.NI) ,FOS: Computer and information sciences ,Computer Science - Networking and Internet Architecture ,Computer Science - Distributed, Parallel, and Cluster Computing ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Distributed, Parallel, and Cluster Computing (cs.DC) - Abstract
This paper addresses the use of Big Data and machine-learning-based analytics for the real-time management of Internet-scale quality-of-service route optimisation with the help of an overlay network. Based on the collection of large amounts of data, sampled every 2 minutes over a large number of source-destination pairs, we show that intercontinental Internet Protocol (IP) paths are far from optimal with respect to Quality of Service (QoS) metrics such as end-to-end round-trip delay. We therefore develop a machine-learning-based scheme that exploits large-scale data collected from communicating node pairs in a multi-hop overlay network, which uses IP between the overlay nodes themselves, to select paths that provide substantially better QoS than IP. The approach, inspired by the Cognitive Packet Network protocol, uses Random Neural Networks with Reinforcement Learning, based on the massive data that is collected, to select intermediate overlay hops, resulting in significantly better QoS than IP itself. The routing scheme is illustrated on a 20-node intercontinental overlay network that collects close to 2 million measurements per week and makes scalable distributed routing decisions. Experimental results show that this approach improves QoS significantly, efficiently, and in a scalable manner., 9 pages
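The measurement-driven reinforcement loop can be caricatured as follows (an illustrative sketch only: the paper uses Random Neural Networks, whereas this toy keeps one exponentially smoothed reward per candidate path, and all path names and RTT values are invented):

```python
class PathSelector:
    """Illustrative sketch (not the paper's RNN): keep a smoothed reward per
    candidate overlay path and route each new flow over the current best,
    mirroring a reinforcement loop driven by round-trip-time measurements."""
    def __init__(self, paths, alpha=0.3):
        self.alpha = alpha                      # smoothing factor for new measurements
        self.reward = {p: 0.0 for p in paths}   # higher reward = lower observed delay

    def observe(self, path, rtt_ms):
        r = 1.0 / rtt_ms                        # reward is inverse round-trip delay
        self.reward[path] = (1 - self.alpha) * self.reward[path] + self.alpha * r

    def choose(self):
        return max(self.reward, key=self.reward.get)

# Invented measurements: the overlay path via node A is consistently faster.
sel = PathSelector(["direct-IP", "via-overlay-A", "via-overlay-B"])
for rtt in (120, 118, 125):
    sel.observe("direct-IP", rtt)
for rtt in (80, 78):
    sel.observe("via-overlay-A", rtt)
sel.observe("via-overlay-B", 200)
best = sel.choose()
```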
- Published
- 2015
33. A Manifesto for Future Generation Cloud Computing: Research Directions for the Next Decade.
- Author
-
BUYYA, RAJKUMAR, SRIRAMA, SATISH NARAYANA, CASALE, GIULIANO, CALHEIROS, RODRIGO, SIMMHAN, YOGESH, VARGHESE, BLESSON, GELENBE, EROL, JAVADI, BAHMAN, VAQUERO, LUIS MIGUEL, NETTO, MARCO A. S., TOOSI, ADEL NADJARAN, RODRIGUEZ, MARIA ALEJANDRA, LLORENTE, IGNACIO M., DE CAPITANI DI VIMERCATI, SABRINA, SAMARATI, PIERANGELA, MILOJICIC, DEJAN, VARELA, CARLOS, BAHSOON, RAMI, DE ASSUNCAO, MARCOS DIAS, and RANA, OMER
- Subjects
CLOUD computing ,ON-demand computing ,SOFTWARE-defined networking ,SCIENTIFIC computing ,INTERNET of things ,SOFTWARE reliability - Abstract
The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention of academia, industries, and government bodies. Now, it has emerged as the backbone of modern economy by offering subscription-based services anytime, anywhere following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high-performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. The recent technological developments and paradigms such as serverless computing, software-defined networking, Internet of Things, and processing at network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
34. G-Networks with Adders.
- Author
-
Fourneau, Jean-Michel and Gelenbe, Erol
- Subjects
QUEUEING networks ,JOB shops ,MANUFACTURING processes - Abstract
Queueing networks are used to model the performance of the Internet, of manufacturing and job-shop systems, supply chains, and other networked systems in transportation or emergency management. Composed of service stations where customers receive service, and then move to another service station till they leave the network, queueing networks are based on probabilistic assumptions concerning service times and customer movement that represent the variability of system workloads. Subject to restrictive assumptions regarding external arrivals, Markovian movement of customers, and service time distributions, such networks can be solved efficiently with "product form solutions" that reduce the need for software simulators requiring lengthy computations. G-networks generalise these models to include the effect of "signals" that re-route customer traffic, or negative customers that reject service requests, and also have a convenient product form solution. This paper extends G-networks by including a new type of signal, that we call an "Adder", which probabilistically changes the queue length at the service center that it visits, acting as a load regulator. We show that this generalisation of G-networks has a product form solution. [ABSTRACT FROM AUTHOR]
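For reference, the classical G-network product form that the paper extends can be stated as follows (the baseline result for positive and negative customers, before the "Adder" signals are introduced):

```latex
% Traffic equations for an open G-network with external positive/negative
% arrival rates \Lambda_i, \lambda_i, service rates r_i, and routing
% probabilities p^{+}_{ji}, p^{-}_{ji}:
\lambda^{+}_{i} = \Lambda_i + \sum_{j} q_j \, r_j \, p^{+}_{ji}, \qquad
\lambda^{-}_{i} = \lambda_i + \sum_{j} q_j \, r_j \, p^{-}_{ji},
\qquad q_i = \frac{\lambda^{+}_{i}}{r_i + \lambda^{-}_{i}} .
% If q_i < 1 for all i, the stationary distribution has product form:
\pi(k_1,\dots,k_n) \;=\; \prod_{i=1}^{n} \, (1-q_i)\, q_i^{\,k_i} .
```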
- Published
- 2017
- Full Text
- View/download PDF
35. Synchronising Energy Harvesting and Data Packets in a Wireless Sensor.
- Author
-
Gelenbe, Erol
- Subjects
ENERGY harvesting, DATA packeting, REMOTE sensing, ENERGY storage, BUFFER storage (Computer science), PROBABILITY theory, MATHEMATICAL analysis - Abstract
We consider a wireless sensor node that gathers energy through harvesting and reaps data through sensing. The node has a wireless transmitter that sends out a data packet whenever there is at least one "energy packet" and one "data packet", where an energy packet represents the amount of accumulated energy at the node that can allow the transmission of a data packet. We show that such a system is unstable when both the energy storage space and the data backlog buffer approach infinity, and we obtain the stable stationary solution when both buffers are finite. We then show that if a single energy packet is not sufficient to transmit a data packet, there are conditions under which the system is stable, and we provide the explicit expression for the joint probability distribution of the number of energy and data packets in the system. Since the two flows of energy and data can be viewed as flows that are instantaneously synchronised, this paper also provides a mathematical analysis of a fundamental problem in computer science related to the stability of the "join" synchronisation primitive. [ABSTRACT FROM AUTHOR]
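A minimal numerical sketch of the finite-buffer case (an illustrative birth-death reduction under the stated assumptions, not the paper's derivation):

```python
def joint_buffer_distribution(lam_e, lam_d, e_max, d_max):
    """Finite-buffer sketch: with instantaneous 'join' transmission, the energy
    and data buffers are never simultaneously non-empty, so the state reduces to
    n = (#energy packets) - (#data packets), a birth-death chain on
    [-d_max, e_max] with up-rate lam_e (energy arrivals) and down-rate lam_d
    (data arrivals). Stationary weights are geometric in rho = lam_e / lam_d."""
    rho = lam_e / lam_d
    states = list(range(-d_max, e_max + 1))
    weights = [rho ** (n + d_max) for n in states]
    z = sum(weights)
    return {n: w / z for n, w in zip(states, weights)}

# rho = 1: the distribution is uniform over the 11 states, so as both buffers
# grow the probability mass spreads out and no proper limit exists -- consistent
# with the instability the paper shows for infinite buffers.
pi = joint_buffer_distribution(lam_e=1.0, lam_d=1.0, e_max=5, d_max=5)
```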
- Published
- 2015
- Full Text
- View/download PDF
36. Computer and Information Sciences
- Author
-
Czachórski, Tadeusz, Gelenbe, Erol, Grochla, Krzysztof, and Lent, Ricardo
- Subjects
Information Systems and Communication Service ,Artificial Intelligence (incl. Robotics) ,Computer Communication Networks ,Software Engineering/Programming and Operating Systems ,Probability and Statistics in Computer Science ,Computer Imaging, Vision, Pattern Recognition and Graphics ,thema EDItEUR::K Economics, Finance, Business and Management::KJ Business and Management::KJM Management and management techniques::KJMV Management of specific areas ,thema EDItEUR::K Economics, Finance, Business and Management::KN Industry and industrial studies::KNB Energy industries and utilities ,thema EDItEUR::K Economics, Finance, Business and Management::KN Industry and industrial studies::KND Manufacturing industries ,thema EDItEUR::T Technology, Engineering, Agriculture, Industrial processes::TC Biochemical engineering ,thema EDItEUR::W Lifestyle, Hobbies and Leisure::WC Antiques, vintage and collectables::WCF Collecting coins, banknotes, medals and other related items - Abstract
Information Systems and Communication Service; Artificial Intelligence (incl. Robotics); Computer Communication Networks; Software Engineering/Programming and Operating Systems; Probability and Statistics in Computer Science; Computer Imaging, Vision, Pattern Recognition and Graphics
- Published
- 2016
- Full Text
- View/download PDF
37. Smart SDN Management of Fog Services to Optimize QoS and Energy.
- Author
-
Fröhlich, Piotr, Gelenbe, Erol, Fiołka, Jerzy, Chęciński, Jacek, Nowak, Mateusz, Filus, Zdzisław, and Skarmeta, Antonio
- Subjects
SOFTWARE-defined networking, ARTIFICIAL intelligence, REACTION time, MIMO systems, REINFORCEMENT learning, TIME-varying systems, NUMBER systems, INTERNET of things - Abstract
The short latency required by IoT devices that need to access specific services has led to the development of Fog architectures that can serve as a useful intermediary between IoT systems and the Cloud. However, the massive numbers of IoT devices being deployed raise concerns about the power consumption of such systems as the numbers of IoT devices and Fog servers increase. Thus, in this paper, we describe a software-defined network (SDN)-based control scheme for client–server interaction that constantly measures ongoing client–server response times and estimates network power consumption, in order to select connection paths that minimize a composite goal function including both QoS and power consumption. The approach, using reinforcement learning with neural networks, has been implemented in a test-bed and is detailed in this paper. Experiments are presented that show the effectiveness of our proposed system in the presence of a time-varying workload of client-to-service requests, resulting in a reduction of power consumption of approximately 15% for an average response time increase of under 2%. [ABSTRACT FROM AUTHOR]
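The path-selection criterion can be illustrated with a toy composite goal function (the metric names, the weight beta and all numbers below are invented for illustration; the paper's controller learns this trade-off with reinforcement learning):

```python
def goal(metrics, beta=0.5):
    """Composite goal in the spirit of the paper: measured response time plus
    beta times estimated power draw (both hypothetical metric names)."""
    return metrics["rtt_ms"] + beta * metrics["power_w"]

paths = {
    "A": {"rtt_ms": 20.0, "power_w": 60.0},
    "B": {"rtt_ms": 25.0, "power_w": 40.0},
    "C": {"rtt_ms": 18.0, "power_w": 90.0},
}
# The controller steers the client-server flow over the path with the
# smallest composite goal value.
best = min(paths, key=lambda p: goal(paths[p]))
```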
- Published
- 2021
- Full Text
- View/download PDF
38. Size-Based Routing Policies: Non-Asymptotic Analysis and Design of Decentralized Systems †.
- Author
-
Bachmat, Eitan, Doncel, Josu, Gelenbe, Erol, and Calzarossa, Maria Carla
- Subjects
QUEUING theory ,POLICY analysis ,QUALITY of service ,SPECIAL effects in lighting ,NUMBER systems - Abstract
Size-based routing policies are known to perform well when the variance of the job-size distribution is very high. We consider two size-based policies in this paper: Task Assignment with Guessing Size (TAGS) and Size Interval Task Assignment (SITA). The latter assumes that the size of jobs is known, whereas the former does not. Our previous work showed that when the ratio of the largest to shortest job tends to infinity and the system load is fixed and low, the average waiting time of SITA is at most two times less than that of TAGS. In this article, we first analyze the ratio between the mean waiting time of TAGS and the mean waiting time of SITA in a non-asymptotic regime, and we show that for two servers, when the job-size distribution is Bounded Pareto with parameter α = 1, this ratio is unbounded from above. We then consider a system with an arbitrary number of servers and compare the mean waiting time of TAGS with that of Size Interval Task Assignment with Equal load (SITA-E), a SITA policy in which the loads of all the servers are equal. We show that in the light-traffic regime, the performance ratio under consideration is unbounded from above (i) when the job-size distribution is Bounded Pareto with parameter α = 1 and the number of servers is arbitrary, and (ii) for Bounded Pareto distributed job sizes with α ∈ (0, 2) \ {1} when the number of servers tends to infinity. Finally, we use the results of our previous work to show how to design decentralized systems with quality-of-service constraints. [ABSTRACT FROM AUTHOR]
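The Bounded Pareto job sizes and the SITA dispatch rule can be sketched as follows (an illustrative inverse-CDF sampler and an arbitrary two-server cutoff, not the paper's analysis):

```python
import random

def bounded_pareto(u, alpha, low, high):
    """Inverse-CDF sample of a Bounded Pareto(alpha, low, high) job size
    for a uniform draw u in [0, 1)."""
    tail = 1.0 - (low / high) ** alpha
    return low * (1.0 - u * tail) ** (-1.0 / alpha)

def sita_dispatch(size, cutoffs):
    """SITA: server i handles jobs whose size falls in [cutoffs[i], cutoffs[i+1])."""
    for i in range(len(cutoffs) - 1):
        if cutoffs[i] <= size < cutoffs[i + 1]:
            return i
    return len(cutoffs) - 2                  # largest jobs go to the last server

random.seed(7)
cutoffs = [1.0, 10.0, 1e6]                   # two servers: "short" vs "long" jobs (illustrative split)
sizes = [bounded_pareto(random.random(), alpha=1.0, low=1.0, high=1e6)
         for _ in range(10_000)]
loads = [0, 0]
for s in sizes:
    loads[sita_dispatch(s, cutoffs)] += 1
```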
- Published
- 2021
- Full Text
- View/download PDF
39. Efficient Feature Selection for Static Analysis Vulnerability Prediction.
- Author
-
Filus, Katarzyna, Boryszko, Paweł, Domańska, Joanna, Siavvas, Miltiadis, Gelenbe, Erol, and Larrucea, Xabier
- Subjects
DATA mining software ,COMPUTER software quality control ,MACHINE learning ,STATISTICAL correlation ,FORECASTING - Abstract
Common software vulnerabilities can result in severe security breaches, financial losses, and reputation deterioration and require research effort to improve software security. The acceleration of the software production cycle, limited testing resources, and the lack of security expertise among programmers require the identification of efficient software vulnerability predictors to highlight the system components on which testing should be focused. Although static code analyzers are often used to improve software quality together with machine learning and data mining for software vulnerability prediction, the work regarding the selection and evaluation of different types of relevant vulnerability features is still limited. Thus, in this paper, we examine features generated by SonarQube and CCCC tools, to identify those that can be used for software vulnerability prediction. We investigate the suitability of thirty-three different features to train thirteen distinct machine learning algorithms to design vulnerability predictors and identify the most relevant features that should be used for training. Our evaluation is based on a comprehensive feature selection process based on the correlation analysis of the features, together with four well-known feature selection techniques. Our experiments, using a large publicly available dataset, facilitate the evaluation and result in the identification of small, but efficient sets of features for software vulnerability prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
40. A Survey of Algorithms and Systems for Evacuating People in Confined Spaces.
- Author
-
Bi, Huibo and Gelenbe, Erol
- Subjects
EMERGENCY management ,INFORMATION & communication technologies ,RESEARCH management ,LITERATURE reviews ,INTERDISCIPLINARY research ,EMERGENCY communication systems - Abstract
The frequency, destructiveness and costs of natural and human-made disasters in modern, highly populated societies have motivated research on emergency evacuation and wayfinding, which has drawn considerable attention. The subject is now a multidisciplinary area of research in which information and communication technologies (ICT), and in particular the Internet of Things (IoT), have a significant impact on sensing and computing dynamic reactions that mitigate or prevent the worst outcomes of disasters. This paper offers state-of-the-art knowledge in this area so as to share ongoing research results, identify research gaps and address the need for future research. We present a comprehensive review of research on emergency evacuation and wayfinding, focusing on the algorithmic and system-design aspects. Starting from the history of emergency management research, we identify the emerging challenges concerning system optimisation, evacuee behaviour optimisation, data analysis, and the additional energy consumed by the ICT equipment that operates the emergency management infrastructure. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
41. Energy and performance optimisation with energy packet networks
- Author
-
Zhang, Yunxiao and Gelenbe, Erol
- Subjects
621.3 - Abstract
The recent exponential growth of Information and Communication Technologies (ICT) has led to a rapid increase in ICT energy consumption. The challenge of achieving energy-efficient ICT has therefore been explored, including Energy Harvesting (EH) technologies with intermittent renewable energy sources. There has thus been considerable interest in understanding how to optimise energy efficiency and quality of service (QoS) for computer-communication systems powered by EH technologies. This thesis investigates mathematical modelling and performance evaluation for computer-communication systems powered by EH technologies with intermittent renewable energy sources. In our research, we use the Energy Packet Network (EPN) paradigm, a discrete state-space modelling framework based on G-network theory. In such a system, discrete activities such as computer jobs and data in the form of packets consume energy represented by discrete energy packets (EPs), each of which is a basic unit of energy in Joules. This approach uses queueing theory so that the joint behaviour of discretised energy flows and job or data flows is analysed in a single model, which can evaluate both the performance and the energy efficiency of a complex interconnected computer-communication system. Specifically, our work uses the EPN paradigm to address four problems of practical interest in optimising performance and energy efficiency. We use the energy flow alone, or the job flow and energy flow together, to optimise a multi-server system's performance or QoS, for instance the average response time of jobs. In the first problem (Problem P1), we investigate how to select the optimal fraction of power that is shared between heterogeneous servers, so as to minimise the average response time of jobs. We solve Problem P1 analytically with the Lagrange multiplier method, subject to power constraints. We also obtain a physically meaningful condition that guarantees system stability and global optimality. The second problem (Problem P2) is to minimise the average response time of jobs by dynamically deciding whether to move jobs between servers, so as to balance the workload at each server. Problem P2 is solved numerically through gradient descent. In the third problem (Problem P3), we consider a cost function that combines the average response time of jobs and the rate of energy loss, and again select the optimal fraction of power shared between heterogeneous servers so as to minimise this cost function. The optimal solution is obtained by solving a system of simultaneous equations. The fourth problem (Problem P4) investigates how to match the energy flow into energy buffers and the job flow into servers or workstations, so as to minimise the average response time of jobs. We give a necessary condition for matching the energy flow and the job flow, which yields the local minimum of the average response time of jobs analytically. Throughout this thesis, we generalise the assumption, made in previous EPN work, that one EP can be used to execute only a single job: we assume that the number of jobs that can be executed by one EP is a random variable X_i following a general probability distribution. The derivation of the average number of jobs that can be executed by one EP is given in Appendix A. For computational convenience, we assume in our analysis that the random variable X_i follows a geometric distribution; how to solve the problem with a general distribution is discussed in Chapter 6 and Appendix B.
- Published
- 2019
- Full Text
- View/download PDF
42. Static analysis for facilitating secure and reliable software
- Author
-
Siavvas, Miltiadis and Gelenbe, Erol
- Subjects
621.3 - Abstract
Software security and reliability are aspects of major concern for software development enterprises that wish to deliver dependable software to their customers. Several static analysis-based approaches for facilitating the development of secure and reliable software have been proposed over the years. The purpose of the present thesis is to investigate these approaches and to extend their state of the art by addressing open issues that have not yet been sufficiently addressed. To this end, an empirical study was initially conducted to investigate the ability of software metrics (e.g., complexity metrics) to discriminate between different types of vulnerabilities, and to examine whether potential interdependencies exist between different vulnerability types. The results of the analysis revealed that software metrics can be used only as weak indicators of specific security issues, while important interdependencies may exist between different types of vulnerabilities. The study also verified the capacity of software metrics (including previously uninvestigated metrics) to indicate the existence of vulnerabilities in general. Subsequently, a hierarchical security assessment model able to quantify the internal security level of software products, based on static analysis alerts and software metrics, is proposed. The model is practical, since it is fully automated and operationalized in the form of individual tools, and it is also sufficiently reliable, since it was built on data and well-accepted sources of information. An extensive evaluation of the model on a large volume of empirical data revealed that it can reliably assess software security both at the product and at the class level of granularity, with sufficient discriminative power, and that it may also be used for vulnerability prediction. The experimental results also provide further support for the ability of static analysis alerts and software metrics to indicate the existence of software vulnerabilities. Finally, a mathematical model is proposed for calculating the optimum checkpoint interval, i.e., the checkpoint interval that minimizes the execution time of software programs that adopt the application-level checkpoint and restart (ALCR) mechanism. The optimum checkpoint interval was found to depend on the failure rate of the application, the execution cost of establishing a checkpoint, and the execution cost of restarting a program after failure. Emphasis was placed on programs with loops, and the results are illustrated through several numerical examples.
- Published
- 2019
- Full Text
- View/download PDF
43. Novel applications and contexts for the cognitive packet network
- Author
-
Akinwande, Olumide and Gelenbe, Erol
- Subjects
629.8 - Abstract
Autonomic communication, the development of self-configuring, self-adapting, self-optimising and self-healing communication systems, has gained much attention in the network research community. This can be explained by the increasing demand for more sophisticated networking technologies, embodied in physical systems that possess computation capabilities and can operate successfully with minimal human intervention. Such systems are driving innovative applications and services that improve the quality of life of citizens both socially and economically. Furthermore, because of its decentralised approach to communication, autonomic communication is also being explored by the research community as an alternative to centralised control infrastructures for the efficient management of large networks. This thesis studies one of the successful contributions of autonomic communication research, the Cognitive Packet Network (CPN). CPN is a highly scalable adaptive routing protocol that allows for decentralised control in communication. CPN has achieved significant successes and, given the direction of current research, we expect it to continue to find relevance. To investigate this hypothesis, we research new applications and contexts for CPN. The thesis first studies Information-Centric Networking (ICN), a future Internet architecture proposal. ICN adopts a data-centric approach in which contents are directly addressable at the network level and in-network caching is easily supported. An optimal caching strategy for an information-centric network is first analysed, and approximate solutions are developed and evaluated. Furthermore, a CPN-inspired forwarding strategy is proposed for directing requests in a way that exploits the in-network caching capability of ICN. The proposed strategy is evaluated via discrete-event simulations and shown to be more effective in its search for local cache hits than conventional methods. Finally, CPN is proposed as the routing system of an emergency cyber-physical system for guiding evacuees in confined spaces in emergency situations. By exploiting CPN's QoS capabilities, different paths are assigned to evacuees based on their ongoing health conditions, using well-defined path metrics. The proposed system is evaluated via discrete-event simulations and shown to improve survival chances compared with a static system that treats all evacuees in the same way.
- Published
- 2019
- Full Text
- View/download PDF
44. Energy packet network with negligible service time : a modelling and performance analysis approach to wireless sensor networks
- Author
-
Kadioglu, Yasin Murat and Gelenbe, Erol
- Subjects
621.3 - Abstract
Autonomous self-organising systems have gained much attention for managing complex tasks without any human interaction. Similarly, simpler systems are required for Internet of Things applications such as smart home services, wearables, smart cities and connected health systems. These simpler systems provide autonomous standalone devices for remote sensing, processing and transmission of information. However, such devices may not have a reliable connection to mains power, or may not be convenient for regular battery replacement. Therefore, local energy capture through harvesting from ambient intermittent sources such as vibration, electromagnetic waves, heat or light could be of great interest for such devices. This thesis researches the mathematical modelling and performance analysis of such autonomous digital devices operating with energy harvested from intermittent sources. The approach used in this research is based on the Energy Packet Network paradigm, where the arrivals of data and energy at devices are considered as discrete random processes. The devices operate by consuming harvested energy in a discrete manner in order to process, store and transmit data (wired or wireless) in a negligible time interval, such that the operation or workload of the devices is also modelled as a discrete random process. Probability models based on random walks and Markov chains are investigated to predict the effective rates at which such devices operate well under different energy consumption scenarios, to obtain closed-form formulas for stationary probability distributions, and to analyse further quantities of interest. Consequently, a "product form solution" is proposed for a cascade network of N nodes in which state transitions involve simultaneous state changes at multiple nodes, caused by data packets that flow through several nodes while consuming energy packets. 
A modelling approach to evaluate the effect of several battery attacks on such devices is studied. Finally, the optimum placement of wireless nodes in a region with a spatially continuous distribution of energy and data traffic is presented for different transmission schemes and optimisation objectives.
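The discrete-state modelling idea described in this abstract can be illustrated with a toy example: a single node whose battery holds at most B energy packets, with Bernoulli energy and data arrivals in each time slot and negligible service time. The parameters and the simple birth-death structure below are illustrative assumptions, not the thesis's actual model.

```python
# Toy sketch (not the thesis model): a discrete-time Markov chain for one
# energy-harvesting node with battery capacity B. Per slot, an energy packet
# (EP) arrives with prob p; a data packet arrives with prob q and, if an EP
# is stored, is transmitted instantly, consuming one EP (negligible service
# time). The state is the number of stored EPs.

def stationary_distribution(p, q, B, iters=20000):
    up = p * (1 - q)      # net gain of one EP in a slot
    down = q * (1 - p)    # net loss of one EP in a slot (only if some stored)
    pi = [1.0 / (B + 1)] * (B + 1)
    for _ in range(iters):                 # power iteration to stationarity
        nxt = [0.0] * (B + 1)
        for k in range(B + 1):
            u = up if k < B else 0.0       # full battery: arriving EP lost
            d = down if k > 0 else 0.0     # empty battery: nothing to spend
            if k < B:
                nxt[k + 1] += pi[k] * u
            if k > 0:
                nxt[k - 1] += pi[k] * d
            nxt[k] += pi[k] * (1.0 - u - d)
        pi = nxt
    return pi
```

Because the chain is a birth-death process, the stationary distribution is geometric with ratio up/down, which gives a quick sanity check on the iteration.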
- Published
- 2018
- Full Text
- View/download PDF
45. Random neural networks for deep learning
- Author
-
Yin, Yonghua and Gelenbe, Erol
- Subjects
621.3 - Abstract
The random neural network (RNN) is a mathematical model for an 'integrate and fire' spiking network that closely resembles the stochastic behaviour of neurons in mammalian brains. Since its proposal in 1989, there have been numerous investigations into the RNN's applications and learning algorithms. Deep learning (DL) has achieved great success in machine learning, but there has been no research into the properties of the RNN for DL that would combine their strengths. This thesis intends to bridge the gap between RNNs and DL, in order to provide powerful DL tools that are faster and can potentially be used with less energy expenditure than existing methods. Based on the RNN function approximator proposed by Gelenbe in 1999, the approximation capability of the RNN is investigated and an efficient classifier is developed. By combining the RNN, DL and non-negative matrix factorisation, new shallow and multi-layer non-negative autoencoders are developed. The autoencoders are tested on typical image datasets and real-world datasets from different domains, and the test results yield the desired high learning accuracy. The concept of dense nuclei/clusters is examined, using RNN theory as a basis. In dense nuclei, neurons may interconnect via soma-to-soma interactions as well as conventional synaptic connections. A mathematical model of the dense nuclei is proposed and its transfer function deduced. A multi-layer architecture of the dense nuclei is constructed for DL, whose value is demonstrated by experiments on multi-channel datasets and server-state classification in cloud servers. A theoretical study of the multi-layer architecture of the standard RNN (MLRNN) for DL is presented. Based on the layer-output analyses, the MLRNN is shown to be a universal function approximator. The effects of the number of layers on the learning capability and high-level representation extraction are analysed. 
A hypothesis for transforming the DL problem into a moment-learning problem is also presented. The power of the standard RNN for DL is investigated. The ability of the RNN with only positive parameters to conduct image convolution operations is demonstrated. The MLRNN equipped with the developed training algorithm achieves comparable or better classification at a lower computation cost than conventional DL methods.
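For readers unfamiliar with the model, the standard RNN's steady state is given by the firing probabilities q_i = lambda_plus_i / (r_i + lambda_minus_i), where the excitatory and inhibitory arrival rates combine external inputs with weighted contributions from other neurons. A minimal sketch of solving this fixed point by iteration, with made-up toy rates and weights, might look like:

```python
# Sketch of the standard RNN steady-state fixed point:
#   q_i = lambda_plus_i / (r_i + lambda_minus_i)
#   lambda_plus_i  = Lambda[i] + sum_j q_j * w_plus[j][i]
#   lambda_minus_i = lam[i]    + sum_j q_j * w_minus[j][i]
# All rates and weights below are toy values for illustration only.

def rnn_fixed_point(Lambda, lam, r, w_plus, w_minus, iters=200):
    n = len(Lambda)
    q = [0.0] * n
    for _ in range(iters):
        new_q = []
        for i in range(n):
            lp = Lambda[i] + sum(q[j] * w_plus[j][i] for j in range(n))
            lm = lam[i] + sum(q[j] * w_minus[j][i] for j in range(n))
            # clamp at 1: firing probabilities must stay in [0, 1]
            new_q.append(min(1.0, lp / (r[i] + lm)))
        q = new_q
    return q
```

A neuron with no incoming weights reduces to q = Lambda / (r + lam), which is a useful check that the iteration is wired correctly.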
- Published
- 2018
- Full Text
- View/download PDF
46. Traffic and task allocation in networks and the cloud
- Author
-
Wang, Lan and Gelenbe, Erol
- Subjects
621.3 - Abstract
Communication services such as telephony, broadband and TV are increasingly migrating into Internet Protocol (IP) based networks because of the consolidation of telephone and data networks. Meanwhile, the increasingly wide application of Cloud Computing enables the accommodation of tens of thousands of applications from general public or enterprise users, which make use of Cloud services on demand through IP networks such as the Internet. Real-Time services over IP (RTIP) have also become increasingly significant due to the convergence of network services, and the real-time needs of the Internet of Things (IoT) will strengthen this trend. Such Real-Time applications have strict Quality of Service (QoS) constraints, posing a major challenge for IP networks. The Cognitive Packet Network (CPN) has been designed as a QoS-driven protocol that addresses user-oriented QoS demands by adaptively routing packets based on online sensing and measurement. Thus in this thesis we first describe our design for a novel ''Real-Time (RT) traffic over CPN'' protocol which uses QoS goals that match the needs of voice packet delivery in the presence of other background traffic under varied traffic conditions; we present its experimental evaluation via measurements of key QoS metrics such as packet delay, delay variation (jitter) and packet loss ratio. Pursuing our investigation of packet routing in the Internet, we then propose a novel Big Data and Machine Learning approach for real-time, Internet-scale route optimisation based on Quality of Service using an overlay network, and evaluate its performance. Based on data sampled every 2 minutes over a large number of source-destination pairs, we observe that intercontinental Internet Protocol (IP) paths are far from optimal with respect to metrics such as end-to-end round-trip delay. 
On the other hand, our machine-learning-based overlay network routing scheme exploits large-scale data collected from communicating node pairs to select overlay paths, while it uses IP between neighbouring overlay nodes. Measurements from a week-long experiment with several million data points show substantially better end-to-end QoS than is observed with pure IP routing. Pursuing the machine learning approach, we then address the challenging problem of dispatching incoming tasks to servers in Cloud systems so as to offer the best QoS and reliable job execution; an experimental system (the Task Allocation Platform) that we have developed is presented and used to compare several task allocation schemes, including a model-driven algorithm, a reinforcement-learning-based scheme, and a ''sensible'' allocation algorithm that assigns tasks to sub-systems that are observed to provide lower response times. These schemes are compared via measurements both among themselves and against a standard round-robin scheduler, with two architectures (with homogeneous and heterogeneous hosts having different processing capacities), and the conditions under which the different schemes offer better QoS are discussed. Since Cloud systems include both locally based servers at user premises and remote servers and multiple Clouds that can be reached over the Internet, we also describe a smart distributed system that combines local and remote Cloud facilities, allocating tasks dynamically to the service that offers the best overall QoS, and including a routing overlay which minimises network delay for data transfer between Clouds. Internet-scale experiments that we report demonstrate the effectiveness of our approach in adaptively distributing workload across multiple Clouds.
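The overlay path selection idea in this abstract can be sketched very simply: given measured round-trip delays between overlay nodes, route via an intermediate node whenever that beats the direct IP path. The node names and delay values below are hypothetical, and real systems would consider multi-hop relays and delay variability.

```python
# Illustrative sketch (not the thesis system): pick the best one-hop overlay
# relay when the measured delay via an intermediate node beats the direct
# IP path. 'rtt' maps (src, dst) pairs to hypothetical measured RTTs in ms.

def best_one_hop(rtt, src, dst):
    direct = rtt[(src, dst)]
    best_path, best_delay = (src, dst), direct
    # candidate relays: every node seen in the measurements, except endpoints
    for relay in {a for a, _ in rtt} - {src, dst}:
        via = rtt.get((src, relay), float("inf")) + rtt.get((relay, dst), float("inf"))
        if via < best_delay:
            best_path, best_delay = (src, relay, dst), via
    return best_path, best_delay
```

For instance, if the direct A-to-B path measures 300 ms while A-to-C and C-to-B measure 80 ms and 100 ms, the relay path wins with 180 ms.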
- Published
- 2018
- Full Text
- View/download PDF
47. Internet search assistant based on the random neural network
- Author
-
Serrano Bermejo, Guillermo (Will) and Gelenbe, Erol
- Subjects
621.3 - Abstract
Web users cannot be guaranteed that the results provided by Web search engines or recommender systems are either exhaustive or relevant to their search needs. Businesses have a commercial interest in ranking higher in results or recommendations to attract more customers, while Web search engines and recommender systems make their profit from advertisements. This research analyses the relevance of the result rankings provided by different Web search engines, metasearch engines, academic databases and recommender systems. We propose an Intelligent Search Assistant (ISA) that addresses these issues from the perspective of end-users, acting as an interface between users and the different search engines; it emulates a Web search recommender system for general topic queries where the user explores the results provided. Our ISA sends the original query, retrieves the provided options from the Web and reorders the results. The proposed mathematical model of our ISA divides a user query into a multidimensional term vector. Our ISA is based on the Random Neural Network with Deep Learning Clusters. The potential value of each neuron or cluster is calculated by applying our innovative cost function to each snippet and weighting its dimension terms with different relevance parameters. Our ISA adapts to the perceived user interest, learning user relevance in an iterative process where the user directly evaluates the listed results. Gradient Descent and Reinforcement Learning are used independently to update the Random Neural Network weights, and we evaluate their performance based on learning speed and result relevance. Finally, we present a new relevance metric which combines relevance and rank. We use this metric to validate and assess the learning performance of our proposed algorithm against other search engines. In some situations, our ISA with its iterative learning outperforms other search engines and recommender systems.
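The reordering step described above can be caricatured with a toy scorer: weight each query term, score every snippet by its weighted term matches, and re-rank. The scoring rule and weights here are hypothetical placeholders for the thesis's RNN-based cost function, which learns the weights iteratively from user feedback.

```python
# Toy illustration of the re-ranking idea (not the thesis's cost function):
# score each result snippet by a weighted sum of query-term matches,
# then sort results by descending score.

def rerank(snippets, query_terms, weights):
    def score(snippet):
        text = snippet.lower()
        # add a term's weight whenever the term appears in the snippet
        return sum(w for t, w in zip(query_terms, weights) if t.lower() in text)
    return sorted(snippets, key=score, reverse=True)
```

In the real system the weights would be updated after each user interaction (via Gradient Descent or Reinforcement Learning), so the ranking adapts to the perceived user interest over successive iterations.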
- Published
- 2018
- Full Text
- View/download PDF
48. Performance analysis of mobile networks under signalling storms
- Author
-
Pavloski, Mihajlo and Gelenbe, Erol
- Subjects
621.3 - Abstract
There are numerous security challenges in cellular mobile networks, many of which originate from the Internet world. One of these challenges is the increasing rate of signalling messages produced by smart devices. In particular, many Internet services are provided through mobile applications in an unobstructed manner, such that users get an always-connected feeling. These services, which usually come from the instant messaging, advertising and social networking areas, impose significant signalling loads on mobile networks through frequent exchange of control data in the background. Such services and applications, whether built intentionally or unintentionally, can result in denial-of-service attacks known as signalling attacks or storms. Negative consequences include, among others, degradation of the mobile network's services, partial or complete network failures, and increased battery consumption for affected mobile terminals. This thesis examines the influence of signalling storms on different mobile technologies, and proposes defensive mechanisms. More specifically, using stochastic modelling techniques, this thesis first presents a model of the vulnerability in a single 3G UMTS mobile terminal, and studies the influence of the system's internal parameters on stability under a signalling storm. It then presents a queueing network model of the radio access part of 3G UMTS and examines the effect of the radio resource control (RRC) inactivity timers. In the presence of an attack, the proposed dynamic setting of the timers manages to lower the signalling load in the network and to raise the threshold above which a network failure could happen. The network model is then upgraded into a more generic and detailed model, representing different generations of mobile technologies. 
It is then used to compare technologies with dedicated and shared organisation of resource allocation, referred to as traditional and contemporary networks, using performance metrics such as signalling and communication delay, blocking probability, signalling load on the network's nodes, and bandwidth holding time. Finally, based on this analysis, two mechanisms are proposed for the real-time detection of storms, based respectively on counting same-type bandwidth allocations and on the usage of allocated bandwidth. The mechanisms are evaluated using discrete-event simulation in 3G UMTS, with experiments combining the detectors with a simple attack mitigation approach.
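The counting-based detection idea can be sketched as a sliding-window counter: flag a terminal as a suspected storm source when it triggers more than a threshold of same-type bandwidth allocations within a time window. The window length and threshold below are illustrative values, not taken from the thesis.

```python
# Toy sketch of a counting-based signalling-storm detector (illustrative,
# not the thesis's mechanism): keep timestamps of same-type bandwidth
# allocation requests in a sliding window and flag when a threshold is
# exceeded.
from collections import deque

class StormDetector:
    def __init__(self, window_s=60.0, threshold=20):
        self.window_s = window_s
        self.threshold = threshold
        self.events = deque()  # timestamps of same-type allocation requests

    def record(self, t):
        self.events.append(t)
        # evict requests that fell out of the sliding window
        while self.events and t - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) > self.threshold  # True => suspected storm
```

In a real deployment the detector's output would feed a mitigation step, e.g. delaying or dropping the offending terminal's RRC transitions, as in the simple mitigation approach evaluated in the thesis.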
- Published
- 2017
- Full Text
- View/download PDF
49. Emergency navigation, energy optimisation and cooperative algorithms for motion and evacuation
- Author
-
Bi, Huibo and Gelenbe, Erol
- Subjects
621.382 - Abstract
The increasing concentration of human populations in modern urbanised societies has increased the frequency and destructiveness of both natural and man-made disasters, and has motivated considerable research over the last few decades. Accompanying the development of computing technology, emergency navigation algorithms for built environments have evolved from off-line algorithms that direct evacuees in accordance with pre-deployed static evacuation plans to on-line algorithms that dynamically calculate egress paths for evacuees. However, these algorithms normally treat evacuees in a homogeneous manner, and ignore the different requirements and relative risks of death among different groups of people arising from differences in mobility, physical strength, health conditions and levels of resistance to hazards. Therefore, this work aims to develop systems and algorithms that dynamically customise distinct paths for different categories of evacuees. To this end, we borrow the concept of the Cognitive Packet Network (CPN) and adapt it to the context of emergency navigation. On top of the CPN framework, we design several routing metrics to calculate distinct egress paths for different categories of evacuees. To improve inter- and intra-group coordination, several cooperative strategies are proposed to further optimise the routes calculated by the proposed routing algorithm. To provide a more accurate prediction of the congestion level of each egress path during an evacuation under the effect of panic behaviours, we combine the CPN-based routing algorithm with a G-network model that analyses the congestion level on a path by capturing the dynamics of diverse categories of evacuees under the influence of panic and of re-routing decisions from the navigation system. 
Finally, we extend our work to large scale evacuations, and propose a G-network based emergency navigation algorithm to direct vehicles to safe areas in the aftermath of a large-scale disaster in an energy and time efficient manner.
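The idea of category-specific routing metrics can be illustrated with a simple cost model: edge cost = travel time + alpha * hazard, where alpha expresses a category's sensitivity to hazard, so shortest paths differ per category. The graph, costs, and the exact combination rule below are hypothetical, not the thesis's metrics.

```python
# Hedged sketch of category-specific egress paths (illustrative only):
# Dijkstra over edge cost = travel_time + alpha * hazard, where alpha
# reflects an evacuee category's sensitivity to hazard exposure.
import heapq

def best_path(edges, src, dst, alpha):
    # edges: {node: [(neighbor, travel_time, hazard), ...]}
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break                      # destination settled
        if d > dist.get(u, float("inf")):
            continue                   # stale queue entry
        for v, t, h in edges.get(u, []):
            nd = d + t + alpha * h     # category-weighted edge cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst               # reconstruct path from predecessors
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path)), dist[dst]
```

With alpha = 0 the fastest route wins regardless of hazard; for a more vulnerable category (larger alpha), a longer but safer route is preferred.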
- Published
- 2017
- Full Text
- View/download PDF
50. Novel graph analytics for enhancing data insight
- Author
-
Papadopoulos, Stavros and Gelenbe, Erol
- Subjects
006.3 - Abstract
Graph analytics is a fast-growing and significant field in the visualization and data mining community, applied in numerous high-impact areas such as network security, finance and health care, providing users with knowledge of the various patterns within a given system. Although a series of methods have been developed over the past years for the analysis of unstructured collections of multi-dimensional points, graph analytics has only recently been explored. Despite the significant progress achieved recently, there are still many open issues in the area, concerning not only the performance of graph mining algorithms but also the production of effective graph visualizations that enhance human perception. This thesis investigates novel methods for graph analytics in order to enhance data insight, and proposes two methods for graph mining and visualization. Building on previous work in graph mining, it suggests a set of novel graph features that are particularly efficient in identifying the behavioral patterns of the nodes in a graph. The proposed features are able to capture the interaction of a node's neighborhoods with other nodes in the graph. Moreover, unlike previous approaches, the graph features introduced herein include information from multiple node neighborhood sizes, thus capturing long-range correlations between nodes, and are able to depict the behavioral aspects of each node with high accuracy. Experimental evaluation on multiple datasets shows that using the proposed graph features for graph mining provides better results than other state-of-the-art graph features. Thereafter, the focus is placed on improving graph visualization methods towards enhanced human insight. 
To achieve this, the thesis uses non-linear deformations to reduce visual clutter. Non-linear deformations have previously been used to magnify significant or cluttered regions in data or images, reducing clutter and enhancing the perception of patterns. Extending previous approaches, this work introduces a hierarchical approach to non-linear deformation that reduces visual clutter by magnifying significant regions, leading to enhanced visualizations of one-, two- and three-dimensional datasets. In this context, an energy function is utilized to determine the optimal deformation for every local region in the data, taking into consideration information from multiple single-layer significance maps. The problem is then transformed into an optimization problem: the minimization of the energy function under specific spatial constraints. Extended experimental evaluation provides evidence that the proposed hierarchical approach for generating the significance map surpasses current methods, effectively identifying significant regions and delivering better results. The thesis concludes with a discussion outlining the major achievements of this work, as well as possible drawbacks and other open issues of the proposed approaches that could be addressed in future work.
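The notion of features drawn from multiple neighborhood sizes can be illustrated generically: for each node, count the nodes reachable within radius 1, 2, ..., R via breadth-first search. This is a hypothetical stand-in for the thesis's actual feature set, which captures richer interaction patterns.

```python
# Illustrative multi-radius neighborhood features (hypothetical, not the
# thesis's exact features): for each node, the number of nodes reachable
# within radius 1, 2, ..., R, capturing longer-range graph structure.
from collections import deque

def neighborhood_features(adj, R=3):
    feats = {}
    for src in adj:
        dist = {src: 0}
        frontier = deque([src])
        counts = [0] * R                 # counts[r-1]: nodes first seen at depth r
        while frontier:
            u = frontier.popleft()
            if dist[u] >= R:
                continue                 # do not expand beyond radius R
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    counts[dist[v] - 1] += 1
                    frontier.append(v)
        for r in range(1, R):            # make counts cumulative per radius
            counts[r] += counts[r - 1]
        feats[src] = counts
    return feats
```

On a path graph A-B-C-D, the endpoint A sees 1, 2 and 3 nodes at radii 1, 2 and 3, while the interior node B sees 2, 3 and 3, so the feature vectors already separate endpoint from interior roles.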
- Published
- 2016
- Full Text
- View/download PDF