Search Results
69 results for "Sadre R"
2. A hybrid procedure for efficient link dimensioning
- Author
- Schmidt, R. de O., Sadre, R., Sperotto, A., van den Berg, H., and Pras, A.
- Published
- 2014
- Full Text
- View/download PDF
3. Characterisation of Enzymes Involved in Tocopherol Biosynthesis
- Author
- Sadre, R., Paus, H., Frentzen, M., Weier, D., Murata, N., editor, Yamada, M., editor, Nishida, I., editor, Okuyama, H., editor, Sekiya, J., editor, and Hajime, W., editor
- Published
- 2003
- Full Text
- View/download PDF
4. FY17 ISCR Scholar End-of-Assignment Report - Robbie Sadre
- Author
- Sadre, R., primary
- Published
- 2017
- Full Text
- View/download PDF
5. Measurement Artifacts in NetFlow Data
- Author
- Hofstede, R.J., Drago, Idilio, Sperotto, Anna, Sadre, R., Pras, Aiko, Roughan, Matthew, and Chang, Rocky
- Subjects
- Computer science ,Network management ,EWI-23200 ,02 engineering and technology ,artifacts ,computer.software_genre ,Set (abstract data type) ,NetFlow ,0202 electrical engineering, electronic engineering, information engineering ,Network management, measurements, NetFlow, artifacts ,Network packet ,business.industry ,METIS-296371 ,Process (computing) ,020206 networking & telecommunications ,measurements ,Scalability ,020201 artificial intelligence & image processing ,Data mining ,Granularity ,Robust analysis ,business ,computer ,IR-85418
- Abstract
Flows provide an aggregated view of network traffic by grouping streams of packets. The resulting scalability gain usually excuses the coarser data granularity, as long as the flow data reflects the actual network traffic faithfully. However, it is known that the flow export process may introduce artifacts in the exported data. This paper extends the set of known artifacts by explaining which implementation decisions are causing them. In addition, we verify the artifacts' presence in data from a set of widely-used devices. Our results show that the revealed artifacts are widely spread among different devices from various vendors. We believe that these results provide researchers and operators with important insights for developing robust analysis applications.
- Published
- 2013
- Full Text
- View/download PDF
6. Assessing the quality of flow measurements from OpenFlow devices
- Author
- Hendriks, Luuk, de Oliveira Schmidt, R., Sadre, R., Bezerra, Jeronimo, and Pras, Aiko
- Subjects
- OpenFlow ,counters ,Traffic measurements ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,EWI-27762 ,IR-104409
- Abstract
Since its initial proposal in 2008, OpenFlow has evolved to become today’s main enabler of Software-Defined Networking. OpenFlow specifies operations for network forwarding devices and a communication protocol between data and control planes. Although not primarily designed as a traffic measurement tool, many works have proposed to use measured data from OpenFlow to support, e.g., traffic engineering or security in OpenFlow-enabled networks. These works, however, generally do not question or address the quality of actual measured data obtained from OpenFlow devices. Therefore, in this paper we assess the quality of measurements in real OpenFlow devices from multiple vendors. We demonstrate that inconsistencies and measurement artifacts can be found due to particularities of different OpenFlow implementations, making it impractical to deploy an OpenFlow measurement-based approach in a network consisting of devices from multiple vendors. In addition, we show that the accuracy of measured packet and byte counts and flow duration varies among the tested devices, and in some cases counters are not even implemented, for the sake of forwarding performance.
- Published
- 2016
7. Impact of Packet Sampling on Link Dimensioning
- Author
- de Oliveira Schmidt, R., Stadler, R., Sadre, R., Sperotto, Anna, van den Berg, Hans Leo, Pras, Aiko, and UCL - SST/ICTM/INGI - Pôle en ingénierie informatique
- Subjects
- Traffic volumes ,Aggregates ,Monitoring ,Computer Networks and Communications ,Computer science ,Loss measurement ,Real-time computing ,Defence, Safety and Security ,Traffic monitoring ,Inter-arrival time ,symbols.namesake ,Bandwidth ,CSR - Cyber Security & Robustness ,Sampling (signal processing) ,Quality of service ,METIS-314938 ,EWI-26199 ,Electrical and Electronic Engineering ,Sampling algorithm ,Dimensioning ,TS - Technical Sciences ,sFlow ,Network packet ,Bandwidth (signal processing) ,Variance (accounting) ,Radiation detectors ,Packet sampling ,IR-98046 ,ICT ,2023 OA procedure ,symbols ,Link dimensioning ,Network operator ,Estimation ,Cyber Security & Resilience ,Gibbs sampling
- Abstract
Link dimensioning is used by network operators to properly provision the capacity of their network links. Proposed methods for link dimensioning often require statistics, such as traffic variance, that need to be calculated from packet-level measurements. In practice, due to increasing traffic volume, operators deploy packet sampling techniques aiming to reduce the burden of traffic monitoring, but little is known about how link dimensioning is affected by such measurements. In this paper, we make use of a previously proposed and validated dimensioning formula that requires traffic variance to estimate required link capacity. We assess the impact of three packet sampling techniques on link dimensioning, namely, Bernoulli, $n$-in-$N$ and sFlow sampling. To account for the additional variance introduced by the sampling algorithms, we propose approaches to better estimate traffic variance from sampled data according to the employed technique. Results show that, depending on sampling rate and link load, packet sampling does not negatively impact link dimensioning accuracy even at very short timescales such as 10 ms. Moreover, we show that the loss of inter-arrival times of sampled packets due to the exporting process in sFlow does not harm the estimations, given that an appropriate sampling rate is used. Our study is validated using a large dataset consisting of traffic packet traces captured at several locations around the globe.
- Published
- 2015
8. Taking on Internet Bad Neighborhoods
- Author
- Moreira Moura, G.C., Sadre, R., and Pras, A.
- Subjects
- bad neighborhoods ,internet security ,spam
- Abstract
It is a known fact that malicious IP addresses are not evenly distributed over the IP addressing space. In this paper, we frame networks concentrating malicious addresses as bad neighborhoods. We propose a formal definition, show that this concentration can be used to predict future attacks (new spamming sources, in our case), and propose an algorithm to aggregate individual IP addresses into bigger neighborhoods. Moreover, we show how bad neighborhoods are specific to the exploited application (e.g., spam, ssh) and how the performance of different blacklist sources impacts lightweight spam filtering algorithms.
- Published
- 2014
9. Impact of Packet Sampling on Link Dimensioning
- Author
- Schmidt, R.D.O., Sadre, R., Sperotto, A., Berg, H. van den, and Pras, A.
- Abstract
Link dimensioning is used by network operators to properly provision the capacity of their network links. Proposed methods for link dimensioning often require statistics, such as traffic variance, that need to be calculated from packet-level measurements. In practice, due to increasing traffic volume, operators deploy packet sampling techniques aiming to reduce the burden of traffic monitoring, but little is known about how link dimensioning is affected by such measurements. In this paper, we make use of a previously proposed and validated dimensioning formula that requires traffic variance to estimate required link capacity. We assess the impact of three packet sampling techniques on link dimensioning, namely, Bernoulli, n-in-N and sFlow sampling. To account for the additional variance introduced by the sampling algorithms, we propose approaches to better estimate traffic variance from sampled data according to the employed technique. Results show that, depending on sampling rate and link load, packet sampling does not negatively impact link dimensioning accuracy even at very short timescales such as 10 ms. Moreover, we show that the loss of inter-arrival times of sampled packets due to the exporting process in sFlow does not harm the estimations, given that an appropriate sampling rate is used. Our study is validated using a large dataset consisting of traffic packet traces captured at several locations around the globe. © 2015 IEEE.
- Published
- 2015
10. Gaussian traffic revisited
- Author
- de Oliveira Schmidt, R., Sadre, R., Pras, Aiko, and Panwar, S.
- Subjects
- traffic models ,Gaussianity fit ,IR-87972 ,Gaussian model ,METIS-299964 ,EWI-23292
- Abstract
The assumption of Gaussian traffic is widely used in network modeling and planning. Due to its importance, researchers have repeatedly studied the Gaussian character of traffic aggregates. However, dedicated studies on this subject date back to 2002 and 2006. It is well known that network traffic has changed in the past few years due to the increasing use of social networks, clouds and video streaming websites. Therefore, the goal of this paper is to verify whether the Gaussianity assumption still holds for current network traffic. To this end, we study the characteristics of a large dataset, consisting of traces from four continents. The employed analysis methodology is similar to that found in previous works. In addition to the analysis of recent measurements, we also perform tests for a very long measurement period of six years. Our results show that the evolution of network traffic has not had a significant impact on its Gaussian character. Our findings also indicate that it is safer to relate the degree of Gaussianity to traffic bandwidth than to the number of users for high-speed links.
- Published
- 2013
11. Evaluating Third-Party Bad Neighborhood Blacklists for Spam Detection
- Author
- Moreira Moura, Giovane, Sperotto, Anna, Sadre, R., Pras, Aiko, Seon Hong, C., Diao, Y., and De Turk, F.
- Subjects
- IR-84179 ,ComputingMethodologies_PATTERNRECOGNITION ,METIS-296249 ,EWI-22957
- Abstract
The distribution of malicious hosts over the IP address space is far from being uniform. In fact, malicious hosts tend to be concentrated in certain portions of the IP address space, forming the so-called Bad Neighborhoods. This phenomenon has been previously exploited to filter Spam by means of Bad Neighborhood blacklists. In this paper, we evaluate how much a network administrator can rely upon different Bad Neighborhood blacklists generated by third-party sources to fight Spam. One could expect that Bad Neighborhood blacklists generated from different sources contain, to a varying degree, disjoint sets of entries. Therefore, we investigate (i) how specific a blacklist is to its source, and (ii) whether different blacklists can be interchangeably used to protect a target from Spam. We analyze five Bad Neighborhood blacklists generated from real-world measurements and study their effectiveness in protecting three production mail servers from Spam. Our findings lead to several operational considerations on how a network administrator could best benefit from Bad Neighborhood-based Spam filtering.
- Published
- 2013
12. Towards Bandwidth Estimation Using Flow-Level Measurements
- Author
- de Oliveira Schmidt, R., Sperotto, Anna, Pras, Aiko, Sadre, R., Novotny, Jiri, Celeda, Pavel, Waldburger, Martin, and Stiller, Burkhard
- Subjects
- Dynamic bandwidth allocation ,Computer science ,Network packet ,Real-time computing ,Context (language use) ,flow measurements ,METIS-287880 ,EWI-21928 ,Scalability ,NetFlow ,Bandwidth (computing) ,Bandwidth provisioning ,Granularity ,IR-81209 ,Dimensioning ,IPFIX ,Simulation
- Abstract
Bandwidth estimation is one of the prerequisites for efficient link dimensioning. In the past, several approaches to bandwidth estimation have been proposed, ranging from rules-of-thumb providing over-provisioning guidelines to mathematically backed-up provisioning formulas. The limitation of such approaches, in our eyes, is that they largely rely on packet-based measurements, which are almost unfeasible at today's traffic loads and speeds (1–10 Gbps). In this context, flow-based measurements seem to be a suitable alternative, addressing both data aggregation and scalability issues. However, flows pose a challenge for bandwidth estimation, namely a coarser data granularity compared to packet-based approaches, which can lead to lower precision in the estimation of the needed bandwidth. In this paper, we investigate the impact of flow-based measurements on bandwidth estimation. In particular, we are interested in quantifying the impact of flows on the main statistical traffic characteristics, especially the traffic rate variance. Our approach is validated on real traffic traces captured from 2002 to 2011 at the University of Twente.
- Published
- 2012
13. Estimating Bandwidth Requirements using Flow-level Measurements
- Author
- Bruyère, P., de Oliveira Schmidt, R., Sperotto, Anna, Sadre, R., and Pras, Aiko
- Subjects
- Traffic measurements ,Bandwidth provisioning ,NetFlow ,IPFIX ,EWI-22119 ,IR-81215
- Abstract
Bandwidth provisioning is an important network management task, carried out to meet desired levels of quality of service. Current provisioning practices are mostly based on rules-of-thumb and use coarse traffic measurements that may lead to under- and over-dimensioning of links. Several solutions have already been proposed, in which link provisioning is done by measuring and analyzing network traffic at the packet level. However, high-speed traffic rates as observed nowadays demand scalable measurement solutions. In this regard, flow monitoring seems to be a promising approach. Many software tools and network equipment already provide flow monitoring. But flows result in inherent information loss due to the aggregation of packet details, i.e., only a summary of traffic characteristics is provided. The poster will present a flow-based approach that overcomes the problem of traffic information loss and enables the use of flows in place of packet measurements for bandwidth provisioning. Among other results, we will show that outcomes from the proposed flow-based approach can be as good as the ones obtained with packet-level measurements.
- Published
- 2012
14. Internet bad neighborhoods aggregation
- Author
- Moreira Moura, Giovane, Sadre, R., Sperotto, Anna, Pras, Aiko, Paschoal Gaspary, L., and De Turk, Filip
- Subjects
- METIS-284989 ,IR-79352 ,EWI-21235 ,Network security ,business.industry ,Computer science ,Aggregate (data warehouse) ,020206 networking & telecommunications ,02 engineering and technology ,Intrusion detection system ,Computer security ,computer.software_genre ,Internet security ,EC Grant Agreement nr.: FP7/257513 ,Prefix ,Variable (computer science) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,The Internet ,business ,computer ,Subnetwork ,Computer network
- Abstract
Internet Bad Neighborhoods have proven to be an innovative approach for fighting spam. They have also helped to understand how spammers are distributed on the Internet. In our previous works, the size of each bad neighborhood was fixed to a /24 subnetwork. In this paper, however, we investigate if it is feasible to aggregate Internet bad neighborhoods not only at /24, but to any network prefix. To do that, we propose two different aggregation strategies: fixed prefix and variable prefix. The motivation for doing that is to reduce the number of entries in the bad neighborhood list, thus reducing memory storage requirements for intrusion detection solutions. We also introduce two error measures that allow us to quantify how much error is incurred by the aggregation process. An evaluation of both strategies was conducted by analyzing real-world data in our aggregation prototype.
- Published
- 2012
- Full Text
- View/download PDF
15. Simulative and Analytical Evaluation for ASD-Based Embedded Software
- Author
- Sadre, R., Remke, Anne Katharina Ingrid, Hettinga, Sjors, Haverkort, Boudewijn R.H.M., and Schmitt, Jens B.
- Subjects
- Computer science ,business.industry ,Software development ,EWI-21737 ,Software metric ,METIS-286311 ,Software analytics ,Computer engineering ,Software sizing ,Software construction ,IR-80116 ,Software reliability testing ,Software verification and validation ,business ,Software verification ,Simulation
- Abstract
The Analytical Software Design (ASD) method of the company Verum has been designed to reduce the number of errors in embedded software. However, it does not take performance issues into account, which can also have a major impact on the duration of software development. This paper presents a discrete-event simulator for the performance evaluation of ASD-structured software as well as a compositional numerical analysis method using fixed-point iteration and phase-type distribution fitting. Whereas the numerical analysis is highly accurate for non-interfering tasks, its accuracy degrades when tasks run in opposite directions through interdependent software blocks and the utilization increases. A thorough validation identifies the underlying problems when analyzing the performance of embedded software.
- Published
- 2012
16. Internet Bad Neighborhoods: the Spam Case
- Author
- Moreira Moura, Giovane, Sadre, R., Pras, Aiko, Festor, Olivier, and Lupu, Emil
- Subjects
- METIS-277737 ,EWI-20379 ,EC Grant Agreement nr.: FP7/257513 ,IR-77802
- Abstract
A significant part of current attacks on the Internet comes from compromised hosts that usually take part in botnets. Even though bots themselves can be distributed all over the world, there is evidence that most of the malicious hosts are, in fact, concentrated in small fractions of the IP address space, on certain networks. Based on that, the Bad Neighborhood concept was introduced. The general idea of Bad Neighborhoods is to rate a subnetwork by the number of malicious hosts that have been observed in that subnetwork. Even though Bad Neighborhoods were successfully employed in mail filtering, the very concept was not investigated in further detail. Therefore, in this work we take a closer look at it, by proposing four definitions for spam-based Bad Neighborhoods that take into account the way spammers operate. We apply the definitions to real-world data sets and show that they provide valuable insight into the behavior of spammers and the networks hosting them. Among our findings, we show that 10% of the Bad Neighborhoods are responsible for the majority of spam.
- Published
- 2011
17. Decomposition-Based Queueing Network Analysis with FiFiQueues
- Author
- Sadre, R., Haverkort, Boudewijn R.H.M., Boucherie, Richardus J., and van Dijk, Nico M.
- Subjects
- Decomposition ,Queueing theory ,Mathematical optimization ,EWI-19213 ,Computer science ,METIS-277480 ,Distributed computing ,Context (language use) ,Gordon–Newell theorem ,BCMP network ,Fixed point ,Fixed-point ,Queueing networks ,Mean value analysis ,Layered queueing network ,IR-75077 ,G-network ,QNA
- Abstract
In this paper we present an overview of decomposition-based analysis techniques for large open queueing networks. We present a general decomposition-based solution framework, without referring to any particular model class, and propose a general fixed-point iterative solution method for it. We concretize this framework by describing the well-known QNA method, as proposed by Whitt in the early 1980s, in that context, before describing our FiFiQueues approach. FiFiQueues allows for the efficient analysis of large open queueing networks of which the interarrival and service time distributions are of phase-type; individual queues, all with single servers, can have bounded or unbounded buffers. Next to an extensive evaluation with generally very favorable results for FiFiQueues, we also present a theorem on the existence of a fixed-point solution for FiFiQueues.
- Published
- 2010
- Full Text
- View/download PDF
18. Simpleweb/University of Twente Traffic Traces Data Repository
- Author
- Barbosa, R.R.R., Sadre, R., Pras, Aiko, and van de Meent, R.
- Subjects
- METIS-270800 ,EWI-17829 ,Network Management ,IR-71273 ,Network Measurement
- Abstract
The computer networks research community lacks shared measurement data. As a consequence, most researchers need to spend a considerable part of their time planning and executing measurements before being able to perform their studies. The lack of shared data also makes it hard to compare and validate results. This report describes our efforts to distribute a portion of our network data through the Simpleweb/University of Twente Traffic Traces Data Repository.
- Published
- 2010
19. Internet Bad Neighborhoods Temporal Behavior
- Author
- Moreira Moura, G.C., Sadre, R., and Pras, A.
- Abstract
Malicious hosts tend to be concentrated in certain areas of the IP addressing space, forming the so-called Bad Neighborhoods. Knowledge about this concentration is valuable in predicting attacks from unseen IP addresses. This observation has been employed in previous works to filter out spam. In this paper, we focus on the temporal behavior of bad neighborhoods. The goal is to determine if bad neighborhoods strike multiple times over a certain period of time, and if so, when the attacks occur. Among other findings, we show that even though bad neighborhoods do not exhibit a favorite combination of days to carry out attacks, 85% of the recurrent bad neighborhoods do carry out a second attack within the first 5 days from the first attack. These and the other findings presented here lead to several considerations on how attack prediction models can be made more effective, i.e., by generating both predictive and short neighborhood blacklists.
- Published
- 2014
20. Taking on Internet Bad Neighborhoods
- Author
- Moreira Moura, G.C., Sadre, R., and Pras, A.
- Abstract
It is a known fact that malicious IP addresses are not evenly distributed over the IP addressing space. In this paper, we frame networks concentrating malicious addresses as bad neighborhoods. We propose a formal definition, show that this concentration can be used to predict future attacks (new spamming sources, in our case), and propose an algorithm to aggregate individual IP addresses into bigger neighborhoods. Moreover, we show how bad neighborhoods are specific to the exploited application (e.g., spam, ssh) and how the performance of different blacklist sources impacts lightweight spam filtering algorithms.
- Published
- 2014
21. Bad Neighborhoods on the Internet
- Author
- Moreira Moura, G.C., Sadre, R., and Pras, A.
- Abstract
Analogous to the real world, sources of malicious activities on the Internet tend to be concentrated in certain networks instead of being evenly distributed. In this article, we formally define and frame such areas as Internet Bad Neighborhoods. By extending the reputation of malicious IP addresses to their neighbors, the bad neighborhood approach ultimately enables attack prediction from unforeseen addresses. We investigate spam and phishing bad neighborhoods, and show how their underlying business models, counter-intuitively, impact the location of the neighborhoods (both geographically and in the IP addressing space). We also show how bad neighborhoods are highly concentrated at a few Internet Service Providers and discuss how our findings can be employed to improve current network and spam filters and incentivize botnet mitigation initiatives.
- Published
- 2014
22. Scalability of Networks and Services: Proceedings of the Third International Conference on Autonomous Infrastructure, Management and Security (AIMS 2009)
- Author
- Sadre, R. and Pras, Aiko
- Subjects
- METIS-263926 ,EWI-15718 ,IR-67812
- Published
- 2009
23. Self-Management of Hybrid Networks: Can We Trust NetFlow Data?
- Author
- Fioreze, Tiago, Granville, Lisandro Zambenedetti, Pras, Aiko, Sperotto, Anna, and Sadre, R.
- Subjects
- Computer science ,Network packet ,business.industry ,Quality of service ,Data security ,Sampling (statistics) ,METIS-263864 ,Context (language use) ,Information security ,EWI-15393 ,NetFlow ,Duration (project management) ,business ,IR-65501 ,Computer network
- Abstract
Network measurement provides vital information on the health of managed networks. The collection of network information can serve several purposes (e.g., accounting or security) depending on what the collected data will be used for. At the University of Twente (UT), an automatic decision process for hybrid networks that relies on collected network information has been investigated. This approach, called self-management of hybrid networks, requires information retrieved from measuring processes in order to automatically decide on establishing/releasing lambda-connections for IP flows that are long in duration and big in volume (known as elephant flows). Nonetheless, the employed measurement technique can break the self-management decisions if the reported information does not accurately describe the actual behavior and characteristics of the observed flows. Within this context, this paper presents an investigation of the trustworthiness of measurements performed using the popular NetFlow monitoring solution, with a particular focus on elephant flows. We primarily focus on the use of NetFlow with sampling to collect network information and investigate how reliable such information is for the self-management processes. This is important because the self-management approach decides which flows should be offloaded to the optical level based on the current state of the network and its running flows. We observe three specific flow metrics: octets, packets, and flow duration. Our analysis shows that NetFlow provides reliable information regarding octets and packets. On the other hand, the flow duration reported when sampling is employed tends to be shorter than the actual duration.
- Published
- 2009
- Full Text
- View/download PDF
24. A Fixed-Point Algorithm for Closed Queueing Networks
- Author
- Sadre, R., Haverkort, Boudewijn R.H.M., Reinelt, Patrick, and Wolter, K.
- Subjects
- Scheme (programming language) ,Queueing theory ,Mathematical optimization ,Computer science ,Perspective (graphical) ,EWI-11260 ,Approximation error ,METIS-242205 ,Mean value analysis ,Layered queueing network ,Fixed point algorithm ,IR-64424 ,computer ,computer.programming_language
- Abstract
In this paper we propose a new efficient iterative scheme for solving closed queueing networks with phase-type service time distributions. The method is especially efficient and accurate in case of large numbers of nodes and large customer populations. We present the method, put it in perspective, and validate it through a large number of test scenarios. In most cases, the method provides accuracies within 5% relative error (in comparison to discrete-event simulation).
- Published
- 2007
- Full Text
- View/download PDF
25. A Class-Based Least-Recently-Used Caching Algorithm for WWW Proxies
- Author
- Khayari el Abdouni, Rachid, Sadre, R., Haverkort, Boudewijn R.H.M., Kemper, P., and Sanders, W.H.
- Subjects
- Hardware_MEMORYSTRUCTURES ,METIS-213933 ,IR-46038
- Abstract
In this paper we study and analyze the influence of caching strategies on the performance of WWW proxies. We propose a new strategy called class-based LRU that works recency-based as well as size-based, with the ultimate aim of obtaining a well-balanced mixture of large and small documents in the cache, and hence good performance for both small and large object requests. We show that class-based LRU obtains good results for both the hit rate and the byte hit rate, if the sizes of the classes and the corresponding document size ranges are well chosen. The latter is achieved by using a Bayesian decision rule and a characterisation of the requested object-size distribution using the EM-algorithm. Furthermore, the overhead to implement class-based LRU is comparable to that of LRU and does not depend on the number of cached objects.
- Published
- 2003
26. A Class-Based Weighted-Fair Queueing Algorithm for WWW Proxy Scheduling
- Author
- El Abdouni Khayari, R., Sadre, R., Haverkort, Boudewijn R.H.M., and Zoschke, N.
- Subjects
- EWI-890
- Published
- 2002
27. Lightweight link dimensioning using sFlow sampling
- Author
- Schmidt, R. de O., primary, Sadre, R., additional, Sperotto, A., additional, and Pras, A., additional
- Published
- 2013
- Full Text
- View/download PDF
28. The effects of DDoS attacks on flow monitoring applications
- Author
- Sadre, R., primary, Sperotto, A., additional, and Pras, A., additional
- Published
- 2012
- Full Text
- View/download PDF
29. Internet bad neighborhoods aggregation
- Author
- Moura, G. C. M., primary, Sadre, R., additional, Sperotto, A., additional, and Pras, A., additional
- Published
- 2012
- Full Text
- View/download PDF
30. A first look into SCADA network traffic
- Author
- Barbosa, R. R. R., primary, Sadre, R., additional, and Pras, A., additional
- Published
- 2012
- Full Text
- View/download PDF
31. Internet Bad Neighborhoods: The spam case.
- Author
- Moura, G.C.M., Sadre, R., and Pras, A.
- Published
- 2011
32. Self-management of hybrid networks: Can we trust netflow data?
- Author
- Fioreze, T., Granville, L.Z., Pras, A., Sperotto, A., and Sadre, R.
- Published
- 2009
- Full Text
- View/download PDF
33. Fitting heavy-tailed HTTP traces with the new stratified EM-algorithm.
- Author
- Sadre, R. and Haverkort, B.R.
- Published
- 2008
- Full Text
- View/download PDF
34. Improving Network Services' Resilience using Independent Configuration Replication
- Author
- Lopes, Miguel, António Costa, Dias, Bruno, Deturck, F., Diao, Y., Hong, Cs, Medhi, D., and Sadre, R.
35. A validation of the pseudo self-similar traffic model
- Author
- El Abdouni Khayari, R., primary, Sadre, R., additional, and Haverkort, B.R., additional
- Full Text
- View/download PDF
36. A framework for integrated configuration management tools
- Author
- Vanbrabant, Bart, Joosen, Wouter, DeTurck, F, Diao, Y, Hong, CS, Medhi, D, and Sadre, R
- Abstract
IT infrastructure management is becoming increasingly complex due to the increasing size and complexity of IT infrastructures. This complexity leads to errors and subsequently to failure. Configuration errors caused by operator errors are a significant contributor to downtime. These errors are often introduced by failing to update all dependent configuration parameters. Interdependencies between parameters are often not explicitly documented and exist at all layers in an infrastructure. Existing configuration management tools fail to capture all relevant relations and as such are prone to introducing inconsistencies. In this paper we present the Infrastructure Management Platform (IMP), a framework for building integrated configuration management tools. IMP models all relevant interdependencies between parameters to support consistency when updating a configuration. IMP is an integrated management framework, managing relations between configuration parameters across all layers. Additionally it reuses existing management interfaces and deployment agents to facilitate integrated management. We have validated IMP in several case studies by implementing reusable configuration modules, and have demonstrated that tools built on top of IMP can significantly increase the abstraction level in which an infrastructure is configured. © 2013 IFIP.
- Published
- 2013
37. Automated allocation and configuration of dual stack IP networks
- Author
-
Daniels, Wilfried, Vanbrabant, Bart, Hughes, Danny, Joosen, Wouter, DeTurck, F, Diao, Y, Hong, CS, Medhi, D, and Sadre, R
- Abstract
The manual configuration and management of a modern network infrastructure is an increasingly complex task. This complexity is caused by factors including heterogeneity, a high degree of change, and dependencies between configuration parameters. Due to increasing complexity, manual configuration has become time consuming and error prone. This paper proposes an automatic configuration tool for dual stack IP networks that addresses these issues by using high level abstractions to model the network topology and key parameters. From this high level configuration model, low level configuration files can be generated and deployed. A key parameter specified in the high level model is the network prefix of the entire network. When translating a configuration model to configuration files, IPv4 and IPv6 subnets need to be allocated. We provide an algorithm that performs this allocation efficiently. Evaluation of our approach shows a significant increase in efficiency compared to manual configuration. In: IFIP/IEEE International Symposium on Integrated Network Management (IM'13) / IFIP/IEEE Management of the Future Internet (ManFI'13), Ghent, Belgium, 27-31 May 2013, pp. 1148-1153.
- Published
- 2013
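The subnet allocation step described in the abstract above can be sketched with Python's standard `ipaddress` module. This is a minimal greedy sketch under assumed semantics (largest request first, sequential carving with alignment), not the paper's published algorithm:

```python
import ipaddress
import math

def allocate_subnets(parent_prefix, host_counts):
    """Carve one subnet per requested host count out of parent_prefix.

    Allocating the largest subnets first keeps the cursor naturally
    aligned and avoids fragmenting the parent block (an illustrative
    heuristic, not the algorithm from the paper).
    """
    parent = ipaddress.ip_network(parent_prefix)
    allocations = []
    cursor = int(parent.network_address)
    for hosts in sorted(host_counts, reverse=True):
        needed = hosts + 2  # reserve network + broadcast (IPv4 convention)
        plen = parent.max_prefixlen - math.ceil(math.log2(needed))
        size = 2 ** (parent.max_prefixlen - plen)
        # Align the cursor to the subnet size (required for a valid prefix).
        cursor = (cursor + size - 1) // size * size
        net = type(parent)((cursor, plen))
        if not net.subnet_of(parent):
            raise ValueError("parent prefix exhausted")
        allocations.append(net)
        cursor += size
    return allocations
```

For example, requests for 100, 50, and 20 hosts inside 10.0.0.0/16 yield a /25, a /26, and a /27, packed back to back.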
38. Minimizing the impact of delay on live SVC-based HTTP adaptive streaming services
- Author
-
Bouten, N., Latre, S., Jeroen Famaey, Turck, F., Leekwijck, W., DeTurck, F, Diao, Y, Hong, CS, Medhi, D, and Sadre, R
- Subjects
Technology and Engineering ,IBCN - Abstract
HTTP Adaptive Streaming (HAS) is becoming the de-facto standard for Over-The-Top video streaming services. Video content is temporally split into segments which are offered at multiple qualities to the clients. These clients autonomously select the quality layer matching the current state of the network through a quality selection heuristic. Recently, academia and industry have begun evaluating the feasibility of adopting layered video coding for HAS. Instead of downloading one file for a certain quality level, scalable video streaming requires downloading several interdependent layers to obtain the same quality. This implies that the base layer is always downloaded and is available for playout, even when throughput fluctuates and enhancement layers cannot be downloaded in time. This layered video approach can help in providing better service quality assurance for video streaming. However, adopting scalable video coding for HAS also leads to other issues, since requesting multiple files over HTTP increases the impact of end-to-end delay on the service provided to the client. This is even worse in a live TV scenario, where the drift on the live signal should be minimized, requiring smaller segment and buffer sizes. In this paper, we characterize the impact of delay on several measurement-based heuristics. Furthermore, we propose several ways to overcome these end-to-end delay issues, such as parallel and pipelined downloading of segment layers, to provide a higher quality for the video service.
- Published
- 2013
39. PeerVote: A Decentralized Voting Mechanism for P2P Collaboration Systems
- Author
-
Burkhard Stiller, Dalibor Peric, Fabio Hecht, David Hausheer, Thomas Bocek, University of Zurich, Sadre, R, Pras, A, and Bocek, T
- Subjects
Computer science ,10009 Department of Informatics ,media_common.quotation_subject ,Distributed computing ,Fault tolerance ,Load balancing (computing) ,Recommender system ,000 Computer science, knowledge & systems ,Computer security ,computer.software_genre ,Central authority ,If and only if ,Robustness (computer science) ,Voting ,Scalability ,1700 General Computer Science ,2614 Theoretical Computer Science ,computer ,media_common - Abstract
Peer-to-peer (P2P) systems achieve scalability, fault tolerance, and load balancing with a low-cost infrastructure, characteristics from which collaboration systems, such as Wikipedia, can benefit. A major challenge in P2P collaboration systems is to maintain article quality after each modification in the presence of malicious peers. A way of achieving this goal is to allow modifications to take effect only if a majority of previous editors approve the changes through voting. The absence of a central authority makes voting a challenge in P2P systems. This paper proposes the fully decentralized voting mechanism PeerVote, which enables users to vote on modifications in articles in a P2P collaboration system. Simulations and experiments show the scalability and robustness of PeerVote, even in the presence of malicious peers.
- Published
- 2009
- Full Text
- View/download PDF
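The majority-approval rule that PeerVote decentralizes can be illustrated in a few lines of Python. The sketch below captures only the voting predicate (missing votes count as abstentions, an assumption of this illustration); the paper's actual contribution is carrying out this vote without a central authority:

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    text: str
    editors: set = field(default_factory=set)  # ids of previous editors

def propose_edit(article, author, new_text, votes):
    """Apply new_text only if a strict majority of the article's
    previous editors approve. `votes` maps editor id -> True/False;
    editors who did not vote count as abstentions (a simplification
    of this sketch, not a rule from the PeerVote paper)."""
    voters = set(article.editors)
    if voters:  # an article with edit history requires majority approval
        approvals = sum(1 for editor in voters if votes.get(editor))
        if approvals * 2 <= len(voters):
            return False  # modification rejected
    article.text = new_text
    article.editors.add(author)
    return True
```

With three previous editors, two approvals suffice; a lone approval out of four does not.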
40. PeerCollaboration
- Author
-
Bocek, T, Stiller, B, University of Zurich, Sadre, R, Pras, A, and Bocek, T
- Subjects
10009 Department of Informatics ,1700 General Computer Science ,000 Computer science, knowledge & systems ,2614 Theoretical Computer Science - Published
- 2009
- Full Text
- View/download PDF
41. A market-based pricing scheme for grid networks
- Author
-
Peng Gao, Tao Liu, Xingyao Wu, David Hausheer, University of Zurich, Sadre, R, Pras, A, and Gao, Peng
- Subjects
Market based ,10009 Department of Informatics ,Computer science ,Distributed computing ,05 social sciences ,050801 communication & media studies ,020207 software engineering ,02 engineering and technology ,000 Computer science, knowledge & systems ,Load balancing (computing) ,Grid ,0508 media and communications ,Robustness (computer science) ,0202 electrical engineering, electronic engineering, information engineering ,1700 General Computer Science ,2614 Theoretical Computer Science - Abstract
This paper presents a new market-based pricing scheme which aims to improve the link load balance in Grid networks. Simulation results show that the proposed scheme achieves a better link load balance and, thus, improves the network's robustness and stability. At the same time, the scheme increases the network's effective capacity, as it makes it possible to accommodate more new services. The results show that the proposed scheme indeed leads to a more efficient usage of network resources.
- Published
- 2009
- Full Text
- View/download PDF
42. Answering Queries Using Cooperative Semantic Caching
- Author
-
Andrei Vancea, Burkhard Stiller, University of Zurich, Sadre, R, Pras, A, and Vancea, A
- Subjects
10009 Department of Informatics ,Computer science ,View ,000 Computer science, knowledge & systems ,computer.software_genre ,Query language ,Query optimization ,Query expansion ,Web query classification ,Server ,Query by Example ,1700 General Computer Science ,2614 Theoretical Computer Science ,computer.programming_language ,Database server ,Hardware_MEMORYSTRUCTURES ,Information retrieval ,Web search query ,Database ,Materialized view ,InformationSystems_DATABASEMANAGEMENT ,Online aggregation ,Spatial query ,Sargable ,Cache ,computer - Abstract
Semantic caching is a technique for optimizing the evaluation of database queries by caching the results of previously answered queries at the client side and using these cached results when answering new queries. Before sending a query to the database server, the client first checks whether any cached query results semantically contain the new query or parts of it. If such cached results are found, they can be used when answering the new query. Otherwise, the query is answered by the database management server. This paper proposes to extend the general semantic caching mechanism by enabling clients to share their local semantic caches in a cooperative manner. If a particular query cannot be answered using the local cache, the system verifies whether there are other clients, located across the Internet, that are able to answer the query using the data stored in their caches. Such an approach increases the throughput of database servers, because servers will only receive queries that cannot be answered using the cooperative cache concept.
- Published
- 2009
- Full Text
- View/download PDF
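The containment test at the heart of semantic caching can be illustrated for the simple case of single-attribute range queries. This is a toy sketch (real semantic caches handle general predicates, partial overlaps, and remainder queries); the cooperative extension proposed in the paper would try peers' caches on a local miss before contacting the server:

```python
class SemanticCache:
    """Toy semantic cache for one-attribute range queries."""

    def __init__(self):
        self.entries = []  # list of ((lo, hi), rows)

    def lookup(self, lo, hi):
        # A cached result can answer the query if its range contains ours.
        for (cached_lo, cached_hi), rows in self.entries:
            if cached_lo <= lo and hi <= cached_hi:
                # Re-filter the cached rows down to the requested range.
                return [r for r in rows if lo <= r <= hi]
        return None  # cache miss: ask peers or the database server

    def store(self, lo, hi, rows):
        self.entries.append(((lo, hi), rows))
```

A query for [20, 50] is answered locally once [0, 100] has been cached; [50, 150] is not contained and must be forwarded.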
43. A demonstration of automatic bootstrapping of resilient OpenFlow networks
- Author
-
Sharma, Sachin, Staessens, Dimitri, Colle, Didier, Mario Pickavet, Demeester, Piet, De Turck, F, Diao, Y, Hong, CS, Medhi, D, and Sadre, R
- Subjects
Technology and Engineering ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,IBCN - Abstract
OpenFlow has disruptive potential in designing a flexible network that fosters innovation, reduces complexity, and delivers the right economics. The core idea of OpenFlow is to decouple the control plane functionality from switches, and to embed it into one or more servers called controllers. One of the challenges of OpenFlow is to deploy a network where control and data traffic are transmitted on the same channel. Implementing such a network is complex, since switches have to search and establish a path to the controller (bootstrapping) through the other switches in the network. We implemented this automatic bootstrapping of switches by using an algorithm where the controller establishes a path through the neighbor switches that are connected to it by the OpenFlow protocol. In the demonstration, we show this by using a GUI (Graphical User Interface) placed at the controller. Additionally, in the GUI, the OpenFlow switch topology gathered during bootstrapping is shown. During the demonstration, we insert a failure condition in one of the links in the topology and show failure recovery by a change in the GUI.
44. Energy-Aware Adaptive Network Resource Management
- Author
-
Marinos Charalambides, Tuncer, D., Mamatas, L., Pavlou, G., DeTurck, F, Diao, Y, Hong, CS, Medhi, D, and Sadre, R
- Subjects
online traffic engineering ,Technology ,Science & Technology ,Engineering ,green networking ,Computer Science, Theory & Methods ,Computer Science ,decentralized resource management ,Engineering, Electrical & Electronic ,virtualized routing planes ,bundled links - Abstract
Resource over-provisioning is common practice in network infrastructures. Coupled with energy unaware networking protocols, this can lead to periods of resource underutilization and constant energy consumption irrespective of the traffic load conditions. Driven by the rising cost of energy (and therefore OPEX) and increasing environmental consciousness, research in power saving techniques has recently received significant attention. Unlike the majority of previous work in the area, which has focused on centralized offline approaches, in this paper we propose an online approach by which the capacity of the network can be adapted in a decentralized fashion. Based on the modular architectures of modern IP routers, adaptation is achieved by configuring individual line cards to enter sleep mode. Re-configuration is performed periodically by intelligent ingress nodes that coordinate their actions in order to control the traffic distribution in the network, according to the actual demand. We evaluate our approach using real network topologies and traffic traces. In the case of the GEANT network, the proposed approach can, on average, reduce the energy to power the required line cards by 46% for a maximum utilization below 65%, and by 18% under heavily loaded conditions.
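The per-link adaptation idea (keeping only as many line cards of a bundled link awake as current demand requires) reduces to a small calculation. The 65% utilization ceiling below echoes the figure quoted in the abstract, but the per-card model is an assumption of this sketch, not the paper's reconfiguration logic:

```python
import math

def cards_to_keep_awake(demand_bps, card_capacity_bps, total_cards,
                        max_utilization=0.65):
    """Number of line cards of a bundled link that must stay active so
    that their utilization stays below max_utilization; the remaining
    cards can be put into sleep mode. Illustrative sketch only."""
    needed = math.ceil(demand_bps / (card_capacity_bps * max_utilization))
    # Keep at least one card up, and never more than the link has.
    return min(max(needed, 1), total_cards)
```

For a 4 x 10 Gb/s bundle carrying 10 Gb/s of demand, two cards stay awake and two can sleep under this model.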
45. On the merits of SVC-based HTTP Adaptive Streaming
- Author
-
Jeroen Famaey, Latre, S., Bouten, N., Meerssche, W., Vleeschauwer, B., Leekwijck, W., Turck, F., DeTurck, F, Diao, Y, Hong, CS, Medhi, D, and Sadre, R
- Subjects
Technology and Engineering ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,H.264/AVC ,IBCN ,EXTENSION - Abstract
HTTP Adaptive Streaming (HAS) is quickly becoming the dominant type of video streaming in Over-The-Top multimedia services. HAS content is temporally segmented and each segment is offered in different video qualities to the client. It enables a video client to dynamically adapt the consumed video quality to match the capabilities of the network and/or the client's device. As such, the use of HAS allows a service provider to offer video streaming over heterogeneous networks and to heterogeneous devices. Traditionally, the H.264/AVC video codec is used for encoding the HAS content: for each offered video quality, a separate AVC video file is encoded. Obviously, this leads to considerable storage redundancy at the video server, as each video is available in a multitude of qualities. The recent Scalable Video Coding (SVC) extension of H.264/AVC allows encoding a video into different quality layers: by downloading one or more additional layers, the video quality can be improved. While this leads to an immediate reduction of the required storage at the video server, the impact of using SVC-based HAS on the network and on the quality perceived by the user is less obvious. In this article, we characterize the performance of AVC- and SVC-based HAS in terms of perceived video quality, network load and client characteristics, with the goal of identifying the advantages and disadvantages of both options.
46. Designer oleosins boost oil accumulation in plant biomass.
- Author
-
Sadre R
- Subjects
- Plant Proteins metabolism, Plant Proteins genetics, Biomass, Plant Oils metabolism
- Published
- 2024
- Full Text
- View/download PDF
47. Plant synthetic biology for human health: advances in producing medicines in heterologous expression systems.
- Author
-
Sadre R
- Subjects
- Humans, Plants, Medicinal metabolism, Plants, Medicinal genetics, Plants, Medicinal chemistry, Plants, Genetically Modified genetics, Plants, Genetically Modified metabolism, Plants metabolism, Plants genetics, Metabolic Engineering methods, Synthetic Biology methods, Biological Products metabolism
- Abstract
Plant synthetic biology has the capability to provide solutions to global challenges in the production and supply of medicines. Recent advances in 'omics' technologies have accelerated gene discoveries in medicinal plant research, so that even multistep biosynthetic pathways for bioactive plant natural products with high structural complexity can be reconstituted in heterologous plant expression systems more rapidly. This review provides an overview of the concepts and strategies used to produce high-value plant natural products in heterologous plant systems and highlights recent successes in engineering the biosynthesis of conventional and new medicines in alternative plant hosts.
- Published
- 2024
- Full Text
- View/download PDF
48. Metabolomics-guided discovery of cytochrome P450s involved in pseudotropine-dependent biosynthesis of modified tropane alkaloids.
- Author
-
Sadre R, Anthony TM, Grabar JM, Bedewitz MA, Jones AD, and Barry CS
- Subjects
- Cytochrome P-450 Enzyme System genetics, Metabolomics, Alkaloids chemistry, Tropanes metabolism
- Abstract
Plant alkaloids constitute an important class of bioactive chemicals with applications in medicine and agriculture. However, the knowledge gap of the diversity and biosynthesis of phytoalkaloids prevents systematic advances in biotechnology for engineered production of these high-value compounds. In particular, the identification of cytochrome P450s driving the structural diversity of phytoalkaloids has remained challenging. Here, we use a combination of reverse genetics with discovery metabolomics and multivariate statistical analysis followed by in planta transient assays to investigate alkaloid diversity and functionally characterize two candidate cytochrome P450s genes from Atropa belladonna without a priori knowledge of their functions or information regarding the identities of key pathway intermediates. This approach uncovered a largely unexplored root localized alkaloid sub-network that relies on pseudotropine as precursor. The two cytochrome P450s catalyze N-demethylation and ring-hydroxylation reactions within the early steps in the biosynthesis of diverse N-demethylated modified tropane alkaloids.
- Published
- 2022
- Full Text
- View/download PDF
49. Rotational dynamics and transition mechanisms of surface-adsorbed proteins.
- Author
-
Zhang S, Sadre R, Legg BA, Pyles H, Perciano T, Bethel EW, Baker D, Rübel O, and De Yoreo JJ
- Subjects
- Aluminum Silicates chemistry, Diffusion, Machine Learning, Microscopy, Atomic Force, Monte Carlo Method, Solutions, Surface Properties, Nanotubes chemistry, Proteins chemistry, Rotation
- Abstract
Assembly of biomolecules at solid–water interfaces requires molecules to traverse complex orientation-dependent energy landscapes through processes that are poorly understood, largely due to the dearth of in situ single-molecule measurements and statistical analyses of the rotational dynamics that define directional selection. Emerging capabilities in high-speed atomic force microscopy and machine learning have allowed us to directly determine the orientational energy landscape and observe and quantify the rotational dynamics for protein nanorods on the surface of muscovite mica under a variety of conditions. Comparisons with kinetic Monte Carlo simulations show that the transition rates between adjacent orientation-specific energetic minima can largely be understood through traditional models of in-plane Brownian rotation across a biased energy landscape, with resulting transition rates that are exponential in the energy barriers between states. However, transitions between more distant angular states are decoupled from barrier height, with jump-size distributions showing a power law decay that is characteristic of a nonclassical Lévy-flight random walk, indicating that large jumps are enabled by alternative modes of motion via activated states. The findings provide insights into the dynamics of biomolecules at solid–liquid interfaces that lead to self-assembly, epitaxial matching, and other orientationally anisotropic outcomes and define a general procedure for exploring such dynamics with implications for hybrid biomolecular–inorganic materials design.
- Published
- 2022
- Full Text
- View/download PDF
50. Validating deep learning inference during chest X-ray classification for COVID-19 screening.
- Author
-
Sadre R, Sundaram B, Majumdar S, and Ushizima D
- Subjects
- COVID-19 epidemiology, COVID-19 virology, Humans, Lung diagnostic imaging, Lung virology, Neural Networks, Computer, Pandemics, Reproducibility of Results, SARS-CoV-2 physiology, Sensitivity and Specificity, X-Rays, Algorithms, COVID-19 diagnosis, Deep Learning, Radiography, Thoracic methods
- Abstract
The new coronavirus unleashed a worldwide pandemic in early 2020, with a fatality rate several times that of the flu. As the number of infections soared and testing capabilities lagged behind, chest X-ray (CXR) imaging became more relevant in the early diagnosis and treatment planning for patients with suspected or confirmed COVID-19 infection. Within a few weeks, proposed new methods for lung screening using deep learning rapidly appeared, while quality assurance discussions lagged behind. This paper proposes a set of protocols to validate deep learning algorithms, including our ROI Hide-and-Seek protocol, which emphasizes or hides key regions of interest from CXR data. Our protocol allows assessing the classification performance for anomaly detection and its correlation to radiological signatures, an important issue overlooked in several deep learning approaches proposed so far. By running a set of systematic tests over CXR representations using public image datasets, we demonstrate the weaknesses of current techniques and offer perspectives on the advantages and limitations of automated radiography analysis when using heterogeneous data sources.
- Published
- 2021
- Full Text
- View/download PDF
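The ROI Hide-and-Seek idea, hiding or emphasizing a region of interest before re-running the classifier, can be sketched with NumPy array masking. The function below is inferred from the abstract's description, not taken from the authors' released code, and uses a simple rectangular ROI as an assumption:

```python
import numpy as np

def hide_roi(image, box, mode="hide", fill=0.0):
    """Mask a chest X-ray in the spirit of ROI Hide-and-Seek:
    'hide' blanks out the region of interest; 'seek' keeps only the
    region of interest and blanks everything else.
    box = (row0, row1, col0, col1), half-open ranges."""
    r0, r1, c0, c1 = box
    if mode == "hide":
        out = image.copy()
        out[r0:r1, c0:c1] = fill      # remove the lung region
    else:
        out = np.full_like(image, fill)
        out[r0:r1, c0:c1] = image[r0:r1, c0:c1]  # keep only the region
    return out
```

Feeding both variants to a trained classifier probes what it relies on: a model using genuine radiological signal should lose accuracy when the ROI is hidden and retain it when only the ROI is kept.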