12 results for "Algorithms -- Measurement"
Search Results
2. Optimal volume anomaly detection and isolation in large-scale IP networks using coarse-grained measurements
- Author
-
Casas, P., Vaton, S., Fillatre, L., and Nikiforov, I.
- Subjects
Algorithm, TCP/IP, Mathematical optimization -- Measurement, Mathematical optimization -- Models, Mathematical optimization -- Analysis, Computer networks -- Measurement, Computer networks -- Models, Computer networks -- Analysis, Information networks -- Measurement, Information networks -- Models, Information networks -- Analysis, Traffic congestion -- Measurement, Traffic congestion -- Models, Traffic congestion -- Analysis, Algorithms -- Measurement, Algorithms -- Models, Algorithms -- Analysis, Transmission Control Protocol/Internet Protocol (Computer network protocol) -- Measurement, Transmission Control Protocol/Internet Protocol (Computer network protocol) -- Models, Transmission Control Protocol/Internet Protocol (Computer network protocol) -- Analysis
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2010.01.013 Byline: P. Casas (a)(c), S. Vaton (a), L. Fillatre (b), I. Nikiforov (b) Keywords: Network Monitoring and Traffic Analysis; Traffic Matrix; Network Traffic Modeling; Optimal Volume Anomaly Detection and Isolation Abstract: Recent studies from major network technology vendors forecast the advent of the Exabyte era, a massive increase in network traffic driven by high-definition video and high-speed access technology penetration. One of the most formidable difficulties that this forthcoming scenario poses for the Internet is congestion problems due to traffic volume anomalies at the core network. In the light of this challenging near future, we develop in this work different network-wide anomaly detection and isolation algorithms to deal with volume anomalies in large-scale network traffic flows, using coarse-grained measurements as a practical constraint. These algorithms present well-established optimality properties in terms of false alarm and miss detection rate, or in terms of detection/isolation delay and false detection/isolation rate, a feature absent in previous works. This represents a paramount advantage with respect to current in-house methods, as it allows results to be generalized independently of particular evaluations. The detection and isolation algorithms are based on a novel linear, parsimonious, and non-data-driven spatial model for a large-scale network traffic matrix. This model allows detecting and isolating anomalies in the Origin-Destination traffic flows from aggregated measurements, reducing the overhead and avoiding the challenges of direct flow measurement. Our proposals are analyzed and validated using real traffic and network topologies from three different large-scale IP backbone networks.
Author Affiliation: (a) Télécom Bretagne, Brest, France (b) Université de Technologie de Troyes, Troyes, France (c) Universidad de la República, Montevideo, Uruguay Article History: Received 10 August 2009; Revised 13 January 2010; Accepted 23 January 2010 Article Note: (miscellaneous) Responsible Editor: A. Popescu
- Published
- 2010
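The sequential detection and isolation framing in entry 2 can be illustrated with a generic CUSUM-style change detector. This is a stand-in for, not a reproduction of, the paper's optimal tests; the link-load series and tuning constants below are invented.

```python
# Generic CUSUM-style sequential detector (illustrative only, not the
# paper's algorithm). An alarm fires when cumulative positive
# deviations of a link-load series from its nominal mean exceed a
# threshold.

def cusum_alarm(series, mean, drift, threshold):
    """Return the index of the first alarm, or None if none fires."""
    g = 0.0  # cumulative positive-deviation statistic
    for i, y in enumerate(series):
        # The drift term discounts small fluctuations; g resets toward
        # zero while traffic stays near its nominal mean.
        g = max(0.0, g + (y - mean) - drift)
        if g > threshold:
            return i
    return None

loads = [10, 11, 9, 10, 30, 31, 32, 33]  # synthetic link loads; jump at index 4
alarm_at = cusum_alarm(loads, mean=10.0, drift=1.0, threshold=25.0)
```

Here the alarm fires at index 5, one sample after the jump; the threshold trades detection delay against false-alarm rate, which is exactly the tradeoff the paper's algorithms optimize.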
3. Improved modeling of IEEE 802.11a PHY through fine-grained measurements
- Author
-
Lee, Jeongkeun, Ryu, Jiho, Lee, Sung-Ju, and Kwon, Ted Taekyoung
- Subjects
Wireless LAN/WAN system, Wireless network, Algorithm, Chipset, Chip sets (Computers) -- Measurement, Chip sets (Computers) -- Models, Chip sets (Computers) -- Analysis, Wireless local area networks (Computer networks) -- Measurement, Wireless local area networks (Computer networks) -- Models, Wireless local area networks (Computer networks) -- Analysis, Algorithms -- Measurement, Algorithms -- Models, Algorithms -- Analysis, Wi-Fi -- Measurement, Wi-Fi -- Models, Wi-Fi -- Analysis
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2009.08.003 Byline: Jeongkeun Lee (a), Jiho Ryu (b), Sung-Ju Lee (a), Ted Taekyoung Kwon (b) Keywords: IEEE 802.11; Physical layer capture; Interference; Carrier sense; Simulation model Abstract: In wireless networks, modeling of the physical layer behavior is an important yet difficult task. Modeling and estimating wireless interference is receiving great attention, and is crucial in a wireless network performance study. The physical layer capture, preamble detection, and carrier sense threshold are three key components that play important roles in successful frame reception in the presence of interference. Using our IEEE 802.11a wireless network testbed, we carry out a measurement study that reveals the detailed operation of each component and in particular we show the terms and conditions (interference timing, signal power difference, bitrate) under which a frame survives interference according to the preamble detection and capture logic. Based on the measurement study, we show that the operations of the three components in real IEEE 802.11a systems differ from those of popular simulators and present our modifications of the IEEE 802.11a PHY models to the NS-2 and QualNet network simulators. The modifications can be summarized as follows. (i) The current simulators' frame reception is based only on the received signal strength. However, real 802.11 systems can start frame reception only when the Signal-to-Interference Ratio (SIR) is high enough to detect the preamble. (ii) Different chipset vendors implement the frame reception and capture algorithms differently, resulting in different operations for the same event. We provide different simulation models for several popular chipset vendors and show the performance differences between the models. 
(iii) Based on the 802.11a standard setting and our testbed observation, we revise the simulator to set the carrier sense threshold higher than the receiver sensitivity rather than equal to the receiver sensitivity. We implement our modifications to the QualNet simulator and evaluate the impact of PHY model implementations on the wireless network performance; these result in an up to six times increase of net throughput. Author Affiliation: (a) Hewlett-Packard Laboratories, 1501 Page Mill Road, Palo Alto, CA 94304, USA (b) Seoul National University, Building 301, Room 553-1, 599 Gwanak-ro, Gwanak-gu, Seoul 151-742, Republic of Korea Article Note: (footnote) [star] This work was supported in part by NAP of Korea Research Council of Fundamental Science and Technology (KRCF) and the Ministry of Knowledge Economy, Korea, under the Information Technology Research Center support program supervised by the Institute of Information Technology Advancement. (Grant No. IITA-2009-C1090-0902-0006) The ICT at Seoul National University provides research facilities for this study.
- Published
- 2010
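The preamble-detection behavior described in point (i) of entry 3 can be sketched as a simple SIR check. The 4 dB threshold here is an invented placeholder, not a measured chipset value.

```python
# Toy model of SIR-gated preamble detection: a frame can begin
# reception only if its signal-to-interference ratio (SIR) clears a
# detection threshold. The threshold value is illustrative only.

def can_detect_preamble(signal_dbm, interference_dbm, sir_threshold_db=4.0):
    sir_db = signal_dbm - interference_dbm  # dB quantities subtract
    return sir_db >= sir_threshold_db

# A -60 dBm frame against -70 dBm interference has 10 dB of SIR, so
# the preamble is detected; against -58 dBm interference it is not.
detected = can_detect_preamble(-60, -70)
missed = not can_detect_preamble(-60, -58)
```

This is the contrast the paper draws with simulators that gate reception on received signal strength alone, ignoring the interference term.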
4. Measurements and simulations of nadir-viewing radar returns from the melting layer at X and W bands
- Author
-
Liao, Liang, Meneghini, Robert, Tian, Lin, and Heymsfield, Gerald M.
- Subjects
Snow -- Measurement, Snow -- Analysis, Rain and rainfall -- Measurement, Rain and rainfall -- Analysis, Meteorological research -- Measurement, Meteorological research -- Analysis, Radar systems -- Measurement, Radar systems -- Analysis, Clouds -- Measurement, Clouds -- Analysis, Algorithms -- Measurement, Algorithms -- Analysis, Radar meteorology -- Measurement, Radar meteorology -- Analysis, Algorithm, Earth sciences
- Abstract
Simulated radar signatures within the melting layer in stratiform rain--namely, the radar bright band--are checked by means of comparisons with simultaneous measurements of the bright band made by the ER-2 Doppler radar (EDOP; X band) and Cloud Radar System (CRS; W band) airborne Doppler radars during the Cirrus Regional Study of Tropical Anvils and Cirrus Layers-Florida-Area Cirrus Experiment (CRYSTAL-FACE) campaign in 2002. A stratified-sphere model, allowing the fractional water content to vary along the radius of the particle, is used to compute the scattering properties of individual melting snowflakes. Using the effective dielectric constants computed by the conjugate gradient-fast Fourier transform numerical method for X and W bands and expressing the fractional water content of a melting particle as an exponential function in particle radius, it is found that at X band the simulated radar brightband profiles are in excellent agreement with the measured profiles. It is also found that the simulated W-band profiles usually resemble the shapes of the measured brightband profiles even though persistent offsets between them are present. These offsets, however, can be explained by the attenuation caused by cloud water and water vapor at W band. This is confirmed by comparisons of the radar profiles made in the rain regions where the unattenuated W-band reflectivity profiles can be estimated through the X- and W-band Doppler velocity measurements. The brightband model described in this paper has the potential to be used effectively for both radar and radiometer algorithms relevant to the satellite-based Tropical Rainfall Measuring Mission and Global Precipitation Measuring Mission. DOI: 10.1175/2009JAMC2033.1
- Published
- 2009
5. Environmental externalities and efficiency measurement
- Author
-
Picazo-Tadeo, Andrés J. and Prior, Diego
- Subjects
Algorithms -- Measurement, Algorithms -- Analysis, Externalities (Economics) -- Measurement, Externalities (Economics) -- Analysis, Algorithm, Environmental issues
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jenvman.2009.05.015 Byline: Andrés J. Picazo-Tadeo (a), Diego Prior (b) Abstract: Production of desirable outputs often produces by-products that have harmful effects on the environment. This paper investigates technologies where the biggest good output producer is not the greatest polluter, i.e. technologies located on the downward-sloping segment of the frontier depicted in . Directional distance functions and Data Envelopment Analysis techniques are used to define an algorithm that allows them to be identified empirically. Furthermore, we show that in such situations producers can contribute social goods, i.e. reducing polluting wastes, without limiting their capacity to maximise production of marketable output. Finally, we illustrate our methodology with an empirical application to a sample of Spanish ceramic tile producers. Author Affiliation: (a) Universitat de València, Dpto. Economía Aplicada II, Avda. dels Tarongers, s/n. 46022. Valencia, Spain (b) Universitat Autònoma de Barcelona, Dpt. Economia de l'Empresa, 08193 Bellaterra, Barcelona, Spain Article History: Received 8 February 2008; Revised 7 April 2009; Accepted 7 May 2009
- Published
- 2009
6. Measurement and reduction of ISI in high-dynamic-range 1-bit signal generation
- Author
-
Gupta, Ajay K., Venkataraman, Jagadish, and Collins, Oliver M.
- Subjects
Algorithms -- Measurement, Algorithms -- Analysis, Circuit design -- Analysis, Circuit design -- Measurement, Algorithm, Circuit designer, Integrated circuit design, Business, Computers and office automation industries, Electronics, Electronics and electrical industries
- Abstract
This paper studies spurious signals produced by the nonlinear interaction of the previous output symbols of a digital-to-analog converter (DAC) with its current symbol. This effect, called nonlinear intersymbol interference (ISI), significantly degrades the spurious-free dynamic range of most high-speed DACs. Many papers have been devoted to suppressing level inaccuracies in multibit DACs. However, even when all levels are accurate, nonlinear ISI causes significant spurious output. This paper presents a simple and very general model for nonlinear ISI and uses it to design binary signals that can both measure and suppress the spurious tones that arise in a single-bit DAC. While the analysis in this paper is based on a 1-bit DAC, extension to multibit DACs is possible, since a multibit DAC is merely a collection of 1-bit DACs and exhibits the same nonlinear effects. Experimental verification is presented for three different hardware setups. Measurements first establish the presence of the spurious tones in the hardware, as predicted by the model, and then show how the spur level can be reduced by as much as 22 dB. Index Terms--Filter-amplifier model, list-decoding algorithm, nonlinear intersymbol interference (ISI), tone-injection, ΣΔ modulation.
- Published
- 2008
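A toy version of the effect entry 6 models: the DAC's current output level is perturbed by a term that depends on the previous symbol. The 0.1 coupling coefficient is invented for illustration; the paper's filter-amplifier model is more general.

```python
# Toy nonlinear-ISI model for a 1-bit DAC: the analog level for the
# current bit interacts multiplicatively with the previous symbol's
# level, so output levels depend on symbol history. The coupling
# coefficient is invented for illustration.

def dac_output(bits, isi_coeff=0.1):
    out = []
    prev = 0
    for b in bits:
        level = 1.0 if b else -1.0
        prev_level = 1.0 if prev else -1.0
        # Nonlinear ISI: product of previous and current symbol levels.
        out.append(level + isi_coeff * prev_level * level)
        prev = b
    return out

levels = dac_output([1, 1, 0])  # history-dependent levels, not just +/-1
```

Even though every nominal level is accurate, the history-dependent perturbation produces the spurious tones the paper sets out to measure and suppress.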
7. Scheduling algorithms for conducting conflict-free measurements in overlay networks
- Author
-
Fraiwan, M. and Manimaran, G.
- Subjects
Algorithm, Algorithms -- Measurement, Algorithms -- Analysis
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.comnet.2008.06.006 Byline: M. Fraiwan, G. Manimaran Keywords: Measurement scheduling; Overlay networks; Measurement conflict; Network monitoring Abstract: Network monitoring is essential to the correct and efficient operation of ISP networks and the kind of applications they support, and active measurement is a key design problem in network monitoring. Unfortunately, almost all active probing algorithms ignore the measurement conflict problem: Active measurements conflict with each other - due to the nature of these measurements, the associated overhead, and the network topology - which leads to reporting incorrect results. In this paper, we consider the problem of scheduling periodic QoS measurement tasks in overlay networks. We first show that this problem is NP-hard, and then propose two conflict-aware scheduling algorithms for uniform and non-uniform tasks whose goal is to maximize the number of measurement tasks that can run concurrently, based on a well-known approximation algorithm. We incorporate topological information to devise a topology-aware scheduling algorithm that reduces the number of time slots required to produce a feasible measurement schedule. Also, we study the conflict-causing overlap among overlay paths using various real-life Internet topologies of the two major service carriers in the US. Experiments conducted using the PlanetLab testbed confirm that measurement conflict is a real issue that needs to be addressed. Simulation results show that our algorithms achieve at least 25% better schedulability over an existing algorithm. Finally, we discuss various practical considerations, and identify several interesting research problems in this context. Author Affiliation: Real-Time Computing and Networking Laboratory, Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011, United States
- Published
- 2008
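Entry 7's conflict-free scheduling goal can be sketched as graph coloring. This is a generic greedy heuristic, not the authors' approximation algorithm, and the task graph below is made up.

```python
# Conflict-aware measurement scheduling as greedy graph coloring
# (a generic heuristic, not the paper's algorithm). An edge means two
# measurement tasks would bias each other if run concurrently, so
# they must land in different time slots.

def schedule_measurements(tasks, conflicts):
    """Map each task to a time slot; conflicting tasks get distinct slots."""
    adj = {t: set() for t in tasks}
    for a, b in conflicts:
        adj[a].add(b)
        adj[b].add(a)
    slot = {}
    # Scheduling the most-conflicted tasks first tends to keep the
    # total number of slots low.
    for t in sorted(tasks, key=lambda t: -len(adj[t])):
        used = {slot[n] for n in adj[t] if n in slot}
        s = 0
        while s in used:
            s += 1
        slot[t] = s
    return slot

tasks = ["t1", "t2", "t3", "t4"]
conflicts = [("t1", "t2"), ("t1", "t3"), ("t2", "t3")]  # t4 conflicts with no one
slots = schedule_measurements(tasks, conflicts)
```

The mutually conflicting trio needs three slots, while t4 can share any of them; packing as many tasks as possible into each slot is the NP-hard part the paper approximates.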
8. Measurement and Meaningfulness in Conservation Science
- Author
-
Wolman, Abel G.
- Subjects
Algorithms -- Analysis, Algorithms -- Measurement, Algorithm, Environmental issues, Zoology and wildlife conservation
- Abstract
To purchase or authenticate to the full-text of this article, please visit this link: http://dx.doi.org/10.1111/j.1523-1739.2006.00531.x Byline: ABEL G. WOLMAN (*) Keywords: biodiversity value; estimated data; expert judgment; measurement scale; systematic conservation planning Abstract: Incomplete databases often require conservation scientists to estimate data either through expert judgment or other scoring, rating, and ranking procedures. At the same time, ecosystem complexity has led to the use of increasingly sophisticated algorithms and mathematical models to aid in conservation theorizing, planning, and decision making. Understanding the limitations imposed by the scales of measurement of conservation data is important for the development of sound conservation theory and policy. In particular, biodiversity valuation methods, systematic conservation planning algorithms, geographic information systems (GIS), and other conservation metrics and decision-support tools, when improperly applied to estimated data, may lead to conclusions based on numerical artifact rather than empirical evidence. The representational theory of measurement is described here, and the description includes definitions of the key concepts of scale, scale type, and meaningfulness. Representational measurement is the view that measurement entails the faithful assignment of numbers to empirical entities. These assignments form scales that are organized into a hierarchy of scale types. A statement involving scales is meaningful if its truth value is invariant under changes of scale within scale type. I apply these concepts to three examples of measurement practice in the conservation literature. The results of my analysis suggest that conservation scientists do not always investigate the scale type of estimated data and hence may derive results that are not meaningful.
Recognizing the complexity of observation and measurement in conservation biology, and the constraints that measurement theory imposes, the examples are accompanied by suggestions for informal estimation of the scale type of conservation data and for conducting meaningful analysis and synthesis of this information. Author Affiliation: (*)AGW Consulting, Inc., 855 NW Lincoln Street, White Salmon, WA 98672-4326, U.S.A., email wolman@gorge.net Article History: Paper submitted November 30, 2005; revised manuscript accepted March 23, 2006.
- Published
- 2006
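Entry 8's notion of meaningfulness can be demonstrated in a few lines: a comparison of means is not invariant under an admissible (order-preserving) relabeling of ordinal scores, while a comparison of medians is. The scores below are invented.

```python
# Meaningfulness under scale type: ordinal data admit any monotone
# relabeling, so only statements whose truth value survives such
# relabelings are meaningful. Means can flip; medians cannot.
# The scores are invented for illustration.
from statistics import mean, median

site_a = [1, 1, 3]  # hypothetical ordinal habitat-quality scores
site_b = [2, 2, 2]
relabel = {1: 1, 2: 2, 3: 10}  # an admissible order-preserving transform

mean_flips = (mean(site_a) > mean(site_b)) != (
    mean([relabel[x] for x in site_a]) > mean([relabel[x] for x in site_b])
)
median_flips = (median(site_a) > median(site_b)) != (
    median([relabel[x] for x in site_a]) > median([relabel[x] for x in site_b])
)
# mean_flips ends up True (the verdict is a numerical artifact);
# median_flips ends up False (the verdict is meaningful).
```

This is the kind of check the paper suggests running informally before feeding estimated scores into valuation metrics or planning algorithms.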
9. A formal representation of functional size measurement methods
- Author
-
Heričko, Marjan, Rozman, Ivan, and Živković, Aleš
- Subjects
Algorithm, Algorithms -- Analysis, Algorithms -- Measurement, Algorithms -- Methods
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.jss.2005.11.568 Byline: Marjan Heričko, Ivan Rozman, Aleš Živković Keywords: Function points; Software size; Formal model; Object-oriented projects Abstract: Estimating software size is a difficult task that requires a methodological approach. Many different methods that exist today use distinct abstractions to depict a software system. The gap between abstractions becomes even greater with object-oriented artifacts developed in unified modeling language (UML). In this paper, a formal foundation for the representation of functional size measurement (FSM) methods is presented. The generalized abstraction of the software system (GASS) is then used to formalize different functional measurement methods, namely the FPA, MK II FPA and COSMIC-FFP. The same model is also used for object-oriented projects where UML artifacts are mapped into the GASS form. The algorithms in symbolic code for those UML diagrams that are crucial for size estimation are also given. The mappings defined in this paper enable diverse FSM methods to be supported in estimation tools, the automation of counting steps and a higher level of independence from the FSM method, since the software abstraction is written in a generalized form. Both improvements are crucial for the practical use of FSM methods. Author Affiliation: Faculty of Electrical Engineering and Computer Science, University of Maribor, Smetanova 17, SI-2000 Maribor, Slovenia Article History: Received 11 April 2005; Accepted 5 November 2005
- Published
- 2006
10. A combined neural network and DEA for measuring efficiency of large scale datasets
- Author
-
Emrouznejad, Ali and Shale, Estelle
- Subjects
Algorithm, Neural network, Algorithms -- Measurement, Algorithms -- Analysis, Neural networks -- Measurement, Neural networks -- Analysis
- Abstract
Data Envelopment Analysis (DEA) is one of the most widely used methods in the measurement of the efficiency and productivity of Decision Making Units (DMUs). DEA for a large dataset with many inputs/outputs would require huge computer resources in terms of memory and CPU time. This paper proposes a neural network back-propagation Data Envelopment Analysis to address this problem for the very large scale datasets now emerging in practice. Neural network requirements for computer memory and CPU time are far less than those needed by conventional DEA methods and can therefore be a useful tool in measuring the efficiency of large datasets. Finally, the back-propagation DEA algorithm is applied to five large datasets and compared with the results obtained by conventional DEA. Keywords: Neural networks; Data Envelopment Analysis; Large datasets; Back-propagation DEA
- Published
- 2009
11. A combined neural network and DEA for measuring efficiency of large scale datasets
- Author
-
Emrouznejad, Ali and Shale, Estelle
- Subjects
Algorithm, Semiconductor memory, Neural network, Algorithms -- Measurement, Algorithms -- Analysis, Memory (Computers) -- Measurement, Memory (Computers) -- Analysis, Management science -- Measurement, Management science -- Analysis, Neural networks -- Measurement, Neural networks -- Analysis
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.cie.2008.05.012 Byline: Ali Emrouznejad (a), Estelle Shale (b) Keywords: Neural networks; Data Envelopment Analysis; Large datasets; Back-propagation DEA Abstract: Data Envelopment Analysis (DEA) is one of the most widely used methods in the measurement of the efficiency and productivity of Decision Making Units (DMUs). DEA for a large dataset with many inputs/outputs would require huge computer resources in terms of memory and CPU time. This paper proposes a neural network back-propagation Data Envelopment Analysis to address this problem for the very large scale datasets now emerging in practice. Neural network requirements for computer memory and CPU time are far less than that needed by conventional DEA methods and can therefore be a useful tool in measuring the efficiency of large datasets. Finally, the back-propagation DEA algorithm is applied to five large datasets and compared with the results obtained by conventional DEA. Author Affiliation: (a) Operations & Information Management, Aston Business School, Aston University, Birmingham B4 7ET, UK (b) Operational Research and Management Sciences, Warwick Business School, University of Warwick, Coventry CV4 7AL, UK Article History: Received 26 January 2007; Revised 12 December 2007; Accepted 23 May 2008
- Published
- 2009
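The DEA model behind entries 10 and 11 reduces, in the single-input single-output case, to comparing each unit's output/input ratio against the best ratio observed; the general case solves one linear program per DMU, which is the cost the paper's neural-network approximation targets. The data below are made up.

```python
# Single-input, single-output CCR DEA sketch: efficiency is each
# DMU's output/input ratio normalized by the best ratio in the
# sample. The multi-input/output case needs a linear program per DMU.

def dea_single_ratio(inputs, outputs):
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three hypothetical DMUs: (input, output) = (2, 2), (4, 2), (4, 4).
effs = dea_single_ratio([2.0, 4.0, 4.0], [2.0, 2.0, 4.0])  # [1.0, 0.5, 1.0]
```

The first and third DMUs sit on the efficient frontier (score 1.0); the second achieves only half the best ratio. With thousands of DMUs and many dimensions, this per-unit optimization is what the back-propagation surrogate approximates.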
12. Quantitative Thickness Measurement of Retinal Layers Imaged by Optical Coherence Tomography
- Author
-
Shahidi, Mahnaz, Wang, Zhangwei, and Zelkha, Ruth
- Subjects
Algorithms -- Measurement, Algorithms -- Analysis, Tomography -- Measurement, Tomography -- Analysis, Algorithm, Health
- Abstract
To link to full-text access for this article, visit this link: http://dx.doi.org/10.1016/j.ajo.2005.01.012 Byline: Mahnaz Shahidi, Zhangwei Wang, Ruth Zelkha Abstract: To report an image analysis algorithm that was developed to provide quantitative thickness measurement of retinal layers on optical coherence tomography (OCT) images. Author Affiliation: Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, Illinois. Article History: Accepted 6 January 2005 Article Note: (footnote) This study was supported by the Department of Veterans Affairs (Washington, DC); National Eye Institute EY14275 and EY01792 (Bethesda, MD); and an unrestricted fund from Research to Prevent Blindness (New York, NY).
- Published
- 2005
Discovery Service for Jio Institute Digital Library