25 results
Search Results
2. Using Tensor Completion Method to Achieving Better Coverage of Traffic State Estimation from Sparse Floating Car Data.
- Author
-
Ran, Bin, Song, Li, Zhang, Jian, Cheng, Yang, and Tan, Huachun
- Subjects
TRAFFIC engineering, ESTIMATION theory, PROBLEM solving, STATISTICAL correlation, MISSING data (Statistics)
- Abstract
Traffic state estimation from the floating car system is a challenging problem. Because of the low penetration rate and random distribution, the available floating car samples usually cover only part of the space and time points of the road network. To obtain a wide range of traffic states from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links. However, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is transformed into a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic state. A tensor is constructed to model the traffic state, in which observed entries are derived directly from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent the spatial and temporal correlations of traffic data and encode the multi-way properties of the traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well calibrated simulation network. Experimental results demonstrated that the proposed approach yields reliable traffic state estimation from very sparse floating car data, particularly when the floating car penetration rate is below 1%. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
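As a rough, self-contained illustration of the tensor-completion idea described in the abstract above (not the authors' algorithm), the following sketch imputes missing entries of a synthetic link x day x time-of-day speed tensor by repeatedly projecting each mode unfolding onto a low-rank approximation. The array sizes, rank, and iteration count are illustrative assumptions only.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along `mode` (mode-n unfolding)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of `unfold`."""
    moved = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(moved), 0, mode)

def low_rank_impute(speeds, observed, rank=3, iters=50):
    """Fill unobserved entries by alternating mode-wise truncated-SVD projections."""
    X = np.where(observed, speeds, speeds[observed].mean())
    for _ in range(iters):
        estimates = []
        for mode in range(X.ndim):
            U, s, Vt = np.linalg.svd(unfold(X, mode), full_matrices=False)
            estimates.append(fold((U[:, :rank] * s[:rank]) @ Vt[:rank], mode, X.shape))
        X = np.mean(estimates, axis=0)   # combine the mode-wise low-rank views
        X[observed] = speeds[observed]   # never overwrite actual floating-car observations
    return X

# toy setting: 20 links x 7 days x 96 fifteen-minute slots, ~1% of entries observed
rng = np.random.default_rng(0)
truth = 20 + 40 * np.einsum('i,j,k->ijk', rng.random(20), rng.random(7), rng.random(96))
observed = rng.random(truth.shape) < 0.01
estimate = low_rank_impute(truth, observed)
rmse = np.sqrt(np.mean((estimate[~observed] - truth[~observed]) ** 2))
print(f"RMSE on unobserved entries: {rmse:.2f} km/h")
```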
3. Best Match: New relevance search for PubMed.
- Author
-
Fiorini, Nicolas, Canese, Kathi, Starchenko, Grisha, Kireev, Evgeny, Kim, Won, Miller, Vadim, Osipov, Maxim, Kholodov, Michael, Ismagilov, Rafis, Mohan, Sunil, Ostell, James, and Lu, Zhiyong
- Subjects
SEARCH engines, SEARCH algorithms, INTERNET searching, DATA mining, MEDICAL literature
- Abstract
PubMed is a free search engine for biomedical literature accessed by millions of users from around the world each day. With the rapid growth of biomedical literature—about two articles are added every minute on average—finding and retrieving the most relevant papers for a given query is increasingly challenging. We present Best Match, a new relevance search algorithm for PubMed that leverages the intelligence of our users and cutting-edge machine-learning technology as an alternative to the traditional date sort order. The Best Match algorithm is trained with past user searches with dozens of relevance-ranking signals (factors), the most important being the past usage of an article, publication date, relevance score, and type of article. This new algorithm demonstrates state-of-the-art retrieval performance in benchmarking experiments as well as an improved user experience in real-world testing (over 20% increase in user click-through rate). Since its deployment in June 2017, we have observed a significant increase (60%) in PubMed searches with relevance sort order: it now assists millions of PubMed searches each week. In this work, we hope to increase the awareness and transparency of this new relevance sort option for PubMed users, enabling them to retrieve information more effectively. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
4. Accurate and fast path computation on large urban road networks: A general approach.
- Author
-
Song, Qing, Li, Meng, and Li, Xiaolei
- Subjects
TRANSPORTATION, TRAFFIC engineering, ROADS, NAVIGATION, ALGORITHMS
- Abstract
Accurate and fast path computation is essential for applications such as onboard navigation systems and traffic network routing. While a number of heuristic algorithms have been developed in the past few years for faster path queries, their accuracy is often far from satisfactory. In this paper, we first develop an agglomerative graph partitioning method for generating well-balanced traverse-distance partitions, and we construct a three-level graph model based on the graph partition scheme for structuring the urban road network. Then, we propose a new hierarchical path computation algorithm, which benefits from the hierarchical graph model and utilizes a region pruning strategy to significantly reduce the search space without compromising accuracy. Finally, we present a detailed experimental evaluation on the real urban road network of New York City; the experimental results demonstrate the effectiveness of the proposed approach in generating optimal paths quickly and facilitating real-time routing applications. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
5. An efficient General Transit Feed Specification (GTFS) enabled algorithm for dynamic transit accessibility analysis.
- Author
-
Fayyaz S., S. Kiavash, Liu, Xiaoyue Cathy, and Zhang, Guohui
- Subjects
METROPOLITAN areas, PUBLIC transit, TRAVEL time (Traffic engineering), POPULATION density, POPULATION biology
- Abstract
The social functions of urbanized areas are highly dependent on and supported by convenient access to public transportation systems, particularly for less privileged populations with limited auto ownership. To accurately evaluate public transit accessibility, it is critical to capture the spatiotemporal variation of transit services. This can be achieved by measuring the shortest paths or minimum travel times between origin-destination (OD) pairs at each time of day (e.g. every minute). In recent years, General Transit Feed Specification (GTFS) data has been gaining popularity for between-station travel time estimation due to its interoperability in spatiotemporal analytics. Many software packages, such as ArcGIS, provide toolboxes that enable travel time estimation with GTFS. They perform reasonably well in calculating travel time between OD pairs for a specific time of day (e.g. 8:00 AM), yet can become computationally inefficient and impractical as the data dimensions increase (e.g. all times of day and large networks). In this paper, we introduce a new algorithm that is computationally elegant and mathematically efficient to address this issue. An open-source toolbox written in C++ is developed to implement the algorithm. We implemented the algorithm on the City of St. George's transit network to showcase the accessibility analysis enabled by the toolbox. The experimental evidence shows a significant reduction in computational time. The proposed algorithm and toolbox are easily transferable to other transit networks, allowing transit agencies and researchers to perform high-resolution transit performance analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
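The abstract above revolves around computing earliest-arrival travel times between OD pairs for every departure time from a GTFS-style timetable. As a hedged illustration of that kind of query (not the authors' C++ toolbox), here is a minimal Connection Scan pass over a made-up timetable; stops, times, and connections are invented.

```python
from math import inf

# each connection: (departure_stop, arrival_stop, departure_time, arrival_time), times in minutes
connections = sorted([
    ("A", "B", 480, 490),   # 8:00 -> 8:10
    ("B", "C", 495, 505),
    ("A", "C", 485, 520),
    ("C", "D", 510, 525),
], key=lambda c: c[2])

def earliest_arrival(source, departure_time):
    """Earliest arrival time at every reachable stop when leaving `source` at `departure_time`."""
    arrival = {source: departure_time}
    for dep_stop, arr_stop, dep_t, arr_t in connections:
        # a connection is usable if we can be at its departure stop before it leaves
        if arrival.get(dep_stop, inf) <= dep_t and arr_t < arrival.get(arr_stop, inf):
            arrival[arr_stop] = arr_t
    return arrival

print(earliest_arrival("A", 480))   # {'A': 480, 'B': 490, 'C': 505, 'D': 525}
```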
6. Relay discovery and selection for large-scale P2P streaming.
- Author
-
Zhang, Chengwei, Wang, Angela Yunxian, and Hei, Xiaojun
- Subjects
PEER-to-peer architecture (Computer networks), ERROR analysis in mathematics, ESTIMATION theory, HASHING, NUMERICAL analysis
- Abstract
In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers’ network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used “best-out-of-K” selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
7. Mapping technological innovation dynamics in artificial intelligence domains: Evidence from a global patent analysis
- Author
-
Na Liu, Philip Shapira, Xiaoxu Yue, and Jiancheng Guan
- Subjects
Computer and Information Sciences, China, Technology, Asia, Science, Social Sciences, Technology/methods, Research and Analysis Methods, Machine Learning, Geographical Locations, Patents as Topic, Machine Learning Algorithms, Automation, Japan, Inventions, Artificial Intelligence, Support Vector Machines, Humans, Patents, Language Acquisition, Multidisciplinary, Models, Statistical, Applied Mathematics, Simulation and Modeling, Linguistics, Automation/methods, United States, Intellectual Property, Models, Organizational, Physical Sciences, People and Places, North America, Medicine, Law and Legal Sciences, Commercial Law, Diffusion of Innovation, Mathematics, Algorithms, Research Article
- Abstract
Artificial intelligence (AI) is emerging as a technology at the center of many political, economic, and societal debates. This paper formulates a new AI patent search strategy and applies this to provide a landscape analysis of AI innovation dynamics and technology evolution. The paper uses patent analyses, network analyses, and source path link count algorithms to examine AI spatial and temporal trends, cooperation features, cross-organization knowledge flow and technological routes. Results indicate a growing yet concentrated, non-collaborative and multi-path development and protection profile for AI patenting, with cross-organization knowledge flows based mainly on interorganizational knowledge citation links.
- Published
- 2021
8. Mapping climate discourse to climate opinion: An approach for augmenting surveys with social media to enhance understandings of climate opinion in the United States
- Author
-
Benjamin Rachunok, Jackson B. Bennett, Roger Flage, and Roshanak Nateghi
- Subjects
Atmospheric Science, Climate, Social Sciences, Surveys, Global Warming, Machine Learning, Mathematical and Statistical Techniques, Sociology, Surveys and Questionnaires, Climatology, Multidisciplinary, Geography, Statistics, Social Communication, Public relations, Social Networks, Research Design, Physical Sciences, Medicine, Natural language, Algorithms, Network Analysis, Research Article, Computer and Information Sciences, Process (engineering), Science, Climate Change, Population, Twitter, Research and Analysis Methods, Artificial Intelligence, Social Media, Statistical Methods, Motivation, Survey Research, Models, Theoretical, United States, Communications, Framing (social sciences), Attitude, Sustainability, Earth Sciences, Anthropogenic Climate Change, Mathematics, Forecasting
- Abstract
Surveys are commonly used to quantify public opinions of climate change and to inform sustainability policies. However, conducting large-scale population-based surveys is often a difficult task due to time and resource constraints. This paper outlines a machine learning framework—grounded in statistical learning theory and natural language processing—to augment climate change opinion surveys with social media data. The proposed framework maps social media discourse to climate opinion surveys, allowing for discerning the regionally distinct topics and themes that contribute to climate opinions. The analysis reveals significant regional variation in the emergent social media topics associated with climate opinions. Furthermore, significant correlation is identified between social media discourse and climate attitude. However, the dependencies between topic discussion and climate opinion are not always intuitive and often require augmenting the analysis with a topic’s most frequent n-grams and most representative tweets to effectively interpret the relationship. Finally, the paper concludes with a discussion of how these results can be used in the policy framing process to quickly and effectively understand constituents’ opinions on critical issues.
- Published
- 2020
9. A personalized channel recommendation and scheduling system considering both section video clips and full video clips
- Author
-
SeungGwan Lee and Daeho Lee
- Subjects
Computer science, Section (typography), Video Recording, Social Sciences, Geographical locations, Machine Learning, Database and Informatics Methods, Learning and Memory, Data Mining, Psychology, Computer Networks, CLIPS, Statistical Data, Multidisciplinary, Multimedia, Applied Mathematics, Simulation and Modeling, IPTV, Scheduling system, Physical Sciences, Information Retrieval, Information Technology, Algorithms, Statistics (Mathematics), Research Article, Communication channel, Computer and Information Sciences, Schedule, Minnesota, Broadcasting, Research and Analysis Methods, Computer Communication Networks, Artificial Intelligence, Learning, Humans, Internet, Communications Media, Cognitive Psychology, Biology and Life Sciences, Models, Theoretical, United States, North America, Cognitive Science, People and places, Mathematics, Neuroscience
- Abstract
With the convergence of various broadcasting systems, the amount of content available on mobile terminals, including IPTV, has increased significantly. In this paper, we propose a system that enables users to schedule programs considering both section video clips and full video clips, based on a method for detecting users with similar preferences. Since the content in the system can be classified by program, the proposed method can store the programs a user desires and thus create and schedule a kind of individual channel. Experimental results, obtained by comparing existing channel recommendation methods with the program recommendation methods proposed in this paper, show that the proposed method achieves higher prediction accuracy.
- Published
- 2018
10. Applying GIS and Machine Learning Methods to Twitter Data for Multiscale Surveillance of Influenza
- Author
-
Anoshé A Aslam, Anna C Nagel, Ming-Hsiang Tsou, Chris Allen, and Jean Mark Gawron
- Subjects
Viral Diseases, Geographic information system, Computer science, Social Sciences, Filter (software), Disease Outbreaks, Machine Learning, Public health surveillance, Sociology, Geoinformatics, Information system, Medicine and Health Sciences, Public and Occupational Health, Public Health Surveillance, Geography, Medical, Multidisciplinary, Geography, Applied Mathematics, Simulation and Modeling, Social Communication, Infectious Diseases, Social Networks, Physical Sciences, Network Analysis, Algorithms, Research Article, Computer and Information Sciences, Twitter, Research and Analysis Methods, Machine Learning Algorithms, Artificial Intelligence, Support Vector Machines, Influenza, Human, Flu season, Humans, Social media, Outbreak, Biology and Life Sciences, Data science, Communications, Influenza, United States, Geographic Information Systems, Earth Sciences, Cognitive Science, Social Media, Mathematics, Neuroscience
- Abstract
Traditional methods for monitoring influenza are haphazard and lack fine-grained details regarding the spatial and temporal dynamics of outbreaks. Twitter gives researchers and public health officials an opportunity to examine the spread of influenza in real-time and at multiple geographical scales. In this paper, we introduce an improved framework for monitoring influenza outbreaks using the social media platform Twitter. Relying upon techniques from geographic information science (GIS) and data mining, Twitter messages were collected, filtered, and analyzed for the thirty most populated cities in the United States during the 2013-2014 flu season. The results of this procedure are compared with national, regional, and local flu outbreak reports, revealing a statistically significant correlation between the two data sources. The main contribution of this paper is to introduce a comprehensive data mining process that enhances previous attempts to accurately identify tweets related to influenza. Additionally, geographical information systems allow us to target, filter, and normalize Twitter messages.
- Published
- 2016
11. Evaluating the role of land cover and climate uncertainties in computing gross primary production in Hawaiian Island ecosystems.
- Author
-
Kimball, Heather L., Selmants, Paul C., Moreno, Alvaro, Running, Steve W., and Giardina, Christian P.
- Subjects
LAND cover, PRIMARY productivity (Biology), ISLAND ecology, FLUX (Energy), MODIS (Spectroradiometer)
- Abstract
Gross primary production (GPP) is the Earth’s largest carbon flux into the terrestrial biosphere and plays a critical role in regulating atmospheric chemistry and global climate. The Moderate Resolution Imaging Spectrometer (MODIS)-MOD17 data product is a widely used remote sensing-based model that provides global estimates of spatiotemporal trends in GPP. When the MOD17 algorithm is applied to regional scale heterogeneous landscapes, input data from coarse resolution land cover and climate products may increase uncertainty in GPP estimates, especially in high productivity tropical ecosystems. We examined the influence of using locally specific land cover and high-resolution local climate input data on MOD17 estimates of GPP for the State of Hawaii, a heterogeneous and discontinuous tropical landscape. Replacing the global land cover data input product (MOD12Q1) with Hawaii-specific land cover data reduced statewide GPP estimates by ~8%, primarily because the Hawaii-specific land cover map had less vegetated land area compared to the global land cover product. Replacing coarse resolution GMAO climate data with Hawaii-specific high-resolution climate data also reduced statewide GPP estimates by ~8% because of the higher spatial variability of photosynthetically active radiation (PAR) in the Hawaii-specific climate data. The combined use of both Hawaii-specific land cover and high-resolution Hawaii climate data inputs reduced statewide GPP by ~16%, suggesting equal and independent influence on MOD17 GPP estimates. Our sensitivity analyses within a heterogeneous tropical landscape suggest that refined global land cover and climate data sets may contribute to an enhanced MOD17 product at a variety of spatial scales. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
12. Deploying a quantum annealing processor to detect tree cover in aerial imagery of California.
- Author
-
Boyda, Edward, Basu, Saikat, Ganguly, Sangram, Michaelis, Andrew, Mukhopadhyay, Supratik, and Nemani, Ramakrishna R.
- Subjects
QUANTUM annealing, COMPUTER vision, AERIAL photography, REMOTE sensing, GROUND cover plants
- Abstract
Quantum annealing is an experimental and potentially breakthrough computational technology for handling hard optimization problems, including problems of computer vision. We present a case study in training a production-scale classifier of tree cover in remote sensing imagery, using early-generation quantum annealing hardware built by D-wave Systems, Inc. Beginning within a known boosting framework, we train decision stumps on texture features and vegetation indices extracted from four-band, one-meter-resolution aerial imagery from the state of California. We then impose a regulated quadratic training objective to select an optimal voting subset from among these stumps. The votes of the subset define the classifier. For optimization, the logical variables in the objective function map to quantum bits in the hardware device, while quadratic couplings encode as the strength of physical interactions between the quantum bits. Hardware design limits the number of couplings between these basic physical entities to five or six. To account for this limitation in mapping large problems to the hardware architecture, we propose a truncation and rescaling of the training objective through a trainable metaparameter. The boosting process on our basic 108- and 508-variable problems, thus constituted, returns classifiers that incorporate a diverse range of color- and texture-based metrics and discriminate tree cover with accuracies as high as 92% in validation and 90% on a test scene encompassing the open space preserves and dense suburban build of Mill Valley, CA. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
13. Design of a BIST implemented AES crypto-processor ASIC
- Author
-
Md. Liakot Ali, Md. Shazzatur Rahman, and Fakir Sharif Hossain
- Subjects
Computer and Information Sciences, Computer science, Science, Encryption, Social Sciences, Research and Analysis Methods, Pattern Recognition, Automated, Computer Software, Secure cryptoprocessor, Application-specific integrated circuit, Sociology, Industry, Computer Simulation, Testability, Computer Security, Damage Mechanics, Multidisciplinary, Applied Mathematics, Simulation and Modeling, Physics, Advanced Encryption Standard, Hardware description language, Software Engineering, Classical Mechanics, Social Communication, United States, Communications, Symmetric-key algorithm, Research Design, Embedded system, Physical Sciences, Cryptography, Engineering and Technology, Medicine, Electronic design automation, Electrical Faults, Algorithms, Mathematics, Electrical Engineering, Research Article
- Abstract
This paper presents the design of a Built-In Self-Test (BIST) implemented Advanced Encryption Standard (AES) cryptoprocessor Application Specific Integrated Circuit (ASIC). AES has been declared by the US Government to be the strongest symmetric encryption algorithm, and it outperforms all other existing cryptographic algorithms. Its hardware implementation offers much higher speed and physical security than its software implementation. For this reason, a number of AES cryptoprocessor ASICs have been presented in the literature, but the problem of testability in the complex AES chip has not yet been addressed. This research introduces a solution to this problem for the AES cryptoprocessor ASIC by implementing a mixed-mode BIST technique, a hybrid of pseudo-random and deterministic techniques. The BIST-implemented ASIC is designed using an IEEE industry-standard Hardware Description Language (HDL). It has been simulated using Electronic Design Automation (EDA) tools for verification and validation using input-output data from the National Institute of Standards and Technology (NIST) of the US Government. The simulation results show that the design works as desired in the different modes of operation of the ASIC. The current design is compared with those of other researchers, and it is unique in terms of BIST implementation in the ASIC chip.
- Published
- 2021
14. Forecasting dengue and influenza incidences using a sparse representation of Google trends, electronic health records, and time series data
- Author
-
Madhav V. Marathe, Sandeep K. Mody, and Prashant Rangarajan
- Subjects
Viral Diseases, Time Factors, Computer science, Electronic Medical Records, Machine Learning, Dengue, Database and Informatics Methods, Mathematical and Statistical Techniques, Lasso (statistics), Statistics, Medicine and Health Sciences, Electronic Health Records, Ensemble Methods, Computer Networks, Biology (General), Peak Values, Singapore, Ecology, Applied Mathematics, Simulation and Modeling, Incidence, Standard time, Sparse approximation, Thailand, Infectious Diseases, Computational Theory and Mathematics, Autoregressive model, Population Surveillance, Modeling and Simulation, Physical Sciences, Engineering and Technology, Kalman Filter, Algorithms, Brazil, Research Article, Count data, Computer and Information Sciences, Population, Taiwan, Health Informatics, Feature selection, Research and Analysis Methods, Cellular and Molecular Neuroscience, Artificial Intelligence, Influenza, Human, Genetics, Humans, Statistical Methods, Time series, Mexico, Molecular Biology, Ecology, Evolution, Behavior and Systematics, Internet, Models, Statistical, Influenza, United States, Signal Processing, Mathematics, Forecasting
- Abstract
Dengue and influenza-like illness (ILI) are two of the leading causes of viral infection in the world, and it is estimated that more than half the world's population is at risk for developing these infections. It is therefore important to develop accurate methods for forecasting dengue and ILI incidences. Since data from multiple sources (such as dengue and ILI case counts, electronic health records, and the frequency of multiple internet search terms from Google Trends) can improve forecasts, standard time series analysis methods are inadequate to estimate all the parameter values from the limited amount of data available if we use multiple sources. In this paper, we use a computationally efficient implementation of the known variable selection method that we call the Autoregressive Likelihood Ratio (ARLR) method. This method combines sparse representation of time series data, electronic health records data (for ILI), and Google Trends data to forecast dengue and ILI incidences. This sparse representation method uses an algorithm that maximizes an appropriate likelihood ratio at every step. Using numerical experiments, we demonstrate that our method recovers the underlying sparse model much more accurately than the lasso method. We apply our method to dengue case count data from five countries/states: Brazil, Mexico, Singapore, Taiwan, and Thailand, and to ILI case count data from the United States. Numerical experiments show that our method outperforms existing time series forecasting methods in forecasting the dengue and ILI case counts. In particular, our method gives an 18 percent forecast error reduction over a leading method that also uses data from multiple sources. It also performs better than other methods in predicting the peak value of the case count and the peak time. Author summary: Dengue and influenza-like illness (ILI) are leading causes of viral infection in the world, and hence it is important to develop accurate methods for forecasting their incidence. We use the Autoregressive Likelihood Ratio method, a computationally efficient implementation of the variable selection method, to obtain a sparse (non-lasso) representation of time series, Google Trends, and electronic health records (for ILI) data. This method is used to forecast dengue incidence in five countries/states and ILI incidence in the USA. We show that this method outperforms existing time series methods in forecasting these diseases. The method is general and can also be used to forecast other diseases.
- Published
- 2019
- Full Text
- View/download PDF
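A rough sketch of the flavor of method described above: sparse autoregressive forecasting with greedy stepwise variable selection over lagged case counts and a search-trend covariate. Under Gaussian errors, adding the candidate that most reduces the residual sum of squares corresponds to the largest likelihood ratio, but this is not the authors' ARLR implementation, and all series below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
trend = 50 + 10 * np.sin(np.arange(T) / 8.0)
cases = trend + rng.normal(0, 2, T)       # synthetic weekly case counts
search = trend + rng.normal(0, 5, T)      # synthetic Google-Trends-like covariate

max_lag = 8
t = np.arange(max_lag, T)
X = np.column_stack([cases[t - k] for k in range(1, max_lag + 1)] +
                    [search[t - k] for k in range(1, max_lag + 1)])
y = cases[max_lag:]

def forward_select(X, y, n_keep=3):
    """Greedy stepwise selection: add the lag that most reduces the residual sum of squares
    (for Gaussian errors this is the candidate with the largest likelihood ratio)."""
    chosen, beta = [], np.zeros(0)
    resid = y - y.mean()
    for _ in range(n_keep):
        best, best_rss = None, np.sum(resid ** 2)
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            b, *_ = np.linalg.lstsq(X[:, chosen + [j]], y, rcond=None)
            rss = np.sum((y - X[:, chosen + [j]] @ b) ** 2)
            if rss < best_rss:
                best, best_rss = j, rss
        if best is None:
            break
        chosen.append(best)
        beta, *_ = np.linalg.lstsq(X[:, chosen], y, rcond=None)
        resid = y - X[:, chosen] @ beta
    return chosen, beta

selected, beta = forward_select(X, y)
x_next = np.concatenate([cases[:-max_lag - 1:-1], search[:-max_lag - 1:-1]])  # lags 1..8 at time T
print("selected predictor columns:", selected)
print("one-step-ahead forecast:", round(float(x_next[selected] @ beta), 1))
```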
15. Navigating optimal treaty-shopping routes using a multiplex network model
- Author
-
Sung Jae Park, Kyu-Min Lee, and Jae-Suk Yang
- Subjects
Computer and Information Sciences, Economics, Political Science, Science, International Cooperation, Social Sciences, Public Policy, Smoking Prevention, Foreign direct investment, Multiplex Networks, Research and Analysis Methods, Geographical locations, Tax revenue, Income tax, Centrality, Humans, Treaty, Industrial organization, Multidisciplinary, Applied Mathematics, Simulation and Modeling, Income Tax, Smoking, Commerce, Tobacco Products, Taxes, Tax avoidance, United States, Taxation, Tax treaty, Multinational corporation, North America, Physical Sciences, Medicine, Business, People and places, Network Analysis, Finance, Mathematics, Algorithms, Research Article
- Abstract
The international tax treaty system is a highly integrated and complex network. In this system, many multinational enterprises (MNEs) explore ways of reducing taxes by choosing optimal detour routes. Treaty abuse by these MNEs causes significant loss of tax revenues for many countries, but there is no systematic way of regulating their actions. However, it may be helpful to find a way of detecting the optimal routes by which MNEs avoid taxes and observe the effects of this behavior. In this paper, we investigate the international tax treaty network system of foreign investment channels based on real data and introduce a novel measure of tax-routing centrality and other centralities via network analysis. Our analysis of tax routing in a multiplex network reveals not only various tax-minimizing routes and their rates, but also new paths which cannot be found by navigating a single network layer. In addition, we identify strongly connected components of the multiplex tax treaty system with minimal tax shopping routes; more than 80 countries are included in this system. This means that there are far more pathways to be observed than can be detected on any given individual single layer. We provide a unified framework for analyzing the international tax treaty system and observing the effects of tax avoidance by MNEs.
- Published
- 2021
16. Fuel shortages during hurricanes: Epidemiological modeling and optimal control
- Author
-
Sirish Namilae, Dahai Liu, Sabique Islam, and Richard J. Prazenica
- Subjects
Operations research, Economics, Social Sciences, Economic shortage, Shortages, Systems Science, Geographical locations, Sociology, Per capita, Medicine and Health Sciences, Resource Management, Public and Occupational Health, Materials, Multidisciplinary, Emergency management, Covariance, Cyclonic Storms, Applied Mathematics, Simulation and Modeling, Resource constraints, Social Communication, Vaccination and Immunization, Southeastern United States, Dynamical Systems, Social Networks, Physical Sciences, Florida, Medicine, Engineering and Technology, Kalman Filter, Gasoline, Algorithms, Network Analysis, Research Article, Computer and Information Sciences, Science, Materials Science, Immunology, Disaster Planning, Fuels, Research and Analysis Methods, Humans, Landfall, Estimation, Biology and Life Sciences, Random Variables, Optimal control, Probability Theory, United States, Communications, Energy and Power, North America, Environmental science, Preventive Medicine, People and places, Epidemic model, Social Media, Mathematics
- Abstract
Hurricanes are powerful agents of destruction with significant socioeconomic impacts. A persistent problem during the large-scale evacuations that accompany hurricanes in the southeastern United States is fuel shortage during the evacuation. Computational models can aid in emergency preparedness and help mitigate the impacts of hurricanes. In this paper, we model hurricane fuel shortages using the SIR epidemic model. We utilize crowd-sourced data corresponding to Hurricanes Irma and Florence to parametrize the model. An estimation technique based on the Unscented Kalman Filter (UKF) is employed to evaluate the SIR dynamic parameters. Finally, an optimal control approach for refueling, based on a vaccination analogue, is presented to effectively reduce fuel shortages under a resource constraint. We find the basic reproduction number corresponding to fuel shortages in Miami during Hurricane Irma to be 3.98. Using the control model, we estimated the level of intervention needed to mitigate the fuel-shortage epidemic. For example, our results indicate that for Naples-Fort Myers, affected by Hurricane Irma, a per capita refueling rate of 0.1 for 2.2 days would have reduced the peak fuel shortage from 55% to 48%, and a refueling rate of 0.75 for half a day before landfall would have reduced it to 37%.
- Published
- 2019
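To make the SIR analogy above concrete, here is a minimal forward simulation in which gas stations "catch" a fuel shortage and "recover" when restocked, with an extra refueling term playing the role of vaccination. Only R0 = 3.98 comes from the abstract; the recovery rate, time horizon, and control values are illustrative guesses, not the authors' fitted parameters.

```python
def simulate(R0=3.98, gamma=0.5, refuel_rate=0.0, days=20, dt=0.1):
    """Euler-integrated SIR dynamics for the fraction of stations out of fuel."""
    beta = R0 * gamma                          # transmission rate implied by R0 = beta / gamma
    S, I, R = 0.99, 0.01, 0.0                  # fractions: stocked / out of fuel / restocked
    peak = I
    for _ in range(int(days / dt)):
        newly_short = beta * S * I * dt        # stations newly running dry (contagion of panic demand)
        restocked = (gamma + refuel_rate) * I * dt   # baseline restocking plus extra refueling effort
        S -= newly_short
        I += newly_short - restocked
        R += restocked
        peak = max(peak, I)
    return peak

print(f"peak shortage without intervention: {100 * simulate():.0f}% of stations")
print(f"peak shortage with refueling boost: {100 * simulate(refuel_rate=0.3):.0f}% of stations")
```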
17. Economic development and wage inequality: A complex system analysis
- Author
-
Emanuele Pugliese, Luciano Pietronero, and Angelica Sbardella
- Subjects
Genetics and Molecular Biology (all), Labour economics, Index (economics), General Economics (econ.GN), Economics, Social Sciences, Economic Geography, Biochemistry, Systems Science, Geographical locations, Per capita, statistics and numerical data, Salaries, Economics - General Economics, Multidisciplinary, Geography, Complex Systems, North American Industry Classification System, Scale (social sciences), Physical Sciences, Economic Development, Algorithms, Research Article, Employment, Computer and Information Sciences, Complex system, socioeconomics, Development Economics, Humans, Social inequality, Salaries and Fringe Benefits, Socioeconomic Factors, United States, Biochemistry, Genetics and Molecular Biology (all), Agricultural and Biological Sciences (all), wage inequality algorithm, Industrialisation, Income inequality metrics, Labor Economics, North America, Earth Sciences, People and places, Mathematics
- Abstract
By borrowing methods from complex system analysis, in this paper we analyze the features of the complex relationship that links the development and industrialization of a country to economic inequality. To do this, we identify industrialization as a combination of a monetary index, GDP per capita, and a recently introduced measure of the complexity of an economy, the Fitness. At first we explore these relations on a global scale over the time period 1990-2008, focusing on two different dimensions of inequality: the capital share of income and a Theil measure of wage inequality. In both cases, the movement of inequality follows a pattern similar to the one theorized by Kuznets in the fifties. We then narrow down the object of study and concentrate on wage inequality within the United States. Employing data on wages and employment for the approximately 3100 US counties over the time interval 1990-2014, we generalize the Fitness-Complexity algorithm for counties and NAICS sectors, and we investigate wage inequality between industrial sectors within counties. At this scale, in the early nineties we recover a behavior similar to the global one, while in more recent years we uncover a trend reversal: wage inequality monotonically increases as industrialization levels grow. Hence, at the county level, net of the social and institutional factors that differ among countries, we observe not only an upturn in inequality but also a change in the structure of the relation between wage inequality and development.
- Published
- 2017
18. A machine learning approach to estimation of downward solar radiation from satellite-derived data products: An application over a semi-arid ecosystem in the U.S
- Author
-
Nancy F. Glenn, Bangshuai Han, Qingtao Zhou, Reggie D. Walters, and Alejandro N. Flores
- Subjects
Albedo, Satellite Imagery, Atmospheric Science, Astronomical Sciences, Remote Sensing, Machine Learning, Climatology, Numerical Analysis, Multidisciplinary, Ecology, Physics, Electromagnetic Radiation, Planetary Sciences, Simulation and Modeling, Satellite Communications, Radiation flux, Physical Sciences, Engineering and Technology, Solar Radiation, Moderate-resolution imaging spectroradiometer, Alternative Energy, Algorithms, Research Article, Environmental Monitoring, Computer and Information Sciences, Watershed, Terrain, Research and Analysis Methods, Ecosystems, Artificial Intelligence, Solar Energy, Humans, Ecosystem, Ecology and Environmental Sciences, Biology and Life Sciences, United States, Interpolation, Energy and Power, Sky, Remote Sensing Technology, Earth Sciences, Environmental science, Satellite, Shortwave, Mathematics
- Abstract
Shortwave solar radiation is an important component of the surface energy balance and provides the principal source of energy for terrestrial ecosystems. This paper presents a machine learning approach in the form of a random forest (RF) model for estimating daily downward solar radiation flux at the land surface over complex terrain using MODIS (MODerate Resolution Imaging Spectroradiometer) remote sensing data. The model-building technique makes use of a unique network of 16 solar flux measurements in the semi-arid Reynolds Creek Experimental Watershed and Critical Zone Observatory, in southwest Idaho, USA. Based on a composite RF model built on daily observations from all 16 sites in the watershed, the model simulation of downward solar radiation matches well with the observation data (r2 = 0.96). To evaluate model performance, RF models were built from 12 of 16 sites selected at random and validated against the observations at the remaining four sites. Overall root mean square errors (RMSE), bias, and mean absolute error (MAE) are small (range: 37.17 W/m2-81.27 W/m2, -48.31 W/m2-15.67 W/m2, and 26.56 W/m2-63.77 W/m2, respectively). When extrapolated to the entire watershed, spatiotemporal patterns of solar flux are largely consistent with expected trends in this watershed. We also explored significant predictors of downward solar flux in order to reveal important properties and processes controlling downward solar radiation. Based on the composite RF model built on all 16 sites, the three most important predictors to estimate downward solar radiation include the black sky albedo (BSA) near infrared band (0.858 μm), BSA visible band (0.3–0.7 μm), and clear day coverage. This study has important implications for improving the ability to derive downward solar radiation through a fusion of multiple remote sensing datasets and can potentially capture spatiotemporally varying trends in solar radiation that is useful for land surface hydrologic and terrestrial ecosystem modeling.
- Published
- 2017
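The abstract above describes a random forest regression from MODIS-derived predictors to daily downward solar radiation. Below is a small illustrative sketch of that setup using synthetic data; the feature names, generative rule, and hyperparameters are assumptions for illustration, not the authors' model or watershed observations.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(42)
n = 2000
bsa_nir = rng.uniform(0.1, 0.5, n)          # black-sky albedo, near-infrared band (synthetic)
bsa_vis = rng.uniform(0.05, 0.3, n)         # black-sky albedo, visible band (synthetic)
clear_frac = rng.uniform(0.0, 1.0, n)       # fraction of the day that was cloud-free (synthetic)
doy = rng.integers(1, 366, n)               # day of year

# a made-up generative rule standing in for the true surface energy balance
solar = (250 * clear_frac + 80 * np.sin(np.pi * doy / 365)
         - 100 * bsa_vis - 50 * bsa_nir + rng.normal(0, 15, n))

X = np.column_stack([bsa_nir, bsa_vis, clear_frac, doy])
X_tr, X_te, y_tr, y_te = train_test_split(X, solar, test_size=0.25, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)

print("r2  = %.2f" % r2_score(y_te, pred))
print("MAE = %.1f W/m2" % mean_absolute_error(y_te, pred))
print("feature importances:", dict(zip(["BSA_NIR", "BSA_VIS", "clear_frac", "DOY"],
                                        rf.feature_importances_.round(2))))
```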
19. An efficient semi-supervised community detection framework in social networks
- Author
-
Zhen Li, Guyu Hu, Yong Gong, and Zhisong Pan
- Subjects
Economics, Social Sciences, Marine and Aquatic Sciences, Datasets as Topic, Elections, Machine Learning, Governments, Sociology, Mammals, Physics, Multidisciplinary, Applied Mathematics, Simulation and Modeling, Politics, Sports Science, Spectral clustering, Social Networks, Physical Sciences, Vertebrates, Algorithms, Network Analysis, Martial Arts, Research Article, Sports, Political Parties, Optimization, Computer and Information Sciences, Process (engineering), Dolphins, Political Science, Closeness, Football, Marine Biology, Research and Analysis Methods, Network topology, Statistical Mechanics, Animals, Humans, Marine Mammals, Social Behavior, Behavior, Internet, Books, Organisms, Biology and Life Sciences, United Kingdom, United States, Amniotes, Earth Sciences, Recreation, Domain knowledge, Pairwise comparison, Artificial intelligence, Noise (video), Mathematics
- Abstract
Community detection is an important task across a number of research fields, including social science, biology, and physics. In the real world, topology information alone is often inadequate for accurately finding community structure because of its sparsity and noise. Potentially useful prior information, such as pairwise constraints containing must-link and cannot-link constraints, can be obtained from domain knowledge in many applications. Thus, combining network topology with prior information to improve community detection accuracy is promising. Previous methods mainly utilize must-link constraints but cannot make full use of cannot-link constraints. In this paper, we propose a semi-supervised community detection framework that can effectively incorporate both types of pairwise constraints into the detection process. In particular, must-link and cannot-link constraints are represented as positive and negative links, and we encode them by adding different graph regularization terms to penalize the closeness of the nodes. Experiments on multiple real-world datasets show that the proposed framework significantly improves the accuracy of community detection.
- Published
- 2017
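As a toy illustration of the idea above, the sketch below encodes must-link pairs as extra positive edge weight and cannot-link pairs as negative weight, then splits nodes by the sign of the bottom eigenvector of a signed Laplacian. This is a simple signed spectral partition, not the authors' graph-regularization framework; the graph and constraints are invented.

```python
import numpy as np

def detect_communities(A, must_link=(), cannot_link=(), w=2.0):
    """Two-way split by the sign of the bottom eigenvector of a signed Laplacian."""
    A = A.astype(float).copy()
    for i, j in must_link:
        A[i, j] += w; A[j, i] += w             # reward placing i and j together
    for i, j in cannot_link:
        A[i, j] -= w; A[j, i] -= w             # penalize placing i and j together
    L = np.diag(np.abs(A).sum(axis=1)) - A     # signed Laplacian (degrees use |weights|)
    _, vecs = np.linalg.eigh(L)
    return (vecs[:, 0] > 0).astype(int)        # sign of the smallest-eigenvalue eigenvector

# two 4-node cliques joined by a single edge
A = np.zeros((8, 8))
for group in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for i in group:
        for j in group:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0                        # inter-community edge
labels = detect_communities(A, must_link=[(0, 1)], cannot_link=[(3, 4)])
print(labels)                                  # two blocks, e.g. [0 0 0 0 1 1 1 1] (labels may swap)
```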
20. Parallel point-multiplication architecture using combined group operations for high-speed cryptographic applications
- Author
-
Ehsan Saeedi, Selim Hossain, and Yinan Kong
- Subjects
Computer and Information Sciences, Computer science, Political Science, Social Sciences, Cryptography, Research and Analysis Methods, Polynomials, Computer Architecture, Polynomial basis, Application-specific integrated circuit, Multiplication, National Security, Arithmetic, Elliptic curve cryptography, Field-programmable gate array, Computer Security, Multidisciplinary, Cryptographic primitive, Computers, Applied Mathematics, Simulation and Modeling, United States, United States Government Agencies, Elliptic curve point multiplication, Elliptic curve, Algebra, Physical Sciences, Algebraic Geometry, Mathematics, Algorithms, Research Article
- Abstract
In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports two Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance ([Formula: see text]) and Area × Time × Energy (ATE) product of the proposed design are far better than the most significant studies found in the literature.
- Published
- 2017
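For readers unfamiliar with the core operation being accelerated above, here is a tiny software illustration of elliptic curve point multiplication via double-and-add. It uses a small prime-field curve in affine coordinates purely for readability; it is not the NIST binary-field curves, Jacobian projective coordinates, or combined PDPA unit described in the abstract.

```python
p, a, b = 97, 2, 3                      # toy curve: y^2 = x^3 + 2x + 3 over GF(97)

def point_add(P, Q):
    if P is None: return Q              # None represents the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                     # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p     # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p            # chord slope (addition)
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def point_mul(k, P):
    """Left-to-right double-and-add computation of k*P."""
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R)             # double
        if bit == "1":
            R = point_add(R, P)         # add
    return R

G = (3, 6)                              # on the curve: 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97)
print(point_mul(13, G))
```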
21. A General Approach for Predicting the Behavior of the Supreme Court of the United States
- Author
-
Daniel Martin Katz, Josh Blackman, and Michael James Bommarito
- Subjects
Computer science, Social Sciences, Forests, Trees, Machine Learning, Mathematical and Statistical Techniques, Cognition, Learning and Memory, Econometrics, Multidisciplinary, Ecology, Applied Mathematics, Simulation and Modeling, Administrative law, Plants, Terrestrial Environments, Null (SQL), Physical Sciences, Engineering and Technology, Statistics (Mathematics), Algorithms, Research Article, Computer and Information Sciences, Physics - Physics and Society, Context (language use), Physics and Society (physics.soc-ph), Research and Analysis Methods, Ecosystems, Machine Learning Algorithms, Artificial Intelligence, Memory, Social Justice, Humans, Statistical Methods, Ecology and Environmental Sciences, Organisms, Biology and Life Sciences, United States, Supreme court, Term (time), Supreme Court Decisions, Cognitive Science, Law and Legal Sciences, Mathematics, Forecasting, Neuroscience
- Abstract
Building on developments in machine learning and prior work in the science of judicial prediction, we construct a model designed to predict the behavior of the Supreme Court of the United States in a generalized, out-of-sample context. To do so, we develop a time evolving random forest classifier which leverages some unique feature engineering to predict more than 240,000 justice votes and 28,000 cases outcomes over nearly two centuries (1816-2015). Using only data available prior to decision, our model outperforms null (baseline) models at both the justice and case level under both parametric and non-parametric tests. Over nearly two centuries, we achieve 70.2% accuracy at the case outcome level and 71.9% at the justice vote level. More recently, over the past century, we outperform an in-sample optimized null model by nearly 5%. Our performance is consistent with, and improves on the general level of prediction demonstrated by prior work; however, our model is distinctive because it can be applied out-of-sample to the entire past and future of the Court, not a single term. Our results represent an important advance for the science of quantitative legal prediction and portend a range of other potential applications., version 2.02; 18 pages, 5 figures. This paper is related to but distinct from arXiv:1407.6333, and the results herein supersede arXiv:1407.6333. Source code available at https://github.com/mjbommar/scotus-predict-v2
- Published
- 2016
22. Using Tensor Completion Method to Achieving Better Coverage of Traffic State Estimation from Sparse Floating Car Data
- Author
-
Yang Cheng, Jian Zhang, Huachun Tan, Li Song, and Bin Ran
- Subjects
Computer science, Aviation, Intelligence, Social Sciences, Transportation, Geographical locations, Mathematical and Statistical Techniques, Range (statistics), Psychology, Intelligent transportation system, Principal Component Analysis, Multidisciplinary, Applied Mathematics, Simulation and Modeling, Floating car data, Transportation Infrastructure, Physical Sciences, Engineering and Technology, Algorithm, Algorithms, Statistics (Mathematics), Network analysis, Research Article, Optimization, Computer and Information Sciences, Research and Analysis Methods, Civil Engineering, Wisconsin, Computer Simulation, Tensor, Statistical Methods, Traffic generation model, Cognitive Psychology, Biology and Life Sciences, Computing Methods, United States, Roads, Multivariate Analysis, North America, Cognitive Science, State (computer science), People and places, Automobiles, Mathematics, Water well, Neuroscience
- Abstract
Traffic state estimation from the floating car system is a challenging problem. Because of the low penetration rate and random distribution, the available floating car samples usually cover only part of the space and time points of the road network. To obtain a wide range of traffic states from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links. However, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is transformed into a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic state. A tensor is constructed to model the traffic state, in which observed entries are derived directly from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent the spatial and temporal correlations of traffic data and encode the multi-way properties of the traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well calibrated simulation network. Experimental results demonstrated that the proposed approach yields reliable traffic state estimation from very sparse floating car data, particularly when the floating car penetration rate is below 1%.
- Published
- 2016
23. Accurate and fast path computation on large urban road networks: A general approach
- Author
-
Meng Li, Xiaolei Li, and Qing Song
- Subjects
Optimization, Computer and Information Sciences, Traverse, Urban Population, Computer science, Heuristic (computer science), Computation, New York, Social Sciences, Transportation, Fast path, Research and Analysis Methods, Civil Engineering, Geographical locations, Sociology, Pruning (decision trees), Multidisciplinary, Heuristic, Applied Mathematics, Simulation and Modeling, Graph partition, Transportation Infrastructure, United States, Navigation, Roads, Signaling Networks, Hierarchical clustering, Social Networks, Computer engineering, Physical Sciences, North America, Path (graph theory), Engineering and Technology, New York City, People and places, Routing (electronic design automation), Algorithms, Mathematics, Network Analysis, Research Article
- Abstract
Accurate and fast path computation is essential for applications such as onboard navigation systems and traffic network routing. While a number of heuristic algorithms have been developed in the past few years for faster path queries, their accuracy is often far from satisfactory. In this paper, we first develop an agglomerative graph partitioning method for generating well-balanced traverse-distance partitions, and we construct a three-level graph model based on the graph partition scheme for structuring the urban road network. Then, we propose a new hierarchical path computation algorithm, which benefits from the hierarchical graph model and utilizes a region pruning strategy to significantly reduce the search space without compromising accuracy. Finally, we present a detailed experimental evaluation on the real urban road network of New York City; the experimental results demonstrate the effectiveness of the proposed approach in generating optimal paths quickly and facilitating real-time routing applications.
- Published
- 2018
24. Relay discovery and selection for large-scale P2P streaming
- Author
-
Angela Yunxian Wang, Chengwei Zhang, and Xiaojun Hei
- Subjects
Computer and Information Sciences, Computer science, Vector Spaces, Research and Analysis Methods, Geographical locations, Computer Communication Networks, Relay, Overhead (computing), Computer Networks, Selection (genetic algorithm), Ohio, Internet, Key generation, Multidisciplinary, Applied Mathematics, Simulation and Modeling, Node (networking), Relays, United States, Algebra, Linear Algebra, Physical Sciences, North America, Cryptography, Engineering and Technology, Bandwidth (Computing), Electronics, People and places, Mathematics, Algorithms, Research Article, Computer network
- Abstract
In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers' network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used "best-out-of-K" selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs.
- Published
- 2017
25. Comparing Simultaneous and Pointwise Confidence Intervals for Hydrological Processes
- Author
-
Alejandro Quintela-del-Río and Mario Francisco-Fernández
- Subjects
Computer and Information Sciences, Marine and Aquatic Sciences, Research and Analysis Methods, Rivers, Flooding, Statistics, Confidence Intervals, Computer Simulation, Computer Networks, Mathematics, Parametric statistics, Pointwise, Multidisciplinary, Approximation Methods, Applied Mathematics, Simulation and Modeling, Ecology and Environmental Sciences, Probabilistic logic, Nonparametric statistics, Aquatic Environments, Random Variables, Bodies of Water, Probability Theory, United States, Confidence interval, Physical Sciences, Earth Sciences, Bandwidth (Computing), Probability distribution, Hydrology, Random variable, Statistics (Mathematics), Algorithms, Research Article, Freshwater Environments, Quantile
- Abstract
Distribution function estimation of the random variable of river flow is an important problem in hydrology. This issue is directly related to quantile estimation, and consequently to return level prediction. The estimation process can be complemented with the construction of confidence intervals (CIs) to perform a probabilistic assessment of the different variables and/or estimated functions. In this work, several methods for constructing CIs using bootstrap techniques, and parametric and nonparametric procedures in the estimation process are studied and compared. In the case that the target is the joint estimation of a vector of values, some new corrections to obtain joint coverage probabilities closer to the corresponding nominal values are also presented. A comprehensive simulation study compares the different approaches, and the application of the different procedures to real data sets from four rivers in the United States and one in Spain complete the paper.
- Published
- 2016
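As a pointer to the kind of quantity compared above, the following sketch computes a pointwise percentile-bootstrap confidence interval for a high quantile (return level) of synthetic annual peak flows. It is only an illustration: the paper's parametric and nonparametric estimators and its corrections for simultaneous coverage are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
flow = rng.gumbel(loc=300.0, scale=80.0, size=60)     # 60 years of synthetic annual peak flows

def bootstrap_quantile_ci(sample, q=0.99, n_boot=5000, level=0.95):
    """Percentile-bootstrap CI for the q-quantile of the flow distribution."""
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(sample, size=sample.size, replace=True)
        estimates[b] = np.quantile(resample, q)
    alpha = 1.0 - level
    return np.quantile(estimates, [alpha / 2, 1 - alpha / 2])

point = np.quantile(flow, 0.99)
lo, hi = bootstrap_quantile_ci(flow)
print("estimated 0.99-quantile return level: %.0f  (95%% CI: %.0f to %.0f)" % (point, lo, hi))
```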