44 results
Search Results
2. Dynamics in the Fitness-Income plane: Brazilian states vs World countries.
- Author
-
Operti, Felipe G., Pugliese, Emanuele, Andrade, José S., Jr., Pietronero, Luciano, and Gabrielli, Andrea
- Subjects
ALGORITHMS, PHYSICAL fitness, GROSS domestic product, ECONOMICS - Abstract
In this paper we introduce a novel algorithm, called Exogenous Fitness, to calculate the Fitness of subnational entities, and we apply it to the states of Brazil. In the last decade, several indices were introduced to measure the competitiveness of countries by looking at the complexity of their export basket. Tacchella et al (2012) developed a non-monetary metric called Fitness. In this paper, after an overview of Brazil as a whole and a comparison with the other BRIC countries, we introduce a new methodology based on the Fitness algorithm, called Exogenous Fitness. Combining the results with the Gross Domestic Product per capita (GDPp), we look at the dynamics of the Brazilian states in the Fitness-Income plane. Two regimes are distinguishable: one with high predictability and the other with low predictability, showing a deep analogy with the heterogeneous dynamics of the World countries. Furthermore, we compare the ranking of the Brazilian states according to the Exogenous Fitness with the rankings obtained through two other techniques, namely Endogenous Fitness and the Economic Complexity Index. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
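The Fitness metric referenced in the abstract above builds on the Fitness-Complexity iteration of Tacchella et al. (2012): a region's fitness grows with the complexity of the products it exports, while a product's complexity is penalized when weak exporters also produce it. A minimal sketch of that base iteration (not the authors' Exogenous variant, whose coupling matrix differs):

```python
import numpy as np

def fitness_complexity(M, n_iter=50):
    """Tacchella et al. (2012) Fitness-Complexity iteration.

    M is a binary region-by-product matrix: M[c, p] = 1 if region c
    competitively exports product p."""
    n_regions, n_products = M.shape
    F = np.ones(n_regions)    # region fitness
    Q = np.ones(n_products)   # product complexity
    for _ in range(n_iter):
        F_new = M @ Q                     # a region's fitness sums its products' complexity
        Q_new = 1.0 / (M.T @ (1.0 / F))   # complexity dominated by the weakest exporter
        F = F_new / F_new.mean()          # normalize to mean 1 at every step
        Q = Q_new / Q_new.mean()
    return F, Q

# A perfectly nested toy matrix: region 0 exports everything,
# region 2 exports only the ubiquitous product.
M = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
F, Q = fitness_complexity(M)
```

On this nested matrix the iteration ranks region 0 as fittest and the product exported only by region 0 as most complex, which is the qualitative behavior the metric is designed to capture.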
3. Reliable Facility Location Problem with Facility Protection.
- Author
-
Tang, Luohao, Zhu, Cheng, Lin, Zaili, Shi, Jianmai, and Zhang, Weiming
- Subjects
LOCATION problems (Programming), FACILITY location problems, INTEGER programming, LAGRANGIAN functions, APPLIED mathematics - Abstract
This paper studies a reliable facility location problem with facility protection that aims to hedge against random facility disruptions by both strategically protecting some facilities and using backup facilities for the demands. An Integer Programming model is proposed for this problem, in which the failure probabilities of facilities are site-specific. A solution approach combining Lagrangian Relaxation and local search is proposed and is demonstrated to be both effective and efficient based on computational experiments on random numerical examples with 49, 88, 150 and 263 nodes in the network. A real case study for a 100-city network in Hunan province, China, is presented, based on which the properties of the model are discussed and some managerial insights are analyzed. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
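The core cost trade-off in the abstract above (site-specific failure probabilities plus backup assignment) can be illustrated without the paper's Lagrangian machinery. The sketch below is a simplified reading: each demand uses its nearest open facility as primary and the second nearest as backup, and a tiny instance is solved by brute force; the distance values, names, and "open exactly two sites" rule are illustrative assumptions, not the paper's data.

```python
import itertools

def expected_cost(open_facs, demands, dist, fail_prob):
    """Expected transport cost with independent, site-specific failures:
    nearest open facility is the primary, second nearest is the backup."""
    total = 0.0
    for d, weight in demands.items():
        ranked = sorted(open_facs, key=lambda f: dist[d][f])
        primary, backup = ranked[0], ranked[1]
        p = fail_prob[primary]
        total += weight * ((1 - p) * dist[d][primary] + p * dist[d][backup])
    return total

# Hypothetical instance: 3 candidate sites, 2 demand points, open 2 sites.
dist = {"d1": {"A": 1.0, "B": 4.0, "C": 2.0},
        "d2": {"A": 5.0, "B": 1.0, "C": 2.0}}
demands = {"d1": 10.0, "d2": 10.0}
fail_prob = {"A": 0.5, "B": 0.05, "C": 0.05}

best = min((expected_cost(list(pair), demands, dist, fail_prob), pair)
           for pair in itertools.combinations("ABC", 2))
```

Note the outcome: site A is closest to demand d1 but so unreliable that the expected-cost objective opens B and C instead, which is exactly the effect reliability-aware models are built to expose.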
4. Using Tensor Completion Method to Achieving Better Coverage of Traffic State Estimation from Sparse Floating Car Data.
- Author
-
Ran, Bin, Song, Li, Zhang, Jian, Cheng, Yang, and Tan, Huachun
- Subjects
TRAFFIC engineering, ESTIMATION theory, PROBLEM solving, STATISTICAL correlation, MISSING data (Statistics) - Abstract
Traffic state estimation from the floating car system is a challenging problem. Because of the low penetration rate and the random distribution of probes, the available floating car samples usually cover only part of the space and time points of the road network. To obtain wide-ranging traffic state information from the floating car system, many methods have been proposed to estimate the traffic state of uncovered links. However, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is recast as a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic states. A tensor is constructed to model the traffic state, in which observed entries are derived directly from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent the spatial and temporal correlations of traffic data and encode the multi-way properties of the traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well-calibrated simulation network. Experimental results demonstrate that the proposed approach yields reliable traffic state estimates from very sparse floating car data, particularly when the floating car penetration rate is below 1%. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
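The completion idea in the abstract above can be shown in miniature. For brevity the sketch uses the two-dimensional (matrix) case of low-rank completion, with a synthetic rank-1 "day x time-of-day" speed matrix standing in for the traffic tensor; the paper's actual tensor algorithm generalizes this principle to more modes.

```python
import numpy as np

def hard_impute(X, mask, rank=1, n_iter=200):
    """Fill missing entries (mask == False) by alternating a rank-r SVD
    projection with re-imposing the observed entries."""
    Z = np.where(mask, X, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        Z = np.where(mask, X, low_rank)   # keep observed speeds fixed
    return Z

# Synthetic speed matrix with strong daily regularity (exactly rank 1).
true = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
mask = np.ones_like(true, dtype=bool)
mask[2, 1] = False                       # pretend this cell was never probed
est = hard_impute(true, mask, rank=1)
```

Because real day-to-day traffic exhibits exactly this kind of low-rank regularity, the missing entry (true value 15.0) is recovered from the observed ones.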
5. Best Match: New relevance search for PubMed.
- Author
-
Fiorini, Nicolas, Canese, Kathi, Starchenko, Grisha, Kireev, Evgeny, Kim, Won, Miller, Vadim, Osipov, Maxim, Kholodov, Michael, Ismagilov, Rafis, Mohan, Sunil, Ostell, James, and Lu, Zhiyong
- Subjects
SEARCH engines, SEARCH algorithms, INTERNET searching, DATA mining, MEDICAL literature - Abstract
PubMed is a free search engine for biomedical literature accessed by millions of users from around the world each day. With the rapid growth of biomedical literature—about two articles are added every minute on average—finding and retrieving the most relevant papers for a given query is increasingly challenging. We present Best Match, a new relevance search algorithm for PubMed that leverages the intelligence of our users and cutting-edge machine-learning technology as an alternative to the traditional date sort order. The Best Match algorithm is trained with past user searches with dozens of relevance-ranking signals (factors), the most important being the past usage of an article, publication date, relevance score, and type of article. This new algorithm demonstrates state-of-the-art retrieval performance in benchmarking experiments as well as an improved user experience in real-world testing (over 20% increase in user click-through rate). Since its deployment in June 2017, we have observed a significant increase (60%) in PubMed searches with relevance sort order: it now assists millions of PubMed searches each week. In this work, we hope to increase the awareness and transparency of this new relevance sort option for PubMed users, enabling them to retrieve information more effectively. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
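At serving time, a learned relevance ranker of the kind described above reduces to scoring each candidate document with a combination of signals and sorting. The sketch below uses a linear combination with entirely hypothetical signal names and weights (the abstract names past usage, publication date, relevance score, and article type as signals; Best Match's actual model and weights are not public in this form):

```python
def rank_results(docs, weights):
    """Score each document by a weighted sum of relevance signals
    (hypothetical names/weights) and sort descending."""
    def score(doc):
        return sum(weights[k] * doc.get(k, 0.0) for k in weights)
    return sorted(docs, key=score, reverse=True)

docs = [
    {"pmid": 1, "text_match": 0.9, "past_usage": 0.1, "recency": 0.2},
    {"pmid": 2, "text_match": 0.7, "past_usage": 0.9, "recency": 0.8},
    {"pmid": 3, "text_match": 0.4, "past_usage": 0.2, "recency": 0.9},
]
weights = {"text_match": 1.0, "past_usage": 0.8, "recency": 0.3}
ranked = [d["pmid"] for d in rank_results(docs, weights)]
```

The point of the example: a document with a weaker text match (pmid 2) can outrank a stronger one once usage and recency signals are weighed in, which is the behavioral difference between relevance sort and plain date sort.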
6. Capturing the influence of geopolitical ties from Wikipedia with reduced Google matrix.
- Author
-
El Zant, Samer, Jaffrès-Runser, Katia, and Shepelyansky, Dima L.
- Subjects
SOCIOCULTURAL factors, GEOPOLITICS, POWER (Social sciences), MARKOV processes - Abstract
Interactions between countries originate from diverse aspects such as geographic proximity, trade, socio-cultural habits, language, religions, etc. Geopolitics studies the influence of a country’s geographic space on its political power and its relationships with other countries. This work reveals the potential of Wikipedia mining for geopolitical study. Indeed, Wikipedia offers solid knowledge and strong correlations among countries by linking web pages together for different types of information (e.g. economic, historical, political, and many others). The major finding of this paper is that meaningful results on the influence of country ties can be extracted from the hyperlinked structure of Wikipedia. We leverage a novel stochastic matrix representation of Markov chains on complex directed networks, the reduced Google matrix. For a selected small set of nodes, the reduced Google matrix concentrates the direct and indirect links of the million-node Wikipedia network into a small Perron-Frobenius matrix that preserves the PageRank probabilities of the global Wikipedia network. We perform a novel sensitivity analysis that leverages this reduced Google matrix to characterize the influence of relationships between countries in the global network. We apply this analysis to two sets of countries (the 27 European Union countries and a set of 40 top worldwide countries). We show that our sensitivity analysis easily exhibits very meaningful information on geopolitics from five different Wikipedia editions (English, Arabic, Russian, French and German). [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
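The reduced Google matrix in the abstract above is derived from the standard Google matrix, which is worth seeing concretely. A minimal sketch of the base objects only (the reduction to a subset of nodes, which separates direct from indirect links, is substantially more involved and not reproduced here):

```python
import numpy as np

def google_matrix(adj, alpha=0.85):
    """Row-stochastic Google matrix: damped link-following plus a uniform
    teleport term; dangling nodes (no out-links) spread uniformly."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    S = np.where(out > 0, A / np.where(out > 0, out, 1.0), 1.0 / n)
    return alpha * S + (1.0 - alpha) / n

def pagerank(G, n_iter=200):
    """Power iteration on the left eigenvector of G."""
    p = np.full(G.shape[0], 1.0 / G.shape[0])
    for _ in range(n_iter):
        p = p @ G
    return p

# Toy directed graph: 0 -> 1, 0 -> 2, 1 -> 2, node 2 is dangling.
adj = [[0, 1, 1],
       [0, 0, 1],
       [0, 0, 0]]
p = pagerank(google_matrix(adj))
```

As expected, the node receiving the most links (node 2) gets the highest PageRank probability, and the vector remains a probability distribution throughout the iteration.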
7. Accurate and fast path computation on large urban road networks: A general approach.
- Author
-
Song, Qing, Li, Meng, and Li, Xiaolei
- Subjects
TRANSPORTATION, TRAFFIC engineering, ROADS, NAVIGATION, ALGORITHMS - Abstract
Accurate and fast path computation is essential for applications such as onboard navigation systems and traffic network routing. While a number of heuristic algorithms have been developed in recent years for faster path queries, their accuracy is often far from satisfactory. In this paper, we first develop an agglomerative graph partitioning method for generating highly balanced traverse-distance partitions, and we construct a three-level graph model based on the partition scheme for structuring the urban road network. Then, we propose a new hierarchical path computation algorithm, which benefits from the hierarchical graph model and uses a region pruning strategy to significantly reduce the search space without compromising accuracy. Finally, we present a detailed experimental evaluation on the real urban road network of New York City; the results demonstrate the effectiveness of the proposed approach in generating optimal fast paths and facilitating real-time routing applications. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
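A hierarchical method like the one above is judged against the exact answer, which on a road network is plain Dijkstra. A minimal baseline sketch (the paper's partitioning and region-pruning layers are not reproduced; the graph is illustrative):

```python
import heapq

def dijkstra(graph, source, target):
    """Exact shortest-path distance on a weighted digraph given as
    {node: [(neighbor, weight), ...]}; returns inf if unreachable."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    seen = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        if u == target:
            return d
        seen.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

graph = {"A": [("B", 2.0), ("C", 5.0)],
         "B": [("C", 1.0), ("D", 4.0)],
         "C": [("D", 1.0)]}
```

Hierarchical schemes with exact pruning, as claimed in the abstract, must return the same distances as this baseline while visiting far fewer nodes on large networks.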
8. An efficient General Transit Feed Specification (GTFS) enabled algorithm for dynamic transit accessibility analysis.
- Author
-
Fayyaz, S. Kiavash, Liu, Xiaoyue Cathy, and Zhang, Guohui
- Subjects
METROPOLITAN areas, PUBLIC transit, TRAVEL time (Traffic engineering), POPULATION density, POPULATION biology - Abstract
The social functions of urbanized areas are highly dependent on convenient access to public transportation systems, particularly for less privileged populations with constrained auto ownership. To accurately evaluate public transit accessibility, it is critical to capture the spatiotemporal variation of transit services. This can be achieved by measuring the shortest paths or minimum travel times between origin-destination (OD) pairs at each time-of-day (e.g. every minute). In recent years, General Transit Feed Specification (GTFS) data has been gaining popularity for between-station travel time estimation due to its interoperability in spatiotemporal analytics. Many software packages, such as ArcGIS, have developed toolboxes to enable travel time estimation with GTFS. They perform reasonably well in calculating travel times between OD pairs for a specific time-of-day (e.g. 8:00 AM), yet can become computationally inefficient and impractical as the data dimensions increase (e.g. all times-of-day on a large network). In this paper, we introduce a new algorithm that is mathematically elegant and computationally efficient to address this issue. An open-source toolbox written in C++ is developed to implement the algorithm. We implemented the algorithm on the City of St. George's transit network to showcase the accessibility analysis enabled by the toolbox. The experimental evidence shows a significant reduction in computational time. The proposed algorithm and toolbox are easily transferable to other transit networks, allowing transit agencies and researchers to perform high-resolution transit performance analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
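The earliest-arrival queries underlying this kind of GTFS accessibility analysis can be answered with a single pass over timetable connections, an idea known as the Connection Scan Algorithm. The sketch below is illustrative Python, not the authors' C++ toolbox; connection tuples of this shape are derivable from GTFS `stop_times.txt`, and the stop names and times are invented.

```python
def earliest_arrival(connections, source, target, start_time):
    """One pass over connections sorted by departure time. Each connection
    is (from_stop, to_stop, dep_time, arr_time), times in minutes past
    midnight. A connection is usable only if we can reach its departure
    stop by its departure time."""
    best = {source: start_time}
    for frm, to, dep, arr in sorted(connections, key=lambda c: c[2]):
        if best.get(frm, float("inf")) <= dep and arr < best.get(to, float("inf")):
            best[to] = arr
    return best.get(target, float("inf"))

conns = [("S1", "S2", 480, 490),   # 8:00 -> 8:10
         ("S2", "S3", 495, 505),   # 8:15 -> 8:25 (transfer at S2)
         ("S1", "S3", 485, 520)]   # direct but slower, 8:05 -> 8:40
```

Leaving at 8:00 the transfer route wins (arrival 8:25); leaving two minutes later the first leg is missed and only the slow direct trip remains, which is exactly the minute-by-minute variation the abstract says must be captured.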
9. Windowed persistent homology: A topological signal processing algorithm applied to clinical obesity data.
- Author
-
Biwer, Craig, Rothberg, Amy, IglayReger, Heidi, Derksen, Harm, Burant, Charles F., and Najarian, Kayvan
- Subjects
OBESITY, HOMOLOGY theory, SIGNAL processing, ALGORITHMS, WEIGHT loss, WEIGHT gain - Abstract
Overweight and obesity are highly prevalent in the population of the United States, affecting roughly two-thirds of Americans. These diseases, along with their associated conditions, are a major burden on the healthcare industry in terms of both dollars spent and effort expended. Volitional weight loss is attempted by many, but weight regain is common. The ability to predict which patients will lose weight and successfully maintain the loss, versus those prone to regain weight, would help ease this burden by allowing clinicians to skip treatments likely to be ineffective. In this paper we introduce a new windowed approach to the persistent homology signal processing algorithm that, when paired with a modified, semimetric version of the Hausdorff distance, can differentiate the two groups where other commonly used methods fail. The novel approach is tested on accelerometer data gathered from an ongoing study at the University of Michigan. While most standard approaches to signal processing show no difference between the two groups, windowed persistent homology and the modified Hausdorff semimetric show a clear separation. This has significant implications for clinical decision making and patient care. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
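The distance used to compare the resulting point sets (here, persistence diagrams or feature sets) is a modified Hausdorff variant. As a reference point, the standard Dubuisson-Jain modified Hausdorff distance is sketched below; the paper's semimetric is a further modification of this idea, not reproduced here.

```python
import numpy as np

def modified_hausdorff(A, B):
    """Dubuisson-Jain modified Hausdorff distance between two point sets
    (rows are points): the max of the two mean nearest-neighbor distances.
    A semimetric: symmetric and non-negative, but the triangle inequality
    can fail."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).mean(), D.min(axis=0).mean())

X = np.array([[0.0], [1.0]])
close = modified_hausdorff(X, np.array([[0.1], [1.1]]))
far = modified_hausdorff(X, np.array([[5.0], [6.0]]))
```

Averaging nearest-neighbor distances, rather than taking the worst case as the classical Hausdorff distance does, makes the measure far less sensitive to a single outlier point, which matters for noisy accelerometer-derived features.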
10. Relay discovery and selection for large-scale P2P streaming.
- Author
-
Zhang, Chengwei, Wang, Angela Yunxian, and Hei, Xiaojun
- Subjects
PEER-to-peer architecture (Computer networks), ERROR analysis in mathematics, ESTIMATION theory, HASHING, NUMERICAL analysis - Abstract
In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we study the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and then to select one relay with good performance. The network location can be measured directly or indirectly, with tradeoffs among timeliness, overhead, and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers’ network locations, and that methods based on purely indirect measurements cannot lead to a good relay selection. Using three publicly available RTT data sets, we also demonstrate that there exists significant error amplification in the commonly used “best-out-of-K” selection methodology. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates, and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using a Distributed Hash Table (DHT). When the DHT is constructed, the node keys carry location information and are generated scalably using indirect measurements, such as the ICS coordinates. Relay discovery is achieved efficiently using DHT-based search. We evaluate various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn, and message costs. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
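The two-phase selection logic in the abstract above is simple to sketch: cheap indirect RTT estimates shortlist a few candidates, then only those few are probed directly. The relay names, RTT values, and shortlist size below are invented for illustration; the DHT-based discovery layer is omitted.

```python
def select_relay(candidates, estimated_rtt, probe, k=3):
    """Two-phase selection: shortlist the k best candidates by coarse
    indirect estimates, then probe only the shortlist directly and keep
    the true best."""
    shortlist = sorted(candidates, key=lambda c: estimated_rtt[c])[:k]
    return min(shortlist, key=probe)

true_rtt = {"r1": 80.0, "r2": 30.0, "r3": 35.0, "r4": 120.0, "r5": 33.0}
# Indirect estimates are coarse: they mis-rank the good relays among
# themselves, but keep them near the top of the list.
estimated_rtt = {"r1": 70.0, "r2": 40.0, "r3": 32.0, "r4": 95.0, "r5": 36.0}

chosen = select_relay(list(true_rtt), estimated_rtt, probe=true_rtt.get, k=3)
```

Pure indirect selection would pick r3 (lowest estimate), a suboptimal relay; the direct-probing phase corrects this to r2 while probing only 3 of the 5 candidates, which is the cost/accuracy compromise the paper argues for.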
11. Mapping technological innovation dynamics in artificial intelligence domains: Evidence from a global patent analysis
- Author
-
Na Liu, Philip Shapira, Xiaoxu Yue, and Jiancheng Guan
- Subjects
Computer and Information Sciences, China, Technology, Asia, Science, Social Sciences, Technology/methods, Research and Analysis Methods, Machine Learning, Geographical Locations, Patents as Topic, Machine Learning Algorithms, Automation, Japan, Inventions, Artificial Intelligence, Support Vector Machines, Humans, Patents, Language Acquisition, Multidisciplinary, Models, Statistical, Applied Mathematics, Simulation and Modeling, Linguistics, Automation/methods, United States, Intellectual Property, Models, Organizational, Physical Sciences, People and Places, North America, Medicine, Law and Legal Sciences, Commercial Law, Diffusion of Innovation, Mathematics, Algorithms, Research Article - Abstract
Artificial intelligence (AI) is emerging as a technology at the center of many political, economic, and societal debates. This paper formulates a new AI patent search strategy and applies this to provide a landscape analysis of AI innovation dynamics and technology evolution. The paper uses patent analyses, network analyses, and source path link count algorithms to examine AI spatial and temporal trends, cooperation features, cross-organization knowledge flow and technological routes. Results indicate a growing yet concentrated, non-collaborative and multi-path development and protection profile for AI patenting, with cross-organization knowledge flows based mainly on interorganizational knowledge citation links.
- Published
- 2021
12. Seed-Fill-Shift-Repair: A redistricting heuristic for civic deliberation
- Author
-
Lee Hachadoorian, Christian Haas, Peter Miller, Steven O. Kimbrough, and Frederic H. Murphy
- Subjects
Social Sciences, Public administration, Elections, Geographical locations, American Community Survey, Sociology, Electoral district, Heuristics, Schools, Multidisciplinary, Applied Mathematics, Simulation and Modeling, Politics, Arizona, Redistricting, Research Design, Physical Sciences, Medicine, Polity, Algorithms, Research Article, Optimization, Census, Science, Political Science, Population, Gerrymandering, Research and Analysis Methods, Education, Political science, Humans, Survey Research, Descriptive statistics, Pennsylvania, Deliberation, United States, North America, People and places, Mathematics - Abstract
Political redistricting is the redrawing of electoral district boundaries. It is normally undertaken to reflect population changes. The process can be abused, in what is called gerrymandering, to favor one party or interest group over another, resulting in broadly undemocratic outcomes that misrepresent the views of the voters. Gerrymandering is especially vexing in the United States. This paper introduces an algorithm, with an implementation, for creating districting plans (whether for political redistricting or for other districting applications). The algorithm, Seed-Fill-Shift-Repair (SFSR), is demonstrated for Congressional redistricting in American states. SFSR is able to create thousands of valid redistricting plans, which may then be used as points of departure for public deliberation regarding how best to redistrict a given polity. The main objectives of this paper are: (i) to present SFSR in a broadly accessible form, including code that implements it and test data, so that it may be used both for civic deliberations by the public and for research purposes; and (ii) to make the case for what SFSR sets out to do, which is to approach redistricting, and districting generally, from a constraint satisfaction perspective and from the perspective of producing a plurality of feasible solutions that may then serve in subsequent deliberations. To further these goals, we make the code publicly available. The paper presents, for illustration purposes, a corpus of 11,206 valid redistricting plans for the Commonwealth of Pennsylvania (produced by SFSR), using the 2017 American Community Survey, along with descriptive statistics. The paper also presents 1,000 plans for each of the states of Arizona, Michigan, North Carolina, Pennsylvania, and Wisconsin, using the 2018 American Community Survey, along with descriptive statistics on these plans and the computations involved in their creation.
- Published
- 2020
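The seed-and-fill idea behind SFSR can be shown on a deliberately tiny example: plant seeds, then grow contiguous districts toward population balance. The sketch below is a drastic simplification on a 1-D strip of census units with two districts (SFSR operates on real geographies, handles many districts, and adds shift and repair phases); populations and the greedy growth rule are illustrative only.

```python
def seed_fill(populations):
    """Toy seed-and-fill on a 1-D strip of census units: seed one district
    at each end, then repeatedly grow the currently lighter district by one
    adjacent unassigned unit, keeping both districts contiguous."""
    n = len(populations)
    left, right = [0], [n - 1]                    # the two seeds
    pop_l, pop_r = populations[0], populations[n - 1]
    while left[-1] + 1 < right[0]:                # units remain unassigned
        if pop_l <= pop_r:
            left.append(left[-1] + 1)
            pop_l += populations[left[-1]]
        else:
            right.insert(0, right[0] - 1)
            pop_r += populations[right[0]]
    return left, right

d1, d2 = seed_fill([5, 1, 1, 1, 4, 5])
```

Every unit is assigned, both districts are contiguous runs, and populations end up near-balanced (8 vs 9), illustrating redistricting as constraint satisfaction rather than optimization of a single "best" plan.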
13. Estimation of the shared mobility demand based on the daily regularity of the urban mobility and the similarity of individual trips
- Author
-
Veve, Cyril, Chiabaut, Nicolas, Laboratoire d'Ingénierie Circulation Transport (LICIT UMR TE), and École Nationale des Travaux Publics de l'État (ENTPE)-Université de Lyon-Université Gustave Eiffel
- Subjects
Economics, Shared mobility, NEW YORK, INDIVIDUAL TRIPS, Social Sciences, Transportation, Geographical locations, Cognition, Customer base, ROAD TRAFFIC, Psychology, Economic impact analysis, ECO-MOBILITY, Multidisciplinary, Geography, Applied Mathematics, Simulation and Modeling, Transportation Infrastructure, CARPOOLING, TRAFFIC MANAGEMENT, Physical Sciences, Engineering and Technology, Medicine, Algorithms, Research Article, Science, Decision Making, URBAN TRAVEL, Human Geography, Research and Analysis Methods, Civil Engineering, MOBILITY DEMAND, Similarity (psychology), Cities, Estimation, MOBILITY, Models, Statistical, DATA PROCESSING, Arithmetic, Cognitive Psychology, Biology and Life Sciences, Environmental economics, URBAN MOBILITY, United States, Economic Analysis, Roads, Economic Impact Analysis, North America, Earth Sciences, Human Mobility, Cognitive Science, TRIPS architecture, Business, TRAFFIC VOLUME, People and places, Mathematics, Neuroscience - Abstract
Even though shared mobility services are encouraged by transportation policies, they remain underused and inefficient transportation modes because they struggle to find their customer base. This paper aims to estimate the potential demand for such services by focusing on individual trips and determining the number of passengers who perform similar trips. Contrary to existing papers, this study focuses on the demand without assuming any specific shared mobility system. An experiment performed on data from New York City leads to clustering more than 85% of the trips. Consequently, shared mobility services such as ride-sharing can find their customer base and, in the long run, significantly reduce the number of cars flowing in the city. After a detailed analysis, commonalities in the clusters are identified: regular patterns from one day to the next exist in the shared mobility demand. This regularity makes it possible to anticipate the potential shared mobility demand and to help transportation suppliers optimize their operations.
- Published
- 2020
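"Similar trips" in the study above means trips whose origins, destinations, and departure times are all close. A minimal greedy-clustering sketch of that idea follows; the similarity thresholds, coordinate units, and the first-member-as-representative rule are illustrative assumptions, not the paper's method.

```python
def similar(t1, t2, space_eps=0.5, time_eps=10):
    """Two trips are shareable if origins, destinations, and departure
    times are all close (coordinates in km, times in minutes;
    thresholds are illustrative)."""
    def near(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 <= space_eps
    return (near(t1["o"], t2["o"]) and near(t1["d"], t2["d"])
            and abs(t1["t"] - t2["t"]) <= time_eps)

def cluster_trips(trips):
    """Greedy clustering: each trip joins the first cluster whose
    representative (first member) it is similar to."""
    clusters = []
    for trip in trips:
        for c in clusters:
            if similar(c[0], trip):
                c.append(trip)
                break
        else:
            clusters.append([trip])
    return clusters

trips = [
    {"o": (0.0, 0.0), "d": (5.0, 5.0), "t": 480},
    {"o": (0.2, 0.1), "d": (5.1, 4.9), "t": 485},   # shareable with the first
    {"o": (9.0, 9.0), "d": (1.0, 1.0), "t": 480},   # different OD pair
]
clusters = cluster_trips(trips)
```

The fraction of trips landing in clusters of size two or more is exactly the kind of "shareable demand" statistic (85% in the paper's New York data) that such an analysis reports.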
14. A leader-follower model for discrete competitive facility location problem under the partially proportional rule with a threshold
- Author
-
Wuyang Yu
- Subjects
Computer science, Social Sciences, Geographical locations, Cognition, Mississippi, Chain (algebraic topology), Psychology, Market share, Workplace, Multidisciplinary, Economic Competition, Applied Mathematics, Simulation and Modeling, Commerce, Facility location problem, Models, Economic, Physical Sciences, Florida, Medicine, Leader follower, Algorithms, Research Article, Mathematical optimization, Current (mathematics), Science, Decision Making, New York, Models, Psychological, Research and Analysis Methods, Ranking Algorithms, Humans, Cognitive Psychology, Biology and Life Sciences, Consumer Behavior, Pennsylvania, Louisiana, United States, Leadership, Ranking, North America, Cognitive Science, People and places, Mathematics, Neuroscience - Abstract
When consumers are faced with a choice among competitive chain facilities that offer exclusive services, current rules do not properly describe the behavior pattern of these consumers. To close the gap between the current rules and this kind of customer behavior, the partially proportional rule with a threshold is proposed in this paper. A leader-follower model for the discrete competitive facility location problem is established under this rule. Combining a greedy strategy with the 2-opt strategy, a heuristic algorithm (GFA) is designed to solve the follower's problem. By embedding GFA, an improved ranking-based algorithm (IRGA) is proposed to solve the leader-follower model. Numerical tests show that the proposed algorithm solves the leader-follower model for the discrete competitive facility location problem effectively. The effects of different parameters on the market shares captured by the leader and follower firms are analyzed in detail using a quasi-real example. An interesting finding is that in some cases the leader firm does not have a first-mover advantage.
- Published
- 2019
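One simplified reading of a "partially proportional rule with a threshold" is that only facilities whose attraction reaches the threshold compete for a customer, and the customer's demand is then split among them in proportion to attraction. The sketch below encodes that reading; the exact rule in the paper may differ in detail, and the attraction values and threshold are invented.

```python
def market_shares(attractions, threshold):
    """Simplified threshold rule: facilities below the threshold get no
    share; the rest split demand proportionally to attraction. If no
    facility clears the threshold, the demand is lost."""
    eligible = {f: a for f, a in attractions.items() if a >= threshold}
    if not eligible:
        return {f: 0.0 for f in attractions}
    total = sum(eligible.values())
    return {f: eligible.get(f, 0.0) / total for f in attractions}

shares = market_shares({"leader": 6.0, "follower": 3.0, "weak": 0.5},
                       threshold=1.0)
```

The threshold is what separates this from the plain proportional (Huff-type) rule: the weak facility attracts nothing at all rather than a small sliver, which changes both firms' optimal locations.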
15. Exploring the Role of Interdisciplinarity in Physics: Success, Talent and Luck
- Author
-
Alfredo Ferro, Alfredo Pulvirenti, Toni Giorgino, Alessio Emanuele Biondo, Alessandro Pluchino, Andrea Rapisarda, and Giulio Burgio
- Subjects
Science and Technology Workforce, Systems Analysis, Economics, Physics education, Social Sciences, Interdisciplinary Studies, Careers in Research, Systems Science, Agent-Based Modeling, Citation analysis, Number Theory, Multidisciplinary, Careers, Physics, Simulation and Modeling, Publications, Computer Science - Digital Libraries, Research Assessment, Professions, Luck, Physical Sciences, Citation Analysis, Medicine, Research Article, Societies, Scientific, Employment, Computer and Information Sciences, Physics - Physics and Society, Leverage (finance), Science Policy, Science, Bibliometrics, physics, interdisciplinary, complex systems, Research and Analysis Methods, Real Numbers, Humans, Computer Simulation, Serendipity, Counterintuitive, United States, Epistemology, Systems analysis, Labor Economics, People and Places, Scientists, Population Groupings, Mathematics - Abstract
Although interdisciplinarity is often touted as a necessity for modern research, the evidence on the relative impact of sectorial versus interdisciplinary science is qualitative at best. In this paper we leverage the bibliographic data set of the American Physical Society to quantify the role of interdisciplinarity in physics, and that of talent and luck in achieving success in scientific careers. We analyze a period of 30 years (1980-2009), tagging papers and their authors by means of the Physics and Astronomy Classification Scheme (PACS), to show that some degree of interdisciplinarity is quite helpful to reach success, measured by proxies such as the number of articles or the citation score. We also propose an agent-based model of the publication-reputation-citation dynamics that reproduces the trends observed in the APS data set. On the one hand, the results highlight the crucial role of randomness and serendipity in real scientific research; on the other, they shed light on a counterintuitive effect indicating that the most talented authors are not necessarily the most successful ones.
- Published
- 2019
16. A personalized channel recommendation and scheduling system considering both section video clips and full video clips
- Author
-
SeungGwan Lee and Daeho Lee
- Subjects
Computer science, Section (typography), Video Recording, Social Sciences, Geographical locations, Machine Learning, Database and Informatics Methods, Learning and Memory, Data Mining, Psychology, Computer Networks, CLIPS, Statistical Data, Multidisciplinary, Multimedia, Applied Mathematics, Simulation and Modeling, IPTV, Scheduling system, Physical Sciences, Information Retrieval, Information Technology, Algorithms, Statistics (Mathematics), Research Article, Communication channel, Computer and Information Sciences, Schedule, Minnesota, Broadcasting, Research and Analysis Methods, Computer Communication Networks, Artificial Intelligence, Learning, Humans, Internet, Communications Media, Cognitive Psychology, Biology and Life Sciences, Models, Theoretical, United States, North America, Cognitive Science, People and places, Mathematics, Neuroscience - Abstract
With the convergence of various broadcasting systems, the amount of content available on mobile terminals, including IPTV, has significantly increased. In this paper, we propose a system that enables users to schedule programs considering both section video clips and full video clips, based on detecting users with similar preferences. Moreover, since the content in the system can be classified by program, the proposed method can store a program desired by the user and thus create and schedule a kind of individual channel. Experimental results, obtained by comparing existing channel recommendation methods with the program recommendation method proposed in this paper, show that the proposed method achieves higher prediction accuracy.
- Published
- 2018
17. Dynamics in the Fitness-Income plane: Brazilian states vs World countries
- Author
-
Luciano Pietronero, Andrea Gabrielli, Emanuele Pugliese, José S. Andrade, and Felipe G. Operti
- Subjects
Economic Complexity, Complex Systems, Dynamical Systems, Economics, Social Sciences, Geographical locations, Gross domestic product, Russia, Spectrum Analysis Techniques, Mathematical and Statistical Techniques, Econometrics, Per capita, Multidisciplinary, Applied Mathematics, Simulation and Modeling, Absorption Spectroscopy, BRIC, Europe, Physical Sciences, Metric (mathematics), Income, Algorithms, Statistics (Mathematics), Brazil, Human, Research Article, Asia, Gross Domestic Product, India, Developing country, Research and Analysis Methods, Life Expectancy, Humans, Statistical Methods, Predictability, Developing Countries, Correction, South America, United States, Socioeconomic Factors, Ranking, Economic complexity index, North America, People and Places, Mathematics, Forecasting - Abstract
In this paper we introduce a novel algorithm, called Exogenous Fitness, to calculate the Fitness of subnational entities, and we apply it to the states of Brazil. In the last decade, several indices were introduced to measure the competitiveness of countries by looking at the complexity of their export basket. Tacchella et al (2012) developed a non-monetary metric called Fitness. In this paper, after an overview of Brazil as a whole and a comparison with the other BRIC countries, we introduce a new methodology based on the Fitness algorithm, called Exogenous Fitness. Combining the results with the Gross Domestic Product per capita (GDP(p)), we look at the dynamics of the Brazilian states in the Fitness-Income plane. Two regimes are distinguishable: one with high predictability and the other with low predictability, showing a deep analogy with the heterogeneous dynamics of the World countries. Furthermore, we compare the ranking of the Brazilian states according to the Exogenous Fitness with the rankings obtained through two other techniques, namely Endogenous Fitness and the Economic Complexity Index.
- Published
- 2018
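The Fitness–Complexity map of Tacchella et al. (2012) referenced in this abstract can be sketched in a few lines. The toy export matrix and the normalize-by-mean convention below are illustrative assumptions, not the paper's Brazilian data:

```python
import numpy as np

def fitness_complexity(M, n_iter=100):
    """Iterate the Tacchella et al. (2012) Fitness-Complexity map.

    M is a binary country x product matrix (M[c, p] = 1 if country c
    competitively exports product p)."""
    n_c, n_p = M.shape
    F = np.ones(n_c)   # country Fitness
    Q = np.ones(n_p)   # product Complexity
    for _ in range(n_iter):
        F_new = M @ Q                    # sum complexities of exported products
        Q_new = 1.0 / (M.T @ (1.0 / F))  # penalize products that low-Fitness countries export
        F = F_new / F_new.mean()         # normalize at each step
        Q = Q_new / Q_new.mean()
    return F, Q

# Toy nested matrix: country 0 exports everything, country 2 only the
# most ubiquitous product.
M = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 0]], dtype=float)
F, Q = fitness_complexity(M)
```

On a nested matrix like this, countries with larger and more exclusive export baskets end up with higher Fitness, while products exported only by high-Fitness countries get higher Complexity.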
18. Global and country-specific mainstreaminess measures: Definitions, analysis, and usage for improving personalized music recommendation systems.
- Author
-
Bauer, Christine and Schedl, Markus
- Abstract
Relevance: Popularity-based approaches are widely adopted in music recommendation systems, both in industry and research. These approaches recommend to the target user what is currently popular among all users of the system. However, as the popularity distribution of music items typically is a long-tail distribution, popularity-based approaches to music recommendation fall short in satisfying listeners that have specialized music preferences far away from the global music mainstream. Addressing this gap, the contribution of this article is three-fold. Definition of mainstreaminess measures: First, we provide several quantitative measures describing the proximity of a user’s music preference to the music mainstream. Assuming that there is a difference between the global music mainstream and a country-specific one, we define the measures at two levels: relating a listener’s music preferences to the global music preferences of all users, or relating them to music preferences of the user’s country. To quantify such music preferences, we define a music item’s popularity in terms of artist playcounts (APC) and artist listener counts (ALC). Moreover, we adopt a distribution-based and a rank-based approach as means to decrease bias towards the head of the long-tail distribution. This eventually results in a framework of 6 measures to quantify music mainstream. Differences between countries with respect to music mainstream: Second, we perform in-depth quantitative and qualitative studies of music mainstream in that we (i) analyze differences between countries in terms of their level of mainstreaminess, (ii) uncover both positive and negative outliers (substantially higher and lower country-specific popularity, respectively, compared to the global mainstream), analyzing these with a mixed-methods approach, and (iii) investigate differences between countries in terms of listening preferences related to popular music artists. 
We conduct our studies and experiments using the standardized LFM-1b dataset, from which we analyze about 800,000,000 listening events shared by about 53,000 users (from 47 countries) of the music streaming platform Last.fm. We show that there are substantial country-specific differences in listeners’ music consumption behavior with respect to the most popular artists listened to. Rating prediction experiments: Third, we demonstrate the applicability of our study results to improve music recommendation systems. To this end, we conduct rating prediction experiments in which we tailor recommendations to a user’s level of preference for the music mainstream using the proposed 6 mainstreaminess measures: defined by a distribution-based or rank-based approach, defined on a global level or on a country level (for the user’s country), and for APC or ALC. Our approach roughly equals a hybrid recommendation approach in which a demographic filtering strategy is implemented before collaborative filtering is performed. Results suggest that, in terms of rating prediction accuracy, each of the presented mainstreaminess definitions has its merits. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
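A minimal sketch of a distribution-based mainstreaminess measure in the spirit of the framework above; the L1-overlap form and the toy playcounts are assumptions for illustration (the article defines six APC/ALC-based measures, not this exact one):

```python
import numpy as np

def mainstreaminess(user_counts, global_counts):
    """Distribution-based proximity of a user's artist playcounts to the
    global playcount distribution over the same artist vocabulary:
    1 - 0.5 * L1 distance between the two normalized distributions
    (1 = identical taste, 0 = completely disjoint)."""
    u = np.asarray(user_counts, float)
    g = np.asarray(global_counts, float)
    u /= u.sum()
    g /= g.sum()
    return 1.0 - 0.5 * np.abs(u - g).sum()

# Toy artist playcounts (APC) over a shared 4-artist vocabulary.
global_apc = [500, 300, 150, 50]     # long-tail-ish global popularity
mainstream_user = [45, 30, 20, 5]    # close to the global distribution
niche_user = [0, 5, 15, 80]          # prefers tail artists
```

A country-specific variant would simply replace `global_apc` with the aggregated playcounts of users from the listener's country.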
19. Using climate envelope models to identify potential ecological trajectories on the Kenai Peninsula, Alaska.
- Author
-
Magness, Dawn Robin and Morton, John M.
- Subjects
CLIMATE change ,ATMOSPHERIC models ,PLANT communities ,MOUNTAIN plants - Abstract
Managers need information about the vulnerability of historical plant communities, and their potential future conditions, to respond appropriately to landscape change driven by global climate change. We model the climate envelopes of plant communities on the Kenai Peninsula in Southcentral Alaska and forecast to 2020, 2050, and 2080. We assess six combinations of downscaled climate data, derived from three global climate models and two representative concentration pathways. We use two lines of evidence, model convergence and empirically measured rates of change, to identify the following plausible ecological trajectories for the peninsula: (1.) alpine tundra and sub-alpine shrub decrease, (2.) perennial snow and ice decrease, (3.) forests remain on the Kenai Lowlands, (4.) the contiguous white-Lutz-Sitka spruce complex declines, and (5.) mixed conifer afforestation occurs along the Gulf of Alaska coast. We suggest that converging models in the context of other lines of evidence is a viable approach to increase certainty for adaptation planning. Extremely dynamic areas with multiple outcomes (i.e., disagreement) among models represent ecological risk, but may also represent opportunities for facilitated adaptation and other managerial approaches to help tip the balance one way or another. By reducing uncertainty, this eclectic approach can be used to inform expectations about the future. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
20. The number of undocumented immigrants in the United States: Estimates based on demographic modeling with data from 1990 to 2016.
- Author
-
Fazel-Zarandi, Mohammad M., Feinstein, Jonathan S., and Kaplan, Edward H.
- Subjects
DEMOGRAPHIC change ,SENSORY perception ,EMIGRATION & immigration ,POPULATION ,GOVERNMENT policy - Abstract
We apply standard demographic principles of inflows and outflows to estimate the number of undocumented immigrants in the United States, using the best available data, including some that have only recently become available. Our analysis covers the years 1990 to 2016. We develop an estimate of the number of undocumented immigrants based on parameter values that tend to underestimate undocumented immigrant inflows and overstate outflows; we also show the probability distribution for the number of undocumented immigrants based on simulating our model over parameter value ranges. Our conservative estimate is 16.7 million for 2016, nearly fifty percent higher than the most prominent current estimate of 11.3 million, which is based on survey data and thus different sources and methods. The mean estimate based on our simulation analysis is 22.1 million, essentially double the current widely accepted estimate. Our model predicts a similar trajectory of growth in the number of undocumented immigrants over the years of our analysis, but at a higher level. While our analysis delivers different results, we note that it is based on many assumptions. The most critical of these concern border apprehension rates and voluntary emigration rates of undocumented immigrants in the U.S. These rates are uncertain, especially in the 1990’s and early 2000’s, which is when—both based on our modeling and the very different survey data approach—the number of undocumented immigrants increases most significantly. Our results, while based on a number of assumptions and uncertainties, could help frame debates about policies whose consequences depend on the number of undocumented immigrants in the United States. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
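The inflow/outflow logic described above can be sketched as a simple Monte Carlo stock-flow model. All parameter ranges, the starting stock, and the uniform sampling below are made-up illustrations, not the paper's calibrated inputs:

```python
import random

def simulate_population(years, inflow_low, inflow_high,
                        out_rate_low, out_rate_high, seed=0):
    """One Monte Carlo run of a stock-flow population model: each year an
    uncertain inflow is added and an uncertain fraction of the stock leaves
    (emigration, legalization, mortality). All numbers are illustrative."""
    rng = random.Random(seed)
    pop = 3.5e6  # assumed 1990 starting stock (illustrative, not the paper's)
    for _ in range(years):
        inflow = rng.uniform(inflow_low, inflow_high)
        out_rate = rng.uniform(out_rate_low, out_rate_high)
        pop = pop + inflow - out_rate * pop
    return pop

# Simulating over parameter ranges yields a distribution of estimates,
# analogous to the paper's probability distribution for 2016.
runs = [simulate_population(26, 4e5, 9e5, 0.02, 0.06, seed=s)
        for s in range(1000)]
mean_estimate = sum(runs) / len(runs)
```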
21. Ancestry-specific recent effective population size in the Americas.
- Author
-
Browning, Sharon R., Browning, Brian L., Daviglus, Martha L., Durazo-Arvizu, Ramon A., Schneiderman, Neil, Kaplan, Robert C., and Laurie, Cathy C.
- Subjects
POPULATION biology ,EMIGRATION & immigration ,NATURAL disasters ,POPULATION measurement (Population biology) ,GENE mapping ,HAPLOTYPES - Abstract
Populations change in size over time due to factors such as population growth, migration, bottleneck events, natural disasters, and disease. The historical effective size of a population affects the power and resolution of genetic association studies. For admixed populations, it is not only the overall effective population size that is of interest, but also the effective sizes of the component ancestral populations. We use identity by descent and local ancestry inferred from genome-wide genetic data to estimate overall and ancestry-specific effective population size during the past hundred generations for nine admixed American populations from the Hispanic Community Health Study/Study of Latinos, and for African-American and European-American populations from two US cities. In these populations, the estimated pre-admixture effective sizes of the ancestral populations vary by sampled population, suggesting that the ancestors of different sampled populations were drawn from different sub-populations. In addition, we estimate that overall effective population sizes dropped substantially in the generations immediately after the commencement of European and African immigration, reaching a minimum around 12 generations ago, but rebounded within a small number of generations afterwards. Of the populations that we considered, the population of individuals originating from Puerto Rico has the smallest bottleneck size of one thousand, while the Pittsburgh African-American population has the largest bottleneck size of two hundred thousand. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
22. Evaluating the influential priority of the factors on insurance loss of public transit.
- Author
-
Zhang, Wenhui, Su, Yongmin, Ke, Ruimin, and Chen, Xinqiang
- Subjects
PUBLIC transit ,INSURANCE claims ,GREY relational analysis ,K-means clustering - Abstract
Understanding the correlation between influential factors and insurance losses is beneficial for insurers to accurately price policies and modify the bonus-malus system. Although there have been a number of achievements in modeling insurance losses and claims, limited effort has focused on exploring the relative roles of accident characteristics in insurance losses. The primary objective of this study is to evaluate the influential priority of transit accident attributes, such as the time, location, and type of accident. Based on the dataset from the Washington State Transit Insurance Pool (WSTIP) in the USA, we implement several key algorithms to achieve this objective. First, the K-means algorithm is used to cluster the insurance loss data into 6 intervals; second, a Grey Relational Analysis (GRA) model is applied to calculate grey relational grades of the influential factors in each interval; in addition, we implement a Naive Bayes model to compute the posterior probability of factor values falling in each interval. The results show that the time, location, and type of accident significantly influence the insurance loss in the first five intervals, but their grey relational grades show no significant difference. In the last interval, which represents the highest insurance loss, the grey relational grade of the time is significantly higher than those of the location and type of accident. For each value of the time and location, the insurance loss most likely falls in the first and second intervals, which correspond to lower losses. However, for accidents between buses and non-motorized road users, the probability of the insurance loss falling in interval 6 tends to be highest. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
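A sketch of the grey relational grade computation named in this abstract; rho = 0.5 is the conventional distinguishing coefficient, and the sequences below are toy values, not WSTIP data:

```python
import numpy as np

def grey_relational_grades(reference, factors, rho=0.5):
    """Grey Relational Analysis: relational grade of each factor sequence
    against the reference (e.g. insurance loss) sequence. Sequences are
    assumed already normalized to a comparable scale."""
    x0 = np.asarray(reference, float)
    grades = []
    for xi in factors:
        delta = np.abs(x0 - np.asarray(xi, float))
        dmin, dmax = delta.min(), delta.max()
        coef = (dmin + rho * dmax) / (delta + rho * dmax)
        grades.append(coef.mean())       # grade = mean relational coefficient
    return grades

loss = [0.2, 0.4, 0.6, 0.8, 1.0]                 # toy normalized loss sequence
time_factor = [0.22, 0.42, 0.63, 0.78, 0.97]     # tracks the loss closely
location_factor = [0.9, 0.1, 0.7, 0.2, 0.5]      # only weakly related
g_time, g_loc = grey_relational_grades(loss, [time_factor, location_factor])
```

A factor whose sequence moves closely with the reference earns a grade near 1, which is the sense in which the study ranks the influential priority of accident attributes.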
23. Mediterranean California’s water use future under multiple scenarios of developed and agricultural land use change.
- Author
-
Wilson, Tamara S., Sleeter, Benjamin M., and Cameron, D. Richard
- Subjects
WATER supply ,WATER use ,CLIMATE change ,AGRICULTURAL intensification ,URBANIZATION & the environment - Abstract
With growing demand and highly variable inter-annual water supplies, California’s water use future is fraught with uncertainty. Climate change projections, anticipated population growth, and continued agricultural intensification will likely stress existing water supplies in coming decades. Using a state-and-transition simulation modeling approach, we examine a broad suite of spatially explicit future land use scenarios and their associated county-level water use demand out to 2062. We examined a range of potential water demand futures sampled from a 20-year record of historical (1992–2012) data to develop a suite of potential future land change scenarios, including low/high change scenarios for urbanization and agriculture as well as “lowest of the low” and “highest of the high” anthropogenic use. Future water demand decreased 8.3 billion cubic meters (Bm3) in the lowest of the low scenario and decreased 0.8 Bm3 in the low agriculture scenario. The greatest increases in water demand were projected for the highest of the high land use (+9.4 Bm3), high agricultural expansion (+4.6 Bm3), and high urbanization (+2.1 Bm3) scenarios. Overall, these scenarios show agricultural land use decisions will likely drive future demand more than increasing municipal and industrial uses, yet improved efficiencies across all sectors could lead to potential water use savings. Results provide water managers with information on diverging land use and water use futures, based on historical, observed land change trends and water use histories. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
24. Evaluating the role of land cover and climate uncertainties in computing gross primary production in Hawaiian Island ecosystems.
- Author
-
Kimball, Heather L., Selmants, Paul C., Moreno, Alvaro, Running, Steve W., and Giardina, Christian P.
- Subjects
LAND cover ,PRIMARY productivity (Biology) ,ISLAND ecology ,FLUX (Energy) ,MODIS (Spectroradiometer) - Abstract
Gross primary production (GPP) is the Earth’s largest carbon flux into the terrestrial biosphere and plays a critical role in regulating atmospheric chemistry and global climate. The Moderate Resolution Imaging Spectroradiometer (MODIS) MOD17 data product is a widely used remote sensing-based model that provides global estimates of spatiotemporal trends in GPP. When the MOD17 algorithm is applied to regional-scale heterogeneous landscapes, input data from coarse resolution land cover and climate products may increase uncertainty in GPP estimates, especially in high productivity tropical ecosystems. We examined the influence of using locally specific land cover and high-resolution local climate input data on MOD17 estimates of GPP for the State of Hawaii, a heterogeneous and discontinuous tropical landscape. Replacing the global land cover data input product (MOD12Q1) with Hawaii-specific land cover data reduced statewide GPP estimates by ~8%, primarily because the Hawaii-specific land cover map had less vegetated land area compared to the global land cover product. Replacing coarse resolution GMAO climate data with Hawaii-specific high-resolution climate data also reduced statewide GPP estimates by ~8% because of the higher spatial variability of photosynthetically active radiation (PAR) in the Hawaii-specific climate data. The combined use of both Hawaii-specific land cover and high-resolution Hawaii climate data inputs reduced statewide GPP by ~16%, suggesting equal and independent influence on MOD17 GPP estimates. Our sensitivity analyses within a heterogeneous tropical landscape suggest that refined global land cover and climate data sets may contribute to an enhanced MOD17 product at a variety of spatial scales. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
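The MOD17 light-use-efficiency logic discussed above can be sketched roughly as follows; the ramp thresholds and epsilon_max are placeholder values, not the biome-specific lookup-table parameters the product actually uses:

```python
def ramp(x, lo, hi):
    """Linear 0-1 scalar: 0 at or below lo, 1 at or above hi."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def mod17_gpp(eps_max, tmin_c, vpd_pa, fpar, sw_rad_mj):
    """Daily GPP (g C m^-2) in the spirit of the MOD17 light-use-efficiency
    model: GPP = eps_max * Tmin_scalar * VPD_scalar * FPAR * PAR, with PAR
    taken as 45% of incoming shortwave radiation. The ramp thresholds here
    are illustrative, not calibrated biome values."""
    tmin_scalar = ramp(tmin_c, -8.0, 12.0)            # cold-temperature limit
    vpd_scalar = 1.0 - ramp(vpd_pa, 650.0, 4000.0)    # high VPD shuts down GPP
    par = 0.45 * sw_rad_mj
    return eps_max * tmin_scalar * vpd_scalar * fpar * par

# More PAR at the same FPAR should give more GPP, which is why the spatial
# variability of PAR in the climate inputs matters so much here.
g_low = mod17_gpp(1.0, 10.0, 1000.0, 0.6, 8.0)
g_high = mod17_gpp(1.0, 10.0, 1000.0, 0.6, 12.0)
```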
25. Incorporating abundance information and guiding variable selection for climate-based ensemble forecasting of species' distributional shifts.
- Author
-
Tanner, Evan P., Papeş, Monica, Elmore, R. Dwayne, Fuhlendorf, Samuel D., and Davis, Craig A.
- Subjects
ECOLOGICAL niche ,CLIMATE change ,SPECIES distribution ,STATISTICAL correlation ,MATHEMATICAL variables - Abstract
Ecological niche models (ENMs) have increasingly been used to estimate the potential effects of climate change on species’ distributions worldwide. Recently, predictions of species abundance have also been obtained with such models, though knowledge about the climatic variables affecting species abundance is often lacking. To address this, we used a well-studied guild (temperate North American quail) and the Maxent modeling algorithm to compare model performance of three variable selection approaches: correlation/variable contribution (CVC), biological (i.e., variables known to affect species abundance), and random. We then applied the best approach to forecast potential distributions, under future climatic conditions, and analyze future potential distributions in light of available abundance data and presence-only occurrence data. To estimate species’ distributional shifts we generated ensemble forecasts using four global circulation models, four representative concentration pathways, and two time periods (2050 and 2070). Furthermore, we present distributional shifts where 75%, 90%, and 100% of our ensemble models agreed. The CVC variable selection approach outperformed our biological approach for four of the six species. Model projections indicated species-specific effects of climate change on future distributions of temperate North American quail. The Gambel’s quail (Callipepla gambelii) was the only species predicted to gain area in climatic suitability across all three scenarios of ensemble model agreement. Conversely, the scaled quail (Callipepla squamata) was the only species predicted to lose area in climatic suitability across all three scenarios of ensemble model agreement. Our models projected future loss of areas for the northern bobwhite (Colinus virginianus) and scaled quail in portions of their distributions which are currently areas of high abundance. 
Climatic variables that influence local abundance may not always scale up to influence species’ distributions. Special attention should be given to selecting variables for ENMs, and tests of model performance should be used to validate the choice of variables. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
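A sketch of a correlation/variable-contribution (CVC) style filter like the approach compared above; the 0.7 correlation cutoff and the synthetic predictors are assumptions for illustration:

```python
import numpy as np

def cvc_select(X, contributions, names, r_max=0.7):
    """Correlation/variable-contribution style filter: rank candidate
    predictors by model contribution, then greedily keep a variable only
    if its |Pearson r| with every already-kept variable is below r_max.
    The 0.7 cutoff is a common convention, assumed here."""
    order = np.argsort(contributions)[::-1]   # most important first
    R = np.corrcoef(X, rowvar=False)
    kept = []
    for i in order:
        if all(abs(R[i, j]) < r_max for j in kept):
            kept.append(i)
    return [names[i] for i in kept]

# Synthetic predictors: one is nearly a duplicate of another.
rng = np.random.default_rng(0)
temp = rng.normal(size=200)
temp_like = temp + rng.normal(scale=0.05, size=200)  # ~collinear with temp
precip = rng.normal(size=200)
X = np.column_stack([temp, temp_like, precip])
chosen = cvc_select(X, contributions=[0.5, 0.4, 0.3],
                    names=["temp", "temp_like", "precip"])
```

The redundant predictor is dropped in favor of the higher-contribution one, which is the sense in which the CVC approach prunes correlated climate variables before Maxent modeling.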
26. Simulated bat populations erode when exposed to climate change projections for western North America.
- Author
-
Hayes, Mark A. and Adams, Rick A.
- Subjects
BATS ,LACTATION ,FRINGED myotis ,CLIMATE change ,SPECIES diversity - Abstract
Recent research has demonstrated that temperature and precipitation conditions correlate with successful reproduction in some insectivorous bat species that live in arid and semiarid regions, and that hot and dry conditions correlate with reduced lactation and reproductive output by females of some species. However, the potential long-term impacts of climate-induced reproductive declines on bat populations in western North America are not well understood. We combined results from long-term field monitoring and experiments in our study area with information on vital rates to develop stochastic age-structured population dynamics models and analyzed how simulated fringed myotis (Myotis thysanodes) populations changed under projected future climate conditions in our study area near Boulder, Colorado (Boulder Models) and throughout western North America (General Models). Each simulation consisted of an initial population of 2,000 females and an approximately stable age distribution at the beginning of the simulation. We allowed each population to be influenced by the mean annual temperature and annual precipitation for our study area and a generalized range-wide model projected through year 2086, for each of four carbon emission scenarios (representative concentration pathways RCP2.6, RCP4.5, RCP6.0, RCP8.5). Each population simulation was repeated 10,000 times. Of the 8 Boulder Model simulations, 1 increased (+29.10%), 3 stayed approximately stable (+2.45%, +0.05%, -0.03%), and 4 simulations decreased substantially (-44.10%, -44.70%, -44.95%, -78.85%). All General Model simulations for western North America decreased by >90% (-93.75%, -96.70%, -96.70%, -98.75%). These results suggest that a changing climate in western North America has the potential to quickly erode some forest bat populations including species of conservation concern, such as fringed myotis. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
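The stochastic age-structured projection described above can be sketched with a two-age-class, female-only model. The vital rates, the environmental multiplier on reproduction, and the scenario values are illustrative stand-ins for the paper's estimated parameters:

```python
import random

def project_population(n0, years, survival, fecundity, env_sd, seed=1):
    """Stochastic two-age-class (juvenile/adult) female-only projection.
    Each year reproduction is scaled by an environmental multiplier, a
    stand-in for the temperature/precipitation effect on reproductive
    output; all rates are illustrative, not fringed myotis estimates."""
    rng = random.Random(seed)
    juv, adult = n0 * 0.3, n0 * 0.7
    for _ in range(years):
        env = max(0.0, rng.gauss(1.0, env_sd))   # climate effect on breeding
        births = fecundity * env * adult
        juv, adult = births, survival[0] * juv + survival[1] * adult
    return juv + adult

# Hotter/drier conditions are modeled here simply as lower mean fecundity.
baseline = project_population(2000, 70, (0.6, 0.85), 0.45, 0.1)
eroded = project_population(2000, 70, (0.6, 0.85), 0.20, 0.1)
```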
27. Deploying a quantum annealing processor to detect tree cover in aerial imagery of California.
- Author
-
Boyda, Edward, Basu, Saikat, Ganguly, Sangram, Michaelis, Andrew, Mukhopadhyay, Supratik, and Nemani, Ramakrishna R.
- Subjects
QUANTUM annealing ,COMPUTER vision ,AERIAL photography ,REMOTE sensing ,GROUND cover plants - Abstract
Quantum annealing is an experimental and potentially breakthrough computational technology for handling hard optimization problems, including problems of computer vision. We present a case study in training a production-scale classifier of tree cover in remote sensing imagery, using early-generation quantum annealing hardware built by D-Wave Systems, Inc. Beginning within a known boosting framework, we train decision stumps on texture features and vegetation indices extracted from four-band, one-meter-resolution aerial imagery from the state of California. We then impose a regularized quadratic training objective to select an optimal voting subset from among these stumps. The votes of the subset define the classifier. For optimization, the logical variables in the objective function map to quantum bits in the hardware device, while quadratic couplings are encoded as the strengths of physical interactions between the quantum bits. Hardware design limits the number of couplings between these basic physical entities to five or six. To account for this limitation in mapping large problems to the hardware architecture, we propose a truncation and rescaling of the training objective through a trainable metaparameter. The boosting process on our basic 108- and 508-variable problems, thus constituted, returns classifiers that incorporate a diverse range of color- and texture-based metrics and discriminate tree cover with accuracies as high as 92% in validation and 90% on a test scene encompassing the open space preserves and dense suburban build of Mill Valley, CA. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
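The mapping from boosting to a quadratic binary objective can be sketched as a QUBO. The exact regularization form below is an assumption, and exhaustive search stands in for the annealing hardware:

```python
import itertools
import numpy as np

def qubo_from_boosting(H, y, lam=0.1):
    """Upper-triangular QUBO for selecting a voting subset of weak
    classifiers: minimize |H w - y|^2 + lam * sum(w) over binary w, where
    column j of H holds stump j's +/-1 votes on the training samples.
    Since w_j^2 = w_j for binary w, expanding the square gives
    Q[j, j] = h_j.h_j - 2 h_j.y + lam  and  Q[j, k] = 2 h_j.h_k (j < k)."""
    n = H.shape[1]
    Q = np.zeros((n, n))
    for j in range(n):
        Q[j, j] = H[:, j] @ H[:, j] - 2 * H[:, j] @ y + lam
        for k in range(j + 1, n):
            Q[j, k] = 2 * (H[:, j] @ H[:, k])
    return Q

def brute_force_minimum(Q):
    """Exhaustive stand-in for the quantum annealer: minimize w^T Q w."""
    best_w, best_e = None, float("inf")
    for bits in itertools.product((0, 1), repeat=Q.shape[0]):
        w = np.array(bits)
        e = w @ Q @ w
        if e < best_e:
            best_w, best_e = w, e
    return best_w, best_e

# Stump 0 matches the labels exactly; stump 1 is its negation; stump 2
# always votes +1. The minimum-energy subset should keep only stump 0.
H = np.array([[ 1, -1, 1],
              [-1,  1, 1],
              [ 1, -1, 1],
              [-1,  1, 1]], dtype=float)
y = np.array([1, -1, 1, -1], dtype=float)
best_w, best_e = brute_force_minimum(qubo_from_boosting(H, y))
```

On real hardware, each `w_j` maps to a qubit and each nonzero `Q[j, k]` to a physical coupling, which is where the five-to-six-coupling limit discussed in the abstract bites.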
28. Modeled Impacts of Chronic Wasting Disease on White-Tailed Deer in a Semi-Arid Environment.
- Author
-
Foley, Aaron M., Hewitt, David G., DeYoung, Charles A., DeYoung, Randy W., and Schnupp, Matthew J.
- Subjects
CHRONIC wasting disease ,WHITE-tailed deer ,DISEASE prevalence ,DEATH rate ,PRION diseases ,DISEASES - Abstract
White-tailed deer are a culturally and economically important game species in North America, especially in South Texas. The recent discovery of chronic wasting disease (CWD) in captive deer facilities in Texas has increased concern about the potential emergence of CWD in free-ranging deer. The concern is exacerbated because much of the South Texas region is a semi-arid environment with variable rainfall, where precipitation is strongly correlated with fawn recruitment. Further, the marginally productive rangelands, in combination with erratic fawn recruitment, result in populations that are frequently density-independent, and thus sensitive to additive mortality. It is unknown how a deer population in semi-arid regions would respond to the presence of CWD. We used long-term empirical datasets from a lightly harvested (2% annual harvest) population in conjunction with 3 prevalence growth rates from CWD-afflicted areas (0.26%, 0.83%, and 2.3% increases per year) via a multi-stage partially deterministic model to simulate a deer population for 25 years under four scenarios: 1) without CWD and without harvest, 2) with CWD and without harvest, 3) with CWD and male harvest only, and 4) with CWD and harvest of both sexes. The modeled populations without CWD and without harvest averaged a 1.43% annual increase over 25 years; incorporation of 2% annual harvest of both sexes resulted in a stable population. The model with the slowest CWD prevalence growth rate (0.26% annually) without harvest resulted in stable populations, but the addition of 1% harvest resulted in population declines. Further, the male age structure in CWD models became skewed to younger age classes. We incorporated fawn:doe ratios from three CWD-afflicted areas in Wisconsin and Wyoming into the model with 0.26% annual increase in prevalence and populations did not begin to decline until ~10%, ~16%, and ~26% of deer were harvested annually.
Deer populations in variable environments rely on high adult survivorship to buffer the low and erratic fawn recruitment rates. The increase in additive mortality rates for adults via CWD negatively impacted simulated population trends to the extent that hunter opportunity would be greatly reduced. Our results improve understanding of the potential influences of CWD on deer populations in semi-arid environments with implications for deer managers, disease ecologists, and policy makers. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
29. Biologically Informed Individual-Based Network Model for Rift Valley Fever in the US and Evaluation of Mitigation Strategies.
- Author
-
Scoglio, Caterina M., Bosca, Claudio, Riad, Mahbubul H., Sahneh, Faryad D., Britch, Seth C., Cohnstaedt, Lee W., and Linthicum, Kenneth J.
- Subjects
RIFT Valley fever ,MOSQUITO vectors ,MOSQUITO control ,EPIDEMICS - Abstract
Rift Valley fever (RVF) is a zoonotic disease endemic in sub-Saharan Africa with periodic outbreaks in human and animal populations. Mosquitoes are the primary disease vectors; however, Rift Valley fever virus (RVFV) can also spread by direct contact with infected tissues. The transmission cycle is complex, involving humans, livestock, and multiple species of mosquitoes. The epidemiology of RVFV in endemic areas is strongly affected by climatic conditions and environmental variables. In this research, we adapt and use a network-based modeling framework to simulate the transmission of RVFV among hypothetical cattle operations in Kansas, US. Our model considers geo-located livestock populations at the individual level while incorporating the role of mosquito populations and the environment at a coarse resolution. Extensive simulations show the flexibility of our modeling framework when applied to specific scenarios to quantitatively evaluate the efficacy of mosquito control and livestock movement regulations in reducing the extent and intensity of RVF outbreaks in the United States. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
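A minimal discrete-time SIR process on a contact network, as a stand-in for the individual-based transmission layer described above; the ring network, `beta`, and `gamma` are toy assumptions (the paper's framework additionally couples mosquito populations and environmental variables, which are abstracted into the per-edge rate here):

```python
import random

def simulate_outbreak(adj, beta, gamma, seed_node, steps=100, rng_seed=3):
    """Discrete-time SIR on a contact network given as an adjacency dict.
    Each step, every infectious node transmits to each susceptible
    neighbor with probability beta and recovers with probability gamma.
    Returns the final outbreak size (nodes ever infected)."""
    rng = random.Random(rng_seed)
    state = {v: "S" for v in adj}
    state[seed_node] = "I"
    for _ in range(steps):
        new_inf, new_rec = [], []
        for v, s in state.items():
            if s == "I":
                for u in adj[v]:
                    if state[u] == "S" and rng.random() < beta:
                        new_inf.append(u)
                if rng.random() < gamma:
                    new_rec.append(v)
        for u in new_inf:
            state[u] = "I"
        for v in new_rec:
            state[v] = "R"
    return sum(1 for s in state.values() if s != "S")

# A movement ban can be modeled by deleting edges, shrinking the
# reachable set and hence the final outbreak size.
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
size = simulate_outbreak(ring, beta=0.9, gamma=0.05, seed_node=0)
```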
30. Automated Retinal Layer Segmentation Using Spectral Domain Optical Coherence Tomography: Evaluation of Inter-Session Repeatability and Agreement between Devices.
- Author
-
Terry, Louise, Cassels, Nicola, Lu, Kelly, Acton, Jennifer H., Margrain, Tom H., North, Rachel V., Fergusson, James, White, Nick, and Wood, Ashley
- Subjects
OCULAR radiography ,IMAGE segmentation ,OPTICAL coherence tomography ,STATISTICAL reliability ,STATISTICAL correlation - Abstract
Retinal and intra-retinal layer thicknesses are routinely generated from optical coherence tomography (OCT) images, but on-board software capabilities and image scaling assumptions are not consistent across devices. This study evaluates the device-independent Iowa Reference Algorithms (Iowa Institute for Biomedical Imaging) for automated intra-retinal layer segmentation and image scaling for three OCT systems. Healthy participants (n = 25) underwent macular volume scans using a Cirrus HD-OCT (Zeiss), 3D-OCT 1000 (Topcon), and a non-commercial long-wavelength (1040nm) OCT on two occasions. Mean thickness of 10 intra-retinal layers was measured in three ETDRS subfields (fovea, inner ring and outer ring) using the Iowa Reference Algorithms. Where available, total retinal thicknesses were measured using on-board software. Measured axial eye length (AEL)-dependent scaling was used throughout, with a comparison made to the system-specific fixed-AEL scaling. Inter-session repeatability and agreement between OCT systems and segmentation methods was assessed. Inter-session coefficient of repeatability (CoR) for the foveal subfield total retinal thickness was 3.43μm, 4.76μm, and 5.98μm for the Zeiss, Topcon, and long-wavelength images respectively. For the commercial software, CoR was 4.63μm (Zeiss) and 7.63μm (Topcon). The Iowa Reference Algorithms demonstrated higher repeatability than the on-board software and, in addition, reliably segmented all 10 intra-retinal layers. With fixed-AEL scaling, the algorithm produced significantly different thickness values for the three OCT devices (P<0.05), with these discrepancies generally characterized by an overall offset (bias) and correlations with axial eye length for the foveal subfield and outer ring (P<0.05). This correlation was reduced to an insignificant level in all cases when AEL-dependent scaling was used. 
Overall, the Iowa Reference Algorithms are viable for clinical and research use in healthy eyes imaged with these devices; however, ocular biometry is required for accurate quantification of OCT images. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
31. Climate Change and Maize Yield in Iowa.
- Author
-
Xu, Hong, Twine, Tracy E., and Girvetz, Evan
- Subjects
CORN yields ,EFFECT of climate on corn ,CORN farming ,PHYSIOLOGICAL adaptation ,AGRICULTURAL productivity ,CORN -- Economic aspects ,PLANTS - Abstract
Climate is changing across the world, including the major maize-growing state of Iowa in the USA. To maintain crop yields, farmers will need a suite of adaptation strategies, and choice of strategy will depend on how the local to regional climate is expected to change. Here we predict how maize yield might change through the 21st century as compared with late 20th century yields across Iowa, USA, a region representing ideal climate and soils for maize production that contributes substantially to the global maize economy. To account for climate model uncertainty, we drive a dynamic ecosystem model with output from six climate models and two future climate forcing scenarios. Despite a wide range in the predicted amount of warming and change to summer precipitation, all simulations predict a decrease in maize yields from the late 20th century to the middle and late 21st century, ranging from 15% to 50%. Linear regression of all models predicts a 6% state-averaged yield decrease for every 1°C increase in warm season average air temperature. When the influence of moisture stress on crop growth is removed from the model, yield decreases either remain the same or are reduced, depending on predicted changes in warm season precipitation. Our results suggest that even if maize were to receive all the water it needed, under the strongest climate forcing scenario yields will decline by 10–20% by the end of the 21st century. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
32. Relationship between the Uncompensated Price Elasticity and the Income Elasticity of Demand under Conditions of Additive Preferences.
- Author
-
Sabatelli, Lorenzo
- Subjects
ELASTICITY (Economics) ,ECONOMIC demand ,ADDITIVES ,FINANCIAL instruments ,MARGINAL utility - Abstract
Income and price elasticity of demand quantify the responsiveness of markets to changes in income and in prices, respectively. Under the assumptions of utility maximization and preference independence (additive preferences), mathematical relationships between income elasticity values and the uncompensated own and cross price elasticity of demand are here derived using the differential approach to demand analysis. Key parameters are: the elasticity of the marginal utility of income, and the average budget share. The proposed method can be used to forecast the direct and indirect impact of price changes and of financial instruments of policy using available estimates of the income elasticity of demand. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
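The paper's specific additive-preference formulas are not reproduced here, but the standard Slutsky equation in elasticity form shows the kind of relationship involved: the uncompensated (Marshallian) elasticity equals the compensated (Hicksian) elasticity minus the budget share times the income elasticity. A minimal sketch with invented numbers:

```python
# Sketch: the textbook Slutsky equation in elasticity form,
#   e^M_ij = e^H_ij - w_j * eta_i
# (uncompensated = compensated minus the income effect).  This is the
# standard relation, not the paper's additive-preferences derivation;
# the numbers below are invented.

def marshallian_elasticity(hicksian_e, income_elasticity, budget_share_j):
    """Uncompensated elasticity from compensated elasticity,
    income elasticity eta_i, and budget share w_j."""
    return hicksian_e - budget_share_j * income_elasticity

# A good with compensated own-price elasticity -0.3, income elasticity
# 1.2, and budget share 0.25 (all assumed):
print(marshallian_elasticity(-0.3, 1.2, 0.25))  # -0.6
```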
33. Demographic and Component Allee Effects in Southern Lake Superior Gray Wolves.
- Author
-
Stenglein, Jennifer L. and Van Deelen, Timothy R.
- Subjects
ALLEE effect ,WOLVES ,ANIMAL populations ,BIOLOGICAL extinction ,CONSERVATIONISTS ,BAYESIAN analysis ,SIMULATION methods & models - Abstract
Recovering populations of carnivores suffering Allee effects risk extinction because positive population growth requires a minimum number of cooperating individuals. Conservationists seldom consider these issues in planning for carnivore recovery because of data limitations, but ignoring Allee effects could lead to overly optimistic predictions for growth and underestimates of extinction risk. We used Bayesian splines to document a demographic Allee effect in the time series of gray wolf (Canis lupus) population counts (1980–2011) in the southern Lake Superior region (SLS, Wisconsin and the upper peninsula of Michigan, USA) in each of four measures of population growth. We estimated that the population crossed the Allee threshold at roughly 20 wolves in four to five packs. Maximum per-capita population growth occurred in the mid-1990s when there were approximately 135 wolves in the SLS population. To infer mechanisms behind the demographic Allee effect, we evaluated a potential component Allee effect using an individual-based spatially explicit model for gray wolves in the SLS region. Our simulations varied the perception neighborhoods for mate-finding and the mean dispersal distances of wolves. Simulations of wolves with long-distance dispersal and reduced perception neighborhoods were the most likely to go extinct or experience Allee effects. These phenomena likely restricted population growth in the early years of SLS wolf population recovery. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
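A strong Allee effect of the kind the abstract describes is often illustrated with the cubic growth model dN/dt = rN(1 − N/K)(N/A − 1), in which populations below the threshold A decline and populations above it grow. A minimal sketch, where A = 20 echoes the abstract's estimated threshold but r and K are assumptions, not the paper's fitted Bayesian-spline model:

```python
# Sketch: deterministic strong-Allee growth model
#   dN/dt = r * N * (1 - N/K) * (N/A - 1)
# A = 20 echoes the abstract's estimated Allee threshold; r and K are
# illustrative assumptions, not values from the paper.

def simulate(n0, r=0.3, K=700, A=20, dt=0.1, steps=2000):
    """Euler-integrate the Allee growth model from initial size n0."""
    n = n0
    for _ in range(steps):
        n += dt * r * n * (1 - n / K) * (n / A - 1)
        n = max(n, 0.0)
    return n

print(simulate(15))   # below the threshold: declines toward extinction
print(simulate(30))   # above the threshold: grows toward carrying capacity
```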
34. Navigating optimal treaty-shopping routes using a multiplex network model
- Author
-
Sung Jae Park, Kyu-Min Lee, and Jae-Suk Yang
- Subjects
International tax treaties ,Tax treaty ,Tax avoidance ,Multinational corporations ,Foreign direct investment ,Multiplex networks ,Network analysis ,Centrality ,Tax revenue ,Income tax ,Taxation ,Public policy ,International cooperation ,Algorithms ,Research Article - Abstract
The international tax treaty system is a highly integrated and complex network. In this system, many multinational enterprises (MNEs) explore ways of reducing taxes by choosing optimal detour routes. Treaty abuse by these MNEs causes significant loss of tax revenues for many countries, but there is no systematic way of regulating their actions. However, it may be helpful to find a way of detecting the optimal routes by which MNEs avoid taxes and observe the effects of this behavior. In this paper, we investigate the international tax treaty network system of foreign investment channels based on real data and introduce a novel measure of tax-routing centrality and other centralities via network analysis. Our analysis of tax routing in a multiplex network reveals not only various tax-minimizing routes and their rates, but also new paths which cannot be found by navigating a single network layer. In addition, we identify strongly connected components of the multiplex tax treaty system with minimal tax shopping routes; more than 80 countries are included in this system. This means that there are far more pathways to be observed than can be detected on any given individual single layer. We provide a unified framework for analyzing the international tax treaty system and observing the effects of tax avoidance by MNEs.
- Published
- 2021
35. Fuel shortages during hurricanes: Epidemiological modeling and optimal control
- Author
-
Sirish Namilae, Dahai Liu, Sabique Islam, and Richard J. Prazenica
- Subjects
Cyclonic storms ,Hurricanes ,Fuel shortages ,Gasoline ,Emergency management ,Disaster planning ,Epidemic model ,Optimal control ,Kalman filter ,Estimation ,Resource constraints ,Operations research ,Florida ,Southeastern United States ,Algorithms ,Research Article - Abstract
Hurricanes are powerful agents of destruction with significant socioeconomic impacts. A persistent problem during the large-scale evacuations that hurricanes trigger in the southeastern United States is fuel shortage along the evacuation routes. Computational models can aid in emergency preparedness and help mitigate the impacts of hurricanes. In this paper, we model hurricane fuel shortages using the SIR epidemic model. We utilize crowd-sourced data corresponding to Hurricanes Irma and Florence to parametrize the model. An estimation technique based on the Unscented Kalman filter (UKF) is employed to evaluate the SIR dynamic parameters. Finally, an optimal control approach for refueling, based on a vaccination analogue, is presented to effectively reduce the fuel shortages under a resource constraint. We find the basic reproduction number corresponding to fuel shortages in Miami during Hurricane Irma to be 3.98. Using the control model, we estimated the level of intervention needed to mitigate the fuel-shortage epidemic. For example, our results indicate that for Naples-Fort Myers, affected by Hurricane Irma, a per capita refueling rate of 0.1 for 2.2 days would have reduced the peak fuel shortage from 55% to 48%, and a refueling rate of 0.75 for half a day before landfall would have reduced it to 37%.
- Published
- 2019
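The SIR dynamics behind this result are easy to sketch: with the abstract's basic reproduction number R0 = 3.98, the infected ("short of fuel") fraction peaks at roughly 40% of stations. In the sketch below only R0 comes from the abstract; the recovery rate, initial condition, and step size are assumptions.

```python
# Sketch: Euler-integrated SIR model of fuel-shortage spread.  R0 = 3.98 is
# the abstract's value for Miami during Hurricane Irma; gamma, i0, and dt
# are illustrative assumptions, not the paper's fitted parameters.

def sir_peak(r0=3.98, gamma=0.5, i0=0.001, dt=0.01, t_max=60.0):
    """Peak infected fraction of the standard SIR model."""
    beta = r0 * gamma          # transmission rate implied by R0
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(int(t_max / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s += dt * ds
        i += dt * di
        peak = max(peak, i)
    return peak

print(f"peak fraction of stations short of fuel: {sir_peak():.2f}")
```

The analytic peak for SIR is 1 − (1 + ln R0)/R0 ≈ 0.40 for R0 = 3.98, which the numerical sketch reproduces.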
36. Using meta-predictions to identify experts in the crowd when past performance is unknown
- Author
-
Marcellin Martinie, Piers D. L. Howe, and Tom Wilkening
- Subjects
Forecasting ,Probabilistic forecasts ,Meta-predictions ,Decision making ,Surveys ,Experimental design ,Research design ,Statistical methods ,Leverage (statistics) ,Machine learning ,Algorithms ,Research Article - Abstract
A common approach to improving probabilistic forecasts is to identify and leverage the forecasts from experts in the crowd based on forecasters' performance on prior questions with known outcomes. However, such information is often unavailable to decision-makers on many forecasting problems, and thus it can be difficult to identify and leverage expertise. In the current paper, we propose a novel algorithm for aggregating probabilistic forecasts using forecasters' meta-predictions about what other forecasters will predict. We test the performance of an extremised version of our algorithm against current forecasting approaches in the literature and show that our algorithm significantly outperforms all other approaches on a large collection of 500 binary decision problems varying in five levels of difficulty. The success of our algorithm demonstrates the potential of using meta-predictions to leverage latent expertise in environments where forecasters' expertise cannot otherwise be easily identified.
- Published
- 2020
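The flavor of meta-prediction methods can be illustrated with the related "surprisingly popular" rule of Prelec et al. (2017): endorse the option chosen more often than forecasters predicted it would be. This is not the paper's aggregation algorithm, only a minimal sketch of how meta-predictions carry signal; all numbers are invented.

```python
# Sketch of a related meta-prediction idea, the "surprisingly popular"
# answer: pick the option endorsed more often than forecasters predicted.
# NOT the paper's algorithm; votes and meta-predictions are invented.

def surprisingly_popular(votes, meta_predictions):
    """votes: list of 0/1 answers.
    meta_predictions: each forecaster's predicted fraction voting 1.
    Returns the surprisingly popular answer (1 or 0)."""
    actual_yes = sum(votes) / len(votes)
    predicted_yes = sum(meta_predictions) / len(meta_predictions)
    return 1 if actual_yes > predicted_yes else 0

votes = [1, 0, 0, 1, 1, 0, 0, 0]                   # 37.5% vote "yes"
meta = [0.2, 0.3, 0.1, 0.3, 0.2, 0.3, 0.2, 0.3]    # mean prediction ~24%
print(surprisingly_popular(votes, meta))  # 1: "yes" is surprisingly popular
```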
37. Global and country-specific mainstreaminess measures: Definitions, analysis, and usage for improving personalized music recommendation systems
- Author
-
Bauer, Christine and Schedl, Markus
- Subjects
Music recommendation ,Recommender systems ,Music preferences ,Mainstreaminess ,Personalization ,Country-specific differences ,Culture ,Social media ,Clustering algorithms ,Information retrieval ,Social and information networks ,Algorithms ,Research Article - Abstract
Popularity-based approaches are widely adopted in music recommendation systems, both in industry and research. However, as the popularity distribution of music items typically is a long-tail distribution, popularity-based approaches to music recommendation fall short of satisfying listeners with specialized music tastes. The contribution of this article is three-fold. We provide several quantitative measures describing the proximity of a user's music preference to the music mainstream. We define the measures at two levels: relating a listener's music preferences to the global music preferences of all users, or relating them to the music preferences of the user's country. Moreover, we adopt a distribution-based and a rank-based approach as means to decrease bias towards the head of the long-tail distribution. We analyze differences between countries in terms of their level of mainstreaminess, uncover both positive and negative outliers (substantially higher and lower country-specific popularity, respectively, compared to the global mainstream), and investigate differences between countries in terms of listening preferences related to popular music artists. We use the standardized LFM-1b dataset, from which we analyze about 8 million listening events shared by about 53,000 users (from 47 countries) of the music streaming platform Last.fm. We show that there are substantial country-specific differences in listeners' music consumption behavior with respect to the most popular artists listened to. We conduct rating prediction experiments in which we tailor recommendations to a user's level of preference for the music mainstream using the six proposed mainstreaminess measures. Results suggest that, in terms of rating prediction accuracy, each of the presented mainstreaminess definitions has its merits. (PLOS ONE 14(6), e0217389)
- Published
- 2018
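One way to build a rank-based mainstreaminess score of the kind the abstract mentions is a rank correlation (Kendall's tau) between a user's artist ranking and the global ranking. This is an illustration in the spirit of the paper's rank-based measures, not the exact LFM-1b definition; the rankings below are invented.

```python
from itertools import combinations

# Sketch of a rank-based "mainstreaminess" score: Kendall's tau between a
# user's artist play-count ranking and the global ranking, over artists
# the user has played.  Illustrative only; not the paper's exact measure.

def kendall_tau(rank_a, rank_b):
    """Kendall's tau-a for two dicts mapping item -> rank (1 = top)."""
    items = sorted(set(rank_a) & set(rank_b))
    concordant = discordant = 0
    for x, y in combinations(items, 2):
        s = (rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n = len(items)
    return (concordant - discordant) / (n * (n - 1) / 2)

global_rank = {"artist1": 1, "artist2": 2, "artist3": 3, "artist4": 4}
user_rank = {"artist1": 2, "artist2": 1, "artist3": 3, "artist4": 4}
print(kendall_tau(user_rank, global_rank))  # ~0.667: fairly mainstream taste
```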
38. Ancestry-specific recent effective population size in the Americas
- Author
-
Ramon A. Durazo-Arvizu, Sharon R. Browning, Robert C. Kaplan, Neil Schneiderman, Cathy C. Laurie, Martha L. Daviglus, and Brian L. Browning
- Subjects
Effective population size ,Population genetics ,Identity by descent ,Haplotypes ,Local ancestry ,Admixed populations ,Genetic association studies ,Population growth ,Bottleneck ,Immigration ,Demography ,Americas ,Research Article - Abstract
Populations change in size over time due to factors such as population growth, migration, bottleneck events, natural disasters, and disease. The historical effective size of a population affects the power and resolution of genetic association studies. For admixed populations, it is not only the overall effective population size that is of interest, but also the effective sizes of the component ancestral populations. We use identity by descent and local ancestry inferred from genome-wide genetic data to estimate overall and ancestry-specific effective population size during the past hundred generations for nine admixed American populations from the Hispanic Community Health Study/Study of Latinos, and for African-American and European-American populations from two US cities. In these populations, the estimated pre-admixture effective sizes of the ancestral populations vary by sampled population, suggesting that the ancestors of different sampled populations were drawn from different sub-populations. In addition, we estimate that overall effective population sizes dropped substantially in the generations immediately after the commencement of European and African immigration, reaching a minimum around 12 generations ago, but rebounded within a small number of generations afterwards. Of the populations that we considered, the population of individuals originating from Puerto Rico has the smallest bottleneck size of one thousand, while the Pittsburgh African-American population has the largest bottleneck size of two hundred thousand., Author summary Using genome-wide genetic data on several hundred individuals sampled from a population, we can estimate the current effective size of the population and the changes in effective size that have occurred over the past hundred generations. Many populations in the Americas are admixed, having ancestry from Europe, Africa, and the Americas. 
In such cases, one can learn not only about the effective population size history of the admixed population since admixture, but also about the effective population size histories of the contributing ancestral populations. In this paper we develop methodology for estimating past effective population size and analyze data from Hispanic, African-American, and European-American populations resident in the United States. We observe differences between populations in their historical effective sizes. These differences are useful for understanding differences in disease incidence between populations and for identifying populations that will maximize power in genetic association studies.
- Published
- 2018
39. The impact of measurement differences on cross-country depression prevalence estimates: A latent transition analysis
- Author
-
Theresa S. Betancourt, Katherine E. Masyn, Pamela Scorza, and Joshua A. Salomon
- Subjects
Depression ,Prevalence ,Latent class analysis ,Latent transition analysis ,Psychiatric epidemiology ,Mental health ,Health surveys ,World Health Organization ,CIDI ,Global health ,Disease burden ,Epidemiologic methods ,Research Article - Abstract
Background Depression is currently the second largest contributor to non-fatal disease burden globally. For that reason, economic evaluations are increasingly being conducted using data from depression prevalence estimates to analyze return on investments for services that target mental health. Psychiatric epidemiology studies have reported large cross-national differences in the prevalence of depression. These differences may impact the cost-effectiveness assessments of mental health interventions, thereby affecting decisions regarding government and multilateral investment in mental health services. Some portion of the differences in prevalence estimates across countries may be due to true discrepancies in depression prevalence, resulting from differential levels of risk in environmental and demographic factors. However, some portion of those differences may reflect non-invariance in the way standard tools measure depression across countries. This paper attempts to discern the extent to which measurement differences are responsible for reported differences in the prevalence of depression across countries. Methods and findings This analysis uses data from the World Mental Health Surveys, a coordinated series of psychiatric epidemiology studies in 27 countries using multistage household probability samples to assess prevalence and correlates of mental disorders. Data in the current study include responses to the depression module of the World Mental Health Composite International Diagnostic Interview (CIDI) in four countries: two high-income Western countries, the United States (n = 20,015) and New Zealand (n = 12,992); an upper-middle-income sub-Saharan African country, South Africa (n = 4,351); and a lower-middle-income sub-Saharan African country, Nigeria (n = 6,752).
Latent class analysis, a type of finite mixture modeling, was used to categorize respondents into underlying categories based on the variation in their responses to questions in each of three sequential parts of the CIDI depression module: 1) The initial screening items, 2) Additional duration and severity exclusion criteria, and 3) The core symptom questions. After each of these parts, exclusion criteria expel respondents from the remainder of the diagnostic interview, rendering a diagnosis of "not depressed". Latent class models were fit to each of the three parts in each of the four countries, and model fit was assessed using overall chi-square values and Pearson standardized residuals. Latent transition analysis was then applied in order to model participants' progression through the CIDI depression module. Proportion of individuals falling into each latent class and probabilities of transitioning into subsequent classes were used to estimate the percentage in each country that ultimately fell into the more symptomatic class, i.e. classified as "depressed". This latent variable design allows for a non-zero probability that individuals were incorrectly excluded from or retained in the diagnostic interview at any of the three exclusion points and therefore incorrectly diagnosed. Prevalence estimates based on the latent transition model reversed the order of depression prevalence across countries. Based on the latent transition model in this analysis, Nigeria has the highest prevalence (21.6%), followed by New Zealand (17.4%), then South Africa (15.0%), and finally the US (12.5%). That is compared to the estimates in the World Mental Health Surveys that do not allow for measurement differences, in which Nigeria had by far the lowest prevalence (3.1%), followed by South Africa (9.8%), then the United States (13.5%) and finally New Zealand (17.8%). 
Individuals endorsing the screening questions in Nigeria and South Africa were more likely to endorse more severe depression symptomatology later in the module (i.e. they had higher transition probabilities), suggesting that individuals in the two Western countries may be more likely to endorse screening questions even when they don't have as severe symptoms. These differences narrow the range of depression prevalence between countries from 14 percentage points in the original estimates to 6 percentage points in the estimates that account for measurement differences. Conclusions These data suggest fewer differences in cross-national prevalence of depression than previous estimates. Given that prevalence data are used to support key decisions regarding resource-allocation for mental health services, more critical attention should be paid to differences in the functioning of measurement across contexts and the impact these differences have on prevalence estimates. Future research should include qualitative methods as well as external measures of disease severity, such as impairment, to assess how the latent classes predict these external variables, to better understand the way that standard tools estimate depression prevalence across contexts. Adjustments could then be made to prevalence estimates used in cost-effectiveness analyses.
- Published
- 2018
40. Windowed persistent homology: A topological signal processing algorithm applied to clinical obesity data
- Author
-
Kayvan Najarian, Amy E. Rothberg, Harm Derksen, Craig Biwer, Heidi B. IglayReger, and Charles F. Burant
- Subjects
Persistent homology ,Topological signal processing ,Hausdorff distance ,Obesity ,Overweight ,Weight loss ,Accelerometers ,Signal processing ,Machine learning ,Health economics ,Michigan ,Algorithms ,Research Article - Abstract
Overweight and obesity are highly prevalent in the population of the United States, affecting roughly 2/3 of Americans. These diseases, along with their associated conditions, are a major burden on the healthcare industry in terms of both dollars spent and effort expended. Volitional weight loss is attempted by many, but weight regain is common. The ability to predict which patients will lose weight and successfully maintain the loss versus those prone to regain weight would help ease this burden by allowing clinicians the ability to skip treatments likely to be ineffective. In this paper we introduce a new windowed approach to the persistent homology signal processing algorithm that, when paired with a modified, semimetric version of the Hausdorff distance, can differentiate the two groups where other commonly used methods fail. The novel approach is tested on accelerometer data gathered from an ongoing study at the University of Michigan. While most standard approaches to signal processing show no difference between the two groups, windowed persistent homology and the modified Hausdorff semimetric show a clear separation. This has significant implications for clinical decision making and patient care.
- Published
- 2017
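One common semimetric variant of the Hausdorff distance is the Dubuisson-Jain "modified Hausdorff distance", which replaces the max over points with a mean and is more robust to outliers. The paper uses a modified, semimetric Hausdorff variant; its exact definition may differ from this sketch, and the point sets below are invented.

```python
# Sketch: the Dubuisson-Jain modified Hausdorff distance, a common
# semimetric Hausdorff variant.  Illustrative only; the paper's modified
# semimetric may be defined differently.  Points are invented 2-D pairs.

def _mean_min_dist(a, b):
    """Mean over points in a of the distance to the nearest point in b."""
    return sum(
        min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5 for qx, qy in b)
        for px, py in a
    ) / len(a)

def modified_hausdorff(a, b):
    """Symmetrize by taking the larger of the two directed mean distances."""
    return max(_mean_min_dist(a, b), _mean_min_dist(b, a))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 1.0), (1.0, 0.0), (2.0, 0.0)]
print(modified_hausdorff(A, B))  # ~0.667 (the classic Hausdorff gives 1.0)
```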
41. Using Tensor Completion Method to Achieving Better Coverage of Traffic State Estimation from Sparse Floating Car Data
- Author
-
Yang Cheng, Jian Zhang, Huachun Tan, Li Song, and Bin Ran
- Subjects
Traffic state estimation ,Floating car data ,Tensor completion ,Missing data imputation ,Intelligent transportation systems ,Transportation infrastructure ,Computer simulation ,Roads ,Wisconsin ,Algorithms ,Research Article - Abstract
Traffic state estimation from a floating car system is a challenging problem. Because of the low penetration rate and random distribution of probe vehicles, the available floating car samples typically cover only part of the space-time points of the road network. To obtain wide-coverage traffic state from the floating car system, many methods have been proposed to estimate the traffic state on the uncovered links; however, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is recast as a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic states. A tensor is constructed to model the traffic state, in which observed entries are derived directly from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor represents the spatial and temporal correlations of traffic data and encodes the multi-way properties of the traffic state. The advantage of the proposed approach is that it fully mines and utilizes the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well-calibrated simulation network. Experimental results demonstrate that the proposed approach yields reliable traffic state estimates from very sparse floating car data, particularly when the floating car penetration rate is below 1%.
- Published
- 2016
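The core imputation idea can be sketched in two dimensions: fit a low-rank model to the observed entries and read off the missing ones. Below is a rank-1 alternating-least-squares completion of a small (link × time) speed matrix; the paper's actual framework is full tensor completion, and the speed data here are invented.

```python
# Sketch: the missing-entry imputation idea behind tensor completion,
# reduced to rank-1 alternating least squares on a (link x time) speed
# matrix.  Illustrative only; the paper uses a full tensor framework.

def rank1_complete(M, iters=100):
    """M: list of rows; None marks unobserved entries.
    Returns the rank-1 estimate u[i]*v[j] fitted to observed entries."""
    rows, cols = len(M), len(M[0])
    u, v = [1.0] * rows, [1.0] * cols
    for _ in range(iters):
        for i in range(rows):  # least-squares update of u with v fixed
            num = sum(v[j] * M[i][j] for j in range(cols) if M[i][j] is not None)
            den = sum(v[j] ** 2 for j in range(cols) if M[i][j] is not None)
            u[i] = num / den
        for j in range(cols):  # least-squares update of v with u fixed
            num = sum(u[i] * M[i][j] for i in range(rows) if M[i][j] is not None)
            den = sum(u[i] ** 2 for i in range(rows) if M[i][j] is not None)
            v[j] = num / den
    return [[u[i] * v[j] for j in range(cols)] for i in range(rows)]

# Speeds (km/h) on 3 links at 4 times; None = no floating car observed.
speeds = [
    [60.0, 30.0, None, 60.0],
    [40.0, 20.0, 10.0, None],
    [None, 10.0, 5.0, 20.0],
]
est = rank1_complete(speeds)
print(round(est[0][2], 1))  # imputed speed for link 0 at time 2
```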
42. Accurate and fast path computation on large urban road networks: A general approach
- Author
-
Meng Li, Xiaolei Li, and Qing Song
- Subjects
Path computation ,Urban road networks ,Graph partitioning ,Hierarchical clustering ,Heuristic algorithms ,Pruning ,Routing ,Navigation ,Transportation infrastructure ,New York City ,Algorithms ,Research Article - Abstract
Accurate and fast path computation is essential for applications such as onboard navigation systems and traffic network routing. While a number of heuristic algorithms have been developed in recent years for faster path queries, their accuracy often falls far short of satisfactory. In this paper, we first develop an agglomerative graph partitioning method for generating highly balanced traverse-distance partitions, and we construct a three-level graph model based on the partition scheme for structuring the urban road network. Then, we propose a new hierarchical path computation algorithm that benefits from the hierarchical graph model and utilizes a region pruning strategy to significantly reduce the search space without compromising accuracy. Finally, we present a detailed experimental evaluation on the real urban road network of New York City; the results demonstrate the effectiveness of the proposed approach in generating optimal paths quickly and facilitating real-time routing applications.
- Published
- 2018
43. Relay discovery and selection for large-scale P2P streaming
- Author
-
Angela Yunxian Wang, Chengwei Zhang, and Xiaojun Hei
- Subjects
Peer-to-peer networks ,P2P streaming ,Relay discovery ,Relay selection ,Distributed hash tables ,Internet coordinate systems ,Network measurement ,Computer networks ,Key generation ,Algorithms ,Research Article - Abstract
In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we study the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and then to select one relay with good performance. The network location can be measured directly or indirectly, with tradeoffs between timeliness, overhead, and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers' network locations, and that methods based on purely indirect measurements cannot lead to a good relay selection. We also demonstrate, using three publicly available RTT data sets, that there exists significant error amplification in the commonly used "best-out-of-K" selection methodology. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow the field to a small number of high-quality relay candidates, and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using a Distributed Hash Table (DHT). When the DHT is constructed, the node keys carry the location information and are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn, and message costs.
- Published
- 2017
44. Reliable Facility Location Problem with Facility Protection
- Author
-
Cheng Zhu, Luohao Tang, Jianmai Shi, Weiming Zhang, and Zaili Lin
- Subjects
Operations Research, Computer Science, Medicine, Transportation, Geographical Locations, Reliability Engineering, Local Search (Optimization), Hedge (Finance), Multidisciplinary, Applied Mathematics, Simulation and Modeling, Reliability, Outsourcing, Facility Location Problem, Research Design, Lagrangian Relaxation, Physical Sciences, Costs and Cost Analysis, Engineering and Technology, Management Engineering, Algorithms, Research Article, Optimization, China, Asia, Facility Information Model, Reliability (Computer Networking), Constraint Relaxation, Research and Analysis Methods, Security Measures, Probability, Models (Theoretical), United States, Facility Design and Construction, Organizational Case Studies, North America, People and Places, Mathematics - Abstract
This paper studies a reliable facility location problem with facility protection that aims to hedge against random facility disruptions by both strategically protecting some facilities and using backup facilities for the demands. An Integer Programming model is proposed for this problem, in which the failure probabilities of facilities are site-specific. A solution approach combining Lagrangian Relaxation and local search is proposed and is demonstrated to be both effective and efficient based on computational experiments on random numerical examples with 49, 88, 150 and 263 nodes in the network. A real case study for a 100-city network in Hunan province, China, is presented, based on which the properties of the model are discussed and some managerial insights are analyzed.
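The local-search component mentioned in this abstract can be sketched on a toy instance. This is a simplified illustration under assumptions of my own: the instance data are simulated, each demand uses its nearest open site as primary and its second-nearest as backup when the primary fails (with a site-specific failure probability), and the paper's protection decisions and Lagrangian Relaxation bounds are omitted.

```python
import itertools
import random

random.seed(1)

# Toy instance (all data simulated): candidate sites with an opening cost
# and a site-specific failure probability, plus demand points in the unit
# square.
sites = [{"xy": (random.random(), random.random()),
          "fixed": 5.0,
          "q": random.uniform(0.05, 0.3)} for _ in range(8)]
demands = [(random.random(), random.random()) for _ in range(30)]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def expected_cost(open_ids):
    """Fixed costs plus expected travel cost: each demand is served by its
    nearest open site, falling back to the second-nearest if the primary
    fails (probability q of the primary site)."""
    if len(open_ids) < 2:
        return float("inf")  # need at least a primary and a backup
    total = sum(sites[i]["fixed"] for i in open_ids)
    for d in demands:
        primary, backup = sorted(open_ids,
                                 key=lambda i: dist(d, sites[i]["xy"]))[:2]
        q = sites[primary]["q"]
        total += ((1 - q) * dist(d, sites[primary]["xy"])
                  + q * dist(d, sites[backup]["xy"]))
    return total

def local_search(open_ids):
    """Swap-based local search: exchange one open site for one closed site
    whenever that lowers the expected cost, until no swap improves."""
    current = set(open_ids)
    improved = True
    while improved:
        improved = False
        for out_i, in_i in itertools.product(list(current), range(len(sites))):
            if in_i in current:
                continue
            trial = (current - {out_i}) | {in_i}
            if expected_cost(trial) < expected_cost(current):
                current, improved = trial, True
                break
    return current

solution = local_search({0, 1, 2})
```

In the paper this kind of neighborhood move would be guided by Lagrangian bounds rather than run stand-alone; the sketch only shows how site-specific failure probabilities enter the objective that the search improves.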
- Published
- 2016