21 results
Search Results
2. Dynamics in the Fitness-Income plane: Brazilian states vs World countries.
- Author
-
Operti, Felipe G., Pugliese, Emanuele, Andrade Jr., José S., Pietronero, Luciano, and Gabrielli, Andrea
- Subjects
ALGORITHMS, PHYSICAL fitness, GROSS domestic product, ECONOMICS - Abstract
In this paper we introduce a novel algorithm, called Exogenous Fitness, to calculate the Fitness of subnational entities, and we apply it to the states of Brazil. In the last decade, several indices were introduced to measure the competitiveness of countries by looking at the complexity of their export baskets. Tacchella et al (2012) developed a non-monetary metric called Fitness. In this paper, after an overview of Brazil as a whole and a comparison with the other BRIC countries, we introduce a new methodology based on the Fitness algorithm, called Exogenous Fitness. Combining the results with the Gross Domestic Product per capita (GDPp), we look at the dynamics of the Brazilian states in the Fitness-Income plane. Two regimes are distinguishable: one with high predictability and the other with low predictability, showing a deep analogy with the heterogeneous dynamics of the World countries. Furthermore, we compare the ranking of the Brazilian states according to the Exogenous Fitness with the rankings obtained through two other techniques, namely Endogenous Fitness and the Economic Complexity Index. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
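The Fitness metric referenced in the abstract above (Tacchella et al. 2012) is a simple nonlinear iteration on a binary country-product export matrix. A minimal sketch with an illustrative toy matrix; the variable names and the normalize-by-mean step are one common convention, not necessarily the exact variant used in the paper:

```python
import numpy as np

def fitness_complexity(M, n_iter=100):
    """Iterate the Fitness-Complexity map on a binary country-product
    matrix M (rows: countries, columns: products)."""
    F = np.ones(M.shape[0])               # country Fitness
    Q = np.ones(M.shape[1])               # product Complexity
    for _ in range(n_iter):
        F_new = M @ Q                     # F_c ~ sum_p M_cp * Q_p
        Q_new = 1.0 / (M.T @ (1.0 / F))   # Q_p ~ 1 / sum_c M_cp / F_c
        F = F_new / F_new.mean()          # normalize at every step
        Q = Q_new / Q_new.mean()
    return F, Q

# toy export matrix: country 0 exports everything, country 2 exports only
# the ubiquitous product 0
M = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 0]], dtype=float)
F, Q = fitness_complexity(M)
```

On this toy matrix the diversified country ends up fittest, and the product exported even by the least fit country ends up least complex, which is the qualitative behavior the metric is designed to capture.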
3. Reliable Facility Location Problem with Facility Protection.
- Author
-
Tang, Luohao, Zhu, Cheng, Lin, Zaili, Shi, Jianmai, and Zhang, Weiming
- Subjects
LOCATION problems (Programming), FACILITY location problems, INTEGER programming, LAGRANGIAN functions, APPLIED mathematics - Abstract
This paper studies a reliable facility location problem with facility protection that aims to hedge against random facility disruptions by both strategically protecting some facilities and using backup facilities for the demands. An Integer Programming model is proposed for this problem, in which the failure probabilities of facilities are site-specific. A solution approach combining Lagrangian Relaxation and local search is proposed and is demonstrated to be both effective and efficient based on computational experiments on random numerical examples with 49, 88, 150 and 263 nodes in the network. A real case study for a 100-city network in Hunan province, China, is presented, based on which the properties of the model are discussed and some managerial insights are analyzed. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
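To make the model above concrete, here is a toy sketch of the expected service cost with a (primary, backup) facility pair under site-specific failure probabilities. The data, the penalty cost, and the exhaustive search (standing in for the paper's Lagrangian relaxation plus local search) are all hypothetical:

```python
import itertools

# hypothetical toy instance: three candidate facilities with site-specific
# failure probabilities, opening costs, and demand-to-facility distances
fail = {0: 0.10, 1: 0.30, 2: 0.05}
open_cost = {0: 10.0, 1: 4.0, 2: 12.0}
dist = {'a': {0: 2.0, 1: 1.0, 2: 5.0},
        'b': {0: 4.0, 1: 6.0, 2: 1.0}}
PENALTY = 50.0  # cost of losing a demand when primary and backup both fail

def expected_cost(open_set):
    """Opening cost plus, per demand, the best expected cost over ordered
    (primary, backup) pairs of open facilities."""
    total = sum(open_cost[j] for j in open_set)
    for d in dist:
        if len(open_set) == 1:
            j = next(iter(open_set))
            best = (1 - fail[j]) * dist[d][j] + fail[j] * PENALTY
        else:
            best = min((1 - fail[j]) * dist[d][j]
                       + fail[j] * ((1 - fail[k]) * dist[d][k] + fail[k] * PENALTY)
                       for j, k in itertools.permutations(open_set, 2))
        total += best
    return total

# exhaustive search over open sets stands in for the paper's Lagrangian
# relaxation plus local search on this tiny instance
best_set = min((s for r in (1, 2, 3) for s in itertools.combinations(fail, r)),
               key=lambda s: expected_cost(set(s)))
```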
4. Using Tensor Completion Method to Achieving Better Coverage of Traffic State Estimation from Sparse Floating Car Data.
- Author
-
Ran, Bin, Song, Li, Zhang, Jian, Cheng, Yang, and Tan, Huachun
- Subjects
TRAFFIC engineering, ESTIMATION theory, PROBLEM solving, STATISTICAL correlation, MISSING data (Statistics) - Abstract
Traffic state estimation from the floating car system is a challenging problem. The low penetration rate and random distribution of floating cars mean that the available samples usually cover only part of the space-time points of the road network. To obtain a wide-ranging picture of traffic state from the floating car system, many methods have been proposed to estimate the traffic state of uncovered links. However, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is transformed into a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic states. A tensor is constructed to model traffic state, in which observed entries are derived directly from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent the spatial and temporal correlations of traffic data and encode the multi-way properties of traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of traffic state. We tested the proposed approach on a well-calibrated simulation network. Experimental results demonstrate that the proposed approach yields reliable traffic state estimation from very sparse floating car data, particularly when the floating car penetration rate is below 1%. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
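A minimal illustration of the tensor completion idea above: a 3-way "traffic" tensor with missing entries is imputed by alternating a low-rank projection of its mode-1 unfolding with restoration of the observed entries. This is a simplified stand-in for the paper's framework, run on synthetic rank-1 data:

```python
import numpy as np

rng = np.random.default_rng(0)

def complete(T_obs, mask, rank=1, n_iter=300):
    """Impute missing entries by alternating a low-rank projection of the
    mode-1 unfolding with restoration of the observed entries."""
    X = T_obs.copy()
    X[~mask] = T_obs[mask].mean()       # initialize missing entries
    I, J, K = X.shape
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X.reshape(I, J * K), full_matrices=False)
        X = ((U[:, :rank] * s[:rank]) @ Vt[:rank]).reshape(I, J, K)
        X[mask] = T_obs[mask]           # keep observed data fixed
    return X

# synthetic rank-1 "traffic" tensor (link x day x time-of-day), ~60% observed
u, v, w = (rng.random(n) + 0.5 for n in (4, 5, 6))
T = np.einsum('i,j,k->ijk', u, v, w)
mask = rng.random(T.shape) < 0.6
T_hat = complete(T * mask, mask)
err = np.abs(T_hat - T)[~mask].mean()
```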
5. Best Match: New relevance search for PubMed.
- Author
-
Fiorini, Nicolas, Canese, Kathi, Starchenko, Grisha, Kireev, Evgeny, Kim, Won, Miller, Vadim, Osipov, Maxim, Kholodov, Michael, Ismagilov, Rafis, Mohan, Sunil, Ostell, James, and Lu, Zhiyong
- Subjects
SEARCH engines, SEARCH algorithms, INTERNET searching, DATA mining, MEDICAL literature - Abstract
PubMed is a free search engine for biomedical literature accessed by millions of users from around the world each day. With the rapid growth of biomedical literature—about two articles are added every minute on average—finding and retrieving the most relevant papers for a given query is increasingly challenging. We present Best Match, a new relevance search algorithm for PubMed that leverages the intelligence of our users and cutting-edge machine-learning technology as an alternative to the traditional date sort order. The Best Match algorithm is trained on past user searches using dozens of relevance-ranking signals (factors), the most important being the past usage of an article, publication date, relevance score, and type of article. This new algorithm demonstrates state-of-the-art retrieval performance in benchmarking experiments as well as an improved user experience in real-world testing (over 20% increase in user click-through rate). Since its deployment in June 2017, we have observed a significant increase (60%) in PubMed searches with relevance sort order: it now assists millions of PubMed searches each week. In this work, we hope to increase the awareness and transparency of this new relevance sort option for PubMed users, enabling them to retrieve information more effectively. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
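The core idea of ranking by a learned combination of signals instead of by date can be sketched in a few lines. The weights and field names below are hypothetical placeholders, not the actual Best Match model (which is trained on logged user searches):

```python
from dataclasses import dataclass

# hypothetical signal weights standing in for the learned ranking model
WEIGHTS = {'usage': 0.5, 'recency': 0.2, 'text_score': 0.3}

@dataclass
class Article:
    pmid: str
    usage: float       # normalized past-usage (click) signal
    recency: float     # normalized publication-date signal
    text_score: float  # normalized term-matching relevance score

def best_match_rank(articles):
    """Re-rank candidates by a weighted combination of signals instead of
    the traditional reverse-chronological sort."""
    def score(a):
        return sum(w * getattr(a, name) for name, w in WEIGHTS.items())
    return sorted(articles, key=score, reverse=True)

hits = [Article('1', 0.1, 0.9, 0.2),
        Article('2', 0.9, 0.2, 0.8),
        Article('3', 0.4, 0.5, 0.5)]
ranked = best_match_rank(hits)
```

Note how article '2', heavily used but older, outranks the newest article '1', which is exactly the behavior a date sort cannot produce.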
6. Capturing the influence of geopolitical ties from Wikipedia with reduced Google matrix.
- Author
-
El Zant, Samer, Jaffrès-Runser, Katia, and Shepelyansky, Dima L.
- Subjects
SOCIOCULTURAL factors, GEOPOLITICS, POWER (Social sciences), MARKOV processes - Abstract
Interactions between countries originate from diverse aspects such as geographic proximity, trade, socio-cultural habits, language, religions, etc. Geopolitics studies the influence of a country’s geographic space on its political power and its relationships with other countries. This work reveals the potential of Wikipedia mining for geopolitical study. Indeed, Wikipedia offers solid knowledge and strong correlations among countries by linking web pages together for different types of information (e.g. economical, historical, political, and many others). The major finding of this paper is to show that meaningful results on the influence of country ties can be extracted from the hyperlinked structure of Wikipedia. We leverage a novel stochastic matrix representation of Markov chains of complex directed networks called the reduced Google matrix theory. For a selected small set of nodes, the reduced Google matrix concentrates the direct and indirect links of the million-node Wikipedia network into a small Perron-Frobenius matrix that preserves the PageRank probabilities of the global Wikipedia network. We perform a novel sensitivity analysis that leverages this reduced Google matrix to characterize the influence of relationships between countries from the global network. We apply this analysis to two chosen sets of countries (i.e. the set of 27 European Union countries and a set of 40 top worldwide countries). We show that our sensitivity analysis easily extracts very meaningful geopolitical information from five different Wikipedia editions (English, Arabic, Russian, French and German). [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
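The reduced Google matrix named in the abstract has a closed form: for a kept node set r with complement s, G_R = G_rr + G_rs (I - G_ss)^(-1) G_sr, which folds all indirect paths through s into effective direct links among the selected nodes. A small numpy sketch on a toy directed graph (the 4-node network is illustrative):

```python
import numpy as np

def google_matrix(A, alpha=0.85):
    """Column-stochastic Google matrix of adjacency A (A[i, j] = 1 if j links to i)."""
    n = A.shape[0]
    col = A.sum(axis=0)
    S = A / np.where(col > 0, col, 1)
    S[:, col == 0] = 1.0 / n                     # dangling nodes spread uniformly
    return alpha * S + (1 - alpha) / n

def reduced_google_matrix(G, keep):
    """G_R = G_rr + G_rs (I - G_ss)^-1 G_sr: direct links among the kept
    nodes plus all indirect paths through the discarded nodes."""
    rest = [i for i in range(G.shape[0]) if i not in keep]
    Grr, Gss = G[np.ix_(keep, keep)], G[np.ix_(rest, rest)]
    Grs, Gsr = G[np.ix_(keep, rest)], G[np.ix_(rest, keep)]
    return Grr + Grs @ np.linalg.inv(np.eye(len(rest)) - Gss) @ Gsr

# toy 4-node directed network; keep nodes 0 and 1 ("countries of interest")
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
GR = reduced_google_matrix(google_matrix(A), keep=[0, 1])
```

A useful sanity check: the reduced matrix is itself column-stochastic, since all probability mass that scatters through the discarded nodes eventually returns to the kept set.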
7. Accurate and fast path computation on large urban road networks: A general approach.
- Author
-
Song, Qing, Li, Meng, and Li, Xiaolei
- Subjects
TRANSPORTATION, TRAFFIC engineering, ROADS, NAVIGATION, ALGORITHMS - Abstract
Accurate and fast path computation is essential for applications such as onboard navigation systems and traffic network routing. While a number of heuristic algorithms have been developed in the past few years for faster path queries, their accuracy is often far from satisfactory. In this paper, we first develop an agglomerative graph partitioning method for generating highly balanced traverse-distance partitions, and we construct a three-level graph model based on the graph partition scheme for structuring the urban road network. Then, we propose a new hierarchical path computation algorithm, which benefits from the hierarchical graph model and utilizes a region pruning strategy to significantly reduce the search space without compromising accuracy. Finally, we present a detailed experimental evaluation on the real urban road network of New York City, and the experimental results demonstrate the effectiveness of the proposed approach in generating optimal paths quickly and facilitating real-time routing applications. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
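The region-pruning idea above can be illustrated with a plain Dijkstra search that optionally ignores nodes outside an allowed set. The toy graph and the pruning hook are illustrative, not the paper's three-level hierarchy:

```python
import heapq

def dijkstra(graph, src, dst, allowed=None):
    """Shortest path by Dijkstra; if `allowed` is given, nodes outside it
    are skipped, mimicking a search restricted to promising partitions."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            if allowed is not None and v != dst and v not in allowed:
                continue                  # region pruning
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:                    # walk predecessors back to the source
        path.append(node)
        node = prev[node]
    return dist[dst], [src] + path[::-1]

# toy road network: node -> list of (neighbor, travel time)
g = {'a': [('b', 1), ('c', 4)], 'b': [('c', 1), ('d', 5)], 'c': [('d', 1)]}
cost, route = dijkstra(g, 'a', 'd')
```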
8. An efficient General Transit Feed Specification (GTFS) enabled algorithm for dynamic transit accessibility analysis.
- Author
-
Fayyaz S., S. Kiavash, Liu, Xiaoyue Cathy, and Zhang, Guohui
- Subjects
METROPOLITAN areas, PUBLIC transit, TRAVEL time (Traffic engineering), POPULATION density, POPULATION biology - Abstract
The social functions of urbanized areas are highly dependent on and supported by convenient access to public transportation systems, particularly for less privileged populations with restrained auto ownership. To accurately evaluate public transit accessibility, it is critical to capture the spatiotemporal variation of transit services. This can be achieved by measuring the shortest paths or minimum travel time between origin-destination (OD) pairs at each time-of-day (e.g. every minute). In recent years, General Transit Feed Specification (GTFS) data has been gaining popularity for between-station travel time estimation due to its interoperability in spatiotemporal analytics. Many software packages, such as ArcGIS, have developed toolboxes to enable travel time estimation with GTFS. They perform reasonably well in calculating travel time between OD pairs for a specific time-of-day (e.g. 8:00 AM), yet can become computationally inefficient and impractical as the data dimensions increase (e.g. all times-of-day and large networks). In this paper, we introduce a new algorithm that is computationally elegant and mathematically efficient to address this issue. An open-source toolbox written in C++ is developed to implement the algorithm. We implemented the algorithm on the City of St. George’s transit network to showcase the accessibility analysis enabled by the toolbox. The experimental evidence shows a significant reduction in computational time. The proposed algorithm and toolbox are easily transferable to other transit networks, allowing transit agencies and researchers to perform high-resolution transit performance analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
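The quantity at the heart of the abstract above, minimum door-to-door travel time at every departure minute, can be sketched for a single OD pair. The trip list below is a hypothetical, heavily simplified stand-in for GTFS stop_times records:

```python
# hypothetical stand-in for GTFS stop_times: for one origin-destination
# pair, (departure minute at origin, arrival minute at destination) per trip
trips = [(480, 495), (490, 502), (505, 515), (520, 538)]

def travel_time_profile(trips, start, end):
    """Minimum door-to-door time (waiting plus riding) for every departure
    minute in [start, end]; None where no later trip exists."""
    profile = {}
    for t in range(start, end + 1):
        feasible = [arr - t for dep, arr in trips if dep >= t]
        profile[t] = min(feasible) if feasible else None
    return profile

profile = travel_time_profile(trips, 480, 520)
```

Missing the 8:00 departure by one minute raises the door-to-door time from 15 to 21 minutes, which is exactly the time-of-day sensitivity the paper's minute-resolution analysis is meant to expose.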
9. Windowed persistent homology: A topological signal processing algorithm applied to clinical obesity data.
- Author
-
Biwer, Craig, Rothberg, Amy, IglayReger, Heidi, Derksen, Harm, Burant, Charles F., and Najarian, Kayvan
- Subjects
OBESITY, HOMOLOGY theory, SIGNAL processing, ALGORITHMS, WEIGHT loss, WEIGHT gain - Abstract
Overweight and obesity are highly prevalent in the population of the United States, affecting roughly 2/3 of Americans. These diseases, along with their associated conditions, are a major burden on the healthcare industry in terms of both dollars spent and effort expended. Volitional weight loss is attempted by many, but weight regain is common. The ability to predict which patients will lose weight and successfully maintain the loss versus those prone to regain weight would help ease this burden by allowing clinicians to skip treatments likely to be ineffective. In this paper we introduce a new windowed approach to the persistent homology signal processing algorithm that, when paired with a modified, semimetric version of the Hausdorff distance, can differentiate the two groups where other commonly used methods fail. The novel approach is tested on accelerometer data gathered from an ongoing study at the University of Michigan. While most standard approaches to signal processing show no difference between the two groups, windowed persistent homology and the modified Hausdorff semimetric show a clear separation. This has significant implications for clinical decision making and patient care. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
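The modified Hausdorff distance mentioned above is, in the common Dubuisson-Jain form, a semimetric: symmetric and zero only for identical point sets, but without the triangle inequality. A numpy sketch (the paper's exact modification may differ, and the persistence diagrams it would compare come from a persistent homology library, not shown here):

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance (Dubuisson & Jain): the max of the two
    mean nearest-neighbor distances between point sets A and B."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return max(D.min(axis=1).mean(), D.min(axis=0).mean())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])
d = modified_hausdorff(A, B)   # B is A shifted up by 1
```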
10. Relay discovery and selection for large-scale P2P streaming.
- Author
-
Zhang, Chengwei, Wang, Angela Yunxian, and Hei, Xiaojun
- Subjects
PEER-to-peer architecture (Computer networks), ERROR analysis in mathematics, ESTIMATION theory, HASHING, NUMERICAL analysis - Abstract
In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we study the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, discovering a sufficiently large relay candidate set and then selecting a well-performing relay incurs significant communication and computation cost. The network location can be measured directly or indirectly, with tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers’ network location, and that methods based on purely indirect measurements cannot lead to a good relay selection. We also demonstrate, using three publicly available RTT data sets, that there exists significant error amplification in the commonly used “best-out-of-K” selection methodology. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow the candidates down to a small set of high-quality relays, and the final relay selection is refined by direct probing. This two-phase approach enjoys an efficient implementation using a Distributed Hash Table (DHT). When the DHT is constructed, the node keys carry location information and are generated scalably using indirect measurements, such as the ICS coordinates. Relay discovery is achieved efficiently using DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn, and message costs. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
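The two-phase approach above can be sketched directly: cheap coordinate estimates shortlist candidates, then direct probes decide among them. The coordinates, RTTs, and shortlist size k below are synthetic placeholders, and the DHT-based discovery is not modeled:

```python
import math
import random

random.seed(1)

# synthetic placeholders: coarse ICS-style coordinates for 50 relays, and a
# "ground truth" RTT that direct probing would reveal
coords = {f'relay{i}': (random.random(), random.random()) for i in range(50)}
true_rtt = {r: 100 * math.dist(xy, (0.5, 0.5)) + random.gauss(0, 3)
            for r, xy in coords.items()}

def select_relay(target_xy, k=5):
    """Phase 1: shortlist the k candidates closest in coordinate space.
    Phase 2: directly probe only the shortlist and pick the best."""
    shortlist = sorted(coords, key=lambda r: math.dist(coords[r], target_xy))[:k]
    return min(shortlist, key=lambda r: true_rtt[r])  # probing stand-in

best = select_relay((0.5, 0.5))
```

The design point is the cost split: 50 cheap coordinate comparisons but only k direct probes, rather than probing every candidate.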
11. Global and country-specific mainstreaminess measures: Definitions, analysis, and usage for improving personalized music recommendation systems.
- Author
-
Bauer, Christine and Schedl, Markus
- Abstract
Relevance: Popularity-based approaches are widely adopted in music recommendation systems, both in industry and research. These approaches recommend to the target user what is currently popular among all users of the system. However, as the popularity distribution of music items is typically a long-tail distribution, popularity-based approaches to music recommendation fall short in satisfying listeners who have specialized music preferences far away from the global music mainstream. Addressing this gap, the contribution of this article is three-fold. Definition of mainstreaminess measures: First, we provide several quantitative measures describing the proximity of a user’s music preference to the music mainstream. Assuming that there is a difference between the global music mainstream and a country-specific one, we define the measures at two levels: relating a listener’s music preferences to the global music preferences of all users, or relating them to music preferences of the user’s country. To quantify such music preferences, we define a music item’s popularity in terms of artist playcounts (APC) and artist listener counts (ALC). Moreover, we adopt a distribution-based and a rank-based approach as a means to decrease bias towards the head of the long-tail distribution. This eventually results in a framework of 6 measures to quantify music mainstreaminess. Differences between countries with respect to music mainstream: Second, we perform in-depth quantitative and qualitative studies of music mainstream in that we (i) analyze differences between countries in terms of their level of mainstreaminess, (ii) uncover both positive and negative outliers (substantially higher and lower country-specific popularity, respectively, compared to the global mainstream), analyzing these with a mixed-methods approach, and (iii) investigate differences between countries in terms of listening preferences related to popular music artists.
We conduct our studies and experiments using the standardized LFM-1b dataset, from which we analyze about 800,000,000 listening events shared by about 53,000 users (from 47 countries) of the music streaming platform Last.fm. We show that there are substantial country-specific differences in listeners’ music consumption behavior with respect to the most popular artists listened to. Rating prediction experiments: Third, we demonstrate the applicability of our study results to improve music recommendation systems. To this end, we conduct rating prediction experiments in which we tailor recommendations to a user’s level of preference for the music mainstream using the proposed 6 mainstreaminess measures: defined by a distribution-based or rank-based approach, defined on a global level or on a country level (for the user’s country), and for APC or ALC. Our approach roughly equals a hybrid recommendation approach in which a demographic filtering strategy is implemented before collaborative filtering is performed. Results suggest that, in terms of rating prediction accuracy, each of the presented mainstreaminess definitions has its merits. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
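One plausible instance of a rank-based mainstreaminess measure is a Spearman-style agreement between a user's artist ranking and the global ranking over the artists the user listens to. This is an illustrative variant with toy data, not necessarily one of the paper's six definitions:

```python
def rank(counts):
    """Artist -> rank position (0 = most played)."""
    ordered = sorted(counts, key=counts.get, reverse=True)
    return {artist: i for i, artist in enumerate(ordered)}

def mainstreaminess(user_counts, global_counts):
    """Spearman-style agreement between the user's artist ranking and the
    global ranking, computed over the artists the user has listened to."""
    shared = [a for a in user_counts if a in global_counts]
    n = len(shared)
    if n < 2:
        return 0.0
    ru = rank({a: user_counts[a] for a in shared})
    rg = rank({a: global_counts[a] for a in shared})
    d2 = sum((ru[a] - rg[a]) ** 2 for a in shared)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))   # Spearman rho (no ties)

global_counts = {'pop_star': 1000, 'rock_band': 500, 'niche_act': 10}
mainstream_user = {'pop_star': 90, 'rock_band': 40, 'niche_act': 1}
contrarian_user = {'pop_star': 1, 'rock_band': 40, 'niche_act': 90}
```

Swapping `global_counts` for a country-specific count dictionary gives the country-level variant the abstract describes.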
12. The number of undocumented immigrants in the United States: Estimates based on demographic modeling with data from 1990 to 2016.
- Author
-
Fazel-Zarandi, Mohammad M., Feinstein, Jonathan S., and Kaplan, Edward H.
- Subjects
DEMOGRAPHIC change, SENSORY perception, EMIGRATION & immigration, POPULATION, GOVERNMENT policy - Abstract
We apply standard demographic principles of inflows and outflows to estimate the number of undocumented immigrants in the United States, using the best available data, including some that have only recently become available. Our analysis covers the years 1990 to 2016. We develop an estimate of the number of undocumented immigrants based on parameter values that tend to underestimate undocumented immigrant inflows and overstate outflows; we also show the probability distribution for the number of undocumented immigrants based on simulating our model over parameter value ranges. Our conservative estimate is 16.7 million for 2016, nearly fifty percent higher than the most prominent current estimate of 11.3 million, which is based on survey data and thus different sources and methods. The mean estimate based on our simulation analysis is 22.1 million, essentially double the current widely accepted estimate. Our model predicts a similar trajectory of growth in the number of undocumented immigrants over the years of our analysis, but at a higher level. While our analysis delivers different results, we note that it is based on many assumptions. The most critical of these concern border apprehension rates and voluntary emigration rates of undocumented immigrants in the U.S. These rates are uncertain, especially in the 1990s and early 2000s, which is when, based both on our modeling and on the very different survey-data approach, the number of undocumented immigrants increased most significantly. Our results, while based on a number of assumptions and uncertainties, could help frame debates about policies whose consequences depend on the number of undocumented immigrants in the United States. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
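The inflow-outflow logic of the abstract above reduces to a simple stock-flow Monte Carlo: draw annual inflows and outflow rates from ranges and track the stock. All numbers below are hypothetical placeholders, not the paper's calibrated parameters:

```python
import random

random.seed(42)

def simulate_stock(years=27, n_runs=2000):
    """Monte Carlo over a stock-flow model: each run draws annual inflows
    and an outflow rate from plausible ranges and tracks the population
    stock (in millions). All parameter ranges are illustrative."""
    results = []
    for _ in range(n_runs):
        stock = 3.5                                    # starting stock, 1990
        for _ in range(years):
            inflow = random.uniform(0.4, 1.2)          # millions per year
            outflow_rate = random.uniform(0.02, 0.06)  # fraction leaving/removed
            stock += inflow - outflow_rate * stock
        results.append(stock)
    return sorted(results)

runs = simulate_stock()
median = runs[len(runs) // 2]
```

Sorting the runs gives the simulated probability distribution over the final stock, which is the shape of result (point estimate plus distribution) the paper reports.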
13. Evaluating the influential priority of the factors on insurance loss of public transit.
- Author
-
Zhang, Wenhui, Su, Yongmin, Ke, Ruimin, and Chen, Xinqiang
- Subjects
PUBLIC transit, INSURANCE claims, GREY relational analysis, K-means clustering - Abstract
Understanding the correlation between influential factors and insurance losses is beneficial for insurers to accurately price and modify the bonus-malus system. Although there have been a certain number of achievements in insurance loss and claims modeling, limited efforts have focused on exploring the relative role of accident characteristics in insurance losses. The primary objective of this study is to evaluate the influential priority of transit accident attributes, such as the time, location and type of accidents. Based on the dataset from the Washington State Transit Insurance Pool (WSTIP) in the USA, we implement several key algorithms to achieve the objectives. First, the K-means algorithm is used to cluster the insurance loss data into 6 intervals; second, a Grey Relational Analysis (GRA) model is applied to calculate grey relational grades of the influential factors in each interval; in addition, we implement a Naive Bayes model to compute the posterior probability of factor values falling in each interval. The results show that the time, location and type of accidents significantly influence the insurance loss in the first five intervals, but their grey relational grades show no significant difference. In the last interval, which represents the highest insurance loss, the grey relational grade of the time is significantly higher than that of the location and type of accidents. For each value of the time and location, the insurance loss most likely falls in the first and second intervals, which represent lower losses. However, for accidents between buses and non-motorized road users, the probability of the insurance loss falling in interval 6 tends to be highest. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
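The grey relational grade used above measures how closely a normalized factor series tracks the normalized reference (loss) series. A sketch with toy data; the series and the distinguishing coefficient rho = 0.5 are illustrative, not the WSTIP data:

```python
def grey_relational_grades(reference, factors, rho=0.5):
    """Grey Relational Analysis (GRA) sketch: each factor's grade is the
    mean relational coefficient against the reference series, after
    min-max normalization. rho is the usual distinguishing coefficient."""
    def norm(seq):
        lo, hi = min(seq), max(seq)
        return [(x - lo) / (hi - lo) for x in seq]

    ref = norm(reference)
    grades = {}
    for name, seq in factors.items():
        diffs = [abs(r - x) for r, x in zip(ref, norm(seq))]
        dmin, dmax = min(diffs), max(diffs) or 1e-12  # guard: identical series
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in diffs]
        grades[name] = sum(coeffs) / len(coeffs)
    return grades

# toy series (hypothetical): a higher grade means a stronger relation to loss
loss = [10, 20, 30, 40]
grades = grey_relational_grades(loss, {
    'time_of_day': [1, 2, 3, 5],      # tracks the loss series closely
    'location_code': [4, 1, 3, 2],    # weakly related
})
```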
14. Mediterranean California’s water use future under multiple scenarios of developed and agricultural land use change.
- Author
-
Wilson, Tamara S., Sleeter, Benjamin M., and Cameron, D. Richard
- Subjects
WATER supply, WATER use, CLIMATE change, AGRICULTURAL intensification, URBANIZATION & the environment - Abstract
With growing demand and highly variable inter-annual water supplies, California’s water use future is fraught with uncertainty. Climate change projections, anticipated population growth, and continued agricultural intensification will likely stress existing water supplies in coming decades. Using a state-and-transition simulation modeling approach, we examine a broad suite of spatially explicit future land use scenarios and their associated county-level water use demand out to 2062. We examined a range of potential water demand futures sampled from a 20-year record of historical (1992–2012) data to develop a suite of potential future land change scenarios, including low/high change scenarios for urbanization and agriculture as well as “lowest of the low” and “highest of the high” anthropogenic use. Future water demand decreased 8.3 billion cubic meters (Bm3) in the lowest of the low scenario and decreased 0.8 Bm3 in the low agriculture scenario. The greatest increases in water demand were projected for the highest of the high land use (+9.4 Bm3), high agricultural expansion (+4.6 Bm3), and high urbanization (+2.1 Bm3) scenarios. Overall, these scenarios show agricultural land use decisions will likely drive future demand more than increasing municipal and industrial uses, yet improved efficiencies across all sectors could lead to potential water use savings. Results provide water managers with information on diverging land use and water use futures, based on historical, observed land change trends and water use histories. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
15. Evaluating the role of land cover and climate uncertainties in computing gross primary production in Hawaiian Island ecosystems.
- Author
-
Kimball, Heather L., Selmants, Paul C., Moreno, Alvaro, Running, Steve W., and Giardina, Christian P.
- Subjects
LAND cover, PRIMARY productivity (Biology), ISLAND ecology, FLUX (Energy), MODIS (Spectroradiometer) - Abstract
Gross primary production (GPP) is the Earth’s largest carbon flux into the terrestrial biosphere and plays a critical role in regulating atmospheric chemistry and global climate. The Moderate Resolution Imaging Spectroradiometer (MODIS)-MOD17 data product is a widely used remote sensing-based model that provides global estimates of spatiotemporal trends in GPP. When the MOD17 algorithm is applied to regional-scale heterogeneous landscapes, input data from coarse resolution land cover and climate products may increase uncertainty in GPP estimates, especially in high productivity tropical ecosystems. We examined the influence of using locally specific land cover and high-resolution local climate input data on MOD17 estimates of GPP for the State of Hawaii, a heterogeneous and discontinuous tropical landscape. Replacing the global land cover data input product (MOD12Q1) with Hawaii-specific land cover data reduced statewide GPP estimates by ~8%, primarily because the Hawaii-specific land cover map had less vegetated land area compared to the global land cover product. Replacing coarse resolution GMAO climate data with Hawaii-specific high-resolution climate data also reduced statewide GPP estimates by ~8% because of the higher spatial variability of photosynthetically active radiation (PAR) in the Hawaii-specific climate data. The combined use of both Hawaii-specific land cover and high-resolution Hawaii climate data inputs reduced statewide GPP by ~16%, suggesting equal and independent influence on MOD17 GPP estimates. Our sensitivity analyses within a heterogeneous tropical landscape suggest that refined global land cover and climate data sets may contribute to an enhanced MOD17 product at a variety of spatial scales. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
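For context, MOD17 estimates GPP with a light-use-efficiency formula: a biome-specific maximum efficiency attenuated by low-temperature and high-VPD scalars, multiplied by FPAR and PAR. The ramp thresholds below are illustrative, not the actual Biome Property Look-Up Table values (those vary by land cover class, which is exactly why the land cover input matters in the abstract above):

```python
def ramp(x, lo, hi):
    """Linear ramp: 0 at or below lo, 1 at or above hi."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def mod17_gpp(eps_max, tmin_c, vpd_pa, fpar, sw_rad):
    """Light-use-efficiency GPP in the MOD17 style (kg C m^-2 day^-1).
    Ramp thresholds here are illustrative, not actual BPLUT values."""
    par = 0.45 * sw_rad                              # PAR ~ 45% of shortwave
    t_scalar = ramp(tmin_c, -8.0, 11.0)              # cold-temperature limit
    vpd_scalar = 1.0 - ramp(vpd_pa, 650.0, 4000.0)   # dryness limit
    return eps_max * t_scalar * vpd_scalar * fpar * par

# warm, moist day with fpar = 0.8 and 20 MJ m^-2 of shortwave radiation
gpp = mod17_gpp(eps_max=0.001, tmin_c=20.0, vpd_pa=650.0, fpar=0.8, sw_rad=20.0)
```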
16. Incorporating abundance information and guiding variable selection for climate-based ensemble forecasting of species' distributional shifts.
- Author
-
Tanner, Evan P., Papeş, Monica, Elmore, R. Dwayne, Fuhlendorf, Samuel D., and Davis, Craig A.
- Subjects
ECOLOGICAL niche, CLIMATE change, SPECIES distribution, STATISTICAL correlation, MATHEMATICAL variables - Abstract
Ecological niche models (ENMs) have increasingly been used to estimate the potential effects of climate change on species’ distributions worldwide. Recently, predictions of species abundance have also been obtained with such models, though knowledge about the climatic variables affecting species abundance is often lacking. To address this, we used a well-studied guild (temperate North American quail) and the Maxent modeling algorithm to compare model performance of three variable selection approaches: correlation/variable contribution (CVC), biological (i.e., variables known to affect species abundance), and random. We then applied the best approach to forecast potential distributions, under future climatic conditions, and analyze future potential distributions in light of available abundance data and presence-only occurrence data. To estimate species’ distributional shifts we generated ensemble forecasts using four global circulation models, four representative concentration pathways, and two time periods (2050 and 2070). Furthermore, we present distributional shifts where 75%, 90%, and 100% of our ensemble models agreed. The CVC variable selection approach outperformed our biological approach for four of the six species. Model projections indicated species-specific effects of climate change on future distributions of temperate North American quail. The Gambel’s quail (Callipepla gambelii) was the only species predicted to gain area in climatic suitability across all three scenarios of ensemble model agreement. Conversely, the scaled quail (Callipepla squamata) was the only species predicted to lose area in climatic suitability across all three scenarios of ensemble model agreement. Our models projected future loss of areas for the northern bobwhite (Colinus virginianus) and scaled quail in portions of their distributions which are currently areas of high abundance. 
Climatic variables that influence local abundance may not always scale up to influence species’ distributions. Special attention should be given to selecting variables for ENMs, and tests of model performance should be used to validate the choice of variables. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
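The 75/90/100% ensemble-agreement maps described above amount to thresholding the mean of binary member predictions. A minimal numpy sketch with four toy member maps:

```python
import numpy as np

def ensemble_agreement(member_maps, threshold=0.75):
    """Cells where at least `threshold` of the binary ensemble members
    (e.g. GCM x RCP combinations) predict climatic suitability."""
    stack = np.stack(member_maps)        # members x rows x cols
    return stack.mean(axis=0) >= threshold

# four toy 2x2 presence/absence member maps
m1 = np.array([[1, 1], [0, 0]])
m2 = np.array([[1, 0], [0, 1]])
m3 = np.array([[1, 1], [0, 0]])
m4 = np.array([[1, 1], [1, 0]])
agree75 = ensemble_agreement([m1, m2, m3, m4], 0.75)
```

Raising the threshold toward 1.0 shrinks the agreed-upon area, which is why the paper reports the 75%, 90%, and 100% maps separately.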
17. Simulated bat populations erode when exposed to climate change projections for western North America.
- Author
-
Hayes, Mark A. and Adams, Rick A.
- Subjects
BATS, LACTATION, FRINGED myotis, CLIMATE change, SPECIES diversity - Abstract
Recent research has demonstrated that temperature and precipitation conditions correlate with successful reproduction in some insectivorous bat species that live in arid and semiarid regions, and that hot and dry conditions correlate with reduced lactation and reproductive output by females of some species. However, the potential long-term impacts of climate-induced reproductive declines on bat populations in western North America are not well understood. We combined results from long-term field monitoring and experiments in our study area with information on vital rates to develop stochastic age-structured population dynamics models and analyzed how simulated fringed myotis (Myotis thysanodes) populations changed under projected future climate conditions in our study area near Boulder, Colorado (Boulder Models) and throughout western North America (General Models). Each simulation consisted of an initial population of 2,000 females and an approximately stable age distribution at the beginning of the simulation. We allowed each population to be influenced by the mean annual temperature and annual precipitation for our study area and a generalized range-wide model projected through year 2086, for each of four carbon emission scenarios (representative concentration pathways RCP2.6, RCP4.5, RCP6.0, RCP8.5). Each population simulation was repeated 10,000 times. Of the 8 Boulder Model simulations, 1 increased (+29.10%), 3 stayed approximately stable (+2.45%, +0.05%, -0.03%), and 4 simulations decreased substantially (-44.10%, -44.70%, -44.95%, -78.85%). All General Model simulations for western North America decreased by >90% (-93.75%, -96.70%, -96.70%, -98.75%). These results suggest that a changing climate in western North America has the potential to quickly erode some forest bat populations including species of conservation concern, such as fringed myotis. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
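The stochastic age-structured simulation described above can be caricatured with a two-stage (juvenile/adult) female-only model in which hot, dry years depress reproduction. All vital rates and the climate penalty are illustrative, not the paper's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(years=70, climate_penalty=0.0):
    """Two-stage female-only population model: each year's reproduction is
    reduced by a random climate draw scaled by the penalty."""
    juveniles, adults = 500.0, 1500.0
    for _ in range(years):
        repro = max(0.45 - climate_penalty * rng.random(), 0.0)  # pups per adult
        new_juveniles = adults * repro
        adults = 0.55 * juveniles + 0.80 * adults    # stage-specific survival
        juveniles = new_juveniles
    return juveniles + adults

stable = simulate(climate_penalty=0.0)     # benign climate
stressed = simulate(climate_penalty=0.3)   # frequent hot/dry years
```

With these illustrative rates the benign-climate population grows slowly while the stressed one declines, mirroring the erosion pattern the abstract reports for the harsher scenarios.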
18. Deploying a quantum annealing processor to detect tree cover in aerial imagery of California.
- Author
-
Boyda, Edward, Basu, Saikat, Ganguly, Sangram, Michaelis, Andrew, Mukhopadhyay, Supratik, and Nemani, Ramakrishna R.
- Subjects
QUANTUM annealing ,COMPUTER vision ,AERIAL photography ,REMOTE sensing ,GROUND cover plants - Abstract
Quantum annealing is an experimental and potentially breakthrough computational technology for handling hard optimization problems, including problems of computer vision. We present a case study in training a production-scale classifier of tree cover in remote sensing imagery, using early-generation quantum annealing hardware built by D-Wave Systems, Inc. Beginning within a known boosting framework, we train decision stumps on texture features and vegetation indices extracted from four-band, one-meter-resolution aerial imagery from the state of California. We then impose a regularized quadratic training objective to select an optimal voting subset from among these stumps. The votes of the subset define the classifier. For optimization, the logical variables in the objective function map to quantum bits in the hardware device, while quadratic couplings encode as the strength of physical interactions between the quantum bits. Hardware design limits the number of couplings between these basic physical entities to five or six. To account for this limitation in mapping large problems to the hardware architecture, we propose a truncation and rescaling of the training objective through a trainable metaparameter. The boosting process on our basic 108- and 508-variable problems, thus constituted, returns classifiers that incorporate a diverse range of color- and texture-based metrics and discriminate tree cover with accuracies as high as 92% in validation and 90% on a test scene encompassing the open space preserves and dense suburban build of Mill Valley, CA. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
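The stump-selection step in the abstract above (choosing an optimal voting subset via a quadratic objective) can be written as a QUBO and, at toy scale, solved by brute force in place of the annealer. The data, the sparsity penalty `lam`, and the problem size here are hypothetical; on hardware, the diagonal terms become qubit biases and the off-diagonal terms become couplings between qubits.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's decision stumps: h[i, j] is the +/-1
# vote of stump i on training sample j, y[j] the true label.
n_stumps, n_samples = 8, 200
h = rng.choice([-1.0, 1.0], size=(n_stumps, n_samples))
y = rng.choice([-1.0, 1.0], size=n_samples)

# Minimize |sum_i w_i h_i - y|^2 + lam * sum_i w_i over w_i in {0, 1}.
# Expanding (and dropping the constant y.y) yields a QUBO matrix whose
# diagonal carries the linear terms and whose off-diagonal entries
# h_i . h_j are the pairwise couplings.
lam = 0.1  # hypothetical sparsity penalty
Q = h @ h.T
np.fill_diagonal(Q, np.diag(h @ h.T) - 2.0 * (h @ y) + lam)

def qubo_energy(w):
    return w @ Q @ w  # valid because w_i^2 = w_i for binary w

# Exhaustive ground-state search is feasible at 8 variables; annealing
# hardware replaces this step at the paper's 108- and 508-variable scale.
best = min(itertools.product([0, 1], repeat=n_stumps),
           key=lambda w: qubo_energy(np.array(w, dtype=float)))
```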
19. Automated Retinal Layer Segmentation Using Spectral Domain Optical Coherence Tomography: Evaluation of Inter-Session Repeatability and Agreement between Devices.
- Author
-
Terry, Louise, Cassels, Nicola, Lu, Kelly, Acton, Jennifer H., Margrain, Tom H., North, Rachel V., Fergusson, James, White, Nick, and Wood, Ashley
- Subjects
OCULAR radiography ,IMAGE segmentation ,OPTICAL coherence tomography ,STATISTICAL reliability ,STATISTICAL correlation - Abstract
Retinal and intra-retinal layer thicknesses are routinely generated from optical coherence tomography (OCT) images, but on-board software capabilities and image scaling assumptions are not consistent across devices. This study evaluates the device-independent Iowa Reference Algorithms (Iowa Institute for Biomedical Imaging) for automated intra-retinal layer segmentation and image scaling for three OCT systems. Healthy participants (n = 25) underwent macular volume scans using a Cirrus HD-OCT (Zeiss), 3D-OCT 1000 (Topcon), and a non-commercial long-wavelength (1040nm) OCT on two occasions. Mean thickness of 10 intra-retinal layers was measured in three ETDRS subfields (fovea, inner ring and outer ring) using the Iowa Reference Algorithms. Where available, total retinal thicknesses were measured using on-board software. Measured axial eye length (AEL)-dependent scaling was used throughout, with a comparison made to the system-specific fixed-AEL scaling. Inter-session repeatability and agreement between OCT systems and segmentation methods were assessed. Inter-session coefficient of repeatability (CoR) for the foveal subfield total retinal thickness was 3.43μm, 4.76μm, and 5.98μm for the Zeiss, Topcon, and long-wavelength images respectively. For the commercial software, CoR was 4.63μm (Zeiss) and 7.63μm (Topcon). The Iowa Reference Algorithms demonstrated higher repeatability than the on-board software and, in addition, reliably segmented all 10 intra-retinal layers. With fixed-AEL scaling, the algorithm produced significantly different thickness values for the three OCT devices (P<0.05), with these discrepancies generally characterized by an overall offset (bias) and correlations with axial eye length for the foveal subfield and outer ring (P<0.05). This correlation was reduced to an insignificant level in all cases when AEL-dependent scaling was used.
Overall, the Iowa Reference Algorithms are viable for clinical and research use in healthy eyes imaged with these devices; however, ocular biometry is required for accurate quantification of OCT images. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
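The AEL-dependent scaling contrasted with fixed-AEL scaling in the abstract above can be illustrated with Bennett's abbreviated ocular-magnification correction, q = 0.01306·(AEL − 1.82). This is a sketch of the general idea only: the assumed device AEL and pixel pitch below are hypothetical, and the study's actual scaling procedure may differ in detail.

```python
def lateral_scale_mm(pixels, pixel_pitch_mm, ael_mm, assumed_ael_mm=24.46):
    """Rescale a lateral OCT distance from a device's fixed-AEL
    assumption to the measured axial eye length, using Bennett's
    abbreviated correction q = 0.01306 * (AEL - 1.82).
    assumed_ael_mm is a hypothetical device default."""
    q_true = 0.01306 * (ael_mm - 1.82)
    q_assumed = 0.01306 * (assumed_ael_mm - 1.82)
    return pixels * pixel_pitch_mm * (q_true / q_assumed)
```

A longer-than-assumed eye makes the true lateral extent of a scan larger than the device reports, which shifts where ETDRS subfield boundaries fall; this is why correlations with AEL vanish once measured biometry is used.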
20. Relationship between the Uncompensated Price Elasticity and the Income Elasticity of Demand under Conditions of Additive Preferences.
- Author
-
Sabatelli, Lorenzo
- Subjects
ELASTICITY (Economics) ,ECONOMIC demand ,ADDITIVES ,FINANCIAL instruments ,MARGINAL utility - Abstract
Income and price elasticity of demand quantify the responsiveness of markets to changes in income and in prices, respectively. Under the assumptions of utility maximization and preference independence (additive preferences), mathematical relationships between income elasticity values and the uncompensated own and cross price elasticity of demand are derived here using the differential approach to demand analysis. Key parameters are: the elasticity of the marginal utility of income, and the average budget share. The proposed method can be used to forecast the direct and indirect impact of price changes and of financial instruments of policy using available estimates of the income elasticity of demand. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
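The kind of relationship the abstract above describes can be illustrated with the classical Frisch formulas for additive preferences, which involve exactly the named key parameters: the income elasticity, the budget share, and the elasticity of the marginal utility of income (phi, negative for normal preferences). These are the textbook Frisch relations, offered as a sketch; the paper's own derivation via the differential approach may state them differently.

```python
def own_price_elasticity(eta_i, w_i, phi):
    """Frisch relation under additive preferences:
    e_ii = (eta_i / phi) * (1 - eta_i * w_i) - eta_i * w_i,
    where eta_i is the income elasticity of good i, w_i its budget
    share, and phi < 0 the elasticity of marginal utility of income."""
    return (eta_i / phi) * (1.0 - eta_i * w_i) - eta_i * w_i

def cross_price_elasticity(eta_i, eta_j, w_j, phi):
    """Frisch cross-price relation for i != j:
    e_ij = -eta_i * w_j * (1 + eta_j / phi)."""
    return -eta_i * w_j * (1.0 + eta_j / phi)
```

For example, a good with income elasticity 0.5, a 20% budget share, and phi = -2 gives an own-price elasticity of (0.5/-2)(1 - 0.1) - 0.1 = -0.325, so available income-elasticity estimates alone pin down the price response.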
21. Demographic and Component Allee Effects in Southern Lake Superior Gray Wolves.
- Author
-
Stenglein, Jennifer L. and Van Deelen, Timothy R.
- Subjects
ALLEE effect ,WOLVES ,ANIMAL populations ,BIOLOGICAL extinction ,CONSERVATIONISTS ,BAYESIAN analysis ,SIMULATION methods & models - Abstract
Recovering populations of carnivores suffering Allee effects risk extinction because positive population growth requires a minimum number of cooperating individuals. Conservationists seldom consider these issues in planning for carnivore recovery because of data limitations, but ignoring Allee effects could lead to overly optimistic predictions for growth and underestimates of extinction risk. We used Bayesian splines to document a demographic Allee effect in the time series of gray wolf (Canis lupus) population counts (1980–2011) in the southern Lake Superior region (SLS, Wisconsin and the upper peninsula of Michigan, USA) in each of four measures of population growth. We estimated that the population crossed the Allee threshold at roughly 20 wolves in four to five packs. Maximum per-capita population growth occurred in the mid-1990s when there were approximately 135 wolves in the SLS population. To infer mechanisms behind the demographic Allee effect, we evaluated a potential component Allee effect using an individual-based spatially explicit model for gray wolves in the SLS region. Our simulations varied the perception neighborhoods for mate-finding and the mean dispersal distances of wolves. Simulated wolves with long-distance dispersal and reduced perception neighborhoods were most likely to go extinct or experience Allee effects. These phenomena likely restricted population growth in early years of SLS wolf population recovery. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
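The demographic Allee effect in the abstract above corresponds to a strong-Allee growth model, in which per-capita growth turns negative below a threshold. The sketch below Euler-integrates one standard form of that model; the threshold A ≈ 20 wolves comes from the abstract, while r and K are hypothetical and the paper's Bayesian-spline analysis is nonparametric rather than model-based.

```python
def trajectory(n0, r=0.3, K=700.0, A=20.0, t_end=30.0, dt=0.01):
    """Euler-integrate dN/dt = r*N*(1 - N/K)*(N/A - 1), a strong Allee
    model: per-capita growth is negative below threshold A and positive
    between A and carrying capacity K. r and K are illustrative only."""
    n, steps = n0, int(t_end / dt)
    for _ in range(steps):
        n = max(0.0, n + dt * r * n * (1.0 - n / K) * (n / A - 1.0))
    return n
```

A population starting just below the threshold (e.g. 15 wolves) declines toward extinction, while one starting just above it (e.g. 25 wolves) grows toward carrying capacity, mirroring why small recovering populations are so sensitive to the threshold's location.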