687 results
Search Results
52. An EGR performance evaluation and decision-making approach based on grey theory and grey entropy analysis.
- Author
- Zu, Xianghuan, Yang, Chuanlei, Wang, Hechun, and Wang, Yinyan
- Subjects
- EXHAUST gas recirculation, GREY relational analysis, DECISION making, PERFORMANCE of diesel motors, NITROGEN oxides emission control
- Abstract
Exhaust gas recirculation (EGR) is one of the main methods of reducing NOx emissions and has been widely used in marine diesel engines. This paper proposes an optimized comprehensive assessment method based on multi-objective grey situation decision theory, grey relation theory and grey entropy analysis to evaluate the performance of EGR and determine its optimal rate, which currently lacks clear theoretical guidance. First, multi-objective grey situation decision theory is used to establish the initial decision-making model according to the main EGR parameters. The optimal compromise between diesel engine combustion and emission performance is transformed into a decision-making target weight problem. After establishing the initial model and considering the characteristics of EGR under different conditions, an optimized target weight algorithm based on grey relation theory and grey entropy analysis is applied to generate the comprehensive evaluation and decision-making model. Finally, the proposed method is successfully applied to a TBD234V12 turbocharged diesel engine, and the results clearly illustrate the feasibility of the proposed method for providing theoretical support and a reference for further EGR optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
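As an illustration of the grey relational analysis named in entry 52, the sketch below computes grey relational coefficients between a reference series and a candidate series. It is a minimal, generic formulation, not code from the paper; the distinguishing coefficient ζ = 0.5 and the toy EGR numbers are assumptions.

```python
import numpy as np

def grey_relational_coefficients(reference, candidate, zeta=0.5):
    """Grey relational coefficients between a normalized reference series
    and a normalized candidate series (larger = closer to the reference)."""
    reference = np.asarray(reference, dtype=float)
    candidate = np.asarray(candidate, dtype=float)
    delta = np.abs(reference - candidate)          # point-wise deviations
    d_min, d_max = delta.min(), delta.max()
    return (d_min + zeta * d_max) / (delta + zeta * d_max)

# Toy example: one candidate EGR setting compared against an ideal reference.
ref = np.array([1.00, 1.00, 1.00])    # normalized ideal targets (e.g. NOx, soot, fuel use)
cand = np.array([0.92, 0.75, 0.88])   # normalized observed performance
coeffs = grey_relational_coefficients(ref, cand)
print(coeffs, coeffs.mean())          # the mean is the grey relational grade
```

Averaging the coefficients gives a grey relational grade of the kind a decision model can weight across objectives.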
53. Knowledge categorization affects popularity and quality of Wikipedia articles.
- Author
- Lerner, Jürgen and Lomi, Alessandro
- Subjects
- EDITORS, THIN layer chromatography, CHROMATOGRAPHIC analysis, COGNITIVE psychology
- Abstract
The existence of a shared classification system is essential to knowledge production, transfer, and sharing. Studies of knowledge classification, however, rarely consider the fact that knowledge categories exist within hierarchical information systems designed to facilitate knowledge search and discovery. This neglect is problematic whenever information about categorical membership is itself used to evaluate the quality of the items that the category contains. The main objective of this paper is to show that the effects of category membership depend on the position that a category occupies in the hierarchical knowledge classification system of Wikipedia—an open knowledge production and sharing platform taking the form of a freely accessible on-line encyclopedia. Using data on all English-language Wikipedia articles, we examine how the position that a category occupies in the classification hierarchy affects the attention that articles in that category attract from Wikipedia editors, and their evaluation of quality of the Wikipedia articles. Specifically, we show that Wikipedia articles assigned to coarse-grained categories (i. e., categories that occupy higher positions in the hierarchical knowledge classification system) garner more attention from Wikipedia editors (i. e., attract a higher volume of text editing activity), but receive lower evaluations (i. e., they are considered to be of lower quality). The negative relation between attention and quality implied by this result is consistent with current theories of social categorization, but it also goes beyond available results by showing that the effects of categorization on evaluation depend on the position that a category occupies in a hierarchical knowledge classification system. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
54. Face recognition algorithm using extended vector quantization histogram features.
- Author
- Yan, Yan, Lee, Feifei, Wu, Xueqian, and Chen, Qiu
- Subjects
- HUMAN facial recognition software, VECTOR quantization, ALGORITHMS, COGNITIVE psychology, ARTIFICIAL intelligence
- Abstract
In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
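The codevector-histogram idea summarized in entry 54 can be sketched with a k-means codebook standing in for the VQ codebook. The block size, codebook size, and random stand-in data below are assumptions, and the Markov stationary feature extension is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

def vq_histogram(image, codebook, block=4):
    """Histogram of codevector indices over non-overlapping image blocks."""
    h, w = image.shape
    patches = [image[i:i + block, j:j + block].ravel()
               for i in range(0, h - block + 1, block)
               for j in range(0, w - block + 1, block)]
    idx = codebook.predict(np.array(patches))
    return np.bincount(idx, minlength=codebook.n_clusters) / len(idx)

# Illustrative codebook trained on random "face" patches (stand-in data only).
rng = np.random.default_rng(0)
train_patches = rng.random((1000, 16))
codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(train_patches)
feature = vq_histogram(rng.random((64, 64)), codebook)
print(feature.shape)  # (32,) histogram used as the face descriptor
```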
55. Ensemble machine learning and forecasting can achieve 99% uptime for rural handpumps.
- Author
- Wilson, Daniel L., Coyle, Jeremy R., and Thomas, Evan A.
- Subjects
- MACHINE learning, HAND pumps, WATER supply, WATER pollution prevention, PUMPING machinery maintenance & repair
- Abstract
Broken water pumps continue to impede efforts to deliver clean and economically-viable water to the global poor. The literature has demonstrated that customers’ health benefits and willingness to pay for clean water are best realized when clean water infrastructure performs extremely well (>99% uptime). In this paper, we used sensor data from 42 Afridev-brand handpumps observed for 14 months in western Kenya to demonstrate how sensors and supervised ensemble machine learning could be used to increase total fleet uptime from a best-practices baseline of about 70% to >99%. We accomplish this increase in uptime by forecasting pump failures and identifying existing failures very quickly. Comparing the costs of operating the pump per functional year over a lifetime of 10 years, we estimate that implementing this algorithm would save 7% on the levelized cost of water relative to a sensor-less scheduled maintenance program. Combined with a rigorous system for dispatching maintenance personnel, implementing this algorithm in a real-world program could significantly improve health outcomes and customers’ willingness to pay for water services. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
56. On the development of a semi-nonparametric generalized multinomial logit model for travel-related choices.
- Author
- Wang, Ke, Ye, Xin, Pendyala, Ram M., and Zou, Yajie
- Subjects
- ORTHONORMAL basis, PROBABILITY density function, MAXIMUM likelihood statistics, LOGITS, ERRORS
- Abstract
A semi-nonparametric generalized multinomial logit model, formulated using orthonormal Legendre polynomials to extend the standard Gumbel distribution, is presented in this paper. The resulting semi-nonparametric function can represent a probability density function for a large family of multimodal distributions. The model has a closed-form log-likelihood function that facilitates model estimation. The proposed method is applied to model commute mode choice among four alternatives (auto, transit, bicycle and walk) using travel behavior data from Aargau, Switzerland. Comparisons between the multinomial logit model and the proposed semi-nonparametric model show that violations of the standard Gumbel distribution assumption lead to considerable inconsistency in parameter estimates and model inferences. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
57. The continuous reaction time test for minimal hepatic encephalopathy validated by a randomized controlled multi-modal intervention—A pilot study.
- Author
- Lauridsen, M. M., Mikkelsen, S., Svensson, T., Holm, J., Klüver, C., Gram, J., Vilstrup, H., and Schaffalitzky de Muckadell, O. B.
- Subjects
- HEPATIC encephalopathy, COGNITIVE testing, REACTION time, CIRRHOSIS of the liver, PLACEBOS, PATIENTS, DIAGNOSIS
- Abstract
Background: Minimal hepatic encephalopathy (MHE) is clinically undetectable and the diagnosis requires psychometric tests. However, a lack of clarity exists as to whether the tests are in fact able to detect changes in cognition. Aim: To examine if the continuous reaction time test (CRT) can detect changes in cognition with anti-HE intervention in patients with cirrhosis and without clinically manifest hepatic encephalopathy (HE). Methods: Firstly, we conducted a reproducibility analysis and secondly measured change in CRT induced by anti-HE treatment in a randomized controlled pilot study: We stratified 44 patients with liver cirrhosis and without clinically manifest HE according to a normal (n = 22) or abnormal (n = 22) CRT. Each stratum was then block randomized to receive multimodal anti-HE intervention (lactulose+branched-chain amino acids+rifaximin) or triple placebos for 3 months in a double-blinded fashion. The CRT is a simple PC-based test and the test result, the CRT index (normal threshold > 1.9), describes the patient’s stability of alertness during the 10–minute test. Our study outcome was the change in CRT index in each group at study exit. The portosystemic encephalopathy (PSE) test, a paper-and-pencil test battery (normal threshold above -5), was used as a comparator test according to international guidelines. Results: The patients with an abnormal CRT index who were randomized to receive the active intervention normalized or improved their CRT index (mean change 0.92 ± 0.29, p = 0.01). Additionally, their PSE improved (change 3.85 ± 1.83, p = 0.03). There was no such effect in any of the other study groups. Conclusion: In this cohort of patients with liver cirrhosis and no manifest HE, the CRT identified a group in whom cognition improved with intensive anti-HE intervention. This finding infers that the CRT can detect a response to treatment and might help in selecting patients for treatment. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
58. Logical inconsistencies in time trade-off valuation of EQ-5D-5L health states: Whose fault is it?
- Author
- Yang, Zhihao, van Busschbach, Jan, Timman, Reinier, Janssen, M. F., and Luo, Nan
- Subjects
- INCONSISTENCY (Logic), DATA quality, ITERATIVE methods (Mathematics), REGRESSION analysis, HEALTH status indicators, GENERALIZATION
- Abstract
Introduction: Inconsistency in the time trade-off (TTO) task in EQ-5D-5L occurs when a respondent gives a higher value to a logically worse health state; such inconsistency compromises the quality of the data. It is not yet clear which factors are associated with individual-level inconsistency. Relating inconsistency to the characteristics of the respondent, the interviewer, and the interview process could be helpful in understanding its causes. The objective of this paper is to discover the factors associated with individual-level inconsistencies. Methods: Twenty interviewers interviewed 1,296 respondents, and each respondent valued 10 health states using the EQ-VT platform in 5 cities in China. At the respondent level, inconsistency was identified in terms of severity and quantity and related to the respondent’s background characteristics, the time and iterations spent on the wheelchair example task, and the formal TTO tasks, using multilevel multinomial regression analyses. Interviewers’ impact on inconsistencies was analyzed using single-level multinomial regression analyses. Results: In the full dataset, slight inconsistency was more related to the interview process (time spent on TTO tasks: RRR = 1.246, 95% CI: 1.076, 1.441; time spent on the wheelchair example: RRR = 0.815, 95% CI: 0.699, 0.952), while severe inconsistency was more related to the respondent’s gender (RRR = 2.347, 95% CI: 1.429, 3.855). One interviewer (Interviewer 7: RRR = 7.335, 95% CI: 1.908, 28.195) and interviewers’ experience (sequence: RRR = 0.511, 95% CI: 0.385, 0.678) showed a strong influence on inconsistency in the TTO task. Conclusion: Logical inconsistency in the valuation of EQ-5D-5L health states is associated not only with respondents’ characteristics but also with interviewers’ performance and the interview process. The role of interviewers and the importance of interviewer training may be more crucial than hitherto believed. This finding could be generalizable to other interviewer-administered health-state valuation studies. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
59. The rise of the middle author: Investigating collaboration and division of labor in biomedical research using partial alphabetical authorship.
- Author
- Mongeon, Philippe, Smith, Elise, Joyal, Bruno, and Larivière, Vincent
- Subjects
- MEDICAL research, AUTHORS, BIBLIOMETRICS, CLINICAL medicine, PERIODICALS
- Abstract
Contemporary biomedical research is performed by increasingly large teams. Consequently, an increasingly large number of individuals are being listed as authors in the bylines, which complicates the proper attribution of credit and responsibility to individual authors. Typically, more importance is given to the first and last authors, while it is assumed that the others (the middle authors) have made smaller contributions. However, this may not properly reflect the actual division of labor because some authors other than the first and last may have made major contributions. In practice, research teams may differentiate the main contributors from the rest by using partial alphabetical authorship (i.e., by listing middle authors alphabetically, while maintaining a contribution-based order for more substantial contributions). In this paper, we use partial alphabetical authorship to divide the authors of all biomedical articles in the Web of Science published over the 1980–2015 period into three groups: primary authors, middle authors, and supervisory authors. We operationalize the concept of middle authors as those who are listed in alphabetical order in the middle of an author list. Primary and supervisory authors are those listed before and after the alphabetical sequence, respectively. We show that alphabetical ordering of middle authors is frequent in biomedical research, and that the prevalence of this practice is positively correlated with the number of authors in the bylines. We also find that, for articles with 7 or more authors, the average proportion of primary, middle and supervisory authors is independent of the team size, with more than half of the authors being middle authors. This suggests that growth in author lists is not due to an increase in secondary contributions (or middle authors) but, rather, to equivalent increases in all types of roles and contributions (including many primary authors and many supervisory authors). Nevertheless, we show that the relative contribution of alphabetically ordered middle authors to the overall production of knowledge in the biomedical field has greatly increased over the last 35 years. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
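As a toy illustration of the partial alphabetical authorship idea in entry 59, the sketch below splits a byline into primary, middle, and supervisory authors by looking for an alphabetical run of family names. It is a simplified heuristic, not the authors' published procedure: it assumes a single supervisory author and ignores ties, and the example byline is invented.

```python
def split_by_alphabetical_middle(authors):
    """Split a byline into primary / middle / supervisory authors, where the
    'middle' is the longest alphabetical run (by family name) ending at the
    second-to-last position. A simplified heuristic for illustration only."""
    family = [a.split(",")[0].strip().lower() for a in authors]
    end = len(authors) - 1            # last author treated as supervisory
    start = end - 1
    while start > 0 and family[start - 1] <= family[start]:
        start -= 1
    if end - start < 2:               # need at least 2 names to call it a run
        return authors, [], []        # no alphabetical middle detected
    return authors[:start], authors[start:end], authors[end:]

# Illustrative (fabricated) byline: one primary, three alphabetical middle, one supervisory.
byline = ["Mongeon, P.", "Aubert, B.", "Chen, D.", "Zhou, L.", "Larivière, V."]
print(split_by_alphabetical_middle(byline))
```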
60. ‘Spin’ in published biomedical literature: A methodological systematic review.
- Author
- Chiu, Kellia, Grundy, Quinn, and Bero, Lisa
- Subjects
- SPIN-spin interactions, CLINICAL trials, SCIENTIFIC observation, META-analysis, PROCTOLOGY
- Abstract
In the scientific literature, spin refers to reporting practices that distort the interpretation of results and mislead readers so that results are viewed in a more favourable light. The presence of spin in biomedical research can negatively impact the development of further studies, clinical practice, and health policies. This systematic review aims to explore the nature and prevalence of spin in the biomedical literature. We searched MEDLINE, PreMEDLINE, Embase, Scopus, and hand searched reference lists for all reports that included the measurement of spin in the biomedical literature for at least 1 outcome. Two independent coders extracted data on the characteristics of reports and their included studies and all spin-related outcomes. Results were grouped inductively into themes by spin-related outcome and are presented as a narrative synthesis. We used meta-analyses to analyse the association of spin with industry sponsorship of research. We included 35 reports, which investigated spin in clinical trials, observational studies, diagnostic accuracy studies, systematic reviews, and meta-analyses. The nature of spin varied according to study design. The highest prevalence of spin, but also the greatest variability in prevalence, was present in trials. Some of the common practices used to spin results included detracting from statistically nonsignificant results and inappropriately using causal language. Source of funding was hypothesised by a few authors to be a factor associated with spin; however, results were inconclusive, possibly due to the heterogeneity of the included papers. Further research is needed to assess the impact of spin on readers’ decision-making. Editors and peer reviewers should be familiar with the prevalence and manifestations of spin in their area of research in order to ensure accurate interpretation and dissemination of research. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
61. Development and validation of a multi-dimensional measure of intellectual humility.
- Author
- Alfano, Mark, Iurino, Kathryn, Stey, Paul, Robinson, Brian, Christen, Markus, Yu, Feng, and Lapsley, Daniel
- Subjects
- HUMILITY, COGNITIVE structures, NARCISSISM, CROSS-cultural differences, SOCIOCULTURAL factors
- Abstract
This paper presents five studies on the development and validation of a scale of intellectual humility. This scale captures cognitive, affective, behavioral, and motivational components of the construct that have been identified by various philosophers in their conceptual analyses of intellectual humility. We find that intellectual humility has four core dimensions: Open-mindedness (versus Arrogance), Intellectual Modesty (versus Vanity), Corrigibility (versus Fragility), and Engagement (versus Boredom). These dimensions display adequate self-informant agreement, and adequate convergent, divergent, and discriminant validity. In particular, Open-mindedness adds predictive power beyond the Big Six for an objective behavioral measure of intellectual humility, and Intellectual Modesty is uniquely related to Narcissism. We find that a similar factor structure emerges in Germanophone participants, giving initial evidence for the model’s cross-cultural generalizability. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
62. Adaptive fuzzy sliding control of single-phase PV grid-connected inverter.
- Author
- Fei, Juntao and Zhu, Yunkai
- Subjects
- ADAPTIVE fuzzy control, ADAPTIVE control systems, PHOTOVOLTAIC power systems, PHOTOVOLTAIC power generation, MAXIMUM power point trackers, SIMULATION methods & models
- Abstract
In this paper, an adaptive fuzzy sliding mode controller is proposed to control a two-stage single-phase photovoltaic (PV) grid-connected inverter. Two key technologies are discussed in the presented PV system. An incremental conductance method with an adaptive step is adopted to track the maximum power point (MPP) by controlling the duty cycle of the controllable power switch of the boost DC-DC converter. An adaptive fuzzy sliding mode controller with an integral sliding surface is developed for the grid-connected inverter, where a fuzzy system is used to approximate the upper bound of the system nonlinearities. The proposed strategy is highly robust because the sliding mode control can be designed independently and disturbances can be adaptively compensated. Simulation results of a PV grid-connected system verify the effectiveness of the proposed method, demonstrating satisfactory robustness and performance. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
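Entry 62's MPPT stage uses an incremental conductance method with an adaptive step. The sketch below shows the core update in a generic form: it adjusts a PV voltage reference rather than the duty cycle directly, and the gain k, the dV == 0 handling, and the sample values are assumptions rather than the paper's design.

```python
def inc_cond_mppt(v, i, v_prev, i_prev, v_ref, k=0.05):
    """One incremental-conductance MPPT iteration with an adaptive step size.
    Tracks the MPP condition dI/dV = -I/V and returns an updated PV voltage
    reference; an outer loop would then adjust the boost-converter duty cycle
    so that the panel operates at v_ref."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di != 0:                       # irradiance changed at constant voltage
            v_ref += k if di > 0 else -k  # fixed nudge when dV == 0
        return v_ref
    err = di / dv + i / v                 # zero exactly at the maximum power point
    v_ref += k * err                      # adaptive step, proportional to distance from the MPP
    return v_ref

# Illustrative call: previous sample (30.0 V, 7.9 A), current sample (30.5 V, 7.8 A).
print(inc_cond_mppt(30.5, 7.8, 30.0, 7.9, v_ref=30.5))
```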
63. Will higher traffic flow lead to more traffic conflicts? A crash surrogate metric based analysis.
- Author
- Kuang, Yan, Qu, Xiaobo, and Yan, Yadan
- Subjects
- TRAFFIC flow, TRAFFIC conflicts, TRAFFIC accidents, SOFTWARE compatibility, COMPUTER simulation
- Abstract
In this paper, we aim to examine the relationship between traffic flow and potential conflict risks by using crash surrogate metrics. It has been widely recognized that one traffic flow corresponds to two distinct traffic states with different speeds and densities. In view of this, instead of simply aggregating traffic conditions with the same traffic volume, we represent potential conflict risks at a traffic flow fundamental diagram. Two crash surrogate metrics, namely, Aggregated Crash Index and Time to Collision, are used in this study to represent the potential conflict risks with respect to different traffic conditions. Furthermore, Beijing North Ring III and Next Generation SIMulation Interstate 80 datasets are utilized to carry out case studies. By using the proposed procedure, both datasets generate similar trends, which demonstrate the applicability of the proposed methodology and the transferability of our conclusions. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
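Time to Collision, one of the two surrogate metrics in entry 63, has a standard car-following definition that the short sketch below implements; the numeric example is illustrative.

```python
def time_to_collision(gap, v_follower, v_leader):
    """Time to collision (s) for a following vehicle closing on a leader.
    gap: bumper-to-bumper spacing (m); returns inf if the gap is not closing."""
    closing_speed = v_follower - v_leader
    return gap / closing_speed if closing_speed > 0 else float("inf")

# Illustrative values: 20 m gap, follower at 25 m/s, leader at 20 m/s -> 4 s.
print(time_to_collision(20.0, 25.0, 20.0))
```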
64. A two-stage approach to the depot shunting driver assignment problem with workload balance considerations.
- Author
- Wang, Jiaxi, Gronalt, Manfred, and Sun, Yan
- Subjects
- TRANSPORTATION planning, ELECTRIC multiple units, MATHEMATICAL optimization, REGRESSION analysis, PARETO analysis
- Abstract
Due to its environmentally sustainable and energy-saving characteristics, railway transportation nowadays plays a fundamental role in transporting passengers and goods. Within transportation planning, the crew (workforce) sizing problem and the crew scheduling problem have received great attention from the railway industry and the scientific community. In this paper, we aim to solve the two problems by proposing a novel two-stage optimization approach in the context of the electric multiple units (EMU) depot shunting driver assignment problem. Given a predefined depot shunting schedule, the first stage of the approach focuses on determining an optimal number of shunting drivers, while the second stage is formulated as a bi-objective optimization model that comprehensively considers the objectives of minimizing the total walking distance and maximizing the workload balance. We then combine the normalized normal constraint method with a modified Pareto filter algorithm to obtain Pareto solutions for the bi-objective optimization problem. Furthermore, we conduct a series of numerical experiments to demonstrate the proposed approach. Based on the computational results, a regression analysis yields a driver-size predictor and a sensitivity analysis gives some interesting insights that are useful for decision makers. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
65. Evidence for arrogance: On the relative importance of expertise, outcome, and manner.
- Author
- Milyavsky, Maxim, Kruglanski, Arie W., Chernikova, Marina, and Schori-Eyal, Noa
- Subjects
- PRIDE & vanity, IMPULSE (Psychology), HUMAN behavior, BUDDHISM, SOCIAL skills
- Abstract
Arrogant behavior is as old as human nature. Nonetheless, the factors that cause people to be perceived as arrogant have received very little research attention. In this paper, we focused on a typical manifestation of arrogance: dismissive behavior. In particular, we explored the conditions under which a person who dismissed advice would be perceived as arrogant. We examined two factors: the advisee’s competence, and the manner in which he or she dismissed the advice. The effect of the advisee’s competence was tested by manipulating two competence cues: relative expertise, and the outcome of the advice dismissal (i.e., whether the advisee was right or wrong). In six studies (N = 1304), participants made arrogance judgments about protagonists who dismissed the advice of another person while the advisees’ relative expertise (compared to the advisor), their eventual correctness, and the manner of their dismissal were manipulated in between-participant designs. Across various types of decisions and advisee-advisor relationships, the results show that less expert, less correct, and ruder advisees are perceived as more arrogant. We also find that outcome trumps expertise, and manner trumps both expertise and outcomes. In two additional studies (N = 101), we examined people’s naïve theories about the relative importance of the aforementioned arrogance cues. These studies showed that people overestimate the role of expertise information as compared to the role of interpersonal manner and outcomes. Thus, our results suggest that people may commit arrogant faux pas because they erroneously expect that their expertise will justify their dismissive behavior. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
66. Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research.
- Author
- Golino, Hudson F. and Epskamp, Sacha
- Subjects
- PSYCHOMETRICS, RANDOM walks, ALGORITHMS, GRAPH theory, SIMULATION methods & models
- Abstract
The estimation of the correct number of dimensions is a long-standing problem in psychometrics. Several methods have been proposed, such as parallel analysis (PA), the Kaiser-Guttman eigenvalue-greater-than-one rule, the multiple average partial procedure (MAP), maximum-likelihood approaches that use fit indices such as BIC and EBIC, and the less used and studied approach called very simple structure (VSS). In the present paper a new approach to estimate the number of dimensions is introduced and compared via simulation to the traditional techniques listed above. The proposed approach is called exploratory graph analysis (EGA), since it is based on the graphical lasso with the regularization parameter specified using EBIC. The number of dimensions is verified using walktrap, a random-walk algorithm used to identify communities in networks. In total, 32,000 data sets were simulated to fit known factor structures, with the data sets varying across different criteria: number of factors (2 and 4), number of items (5 and 10), sample size (100, 500, 1000 and 5000) and correlation between factors (orthogonal, .20, .50 and .70), resulting in 64 different conditions. For each condition, 500 data sets were simulated using lavaan. The results show that EGA performs comparably to parallel analysis, BIC, EBIC and the Kaiser-Guttman rule in a number of situations, especially when the number of factors was two. However, EGA was the only technique able to correctly estimate the number of dimensions in the four-factor structure when the correlation between factors was .70, showing an accuracy of 100% for a sample size of 5,000 observations. Finally, EGA was used to estimate the number of factors in a real dataset, in order to compare its performance with the other six techniques tested in the simulation study. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
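A rough sketch of the EGA pipeline described in entry 66: estimate a sparse partial-correlation network with the graphical lasso, then count walktrap communities. The paper selects the lasso penalty with EBIC and works in R; the scikit-learn cross-validated penalty and the python-igraph walktrap call below are substitutions, and the toy two-factor data are an assumption.

```python
import numpy as np
import igraph as ig
from sklearn.covariance import GraphicalLassoCV

def ega_dimensions(X):
    """Estimate the number of dimensions in the spirit of EGA: fit a sparse
    partial-correlation network with the graphical lasso, then count the
    communities found by the walktrap algorithm. (Penalty chosen by
    cross-validation here, not EBIC as in the paper.)"""
    prec = GraphicalLassoCV().fit(X).precision_
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)               # partial correlations
    np.fill_diagonal(pcor, 0.0)
    n = pcor.shape[0]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if abs(pcor[i, j]) > 1e-6]
    weights = [abs(pcor[i, j]) for i, j in edges]
    g = ig.Graph(n=n, edges=edges)
    clusters = g.community_walktrap(weights=weights).as_clustering()
    return len(clusters)

# Toy data: two independent 5-item factors, so two communities are expected.
rng = np.random.default_rng(1)
f = rng.normal(size=(500, 2))
X = np.hstack([f[:, [0]] + 0.5 * rng.normal(size=(500, 5)),
               f[:, [1]] + 0.5 * rng.normal(size=(500, 5))])
print(ega_dimensions(X))
```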
67. A machine learning approach for gait speed estimation using skin-mounted wearable sensors: From healthy controls to individuals with multiple sclerosis.
- Author
- McGinnis, Ryan S., Mahadevan, Nikhil, Moon, Yaejin, Seagers, Kirsten, Sheth, Nirav, Wright, John A., Jr., DiCristofaro, Steven, Silva, Ikaro, Jortberg, Elise, Ceruolo, Melissa, Pindado, Jesus A., Sosnoff, Jacob, Ghaffari, Roozbeh, and Patel, Shyamal
- Subjects
- MULTIPLE sclerosis, MACHINE learning, GAIT in humans, WALKING speed, WEARABLE technology, PATIENTS
- Abstract
Gait speed is a powerful clinical marker for mobility impairment in patients suffering from neurological disorders. However, assessment of gait speed in coordination with delivery of comprehensive care is usually constrained to clinical environments and is often limited due to mounting demands on the availability of trained clinical staff. These limitations in assessment design could give rise to poor ecological validity and limited ability to tailor interventions to individual patients. Recent advances in wearable sensor technologies have fostered the development of new methods for monitoring parameters that characterize mobility impairment, such as gait speed, outside the clinic, and therefore address many of the limitations associated with clinical assessments. However, these methods are often validated using normal gait patterns; and extending their utility to subjects with gait impairments continues to be a challenge. In this paper, we present a machine learning method for estimating gait speed using a configurable array of skin-mounted, conformal accelerometers. We establish the accuracy of this technique on treadmill walking data from subjects with normal gait patterns and subjects with multiple sclerosis-induced gait impairments. For subjects with normal gait, the best performing model systematically overestimates speed by only 0.01 m/s, detects changes in speed to within less than 1%, and achieves a root-mean-square-error of 0.12 m/s. Extending these models trained on normal gait to subjects with gait impairments yields only minor changes in model performance. For example, for subjects with gait impairments, the best performing model systematically overestimates speed by 0.01 m/s, quantifies changes in speed to within 1%, and achieves a root-mean-square-error of 0.14 m/s. Additional analyses demonstrate that there is no correlation between gait speed estimation error and impairment severity, and that the estimated speeds maintain the clinical significance of ground truth speed in this population. These results support the use of wearable accelerometer arrays for estimating walking speed in normal subjects and their extension to MS patient cohorts with gait impairment. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
68. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system.
- Author
- Born, Jannis, Galeazzi, Juan M., and Stringer, Simon M.
- Subjects
- ARTIFICIAL neural networks, COGNITIVE learning, VISUAL perception, NEUROPLASTICITY, NEUROSCIENCES, COMPUTER simulation
- Abstract
A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
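Entry 68 stresses that CT learning needs only a plain Hebbian rule with no memory trace. The sketch below shows a trace-free Hebbian update with simple winner-take-all competition and weight normalization; the architecture, learning rate, and random stand-in inputs are illustrative assumptions, not VisNet itself.

```python
import numpy as np

def hebbian_ct_update(weights, pre, alpha=0.1):
    """One trace-free Hebbian update: the most active output cell strengthens
    its weights from the current input, then its weight vector is renormalized."""
    post = weights @ pre
    winner = np.argmax(post)                            # simple competition: one winner
    weights[winner] += alpha * post[winner] * pre       # Hebbian: pre x post
    weights[winner] /= np.linalg.norm(weights[winner])  # keep weights bounded
    return weights

rng = np.random.default_rng(0)
W = rng.random((10, 100))            # 10 output cells, 100 "retinal" inputs
for _ in range(50):
    x = rng.random(100)              # stand-in for overlapping shifted images
    W = hebbian_ct_update(W, x)
```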
69. Evidence conflict measure based on OWA operator in open world.
- Author
- Jiang, Wen, Wang, Shiyu, Liu, Xiang, Zheng, Hanqing, and Wei, Boya
- Subjects
- DATA fusion (Statistics), PARAMETERS (Statistics), COEFFICIENTS (Statistics), CONFLICT management, GENERALIZATION
- Abstract
Dempster-Shafer evidence theory has been extensively used in many information fusion systems since it was proposed by Dempster and extended by Shafer. Many studies have been conducted on conflict management in Dempster-Shafer evidence theory in past decades. However, how to determine an effective parameter for measuring evidence conflict when the given environment is an open world, namely when the frame of discernment is incomplete, is still an open issue. In this paper, a new method for measuring the conflict of evidence is presented which combines the generalized conflict coefficient, the generalized evidence distance, and the generalized interval correlation coefficient using an ordered weighted averaging (OWA) operator. Through an ordered weighted average of these three parameters, the combined coefficient can still measure the conflict effectively when one or two of the parameters are not valid. Several numerical examples demonstrate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
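The ordered weighted averaging (OWA) operator that entry 69 uses to combine its three conflict indicators has a standard definition, sketched below; the indicator values and the weight vector are illustrative assumptions.

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: sort values in descending order, then take
    the weighted sum with a fixed weight vector (weights sum to 1)."""
    values = np.sort(np.asarray(values, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    return float(values @ weights)

# Illustrative combination of three conflict indicators scaled to [0, 1].
indicators = [0.70, 0.40, 0.55]   # e.g. conflict coefficient, distance, 1 - correlation
print(owa(indicators, [0.5, 0.3, 0.2]))
```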
70. The clinico-radiological paradox of cognitive function and MRI burden of white matter lesions in people with multiple sclerosis: A systematic review and meta-analysis.
- Author
- Mollison, Daisy, Sellar, Robin, Bastin, Mark, Mollison, Denis, Chandran, Siddharthan, Wardlaw, Joanna, and Connick, Peter
- Subjects
- COGNITIVE ability, MULTIPLE sclerosis, MAGNETIC resonance imaging of the brain, WHITE matter (Nerve tissue), META-analysis, PATIENTS
- Abstract
Background: Moderate correlation exists between the imaging quantification of brain white matter lesions and cognitive performance in people with multiple sclerosis (MS). This may reflect the greater importance of other features, including subvisible pathology, or methodological limitations of the primary literature. Objectives: To summarise the cognitive clinico-radiological paradox and explore the potential methodological factors that could influence the assessment of this relationship. Methods: Systematic review and meta-analysis of primary research relating cognitive function to white matter lesion burden. Results: Fifty papers met eligibility criteria for review, and meta-analysis of overall results was possible in thirty-two (2050 participants). Aggregate correlation between cognition and T2 lesion burden was r = -0.30 (95% confidence interval: -0.34, -0.26). Wide methodological variability was seen, particularly related to key factors in the cognitive data capture and image analysis techniques. Conclusions: Resolving the persistent clinico-radiological paradox will likely require simultaneous evaluation of multiple components of the complex pathology using optimum measurement techniques for both cognitive and MRI feature quantification. We recommend a consensus initiative to support common standards for image analysis in MS, enabling benchmarking while also supporting ongoing innovation. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
71. An online paradigm for exploring the self-reference effect.
- Author
- Bentley, Sarah V., Greenaway, Katharine H., and Haslam, S. Alexander
- Subjects
- SELF-perception, SENSORY perception, AWARENESS, COGNITION, AUTOPOIESIS, SOCIAL stability
- Abstract
People reliably encode information more effectively when it is related in some way to the self—a phenomenon known as the self-reference effect. This effect has been recognized in psychological research for almost 40 years, and its scope as a tool for investigating the self-concept is still expanding. The self-reference effect has been used within a broad range of psychological research, from cultural to neuroscientific, cognitive to clinical. Traditionally, the self-reference effect has been investigated in a laboratory context, which limits its applicability in non-laboratory samples. This paper introduces an online version of the self-referential encoding paradigm that yields reliable effects in an easy-to-administer procedure. Across four studies (total N = 658), this new online tool reliably replicated the traditional self-reference effect: in all studies self-referentially encoded words were recalled significantly more than semantically encoded words (d = 0.63). Moreover, the effect sizes obtained with this online tool are similar to those obtained in laboratory samples, and are robust to experimental variations in encoding time (Studies 1 and 2) and recall procedure (Studies 3 and 4), and persist independent of primacy and recency effects (all studies). [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
72. Eliciting interval beliefs: An experimental study.
- Author
- Peeters, Ronald and Wolk, Leonard
- Subjects
- TIME series analysis, MATHEMATICAL analysis, STOCHASTIC processes, STANDARD deviations, MATHEMATICAL statistics, PROBABILITY theory
- Abstract
In this paper we study the interval scoring rule as a mechanism to elicit subjective beliefs under varying degrees of uncertainty. In our experiment, subjects forecast the termination time of a time series to be generated from a given but unknown stochastic process. Subjects gradually learn more about the underlying process over time and hence the true distribution over termination times. We conduct two treatments, one with a high and one with a low volatility process. We find that elicited intervals are better when subjects are facing a low volatility process. In this treatment, participants learn to position their intervals almost optimally over the course of the experiment. This is in contrast with the high volatility treatment, where subjects, over the course of the experiment, learn to optimize the location of their intervals but fail to provide the optimal length. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
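Entry 72's elicitation mechanism is an interval scoring rule. The sketch below implements the standard interval (Winkler) score for a central (1 − α) interval as one concrete instance; whether this exact payoff function was used in the experiment is an assumption, and the numbers are illustrative.

```python
def interval_score(lower, upper, outcome, alpha=0.2):
    """Standard interval score for a central (1 - alpha) prediction interval:
    interval width plus a penalty of 2/alpha per unit the outcome falls
    outside the interval. Lower scores are better."""
    score = upper - lower
    if outcome < lower:
        score += (2.0 / alpha) * (lower - outcome)
    elif outcome > upper:
        score += (2.0 / alpha) * (outcome - upper)
    return score

# Illustrative forecasts of a termination time that turns out to be 37.
print(interval_score(30, 40, 37))   # covers the outcome: score = width = 10
print(interval_score(30, 34, 37))   # misses by 3: 4 + 10 * 3 = 34
```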
73. Density-based clustering: A ‘landscape view’ of multi-channel neural data for inference and dynamic complexity analysis.
- Author
- Baglietto, Gabriel, Gigante, Guido, and Del Giudice, Paolo
- Subjects
- NEURAL circuitry, SYNAPSES, ELECTROPHYSIOLOGY, ALGORITHMS, MATHEMATICAL models
- Abstract
Two, partially interwoven, hot topics in the analysis and statistical modeling of neural data are the development of efficient and informative representations of the time series derived from multiple neural recordings, and the extraction of information about the connectivity structure of the underlying neural network from the recorded neural activities. In the present paper we show that state-space clustering can provide an easy and effective option for reducing the dimensionality of multiple neural time series, that it can improve inference of synaptic couplings from neural activities, and that it can also allow the construction of a compact representation of the multi-dimensional dynamics, that easily lends itself to complexity measures. We apply a variant of the ‘mean-shift’ algorithm to perform state-space clustering, and validate it on a Hopfield network in the glassy phase, in which metastable states are largely uncorrelated with memories embedded in the synaptic matrix. In this context, we show that the neural states identified as clusters’ centroids offer a parsimonious parametrization of the synaptic matrix, which allows a significant improvement in inferring the synaptic couplings from the neural activities. Moving to the more realistic case of a multi-modular spiking network, with spike-frequency adaptation inducing history-dependent effects, we propose a procedure inspired by Boltzmann learning, but extending its domain of application, to learn inter-module synaptic couplings so that the spiking network reproduces a prescribed pattern of spatial correlations; we then illustrate, in the spiking network, how clustering is effective in extracting relevant features of the network’s state-space landscape. Finally, we show that the knowledge of the cluster structure allows casting the multi-dimensional neural dynamics in the form of a symbolic dynamics of transitions between clusters; as an illustration of the potential of such reduction, we define and analyze a measure of complexity of the neural time series. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
74. Developing a benchmark for emotional analysis of music.
- Author
- Aljanaki, Anna, Yang, Yi-Hsuan, and Soleymani, Mohammad
- Subjects
- MUSIC psychology, EMOTION recognition, ALGORITHMS, ARTIFICIAL neural networks, DATA scrubbing
- Abstract
The music emotion recognition (MER) field has rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2Hz time resolution). Using DEAM, we organized the ‘Emotion in Music’ task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature-sets work best for dynamic MER. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
75. Unrealistic comparative optimism: An unsuccessful search for evidence of a genuinely motivational bias.
- Author
- Harris, Adam J. L., de Molière, Laura, Soh, Melinda, and Hahn, Ulrike
- Subjects
- OPTIMISM, MOTIVATION (Psychology), JUDGMENT (Psychology), COMPARATIVE studies, DATA analysis
- Abstract
One of the most accepted findings across psychology is that people are unrealistically optimistic in their judgments of comparative risk concerning future life events—they judge negative events as less likely to happen to themselves than to the average person. Harris and Hahn (2011), however, demonstrated how unbiased (non-optimistic) responses can result in data patterns commonly interpreted as indicative of optimism due to statistical artifacts. In the current paper, we report the results of 5 studies that control for these statistical confounds and observe no evidence for residual unrealistic optimism, even observing a ‘severity effect’ whereby severe outcomes were overestimated relative to neutral ones (Studies 3 & 4). We conclude that there is no evidence supporting an optimism interpretation of previous results using the prevalent comparison method. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
76. A simulation study of the strength of evidence in the recommendation of medications based on two trials with statistically significant results.
- Author
- van Ravenzwaaij, Don and Ioannidis, John P. A.
- Subjects
- STATISTICAL hypothesis testing, BAYES' estimation, STANDARD deviations, PARTICIPANT observation, DRUG development
- Abstract
A typical rule that has been used for the endorsement of new medications by the Food and Drug Administration is to have two trials, each convincing on its own, demonstrating effectiveness. “Convincing” may be subjectively interpreted, but the use of p-values and the focus on statistical significance (in particular with p < .05 being coined significant) is pervasive in clinical research. Therefore, in this paper, we calculate with simulations what it means to have exactly two trials, each with p < .05, in terms of the actual strength of evidence quantified by Bayes factors. Our results show that different cases where two trials have a p-value below .05 have wildly differing Bayes factors. Bayes factors of at least 20 in favor of the alternative hypothesis are not necessarily achieved and they fail to be reached in a large proportion of cases, in particular when the true effect size is small (0.2 standard deviations) or zero. In a non-trivial number of cases, evidence actually points to the null hypothesis, in particular when the true effect size is zero, when the number of trials is large, and when the number of participants in both groups is low. We recommend use of Bayes factors as a routine tool to assess endorsement of new medications, because Bayes factors consistently quantify strength of evidence. Use of p-values may lead to paradoxical and spurious decision-making regarding the use of new medications. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
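The simulation logic of entry 76 (generate trials, keep those with p < .05, and ask how strong the Bayesian evidence really is) can be sketched as follows. The BIC approximation to the Bayes factor is a stand-in for the paper's actual Bayes factor computation; the effect size of 0.2 standard deviations and the BF10 ≥ 20 threshold come from the abstract, while the sample sizes are assumed.

```python
import numpy as np
from scipy import stats

def bf10_bic(group_a, group_b):
    """Approximate Bayes factor (alternative over null) for a two-group mean
    comparison, using the BIC approximation BF10 ~= exp((BIC0 - BIC1) / 2)."""
    y = np.concatenate([group_a, group_b])
    n = len(y)
    ss0 = np.sum((y - y.mean()) ** 2)                     # null model: one common mean
    ss1 = (np.sum((group_a - group_a.mean()) ** 2) +
           np.sum((group_b - group_b.mean()) ** 2))       # alternative: two means
    bic0 = n * np.log(ss0 / n) + 1 * np.log(n)
    bic1 = n * np.log(ss1 / n) + 2 * np.log(n)
    return float(np.exp((bic0 - bic1) / 2.0))

rng = np.random.default_rng(7)
bfs = []
for _ in range(5000):                          # trials with a small true effect (d = 0.2)
    a = rng.normal(0.0, 1.0, 50)
    b = rng.normal(0.2, 1.0, 50)
    if stats.ttest_ind(a, b).pvalue < 0.05:    # keep only "significant" trials
        bfs.append(bf10_bic(a, b))
print(np.mean(np.array(bfs) >= 20))            # share of significant trials reaching BF10 >= 20
```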
77. Network switching strategy for energy conservation in heterogeneous networks.
- Author
- Song, Yujae, Choi, Wooyeol, and Baek, Seungjae
- Subjects
- ENERGY conservation, SWITCHING systems (Telecommunication), ROAMING (Telecommunication), MARKOV processes, ENERGY consumption
- Abstract
In heterogeneous networks (HetNets), the large-scale deployment of small base stations (BSs) together with traditional macro BSs is an economical and efficient solution that is employed to address the exponential growth in mobile data traffic. In dense HetNets, network switching, i.e., handovers, plays a critical role in connecting a mobile terminal (MT) to the best of all accessible networks. In the existing literature, a handover decision is made using various handover metrics such as the signal-to-noise ratio, data rate, and movement speed. However, there are few studies on handovers that focus on energy efficiency in HetNets. In this paper, we propose a handover strategy that helps to minimize energy consumption at BSs in HetNets without compromising the quality of service (QoS) of each MT. The proposed handover strategy aims to capture the effect of the stochastic behavior of handover parameters and the expected energy consumption due to handover execution when making a handover decision. To identify the validity of the proposed handover strategy, we formulate a handover problem as a constrained Markov decision process (CMDP), by which the effects of the stochastic behaviors of handover parameters and consequential handover energy consumption can be accurately reflected when making a handover decision. In the CMDP, the aim is to minimize the energy consumption to service an MT over the lifetime of its connection, and the constraint is to guarantee the QoS requirements of the MT given in terms of the transmission delay and call-dropping probability. We find an optimal policy for the CMDP using a combination of the Lagrangian method and value iteration. Simulation results verify the validity of the proposed handover strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
78. Probability matching in perceptrons: Effects of conditional dependence and linear nonseparability.
- Author
- Dawson, Michael R. W. and Gupta, Maya
- Subjects
- PERCEPTRONS, PROBABILITY theory, COGNITIVE psychology, STIMULUS & response (Psychology), QUANTITATIVE research
- Abstract
Probability matching occurs when the behavior of an agent matches the likelihood of occurrence of events in the agent’s environment. For instance, when artificial neural networks match probability, the activity in their output unit equals the past probability of reward in the presence of a stimulus. Our previous research demonstrated that simple artificial neural networks (perceptrons, which consist of a set of input units directly connected to a single output unit) learn to match probability when presented different cues in isolation. The current paper extends this research by showing that perceptrons can match probabilities when presented simultaneous cues, with each cue signaling different reward likelihoods. In our first simulation, we presented up to four different cues simultaneously; the likelihood of reward signaled by the presence of one cue was independent of the likelihood of reward signaled by other cues. Perceptrons learned to match reward probabilities by treating each cue as an independent source of information about the likelihood of reward. In a second simulation, we violated the independence between cues by making some reward probabilities depend upon cue interactions. We did so by basing reward probabilities on a logical combination (AND or XOR) of two of the four possible cues. We also varied the size of the reward associated with the logical combination. We discovered that this latter manipulation was a much better predictor of perceptron performance than was the logical structure of the interaction between cues. This indicates that when perceptrons learn to match probabilities, they do so by assuming that each signal of a reward is independent of any other; the best predictor of perceptron performance is a quantitative measure of the independence of these input signals, and not the logical structure of the problem being learned. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
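Entry 78's core phenomenon, probability matching in a perceptron, can be reproduced in a few lines: a single sigmoid output unit trained with the delta rule on cues presented in isolation converges so that its activity approximates each cue's reward probability. The reward probabilities, learning rate, and training regime below are illustrative assumptions, not the authors' simulations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cues = 4
p_reward = np.array([0.2, 0.5, 0.8, 0.9])   # illustrative reward probabilities
w, b, lr = np.zeros(n_cues), 0.0, 0.05      # one weight per cue plus a bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(30000):
    cue = rng.integers(n_cues)               # one cue presented in isolation
    x = np.eye(n_cues)[cue]
    reward = float(rng.random() < p_reward[cue])
    y = sigmoid(w @ x + b)                   # activity of the single output unit
    w += lr * (reward - y) * x               # delta rule on the output unit
    b += lr * (reward - y)

print(np.round(sigmoid(w + b), 2))           # approximately [0.2, 0.5, 0.8, 0.9]
```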
79. SoilGrids250m: Global gridded soil information based on machine learning.
- Author
- Hengl, Tomislav, Mendes de Jesus, Jorge, Heuvelink, Gerard B. M., Ruiperez Gonzalez, Maria, Kilibarda, Milan, Blagotić, Aleksandar, Shangguan, Wei, Wright, Marvin N., Geng, Xiaoyuan, Bauer-Marschallinger, Bernhard, Guevara, Mario Antonio, Vargas, Rodrigo, MacMillan, Robert A., Batjes, Niels H., Leenaars, Johan G. B., Ribeiro, Eloi, Wheeler, Ichsani, Mantel, Stephan, and Kempen, Bas
- Subjects
- SOIL texture, SOIL testing, MODIS (Spectroradiometer), MACHINE learning, LANDFORMS, REMOTE sensing
- Abstract
This paper describes the technical development and accuracy assessment of the most recent and improved version of the SoilGrids system at 250m resolution (June 2016 update). SoilGrids provides global predictions for standard numeric soil properties (organic carbon, bulk density, Cation Exchange Capacity (CEC), pH, soil texture fractions and coarse fragments) at seven standard depths (0, 5, 15, 30, 60, 100 and 200 cm), in addition to predictions of depth to bedrock and distribution of soil classes based on the World Reference Base (WRB) and USDA classification systems (ca. 280 raster layers in total). Predictions were based on ca. 150,000 soil profiles used for training and a stack of 158 remote sensing-based soil covariates (primarily derived from MODIS land products, SRTM DEM derivatives, climatic images and global landform and lithology maps), which were used to fit an ensemble of machine learning methods—random forest and gradient boosting and/or multinomial logistic regression—as implemented in the packages , , and . The results of 10–fold cross-validation show that the ensemble models explain between 56% (coarse fragments) and 83% (pH) of variation with an overall average of 61%. Improvements in the relative accuracy considering the amount of variation explained, in comparison to the previous version of SoilGrids at 1 km spatial resolution, range from 60 to 230%. Improvements can be attributed to: (1) the use of machine learning instead of linear regression, (2) to considerable investments in preparing finer resolution covariate layers and (3) to insertion of additional soil profiles. Further development of SoilGrids could include refinement of methods to incorporate input uncertainties and derivation of posterior probability distributions (per pixel), and further automation of spatial modeling so that soil maps can be generated for potentially hundreds of soil variables. Another area of future research is the development of methods for multiscale merging of SoilGrids predictions with local and/or national gridded soil products (e.g. up to 50 m spatial resolution) so that increasingly more accurate, complete and consistent global soil information can be produced. SoilGrids are available under the Open Data Base License. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
80. Theta-burst transcranial magnetic stimulation to the prefrontal or parietal cortex does not impair metacognitive visual awareness.
- Author
- Bor, Daniel, Schwartzman, David J., Barrett, Adam B., and Seth, Anil K.
- Subjects
- TRANSCRANIAL magnetic stimulation, BRAIN imaging, CONSCIOUSNESS, METACOGNITION, PARIETAL lobe, SIGNAL detection
- Abstract
Neuroimaging studies commonly associate dorsolateral prefrontal cortex (DLPFC) and posterior parietal cortex with conscious perception. However, such studies only investigate correlation, rather than causation. In addition, many studies conflate objective performance with subjective awareness. In an influential recent paper, Rounis and colleagues addressed these issues by showing that continuous theta burst transcranial magnetic stimulation (cTBS) applied to the DLPFC impaired metacognitive (subjective) awareness for a perceptual task, while objective performance was kept constant. We attempted to replicate this finding, with minor modifications, including an active cTBS control site. Using a between-subjects design for both DLPFC and posterior parietal cortices, we found no evidence of a cTBS-induced metacognitive impairment. In a second experiment, we devised a highly rigorous within-subjects cTBS design for DLPFC, but again failed to find any evidence of metacognitive impairment. One crucial difference between our results and the Rounis study is our strict exclusion of data deemed unsuitable for a signal detection theory analysis. Indeed, when we included this unstable data, a significant, though invalid, metacognitive impairment was found. These results cast doubt on previous findings relating metacognitive awareness to DLPFC, and inform the current debate concerning whether or not prefrontal regions are preferentially implicated in conscious perception. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
81. Flow Cytometric Single-Cell Identification of Populations in Synthetic Bacterial Communities.
- Author
- Rubbens, Peter, Props, Ruben, Boon, Nico, and Waegeman, Willem
- Subjects
- BACTERIAL communities, BACTERIAL population, FLOW cytometry, MACHINE learning, DATA analysis
- Abstract
Bacterial cells can be characterized in terms of their cell properties using flow cytometry. Flow cytometry is able to deliver multiparametric measurements of up to 50,000 cells per second. However, there has not yet been a thorough survey concerning the identification of the population to which bacterial single cells belong based on flow cytometry data. This paper not only aims to assess the quality of flow cytometry data when measuring bacterial populations, but also suggests an alternative approach for analyzing synthetic microbial communities. We created so-called in silico communities, which allow us to explore the possibilities of bacterial flow cytometry data using supervised machine learning techniques. We can identify single cells with an accuracy >90% for more than half of the communities consisting of two bacterial populations. In order to assess to what extent an in silico community is representative of its synthetic counterpart, we created so-called abundance gradients, a combination of synthetic (i.e., in vitro) communities containing two bacterial populations in varying abundances. By showing that we are able to retrieve an abundance gradient using a combination of in silico communities and supervised machine learning techniques, we argue that in silico communities form a viable representation for synthetic bacterial communities, opening up new opportunities for the analysis of synthetic communities and bacterial flow cytometry data in general. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
82. Affective Norms for Italian Words in Older Adults: Age Differences in Ratings of Valence, Arousal and Dominance.
- Author
-
Fairfield, Beth, Ambrosini, Ettore, Mammarella, Nicola, and Montefinese, Maria
- Subjects
HEALTH of older people ,AGE differences ,AGING ,SOCIAL dominance ,SELF-evaluation - Abstract
In line with the dimensional theory of emotional space, we developed affective norms for words rated in terms of valence, arousal and dominance in a group of older adults to complete the adaptation of the Affective Norms for English Words (ANEW) for Italian and to aid research on aging. Here, as in the original Italian ANEW database, participants evaluated valence, arousal, and dominance by means of the Self-Assessment Manikin (SAM) in a paper-and-pencil procedure. We observed high split-half reliabilities within the older sample and high correlations with the affective ratings of previous research, especially for valence, suggesting that there is large agreement among older adults within and across languages. More importantly, we found high correlations between younger and older adults, showing that our data are generalizable across different ages. However, despite this agreement across ages, we obtained age-related differences on the three affective dimensions for a great number of words. In particular, a number of words that younger adults had rated as moderately unpleasant and arousing in our previous affective norms were rated by older adults as more unpleasant and more arousing. Moreover, older participants rated negative stimuli as more arousing and positive stimuli as less arousing than younger participants did, thus leading to a less-curved distribution of ratings in the valence-by-arousal space. We also found more extreme ratings for older adults in the relationship between dominance and arousal: older adults gave lower dominance and higher arousal ratings for words that younger adults had rated with middle dominance and arousal values. Together, these results suggest that our affective norms are reliable and can be confidently used to select words matched for the affective dimensions of valence, arousal and dominance across younger and older participants for future research in aging. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
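A hedged sketch of one of the reliability checks mentioned above (split-half reliability with the Spearman-Brown correction); the simulated ratings and the rater split are assumptions, not the published procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_words, n_raters = 200, 40
true_valence = rng.uniform(1, 9, n_words)
# Each rater's score = true value + idiosyncratic noise (illustrative data).
ratings = true_valence[:, None] + rng.normal(0, 1.5, (n_words, n_raters))

half1 = ratings[:, ::2].mean(axis=1)   # per-word mean, even-numbered raters
half2 = ratings[:, 1::2].mean(axis=1)  # per-word mean, odd-numbered raters
r_half = np.corrcoef(half1, half2)[0, 1]
r_full = 2 * r_half / (1 + r_half)     # Spearman-Brown corrected reliability
print(f"split-half r = {r_half:.3f}, corrected reliability = {r_full:.3f}")
```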
83. A Systematic Review and Meta-Analysis of a Measure of Staff/Child Interaction Quality (the Classroom Assessment Scoring System) in Early Childhood Education and Care Settings and Child Outcomes.
- Author
-
Perlman, Michal, Falenchuk, Olesya, Fletcher, Brooke, McMullen, Evelyn, Beyene, Joseph, and Shah, Prakesh S.
- Subjects
CHILD care ,CHILD psychology ,SOCIAL interaction ,SYSTEMATIC reviews ,META-analysis - Abstract
The quality of staff/child interactions as measured by the Classroom Assessment Scoring System (CLASS) in Early Childhood Education and Care (ECEC) programs is thought to be important for children's outcomes. The CLASS comprises three domains that assess Emotional Support, Classroom Organization and Instructional Support. It is a relatively new measure that is being used increasingly for research, quality monitoring/accountability and other applied purposes. Our objective was to evaluate the association between the CLASS and child outcomes. Searches of Medline, PsycINFO, ERIC, websites of large datasets and reference sections of all retrieved articles were conducted up to July 3, 2015. Studies that measured the association between the CLASS and child outcomes for preschool-aged children who attended ECEC programs were included after screening by two independent reviewers. Searches and data extraction were conducted by two independent reviewers. Thirty-five studies were systematically reviewed, of which 19 provided data for meta-analyses. Most studies had moderate to high risk of bias. Of the 14 meta-analyses we conducted, the associations between Classroom Organization and Pencil Tapping and between Instructional Support and SSRS Social Skills were significant, with pooled correlations of .06 and .09, respectively. All associations were in the expected direction. In the systematic review, significant correlations were reported mainly from one large dataset. Substantial heterogeneity in the use of the CLASS, its dimensions, child outcomes and statistical measures was identified. Greater consistency in study methodology is urgently needed. Given the multitude of factors that affect child development, it is encouraging that our analyses revealed some, although small, associations between the CLASS and children's outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
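For readers unfamiliar with pooling correlations, a small sketch of inverse-variance (fixed-effect) pooling via Fisher's z follows; the per-study correlations and sample sizes are made up, not the review's data.

```python
import numpy as np

r = np.array([0.04, 0.10, 0.07, 0.12, 0.02])   # per-study correlations (toy)
n = np.array([1200, 350, 800, 150, 500])        # per-study sample sizes (toy)

z = np.arctanh(r)                 # Fisher z transform
var = 1.0 / (n - 3)               # within-study variance of z
w = 1.0 / var                     # inverse-variance weights
z_pooled = np.sum(w * z) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
lo, hi = np.tanh(z_pooled - 1.96 * se), np.tanh(z_pooled + 1.96 * se)
print(f"pooled r = {np.tanh(z_pooled):.3f}  (95% CI {lo:.3f} to {hi:.3f})")
```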
84. A Data Fusion Approach to Enhance Association Study in Epilepsy.
- Author
-
Marini, Simone, Limongelli, Ivan, Rizzo, Ettore, Malovini, Alberto, Errichiello, Edoardo, Vetro, Annalisa, Da, Tan, Zuffardi, Orsetta, and Bellazzi, Riccardo
- Subjects
EPILEPSY research ,DATA fusion (Statistics) ,GENETICS of epilepsy ,EPILEPSY ,GENOMICS ,NOSOLOGY ,PROGNOSIS - Abstract
Among the scientific challenges posed by complex diseases with a strong genetic component, two stand out. One is unveiling the role of rare and common genetic variants; the other is the design of classification models to improve clinical diagnosis and predictive models for prognosis and personalized therapies. In this paper, we present a data fusion framework merging gene, domain, pathway and protein-protein interaction data related to a next generation sequencing epilepsy gene panel. Our method integrates association information from multiple genomic sources and aims to highlight the set of common and rare variants that are capable of triggering the occurrence of a complex disease. When compared to other approaches, our method shows better performance in classifying patients affected by epilepsy. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
85. DNABP: Identification of DNA-Binding Proteins Based on Feature Selection Using a Random Forest and Predicting Binding Residues.
- Author
-
Ma, Xin, Guo, Jing, and Sun, Xiao
- Subjects
DNA-binding proteins ,NUCLEOTIDE sequence ,FEATURE selection ,RANDOM forest algorithms ,BIOINFORMATICS - Abstract
DNA-binding proteins are fundamentally important in cellular processes. Several computational methods have been developed in recent years to improve the prediction of DNA-binding proteins. However, insufficient work has been done on the prediction of DNA-binding proteins from protein sequence information. In this paper, a novel predictor, DNABP (DNA-binding proteins), was designed to predict DNA-binding proteins using the random forest (RF) classifier with a hybrid feature. The hybrid feature contains two types of novel sequence features, which reflect information about the conservation of physicochemical properties of the amino acids, and about the binding propensity of DNA-binding residues and the non-binding propensity of non-binding residues. The comparisons with each feature demonstrated that these two novel features contributed most to the improvement in predictive ability. Furthermore, to improve the prediction performance of the DNABP model, feature selection using the minimum redundancy maximum relevance (mRMR) method combined with incremental feature selection (IFS) was carried out during model construction. The results showed that the DNABP model could achieve 86.90% accuracy, 83.76% sensitivity, 90.03% specificity and a Matthews correlation coefficient of 0.727. High prediction accuracy and performance comparisons with previous research suggested that DNABP could be a useful approach to identify DNA-binding proteins from sequence information. The DNABP web server system is freely available at . [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
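A simplified sketch of the feature-selection-plus-random-forest idea: features are ranked (here by mutual information, a stand-in for mRMR) and incremental feature selection picks the best subset by cross-validation. Synthetic data; this is not the DNABP code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=40, n_informative=8,
                           random_state=0)
# Rank features by relevance (mutual information as a simple proxy for mRMR).
order = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

best_k, best_score = 1, 0.0
for k in range(1, len(order) + 1, 2):          # IFS: grow the feature subset
    score = cross_val_score(RandomForestClassifier(n_estimators=100,
                                                   random_state=0),
                            X[:, order[:k]], y, cv=5).mean()
    if score > best_score:
        best_k, best_score = k, score
print(f"best subset size: {best_k}, CV accuracy: {best_score:.3f}")
```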
86. Robust Satisficing Decision Making for Unmanned Aerial Vehicle Complex Missions under Severe Uncertainty.
- Author
-
Ji, Xiaoting, Niu, Yifeng, and Shen, Lincheng
- Subjects
DRONE aircraft ,MATHEMATICAL optimization ,DECISION making ,DECISION theory ,MARKOV processes ,ROBUST control - Abstract
This paper presents a robust satisficing decision-making method for Unmanned Aerial Vehicles (UAVs) executing complex missions in an uncertain environment. Motivated by info-gap decision theory, we formulate this problem as a novel robust satisficing optimization problem whose objective is to maximize robustness while satisfying some desired mission requirements. Specifically, a new info-gap based Markov Decision Process (IMDP) is constructed to abstract the uncertain UAV system and specify the complex mission requirements with Linear Temporal Logic (LTL). A robust satisficing policy is obtained to maximize the robustness to the uncertain IMDP while ensuring a desired probability of satisfying the LTL specifications. To this end, we propose a two-stage robust satisficing solution strategy consisting of the construction of a product IMDP and the generation of a robust satisficing policy. In the first stage, a product IMDP is constructed by combining the IMDP with an automaton representing the LTL specifications. In the second stage, an algorithm based on robust dynamic programming is proposed to generate a robust satisficing policy, and an associated robustness evaluation algorithm is presented to evaluate the robustness. Finally, through Monte Carlo simulation, the effectiveness of our algorithms is demonstrated on a UAV search mission under severe uncertainty, where the resulting policy maximizes robustness while reaching the desired performance level. Furthermore, a comparison with other robust decision-making methods shows that our policy can tolerate higher uncertainty while still guaranteeing the desired performance level, indicating that the proposed method is more effective in real applications. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
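The info-gap "robust satisficing" criterion can be shown on a toy decision problem: the robustness of a decision is the largest uncertainty horizon for which its worst-case performance still meets a critical requirement. The rewards, nominal probabilities and uncertainty model below are invented for illustration; the paper's IMDP/LTL machinery is not reproduced.

```python
import numpy as np

reward = {"direct_route": 10.0, "safe_route": 6.0}      # mission payoff (toy)
p_nominal = {"direct_route": 0.7, "safe_route": 0.95}   # nominal success prob.
r_crit = 4.0                                             # required performance

def worst_case_perf(decision, alpha):
    """Worst performance if the success probability may drop by up to alpha."""
    p_worst = max(0.0, p_nominal[decision] - alpha)
    return p_worst * reward[decision]

def robustness(decision, alphas=np.linspace(0, 1, 1001)):
    """Largest uncertainty horizon that still satisfies the requirement."""
    ok = [a for a in alphas if worst_case_perf(decision, a) >= r_crit]
    return max(ok) if ok else 0.0

for d in reward:
    print(d, "robustness =", round(robustness(d), 3))
# The satisficing choice maximizes robustness, not nominal expected reward.
```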
87. Comparing Machine Learning Classifiers and Linear/Logistic Regression to Explore the Relationship between Hand Dimensions and Demographic Characteristics.
- Author
-
Miguel-Hurtado, Oscar, Guest, Richard, Stevenage, Sarah V., Neil, Greg J., and Black, Sue
- Subjects
HAND anatomy ,MACHINE learning ,BIOMETRIC identification ,LOGISTIC regression analysis ,FORENSIC sciences ,CLASSIFICATION algorithms - Abstract
Understanding the relationship between physiological measurements from human subjects and their demographic data is important within both the biometric and forensic domains. In this paper we explore the relationship between measurements of the human hand and a range of demographic features. We assess the ability of linear regression and machine learning classifiers to predict demographics from hand features, thereby providing evidence on both the strength of the relationship and the key features underpinning it. Our results show that we are able to predict sex, height, weight and foot size accurately within various data-range bin sizes, with machine learning classification algorithms outperforming linear regression in most situations. In addition, we identify the features underpinning these relationships, which are applicable across multiple application domains. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
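A small, hedged analogue of the modelling contrast in this study: logistic regression versus a machine-learning classifier predicting sex from hand measurements. The feature names and synthetic effect sizes are assumptions, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 500
sex = rng.integers(0, 2, n)                               # 0 = female, 1 = male
hand_length = 17.0 + 1.8 * sex + rng.normal(0, 0.9, n)    # cm (illustrative)
hand_width  =  7.5 + 0.9 * sex + rng.normal(0, 0.5, n)    # cm (illustrative)
digit_ratio =  0.97 - 0.015 * sex + rng.normal(0, 0.02, n)
X = np.column_stack([hand_length, hand_width, digit_ratio])

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    acc = cross_val_score(model, X, sex, cv=5).mean()
    print(f"{name}: CV accuracy = {acc:.3f}")
```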
88. Remaining Useful Life Prediction for Lithium-Ion Batteries Based on Gaussian Processes Mixture.
- Author
-
Li, Lingling, Wang, Pengchong, Chao, Kuei-Hsiang, Zhou, Yatong, and Xie, Yang
- Subjects
LITHIUM-ion batteries ,GAUSSIAN processes ,SUPPORT vector machines ,ARTIFICIAL intelligence ,COGNITIVE science ,ARTIFICIAL neural networks ,COMPUTATIONAL biology ,COMPUTATIONAL neuroscience - Abstract
Remaining useful life (RUL) prediction of lithium-ion batteries is closely related to the capacity degradation trajectory. Due to self-charging and capacity regeneration, the trajectories are multimodal. Traditional prediction models such as support vector machines (SVM) or Gaussian process regression (GPR) cannot accurately characterize this multimodality. This paper proposes a novel RUL prediction method based on the Gaussian Process Mixture (GPM). It handles multimodality by fitting different segments of the trajectory with separate GPR models, so that the subtle differences among these segments can be captured. The method is shown to be effective in experiments on two commercial rechargeable Type 1850 lithium-ion batteries provided by NASA. The performance comparison among the models shows that the GPM is more accurate than the SVM and the GPR. In addition, the GPM yields a predictive confidence interval, which makes the prediction more reliable than that of traditional models. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
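A much-simplified sketch of the prediction step: a single Gaussian process regression on capacity fade (the paper fits a mixture of GPs to handle regeneration), with RUL read off as the first predicted crossing of an end-of-life threshold. The synthetic degradation curve and kernel choice are assumptions, not the NASA data or the GPM model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel

rng = np.random.default_rng(3)
cycles = np.arange(0, 120, dtype=float)[:, None]
capacity = 2.0 - 0.0025 * cycles.ravel() + rng.normal(0, 0.005, 120)  # Ah (toy)

# Linear + noise kernel so the GP can extrapolate the fading trend.
gpr = GaussianProcessRegressor(kernel=DotProduct() + WhiteKernel(1e-4),
                               normalize_y=True).fit(cycles, capacity)

future = np.arange(120, 400, dtype=float)[:, None]
pred, std = gpr.predict(future, return_std=True)
eol = 0.8 * 2.0                                   # end of life at 80% capacity
below = np.where(pred < eol)[0]
if below.size:
    rul = future[below[0], 0] - cycles[-1, 0]
    print(f"predicted RUL ~ {rul:.0f} cycles, "
          f"predictive std at EOL = {std[below[0]]:.4f} Ah")
```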
89. The Statistical Determinants of the Speed of Motor Learning.
- Author
-
He, Kang, Liang, You, Abdollahi, Farnaz, Fisher Bittmann, Moria, Kording, Konrad, and Wei, Kunlin
- Subjects
MOTOR learning ,MOTOR ability ,VISUOMOTOR coordination ,PSYCHOLOGY of movement ,COMPUTER simulation - Abstract
It has recently been suggested that movement variability directly increases the speed of motor learning. Here we use computational modeling of motor adaptation to show that variability can have a broad range of effects on learning, both negative and positive. Experimentally, we also find both accelerating and decelerating effects. Lastly, through a meta-analysis of published papers, we verify that, across a wide range of experiments, movement variability has no statistical relation with learning rate. While motor learning is a complex process that can be modeled, further research is needed to understand the relative importance of the factors involved. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
90. Multivariate Longitudinal Analysis with Bivariate Correlation Test.
- Author
-
Adjakossa, Eric Houngla, Sadissou, Ibrahim, Hounkonnou, Mahouton Norbert, and Nuel, Gregory
- Subjects
MULTIVARIATE analysis ,STATISTICAL correlation ,BIVARIATE analysis ,LIKELIHOOD ratio tests ,COMPUTER simulation ,LONGITUDINAL method - Abstract
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimension-specific residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions for the model's parameter estimators. These estimators can be used in the framework of multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. Using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are carried out to assess both the parameter recovery performance of the EM estimators and the power of the test. The usefulness of the test is illustrated using two empirical data sets, one of longitudinal multivariate type and one of multivariate multilevel type. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
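The likelihood-ratio test itself is simple once both models have been fitted by maximum likelihood; below is a generic sketch with placeholder log-likelihood values (fitting the joint mixed model is not shown).

```python
from scipy.stats import chi2

loglik_uncorrelated = -1523.7   # reduced model: random-effect correlation fixed at 0 (placeholder)
loglik_correlated   = -1518.2   # full model: free random-effect correlation (placeholder)
df = 1                          # one extra correlation parameter

lr_stat = 2 * (loglik_correlated - loglik_uncorrelated)
p_value = chi2.sf(lr_stat, df)
print(f"LR = {lr_stat:.2f}, p = {p_value:.4f}")
# A small p-value would favour modelling the two outcomes jointly.
```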
91. Assumptions of Mixed Treatment Comparisons in Health Technology Assessments - Challenges and Possible Steps for Practical Application.
- Author
-
Reken, Stefanie, Sturtz, Sibylle, Kiefer, Corinna, Böhler, Yvonne-Beatrice, and Wieseler, Beate
- Subjects
ANTIDEPRESSANTS ,MEDICAL technology ,CLINICAL trials ,MEDICAL economics ,COGNITIVE science - Abstract
The validity of mixed treatment comparisons (MTCs), also called network meta-analysis, relies on whether it is reasonable to accept the underlying assumptions of similarity, homogeneity, and consistency. The aim of this paper is to propose a practicable approach to addressing the underlying assumptions of MTCs. Using data from clinical studies of antidepressants included in a health technology assessment (HTA), we present a stepwise approach to dealing with challenges related to checking the above assumptions and to judging the robustness of the results of an MTC. At each step, studies that were dissimilar or contributed to substantial heterogeneity or inconsistency were excluded from the primary analysis. In a comparison of the MTC estimates from the consistent network with the MTC estimates from the homogeneous network including inconsistencies, few were affected by notable changes; that is, a change in effect size (by a factor of 2), direction of effect, or statistical significance. Considering the small proportion of studies excluded from the network due to inconsistency, as well as the small number of notable changes, the MTC results were deemed sufficiently robust. In the absence of standard methods, our approach to checking assumptions in MTCs may inform other researchers in need of practical options, particularly in HTA. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
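One common way to probe the consistency assumption is to compare direct and indirect evidence on a contrast (node splitting); here is a hedged sketch with invented effect sizes and standard errors.

```python
import numpy as np
from scipy.stats import norm

# Illustrative log odds ratios and standard errors for three contrasts.
d_AB_direct, se_AB = -0.30, 0.10     # A vs B trials
d_AC, se_AC        = -0.45, 0.12     # A vs C trials
d_BC, se_BC        = -0.20, 0.09     # B vs C trials

d_AB_indirect = d_AC - d_BC                      # indirect estimate via C
se_indirect = np.sqrt(se_AC**2 + se_BC**2)

diff = d_AB_direct - d_AB_indirect               # inconsistency estimate
se_diff = np.sqrt(se_AB**2 + se_indirect**2)
z = diff / se_diff
print(f"inconsistency = {diff:.3f} (z = {z:.2f}, p = {2 * norm.sf(abs(z)):.3f})")
```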
92. A Graph is Worth a Thousand Words: How Overconfidence and Graphical Disclosure of Numerical Information Influence Financial Analysts Accuracy on Decision Making.
- Author
-
Cardoso, Ricardo Lopes, Leite, Rodrigo Oliveira, and de Aquino, André Carlos Busanelli
- Subjects
CORPORATE finance ,ECONOMIC decision making ,GRAPH theory ,INFORMATION theory in economics ,SAMPLE size (Statistics) - Abstract
Previous research supports the view that graphs are relevant decision aids for tasks involving the interpretation of numerical information. Moreover, the literature shows that different types of graphical information can help or harm the accuracy of decision making by accountants and financial analysts. We conducted a 4×2 mixed-design experiment to examine the effects of numerical information disclosure on financial analysts' accuracy, and investigated the role of overconfidence in decision making. Results show that, compared to text, column graphs enhanced decision-making accuracy most, followed by line graphs. No difference was found between tabular and textual disclosure. Overconfidence harmed accuracy, and both genders behaved overconfidently. Additionally, the type of disclosure (text, table, line graph and column graph) did not affect the overconfidence of individuals, providing evidence that overconfidence is a personal trait. This study makes three contributions. First, it provides evidence from a large sample (295 financial analysts), rather than a smaller sample of students, that graphs are relevant decision aids for tasks involving the interpretation of numerical information. Second, it uses text as a baseline comparison to test how different forms of information disclosure (line and column graphs, and tables) can enhance the understandability of information. Third, it brings an internal factor into this process: overconfidence, a personal trait that harms the decision-making process of individuals. At the end of the paper, several research paths are highlighted for further study of the effect of internal factors (personal traits) on financial analysts' accuracy in decision making regarding numerical information presented in graphical form. In addition, we offer suggestions concerning some practical implications for professional accountants, auditors, financial analysts and standard setters. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
93. Supervised Filter Learning for Representation Based Face Recognition.
- Author
-
Bi, Chao, Zhang, Lei, Qi, Miao, Zheng, Caixia, Yi, Yugen, Wang, Jianzhong, and Zhang, Baoxue
- Subjects
FACE perception ,REGRESSION analysis ,HETEROGENEOUS catalysis ,VISUAL perception ,ALGORITHMS - Abstract
Representation-based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC), have been successfully developed for the face recognition problem. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performance may be affected by problematic factors (such as illumination and expression variations) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation-based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation-based classifiers. Furthermore, we also extend our algorithm to the heterogeneous face recognition problem. Extensive experiments are carried out on five databases, and the experimental results verify the efficacy of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
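The representation-residual idea behind LRC can be sketched compactly: represent a probe as a least-squares combination of each class's training samples and pick the class with the smallest residual. The learned filter and LBP features that are the paper's contribution are omitted; the data are random toys.

```python
import numpy as np

def lrc_predict(probe, class_dicts):
    """class_dicts: {label: matrix whose columns are that class's samples}."""
    best_label, best_res = None, np.inf
    for label, D in class_dicts.items():
        coef, *_ = np.linalg.lstsq(D, probe, rcond=None)   # least-squares fit
        residual = np.linalg.norm(probe - D @ coef)        # representation residual
        if residual < best_res:
            best_label, best_res = label, residual
    return best_label

rng = np.random.default_rng(4)
# Toy "images": 100-dimensional vectors, 8 training samples per class.
basis = {c: rng.normal(size=(100, 3)) for c in range(3)}
class_dicts = {c: basis[c] @ rng.normal(size=(3, 8)) + 0.05 * rng.normal(size=(100, 8))
               for c in range(3)}
probe = class_dicts[1][:, 0] + 0.05 * rng.normal(size=100)
print("predicted class:", lrc_predict(probe, class_dicts))   # expected: 1
```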
94. Machine Learning Meta-analysis of Large Metagenomic Datasets: Tools and Biological Insights.
- Author
-
Pasolli, Edoardo, Truong, Duy Tin, Malik, Faizan, Waldron, Levi, and Segata, Nicola
- Subjects
MACHINE learning ,METAGENOMICS ,HUMAN microbiota ,BIOMARKERS ,META-analysis ,METADATA ,PREDICTION models - Abstract
Shotgun metagenomic analysis of the human associated microbiome provides a rich set of microbial features for prediction and biomarker discovery in the context of human diseases and health conditions. However, the use of such high-resolution microbial features presents new challenges, and validated computational tools for learning tasks are lacking. Moreover, classification rules have scarcely been validated in independent studies, posing questions about the generality and generalization of disease-predictive models across cohorts. In this paper, we comprehensively assess approaches to metagenomics-based prediction tasks and for quantitative assessment of the strength of potential microbiome-phenotype associations. We develop a computational framework for prediction tasks using quantitative microbiome profiles, including species-level relative abundances and presence of strain-specific markers. A comprehensive meta-analysis, with particular emphasis on generalization across cohorts, was performed in a collection of 2424 publicly available metagenomic samples from eight large-scale studies. Cross-validation revealed good disease-prediction capabilities, which were in general improved by feature selection and use of strain-specific markers instead of species-level taxonomic abundance. In cross-study analysis, models transferred between studies were in some cases less accurate than models tested by within-study cross-validation. Interestingly, the addition of healthy (control) samples from other studies to training sets improved disease prediction capabilities. Some microbial species (most notably Streptococcus anginosus) seem to characterize general dysbiotic states of the microbiome rather than connections with a specific disease. Our results in modelling features of the “healthy” microbiome can be considered a first step toward defining general microbial dysbiosis. The software framework, microbiome profiles, and metadata for thousands of samples are publicly available at . [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
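A sketch of the cross-study (train on one cohort, test on another) evaluation described above, with synthetic relative-abundance profiles standing in for the metagenomic data; study names and effect structure are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)

def make_study(n, shift):
    """Synthetic relative-abundance profiles with a study-specific shift."""
    y = rng.integers(0, 2, n)                     # 0 = control, 1 = disease
    X = rng.dirichlet(np.ones(50) + shift, size=n)
    X[y == 1, :5] += 0.02                         # "disease-associated" taxa
    return X / X.sum(axis=1, keepdims=True), y

studies = {"study_A": make_study(300, 0.0), "study_B": make_study(250, 0.5)}

for train_name, test_name in [("study_A", "study_B"), ("study_B", "study_A")]:
    X_tr, y_tr = studies[train_name]
    X_te, y_te = studies[test_name]
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"train on {train_name}, test on {test_name}: AUC = {auc:.3f}")
```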
95. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.
- Author
-
Ayllón, David, Gil-Pita, Roberto, and Seoane, Fernando
- Subjects
IMPEDANCE spectroscopy ,MEASUREMENT errors ,SPECTROMETERS ,ROBUST control ,MACHINE learning ,ELECTRIC capacity - Abstract
Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talk, or, very likely, a combination of these. Accurate detection and identification are of great importance for further analysis because, in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex-spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and showed good generalization. Since both the features and the classification scheme are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is feasible. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
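A loose illustration of the approach: derive a few scalar features from a complex impedance spectrum (in the impedance and admittance planes) and train a classifier to separate clean from artifact-affected measurements. The Cole-like model, the stray-capacitance artifact and the feature set are stand-ins, not the paper's.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
freq = np.logspace(3, 6, 50)                      # 1 kHz to 1 MHz

def features(Z):
    """A few immittance-plane summaries of a complex impedance spectrum."""
    Y = 1.0 / Z                                   # admittance plane
    return np.array([Z.real.mean(), Z.imag.min(),
                     np.ptp(np.angle(Z)),         # phase spread
                     Y.real.std(), Y.imag.mean()])

def spectrum(artifact):
    R0, Rinf, tau = 600.0, 300.0, 1e-5            # crude Cole-like tissue model
    Z = Rinf + (R0 - Rinf) / (1 + 1j * 2 * np.pi * freq * tau)
    if artifact:                                  # parallel stray capacitance
        Zc = 1.0 / (1j * 2 * np.pi * freq * 200e-12)
        Z = Z * Zc / (Z + Zc)
    return Z + rng.normal(0, 2, 50) + 1j * rng.normal(0, 2, 50)

labels = [0] * 80 + [1] * 80
X = np.array([features(spectrum(a)) for a in labels])
y = np.array(labels)
clf = make_pipeline(StandardScaler(), SVC()).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```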
96. The Success of Linear Bootstrapping Models: Decision Domain-, Expertise-, and Criterion-Specific Meta-Analysis.
- Author
-
Kaufmann, Esther and Wittmann, Werner W.
- Subjects
STATISTICAL bootstrapping ,JUDGMENT (Psychology) ,DECISION making ,MATHEMATICAL models ,META-analysis - Abstract
The success of bootstrapping or replacing a human judge with a model (e.g., an equation) has been demonstrated in Paul Meehl’s (1954) seminal work and bolstered by the results of several meta-analyses. To date, however, analyses considering different types of meta-analyses as well as the potential dependence of bootstrapping success on the decision domain, the level of expertise of the human judge, and the criterion for what constitutes an accurate decision have been missing from the literature. In this study, we addressed these research gaps by conducting a meta-analysis of lens model studies. We compared the results of a traditional (bare-bones) meta-analysis with findings of a meta-analysis of the success of bootstrap models corrected for various methodological artifacts. In line with previous studies, we found that bootstrapping was more successful than human judgment. Furthermore, bootstrapping was more successful in studies with an objective decision criterion than in studies with subjective or test score criteria. We did not find clear evidence that the success of bootstrapping depended on the decision domain (e.g., education or medicine) or on the judge’s level of expertise (novice or expert). Correction of methodological artifacts increased the estimated success of bootstrapping, suggesting that previous analyses without artifact correction (i.e., traditional meta-analyses) may have underestimated the value of bootstrapping models. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
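Bootstrapping in Meehl's sense is easy to demonstrate on simulated data: fit a linear model to the judge's own ratings and compare judge and model against the criterion. The cue weights and noise levels below are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n, cues = 200, 4
X = rng.normal(size=(n, cues))
criterion = X @ np.array([0.5, 0.3, 0.2, 0.0]) + rng.normal(0, 0.6, n)
# The judge uses roughly the right weights but adds unreliable noise.
judge = X @ np.array([0.45, 0.35, 0.15, 0.05]) + rng.normal(0, 0.9, n)

model_of_judge = LinearRegression().fit(X, judge)     # the bootstrap model
paramorph = model_of_judge.predict(X)                 # noise-free "judge"

def validity(pred):
    return np.corrcoef(pred, criterion)[0, 1]

print("judge's achievement (r):       ", round(validity(judge), 3))
print("bootstrap model's achievement: ", round(validity(paramorph), 3))
```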
97. Challenges in Real-Time Prediction of Infectious Disease: A Case Study of Dengue in Thailand.
- Author
-
Reich, Nicholas G., Lauer, Stephen A., Sakrejda, Krzysztof, Iamsirithaworn, Sopon, Hinjoy, Soawapak, Suangtho, Paphanij, Suthachana, Suthanun, Clapham, Hannah E., Salje, Henrik, Cummings, Derek A. T., and Lessler, Justin
- Subjects
EPIDEMICS ,PUBLIC health ,ARBOVIRUS diseases ,MOSQUITO vectors - Abstract
Epidemics of communicable diseases place a huge burden on public health infrastructures across the world. Producing accurate and actionable forecasts of infectious disease incidence at short and long time scales will improve public health response to outbreaks. However, scientists and public health officials face many obstacles in trying to create such real-time forecasts of infectious disease incidence. Dengue is a mosquito-borne virus that annually infects over 400 million people worldwide. We developed a real-time forecasting model for dengue hemorrhagic fever in the 77 provinces of Thailand. We created a practical computational infrastructure that generated multi-step predictions of dengue incidence in Thai provinces every two weeks throughout 2014. These predictions show mixed performance across provinces, out-performing seasonal baseline models in over half of provinces at a 1.5 month horizon. Additionally, to assess the degree to which delays in case reporting make long-range prediction a challenging task, we compared the performance of our real-time predictions with predictions made with fully reported data. This paper provides valuable lessons for the implementation of real-time predictions in the context of public health decision making. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
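A toy sketch of the evaluation logic (comparing a forecast against a seasonal-median baseline on biweekly counts); the simulated series is not the Thai surveillance data, and the blended forecast is only a placeholder for the paper's model.

```python
import numpy as np

rng = np.random.default_rng(8)
biweeks = np.arange(26 * 8)                        # 8 years, 26 biweeks per year
season = 50 + 40 * np.sin(2 * np.pi * biweeks / 26)
cases = rng.poisson(np.clip(season + rng.normal(0, 10, biweeks.size), 1, None))

train = cases[:26 * 7]                             # first 7 years for the baseline
season_idx = biweeks % 26

# Baseline: median count observed in the same biweek of previous years.
baseline = np.array([np.median(train[season_idx[:26 * 7] == s]) for s in range(26)])

errs_fc, errs_base = [], []
for t in range(26 * 7, 26 * 8 - 3):                # 3-biweek-ahead predictions
    s = (t + 3) % 26
    fc = 0.5 * cases[t] + 0.5 * baseline[s]        # placeholder blended forecast
    errs_fc.append(abs(fc - cases[t + 3]))
    errs_base.append(abs(baseline[s] - cases[t + 3]))
print("forecast MAE:", np.mean(errs_fc), " baseline MAE:", np.mean(errs_base))
```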
98. Early Childhood Developmental Status in Low- and Middle-Income Countries: National, Regional, and Global Prevalence Estimates Using Predictive Modeling.
- Author
-
McCoy, Dana Charles, Peet, Evan D., Ezzati, Majid, Danaei, Goodarz, Black, Maureen M., Sudfeld, Christopher R., Fawzi, Wafaie, and Fink, Günther
- Subjects
CHILD development ,CHILD psychology ,COGNITIVE development ,DISEASE prevalence ,LOW-income countries ,MIDDLE-income countries ,REGRESSION analysis ,COGNITION ,ECONOMICS ,EMOTIONS ,MATHEMATICAL models ,RESEARCH funding ,SOCIAL skills ,THEORY ,DEVELOPING countries - Abstract
Background: The development of cognitive and socioemotional skills early in life influences later health and well-being. Existing estimates of unmet developmental potential in low- and middle-income countries (LMICs) are based on either measures of physical growth or proxy measures such as poverty. In this paper we aim to directly estimate the number of children in LMICs who would be reported by their caregivers to show low cognitive and/or socioemotional development. Methods and Findings: The present paper uses Early Childhood Development Index (ECDI) data collected between 2005 and 2015 from 99,222 3- and 4-y-old children living in 35 LMICs as part of the Multiple Indicator Cluster Survey (MICS) and Demographic and Health Surveys (DHS) programs. First, we estimate the prevalence of low cognitive and/or socioemotional ECDI scores within our MICS/DHS sample. Next, we test a series of ordinary least squares regression models predicting low ECDI scores across our MICS/DHS sample countries based on country-level data from the Human Development Index (HDI) and the Nutrition Impact Model Study. We use cross-validation to select the model with the best predictive validity. We then apply this model to all LMICs to generate country-level estimates of the prevalence of low ECDI scores globally, as well as confidence intervals around these estimates. In the pooled MICS and DHS sample, 14.6% of children had low ECDI scores in the cognitive domain, 26.2% had low socioemotional scores, and 36.8% performed poorly in either or both domains. Country-level prevalence of low cognitive and/or socioemotional scores on the ECDI was best represented by a model using the HDI as a predictor. Applying this model to all LMICs, we estimate that 80.8 million children ages 3 and 4 y (95% CI 48.1 million, 113.6 million) in LMICs experienced low cognitive and/or socioemotional development in 2010, with the largest number of affected children in sub-Saharan Africa (29.4 million; 43.8% of children ages 3 and 4 y), followed by South Asia (27.7 million; 37.7%) and the East Asia and Pacific region (15.1 million; 25.9%). Positive associations were found between low development scores and stunting, poverty, male sex, rural residence, and lack of cognitive stimulation. Additional research using more detailed developmental assessments across a larger number of LMICs is needed to address the limitations of the present study. Conclusions: The number of children globally failing to reach their developmental potential remains large. Additional research is needed to identify the specific causes of poor developmental outcomes in diverse settings, as well as potential context-specific interventions that might promote children's early cognitive and socioemotional well-being. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
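A schematic of the modelling step: regress country-level prevalence of low scores on a country-level predictor (here the HDI), assess the model by leave-one-out cross-validation, and apply it to countries without survey data. All numbers below are invented, not the MICS/DHS estimates.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(9)
hdi = rng.uniform(0.35, 0.75, 35)                         # surveyed countries (toy)
prevalence = np.clip(0.9 - 1.0 * hdi + rng.normal(0, 0.05, 35), 0, 1)

X = hdi[:, None]
model = LinearRegression()
mse = -cross_val_score(model, X, prevalence, cv=LeaveOneOut(),
                       scoring="neg_mean_squared_error").mean()
print(f"leave-one-out RMSE: {np.sqrt(mse):.3f}")

model.fit(X, prevalence)
unsurveyed_hdi = np.array([[0.41], [0.58], [0.69]])       # hypothetical LMICs
print("predicted prevalence:", model.predict(unsurveyed_hdi).round(3))
```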
99. Multilevel Weighted Support Vector Machine for Classification on Healthcare Data with Missing Values.
- Author
-
Razzaghi, Talayeh, Roderick, Oleg, Safro, Ilya, and Marko, Nicholas
- Subjects
SUPPORT vector machines ,ELECTRONIC health records ,PREDICTION models ,DATA mining ,ELECTRONIC data processing - Abstract
This work is motivated by the needs of predictive analytics on healthcare data as represented by Electronic Medical Records. Such data is invariably problematic: noisy, with missing entries, and with imbalance in the classes of interest, leading to serious bias in predictive modeling. Since standard data mining methods often produce poor performance measures, we argue for the development of specialized techniques of data preprocessing and classification. In this paper, we propose a new method to simultaneously classify large datasets and reduce the effects of missing values. It is based on a multilevel framework of the cost-sensitive SVM and the expectation-maximization (EM) imputation method for missing values, which relies on iterated regression analyses. We compare classification results of multilevel SVM-based algorithms on public benchmark datasets with imbalanced classes and missing values, as well as on real data in health applications, and show that our multilevel SVM-based method produces fast, more accurate, and more robust classification results. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
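A rough analogue (not the authors' multilevel framework) of the two ingredients described above: an EM-style iterated-regression imputer for missing values followed by a cost-sensitive (class-weighted) SVM on imbalanced data. Synthetic data stand in for EMR records.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.datasets import make_classification
from sklearn.impute import IterativeImputer
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1500, n_features=15, weights=[0.9, 0.1],
                           random_state=0)                  # imbalanced classes
mask = np.random.default_rng(0).random(X.shape) < 0.15      # 15% missing at random
X[mask] = np.nan

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(IterativeImputer(random_state=0),       # iterated regression imputation
                    StandardScaler(),
                    SVC(class_weight="balanced"))           # cost-sensitive SVM
clf.fit(X_tr, y_tr)
print("balanced accuracy:",
      round(balanced_accuracy_score(y_te, clf.predict(X_te)), 3))
```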
100. Accurate Prediction of Transposon-Derived piRNAs by Integrating Various Sequential and Physicochemical Features.
- Author
-
Luo, Longqiang, Li, Dingfang, Zhang, Wen, Tu, Shikui, Zhu, Xiaopeng, and Tian, Gang
- Subjects
TRANSPOSONS ,PIWI genes ,NON-coding RNA ,PREDICTION theory ,MACHINE learning - Abstract
Background: Piwi-interacting RNA (piRNA) is the largest class of small non-coding RNA molecules. Predicting transposon-derived piRNAs can enrich research on small ncRNAs and help to further elucidate the generation mechanism of gametes. Methods: In this paper, we attempt to differentiate transposon-derived piRNAs from non-piRNAs based on their sequential and physicochemical features by using machine learning methods. We explore six sequence-derived features, i.e., spectrum profile, mismatch profile, subsequence profile, position-specific scoring matrix, pseudo dinucleotide composition and local structure-sequence triplet elements, and systematically evaluate their performance for transposon-derived piRNA prediction. Finally, we consider two approaches, direct combination and ensemble learning, to integrate useful features and achieve high-accuracy prediction models. Results: We construct three datasets, covering three species (Human, Mouse and Drosophila), and evaluate the performance of the prediction models by 10-fold cross validation. In the computational experiments, direct combination models achieve AUC of 0.917, 0.922 and 0.992 on Human, Mouse and Drosophila, respectively; ensemble learning models achieve AUC of 0.922, 0.926 and 0.994 on the three datasets. Conclusions: Compared with other state-of-the-art methods, our methods lead to better performance. In conclusion, the proposed methods are promising for transposon-derived piRNA prediction. The source code and datasets are available in . [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
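A sketch of the simplest of the listed feature types, the spectrum profile (k-mer composition), fed to a generic classifier; the random sequences are placeholders, not the piRNA datasets, and the classifier choice is an assumption.

```python
from itertools import product
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

KMERS = ["".join(p) for p in product("ACGU", repeat=3)]

def spectrum_profile(seq, k=3):
    """Normalized overlapping k-mer counts (the 'spectrum profile')."""
    grams = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    return np.array([grams.count(km) for km in KMERS], float) / len(grams)

rng = np.random.default_rng(10)
def random_rna(n, gc_bias):
    p = [0.25 - gc_bias, 0.25 + gc_bias, 0.25 + gc_bias, 0.25 - gc_bias]
    return "".join(rng.choice(list("ACGU"), size=n, p=p))

pos = [random_rna(30, 0.05) for _ in range(300)]    # stand-in "piRNAs"
neg = [random_rna(30, -0.05) for _ in range(300)]   # stand-in non-piRNAs
X = np.array([spectrum_profile(s) for s in pos + neg])
y = np.array([1] * 300 + [0] * 300)

auc = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                      cv=10, scoring="roc_auc").mean()
print(f"10-fold cross-validated AUC: {auc:.3f}")
```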