29 results
Search Results
2. Multilevel dimensionality-reduction methods
- Abstract
When data sets are multilevel (group nesting or repeated measures), different sources of variation must be identified. In the framework of unsupervised analyses, multilevel simultaneous component analysis (MSCA) has recently been proposed as the most satisfactory option for analyzing multilevel data. MSCA estimates submodels for the different levels in the data and thereby separates the “within”-subject and “between”-subject variations in the variables. Following the principles of MSCA and the strategy of decomposing the available data matrix into orthogonal blocks, and taking into account the between- and within-data structures, we generalize, in a multilevel perspective, multivariate models in which a matrix of response variables can be used to guide the projections (formed by responses predicted by explanatory variables or by a limited number of their combinations/composites) into choices of meaningful directions. To this end, the current paper proposes the multilevel version of the multivariate regression model and of dimensionality-reduction methods (used to predict responses with fewer linear composites of explanatory variables). The principal finding of the study is that the minimization of the loss functions related to multivariate regression, principal-component regression, reduced-rank regression, and canonical-correlation regression is equivalent, under some constraints, to separately minimizing two loss functions corresponding to the between and within structures. The paper closes with a case study of an application focusing on the relationships between mental health severity and the intensity of care in the Lombardy region mental health system.
- Published
- 2013
3. The uniformly minimum variance unbiased estimator of odds ratio in case–control studies under inverse sampling
- Abstract
The stated goal of this paper is to propose the uniformly minimum variance unbiased estimator of the odds ratio in case–control studies under an inverse sampling design. The problem of estimating the odds ratio plays a central role in case–control studies. However, traditional sampling schemes appear inadequate when the expected frequencies of non-exposed cases and exposed controls can be very low. In such a case, it is convenient to use an inverse sampling design, which requires that random drawing be continued until a given number of relevant events has emerged. In this paper we prove that a uniformly minimum variance unbiased estimator of the odds ratio does not exist under the usual binomial sampling, while the standard odds ratio estimator is uniformly minimum variance unbiased under inverse sampling. In addition, we compare the two sampling schemes by means of large-sample theory and small-sample simulation.
- Published
- 2012
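As a purely editorial illustration of the inverse sampling scheme described in the abstract above: subjects are drawn one at a time and sampling stops only once a pre-specified number of the rare outcomes has been observed, after which the usual odds ratio estimate is formed from the resulting 2×2 table. The sketch below is a minimal simulation under assumed exposure probabilities and an assumed stopping count; it is not the paper's code or its derivation of the estimator's properties.

```python
# Minimal simulation of inverse sampling for a case-control odds ratio
# (illustrative only; exposure probabilities and stopping count are assumed).
import random

def draw_until(p_exposed, target, rare_is_exposed, rng):
    """Sample subjects with Bernoulli(p_exposed) exposure until the rare cell
    (exposed or non-exposed, as indicated) contains `target` subjects."""
    exposed = non_exposed = 0
    while (exposed if rare_is_exposed else non_exposed) < target:
        if rng.random() < p_exposed:
            exposed += 1
        else:
            non_exposed += 1
    return exposed, non_exposed

def odds_ratio(a, b, c, d):
    """Standard odds ratio estimate from a 2x2 table: (a * d) / (b * c)."""
    return (a * d) / (b * c)

rng = random.Random(0)
# Cases: non-exposed cases are assumed rare, so keep sampling cases until
# 10 non-exposed cases have been observed.
a, b = draw_until(p_exposed=0.85, target=10, rare_is_exposed=False, rng=rng)
# Controls: exposed controls are assumed rare, so keep sampling controls until
# 10 exposed controls have been observed.
c, d = draw_until(p_exposed=0.15, target=10, rare_is_exposed=True, rng=rng)
print("2x2 table:", (a, b, c, d), " OR estimate:", odds_ratio(a, b, c, d))
```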
4. On the Undecidability of Fuzzy Description Logics with GCIs and Product t-norm
- Abstract
The combination of Fuzzy Logics and Description Logics (DLs) has been investigated for at least two decades because such fuzzy DLs can be used to formalize imprecise concepts. In particular, tableau algorithms for crisp Description Logics have been extended to reason also with their fuzzy counterparts. Recently, it has been shown that, in the presence of general concept inclusion axioms (GCIs), some of these fuzzy DLs actually do not have the finite model property, thus throwing doubt on the correctness of tableau algorithms for which it was claimed that they can handle fuzzy DLs with GCIs. In a previous paper, we have shown that these doubts are indeed justified, by proving that a certain fuzzy DL with product t-norm and involutive negation is undecidable. In the present paper, we show that undecidability also holds if we consider a t-norm-based fuzzy DL where disjunction and involutive negation are replaced by the implication constructor, which is interpreted as the residuum. The only condition on the t-norm is that it is a continuous t-norm "starting" with the product t-norm, which covers an uncountable family of t-norms. © 2011 Springer-Verlag.
- Published
- 2011
6. Improvements to the tableau prover PITP
- Abstract
In this paper we discuss the new version of PITP, a procedure to decide propositional intuitionistic logic, which at the moment turns out to be the best propositional prover on ILTP. The changes in the strategy and implementation make the new version of PITP faster and capable of deciding more formulas than the previous one. We give a short account of both the old optimizations and the changes in the strategy with respect to the previous version. We use the ILTP library and randomly generated formulas to compare the implementation described in this paper to the other provers (including our old version of PITP).
- Published
- 2007
7. Application of a SPH depth-integrated model to landslide run-out analysis
- Abstract
Hazard and risk assessment of landslides with potentially long run-out is becoming more and more important. Numerical tools exploiting different constitutive models, initial data and numerical solution techniques are important for making the expert's assessment more objective, even though they cannot substitute for the expert's understanding of the site-specific conditions and the involved processes. This paper presents a depth-integrated model accounting for pore water pressure dissipation and applications both to real events and to problems for which analytical solutions exist. The main ingredients are: (i) the mathematical model, which includes pore pressure dissipation as an additional equation; this makes it possible to model flowslide problems with a high mobility at the beginning, the landslide mass coming to rest once pore water pressures dissipate. (ii) The rheological models describing basal friction: Bingham, frictional, Voellmy and cohesive-frictional viscous models. (iii) Simple erosion laws, providing a comparison between the approaches of Egashira, Hungr and Blanc. (iv) A Lagrangian SPH model to discretize the equations, including pore water pressure information associated with the moving SPH nodes. © 2014 Springer-Verlag Berlin Heidelberg
- Published
- 2014
8. Chasing a complete understanding of the triggering mechanisms of a large rapidly evolving rockslide
- Abstract
Rockslides in alpine areas can reach large volumes and, owing to their position along slopes, can either undergo a large and rapid evolution, originating large rock avalanches, or decelerate and stabilize. As a consequence, in particular when located within large deep-seated deformations, this type of instability requires accurate observation and monitoring. In this paper, the case study of the La Saxe rockslide (ca. 8 × 10⁶ m³), located within a deep-seated deformation, undergoing a major phase of acceleration in the last decade and exposing the valley bottom to a high risk, is discussed. To reach a more complete understanding of the process, an intense investigation program has been developed over the last 3 years. Boreholes have been drilled, logged, and instrumented (open-pipe piezometers, borehole wire extensometers, inclinometric casings) to assess the landslide volume, the rate of displacement at depth, and the water pressure. Displacement monitoring has been undertaken with optical targets, a GPS network, a ground-based interferometer, and four differential multi-parametric borehole probes. A clear seasonal acceleration is observed, related to snow melting periods. Deep displacements are clearly localized at specific depths. The analysis of the piezometric and snowmelt data and the calibration of a 1D block model allow the forecast of the expected displacements. For this purpose, a 1D pseudo-dynamic visco-plastic approach, based on Perzyna’s theory, has been developed. The viscous nucleus has been assumed to be bi-linear: in one case, irreversible deformations develop uniquely for positive yield function values; in a more general case, visco-plastic deformations develop even for negative values. The model has been calibrated and subsequently validated on a long temporal series of monitoring data, and it seems reliable for simulating the in situ data. A 3D simplified approach is suggested by subdividing the landslide mass into distinct interacting blocks.
- Published
- 2014
9. A trust-based approach for a competitive cloud/grid computing scenario
- Abstract
Cloud/Grid systems are composed of nodes that individually manage local resources, and when a client request is submitted to the system, it is necessary to find the most suitable nodes to satisfy that request. In a competitive scenario, each node is in competition with the others to obtain the assignment of available tasks. In such a situation, it is possible that a node, in order to obtain the assignment of a task, lies when declaring its own capability. Lying nodes will then need the collaboration of other nodes to complete the task, and consequently the problem arises of finding the most promising collaborators. In such a context, to make this selection effective, each node should have a trust model for accurately choosing its interlocutors. In this paper, a trust-based approach is proposed to make a node capable of finding the most reliable interlocutors. In order to avoid exploring the whole node space, this approach exploits a P2P resource-finding approach for clouds/grids, capable of determining the admissible region of nodes to be considered in the search for interlocutors.
- Published
- 2013
10. Fallacies as argumentative devices in political debates
- Abstract
The current paper attempts to contribute to the study of argumentation in political debates. We propose an examination of the role of fallacies in political argumentation. In the first two sections we conduct a brief review of the literature on the concepts of argumentation and fallacies to show that they both converge in emphasizing the role of discourse type when evaluating the efficacy of communicative strategies. This perspective is then applied in the analysis section to look at the role of fallacies in a political debate on nuclear energy held in Italy. We conduct a discourse analysis of the transcript, on the basis of which we identify a variety of relevant paths followed by speakers when constructing arguments. The findings demonstrate how several informal fallacies (argumentum ad baculum, argumentum ad hominem, argument from analogy, argumentum ad consequentiam) are strategically used by politicians in order to put forward coherent and strong positions.
- Published
- 2013
11. Defining Accounting Information Systems Boundaries
- Abstract
It is clear that organizations today expect much more than simple double-entry bookkeeping from their Accounting Information System (AIS): ERPs not only support all transaction-related activities, but also provide comprehensive tools that are useful to analyze data and make decisions. However, the definition of an AIS, or of what it should be, is highly dependent on the definition of accounting itself. An initial objective of this paper is therefore not only to analyze the various kinds of accounting adopted in companies and any related computer-based subsystems, but also to determine whether the contents of our accounting courses, particularly Management Accounting (MA) courses, are effectively aligned with the current needs of organizations. Secondly, this work will try to assess whether the current definition and contents of MA are still valid, or whether new perspectives suggest broadening its focus to include new and promising fields of interest.
- Published
- 2013
12. An object-oriented application framework for the development of real-time systems
- Abstract
The paper presents an object-oriented application framework that supports the development of real-time systems. The framework consists of a set of architectural abstractions that allow time-related aspects to be explicitly treated as first-class objects at the application level. Both the temporal behavior of an application and the way the application deals with information placed in a temporal context can be modeled by means of such abstractions, thus narrowing the semantic gap between specification and implementation. Moreover, the framework carefully separates behavioral policies from implementation details improving portability and simplifying the realization of adaptive systems.
- Published
- 2012
13. Can Children Tell Us Something about the Semantics of Adjectives?
- Abstract
In this paper we discuss some data about the acquisition of relative gradable adjectives in order to evaluate two theories that have been proposed to account for the meaning of gradable adjectives, i.e. the degree-based analysis and the partial function approach. We claim that younger children start by assigning a nominal-like interpretation to relative gradable adjectives (tall means “with a vertical dimension”), and that only at a later stage, for informativeness reasons, do they access the comparative reading (tall means “taller than a standard”). We present and discuss the results of an experimental study in which we aimed at “turning adults into children”. We show that, when informativeness is not at stake, even adults seem to access the nominal interpretation of relative adjectives. We argue that the transition from the nominal to the comparative reading of relative adjectives might be easily accounted for by a partial function approach.
- Published
- 2012
14. A comparison of genetic algorithms and particle swarm optimization for parameter estimation in stochastic biochemical systems
- Abstract
The modelling of biochemical systems requires the knowledge of several quantitative parameters (e.g. reaction rates) which are often hard to measure in laboratory experiments. Furthermore, when the system involves small numbers of molecules, the modelling approach should also take into account the effects of randomness on the system dynamics. In this paper, we tackle the problem of estimating the unknown parameters of stochastic biochemical systems by means of two optimization heuristics, genetic algorithms and particle swarm optimization. Their performances are tested and compared on two basic kinetic schemes: the Michaelis-Menten equation and the Brusselator. The experimental results suggest that particle swarm optimization is a suitable method for this problem. The set of parameters estimated by particle swarm optimization allows us to reliably reconstruct the dynamics of the Michaelis-Menten system and of the Brusselator in the oscillating regime. © Springer-Verlag Berlin Heidelberg 2009.
- Published
- 2009
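For orientation only, the following is a minimal, generic particle swarm optimization sketch for parameter estimation by least squares, of the kind compared in the entry above; the toy decay model, search bounds and hyperparameters (inertia w, cognitive/social coefficients c1, c2) are illustrative assumptions, not the paper's settings.

```python
# Generic particle swarm optimization sketch for parameter estimation
# (illustrative only; objective, bounds and hyperparameters are assumptions).
import math
import random

def pso(objective, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [objective(p) for p in pos]        # personal best values
    gbest_val = min(pbest_val)                     # global best value
    gbest = pbest[pbest_val.index(gbest_val)][:]   # global best position
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # keep the particle inside the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy example: recover a single rate constant k (true value 0.5) from
# noiseless "observations" of exponential decay y(t) = exp(-k * t).
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
obs = [math.exp(-0.5 * t) for t in ts]
sq_err = lambda p: sum((math.exp(-p[0] * t) - y) ** 2 for t, y in zip(ts, obs))
print(pso(sq_err, bounds=[(0.0, 2.0)]))
```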
15. NK landscapes difficulty and negative slope coefficient: How sampling influences the results
- Abstract
The Negative Slope Coefficient is an indicator of problem hardness that was introduced in 2004 and has returned promising results on a large set of problems. It is based on the concept of the fitness cloud and works by partitioning the cloud into a number of bins representing as many different regions of the fitness landscape. The measure is calculated by joining the bin centroids by segments and summing all their negative slopes. In this paper, for the first time, we point out a potential problem of the Negative Slope Coefficient: we study its value for different instances of the well-known NK-landscapes and we show how this indicator is dramatically influenced by the minimum number of points contained in a bin. Subsequently, we formally justify this behavior of the Negative Slope Coefficient and we discuss the pros and cons of this measure. © Springer-Verlag Berlin Heidelberg 2009.
- Published
- 2009
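One plausible reading of the construction summarized above (bin the fitness cloud along the parent-fitness axis, join the bin centroids with segments, and sum the negative slopes) is sketched below on a made-up fitness cloud; the binning details and the toy data are assumptions, not the paper's implementation.

```python
# One plausible implementation of the Negative Slope Coefficient (NSC) from a
# fitness cloud of (parent fitness, neighbour fitness) points.
# Illustrative only; binning details differ across papers.
import random

def nsc(cloud, n_bins=10):
    xs = [x for x, _ in cloud]
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / n_bins or 1.0
    bins = [[] for _ in range(n_bins)]
    for x, y in cloud:
        i = min(int((x - lo) / width), n_bins - 1)
        bins[i].append((x, y))
    centroids = [(sum(x for x, _ in b) / len(b), sum(y for _, y in b) / len(b))
                 for b in bins if b]                      # skip empty bins
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in zip(centroids, centroids[1:]) if x2 != x1]
    return sum(s for s in slopes if s < 0)

# Toy fitness cloud: parent fitness x, neighbour fitness y = x + noise.
rng = random.Random(0)
cloud = [(x, x + rng.gauss(0, 0.3)) for x in (rng.random() for _ in range(500))]
print("NSC =", nsc(cloud))
```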
16. How redundant is your universal computation device?
- Abstract
Given a computational model, and a "reasonable" encoding function that encodes any computation device M of the model as a finite bit string, we define the description size of M (under the encoding) as the length of its encoding. The description size of the entire class (under the encoding) can then be defined as the length of the shortest bit string that encodes a universal device of the class. In this paper we propose the description size as a complexity measure that allows different computational models to be compared. We compute upper bounds on the description size of deterministic register machines, Turing machines, spiking neural P systems and UREM P systems. By comparing these sizes, we provide a first partial answer to the following intriguing question: what is the minimal (description) size of a universal computation device?
- Published
- 2009
17. Reference ontology design for a neurovascular knowledge network
- Abstract
In this paper we describe the ontological model developed within the NEUROWEB project, a consortium of four European excellence centers for the diagnosis and treatment of the Ischemic Stroke pathology. The aim of the project is the development of a support system for association studies, intended as the search for statistical correlations between a feature (e.g., a genotype) and the clinical phenotype. Clinical phenotypes are assessed through the diagnostic activity, performed by clinical experts operating within different neurovascular sites. These sites operate according to specific procedures, but they also conform to the minimal requirements of international guidelines, displayed by the adoption of a common standard for the patient classification. We developed a central model for the clinical phenotypes (the NEUROWEB Reference Ontology), able to reconcile the different methodologies into a common classificatory system. To support the integrative analysis of genotype-phenotype relations, the Reference Ontology was extended to handle concepts of the genomic world. We identified the general theory of biological function as the common ground between the clinical medicine and molecular biosciences; therefore we decomposed the clinical phenotypes into elementary phenotypes with a homogeneous physiological background, and we connected them to the biological processes, acting as the elementary units of the genomic world.
- Published
- 2009
18. Dynamically computing reputation of recommender agents with learning capabilities
- Abstract
The importance of mutual monitoring in recommender systems based on learning agents derives from the consideration that a learning agent needs to interact with other agents in its environment in order to improve its individual performance. In this paper we present a novel framework, called EVA, that introduces a strategy to improve the performance of recommender agents based on a dynamic computation of the agent's reputation. Some preliminary experiments on real users show that our approach, implemented on top of some well-known recommender systems, introduces significant improvements in terms of effectiveness.
- Published
- 2008
19. Towards an ontology for crowds description: A proposal based on description logic
- Abstract
The research context of this paper refers to bottom-up approaches to crowd dynamics, that is, the study of how and where crowds form and move [1]. Several phenomena, such as crowd aggregation, dispersion and self-organized movement, have been observed and studied by multiple disciplines interested in crowds (e.g. physics, sociology, ethology, social and behavioral psychology, building design, urban planning, security management, among others), each one with its specific viewpoint and ontological setting. SCA4CROWDS is an interdisciplinary research effort within this context that aims at contributing to the development of a unifying ontology on crowds, allowing the integration of contributions coming from several disciplines, that could be exploited for scientific and applicative issues (e.g. model comparison, validation, calibration). Potential exploitations of SCA4CROWDS results concern the support of design and management of public crowded spaces and events to improve the security, safety and comfort of people. SCA4CROWDS, in particular, aims at developing formal and computational tools to support the design, execution and analysis of crowd behavior as an effect of individual interactions (e.g. physical, social, emotional) according to Situated Cellular Agent (SCA) [2]. SCA is a modeling and simulation framework to model and study crowd dynamics phenomena with an approach based on Multi-Agent Systems (MAS) and Cellular Automata [3] principles. © 2008 Springer-Verlag Berlin Heidelberg.
- Published
- 2008
20. GP generation of pedestrian behavioral rules in an evacuation model based on SCA
- Abstract
This paper presents research in the context of pedestrian dynamics according to Situated Cellular Agent (SCA), a Multi-Agent Systems approach whose roots are in Cellular Automata (CA). The aim of this work is to apply the Genetic Programming (GP) approach, a well-known Machine Learning method belonging to the family of Evolutionary Algorithms, to generate suitable behavioral rules for pedestrians in an evacuation scenario. The main contribution of this work is the design of a test set of GP-generated behaviors representing basic behavioral models of evacuees populating an only locally known environment, a typical scenario for CA-based models. © 2008 Springer-Verlag Berlin Heidelberg.
- Published
- 2008
21. Management of multi-services structures through an access control framework
- Abstract
This paper aims to present an architectural model designed to manage the information related to a multi-functional structure offering various types of services through an access control framework. It describes several implementation details of our prototype focusing on the emerging technologies based on radio frequency identification approaches exploited to reveal the presence of customers inside a multi-functional area.
- Published
- 2008
22. Evaluating Graph Kernel Methods for Relation Discovery in GO-annotated Clusters
- Abstract
The application of various clustering techniques to large-scale gene-expression measurement experiments is an established method in bioinformatics. Clustering is also usually accompanied by a functional characterization of gene sets, obtained by assessing statistical enrichments of structured vocabularies, such as the Gene Ontology (GO) [1]. If different cluster sets are generated for correlated experiments, a machine learning step termed cluster meta-analysis may be performed in order to discover relations among the components of such sets. Several approaches have been proposed for this step: in particular, kernel methods may be used to exploit the graphical structure of typical ontologies such as GO. Following up on the formulation of such an approach [2], in this paper we present and discuss further results about its applicability and its performance, again in the context of the well-known Spellman Yeast Cell Cycle dataset [3].
- Published
- 2007
23. Robustness of parameter estimation procedures in multilevel models when random effects are MEP distributed
- Abstract
In this paper we examine maximum likelihood estimation procedures in multilevel models for two level nesting structures. Usually, for fixed effects and variance components estimation, level-one error terms and random effects are assumed to be normally distributed. Nevertheless, in some circumstances this assumption might not be realistic, especially as concerns random effects. Thus we assume for random effects the family of multivariate exponential power distributions (MEP); subsequently, by means of Monte Carlo simulation procedures, we study robustness of maximum likelihood estimators under normal assumption when, actually, random effects are MEP distributed. © Springer-Verlag 2007.
- Published
- 2007
24. UP-DRES user profiling for a dynamic REcommendation system
- Abstract
The WWW is currently the most dynamic and attractive place for information exchange. Finding useful information is hard, owing to the huge amount of data, the variety of topics and the unstructured contents. In this paper we present a web browsing support system that proposes personalized contents. It is integrated in the content management system and runs on the server hosting the site. It periodically processes the site contents, extracting vectors of the most significant words. A topology tree is defined by applying hierarchical clustering. During online browsing, viewed contents are processed and mapped into the vector space previously defined. The centroid of these vectors is compared with the centroids of the topology tree nodes to find the most similar one; its contents are presented to the user as link suggestions or dynamically created pages. The personal profile is saved after every session and included in the analysis during the same user's subsequent visits, avoiding the cold start problem.
- Published
- 2006
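The matching step described in the entry above, comparing the centroid of the session's term vectors with the centroids of the topology tree nodes, amounts to a nearest-centroid lookup; the sketch below uses plain cosine similarity with invented placeholder vectors and node names, and is not the system's actual representation.

```python
# Sketch of the matching step: map a user's session to a centroid vector and
# pick the most similar topology-tree node by cosine similarity.
# Vocabulary, vectors and node names are invented placeholders.
import math

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Term-weight vectors of the pages viewed in the current session.
session_pages = [[0.2, 0.0, 0.7], [0.1, 0.1, 0.8]]
# Centroids of the topology-tree nodes built offline from site contents.
tree_nodes = {"sports": [0.9, 0.1, 0.0], "science": [0.1, 0.0, 0.9]}

user_profile = centroid(session_pages)
best = max(tree_nodes, key=lambda n: cosine(user_profile, tree_nodes[n]))
print("suggest contents from node:", best)
```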
25. Knowledge maintenance and sharing in the KM context: the case of P-Truck
- Abstract
This paper illustrates a Knowledge Management framework developed in the context of the P-Truck Project, which aims to support the design and manufacturing of innovative products in a restricted and specific domain (i.e. the design and manufacturing of truck tires) through a Knowledge Based System approach. The domain is characterized by heterogeneous knowledge that can hardly be captured with generic knowledge engineering methodologies and tools. Thus, a dedicated Knowledge Elicitation tool (KEPT) has been designed and implemented into the P-Truck system for this task. The main feature of KEPT is the possibility to manage the heterogeneous knowledge concerning the different phases of the production process in a centralized fashion, with benefits from the knowledge maintenance and sharing standpoints. Moreover, KEPT supports the different types of knowledge-based systems (i.e. rule-based and case-based) that, within the P-Truck system, support domain experts in handling tire design and production.
- Published
- 2003
27. P systems with Gemmation of Mobile Membranes
- Abstract
P systems are computational models inspired by some biological features of the structure and functioning of real cells. In this paper we introduce a new kind of communication between membranes, based upon the natural budding of vesicles in a cell. We define the operations of gemmation and fusion of mobile membranes, and we use membrane structures and rules over strings of biological inspiration only. We prove that P systems of this type can generate all recursively enumerable languages and, moreover, that the Hamiltonian Path Problem can be solved in quadratic time. Some open problems are also formulated.
- Published
- 2001
28. Reactive path-planning: a directional diffusion algorithm on multilayered cellular automata
- Abstract
The Cellular Automata model is a powerful instrument used in many applications. In this paper we present a reactive path-planning algorithm, based on Multilayered Cellular Automata, for a non-holonomic robot moving on a 2D surface. The robot considered has a preferential motion direction and has to move along smoothed trajectories, without stopping and turning in place, and with a minimum steering radius. We have implemented a new algorithm based on a directional (anisotropic) propagation of repulsive and attracting potential values in a Multilayered Cellular Automata model. The algorithm finds all the optimal trajectories by following the minimum valley of a potential hypersurface embedded in a 4D space, built respecting the imposed constraints. Our approach turns out to be distributed and incremental: whenever the initial or final pose, or the obstacle distribution, changes, the automata start evolving towards a new global steady state, looking for a new set of solutions. Because it reacts to changes in the obstacle distribution, it can also be used in unknown or dynamic environments in combination with a world modeler. The path-planning algorithm is applicable to a wide class of vehicle kinematics, selected by changing a set of weights.
- Published
- 2001
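As background for the potential-propagation idea in the entry above, the sketch below shows a plain isotropic diffusion of a distance potential from the goal cell on a 2D grid, followed by steepest descent from the start. It deliberately omits the paper's anisotropic propagation, multilayer structure and steering-radius constraints, and the grid is a made-up example.

```python
# Simplified potential-diffusion path planner on a 2D grid (isotropic, no
# steering constraints): the goal seeds a potential that spreads through free
# cells; following the steepest descent from the start yields a path.
from collections import deque

def diffuse(grid, goal):
    """grid[r][c] == 1 marks an obstacle; returns a distance potential."""
    rows, cols = len(grid), len(grid[0])
    pot = [[None] * cols for _ in range(rows)]
    pot[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and pot[nr][nc] is None:
                pot[nr][nc] = pot[r][c] + 1
                queue.append((nr, nc))
    return pot

def descend(pot, start):
    """Follow strictly decreasing potential values from start to the goal."""
    path, cur = [start], start
    while pot[cur[0]][cur[1]] != 0:
        r, c = cur
        nbrs = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < len(pot) and 0 <= c + dc < len(pot[0])
                and pot[r + dr][c + dc] is not None]
        cur = min(nbrs, key=lambda p: pot[p[0]][p[1]])
        path.append(cur)
    return path

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
pot = diffuse(grid, goal=(2, 3))
print(descend(pot, start=(0, 0)))
```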
29. Approximation of multivariable functions with respect to random points less than 2k, k the dimension of the space
- Abstract
The problem of the numerical approximation of multivariable functions has been solved by the Monte Carlo method when the data points are assumed to be given on discrete lattice points [5, 8, 2]. When the data points are randomly distributed and very numerous, there are some results in the literature [3, 6], but if the number of points is less than 2k, where k is the dimension of the space, it is very difficult to develop approximation formulas. This paper gives a solution to this problem by local approximations. © 1981 Springer-Verlag.
- Published
- 1981