63 results
Search Results
2. SCICERO: A deep learning and NLP approach for generating scientific knowledge graphs in the computer science domain
- Abstract
Science communication faces a number of bottlenecks, including the rising number of published research papers and its non-machine-accessible, document-based paradigm, which makes the exploration, reading, and reuse of research outcomes rather inefficient. Recently, Knowledge Graphs (KGs), i.e., semantic interlinked networks of entities, have been proposed as a new core technology to describe and curate scholarly information with the goal of making it machine readable and understandable. However, the main drawback of this technology is that researchers are asked to manually annotate their research papers and add their contributions to the KGs. To address this problem, in this paper we propose SCICERO, a novel KG generation approach that takes as input text from research articles and generates a KG of research entities. SCICERO uses Natural Language Processing techniques to parse the content of scientific papers and discover entities and relationships, exploits state-of-the-art Deep Learning Transformer models to make sense of and validate the extracted information, and uses Semantic Web best practices to formally represent the extracted entities and relationships, making the written content of research papers machine-actionable. SCICERO has been tested on a dataset of 6.7M papers about Computer Science, generating a KG of about 10M entities. It has been evaluated on a manually generated gold standard of 3,600 triples covering three Computer Science subdomains (Information Retrieval, Natural Language Processing, and Machine Learning), obtaining remarkable results. A minimal sketch of the final RDF representation step follows this entry.
- Published
- 2022
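The abstract above ends with a Semantic Web representation step. A minimal sketch of that step, assuming triples already extracted by an upstream NLP pipeline, using the rdflib library; the namespace URI and the example entities are illustrative assumptions, not SCICERO's actual vocabulary:

from rdflib import Graph, Namespace

CSKG = Namespace("http://example.org/cskg/")  # hypothetical namespace

# Triples assumed to come from the upstream extraction/validation stages.
extracted = [
    ("BERT", "usedFor", "named_entity_recognition"),
    ("named_entity_recognition", "subtaskOf", "information_extraction"),
]

g = Graph()
g.bind("cskg", CSKG)
for s, p, o in extracted:
    g.add((CSKG[s], CSKG[p], CSKG[o]))   # each triple becomes an RDF statement

print(g.serialize(format="turtle"))      # machine-actionable output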
3. VIC — A Tangible User Interface to train memory skills in children with Intellectual Disability
- Abstract
Memory is defined as the capability of encoding, storing, and retrieving information and is a pillar of our cognitive functions. Memory is one of the most investigated processes in people with Intellectual Disability (ID), and studies have documented severe deficits in its functioning. While there are several attempts to exploit GUIs (Graphical User Interfaces) to support memory training for children with ID, limited research explores Tangible User Interfaces (TUIs) for this purpose. The paper describes the design and technology of a novel TUI named VIC (VIsual spatial Cubes for memory training), a system composed of a set of digitally enhanced cubes that emit light and sound, a sensorized board, and a mobile app. VIC enables children to perform multiple, configurable memory training activities that involve block manipulation and block placement. The research is based on a vast analysis of the state of the art and on validated methods adopted in memory rehabilitation contexts. From this analysis, and from the theories underlying TUIs, we distilled a set of design principles that informed the design of the multi-sensory affordances of VIC and can be exploited by other researchers to develop TUIs in this field. The paper also reports an exploratory study that involved 12 children with ID and 3 therapists from a specialized daycare center. The study focused on the evaluation of the quality of the system in terms of usability, likability and potential for adoption. Although preliminary, the results suggest that our design approach was sound and that VIC has the potential to become a valid tool to complement existing practices in memory training and to be expanded to also support memory assessment for children with ID.
- Published
- 2022
4. Self-ion irradiation effects on nanoindentation-induced plasticity of crystalline iron: A joint experimental and computational study: Ion irradiation effects on hardening mechanisms of crystalline iron
- Abstract
In this paper, experimental work is supported by multi-scale numerical modeling to investigate the nanomechanical response of pristine and ion-irradiated (5 MeV Fe2+ ions) high-purity iron specimens by nanoindentation and Electron Backscatter Diffraction. The sudden displacement bursts observed during loading in the load–displacement curves are connected with increased shear stress in a small subsurface volume, due to dislocation slip activation and the mobilization of pre-existing dislocations introduced by irradiation. Molecular dynamics (MD) and 3D discrete dislocation dynamics (3D-DDD) simulations are applied to model the nucleation mechanisms of geometrically necessary dislocations (GNDs) at the early stages of the nanoindentation test; they provide insight into the mechanical response of the material and its plastic instability, and are in qualitative agreement with GND density mapping images. Finally, we note that the nucleated dislocations and defects are responsible for the increase in material hardness observed in the recorded load–displacement curves and pop-in analysis.
- Published
- 2023
5. Particle physics at the European Spallation Source
- Abstract
Presently under construction in Lund, Sweden, the European Spallation Source (ESS) will be the world's brightest neutron source. As such, it has the potential for a particle physics program with a unique reach and which is complementary to that available at other facilities. This paper describes proposed particle physics activities for the ESS. These encompass the exploitation of both the neutrons and neutrinos produced at the ESS for high precision (sensitivity) measurements (searches).
- Published
- 2023
6. Meeting decarbonization targets: Techno-economic insights from the Italian scenario
- Abstract
The European plan for a green transition includes the Fit for 55 package, designed to pave the way for climate neutrality. Despite its significant implications for cleaner technologies, it potentially correlates with high investment requirements, necessitating the pursuit of cost-effective environmental policies. Starting from the reference scenario previously envisaged in the Energy and Climate Plan, socioeconomic and environmental impacts are assessed using mixed methods. It is estimated that €1120 bn in investments is needed to meet decarbonization targets, while the total impact on public finance revenues to 2030 is projected at €529 bn. Additionally, the avoided costs of emissions amount to €36 bn, while those from energy savings are expected to reach €30 bn. This paper adds value by contributing to the literature on European climate policies, offering an in-depth appraisal of implications that integrates techno-economic and environmental perspectives. Furthermore, it informs policymakers' public spending decisions for decarbonization.
- Published
- 2023
7. Sub-10 ps time tagging of electromagnetic showers with scintillating glasses and SiPMs
- Abstract
The high energy physics community has recently identified an e+e− Higgs factory as one of the next-generation collider experiments, following the completion of the High Luminosity LHC program at CERN. The moderate radiation levels expected at such colliders, compared to hadron colliders, enable the use of less radiation-tolerant but cheaper technologies for the construction of the particle detectors. This opportunity has triggered a renewed interest in the development of scintillating glasses for the instrumentation of large detector volumes such as homogeneous calorimeters. While the performance of such scintillators remains typically inferior to that of many scintillating crystals in terms of light yield and radiation tolerance, substantial progress has been made over recent years. In this paper we discuss the time resolution of cerium-doped Alkali Free Fluorophosphate scintillating glasses, read out with silicon photomultipliers, in detecting single charged tracks and at different positions along the longitudinal development of an electromagnetic shower, using respectively 150 GeV pions and 100 GeV electron beams at the CERN SPS H2 beam line. Single-sensor time resolutions of 14.4 ps and 5–7 ps were measured in the two cases, respectively. With such a performance, the present technology has the potential to address an emerging requirement of future detectors at collider experiments: measuring the time-of-flight of single charged particles as well as that of neutral particles showering inside the calorimeter, and the time development of showers.
- Published
- 2023
8. Optimal hierarchical EWMA forecasting
- Abstract
Prediction of demand at different levels of aggregation is a crucial task in many business and industrial activities. This task may be extremely challenging when the number of time series increases together with the number of parameters governing the dynamics of the underlying model. This paper proposes theoretical and empirical contributions providing practical tools for managers needing efficient, flexible, and timely instruments. We first derive optimal results for predicting a system of time series following multivariate Exponentially Weighted Moving Average (EWMA) dynamics (the scalar recursion is sketched after this entry). Our results have relevant practical consequences. Indeed, we propose a fast EM algorithm that maximizes the Gaussian multivariate likelihood regardless of the model's dimension. Second, we show optimal results for the hierarchies, deriving closed-form results for the underlying parameters. Finally, using more than one hundred Walmart sales time series, we show that our approach is competitive with the optimal forecast reconciliation approach based on univariate forecasts.
- Published
- 2023
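For intuition, a minimal sketch of the scalar EWMA forecast recursion underlying the paper's multivariate dynamics; the smoothing constant and the toy demand series are illustrative assumptions, and the paper's EM-based multivariate estimator is not reproduced here:

import numpy as np

def ewma_forecast(y, lam=0.3):
    # y_hat[t+1] = lam * y[t] + (1 - lam) * y_hat[t]
    y_hat = np.empty_like(y, dtype=float)
    y_hat[0] = y[0]                      # initialise with the first observation
    for t in range(len(y) - 1):
        y_hat[t + 1] = lam * y[t] + (1 - lam) * y_hat[t]
    return y_hat

demand = np.array([10.0, 12.0, 9.0, 11.0, 13.0, 12.0])
print(ewma_forecast(demand))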
9. An explicit upper bound on the number of subgroups of a finite group
- Abstract
In this paper we prove that a finite group of order r has at most 7.3722 · r^((log₂ r)/4 + 1.5315) subgroups.
- Published
- 2023
10. Modeling Value of Information in remote sensing from correlated sources
- Abstract
This paper investigates data correlation in remote sensing networks and how it can be characterized through diverse models quantifying the Value of Information (VoI), a metric that describes how informative the data transmitted by the sensors are. For each sensor, the VoI evaluation comprises the average node-specific Age of Information (AoI), the average cost of sending updates, and the AoI of neighbor nodes, which are assumed to be correlated sources of information and therefore benefit the VoI of nearby sensors (a toy simulation of these AoI dynamics follows this entry). We discuss how this metric can be tracked through a two-dimensional Markov chain, but we also show how this representation can be simplified by including the impact of neighbor nodes within the transition probabilities, so as to obtain a simpler model that gives the same insight in terms of VoI evaluations.
- Published
- 2023
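A toy simulation of the Age of Information dynamics entering the VoI metric above: the age grows by one per slot and resets when the sensor transmits, and each update incurs a cost. The transmission probability and cost are illustrative assumptions; the paper's two-dimensional Markov chain and neighbor-correlation terms are not modeled:

import random

random.seed(0)
p_tx, cost_per_update, horizon = 0.2, 1.0, 10_000
age, total_age, total_cost = 0, 0, 0.0
for _ in range(horizon):
    if random.random() < p_tx:           # the sensor sends a fresh update
        age = 0
        total_cost += cost_per_update
    else:                                # no update: the information ages
        age += 1
    total_age += age
print("mean AoI:", total_age / horizon, "mean cost:", total_cost / horizon)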
11. Mega Engineering Projects in Challenging Geological Environments – A Modern Perspective
- Abstract
In the article “Engineering geology – A fifty year perspective” (Juang et al., 2016), which celebrated the 50th anniversary of Engineering Geology, the authors described Engineering Geology for Engineering Projects (EGEP), together with Geological Engineering and Geotechnical Engineering (GEGE), as important elements of Engineering Geology. For example, four hundred and twelve papers were published in Engineering Geology from 1986–1995, of which one hundred and twenty (about 30%) were related to EGEP. The research topics mainly included the design and construction of foundations, tunnels, and slopes (e.g., dams, embankments, landslides). However, the number of papers published on EGEP and GEGE declined significantly from 2006 to 2015. It was speculated that the worldwide economic recession that began in 2008 might have contributed to the decline of infrastructure development and the number of …
- Published
- 2019
12. Nonparametric Bayesian modelling of longitudinally integrated covariance functions on spheres
- Abstract
Taking into account axial symmetry in the covariance function of a Gaussian random field is essential when the purpose is modelling data defined over a large portion of the sphere representing our planet. Axially symmetric covariance functions admit a convoluted spectral representation that makes modelling and inference difficult. This motivates the interest in devising alternative strategies to attain axial symmetry, an appealing option being longitudinal integration of isotropic random fields on the sphere. This paper provides a comprehensive theoretical framework to model longitudinal integration on spheres through a nonparametric Bayesian approach. Longitudinally integrated covariances are treated as random objects, where the randomness is implied by the randomised spectrum associated with the covariance function. After investigating the topological support induced by our construction, we give the posterior distribution a thorough inspection. A Bayesian nonparametric model for the analysis of data defined on the sphere is described and implemented, its performance investigated by means of the analysis of both simulated and real data sets.
- Published
- 2022
13. The agenda of the global patient reported outcomes for multiple sclerosis (PROMS) initiative: Progresses and open questions
- Abstract
On 12 September 2019, the global Patient Reported Outcomes for Multiple Sclerosis (PROMS) Initiative was launched at the 35th Congress of the European Committee for Treatment and Research in Multiple Sclerosis (ECTRIMS). The multi-stakeholder PROMS Initiative is jointly led by the European Charcot Foundation (ECF) and the Multiple Sclerosis International Federation (MSIF), with the Italian Multiple Sclerosis Society (AISM) acting as the lead agency for and on behalf of the global MSIF movement. The initiative has the ambitious mission (i) to maximize the impact of science with and of patient input on the life of people affected by MS, and (ii) to represent a unified view on Patient-Reported Outcomes for MS to people affected by MS, healthcare providers, regulatory agencies and Health Technology Assessment agencies. Equipped with an innovative participatory governance of an international and interdisciplinary network of different stakeholders, PROMS has the potential to guide future breakthroughs in MS patient-focused research and care. In this paper we present the progress of the global PROMS Initiative and discuss the open questions that we aim to address.
- Published
- 2022
14. Sympathetic nervous system and hypertension: New evidences
- Abstract
Evidence collected in the past few years has strengthened the concept that the sympathetic nervous system plays a primary role in the development and progression of the hypertensive state, starting from the early stages, and in hypertension-related cardiovascular diseases. Several pathophysiological mechanisms are involved, among them the genetic background and the immune system, in conjunction with sympathetic activation. The present review briefly discusses the importance of the above-mentioned mechanisms in the development of hypertension. The paper also examines the sympathetic mechanisms underlying attended vs unattended blood pressure measurements, as well as their role in resistant vs pseudo-resistant hypertension. Finally, evidence from recent meta-analyses on the relevance of sympathetic nerve traffic activation in the pathogenesis of hypertension is briefly discussed.
- Published
- 2022
15. Preferences and strategic behavior in public goods games
- Abstract
In finitely repeated public goods games, contributions are initially high and gradually decrease over time. Two main explanations are consistent with this pattern: (i) the population is composed of free-riders, who never contribute, and conditional cooperators, who contribute if others do so as well; (ii) strategic players contribute to sustain mutually beneficial future cooperation, but reduce their contributions as the end of the game approaches. This paper analyzes these explanations experimentally, by manipulating group composition to form homogeneous groups on both the preference and the strategic-ability dimensions. Our results highlight the role of strategic ability in sustaining contributions, and suggest that the interaction between the two dimensions also matters: we find that groups that sustain high levels of cooperation are composed of members who share a common inclination toward cooperation and also have the strategic abilities to recognize and reap the benefits of enduring cooperation.
- Published
- 2022
16. A denoising tool for the reconstruction of cortical geometries from MRI
- Abstract
The reconstruction of individual geometries from medical imaging is quite standard in the framework of patient-specific medicine. A major drawback in this context is the noise inherent to data acquisition. Low signal-to-noise ratios can negatively impact extraction algorithms and result in artefacts or poor quality of the reconstructed meshes. Direct application of numerical methods to such meshes can yield misleading results. Indeed, artefacts and badly shaped elements may corrupt numerical simulations or induce relevant errors in the computation of meaningful geometrical quantities, such as the curvature or the geodesic surface distance. In this paper, we propose a denoising procedure to remove artefacts from a triangular mesh of a three-dimensional closed surface representing a brain cortex. For this purpose, we combine a smoothing technique (i.e., Taubin or HC-Laplacian smoothing; the former is sketched after this entry) with an edge-flipping algorithm. To control the denoising procedure, we introduce a stopping criterion that takes into account both the improvement of the mesh quality and the loss of volume enclosed by the surface. On a brain cortical surface reconstructed from Magnetic Resonance Imaging (MRI) data, we first perform a tuning analysis of the parameters involved in the smoothing algorithm, then we investigate the effectiveness of the denoising procedure. Finally, as an example of a relevant geometrical feature, we study the improvement generated by the proposed algorithm in the computation of the cortical curvature.
- Published
- 2022
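A minimal sketch of Taubin λ|μ smoothing, one of the two smoothers named in the abstract: a shrinking Laplacian step with λ > 0 alternates with an inflating step with μ < 0 to limit volume loss. The toy mesh, adjacency lists, and parameter values are illustrative assumptions; the paper's stopping criterion and edge flipping are not reproduced:

import numpy as np

def taubin_smooth(verts, neighbors, lam=0.5, mu=-0.53, iters=10):
    v = verts.astype(float).copy()
    for _ in range(iters):
        for factor in (lam, mu):          # shrink step, then inflate step
            delta = np.array([v[nbrs].mean(axis=0) - v[i]
                              for i, nbrs in enumerate(neighbors)])
            v += factor * delta
    return v

# Toy "mesh": a noisy 4-vertex cycle; each vertex lists its two neighbours.
verts = np.array([[0, 0, 0.1], [1, 0, -0.1], [1, 1, 0.1], [0, 1, -0.1]])
neighbors = [[3, 1], [0, 2], [1, 3], [2, 0]]
print(taubin_smooth(verts, neighbors))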
17. A non-clausal tableau calculus for MINSAT
- Abstract
In this paper we provide a non-clausal tableau calculus for the minimum satisfiability problem. Moreover, we describe how to adapt it to some variants. Our starting point is a calculus for the non-clausal maximum satisfiability problem.
- Published
- 2022
18. On the rank of Suzuki polytopes: An answer to Hubard and Leemans
- Abstract
In this paper we show that the rank of every chiral polytope having a Suzuki group as automorphism group is 3. This gives a positive answer to a conjecture of Isabel Hubard and Dimitri Leemans.
- Published
- 2021
19. Manganese-mediated hydrochemistry and microbiology in a meromictic subalpine lake (Lake Idro, Northern Italy) - A biogeochemical approach
- Abstract
This study presents the findings from several field campaigns carried out in Lake Idro (Northern Italy), a deep (124 m) meromictic subalpine lake whose water column is subdivided into a mixolimnion (~0–40 m) and a monimolimnion (~40–124 m). Hydrochemical data highlight two main peculiarities characterizing the Lake Idro meromixis: a) the presence of a high manganese/iron ratio (up to 20 mol/mol); b) the absence of a clear chemocline between the two main layers. The high manganese content contributed to the formation of a stable manganese-dominated deep turbid stratum (40–65 m), enveloping the redoxcline (~45–55 m) in the upper monimolimnion. The presence of this turbid stratum in Lake Idro is described for the first time in this study. The paper examines the distribution of dissolved and particulate forms of transition metals (Mn and Fe), alkaline earth metals (Ca and Mg), and other macro-constituents or nutrients (S, P, NO3-N, NH4-N), discussing their behavior across the redoxcline, where the main transition processes occur. Field measurements and theoretical considerations suggest that the deep turbid stratum is formed by a complex mixture of manganese and iron compounds with a prevalence of Mn(II)/Mn(III) in different forms, including dissolved, colloidal, and fine particles, which give the turbid stratum a white-pink opalescent coloration. The bacterial populations show a clear stratification, with the upper aerobic layer dominated by the heterotrophic Flavobacterium sp., the turbid stratum hosting a specific microbiological pool dominated by Caldimonas sp., and the deeper anaerobic layer dominated by the sulfur-oxidizing and denitrifying Sulfuricurvum sp. The occurrence in August 2010 of an anomalous lake surface coloration, lasting about four weeks and developing from milky white-green to red-brown, suggests that the upper zone of the turbid stratum could be eroded during intense weather-hydrological conditions, with the final red-brown coloration resulting from the oxidation …
- Published
- 2021
20. Production-induced instability of a gentle submarine slope: Potential impact of gas hydrate exploitation with the huff-puff method
- Abstract
Natural gas in clathrate hydrates is regarded as a potential energy source, and increasing attention is being paid to production strategies with controllable environmental impacts. This paper investigates the possible instability of a gently sloping reservoir of oceanic hydrates induced by gas production using the huff-puff method through a horizontal well. The geomechanical stability of the slope is analyzed within the framework of the limit equilibrium method (a textbook infinite-slope form is recalled after this entry), considering the dynamic change in the pore pressure and the strength parameters of the slope during gas production. The production process is simulated by a coupled analysis of heat and flow transport accounting for the thermal effects of hydrate dissociation and formation, and the time-dependent pore pressure and strength parameters are obtained from this analysis and passed to the slope stability analysis. Parametric studies are performed to screen the optimal production scenario under different site conditions. As part of the huff-puff production process, thermal stimulation during the huff stage poses a risk of production-induced instability to the slope. Overpressure is the dominant cause of slope failure, and strength reduction due to hydrate dissociation plays a secondary role in the studied scenario. Production-induced slope failure is likely to take place at sites with interbedded geological structures that promote overpressure expansion in a laterally extending band beneath the potential failure surface. Thus, the geological structures should be properly modelled in reservoir simulations, as they could considerably impact the production effectiveness and geomechanical response of the reservoir. This study demonstrates the need for a multi-objective optimization procedure to seek the overall optimal production strategy, since the economically optimal option is not necessarily free of the risk of production-induced geo-hazards.
- Published
- 2021
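For orientation, a textbook infinite-slope limit-equilibrium factor of safety, showing how pore pressure and strength enter such an analysis; this is only the standard form, not necessarily the exact formulation used in the paper:

FS = \frac{c' + (\gamma z \cos^{2}\beta - u)\,\tan\varphi'}{\gamma z \sin\beta \cos\beta}

Here c' and \varphi' are the effective strength parameters, \gamma the unit weight, z the depth of the potential failure surface, \beta the slope angle, and u the pore pressure: production-induced overpressure raises u, and hydrate dissociation reduces the strength parameters, both lowering FS.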
21. Measurement of 216Po half-life with the CUPID-0 experiment
- Abstract
Rare event physics demands very detailed background control, high-performance detectors, and custom analysis strategies. Cryogenic calorimeters combine all these ingredients very effectively, representing a promising tool for next-generation experiments. CUPID-0 is one of the most advanced examples of this technique, having demonstrated its potential with several results obtained with limited exposure. In this paper, we present a further application. Exploiting the analysis of delayed coincidences, we can identify the signals caused by the 220Rn-216Po decay sequence on an event-by-event basis. The analysis of these events allows us to extract the time differences between the two decays, leading to a new evaluation of the 216Po half-life (the underlying exponential-decay fit is sketched after this entry).
- Published
- 2021
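A sketch of the statistical core of the delayed-coincidence analysis: the time differences between the 220Rn decay and the subsequent 216Po decay follow an exponential law, so the mean lifetime is estimated by the sample mean and converted to a half-life. The value used to generate the toy data is illustrative, not the CUPID-0 result:

import numpy as np

rng = np.random.default_rng(1)
t_half_toy = 145.0                       # ms, illustrative input only
dt = rng.exponential(t_half_toy / np.log(2), size=5_000)  # Rn-Po time gaps
tau_hat = dt.mean()                      # MLE of the exponential mean life
print("estimated T_1/2 = %.1f ms" % (tau_hat * np.log(2)))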
22. On multiplicities of cocharacters for algebras with superinvolution
- Abstract
In this paper we deal with finitely generated superalgebras with superinvolution, satisfying a non-trivial identity, whose multiplicities of the cocharacters are bounded by a constant. Along the way, we prove that the codimension sequence of such algebras is polynomially bounded if and only if their colength sequence is bounded by a constant.
- Published
- 2021
23. Phytotoxicity, nematicidal activity and chemical constituents of Peucedanum ostruthium (L.) W.D.J.Koch (Apiaceae)
- Abstract
Peucedanum ostruthium (L.) W.D.J.Koch (Apiaceae) is an alpine medicinal plant traditionally used as a panacea to treat various ailments. For the first time, its phytotoxic and nematotoxic properties were investigated. The inhibitory activity toward germination and seedling growth of the weeds Echinochloa oryzoides (Ard.) Fritsch and Lolium multiflorum Lam. was evaluated by two in vitro assays, carried out on filter paper and soil, using different aqueous extract concentrations (1, 10, and 20 %) and 0.25 g of powder of P. ostruthium leaves, inflorescences, and rhizomes. The study showed that all samples were more effective on L. multiflorum than on E. oryzoides, with p-values = 0.000 on both substrate types. Nevertheless, in all cases the soil mitigated the P. ostruthium effects. Regarding nematicidal activity, the leaf extract was the most active against larvae and adults of the nematode Panagrolaimus rigidus: according to the motility test, mortality was 85.6 ± 2.7 % and 90.5 ± 3.1 %, respectively, 24 h after treatment. Lastly, NMR and UPLC-HR-MS analyses led to the identification of several compounds in the aqueous extracts, including mono- and di-substituted chlorogenic acids, flavonol glycosides, coumarins, and furanocoumarin glycosides. 5-Caffeoylquinic acid was the most abundant phenolic component in all plant organs.
- Published
- 2021
24. Robust variable selection in the framework of classification with label noise and outliers: Applications to spectroscopic data in agri-food
- Abstract
Classification of high-dimensional spectroscopic data is a common task in analytical chemistry. Well-established procedures like support vector machines (SVMs) and partial least squares discriminant analysis (PLS-DA) are the most common methods for tackling this supervised learning problem. Nonetheless, the interpretation of these models sometimes remains difficult, and solutions based on feature selection are often adopted, as they lead to the automatic identification of the most informative wavelengths. Unfortunately, in some delicate applications like food authenticity, mislabeled and adulterated spectra occur in the calibration and/or validation sets, with dramatic effects on the model development, its prediction accuracy and its robustness. Motivated by these issues, the present paper proposes a robust model-based method that simultaneously performs variable selection and outlier and label noise detection. We demonstrate the effectiveness of our proposal in dealing with three agri-food spectroscopic studies, where several forms of perturbation are considered. Our approach succeeds in diminishing problem complexity, identifying anomalous spectra and attaining competitive predictive accuracy with a very low number of selected wavelengths.
- Published
- 2021
25. On the longest common prefix of suffixes in an inverse Lyndon factorization and other properties
- Abstract
The Lyndon factorization of a word has been largely studied, and variants of it have recently been introduced and investigated with different motivations. In particular, the canonical inverse Lyndon factorization ICFL(w) of a word w, introduced in [1], maintains the main properties of the Lyndon factorization, since it can be computed in linear time and is uniquely determined. In this paper we investigate new properties of this factorization with the aim of exploring their use in some classical queries on w. The main property we prove concerns a classical query on words: we prove that there are relations between the length of the longest common prefix (or longest common extension) lcp(x,y) of two different suffixes x,y of a word w and the maximum length M of two consecutive factors of ICFL(w). More precisely, M is an upper bound on the length of lcp(x,y) (a simple lcp sketch follows this entry). A main tool used in the proof of the above result is a property that we state for factors m_i with nonempty borders in ICFL(w): a nonempty border of m_i cannot be a prefix of the next factor m_{i+1}. Another interesting result relates the sorting of global suffixes, i.e., suffixes of a word w, and the sorting of local suffixes, i.e., suffixes of products of factors in ICFL(w). This is the counterpart for ICFL(w) of the compatibility property, proved in [2,3] for the Lyndon factorization. Roughly, the compatibility property allows us to extend the mutual order between suffixes of products of the (inverse) Lyndon factors to the suffixes of the whole word. The last property we prove focuses on the Lyndon factorizations of a word and its factors. It suggests that the Lyndon factorizations of two words sharing a common overlap could be used to capture the common overlap of these two words.
- Published
- 2021
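A minimal sketch of the classical longest-common-prefix query that the bound above concerns; computing ICFL(w) and the bound M themselves is beyond this snippet and assumed to be handled by an external implementation:

def lcp(x, y):
    # length of the longest common prefix of two strings
    n = 0
    while n < min(len(x), len(y)) and x[n] == y[n]:
        n += 1
    return n

w = "banana"
suffixes = [w[i:] for i in range(len(w))]
print(lcp(suffixes[1], suffixes[3]))     # lcp("anana", "ana") = 3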
26. MEET-LM: A method for embeddings evaluation for taxonomic data in the labour market
- Abstract
Taxonomies are a mainstay of the semantic web, as they organise knowledge into concepts linked by IS-A relationships. However, keeping such hierarchies updated and representative of the domain from which they are drawn is still a time-consuming, costly and error-prone activity. Here, word embeddings have proven effective in capturing lexical and semantic similarities for enriching taxonomies from text data. This, in turn, requires evaluating the generated embeddings to estimate the extent to which they encode the semantic similarity derived from the hierarchy itself. In this paper, we propose and implement MEET-LM, a methodology that aims at generating and evaluating embeddings from a text corpus while preserving the co-hyponymy relations synthesised from a domain-specific taxonomy (the evaluation idea is sketched after this entry). We apply MEET-LM to a real-life dataset of 2M+ vacancies related to ICT jobs, framed within the research activities of an EU project that collects millions of Online Job Vacancies and classifies them within the European standard hierarchy ESCO. To show that MEET-LM is useful in practice, we also trained a neural network to classify co-hyponym relations using the selected embeddings as features. Our experiments reach 99.4% accuracy and an 86.5% F1-score.
- Published
- 2021
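A toy sketch of the evaluation idea: embeddings preserve co-hyponymy when sibling concepts under the same taxonomy node score closer than unrelated ones. The vectors and occupation labels below are fabricated for illustration and are not ESCO data or trained embeddings:

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

emb = {   # fabricated vectors standing in for trained embeddings
    "java_developer":   np.array([0.9, 0.1, 0.0]),
    "python_developer": np.array([0.8, 0.2, 0.1]),
    "truck_driver":     np.array([0.1, 0.9, 0.3]),
}
print("co-hyponyms:", cosine(emb["java_developer"], emb["python_developer"]))
print("unrelated:  ", cosine(emb["java_developer"], emb["truck_driver"]))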
27. Risk Parity with Expectiles
- Abstract
A recently popular approach to portfolio selection aims at diversifying risk by looking for so-called Risk Parity portfolios. These are defined by the condition that the risk contributions of all assets to the global risk of the portfolio are equal (sketched for the volatility case after this entry). The Risk Parity approach was originally introduced for the volatility risk measure. In this paper we consider expectiles as risk measures, we refine results on their differentiability and additivity, and we show how to define Risk Parity portfolios when expectiles are used. Furthermore, we propose three different classes of methods for practically finding Risk Parity portfolios with respect to expectiles, and we compare the accuracy and efficiency of these methods on real-world data. Expectiles are also used as risk measures in the classical risk-return approach to portfolio selection, for which we present a new linear programming formulation.
- Published
- 2021
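A sketch of the Risk Parity condition in the original volatility setting recalled above: weights are sought so that every asset contributes equally to portfolio volatility. The covariance matrix is an illustrative assumption, and the paper's expectile-based contributions and LP formulation are not reproduced:

import numpy as np
from scipy.optimize import minimize

Sigma = np.array([[0.04, 0.01],          # toy 2-asset covariance matrix
                  [0.01, 0.09]])

def risk_contributions(w):
    sigma_p = np.sqrt(w @ Sigma @ w)     # portfolio volatility
    return w * (Sigma @ w) / sigma_p     # contributions summing to sigma_p

def objective(w):                        # zero when all contributions match
    rc = risk_contributions(w)
    return ((rc - rc.mean()) ** 2).sum()

res = minimize(objective, x0=np.array([0.5, 0.5]),
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1},
               bounds=[(0.0, 1.0), (0.0, 1.0)])
print("weights:", res.x, "contributions:", risk_contributions(res.x))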
28. Demand-side vs. supply-side technology policies: Hidden treatment and new empirical evidence on the policy mix
- Abstract
This paper provides new empirical evidence about the impact of various technology policies on firms' innovative behaviour. We take into consideration the role of policies for innovative activities and focus on their interaction. While supply-side policies such as R&D subsidies and tax credits have been extensively discussed in the literature and empirically investigated, the analysis of innovative public procurement is a growing strand of the literature that still lacks robust empirical evidence. In this paper, we replicate the existing results on supply-side policies, provide fresh empirical evidence on the outcome of innovative public procurement, and address the issue of possible interaction among the various tools. When controlling for the interaction with other policies, supply-side subsidies cease to be as effective as reported in previous studies, and innovative public procurement seems to be more effective than other tools. The preliminary evidence suggests that technology policies exert the highest impact when different policies interact.
- Published
- 2015
29. A digital tool based on genetic algorithms and limit analysis for the seismic assessment of historic masonry buildings
- Abstract
New technologies are changing the way engineers work within the construction sector. Newly developed software solutions have provided effective methods to explore the design space at the interface between Structural Engineering and Architecture, allowing more efficient design strategies. These technologies are based on the integration of parametric generation and visualisation of geometries with powerful numerical solvers, employing user-customised routines. While the construction industry is rapidly moving the design of new constructions towards a fully digitalised process, the assessment and analysis of existing structures with such tools are still largely unexplored. In this context, a visual script for the structural assessment of out-of-plane mechanisms in historic masonry structures subject to seismic loading has recently been proposed by the authors. It relies on two successive steps of analysis, which are integrated into a digital workflow. Datasets describing the geometric configuration of masonry structures are employed to automatically generate a non-linear Finite Element (FE) model and investigate possible collapse modes. A preliminary global analysis is performed using the commercial software ABAQUS CAE. This, in combination with the Control Surface Method (CSM), allows identifying the most likely failure mechanisms, which are described by the geometry of the macro-blocks. The parametric modelling of the macro-block geometry allows exploring the domain of possible solutions using the upper bound method of limit analysis. A Genetic Algorithms (GA) solver is used to refine the geometry of the macro-blocks and search for the minimum of the upper-bound load multipliers which guarantees equilibrium. The script is implemented in the visual programming environment offered by Rhino3D+Grasshopper. In this paper, a set of parametric analyses considering various input variables, such as friction coefficient and opening incidence, is performed to verify both the …
- Published
- 2020
30. Experimental method to monitor temperature stability of SiPMs operating in conditions of extremely high dark count rate
- Abstract
The use of Silicon Photomultipliers (SiPMs) in high radiation environments (e.g. detectors at collider experiments) or in outdoor conditions (e.g. LIDAR applications) requires dealing with a high level of dark count noise. An undesired effect emerging under these conditions is the self-heating of the SiPM due to large power dissipation, which causes variations of the breakdown voltage and thus affects operating parameters such as gain and PDE. A method to evaluate the heat dissipation properties of different SiPM packages and the temperature stability of SiPMs during operation under extremely high dark count rates (larger than 30 GHz) is presented in this paper. Starting from a condition in which the SiPM is at thermal equilibrium with the environment, a change in the current measured under intense illumination with a LED can be attributed to a temperature variation of the SiPM due to self-heating (a back-of-the-envelope version of this estimate is sketched after this entry). Under certain assumptions, a quantitative estimate of the local change of temperature can be obtained if certain parameters of the SiPM are known: gain, photon detection efficiency, equivalent charge factor and drift of the SiPM breakdown voltage with temperature. The method is applied to evaluate the thermal conductivity of different SiPM package configurations, which is a key requirement for reliable operation in conditions of extremely high dark count rate. Among the SiPM package configurations tested, the one consisting of a ceramic substrate coupled to a copper heat sink provided the best performance, with less than 0.5 °C temperature variation for 100 mW of dissipated power.
- Published
- 2020
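A back-of-the-envelope sketch of the estimate described above: under constant LED illumination, a drift in the measured current is read as a breakdown-voltage shift and hence a temperature change. All numbers are illustrative assumptions, not measured values, and the linear current-overvoltage model is a deliberate simplification of the gain/PDE dependence discussed in the paper:

V_bias  = 56.0       # V, fixed bias voltage (illustrative)
V_bd0   = 52.0       # V, breakdown voltage at thermal equilibrium (illustrative)
I0      = 2.000e-3   # A, LED-induced current right after switch-on
I1      = 1.995e-3   # A, current after self-heating settles
dVbd_dT = 0.035      # V/K, assumed breakdown drift with temperature

k = I0 / (V_bias - V_bd0)     # current per volt of overvoltage (assumed linear)
dV_bd = (I0 - I1) / k         # inferred breakdown-voltage shift
print("estimated self-heating: %.2f K" % (dV_bd / dVbd_dT))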
31. A fully automated approach to a complete Semantic Table Interpretation
- Abstract
In recent years, there has been an increasing interest in extracting and annotating tables on the Web. This activity allows the transformation of text data into machine-readable formats to enable the execution of various artificial intelligence tasks, e.g. semantic search and dataset extension. Semantic Table Interpretation is the process of annotating elements in a table. Current approaches are mainly based on lexical matching algorithms that rely on metadata associated with tables or custom Knowledge Graphs. Their main limitations are due to the lack of metadata, the little use of contextual semantics, and the incompleteness of the proposed methods, which do not include all the necessary steps. In this paper, we propose a comprehensive approach and a tool that provides an unsupervised method to annotate independent tables, possibly without a header row or other external information. The approach is based on the definition of a context created from the elements within the table, used to discriminate among matching entities found in shared Knowledge Graphs and to create high-quality annotations (a toy version of this disambiguation is sketched after this entry). The approach has achieved excellent results in an international challenge, thus proving its effectiveness.
- Published
- 2020
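A toy sketch of the row-context idea above: candidate entities for a cell are ranked by the overlap between their description and the other values in the same row. The candidates and descriptions are fabricated for illustration:

def score(candidate_desc, row_context):
    # overlap between a candidate's description and the other row values
    tokens = set(candidate_desc.lower().split())
    return len(tokens & {t.lower() for t in row_context})

row = ["Paris", "France", "2161000"]
candidates = {
    "Paris (city)":      "capital city of France",
    "Paris (mythology)": "prince of Troy in Greek mythology",
}
best = max(candidates, key=lambda c: score(candidates[c], row))
print(best)   # the French-capital reading wins on row-context overlap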
32. 3D simulation of Vajont disaster. Part 1: Numerical formulation and validation
- Abstract
This work presents a numerical method for the simulation of landslide-generated impulse waves and its application to the historical Vajont case study. The computational tool is based on the Particle Finite Element Method (PFEM), a Lagrangian strategy that combines the finite element solution of the governing equations with an efficient remeshing strategy to deal with large deformation problems. After presenting the numerical formulation, different landslide impulse wave problems, with Froude numbers ranging from 0.5 to 2.8, are analyzed to validate the proposed methodology. The computational method is shown to reproduce accurately the landslide runout, the momentum transfer between the sliding material and the impounded water, and the consequent wave propagation observed in experimental physical models. Then, the PFEM model is applied to the numerical simulation of the Vajont disaster, which is analyzed with a fully-resolved three-dimensional model. The numerical results are discussed and compared to the post-event observations and to the numerical results of other computational methods. The results in terms of landslide velocity and runout, geometry of the deposit, maximum water runup, dam overtopping wave, and water discharge in the downstream valley are in good agreement with observations and reconstructions. The calibration and validation performed for this study form the basis for the PFEM analyses presented in a companion paper, aimed at simulating different scenarios of the Vajont rockslide considered in the experimental tests performed a year before the disaster.
- Published
- 2020
33. 3D simulation of Vajont disaster. Part 2: Multi-failure scenarios
- Abstract
Prediction of multi-hazard slope stability events requires an informed and judicious choice of the possible scenarios. An incorrect definition of landslide conditions in terms of expected failure volume, material behavior, or boundary conditions can lead to inaccurate predictions and, in turn, to wrong engineering and risk management decisions. The reduced-scale experiments performed two years before the Vajont disaster used a material not representative of the actual rockslide behavior and failed to consider the simultaneous failure of the whole landslide body. Based on these inappropriate assumptions, the physical models led to wrong estimates of the safe operational level for the Vajont reservoir. This work uses the Particle Finite Element Method (PFEM) to analyze the implications of the wrong hypotheses considered in the pre-event experiments, simulating numerically the Vajont disaster for different sliding volumes and material properties. The use of the PFEM for the accurate assessment of the consequences of landslides impinging on water reservoirs has already been validated in a companion paper. In this work, we demonstrate the capabilities of a robust and reliable numerical modeling approach for the simulation of different scenarios, assessing what could have been a safe operational reservoir level in the case of a landslide-generated impulse wave. The three-dimensional analyses were run with a high mesh resolution and demonstrate the suitability and robustness of the PFEM model for the simulation of large-scale landslide and multi-hazard events.
- Published
- 2020
34. Dynamic rockfall risk analysis
- Abstract
Rockfall is a dangerous hazard on steep, susceptible slopes. However, it is not easy to identify potential rockfalls on mountain cliffs. Traditionally, rockfall risk analysis ignores the temporal evolution of risk due to changes in the elements at risk or changes in the annual probability, which may lead to an underestimation of risk over time. In this paper, we present an innovative approach for dynamic risk analysis, and we demonstrate it for the Shenxianju scenic area case study, where rockfall represents a threat to thousands of tourists each year. Moreover, we studied the role of rock mass quality in controlling the rockfall potential and rockfall volume, since this may lead to significant changes of rockfall risk in space. We describe the use of Unmanned Aerial Vehicles, Terrestrial LiDAR and detailed field surveys to identify 34 potential rockfalls on slopes where historical rockfalls have occurred. Within ignimbrite and rhyolite rock masses, rockfall blocks range in size from large to very large. Andesite-derived rockfalls are characterized by medium to large block sizes. Faulted and fractured ignimbrite and rhyolite rock masses within the fault damage zone exhibit small to very small rockfall block sizes. Based on the identified potential rockfalls in the study area, we quantified the dynamic risk by considering the temporal–spatial changes of tourist activity. To demonstrate the methodology, two potential rockfalls on two heavily used tourist routes were selected. For these scenarios we quantified the annual probability of occurrence, the reach probability, the dynamic temporal–spatial probability, and the vulnerability of tourists (their combination is sketched after this entry). The dynamic temporal probability was also calculated considering different visiting periods (e.g., working days, weekends, holidays), showing significant changes in the risk level among the different visiting periods and over time. Some of the investigated scenarios were within the ALARP zone in the first …
- Published
- 2020
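A sketch of the dynamic risk product suggested above, combining the annual event probability, the reach probability, the time-dependent temporal–spatial (exposure) probability, and the vulnerability for each visiting period; all values are illustrative assumptions, not results from the study:

P_event, P_reach, vulnerability = 0.05, 0.6, 0.8   # illustrative values
exposure = {"working days": 0.02, "weekends": 0.10, "holidays": 0.25}

for period, P_ts in exposure.items():
    risk = P_event * P_reach * P_ts * vulnerability
    print(f"{period:>12}: risk = {risk:.2e}")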
36. Proteomics turns functional
- Abstract
Proteomics is acquiring a pivotal role in the comprehensive understanding of human biology. Biochemical processes involved in complex diseases, such as neurodegenerative diseases, diabetes and cancer, can be identified by combining proteomics analysis and bioinformatics tools. In the last ten years, the main output of differential proteomics investigations evolved from long lists of proteins to the generation of new hypotheses and their functional verification. The Journal of Proteomics participated in this progress, reporting more and more biologically oriented papers with functional interpretation of proteomics data. This change in the field was due both to technological development and to novel strategies in exploiting the deep characterization of proteomes. In this review, we explore several approaches that allow proteomics to turn functional. In particular, systems biology tools for data analysis are now routinely used to interpret results, thus defining the biological meaning of differentially abundant proteins. Moreover, by considering the importance of protein-protein interactions and the composition of macromolecular complexes, interactomics is complementing the information given by differential quantitative proteomics. Eventually, terminomics is unveiling new functions for cleaved proteoforms by analyzing the effect of proteolysis globally. Significance: Proteomics is rapidly evolving not only technologically but also strategically. The correct interpretation of proteomics data can reveal new functions of proteins in several biological backgrounds. Systems biology tools allow researchers to formulate new hypotheses to be further functionally tested. Interactomics is shedding new light on protein complexes truly involved in biochemical pathways and how their alteration can lead to dysfunctionality (in disease pathogenesis, for example). Terminomics is revealing the function of newly discovered proteoforms and attributing a novel role to proteolysis. This review …
- Published
- 2019
37. Petrographic classification of sand and sandstone
- Abstract
Petrographic classifications of sand and sandstone proposed more than half a century ago are still in use, although they were formulated at a time when depositional and post-depositional sedimentary processes were poorly understood, and before the relationships between tectonics and sedimentation could be interpreted in modern plate-tectonic terms. As a consequence, too many scientific articles and technical reports are still encumbered with obsolete concepts, graphical tools, and ambiguous terminology that make sediment descriptions awkward and misleading. A renovation that treasures the legacy of the pioneers is required. The descriptive petrographic classification of sand and sandstone proposed in this paper is based on the quasi-universally used Gazzi-Dickinson point-counting method, and simply translates ternary compositions of quartz, feldspar, and lithic fragments into words without introducing any new names. The classic QFL plot is subdivided into 15 fields - labelled by adjectives introduced long ago by K.A.W. Crook and endorsed by W.R. Dickinson and more recently by G.J. Weltje - which reflect the relative abundances of the three main framework components (provided they exceed 10%QFL). According to standard use, the less abundant component goes first, the more abundant last (e.g., litho-feldspatho-quartzose composition translates into Q > F > L > 10%QFL); this naming rule is sketched in code after this entry. For lithic-rich sand and sandstone, information on the prevailing rock fragment type can be added by an additional free adjective (e.g., metamorphiclastic, carbonaticlastic), as proposed long ago by R.V. Ingersoll. For lithic-poor feldspatho-quartzose and quartzose sand and sandstone, further formal subdivisions are proposed based on the Q/F ratio, thus reaching a total of 18 compositional fields overall. Modern sand known to be derived from different source rocks and found in the world's major rivers, deserts, and deep-sea fans fits in the pigeonholes defined by the relative abundance of quartz, feldspar …
- Published
- 2019
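A sketch of the naming rule quoted above: components exceeding 10%QFL are listed from least to most abundant, with the most abundant taking the terminal adjective. The adjective forms follow common usage and may differ in detail from the paper's full 18-field scheme:

PREFIX = {"Q": "quartzo", "F": "feldspatho", "L": "litho"}
FINAL  = {"Q": "quartzose", "F": "feldspathic", "L": "lithic"}

def qfl_name(q, f, l):
    total = q + f + l
    # keep components above 10%QFL, sorted least to most abundant
    parts = sorted((v / total, k) for k, v in zip("QFL", (q, f, l)) if v / total > 0.10)
    keys = [k for _, k in parts]
    if len(keys) == 1:
        return FINAL[keys[0]]
    return "-".join([PREFIX[k] for k in keys[:-1]] + [FINAL[keys[-1]]])

print(qfl_name(60, 25, 15))   # -> litho-feldspatho-quartzose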
38. A novel option for reducing the optical density of liquid digestate to achieve a more productive microalgal culturing
- Abstract
The liquid fraction of digestate produced by agricultural biogas plants is rich in macro- and micronutrients that are valuable for the culturing of microalgae. Nonetheless, the high ammonium concentration may cause toxicity, and the high optical density may reduce light penetration, negatively affecting the biomass production rate. Dilution with fresh water has frequently been suggested as a means of improving the digestate characteristics in view of microalgal culturing. In this paper, the feasibility of culturing microalgae on undiluted raw digestate, or on digestate pretreated by stripping and adsorption, was investigated. First, adsorption tests were performed using commercial activated carbon from wood in order to identify appropriate conditions for optical density (OD) reduction. Up to 88% reduction was obtained by dosing 40 g L−1 after 24 h of contact time. Then, culturing tests were performed on a microalgal inoculum including mainly Chlorella spp. and Scenedesmus spp. under controlled temperature and light conditions for 6–14 weeks. Raw, stripped, and stripped-and-adsorbed digestate samples were tested. The biomass production rate increased from 27 ± 13 mg TSS L−1 d−1 on raw digestate, to 82 ± 18 mg TSS L−1 d−1 using stripped digestate, and to 220 ± 78 mg TSS L−1 d−1 using the stripped-and-adsorbed digestate. Moreover, nitrification was constantly suppressed when using the stripped-and-adsorbed digestate, while significant nitrite build-up was observed when using raw and stripped digestate. These results suggest that microalgae are able to grow on the raw digestate, provided that long hydraulic retention times are applied. A much faster growth (up to 10 times) can be obtained by pretreating the liquid fraction of digestate by stripping and adsorption, which may be an effective means of improving the …
- Published
- 2017
39. Orthology Correction for Gene Tree Reconstruction: Theoretical and Experimental Results
- Abstract
We consider how orthology/paralogy information can be corrected in order to represent a gene tree, a problem that has recently gained interest in phylogenomics. Interestingly, the problem is related to the Minimum CoGraph Editing problem on the relation graph that represents the orthology/paralogy information, where we want to minimize the number of edit operations on the given relation graph in order to obtain a cograph, i.e. a graph with no induced P4 (a brute-force check of this property is sketched after this entry). In this paper we provide both theoretical and experimental results on the Minimum CoGraph Editing problem. On the theoretical side, we provide approximation algorithms for bounded-degree relation graphs, both for the general problem and for the problem restricted to deletion of edges. On the experimental side, we present a genetic algorithm for Minimum CoGraph Editing and we provide an experimental evaluation of the genetic algorithm on synthetic data.
- Published
- 2017
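A brute-force sketch of the cograph target of Minimum CoGraph Editing, using the classical characterization of cographs as the graphs with no induced P4. Only the recognition check is shown; the editing problem itself and the paper's genetic algorithm are not reproduced:

from itertools import combinations, permutations

def has_induced_p4(adj):
    # adj: dict of dicts of booleans for an undirected graph
    for quad in combinations(list(adj), 4):
        for a, b, c, d in permutations(quad):
            path = adj[a][b] and adj[b][c] and adj[c][d]
            chords = adj[a][c] or adj[a][d] or adj[b][d]
            if path and not chords:
                return True
    return False

V = "abcd"
E = {("a", "b"), ("b", "c"), ("c", "d")}
adj = {u: {v: (u, v) in E or (v, u) in E for v in V} for u in V}
print(has_induced_p4(adj))   # True: this relation graph needs editing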
40. Radiation hardness assurance of the CLARO8 front-end chip for the LHCb RICH detector upgrade
- Abstract
The CLARO8 chip has been designed for single-photon counting in the upgraded RICH detector of the LHCb experiment at CERN. The chip has 8 channels with 5 ns peaking time and a recovery time better than 25 ns. Each channel is made of a charge amplifier with 2-bit settable attenuation, plus a comparator with a 6-bit settable threshold, and the configuration register is protected against Single Event Upsets by triple modular redundancy. In order to ensure stable operation of the upgraded RICH detectors over the expected lifetime of the experiment after the upgrade, the performance of the CLARO8 in high radiation fields has been assessed. These chips will be exposed, during the whole upgrade running phase, to a total ionizing dose of 200 krad, a neutron fluence of 3×10^12 1 MeV neq/cm^2 and a high energy hadron fluence of 1.2×10^12 cm^-2. Systematic irradiation campaigns have been performed using ions, protons and mixed-field high-energy hadron beams. This paper describes the radiation hardness campaign of the CLARO8 chips and the main results of their extensive characterisation.
- Published
- 2017
42. Determinants of PhD holders’ use of social networking sites: An analysis based on LinkedIn
- Abstract
Social networking sites are an increasingly important tool for career development, especially for highly skilled individuals. Moreover, they may constitute valuable sources of data for scholars and policy makers. However, little research has been conducted on highly skilled individuals' use of these social networks. In this paper, we focus on PhD graduates, who play an important role in the innovation process, in particular in knowledge creation and diffusion. We seek to increase understanding of the determinants that induce PhD graduates to register on LinkedIn and to develop wider or narrower networks. Controlling for the most relevant individual characteristics, we find that (i) PhD holders moving to the industry sector are more likely to have a LinkedIn account and to have a larger network of connections on LinkedIn; (ii) PhD holders are more likely to use LinkedIn if they have co-authors abroad; and (iii) they have wider networks if they have moved abroad after obtaining their PhD. In light of our analyses, we discuss the usefulness of – and main concerns about – the adoption of LinkedIn as a new data source for research and innovation studies.
- Published
- 2017
43. A toolbox for simpler active membrane algorithms
- Abstract
We show that recogniser P systems with active membranes can be augmented with a priority over their set of rules and any number of membrane charges without loss of generality, as they can be simulated by standard P systems with active membranes, in particular using only two charges. Furthermore, we show that more general accepting conditions, such as sending out several, possibly contradictory results and keeping only the first one, or rejecting by halting without output, are also equivalent to the standard accepting conditions. The simulations we propose incur no significant loss of efficiency, and thus the results of this paper can hopefully simplify the design of algorithms for P systems with active membranes.
- Published
- 2017
45. Deep learning for logo recognition
- Abstract
In this paper we propose a method for logo recognition using deep learning. Our recognition pipeline is composed of a logo region proposal stage followed by a Convolutional Neural Network (CNN) specifically trained for logo classification, even when logos are not precisely localized. Experiments are carried out on the FlickrLogos-32 database, and we evaluate the effect on recognition performance of synthetic versus real data augmentation and of image pre-processing. Moreover, we systematically investigate the benefits of different training choices such as class balancing, sample weighting and explicit modeling of the background class (i.e. no-logo regions). Experimental results confirm the feasibility of the proposed method, which outperforms the state-of-the-art methods considered.
- Published
- 2017
46. Higher order assortativity in complex networks
- Abstract
Assortativity was first introduced by Newman and has since been extensively studied and applied to many real-world networked systems. Assortativity is a graph metric that describes the tendency of high degree nodes to be directly connected to high degree nodes, and low degree nodes to low degree nodes. It can be interpreted as a first-order measure of the connection between nodes, i.e. the first autocorrelation of the degree–degree vector. Even though assortativity has been used so extensively, to the authors' knowledge no attempt has been made to extend it theoretically. Indeed, Newman assortativity is about “being adjacent”, but even though two nodes may not be connected through an edge, they could still have a strong level of connectivity through a large number of walks and paths between them. This is the scope of our paper. We introduce, for undirected and unweighted networks, higher order assortativity by extending the Newman index based on a suitable choice of the matrix driving the connections. Higher order assortativity can be defined for paths, shortest paths and random walks of a given length (a walk-based version is sketched after this entry). The Newman assortativity is a particular case of each of these measures when the matrix is the adjacency matrix, or, in other words, when the autocorrelation is of order 1. Our higher order assortativity indices help discriminate networks having the same Newman index and may reveal new topological network features. Applications to airline networks (Italy and US) and to the Enron email network, as well as examples and simulations, are discussed.
- Published
- 2017
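A sketch of higher-order assortativity in the spirit of the abstract above: replace the adjacency matrix with A^ℓ (walks of length ℓ) and compute the degree–degree correlation weighted by that matrix, so that ℓ = 1 recovers the Newman index. The normalisation may differ in detail from the paper's definition:

import numpy as np

def assortativity(A, ell=1):
    W = np.linalg.matrix_power(A, ell).astype(float)   # walks of length ell
    deg = A.sum(axis=1).astype(float)
    total = W.sum()
    px = W.sum(axis=1) / total            # weight of each node as a walk end
    mu = px @ deg                          # weighted mean degree
    var = px @ (deg - mu) ** 2             # weighted degree variance
    cov = ((deg - mu) @ W @ (deg - mu)) / total
    return cov / var

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(assortativity(A, 1), assortativity(A, 2))

On this 4-node path the index is -0.5 at order 1 (the classical Newman value) but positive at order 2, illustrating how higher orders can discriminate networks sharing the same first-order index.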
47. Large Age-Gap face verification by feature injection in deep networks
- Abstract
This paper introduces a new method for face verification across large age gaps, together with a dataset containing variations of age in the wild: the Large Age-Gap (LAG) dataset, with images ranging from child/young to adult/old. The proposed method exploits a deep convolutional neural network (DCNN) pre-trained for the face recognition task on a large dataset and then fine-tuned for the large age-gap face verification task. Fine-tuning is performed in a Siamese architecture using a contrastive loss function (sketched after this entry). A feature injection layer is introduced to boost verification accuracy, showing the ability of the DCNN to learn a similarity metric leveraging external features. Experimental results on the LAG dataset show that our method is able to outperform the state-of-the-art face verification solutions considered.
- Published
- 2017
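A sketch of the contrastive loss named above for the Siamese fine-tuning: matching pairs (y = 1) are pulled together, non-matching pairs (y = 0) are pushed beyond a margin m. The margin and feature vectors are illustrative assumptions:

import numpy as np

def contrastive_loss(f1, f2, y, m=1.0):
    # y = 1: matching pair, penalise distance; y = 0: push beyond margin m
    d = np.linalg.norm(f1 - f2)
    return y * d ** 2 + (1 - y) * max(0.0, m - d) ** 2

a, b = np.array([0.2, 0.9]), np.array([0.25, 0.85])
print(contrastive_loss(a, b, y=1))   # same identity: small penalty
print(contrastive_loss(a, b, y=0))   # different identity: penalised for closeness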
48. Option pricing under deformed Gaussian distributions
- Abstract
In the financial literature there have been many attempts to overcome the option pricing drawbacks that affect the Black and Scholes model. Starting from the Tsallis deformation of the usual exponential function (recalled after this entry), this paper presents, in a complete market setup, a class of deformed geometric Brownian motions flexible enough to reproduce fat tails and to capture the volatility behavior observed in models that consider both stochastic volatility and jumps.
- Published
- 2016
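For reference, the Tsallis q-deformed exponential that the abstract starts from; q → 1 recovers the ordinary exponential, while q > 1 produces the fat tails mentioned above (the notation is the standard Tsallis one, not necessarily the paper's):

\exp_q(x) = \bigl[\,1 + (1-q)\,x\,\bigr]_{+}^{\frac{1}{1-q}}, \qquad \lim_{q \to 1} \exp_q(x) = e^{x}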
49. Degenerate tetrahedra removal
- Abstract
Standard 3D mesh generation algorithms may produce a low-quality tetrahedral mesh, i.e., a mesh where the tetrahedra have very small dihedral angles (their computation is sketched after this entry). In this paper, we propose a series of operations to recover these badly-shaped tetrahedra. In particular, we focus on the shape of these undesired mesh elements by proposing a novel method to distinguish and classify them. For each of these configurations, we apply a suitable sequence of operations to reach a higher mesh quality. Finally, we employ a randomized algorithm to avoid locks and loops in the procedure. The reliability of the proposed mesh optimization algorithm is demonstrated numerically on several examples.
- Published
- 2016
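A sketch of the quality measure behind the abstract above: the six dihedral angles of a tetrahedron, computed from outward face normals (the angle between two faces sharing an edge is π minus the angle between their outward normals). The regular-tetrahedron reference value is the known ≈ 70.53°:

import numpy as np

def dihedral_angles(p):                      # p: 4x3 array of vertex coordinates
    faces = [(1, 2, 3), (0, 3, 2), (0, 1, 3), (0, 2, 1)]  # face i opposite vertex i
    normals = []
    for i, (a, b, c) in enumerate(faces):
        n = np.cross(p[b] - p[a], p[c] - p[a])
        if np.dot(n, p[a] - p[i]) < 0:       # orient outward, away from vertex i
            n = -n
        normals.append(n / np.linalg.norm(n))
    angles = []
    for i in range(4):
        for j in range(i + 1, 4):            # any two faces share exactly one edge
            cosang = np.clip(np.dot(normals[i], normals[j]), -1.0, 1.0)
            angles.append(np.degrees(np.pi - np.arccos(cosang)))
    return angles

regular = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
print(dihedral_angles(regular))              # all ~70.53 degrees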
50. FLARES: A flexible scintillation light apparatus for rare event searches
- Abstract
FLARES is a project for an innovative detector technology to be applied to rare event searches, and in particular to neutrinoless double beta decay experiments. Its novelty is the enhancement and optimization of the collection of the scintillation light emitted by ultra-pure crystals through the use of arrays of high-performance silicon photodetectors cooled to 120 K. This would provide scintillation detectors with 1%-level energy resolution, with the advantages of a technology offering relatively simple, low-cost mass scalability and powerful background reduction handles, as requested by future neutrinoless double beta decay experimental programs. The performance of a first production of matrices of Silicon Drift Detectors is presented and discussed in this paper.
- Published
- 2016