330 results
Search Results
2. Graphene oxide: A substrate for optimizing preparations of frozen-hydrated samples
- Author
-
Radosav S. Pantelic, Jürgen M. Plitzko, Jannik C. Meyer, Wolfgang Baumeister, and Ute Kaiser
- Subjects
Materials science, Oxide, Nanotechnology, Structural Biology, Graphene oxide paper, Histocytological Preparation Techniques, Graphene, Carbon nanofiber, Graphene foam, Oxides, Microscopy, Electron, Amorphous carbon, Graphite, Carbon, Graphene nanoribbons
- Abstract
Graphene oxide is a hydrophilic derivative of graphene to which biological macromolecules readily attach, with properties superior to those of the amorphous carbon films commonly used in electron microscopy. The single-layered crystalline lattice of carbon is highly electron transparent and exhibits higher conductivity than amorphous carbon. Hence, graphene oxide is a particularly promising substrate for the examination of biological materials by electron microscopy. In this manuscript we compare graphene oxide films to commonly used amorphous carbon films, describing the use of graphene in optimizing the preparation of unstained, vitrified biological macromolecules.
- Published
- 2010
- Full Text
- View/download PDF
3. Comment on A theoretical foundation of ambiguity measurement
- Subjects
ambiguity, model uncertainty, probability weighting
- Abstract
In this paper, we study asymptotic expansions for distorted probabilities under ambiguity, revisiting the framework and analysis of Izhakian (2020b). We argue that the first order terms in these expansions need to be corrected and provide alternatives. We also revisit later results in that paper on the separation of ambiguity and ambiguity attitudes. We argue that a crucial lemma is flawed, implying that Izhakian's ambiguity measure ℧² is not an equivalent way of representing the preferences it is supposed to represent.
- Published
- 2023
- Full Text
- View/download PDF
4. The opportunities and challenges of behavioral field research on misconduct
- Author
-
Lamar Pierce, Ian Larkin, Ann E. Tenbrunsel, Shaul Shalvi, Microeconomics (ASE, FEB), Experimental and Political Economics / CREED (ASE, FEB), and Faculteit Economie en Bedrijfskunde
- Subjects
Organizational Behavior and Human Resource Management, Misconduct, Behavioral data, Field research, Engineering ethics, Psychology, Applied Psychology, Strengths and weaknesses
- Abstract
Research on behavioral misconduct and ethics across many fields has provided important managerial and policy implications, but has primarily relied on laboratory experiments and survey-based methods to quantify and explain predictors of and mechanisms behind such behavior. This introduction to the Special Issue explains how these more common methods can be complemented by studying misconduct through behavioral data from field settings. We present four classes of behavioral field research, describe their relative strengths and weaknesses, and provide examples from both the Special Issue papers and some of the best preexisting papers. We then explain the key opportunities and challenges facing behavioral field researchers and the tools that address them. Finally, we argue that a combination of methodological approaches will provide the most robust knowledge set on the determinants, mechanisms, and consequences of misconduct and unethical behavior.
- Published
- 2021
- Full Text
- View/download PDF
5. From inference to design: A comprehensive framework for uncertainty quantification in engineering with limited information
- Author
-
Enrique Miralles-Dolz, M. de Angelis, P.O. Hristov, Dominic Calleja, Ander Gray, Alexander Wimbush, Roberto Rocchetta, Industrial Statistics, Eindhoven MedTech Innovation Center, Security, EAISI Health, and EAISI High Tech Systems
- Subjects
Propagation of uncertainty, Probability bounds analysis, Epistemic uncertainty, Computer science, Mechanical Engineering, Bayesian calibration, Aerospace Engineering, Inference, Computer Science Applications, Control and Systems Engineering, Optimisation under uncertainty, Signal Processing, Uncertainty propagation, Systems engineering, Uncertainty quantification, Engineering design process, Uncertainty reduction, Reliability (statistics), Civil and Structural Engineering
- Abstract
In this paper we present a framework for addressing a variety of engineering design challenges with limited empirical data and partial information. This framework includes guidance on characterising a mixture of uncertainties, together with efficient methodologies to integrate data into design decisions, conduct reliability analysis, and perform risk/reliability-based design optimisation. To demonstrate its efficacy, the framework has been applied to the NASA 2020 uncertainty quantification challenge. The results and discussion in the paper are with respect to this application.
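As a purely illustrative companion to this summary (not the paper's framework), the sketch below shows one common way to propagate a mixture of epistemic (interval-valued) and aleatory (distributional) uncertainty with a double-loop Monte Carlo; the limit-state function, interval and distribution are all hypothetical.

```python
# Illustrative double-loop Monte Carlo for mixed uncertainty (not the paper's
# framework): an epistemic parameter is only known to lie in an interval, an
# aleatory variable has a distribution, and the result is an interval of
# plausible failure probabilities. All names and numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def limit_state(x, theta):
    """Hypothetical performance function: failure when g < 0."""
    return 3.0 + theta - x

theta_interval = (0.0, 1.0)                 # epistemic parameter bounds

failure_probs = []
for theta in np.linspace(*theta_interval, 21):        # outer (epistemic) loop
    x = rng.normal(loc=2.0, scale=1.0, size=100_000)  # inner (aleatory) sampling
    failure_probs.append(np.mean(limit_state(x, theta) < 0.0))

# Epistemic uncertainty leaves an interval of plausible failure probabilities.
print(f"P_f in [{min(failure_probs):.4f}, {max(failure_probs):.4f}]")
```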
- Published
- 2022
6. Long-term labor market returns to upper secondary school track choice: Leveraging idiosyncratic variation in peers’ choices
- Author
-
Jesper Fels Birkelund, Herman G. van de Werfhorst, and Institutions, Inequalities, and Life courses (IIL, AISSR, FMG)
- Subjects
Labor market outcomes, Sociology and Political Science, Inequality, Safety net, Education, Wage, Instrumental variables, Faculty of Social Sciences, Educational tracking, Vocational education, Economics, Humans, Occupations, Socioeconomic status, Schools, Earnings, Salaries and Fringe Benefits, Socioeconomic Factors, Peer effects, Income, Educational Status, Demographic economics
- Abstract
Vocational education and training (VET) is theorized to play a dual role for inequality of labor market outcomes: the role of a safety net and the role of socioeconomic diversion. In this paper, we test these hypotheses by examining the long-term labor market returns to track choice in upper secondary education in Denmark using an instrumental variable approach that relies on random variation in school peers’ educational decisions. We report two main findings. First, VET diverts students on the margin to the academic track away from higher-status but not higher-paying occupations. Second, VET protects students on the margin to leaving school from risks of non-employment and unskilled work, also leading to higher earnings. These results suggest that in countries with a highly compressed wage structure, a strong VET system benefits students unlikely to continue to college, while causing few adverse consequences for students on the margin to choosing academic education.
- Published
- 2022
- Full Text
- View/download PDF
7. Ky-Fan inequality, Nash equilibria in some idempotent and harmonic convex structure
- Author
-
Ilknur Yesilce, Walter Briec, and Sabire Yazıcı Fen Edebiyat Fakültesi
- Subjects
Pure mathematics, Applied Mathematics, Ky-Fan inequality, Stochastic game, Inverse B-convexity (B−1-convexity), Harmonic Structures, Fixed point, Convexity, Nash equilibrium, Analysis, Mathematics
- Abstract
B-convexity is defined as a suitable Peano-Kuratowski limit of linear convexities. An alternative idempotent convex structure called inverse B-convexity was recently proposed in the literature. This paper continues and extends investigations started in those papers. In particular, we focus on the Ky-Fan inequality and prove the existence of a Nash equilibrium for inverse B-convex games. We do this by considering a suitable “harmonic” topological structure which allows us to establish a KKM theorem as well as some important related properties. Among other things, a coincidence theorem is established. The paper also establishes fixed point results and Nash equilibrium properties in the case where two different convex topological structures are merged. It follows that one can consider a large class of games where the players may optimize their payoffs subject to different forms of convexity. Among other things, an inverse B-convex version of the Debreu-Gale-Nikaido theorem is proposed.
- Published
- 2022
8. Multi-element flow-driven spectral chaos (ME-FSC) method for uncertainty quantification of dynamical systems
- Author
-
Arun Prakash, Guang Lin, and Hugo Esquivel
- Subjects
Stochastic discontinuities, Numerical Analysis, Physics and Astronomy (miscellaneous), Applied Mathematics, Multi-element flow-driven spectral chaos (ME-FSC), Stochastic dynamical systems, Stochastic flow map, Computer Science Applications, Computational Mathematics, Modeling and Simulation, Uncertainty quantification, Long-time integration
- Abstract
The flow-driven spectral chaos (FSC) is a recently developed method for tracking and quantifying uncertainties in the long-time response of stochastic dynamical systems using the spectral approach. The method uses a novel concept called 'enriched stochastic flow maps' as a means to construct an evolving finite-dimensional random function space that is both accurate and computationally efficient in time. In this paper, we present a multi-element version of the FSC method (the ME-FSC method for short) to tackle (mainly) those dynamical systems that are inherently discontinuous over the probability space. In ME-FSC, the random domain is partitioned into several elements, and then the problem is solved separately on each random element using the FSC method. Subsequently, results are aggregated to compute the probability moments of interest using the law of total probability. To demonstrate the effectiveness of the ME-FSC method in dealing with discontinuities and long-time integration of stochastic dynamical systems, four representative numerical examples are presented in this paper, including the Van der Pol oscillator problem and the Kraichnan-Orszag three-mode problem. Results show that the ME-FSC method is capable of solving problems that have strong nonlinear dependencies over the probability space, both reliably and at low computational cost.
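The aggregation step can be written compactly in standard law-of-total-probability form; the notation below (K elements D_k partitioning the random domain, parameter ξ, response u(t)) is ours and not necessarily the paper's.

```latex
% Moments of the response u(t) aggregated over a partition D_1,...,D_K of the
% random domain (our notation, not necessarily the paper's):
\mathbb{E}[u(t)] \;=\; \sum_{k=1}^{K} \Pr(\xi \in D_k)\,\mathbb{E}\bigl[u(t)\mid \xi \in D_k\bigr],
\qquad
\operatorname{Var}[u(t)] \;=\; \sum_{k=1}^{K} \Pr(\xi \in D_k)\,\mathbb{E}\bigl[u(t)^2\mid \xi \in D_k\bigr] \;-\; \mathbb{E}[u(t)]^2 .
```

Each conditional moment comes from the FSC solve carried out on its own random element.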
- Published
- 2022
9. Signatures of Witt spaces with boundary
- Author
-
Paolo Piazza and Boris Vertman
- Subjects
General Mathematics, Differential Geometry (math.DG), Geometric Topology (math.GT), Spectral Theory (math.SP), Analysis of PDEs (math.AP), MSC 53C44 (Secondary 58J35, 35K08), edge singularities, index theorem, eta-invariants, signature
- Abstract
Let M be a compact smoothly stratified pseudomanifold with boundary, satisfying the Witt assumption. In this paper we introduce the de Rham signature and the Hodge signature of M, and prove their equality. Next, building also on recent work of Albin and Gell-Redman, we extend the Atiyah-Patodi-Singer index theory established in our previous work under the hypothesis that M has stratification depth 1 to the general case, establishing in particular a signature formula on Witt spaces with boundary. In a parallel way we also pass to the case of a Galois covering M' of M with Galois group Gamma. Employing von Neumann algebras we introduce the de Rham Gamma-signature and the Hodge Gamma-signature and prove their equality, thus extending to Witt spaces a result proved by Lueck and Schick in the smooth case. Finally, extending work of Vaillant in the smooth case, we establish a formula for the Hodge Gamma-signature. As a consequence we deduce the fundamental result that equates the Cheeger-Gromov rho-invariant of the boundary of M' with the difference of the signatures of M and M'. We end the paper with two geometric applications of our results.
- Published
- 2022
10. Contemporary career orientations and career self-management: a systematic review and integrative framework
- Abstract
Successful career development requires increased career self-management, and contemporary career orientations accordingly stress the importance of being self-directed, values-driven, and flexible. This paper provides an overview of key perspectives on contemporary career orientations in relation to career self-management (CSM), as well as a systematic review of these two streams of literature. With a focus on highly influential classic and recent papers as well as on all papers published in the Journal of Vocational Behavior on these topics, we aim to integrate the literatures on career orientations and CSM and advance future research. To this purpose, we present an integrative framework of career self-regulation which views CSM as a dynamic process consisting of goal setting and development, information seeking, planning and execution of behaviors, and monitoring and feedback processing. This process is influenced by, and subsequently affects, individual career orientations. We finish the paper by providing several directions for future research in terms of examining more dynamic and self-regulatory processes, unpacking the role of context, integrating the larger proactivity literature, applying a work-nonwork perspective, and developing and testing interventions.
- Published
- 2021
- Full Text
- View/download PDF
11. Reflections on the unexpected laboratory finding of hemorheological alterations observed in some haematological disorders
- Author
-
Gregorio Caimi, Rosalia Lo Presti, and Melania Carlisi
- Subjects
Laboratory finding, Hyperviscosity, Monoclonal gammopathy of undetermined significance, Biochemistry, Hyperviscosity syndromes, Polycythemia vera, Multiple myeloma, Erythrocyte Deformability, Animals, Humans, Intensive care medicine, Models, Cardiovascular, Cell Biology, Blood Viscosity, Abnormality, Cardiology and Cardiovascular Medicine, Haematological disorders
- Abstract
Hyperviscosity syndrome is a clinical condition characterized by the slowing of blood flow through the vessels, and it may be associated with several diseases. The nosographic classification of primary hyperviscosity conditions (Wells classification, 1970) divided the primary hyperviscosity syndromes into polycythaemic, sclerocythaemic and serum-related forms. Recent and personal laboratory observations have highlighted an unexpected behaviour of erythrocyte deformability in some haematological disorders such as polycythemia vera (PV), multiple myeloma (MM) and monoclonal gammopathy of undetermined significance (MGUS). The interest of this observation lies in the fact that, up to now and according to the Wells classification, the hemorheological alteration present in PV was related to the increase of red blood cell (RBC) mass, while that present in MM and MGUS was attributed to an abnormality of plasma or serum viscosity only. Through an extensive search of the literature, using MEDLINE/PubMed to identify all published reports on the hyperviscosity syndromes, issues that until now have been dealt with separately are analyzed here in a single paper, allowing a global view. The aim of this paper is to provide some suggestions for reflection, emphasizing the need for a nosographic framework of hyperviscosity that probably deserves to be reviewed.
- Published
- 2021
12. On twisted A-harmonic sums and Carlitz finite zeta values
- Author
-
Federico Pellarin, Rudolph Perkins, Combinatoire, théorie des nombres (CTN), Institut Camille Jordan [Villeurbanne] (ICJ), École Centrale de Lyon (ECL), Université de Lyon-Université de Lyon-Université Claude Bernard Lyon 1 (UCBL), Université de Lyon-Université Jean Monnet [Saint-Étienne] (UJM)-Institut National des Sciences Appliquées de Lyon (INSA Lyon), Université de Lyon-Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Centre National de la Recherche Scientifique (CNRS)-École Centrale de Lyon (ECL), Université de Lyon-Institut National des Sciences Appliquées (INSA)-Institut National des Sciences Appliquées (INSA)-Centre National de la Recherche Scientifique (CNRS), Interdisciplinary Center for Scientific Computing (IWR), and Universität Heidelberg [Heidelberg]
- Subjects
Carlitz module, Pure mathematics, Algebra and Number Theory, Number Theory (math.NT), A-harmonic sums, Multiple zeta values, MSC 11M38, Mathematics
- Abstract
In this paper, we study various twisted A-harmonic sums, named following the seminal log-algebraicity papers of G. Anderson. These objects are partial sums of new types of special zeta values introduced by the first author and linked to certain rank one Drinfeld modules over Tate algebras in positive characteristic by Anglès, Tavares Ribeiro and the first author. We prove, by using techniques introduced by the second author, that various infinite families of such sums may be interpolated by polynomials, and we deduce, among several other results, properties of analogues of finite zeta values but inside the framework of the Carlitz module. In the theory of finite multi-zeta values in characteristic zero, finite zeta values are all zero. In the Carlitzian setting, there exist non-vanishing finite zeta values, and we study some of their properties in the present paper.
- Published
- 2021
13. Journal of Parallel and Distributed Computing / Randomized renaming in shared memory systems
- Author
-
Berenbrink, Petra, Brinkmann, André, Elsässer, Robert, Friedetzky, Tom, and Nagel, Lars
- Subjects
Randomized algorithm, Tight renaming, Shared memory model, Distributed algorithm, Loose renaming
- Abstract
Renaming is a task in distributed computing where n processes are assigned new names from a name space of size m. The problem is called tight if m = n, and loose if m > n. In recent years renaming came to the fore again and new algorithms were developed. For tight renaming in asynchronous shared memory systems, Alistarh et al. describe a construction based on the AKS network that assigns all names within O(log n) steps per process. They also show that, depending on the size of the name space, loose renaming can be done considerably faster. For m = (1+ϵ)·n and constant ϵ, they achieve a step complexity of O(log log n). In this paper we consider tight as well as loose renaming and introduce randomized algorithms that achieve their tasks with high probability. The model assumed is the asynchronous shared-memory model against an adaptive adversary. Our algorithm for loose renaming maps n processes to a name space of size m = (1 + 2/(log n)^ℓ)·n = (1 + o(1))·n performing O(ℓ·(log log n)²) test-and-set operations. In the case of tight renaming, we present a protocol that assigns n processes to n names with step complexity O(log n), but without the overhead and impracticality of the AKS network. This algorithm utilizes modern hardware features in the form of a counting device, which is also described in the paper. This device may have the potential to speed up other distributed algorithms as well.
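As a toy illustration of the test-and-set idea behind loose renaming (a sequential simulation only, not the paper's algorithm or its asynchronous adversarial model; all parameters are hypothetical):

```python
# Toy sequential simulation of loose renaming via test-and-set: each process
# repeatedly probes a random slot in a name array of size m > n and claims the
# first free slot it wins. This only illustrates the test-and-set idea; it is
# not the paper's algorithm and it ignores concurrency and the adversary.
import random

def loose_renaming(n_processes: int, m_names: int, seed: int = 0):
    rng = random.Random(seed)
    taken = [False] * m_names          # one test-and-set object per name
    names, steps = {}, {}
    for pid in range(n_processes):     # sequential stand-in for concurrency
        attempts = 0
        while True:
            attempts += 1
            slot = rng.randrange(m_names)
            if not taken[slot]:        # the "test-and-set" succeeds
                taken[slot] = True
                names[pid] = slot
                break
        steps[pid] = attempts
    return names, steps

names, steps = loose_renaming(n_processes=1000, m_names=1100)
print("max attempts by any process:", max(steps.values()))
print("distinct names assigned:", len(set(names.values())))
```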
- Published
- 2021
- Full Text
- View/download PDF
14. Applications of fast field cycling NMR relaxometry
- Author
-
Pellegrino Conte
- Subjects
Food science, Soil science, Water science, Relaxometry, Field cycling, Computer science, AGR/13 - Agricultural Chemistry, New materials, Sediment science, Biochemical engineering, Material science, FFC NMR relaxometry
- Abstract
Fast field cycling (FFC) NMR relaxometry is emerging as a powerful tool to investigate the physicochemical properties of many systems in a number of different scientific fields. As an example, it is used to investigate environmental issues such as soil erosion and water and nutrient dynamics in environmentally relevant porous systems, to discriminate among different kinds of foodstuff in order to understand possible sources of adulteration and fraud, to evaluate the properties of new materials, and much more. In the present study, an overview of the possible applications of FFC NMR relaxometry is given. The paper is not intended to be exhaustive. Rather, it is meant to provide as wide a range of information as possible in order to allow scientists from different backgrounds to “open their eyes” to fields with which they are not familiar, in the hope that new and inspirational ideas can emerge from reading this paper.
- Published
- 2021
15. Open Cross-Domain Visual Search
- Author
-
William Thong, Cees G. M. Snoek, Pascal Mettes, and Intelligent Sensory Information Systems (IVI, FNWI)
- Subjects
Visual search, Information retrieval, Computer science, Computer Vision and Pattern Recognition (cs.CV), Semantic space, Common space, Sketch, Signal Processing, Computer Vision and Pattern Recognition, Software
- Abstract
This paper addresses cross-domain visual search, where visual queries retrieve category samples from a different domain. For example, we may want to sketch an airplane and retrieve photographs of airplanes. Despite considerable progress, the search occurs in a closed setting between two pre-defined domains. In this paper, we make the step towards an open setting where multiple visual domains are available. This notably translates into a search between any pair of domains, from a combination of domains or within multiple domains. We introduce a simple -- yet effective -- approach. We formulate the search as a mapping from every visual domain to a common semantic space, where categories are represented by hyperspherical prototypes. Open cross-domain visual search is then performed by searching in the common semantic space, regardless of which domains are used as source or target. Domains are combined in the common space to search from or within multiple domains simultaneously. A separate training of every domain-specific mapping function enables an efficient scaling to any number of domains without affecting the search performance. We empirically illustrate our capability to perform open cross-domain visual search in three different scenarios. Our approach is competitive with respect to existing closed settings, where we obtain state-of-the-art results on several benchmarks for three sketch-based search tasks.
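A minimal sketch of the retrieval step described above, assuming unit-norm (hyperspherical) prototypes and hypothetical per-domain linear mappings; it is not the paper's trained model:

```python
# Sketch of search in a shared semantic space with hyperspherical (unit-norm)
# class prototypes. The random per-domain encoders stand in for the learned
# mapping functions; all shapes and names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
d = 64                                   # dimension of the common space
prototypes = rng.normal(size=(10, d))    # one prototype per category
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

def encode(x, W):
    """Map a domain-specific feature x onto the unit hypersphere via W."""
    z = W @ x
    return z / np.linalg.norm(z)

W_sketch = rng.normal(size=(d, 128))     # hypothetical sketch-domain mapping
W_photo = rng.normal(size=(d, 128))      # hypothetical photo-domain mapping

query = encode(rng.normal(size=128), W_sketch)          # e.g. a sketch query
gallery = np.stack([encode(rng.normal(size=128), W_photo) for _ in range(500)])

# Retrieval: rank gallery items from another domain by cosine similarity.
ranking = np.argsort(-(gallery @ query))
print("top-5 retrieved photo indices:", ranking[:5])
```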
- Published
- 2020
16. Post-crisis evolution of banking and financial markets: Introduction
- Author
-
Arnoud W. A. Boot, Anjan V. Thakor, Corporate Governance, Finance (ABS, FEB), Faculteit Economie en Bedrijfskunde, and Amsterdam Center for Law and Economics (FdR, FEB)
- Subjects
Economics and Econometrics, Financial market, Financial intermediary, Financial system, Post crisis, Financial crisis, Economics, Finance
- Abstract
This editorial reviews the papers that were presented at a conference at Washington University in St. Louis, a subset of which were published in a special issue of The Journal of Financial Intermediation. The papers cover a wide range of issues on how banks and financial markets have evolved since the financial crisis and the blurring of boundaries between institutions and markets.
- Published
- 2019
- Full Text
- View/download PDF
17. Assessing the impacts of citizen-led policies on emissions, air quality and health
- Author
-
Per Sieverts Nielsen, Kris Vanherle, Enda T Hayes, K. Oliveira, Alexandra Monteiro, Vera Rodrigues, Stephan Slingerland, J. H. N. Soares, Iason Diafas, Ana Isabel Miranda, Marta B. Lopes, Joana Ferreira, Carlo Trozzi, Sandra Rafael, Angreine Kewo, Evert A. Bouman, and Environmental Policy Analysis
- Subjects
Environmental Engineering, SDG 16 - Peace, Justice and Strong Institutions, Impact assessment, Air pollution, Health benefits, Management, Monitoring, Policy and Law, Sustainability & Climate Change, European cities, Business as usual, Citizen engagement, Cities, Air quality index, Environmental planning, Waste Management and Disposal, Air Pollutants, Air pollution reduction, Stakeholder, General Medicine, Air Quality Management Resource Centre, Policy, WHO guidelines, Particulate Matter, Business, Environmental Monitoring, Urban emissions
- Abstract
Air pollution is a global challenge, and urban areas in particular are affected by acute episodes. Traditional approaches used to mitigate air pollution primarily consider the technical aspects of the problem but not the role of citizen behaviour and day-to-day practices. ClairCity, a Horizon 2020 funded project, created an impact assessment framework that considers the role of citizen behaviour in creating future scenarios, aiming to improve urban environments and the wellbeing and health of their inhabitants. This framework was applied to six pilot cases: Bristol, Amsterdam, Ljubljana, Sosnowiec, the Aveiro Region and the Liguria Region, considering three time horizons: 2025, 2035 and 2050. The scenario approach includes a Business As Usual (BAU) scenario and Final Unified Policy Scenarios (FUPS) established by citizens, decision-makers, local planners and stakeholders based on data collected through a citizen and stakeholder co-creation process. This paper therefore presents the ClairCity outcomes, analysing the quantified impacts of selected measures in terms of emissions, air quality, population exposure, and health. Each case study established a particular set of measures with different levels of ambition, and therefore different levels of success were achieved towards the control and mitigation of their specific air pollution problems. The transport sector was the most addressed by the measures, showing substantial improvements for NO2 already with the BAU scenarios and, overall, even better results when applying the citizen-led FUPS scenarios. In some cases, due to a lack of ambition for the residential and commercial sector, the results were not sufficient to meet the WHO guidelines. Overall, it was found in all cities that the co-created scenarios would lead to environmental improvements in terms of air quality and citizens’ health compared to the baseline year of 2015. However, in some cases the health benefits were smaller than the air quality improvements because the measures did not affect the most densely populated areas. Benefits of the FUPS compared to the BAU scenario were highest in Amsterdam and Bristol, with further NO2 and PM10 emission reductions of around 10%–16% by 2025 and 19%–28% by 2050, compared to BAU.
- Published
- 2022
- Full Text
- View/download PDF
18. A method for comparing multiple imputation techniques: A case study on the U.S. national COVID cohort collaborative
- Author
-
Elena Casiraghi, Rachel Wong, Margaret Hall, Ben Coleman, Marco Notaro, Michael D. Evans, Jena S. Tronieri, Hannah Blau, Bryan Laraway, Tiffany J. Callahan, Lauren E. Chan, Carolyn T. Bramante, John B. Buse, Richard A. Moffitt, Til Stürmer, Steven G. Johnson, Yu Raymond Shao, Justin Reese, Peter N. Robinson, Alberto Paccanaro, Giorgio Valentini, Jared D. Huling, and Kenneth J. Wilkins
- Subjects
Health Informatics, Computer Science Applications
- Abstract
Healthcare datasets obtained from Electronic Health Records have proven to be extremely useful for assessing associations between patients’ predictors and outcomes of interest. However, these datasets often suffer from missing values in a high proportion of cases, whose removal may introduce severe bias. Several multiple imputation algorithms have been proposed to attempt to recover the missing information under an assumed missingness mechanism. Each algorithm presents strengths and weaknesses, and there is currently no consensus on which multiple imputation algorithm works best in a given scenario. Furthermore, the selection of each algorithm's parameters and data-related modeling choices are also both crucial and challenging. In this paper we propose a novel framework to numerically evaluate strategies for handling missing data in the context of statistical analysis, with a particular focus on multiple imputation techniques. We demonstrate the feasibility of our approach on a large cohort of type-2 diabetes patients provided by the National COVID Cohort Collaborative (N3C) Enclave, where we explored the influence of various patient characteristics on outcomes related to COVID-19. Our analysis included classic multiple imputation techniques as well as simple complete-case Inverse Probability Weighted models. Extensive experiments show that our approach can effectively highlight the most promising and performant missing-data handling strategy for our case study. Moreover, our methodology allowed a better understanding of the behavior of the different models and of how that behavior changed as we modified their parameters. Our method is general and can be applied to different research fields and to datasets containing heterogeneous data types.
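A minimal sketch of the general mask-and-score idea for comparing imputation strategies, using scikit-learn imputers on synthetic data; this illustrates the concept only and is not the paper's N3C pipeline:

```python
# Illustrative mask-and-score comparison of imputation strategies (not the
# paper's pipeline): hide a fraction of known values, impute, and measure the
# reconstruction error of each strategy on the hidden entries.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer, KNNImputer

rng = np.random.default_rng(0)
X_true = rng.normal(size=(500, 8))
X_true[:, 1] += 0.8 * X_true[:, 0]          # correlated columns help imputation

mask = rng.random(X_true.shape) < 0.15      # hide 15% of entries at random
X_missing = X_true.copy()
X_missing[mask] = np.nan

strategies = {
    "mean": SimpleImputer(strategy="mean"),
    "knn": KNNImputer(n_neighbors=5),
    "iterative": IterativeImputer(random_state=0),
}
for name, imputer in strategies.items():
    X_hat = imputer.fit_transform(X_missing)
    rmse = np.sqrt(np.mean((X_hat[mask] - X_true[mask]) ** 2))
    print(f"{name:9s} RMSE on masked entries: {rmse:.3f}")
```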
- Published
- 2023
- Full Text
- View/download PDF
19. On the soluble graph of a finite group
- Author
-
Timothy C. Burness, Andrea Lucchini, and Daniele Nemmi
- Subjects
Computational Theory and Mathematics, Diameter, Soluble graph, Discrete Mathematics and Combinatorics, Group Theory (math.GR), Finite groups, Simple groups, Theoretical Computer Science
- Abstract
Let $G$ be a finite insoluble group with soluble radical $R(G)$. In this paper we investigate the soluble graph of $G$, which is a natural generalisation of the widely studied commuting graph. Here the vertices are the elements in $G \setminus R(G)$, with $x$ adjacent to $y$ if they generate a soluble subgroup of $G$. Our main result states that this graph is always connected and its diameter, denoted $\delta_{\mathcal{S}}(G)$, is at most $5$. More precisely, we show that $\delta_{\mathcal{S}}(G) \leqslant 3$ if $G$ is not almost simple and we obtain stronger bounds for various families of almost simple groups. For example, we will show that $\delta_{\mathcal{S}}(S_n) = 3$ for all $n \geqslant 6$. We also establish the existence of simple groups with $\delta_{\mathcal{S}}(G) \geqslant 4$. For instance, we prove that $\delta_{\mathcal{S}}(A_{2p+1}) \geqslant 4$ for every Sophie Germain prime $p \geqslant 5$, which demonstrates that our general upper bound of $5$ is close to best possible. We conclude by briefly discussing some variations of the soluble graph construction and we present several open problems.
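The construction can be explored computationally for small groups. Below is a brute-force sketch of our own (not the paper's methods), assuming SymPy and NetworkX are available: it builds the soluble graph of A5, whose soluble radical is trivial, and computes its diameter.

```python
# Brute-force illustration: soluble graph of A5 (soluble radical trivial).
# Vertices are the non-identity elements; x ~ y when <x, y> is soluble.
# Fine for a group of order 60; this is not how the paper's bounds are proved.
import itertools
import networkx as nx
from sympy.combinatorics import Permutation, PermutationGroup

a = Permutation([1, 2, 0, 3, 4])      # the 3-cycle (0 1 2) in array form
b = Permutation([1, 2, 3, 4, 0])      # the 5-cycle (0 1 2 3 4) in array form
G = PermutationGroup([a, b])          # A5

e = G.identity
vertices = [g for g in G.elements if g != e]

graph = nx.Graph()
graph.add_nodes_from(range(len(vertices)))
for i, j in itertools.combinations(range(len(vertices)), 2):
    if PermutationGroup([vertices[i], vertices[j]]).is_solvable:
        graph.add_edge(i, j)

print("connected:", nx.is_connected(graph))
print("diameter:", nx.diameter(graph))
```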
- Published
- 2023
- Full Text
- View/download PDF
20. Analyses of drives power reduction techniques for multi-axis random vibration control tests
- Author
-
Bart Peeters, Umberto Musella, Giacomo D’Elia, Emiliano Mucchi, Patrick Guillaume, Faculty of Engineering, Applied Mechanics, and Acoustics & Vibration Research Group
- Subjects
Computer science, MIMO control, Vibration control, Aerospace Engineering, Data acquisition, Minimum drives power, Shaker, Multi-axis vibration testing, Random vibration, Spectral density matrix, Civil and Structural Engineering, Mechanical Engineering, Control engineering, Computer Science Applications, Control and Systems Engineering, Signal Processing
- Abstract
In multi-axis vibration control testing, the power required by the excitation system to replicate the user-defined test specifications is a limiting factor that cannot be overlooked. Excessive power, on top of over-stressing the often expensive test equipment, can cause data acquisition overloads which inevitably interrupt the test even before the full-level run. An accurate definition of the Multi-Input Multi-Output (MIMO) control target allows the control test to be performed while minimising the overall power required by the shakers, and in recent years advanced procedures have been developed in this research direction. This paper analyses the available drives power reduction techniques, offering a detailed overview of the current state-of-the-art. Furthermore, this paper provides a novel solution to manage the cases where most of the power is required by a single drive of the multiple-input excitation system. In order to point out the pros and cons of each procedure and to show the capabilities of the novel technique, the MIMO target generation algorithms are first explained theoretically and then compared experimentally using a three-axial electrodynamic shaker.
- Published
- 2020
21. Cities and tasks
- Author
-
Hans R.A. Koster, Ceren Ozgen, Spatial Economics, and Tinbergen Institute
- Subjects
Wage inequality, Economics and Econometrics, Labour economics, Wage, Agglomeration economies, Learning opportunities, Skills mismatch, Technological change, Economies of agglomeration, Employment density, SDG 10 - Reduced Inequalities, Urban Studies, Routinisation, Business, Tasks
- Abstract
This paper explores the relationship between routine-biased technological change and agglomeration economies. Using administrative data from the Netherlands, we first show that in dense areas, jobs are less routine-task intensive (i.e. less repetitive and automatable), meaning that jobs cover a larger spectrum of tasks. We then explore how the routine intensity of jobs affects the urban wage premium. We find that the urban wage premium is higher for workers performing non-routine tasks, particularly analytic tasks, while it is absent for workers in routine task intensive jobs. These findings also hold within skill groups and suggest that routinisation increases spatial wage inequality within urban areas. We further provide suggestive evidence that a better matching of skills to jobs and increased learning opportunities in cities can explain these findings.
- Published
- 2021
- Full Text
- View/download PDF
22. An algebraic approach to Erdős-Ko-Rado sets of flags in spherical buildings
- Author
-
Jan De Beule, Sam Mattheus, Klaus Metsch, Mathematics, Digital Mathematics, and Algebra and Analysis
- Subjects
Erdős-Ko-Rado, Computational Theory and Mathematics, Flags, Discrete Mathematics and Combinatorics, Buildings, Oppositeness, Theoretical Computer Science
- Abstract
In this paper, oppositeness in spherical buildings is used to define an EKR-problem for flags in projective and polar spaces. A novel application of the theory of buildings and Iwahori-Hecke algebras is developed to prove sharp upper bounds for EKR-sets of flags. In this framework, we can reprove and generalize previous upper bounds for EKR-problems in projective and polar spaces. The bounds are obtained by the application of the Delsarte-Hoffman coclique bound to the opposition graph. The computation of its eigenvalues is due to earlier work by Andries Brouwer and an explicit algorithm is worked out. For the classical geometries, the execution of this algorithm boils down to elementary combinatorics. Connections to building theory, Iwahori-Hecke algebras, classical groups and diagram geometries are briefly discussed. Several open problems are posed throughout and at the end.
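For reference, the Delsarte-Hoffman coclique (ratio) bound invoked here has the following standard form; the statement and notation below are generic, not a quotation from the paper.

```latex
% Ratio bound for a k-regular graph \Gamma on N vertices with least adjacency
% eigenvalue \lambda_{\min} < 0; \alpha denotes the maximum coclique size:
\alpha(\Gamma) \;\le\; \frac{N}{1 - k/\lambda_{\min}}
\;=\; N\cdot\frac{-\lambda_{\min}}{\,k - \lambda_{\min}\,}.
```

Applied to the opposition graph, cocliques correspond to EKR-sets of pairwise non-opposite flags, which is how the sharp upper bounds in the paper arise.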
- Published
- 2022
23. A data-driven nonlinear state-space model of the unsteady lift force on a pitching wing
- Author
-
Tim De Troyer, Muhammad Faheem Siddiqui, Mark Runacres, Johan Schoukens, Péter Zoltán Csurcsia, Jan Decuyper, Thermodynamics and Fluid Mechanics Group, Engineering Technology, Acoustics & Vibration Research Group, Faculty of Engineering, Basic (bio-) Medical Sciences, and Vriendenkring VUB
- Subjects
Mechanical Engineering
- Abstract
Accurate unsteady aerodynamic models are essential to estimate the forces on rapidly pitching wings and to develop model-based controllers. As system identification is arguably the most successful framework for model predictive control in general, in this paper we investigate whether system identification can be used to build data-driven models of pitching wings. The forces acting on the pitching wing can be considered a nonlinear dynamic function of the pitching angle and therefore require a nonlinear dynamic model. In this work, a nonlinear data-driven model is developed for a pitching wing. The proposed model structure is a polynomial nonlinear state-space model (PNLSS), which is an extension of the classical linear state-space model with nonlinear functions. The PNLSS model is trained on experimental data of a pitching wing. The experiments are performed using a dedicated wind tunnel setup. The pitch angle is considered as the input to the model, while the lift coefficient is considered as the output. Three models are trained on swept-sine signals at three offset angles with a fixed pitch amplitude and a range of reduced frequencies. The three training datasets are selected to cover the linear and nonlinear operating regimes of the pitching wing. The PNLSS models are validated on single-sine experimental data at the respective pitch offset angles. The PNLSS models are able to capture the nonlinear aerodynamic forces more accurately than linear and semi-empirical models, especially at higher offset angles.
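For orientation, the generic polynomial nonlinear state-space structure referred to here is usually written as below; the notation is ours, and the particular monomial degrees and settings used in the paper are not specified here.

```latex
% Generic PNLSS structure (our notation): \zeta and \eta are vectors of
% monomials in the state x_k and input u_k up to a chosen degree.
x_{k+1} = A\,x_k + B\,u_k + E\,\zeta(x_k, u_k), \qquad
y_k = C\,x_k + D\,u_k + F\,\eta(x_k, u_k).
```

In this setting u_k would be the pitch angle and y_k the lift coefficient; setting E = F = 0 recovers the classical linear state-space model.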
- Published
- 2022
- Full Text
- View/download PDF
24. A generalised formulation of G-continuous Bezier elements applied to non-linear MHD simulations
- Author
-
S.J.P. Pamela, G.T.A. Huijsmans, M. Hoelzl, EIRES Eng. for Sustainable Energy Systems, Magneto-Hydro-Dynamic Stability of Fusion Plasmas, Science and Technology of Nuclear Fusion, and JOREK Team
- Subjects
Computational Mathematics, Numerical Analysis, FEM, Plasma, Physics and Astronomy (miscellaneous), MHD, Applied Mathematics, Modeling and Simulation, Bezier, Fusion, Computer Science Applications
- Abstract
The international tokamak ITER is progressing towards assembly completion and first-plasma operation, which will be a physics and engineering challenge for the fusion community. In the preparation for ITER experimental scenarios, non-linear MHD simulations are playing an essential role to actively understand and predict the behaviour and stability of tokamak plasmas in future fusion power plants. The development of MHD codes like JOREK is a key aspect of this research effort, and provides invaluable insight into plasma stability and the control of global and localised plasma events, like Edge-Localised-Modes and disruptions. In this paper, we present an operational implementation of a new, generalised formulation of Bezier finite elements applied to the JOREK code, a significant advancement over the previous G1-continuous bi-cubic Bezier elements. This new mathematical method enables any polynomial order of Bezier elements, with a guarantee of G-continuity at the level of (n−1)/2, for any odd n, where n is the order of the Bezier polynomials. The generalised method is defined, and a rigorous mathematical proof is provided for the G-continuity requirement. Key details on the code implementation are mentioned, together with a suite of tests to demonstrate the mathematical reliability of the finite-element method, as well as the practical usability for typical non-linear tokamak MHD simulations. A demonstration of a state-of-the-art simulation of an Edge-Localised-Mode instability in the future ITER tokamak, with realistic grid geometry, finalises the study.
- Published
- 2022
- Full Text
- View/download PDF
25. Exploring common dialectical tensions constraining collaborative communication required for post-2020 conservation
- Author
-
Wayne Stanley Rice and Governance and Inclusive Development (GID, AISSR, FMG)
- Subjects
Conservation of Natural Resources, Environmental Engineering, Communication, General Medicine, Management, Monitoring, Policy and Law, Waste Management and Disposal, Language
- Abstract
Contemporary conservation requires improved collaboration characterized by greater recognition and incorporation of multiple and diverse actors. Effective communication is central to this endeavour. However, the expression of concerns and perspectives and the exchange of knowledge between actors and across multiple scales (i.e., collaborative communication) must navigate inevitable competing systems of meaning and motivation (i.e., dialectical tensions). Yet a lack of understanding of how to improve collaborative communication within conservation interventions persists within the literature. Consequently, this paper reviews relevant literature to propose a framework that identifies common sources of dialectical tensions in collaborative conservation interventions that, if managed effectively, can improve the required collaborative communication. The framework is then revised based on interviews conducted with 277 respondents in three African coastal-marine collaborative conservation interventions. Findings reinforce the effect of continued marginalization of certain actors' ‘voices’ within governance processes. More specifically, enabling collaborative communication requires managing several identified institutional-, agenda-, cultural-, and perception-based tensions. In particular, tensions emerge from formal-informal institutional interactions; gender-based exclusion; conflicting livelihood-ecological and economic-environmental agendas and project-funder objectives; indigenous/local versus scientific knowledge and values; and perceived necessary versus acceptable change. Furthermore, specific local-scale tensions identified included those associated with local-customary institutions; democratically versus meritocratically elected local representatives; and exclusion based on cultural diversity. Consequently, these tensions require the ‘co-creation’ of communicative strategies amongst all actors to promote greater social equity that better aligns with local priorities to achieve ‘positive’ post-2020 ecological and social outcomes. Findings should be relevant to diverse conservation actors, and to many others working within multi-stakeholder environmental interventions.
- Published
- 2022
- Full Text
- View/download PDF
26. Torque estimation in marine propulsion systems
- Author
-
Mikael Manngård, Ivar Koene, Wictor Lund, Sampo Haikonen, Fredrik A. Fagerholm, Michał Wilczek, Konrad Mnich, Joni Keski-Rahkonen, Raine Viitala, Jerker Björkqvist, Hannu T. Toivonen, Åbo Akademi University, Mechatronics, Department of Mechanical Engineering, Kongsberg Maritime Finland, Abo Akademi University, Aalto-yliopisto, and Aalto University
- Subjects
Control and Systems Engineering, Mechanical Engineering, Signal Processing, Maritime systems, Aerospace Engineering, Optimal filtering, Computer Science Applications, Civil and Structural Engineering, Simultaneous input and state estimation
- Abstract
An augmented Kalman filter for torque estimation in marine propulsion-system drive trains is presented. Propeller and motor excitations and torque responses are estimated based on a dynamical model of the system and inboard shaft measurements. Input excitations affecting marine propulsion systems are signals whose statistical properties vary between finite time intervals. Hence, in this paper, excitations are characterized as quasi-stationary signals with bounded power spectral density. Given that upper bounds on the spectral densities are known prior to estimation, it is shown that a linear time-invariant input-and-state observer, minimizing the worst-case power of the estimation errors, can be synthesized by conventional Kalman-filtering techniques. Experiments have been conducted on a laboratory-scale test bench to assess the applicability of the proposed observer for use in marine propulsion systems. The test bench was built to emulate the behavior of a full-scale propulsion system operated in ice and other high load conditions. Estimation results from a full-size underwater mountable azimuthing thruster are also presented. Experiment results show that torque excitations and torque responses at all locations of interest on the engine-propeller drivetrain can be estimated with high accuracy based on a few indirect measurements at convenient locations on the motor shaft.
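A minimal sketch of the augmented-state idea behind such filters: the unknown input is modelled as a random walk appended to the state, so a standard Kalman recursion estimates states and input jointly. The one-mass model and all matrices below are hypothetical, not the paper's drivetrain model.

```python
# Augmented Kalman filter sketch: state = [shaft speed w, unknown torque T],
# with T modelled as a random walk. Hypothetical one-mass model, not the
# paper's propulsion-system model.
import numpy as np

dt = 1e-3
J = 0.1                                   # hypothetical inertia [kg m^2]
A = np.array([[1.0, dt / J],              # w_{k+1} = w_k + dt*T_k/J
              [0.0, 1.0]])                # T_{k+1} = T_k (random walk)
H = np.array([[1.0, 0.0]])                # only the shaft speed is measured
Q = np.diag([1e-8, 1e-2])                 # large process noise on T -> adaptivity
R = np.array([[1e-4]])

rng = np.random.default_rng(0)
z, P = np.zeros(2), np.eye(2)
w_true, est = 0.0, []
for k in range(2000):
    T_true = 1.0 if k > 500 else 0.0      # step change in the unknown torque
    w_true += dt * T_true / J
    y = w_true + rng.normal(scale=1e-2)   # noisy speed measurement
    # Predict
    z = A @ z
    P = A @ P @ A.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z = z + (K @ (y - H @ z)).ravel()
    P = (np.eye(2) - K @ H) @ P
    est.append(z[1])

print("estimated torque near the end:", round(est[-1], 3))
```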
- Published
- 2022
27. Multi-branch convolutional neural network for multiple sclerosis lesion segmentation
- Author
-
Loredana Storelli, Shahab Aslani, Vittorio Murino, Diego Sona, Maria A. Rocca, Michael Dayan, Massimo Filippi, Aslani, S., Dayan, M., Storelli, L., Filippi, M., Murino, V., Rocca, M. A., and Sona, D.
- Subjects
Adult, Male, Female, Middle Aged, Humans, Computer science, Computer Vision and Pattern Recognition (cs.CV), Cognitive Neuroscience, Convolutional neural network, Upsampling, Multiple sclerosis, Imaging, Three-Dimensional, Segmentation, Brain, Lesions, Multiple image modality, Multiple sclerosis lesion, Pattern recognition, Magnetic Resonance Imaging, Neurology, Artificial intelligence, Neural Networks, Computer
- Abstract
In this paper, we present an automated approach for segmenting multiple sclerosis (MS) lesions from multi-modal brain magnetic resonance images. Our method is based on a deep end-to-end 2D convolutional neural network (CNN) for slice-based segmentation of 3D volumetric data. The proposed CNN includes a multi-branch downsampling path, which enables the network to encode information from multiple modalities separately. Multi-scale feature fusion blocks are proposed to combine feature maps from different modalities at different stages of the network. Then, multi-scale feature upsampling blocks are introduced to upsize combined feature maps to leverage information from lesion shape and location. We trained and tested the proposed model using orthogonal plane orientations of each 3D modality to exploit the contextual information in all directions. The proposed pipeline is evaluated on two different datasets: a private dataset including 37 MS patients and a publicly available dataset known as the ISBI 2015 longitudinal MS lesion segmentation challenge dataset, consisting of 14 MS patients. Considering the ISBI challenge, at the time of submission, our method was amongst the top performing solutions. On the private dataset, using the same array of performance metrics as in the ISBI challenge, the proposed approach shows high improvements in MS lesion segmentation compared with other publicly available tools.
- Published
- 2019
28. Modelling fungal hypha tip growth via viscous sheet approximation
- Author
-
T.G. de Jong, Josephus Hulshof, G Georg Prokert, Mathematics, Applied Analysis, and Center for Analysis, Scientific Computing & Appl.
- Subjects
Statistics and Probability, Cytoplasm, Materials science, Hypha, Viscous sheet, Hyphae, Models, Biological, General Biochemistry, Genetics and Molecular Biology, Cell wall, Morphogenesis, Tip growth, General Immunology and Microbiology, Applied Mathematics, Isotropy, Fungi, General Medicine, Mechanics, Cell wall growth, Pressure difference, Modeling and Simulation, General Agricultural and Biological Sciences, Hypha growth
- Abstract
In this paper we present a new model for single-celled, non-branching hypha tip growth. The growth mechanism of hypha cells consists of transport of cell wall building material to the cell wall and subsequent incorporation of this material in the wall as it arrives. To model the transport of cell wall building material to the cell wall we follow Bartnicki-Garcia and Gierz in assuming that the cell wall building material is transported in straight lines by an isotropic point source. To model the dynamics of the cell wall, including its growth by new material, we use the approach of Campàs and Mahadevan, which assumes that the cell wall is a thin viscous sheet sustained by a pressure difference. Furthermore, we include a novel equation which models the hardening of the cell wall as it ages. We validate the new model by comparing it to experimental data.
- Published
- 2020
- Full Text
- View/download PDF
29. Enriching stochastic model updating metrics: An efficient Bayesian approach using Bray-Curtis distance and an adaptive binning algorithm
- Author
-
Wenhua Zhao, Lechang Yang, Chao Dang, Roberto Rocchetta, Marcos Valdebenito, David Moens, Industrial Statistics, EAISI Health, and EAISI High Tech Systems
- Subjects
Control and Systems Engineering, Mechanical Engineering, Signal Processing, Aerospace Engineering, Adaptive binning algorithm, Approximate Bayesian computation, Bayesian inversion, Stochastic model updating, Bray-Curtis distance, Computer Science Applications, Civil and Structural Engineering
- Abstract
In practical engineering, experimental data is not fully in line with the true system response due to various uncertain factors, e.g., parameter imprecision, model uncertainty, and measurement errors. In the presence of mixed sources of aleatory and epistemic uncertainty, stochastic model updating is a powerful tool for model validation and parameter calibration. This paper investigates the use of the Bray-Curtis (B-C) distance in stochastic model updating and proposes a Bayesian approach addressing a scenario where the dataset contains multiple outliers. In the proposed method, a B-C distance-based uncertainty quantification metric is employed, which rewards models for which the discrepancy between observations and simulated samples is small while penalizing those which exhibit large differences. To improve the computational efficiency, an adaptive binning algorithm is developed and embedded into the approximate Bayesian computation framework. The merit of this algorithm is that the number of bins is automatically selected according to the difference between the experimental data and the simulated data. The effectiveness and efficiency of the proposed method are verified via two numerical cases and an engineering case from the NASA 2020 UQ challenge. Both static and dynamic cases with explicit and implicit propagation models are considered.
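A minimal sketch of the Bray-Curtis distance used as the discrepancy metric inside a rejection-style approximate Bayesian computation loop; the toy model, fixed binning and tolerance are ours for illustration, not the paper's adaptive-binning method:

```python
# Bray-Curtis distance as the discrepancy metric in a rejection-ABC loop.
# Toy Gaussian model with a fixed histogram binning; the paper instead adapts
# the number of bins and embeds the metric in a fuller Bayesian framework.
import numpy as np
from scipy.spatial.distance import braycurtis

rng = np.random.default_rng(0)

def simulate(mu, n=200):
    return rng.normal(loc=mu, scale=1.0, size=n)

observed = simulate(mu=2.0)
bins = np.linspace(-3, 7, 21)             # fixed bins for illustration
hist_obs, _ = np.histogram(observed, bins=bins)

accepted = []
for _ in range(5000):
    mu = rng.uniform(0.0, 5.0)            # prior U(0, 5)
    hist_sim, _ = np.histogram(simulate(mu), bins=bins)
    # Accept the sample if the binned B-C discrepancy is small enough.
    if braycurtis(hist_obs, hist_sim) < 0.15:
        accepted.append(mu)

print("approximate posterior mean of mu:", np.mean(accepted))
```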
- Published
- 2022
30. I overthink—Therefore I am not: An active inference account of altered sense of self and agency in depersonalisation disorder
- Author
-
Anna Ciaunica, Anil Seth, Jakub Limanowski, Casper Hesp, Karl J. Friston, and Ontwikkelingspsychologie (Psychologie, FMG)
- Subjects
Arts and Humanities (miscellaneous), Depersonalization, Developmental and Educational Psychology, Humans, Experimental and Cognitive Psychology, Self Psychology
- Abstract
This paper considers the phenomenology of depersonalisation disorder, in relation to predictive processing and its associated pathophysiology. To do this, we first establish a few mechanistic tenets of predictive processing that are necessary to talk about phenomenal transparency, mental action, and self as subject. We briefly review the important role of ‘predicting precision’ and how this affords mental action and the loss of phenomenal transparency. We then turn to sensory attenuation and the phenomenal consequences of (pathophysiological) failures to attenuate or modulate sensory precision. We then consider this failure in the context of depersonalisation disorder. The key idea here is that depersonalisation disorder reflects the remarkable capacity to explain perceptual engagement with the world via the hypothesis that “I am an embodied perceiver, but I am not in control of my perception”. We suggest that individuals with depersonalisation may believe that ‘another agent’ is controlling their thoughts, perceptions or actions, while maintaining full insight that the ‘other agent’ is ‘me’ (the self). Finally, we rehearse the predictions of this formal analysis, with a special focus on the psychophysical and physiological abnormalities that may underwrite the phenomenology of depersonalisation.
- Published
- 2022
- Full Text
- View/download PDF
31. Adjustable deterministic pseudonymization of speech
- Author
-
Rob J.J.H. van Son, Mathew Magimai.-Doss, S. Pavankumar Dubagunta, and ACLC (FGw)
- Subjects
speech features ,speech privacy ,Computer science ,Two-alternative forced choice ,Speech recognition ,speech signal processing ,computer.file_format ,Intelligibility (communication) ,Theoretical Computer Science ,Human-Computer Interaction ,Dysarthria ,articulation ,Formant ,medicine ,identification ,Identifiability ,ABX test ,medicine.symptom ,speech pseudonymization ,Pseudonymization ,computer ,Software ,Vocal tract - Abstract
While public speech resources are becoming increasingly available, there is growing interest in preserving speaker privacy through methods that anonymize the speaker information in speech while preserving the spoken linguistic content. In this paper, a method for pseudonymization (reversible anonymization) of speech is presented that obfuscates the speaker identity in untranscribed running speech. The approach manipulates the spectro-temporal structure of the speech to simulate a different length and structure of the vocal tract by modifying the formant locations, as well as by altering the pitch and speaking rate. The method is deterministic and partially reversible, and the changes are adjustable on a continuous scale. The method has been evaluated in terms of (i) ABX listening experiments, and (ii) automatic speaker verification and speech recognition. ABX results indicate that speaker identifiability among forced-choice pairs was reduced from over 90% to less than 70% through pseudonymization, and that de-pseudonymization was partially effective. An evaluation on the VoicePrivacy 2020 challenge data showed that the proposed approach performs better than the signal-processing-based baseline that uses the McAdams coefficient and slightly worse than the neural source-filtering-based baseline. Further analysis showed that the proposed approach: (i) is comparable to the neural source-filtering baseline in terms of a phone-posterior-based objective intelligibility measure, (ii) preserves formant tracks better than the McAdams-based method, and (iii) preserves paralinguistic aspects such as dysarthria in several speakers. (A toy sketch of an adjustable, invertible warping key follows this entry.)
- Published
- 2022
- Full Text
- View/download PDF
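To illustrate the "deterministic, adjustable, reversible" property described above, the toy sketch below warps a handful of formant frequencies by a scale factor and inverts the warp exactly. It is not the paper's signal-processing pipeline (which modifies the full spectro-temporal envelope, pitch, and rate of real audio); the class name, factor values, and vowel formants are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class PseudonymizationKey:
    """Adjustable, deterministic warping factors (toy stand-ins for the
    vocal-tract-length, pitch, and speaking-rate modifications discussed above)."""
    formant_scale: float = 1.12   # >1 mimics a shorter vocal tract
    pitch_scale: float = 0.90
    rate_scale: float = 1.05

    def inverse(self) -> "PseudonymizationKey":
        # The mapping is deterministic and invertible: de-pseudonymization
        # applies the reciprocal factors.
        return PseudonymizationKey(1.0 / self.formant_scale,
                                   1.0 / self.pitch_scale,
                                   1.0 / self.rate_scale)

def warp_formants(formants_hz: List[float], key: PseudonymizationKey) -> List[float]:
    """Scale formant centre frequencies; a real system warps the whole
    spectro-temporal structure rather than a few numbers."""
    return [f * key.formant_scale for f in formants_hz]

# Toy round trip on typical first, second, and third formants of an /a/-like vowel.
key = PseudonymizationKey()
original = [700.0, 1200.0, 2600.0]
pseudo = warp_formants(original, key)
restored = warp_formants(pseudo, key.inverse())
print(pseudo, restored)  # restored matches the original values
```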
32. What if you went to the police and accused your uncle of abuse? Misunderstandings concerning the benefits of memory distortion: a commentary on Fernandez (2015)
- Subjects
EVENTS ,False memory ,Memory distortion ,Adaptive memory ,Nonbelieved memories ,Belief ,Recollection ,AMNESIA ,CHILDREN ,SCIENCE ,REPRESSED MEMORY ,EXPERIENCES - Abstract
In a recent paper, Fernandez (2015) argues that memory distortion can have beneficial outcomes. Although we agree with this, we find his reasoning and examples flawed to such a degree that they will lead to misunderstandings rather than clarification in the field of memory (distortion). In his paper, Fernandez uses the terms belief and memory incorrectly, creating a conceptual blur. Also, Fernandez tries to make the case that under certain circumstances, false memories of abuse are beneficial. We argue against this idea, as the reasoning behind this claim is based on controversial assumptions such as repression. Although it is true that memory distortions can be beneficial, the examples sketched by Fernandez are not in line with recent documentation in this area.
- Published
- 2015
33. Average radial integrability spaces of analytic functions
- Author
-
Manuel D. Contreras, Luis Rodríguez-Piazza, Tanausú Aguilar-Hernández, Universidad de Sevilla. Departamento de Matemática Aplicada II (ETSI), and Universidad de Sevilla. Departamento de Análisis Matemático
- Subjects
Mathematics::Complex Variables ,30H20 (Primary), 47B33 (Primary), 47D06 (Primary), 46E15 (Secondary), 47G10 (Secondary) ,Holomorphic function ,Mixed norm spaces ,Hardy space ,Characterization (mathematics) ,Projection (linear algebra) ,Functional Analysis (math.FA) ,Separable space ,Mathematics - Functional Analysis ,Combinatorics ,symbols.namesake ,FOS: Mathematics ,symbols ,Radial integrability ,Bergman projection ,Unit (ring theory) ,Analysis ,Analytic function ,Mathematics - Abstract
In this paper we introduce the family of spaces $RM(p,q)$, $1\leq p,q\leq +\infty$. They are spaces of holomorphic functions in the unit disc with average radial integrability. This family contains the classical Hardy spaces (when $p=\infty$) and Bergman spaces (when $p=q$). We characterize the inclusion between $RM(p_1,q_1)$ and $RM(p_2,q_2)$ depending on the parameters. For $1
- Published
- 2022
34. On the number of critical points of the second eigenfunction of the Laplacian in convex planar domains
- Author
-
Fabio De Regibus and Massimo Grossi
- Subjects
Mathematics - Analysis of PDEs ,Convex domain ,critical points ,FOS: Mathematics ,eigenfunctions ,topological degree ,Mathematics::Spectral Theory ,Analysis ,Analysis of PDEs (math.AP) - Abstract
In this paper we consider the second eigenfunction of the Laplacian with Dirichlet boundary conditions in convex domains. If the domain has large eccentricity, then the eigenfunction has exactly two nondegenerate critical points (necessarily one maximum and one minimum). The proof uses estimates due to Jerison [Jer95a] and Grieser-Jerison [GJ96] together with a topological degree argument. Analogous results for higher-order eigenfunctions are proved in the rectangular-like domains considered in [GJ09].
- Published
- 2022
35. On n-partite digraphical representations of finite groups
- Author
-
Jia-Li Du, Yan-Quan Feng, and Pablo Spiga
- Subjects
DnSR ,Mathematics::Combinatorics ,Astrophysics::High Energy Astrophysical Phenomena ,n-PDR ,Group Theory (math.GR) ,Semiregular group ,Theoretical Computer Science ,Mathematics::Group Theory ,Regular representation ,Computational Theory and Mathematics ,Computer Science::Discrete Mathematics ,FOS: Mathematics ,Mathematics - Combinatorics ,Discrete Mathematics and Combinatorics ,Combinatorics (math.CO) ,Mathematics - Group Theory ,DRR - Abstract
A group $G$ admits an $n$-partite digraphical representation if there exists a regular $n$-partite digraph $\Gamma$ such that the automorphism group $\mathrm{Aut}(\Gamma)$ of $\Gamma$ satisfies the following properties: $\mathrm{Aut}(\Gamma)$ is isomorphic to $G$, $\mathrm{Aut}(\Gamma)$ acts semiregularly on the vertices of $\Gamma$, and the orbits of $\mathrm{Aut}(\Gamma)$ on the vertex set of $\Gamma$ form a partition into $n$ parts, giving $\Gamma$ the structure of an $n$-partite digraph. In this paper, for every positive integer $n$, we classify the finite groups admitting an $n$-partite digraphical representation.
- Published
- 2022
36. Deposition analysis and the hidden life of Bronze Age houses
- Author
-
Martin Kuna, Andrea Němcová, Tereza Šálková, Petr Manšík, and Ondřej Chvojka
- Subjects
Archeology ,History ,bronze age ,settlement discard ,actor-network ,house biography ,Human Factors and Ergonomics ,deposition analysis - Abstract
This paper applies deposition analysis to an unusual type of feature in Late Bronze Age settlements in Central Europe: long narrow trenches (referred to as 'long pits' in this text) with a characteristic standard form and alignment, as well as distinctive find contents, including high quantities of secondarily burned pottery fragments. In prehistoric research these features are a relatively new phenomenon that has attracted attention over the last two decades thanks to new excavations in Bohemia and Bavaria. Based on the finds from Březnice (Czechia), the authors conclude that the long pits were connected with closing rituals following the abandonment and burial of dwellings. Although no houses were directly documented at this site, their presence must be assumed, and their cultural biography can be reconstructed from the depositional characteristics of the accompanying finds. To fully understand the processes of deposition, the authors find it useful to focus not only on human agency but also on the relationships between the things themselves. In this way, houses are understood as the central element of a hybrid actor-network, a role that may have been strengthened by their ontological status as living beings.
- Published
- 2022
37. Centralizers of commutators in finite groups
- Author
-
Eloisa Detomi, Marta Morigi, and Pavel Shumyatsky
- Subjects
Centralizer ,Conjugacy classes ,Mathematics::Group Theory ,Algebra and Number Theory ,Centralizers ,Commutator ,Mathematics::Number Theory ,FOS: Mathematics ,Group Theory (math.GR) ,Commutators ,Mathematics - Group Theory ,20E45, 20F24, 20F14 - Abstract
Let $G$ be a finite group. A coprime commutator in $G$ is any element that can be written as a commutator $[x, y]$ for suitable $x, y \in G$ such that $\pi(x) \cap \pi(y) = \emptyset$. Here $\pi(g)$ denotes the set of prime divisors of the order of the element $g \in G$. An anti-coprime commutator is an element that can be written as a commutator $[x, y]$, where $\pi(x) = \pi(y)$. The main results of the paper are as follows. If |x(G)|
- Published
- 2022
38. Discrete Appell-Dunkl sequences and Bernoulli-Dunkl polynomials of the second kind
- Author
-
Extremiana Aldana, José Ignacio, Labarga, Edgar, Mínguez Ceniceros, Judit, and Varona, Juan Luis
- Subjects
Pure mathematics ,Applied Mathematics ,Mathematics::Classical Analysis and ODEs ,Context (language use) ,Bernoulli polynomials ,Exponential function ,Kernel (algebra) ,symbols.namesake ,Bernoulli's principle ,Operator (computer programming) ,Mathematics::Quantum Algebra ,symbols ,Mathematics::Representation Theory ,Real line ,Analysis ,Dunkl operator ,Mathematics - Abstract
Just as the Appell sequences of polynomials can be extended to the Dunkl context, where the ordinary derivative is replaced by the Dunkl operator on the real line and the exponential function by the so-called Dunkl kernel, one can expect that discrete Appell sequences admit a Dunkl extension as well. In this extension, the role of the ordinary translation is played by the Dunkl translation, which is a much more intricate operator. In this paper we define discrete Appell-Dunkl sequences of polynomials and give some of their properties and examples. In particular, we identify the suitable definition of the Bernoulli polynomials of the second kind in the Dunkl context. (The standard form of the rank-one Dunkl operator is recalled after this entry.)
- Published
- 2022
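For orientation, the rank-one Dunkl operator on the real line referred to above is commonly written as follows. This is a standard normalization quoted from memory, not necessarily the exact convention used in the paper:

```latex
% Rank-one Dunkl operator on the real line (standard normalization; the paper's
% conventions may differ). It reduces to the ordinary derivative when \alpha = -1/2.
\Lambda_\alpha f(x) \;=\; \frac{d}{dx} f(x)
  \;+\; \left(\alpha + \tfrac{1}{2}\right)\frac{f(x) - f(-x)}{x},
\qquad \alpha \ge -\tfrac{1}{2}.
```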
39. Bernstein operator method for approximate solution of singularly perturbed Volterra integral equations
- Author
-
Fatih Say, Khursheed J. Ansari, Mahmut Akyiğit, Fuat Usta, and [To be determined]
- Subjects
Singularly perturbed integral equation ,Applied Mathematics ,Numerical analysis ,Bernstein's approximation ,Numerical method ,Integral equation ,Volterra integral equation ,symbols.namesake ,Operator (computer programming) ,Convergence analysis ,Scheme (mathematics) ,Convergence (routing) ,symbols ,Dependability ,Applied mathematics ,Approximate solution ,Asymptotics ,Analysis ,Mathematics - Abstract
Approximate solution methods for integral equations play an active role in numerical analysis. This paper presents and tests an algorithm for the approximate solution of singularly perturbed Volterra integral equations via the Bernstein approximation technique. The computation of the numerical approximation is demonstrated and exemplified in matrix notation. In addition, the error bound and convergence of the numerical scheme are established. Finally, particular examples indicate the reliability and numerical capability of the introduced scheme in comparison with other numerical techniques. The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the research groups program under Grant number R.G.P. 2/172/42. (A generic Bernstein collocation sketch follows this entry.)
- Published
- 2021
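The following sketch illustrates the general idea of a Bernstein-basis collocation method for a Volterra integral equation of the second kind, u(t) = g(t) + ∫₀ᵗ K(t,s) u(s) ds on [0,1]. It is a generic illustration under assumed choices (equally spaced collocation points, trapezoidal quadrature, a toy test equation), not the singularly-perturbed scheme of the paper.

```python
import numpy as np
from math import comb

def bernstein_basis(n, t):
    """Rows: evaluation points; columns: Bernstein basis B_{k,n}(t), k = 0..n."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return np.array([[comb(n, k) * ti**k * (1.0 - ti)**(n - k) for k in range(n + 1)]
                     for ti in t])

def solve_volterra_bernstein(g, K, n=8, n_quad=200):
    """Collocation solution of u(t) = g(t) + int_0^t K(t, s) u(s) ds on [0, 1],
    with u(t) approximated by sum_k c_k B_{k,n}(t)."""
    t_col = np.linspace(0.0, 1.0, n + 1)          # collocation points
    A = bernstein_basis(n, t_col)                 # starts as B_{k,n}(t_i)
    for i, ti in enumerate(t_col):
        if ti == 0.0:
            continue                              # the integral term vanishes at t = 0
        s = np.linspace(0.0, ti, n_quad)
        w = np.full(n_quad, ti / (n_quad - 1))    # trapezoidal quadrature weights
        w[0] *= 0.5
        w[-1] *= 0.5
        A[i, :] -= (w * K(ti, s)) @ bernstein_basis(n, s)
    c = np.linalg.solve(A, g(t_col))              # coefficients of the expansion
    return lambda t: bernstein_basis(n, t) @ c    # callable approximation of u

# Test on u(t) = 1 + int_0^t u(s) ds, whose exact solution is u(t) = exp(t).
u = solve_volterra_bernstein(g=lambda t: np.ones_like(t), K=lambda t, s: np.ones_like(s))
print(u([0.5, 1.0]), np.exp([0.5, 1.0]))
```

The collocation matrix collects B_{k,n}(t_i) minus the quadrature approximation of ∫₀^{t_i} K(t_i,s) B_{k,n}(s) ds; solving the linear system yields the expansion coefficients.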
40. Homogenization of a Dirichlet semilinear elliptic problem with a strong singularity at u=0 in a domain with many small holes
- Author
-
Pedro J. Martínez-Aparicio, Daniela Giachetti, François Murat, Dipartimento di Scienze di Base e Applicate per l'Ingegneria (SBAI), Università degli Studi di Roma 'La Sapienza' (Sapienza University of Rome), Departamento de Matemática Aplicada y Estadística, Universidad Politécnica de Cartagena / Technical University of Cartagena (UPCT), and Laboratoire Jacques-Louis Lions (LJLL), Université Pierre et Marie Curie - Paris 6 (UPMC)-Université Paris Diderot - Paris 7 (UPD7)-Centre National de la Recherche Scientifique (CNRS)
- Subjects
strange term ,Pure mathematics ,Open set ,homogenization ,01 natural sciences ,Homogenization (chemistry) ,Dirichlet distribution ,symbols.namesake ,Singularity ,Mathematics - Analysis of PDEs ,strong singularity at u = 0 ,FOS: Mathematics ,[MATH.MATH-AP]Mathematics [math]/Analysis of PDEs [math.AP] ,Uniqueness ,0101 mathematics ,[MATH.MATH-AP] Mathematics [math]/Analysis of PDEs [math.AP] ,Mathematics ,semilinear elliptic problem ,010102 general mathematics ,35B25, 35B27, 35J25, 35J67 ,A domain ,Homogenization ,Perforated domains with Dirichlet boundary condition ,Semilinear elliptic problem ,Strong singularity at u=0 ,Analysis ,perforated domains with Dirichlet boundary condition ,010101 applied mathematics ,35B25, 35B27, 35J25, 35J6 ,symbols ,Analysis of PDEs (math.AP) - Abstract
In the present paper we perform the homogenization of the semilinear elliptic problem $u^\varepsilon \geq 0$ in $\Omega^\varepsilon$, $-\operatorname{div}(A(x) D u^\varepsilon) = F(x, u^\varepsilon)$ in $\Omega^\varepsilon$, $u^\varepsilon = 0$ on $\partial\Omega^\varepsilon$. In this problem $F(x,s)$ is a Carathéodory function such that $0 \leq F(x,s) \leq h(x)/\Gamma(s)$ a.e. $x \in \Omega$ for every $s > 0$, with $h$ in some $L^r(\Omega)$ and $\Gamma$ a $C^1([0,+\infty[)$ function such that $\Gamma(0) = 0$ and $\Gamma'(s) > 0$ for every $s > 0$. On the other hand, the open sets $\Omega^\varepsilon$ are obtained by removing many small holes from a fixed open set $\Omega$ in such a way that a "strange term" $\mu u^0$ appears in the limit equation in the case where the function $F(x,s)$ depends only on $x$. We already treated this problem in the case of a "mild singularity", namely the case where the function $F(x,s)$ satisfies $0 \leq F(x,s) \leq h(x)\left(\frac{1}{s}+1\right)$. In this case the solution $u^\varepsilon$ to the problem belongs to $H^1_0(\Omega^\varepsilon)$ and its definition is a "natural" and rather usual one. In the general case where $F(x,s)$ exhibits a "strong singularity" at $u=0$, which is the purpose of the present paper, the solution $u^\varepsilon$ to the problem only belongs to $H^1_{\mathrm{loc}}(\Omega^\varepsilon)$ and in general does not belong to $H^1_0(\Omega^\varepsilon)$ anymore, even if $u^\varepsilon$ vanishes on $\partial\Omega^\varepsilon$ in some sense. We therefore introduced a new notion of solution (in the spirit of solutions defined by transposition) for problems with a strong singularity. This definition allowed us to obtain existence, stability and uniqueness results. In the present paper, using this definition, we perform the homogenization of the above semilinear problem and prove that in the homogenized problem the "strange term" $\mu u^0$ still appears in the left-hand side, while the source term $F(x, u^0)$ is not modified in the right-hand side.
- Published
- 2018
41. A framework for data-driven adaptive GUI generation based on DICOM
- Author
-
Orazio Gambino, Vincenzo Cannella, Leonardo Rundo, Roberto Pirrone, and Salvatore Vitabile
- Subjects
0301 basic medicine ,Diagnostic Imaging ,Automated ,Computer science ,Data-driven GUI generation ,DICOM ,Faceted classification ,Graphical user interfaces ,Medical diagnostic software ,Algorithms ,Brain ,Cognition ,Computers ,Decision Support Systems, Clinical ,Feasibility Studies ,Humans ,Magnetic Resonance Imaging ,Medical Informatics ,Pattern Recognition, Automated ,Software ,Computer Graphics ,Radiology Information Systems ,User-Computer Interface ,Decision Support Systems ,Health Informatics ,Pattern Recognition ,computer.software_genre ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,Clinical ,0302 clinical medicine ,Human–computer interaction ,Graphical user interface ,Settore ING-INF/05 - Sistemi Di Elaborazione Delle Informazioni ,business.industry ,Computer Science Applications1707 Computer Vision and Pattern Recognition ,Computer Science Applications ,Visualization ,Software framework ,030104 developmental biology ,Workflow ,Information model ,Software design ,business ,computer - Abstract
Computer applications for diagnostic medical imaging generally provide a wide range of tools to support physicians in their daily diagnostic activities. Unfortunately, some functionalities are specialized for specific diseases or imaging modalities, while others are useless for the images under investigation. Nevertheless, the corresponding Graphical User Interface (GUI) widgets remain on the screen, reducing the image visualization area. As a consequence, the physician may suffer cognitive overload and visual stress, degrading performance, mainly because of unnecessary widgets. In clinical environments, a GUI must represent a sequence of steps for image investigation following a well-defined workflow. This paper proposes a software framework aimed at addressing these issues. Specifically, we designed a DICOM-based mechanism of data-driven GUI generation, referring to the examined body part and imaging modality as well as to the medical image analysis task to perform. In this way, the self-configuring GUI is generated on the fly, so that only the functionalities relevant to the current clinical scenario are active. Such a solution also provides tight integration with the DICOM standard, which considers various aspects of technology in medicine but does not address GUI specification. The proposed workflow is designed for diagnostic workstations with a local file system on interchange media operating inside or outside the hospital ward. Accordingly, the DICOMDIR conceptual data model, defined by a hierarchical structure, is exploited and extended to include the GUI information through a new Information Object Module (IOM) that reuses the DICOM information model. The proposed framework exploits the DICOM standard as an enabling technology for a self-consistent solution in medical diagnostic applications. In this paper we present a detailed description of the framework, its software design, and a proof-of-concept implementation as a plug-in for the OsiriX imaging software. (A minimal sketch of tag-driven tool selection follows this entry.)
- Published
- 2018
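The core idea above is to key the GUI configuration to DICOM attributes such as modality and examined body part. The sketch below illustrates that idea with the pydicom library; it is not the paper's DICOMDIR/IOM extension, and the tool names, lookup table, and file path are assumptions introduced for the example.

```python
from typing import List
import pydicom

# Illustrative mapping from (modality, body part) to the widget groups a viewer
# might enable; the actual framework derives this from an extended DICOMDIR
# information model rather than a hard-coded table.
TOOL_PROFILES = {
    ("MR", "BRAIN"): ["windowing", "segmentation", "volume_rendering"],
    ("CT", "CHEST"): ["windowing", "lung_nodule_tools"],
}
DEFAULT_TOOLS = ["windowing"]

def select_gui_tools(dicom_path: str) -> List[str]:
    """Read the DICOM header and return the tool set for the current study."""
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    modality = getattr(ds, "Modality", "")
    body_part = getattr(ds, "BodyPartExamined", "").upper()
    return TOOL_PROFILES.get((modality, body_part), DEFAULT_TOOLS)

# Example (hypothetical file path):
# print(select_gui_tools("study/IM0001.dcm"))
```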
42. Transformation of the family farm under rising land pressure: A theoretical essay
- Author
-
Jean-Philippe Platteau and Catherine Guirkinger
- Subjects
Market integration ,Economics and Econometrics ,Land ,business.industry ,Land law ,Agriculture ,West africa ,Land scarcity ,Market economy ,Property rights ,Africa ,Family farm ,Economics ,Family ,Economic system ,Land tenure ,business - Abstract
While the individualization of land tenure rules under conditions of growing land scarcity and increased market integration is well understood, much less is known about how the family farms possessing the land evolve. Inspired by first-hand evidence from West Africa, this paper argues that these units undergo a similar process of individualization, governed by the same forces as property rights in land. It provides a simple theoretical account of the coexistence of different forms of family when farms are heterogeneous in land endowments and technology is stagnant. In particular, it throws light on the factors determining the coexistence of collective fields and individual plots inside the family farm, and on those driving the possible splitting of the family. The paper also offers analytical insights into the sequence in which such forms succeed one another.
- Published
- 2015
- Full Text
- View/download PDF
43. Cytoplasmic versus periplasmic expression of site-specifically and bioorthogonally functionalized nanobodies using expressed protein ligation
- Author
-
Brecht Billen, Nick Devoogdt, Peter Adriaensens, Serge Muyldermans, Rebekka Hansen, Cécile Vincke, Wanda Guedens, Department of Bio-engineering Sciences, Cellular and Molecular Immunology, Translational Imaging Research Alliance, Medical Imaging, and Supporting clinical sciences
- Subjects
0301 basic medicine ,Cytoplasm ,Gene Expression ,medicine.disease_cause ,03 medical and health sciences ,Escherichia coli ,medicine ,Periplasmic expression and extraction ,nanobodies ,expressed protein ligation ,periplasmic expression and extraction ,click chemistry ,CuAAC ,Polyacrylamide gel electrophoresis ,Chemistry ,Binding properties ,Periplasmic space ,Single-Domain Antibodies ,Expressed protein ligation ,Cell biology ,030104 developmental biology ,Biochemistry ,Periplasm ,Click chemistry ,Bioorthogonal chemistry ,Ligation ,biotechnology - Abstract
Site-specific functionalization of nanobodies after introducing bioorthogonal groups offers the possibility of biofunctionalizing surfaces with a uniformly oriented layer of nanobodies. In this paper, expressed protein ligation (EPL) was used for site-specific alkynation of the model nanobody NbBcII10. In contrast to EPL constructs, which are typically expressed in the cytoplasm, nanobodies are expressed in the periplasm, whose oxidizing environment ensures correct folding and disulfide bond formation. Different pathways were explored to express the EPL constructs in the periplasm; in parallel, the effect of cytoplasmic expression on the functionality of NbBcII10 was also evaluated. Using Escherichia coli SHuffle®T7 cells, it was demonstrated that expression of the EPL complex in the cytoplasm was readily established and that site-specifically mono-alkynated nanobodies can be produced with the same binding properties as non-modified NbBcII10 expressed in the periplasm. In conclusion, this paper shows that periplasmic expression of the EPL complex is quite challenging, but cytoplasmic expression has proven to be a valuable alternative. This research is funded by FWO project G.0581.12N. The authors gratefully thank Prof. André Matagne (Université de Liège, Belgium) for the BcII antigen, Prof. J-P. Noben and Mr. E. Royackers for the MS measurements, and drs. Ema Romao for technical assistance during the SPR experiments. We further acknowledge the Hercules Foundation for the project "LC-MS@UHasselt: Linear Trap Quadrupole-Orbitrap mass spectrometer", and the Interreg IV-A project "BioMiMedics", financed by the EU and the province of Limburg (Belgium) (EMR INT4-1.2-2010-03/063). We further thank the Interuniversity Attraction Poles programme (P7/05) initiated by the Belgian Science Policy Office (BELSPO).
- Published
- 2017
- Full Text
- View/download PDF
44. Demographics, human capital, and the demand for housing
- Author
-
Thies Lindenthal, Piet Eichholtz, Finance, Externe publicaties SBE, and RS: GSBE EFME
- Subjects
Economics and Econometrics ,Labour economics ,education.field_of_study ,Demographics ,Population ,MODELS ,High education ,Housing demand ,Human capital ,MARKETS ,Residential real estate ,PRICES ,Economics ,education - Abstract
This paper investigates how the demand for residential real estate depends on age and other demographic characteristics at the household level. Based on a detailed cross-sectional survey of English households, it finds that housing demand is significantly determined by a household’s human capital, and that housing demand generally increases with age. After retirement it declines, but only to a small extent. High education levels, good health, and high income will increase a household’s demand for housing even when households age. These results are relevant for countries that experience population shrinkage, but where total housing demand could still grow in the future despite stagnating household numbers and aging populations. The paper further shows that changes in demographics lead to very heterogeneous demand responses for different housing attributes, providing information regarding the future qualitative demand for housing.
- Published
- 2014
- Full Text
- View/download PDF
45. Friezes, weak friezes, and T-paths
- Author
-
Ilke Canakci, Peter Jørgensen, and Mathematics
- Subjects
Cluster algebra ,Polygon dissection ,Frieze ,Mathematics::Combinatorics ,Applied Mathematics ,010102 general mathematics ,Frieze pattern ,Positivity ,01 natural sciences ,05B45, 05E15, 05E99, 51M20 ,Combinatorics ,Frieze group ,0103 physical sciences ,FOS: Mathematics ,Mathematics - Combinatorics ,Combinatorics (math.CO) ,010307 mathematical physics ,Generalised frieze pattern ,0101 mathematics ,Algebra over a field ,Computer Science::Data Structures and Algorithms ,Cluster expansion formula ,Semifield ,Mathematics - Abstract
Frieze patterns form a nexus between algebra, combinatorics, and geometry. T-paths with respect to triangulations of surfaces have been used to obtain expansion formulae for cluster variables. This paper introduces the concepts of weak friezes and T-paths with respect to dissections of polygons. Our main result is that weak friezes are characterised by satisfying an expansion formula which we call the T-path formula. We also show that weak friezes can be glued together, and that the resulting weak frieze is a frieze if and only if each of the weak friezes being glued is itself a frieze.
- Published
- 2021
- Full Text
- View/download PDF
46. Environmental risk assessment of pesticides in tropical terrestrial ecosystems
- Subjects
Soil invertebrates ,Soil contamination ,Tropics ,Pesticides ,Ecotoxicology - Abstract
Despite the increasing use of pesticides in tropical countries, research and legislative efforts have focused on their temperate counterparts. This paper presents a review of the literature on environmental risk assessment of pesticides for tropical terrestrial agroecosystems. It aims to evaluate potential differences in pesticide risk between temperate and tropical regions and to highlight research needs in the latter. Peculiarities of pesticide risk in tropical terrestrial agroecosystems are discussed in the subsections 1) agricultural practices; 2) research efforts; 3) fate and exposure; 4) toxicity testing methods; and 5) sensitivity. The intensive and often inadequate pesticide application practices in tropical areas are likely to result in relatively greater pesticide exposure in edge-of-field water bodies. Since pesticide fate may differ under tropical conditions, tropical scenarios should be developed for the models that estimate predicted environmental pesticide concentrations. Sensitivity comparisons do not indicate a consistently similar, greater, or lower relative sensitivity of tropical soil organisms compared to temperate organisms. However, several methods and procedures for application in the tropics still need to be developed, including: 1) identifying and collecting natural soils to be used as reference test substrates; 2) identifying and discerning the range of sensitivity of native test species to soil contaminants; 3) developing test guidelines applicable to tropical/subtropical conditions; and 4) developing methods and procedures for higher-tier testing for full development and implementation of environmental risk assessment schemes.
- Published
- 2019
- Full Text
- View/download PDF
47. In-vivo magnetic resonance imaging (MRI) of laminae in the human cortex
- Abstract
The human neocortex is organized radially into six layers which differ in their myelination and in the density and arrangement of neuronal cells. This cortical cyto- and myeloarchitecture plays a central role in anatomical and functional neuroanatomy but is accessible primarily through invasive histology. To overcome this limitation, several non-invasive MRI approaches have been, and are being, developed to resolve the anatomical cortical layers. As a result, recent studies on large populations and on structure-function relationships at the laminar level have become possible. Early proof-of-concept studies targeted conspicuous laminar structures such as the stria of Gennari in the primary visual cortex. Recent work has characterized the laminar structure outside the visual cortex, investigated the relationship between laminar structure and function, and demonstrated layer-specific maturation effects. This paper reviews the methods and in-vivo MRI studies on the anatomical layers of the human cortex based on conventional and quantitative MRI (excluding diffusion imaging), with a focus on the related challenges, promises, and potential future developments. The rapid development of MRI scanners, motion correction techniques, analysis methods, and biophysical modeling promises to overcome the challenges of spatial resolution, precision, and specificity in systematic imaging of cortical laminae.
- Published
- 2019
- Full Text
- View/download PDF
48. Adaptive numerical homogenization for upscaling single phase flow and transport
- Author
-
Mary F. Wheeler, Gurpreet Singh, Yerlan Amanbek, Hans van Duijn, and Energy Technology
- Subjects
Numerical Analysis ,Materials science ,Physics and Astronomy (miscellaneous) ,Adaptive mesh refinement ,Advection ,Applied Mathematics ,Computation ,Domain decomposition methods ,010103 numerical & computational mathematics ,Mechanics ,Mixed finite element method ,01 natural sciences ,Homogenization (chemistry) ,Computer Science Applications ,010101 applied mathematics ,Computational Mathematics ,Modeling and Simulation ,0101 mathematics ,Single phase ,Multiscale methods ,Porous medium ,Numerical homogenization ,Enhanced velocity - Abstract
We propose an adaptive multiscale method that improves the efficiency and accuracy of numerical computations by combining numerical homogenization and domain decomposition for modeling flow and transport. Our approach focuses on minimizing the use of fine-scale properties associated with advection and diffusion/dispersion. A fine-scale flow and transport problem is solved in subdomains defined by a transient region, where spatial changes in transported species concentrations are large, while a coarse-scale problem is solved in the remaining subdomains. Away from the transient region, effective macroscopic properties are obtained using local numerical homogenization. An Enhanced Velocity Mixed Finite Element Method (EVMFEM) is used as the domain decomposition scheme coupling these coarse and fine subdomains [1]. Specifically, homogenization is employed only when the coarse- and fine-scale problems can be decoupled to extract temporal invariants in the form of effective parameters. In this paper, a number of numerical tests demonstrate the capabilities of this adaptive numerical homogenization approach in upscaling flow and transport in heterogeneous porous media. (A textbook one-dimensional effective-coefficient computation is sketched after this entry.)
- Published
- 2019
- Full Text
- View/download PDF
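To make the notion of "effective macroscopic properties" concrete, recall the classical one-dimensional result: a rapidly oscillating diffusion coefficient a(x/ε) homogenizes to the harmonic mean of a over one period. The sketch below is that textbook computation, not the adaptive EVMFEM scheme of the paper; the layered-medium coefficient is an assumed example.

```python
import numpy as np

def effective_coefficient_1d(a, n_quad=10_000):
    """Homogenized coefficient for -d/dx( a(x/eps) du/dx ) = f in 1D:
    the harmonic mean of the 1-periodic coefficient a over its period."""
    y = np.linspace(0.0, 1.0, n_quad, endpoint=False)
    return 1.0 / np.mean(1.0 / a(y))

# Example: a layered medium alternating between conductivities 1 and 10
# with equal volume fractions.
a = lambda y: np.where(y % 1.0 < 0.5, 1.0, 10.0)
print(effective_coefficient_1d(a))  # harmonic mean of {1, 10}: about 1.818
```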
49. Axial and torsional self-excited vibrations of a distributed drill-string
- Author
-
Nathan van de Wouw, Ulf Jakob F. Aarsnes, and Dynamics and Control
- Subjects
Acoustics and Ultrasonics ,Infinite dimensional systems ,02 engineering and technology ,Rotation ,01 natural sciences ,Instability ,Drill-string vibrations ,Drill string ,Rate of penetration ,Hyperbolic systems ,0203 mechanical engineering ,0103 physical sciences ,Boundary value problem ,010301 acoustics ,Physics ,Mechanical Engineering ,Mechanics ,Condensed Matter Physics ,Vibration ,020303 mechanical engineering & transports ,Mechanics of Materials ,Weight on bit ,Distributed parameter systems ,Constant (mathematics) ,Stability ,Stick-slip - Abstract
We consider a distributed axial-torsional drill-string model with a rate-independent bit-rock interaction law to study the occurrence and non-local characteristics of axial and torsional self-excited vibrations caused by the regenerative effect. A first contribution of the paper is the derivation of a non-dimensional version of the full non-linear distributed drill-string/bit-rock interaction model, showing how it relates to a minimal set of characteristic quantities. Using this model, the study shows how multiple axial modes of the drill-string are excited or attenuated depending on the bit rotation rate, indicating that a lumped drill-string approximation is insufficient in the general case. A comprehensive simulation study is then performed to create a stability map for the occurrence of stick-slip oscillations. In particular, the significance of the axial topside boundary condition, i.e., constant velocity versus constant hook-load, is evaluated. A central finding is that increasing the axial loop gain (determined by the bit-rock parameters) tends both to enlarge the region of stable torsional dynamics and to increase the rate of penetration for a constant imposed weight on bit; this also corresponds to a more severe axial instability.
- Published
- 2019
50. A necessary and sufficient condition for unique skill assessment
- Author
-
Pasquale Anselmi, Luca Stefanutti, Egidio Robusto, and Jürgen Heller
- Subjects
Competence structure ,Psychology (all) ,Competence-based extension of the basic local independence model (CBLIM) ,Machine learning ,computer.software_genre ,050105 experimental psychology ,0504 sociology ,0501 psychology and cognitive sciences ,Identifiability ,Competence (human resources) ,General Psychology ,Knowledge structure ,Conjunctive skill function ,DINA model ,Problem function ,Applied Mathematics ,Mathematics ,business.industry ,05 social sciences ,Probabilistic logic ,050401 social sciences methods ,Artificial intelligence ,business ,computer - Abstract
The skill-based extension of the theory of knowledge structures provides the framework for addressing the problem of whether it is possible to uniquely assess the skills underlying the solution behavior exhibited on some set of items. Technically speaking, the paper characterizes the so-called conjunctive skill functions, which assign to each item a subset of skills sufficient for solving it, that allow a unique state of a given competence structure to be singled out. While previously proposed properties turn out to be either sufficient or necessary within this setting, the paper provides a condition that is both necessary and sufficient. Possible extensions of this characterization to more general skill functions are discussed. The conclusions cover suggestions on how to extend a test so that it allows for unique skill assessment, as well as implications for the identifiability of probabilistic models defined on top of the deterministic framework. (A toy illustration of the conjunctive rule follows this entry.)
- Published
- 2017
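As a concrete illustration of the conjunctive rule discussed above, the toy code below maps each competence state (set of mastered skills) to the knowledge state it delineates: an item is solvable exactly when all skills assigned to it are mastered. Unique skill assessment then amounts to this map being injective on the competence structure. The item-skill assignments and the competence structure are assumed examples, not taken from the paper.

```python
from itertools import chain, combinations

# Hypothetical conjunctive skill function: each item requires all listed skills.
SKILL_FUNCTION = {
    "item1": {"a"},
    "item2": {"a", "b"},
    "item3": {"b", "c"},
    "item4": {"c"},
}

def knowledge_state(competence_state: set) -> frozenset:
    """Items solvable under the conjunctive rule: required skills are a subset
    of the mastered skills."""
    return frozenset(item for item, req in SKILL_FUNCTION.items()
                     if req <= competence_state)

def delineation_is_unique(competence_structure) -> bool:
    """True iff distinct competence states yield distinct knowledge states,
    i.e. the observed solution behaviour identifies the skill state uniquely."""
    states = [knowledge_state(c) for c in competence_structure]
    return len(states) == len(set(states))

# Toy competence structure: all subsets of the skill set.
skills = {"a", "b", "c"}
competence_structure = [set(s) for s in chain.from_iterable(
    combinations(sorted(skills), r) for r in range(len(skills) + 1))]
print(delineation_is_unique(competence_structure))  # False: {} and {'b'} solve no item
```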