8,433 results for "COMPUTER simulation"
Search Results
2. The unusual kinetics of lactate dehydrogenase of Schistosoma mansoni and their role in the rapid metabolic switch after penetration of the mammalian host
- Author
-
Bexkens, Michiel L., Martin, Olivier M.F., van den Heuvel, Jos M., Schmitz, Marion G.J., Teusink, Bas, Bakker, Barbara M., van Hellemond, Jaap J., Haanstra, Jurgen R., Walkinshaw, Malcolm D., and Tielens, Aloysius G.M.
- Abstract
Lactate dehydrogenase (LDH) from Schistosoma mansoni has peculiar properties for a eukaryotic LDH. Schistosomal LDH (SmLDH) isolated from schistosomes, and the recombinantly expressed protein, are strongly inhibited by ATP, which is neutralized by fructose-1,6-bisphosphate (FBP). In the conserved FBP/anion binding site we identified two residues in SmLDH (Val187 and Tyr190) that differ from the conserved residues in LDHs of other eukaryotes, but are identical to conserved residues in FBP-sensitive prokaryotic LDHs. Three-dimensional (3D) models were generated to compare the structure of SmLDH with other LDHs. These models indicated that residues Val187, and especially Tyr190, play a crucial role in the interaction of FBP with the anion pocket of SmLDH. These 3D models of SmLDH are also consistent with a competitive model of SmLDH inhibition in which ATP (inhibitor) and FBP (activator) compete for binding in a well-defined anion pocket. The model of bound ATP predicts a distortion of the nearby key catalytic residue His195, resulting in enzyme inhibition. To investigate a possible physiological role of this allosteric regulation of LDH in schistosomes, we made a kinetic model in which the allosteric regulation of the glycolytic enzymes can be varied. The model showed that inhibition of LDH by ATP prevents fermentation to lactate in the free-living stages in water and ensures complete oxidation via the Krebs cycle of the endogenous glycogen reserves. This mechanism of allosteric inhibition by ATP prevents the untimely depletion of these glycogen reserves, the only fuel of the free-living cercariae. Neutralization by FBP of this ATP inhibition of LDH prevents accumulation of glycolytic intermediates when S. mansoni schistosomula are confronted with the sudden large increase in glucose availability upon penetration of the final host. It appears that the LDH of S. mansoni is special and well suited to deal with the variations in glucose availability the parasite encounters.
- Published
- 2024
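To make the competitive mechanism described in the abstract above concrete, here is a minimal, illustrative rate law: competitive inhibition of LDH by ATP, with FBP neutralizing the inhibition by competing for the same anion pocket. This is a sketch only, not the paper's fitted kinetic model; all rate constants are invented for illustration.

```python
def ldh_rate(s, vmax=1.0, km=0.5, atp=0.0, fbp=0.0, ki_atp=0.2, ka_fbp=0.1):
    """Michaelis-Menten rate with competitive inhibition by ATP.

    FBP competes with ATP for the same anion pocket, so the effective
    inhibitor term is scaled down as FBP rises (hypothetical constants).
    """
    inhibition = (atp / ki_atp) / (1.0 + fbp / ka_fbp)
    return vmax * s / (km * (1.0 + inhibition) + s)
```

With these toy numbers, adding ATP suppresses the rate and adding FBP largely restores it, mirroring the switch the abstract describes between free-living and intra-host conditions.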
3. Another Brick in the Wall?: Moral Education, Social Learning, and Moral Progress
- Author
-
Rehren, Paul and Sauer, Hanno
- Abstract
Many believe that moral education can cause moral progress. At first glance, this makes sense. A major goal of moral education is the improvement of the moral beliefs, values and behaviors of young people. Most would also consider all of these improvements to be important instances of moral progress. Moreover, moral education is a form of social learning, and there are good reasons to think that social learning processes shape episodes of progressive moral change. Despite this, we argue that instead of being a cause of moral change, the main effect of moral education is often to provide stability or continuity. In addition, we will argue that even when the conditions are right for moral education to contribute to moral change, it is far from clear that the resulting changes will always, or even most of the time, end up being progressive.
- Published
- 2024
4. Framing the Predictive Mind: Why we should think again about Dreyfus
- Author
-
Reynolds, Jack
- Abstract
In this paper I return to Hubert Dreyfus’ old but influential critique of artificial intelligence, redirecting it towards contemporary predictive processing models of the mind (PP). I focus on Dreyfus’ arguments about the “frame problem” for artificial cognitive systems, and his contrasting account of embodied human skills and expertise. The frame problem presents as a prima facie problem for practical work in AI and robotics, but also for computational views of the mind in general, including for PP. Indeed, some of the issues it presents seem more acute for PP, insofar as it seeks to unify all cognition and intelligence, and aims to do so without admitting any cognitive processes or mechanisms outside of the scope of the theory. I contend, however, that there is an unresolved problem for PP concerning whether it can both explain all cognition and intelligent behavior as minimizing prediction error with just the core formal elements of the PP toolbox, and also adequately comprehend (or explain away) some of the apparent cognitive differences between biological and prediction-based artificial intelligence, notably in regard to establishing relevance and flexible context-switching, precisely the features of interest to Dreyfus’ work on embodied indexicality, habits/skills, and abductive inference. I address several influential philosophical versions of PP, including the work of Jakob Hohwy and Andy Clark, as well as more enactive-oriented interpretations of active inference coming from a broadly Fristonian perspective.
- Published
- 2024
5. Anchoring as a Structural Bias of Deliberation
- Author
-
Rafiee Rad, Soroush, Braun, Sebastian Till, and Roy, Olivier
- Abstract
We study the anchoring effect in a computational model of group deliberation on preference rankings. Anchoring is a form of path-dependence through which the opinions of those who speak early have a stronger influence on the outcome of deliberation than the opinions of those who speak later. We show that anchoring can occur even among fully rational agents. We then compare the respective effects of anchoring and three other determinants of the deliberative outcome: the relative weight or social influence of the speakers, the popularity of a given speaker's opinion, and the homogeneity of the group. We find that, on average, anchoring has the strongest effect among these. We finally show that anchoring is often correlated with increases in proximity to single-plateauedness. We conclude that anchoring can constitute a structural bias that might hinder some of the otherwise positive effects of group deliberation.
- Published
- 2024
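A tiny sequential-updating sketch can illustrate the anchoring effect the abstract describes: agents speak in order, later speakers shift toward what has already been said, and the final group position ends up closer to the early speakers' opinions. This is an illustrative toy on numeric opinions, not the authors' model of preference rankings.

```python
def deliberate(opinions, w=0.5):
    """Sequential deliberation: each speaker pulls all later speakers
    toward their current opinion by weight w. Returns the final mean."""
    ops = list(opinions)
    for i in range(len(ops)):
        spoken = ops[i]
        for j in range(i + 1, len(ops)):
            ops[j] = (1 - w) * ops[j] + w * spoken
    return sum(ops) / len(ops)
```

The same opinion profile yields different outcomes depending only on speaking order (path-dependence): with opinions [1, 0, 0, 0] the outcome is pulled far above the plain mean of 0.25, while the reversed order [0, 0, 0, 1] lands far below it.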
6. Does it Harm Science to Suppress Dissenting Evidence?
- Author
-
Coates, Matthew
- Abstract
There has been increased attention on how scientific communities should respond to spurious dissent. One proposed method is to hide such dissent by preventing its publication. To investigate this, I computationally model the epistemic effects of hiding dissenting evidence on scientific communities. I find that it is typically epistemically harmful to hide dissent, even when there exists an agent purposefully producing biased dissent. However, hiding dissent also allows for quicker correct epistemic consensus among scientists. Quicker consensus may be important when policy decisions must be made quickly, such as during a pandemic, suggesting times when hiding dissent may be useful.
- Published
- 2024
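The kind of epistemic-community model the abstract refers to can be sketched with Bayesian agents pooling binary trial results, where a "hide dissent" policy withholds results that contradict the current majority view. This is a heavily simplified stand-in for the paper's model; all parameters and the suppression rule are illustrative assumptions.

```python
import random

def run_community(n_agents=6, rounds=100, p_true=0.55, hide=False, seed=1):
    """Agents hold Beta(a, b) beliefs about whether a new theory works.

    Each round, agents whose credence exceeds 0.5 run a trial and share
    the result; with hide=True, results contradicting the majority view
    are withheld (the 'hiding dissent' policy). Returns final credences."""
    rng = random.Random(seed)
    # Slightly varied priors so some agents start out willing to experiment.
    beliefs = [[1.0 + rng.random(), 1.0 + rng.random()] for _ in range(n_agents)]
    for _ in range(rounds):
        credences = [a / (a + b) for a, b in beliefs]
        majority_favors = sum(c > 0.5 for c in credences) * 2 > n_agents
        for c in credences:
            if c <= 0.5:
                continue  # skeptics run the old method, producing no data
            result = 1 if rng.random() < p_true else 0
            if hide and (result == 1) != majority_favors:
                continue  # dissenting evidence is suppressed from publication
            for belief in beliefs:
                belief[0] += result
                belief[1] += 1 - result
    return [a / (a + b) for a, b in beliefs]
```

Comparing runs with `hide=False` and `hide=True` across many seeds is the kind of experiment that exposes the accuracy-versus-speed trade-off the abstract reports.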
7. Deep Learning as Method-Learning: Pragmatic Understanding, Epistemic Strategies and Design-Rules
- Author
-
Kieval, Phillip Hintikka and Westerblad, Oscar
- Abstract
We claim that scientists working with deep learning (DL) models exhibit a form of pragmatic understanding that is not reducible to or dependent on explanation. This pragmatic understanding comprises a set of learned methodological principles that underlie DL model design-choices and secure their reliability. We illustrate this action-oriented pragmatic understanding with a case study of AlphaFold2, highlighting the interplay between background knowledge of a problem and methodological choices involving techniques for constraining how a model learns from data. Building successful models requires pragmatic understanding to apply modelling strategies that encourage the model to learn data patterns that will facilitate reliable generalisation.
- Published
- 2024
8. Analysis, modeling, and simulation solution of induced-draft fan rotor with excessive vibration: a case study
- Author
-
González Barbosa, Erick Alejandro, Vázquez Matínez, José Juan, Jurado Pérez, Fernando, Castro, Héctor, Rodríguez Ornelas, Francisco Javier, and González Barbosa, José Joel
- Abstract
In modern industry, computer modeling and simulation tools have become fundamental for estimating the behavior of rotodynamic systems. These computational tools make it possible to analyze potential modifications, as well as alternative design solutions, with the aim of improving performance. Rotodynamic systems, present in many industrial applications, increasingly require greater efficiency and reliability. Although deep learning methodologies for failure monitoring and diagnosis can improve these standards, the main challenge is the lack of training databases, a problem that can be addressed through experimental monitoring and computational analysis. This work analyzes the vibrations of two induced-draft fans with excessive vibration in a thermoelectric plant in Mexico. A vibration analysis was carried out by instrumenting and monitoring accelerometers located at critical points on the fans. The results of this experimental analysis were validated by computer simulation based on the finite element method (FEM). The results show that the operating speed of the induced-draft fans is very close to their natural frequency, causing considerable stress and potential failures due to excessive vibration. Finally, this work presents a practical solution for modifying the natural frequency of the induced-draft fans so that they can operate correctly at the required speed, thus mitigating the excessive vibration.
- Published
- 2024
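The core resonance problem in the abstract above reduces, in its simplest single-degree-of-freedom approximation, to the textbook relation f_n = (1/2π)·√(k/m): stiffening the rotor support raises the natural frequency away from the operating speed. This is a back-of-the-envelope sketch, not the paper's FEM model.

```python
import math

def natural_freq_hz(k, m):
    """Natural frequency of a single-DOF system: f_n = sqrt(k/m) / (2*pi).

    k: stiffness (N/m), m: mass (kg). A crude stand-in for the full
    FEM modal analysis used in the study."""
    return math.sqrt(k / m) / (2 * math.pi)

def separation_margin(operating_hz, k, m):
    """Relative distance between operating speed and natural frequency;
    small margins indicate a resonance risk."""
    fn = natural_freq_hz(k, m)
    return abs(operating_hz - fn) / fn
```

Doubling the stiffness raises f_n by √2, which is the lever the "practical solution" in the abstract pulls to detune the fan from its operating speed.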
9. Adaptative comfort modeling for a typical non-centrifugal cane sugar processing facility
- Author
-
Cortés Tovar, Giovanni Andrés, Osorio Hernández, Robinson, and Osório Saraz, Jairo Alexander
- Abstract
The production of non-centrifuged cane sugar in Colombia takes place in post-harvest facilities that generate significant heat and steam from the evaporation of cane juices during processing. This study aimed to improve the comfort conditions of one such facility in the municipality of Pacho, Cundinamarca, Colombia, through bioclimatic simulation, in which the wall enclosure and the lantern window were modified. The evaluation of adaptive thermal comfort revealed that configurations with an open perimeter and a lantern window showed the best bioclimatic behavior, attributable to the increased ventilation area and chimney effect, which optimize heat and mass transfer. A generalized pattern of thermal discomfort was also observed for workers in the thermal zone of the oven, owing to the high emissions of heat and steam in that area.
- Published
- 2024
12. spVC for the detection and interpretation of spatial gene expression variation
- Author
-
Yu, Shan and Li, Wei Vivian
- Abstract
Spatially resolved transcriptomics technologies have opened new avenues for understanding gene expression heterogeneity in spatial contexts. However, existing methods for identifying spatially variable genes often focus solely on statistical significance, limiting their ability to capture continuous expression patterns and integrate spot-level covariates. To address these challenges, we introduce spVC, a statistical method based on a generalized Poisson model. spVC seamlessly integrates constant and spatially varying effects of covariates, facilitating comprehensive exploration of gene expression variability and enhancing interpretability. Simulation and real data applications confirm spVC's accuracy in these tasks, highlighting its versatility in spatial transcriptomics analysis.
- Published
- 2024
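The statistical core of the abstract above — a Poisson model whose covariate effect varies over space — can be sketched with a log-likelihood in which the coefficient is a function of the spot's coordinates. Here the spatial variation is crudely linear; spVC itself uses smoother spatially varying coefficients, so treat this as an illustrative assumption, not the package's method.

```python
import math

def poisson_loglik(counts, covars, coords, beta):
    """Log-likelihood of a Poisson GLM with a spatially varying coefficient.

    beta = (b0, bx, by): the covariate effect at spot (sx, sy) is
    b0 + bx*sx + by*sy, a linear stand-in for spVC's smooth surfaces."""
    ll = 0.0
    for y, z, (sx, sy) in zip(counts, covars, coords):
        eta = (beta[0] + beta[1] * sx + beta[2] * sy) * z
        mu = math.exp(eta)  # log link
        ll += y * eta - mu - math.lgamma(y + 1)
    return ll
```

Maximizing this over `beta` (e.g., by gradient ascent) recovers how a gene's covariate effect changes across the tissue; a flat fit (bx = by = 0) versus a spatial fit is the comparison behind detecting spatially variable genes.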
13. Removing direct photocurrent artifacts in optogenetic connectivity mapping data via constrained matrix factorization.
- Author
-
Antin, Benjamin, Sadahiro, Masato, Gajowa, Marta, Triplett, Marcus, Adesnik, Hillel, and Paninski, Liam
- Abstract
Monosynaptic connectivity mapping is crucial for building circuit-level models of neural computation. Two-photon optogenetic stimulation, when combined with whole-cell recording, enables large-scale mapping of physiological circuit parameters. In this experimental setup, recorded postsynaptic currents are used to infer the presence and strength of connections. For many cell types, nearby connections are those we expect to be strongest. However, when the postsynaptic cell expresses opsin, optical excitation of nearby cells can induce direct photocurrents in the postsynaptic cell. These photocurrent artifacts contaminate synaptic currents, making it difficult or impossible to probe connectivity for nearby cells. To overcome this problem, we developed a computational tool, Photocurrent Removal with Constraints (PhoRC). Our method is based on a constrained matrix factorization model which leverages the fact that photocurrent kinetics are less variable than those of synaptic currents. We demonstrate on real and simulated data that PhoRC consistently removes photocurrents while preserving synaptic currents, despite variations in photocurrent kinetics across datasets. Our method allows the discovery of synaptic connections which would have been otherwise obscured by photocurrent artifacts, and may thus reveal a more complete picture of synaptic connectivity. PhoRC runs faster than real time and is available as open source software.
- Published
- 2024
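The key structural idea in the abstract above — photocurrent kinetics are stereotyped while synaptic currents vary — can be sketched as a rank-1 constrained factorization: each trial's photocurrent is a nonnegative scaling of one shared temporal template, which is estimated and subtracted. This is a simplified illustration of the constraint, not the PhoRC algorithm itself.

```python
def remove_photocurrent(traces, template):
    """Fit a nonnegative per-trial scale of a shared photocurrent template
    (least squares), then subtract it from each trace.

    traces: list of trials (lists of samples); template: one trial-length
    photocurrent waveform. Returns (cleaned traces, fitted scales)."""
    tt = sum(t * t for t in template)
    cleaned, scales = [], []
    for trial in traces:
        s = max(0.0, sum(a * b for a, b in zip(trial, template)) / tt)
        scales.append(s)
        cleaned.append([a - s * b for a, b in zip(trial, template)])
    return cleaned, scales
```

Because the scale is constrained to be nonnegative, trials whose currents anticorrelate with the template (e.g., purely synaptic inward currents of opposite shape) are left untouched rather than distorted.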
14. Competition between physical search and a weak-to-strong transition rate-limits kinesin binding times.
- Author
-
Nguyen, Trini, Narayanareddy, Babu, Gross, Steven, and Miles, Christopher
- Abstract
The self-organization of cells relies on the profound complexity of protein-protein interactions. Challenges in directly observing these events have hindered progress toward understanding their diverse behaviors. One notable example is the interaction between molecular motors and cytoskeletal systems that combine to perform a variety of cellular functions. In this work, we leverage theory and experiments to identify and quantify the rate-limiting mechanism of the initial association between a cargo-bound kinesin motor and a microtubule track. Recent advances in optical tweezers provide binding times for several lengths of kinesin motors trapped at varying distances from a microtubule, empowering the investigation of competing models. We first explore a diffusion-limited model of binding. Through Brownian dynamics simulations and simulation-based inference, we find this simple diffusion model fails to explain the experimental binding times, but an extended model that accounts for the ADP state of the molecular motor agrees closely with the data, even under the scrutiny of penalizing for additional model complexity. We provide quantification of both kinetic rates and biophysical parameters underlying the proposed binding process. Our model suggests that a typical binding event is limited by ADP state rather than physical search. Lastly, we predict how these association rates can be modulated in distinct ways through variation of environmental concentrations and physical properties.
- Published
- 2024
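The competition the abstract describes — physical search versus a weak-to-strong (ADP-gated) transition — can be framed as a two-step sequential process with exponential stages, where the mean binding time is the sum of the stage means and the slower stage is rate-limiting. A deliberately minimal sketch of that framing, not the paper's Brownian dynamics model:

```python
def mean_binding_time(k_search, k_transition):
    """Mean first binding time for a sequential two-step process:
    diffusive search (rate k_search) followed by a weak-to-strong
    transition gated by the motor's ADP state (rate k_transition)."""
    return 1.0 / k_search + 1.0 / k_transition

def rate_limiting_step(k_search, k_transition):
    """The slower rate dominates the mean binding time."""
    return "transition" if k_transition < k_search else "search"
```

The paper's conclusion corresponds to the regime k_transition << k_search, where the binding time is set by the ADP state rather than by the physical search.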
15. Structural and practical identifiability of contrast transport models for DCE-MRI
- Author
-
Conte, Martina, Scott, Jacob G, Woodall, Ryan T, Gutova, Margarita, Chen, Bihong T, Shiroishi, Mark S, Brown, Christine E, Munson, Jennifer M, and Rockne, Russell C
- Abstract
Contrast transport models are widely used to quantify blood flow and transport in dynamic contrast-enhanced magnetic resonance imaging. These models analyze the time course of the contrast agent concentration, providing diagnostic and prognostic value for many biological systems. Thus, ensuring accuracy and repeatability of the model parameter estimation is a fundamental concern. In this work, we analyze the structural and practical identifiability of a class of nested compartment models pervasively used in analysis of MRI data. We combine artificial and real data to study the role of noise in model parameter estimation. We observe that although all the models are structurally identifiable, practical identifiability strongly depends on the data characteristics. We analyze the impact of increasing data noise on parameter identifiability and show how the latter can be recovered with increased data quality. To complete the analysis, we show that the results do not depend on specific tissue characteristics or the type of enhancement patterns of contrast agent signal.
- Published
- 2024
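The distinction the abstract draws between structural and practical identifiability can be shown on a toy model y(t) = a·exp(−b·t): with data at a single time point, infinitely many (a, b) pairs fit perfectly, while a second time point pins the pair down. This is a generic illustration, not the DCE-MRI compartment models analyzed in the paper.

```python
import math

def ssq(a, b, data):
    """Sum of squared residuals of y(t) = a * exp(-b * t) against
    (t, y) observations; the objective profiled in identifiability
    analyses."""
    return sum((y - a * math.exp(-b * t)) ** 2 for t, y in data)
```

With one observation at t = 1, any pair satisfying a·exp(−b) = y fits exactly (a flat profile: non-identifiable); adding t = 0 breaks the degeneracy, which is the "recovered with increased data quality" effect the abstract reports.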
17. SuPreMo: a computational tool for streamlining in silico perturbation using sequence-based predictive models
- Author
-
Gjoni, Ketrin, Martelli, Pier Luigi, and Pollard, Katherine S
- Abstract
Summary: The increasing development of sequence-based machine learning models has raised the demand for manipulating sequences for this application. However, existing approaches to editing and evaluating genome sequences using models have limitations, such as incompatibility with structural variants, challenges in identifying responsible sequence perturbations, and the need for VCF file inputs and phased data. To address these bottlenecks, we present Sequence Mutator for Predictive Models (SuPreMo), a scalable and comprehensive tool for performing and supporting in silico mutagenesis experiments. We then demonstrate how pairs of reference and perturbed sequences can be used with machine learning models to prioritize pathogenic variants or discover new functional sequences.
Availability and implementation: SuPreMo was written in Python and can be run using only one line of code to generate both sequences and 3D genome disruption scores. The codebase, instructions for installation and use, and tutorials are on the GitHub page: https://github.com/ketringjoni/SuPreMo.
- Published
- 2024
18. Artificial neural networks for model identification and parameter estimation in computational cognitive models
- Author
-
Rmus, Milena, Cai, Ming Bo, Pan, Ti-Fen, Xia, Liyu, and Collins, Anne GE
- Abstract
Computational cognitive models have been used extensively to formalize cognitive processes. Model parameters offer a simple way to quantify individual differences in how humans process information. Similarly, model comparison allows researchers to identify which theories, embedded in different models, provide the best accounts of the data. Cognitive modeling uses statistical tools to quantitatively relate models to data that often rely on computing/estimating the likelihood of the data under the model. However, this likelihood is computationally intractable for a substantial number of models. These relevant models may embody reasonable theories of cognition, but are often under-explored due to the limited range of tools available to relate them to data. We contribute to filling this gap in a simple way using artificial neural networks (ANNs) to map data directly onto model identity and parameters, bypassing the likelihood estimation. We test our instantiation of an ANN as a cognitive model fitting tool on classes of cognitive models with strong inter-trial dependencies (such as reinforcement learning models), which offer unique challenges to most methods. We show that we can adequately perform both parameter estimation and model identification using our ANN approach, including for models that cannot be fit using traditional likelihood-based methods. We further discuss our work in the context of the ongoing research leveraging simulation-based approaches to parameter estimation and model identification, and how these approaches broaden the class of cognitive models researchers can quantitatively investigate.
- Published
- 2024
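The amortized-inference idea in the abstract above — train a network on simulated (data, parameter) pairs so it maps data directly to parameter estimates, bypassing the likelihood — can be sketched with a one-parameter toy. Here a linear "network" regresses the generating parameter onto a summary of simulated choice data; the simulator, summary, and learning rule are all illustrative assumptions, far simpler than the paper's ANNs.

```python
import random

def simulate(theta, rng, n=20):
    """Toy simulator: n Bernoulli 'choices' whose mean tracks theta."""
    return sum(rng.random() < theta for _ in range(n)) / n

def train_estimator(steps=5000, lr=0.05, seed=0):
    """LMS regression of theta onto the simulated summary statistic:
    draw theta ~ U(0,1), simulate data, nudge (w, b) toward the target.
    No likelihood is ever computed."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(steps):
        theta = rng.random()
        x = simulate(theta, rng)
        err = (w * x + b) - theta
        w -= lr * err * x
        b -= lr * err
    return w, b
```

After training, `w * x + b` estimates the parameter from fresh data alone, the same trick the paper scales up to likelihood-intractable reinforcement learning models.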
19. Assessing the Functionality of Transit and Shared Mobility Systems after Earthquakes
- Author
-
Soga, Kenichi, PhD, Comfort, Louise, PhD, Zhao, Bingyu, PhD, Tang, Yili (Kelly), PhD, and Han, Tianyu
- Abstract
Located within the seismically active Pacific Ring of Fire, California's transportation infrastructure, especially in the Bay Area, is susceptible to earthquakes. A review of current research and stakeholder interviews revealed a growing awareness of emergency preparedness among local jurisdictions and transit agencies in recent years. However, many have yet to formalize and publish their recovery plans. This study introduces an agent-based multimodal transportation simulation tool to enhance post-earthquake transportation resilience. Integrating a road network simulator with a metro system simulator, the tool employs an optimized Dijkstra-based algorithm to calculate optimal routes, travel times, and fares. A case study is conducted for the East Bay, using the simulator to gauge the impact of a compromised Bay Area Rapid Transit (BART) system. The results suggested that original BART passengers could face either longer commute times or higher costs during the recovery phase of a major earthquake without appropriate policies. Such outcomes could disproportionately burden low-income riders, affecting their mobility and overall travel time.
- Published
- 2024
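At the core of such a simulator is a generalized-cost shortest-path query. A minimal sketch (hypothetical network and cost weighting, not the tool's implementation) of Dijkstra's algorithm over edges carrying both travel time and fare:

```python
import heapq

def dijkstra(graph, src, dst, fare_weight=1.0):
    """Lowest generalized-cost path. graph: {node: [(nbr, minutes, fare), ...]}.
    Generalized cost = travel time + fare_weight * fare, a simple stand-in
    for a time/fare trade-off."""
    pq = [(0.0, 0.0, 0.0, src, [src])]   # (cost, minutes, fare, node, path)
    settled = {}
    while pq:
        cost, mins, fare, node, path = heapq.heappop(pq)
        if node == dst:
            return {"path": path, "minutes": mins, "fare": fare, "cost": cost}
        if node in settled and settled[node] <= cost:
            continue
        settled[node] = cost
        for nbr, m, f in graph.get(node, []):
            heapq.heappush(pq, (cost + m + fare_weight * f, mins + m,
                                fare + f, nbr, path + [nbr]))
    return None

# illustrative network: a fast paid transit leg A-B-D vs. a slow free road A-C-D
network = {"A": [("B", 10, 2.0), ("C", 30, 0.0)],
           "B": [("D", 15, 2.0)],
           "C": [("D", 20, 0.0)]}
```

Deleting the transit edges (a stylized BART outage) and re-running the same query returns the slower, fare-free road route, which is how a mode-loss scenario can be gauged.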
20. Modeling homologous chromosome recognition via nonspecific interactions
- Author
-
Marshall, Wallace F, and Fung, Jennifer C
- Abstract
In many organisms, most notably Drosophila, homologous chromosomes associate in somatic cells, a phenomenon known as somatic pairing, which takes place without double strand breaks or strand invasion, thus requiring some other mechanism for homologs to recognize each other. Several studies have suggested a "specific button" model, in which a series of distinct regions in the genome, known as buttons, can associate with each other, mediated by different proteins that bind to these different regions. Here, we use computational modeling to evaluate an alternative "button barcode" model, in which there is only one type of recognition site or adhesion button, present in many copies in the genome, each of which can associate with any of the others with equal affinity. In this model, buttons are nonuniformly distributed, such that alignment of a chromosome with its correct homolog, compared with a nonhomolog, is energetically favored, since to achieve nonhomologous alignment, chromosomes would have to mechanically deform in order to bring their buttons into mutual register. By simulating randomly generated nonuniform button distributions, we readily found many highly effective button barcodes, some of which achieve virtually perfect pairing fidelity. This model is consistent with existing literature on the effect of translocations of different sizes on homolog pairing. We conclude that a button barcode model can attain highly specific homolog recognition, comparable to that seen in actual cells undergoing somatic homolog pairing, without the need for specific interactions. This model may have implications for how meiotic pairing is achieved.
- Published
- 2024
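The recognition mechanism can be captured with a small score: the mechanical deformation needed to bring two button lists into register. A toy sketch (illustrative energy function and jitter values, not the authors' model), in which each homolog pair shares one randomly generated nonuniform barcode:

```python
import random

def deformation_cost(b1, b2):
    """Deformation needed to bring the i-th button of one chromosome into
    register with the i-th button of the other (sum of displacements)."""
    return sum(abs(x - y) for x, y in zip(sorted(b1), sorted(b2)))

def pairing_fidelity(n_pairs=20, n_buttons=30, length=1000.0, jitter=5.0, seed=0):
    """Fraction of chromosomes whose lowest-cost partner is the true homolog.
    Each homolog pair shares one random nonuniform barcode, perturbed by a
    small positional jitter on each copy."""
    rng = random.Random(seed)
    templates = [[rng.uniform(0, length) for _ in range(n_buttons)]
                 for _ in range(n_pairs)]
    perturb = lambda t: [x + rng.gauss(0, jitter) for x in t]
    copy_a = [perturb(t) for t in templates]
    copy_b = [perturb(t) for t in templates]
    correct = 0
    for i, chrom in enumerate(copy_a):
        costs = [deformation_cost(chrom, other) for other in copy_b]
        if min(range(n_pairs), key=lambda j: costs[j]) == i:
            correct += 1
    return correct / n_pairs
```

Because a nonhomolog's buttons sit at unrelated positions, its alignment cost is far larger than the jitter-sized cost of the true homolog, so random barcodes already discriminate almost perfectly.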
21. Adjusting Incidence Estimates with Laboratory Test Performances: A Pragmatic Maximum Likelihood Estimation-Based Approach.
- Author
-
Weng, Yingjie, Tian, Lu, Boothroyd, Derek, Lee, Justin, Zhang, Kenny, Lu, Di, Lindan, Christina, Bollyky, Jenna, Huang, Beatrice, Rutherford, George, Maldonado, Yvonne, and Desai, Manisha
- Abstract
Understanding the incidence of disease is often crucial for public policy decision-making, as observed during the COVID-19 pandemic. Estimating incidence is challenging, however, when the definition of incidence relies on tests that imperfectly measure disease, as in the case when assays with variable performance are used to detect the SARS-CoV-2 virus. To our knowledge, there are no pragmatic methods to address the bias introduced by the performance of labs in testing for the virus. In the setting of a longitudinal study, we developed a maximum likelihood estimation-based approach to estimate laboratory performance-adjusted incidence using the expectation-maximization algorithm. We constructed confidence intervals (CIs) using both bootstrap-based and large-sample interval estimator approaches. We evaluated our methods through extensive simulation and applied them to a real-world study (TrackCOVID), where the primary goal was to determine the incidence of and risk factors for SARS-CoV-2 infection in the San Francisco Bay Area from July 2020 to March 2021. Our simulations demonstrated that our method converged rapidly with accurate estimates under a variety of scenarios. Bootstrap-based CIs were comparable to the large-sample estimator CIs with a reasonable number of incident cases, shown via a simulation scenario based on the real TrackCOVID study. In more extreme simulated scenarios, the coverage of large-sample interval estimation outperformed the bootstrap-based approach. Results from the application to the TrackCOVID study suggested that assuming perfect laboratory test performance can lead to an inaccurate inference of the incidence. Our flexible, pragmatic method can be extended to a variety of disease and study settings.
- Published
- 2024
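The bias being corrected has a simple single-survey analogue: with imperfect sensitivity and specificity, the apparent prevalence is a distorted version of the true one, and an EM iteration recovers the maximum likelihood estimate. A minimal sketch (a cross-sectional stand-in, not the paper's longitudinal incidence model):

```python
def em_adjusted_prevalence(x, n, sens, spec, tol=1e-12, max_iter=100000):
    """EM estimate of true prevalence from x positives out of n tests,
    given test sensitivity and specificity."""
    p = x / n  # start from the apparent (naive) prevalence
    for _ in range(max_iter):
        # E-step: posterior P(truly infected) for positives and negatives
        post_pos = p * sens / (p * sens + (1 - p) * (1 - spec))
        post_neg = p * (1 - sens) / (p * (1 - sens) + (1 - p) * spec)
        # M-step: expected number of true infections, divided by n
        p_new = (x * post_pos + (n - x) * post_neg) / n
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p
```

At an interior optimum this EM fixed point coincides with the closed-form Rogan–Gladen correction (q + spec − 1)/(sens + spec − 1), where q is the apparent prevalence.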
22. Perturbation Variability Does Not Influence Implicit Sensorimotor Adaptation.
- Author
-
Wang, Tianhe, Avraham, Guy, Tsay, Jonathan, Abram, Sabrina, and Ivry, Richard
- Abstract
Implicit adaptation has been regarded as a rigid process that automatically operates in response to movement errors to keep the sensorimotor system precisely calibrated. This hypothesis has been challenged by recent evidence suggesting flexibility in this learning process. One compelling line of evidence comes from work suggesting that this form of learning is context-dependent, with the rate of learning modulated by error history. Specifically, learning was attenuated in the presence of perturbations exhibiting high variance compared to when the perturbation was fixed. However, these findings are confounded by the fact that the adaptation system corrects for errors of different magnitudes in a non-linear manner, with the adaptive response increasing proportionally for small errors and saturating for large errors. Through simulations, we show that this non-linear motor correction function is sufficient to explain the effect of perturbation variance without referring to an experience-dependent change in error sensitivity. Moreover, by controlling the distribution of errors experienced during training, we provide empirical evidence showing that there is no measurable effect of perturbation variance on implicit adaptation. As such, we argue that the evidence to date remains consistent with the rigidity assumption.
- Published
- 2024
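The confound can be reproduced in a few lines: with a correction function that is proportional for small errors and saturated for large ones, a high-variance perturbation schedule reaches a lower asymptote than a fixed schedule of the same mean, with no change in error sensitivity. All parameter values below are illustrative:

```python
import random

def saturating_correction(e, slope=0.15, cap=1.5):
    """Adaptive response: proportional for small errors, saturated for large ones."""
    return max(-cap, min(cap, slope * e))

def simulate_adaptation(perturbations, retention=0.98, burn_in=1000):
    """Trial-by-trial state update; returns the mean asymptotic state."""
    x, states = 0.0, []
    for p in perturbations:
        x = retention * x + saturating_correction(p - x)  # error = p - x
        states.append(x)
    tail = states[burn_in:]
    return sum(tail) / len(tail)

rng = random.Random(0)
n = 5000
fixed = simulate_adaptation([30.0] * n)                  # constant perturbation
variable = simulate_adaptation(                          # same mean, high variance
    [rng.choice([15.0, 45.0]) for _ in range(n)])
```

The attenuation under variance arises purely because the large errors on one side of the mean saturate while the small errors on the other side remain proportional.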
23. Development and validation of the Michigan Chronic Disease Simulation Model (MICROSIM).
- Author
-
Burke, James, Copeland, Luciana, Sussman, Jeremy, Hayward, Rodney, Gross, Alden, Briceño, Emily, Whitney, Rachael, Giordani, Bruno, Elkind, Mitchell, Manly, Jennifer, Gottesman, Rebecca, Gaskin, Darrell, Sidney, Stephen, Yaffe, Kristine, Sacco, Ralph, Heckbert, Susan, Hughes, Timothy, Galecki, Andrzej, and Levine, Deborah
- Abstract
Strategies to prevent or delay Alzheimer's disease and related dementias (AD/ADRD) are urgently needed, and blood pressure (BP) management is a promising strategy. Yet the effects of different BP control strategies across the life course on AD/ADRD are unknown. Randomized trials may be infeasible due to prolonged follow-up and large sample sizes. Simulation analysis is a practical approach to estimating these effects using the best available existing data. However, existing simulation frameworks cannot estimate the effects of BP control on both dementia and cardiovascular disease. This manuscript describes the design principles, implementation details, and population-level validation of a novel population-health microsimulation framework, the MIchigan ChROnic Disease SIMulation (MICROSIM), for The Effect of Lower Blood Pressure over the Life Course on Late-life Cognition in Blacks, Hispanics, and Whites (BP-COG) study of the effect of BP levels over the life course on dementia and cardiovascular disease. MICROSIM is an agent-based Monte Carlo simulation designed using computer programming best practices. MICROSIM estimates annual vascular risk factor levels and transition probabilities for all-cause dementia, stroke, myocardial infarction, and mortality in a nationally representative sample of US adults 18+ using the National Health and Nutrition Examination Survey (NHANES). MICROSIM models changes over time in risk factors, cognition, and dementia using a pooled dataset of individual participant data from 6 US prospective cardiovascular cohort studies. Cardiovascular risks were estimated using a widely used risk model and BP treatment effects were derived from meta-analyses of randomized trials. MICROSIM is an extensible, open-source framework designed to estimate the population-level impact of different BP management strategies and reproduces US population-level estimates of BP and other vascular risk factor levels, their change over time, and incident
- Published
- 2024
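The simulation loop of an annual-cycle microsimulation of this kind has a generic shape: draw a cohort, update risk factors each year, convert them to transition probabilities, and count events. A deliberately toy sketch (illustrative risk equation and treatment effect, not MICROSIM's calibrated models):

```python
import random

def run_microsim(n_agents=3000, years=30, treat=False, seed=1):
    """Toy annual-cycle microsimulation: each agent carries a systolic BP
    that drifts yearly; the annual event probability rises with BP."""
    events = 0
    for i in range(n_agents):
        rng = random.Random(seed * 100003 + i)   # per-agent random stream
        sbp = rng.gauss(130, 15)
        for _ in range(years):
            if treat and sbp > 130:
                sbp -= 10                        # stylized BP treatment effect
            sbp += rng.gauss(0.5, 2.0)           # secular drift in BP
            p_event = 2e-4 * max(sbp - 90, 0.0)  # illustrative annual risk
            if rng.random() < p_event:
                events += 1
                break                            # first event ends follow-up
    return events / n_agents
```

With per-agent random streams, treatment can only lower each agent's blood-pressure path, so the treated event rate is never higher than the untreated one run-for-run.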
24. The manatee variational autoencoder model for predicting gene expression alterations caused by transcription factor perturbations.
- Author
-
Yang, Ying, Seninge, Lucas, Wang, Ziyuan, Oro, Anthony, Stuart, Joshua, and Ding, Hongxu
- Abstract
We present the Manatee variational autoencoder model to predict transcription factor (TF) perturbation-induced transcriptomes. We demonstrate that the Manatee in silico perturbation analysis recapitulates target transcriptomic phenotypes in diverse cellular lineage transitions. We further propose the Manatee in silico screening analysis for prioritizing TF combinations targeting desired transcriptomic phenotypes.
- Published
- 2024
25. Towards silent and efficient flight by combining bioinspired owl feather serrations with cicada wing geometry.
- Author
-
Wei, Zixiao, Wang, Stanley, Farris, Sean, Chennuri, Naga, Wang, Ningping, Shinsato, Stara, Demir, Kahraman, Horii, Maya, and Gu, Grace
- Abstract
As natural predators, owls fly with astonishing stealth due to the serrated feather morphology that produces advantageous flow characteristics. Traditionally, these serrations are tailored for airfoil edges with simple two-dimensional patterns, limiting their effect on noise reduction while negotiating tradeoffs in aerodynamic performance. Conversely, the intricately structured wings of cicadas have evolved for effective flapping, presenting a potential blueprint for alleviating these aerodynamic limitations. In this study, we formulate a synergistic design strategy that harmonizes noise suppression with aerodynamic efficiency by integrating the geometrical attributes of owl feathers and cicada forewings, culminating in a three-dimensional sinusoidal serration propeller topology that facilitates both silent and efficient flight. Experimental results show that our design yields a reduction in overall sound pressure levels by up to 5.5 dB and an increase in propulsive efficiency by over 20% compared to the current industry benchmark. Computational fluid dynamics simulations validate the efficacy of the bioinspired design in augmenting surface vorticity and suppressing noise generation across various flow regimes. This topology can advance the multifunctionality of aerodynamic surfaces for the development of quieter and more energy-saving aerial vehicles.
- Published
- 2024
26. Transfer of visual perceptual learning over a task-irrelevant feature through feature-invariant representations: Behavioral experiments and model simulations.
- Author
-
Liu, Jiajuan, Lu, Zhong-Lin, and Dosher, Barbara
- Abstract
A large body of literature has examined specificity and transfer of perceptual learning, suggesting a complex picture. Here, we distinguish between transfer over variations in a task-relevant feature (e.g., transfer of a learned orientation task to a different reference orientation) and transfer over a task-irrelevant feature (e.g., transfer of a learned orientation task to a different retinal location or different spatial frequency), and we focus on the mechanism for the latter. Experimentally, we assessed whether learning a judgment of one feature (such as orientation) using one value of an irrelevant feature (e.g., spatial frequency) transfers to another value of the irrelevant feature. Experiment 1 examined whether learning in eight-alternative orientation identification with one or multiple spatial frequencies transfers to stimuli at five different spatial frequencies. Experiment 2 paralleled Experiment 1, examining whether learning in eight-alternative spatial-frequency identification at one or multiple orientations transfers to stimuli with five different orientations. Training the orientation task with a single spatial frequency transferred widely to all other spatial frequencies, with a tendency to specificity when training with the highest spatial frequency. Training the spatial frequency task fully transferred across all orientations. Computationally, we extended the identification integrated reweighting theory (I-IRT) to account for the transfer data (Dosher, Liu, & Lu, 2023; Liu, Dosher, & Lu, 2023). Just as location-invariant representations in the original IRT explain transfer over retinal locations, incorporating feature-invariant representations effectively accounted for the observed transfer. Taken together, we suggest that feature-invariant representations can account for transfer of learning over a task-irrelevant feature.
- Published
- 2024
27. A Technique to Quantify Very Low Activities in Regions of Interest With a Collimatorless Detector
- Author
-
Caravaca, Javier, Bobba, Kondapa Naidu, Du, Shixian, Peter, Robin, Gullberg, Grant T, Bidkar, Anil P, Flavell, Robert R, and Seo, Youngho
- Abstract
We present a new method to measure sub-microcurie activities of photon-emitting radionuclides in organs and lesions of small animals in vivo. Our technique, named the collimator-less likelihood fit, combines a very high sensitivity collimatorless detector with a Monte Carlo-based likelihood fit in order to estimate the activities in previously segmented regions of interest along with their uncertainties. This is done directly from the photon projections in our collimatorless detector and from the region of interest segmentation provided by an x-ray computed tomography scan. We have extensively validated our approach with 225Ac experimentally in spherical phantoms and mouse phantoms, and also numerically with simulations of a realistic mouse anatomy. Our method yields statistically unbiased results with uncertainties smaller than 20% for activities as low as ~111 Bq (3 nCi) and for exposures under 30 minutes. We demonstrate that our method yields more robust recovery coefficients when compared to SPECT imaging with a commercial pre-clinical scanner, especially at very low activities. Thus, our technique is complementary to traditional SPECT/CT imaging since it provides a more accurate and precise organ and tumor dosimetry, albeit with more limited spatial information. Finally, our technique is especially significant in extremely low-activity scenarios when SPECT/CT imaging is simply not viable.
- Published
- 2024
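The activity fit described here shares its structure with classical Poisson emission reconstruction: detector-bin counts are modeled as a response matrix times ROI activities, and the likelihood is maximized. A pure-Python stand-in using the standard MLEM multiplicative update (illustrative response matrix; the paper's fit and uncertainty treatment are more involved):

```python
def fit_activities(counts, response, n_iter=2000):
    """MLEM: maximize the Poisson likelihood of detector-bin counts given
    activities in segmented ROIs. response[i][j] = probability that a decay
    in ROI j produces a count in detector bin i."""
    n_bins, n_rois = len(response), len(response[0])
    sens = [sum(response[i][j] for i in range(n_bins)) for j in range(n_rois)]
    a = [sum(counts) / sum(sens)] * n_rois    # flat initial guess
    for _ in range(n_iter):
        expected = [sum(response[i][j] * a[j] for j in range(n_rois))
                    for i in range(n_bins)]
        ratio = [c / e if e > 0 else 0.0 for c, e in zip(counts, expected)]
        back = [sum(response[i][j] * ratio[i] for i in range(n_bins))
                for j in range(n_rois)]
        a = [a[j] * back[j] / sens[j] for j in range(n_rois)]
    return a
```

In the paper's setting the response matrix comes from Monte Carlo transport through the mouse anatomy; here a small synthetic matrix suffices to show that consistent counts are inverted back to the true ROI activities.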
28. Advances in Difference-in-differences Methods for Policy Evaluation Research.
- Author
-
Wang, Guangyi, Hamad, Rita, and White, Justin
- Abstract
Difference-in-differences (DiD) is a powerful, quasi-experimental research design widely used in longitudinal policy evaluations with health outcomes. However, DiD designs face several challenges to ensuring reliable causal inference, such as when policy settings are more complex. Recent economics literature has revealed that DiD estimators may exhibit bias when heterogeneous treatment effects, a common consequence of staggered policy implementation, are present. To deepen our understanding of these advancements in epidemiology, in this methodologic primer, we start by presenting an overview of DiD methods. We then summarize fundamental problems associated with DiD designs with heterogeneous treatment effects and provide guidance on recently proposed heterogeneity-robust DiD estimators, which are increasingly being implemented by epidemiologists. We also extend the discussion to violations of the parallel trends assumption, which has received less attention. Last, we present results from a simulation study that compares the performance of several DiD estimators under different scenarios to enhance understanding and application of these methods.
- Published
- 2024
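The canonical 2x2 design that the primer builds on is just a difference of group-mean differences. A toy sketch with made-up numbers (the heterogeneity-robust estimators discussed in the primer generalize this to staggered adoption):

```python
def mean(xs):
    return sum(xs) / len(xs)

def did_estimate(y):
    """Canonical 2x2 difference-in-differences:
    (treated post - pre) minus (control post - pre)."""
    return ((mean(y['treated']['post']) - mean(y['treated']['pre']))
            - (mean(y['control']['post']) - mean(y['control']['pre'])))
```

The control group's pre/post change stands in for the treated group's counterfactual trend, which is exactly where the parallel trends assumption enters.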
29. Pushing the high count rate limits of scintillation detectors for challenging neutron-capture experiments
- Author
-
Universitat Politècnica de Catalunya. Departament de Física, Universitat Politècnica de Catalunya. ANT - Advanced Nuclear Technologies Research Group, Balibrea Correa, Javier, Lerendegui Marco, Jorge, Babiano Suárez, Víctor, Domingo Pardo, César, Ladarescu, Ion, Tarifeño Saldivia, Ariel, Fuente Rosales, Gabriel de la, Alcayne Aicua, Víctor, Cano Ott, Daniel, González Romero, Enrique Miguel, Casanovas Hoste, Adrià, Calviño Tavares, Francisco, and Cortés Rossell, Guillem Pere
- Abstract
One of the critical aspects for the accurate determination of neutron capture cross sections when combining time-of-flight and total energy detector techniques is the characterization and control of systematic uncertainties associated with the measuring devices. In this work we explore the most conspicuous effects associated with harsh count rate conditions: dead-time and pile-up effects. Both effects, when not properly treated, can lead to large systematic uncertainties and bias in the determination of neutron cross sections. In the majority of neutron capture measurements carried out at the CERN n_TOF facility, the detectors of choice are C6D6 liquid-scintillator detectors, either in the form of large-volume cells or the recently commissioned sTED detector array, which consists of much smaller-volume modules. To account for the aforementioned effects, we introduce a Monte Carlo model for these detectors mimicking harsh count rate conditions similar to those encountered at the CERN n_TOF 20 m flight path vertical measuring station. The model parameters are extracted by comparison with the experimental data taken at the same facility during the 2022 experimental campaign. We propose a novel methodology to treat both dead-time and pile-up effects simultaneously for these fast detectors and check the applicability to experimental data from 197Au(n, gamma), including the saturated 4.9 eV resonance, which is an important normalization component for neutron cross section measurements., This work has been carried out in the framework of a project funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (ERC Consolidator Grant project HYMNS, with grant agreement No. 681740). This work was supported by grant ICJ220-045122-I funded by MCIN/AEI/10.13039/501100011033 and by European Union NextGenerationEU/PRTR.
The authors acknowledge support from the Spanish Ministerio de Ciencia e Innovación under grants PID2019-104714GB-C21, PID2022-138297NB-C21 and the funding agencies of the participating institutes. The authors acknowledge the financial support from MCIN and the European Union NextGenerationEU and Generalitat Valenciana in the call PRTR PC I+D+i ASFAE/2022/027., Article signed by 132 authors: J. Balibrea-Correa, J. Lerendegui-Marco, V. Babiano-Suarez, C. Domingo-Pardo, I. Ladarescu, A. Tarifeño-Saldivia, G. de la Fuente-Rosales, V. Alcayne, D. Cano-Ott, E. González-Romero, T. Martínez, E. Mendoza, A. Pérez de Rada, J. Plaza del Olmo, A. Sánchez-Caballero, A. Casanovas, F. Calviño, S. Valenta, O. Aberle, S. Altieri, S. Amaducci, J. Andrzejewski, M. Bacak, C. Beltrami, S. Bennett, A.P. Bernardes, E. Berthoumieux, R. Beyer, M. Boromiza, D. Bosnar, M. Caamaño, M. Calviani, D.M. Castelluccio, F. Cerutti, G. Cescutti, S. Chasapoglou, E. Chiaveri, P. Colombetti, N. Colonna, P. Console Camprini, G. Cortés, M.A. Cortés-Giraldo, L. Cosentino, S. Cristallo, S. Dellmann, M. Di Castro, S. Di Maria, M. Diakaki, M. Dietz, R. Dressler, E. Dupont, I. Durán, Z. Eleme, S. Fargier, B. Fernández, B. Fernández-Domínguez, P. Finocchiaro, S. Fiore, V. Furman, F. García-Infantes, A. Gawlik-Ramikega, G. Gervino, S. Gilardoni, C. Guerrero, F. Gunsing, C. Gustavino, J. Heyse, W. Hillman, D.G. Jenkins, E. Jericha, A. Junghans, Y. Kadi, K. Kaperoni, G. Kaur, A. Kimura, I. Knapová, M. Kokkoris, Y. Kopatch, M. Krtička, N. Kyritsis, C. Lederer-Woods, G. Lerner, A. Manna, A. Masi, C. Massimi, P. Mastinu, M. Mastromarco, E.A. Maugeri, A. Mazzone, A. Mengoni, V. Michalopoulou, P.M. Milazzo, R. Mucciola, F. Murtas, E. Musacchio-Gonzalez, A. Musumarra, P. Negret, P. Pérez-Maroto, N. Patronis, J.A. Pavón-Rodríguez, M.G. Pellegriti, J. Perkowski, C. Petrone, E. Pirovano, S. Pomp, I. Porras, J. Praena, J.M. Quesada, R. Reifarth, D. Rochman, Y. Romanets, C. Rubbia, M. Sabaté-Gilarte, P. Schillebeeckx, D. Schumann, A. Sekhar, A.G. Smith, N.V. Sosnin, M.E. Stamati, A. Sturniolo, G. Tagliente, D. Tarrío, P. Torres-Sánchez, E. Vagena, V. Variale, P. Vaz, G. Vecchio, D. Vescovi, V. Vlachoudis, R. Vlastou, A. Wallner, P.J. Woods, T. Wright, R. Zarrella, P. Žugec, Postprint (published version)
- Published
- 2024
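Dead-time losses of the kind being modeled are easy to reproduce: pass a Poisson event train through a non-paralyzable dead-time window and compare recorded with true counts. A generic sketch (illustrative rate and dead-time values, not the n_TOF detector model; pile-up, which merges pulse amplitudes, is omitted):

```python
import random

def simulate_deadtime(rate, tau, t_total, seed=0):
    """Poisson event train through a non-paralyzable dead time tau:
    events arriving within tau of the last *recorded* event are lost."""
    rng = random.Random(seed)
    t, last, recorded, true_events = 0.0, -1e30, 0, 0
    while True:
        t += rng.expovariate(rate)   # exponential inter-arrival times
        if t > t_total:
            break
        true_events += 1
        if t - last >= tau:
            recorded += 1
            last = t
    return true_events, recorded
```

For a non-paralyzable system the expected recorded rate is m = n/(1 + n*tau), i.e. half the true rate when n*tau = 1 as in the test values below.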
30. Estudi de l'aplicació de la simulació en la planificació d'obres
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial, Figueras Jové, Jaume, Guasch Petit, Antonio, and Solé Pajuelo, Adrià
- Abstract
Construction projects are often exposed to problems related to the planning and execution of the works, arising from a lack of organization, a poor distribution of resources, or other external factors that cause delays. An increase in project duration raises construction costs, which can lead to losses for the company. The aim of this project is to study the different methods for producing time schedules in the construction sector and then to analyze the benefits and drawbacks of each alternative. Different software packages currently used to organize tasks are examined, and the different scenarios toward which new advances in the construction field are pointing are also considered, with the goal of comparing the potential of simulation against the other techniques currently applied. Based on a real case study, the aim is to develop a simplified simulation model that any professional in the sector can use, including those with no background in simulation. Subsequently, the experimental results obtained from the simulation are compared with data extracted from a real project under execution to verify the validity of the model. The analysis and comparison of experimental results with real values show that, even if some project activities deviate from the stipulated schedule because of some external factor, the objectives can still be met by monitoring and updating the schedule. Nevertheless, it is concluded that other trends have more potential than simulation within the construction sector.
- Published
- 2024
31. Scheduling algorithms for time-sensitive wireless networks suited for control and sensing industrial applications
- Author
-
Universitat Politècnica de Catalunya. Departament de Teoria del Senyal i Comunicacions, Villares Piera, Nemesio Javier, and El Kaisi Rahmoun, Youssef
- Abstract
The integration of Time-Sensitive Networking (TSN) into wireless networks would enable applications that require bounded delays and rely on wireless links. This thesis proposes three schedulers for wireless networks with isochronous traffic. The proposed algorithms achieve bounded delay and null jitter (delay variability). An inherent delay-to-throughput trade-off is present in each of the proposals, ranging from minimal delays but minimal throughput to maximum throughput at the expense of substantially increased delays. As a highlight, the proposal named the partially overlapped windows scheduler provides an adjustable trade-off between delay and throughput, which allows for more flexible operation. Furthermore, the use of spatial diversity techniques to improve the behavior of the schedulers is studied. Finally, this work evaluates the relative performance of the proposals in specific scenarios through software simulations.
- Published
- 2024
32. Multiple Time Stepping Methods for Numerical Simulation of Charge Transfer by Mobile Discrete Breathers
- Author
-
Universidad de Sevilla. Departamento de Física Aplicada I, Universidad de Sevilla. FQM280: Física No Lineal, Ministerio de Ciencia, Innovación y Universidades (MICINN). España, Junta de Andalucía, Bajārs, Jānis, and Archilla, Juan F. R.
- Abstract
In this work we propose new structure-preserving multiple time stepping methods for numerical simulation of charge transfer by intrinsic localized modes in nonlinear crystal lattice models. We consider, without loss of generality, one-dimensional crystal lattice models described by classical Hamiltonian dynamics, whereas charge (electron or hole) is modeled as a quantum particle within the tight-binding approximation. The proposed multiple time stepping schemes are based on symplecticity-preserving symmetric splitting methods recently developed by the authors. The originally developed explicit splitting methods do not exactly conserve total charge probability; thus, to improve charge probability conservation and to better resolve high-frequency oscillations of the charge in numerical simulations with large time steps, we incorporate a multiple time stepping approach when solving the split charge equations. Improved numerical results for charge transfer by mobile discrete breathers with the multiple time stepping methods are demonstrated in a crystal lattice model example.
- Published
- 2024
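The multiple time stepping idea — slow forces applied as half kicks with a long step, stiff fast forces sub-stepped inside a symmetric, symplecticity-preserving composition — can be shown on a toy Hamiltonian. A generic RESPA-style sketch (not the authors' charge-transfer scheme; all parameters illustrative):

```python
import math

def mts_step(q, p, dt, n_sub, f_slow, f_fast, mass=1.0):
    """One symmetric multiple-time-stepping step:
    half kick with the slow force, n_sub velocity-Verlet substeps with the
    fast force, half kick with the slow force."""
    p += 0.5 * dt * f_slow(q)
    h = dt / n_sub
    for _ in range(n_sub):
        p += 0.5 * h * f_fast(q)
        q += h * p / mass
        p += 0.5 * h * f_fast(q)
    p += 0.5 * dt * f_slow(q)
    return q, p

# toy system: stiff spring (fast) plus soft spring (slow), both at the origin
k_fast, k_slow = 100.0, 1.0
f_fast = lambda q: -k_fast * q
f_slow = lambda q: -k_slow * q
energy = lambda q, p: 0.5 * p * p + 0.5 * (k_fast + k_slow) * q * q

q, p = 1.0, 0.0
e0 = energy(q, p)
for _ in range(2000):
    q, p = mts_step(q, p, dt=0.05, n_sub=10, f_slow=f_slow, f_fast=f_fast)
drift = abs(energy(q, p) - e0) / e0
```

Because the composition is symmetric and symplectic, the energy error stays bounded over the whole run instead of drifting.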
33. Parasites and the ecology of fear: Nonconsumptive effects of ectoparasites on larvae reduce growth in simulated Drosophila populations
- Author
-
Horn, Collin J., Luong, Lien T., and Visscher, D. (Darcy)
- Abstract
Predators negatively affect prey outside of direct attack, and these nonconsumptive effects (NCEs) may cause over half the impacts of predators on prey populations. This “ecology of fear” framework has been extended to host–parasite interactions. The NCEs of parasites are thought to be small relative to those of predators. However, recent research shows ectoparasites exert NCEs on multiple life stages of Drosophila. In this study, we apply recent data to a matrix-based model of fly populations experiencing infection/consumption and NCEs from an ectoparasitic mite. We found the NCEs of parasites on larvae, which are not actively parasitized, decreased the size of simulated host populations. By contrast, the NCEs on adult flies increased population size through compensatory egg production. The negative NCEs on larvae outweighed the positive effects on adults to reduce population size. This study suggests that parasitic NCEs can suppress host populations independent of infection.
- Published
- 2024
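The population projection behind such results is a small stage-structured matrix iteration. A toy two-stage version (illustrative vital rates and NCE effect sizes, not the paper's fitted matrix):

```python
def project(pop, years, larval_survival, fecundity):
    """Two-stage (larvae, adults) projection of total population size.
    Each year: new larvae = fecundity * adults;
    new adults = surviving larvae + 50% of surviving adults."""
    larvae, adults = pop
    for _ in range(years):
        larvae, adults = (fecundity * adults,
                          larval_survival * larvae + 0.5 * adults)
    return larvae + adults

start = (100.0, 100.0)
base = project(start, 20, larval_survival=0.4, fecundity=2.0)
# larval NCEs reduce survival-to-adult; adult NCEs boost egg production
nce_larvae = project(start, 20, larval_survival=0.3, fecundity=2.0)
nce_both = project(start, 20, larval_survival=0.3, fecundity=2.2)
```

With these illustrative rates, lower larval survival (the larval NCE) depresses growth more than the compensatory fecundity boost (the adult NCE) restores it, matching the paper's qualitative conclusion.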
34. Coronary artery calcium quantification technique using dual energy material decomposition: a simulation study.
- Author
-
Black, Dale, Singh, Tejus, and Molloi, Sabee
- Abstract
Coronary artery calcification (CAC) is a significant predictor of cardiovascular disease, with current detection methods like Agatston scoring having limitations in sensitivity. This study aimed to evaluate the effectiveness of a novel CAC quantification method using dual-energy material decomposition, particularly its ability to detect low-density calcium and microcalcifications. A simulation study was conducted comparing the dual-energy material decomposition technique against the established Agatston scoring method and the newer volume fraction calcium mass technique. Detection accuracy and calcium mass measurement were the primary evaluation metrics. The dual-energy material decomposition technique demonstrated fewer false negatives than both Agatston scoring and volume fraction calcium mass, indicating higher sensitivity. In low-density phantom measurements, material decomposition resulted in only 7.41% false-negative (CAC = 0) measurements compared to 83.95% for Agatston scoring. For high-density phantoms, false negatives were eliminated (0.0%) compared to 20.99% in Agatston scoring. The dual-energy material decomposition technique presents a more sensitive and reliable method for CAC quantification.
- Published
- 2024
35. Penning micro-trap for quantum computing
- Author
-
Jain, Shreyans, Sägesser, Tobias, Hrmo, Pavel, Torkzaban, Celeste, Stadler, Martin, Oswald, Robin, Axline, Chris, Bautista-Salvador, Amado, Ospelkaus, Christian, Kienzler, Daniel, and Home, Jonathan
- Abstract
Trapped ions in radio-frequency traps are among the leading approaches for realizing quantum computers, because of high-fidelity quantum gates and long coherence times [1–3]. However, the use of radio-frequencies presents several challenges to scaling, including requiring compatibility of chips with high voltages [4], managing power dissipation [5] and restricting transport and placement of ions [6]. Here we realize a micro-fabricated Penning ion trap that removes these restrictions by replacing the radio-frequency field with a 3 T magnetic field. We demonstrate full quantum control of an ion in this setting, as well as the ability to transport the ion arbitrarily in the trapping plane above the chip. This unique feature of the Penning micro-trap approach opens up a modification of the quantum charge-coupled device architecture with improved connectivity and flexibility, facilitating the realization of large-scale trapped-ion quantum computing, quantum simulation and quantum sensing.
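In a Penning trap the radial confinement comes from the static magnetic field, so the relevant motional scale is the cyclotron frequency f_c = qB/(2πm). A quick back-of-envelope check for the 3 T field quoted above (the ion species, ⁹Be⁺, is an assumption for illustration):

```python
import math

Q = 1.602176634e-19                     # elementary charge, C
M_BE9 = 9.0121831 * 1.66053906660e-27   # 9Be mass, kg (assumed species)
B = 3.0                                 # trap field from the abstract, T

# Unperturbed cyclotron frequency f_c = qB / (2*pi*m)
f_c = Q * B / (2 * math.pi * M_BE9)
print(f"{f_c/1e6:.2f} MHz")  # ~5.1 MHz
```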
- Published
- 2024
36. System identification of a physics-informed ship model for better predictions in wind conditions
- Author
-
Alexandersson, Martin, Mao, Wengang, Ringsberg, Jonas W., and Kjellberg, Martin
- Abstract
System identification offers ways to obtain proper models describing a ship’s dynamics in real operational conditions but poses significant challenges, such as the multicollinearity and generality of the identified model. This paper proposes a new physics-informed ship manoeuvring model, where a deterministic semi-empirical rudder model has been added, to guide the identification towards a physically correct hydrodynamic model. This is an essential building block to distinguish the hydrodynamic modelling uncertainties from wind, waves, and currents – in real sea conditions – which is particularly important for ships with wind-assisted propulsion. In the physics-informed manoeuvring modelling framework, a systematic procedure is developed to establish the various force/motion components within the manoeuvring system by inverse dynamics regression. Physical correctness is assessed on a novel test case, a wind-powered pure car carrier (wPCC). First, a reference model, assumed to resemble the physically correct kinetics, is established via parameter identification on virtual captive tests. Then, the model tests are used to build both the physics-informed model and a physics-uninformed mathematical model for comparison. All models predicted the zigzag tests with satisfactory agreement, so all can be considered mathematically correct. However, introducing a semi-empirical rudder model seems to have guided the identification towards a more physically correct calm-water hydrodynamic model, having lower multicollinearity and better generalization., The authors would like to acknowledge the financial support from Trafikverket/Lighthouse (grant id: FP4 2020) to prepare this paper. They would also like to thank all personnel at SSPA who have been involved in creating the model test results, building the ship models, and conducting the experiments.
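Inverse dynamics regression, as used above, amounts to regressing measured forces onto candidate hydrodynamic terms. A minimal sketch with a toy force model and invented feature names (not the paper's manoeuvring model):

```python
import numpy as np

# Toy inverse dynamics regression: given a "measured" sway force Y and
# candidate features [v, r, v*|v|], fit the hydrodynamic derivatives by
# ordinary least squares. All coefficients are illustrative.
rng = np.random.default_rng(0)
v = rng.uniform(-1, 1, 200)      # sway velocity samples
r = rng.uniform(-0.5, 0.5, 200)  # yaw rate samples
features = np.column_stack([v, r, v * np.abs(v)])
true_coeffs = np.array([-2.0, 0.8, -1.5])
Y = features @ true_coeffs + rng.normal(0, 0.01, 200)  # measured force

coeffs, *_ = np.linalg.lstsq(features, Y, rcond=None)
print(coeffs)  # close to the true hydrodynamic derivatives
```

Multicollinearity, mentioned in the abstract, shows up here when the feature columns are nearly linearly dependent, which inflates the variance of the fitted coefficients.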
- Published
- 2024
38. Solute interaction-driven and solvent interaction-driven liquid-liquid phase separation induced by molecular size difference
- Author
-
Iida, Yuya, Hiraide, Shotaro, Miyahara, Minoru T., and Watanabe, Satoshi
- Abstract
We conducted molecular dynamics (MD) simulations in a binary Lennard-Jones system as a model system for molecular solutions and investigated the mechanism of liquid-liquid phase separation (LLPS), which has recently been recognized as a fundamental step in crystallization and organelle formation. Our simulation results showed that LLPS behavior varied drastically with the size ratio of solute to solvent molecules. Interestingly, increasing the size ratio can either facilitate or inhibit LLPS, depending on the combination of interaction strengths. We demonstrated that the unique behavior observed in MD simulation could be reasonably explained by the free energy barrier height calculated using our thermodynamic model based on the classical nucleation theory. Our model proved that the molecular size determines the change in number of interaction pairs through LLPS. Varying the size ratio changes the net number of solute-solvent and solvent-solvent interaction pairs that are either broken or newly generated per solute-solute pair generation, thereby inducing a complicated trend in LLPS depending on the interaction parameters. As smaller molecules have more interaction pairs per unit volume, their contribution is more dominant in the promotion of LLPS. Consequently, as the size ratio of the solute to the solvent increased, the LLPS mode changed from solute-related interaction-driven to solvent-related interaction-driven.
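The free energy barrier invoked above follows the standard classical nucleation theory form: for a spherical droplet, ΔG(r) = 4πr²γ − (4/3)πr³Δg_v, maximized at r* = 2γ/Δg_v with barrier ΔG* = 16πγ³/(3Δg_v²). A short sketch in reduced (Lennard-Jones-style) units with illustrative numbers:

```python
import math

# Classical nucleation theory: surface cost vs volume gain for a
# spherical droplet. gamma = interfacial tension, dg_v = free energy
# gain per unit volume (both in illustrative reduced units).
def cnt_barrier(gamma, dg_v):
    r_star = 2.0 * gamma / dg_v                           # critical radius
    dg_star = 16.0 * math.pi * gamma**3 / (3.0 * dg_v**2) # barrier height
    return r_star, dg_star

r_star, dg_star = cnt_barrier(gamma=0.5, dg_v=0.8)
print(r_star, dg_star)
```

A larger driving force Δg_v (e.g. from stronger solute–solute interactions) lowers the barrier and promotes LLPS, which is the trade-off the thermodynamic model above quantifies.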
- Published
- 2024
39. From known to unknown unknowns through pattern-oriented modelling: Driving research towards the Medawar zone
- Author
-
Wang, Ming, Wang, H.-H., Koralewski, T.E., Grant, W.E., White, N., Hanan, J., and Grimm, Volker
- Abstract
The metaphor of the Medawar zone describes the relationship between the difficulty of a scientific problem and the potential payoff of solving it. This zone represents the realm where questions offer high benefits relative to the effort required to address them. By harnessing the power of mechanistic modelling, scientists can navigate towards this zone, moving beyond known unknowns to discover unknown unknowns. This requires models to be realistic and reliable. Model usefulness, impact, and predictive power can be enhanced by achieving intermediate model complexity, where the trade-off between the realism and tractability of a model is optimised. To achieve these goals, we use the pattern-oriented modelling strategy (POM) to direct research into the Medawar zone by steering model structure towards intermediate complexity. We illustrate this strategy with a detailed conceptual process. Using example models from agri-ecological systems, we demonstrate how intermediate complexity can be attained through POM, and how pattern-oriented models of intermediate complexity that reproduce multiple patterns can uncover both known unknowns and unknown unknowns, which ultimately advances our understanding of complex systems and facilitates groundbreaking discoveries. In addition, we discuss the multidimensionality of the Medawar zone in the context of modelling philosophy and highlight the challenges and imperatives for achieving coherence in the modelling discipline. We emphasize the need for collaboration between end-users and modellers and the adoption of systematic modelling strategies such as POM.
- Published
- 2024
40. Programmable metachronal motion of closely packed magnetic artificial cilia
- Author
-
Wang, Tongsheng, ul Islam, Tanveer, Steur, Erik, Homan, Tess, Aggarwal, Ishu, Onck, Patrick R., den Toonder, Jaap M.J., and Wang, Ye
- Abstract
Despite recent advances in artificial cilia technologies, the application of metachrony, which is the collective wavelike motion by cilia moving out-of-phase, has been severely hampered by difficulties in controlling closely packed artificial cilia at micrometer length scales. Moreover, there has been no direct experimental proof yet that a metachronal wave in combination with fully reciprocal ciliary motion can generate significant microfluidic flow on a micrometer scale, as theoretically predicted. In this study, using an in-house developed precise micro-molding technique, we have fabricated closely packed magnetic artificial cilia that can generate well-controlled metachronal waves. We studied the effect of pure metachrony on fluid flow by excluding all symmetry-breaking ciliary features. Experimental and simulation results prove that net fluid transport can be generated by metachronal motion alone, and that its effectiveness is strongly dependent on cilia spacing. This technique not only offers a biomimetic experimental platform to better understand the mechanisms underlying metachrony, but also opens new pathways towards advanced industrial applications.
- Published
- 2024
41. Estudio de la fiabilidad de test multirrespuesta con el método de Monte Carlo
- Author
-
Calaf Chica, José, and García Tárrago, María José
- Abstract
During the twentieth century many investigations were published about the reliability of multiple-choice tests for subject evaluation. In particular, many theoretical and empirical studies compare the different scoring methods applied in tests. A novel algorithm was designed to generate hypothetical examinees with three specific characteristics: real knowledge, level of cautiousness, and erroneous knowledge. The first establishes the probability of knowing the veracity or falsity of each answer choice in a multiple-choice test. The cautiousness level gives the probability of answering an unknown question by guessing. Finally, erroneous knowledge is false knowledge assimilated as true. The test setup needed by the algorithm includes the test length, the number of choices per question, and the scoring system. The algorithm administers tests to these hypothetical examinees, analysing the deviation between real knowledge and estimated knowledge (the test score). The most popular scoring methods (positive marking, negative marking, free-choice tests, and the dual response method) were analysed and compared to measure their reliability. To validate the algorithm, it was compared with an analytical probabilistic model. This investigation verified that the presence of erroneous knowledge generates an important alteration in the reliability of the scoring methods most accepted by the educational community (negative marking). 
Given the impossibility of ascertaining the existence of erroneous knowledge in examinees using a test, the examiner can either penalize its presence with negative marking or seek a closer estimate of real knowledge with positive marking.
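The virtual-examinee idea above can be sketched as a short Monte Carlo simulation (parameter names and values are illustrative, and erroneous knowledge is omitted for brevity): each examinee knows an answer with some probability and otherwise may guess, and the scoring rule determines how well the final score estimates real knowledge.

```python
import random

random.seed(1)

# Each examinee knows a question's answer with probability `knowledge`;
# when it is unknown they guess with probability `cautiousness`.
# Negative marking applies the standard guessing penalty 1/(c-1).
def simulate(knowledge, cautiousness, n_questions=1000, n_choices=4,
             negative_marking=True):
    score = 0.0
    for _ in range(n_questions):
        if random.random() < knowledge:
            score += 1                        # known -> correct
        elif random.random() < cautiousness:
            if random.random() < 1 / n_choices:
                score += 1                    # lucky guess
            elif negative_marking:
                score -= 1 / (n_choices - 1)  # guessing penalty
    return score / n_questions                # estimated knowledge

# With the penalty, random guessing has zero expected value, so the
# score tracks real knowledge; without it, guessing inflates scores.
print(simulate(0.6, 0.9, negative_marking=True))
print(simulate(0.6, 0.9, negative_marking=False))
```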
- Published
- 2024
42. Can Confirmation Bias Improve Group Learning?
- Author
-
Gabriel, Nathan, and O'Connor, Cailin
- Abstract
Confirmation bias has been widely studied for its role in failures of reasoning. Individuals exhibiting confirmation bias fail to engage with information that contradicts their current beliefs, and, as a result, can fail to abandon inaccurate beliefs. But although most investigations of confirmation bias focus on individual learning, human knowledge is typically developed within a social structure. We use network models to show that moderate confirmation bias often improves group learning. However, a downside is that a stronger form of confirmation bias can hurt the knowledge producing capacity of the community.
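A minimal sketch of the network-learning setup described above, under loose assumptions (a Bala-Goyal-style bandit with Beta beliefs, and a simple bias rule where agents discard evidence contradicting their current belief with probability `bias`); this is an illustration of the modelling style, not the authors' exact model.

```python
import random

random.seed(2)

# Agents test an uncertain action (true success rate 0.6 vs a 0.5
# default), share results with everyone, and update Beta beliefs.
def run(n_agents=10, rounds=100, true_rate=0.6, trials=10, bias=0.3):
    a = [1.0] * n_agents   # Beta(a, b) prior for each agent
    b = [1.0] * n_agents
    for _ in range(rounds):
        reports = []
        for i in range(n_agents):
            if a[i] / (a[i] + b[i]) >= 0.5:   # believers experiment
                succ = sum(random.random() < true_rate for _ in range(trials))
                reports.append(succ)
        for i in range(n_agents):
            mine = a[i] / (a[i] + b[i])
            for succ in reports:
                contradicts = (succ / trials > 0.5) != (mine > 0.5)
                if contradicts and random.random() < bias:
                    continue                   # biased agent ignores it
                a[i] += succ
                b[i] += trials - succ
    return [a[i] / (a[i] + b[i]) for i in range(n_agents)]

beliefs = run()
print(sum(p > 0.5 for p in beliefs), "of 10 agents ended on the better arm")
```

Varying `bias` in such a sketch is one way to probe the paper's claim that moderate confirmation bias can help the group while strong bias hurts it.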
- Published
- 2024
43. NSF DARE-Transforming modeling in neurorehabilitation: Four threads for catalyzing progress.
- Author
-
Valero-Cuevas, Francisco, Finley, James, Orsborn, Amy, Fung, Natalie, Hicks, Jennifer, Huang, He, Reinkensmeyer, David, Schweighofer, Nicolas, Weber, Douglas, and Steele, Katherine
- Abstract
We present an overview of the Conference on Transformative Opportunities for Modeling in Neurorehabilitation held in March 2023. It was supported by the Disability and Rehabilitation Engineering (DARE) program from the National Science Foundation's Engineering Biology and Health Cluster. The conference brought together experts and trainees from around the world to discuss critical questions, challenges, and opportunities at the intersection of computational modeling and neurorehabilitation to understand, optimize, and improve clinical translation of neurorehabilitation. We organized the conference around four key, relevant, and promising Focus Areas for modeling: Adaptation & Plasticity, Personalization, Human-Device Interactions, and Modeling In-the-Wild. We identified four common threads across the Focus Areas that, if addressed, can catalyze progress in the short, medium, and long terms. These were: (i) the need to capture and curate appropriate and useful data necessary to develop, validate, and deploy useful computational models; (ii) the need to create multi-scale models that span the personalization spectrum from individuals to populations, and from cellular to behavioral levels; (iii) the need for algorithms that extract as much information as possible from available data, while requiring as little data as possible from each client; and (iv) the insistence on leveraging readily available sensors and data systems to push model-driven treatments from the lab into the clinic, home, workplace, and community. The conference archive can be found at (dare2023.usc.edu). These topics are also extended by three perspective papers prepared by trainees and junior faculty, clinician researchers, and federal funding agency representatives who attended the conference.
- Published
- 2024
44. Simulation of neural activation and electrically evoked compound action potentials during combined cochlear-vestibular stimulation
- Author
-
Vey, Björn Michael
- Abstract
A modeling workflow for simulating combined cochlear-vestibular stimulation was developed, based on realistic human inner ear anatomy. This extends our group's previous computer models, which focused on analyzing vestibular implant stimulation scenarios. Naturally distributed nerve fiber trajectories were generated for the ampullary nerves, macula organs, cochlea, facial nerve, and inner auditory canal. The nerve modeling in the simulation framework was expanded to include myelinated cochlear nerve fibers. Using the finite element method, the extracellular electrical fields resulting from electrode contact stimulation were computed. Various scenarios involving electrodes in both the vestibular system and the cochlea were tested, evaluating selective stimulation of targeted nerve branches. Additionally, electrically evoked compound action potentials (ECAPs) were computed to simulate the measurement of the neural response with measuring electrodes. The time-dependent transmembrane currents from stimulated nerve fibers were treated as distributed current sources, and the resulting electrical fields were evaluated at the positions of the sensing electrodes. The effectiveness of the framework was evaluated through the examination of individual and combined stimulation scenarios, along with a comparison of selective nerve stimulation within the inner ear branches. The simulation results indicate that combined stimulation of the vestibular and cochlear nerve branches has a significant impact, with noticeable potential crosstalk, especially during simultaneous stimulation of both organs at high amplitudes. Furthermore, the obtained ECAPs from realistic stimulation scenarios aligned closely with existing literature data. The presented model facilitates the exploration of the influence of combined cochlear-vestibular stimulation on both the cochlea and the vestibular system, aiding the understanding of their interactions. This exploration aims to enhance our comprehension of how electrode pla…
Master's thesis, Universität Innsbruck, 2024
- Published
- 2024
45. Mechanistic Approaches to Detect, Target, and Ablate the Drivers of Atrial Fibrillation
- Author
-
Quintanilla, Jorge G., Pérez Villacastín Domínguez, Julián, Pérez Castellano, Nicasio, Pandit, Sandeep V., Berenfeld, Omer, Jalife, José, and Filgueiras Rama, David
- Abstract
Funding: European Regional Development Fund; Instituto de Salud Carlos III. Departamento de Medicina, Facultad de Medicina.
- Published
- 2024
46. Sparse Firing in a Hybrid Central Pattern Generator for Spinal Motor Circuits
- Author
-
Strohmer, Beck, Najarro, Elias, Ausborn, Jessica, Berg, Rune W., and Tolu, Silvia
- Abstract
Central pattern generators are circuits generating rhythmic movements, such as walking. The majority of existing computational models of these circuits produce antagonistic output where all neurons within a population spike with a broad burst at about the same neuronal phase with respect to network output. However, experimental recordings reveal that many neurons within these circuits fire sparsely, sometimes as rarely as once within a cycle. Here we address the sparse neuronal firing and develop a model to replicate the behavior of individual neurons within rhythm-generating populations to increase biological plausibility and facilitate new insights into the underlying mechanisms of rhythm generation. The developed network architecture is able to produce sparse firing of individual neurons, creating a novel implementation for exploring the contribution of network architecture on rhythmic output. Furthermore, the introduction of sparse firing of individual neurons within the rhythm-generating circuits is one of the factors that allows for a broad neuronal phase representation of firing at the population level. This moves the model toward recent experimental findings of evenly distributed neuronal firing across phases among individual spinal neurons. The network is tested by methodically iterating select parameters to gain an understanding of how connectivity and the interplay of excitation and inhibition influence the output. This knowledge can be applied in future studies to implement a biologically plausible rhythm-generating circuit for testing biological hypotheses.
- Published
- 2024
47. A dynamic fitting strategy for physiological models: a case study of a cardiorespiratory model for the simulation of incremental aerobic exercise
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial, Universitat Politècnica de Catalunya. BIOART - BIOsignal Analysis for Rehabilitation and Therapy, Sarmiento Pérez, Carlos Andrés, Hernández Valdivieso, Alher Mauricio, Mañanas Villanueva, Miguel Ángel, and Serna Higuita, Leidy Yanet
- Abstract
Using mathematical models of physiological systems in medicine has allowed for the development of diagnostic, treatment, and medical educational tools. However, their complexity restricts, in most cases, their application for predictive, preventive, and personalized purposes. Although there are strategies that reduce the complexity of applying models based on fitting techniques, most of them are focused on a single instant of time, neglecting the effect of the system’s temporal evolution. The objective of this research was to introduce a dynamic fitting strategy for physiological models with an extensive array of parameters and a constrained amount of experimental data. The proposed strategy focused on obtaining better predictions based on the temporal trends in the system’s parameters and being capable of predicting future states. The study utilized a cardiorespiratory model as a case study. Experimental data from a longitudinal study of healthy adult subjects undergoing aerobic exercise were used for fitting and validation. The model predictions obtained in a steady state using the proposed strategy and the traditional single-fit approach were compared. The most successful outcomes were primarily linked to the proposed strategy, exhibiting better overall results regarding accuracy and behavior than the traditional population fitting approach at a single instant in time. The results evidenced the usefulness of the dynamic fitting strategy, highlighting its use for predictive, preventive, and personalized applications.
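The dynamic-fitting idea above, fitting a parameter in successive time windows and using its temporal trend to predict future states, can be sketched on a toy model (the model, the drift, and all values are illustrative, not the paper's cardiorespiratory model):

```python
import numpy as np

# Toy system y(t) = theta(t) * sin(0.3 t) + noise, with theta drifting
# slowly. A single static fit would miss the drift; fitting theta per
# window and extrapolating its linear trend predicts future states.
rng = np.random.default_rng(3)
t = np.arange(100, dtype=float)
theta_true = 2.0 + 0.05 * t                  # slowly drifting parameter
y = theta_true * np.sin(0.3 * t) + rng.normal(0, 0.05, t.size)

window = 20
centers, thetas = [], []
for start in range(0, 80, window):
    sl = slice(start, start + window)
    x = np.sin(0.3 * t[sl])
    thetas.append(np.dot(x, y[sl]) / np.dot(x, x))  # per-window LS fit
    centers.append(t[sl].mean())

# The fitted parameter's linear trend extrapolates to a future time.
slope, intercept = np.polyfit(centers, thetas, 1)
theta_pred = slope * 90 + intercept
print(theta_pred)  # close to the true value 2.0 + 0.05*90 = 6.5
```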
- Published
- 2024
48. Estudi i calibració del model hidràulic d'una gran xarxa de distribució
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria de Sistemes, Automàtica i Informàtica Industrial, Pérez Magrané, Ramon, and Verjano Ruano, Daniel
- Abstract
The main objective of the project is to obtain a simulation that allows the study of the water distribution network of Tarragona. This simulation is achieved using data provided by the Consorci d'Aigües de Tarragona. The data are divided into sections, and the aim is to unify these sections by means of hydraulic pumping to create a unified model of the network. This involves using EPANET to define the hydraulic model and run the simulations, and programming in RStudio to load the data, execute the model, and extract the results of the hydraulic model. The methodology consists of opening the EPANET .inp files with a text editor such as Notepad, adding the necessary elements in each section, adjusting the pumps to achieve a flow rate similar to that of the separate models, and evaluating the results. This evaluation is carried out by comparing the plots of the separate model and of the unified model, allowing a thorough analysis of the operation of the water distribution network. To achieve the goal of unifying the sections, a solution can be implemented in EPANET by adding a high-power pump and assigning it a curve by trial and error. In addition, a PRV (Pressure Reducing Valve) must be added to guarantee that the flow rate is the same throughout the network. With this combination of elements, the objective of the work is achieved.
- Published
- 2024
49. Project of thermal modelling for transformers
- Author
-
Universitat Politècnica de Catalunya. Departament d'Enginyeria Elèctrica, Riba Ruiz, Jordi-Roger, García Espinosa, Antonio, and Farrús Tena, Marc
- Abstract
This Master’s thesis, titled "Project of Thermal Modelling Transformers", presents a comprehensive study of thermal phenomena in air transformers with an emphasis on developing a reliable predictive model for hotspot temperatures. With the objective of devising static models, this research employs a multi-faceted approach combining mathematical analysis with thermal resistance networks. The thesis begins by grounding the reader in the fundamentals of heat transfer, exploring the mechanisms of conduction, convection, and radiation, and assessing their implications in transformer design. The thermal models account for critical factors such as power losses, heat capacities, and the nature of transformer operations. Experimental validations are conducted using different transformer topologies, offering insights into the model's efficacy. Ultimately, the study culminates in a validated thermal model that integrates into a broader PhD work, with the potential to optimize thermal management in electrical devices. The models exhibit a robust correlation with empirical data, reinforcing their utility in the thermal assessment of air transformers and their design optimization.
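A thermal resistance network, as used above, treats each heat path as a resistance (K/W) and losses as a heat source, so temperatures follow an Ohm's-law analogy: ΔT = P·R_th. A minimal static sketch with invented values (not the thesis's fitted parameters):

```python
# Winding hotspot estimate via a series/parallel thermal network.
# All numbers below are assumed, illustrative values.
P_LOSS = 150.0   # W, total winding + core losses
R_COND = 0.05    # K/W, winding -> surface conduction
R_CONV = 0.30    # K/W, surface -> ambient convection
R_RAD = 0.90     # K/W, surface -> ambient radiation
T_AMB = 25.0     # deg C

# Convection and radiation act in parallel from the surface to ambient.
r_surface_air = 1.0 / (1.0 / R_CONV + 1.0 / R_RAD)
t_hotspot = T_AMB + P_LOSS * (R_COND + r_surface_air)
print(f"hotspot ~ {t_hotspot:.1f} degC")
```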
- Published
- 2024
50. Lighter and faster simulations on domains with symmetries
- Author
-
Universitat Politècnica de Catalunya. Centre Tecnològic de la Transferència de Calor, Universitat Politècnica de Catalunya. Departament de Màquines i Motors Tèrmics, Universitat Politècnica de Catalunya. CTTC - Centre Tecnològic de Transferència de Calor, Alsalti Baldellou, Àdel, Álvarez Farré, Xavier, Colomer Rey, Guillem, Gorobets, Andrei, Pérez Segarra, Carlos David, Oliva Llena, Asensio, and Trias Miquel, Francesc Xavier
- Abstract
A strategy to improve the performance and reduce the memory footprint of simulations on meshes with spatial reflection symmetries is presented in this work. By using an appropriate mirrored ordering of the unknowns, discrete partial differential operators are represented by matrices with a regular block structure that allows replacing the standard sparse matrix–vector product with a specialised version of the sparse matrix-matrix product, which has a significantly higher arithmetic intensity. Consequently, matrix multiplications are accelerated, whereas their memory footprint is reduced, making massive simulations more affordable. As an example of practical application, we consider the numerical simulation of turbulent incompressible flows using a low-dissipation discretisation on unstructured collocated grids. All the required matrices are classified into three sparsity patterns that correspond to the discrete Laplacian, gradient, and divergence operators. Therefore, the above-mentioned benefits of exploiting spatial reflection symmetries are tested for these three matrices on both CPU and GPU, showing up to 5.0x speed-ups and 8.0x memory savings. Finally, a roofline performance analysis of the symmetry-aware sparse matrix–vector product is presented., A.A.B., X.A.F., G.C., C.D.P.S., A.O. and F.X.T. have been financially supported by two competitive R+D projects: RETOtwin (PDC2021120970-I00), given by MCIN/AEI/10.13039/501100011033 and European Union Next GenerationEU/PRTR, and FusionCAT (001P-001722), given by Generalitat de Catalunya RIS3CAT-FEDER. A.A.B. has also been supported by the predoctoral grants DIN2018-010061 and 2019-DI-90, given by MCIN/AEI/10.13039/501100011033 and the Catalan Agency for Management of University and Research Grants (AGAUR), respectively. The numerical experiments have been conducted on the Marenostrum4 supercomputer at the Barcelona Supercomputing Center under the project IM-2022-3-0026. 
The authors gratefully acknowledge these institutions.
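The symmetry trick described above can be sketched in a few lines (dense NumPy here for clarity; the paper works with sparse operators): with a mirrored ordering of the unknowns, the operator has the block form M = [[A, B], [B, A]], so only the half-size blocks [A B] need storing, and the product M·x becomes one matrix-matrix product with a 2-column right-hand side, which is what raises the arithmetic intensity.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n))   # "direct" half-domain coupling
B = rng.standard_normal((n, n))   # coupling across the symmetry plane
M = np.block([[A, B], [B, A]])    # full operator (never needed explicitly)
x = rng.standard_normal(2 * n)

x1, x2 = x[:n], x[n:]
# [A B] @ [[x1, x2], [x2, x1]] = [A x1 + B x2, A x2 + B x1] = [y1, y2]
Y = np.hstack([A, B]) @ np.column_stack([np.concatenate([x1, x2]),
                                         np.concatenate([x2, x1])])
y_sym = np.concatenate([Y[:, 0], Y[:, 1]])
assert np.allclose(y_sym, M @ x)  # matches the full operator product
```

Only half the matrix entries are stored, and the two half-size products are fused into one, mirroring the memory savings and speed-ups reported above.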
- Published
- 2024