1,169 results for "python"
Search Results
2. Python workflow for the selection and identification of marker peptides—proof-of-principle study with heated milk.
- Author
Kuhnen, Gesine, Class, Lisa-Carina, Badekow, Svenja, Hanisch, Kim Lara, Rohn, Sascha, and Kuballa, Jürgen
- Abstract
The analysis of almost holistic food profiles has developed considerably over the last years. This has also led to larger amounts of data and the ability to obtain more information about health-beneficial and adverse constituents in food than ever before. Especially in the field of proteomics, software is used for evaluation, but such tools do not provide specific approaches for unique monitoring questions. An additional and more comprehensive evaluation can be performed with the programming language Python. Its large ecosystem offers broad possibilities for mass spectrometric data analysis, but it needs to be tailored to specific sets of features and the research questions behind them. It also offers the applicability of various machine-learning approaches. The aim of the present study was to develop an algorithm for selecting and identifying potential marker peptides from mass spectrometric data. The workflow is divided into three steps: (I) feature engineering, (II) chemometric data analysis, and (III) feature identification. The first step is the transformation of the mass spectrometric data into a structure that enables the application of existing data analysis packages in Python. The second step is the data analysis for selecting single features. These features are further processed in the third step, the feature identification. The data used as an example in this proof-of-principle approach came from a study on the influence of a heat treatment on the milk proteome/peptidome. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
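The three-step workflow described in this abstract can be sketched in miniature. Everything below (the synthetic intensity matrix, the Welch's t-test selection, the p < 0.01 threshold) is an illustrative assumption, not the paper's actual pipeline:

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Step I: feature engineering -- a synthetic "intensity matrix" standing in
# for mass spectrometric features (rows: samples, columns: peptide features).
raw = rng.normal(10.0, 1.0, size=(12, 50))
raw[6:, 3] += 5.0          # feature 3 shifts after the (simulated) heat treatment
data = pd.DataFrame(raw, columns=[f"mz_{i}" for i in range(50)])
data["treated"] = [0] * 6 + [1] * 6

# Step II: chemometric analysis -- select features that differ significantly
# between untreated and heat-treated samples (Welch's t-test, p < 0.01).
untreated = data[data["treated"] == 0].drop(columns="treated")
treated = data[data["treated"] == 1].drop(columns="treated")
pvals = {col: ttest_ind(untreated[col], treated[col], equal_var=False).pvalue
         for col in untreated.columns}
markers = [col for col, p in pvals.items() if p < 0.01]

# Step III: the selected features would next be matched against peptide
# databases for identification; here we just report them.
print(markers)
```

In the real workflow, step III would match the selected m/z features against peptide sequence databases rather than merely printing them.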
3. PyRETIS 3: Conquering rare and slow events without boundaries.
- Author
Vervust, Wouter, Zhang, Daniel T., Ghysels, An, Roet, Sander, van Erp, Titus S., and Riccardi, Enrico
- Subjects
- METASTABLE states, MODULAR construction, MACHINE learning, PYTHON programming language, ALGORITHMS, BIOCHEMICAL substrates
- Abstract
We present and discuss the advancements made in PyRETIS 3, the third instalment of our Python library for efficient and user‐friendly rare-event simulation, focused on executing molecular simulations with replica exchange transition interface sampling (RETIS) and its variations. Apart from a general rewiring of the internal code towards a more modular structure, several recently developed sampling strategies have been implemented. These include new Monte Carlo moves to increase path decorrelation and convergence rate, and new ensemble definitions to handle the challenges of long‐lived metastable states and transitions with unbounded reactant and product states. Additionally, the post‐analysis software PyVisa is now embedded in the main code, allowing fast use of machine‐learning algorithms for clustering and visualising collective variables in the simulation data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Comparative study of kinship network community detection approaches.
- Author
Joram, Arun and Singh, Karam Ratan
- Abstract
This study employs five community detection methods on a rooted tree network with 600 nodes and 599 edges based on the Galo tribe’s kin naming system in Arunachal Pradesh, India. The network was originally assumed to be made up of three communities, which the algorithms were able to further divide into smaller groupings. The Louvain approach produced the most balanced distribution of community sizes, with skewness and kurtosis values close to zero, implying that the detected communities were reasonably evenly distributed in size and without major outliers. However, the Louvain algorithm found 25 communities, which is more than the network’s previously reported three communities. Further investigation may be required to integrate some of these communities in order to obtain the original known communities. Overall, this study highlights the importance of selecting an appropriate community detection algorithm for a given network and research question. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
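Louvain community detection, one of the five methods compared in this study, is available in networkx; the balanced tree below is just a small stand-in for the 600-node kinship network:

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# A small rooted tree standing in for the kinship network (the study's
# network has 600 nodes and 599 edges; this balanced tree is illustrative).
G = nx.balanced_tree(r=3, h=4)   # 121 nodes, 120 edges

# Louvain modularity maximization; a fixed seed makes the run reproducible.
communities = louvain_communities(G, seed=42)

sizes = sorted(len(c) for c in communities)
print(len(communities), sizes)
```

As the study found for the kinship network, Louvain typically splits a tree into more, smaller communities than one might assume a priori.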
5. COVID-19 Research Trends in Social Work: LDA Topic Modeling Analysis in South Korea.
- Author
Park, TaeJeong
- Abstract
This study utilizes LDA topic modeling to examine research trends related to COVID-19 within the field of social work, analyzing these trends through peer-reviewed articles from the Korea Citation Index database. Latent Dirichlet Allocation (LDA) topic modeling, a statistical method for discovering abstract topics within a collection of documents, is applied to categorize and summarize the thematic concentration of the literature. Five themes have emerged: healthcare service and digitalized methods, exploring mental health status, qualitative approaches to social service program responses to COVID-19, evaluation of care services, and public service and program responses to COVID-19. Key findings reveal that the Korean social work academia focused on digital-based non-face-to-face services, evaluating the adequacy of public social services, and analyzing mental health and caregiving services during the pandemic. These findings indicate a reassessment of social work practices in response to the pandemic, underscoring the need to explore the challenges and opportunities presented by varied national responses. Additionally, the application of Python and machine learning in this research has shown significant benefits for social work studies, enabling a deeper analysis of complex big data and facilitating more informed decision-making. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
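A minimal LDA topic-modeling pass of the kind the study applies can be sketched with scikit-learn; the tiny corpus and the two-topic setting below are invented for illustration, standing in for the Korea Citation Index articles:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny illustrative corpus standing in for the study's article texts.
docs = [
    "telehealth service digital platform mental health support",
    "mental health anxiety depression pandemic isolation",
    "care service evaluation elderly welfare program",
    "digital service online counseling platform access",
    "public program policy response community welfare",
    "qualitative interview social service workers pandemic",
]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# transform() yields each document's topic proportions (rows sum to 1);
# lda.components_ holds each topic's word weights for labeling themes.
doc_topics = lda.transform(counts)
print(doc_topics.shape)
```

In practice, the number of topics (five in the study) is chosen by coherence or perplexity diagnostics rather than fixed in advance.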
6. SEAOP: a statistical ensemble approach for outlier detection in quantitative proteomics data.
- Author
Huang, Jinze, Zhao, Yang, Meng, Bo, Lu, Ao, Wei, Yaoguang, Dong, Lianhua, Fang, Xiang, An, Dong, and Dai, Xinhua
- Abstract
Quality control in quantitative proteomics is a persistent challenge, particularly in identifying and managing outliers. Unsupervised learning models, which rely on data structure rather than predefined labels, offer potential solutions. However, without clear labels, their effectiveness might be compromised. Single models are susceptible to the randomness of parameters and initialization, which can result in a high rate of false positives. Ensemble models, on the other hand, have shown capabilities in effectively mitigating the impacts of such randomness and assisting in accurately detecting true outliers. Therefore, we introduced SEAOP, a Python toolbox that utilizes an ensemble mechanism by integrating multi-round data management and a statistics-based decision pipeline with multiple models. Specifically, SEAOP uses multi-round resampling to create diverse sub-data spaces and employs outlier detection methods to identify candidate outliers in each space. Candidates are then aggregated as confirmed outliers via a chi-square test, adhering to a 95% confidence level, to ensure the precision of the unsupervised approaches. Additionally, SEAOP introduces a visualization strategy, specifically designed to intuitively and effectively display the distribution of both outlier and non-outlier samples. Optimal hyperparameter models of SEAOP for outlier detection were identified by using a gradient-simulated standard dataset and Mann–Kendall trend test. The performance of the SEAOP toolbox was evaluated using three experimental datasets, confirming its reliability and accuracy in handling quantitative proteomics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
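The resample-flag-aggregate mechanism the abstract describes can be caricatured with a single detector. SEAOP itself integrates multiple models and a more careful statistics-based decision pipeline, so treat the sub-space sampling, the isolation forest, and the chi-square threshold below as simplifying assumptions:

```python
import numpy as np
from scipy.stats import chisquare
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
X = rng.normal(0, 1, size=(60, 20))
X[0] += 6.0                      # plant one clear outlier sample

# Multi-round resampling: each round scores the samples in a random
# feature sub-space and accumulates outlier flags.
n_rounds = 20
flags = np.zeros(len(X))
for r in range(n_rounds):
    cols = rng.choice(20, size=10, replace=False)
    pred = IsolationForest(random_state=r).fit_predict(X[:, cols])
    flags += (pred == -1)

# Aggregate: confirm a candidate when it is flagged significantly more
# often than the average flag rate (chi-square goodness of fit, alpha=0.05).
expected = flags.mean()
confirmed = [i for i, f in enumerate(flags)
             if f > expected and
             chisquare([f, n_rounds - f],
                       [expected, n_rounds - expected]).pvalue < 0.05]
print(confirmed)
```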
7. Lightworks, a scientific research framework for the design of stiffened composite-panel structures using gradient-based optimization.
- Author
Dähne, Sascha, Werthen, Edgar, Zerbst, David, Tönjes, Lennart, Traub, Hendrik, and Hühne, Christian
- Subjects
- COMMERCIAL aeronautics, MODULAR design, STRUCTURAL optimization, THIN-walled structures, STIFFNERS, FIBROUS composites
- Abstract
Efficient structural optimization remains integral in advancing lightweight structures, particularly concerning the mitigation of environmental impact in air transportation systems. Varying levels of detail prove useful for different applications and design phases. The lightworks framework presents a modular approach for the consideration of individual design parameterizations and structural solvers in the numerical optimization of thin-walled structures. The framework combines lightweight fibre composite design with the incorporation of stiffeners in a gradient-based optimization process. To this end, an analytical stiffener formulation is implemented in combination with different continuous composite material parameterizations. This approach allows the analysis of local buckling modes, as well as the consideration of load redistribution between stringer and skin. The flexibility achieved in this way allows a tailored configuration of the optimization problem to the required level of complexity. A verification of the framework's implementation is carried out using established literature results for a simplified unstiffened wing box structure, where very good agreement is shown. The accessibility of solvers with different fidelity through a generic solver interface is demonstrated. Furthermore, the usage of the implemented continuous composite parameterizations as design variables is compared in terms of computational performance and mass, revealing different advantages and disadvantages. Finally, introducing stringers into the wing box use case demonstrates a 38% mass reduction, showcasing the potential of the inline optimization of stiffeners. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. A Python Framework for Neutrosophic Sets and Mappings.
- Author
Nordo, Giorgio, Jafari, Saeid, Mehmood, Arif, and Basumatary, Bhimraj
- Subjects
- PYTHON programming language, COMPUTER software
- Abstract
In this paper we present an open source framework developed in Python and consisting of three distinct classes designed to manipulate in a simple and intuitive way both symbolic representations of neutrosophic sets over universes of various types as well as mappings between them. The capabilities offered by this framework extend and generalize previous attempts to provide software solutions to the manipulation of neutrosophic sets such as those proposed by Salama et al. [21], Saranya et al. [23], El-Ghareeb [7], Topal et al. [29] and Sleem [26]. The code is described in detail and many examples and use cases are also provided. [ABSTRACT FROM AUTHOR]
- Published
- 2024
9. Paste, aggregate, or air? That is the question.
- Author
Ossetchkina, Ekaterina, Chernoloz, Oleksiy, Bromerchenkel, Lucas Herzog, Karim, Mahzabin, MacHale, Liam, Montgomery, Amy, Hu, Yuqi, and Peterson, Karl
- Subjects
- BORDER crossing, DIGITAL image processing, AIR-entrained concrete, FREEZE-thaw cycles, ARTIFICIAL intelligence, AIR quality, CABLE-stayed bridges, SOIL freezing
- Abstract
The Ambassador Bridge between Detroit, Michigan, and Windsor, Ontario, has served for almost 100 years as North America's busiest international border crossing. But in 2025, the Ambassador will be replaced by the new Gordie Howe International Bridge. The Gordie Howe is a cable‐stayed bridge, with two massive 220 m tall concrete piers on opposite banks of the St. Clair River, a single clear span of 853 m, and 42 m of clearance over this busy waterway. To ensure durability in this harsh freeze‐thaw environment, air‐entrained concrete is specified throughout. And, to ensure the quality of air entrainment, the ASTM C 457 Procedure C, Contrast Enhanced Method is employed. While a similar automated microscopic approach has been in use for well over a decade according to EN 480‐11 Determination of air void characteristics in hardened concrete, this is the first large‐scale application of automated air void assessment in North American infrastructure. According to the ASTM Procedure C, the air void characteristics are determined through digital image processing, while the paste content may be determined by either mix design parameters, manual point count, or 'other means'. Of these three options, point counting is used for Gordie Howe; but in parallel, during each point count, the digital image coordinates and phase identifications for each evaluated stop are recorded. This allows for training of a neural network, for automated determination of paste content, as demonstrated here. LAY DESCRIPTION: The Ambassador Bridge, which connects Detroit, Michigan, to Windsor, Ontario, has been the most heavily used border crossing in North America for nearly a century. However, it is set to be replaced in 2025 by the new Gordie Howe International Bridge. This modern bridge will feature two tall concrete towers and a long span over the river, designed to withstand the freezing and thawing cycles that occur from the climate in the region.
To ensure longevity, the concrete contains dispersed microscopic air bubbles (air‐entrained concrete) that help to dissipate pressures generated during freeze‐thaw. The size distribution and spatial distribution of these air bubbles are routinely checked by trained technicians according to standard procedures, a tedious and time‐consuming process. Artificial intelligence has the potential to streamline these measurements, and its utility is explored here, showing promise for making future evaluations quicker and more accurate. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. thebeat: A Python package for working with rhythms and other temporal sequences.
- Author
van der Werff, J., Ravignani, Andrea, and Jadoul, Yannick
- Abstract
thebeat is a Python package for working with temporal sequences and rhythms in the behavioral and cognitive sciences, as well as in bioacoustics. It provides functionality for creating experimental stimuli, and for visualizing and analyzing temporal data. Sequences, sounds, and experimental trials can be generated using single lines of code. thebeat contains functions for calculating common rhythmic measures, such as interval ratios, and for producing plots, such as circular histograms. thebeat saves researchers time when creating experiments, and provides the first steps in collecting widely accepted methods for use in timing research. thebeat is an open-source, on-going, and collaborative project, and can be extended for use in specialized subfields. thebeat integrates easily with the existing Python ecosystem, allowing one to combine our tested code with custom-made scripts. The package was specifically designed to be useful for both skilled and novice programmers. thebeat provides a foundation for working with temporal sequences onto which additional functionality can be built. This combination of specificity and plasticity should facilitate research in multiple research contexts and fields of study. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
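thebeat's own API is not reproduced here, but one of the rhythmic measures it mentions, interval ratios, takes only a few lines of plain Python (the onset times are assumed values for illustration):

```python
# Onset times (seconds) of a toy rhythmic sequence.
onsets = [0.0, 0.5, 1.0, 1.75, 2.25]

# Inter-onset intervals, and the interval ratios r_k = I_k / (I_k + I_{k+1}),
# a common measure in timing research (0.5 means two equal intervals).
intervals = [b - a for a, b in zip(onsets, onsets[1:])]
ratios = [i1 / (i1 + i2) for i1, i2 in zip(intervals, intervals[1:])]
print(intervals, ratios)
```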
11. Develop an Interactive Python Dashboard for Analyzing EZproxy Logs.
- Author
Huff, Andy, Roth, Matthew, and Weiling Liu
- Subjects
- PYTHON programming language, DASHBOARDS (Management information systems), ELECTRONIC information resources management, INFORMATION resources management, DATA visualization
- Abstract
This paper describes the development of an interactive dashboard in Python using EZproxy log data. It is hoped that this dashboard will help improve the evidence-based decision-making process in electronic resources management and help explore the impact of library use. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
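A dashboard like this typically starts from parsed log lines. The space-delimited format below is a simplified assumption loosely modeled on proxy logs, not the actual EZproxy configuration; it shows the kind of per-user aggregation a dashboard panel would chart:

```python
import io
import pandas as pd

# A few log lines in a simplified, assumed format:
# ip session username [date/time] "request" status bytes
log = io.StringIO(
    '10.0.0.1 s1 alice [01/Feb/2024:10:00:00] "GET /login HTTP/1.1" 200 512\n'
    '10.0.0.2 s2 bob [01/Feb/2024:10:05:00] "GET /db/jstor HTTP/1.1" 200 2048\n'
    '10.0.0.1 s1 alice [01/Feb/2024:10:06:00] "GET /db/jstor HTTP/1.1" 200 1024\n'
)
cols = ["ip", "session", "user", "when", "request", "status", "bytes"]
df = pd.read_csv(log, sep=" ", names=cols)  # quoted requests stay one field

# Requests per user -- the kind of aggregate a dashboard panel would chart.
per_user = df.groupby("user").size().to_dict()
print(per_user)
```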
12. Exploring the Entropy-Based Classification of Time Series Using Visibility Graphs from Chaotic Maps.
- Author
Conejero, J. Alberto, Velichko, Andrei, Garibo-i-Orts, Òscar, Izotov, Yuriy, and Pham, Viet-Thanh
- Subjects
- TIME series analysis, TIME management, BRAIN-computer interfaces, NAIVE Bayes classification, CLASSIFICATION, SCIENTIFIC community, MACHINE learning, ENTROPY
- Abstract
The classification of time series using machine learning (ML) analysis and entropy-based features is an urgent task for the study of nonlinear signals in the fields of finance, biology and medicine, including EEG analysis and Brain–Computer Interfacing. As several entropy measures exist, the problem is assessing the effectiveness of entropies used as features for the ML classification of nonlinear dynamics of time series. We propose a method, called global efficiency (GEFMCC), for assessing the effectiveness of entropy features using several chaotic mappings. GEFMCC is a fitness function for optimizing the type and parameters of entropies for time series classification problems. We analyze fuzzy entropy (FuzzyEn) and neural network entropy (NNetEn) for four discrete mappings, the logistic map, the sine map, the Planck map, and the two-memristor-based map, with a base length time series of 300 elements. FuzzyEn has greater GEFMCC in the classification task compared to NNetEn. However, NNetEn classification efficiency is higher than FuzzyEn for some local areas of the time series dynamics. The results of using horizontal visibility graphs (HVG) instead of the raw time series demonstrate the GEFMCC decrease after HVG time series transformation. However, the GEFMCC increases after applying the HVG for some local areas of time series dynamics. The scientific community can use the results to explore the efficiency of the entropy-based classification of time series in "The Entropy Universe". An implementation of the algorithms in Python is presented. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
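The horizontal visibility graph (HVG) transformation used in the study maps a time series to a graph by a simple rule: two points are linked when every value between them lies strictly below both. A direct, unoptimized sketch on a chaotic logistic-map series of the paper's base length (300 elements):

```python
import numpy as np

def horizontal_visibility_graph(series):
    """Edges (i, j) of the HVG: i < j are linked iff every intermediate
    value is strictly below both endpoints."""
    edges = []
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

# Logistic map x_{t+1} = r * x_t * (1 - x_t) in the chaotic regime (r = 4).
r, x = 4.0, 0.4
series = []
for _ in range(300):
    x = r * x * (1 - x)
    series.append(x)

edges = horizontal_visibility_graph(series)
degrees = np.bincount(np.array(edges).ravel())
print(len(edges), degrees.mean())
```

Entropy features would then be computed on the HVG degree sequence instead of the raw series; that step (FuzzyEn, NNetEn) is beyond this sketch.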
13. Unlocking the surface chemistry of ionic minerals: a high‐throughput pipeline for modeling realistic interfaces.
- Author
Mates-Torres, Eric and Rimola, Albert
- Subjects
- SURFACE chemistry, MINERALS, CHEMICAL species, SURFACE properties, FORMALDEHYDE
- Abstract
A systematic procedure is introduced for modeling charge‐neutral non‐polar surfaces of ionic minerals containing polyatomic anions. By integrating distance‐ and charge‐based clustering to identify chemical species within the mineral bulk, our pipeline, PolyCleaver, renders a variety of theoretically viable surface terminations. As a demonstrative example, this approach was applied to forsterite (Mg2SiO4), unveiling a rich interface landscape based on interactions with formaldehyde, a relevant multifaceted molecule, and more particularly in prebiotic chemistry. This high‐throughput method, going beyond techniques traditionally applied in the modeling of minerals, offers new insights into the potential catalytic properties of diverse surfaces, enabling a broader exploration of synthetic pathways in complex mineral systems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. How are decisions made in open source software communities? — Uncovering rationale from python email repositories.
- Author
Sharma, Pankajeshwara Nand, Savarimuthu, Bastin Tony Roy, and Stanger, Nigel
- Subjects
- OPEN source software, PYTHON programming language, PYTHONS, NATURAL language processing, GROUP decision making, DECISION making
- Abstract
Group decision‐making (GDM) processes shape the evolution of open source software (OSS) products, thus playing an important role in the governance of open source software communities. While these GDM processes have attracted the attention of researchers, the rationale behind decisions, that is, how decisions that enhance the OSS are made, has not received much attention. This work bridges this gap by extracting these rationales from a large open source repository comprising 1.55 million emails available in Python development archives. This work makes a methodological contribution by presenting a heuristics‐based rationale extraction system called Rationale Miner that employs information retrieval, natural language processing, and heuristics‐based techniques. Using these techniques, it extracts the rationale behind specific decisions (for example, whether a new module was added based on core developer consensus or a benevolent dictator's pronouncement). This work unearths 11 such rationales behind decisions in the Python community and thus makes a knowledge contribution. It also analyzes the prevalence of these rationales across all PEPs and three sub‐types of PEPs: Process, Informational, and Standard Track PEPs. The effectiveness of our contributions has been positively evaluated using quantitative and qualitative approaches (e.g., comparison against baselines for rationale identification showed up to 47% improvement in the most conservative case, and feedback from the Python steering committee confirmed the accurate identification of rationales). The approach proposed in this work can be used and extended to discover the rationale behind decisions that remain hidden in communication repositories of other OSS projects, which will make the decision‐making (DM) process transparent to stakeholders and encourage decision‐makers to be more accountable. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
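A heuristics-based rationale extraction of the kind described can be sketched with regular expressions; the patterns and email snippets below are toy assumptions, not Rationale Miner's actual rules or data:

```python
import re

# Toy email snippets; the two patterns flag rationales of the kind the
# abstract mentions (core-developer consensus vs. BDFL pronouncement).
emails = [
    "After discussion, the core developers reached consensus to add the module.",
    "As BDFL I pronounce that PEP 572 is accepted.",
    "Just a status update, nothing decided here.",
]

patterns = {
    "consensus": re.compile(r"\bconsensus\b", re.I),
    "bdfl_pronouncement": re.compile(r"\b(BDFL|pronounce)\w*", re.I),
}

# Pair each matched rationale label with the index of the email it came from.
found = [(name, i) for i, mail in enumerate(emails)
         for name, pat in patterns.items() if pat.search(mail)]
print(found)
```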
15. On symbolic computation of C.P. Okeke functional equations using Python programming language.
- Author
Okeke, Chisom Prince, Ogala, Wisdom I., and Nadhomi, Timothy
- Subjects
- FUNCTIONAL equations, SYMBOLIC computation, PYTHON programming language, COMPUTER programming
- Abstract
The present paper is inspired by one of the questions posed by Okeke (Results Math 78(96):1-30, 2023; see Remark 2.10b). In particular, we aim to develop a robust computer code, based on the theoretical results obtained in Okeke (2023), which determines the polynomial solutions of the following functional equation, $$\sum_{i=1}^{n} \gamma_i F(a_i x + b_i y) = \sum_{j=1}^{m} (\alpha_j x + \beta_j y)\, f(c_j x + d_j y), \tag{0.1}$$ for all $x, y \in \mathbb{R}$, with $\gamma_i, \alpha_j, \beta_j \in \mathbb{R}$ and $a_i, b_i, c_j, d_j \in \mathbb{Q}$, and its special forms. The primary motivation for writing such a computer code is that solving even simple equations belonging to class (0.1) requires long and tiresome calculations. Therefore, one of the advantages of such a computer code is that it allows us to solve complicated problems quickly, easily, and efficiently. Additionally, the computer code significantly improves the level of accuracy of the calculations, along with the factor of speed. We point out that the computer code operates with symbolic calculations provided by the programming language Python, which means that it does not contain any numerical or approximate methods and yields the exact solutions of the equations considered. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
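The undetermined-coefficients strategy such a code would use can be sketched with SymPy on one simple member of class (0.1), here F(x+y) - F(x-y) = y*f(x); the cubic polynomial ansatz and the instance itself are our illustrative choices, not the paper's code:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
a0, a1, a2, a3 = sp.symbols('a0:4')   # coefficients of F
b0, b1, b2, b3 = sp.symbols('b0:4')   # coefficients of f

F = a3*t**3 + a2*t**2 + a1*t + a0
f = b3*t**3 + b2*t**2 + b1*t + b0

# Instance of class (0.1):  F(x+y) - F(x-y) = y * f(x)
lhs = F.subs(t, x + y) - F.subs(t, x - y)
rhs = y * f.subs(t, x)

# Undetermined coefficients: every monomial in x, y must cancel exactly.
eqs = sp.Poly(sp.expand(lhs - rhs), x, y).coeffs()
sol = sp.solve(eqs, [a0, a1, a2, a3, b0, b1, b2, b3], dict=True)[0]
print(sol)
```

By hand one can check the result: F(t) = a2*t^2 + a1*t + a0 with f(t) = 4*a2*t + 2*a1 satisfies the instance, while the cubic terms are forced to vanish.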
16. Analysis of Turkish voice recording data labeled with three emotions using machine learning algorithms [Üç duygu ile etiketlenmiş Türkçe ses kayıt verilerinin makine öğrenim algoritmalarıyla analizi].
- Author
Tepecik, Abdulkadir and Demir, Engin
- Subjects
- SENTIMENT analysis, MACHINE learning
- Abstract
With the development of technology, many transactions in our daily lives take place on the internet. The increasing use of the internet has also increased the amount of data in this environment. Subjects such as the processing and analysis of data have allowed the formation of new fields of science, especially in the world of informatics. In our study, the emotion labels of the data were first determined with the Python programming language on a data set of Turkish voice recordings, and the analyses were then carried out with the five machine learning algorithms most used in the literature. Analyses were conducted through both RapidMiner and the Python programming language. Both CountVectorizer and TF-IDF vectorization methods were used in the analyses performed with Python, while the TF-IDF vectorization method was used in the analyses performed with RapidMiner. As a result, the Naive Bayes and Support Vector Machine algorithms obtained the best accuracy rate in Python, at 67%. In RapidMiner, the Naive Bayes machine learning algorithm achieved the best accuracy rate of 60.61%. Our study is also an original study in which emotion detection was performed with the BERT model on data obtained from Turkish voice recordings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
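The TF-IDF-plus-Naive-Bayes pipeline the study evaluates is a few lines in scikit-learn; the English toy sentences below merely stand in for the Turkish transcripts, and the three labels mirror the study's setup:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the study's transcripts, labeled with three emotions.
texts = [
    "i am so happy and excited today", "what a wonderful joyful surprise",
    "this makes me furious and angry", "i hate this terrible outrage",
    "i feel sad and lonely tonight", "such a gloomy miserable day",
]
labels = ["happy", "happy", "angry", "angry", "sad", "sad"]

# TF-IDF vectorization feeding a multinomial Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["so angry and furious right now"]))
```

Swapping `TfidfVectorizer` for `CountVectorizer` reproduces the study's second vectorization variant with no other changes.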
17. Flexible development and evaluation of machine‐learning‐supported optimal control and estimation methods via HILO‐MPC.
- Author
Pohlodek, Johannes, Morabito, Bruno, Schlauch, Christian, Zometa, Pablo, and Findeisen, Rolf
- Abstract
Model‐based optimization approaches for monitoring and control, such as model predictive control and optimal state and parameter estimation, have been used successfully for decades in many engineering applications. Models describing the dynamics, constraints, and desired performance criteria are fundamental to model‐based approaches. Thanks to recent technological advancements in digitalization, machine‐learning methods such as deep learning, and computing power, there has been an increasing interest in using machine learning methods alongside model‐based approaches for control and estimation. The number of new methods and theoretical findings using machine learning for model‐based control and optimization is increasing rapidly. However, there are no easy‐to‐use, flexible, and freely available open‐source tools that support the development and straightforward solution of these problems. This article outlines the basic ideas and principles behind an easy‐to‐use Python toolbox that allows one to solve machine‐learning‐supported optimization, model predictive control, and estimation problems quickly and efficiently. The toolbox leverages state‐of‐the‐art machine learning libraries to train components used to define the problem. Machine learning can be used for a broad spectrum of problems, ranging from model predictive control for stabilization, set point tracking, path following, and trajectory tracking to moving horizon estimation and Kalman filtering. For linear systems, it enables quick generation of code for embedded model predictive control applications. HILO‐MPC is flexible and adaptable, making it especially suitable for research and fundamental development tasks. Due to its simplicity and numerous already implemented examples, it is also a powerful teaching tool. The usability is underlined by presenting a series of application examples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Design and implementation of a new low-cost system for order picking and distribution [Sipariş toplama ve dağıtım sistemleri için düşük maliyetli yeni bir sistem tasarımı ve uygulaması].
- Author
SIDDIQI, Sulaiman, PEHLİVAN, Ihsan, KALAYCI, Onur, BAHADIR, Tevfik, and UZUN, Süleyman
- Subjects
- ORDER picking systems, COVID-19 pandemic, ONLINE shopping, SUPPLY chains, HUMAN resources departments, WAREHOUSES
- Abstract
The COVID-19 pandemic has changed people's shopping habits, causing them to turn to online shopping sites on the internet. In the face of increasing order rates, it has become inevitable for manufacturers and suppliers to produce innovative solutions for collecting orders and delivering them on time. Companies and suppliers that had to operate with a limited workforce due to the pandemic have tried to solve this problem through the use of machinery. In this study, a new low-cost Order Picking and Distribution System developed for the order picking and distribution stages of the supply chain is described. This stage of the supply chain is the most problematic, time-consuming, and costly. In general, the solutions proposed so far use more human resources. With this application, it is aimed to create a more efficient system in terms of time, manpower, warehouse space, and cost. For this purpose, a prototype Order Picking and Distribution System was designed, manufactured, and implemented in real time. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Optimization Examples for Water Allocation, Energy, Carbon Emissions, and Costs.
- Author
Alamanos, Angelos and Garcia, Jorge Andres
- Subjects
- CARBON emissions, WATER rights, WATER management, RENEWABLE energy sources, POWER resources, ACHIEVEMENT
- Abstract
Definition: The field of Water Resources Management (WRM) is becoming increasingly interdisciplinary, realizing its direct connections with energy, food, and social and economic sciences, among others. Computationally, this leads to more complex models, wherein the achievement of multiple goals is sought. Optimization processes have found various applications in such complex WRM problems. This entry considers the main factors involved in modern WRM, and puts them in a single optimization problem, including water allocation from different sources to different uses and non-renewable and renewable energy supplies, with their associated carbon emissions and costs. The entry explores the problem mathematically by presenting different optimization approaches, such as linear, fuzzy, dynamic, goal, and non-linear programming models. Furthermore, codes for each model are provided in Python, an open-source language. This entry has an educational character, and the examples presented are easily reproducible, so this is expected to be a useful resource for students, modelers, researchers, and water managers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
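One of the entry's example formulations, linear programming for least-cost water allocation, can be reproduced with scipy; all capacities, demands, and unit costs below are hypothetical, not figures from the entry:

```python
from scipy.optimize import linprog

# Two sources (surface, ground) serving two uses (urban, agriculture);
# decision variables x = [s_urban, s_agri, g_urban, g_agri] in Mm3.
cost = [0.5, 0.5, 0.8, 0.8]          # assumed unit supply cost per Mm3

# Source capacities: surface <= 100, ground <= 60.
A_ub = [[1, 1, 0, 0],
        [0, 0, 1, 1]]
b_ub = [100, 60]

# Demands must be met exactly: urban = 80, agriculture = 70.
A_eq = [[1, 0, 1, 0],
        [0, 1, 0, 1]]
b_eq = [80, 70]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None))
print(res.x.round(2), res.fun)
```

The optimum uses the cheap surface source to capacity (100 Mm3) and covers the remaining 50 Mm3 from groundwater, for a total cost of 90; carbon or energy terms would enter as extra cost coefficients or constraints.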
20. Exploring the application of Python in the visualization teaching of structural chemistry [Python在结构化学可视化教学中的应用探索].
- Author
王延忻, 王宏娟, 石玉仁, and 杨云霞
- Abstract
This paper explores the use of the Python language and scientific computing libraries to visualize the wave functions and electron clouds of common hydrogen atomic s and p orbitals in structural chemistry. Multiple scripts have been developed in this process, employing data processing and various generation algorithms to achieve the visualization of wave functions and electron clouds. The aim is to guide students in a comprehensive study and understanding of the physical significance of wave functions and electron clouds. This approach enhances students' abilities for independent thinking and active learning. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
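The 1s case of the visualization described reduces to a few NumPy lines; plotting is omitted here, but the computed radial probability density already shows the familiar peak at the Bohr radius (atomic units assumed throughout):

```python
import numpy as np

# 1s wavefunction of hydrogen in atomic units: psi = exp(-r) / sqrt(pi).
r = np.linspace(0.01, 10, 2000)
psi = np.exp(-r) / np.sqrt(np.pi)

# Radial probability density P(r) = 4*pi*r^2*|psi|^2 = 4*r^2*exp(-2r);
# its maximum sits at the Bohr radius (r = 1 in atomic units).
P = 4 * np.pi * r**2 * psi**2
r_peak = r[np.argmax(P)]
print(round(r_peak, 2))

# Numerical check: the trapezoid integral of P over r should be ~1.
norm = float(np.sum((P[1:] + P[:-1]) / 2 * np.diff(r)))
print(round(norm, 3))
```

Feeding `P` into matplotlib's `plot`, or sampling points with probability proportional to `|psi|**2` for a scatter plot, gives the electron-cloud pictures used in teaching.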
21. A Blank Element Selection Algorithm for Element Fill-in-blank Problems in Client-side Web Programming.
- Author
Huiyu Qi, Nobuo Funabiki, Khaing Hsu Wai, Flasma Veronicha Hendryanna, Khin Thet Mon, Mustika Mentari, and Wen Chung Kao
- Subjects
- ALGORITHMS, PYTHON programming language, SOURCE code, CASCADING style sheets, WEB-based user interfaces, WEBSITES, EDUCATIONAL outcomes
- Abstract
Nowadays, web applications play central roles in information systems using the Internet. Thus, client-side web programming using HTML, CSS, and JavaScript should be mastered first by novice students. Previously, we presented the element fill-in-blank problem (EFP) for its self-study. An EFP instance asks students to fill in the blank elements in the given source code by referring to the screenshots of the corresponding web page. The correctness of any answer is marked through string matching. However, these blanks were manually selected by considering the importance of elements and the uniqueness of their correct answers. In this paper, we propose a blank element selection algorithm to automatically generate a new EFP instance from a given source code for client-side web programming. We define seven rules for blank element selection from the code, and implement the procedure in Python using the open-source BeautifulSoup library and regular expressions. For evaluation, we applied the algorithm to the 47 source codes used for manual generations and obtained better EFP instances with more blanks. Besides, we verified the effectiveness by generating 10 new instances with the algorithm and assigning them to 40 students. In addition, we extended its application to three source codes for games and verified the effectiveness by assigning them to 20 students, to further validate the applicability of the algorithm in EFP instance generation. We also evaluated the relationships between the number of blanks, the number of lines in the source codes, and the submission times and answer rates of students, to further assess the adaptability of the algorithm. These results allow us to measure the algorithm's versatility in generating a wide range of EFP instances and contribute to a comprehensive understanding of instance difficulties and learning outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
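The abstract above does not reproduce the paper's seven selection rules or its BeautifulSoup implementation. As a rough stdlib-only sketch of the one rule it does describe (a blank is only usable if its correct answer is unique in the code, so string matching stays unambiguous), one might write something like the following; the function name, the placeholder format, and the restriction to `href`/`src`/`class` attributes are all hypothetical:

```python
import re

def blank_elements(source, max_blanks=5):
    """Blank selected attribute values in HTML source, skipping any
    value that occurs more than once (so string matching against the
    answer key stays unambiguous)."""
    blanks, answer_key = [], {}

    def repl(match):
        value = match.group(2)
        # uniqueness rule: only blank values that appear exactly once
        if len(blanks) >= max_blanks or source.count(value) != 1:
            return match.group(0)
        blanks.append(value)
        key = f"__BLANK{len(blanks)}__"
        answer_key[key] = value
        return f'{match.group(1)}="{key}"'

    blanked = re.sub(r'(href|src|class)="([^"]+)"', repl, source)
    return blanked, answer_key

page = '<a href="index.html" class="menu">Home</a><img src="logo.png">'
blanked_code, answer_key = blank_elements(page)
```

A real generator would parse the DOM (e.g. with BeautifulSoup) rather than regex-matching attributes, and would apply all seven rules rather than this single one.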
22. A methodology for parameter estimation in system dynamics models using artificial intelligence.
- Author
-
Gadewadikar, Jyotirmay and Marshall, Jeremy
- Subjects
- *
SYSTEM dynamics , *ARTIFICIAL intelligence , *PARAMETER estimation , *SUPPORT vector machines , *RANDOM forest algorithms - Abstract
Multiple tools exist for separately simulating and estimating the parameters of system dynamics models. Artificial intelligence (AI) has been increasingly used to estimate the parameters of system dynamics models. The development of modeling tools and advanced environments has resulted in great benefits to the community at large. The incorporation of AI tools into system dynamics presents opportunities for expanding on current decision‐making methods. As systems become complex, the need to incorporate evidence‐based data‐driven methods increases. By integrating system dynamics tools and facilitating AI and system dynamics simulation in an integrated environment, model parameters can be estimated with the latest data, and the integrity of the model can be retained effectively. This provides an advantage to the efficiency and capabilities of the system dynamics model and its analysis. This paper presents a general methodology to incorporate regression AI into system dynamics models for simulation and analysis. To demonstrate the validity of the methodology, a case study involving a susceptible‐infected‐recovered model and empirical data from the COVID‐19 pandemic is performed using support vector machines (SVMs), artificial neural networks (ANNs), and random forests. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
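The abstract names SVM, ANN, and random-forest estimators but gives no code. Purely to illustrate the underlying idea of estimating a system dynamics parameter so that the simulated trajectory matches data, here is a stdlib sketch that fits the transmission rate of a susceptible-infected-recovered model by brute-force grid search instead of a learned regressor; all function names and numbers are invented:

```python
def simulate_sir(beta, gamma, s0, i0, days, dt=0.1):
    """Euler integration of the normalized SIR equations; returns the
    infected fraction at the end of each day."""
    s, i = s0, i0
    infected = []
    steps = round(1 / dt)
    for _ in range(days):
        for _ in range(steps):
            new_infections = beta * s * i * dt
            recoveries = gamma * i * dt
            s -= new_infections
            i += new_infections - recoveries
        infected.append(i)
    return infected

def fit_beta(observed, gamma, s0, i0):
    """Grid-search the transmission rate minimizing squared error --
    a stand-in for the paper's regression-based estimators."""
    best_beta, best_err = None, float("inf")
    for k in range(1, 101):
        beta = k / 100
        sim = simulate_sir(beta, gamma, s0, i0, len(observed))
        err = sum((a - b) ** 2 for a, b in zip(sim, observed))
        if err < best_err:
            best_beta, best_err = beta, err
    return best_beta

# recover a known transmission rate from a synthetic trajectory
truth = simulate_sir(0.42, 0.1, 0.99, 0.01, 30)
beta_hat = fit_beta(truth, gamma=0.1, s0=0.99, i0=0.01)
```

An AI-based estimator replaces the grid search with a model trained on (trajectory, parameter) pairs, which scales to many parameters at once.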
23. Upgrading financial education by adding Python‐based personalized financial projection: A randomized control trial.
- Author
-
Zhu, Alex Yue Feng
- Subjects
- *
PERSONAL finance , *TIME perspective , *FINANCIAL literacy , *EDUCATIONAL finance , *FINANCIAL planning - Abstract
Research has shown that even though standardized financial education has gained prevalence to promote financial literacy over the past decade, it has had little effect on personal financial planning. The present study used a randomized control trial to examine the effectiveness of a Python‐based personalized financial projection on young working adults in Hong Kong, and whether and how this approach improves their financial planning. Participants assigned to the experiment group received standardized financial education and Python‐based financial projections, while those in the control group received only standardized financial education. The assessment based on the two‐wave data showed that the Python‐based financial projection promoted future time perspectives, reduced temporal discounting, and improved financial planning via the full mediation of promoting financial attitudes. Although numerous applications for personal financial planning exist (such as Wallet, Walnut, Monefy, and Money View), our Python‐based financial projection stands out as a pioneering solution tailored for the hands‐on manipulation of programming code to effectively manage personal finances. Our findings suggest a new track to upgrade personalized financial projection and standardized financial education and contribute substantially to the development of personal finance education.
Practitioner notes
What is already known about this topic
- Standardized financial education promotes objective financial knowledge.
- Standardized financial education has a limited effect on personal financial planning.
- Classical personalized financial projection promotes personal financial planning, but the effect is small.
What this paper adds
- Introduction of a novel Python‐based personalized financial projection by manipulating projection code.
- Evidence that Python‐based personalized financial projection improves personal financial planning more strongly than the classical personalized financial projection.
- Evidence of why Python‐based personalized financial projection can improve personal financial planning.
Implications for practice and/or policy
- Facilitating engagement of young working adults with personalized financial planning through the use of a Python‐based intervention.
- Integrating Python‐based personalized financial projection into standardized financial education in the school setting.
- Using Python as the platform to design more topic‐specific financial education modules. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. Landscape of High-Performance Python to Develop Data Science and Machine Learning Applications.
- Author
-
CASTRO, OSCAR, BRUNEAU, PIERRICK, SOTTET, JEAN-SÉBASTIEN, and TORREGROSSA, DARIO
- Published
- 2024
- Full Text
- View/download PDF
25. 基于 Python 的房屋安全健康监测 数据处理与预测分析.
- Author
-
孙振林, 柳飞, 陶水忠, 杨晓辉, 于茜, 张硕, and 周闯
- Abstract
With increasing attention paid to housing safety, automatic health monitoring of old buildings has become a necessity. In this context, the preprocessing of automatic monitoring data was optimized and a preprocessor was built leveraging Python's abundant library resources. The data were screened for credibility through Grey Relational Analysis, and abnormal data were screened through box plots. The abnormal and missing data were then interpolated and replaced. Finally, the data were smoothed by Kalman filtering to reduce dispersion. Given that automatic monitoring equipment is sensitive to the surrounding environment, a data comparison method that checks automatic monitoring data against manual monitoring data was proposed, aimed at ensuring the accuracy and reliability of the automatic monitoring data. The short-term deformation monitoring data were predicted with a fitted curve; the prediction accuracy reaches 0.1% and the effective prediction time is about 10 days, which meets the needs of the research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
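The preprocessing chain described above (box-plot screening followed by interpolation of flagged points) can be sketched with the standard library alone. The 1.5 × IQR whisker rule below is the textbook box-plot criterion; the sensor readings are invented:

```python
import statistics

def iqr_outliers(values):
    """Flag values outside the box-plot whiskers (1.5 * IQR rule)."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [not (lo <= v <= hi) for v in values]

def interpolate(values, bad):
    """Replace flagged interior points by linear interpolation of
    their immediate neighbours (a simple stand-in for the paper's
    interpolation-and-replacement step)."""
    out = list(values)
    for k, is_bad in enumerate(bad):
        if is_bad and 0 < k < len(values) - 1:
            out[k] = (out[k - 1] + values[k + 1]) / 2
    return out

# deformation readings with one obvious sensor glitch
series = [3.1, 3.2, 3.0, 45.0, 3.3, 3.1, 3.2]
flags = iqr_outliers(series)
clean = interpolate(series, flags)
```

A Kalman filter would then smooth `clean` to reduce dispersion, as the abstract describes.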
26. Climate and disease: tackling coffee brown‐eye spot with advanced forecasting models.
- Author
-
Oliveira Aparecido, Lucas Eduardo, Lima, Rafael Fausto, Torsoni, Guilherme Botega, Lorençone, João Antonio, Lorençone, Pedro Antonio, and Souza Rolim, Glauco
- Abstract
Background: Climate significantly influences the interaction between pathogens and their hosts. This is particularly evident in the coffee industry, where fungal diseases like Cercospora coffeicola, causing brown‐eye spot, can reduce yields drastically. This study focuses on forecasting coffee brown‐eye spot using various models that incorporate agrometeorological data, allowing for predictions at least 1 week prior to the occurrence of disease. Data were gathered from eight locations across São Paulo and Minas Gerais, encompassing the South and Cerrado regions of Minas Gerais state. In the initial phase, various machine learning (ML) models and topologies were calibrated to forecast brown‐eye spot, identifying one with potential for advanced decision‐making. The top‐performing models were then employed in the next stage to forecast and spatially project the severity of brown‐eye spot across 2681 key Brazilian coffee‐producing municipalities. Meteorological data were sourced from NASA's Prediction of Worldwide Energy Resources platform, and the Penman–Monteith method was used to estimate reference evapotranspiration, leading to a Thornthwaite and Mather water‐balance calculation. Six ML models, K‐nearest neighbors (KNN), artificial neural network multilayer perceptron (MLP), support vector machine (SVM), random forests (RF), extreme gradient boosting (XGBoost), and gradient boosting regression (GradBOOSTING), were employed, considering disease latency when defining input variables. These models utilized climatic elements such as average air temperature, relative humidity, leaf wetness duration, rainfall, evapotranspiration, water deficit, and surplus.
Results: The XGBoost model proved most effective under high‐yielding conditions, demonstrating high precision and accuracy, whereas the SVM model excelled in low‐yielding scenarios. The incidence of brown‐eye spot varied noticeably between high‐ and low‐yield conditions, with significant regional differences observed. The accuracy of predicting brown‐eye spot severity in coffee plantations depended on the biennial production cycle. High‐yielding trees showed superior results with the XGBoost model (R2 = 0.77, root mean squared error, RMSE = 10.53), whereas the SVM model performed better under low‐yielding conditions (precision 0.76, RMSE = 12.82).
Conclusion: The study's application of agrometeorological variables and ML models successfully predicted the incidence of brown‐eye spot in coffee plantations with a 7-day lead time, illustrating that they are valuable tools for managing this significant agricultural challenge. © 2024 Society of Chemical Industry. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
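The R² and RMSE figures quoted above are standard regression scores. For readers unfamiliar with them, a minimal computation (on toy numbers, not the study's data) looks like this:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error between observations and predictions."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# toy severity observations vs. model predictions
obs = [10.0, 20.0, 30.0, 40.0]
pred = [12.0, 18.0, 33.0, 41.0]
score_rmse = rmse(obs, pred)
score_r2 = r2(obs, pred)
```

Higher R² and lower RMSE both indicate a better fit, which is how the XGBoost and SVM models above were compared.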
27. Optimization of Cutting Large Amounts of Dense Material.
- Author
-
Brezina, Ivan, Kultan, Jaroslav, and Pekár, Juraj
- Subjects
- *
DATA libraries , *PYTHON programming language , *WASTE minimization , *INDUSTRIAL costs , *RESOURCE allocation , *CONCRETE bridges - Abstract
The effective operation of any manufacturing enterprise cannot be achieved without the optimal allocation of available resources. The efficient distribution of input materials entering production, together with procurement costs, forms a significant part of production costs, and its optimization is one of the primary requirements for ensuring a company's production efficiency and competitive capability. One area within this context is the distribution of dense material required for constructing concrete structures such as bridges, roads, and buildings. When cutting dense material, it is necessary to create an optimal cutting plan to minimize waste. This paper aims to create a methodology for optimizing the cutting of metal rods with the goal of waste reduction. Solutions in MS Excel and Python have been developed and used based on the methods employed. The computed solutions can serve as a foundation for creating a data repository that holds information about purchased materials, realized production, and material remnants that can be utilized in subsequent cutting. Elements of the data repository will encompass cutting plans and remnants resulting from the implementation of calculated cutting plans. The data repository enables a multidimensional view for managing the cutting process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
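The paper's MS Excel and Python solutions are not shown in the abstract. As an illustration of what a cutting plan generator produces, here is a simple first-fit-decreasing heuristic; it is not the authors' optimization method and does not guarantee minimal waste:

```python
def plan_cuts(stock_len, demands):
    """Greedy first-fit-decreasing cutting plan: assign each requested
    piece (longest first) to the first rod with enough remnant.
    Returns a list of [remaining_length, pieces] per rod."""
    pieces = sorted(demands, reverse=True)
    rods = []
    for piece in pieces:
        for rod in rods:
            if rod[0] >= piece:            # piece fits in this remnant
                rod[0] -= piece
                rod[1].append(piece)
                break
        else:                              # no rod fits: open a new one
            rods.append([stock_len - piece, [piece]])
    return rods

# five requested pieces cut from 6.0 m rods
plan = plan_cuts(6.0, [2.5, 2.5, 3.0, 1.0, 2.0])
```

Here the five pieces fit into two rods with a 0.5 m remnant on each; the remnants are exactly what the paper's data repository would store for reuse in subsequent cutting. An exact approach would solve the cutting-stock problem as an integer program.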
28. Parselmouth for bioacoustics: automated acoustic analysis in Python.
- Author
-
Jadoul, Yannick, de Boer, Bart, and Ravignani, Andrea
- Subjects
- *
BIOACOUSTICS , *SOFTWARE development tools , *PYTHON programming language , *SPEECH , *ALGORITHMS , *ELECTRONIC data processing - Abstract
Bioacoustics increasingly relies on large datasets and computational methods. The need to batch-process large amounts of data and the increased focus on algorithmic processing require software tools. To optimally assist in a bioacoustician's workflow, software tools need to be as simple and effective as possible. Five years ago, the Python package Parselmouth was released to provide easy and intuitive access to all functionality in the Praat software. Whereas Praat is principally designed for phonetics and speech processing, plenty of bioacoustics studies have used its advanced acoustic algorithms. Here, we evaluate existing usage of Parselmouth and discuss in detail several studies which used the software library. We argue that Parselmouth has the potential to be used even more in bioacoustics research, and suggest future directions to be pursued with the help of Parselmouth. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Convolutional Neural Network (CNN) Model for the Classification of Varieties of Date Palm Fruits (Phoenix dactylifera L.).
- Author
-
Rybacki, Piotr, Niemann, Janetta, Derouiche, Samir, Chetehouna, Sara, Boulaares, Islam, Seghir, Nili Mohammed, Diatta, Jean, and Osuch, Andrzej
- Subjects
- *
DATE palm , *CONVOLUTIONAL neural networks , *DATES (Fruit) , *DIGITAL technology , *AUTOMATIC classification , *IMAGE analysis - Abstract
The popularity and demand for high-quality date palm fruits (Phoenix dactylifera L.) have been growing, and their quality largely depends on the type of handling, storage, and processing methods. The current methods of geometric evaluation and classification of date palm fruits are characterised by high labour intensity and are usually performed mechanically, which may cause additional damage and reduce the quality and value of the product. Therefore, non-contact methods are being sought based on image analysis, with digital solutions controlling the evaluation and classification processes. The main objective of this paper is to develop an automatic classification model for varieties of date palm fruits using a convolutional neural network (CNN) based on two fundamental criteria, i.e., colour difference and evaluation of geometric parameters of dates. A CNN with a fixed architecture was built, marked as DateNET, consisting of a system of five alternating Conv2D, MaxPooling2D, and Dropout classes. The validation accuracy of the model presented in this study depended on the selection of classification criteria. It was 85.24% for fruit colour-based classification and 87.62% for the geometric parameters only; however, it increased considerably to 93.41% when both the colour and geometry of dates were considered. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Unveiling the Spatial Distribution of Heat Demand in North-West-Europe Compiled with National Heat Consumption Data.
- Author
-
Jüstel, Alexander, Humm, Elias, Herbst, Eileen, Strozyk, Frank, Kukla, Peter, and Bracke, Rolf
- Subjects
- *
GREENHOUSE gases , *GEOGRAPHIC information systems , *DEMAND forecasting , *GEOTHERMAL resources , *ENTHALPY , *PYTHON programming language - Abstract
Space and water heating for residential and commercial buildings amount to a third of the European Union's total final energy consumption. Approximately 75% of the primary energy is still produced by burning fossil fuels, leading to high greenhouse gas emissions in the heating sector. Therefore, policymakers increasingly strive to trigger investments in sustainable and low-emission heating systems. This study forms part of the "Roll-out of Deep Geothermal Energy in North-West-Europe"-project and aims at quantifying the spatial heat demand distribution in the Interreg North-West-Europe region. An open-source geographic information system and selected Python packages for advanced geospatial processing, analysis, and visualization are utilized to constrain the maps. These were combined, streamlined, and optimized within the open-source Python package PyHeatDemand. Based on national and regional heat demand input data, three maps are developed to better constrain heat demand at a high spatial resolution of 100 m × 100 m (=1 ha) for the residential and commercial sectors, and for both together (in total). The developed methodology can not only be applied to transnational heat demand mapping but also on various scales ranging from city district level to states and countries. In addition, the workflow is highly flexible working with raster data, vector data, and tabular data. The results reveal a total heat demand of the Interreg North-West-Europe region of around 1700 TWh. The spatial distribution of the heat demand follows specific patterns, where heat demand peaks are usually in metropolitan regions like for the city of Paris (1400 MWh/ha), the city of Brussels (1300 MWh/ha), the London metropolitan area (520 MWh/ha), and the Rhine-Ruhr region (500 MWh/ha). The developed maps are compared with two international projects, Hotmaps and Heat Roadmap Europe's Pan European Thermal Atlas. 
The average total heat demand difference from values obtained in this study to Hotmaps and Heat Roadmap Europe is 24 MWh/ha and 84 MWh/ha, respectively. Assuming the implementation of real consumption data, an enhancement in spatial predictability is expected. The heat demand maps are therefore predestined to provide a conceptual first overview for decision-makers and market investors. The developed methods will further allow for anticipated mandatory municipal heat demand analyses. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
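The abstract describes aggregating heat demand onto a 100 m × 100 m raster. A minimal stdlib sketch of that binning step (PyHeatDemand's actual API is not shown here; the coordinates and demand values are invented) could be:

```python
from collections import defaultdict

def grid_heat_demand(points, cell=100):
    """Aggregate point heat demands (x, y in metres, demand in MWh)
    onto a cell x cell raster, keyed by integer cell indices --
    the binning idea behind a 100 m x 100 m heat demand map."""
    grid = defaultdict(float)
    for x, y, demand in points:
        grid[(int(x // cell), int(y // cell))] += demand
    return dict(grid)

# three buildings; the first two fall into the same 100 m cell
buildings = [(30, 40, 12.0), (70, 10, 8.0), (150, 20, 5.0)]
raster = grid_heat_demand(buildings)
```

The real workflow additionally handles vector and raster inputs and reprojects national datasets into a common grid before summing.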
31. Dynamic simulation and optimization of hydrogen fueling operated by n-bank cascade fueling method.
- Author
-
Oh, Jongyeon, Yoo, Sangwoo, Chae, Chungkeun, and Shin, Dongil
- Subjects
- *
DYNAMIC simulation , *HYDROGEN as fuel , *FUELING , *FUEL cell vehicles , *FUEL cells , *FUEL tanks - Abstract
A hydrogen fuel cell electric vehicle is fueled by the pressure difference between its tank and the high-pressure hydrogen compressed at the fueling station, and fueling slows as this pressure difference becomes small. To resolve this, most hydrogen fueling stations minimize fueling time with a cascade fueling method using two or three banks. Although protocols for hydrogen fueling exist, there are no guidelines for 3-bank operation, so each hydrogen fueling station fuels differently. This study proposes a dynamic simulation model, implemented as open source in Python, that tracks the temperature and pressure changes of the fueling tank. To implement the n-bank model, we add factors that can serve as operating guidelines for fueling stations, such as the supply pressure of the banks and the switch moment. Through optimization studies on these factors using the simulation model, we propose optimal operation guidelines for fueling stations. • Development of a dynamic simulation model for the hydrogen fueling process with an n-bank cascade system. • Python open-source simulator that allows users to analyze the simulation results under desired conditions. • Model validated using experimental data from an actual fueling process. • Optimization of the fueling process to shorten the fueling time and minimize the fueling energy. • Fueling guidelines for station operators with safety and fast fueling. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
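The paper's validated simulator is not reproduced in the abstract. The following toy isothermal model only illustrates the cascade idea, drawing from each bank until the pressure difference is too small to drive flow, then switching to a higher-pressure bank; the flow law, units, and thresholds are all invented:

```python
def cascade_fill(bank_pressures, v_bank, v_tank, p_start, p_target,
                 dt=0.01, k=0.5):
    """Toy isothermal cascade-fueling model: flow between bank and
    tank is proportional to their pressure difference, and the station
    switches banks when the driving difference drops below 1 bar.
    Returns the final tank pressure and a (time, pressure) log of the
    switch moments."""
    p_tank = p_start
    stage_log = []
    t = 0.0
    for p_bank in bank_pressures:
        while p_bank - p_tank > 1.0 and p_tank < p_target:
            flow = k * (p_bank - p_tank) * dt   # invented flow law
            p_bank -= flow / v_bank             # bank depressurizes
            p_tank += flow / v_tank             # tank pressurizes
            t += dt
        stage_log.append((round(t, 2), p_tank))
        if p_tank >= p_target:
            break
    return p_tank, stage_log

# three banks at increasing pressure (bar), small vehicle tank
final_p, log = cascade_fill([450, 700, 900], v_bank=5.0, v_tank=1.0,
                            p_start=20, p_target=850)
```

The real model additionally tracks tank temperature, which the fueling protocols constrain; the switch moments in `log` are exactly the operating factor the study optimizes.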
32. Application for the study of reactive power compensation.
- Author
-
Ramos Guardarrama, Josnier, Pérez Martínez, Maykop, González Del Castillo, Brian Adiel, and Silvério Freire, Raimundo Carlos
- Subjects
- *
REACTIVE power , *ELECTRIC power , *REACTIVE power control , *POWER electronics , *POWER system simulation , *FREEWARE (Computer software) - Abstract
Among the skills to be developed in the study of reactive power compensation control devices in electrical power systems is the analysis of the control models of static reactive compensators. Python is free software that allows data analysis, control, and simulation of electrical power systems. The objective of this article is to propose an application for the study of reactive power compensation that helps improve the teaching-learning process in the subject of Power Electronics. The study was based on a qualitative-descriptive methodology using analytical-synthetic and inductive-deductive methods, with surveys as the empirical method. For the processing and analysis of the information collected, the calculation of absolute and relative frequencies was used as the statistical method. The results corroborate the importance of using the proposed method to improve the teaching-learning process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
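As a small example of the kind of calculation such a teaching application automates, the reactive power a static compensator must inject to correct a load's power factor follows from Qc = P·(tan φ1 − tan φ2); the numbers below are illustrative:

```python
import math

def compensation_var(p_kw, pf_initial, pf_target):
    """Reactive power (kVAr) a compensator must inject to raise the
    load power factor from pf_initial to pf_target:
    Qc = P * (tan(phi1) - tan(phi2)), with phi = acos(pf)."""
    phi1 = math.acos(pf_initial)
    phi2 = math.acos(pf_target)
    return p_kw * (math.tan(phi1) - math.tan(phi2))

# correct a 100 kW load from power factor 0.70 to 0.95
qc = compensation_var(100.0, 0.70, 0.95)
```

About 69 kVAr of compensation is needed in this case; sweeping `pf_target` and plotting `qc` is the sort of visualization such an application can present to students.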
33. Novel Artificial Intelligence Tool for Real-time Patient Identification to Prevent Misidentification in Health Care.
- Author
-
Rajurkar, Shriram, Verma, Teerthraj, Mishra, S. P., and Bhatt, M. L. B.
- Subjects
- *
ARTIFICIAL intelligence , *IDENTIFICATION , *PYTHON programming language , *MEDICAL care , *ANONYMOUS persons , *ACQUISITION of data - Abstract
Purpose: Errors in the identification of patients in a health-care facility may result in the wrong dose or dosage being given to the wrong patient at the wrong site during radiotherapy sessions, radiopharmaceutical administration, radiological scans, etc. The aim of this article is to reduce errors in the identification of patients through implementation of a Python deep learning-based real-time patient identification program. Materials and Methods: The authors installed Anaconda Prompt (miniconda 3), Python (version 3.9.12), and Visual Studio Code (version 1.71.0) to design the patient identification program. In the field of view, the area of interest is face detection only. The program operates in three steps: image data collection, data transfer, and data analysis. The patient identification tool was developed using the OpenCV library for face recognition. Results: The program provides real-time patient identification information, together with other preset parameters such as disease site, with a precision of 0.92, a recall of 0.80, and a specificity of 0.90. The accuracy of the program was found to be 0.84. The program outputs "Unknown" if a patient's relative or an unknown person is found in the restricted region. Interpretation and Conclusions: This Python-based program is beneficial for confirming a patient's identity without manual intervention just before therapy, administering medications, and starting other medical procedures, among other things, to prevent unintended medical and health-related complications that may arise as a result of misidentification. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
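The precision, recall, specificity, and accuracy quoted above all derive from a binary confusion matrix. A minimal computation (with invented counts, not the study's evaluation data) is:

```python
def classification_metrics(tp, fp, tn, fn):
    """Precision, recall (sensitivity), specificity, and accuracy
    from binary confusion-matrix counts."""
    return {
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# illustrative counts for a face-recognition evaluation
m = classification_metrics(tp=80, fp=7, tn=90, fn=20)
```

Precision measures how often a positive identification is correct, while specificity measures how reliably unknown persons are rejected, the case the program reports as "Unknown".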
34. Empowering mathematics education through programming.
- Author
-
Lingefjärd, Thomas
- Subjects
- *
EDUCATION of mathematics teachers , *MATHEMATICS education (Higher) , *MATHEMATICAL programming , *PYTHON programming language - Abstract
One of my last assignments at the University of Gothenburg was to teach a sequence of three seminars on programming for prospective teachers (n=37). The three seminars are described in the introduction of this manuscript. Since this was a course for prospective upper secondary teachers of mathematics, it was decided that it should be a course in programming for learning mathematics. This manuscript is a research article, but also a manuscript about programming in GeoGebra, Python, and Wolfram Alpha. The examples I shared with my students led me to write a book about programming; see the reference list. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. MACHINE LEARNING METHODS APPLIED IN AIR QUALITY PREDICTION.
- Author
-
Vieru, Mihai-Claudiu and Cărbureanu, Mădălina
- Subjects
- *
AIR quality & the environment , *MACHINE learning , *PUBLIC health , *CARDIOVASCULAR diseases , *GRAPHIC methods - Abstract
Air quality is an important environmental component that has a significant influence on public health and well-being. Poor air quality can cause a variety of health problems, including respiratory and cardiovascular disorders. Therefore, there is a growing demand for air quality prediction tools that enable consumers and authorities to take the best decisions and to implement the necessary actions to reduce air pollution. The present paper describes an innovative application that uses machine learning techniques to supply users with real-time air quality predictions based on past data from their location. The Scikit-learn Python package was used to implement five machine learning algorithms: K-Nearest Neighbors, Random Forest, Gradient Boosting, Support Vector Regression (SVR), and AdaBoost. To achieve robust model performance, compatibility with cross-validation approaches was evaluated. The obtained results indicate that these machine learning techniques are successful at forecasting air quality. The AdaBoost method emerged as the most accurate predictor after extensive investigation, closely followed by Gradient Boosting, SVR, Random Forest, and K-Nearest Neighbors. Furthermore, the investigation also focused on the handling of inaccurate data and on providing graphical visualizations to highlight the algorithms' efficacy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
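The study used Scikit-learn; for intuition, the simplest of the five algorithms, k-nearest-neighbor regression, can be sketched with the standard library alone (the features, targets, and two-feature representation are invented):

```python
def knn_predict(train, query, k=3):
    """k-nearest-neighbour regression over (feature_vector, target)
    pairs: average the targets of the k closest training points
    (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    return sum(target for _, target in nearest) / k

# toy data: (PM2.5, humidity) -> air-quality index
train = [((10, 40), 30.0), ((12, 42), 33.0), ((11, 41), 31.0),
         ((80, 70), 160.0), ((85, 75), 170.0)]
aqi = knn_predict(train, (11, 40))
```

The query point sits near the three low-pollution records, so the prediction averages their indices; Scikit-learn's `KNeighborsRegressor` implements the same idea with efficient neighbor search.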
36. 6 ŞUBAT 2023 TÜRKİYE'DEKİ DEPREM FIRTINASININ X (TWITTER) ÖZELİNDE TANIMLAYICI ANALİZLERİNİN YAPILMASI.
- Author
-
DEMİRHAN, Tolga and HACIOĞLU, İlker
- Subjects
- *
EARTHQUAKES - Abstract
On February 06, 2023, a major earthquake swarm occurred in Turkey. After this event, which caused deep sadness across the country, the earthquake became the main topic of social network posts. Research has proven that social networks can be used as an important source for understanding public opinion in the face of real events. One of these networks, X (Twitter), is an important tool for sharing situations, opinions, requests for help, and information, especially in natural disasters such as earthquakes. Although there are earthquake disaster analysis studies based on these posts, which are a rich source of data, more case studies are needed to verify the effectiveness of earthquake analysis methods. The aim of this study is to conduct descriptive analyses to determine the agenda, tendencies, and behavior of the public through tweets shared on X during the earthquake swarm. In this context, an application was developed using the Python language and its libraries. In the first stage, a data set was created by retrieving 2,643,481 tweets containing the word "earthquake" posted on X between February 5-12, 2023. In the next stage, descriptive analyses were performed and the results were presented using data visualization tools. The results revealed the sharing behavior of users after a major disaster. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
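A core step of such descriptive analyses, term-frequency counting over the tweet corpus, can be sketched with the standard library (the example tweets are invented; the study's actual pipeline is not shown in the abstract):

```python
from collections import Counter
import re

def top_terms(tweets, n=3):
    """Tokenize tweets and return the n most common terms -- the basic
    descriptive statistic behind agenda/word-frequency analyses."""
    words = []
    for text in tweets:
        words += re.findall(r"\w+", text.lower())
    return Counter(words).most_common(n)

# invented sample posts ("deprem" = earthquake, "yardim" = help)
tweets = ["Deprem oldu, yardim lazim", "deprem sonrasi yardim",
          "enkaz altinda yardim bekleyenler var"]
top = top_terms(tweets)
```

At corpus scale, the same counts feed word clouds and time-series plots of the public agenda; a real pipeline would also strip stop words and handle Turkish-specific casing.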
37. Python在酸碱滴定分析教学中的应用.
- Author
-
李伟, 李鑫, 贾树恒, 白海鑫, 赵士举, 吴璐璐, 张海燕, and 范彩玲
- Subjects
- *
VOLUMETRIC analysis , *COMPUTER software - Abstract
Based on modern Python programming technology, we independently developed a graphical user interface (GUI) acid-base titration learning software. It can achieve the visualization of titration curve images for different acid-base systems such as monobasic strong acid (base), monobasic weak acid (base), polybasic acid (base), and mixed acid (base), as well as the calculation of stoichiometric points, titration jump ranges, and titration errors. The application of software makes the teaching content more intuitive and visualized, helping to gain a perceptual understanding of the changes in titration curves and a profound comprehension of its concepts and principles. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
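For the simplest case such software visualizes, a strong monoprotic acid titrated with a strong base, the titration curve follows directly from the excess moles of acid or base. A stdlib sketch (ignoring activity corrections, with Kw = 1e-14 at 25 °C; volumes in mL, concentrations in mol/L) is:

```python
import math

def ph_strong_acid_titration(c_acid, v_acid, c_base, v_base):
    """pH during titration of a strong monoprotic acid with a strong
    base, from the excess moles of H+ or OH- diluted in the total
    volume."""
    n_acid = c_acid * v_acid
    n_base = c_base * v_base
    v_total = v_acid + v_base
    if n_acid > n_base:                      # before the equivalence point
        return -math.log10((n_acid - n_base) / v_total)
    if n_base > n_acid:                      # after the equivalence point
        poh = -math.log10((n_base - n_acid) / v_total)
        return 14 - poh
    return 7.0                               # at the stoichiometric point

# 25 mL of 0.1 M strong acid titrated with 0.1 M strong base
curve = [ph_strong_acid_titration(0.1, 25, 0.1, v) for v in (0, 12.5, 25, 37.5)]
```

Plotting `curve` against added base volume reproduces the sharp titration jump around the stoichiometric point; weak acids and mixtures require the equilibrium treatment the software implements.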
38. Deep time spatio‐temporal data analysis using pyGPlates with PlateTectonicTools and GPlately.
- Author
-
Mather, Ben R., Müller, R. Dietmar, Zahirovic, Sabin, Cannon, John, Chin, Michael, Ilano, Lauren, Wright, Nicky M., Alfonso, Christopher, Williams, Simon, Tetley, Michael, and Merdith, Andrew
- Subjects
- *
PYTHON programming language , *GEOLOGICAL time scales , *PLATE tectonics , *MULTIPLE comparisons (Statistics) , *PYTHONS - Abstract
PyGPlates is an open‐source Python library to visualize and edit plate tectonic reconstructions created using GPlates. The Python API affords a greater level of flexibility than GPlates to interrogate plate reconstructions and integrate with other Python workflows. GPlately was created to accelerate spatio‐temporal data analysis leveraging pyGPlates and PlateTectonicTools within a simplified Python interface. This object‐oriented package enables the reconstruction of data through deep geologic time (points, lines, polygons and rasters), the interrogation of plate kinematic information (plate velocities, rates of subduction and seafloor spreading), the rapid comparison between multiple plate motion models, and the plotting of reconstructed output data on maps. All tools are designed to be parallel‐safe to accelerate spatio‐temporal analysis over multiple CPU processors. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. An Approach to Track and Analyze the Trend of Antimicrobial Resistance Using Python: A Pilot Study for Anand, Gujarat, India—May 2022–August 2023.
- Author
-
Khound, Priyanshu, Pandya, Himanshu, Patel, Rupal, Patel, Naimika, Darji, Siddhi A., Trivedi, Purvi, Mehta, Vandan, Raulji, Avani, and Banerjee, Devjani
- Subjects
- *
DRUG resistance in microorganisms , *PYTHONS , *DRUG resistance in bacteria , *DATA libraries , *SYSTEM integration , *PYTHON programming language - Abstract
The present work deals with the analysis and monitoring of bacterial resistance using Python for the state of Gujarat, India, where occurrences of drug-resistant bacteria are prevalent. It provides an insight into the portfolio of reported drug-resistant bacteria, which can be used to track resistance behavior and to suggest a treatment regime for a particular bacterium. The analysis was done in Python with Jupyter Notebook as the integrated development environment, using data analysis libraries such as Pandas, Seaborn, and Matplotlib. The data were loaded from an Excel file using Pandas and cleaned to transform features into the required format. Seaborn and Matplotlib were used to create data visualizations and present the data in an accessible manner using graphs, plots, and tables. This program can be used to study disaster epidemiology and the tracking, analysis, and surveillance of antimicrobial resistance with a proper system integration approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
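The study's Pandas/Seaborn pipeline is not reproduced in the abstract. Its central tabulation step, percent resistance per organism-antibiotic pair, can be sketched with the standard library (the susceptibility records below are invented):

```python
from collections import defaultdict

def resistance_rates(records):
    """Tabulate percent resistance per (organism, antibiotic) pair
    from susceptibility records ('R' resistant / 'S' susceptible)."""
    counts = defaultdict(lambda: [0, 0])     # [resistant, total]
    for organism, antibiotic, result in records:
        key = (organism, antibiotic)
        counts[key][1] += 1
        if result == "R":
            counts[key][0] += 1
    return {k: 100.0 * r / n for k, (r, n) in counts.items()}

data = [("E. coli", "ciprofloxacin", "R"),
        ("E. coli", "ciprofloxacin", "R"),
        ("E. coli", "ciprofloxacin", "S"),
        ("E. coli", "meropenem", "S")]
rates = resistance_rates(data)
```

In the Pandas version this is a one-line `groupby`; plotting `rates` over successive periods gives the resistance trend the study tracks.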
40. An Approach to Forecasting the Structure of Energy Generation in the Age of Energy Transition Based on the Automated Determination of Factor Significance.
- Author
-
Ilin, Igor V., Iliashenko, Oksana Yu., and Schenikov, Egor M.
- Subjects
- *
GREENHOUSE gases , *RENEWABLE energy sources , *ELECTRIC power production , *FORECASTING , *AUTOREGRESSIVE models , *TECHNOLOGICAL forecasting , *DEMAND forecasting - Abstract
In the age of energy transition that we are going through today, many research studies discuss how to develop various approaches to making forecasts aimed at obtaining quantitative assessments of the technical and economic indicators of the energy industry. This paper considers the adaptation of a comprehensive approach to forecasting the structure of energy generation based on the factor and trend approach and using autoregressive and multifactor models that apply a linear regression tool with ridge regularization. To implement this approach, we propose a tool for automated selection of the factors that have the most significant impact on the change in the structure of energy generation. This approach allows us to forecast the dynamics of electricity generation by different types of generating facilities as affected by the key factors in energy transition in the short, medium, and long term. As a result, we obtained quantitative relationships for the energy generation structure. Over the next 10 years, the share of generation using renewable energy sources will increase to 10%, and the share of thermal power plants, on the contrary, will decrease to 50%, despite the growth in demand for electricity. Also, greenhouse gas emissions will be reduced by 30%. We have also provided scientific justification for the sufficient reliability of the forecasts we present. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
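The paper's multifactor models use linear regression with ridge regularization. In one dimension with an unpenalized intercept, the ridge estimate has the closed form w = Sxy / (Sxx + α); the sketch below (with invented generation-share numbers) shows how increasing α shrinks the fitted slope:

```python
def ridge_fit(x, y, alpha):
    """One-dimensional linear regression with ridge (L2) penalty on
    the slope: minimizes sum((y - w*x - b)^2) + alpha * w^2. With
    centered data the closed form is w = Sxy / (Sxx + alpha)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    w = sxy / (sxx + alpha)
    b = my - w * mx
    return w, b

# toy yearly renewable generation shares (%)
years = [0, 1, 2, 3, 4]
share = [4.0, 5.1, 5.9, 7.2, 8.0]
w0, _ = ridge_fit(years, share, alpha=0.0)
w1, _ = ridge_fit(years, share, alpha=5.0)
```

With α = 0 this reduces to ordinary least squares; the multifactor case replaces the scalars by matrices, and the regularization stabilizes the estimate when factors are correlated.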
41. Implementing the European Space Agency's SentiNel Application Platform's Open-Source Python Module for Differential Synthetic Aperture Radar Interferometry Coseismic Ground Deformation from Sentinel-1 Data.
- Author
-
Occhipinti, Martina, Carboni, Filippo, Amorini, Shaila, Paltriccia, Nicola, López-Martínez, Carlos, and Porreca, Massimiliano
- Subjects
- *
RADAR interferometry, *PYTHON programming language, *SYNTHETIC apertures, *SYNTHETIC aperture radar, *INTERFEROMETRY, *REMOTE-sensing images - Abstract
Differential SAR Interferometry (DInSAR) is a widely exploited technique for studying ground deformation. A key application is detecting the effects of earthquakes, including detailed variations in ground deformation at different scales. In this work, a Python script (Snap2DQuake), based on the "snappy" module of ESA's SNAP software 9.0.8 for processing satellite imagery, is proposed. Snap2DQuake is aimed at producing detailed coseismic deformation maps from Sentinel-1 C-band data via the DInSAR technique. With this alternative approach, the processing is simplified and several issues that may occur when using the software are solved. The proposed tool has been tested on two case studies: the Mw 6.4 Petrinja earthquake (Croatia, December 2020) and the Mw 5.7 to Mw 6.3 earthquakes near Tyrnavós (Greece, March 2021). These earthquakes, which occurred in two different tectonic contexts, are used to test and verify the validity of the tool. Snap2DQuake provides detailed deformation maps along the vertical and E-W directions in perfect agreement with observations reported in previous works. These maps offer new insights into the deformation pattern linked to the earthquakes, demonstrating the reliability of Snap2DQuake as an alternative tool for users working on different applications, even those with only basic coding skills. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
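The core DInSAR step of converting unwrapped interferometric phase into line-of-sight displacement can be sketched as follows (this is the standard textbook relation, not code taken from Snap2DQuake; the helper name is hypothetical and the sign convention varies between processors):

```python
import math

def phase_to_los_displacement(dphi, wavelength=0.0555):
    """Convert unwrapped differential interferometric phase (radians) into
    line-of-sight displacement (metres): d = -wavelength * dphi / (4*pi).
    The default wavelength is Sentinel-1's C band (~5.55 cm)."""
    return -wavelength * dphi / (4 * math.pi)
```

One full interferometric fringe (a 2*pi phase cycle) thus corresponds to half a wavelength of line-of-sight motion, about 2.8 cm for Sentinel-1.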
42. Structural Outlier Detection and Zernike–Canterakis Moments for Molecular Surface Meshes—Fast Implementation in Python.
- Author
-
Banach, Mateusz
- Subjects
- *
OUTLIER detection, *STRUCTURAL bioinformatics, *PYTHON programming language, *MOLECULAR shapes, *QUATERNARY structure, *DATABASES - Abstract
Object retrieval systems measure the degree of similarity of the shape of 3D models. They search for the elements of 3D model databases that resemble a query model. In structural bioinformatics, the query model is a protein tertiary/quaternary structure, and the objective is to find similarly shaped molecules in the Protein Data Bank. With the ever-growing size of the PDB, a direct atomic-coordinate comparison with all its members is impractical. To overcome this problem, the shape of the molecules can be encoded by fixed-length feature vectors, and the distance of a protein to the entire PDB can then be measured in this low-dimensional domain in linear time. State-of-the-art approaches utilize Zernike–Canterakis moments for the shape encoding and supply the retrieval process with geometric data of the input structures; the BioZernike descriptors have been a standard utility of the PDB since 2020. However, when trying to calculate the ZC moments locally, few libraries are readily available for use in custom programs (i.e., without relying on external binaries), in particular programs written in Python. Here, a fast and well-documented Python implementation of the Pozo–Koehl algorithm is presented. In contrast to the more popular algorithm by Novotni and Klein, which is based on the voxelized volume, the PK algorithm produces ZC moments directly from the triangular surface meshes of 3D models; in particular, it can accept the molecular surfaces of proteins as its input. In the presented PK-Zernike library, owing to Numba's just-in-time compilation, a mesh with 50,000 facets is processed by a single thread in a second at moment order 20. Since this is the first time the PK algorithm has been used in structural bioinformatics, it is employed in a novel, simple, but efficient protein structure retrieval pipeline. The elimination of outlying chain fragments via a fast PCA-based subroutine improves the discrimination ability, allowing the pipeline to achieve a 0.961 area under the ROC curve in the BioZernike validation suite (0.997 for assemblies). The correlation between the results of the proposed approach and those of the 3D Surfer program attains values up to 0.99. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
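The PCA-based elimination of outlying fragments can be illustrated with a toy two-dimensional version (a hypothetical stand-in for the paper's subroutine, which operates on 3D chain coordinates; the function name and the k threshold are assumptions):

```python
import math

def pca_outliers_2d(points, k=2.0):
    """Flag indices of 2D points whose projection onto a principal axis
    lies more than k standard deviations from the mean."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Eigen-decomposition of the 2x2 covariance matrix (closed form).
    half_tr = (cxx + cyy) / 2
    det = cxx * cyy - cxy * cxy
    disc = math.sqrt(max(half_tr ** 2 - det, 0.0))
    flagged = set()
    for lam in (half_tr + disc, half_tr - disc):
        if lam <= 1e-12:          # degenerate axis: no spread to measure
            continue
        # Unit eigenvector for eigenvalue lam.
        if abs(cxy) > 1e-12:
            vx, vy = cxy, lam - cxx
        else:
            vx, vy = (1.0, 0.0) if abs(lam - cxx) < abs(lam - cyy) else (0.0, 1.0)
        norm = math.hypot(vx, vy)
        vx, vy = vx / norm, vy / norm
        limit = k * math.sqrt(lam)    # lam is the variance along this axis
        for i, p in enumerate(points):
            if abs((p[0] - mx) * vx + (p[1] - my) * vy) > limit:
                flagged.add(i)
    return sorted(flagged)
```

Points far from the cloud's principal axes are flagged; in the pipeline above, removing such fragments before computing moments improves shape discrimination.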
43. Density-based topology optimization with the Null Space Optimizer: a tutorial and a comparison.
- Author
-
Feppon, Florian
- Abstract
The Null Space Optimizer is a constrained optimization solver developed in the context of level-set-based Topology Optimization. One of its appealing aspects is that it requires relatively little tuning of unintuitive algorithm parameters. The first contribution of this paper is to introduce an upgrade of the Null Space Optimizer that makes it possible to solve optimization problems featuring a large number of constraints with a sparse Jacobian matrix. In particular, this allows bound constraints to be included, making it possible to use the Null Space Optimizer for solving density-based Topology Optimization problems. The second contribution of the paper is to present three tutorials giving an educational view of how to use the open-source Python implementation of the Null Space Optimizer to solve Topology Optimization problems in structural mechanics and conductive heat transfer, on structured and unstructured meshes. Elegant Python programming features are used to automate the implementation of density filters and the assembly of sparse Jacobian matrices. Numerical results are presented for three design problems and compared with those obtained with the Method of Moving Asymptotes (MMA), the interior-point optimizer IPOPT, and the Optimality Criteria (OC) method. We found that, in the situations considered, (i) the Null Space Optimizer is able to compute optimized designs with performance comparable to its competitors with very little parameter tuning, (ii) the OC method or IPOPT with default parameters sometimes converges to nonoptimal designs, and (iii) MMA sometimes converges to slightly better designs with a faster decay than the Null Space Optimizer during the first iterations, but may also require case-dependent fixes to converge to satisfactory solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
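The sparsity that makes bound constraints tractable is easy to see: each bound contributes a Jacobian row with a single nonzero entry. A minimal sketch in COO triplet form (illustrative only, not the Null Space Optimizer's internal representation; the function name is hypothetical):

```python
def bound_constraint_jacobian(n):
    """COO triplets (rows, cols, vals) for the Jacobian of the 2n bound
    constraints l - x <= 0 and x - u <= 0 on a design vector of length n.
    Every row holds exactly one nonzero entry (+/-1)."""
    rows, cols, vals = [], [], []
    for i in range(n):
        rows.append(i); cols.append(i); vals.append(-1.0)      # d(l_i - x_i)/dx_i
    for i in range(n):
        rows.append(n + i); cols.append(i); vals.append(1.0)   # d(x_i - u_i)/dx_i
    return rows, cols, vals
```

For n design variables, the 2n bound constraints store only 2n nonzeros instead of a dense 2n-by-n matrix, which is what allows density-based problems with per-element bounds to fit into a constraint-based solver.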
44. Improving access and use of climate projections for ecological research through the use of a new Python tool.
- Author
-
Paz, Andrea, Lauber, Thomas, Crowther, Thomas W., and van den Hoogen, Johan
- Subjects
- *
GENERAL circulation model, *SPATIAL systems - Abstract
Over the past decade, the use of future climate projections from the Coupled Model Intercomparison Project (CMIP) has become central in biodiversity science. Pre‐packaged datasets containing future projections of the widely used bioclimatic variables, for different times and socio‐economic pathways, have contributed immensely to the study of the implications of climate change for biodiversity. However, these datasets lack the flexibility to obtain projections for other target years, and the use of raw data requires expertise in coding and spatial information systems. The Python tool 'chelsa‐cmip6', developed by Karger et al., provides the needed flexibility by allowing users to generate bioclimatic variables for the time of their choice, provided the selected general circulation model and socioeconomic pathway combination exists. This is a fantastic step forward in bringing flexibility to the use of climate datasets in biodiversity research and will allow for more widespread use of the data provided by CMIP6. We hope it will also prompt the development of more user‐friendly tools for studying the effects of climate change on biodiversity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
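The kind of derivation that 'chelsa-cmip6' automates can be illustrated for three of the standard bioclimatic variables (following the common WorldClim definitions; this helper is a sketch written for this listing, not part of the tool):

```python
def bioclim_basic(tmin, tmax, prec):
    """Derive three standard bioclimatic variables from 12 monthly values
    (tmin/tmax in deg C, prec in mm)."""
    tavg = [(lo + hi) / 2 for lo, hi in zip(tmin, tmax)]
    return {
        "bio1": sum(tavg) / 12,   # BIO1: annual mean temperature
        "bio5": max(tmax),        # BIO5: max temperature of warmest month
        "bio12": sum(prec),       # BIO12: annual precipitation
    }
```

Tools like 'chelsa-cmip6' apply such per-pixel reductions to downscaled monthly climatologies for any user-chosen period, which is exactly the flexibility the pre-packaged datasets lack.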
45. Developing a Technique for Automatic Lineament Identification Based on the Neural Network Approach.
- Author
-
Grishkov, G. A., Nafigin, I. O., Ustinov, S. A., Petrov, V. A., and Minaev, V. A.
- Subjects
- *
AUTOMATIC identification, *DIGITAL elevation models, *GEOGRAPHIC information systems, *PROSPECTING, *MINES & mineral resources - Abstract
The purpose of this work is to study the potential of neural network technologies for extracting linear structures from Shuttle Radar Topography Mission (SRTM) digital terrain models (DTMs). Linear structures, also known as lineaments, play an important role in verifying known faults, identifying fault-fracture structures, and detailing the framework of discontinuous faults, as well as in the exploration of mineral resources. Their accurate and effective extraction is therefore of fundamental importance. Neural network technologies provide a number of advantages over sequential algorithms, including the ability to learn universal criteria for identifying lineaments from a training sample. This paper presents a comprehensive methodology comprising several key stages. The first is the authors' method of data preparation, which helps ensure the quality of the training sample and minimize the impact of noise. The second is an algorithm for vectorizing the output of the neural network, which allows the results (lineaments) to be exported easily to a geographic information system (GIS). The third is a method for minimizing the noise component of the training sample and optimizing the selection of synaptic weights by retraining the neural network on simulated data reflecting various localization conditions of the lineaments. To verify the results, the linear structures extracted by the neural network are spatially compared with lineaments identified by an operator. The comparison demonstrates the high potential of the proposed approach and shows that neural network technologies are a topical and promising means of extracting linear structures from digital terrain models. The results obtained show promise for practical application in the earth sciences. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
46. Kamodo: Simplifying model data access and utilization.
- Author
-
Ringuette, Rebecca, Rastaetter, Lutz, De Zeeuw, Darren, Pembroke, Asher, and Gerland IV, Oliver
- Subjects
- *
DATA modeling, *SPACE flight, *DATA visualization, *SPACE environment, *INTERPOLATION - Abstract
To address the lack of user-friendly software for working with model data across Heliophysics, the Community Coordinated Modeling Center (CCMC) at NASA's Goddard Space Flight Center has developed, via Kamodo, a model-agnostic method for users to easily access and utilize model data in their workflows. By abstracting away the broad range of file formats and the intricacies of interpolation on specialized grids, this approach significantly lowers the barrier to model data access and utilization for the community while adding exciting new capabilities to their tool boxes. This paper describes the direct interfaces to the model data, called model readers, and gives a basic introduction to their use. Additionally, we detail the planned approach for including custom interpolation codes and report current progress on specialized visualization developments. The CCMC is maintaining Kamodo as official NASA open-source software to enable and encourage community collaboration. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
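The model-reader idea, hiding file formats and grids behind a uniform interpolating callable, can be sketched in miniature (a hypothetical illustration of the pattern only, not Kamodo's actual API; the class name and one-dimensional grid are assumptions):

```python
from bisect import bisect_left

class ModelReader:
    """Expose a model variable as a callable that interpolates at
    arbitrary positions, so callers never touch the underlying file
    format or grid layout."""

    def __init__(self, grid, values):
        self.grid = grid        # strictly increasing 1-D grid
        self.values = values    # variable sampled on that grid

    def __call__(self, x):
        """Linearly interpolate the variable at position x,
        clamping to the grid boundaries."""
        g, v = self.grid, self.values
        if x <= g[0]:
            return v[0]
        if x >= g[-1]:
            return v[-1]
        i = bisect_left(g, x)
        t = (x - g[i - 1]) / (g[i] - g[i - 1])
        return v[i - 1] + t * (v[i] - v[i - 1])
```

A user-facing reader built this way can be swapped between models without changing downstream code, which is the abstraction the paper describes for Heliophysics model output.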
47. Python in Heliophysics Community (PyHC): Current status and future outlook.
- Author
-
Barnum, Julie, Masson, Arnaud, Friedel, Reinhard H.W., Roberts, Aaron, and Thomas, Brian A.
- Subjects
- *
PYTHON programming language, *PYTHONS, *SPACE environment, *INFORMATION architecture, *OPEN source software - Abstract
It has been four years since the formation of the Python in Heliophysics Community (PyHC). In that time, the community has made great strides towards embodying and implementing the ideals of a "Heliophysics Framework" put forth by Burrell et al. (2018). Specifically, the components of such a framework include: 1) centralization of current Python packages, 2) increasing the accessibility and connectivity of these projects, 3) consideration of software attribution issues, and 4) the establishment and implementation of best practices and standards for code development. We describe the manner in which, and the extent to which, PyHC has realized these four tenets. We then set forth suggestions for advancing PyHC's efforts: improving our information architecture; growing our community, both in terms of project sustainability and usage and in the social component of the community itself; improving PyHC package integration; and, finally, considering non-Python libraries. The suggested improvements and additions advance PyHC's mission and strategic goals while helping to better integrate PyHC into broader Heliophysics and space weather community efforts. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
48. PDBe CCDUtils: an RDKit-based toolkit for handling and analysing small molecules in the Protein Data Bank.
- Author
-
Kunnakkattu, Ibrahim Roshan, Choudhary, Preeti, Pravda, Lukas, Nadzirin, Nurul, Smart, Oliver S., Yuan, Qi, Anyango, Stephen, Nair, Sreenath, Varadi, Mihaly, and Velankar, Sameer
- Subjects
- *
BANKING industry, *SMALL molecules, *DATABASES, *COMPUTATIONAL chemistry, *PROTEINS - Abstract
While the Protein Data Bank (PDB) contains a wealth of structural information on ligands bound to macromolecules, their analysis can be challenging due to the large amount and diversity of data. Here, we present PDBe CCDUtils, a versatile toolkit for processing and analysing small molecules from the PDB in PDBx/mmCIF format. PDBe CCDUtils provides streamlined access to all the metadata for small molecules in the PDB and offers a set of convenient methods to compute various properties using RDKit, such as 2D depictions, 3D conformers, physicochemical properties, scaffolds, common fragments, and cross-references to small molecule databases using UniChem. The toolkit also provides methods for identifying all the covalently attached chemical components in a macromolecular structure and calculating similarity among small molecules. By providing a broad range of functionality, PDBe CCDUtils caters to the needs of researchers in cheminformatics, structural biology, bioinformatics and computational chemistry. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
49. Computation of Neutrosophic Soft Topology using Python.
- Author
-
Mershia Rabuni, J. J. and Balamani, N.
- Subjects
- *
PYTHON programming language, *PYTHONS, *CROWDSOURCING, *PROGRAMMING languages, *SOFT sets, *TOPOLOGY - Abstract
While other programming languages are either losing ground or stagnating, Python's popularity is growing. Soft sets cannot eliminate the uncertainties present in conventional approaches, whereas neutrosophic sets are capable of handling confusing and contradictory data. To reduce the amount of manual computation required to find the union, intersection, and complement of neutrosophic soft sets, programs were developed in Python. Additionally, Python is used to compute neutrosophic soft topological operators such as interior and closure. The developed Python programs are documented in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2023
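The set operations the paper implements can be sketched for plain (single-valued) neutrosophic sets, where each element carries a truth, indeterminacy, and falsity degree. Several conventions for union and intersection appear in the literature; the max/min/min convention used below is one common choice and not necessarily the paper's, and the function names are assumptions:

```python
def ns_union(a, b):
    """Union of two neutrosophic sets over the same universe, given as
    {element: (T, I, F)}: T = max, I = min, F = min (one common convention)."""
    return {e: (max(a[e][0], b[e][0]),
                min(a[e][1], b[e][1]),
                min(a[e][2], b[e][2])) for e in a}

def ns_intersection(a, b):
    """Intersection under the dual convention: T = min, I = max, F = max."""
    return {e: (min(a[e][0], b[e][0]),
                max(a[e][1], b[e][1]),
                max(a[e][2], b[e][2])) for e in a}

def ns_complement(a):
    """Complement: swap truth and falsity, invert indeterminacy."""
    return {e: (f, 1 - i, t) for e, (t, i, f) in a.items()}
```

A soft set then attaches one such neutrosophic set to each parameter, and the topological operators (interior, closure) are built by iterating these element-wise operations over the parameter family.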
50. pystacked: Stacking generalization and machine learning in Stata.
- Author
-
Ahrens, Achim, Hansen, Christian B., and Schaffer, Mark E.
- Subjects
- *
STACKING machines, *MACHINE learning, *SUPPORT vector machines, *RANDOM forest algorithms - Abstract
The pystacked command implements stacked generalization (Wolpert, 1992, Neural Networks 5: 241–259) for regression and binary classification via Python's scikit-learn. Stacking combines multiple supervised machine learners—the "base" or "level-0" learners—into one learner. The currently supported base learners include regularized regression, random forest, gradient boosted trees, support vector machines, and feed-forward neural nets (multilayer perceptron). pystacked can also be used as a "regular" machine learning program to fit one base learner and thus provides an easy-to-use application programming interface for scikit-learn's machine learning algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
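The stacking idea can be sketched with two toy base learners combined by a least-squares meta-step (illustrative only: real stacking, as in pystacked, fits the combination weights on cross-validated out-of-fold predictions rather than the in-sample ones used here, and all names below are hypothetical):

```python
def fit_mean(x, y):
    """Base learner 1: predict the training mean everywhere."""
    m = sum(y) / len(y)
    return lambda xs: [m] * len(xs)

def fit_linear(x, y):
    """Base learner 2: ordinary least squares y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return lambda xs: [a + b * xi for xi in xs]

def fit_stack(x, y, base_fits):
    """Fit two base learners, then solve the 2x2 normal equations for
    the least-squares weights of their combined prediction."""
    dot = lambda u, v: sum(p * q for p, q in zip(u, v))
    l1, l2 = [f(x, y) for f in base_fits]
    p1, p2 = l1(x), l2(x)
    a11, a12, a22 = dot(p1, p1), dot(p1, p2), dot(p2, p2)
    b1, b2 = dot(p1, y), dot(p2, y)
    det = a11 * a22 - a12 * a12
    w1 = (b1 * a22 - b2 * a12) / det
    w2 = (a11 * b2 - a12 * b1) / det
    return lambda xs: [w1 * q1 + w2 * q2 for q1, q2 in zip(l1(xs), l2(xs))]
```

On data the linear learner fits exactly, the meta-step assigns it all the weight; in general, stacking lets the data decide how much each base learner contributes.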
Discovery Service for Jio Institute Digital Library