27,643 results for "COMPUTER algorithms"
Search Results
2. Dynamic Placement in Refugee Resettlement.
- Author
-
Ahani, Narges, Gölz, Paul, Procaccia, Ariel D., Teytelboym, Alexander, and Trapp, Andrew C.
- Subjects
- *
REFUGEE resettlement , *EMPLOYMENT , *REFUGEE resettlement services , *STOCHASTIC programming , *ECONOMIC equilibrium , *COMPUTER algorithms - Abstract
This article details the development of two algorithms that improve upon the Annie Moore optimization software for helping refugees find resettlement locations with employment opportunities. The authors apply the new algorithms to data from the United States resettlement agency HIAS spanning 2014 to 2019, comparing them against Annie's baseline and hindsight-optimal employment. The new algorithms take the future employment of incoming arrivals into account and have been incorporated into an updated version of the Annie software.
- Published
- 2024
- Full Text
- View/download PDF
3. Analysis of creative thinking skills of students in the algorithm and programming course.
- Author
-
Alpindo, Okta, Febrian, Fera, Mirta, and Tambunan, Linda Rosmery
- Subjects
- *
CREATIVE thinking , *DIVERGENT thinking , *CREATIVE ability , *COMPUTER algorithms , *COMPUTER programming , *MATHEMATICS - Abstract
Creative thinking combines logical and divergent thinking to produce something new, as in mathematics. This study aimed to describe the creative thinking skills of students in a mathematics education study program, specifically in algorithms and computer programming courses. The analysis found that 22.73% of students were creative, 59.09% quite creative, 13.64% less creative, and 4.55% not creative. Per indicator of creative thinking ability, achievement rates were 60.23% for fluency, 51.14% for flexibility, 45.45% for originality, and 46.60% for elaboration. The study concluded that students' creative thinking competence tends to fall in the less creative category. It suggested that by assessing their students' creative thinking competencies, lecturers can also select learning models that improve those competencies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Multiscale method for identifying and marking the multiform fractures from visible-light rock-mass images.
- Author
-
Yongbo Pan, Junzhi Cui, and Zhenhao Xu
- Subjects
- *
ROCK mechanics , *VISIBLE spectra , *FRACTURE mechanics , *GRAYSCALE model , *COMPUTER algorithms - Abstract
Multiform fractures have a direct impact on the mechanical performance of rock masses. To accurately identify multiform fractures, the distribution patterns of grayscale and the differential features of fractures in their neighborhoods are summarized. Based on this, a multiscale processing algorithm is proposed. The multiscale process is as follows. On the neighborhood of pixels, a grayscale continuous function is constructed using bilinear interpolation, the smoothing of the grayscale function is realized by Gaussian local filtering, and the grayscale gradient and Hessian matrix are calculated with high accuracy. On small-scale blocks, the pixels are classified by adaptively setting the grayscale threshold to identify potential line segments and mini-fillings. On the global image, potential line segments and mini-fillings are spliced together by progressing the block frontier layer by layer to identify and mark multiform fractures. The accuracy of identifying multiform fractures is improved by constructing a grayscale continuous function and adaptively setting the grayscale thresholds on small-scale blocks. The layer-by-layer splicing algorithm is performed only on the domain of the two-layer small-scale blocks, reducing the complexity. Using rock mass images with different fracture types as examples, the identification results show that the proposed algorithm can accurately identify multiform fractures, laying the foundation for calculating the mechanical parameters of rock masses. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
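The pixel-neighborhood step this abstract describes (smooth the grayscale field, then compute its gradient and Hessian to respond to line-like fractures) can be sketched in NumPy. This is an illustrative finite-difference version, not the authors' bilinear-interpolation construction; using the larger Hessian eigenvalue as a ridge response for dark, line-like structures is a standard stand-in:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(img, sigma=1.0):
    """Separable Gaussian filtering: local smoothing of the grayscale field."""
    r = int(3 * sigma)
    k = gaussian_kernel1d(sigma, r)
    pad = np.pad(img, r, mode='edge')
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode='valid'), 0, tmp)

def hessian_ridge_response(img, sigma=1.0):
    """Gradient and Hessian of the smoothed grayscale field (finite differences).
    A dark fracture is a valley in intensity, so the second derivative across
    it is large and positive; the larger Hessian eigenvalue flags such pixels."""
    g = smooth(img.astype(float), sigma)
    gy, gx = np.gradient(g)          # first derivatives (axis 0, axis 1)
    gyy, gyx = np.gradient(gy)       # second derivatives
    gxy, gxx = np.gradient(gx)
    tr = gxx + gyy
    det = gxx * gyy - gxy * gyx
    # larger eigenvalue of the symmetric 2x2 Hessian at every pixel
    return tr / 2 + np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
```

A one-pixel-wide dark line in a bright image produces a strong positive response along the line and a near-zero response in flat regions, which is the cue the block-wise thresholding stage would then pick up.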
5. BIM-based framework of automatic tunnel segment assembly and deviation control.
- Author
-
Jian Gong, Tengfei Bao, Zheng Zhu, Hong Yu, and Yangtao Li
- Subjects
- *
BUILDING information modeling , *TUNNEL design & construction , *AUTOMATIC control systems , *VISUAL programming languages (Computer science) , *COMPUTER algorithms - Abstract
The design of universal segments and deviation control of segment assembly are essential for robust and low-risk tunnel construction. A building information modeling (BIM)-based framework was proposed for parametric modeling, automatic assembly, and deviation control of universal segments. First, segment models of different levels of detail (LoDs) were built based on BIM visual programming language (VPL) for different project life cycles. Then, the geometric constraints, requirements, and procedures for parametric segment assembly were distilled to develop a program that combines a novel typesetting algorithm with a 3D path replanning algorithm. Typesetting is implemented by introducing a point indication matrix, characterizing segments by sides, and manipulating geometries in a VPL. Simultaneously, 3D path replanning, with non-uniform rational B-splines (NURBS) and arcs as basic shapes, was used to resolve unacceptable deviation situations after typesetting. Finally, the proposed framework was validated on a water diversion line and was found to be more effective and accurate than the previous method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Path Planning for Unified Scheduling of Multi-Robot Based on BSO Algorithm.
- Author
-
Qiu, Guangping and Li, Jincan
- Subjects
- *
MOBILE robots , *POTENTIAL field method (Robotics) , *OPTIMIZATION algorithms , *ROBOTIC path planning , *COMPUTER algorithms , *ALGORITHMS , *SCHEDULING - Abstract
The technology for path planning of independent mobile robots is mature, but multi-robot path planning under unified scheduling and allocation is much more complex than single-robot path planning, requiring consideration of collisions between robots, globally optimal paths, and related problems. This paper proposes using the BSO algorithm for unified scheduling and allocation of multiple robots to improve the efficiency of task execution. The BSO algorithm is a new type of intelligent optimization algorithm that uses clustering ideas to search for local optimal solutions and obtains a global optimal solution by comparing them; it also uses mutation to increase diversity and avoid becoming trapped in local optima. Comparing the proposed BSO algorithm against the GA and SA algorithms in computer simulation, we obtained the optimal path planning for three robots under unified scheduling. The total distance of the optimal path obtained by the BSO algorithm was 27.36% and 25.31% shorter than those of the GA and SA algorithms, respectively. To further test the performance of the BSO algorithm, we conducted additional experiments on the unified scheduling of multiple robots. The experimental results show that the proposed BSO algorithm can significantly improve efficiency: the robots under unified scheduling perform point-to-point path planning without collisions and traverse all task target points along the shortest path without repetition. This algorithm is suitable for multi-robot tasks in large-scale environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
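The clustering-plus-mutation search the abstract attributes to BSO (brain storm optimization) can be illustrated with a minimal continuous-optimization sketch. The clustering step (a crude fitness-sorted grouping), the decaying Gaussian step size, and the replace-the-worst rule are simplifying assumptions, not the paper's exact variant:

```python
import numpy as np

def bso_minimize(f, dim=2, pop=30, clusters=3, iters=200, seed=0):
    """Minimal Brain Storm Optimization sketch: group 'ideas' into clusters,
    perturb each cluster's best idea with decaying Gaussian noise (mutation),
    and keep the candidate if it beats the cluster's worst idea."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    fit = np.array([f(x) for x in X])
    for t in range(iters):
        order = np.argsort(fit)                     # crude clustering by fitness
        groups = np.array_split(order, clusters)
        for g in groups:
            base = X[g[0]]                          # best idea in the cluster
            step = np.exp(-4 * t / iters)           # decaying mutation scale
            cand = base + step * rng.normal(size=dim)
            fc = f(cand)
            worst = g[np.argmax(fit[g])]
            if fc < fit[worst]:                     # replace the cluster's worst
                X[worst], fit[worst] = cand, fc
    best = np.argmin(fit)
    return X[best], fit[best]
```

On a simple convex test function the population collapses toward the optimum as the mutation scale anneals; a path-planning application would replace the test function with a tour-length objective over task points.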
7. Improving 2–5 Qubit Quantum Phase Estimation Circuits Using Machine Learning.
- Author
-
Woodrum, Charles, Wagner, Torrey, and Weeks, David
- Subjects
- *
MACHINE learning , *QUANTUM computers , *QUBITS , *QUANTUM computing , *COMPUTER algorithms - Abstract
Quantum computing has the potential to solve problems that are currently intractable for classical computers using algorithms like Quantum Phase Estimation (QPE); however, noise significantly hinders the performance of today's quantum computers. Machine learning has the potential to improve the performance of QPE algorithms, especially in the presence of noise. In this work, QPE circuits were simulated with varying levels of depolarizing noise to generate datasets of QPE output. In each case, the phase being estimated was generated with a phase gate, and each circuit modeled was defined by a randomly selected phase. The model accuracy, prediction speed, overfitting level, and variation in accuracy with noise level were determined for 5 machine learning algorithms. These attributes were compared to the traditional method of post-processing, and a 6x–36x improvement in model performance was noted, depending on the dataset. No algorithm was a clear winner when considering these 4 criteria, as the lowest-error model (neural network) was also the slowest predictor; the algorithm with the lowest overfitting and fastest prediction time (linear regression) had the highest error level and a high degree of variation of error with noise. The XGBoost ensemble algorithm was judged to be the best tradeoff between these criteria due to its error level, prediction time, and low variation of error with noise. For the first time, a machine learning model was validated using a 2-qubit datapoint obtained from an IBMQ quantum computer. The best 2-qubit model predicted within 2% of the actual phase, while the traditional method had a 25% error. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
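The core idea, replacing argmax post-processing of noisy QPE counts with a learned regression model, can be sketched with synthetic data. The noise model (ideal QPE distribution mixed with a uniform, depolarizing-style floor), the 4-bit register, and the least-squares regressor are all assumptions for illustration, not the authors' simulated-circuit pipeline:

```python
import numpy as np

def qpe_distribution(phase, n_bits=4, p_noise=0.2, shots=512, seed=0):
    """Synthetic stand-in for noisy QPE output: the ideal n-bit QPE outcome
    probabilities mixed with uniform noise, then sampled with finite shots."""
    rng = np.random.default_rng(seed)
    N = 2 ** n_bits
    k = np.arange(N)
    d = phase - k / N
    with np.errstate(divide='ignore', invalid='ignore'):
        amp = np.sin(np.pi * N * d) / (N * np.sin(np.pi * d))
    amp = np.where(np.isclose(np.sin(np.pi * d), 0.0), 1.0, amp)  # d = 0 limit
    p = amp ** 2
    p = (1 - p_noise) * p / p.sum() + p_noise / N
    return rng.multinomial(shots, p) / shots

# train a least-squares model mapping the empirical distribution to the phase
rng = np.random.default_rng(1)
train_phases = rng.uniform(0.05, 0.95, 400)   # avoid the circular wrap-around
X = np.array([qpe_distribution(ph, seed=i) for i, ph in enumerate(train_phases)])
A = np.c_[X, np.ones(len(X))]                 # add an intercept column
coef, *_ = np.linalg.lstsq(A, train_phases, rcond=None)

def predict_phase(counts):
    """Learned post-processing: affine map from the count vector to a phase."""
    return float(np.r_[counts, 1.0] @ coef)
```

Even this linear model interpolates between bins, whereas argmax post-processing is limited to the 1/2^n grid, which is the flavor of improvement the paper reports for its five (more capable) learners.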
8. An End-to-End Artificial Intelligence of Things (AIoT) Solution for Protecting Pipeline Easements against External Interference—An Australian Use-Case.
- Author
-
Iqbal, Umair, Barthelemy, Johan, and Michal, Guillaume
- Subjects
- *
ARTIFICIAL intelligence , *SERVITUDES , *HAZARDOUS substances , *OBJECT recognition (Computer vision) , *COMPUTER algorithms , *COMPUTER vision - Abstract
High-pressure pipelines are critical for transporting hazardous materials over long distances, but they face threats from third-party interference activities. Preventive measures are implemented, but interference accidents can still occur, making high-quality detection strategies vital. This paper proposes an end-to-end Artificial Intelligence of Things (AIoT) solution to detect potential interference threats in real time. The solution involves developing a smart visual sensor capable of processing images using state-of-the-art computer vision algorithms and transmitting alerts to pipeline operators in real time. The system's core is an object-detection model (e.g., You Only Look Once version 4 (YOLOv4) and DETR with Improved deNoising anchOr boxes (DINO)) trained on a custom Pipeline Visual Threat Assessment (Pipe-VisTA) dataset. Among the trained models, DINO achieved the best Mean Average Precision (mAP) of 71.2% on the unseen test dataset. However, for deployment on an edge computer with limited computational ability (i.e., the NVIDIA Jetson Nano), the simpler, TensorRT-optimized YOLOv4 model was used, which achieved a mAP of 61.8% on the test dataset. The developed AIoT device captures an image with a camera, processes it on the edge using the trained YOLOv4 model to detect potential threats, transmits threat alerts to a Fleet Portal via LoRaWAN, and hosts the alerts on a dashboard via a satellite network. The device was fully tested in the field to ensure its functionality prior to deployment for the SEA Gas use-case. The AIoT smart solution has been deployed across the 10 km stretch of the SEA Gas pipeline in the Murray Bridge section. In total, 48 AIoT devices and three Fleet Portals are installed to ensure line-of-sight communication between the devices and portals. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Intelligent classification of cardiotocography based on a support vector machine and convolutional neural network: Multiscene research.
- Author
-
Zhang, Wen, Tang, Zixiang, Shao, Huikai, Sun, Chao, He, Xin, Zhang, Jiahui, Wang, Tiantian, Yang, Xiaowei, Wang, Yiran, Bin, Yadi, Zhao, Lanbo, Zhang, Siyi, Liang, Dongxin, Wang, Jianliu, Zhong, Dexing, and Li, Qiling
- Subjects
- *
CONVOLUTIONAL neural networks , *SUPPORT vector machines , *FETAL heart rate monitoring , *COMPUTER-aided diagnosis , *COMPUTER algorithms - Abstract
Objective: To propose a computerized system utilizing multiscene analysis based on a support vector machine (SVM) and convolutional neural network (CNN) to assess cardiotocography (CTG) intelligently. Methods: We retrospectively collected 2542 CTG records of singleton pregnancies delivered at the maternity ward of the First Affiliated Hospital of Xi'an Jiaotong University from October 10, 2020, to August 7, 2021. CTG records were divided into five categories (baseline, variability, acceleration, deceleration, and normality). Apart from the category of normality, the other four categories of abnormal data correspond to four scenes. Each scene was divided into training and testing sets at 9:1 or 7:3. We used three computer algorithms (dynamic threshold, SVM, and CNN) to learn and optimize the system. Accuracy, sensitivity, and specificity were used to evaluate performance. Results: The global accuracy, sensitivity, and specificity of the system were 93.88%, 93.06%, and 94.33%, respectively. In the acceleration and deceleration scenes, the test set reached the highest performance when the convolution kernel size was 3. Conclusion: The multiscene research model using SVM and CNN is a potentially effective tool to assist obstetricians in classifying CTG intelligently. Synopsis: The computer-aided diagnosis system based on a support vector machine and convolutional neural network is valuable for the classification of cardiotocography. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Variable parameters memory-type control charts for simultaneous monitoring of the mean and variability of multivariate multiple linear regression profiles.
- Author
-
Sabahno, Hamed and Eriksson, Marie
- Subjects
- *
QUALITY control charts , *ADAPTIVE control systems , *COMPUTER algorithms , *MONTE Carlo method - Abstract
Variable parameters (VP) schemes are the most effective adaptive schemes for increasing control charts' sensitivity to small and moderate shift sizes. In this paper, we develop four VP adaptive memory-type control charts to monitor multivariate multiple linear regression profiles. All the proposed charts are single-chart (single-statistic) control charts: two use a Max operator and two use an SS (squared sum) operator to create the final statistic. Moreover, two of the charts monitor the regression parameters, and the other two monitor the residuals. After developing the VP control charts, we develop a computer algorithm with which the charts' time-to-signal and run-length-based performances can be measured. We then perform extensive numerical analyses and simulation studies to evaluate the charts' performance, and the results show significant improvements from using the VP schemes. Finally, we use real data from Riksstroke, the national quality register for stroke care in Sweden, to illustrate how the proposed control charts can be implemented in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
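The run-length machinery behind such evaluations (simulate the chart until it signals, then average the signal times) can be illustrated on a plain one-dimensional EWMA chart. The paper's multivariate VP charts are far more elaborate, so treat this as a toy stand-in with conventional parameters (λ = 0.2, width L = 2.86, asymptotic control limits):

```python
import numpy as np

def ewma_run_length(mu_shift, lam=0.2, L=2.86, max_n=5000, rng=None):
    """One simulated run length of a univariate EWMA chart monitoring
    N(mu_shift, 1) data against in-control N(0, 1)."""
    rng = rng or np.random.default_rng()
    sigma_z = np.sqrt(lam / (2 - lam))   # asymptotic std of the EWMA statistic
    z, n = 0.0, 0
    while n < max_n:
        n += 1
        x = rng.normal(mu_shift, 1.0)
        z = lam * x + (1 - lam) * z
        if abs(z) > L * sigma_z:         # chart signals
            return n
    return max_n                          # truncated (no signal)

def average_run_length(mu_shift, reps=400, seed=0):
    """Monte Carlo ARL estimate, the basic run-length-based performance metric."""
    rng = np.random.default_rng(seed)
    return float(np.mean([ewma_run_length(mu_shift, rng=rng) for _ in range(reps)]))
```

In-control (shift 0) runs are long, while a one-sigma mean shift is caught within a handful of samples; VP schemes aim to push that out-of-control ARL down further by adapting sample size and interval.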
11. Evaluating a Novel Approach to Detect the Vertical Structure of Insect Damage in Trees Using Multispectral and Three-Dimensional Data from Drone Imagery in the Northern Rocky Mountains, USA.
- Author
-
Shrestha, Abhinav, Hicke, Jeffrey A., Meddens, Arjan J. H., Karl, Jason W., and Stahl, Amanda T.
- Subjects
- *
MULTISPECTRAL imaging , *FOREST insects , *COMPUTER algorithms , *INSECT diseases , *GRID cells , *POINT cloud , *HONEYBEES - Abstract
Remote sensing is a well-established tool for detecting forest disturbances. The increased availability of uncrewed aerial systems (drones) and advances in computer algorithms have prompted numerous studies of forest insects using drones. To date, most studies have used height information from three-dimensional (3D) point clouds to segment individual trees and two-dimensional multispectral images to identify tree damage. Here, we describe a novel approach to classifying the multispectral reflectances assigned to the 3D point cloud into damaged and healthy classes, retaining the height information for the assessment of the vertical distribution of damage within a tree. Drone images were acquired in a 27-ha study area in the Northern Rocky Mountains that experienced recent damage from insects and then processed to produce a point cloud. Using the multispectral data assigned to the points on the point cloud (based on depth maps from individual multispectral images), a random forest (RF) classification model was developed, which had an overall accuracy (OA) of 98.6%, and when applied across the study area, it classified 77.0% of the points with probabilities greater than 75.0%. Based on the classified points and segmented trees, we developed and evaluated algorithms to separate healthy from damaged trees. For damaged trees, we identified the damage severity of each tree based on the percentages of red and gray points and identified top-kill based on the length of continuous damage from the treetop. Healthy and damaged trees were separated with a high accuracy (OA: 93.5%). The remaining damaged trees were separated into different damage severities with moderate accuracy (OA: 70.1%), consistent with the accuracies reported in similar studies. A subsequent algorithm identified top-kill on damaged trees with a high accuracy (OA: 91.8%). 
The damage severity algorithm classified most trees in the study area as healthy (78.3%), and most of the damaged trees in the study area exhibited some amount of top-kill (78.9%). Aggregating tree-level damage metrics to 30 m grid cells revealed several hot spots of damage and severe top-kill across the study area, illustrating the potential of this methodology to integrate with data products from space-based remote sensing platforms such as Landsat. Our results demonstrate the utility of drone-collected data for monitoring the vertical structure of tree damage from forest insects and diseases. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
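The tree-level damage metrics described (severity from the share of non-healthy points, top-kill from the continuous damaged run below the treetop) can be sketched directly on classified point-cloud points. The class names and severity thresholds below are illustrative assumptions, not the study's calibrated values:

```python
import numpy as np

def tree_damage_summary(heights, labels):
    """Summarize one segmented tree from its classified points.
    heights: point heights; labels: 'healthy', 'red', or 'gray' per point.
    Thresholds (5% / 50%) are illustrative, not the paper's."""
    heights = np.asarray(heights, float)
    labels = np.asarray(labels)
    frac_damaged = float(np.mean(labels != 'healthy'))
    if frac_damaged < 0.05:
        severity = 'healthy'
    elif frac_damaged < 0.5:
        severity = 'moderate'
    else:
        severity = 'severe'
    # top-kill: length of the continuous damaged run from the treetop downward
    order = np.argsort(heights)[::-1]            # highest point first
    damaged = labels[order] != 'healthy'
    top_kill_len = 0.0
    if damaged[0]:
        run = len(damaged) if damaged.all() else int(np.argmin(damaged))
        top_kill_len = float(heights[order[0]] - heights[order[run - 1]])
    return severity, top_kill_len
```

Aggregating these per-tree outputs over 30 m grid cells would then reproduce the hot-spot maps the study describes.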
12. Isomorphism classes of Drinfeld modules over finite fields.
- Author
-
Karemaker, Valentijn, Katen, Jeffrey, and Papikian, Mihran
- Subjects
- *
DRINFELD modules , *FINITE fields , *ISOMORPHISMS , *ENDOMORPHISMS , *COMMUTATIVE algebra , *ENDOMORPHISM rings , *COMPUTER algorithms - Abstract
We study isogeny classes of Drinfeld A-modules over finite fields k with commutative endomorphism algebra D, in order to describe the isomorphism classes in a fixed isogeny class. We study when the minimal order A[π] of D generated by the Frobenius π occurs as an endomorphism ring by proving when it is locally maximal at π, and show that this happens if and only if the isogeny class is ordinary or k is the prime field. We then describe how the monoid of fractional ideals of the endomorphism ring E of a Drinfeld module ϕ up to D-linear equivalence acts on the isomorphism classes in the isogeny class of ϕ, in the spirit of Hayes. We show that the action is free when restricted to kernel ideals, of which we give three equivalent definitions, and determine when the action is transitive. In particular, the action is free and transitive on the isomorphism classes in an isogeny class which is either ordinary or defined over the prime field, yielding a complete and explicit description in these cases, which can be implemented as a computer algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Extraction of fractures in shale CT images using improved U-Net.
- Author
-
Xiang Wu, Fei Wang, Xiaoqiu Zhang, Bohua Han, Qianru Liu, and Yonghao Zhang
- Subjects
- *
DEEP learning , *COMPUTED tomography , *IMAGE segmentation , *INFORMATION retrieval , *COMPUTER algorithms - Abstract
Accurate extraction of pores and fractures is a prerequisite for constructing digital rocks for physical property simulation and microstructural response analysis. However, fractures in CT images are similar in grayscale to the rock matrix, and traditional algorithms have difficulty achieving accurate segmentation. In this study, a dataset containing multiscale fracture information was constructed, and a U-Net semantic segmentation model with an scSE attention mechanism was used to classify shale CT images at the pixel level; the results were compared with traditional methods. The results showed that the CLAHE algorithm effectively removed noise and enhanced fracture information in the dark parts, which benefits further fracture extraction. The Canny edge detection algorithm produced significant false positives and failed to recognize the internal information of the fractures. The Otsu algorithm only extracted fractures with a significant difference from the background and was not sensitive enough to fine fractures. The MEF algorithm enhanced the edge information of the fractures and was sensitive to fine fractures, but it overestimated fracture apertures. The U-Net was able to identify almost all fractures with good continuity, with an MIoU of 0.80 and a Recall of 0.82. As image resolution increases, more fine fracture information can be extracted. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
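Of the traditional baselines mentioned, Otsu's method is compact enough to sketch: it picks the gray level that maximizes between-class variance, which is exactly why it misses fine fractures whose grayscale barely differs from the matrix. A from-scratch NumPy version:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Classic Otsu thresholding: choose the gray level maximizing the
    between-class variance sigma_B^2(t) = (mu_T*w0 - mu)^2 / (w0 * w1)."""
    hist, edges = np.histogram(np.asarray(img).ravel(), bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 probability up to each bin
    mu = np.cumsum(p * centers)       # cumulative class-0 mean mass
    mu_t = mu[-1]                     # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros(nbins)
    between[valid] = (mu_t * w0 - mu)[valid] ** 2 / (w0 * w1)[valid]
    return centers[np.argmax(between)]
```

On clearly bimodal intensities the threshold lands in the valley between the two modes; a shallow fracture mode merged into the matrix mode gives no valley, reproducing the failure the abstract reports.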
14. Predicting of Melanoma Skin Cancer Using Machine Learning Methods.
- Author
-
Bütüner, Resul and Calp, M. Hanefi
- Subjects
- *
MELANOMA diagnosis , *SKIN cancer , *MACHINE learning , *ARTIFICIAL neural networks , *COMPUTER algorithms - Abstract
Cancer is known as the second leading cause of death worldwide, and skin cancer is one of its most common types. As with almost all cancers, early diagnosis and treatment of skin cancer are of great importance. In the cancer diagnosis process, machine learning-based methods are widely used alongside traditional methods; their most important advantage is that they eliminate or minimize human errors that may arise during diagnosis. In this study, skin cancer was diagnosed with K-Nearest Neighbor (KNN), Naive Bayes (NB), Random Forest (RF), Logistic Regression (LR), and Artificial Neural Network (ANN) methods using images taken from patients. For each algorithm, the training and testing process was run, the results were analyzed, and models were created. With these models, benign and malignant lesions were compared, skin cancerous lesions were detected, and success percentages were reported. The best results were obtained with the ANN method, with a training accuracy of 99.1% and a testing accuracy of 98.6%. When different inputs were given to the proposed ANN model, it predicted skin cancer with a high accuracy rate. The results demonstrate the success of the study and show that machine learning methods are usable in the cancer diagnosis process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
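One of the five methods compared, K-Nearest Neighbor, is easy to sketch from scratch. The abstract does not specify the image feature representation, so plain numeric feature vectors with integer class labels (e.g., 0 = benign, 1 = malignant) are assumed here:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Plain k-nearest-neighbour majority vote with Euclidean distance.
    y_train: non-negative integer class labels."""
    X_train = np.asarray(X_train, float)
    X_test = np.asarray(X_test, float)
    # pairwise squared distances: (n_test, n_train)
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]        # k nearest training points
    votes = np.asarray(y_train)[idx]
    return np.array([np.bincount(v).argmax() for v in votes])
```

For real lesion images, the vectors would come from a feature extractor (color, texture, shape descriptors); KNN itself has no training step beyond storing the data, which is why it trades prediction speed for simplicity.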
15. Unveiling the Non-Linear Influence of Eye-Level Streetscape Factors on Walking Preference: Evidence from Tokyo.
- Author
-
Huang, Lu, Oki, Takuya, Muto, Sachio, and Ogawa, Yoshiki
- Subjects
- *
ROAD interchanges & intersections , *COMPUTER vision , *URBAN planning , *COMPUTER algorithms , *SUSTAINABLE development , *FITNESS walking - Abstract
Promoting walking is crucial for sustainable development and fosters individual health and well-being. Therefore, comprehensive investigations of factors that make walking attractive are vital. Previous research has linked streetscapes at eye-level to walking preferences, which usually focuses on simple linear relationships, neglecting the complex non-linear dynamics. Additionally, the varied effects of streetscape factors across street segments and intersections and different street structures remain largely unexplored. To address these gaps, this study explores how eye-level streetscapes influence walking preferences in various street segments and intersections in Setagaya Ward, Tokyo. Using street view data, an image survey, and computer vision algorithms, we measured eye-level streetscape factors and walking preferences. The Extreme Gradient Boosting (XGBoost) model was then applied to analyze their non-linear relationships. This study identified key streetscape factors influencing walking preferences and uncovered non-linear trends within various factors, showcasing a variety of patterns, including upward, downward, and threshold effects. Moreover, our findings highlight the heterogeneity of the structural characteristics of street segments and intersections, which also impact the relationship between eye-level streetscapes and walking preferences. These insights can significantly inform decision-making in urban streetscape design, enhancing pedestrian perceptions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Plus noise unlock.
- Author
-
Fradkin, Meesh
- Subjects
- *
SPEECH synthesis software , *SOUND art , *MUSIC & technology , *ELECTRONIC music , *MACHINE learning , *SOUND waves , *COMPUTER algorithms , *ARTISTS with disabilities , *VISUAL culture - Abstract
What happens when speech synthesis applications are incompatible with certain software? This essay considers how inaccessibility within the software and programming language Max is sonically, aesthetically, culturally, and ideologically amplified through a case study of "plus noise unlock," which is the sound made by the author's computer when she attempts to use a screen reader within a patcher window. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Application of deep learning in radiation therapy for cancer.
- Author
-
Wen, X., Zhao, C., Zhao, B., Yuan, M., Chang, J., Liu, W., Meng, J., Shi, L., Yang, S., Zeng, J., and Yang, Y.
- Subjects
- *
CANCER radiotherapy , *DEEP learning , *ARTIFICIAL intelligence in medicine , *COMPUTER algorithms , *CANCER treatment - Abstract
In recent years, with the development of artificial intelligence, deep learning has gradually been applied to clinical treatment and research, including radiotherapy, a crucial method for cancer treatment. This study summarizes commonly used and recent deep learning algorithms (including transformer and diffusion models), introduces the workflows of different radiotherapy types, illustrates the application of different algorithms in different radiotherapy modules, and discusses the shortcomings and challenges of deep learning in radiotherapy, so as to support the development of automatic radiotherapy for cancer. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Optimizing strategies for slowing the spread of invasive species.
- Author
-
Lampert, Adam
- Subjects
- *
INTRODUCED species , *COMPUTER algorithms , *ENVIRONMENTAL protection , *POPULATION dynamics , *ECOSYSTEMS , *BIODIVERSITY - Abstract
Invasive species are spreading worldwide, causing damage to ecosystems, biodiversity, agriculture, and human health. A major question is, therefore, how to distribute treatment efforts cost-effectively across space and time to prevent or slow the spread of invasive species. However, finding optimal control strategies for the complex spatial-temporal dynamics of populations is complicated and requires novel methodologies. Here, we develop a novel algorithm that can be applied to various population models. The algorithm finds the optimal spatial distribution of treatment efforts and the optimal propagation speed of the target species. We apply the algorithm to examine how the results depend on the species' demography and response to the treatment method. In particular, we analyze (1) a generic model and (2) a detailed model for the management of the spongy moth in North America to slow its spread via mating disruption. We show that, when utilizing optimization approaches to contain invasive species, significant improvements can be made in terms of cost-efficiency. The methodology developed here offers a much-needed tool for further examination of optimal strategies for additional cases of interest. Author summary: In light of the global spread of invasive species that threaten ecosystems, biodiversity, agriculture, and human health, we developed an advanced computer algorithm to identify the optimal strategy to slow the spread of established invaders. In particular, the algorithm finds the most cost-effective way to allocate resources for treatment across different locations. The algorithm is generic and is suitable for a wide variety of population dynamical models and treatment methods. We tested the algorithm using both a broad-based model and a specific model focused on the spongy moth in North America. Our findings revealed significantly improved strategies for slowing the spread of invasive species. 
The algorithm thus offers a promising tool for improving environmental conservation and assisting policymakers in facing the challenges posed by invasive species. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Computer algorithms of lower-order confounding in regular designs.
- Author
-
Li, Zhi and Li, Zhiming
- Subjects
- *
COMPUTER algorithms , *SEARCH algorithms , *PYTHON programming language , *EXPERIMENTAL design - Abstract
In the design of experiments, an optimal design should minimize the confounding between factorial effects, especially main effects and two-factor interaction effects. The general minimum lower-order confounding (GMC) criterion can be used to choose optimal regular designs based on the aliased component-number pattern. This paper studies the confounding properties of lower-order effects and provides several computer algorithms to calculate the lower-order confounding in regular designs, including a search algorithm for obtaining GMC designs. We implement these algorithms in Python. Several examples are analyzed to illustrate the effectiveness of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
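Lower-order confounding computations of this kind start from the defining contrast subgroup of a regular two-level design. A minimal sketch (the classic word-length pattern, not the paper's aliased component-number-pattern algorithms) computes it from the design generators by multiplying words via symmetric difference:

```python
from itertools import combinations

def wordlength_pattern(k, generators):
    """Word-length pattern (A_3, A_4, ..., A_k) of a regular 2^(k-p) design.
    generators: one set of factor letters per defining word, e.g. the 2^(5-2)
    design with D = AB, E = AC is [{'A','B','D'}, {'A','C','E'}].
    Products of words correspond to symmetric differences of letter sets."""
    p = len(generators)
    words = []
    for r in range(1, p + 1):
        for combo in combinations(generators, r):
            w = set()
            for g in combo:
                w = w ^ g            # multiply words: symmetric difference
            words.append(w)
    counts = {}
    for w in words:
        counts[len(w)] = counts.get(len(w), 0) + 1
    return [counts.get(length, 0) for length in range(3, k + 1)]
```

For the 2^(5-2) design with D = AB and E = AC, the defining contrast subgroup is {I, ABD, ACE, BCDE}, giving the pattern (A_3, A_4, A_5) = (2, 1, 0); criteria such as GMC refine comparisons between designs that such aggregate counts cannot distinguish.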
20. Virtual image recognition in aerobics exercise posture based on wearable light sensing devices.
- Author
-
Liu, Feng and Guo, Shuang
- Subjects
- *
HUMAN activity recognition , *IMAGE recognition (Computer vision) , *AEROBIC exercises , *POSTURE , *COMPUTER vision , *JUDGMENT (Psychology) , *COMPUTER algorithms - Abstract
Aerobics is a form of exercise widely used for health and fitness, and correct posture is key to its effectiveness. However, existing aerobics teaching is often limited by manual judgment and guidance, so a more effective, accurate, and real-time posture recognition method is needed. This study therefore develops a virtual image technology based on wearable light sensing devices for real-time, accurate aerobics pose recognition. By combining optical principles with image processing technology, it provides an aerobics posture recognition scheme that does not rely on manual judgment. A wearable optical sensor is designed to capture the dynamic posture of aerobics athletes and obtain real-time key-point information. Applying optical principles and image processing, the captured key-point information is converted into a virtual image, and computer vision algorithms then analyze and compare virtual images to recognize posture. Testing and analysis show that the system achieves high-precision aerobics posture recognition. Compared with traditional manual judgment, the proposed method has higher accuracy and better real-time performance, and it can automatically provide feedback and guidance to help athletes improve their posture. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. The Infrapolitics of Algorithmic Resistance: Exploring the complex interactions between humans and machines governed by algorithms.
- Author
-
Fabrino Mendonça, Ricardo, Filgueiras, Fernando, and Almeida, Virgílio
- Subjects
- *
COMPUTER algorithms , *RESISTANCE (Philosophy) , *ANONYMITY , *DECEPTION , *ACQUISITION of data - Abstract
This article presents the infrapolitics of everyday resistance to algorithms. Topics include James Scott's definition of infrapolitics, Michel de Certeau's work exploring arts of resistance in everyday life, and Jacques Rancière's exploration of disidentification in his political framework. The article goes on to detail several methods of resistance, including anonymization in digital spaces, deception intended to confuse algorithms, and the intentional hindrance of biometric data collection.
- Published
- 2023
- Full Text
- View/download PDF
22. Vision-based detection of car turn signals (left/right).
- Author
-
Madake, Jyoti, Wagatkar, O. M., Chaturvedi, Yashovardhan, Bhatlwande, Shripad, Shilaskar, Swati, and Vernekar, Kundan
- Subjects
- *
RANDOM forest algorithms , *COMPUTER algorithms , *COMPUTER vision , *DECISION trees , *K-means clustering - Abstract
In India, owing to the increasing number of accidents, vehicles are being automated using a variety of computer vision algorithms. This paper focuses on the detection of car tail signals under different illumination conditions. The system is implemented using the FAST and SIFT algorithms, which extract features from the images. The obtained features were quantized using the K-Means clustering algorithm, converting the large feature vectors into 8 clusters. These quantized feature vectors were used to train five different classifiers: Decision Tree, SVM (RBF), Random Forest, KNN, and a Voting Classifier. The data set used to train these classifiers contains around 9052 images. The accuracy results of the different classifiers are as follows: Decision Tree 76.36%, SVM (RBF) 77.91%, Random Forest 87.52%, KNN 89.72%, and Voting Classifier 86.52%. KNN thus gives the highest accuracy among the five classifiers. This has, to the best of the authors' knowledge, not been presented in the literature before. [ABSTRACT FROM AUTHOR]
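The pipeline described here — local descriptors, K-Means quantization into 8 visual words, and classical classifiers — is a standard bag-of-visual-words design. The sketch below illustrates it under stated assumptions: the descriptors are synthetic stand-ins for the paper's FAST/SIFT features (reduced to 32 dimensions for clarity; real SIFT descriptors are 128-D), and scikit-learn's `KMeans` and `KNeighborsClassifier` are assumed rather than whatever implementation the authors used.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-ins for SIFT descriptors: each "image" yields 20 descriptors
# drawn around a class-specific centre (class 0 = left signal, class 1 = right).
def fake_descriptors(label, n=20, dim=32):
    centre = np.zeros(dim)
    centre[label] = 10.0
    return centre + rng.normal(size=(n, dim))

# 60 images, alternating labels.
images = [(fake_descriptors(lbl), lbl) for _ in range(30) for lbl in (0, 1)]

# Step 1: quantize all descriptors into 8 visual words with K-Means.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
kmeans.fit(np.vstack([d for d, _ in images]))

# Step 2: represent each image as an 8-bin histogram of visual words.
def bovw_histogram(desc):
    words = kmeans.predict(desc)
    return np.bincount(words, minlength=8) / len(words)

X = np.array([bovw_histogram(d) for d, _ in images])
y = np.array([lbl for _, lbl in images])

# Step 3: classify the histograms with KNN (the paper's best classifier).
clf = KNeighborsClassifier(n_neighbors=5).fit(X[:40], y[:40])
accuracy = clf.score(X[40:], y[40:])
```

On real data, steps 1–3 would be preceded by cropping the tail-light region and computing actual FAST/SIFT descriptors with OpenCV.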
- Published
- 2024
- Full Text
- View/download PDF
23. B-spline method for solving fractional delay differential equations.
- Author
-
Sharadga, Mwaffag, Syam, Muhammed, and Hashim, Ishak
- Subjects
- *
ALGEBRAIC equations , *LINEAR algebra , *COLLOCATION methods , *COMPUTER algorithms , *DELAY differential equations - Abstract
In this paper, we use a fractional collocation method based on the B-spline basis to derive numerical solutions for a special form of fractional delay differential equations (DFDEs). The fractional derivative is defined in the sense of Caputo. This allows us to represent the DFDE under consideration in a matrix form that can be solved using matrix operations and tools from linear algebra. As a result, we obtain algebraic equations with unknown coefficients that can be solved efficiently by a computer algorithm. To illustrate the validity and efficiency of the method, exact and approximate solutions are compared and absolute errors are computed for an example. The numerical results, backed up by simulation, reveal that the absolute error is very small and that the approach is highly efficient. [ABSTRACT FROM AUTHOR]
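The core idea — expand the unknown solution in a B-spline basis, enforce the equation at collocation points, and solve the resulting linear system — can be illustrated on a much simpler integer-order problem. The sketch below (using SciPy's `BSpline`, an assumption; the paper's Caputo fractional operator and delay term are not reproduced) solves u''(x) = −1 with u(0) = u(1) = 0, whose exact solution is x(1 − x)/2:

```python
import numpy as np
from scipy.interpolate import BSpline

k, n = 3, 10                      # cubic splines, 10 basis functions
# Clamped knot vector on [0, 1]; len(t) must equal n + k + 1.
t = np.concatenate([[0.0] * k, np.linspace(0.0, 1.0, n - k + 1), [1.0] * k])
# Greville abscissae serve as collocation points.
xg = np.array([t[i + 1:i + k + 1].mean() for i in range(n)])

A = np.zeros((n, n))
rhs = np.zeros(n)
for j in range(n):                # column j: contribution of basis spline j
    c = np.zeros(n); c[j] = 1.0
    bj = BSpline(t, c, k)
    A[0, j] = bj(0.0)             # boundary condition u(0) = 0
    A[-1, j] = bj(1.0)            # boundary condition u(1) = 0
    A[1:-1, j] = bj.derivative(2)(xg[1:-1])   # collocate u'' = -1 inside

rhs[1:-1] = -1.0
coef = np.linalg.solve(A, rhs)    # the "matrix form" solved by linear algebra
u = BSpline(t, coef, k)

xs = np.linspace(0.0, 1.0, 101)
max_err = np.max(np.abs(u(xs) - xs * (1 - xs) / 2))
```

Because the exact solution is a quadratic, it lies in the cubic spline space and the collocation solution reproduces it to machine precision; for the fractional delay case, the same assembly would use Caputo derivatives of the basis functions and delayed evaluations instead.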
- Published
- 2024
- Full Text
- View/download PDF
24. Review on generative design methods and performance analysis of aircraft.
- Author
-
Mathew, Bilji C., Kumar, Anuj, Pal, Divyanshu, Verma, Deepak Kumar, and Saini, Rahul Kumar
- Subjects
- *
AEROSPACE industry research , *AIRFRAMES , *COMPUTER algorithms , *STRUCTURAL design , *CLOUD computing - Abstract
The article discusses how to leverage generative design to optimize the aircraft structures and variousaircraft performance estimation techniques available today. Both of these are inter connected, becauseoptimization of aircraft structures would automatically improve aircraft performance. The technique is demonstrated by taking into consideration several case studies directly from aerospace research studies. The paper offers a useful explanation of the workflow and capabilities of generative methodand algorithms to provide a number of potential solutions to a static structural design challenge. In the end we will discuss the possible improvement of GD with the integration of technologies like cloud, multidisciplinary optimization. Generative design is a combination of computer algorithms that tends to satisfy a given set of boundary condition, the converged solution is our optimized parts. Our design criteria will always be weight reduction and improvement of stress loads, this will automatically optimize the aircraft's performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Reinventing Backend Subsetting at Google: Designing an algorithm with reduced connection churn that could replace deterministic subsetting.
- Author
-
WARD, PETER, WANKADIA, PAUL, and GULIANI, KAVITA
- Subjects
- *
COMPUTER algorithms , *COMPUTER software , *SCALABILITY , *DISTRIBUTED computing - Abstract
The article presents information on Google’s development and implementation of a new algorithm called Rocksteadier Subsetting, created in order to reduce connection churn and improve scalability. Properties of the new algorithm include good connection balance, no frontend churn, and good subset spread. The new algorithm balances the number of connections per backend task, yet it does have trade-offs such as increased complexity and higher resource utilization.
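The abstract does not spell out Rocksteadier's internals, but the baseline it was designed to replace — deterministic subsetting, as described in Google's SRE literature — can be sketched roughly as follows. Function and variable names are illustrative, not Google's:

```python
import random

def deterministic_subset(backends, client_id, subset_size):
    """Assign a client a fixed subset of backends.

    Clients are grouped into 'rounds'; each round shuffles the backend
    list with a seed derived from the round number, then slices it into
    disjoint subsets, one per client in the round.
    """
    subset_count = len(backends) // subset_size
    round_id = client_id // subset_count
    rng = random.Random(round_id)         # same shuffle for the whole round
    shuffled = list(backends)
    rng.shuffle(shuffled)
    subset_id = client_id % subset_count
    start = subset_id * subset_size
    return shuffled[start:start + subset_size]

backends = [f"backend-{i}" for i in range(12)]
# Round 0: clients 0..3 share one shuffle and get disjoint slices.
subsets = [deterministic_subset(backends, cid, subset_size=3) for cid in range(4)]
```

Within one round the subsets are disjoint and jointly cover every backend, which gives good connection balance; the churn this scheme suffers when the backend list itself changes is exactly what the new algorithm targets.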
- Published
- 2023
- Full Text
- View/download PDF
26. Dual-path recommendation algorithm based on CNN and attention-enhanced LSTM.
- Author
-
Li, Huimin, Cheng, Yongyi, Ni, Hongjie, and Zhang, Dan
- Subjects
- *
CONVOLUTIONAL neural networks , *COMPUTER algorithms , *DEEP learning , *RECOMMENDER systems , *FEATURE extraction - Abstract
To recommend useful information to users more efficiently, this paper proposes a dual-path recommendation algorithm which combines a multilayer Convolutional Neural Network (CNN) and an attention-enhanced long short-term memory network (Attention-LSTM). Firstly, the matrix factorisation technique is used for learning the long-term preferences of users. Secondly, a dual-path network based on CNN and LSTM is constructed to perform feature extraction on the rating matrix. The dual-path network can learn users' long-term preferences while capturing their dynamically changing preferences. The algorithm is tested on the public MovieLens-1M dataset, and the mean absolute error (MAE) reflects the accuracy of the algorithm. [ABSTRACT FROM AUTHOR]
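The matrix-factorisation step for long-term preferences can be sketched independently of the CNN/LSTM paths. The following is a minimal illustration on synthetic ratings, not the paper's implementation; MAE is computed on the observed entries, matching the accuracy measure the abstract cites:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic ratings generated from a low-rank "true preference" model.
n_users, n_items, rank = 30, 40, 2
R = rng.normal(size=(n_users, rank)) @ rng.normal(size=(n_items, rank)).T
mask = rng.random((n_users, n_items)) < 0.5      # which entries are observed

# Factorise R ~ U @ V.T by gradient descent on the observed entries only.
U = 0.1 * rng.normal(size=(n_users, rank))
V = 0.1 * rng.normal(size=(n_items, rank))
lr, reg = 0.02, 1e-4
for _ in range(1000):
    err = mask * (R - U @ V.T)                   # residual on observed entries
    U += lr * (err @ V - reg * U)
    V += lr * (err.T @ U - reg * V)

mae = np.abs(mask * (R - U @ V.T)).sum() / mask.sum()
baseline = np.abs(mask * R).sum() / mask.sum()   # MAE of predicting all zeros
```

The learned rows of `U` play the role of the users' long-term preference vectors; in the full model, the dual-path network supplies the dynamic component on top of this.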
- Published
- 2024
- Full Text
- View/download PDF
27. Foodnet: multi-scale and label dependency learning-based multi-task network for food and ingredient recognition.
- Author
-
Shuang, Feng, Lu, Zhouxian, Li, Yong, Han, Chao, Gu, Xia, and Wei, Shidi
- Subjects
- *
CONVOLUTIONAL neural networks , *COMPUTER algorithms , *LEARNING modules , *SOURCE code - Abstract
Image-based food pattern classification poses the challenges of non-fixed spatial distribution and ingredient occlusion for mainstream computer vision algorithms. However, most current approaches classify food and ingredients by directly extracting abstract features of the entire image through a convolutional neural network (CNN), ignoring the relationship between food and ingredients and the ingredient occlusion problem. To address these issues, we propose FoodNet for both food and ingredient recognition, which uses a multi-task structure with a multi-scale relationship learning module (MSRL) and a label dependency learning module (LDL). As ingredients normally co-occur in an image, LDL uses ingredient dependencies to alleviate the ingredient occlusion problem. MSRL aggregates multi-scale information of food and ingredients, then uses two relational matrices to model the food-ingredient matching relationship and obtain a richer feature representation. The experimental results show that FoodNet achieves good performance on the Vireo Food-172 and UEC Food-100 datasets. Notably, it achieves state-of-the-art ingredient recognition on both. The source code will be made available at https://github.com/visipaper/FoodNet. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Automated diagnosis of plus disease in retinopathy of prematurity using quantification of vessels characteristics.
- Author
-
Sharafi, Sayed Mehran, Ebrahimiadib, Nazanin, Roohipourmoallai, Ramak, Farahani, Afsar Dastjani, Fooladi, Marjan Imani, and Khalili Pour, Elias
- Subjects
- *
RETROLENTAL fibroplasia , *DIAGNOSIS , *RETINAL blood vessels , *OPTIC disc , *COMPUTER algorithms , *DELPHI method - Abstract
The condition known as Plus disease is distinguished by atypical alterations in the retinal vasculature of neonates born prematurely. It has been demonstrated that the diagnosis of Plus disease is subjective and qualitative in nature. The utilization of quantitative methods and computer-based image analysis to enhance the objectivity of Plus disease diagnosis has been extensively established in the literature. This study presents the development of a computer-based image analysis method aimed at automatically distinguishing Plus images from non-Plus images. The proposed methodology conducts a quantitative analysis of the vascular characteristics linked to Plus disease, thereby aiding physicians in making informed judgments. A collection of 76 posterior retinal images from a diverse group of infants who underwent screening for Retinopathy of Prematurity (ROP) was obtained. A reference standard diagnosis was established as the majority of the labeling performed by three experts in ROP during two separate sessions. The process of segmenting retinal vessels was carried out using a semi-automatic methodology. Computer algorithms were developed to compute the tortuosity, dilation, and density of vessels in various retinal regions as potential discriminative characteristics. A classifier was provided with a set of selected features in order to distinguish between Plus images and non-Plus images. This study included 76 infants (49 [64.5%] boys) with mean birth weight of 1305 ± 427 g and mean gestational age of 29.3 ± 3 weeks. The average level of agreement among experts for the diagnosis of plus disease was found to be 79% with a standard deviation of 5.3%. In terms of intra-expert agreement, the average was 85% with a standard deviation of 3%. Furthermore, the average tortuosity of the five most tortuous vessels was significantly higher in Plus images compared to non-Plus images (p ≤ 0.0001). 
The curvature values based on points were found to be significantly higher in Plus images compared to non-Plus images (p ≤ 0.0001). The maximum diameter of vessels within a region extending 5-disc diameters away from the border of the optic disc (referred to as 5DD) exhibited a statistically significant increase in Plus images compared to non-Plus images (p ≤ 0.0001). The density of vessels in Plus images was found to be significantly higher compared to non-Plus images (p ≤ 0.0001). The classifier's accuracy in distinguishing between Plus and non-Plus images, as determined through tenfold cross-validation, was found to be 0.86 ± 0.01. This accuracy was observed to be higher than the diagnostic accuracy of one out of three experts when compared to the reference standard. The implemented algorithm in the current study demonstrated a commendable level of accuracy in detecting Plus disease in cases of retinopathy of prematurity, exhibiting comparable performance to that of expert diagnoses. By engaging in an objective analysis of the characteristics of vessels, there exists the possibility of conducting a quantitative assessment of the disease progression's features. The utilization of this automated system has the potential to enhance physicians' ability to diagnose Plus disease, thereby offering valuable contributions to the management of ROP through the integration of traditional ophthalmoscopy and image-based telemedicine methodologies. [ABSTRACT FROM AUTHOR]
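The vessel tortuosity ranking described above is typically based on an arc-to-chord ratio; the exact definition is not given in the abstract, so the following is only a minimal sketch of that common measure on a 2-D centreline:

```python
import numpy as np

def tortuosity(points):
    """Arc-to-chord ratio of a vessel centreline given as an (n, 2) array.

    A perfectly straight vessel scores 1.0; the more the path winds
    between its endpoints, the larger the ratio.
    """
    points = np.asarray(points, dtype=float)
    arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord

x = np.linspace(0.0, 1.0, 200)
straight = np.column_stack([x, np.zeros_like(x)])
wavy = np.column_stack([x, 0.1 * np.sin(6 * np.pi * x)])
```

The straight segment scores exactly 1, while the wavy curve scores well above it, mirroring the higher tortuosity reported for Plus images.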
- Published
- 2024
- Full Text
- View/download PDF
29. Mapping of Prion Structures in the Yeast Rnq1.
- Author
-
Galliamov, Arthur A., Malukhina, Alena D., and Kushnirov, Vitaly V.
- Subjects
- *
PRIONS , *C-terminal residues , *COMPUTER algorithms , *YEAST , *PROTEINASES - Abstract
The Rnq1 protein is one of the best-studied yeast prions. It has a large potentially prionogenic C-terminal region of about 250 residues. However, a previous study indicated that only 40 C-terminal residues form a prion structure. Here, we mapped the actual and potential prion structures formed by Rnq1 and its variants truncated from the C-terminus in two [RNQ+] strains using partial proteinase K digestion. The location of these structures differed in most cases from previous predictions by several computer algorithms. Some aggregation patterns observed microscopically for the Rnq1 hybrid proteins differed significantly from those previously observed for Sup35 prion aggregates. The transfer of a prion from the full-sized Rnq1 to its truncated versions caused substantial alteration of prion structures. In contrast to the Sup35 and Swi1, the terminal prionogenic region of 72 residues was not able to efficiently co-aggregate with the full-sized Rnq1 prion. GFP fusion to the Rnq1 C-terminus blocked formation of the prion structure at the Rnq1 C-terminus. Thus, the Rnq1-GFP fusion mostly used in previous studies cannot be considered a faithful tool for studying Rnq1 prion properties. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. COVER TIMES OF MANY DIFFUSIVE OR SUBDIFFUSIVE SEARCHERS.
- Author
-
HYUNJOONG KIM and LAWLEY, SEAN D.
- Subjects
- *
SEARCH algorithms , *GEODESIC distance , *COMPUTER algorithms , *FOOD of animal origin , *ELECTRONIC information resource searching - Abstract
Cover times measure the speed of exhaustive searches which require the exploration of an entire spatial region(s). Applications include the immune system hunting pathogens, animals collecting food, robotic demining or cleaning, and computer search algorithms. Mathematically, a cover time is the first time a random searcher(s) comes within a specified "detection radius" of every point in the target region (often the entire spatial domain). Due to their many applications and their fundamental probabilistic importance, cover times have been extensively studied in the physics and probability literatures. This prior work has generally studied cover times of a single searcher with a vanishing detection radius or a large target region. This prior work has further claimed that cover times for multiple searchers can be estimated by a simple rescaling of the cover time of a single searcher. In this paper, we study cover times of many diffusive or subdiffusive searchers and show that prior estimates break down as the number of searchers grows. We prove a rather universal formula for all the moments of such cover times in the many searcher limit that depends only on (i) the searcher's characteristic (sub)diffusivity and (ii) a certain geodesic distance between the searcher starting location(s) and the farthest point in the target. This formula is otherwise independent of the detection radius, space dimension, target size, and domain size. We illustrate our results in several examples and compare them to detailed stochastic simulations. [ABSTRACT FROM AUTHOR]
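The qualitative effect at the heart of the paper — many searchers cover a domain far faster than a single searcher — can be seen in a toy Monte Carlo experiment with discrete random walkers on a cycle. This is an illustration only, not the paper's continuum (sub)diffusive model:

```python
import random

def cover_time(n_searchers, n_sites=20, rng=None):
    """Steps until n_searchers independent random walkers on a cycle of
    n_sites have, between them, visited every site (all start at site 0)."""
    rng = rng or random.Random()
    pos = [0] * n_searchers
    visited = {0}
    t = 0
    while len(visited) < n_sites:
        t += 1
        for i in range(n_searchers):
            pos[i] = (pos[i] + rng.choice((-1, 1))) % n_sites
            visited.add(pos[i])
    return t

rng = random.Random(1)
mean_1 = sum(cover_time(1, rng=rng) for _ in range(200)) / 200
mean_8 = sum(cover_time(8, rng=rng) for _ in range(200)) / 200
```

Eight walkers cover the cycle in far fewer steps than one; the paper's contribution is a precise asymptotic formula for this many-searcher limit, where simple rescalings of the single-searcher cover time break down.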
- Published
- 2024
- Full Text
- View/download PDF
31. STUDY OF THE PROCESS OF IDENTIFYING THE AUTHORSHIP OF TEXTS WRITTEN IN NATURAL LANGUAGE.
- Author
-
Ulyanovska, Yuliia, Firsov, Oleksandr, Kostenko, Victoria, and Pryadka, Oleksiy
- Subjects
- *
MACHINE learning , *ELECTRONIC publications , *NATURAL languages , *COMPUTER engineering , *AUTHORSHIP , *COMPUTER algorithms - Abstract
The object of the research is the process of identifying the authorship of a text using computer technologies with the application of machine learning. The full process of solving the problem, from text preparation to evaluation of the results, is considered. Identifying the authorship of a text is a complex and time-consuming task that requires maximum attention, because the identification process must take into account a very large number of factors and items of information related to each specific author. As a result, various problems and errors related to the human factor may arise during identification, which can ultimately degrade the results obtained. The subject of the work is the methods and means of analyzing the process of identifying the authorship of a text using existing computer technologies. As part of the work, the authors have developed a web application for identifying the authorship of a text. The software application was written using machine learning technologies, has a user-friendly interface and an advanced error tracking system, and can recognize both text written by one author and text written in collaboration. The effectiveness of different types of machine learning models and data fitting tools is analyzed, and computer technologies for identifying the authorship of a text are defined. The main advantages of using computer technology to identify text authorship are:
– Speed: computer algorithms can analyze large amounts of text in an extremely short period of time.
– Objectivity: computer algorithms rely only on proven methods of analyzing text features and are not subject to emotional influence or preconceived opinions during the analysis.
The result of the work is a web application for identifying the authorship of a text, developed on the basis of research on the process of identifying authorship using computer technology. [ABSTRACT FROM AUTHOR]
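The abstract does not name the specific models used, so the following is only a generic sketch of such an authorship pipeline with an assumed toolchain (scikit-learn) and an invented toy corpus: character n-gram TF-IDF features, which capture stylistic habits better than raw word counts, feeding a linear classifier.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: two invented "authors" with different stylistic habits.
texts = [
    "the ship sailed slowly towards the grey harbour at dawn",
    "slowly the grey ship turned, sailing into the quiet harbour",
    "a grey dawn, and the harbour waited for the slow ship",
    "gradients were computed and the loss decreased every epoch",
    "the model overfits unless the loss is regularised each epoch",
    "each epoch the computed gradients pushed the loss lower",
]
authors = ["A", "A", "A", "B", "B", "B"]

# Character n-grams (2-4 chars, word-boundary aware) as style features.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, authors)

pred = model.predict(["the slow grey ship reached the harbour"])[0]
```

A production system would add cross-validation, larger training corpora per author, and calibration of the classifier's confidence, but the feature-extraction/fit/predict shape stays the same.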
- Published
- 2024
- Full Text
- View/download PDF
32. Interface Theatre: Watching Ourselves Disappear.
- Author
-
Felton-Dansky, Miriam and Gallagher-Ross, Jacob
- Subjects
- *
EVERYDAY life , *ONLINE social networks , *COMPUTER algorithms , *SOCIAL change - Abstract
We propose, in this essay, a new theatrical genre we term interface theatre and examine three theatrical works from the past ten years that exemplify the form: Big Art Group's Opacity (2017), 31Down's DataPurge (2015/2016), and Marike Splint's You Are Here (2020). These pieces, and others like them, depict, and critically reflect upon, a world pervasively mediated by interfaces: the apps, social media platforms, and device operating systems that screen so much of our everyday experience and personal communication and distil them to data, both for our own self-tracking and for the scrutiny of corporations and state surveillance. All three works act as transducers: they create theatrical microcosms of the processes by which human life gets transformed into data, allowing spectators to momentarily assume the perspectives of a surveilling algorithm. We offer the term interface theatre as a means of adding nuance and texture to current discourses about theatre in the digital world, which can encompass so many different forms, practices, and politics that greater specificity is required. Equally importantly, we argue, works of interface theatre make a strong case for the necessity for the theatrical form itself as a medium through which to grapple with the role of interfaces in social and political life writ large. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. COOL IT! The objective racism of carceral technofixes.
- Author
-
Salman, Cengiz
- Subjects
- *
RACISM , *HISTORY of technology , *STATISTICAL bias , *COMPUTER algorithms - Abstract
Technofixes that promise to improve racism in the prison industrial complex (PIC) frequently perpetuate it instead. This article argues that carceral technofixes undermine their promises when they attempt to solve social problems by pacifying the people that these problems affect. It offers a reading of physicist Alvin Weinberg's writings on technofixes, which, he believed, could improve social problems better than people because they appeared objective. This appearance emerged through the process of scientific reduction, which simplifies social complexity to help designers identify symptoms of problems amenable to technological intervention. This article claims that scientific reduction orients technofixes to make systemic power in the PIC invisible and to silence radical calls for system reform. I explore these tendencies through two examples: (1) Weinberg's hypothesis that air conditioning could prevent Black uprisings in the 1960s and (2) the COMPAS recidivism algorithm at use in Florida today, which attempts to pacify critics of the PIC by framing the subjective assessments of judges as the source of racist sentencing disparities. The article also shows how COMPAS limited radical critiques of the PIC even after it reproduced racist sentencing disparities as it directed the conversation of algorithmic racism around questions of algorithmic fairness, transparency, and accountability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Generative Models in the Problem of Evaluating the Efficiency of Computer Algorithms.
- Author
-
Fainzilberg, L. S.
- Subjects
- *
COMPUTER algorithms , *SEQUENTIAL analysis , *STATISTICS , *STOCHASTIC models , *DECISION making - Abstract
The author formulates definitions of computer algorithm efficiency according to a criterion that characterizes accuracy, reliability, performance speed, and other consumer properties. Schemes of proof experiments based on stochastic models of generating artificial data with statistical characteristics adequate to real observations are suggested. The experiments are aimed at determining the efficiency of computer algorithms that provide solutions to three different problems, namely, the optimal stopping for making a final decision during a sequential analysis of alternatives, training a linear classifier based on a finite sample of observations, and determining diagnostic signs of an ECG using the fasegraphy method. The results obtained based on statistical experiments are given. [ABSTRACT FROM AUTHOR]
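As a concrete example of the kind of proof experiment described — scoring an algorithm against a stochastic generator whose statistics are known — here is a Monte Carlo evaluation of the classic 1/e rule for the optimal-stopping (secretary) problem. This is a generic illustration in the same spirit, not the author's experiment:

```python
import math
import random

def secretary_success_rate(n=50, trials=20000, seed=7):
    """Monte Carlo success probability of the 1/e stopping rule:
    observe the first n/e candidates, then accept the first one better
    than all of those. 'Success' means picking the overall best."""
    rng = random.Random(seed)
    threshold = round(n / math.e)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))        # rank 0 is the best candidate
        rng.shuffle(ranks)
        best_seen = min(ranks[:threshold])
        chosen = None
        for r in ranks[threshold:]:
            if r < best_seen:         # first candidate beating the sample
                chosen = r
                break
        if chosen == 0:
            wins += 1
    return wins / trials

rate = secretary_success_rate()
```

For large n the success probability of this rule tends to 1/e ≈ 0.368, and the simulated rate reproduces that value to within Monte Carlo error, which is exactly the kind of check such proof experiments provide.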
- Published
- 2024
- Full Text
- View/download PDF
35. "Contrary to the Order of Nature": Complicities in Algorithmic Surveillance between SWANA and the Global North.
- Author
-
Semaan, David
- Subjects
- *
LGBTQ+ people , *COMPUTER algorithms , *DATING (Social customs) - Published
- 2024
- Full Text
- View/download PDF
36. Oblique Aerial Images: Geometric Principles, Relationships and Definitions.
- Author
-
Verykokou, Styliani and Ioannidis, Charalabos
- Subjects
- *
AERIAL photography , *COMPUTER vision , *COMPUTER algorithms , *DEFINITIONS - Abstract
Definition: Aerial images captured with the camera optical axis deliberately inclined with respect to the vertical are defined as oblique aerial images. Throughout the evolution of aerial photography, oblique aerial images have held a prominent place since its inception. While vertical airborne images dominated in photogrammetric applications for over a century, the advancements in photogrammetry and computer vision algorithms, coupled with the growing accessibility of oblique images in the market, have propelled the rise of oblique images in recent times. Their emergence is attributed to inherent advantages they offer over vertical images. In this entry, basic definitions, geometric principles and relationships for oblique aerial images, necessary for understanding their underlying geometry, are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Outcomes and Safety with Utilization of Metallic Midfoot Wedges in Foot and Ankle Orthopedic Surgery: A Systematic Review of the Literature.
- Author
-
Talaski, Grayson M., Baumann, Anthony, Sleem, Bshara, Walley, Kempland C., Anastasio, Albert T., Gall, Ken, and Adams, Samuel B.
- Subjects
- *
ANKLE surgery , *OSTEOTOMY , *MEDLINE , *COMPUTER algorithms , *RADIOGRAPHY - Abstract
The use of midfoot wedges for the correction of flatfoot disorders, such as progressive collapsing foot disorder, has increased greatly in recent years. However, the wedge material/composition has yet to be standardized. Metallic wedges offer advantages such as comparable elasticity to bone, reduced infection risk, and minimized osseous resorption, but a comprehensive review is lacking in the literature. Therefore, the objective of this systematic review was to organize all studies pertaining to the use of metallic wedges for flatfoot correction to better understand their efficacy and safety. This systematic review adhered to PRISMA guidelines, and articles were searched in multiple databases (PubMed, SPORTDiscus, CINAHL, MEDLINE, and Web of Science) until August 2023 using a defined algorithm. Inclusion criteria encompassed midfoot surgeries using metallic wedges, observational studies, and English-language full-text articles. Data extraction, article quality assessment, and statistical analyses were performed. Among 11 included articles, a total of 444 patients were assessed. The average follow-up duration was 18 months. Radiographic outcomes demonstrated that patients who received metallic wedges experienced improvements in lateral calcaneal pitch angle and Meary's angle, with an enhancement of up to 15.9 degrees reported in the latter. Success rates indicated superior outcomes for metallic wedges (99.3%) compared to bone allograft wedges (89.9%), while complications were generally minor, including hardware pain and misplacement. Notably, there were no infection complications due to the inert nature of the metallic elements. This review summarizes the effectiveness, success rates, and safety of metallic wedges for flatfoot correction. Radiographic improvements and high success rates highlight their efficacy. Minor complications, including pain and mispositioning, were reported, but the infection risk remained low. 
Our results demonstrate that metallic midfoot wedges may be a viable option over allograft wedges with proper planning. Future research should prioritize long-term studies and standardized measures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Limitations of Protein Structure Prediction Algorithms in Therapeutic Protein Development.
- Author
-
Niazi, Sarfaraz K., Mariam, Zamara, and Paracha, Rehan Z.
- Subjects
- *
PROTEIN structure , *ALGORITHMS , *COMPUTER algorithms , *DRUG discovery , *BIOSIMILARS - Abstract
Simple Summary: Protein structure prediction using computer algorithms has long been a challenge; however, the recent introduction of algorithms like AlphaFold2 and ESMFold to predict protein structure has raised the hope for in silico drug discovery, a long sought-after breakthrough. Since the release of these algorithms, it has remained unclear whether they add value when the structure is already available to them. Moreover, the confidence in the predicted structure varies from very low to very high, an observation unrelated to any physicochemical or biological property of the protein. For any change in the amino acid sequence, the algorithms fail to predict the structure, limiting their utility to an academic exercise. Still, researchers continue to search for uses of the confidence scores and, despite failing, continue to suggest possible applications, following the logical belief that if the confidence scores are distinct and reproducible, they must relate to the protein structure. To test this assumption, we predicted the structures of 204 FDA-approved therapeutic proteins, in the hope that the confidence scores, if correlated across this large database, could assist in rank-ordering these proteins by their possible batch-to-batch variability, which could help to reduce testing when these molecules are developed as biosimilars. We also studied modified structures that were not predicted, since no reference structure was available for the algorithms to function. This conclusion applies to the two tested algorithms, which showed comparable and proportional confidence intervals. This conclusion is controversial but deserves the attention of researchers who continue to hope to find any drug discovery utility for these algorithms. The three-dimensional protein structure is pivotal in comprehending biological phenomena. It directly governs protein function and hence aids in drug discovery. 
The development of protein prediction algorithms, such as AlphaFold2, ESMFold, and trRosetta, has given much hope in expediting protein-based therapeutic discovery. Though no study has reported a conclusive application of these algorithms, the efforts continue with much optimism. We intended to test the application of these algorithms in rank-ordering therapeutic proteins for their instability during the pre-translational modification stages, as may be predicted according to the confidence of the structure predicted by these algorithms. The selected molecules were based on a harmonized category of licensed therapeutic proteins; out of the 204 licensed products, 188 that were not conjugated were chosen for analysis, and the analysis revealed no correlation between the confidence scores and structural or protein properties. It is crucial to note here that the predictive accuracy of these algorithms is contingent upon the presence of the known structure of the protein in the accessible database. Consequently, our conclusion emphasizes that these algorithms primarily replicate information derived from existing structures. While our findings caution against relying on these algorithms for drug discovery purposes, we acknowledge the need for a nuanced interpretation. Considering their limitations and recognizing that their utility may be constrained to scenarios where known structures are available is important. Hence, caution is advised when applying these algorithms to characterize various attributes of therapeutic proteins without the support of adequate structural information. It is worth noting that the two main algorithms, AlphaFold2 and ESMFold, also showed a 72% correlation in their scores, pointing to similar limitations. While much progress has been made in computational sciences, the Levinthal paradox remains unsolved. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Unmanned Aerial Vehicle-Based Structural Health Monitoring and Computer Vision-Aided Procedure for Seismic Safety Measures of Linear Infrastructures.
- Author
-
Ngeljaratan, Luna, Bas, Elif Ecem, and Moustafa, Mohamed A.
- Subjects
- *
STRUCTURAL health monitoring , *SCREEN time , *NATURAL gas pipelines , *VIBRATION (Mechanics) , *COMPUTER algorithms - Abstract
Computer vision in the structural health monitoring (SHM) field has become popular, especially for processing unmanned aerial vehicle (UAV) data, but still has limitations both in experimental testing and in practical applications. Prior works have focused on UAV challenges and opportunities for the vibration-based SHM of buildings or bridges, but practical and methodological gaps exist specifically for linear infrastructure systems such as pipelines. Since they are critical for the transportation of products and the transmission of energy, a feasibility study of UAV-based SHM for linear infrastructures is essential to ensuring their service continuity through an advanced SHM system. Thus, this study proposes a single UAV for the seismic monitoring and safety assessment of linear infrastructures along with their computer vision-aided procedures. The proposed procedures were implemented in a full-scale shake-table test of a natural gas pipeline assembly. The objectives were to explore the UAV potential for the seismic vibration monitoring of linear infrastructures with the aid of several computer vision algorithms and to investigate the impact of parameter selection for each algorithm on the matching accuracy. The procedure starts by adopting the Maximally Stable Extremal Region (MSER) method to extract covariant regions that remain similar through a certain threshold of image series. The feature of interest is then detected, extracted, and matched using the Speeded-Up Robust Features (SURF) and K-nearest Neighbor (KNN) algorithms. The M-estimator Sample Consensus (MSAC) algorithm is applied for model fitting by maximizing the likelihood of the solution. The output of each algorithm is examined for correctness in matching pairs and accuracy, which is a highlight of this procedure, as no prior studies have investigated these properties. The raw data are corrected and scaled to generate displacement data. 
Finally, a structural safety assessment was performed using several system identification models. These procedures were first validated using an aluminum bar placed on an actuator and tested in three harmonic tests, and then an implementation case study on the pipeline shake-table tests was analyzed. The validation tests show good agreement between the UAV data and reference data. The shake-table test results also generate reasonable seismic performance and assess the pipeline seismic safety, demonstrating the feasibility of the proposed procedure and the prospect of UAV-based SHM for linear infrastructure monitoring. [ABSTRACT FROM AUTHOR]
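The MSAC step named in this abstract (model fitting that scores each hypothesis by a truncated quadratic loss instead of a plain inlier count) can be sketched in a few lines. The pure-Python line-fitting example below is an illustrative stand-in, not the paper's implementation; the data, threshold, and iteration count are assumptions.

```python
import random

def fit_line(p, q):
    """Line y = a*x + b through two points (assumes distinct x)."""
    (x1, y1), (x2, y2) = p, q
    a = (y2 - y1) / (x2 - x1)
    return a, y1 - a * x1

def msac_line(points, n_iters=200, thresh=1.0, seed=0):
    """MSAC: like RANSAC, but every point contributes min(r^2, thresh^2)
    to the cost, so the model minimizing the truncated loss wins."""
    rng = random.Random(seed)
    best_cost, best_model = float("inf"), None
    for _ in range(n_iters):
        p, q = rng.sample(points, 2)
        if p[0] == q[0]:
            continue  # degenerate sample, cannot fit y = a*x + b
        a, b = fit_line(p, q)
        cost = sum(min((y - (a * x + b)) ** 2, thresh ** 2) for x, y in points)
        if cost < best_cost:
            best_cost, best_model = cost, (a, b)
    return best_model

# Inliers on y = 2x + 1 plus two gross outliers (e.g. mismatched features).
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -25)]
a, b = msac_line(pts)
```

The truncated loss is what distinguishes MSAC from plain RANSAC: outliers saturate at `thresh**2` rather than being ignored entirely, so among models with the same inlier set the one with smaller inlier residuals is preferred.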
- Published
- 2024
- Full Text
- View/download PDF
40. A New Method of Reducing the Inrush Current and Improving the Starting Performance of a Line-Start Permanent-Magnet Synchronous Motor.
- Author
-
Szelag, Wojciech, Jedryczka, Cezary, and Baranski, Mariusz
- Subjects
- *
SYNCHRONOUS electric motors , *TRANSIENTS (Dynamics) , *COMPUTER algorithms , *COMPUTER programming , *TRANSIENT analysis , *NEW business enterprises - Abstract
This paper presents a new method of reducing the inrush current and improving the starting performance of a line-start permanent-magnet synchronous motor (LSPMSM). The novelty of the proposed method relies on the selection of the time instant of the connection of the stator winding to the grid, for which the smallest values of the amplitudes of inrush currents are obtained. To confirm the effectiveness of the developed method of limiting the inrush current, simulations and experimental studies were carried out. The algorithm and dedicated computer code developed by the authors for the analysis of transient coupled phenomena in the LSPMSM were used to study the impact of the time instant of connection of the winding to the grid on the motor start-up process. The algorithm was based on a field model of coupled electromagnetic and thermal phenomena in the studied motor. To verify the developed model of the phenomena and the proposed method, experimental research was carried out on a purpose-built computerised test stand. Good concordance between the results of the experiments and simulations confirmed the high reliability of the proposed model, as well as the effectiveness of the developed approach in limiting the inrush current and improving the starting performance of LSPMSMs. [ABSTRACT FROM AUTHOR]
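The core idea, choosing the point-on-wave switching instant so that the transient DC offset in the current vanishes, can be illustrated with the closed-form energization of a single R-L branch. The circuit values below are assumptions for illustration; the paper's coupled electromagnetic-thermal field model of the LSPMSM is far more detailed.

```python
import math

def peak_current(alpha, Vm=325.0, R=0.5, L=0.02, f=50.0, t_end=0.1, dt=1e-5):
    """Peak |i(t)| of an R-L branch energized at source phase angle alpha.
    In the closed-form solution the DC offset term sin(alpha - phi)
    vanishes when alpha equals the impedance angle phi."""
    w = 2 * math.pi * f
    Z = math.hypot(R, w * L)
    phi = math.atan2(w * L, R)
    peak, t = 0.0, 0.0
    while t <= t_end:
        i = (Vm / Z) * (math.sin(w * t + alpha - phi)
                        - math.sin(alpha - phi) * math.exp(-R * t / L))
        peak = max(peak, abs(i))
        t += dt
    return peak

# Scan candidate switching angles and pick the smallest inrush peak.
angles = [k * math.pi / 36 for k in range(36)]
best = min(angles, key=peak_current)
```

The scan recovers the expected result: the best switching instant sits at the impedance angle, where the current starts directly on its steady-state sinusoid with no decaying offset.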
- Published
- 2024
- Full Text
- View/download PDF
41. Algorithms patrolling content: where's the harm?
- Author
-
Horten, Monica
- Subjects
- *
ONLINE social networks , *COMPUTER algorithms , *DECISION making , *COMPUTER security , *HUMAN rights - Abstract
At the heart of this paper is an examination of the colloquial concept of a 'shadow ban'. It reveals ways in which algorithms on the Facebook platform have the effect of suppressing content distribution without specifically targeting it for removal, and examines the consequential stifling of users' speech. It reveals how the Facebook shadow ban is implemented by blocking dissemination of content in News Feed. The decision-making criteria are based on 'behaviour', a term that relates to activity of the page that is identifiable through patterns in the data. It's a technique that is rooted in computer security, and raises questions about the balance between security and freedom of expression. The paper is situated in the field of responsibility of online platforms for content moderation. It studies the experience of the shadow ban on 20 UK-based Facebook Pages over the period from November 2019 to January 2021. The potential harm was evaluated using human rights standards and a comparative metric produced from Facebook Insights data. The empirical research is connected to recent legislative developments: the EU's Digital Services Act and the UK's Online Safety Bill. Its most salient contribution may be around 'behaviour' monitoring and its interpretation by legislators. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Regulating algorithms in the digital market: a revisit of Indonesian competition law and policy.
- Author
-
Wahyuningtyas, Sih Yuliana
- Subjects
- *
COMPUTER algorithms , *UNFAIR competition , *ANTITRUST law , *ELECTRONIC commerce , *CARTELS , *PRICE discrimination - Abstract
Although the use of algorithms has become increasingly prominent in the digital market, such algorithms are often opaque and prone to making biased decisions. Algorithms can also be used to harm competition, for example by facilitating cartels. These developments make it necessary to examine the readiness of existing competition law to tackle cases involving algorithms. This paper focuses on analysing Indonesian competition law to address the following issues: (1) how current Indonesian competition law deals with algorithm-related cases; (2) which indicators could detect anti-competitive algorithms; and (3) which competition policy approaches could be considered in Indonesia to tackle the problems resulting from the use of algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Design of optical sensors based on computer vision in basketball visual simulation system.
- Author
-
Shi, Rong and Wu, Zhaozhao
- Subjects
- *
COMPUTER vision , *OPTICAL sensors , *BASKETBALL techniques , *BASKETBALL , *SIMULATION methods & models , *COMPUTER algorithms - Abstract
With the development of computer vision technology, optical sensors are increasingly used in the field of sports, but their application in basketball remains relatively limited. The aim of this research is to design an optical sensor system based on computer vision to realize the visual simulation of basketball. The study collected image and video data related to basketball and processed and analyzed them. Computer vision algorithms are then used to identify and track key information in basketball motion, such as player positions and the trajectory of the ball. An optical sensor system is then designed and implemented to acquire real-time data on basketball movement. Finally, a visualization simulation system is developed that integrates the data acquired by the optical sensors with the basketball scene to realize a realistic simulation of the sport. Experiments and tests demonstrate the effectiveness and accuracy of the computer vision-based optical sensor system in the visual simulation of basketball. The system can capture and present important information in basketball games in real time, helping trainers and players better understand basketball tactics and techniques and providing real-time feedback and analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. Application of visual tracking based on infrared target imaging in aerobics action evaluation system.
- Author
-
Lv, Dan and Yan, Shizhan
- Subjects
- *
INFRARED imaging , *AEROBIC exercises , *COMPUTER vision , *INFRARED cameras , *POSTURE , *COMPUTER algorithms - Abstract
Because of the complexity and difficulty of aerobics, high demands are placed on athletes' physical conditioning and technical level. Accurately evaluating and analyzing the performance of aerobics athletes has therefore become an important task in training and competition. The aim of this study is to develop a visual tracking system based on infrared target imaging for the evaluation of aerobics movements. In the system, an infrared camera captures the movements of aerobics athletes, and the infrared images are processed and analyzed by computer vision algorithms to extract the key body nodes used to track and measure the athlete's posture. Experimental verification on a set of aerobics movements shows that the system can effectively track athletes' body posture and accurately evaluate the quality of their movements. Quantitative indicators and visual results indicate whether the athlete's posture and movements are accurate and conform to the norms. Compared with traditional motion evaluation methods, the proposed method achieves higher accuracy and better real-time performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Computing error bounds for asymptotic expansions of regular P-recursive sequences.
- Author
-
Dong, Ruiwen, Melczer, Stephen, and Mezzarobba, Marc
- Subjects
- *
ASYMPTOTIC expansions , *COMBINATORICS , *RECURSIVE sequences (Mathematics) , *COMPUTER algorithms , *ALGEBRA , *APPROXIMATION error - Abstract
Over the last several decades, improvements in the fields of analytic combinatorics and computer algebra have made determining the asymptotic behaviour of sequences satisfying linear recurrence relations with polynomial coefficients largely a matter of routine, under assumptions that hold often in practice. The algorithms involved typically take a sequence, encoded by a recurrence relation and initial terms, and return the leading terms in an asymptotic expansion up to a big-O error term. Less studied, however, are effective techniques giving an explicit bound on asymptotic error terms. Among other things, such explicit bounds typically allow the user to automatically prove sequence positivity (an active area of enumerative and algebraic combinatorics) by exhibiting an index when positive leading asymptotic behaviour dominates any error terms. In this article, we present a practical algorithm for computing such asymptotic approximations with rigorous error bounds, under the assumption that the generating series of the sequence is a solution of a differential equation with regular (Fuchsian) dominant singularities. Our algorithm approximately follows the singularity analysis method of Flajolet and Odlyzko [SIAM J. Discrete Math. 3 (1990), pp. 216–240], except that all big-O terms involved in the derivation of the asymptotic expansion are replaced by explicit error terms. The computation of the error terms combines analytic bounds from the literature with effective techniques from rigorous numerics and computer algebra. We implement our algorithm in the SageMath computer algebra system and exhibit its use on a variety of applications (including our original motivating example, solution uniqueness in the Canham model for the shape of genus one biomembranes). [ABSTRACT FROM AUTHOR]
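As a toy illustration of the setting (not the article's algorithm), the central binomial coefficients C(2n, n) satisfy a first-order P-recursive recurrence with polynomial coefficients, and their exact terms can be compared against the leading asymptotic term 4^n / sqrt(pi*n):

```python
import math

def p_recursive_terms(n_terms):
    """Unroll the P-recursive recurrence (n+1)*c[n+1] = (4n+2)*c[n],
    c[0] = 1, whose solution is the central binomial coefficient C(2n, n)."""
    c = [1]
    for n in range(n_terms - 1):
        c.append((4 * n + 2) * c[n] // (n + 1))  # exact integer division
    return c

terms = p_recursive_terms(60)
n = 50
leading = 4 ** n / math.sqrt(math.pi * n)  # leading asymptotic term
rel_err = abs(terms[n] - leading) / terms[n]
```

Here the relative error at n = 50 is already a fraction of a percent (the next-order correction is -1/(8n)). The article's contribution is precisely to replace such empirical comparisons with a rigorous, explicit bound on the error term.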
- Published
- 2024
- Full Text
- View/download PDF
46. Advanced Optical Imaging Based on Metasurfaces.
- Author
-
Zou, Xiujuan, Lin, Ruoyu, Fu, Yunlai, Gong, Guangxing, Zhou, Xuxi, Wang, Shuming, Zhu, Shining, and Wang, Zhenlin
- Subjects
- *
OPTICAL images , *COMPUTER algorithms , *ELECTROMAGNETIC waves , *OPTIMIZATION algorithms , *HOLOGRAPHY , *WAVEFRONTS (Optics) , *NANOFABRICATION , *SPECTRAL imaging - Abstract
Recently, metasurfaces have arisen as next‐generation optics with various merits, such as being ultrathin and lightweight with multi‐function integration. Numerous applications of metasurfaces have been demonstrated up to now, including wavefront shaping, optical imaging, hologram display, structured light generation, polarization detection, and so on. Among them, optical imaging components and techniques have been substantially promoted with the advent of metasurfaces due to their superior spatial modulation of electromagnetic waves. Here, the forward and reverse design approaches of metasurfaces and recent advances in various imaging applications based on them are reviewed. Due to the ability of arbitrary wavefront encoding, metasurfaces have been widely applied to numerous imaging applications, including diffraction‐limited imaging with aberration correction, polarization‐related imaging, high‐resolution spectral imaging, three‐dimensional (3D) imaging, and even algorithm‐assisted imaging. Optical imaging applications combined with novel metasurfaces can make breakthroughs with the development of nanofabrication technologies and the improvement of computer algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. A Fully Automated Mini-Mental State Examination Assessment Model Using Computer Algorithms for Cognitive Screening.
- Author
-
Chen, Lihua, Zhang, Meiwei, Yu, Weihua, Yu, Juan, Cui, Qiushi, Chen, Chenxi, Liu, Junjin, Huang, Lihong, Liu, Jiarui, Yu, Wuhan, Li, Wenjie, Zhang, Wenbo, Yan, Mengyu, Wu, Jiani, Wang, Xiaoqin, Song, Jiaqi, Zhong, Fuxing, Liu, Xintong, Wang, Xianglin, and Li, Chengxing
- Subjects
- *
MINI-Mental State Examination , *MEDICAL screening , *COGNITIVE computing , *COMPUTER algorithms , *DAS-Naglieri Cognitive Assessment System , *COMPUTER simulation - Abstract
Background: Rapidly growing healthcare demand associated with global population aging has spurred the development of new digital tools for the assessment of cognitive performance in older adults. Objective: To develop a fully automated Mini-Mental State Examination (MMSE) assessment model and validate the model's rating consistency. Methods: The Automated Assessment Model for MMSE (AAM-MMSE) was an approximately 10-minute computerized cognitive screening tool containing the same questions as the traditional paper-based Chinese MMSE. The validity of the AAM-MMSE was assessed in terms of the consistency between the AAM-MMSE rating and physician rating. Results: A total of 427 participants were recruited for this study. Their average age was 60.6 years (range: 19 to 104 years). According to the intraclass correlation coefficient (ICC), the interrater reliability between physicians and the AAM-MMSE for the full MMSE scale was high [ICC(2,1) = 0.952; 95% CI (0.883, 0.974)]. Weighted kappa coefficients indicated high interrater agreement for audio-related items, but only slight to fair agreement for the items "Reading and obey", "Three-stage command", and "Writing complete sentence". The AAM-MMSE rating accuracy was 87%. A Bland-Altman plot showed a bias between the two total scores of 1.48 points, with upper and lower limits of agreement of 6.23 points and −3.26 points. Conclusions: Our work offers a promising fully automated MMSE assessment system for cognitive screening with good accuracy. [ABSTRACT FROM AUTHOR]
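The item-level agreement statistic reported above, the weighted Cohen's kappa, can be sketched directly from its definition (quadratic weights assumed here). The scores below are made-up illustrations, not study data:

```python
def weighted_kappa(r1, r2, k):
    """Quadratic-weighted Cohen's kappa between two raters' integer
    scores in 0..k-1: 1 - (weighted observed disagreement) /
    (weighted disagreement expected from the marginals)."""
    n = len(r1)
    O = [[0] * k for _ in range(k)]            # observed contingency table
    for a, b in zip(r1, r2):
        O[a][b] += 1
    m1 = [sum(O[i][j] for j in range(k)) for i in range(k)]  # rater-1 marginals
    m2 = [sum(O[i][j] for i in range(k)) for j in range(k)]  # rater-2 marginals
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = (i - j) ** 2 / (k - 1) ** 2    # quadratic disagreement weight
            num += w * O[i][j]
            den += w * m1[i] * m2[j] / n
    return 1 - num / den

agree = weighted_kappa([0, 1, 2, 2], [0, 1, 2, 2], 3)  # perfect agreement
```

Unlike unweighted kappa, near-misses (scores one point apart) are penalized less than large disagreements, which is why it suits ordinal item scores.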
- Published
- 2024
- Full Text
- View/download PDF
48. A deep learning-based approach for axle counter in free-flow tolling systems.
- Author
-
Souza, Bruno José, da Costa, Guinther Kovalski, Szejka, Anderson Luis, Freire, Roberto Zanetti, and Gonzalez, Gabriel Villarrubia
- Subjects
- *
DEEP learning , *AXLES , *IMAGE reconstruction , *IMAGE analysis , *COMPUTER vision , *TOLL collection , *COMPUTER algorithms - Abstract
Enhancements in the structural and operational aspects of transportation are important for achieving high-quality mobility. Toll plazas are a well-known potential bottleneck, as their charging points tend to disrupt the normal traffic flow. Focusing on the automation of toll plazas, this research presents the development of an axle counter to compose a free-flow toll collection system. The axle counter interprets images through computer vision algorithms to determine the number of axles of vehicles crossing in front of a camera. The You Only Look Once (YOLO) model was employed in the first step to identify vehicle wheels. Since several versions of this model are available, YOLOv5, YOLOv6, YOLOv7, and YOLOv8 were compared to select the best one. YOLOv5m achieved the best result, with precision and recall of 99.40% and 98.20%, respectively. A passage manager was then developed to verify when a vehicle passes in front of the camera and to store the corresponding frames. These frames are used by the image reconstruction module, which creates an image of the complete vehicle containing all axles. From the sequence of frames, the proposed method is able to identify when a vehicle is passing through the scene, count its axles, and automatically generate the appropriate charge to be applied to the vehicle. [ABSTRACT FROM AUTHOR]
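The passage-manager and reconstruction stage can be caricatured in pure Python: if each frame yields the x-centres of detected wheels and the per-frame vehicle displacement is known, shifting detections into a common stitched coordinate frame and clustering them counts the axles. Everything below (displacement, tolerance, detections) is an illustrative assumption, not the paper's pipeline.

```python
def count_axles(frames, speed_px, merge_tol=30.0):
    """Toy stand-in for passage management + image reconstruction:
    shift each frame's wheel x-centres by the vehicle's per-frame
    displacement, then cluster the shifted centres to count axles."""
    stitched = []
    for k, wheels in enumerate(frames):
        stitched.extend(x + k * speed_px for x in wheels)
    stitched.sort()
    axles, last = 0, None
    for x in stitched:
        if last is None or x - last > merge_tol:
            axles += 1  # start a new cluster = a new axle
        last = x
    return axles

# Three-axle truck moving 50 px/frame (leftwards in the image) past a
# narrow field of view; the third axle only enters from frame 1 onwards.
frames = [[400, 520], [350, 470, 760], [300, 420, 710]]
n_axles = count_axles(frames, 50)
```

In the real system the per-frame displacement would come from tracking rather than be given, and detection comes from the YOLO wheel detector, but the stitching logic is the same idea.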
- Published
- 2024
- Full Text
- View/download PDF
49. Generating quantum channels from functions on discrete sets.
- Author
-
Quillen, A. C. and Skerrett, Nathan
- Subjects
- *
SET functions , *QUANTUM computers , *QUBITS , *QUANTUM states , *COMPUTER algorithms , *EUCLIDEAN algorithm - Abstract
Using the recent ability of quantum computers to initialize quantum states rapidly with high fidelity, we use a function operating on a discrete set to create a simple class of quantum channels. Fixed points and periodic orbits that are present in the function generate fixed points and periodic orbits in the associated quantum channel. Phenomenology such as period doubling is visible in a 6-qubit dephasing channel constructed from a truncated version of the logistic map. Using disjoint subsets, discrete function-generated channels can be constructed that preserve coherence within subspaces. Error correction procedures can fall within this class, as syndrome detection uses an initialized quantum register. A possible application for function-generated channels is in hybrid classical/quantum algorithms. We illustrate how these channels can aid in carrying out classical computations involving iteration of non-invertible functions on a quantum computer, using the Euclidean algorithm for finding the greatest common divisor of two integers. [ABSTRACT FROM AUTHOR]
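One natural Kraus realization of a function-generated channel uses operators |f(x)><x|. On diagonal (classical) states this channel simply pushes the basis-state probabilities forward through f, so fixed points of f immediately give fixed points of the channel. A minimal sketch, with an assumed rounding discretization of the logistic map on 64 basis states (6 qubits):

```python
def pushforward(f, probs):
    """Action of the function-generated channel on a diagonal state:
    Kraus operators |f(x)><x| push basis probabilities through f."""
    out = [0.0] * len(probs)
    for x, p in enumerate(probs):
        out[f(x)] += p
    return out

N = 64  # 6 qubits -> 64 basis states

def trunc_logistic(x, r=3.2):
    """Truncated logistic map on {0,...,N-1} (an assumed discretization)."""
    u = x / (N - 1)
    return min(N - 1, int(round(r * u * (1 - u) * (N - 1))))

# Iterate the channel on a uniform classical state; probability mass
# collects on the map's fixed points and periodic orbits.
state = [1.0 / N] * N
for _ in range(50):
    state = pushforward(trunc_logistic, state)
```

The discretization and parameter r = 3.2 are illustrative choices; the point is only the structural one from the abstract, that orbits of the function reappear as orbits of the channel.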
- Published
- 2024
- Full Text
- View/download PDF
50. Delayed treatment of traumatic eyeball dislocation into the maxillary sinus and treatment algorithm: a case report and literature review.
- Author
-
Hoon Kim, Keun Hyung Kim, In Chang Koh, Ga Hyun Lee, and Soo Yeon Lim
- Subjects
- *
MAXILLARY sinus , *SUBARACHNOID hemorrhage , *COMPUTER algorithms - Abstract
Orbital floor fractures are commonly encountered, but the dislocation of the eyeball into the maxillary sinus is relatively rare. When it does occur, globe dislocation can have serious consequences, including vision loss, enucleation, and orbito-ocular deformity. Immediate surgical intervention is typically attempted when possible. However, severe comorbidities and poor general health can delay necessary surgery. In this report, we present the surgical outcomes of a 70-year-old woman who received delayed treatment for traumatic eyeball dislocation into the maxillary sinus due to a subarachnoid hemorrhage and hemopneumothorax. Additionally, we propose a treatment algorithm based on our clinical experience and a review of the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF