43 results
Search Results
2. An optimal algorithm for 2-bounded delay buffer management with lookahead.
- Author
- Kobayashi, Koji M.
- Subjects
- *ALGORITHMS, *ONLINE algorithms, *DETERMINISTIC algorithms, *QUALITY of service, *COMPUTER science
- Abstract
• We consider a variant of the online buffer management problem. • This variant is called the 2-bounded delay buffer management problem with lookahead. • We design an optimal online algorithm for this variant. The bounded delay buffer management problem, proposed by Kesselman et al. (STOC 2001 and SIAM Journal on Computing 33(3), 2004), is an online problem focusing on buffer management of a switch supporting Quality of Service (QoS). The problem is defined as follows: packets arrive at a buffer over time, and each packet is specified by a release time, a deadline and a value. An algorithm can transmit at most one packet from the buffer at each integer time, and it gains a packet's value as profit if it transmits the packet between its release time and its deadline. The objective is to maximize the total gained profit. An instance of the problem is s-bounded if, for any packet, an algorithm has at most s chances to transmit it. For any s ≥ 2, Hajek (CISS 2001) showed that the competitive ratio of any deterministic algorithm is at least (1 + √5)/2 ≈ 1.618. Recently, Veselý et al. (SODA 2019) designed an online algorithm matching this lower bound. Böhm et al. (ISAAC 2016 and Theoretical Computer Science, 2019) introduced lookahead: at time t, the algorithm also obtains information about the packets arriving at time t + 1. They showed that for s = 2 there is an algorithm achieving a competitive ratio of (√13 − 1)/2 ≈ 1.303, and that the competitive ratio of any deterministic algorithm is at least (1 + √17)/4 ≈ 1.280. In this paper, for the 2-bounded model with lookahead, we design an algorithm with a matching competitive ratio of (1 + √17)/4. [ABSTRACT FROM AUTHOR]
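As a concrete illustration of the problem setting above (not of the paper's optimal algorithm), the sketch below simulates the simplest greedy policy: at each integer time, transmit the most valuable packet that has been released and has not expired. The function name `greedy_schedule` and the tuple encoding are hypothetical names introduced here.

```python
def greedy_schedule(packets, horizon):
    """Greedy baseline for bounded-delay buffer management.

    At each integer time step, transmit the pending packet with the
    largest value among those already released and not yet past their
    deadline. (The optimal algorithms discussed in the abstract are
    more involved; this only illustrates the problem definition.)

    packets: list of (release, deadline, value) tuples.
    Returns the total profit gained.
    """
    remaining = list(packets)
    profit = 0
    for t in range(horizon + 1):
        # Packets transmittable at time t: released and not expired.
        live = [p for p in remaining if p[0] <= t <= p[1]]
        if live:
            best = max(live, key=lambda p: p[2])
            profit += best[2]
            remaining.remove(best)
    return profit
```

In the 2-bounded model, every packet satisfies deadline − release ≤ 1, i.e. it has at most two transmission chances.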
- Published
- 2021
- Full Text
- View/download PDF
3. Almost linear time algorithms for minsum k-sink problems on dynamic flow path networks.
- Author
- Higashikawa, Yuya, Katoh, Naoki, Teruyama, Junichi, and Watase, Koji
- Subjects
- *POLYNOMIAL time algorithms, *ALGORITHMS, *DATA structures, *COMPUTER science, *UNITS of time
- Abstract
• We study the minsum k-sink problems on dynamic flow path networks for the confluent and non-confluent flow models. • We develop algorithms which run in almost linear time regardless of the number of sinks k. • We improve upon the previous results for the same problem with the confluent flow model. • We provide the first polynomial-time algorithm for the problem with the non-confluent flow model. • The main theoretical contribution is the construction of novel data structures for solving subproblems in polylogarithmic time. We address facility location problems on dynamic flow path networks. A dynamic flow path network consists of an undirected path with positive edge lengths, positive edge capacities, and positive vertex weights. A path can be considered as a road, an edge length as the distance along the road, and a vertex weight as the number of people at the site. An edge capacity limits the number of people that can enter the edge per unit time. In a dynamic flow network, given particular points on edges or vertices, called sinks, all the people evacuate from the vertices to the sinks as quickly as possible. The problem is to find the location of sinks on a dynamic flow path network such that the aggregate evacuation time (i.e., the sum of evacuation times over all people) to the sinks is minimized. We consider two models of the problem: the confluent flow model and the non-confluent flow model. In the former, evacuation is restricted so that all the people at a vertex have to evacuate to the same sink; in the latter, there is no such restriction. In this paper, for both models, we develop algorithms which run in almost linear time regardless of the number of sinks. It should be stressed that for the confluent flow model, our algorithm improves upon the previous result by Benkoczi et al. [Theoretical Computer Science, 2020], and our algorithm for the non-confluent flow model is the first polynomial-time algorithm for that model. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
4. A novel algorithmic construction for deductions of categorical polysyllogisms by Carroll's diagrams.
- Author
- Senturk, Ibrahim, Gursoy, Necla Kircali, Oner, Tahsin, and Gursoy, Arif
- Subjects
- *ARTIFICIAL intelligence, *ALGORITHMS, *AUTHORSHIP in literature, *COMPUTER science, *SYLLOGISM
- Abstract
In this work, with the help of a calculus system, syllogistic logic with Carroll's diagrams (SLCD), we construct a useful algorithm for the possible deductions of polysyllogisms (soriteses). This algorithm makes a general deduction in categorical syllogisms with the help of diagrams depicting each proposition of a polysyllogism. The developed calculus system, PolySLCD (PSLCD), allows formal deduction from a set of premises by combining biliteral and triliteral diagrammatic representations with a simple algorithmic structure. The algorithm can be used to deduce new conclusions, step by step, through recursive conclusion sets obtained from the premises of categorical polysyllogisms. The fundamental contributions of this paper are accurately deducing conclusions from the sets corresponding to given premises, mirroring exact human reasoning, with a single algorithm, and designing this algorithm based on SLCD, which makes it well suited to computer-aided solutions. Since the algorithm is set-based, it is novel in the literature, and it can readily benefit researchers using polysyllogisms in different scientific branches, such as computer science, decision-making systems and artificial intelligence. [ABSTRACT FROM AUTHOR]
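For the universal-affirmative fragment only, the recursive, set-based chaining idea can be sketched as follows. This is a hypothetical illustration, not the paper's PSLCD calculus, which also handles negative and particular propositions via Carroll diagrams.

```python
def polysyllogism_closure(premises):
    """Set-based illustration of sorites (polysyllogism) deduction.

    Each premise 'All X are Y' is stored as a pair (X, Y); new
    conclusions are derived by transitive chaining until a fixed
    point is reached.  Only the universal-affirmative case is
    covered by this sketch.
    """
    conclusions = set(premises)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(conclusions):
            for (c, d) in list(conclusions):
                # 'All a are b' + 'All b are d'  =>  'All a are d'.
                if b == c and (a, d) not in conclusions:
                    conclusions.add((a, d))
                    changed = True
    return conclusions
```

For the premises "All A are B", "All B are C", "All C are D", the closure also contains "All A are C", "All B are D" and the sorites conclusion "All A are D".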
- Published
- 2021
- Full Text
- View/download PDF
5. Subspace clustering based on latent low rank representation with Frobenius norm minimization.
- Author
- Yu, Song and Yiquan, Wu
- Subjects
- *FROBENIUS manifolds, *FROBENIUS algebras, *ARTIFICIAL neural networks, *ALGORITHMS, *COMPUTER science
- Abstract
The problem of subspace clustering, which refers to segmenting a collection of data samples approximately drawn from a union of linear subspaces, is considered in this paper. Among existing subspace clustering algorithms, low rank representation (LRR) based subspace clustering is a very powerful method with demonstrated good performance. The latent low rank representation (LLRR) subspace clustering algorithm improves on the original LRR algorithm when the observed data samples are insufficient, and its clustering accuracy is higher than that of LRR. Recently, a Frobenius norm minimization based LRR algorithm has been proposed whose clustering accuracy is also higher than that of LRR, demonstrating the effectiveness of the Frobenius norm as another convex surrogate of the rank function. Combining LLRR and the Frobenius norm, a new low rank representation subspace clustering algorithm is proposed in this paper: the nuclear norm in the LLRR algorithm is replaced by the Frobenius norm, and the resulting optimization problem is solved via the alternating direction method of multipliers (ADMM). Experimental results show that, compared with LRR, LLRR and several other state-of-the-art subspace clustering algorithms, the proposed algorithm achieves higher clustering accuracy. Compared with LLRR, the running time of the proposed algorithm is reduced significantly. [ABSTRACT FROM AUTHOR]
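A minimal sketch of why the Frobenius norm speeds things up: the self-representation subproblem becomes ridge regression with a closed-form solution, so no iterative nuclear-norm solver is needed for this simplified variant. This is a plain Frobenius-norm LRR affinity, not the paper's full latent model or its ADMM solver; `frobenius_lrr_affinity` and `lam` are illustrative names.

```python
import numpy as np

def frobenius_lrr_affinity(X, lam=0.1):
    """Frobenius-norm self-representation step of LRR-style clustering.

    Replacing the nuclear norm by the squared Frobenius norm turns
        min_Z ||X - X Z||_F^2 + lam ||Z||_F^2
    into ridge regression with the closed-form solution
        Z = (X^T X + lam I)^{-1} X^T X.

    X: (features, samples) data matrix.
    Returns the symmetric affinity matrix that would be fed to
    spectral clustering.
    """
    n = X.shape[1]
    G = X.T @ X
    Z = np.linalg.solve(G + lam * np.eye(n), G)
    # Symmetrise the representation to obtain an affinity matrix.
    return (np.abs(Z) + np.abs(Z.T)) / 2
```

For samples drawn from orthogonal subspaces, the cross-subspace affinities vanish, which is exactly the block structure subspace clustering relies on.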
- Published
- 2018
- Full Text
- View/download PDF
6. Collaborative linear manifold learning for link prediction in heterogeneous networks.
- Author
- Liu, JiaHui, Jin, Xu, Hong, YuXiang, Liu, Fan, Chen, QiXiang, Huang, YaLou, Liu, MingMing, Xie, MaoQiang, and Sun, FengChi
- Subjects
- *ALGORITHMS, *COMPUTER science, *MANIFOLDS (Mathematics), *TOPOLOGY
- Abstract
Link prediction in heterogeneous networks aims at predicting missing interactions between pairs of nodes with the help of the topology of the target network and interconnected auxiliary networks. It has attracted considerable attention from both the computer science and bioinformatics communities in recent years. In this paper, we introduce a novel Collaborative Linear Manifold Learning (CLML) algorithm. It optimizes the consistency of node similarities by collaboratively using the manifolds embedded between the target network and the auxiliary network. The experiments on four benchmark datasets have demonstrated the outstanding advantages of CLML, not only in its high prediction performance compared to baseline methods, but also in its capability to predict unknown interactions in the target networks accurately and effectively. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
7. A simple linear time algorithm for the locally connected spanning tree problem on maximal planar chordal graphs.
- Author
- Calamoneri, Tiziana, Dell'Orefice, Matteo, and Monti, Angelo
- Subjects
- *SPANNING trees, *SUBGRAPHS, *PLANAR graphs, *ALGORITHMS, *COMPUTER science
- Abstract
A locally connected spanning tree (LCST) T of a graph G is a spanning tree of G such that, for each node, its neighborhood in T induces a connected subgraph in G. The problem of determining whether a graph contains an LCST has been proved to be NP-complete, even if the graph is planar or chordal. The main result of this paper is a simple algorithm that, given a maximal planar chordal graph, determines in linear time whether it contains an LCST, and produces one if it exists. We give an analogous result for the case when the input graph is a maximal outerplanar graph. [ABSTRACT FROM AUTHOR]
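The defining property is easy to check directly, which may help make the definition concrete. The sketch below is only a verifier of the LCST property, not the paper's linear-time construction algorithm; all names are illustrative.

```python
from collections import defaultdict

def is_locally_connected_spanning_tree(g_edges, t_edges, nodes):
    """Check the LCST property: for every node v, the set of v's
    neighbours in the tree T must induce a connected subgraph of G.

    g_edges, t_edges: iterables of undirected edges (a, b).
    (Assumes T is already a spanning tree of G.)
    """
    g_adj = defaultdict(set)
    for a, b in g_edges:
        g_adj[a].add(b)
        g_adj[b].add(a)
    t_adj = defaultdict(set)
    for a, b in t_edges:
        t_adj[a].add(b)
        t_adj[b].add(a)
    for v in nodes:
        nbrs = t_adj[v]
        if len(nbrs) <= 1:
            continue  # zero or one tree-neighbour is trivially connected
        # Flood-fill inside G restricted to the tree-neighbour set.
        start = next(iter(nbrs))
        seen = {start}
        stack = [start]
        while stack:
            u = stack.pop()
            for w in g_adj[u] & nbrs:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if seen != nbrs:
            return False
    return True
```

On a triangle, the path 1–2–3 is an LCST (nodes 1 and 3 are adjacent in G); on a path graph, the same tree is not, since node 2's tree-neighbours 1 and 3 are not connected in G.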
- Published
- 2019
- Full Text
- View/download PDF
8. Construction method of concept lattice based on improved variable precision rough set.
- Author
- Zhang, Ruiling, Xiong, Shengwu, and Chen, Zhong
- Subjects
- *COMPUTER science, *ARTIFICIAL neural networks, *DISCRIMINANT analysis, *ALGORITHMS, *PARAMETER estimation
- Abstract
This paper focuses on how to construct a concept lattice effectively and efficiently based on an improved variable precision rough set. On the basis of preprocessing the formal context, an algorithm is proposed that determines the value range of the variable precision parameter β according to the approximate classification quality. An improved β-upper and lower distribution attribute reduction algorithm is also proposed based on the improved variable precision rough set; it can be used for attribute reduction on the original data of the concept lattice and to eliminate redundant knowledge or noise from the formal context. For the reduced formal context, the paper combines the concept construction algorithm seamlessly with an improved rule acquisition algorithm, and proposes a novel approach to concept lattice construction based on the improved variable precision rough set. Finally, a concept lattice generation prototype system is developed and comprehensive experiments are performed; the experimental results demonstrate the effectiveness of the improved algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
9. How to catch L2-heavy-hitters on sliding windows.
- Author
- Braverman, Vladimir, Gelles, Ran, and Ostrovsky, Rafail
- Subjects
- *ALGORITHMS, *HISTOGRAMS, *COMPUTER science, *APPLIED mathematics, *INFORMATION science, *MATHEMATICAL statistics, *MATHEMATICAL programming
- Abstract
Finding heavy elements (heavy hitters) in streaming data is one of the central and well-understood tasks. Despite the importance of this problem, when considering the sliding windows model of streaming (where elements eventually expire), the problem of finding L2-heavy elements has remained completely open despite multiple papers and considerable success in finding L1-heavy elements. Since the L2-heavy element problem does not satisfy certain conditions, existing methods for sliding windows algorithms, such as smooth histograms or exponential histograms, are not directly applicable to it. In this paper, we develop the first polylogarithmic-memory algorithm for finding L2-heavy elements in the sliding window model. Our technique allows us to find not only L2-heavy elements, but also heavy elements with respect to any Lp with 0 < p ≤ 2 on sliding windows. By this we completely "close the gap" and resolve the question of finding Lp-heavy elements in the sliding window model with polylogarithmic memory, since it is well known that for p > 2 this task is impossible. We demonstrate a broader applicability of our method on two additional examples: we show how to obtain a sliding window approximation of the similarity of two streams, and of the fraction of elements that appear exactly a specified number of times within the window (the α-rarity problem). In these two illustrative examples of our method, we replace the current expected memory bounds with worst case bounds. [ABSTRACT FROM AUTHOR]
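The definition being approximated can be pinned down with an exact, linear-memory baseline: an element i is Lp-heavy in the window when its frequency satisfies f_i^p ≥ φ · Σ_j f_j^p. The paper's contribution is achieving this with polylogarithmic memory; the brute-force sketch below (`lp_heavy_hitters` is a hypothetical name) just fixes the target semantics.

```python
from collections import Counter, deque

def lp_heavy_hitters(stream, window, p, phi):
    """Exact sliding-window Lp heavy hitters (linear-memory baseline).

    After each arrival, report the sorted list of elements whose
    frequency f_i over the last `window` items satisfies
    f_i**p >= phi * sum_j f_j**p.
    """
    buf = deque()
    freq = Counter()
    out = []
    for x in stream:
        buf.append(x)
        freq[x] += 1
        if len(buf) > window:
            old = buf.popleft()      # expire the oldest element
            freq[old] -= 1
            if freq[old] == 0:
                del freq[old]
        total = sum(f ** p for f in freq.values())
        out.append(sorted(e for e, f in freq.items()
                          if f ** p >= phi * total))
    return out
```

Note the memory here is linear in the window size; the point of the paper is doing the same with polylogarithmic space for 0 < p ≤ 2.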
- Published
- 2014
- Full Text
- View/download PDF
10. Food image classification using local appearance and global structural information.
- Author
- Nguyen, Duc Thanh, Zong, Zhimin, Ogunbona, Philip O., Probst, Yasmine, and Li, Wanqing
- Subjects
- *IMAGE processing, *IMAGE analysis, *COMPUTER science, *APPLIED mathematics, *ALGORITHMS
- Abstract
This paper proposes food image classification methods exploiting both local appearance and global structural information of food objects. The contribution of the paper is threefold. First, the non-redundant local binary pattern (NRLBP) is used to describe the local appearance information of food objects. Second, the structural information of food objects is represented by the spatial relationship between interest points and encoded using a shape context descriptor formed from those interest points. Third, we propose two methods of integrating appearance and structural information for the description and classification of food images. We evaluated the proposed methods on two datasets. Experimental results verified that the combination of local appearance and structural features can improve classification performance. [Copyright Elsevier]
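A plain 8-neighbour LBP histogram, sketched below, shows the kind of local-appearance descriptor involved. The paper's non-redundant variant (NRLBP) additionally merges each pattern with its bitwise complement, which this sketch omits; the function name is illustrative.

```python
import numpy as np

def lbp_histogram(img):
    """Ordinary 8-neighbour local binary pattern histogram.

    Each interior pixel gets an 8-bit code: bit k is 1 when the k-th
    neighbour (clockwise) is >= the centre pixel.  Returns a 256-bin
    histogram of the codes.

    img: 2-D grayscale array (float or uint8).
    """
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                       # centre pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        codes += (nb >= c).astype(int) << bit  # set bit for this neighbour
    return np.bincount(codes.ravel(), minlength=256)
```

On a constant image every neighbour ties with the centre, so every interior pixel produces code 255, one bin in the histogram.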
- Published
- 2014
- Full Text
- View/download PDF
11. Image medium similarity measure and its applications.
- Author
- Zhou, Ningning, Hong, Long, and Zhang, Shaobai
- Subjects
- *ALGORITHMS, *MATHEMATICAL models, *IMAGE processing, *SET theory, *APPLIED mathematics, *COMPUTER science
- Abstract
Similarity measures play an important role in many image processing fields. This paper introduces the medium mathematics system and establishes a novel image medium similarity measure, both between two pixels and between two image sets, based on the measure of medium truth degree. Moreover, an image edge detection algorithm, an image matching algorithm and an image medium fidelity measure method governed by the medium similarity measure are discussed in this paper. Experimental results show that the proposed similarity measure is effective. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
12. Coupled local–global adaptation for multi-source transfer learning.
- Author
- Liu, Jieyan, Li, Jingjing, and Lu, Ke
- Subjects
- *MACHINE learning, *ROBUST control, *ARTIFICIAL neural networks, *ALGORITHMS, *COMPUTER science
- Abstract
This paper presents a novel unsupervised multi-source domain adaptation approach, named as coupled local–global adaptation (CLGA). At the global level, in order to maximize the adaptation ability, CLGA regards multiple domains as a unity, and jointly mitigates the gaps of both marginal and conditional distributions between source and target dataset. At the local level, with the intention of maximizing the discriminative ability, CLGA investigates the relationship among distinctive domains, and exploits both class and domain manifold structures embedded in data samples. We formulate both local and global adaptation in a concise optimization problem, and further derive an analytic solution for the objective function. Extensive evaluations verify that CLGA performs better than several existing methods not only in multi-source adaptation tasks but also in single source scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
13. A new greedy algorithm for sparse recovery.
- Author
- Wang, Qian and Qu, Gangrong
- Subjects
- *COMPRESSED sensing, *SIGNAL sampling, *ALGORITHMS, *ARTIFICIAL neural networks, *COMPUTER science
- Abstract
Compressed sensing (CS) has been one of the great successes of applied mathematics in the last decade. This paper proposes a new method for the recovery of sparse signals from underdetermined linear systems, combining the advantages of the Compressive Sampling Matching Pursuit (CoSaMP) algorithm and the Quasi-Newton Iteration Projection (QNIP) algorithm. To obtain the new algorithm, Quasi-Newton Projection Pursuit (QNPP), the least-squares technique in CoSaMP is used to accelerate convergence and QNIP is modified slightly. The convergence rate of QNPP is studied under a condition on the restricted isometry constant of the measurement matrix that is smaller than the one required by QNIP. A fast version of QNPP is also proposed, which uses Richardson's iteration to reduce computation time. The numerical results show that the proposed algorithms have a higher recovery rate and faster convergence than existing techniques. [ABSTRACT FROM AUTHOR]
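Since QNPP builds on CoSaMP's least-squares step, a minimal CoSaMP sketch may help fix ideas. This is the standard CoSaMP baseline, not QNPP itself; the function name and defaults are illustrative.

```python
import numpy as np

def cosamp(A, y, s, iters=20):
    """Compressive Sampling Matching Pursuit (CoSaMP).

    Per iteration: form the signal proxy A^T r, merge the 2s
    largest-proxy indices with the current support, solve least
    squares on the merged support, prune to the s largest
    coefficients, and update the residual.

    A: (m, n) measurement matrix; y: measurements; s: sparsity.
    """
    m, n = A.shape
    x = np.zeros(n)
    support = np.array([], dtype=int)
    r = y.copy()
    for _ in range(iters):
        proxy = A.T @ r
        omega = np.argsort(np.abs(proxy))[-2 * s:]   # 2s best proxy indices
        T = np.union1d(omega, support)               # merged support
        b, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)
        keep = np.argsort(np.abs(b))[-s:]            # prune to s entries
        support = T[keep]
        x = np.zeros(n)
        x[support] = b[keep]
        r = y - A @ x
        if np.linalg.norm(r) < 1e-10:
            break
    return x
```

With a random Gaussian measurement matrix and generous measurement count, CoSaMP recovers a noiseless sparse vector essentially exactly.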
- Published
- 2018
- Full Text
- View/download PDF
14. Tangible images of real life scenes.
- Author
- Zhang, Xingzi, Goesele, Michael, and Sourin, Alexei
- Subjects
- *COMPUTER science, *ENGINEERING, *ALGORITHMS, *POLYGONS
- Abstract
Haptic technologies allow for adding a new "touching" modality to virtual scenes. However, 3D reconstruction of a real life scene often results in millions of polygons, which cannot be simultaneously visualized and haptically rendered. In this paper, we propose a way of haptic interaction with real life scenes in which multiple original images of the real scenes are augmented with reconstructed polygon meshes. We present our solution to the problems of aligning the haptic model with the images and of interactive haptic rendering of large polygon meshes with reconstruction artifacts. In particular, the presented collision detection algorithm is not restricted by any hypothesis and is robust enough to support smooth interaction with millions of polygons. The feasibility and usability of the proposed solution are evaluated in a user study. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
15. Lessons from collecting a million biometric samples.
- Author
- Phillips, P. Jonathon, Flynn, Patrick J., and Bowyer, Kevin W.
- Subjects
- *BIOMETRIC identification, *COMPUTER science, *GESTURE, *HUMAN facial recognition software, *ALGORITHMS
- Abstract
Over the past decade, independent evaluations have become commonplace in many areas of experimental computer science, including face and gesture recognition. A key attribute of many successful independent evaluations is a curated data set. Desired aspects associated with these data sets include appropriateness to the experimental design, a corpus size large enough to allow statistically rigorous characterization of results, and the availability of comprehensive metadata that allow inferences to be made on various data set attributes. In this paper, we review a ten-year biometric sampling effort that enabled the creation of several key biometrics challenge problems. We summarize the design and execution of data collections, identify key challenges, and convey some lessons learned. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
16. Preference-oriented fixed-priority scheduling for periodic real-time tasks.
- Author
- Begam, Rehana, Xia, Qin, Zhu, Dakai, and Aydin, Hakan
- Subjects
- *REAL-time control, *ALGORITHMS, *MONOTONIC functions, *COMPUTER science
- Abstract
Traditionally, real-time scheduling algorithms prioritize tasks solely based on their timing parameters and cannot effectively handle tasks that have different execution preferences . In this paper, for a set of periodic real-time tasks running on a single processor, where some tasks are preferably executed as soon as possible (ASAP) and others as late as possible (ALAP) , we investigate Preference-Oriented Fixed-Priority (POFP) scheduling techniques. First, based on Audsley’s Optimal Priority Assignment (OPA) , we study a Preference Priority Assignment (PPA) scheme that attempts to assign ALAP (ASAP) tasks lower (higher) priorities, whenever possible. Then, by considering the non-work-conserving strategy, we exploit the promotion times of ALAP tasks and devise an online dual-queue based POFP scheduling algorithm. Basically, with the objective of fulfilling the execution preferences of all tasks, the POFP scheduler retains ALAP tasks in the delay queue until their promotion times while putting ASAP tasks into the ready queue right after their arrivals. In addition, to further expedite (delay) the executions of ASAP (ALAP) tasks using system slack, runtime techniques based on dummy and wrapper tasks are investigated. The proposed schemes are evaluated through extensive simulations. The results show that, compared to the classical fixed-priority Rate Monotonic Scheduling (RMS) algorithm, the proposed priority assignment scheme and POFP scheduler can achieve significant improvement in terms of fulfilling the execution preferences of both ASAP and ALAP tasks, which can be further enhanced at runtime with the wrapper-task based slack management technique. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
17. Harmony search: Current studies and uses on healthcare systems.
- Author
- Abdulkhaleq, Maryam T., Rashid, Tarik A., Alsadoon, Abeer, Hassan, Bryar A., Mohammadi, Mokhtar, Abdullah, Jaza M., Chhabra, Amit, Ali, Sazan L., Othman, Rawshan N., Hasan, Hadil A., Azad, Sara, Mahmood, Naz A., Abdalrahman, Sivan S., Rasul, Hezha O., Bacanin, Nebojsa, and Vimal, S.
- Subjects
- *CURIOSITY, *FLEXIBLE structures, *MEDICAL care, *COMPUTER science, *EVOLUTIONARY algorithms, *SEARCH algorithms, *ALGORITHMS, *LONGITUDINAL method
- Abstract
Harmony Search (HS) is one of the popular metaheuristic search algorithms. It has been verified that HS can find solutions to optimization problems thanks to its balanced exploratory and convergence behavior and its simple and flexible structure. This capability makes the algorithm attractive for several real-world applications in various fields, including healthcare systems, different engineering disciplines, and computer science. The popularity of HS motivates us to provide a comprehensive survey of the literature on HS and its variants in health systems, analyze its strengths and weaknesses, and suggest future research directions. In this review paper, the current studies and uses of harmony search are examined in four main areas: (i) the variants of HS, including its modifications and hybridizations; (ii) a summary of previous review works; (iii) applications of HS in healthcare systems; and (iv) a proposed operational framework for the applications of HS in healthcare systems. The main contribution of this review is to provide a thorough examination of HS in healthcare systems while also serving as a valuable resource for prospective scholars who want to investigate or implement this method. [ABSTRACT FROM AUTHOR]
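The canonical HS loop the survey refers to can be sketched compactly. Parameter names (`hms` for harmony memory size, `hmcr` for memory considering rate, `par` for pitch adjusting rate, `bw` for bandwidth as a fraction of each variable's range) follow common usage; the implementation is a minimal illustration rather than any of the surveyed variants.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=0):
    """Minimal canonical Harmony Search for continuous minimisation.

    Each new harmony is built per dimension: with probability hmcr a
    value is picked from harmony memory (and then pitch-adjusted with
    probability par, within a fraction bw of the variable's range);
    otherwise it is drawn uniformly from the bounds.  The new harmony
    replaces the worst memory member if it is better.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds]
              for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = rng.choice(memory)[d]          # memory consideration
                if rng.random() < par:             # pitch adjustment
                    v += rng.uniform(-bw, bw) * (hi - lo)
                v = min(max(v, lo), hi)
            else:
                v = rng.uniform(lo, hi)            # random selection
            new.append(v)
        val = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if val < scores[worst]:
            memory[worst], scores[worst] = new, val
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]
```

On a 2-D sphere function the loop drives the best memory member close to the optimum within a couple of thousand improvisations.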
- Published
- 2022
- Full Text
- View/download PDF
18. An accurate slap fingerprint based verification system.
- Author
- Gupta, Puneet and Gupta, Phalguni
- Subjects
- *HUMAN fingerprints, *COMPUTER science, *NEURAL computers, *BIOMETRY, *ALGORITHMS, *DATABASES
- Abstract
A slap fingerprint based verification system generally is highly secure. But it cannot handle the problems of temporal deformation and pose variation. In this paper, a template update algorithm has been proposed to ensure better performance of the system. During verification, each single fingerprint is matched and matching scores of all fingerprints are fused using an adaptive score level fusion. The performance of the proposed system has been evaluated on a challenging database containing 1800 slap-images acquired from 150 subjects. Experimental results show that the accuracy is increased by more than 3.5%. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
19. Noise detection in the meta-learning level.
- Author
- Garcia, Luís P.F., Carvalho, André C.P.L.F. de, and Lorena, Ana C.
- Subjects
- *MACHINE learning, *DATA distribution, *COMPUTER science, *ALGORITHMS, *NEURAL computers, *PREDICTION theory
- Abstract
The presence of noise in real data sets can harm the predictive performance of machine learning algorithms. There are several noise filtering techniques whose goal is to improve the quality of the data in classification tasks. These techniques usually scan the data for noise identification in a preprocessing step. Nonetheless, this is a non-trivial task and some noisy data can remain unidentified, while safe data can also be removed. The bias of each filtering technique influences its performance on a particular data set. Therefore, there is no single technique that can be considered the best for all domains or data distribution and choosing a particular filter is not straightforward. Meta-learning has been largely used in the last years to support the recommendation of the most suitable machine learning algorithm(s) for a new data set. This paper presents a meta-learning recommendation system able to predict the expected performance of noise filters in noisy data identification tasks. For such, a meta-base is created, containing meta-features extracted from several corrupted data sets along with the performance of some noise filters when applied to these data sets. Next, regression models are induced from this meta-base to predict the expected performance of the investigated filters in the identification of noisy data. The experimental results show that meta-learning can provide a good recommendation of the most promising filters to be applied to new classification data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
20. Active learning on anchorgraph with an improved transductive experimental design.
- Author
- Fu, Weijie, Hao, Shijie, and Wang, Meng
- Subjects
- *SUPERVISED learning, *ACTIVE learning, *ALGORITHMS, *GRAPH theory, *COMPUTER science, *NEURAL computers
- Abstract
Anchorgraph based learning methods have met with success in modeling large data for scalable semi-supervised learning. However, like most graph based learning algorithms, they are usually built with a randomly selected labeled set classified in advance. Although many pool-based active learning methods have been proposed, they often require relatively large computational and storage resources, which tends to impose an extra burden on the learning system. Thus, in this paper, we propose a novel active learning method named anchor-based transductive experimental design (ATED). By fully utilizing the representative power of anchors, the improved method efficiently enhances the performance of the original anchorgraph based learning while introducing much less extra cost in computation and storage. Extensive experimental results on real-world datasets have validated our approach in terms of classification accuracy and computational efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
21. INFFC: An iterative class noise filter based on the fusion of classifiers with noise sensitivity control.
- Author
- Sáez, José A., Galar, Mikel, Luengo, Julián, and Herrera, Francisco
- Subjects
- *ITERATIVE methods (Mathematics), *NOISE control, *ALGORITHMS, *ARTIFICIAL intelligence, *COMPUTER science
- Abstract
In classification, noise may deteriorate the system performance and increase the complexity of the models built. In order to mitigate its consequences, several approaches have been proposed in the literature. Among them, noise filtering, which removes noisy examples from the training data, is one of the most used techniques. This paper proposes a new noise filtering method that combines several filtering strategies in order to increase the accuracy of the classification algorithms used after the filtering process. The filtering is based on the fusion of the predictions of several classifiers used to detect the presence of noise. We translate the idea behind multiple classifier systems, where the information gathered from different models is combined, to noise filtering. In this way, we consider the combination of classifiers instead of using only one to detect noise. Additionally, the proposed method follows an iterative noise filtering scheme that allows us to avoid the usage of detected noisy examples in each new iteration of the filtering process. Finally, we introduce a noisy score to control the filtering sensitivity, in such a way that the amount of noisy examples removed in each iteration can be adapted to the necessities of the practitioner. The first two strategies (use of multiple classifiers and iterative filtering) are used to improve the filtering accuracy, whereas the last one (the noisy score) controls the level of conservation of the filter removing potentially noisy examples. The validity of the proposed method is studied in an exhaustive experimental study. We compare the new filtering method against several state-of-the-art methods to deal with datasets with class noise and study their efficacy in three classifiers with different sensitivity to noise. [ABSTRACT FROM AUTHOR]
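The fusion-of-classifiers idea can be illustrated with a toy leave-one-out ensemble. The classifiers chosen here (1-NN, 3-NN, nearest centroid) and the voting threshold are assumptions made for the sketch; the real INFFC additionally iterates the filter and exposes a noise score to tune sensitivity.

```python
import numpy as np

def fusion_noise_filter(X, y, votes_needed=2):
    """Toy fusion-based class-noise detector.

    An example is flagged as noisy when at least `votes_needed` of
    three simple leave-one-out classifiers (1-NN, 3-NN, nearest
    centroid) disagree with its recorded label.  Centroids are
    computed from the given, possibly noisy, labels.
    """
    n = len(y)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # leave-one-out: ignore self
    classes = np.unique(y)
    centroids = {c: X[y == c].mean(axis=0) for c in classes}
    noisy = []
    for i in range(n):
        order = np.argsort(d[i])
        pred_1nn = y[order[0]]
        vals, counts = np.unique(y[order[:3]], return_counts=True)
        pred_3nn = vals[counts.argmax()]  # majority among 3 nearest
        pred_nc = min(classes,
                      key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        wrong = sum(p != y[i] for p in (pred_1nn, pred_3nn, pred_nc))
        if wrong >= votes_needed:
            noisy.append(i)
    return noisy
```

On two well-separated clusters with one mislabeled point, all three voters disagree with the bad label and only that point is flagged.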
- Published
- 2016
- Full Text
- View/download PDF
22. Fuzzy-rough community in social networks.
- Author
- Kundu, Suman and Pal, Sankar K.
- Subjects
- *FUZZY sets, *SOCIAL networks, *COMPUTER science, *ALGORITHMS, *GRANULAR computing, *BIG data
- Abstract
Community detection in a social network is a well-known problem that has been studied in computer science since the early 2000s. The algorithms available in the literature mainly follow two strategies: one allows a node to be a part of multiple communities with equal membership, and the other considers a disjoint partition of the whole network where a node belongs to only one community. In this paper, we propose a novel community detection algorithm which identifies fuzzy-rough communities, where a node can be a part of many groups with different membership degrees. The algorithm runs on a new framework of social network representation based on fuzzy granular theory. A new index, normalized fuzzy mutual information, is used to quantify the goodness of the detected communities. Experimental results on benchmark data show the superiority of the proposed algorithm compared to other well-known methods, particularly when the network contains overlapping communities. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
23. Message scheduling for real-time interprocessor communication.
- Author
- Waldherr, Stefan, Knust, Sigrid, and Aust, Stefan
- Subjects
- *MULTIPROCESSORS, *TEXT messages, *REAL time scheduling (Computer science), *COMPUTER science, *GRAPH coloring, *ALGORITHMS
- Abstract
In this paper an efficient algorithm is proposed which optimizes periodic message scheduling in a real-time multiprocessor system. The system is based on a many-core single-chip computer architecture and uses a multistage baseline network for inter-core communication. Due to its basic architecture, internal blockings can occur during data transfers, i.e. the baseline network is not real-time capable by itself. Therefore, we propose a scheduling algorithm that may be performed before the execution of an application in order to compute a non-blocking schedule of periodic message transfers. Additionally, we optimize the clock rate of the network subject to the constraint that all data transfers can be performed in a non-blocking way. Our solution algorithm is based on a generalized graph coloring model and a randomized greedy approach. The algorithm was tested on some realistic communication scenarios as they appear in modern electronic car units. Computational results show the effectiveness of the proposed algorithm. [ABSTRACT FROM AUTHOR]
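The graph-colouring view can be sketched with a plain deterministic greedy colouring: messages are vertices, an edge means two transfers would block each other in the interconnection network, and each colour is a time slot. The paper's model is a generalised colouring solved with a randomised greedy plus clock-rate optimisation, which this illustration omits; names are hypothetical.

```python
def greedy_slot_assignment(messages, conflicts):
    """Greedy graph colouring for non-blocking message slots.

    Assigns each message the smallest slot number not already used
    by a conflicting message, scanning messages in the given order.

    conflicts: set of frozenset({a, b}) pairs of blocking messages.
    Returns a {message: slot} mapping.
    """
    slot = {}
    for m in messages:
        used = {slot[o] for o in slot
                if frozenset((m, o)) in conflicts}
        s = 0
        while s in used:   # first free slot
            s += 1
        slot[m] = s
    return slot
```

With conflicts {A,B} and {B,C}, messages A and C can share slot 0 while B moves to slot 1, so the period needs only two slots.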
- Published
- 2015
- Full Text
- View/download PDF
24. A comprehensive survey of clustering algorithms: State-of-the-art machine learning applications, taxonomy, challenges, and future research prospects.
- Author
-
Ezugwu, Absalom E., Ikotun, Abiodun M., Oyelade, Olaide O., Abualigah, Laith, Agushaka, Jeffery O., Eke, Christopher I., and Akinyelu, Andronicus A.
- Subjects
- *
MACHINE learning , *ALGORITHMS , *ARTIFICIAL intelligence , *DATA mining , *COMPUTER science - Abstract
Clustering is an essential tool in data mining research and applications. It is the subject of active research in many fields of study, such as computer science, data science, statistics, pattern recognition, artificial intelligence, and machine learning. Several clustering techniques have been proposed and implemented, and most of them successfully find excellent quality or optimal clustering results in the domains mentioned earlier. However, there has been a gradual shift in the choice of clustering methods among domain experts and practitioners alike, which is precipitated by the fact that most traditional clustering algorithms still depend on the number of clusters provided a priori. These conventional clustering algorithms cannot effectively handle real-world data clustering analysis problems where the number of clusters in data objects cannot be easily identified. Also, they cannot effectively manage problems where the optimal number of clusters for a high-dimensional dataset cannot be easily determined. Therefore, there is a need for improved, flexible, and efficient clustering techniques. Recently, a variety of efficient clustering algorithms have been proposed in the literature, and these algorithms produced good results when evaluated on real-world clustering problems. This study presents an up-to-date systematic and comprehensive review of traditional and state-of-the-art clustering techniques for different domains. This survey considers clustering from a more practical perspective. It shows the outstanding role of clustering in various disciplines, such as education, marketing, medicine, biology, and bioinformatics. It also discusses the application of clustering to different fields attracting intensive efforts among the scientific community, such as big data, artificial intelligence, and robotics. This survey paper will be beneficial for both practitioners and researchers. 
It will serve as a good reference point for researchers and practitioners to design improved and efficient state-of-the-art clustering algorithms. • Provide an up-to-date comprehensive review of the different clustering techniques. • Highlight novel and most recent practical applications areas of clustering. • Provide a convenient research path for new researchers. • Help experts develop new algorithms for emerging challenges in the research area. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
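The survey's central criticism is that traditional clustering algorithms need the cluster count k up front. A minimal stdlib-Python sketch of Lloyd's k-means makes that a-priori dependency concrete; this is a generic textbook baseline, not any specific method from the survey:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm on 2-D points. Note that k must be supplied
    a priori -- exactly the dependency the survey highlights."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k), key=lambda c: (x - centers[c][0]) ** 2
                                            + (y - centers[c][1]) ** 2)
            clusters[i].append((x, y))
        # Update step: move each center to its cluster mean.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters
```

Running it with the wrong k silently produces a wrong partition, which is why the survey emphasizes methods that infer the number of clusters from the data.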
25. Two structure-preserving-doubling like algorithms for obtaining the positive definite solution to a class of nonlinear matrix equation.
- Author
-
Huang, Na and Ma, Chang-Feng
- Subjects
- *
ALGORITHMS , *NONLINEAR equations , *MATRICES (Mathematics) , *ADDITION (Mathematics) , *COMPUTER science - Abstract
In this paper, we present two structure-preserving-doubling like algorithms for obtaining the positive definite solution of the nonlinear matrix equation X + A^H X̄^(-1) A = Q, where X ∈ C^(n×n) is an unknown matrix and Q ∈ C^(n×n) is a Hermitian positive definite matrix. We prove that the sequences generated by the algorithms converge to the positive definite solution of the considered matrix equation R-quadratically. In addition, we also present some numerical results to illustrate the behavior of the considered algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
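The paper's doubling algorithms are not spelled out in the abstract. As a hedged illustration only, here is a naive fixed-point iteration on the scalar (n = 1) instance of X + A^H X̄^(-1) A = Q; it converges only linearly, unlike the R-quadratic doubling schemes, and the starting point x₀ = q is an assumption:

```python
def solve_scalar(a, q, iters=100):
    """Naive fixed-point iteration x_{k+1} = q - conj(a) * a / conj(x_k)
    for the scalar (n = 1) instance of X + A^H Xbar^(-1) A = Q.
    This is NOT the paper's structure-preserving doubling algorithm;
    it only makes the equation concrete."""
    x = q  # assumed starting point
    for _ in range(iters):
        x = q - (a.conjugate() * a) / x.conjugate()
    return x
```

For q sufficiently larger than |a|² the iterates stay positive and settle on the positive solution; the matrix case replaces the division by a Hermitian matrix inverse.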
26. Automatic custom instruction identification for application-specific instruction set processors.
- Author
-
Xiao, Chenglong, Casseau, Emmanuel, Wang, Shanshan, and Liu, Wanjun
- Subjects
- *
APPLICATION-specific instruction-set processors , *MICROPROCESSORS , *DATA flow computing , *ALGORITHMS , *COMPUTER science , *APPLICATION software - Abstract
The application-specific instruction set processors (ASIPs) have received more and more attention in recent years. ASIPs make trade-offs between flexibility and performance by extending the base instruction set of a general-purpose processor with custom functional units (CFUs). Custom instructions, executed on CFUs, make it possible to improve performance and achieve flexibility for extensible processors. The custom instruction synthesis flow involves two essential issues: custom instruction enumeration (subgraph enumeration) and custom instruction selection (subgraph selection). However, both enumerating all possible custom instructions of a given data-flow graph and selecting the most profitable custom instructions from the enumerated custom instructions are computationally difficult problems. In this paper, we propose efficient algorithms for custom instruction enumeration and custom instruction selection. Compared with previously proposed well-known enumeration algorithms, our approach can achieve a significant speedup while generating the identical set of all possible custom instructions or only connected custom instructions. Experimental results also show that a code size reduction rate up to 76% can be achieved for a set of computational intensive programs, and the speed-up achieved is up to 8.2×. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
27. A fast algorithm for data collection along a fixed track.
- Author
-
Cheong, O., El Shawi, R., and Gudmundsson, J.
- Subjects
- *
ACQUISITION of data , *ALGORITHMS , *WIRELESS sensor networks , *COMMUNICATION , *APPLIED mathematics , *VORONOI polygons , *COMPUTER science - Abstract
Recent research shows that significant energy saving can be achieved in wireless sensor networks (WSNs) with a mobile base station that collects data from sensor nodes via short-range communications. However, a major performance bottleneck of such WSNs is the significantly increased latency in data collection due to the low movement speed of mobile base stations. In this paper we study the problem of finding a data collection path for a mobile base station moving along a fixed track in a wireless sensor network to minimize the latency of data collection. The main contribution is an O(mn log n) expected time algorithm, where n is the number of sensors in the network and m is the complexity of the fixed track. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
28. Online algorithms for 1-space bounded 2-dimensional bin packing and square packing.
- Author
-
Zhang, Yong, Chin, Francis Y.L., Ting, Hing-Fung, Han, Xin, Poon, Chung Keung, Tsin, Yung H., and Ye, Deshi
- Subjects
- *
ALGORITHMS , *COMPUTER science , *APPLIED mathematics , *COMPUTER software , *MATHEMATICAL programming , *TECHNOLOGY - Abstract
In this paper, we study 1-space bounded 2-dimensional bin packing and square packing. A sequence of rectangular items (square items) arrive one by one, each item must be packed into a square bin of unit size on its arrival without any information about future items. When packing items, 90°-rotation is allowed. 1-space bounded means there is only one “active” bin. If the “active” bin cannot accommodate the coming item, it will be closed and a new bin will be opened. The objective is to minimize the total number of bins used for packing all items in the sequence. Our contributions are as follows: For 1-space bounded 2-dimensional bin packing, we propose an online packing algorithm with a tight competitive ratio of 5.06. A lower bound of 3.17 on the competitive ratio is proven. Moreover, we study 1-space bounded square packing, where each item is a square with side length no more than 1. A 4.3-competitive algorithm is achieved, and a lower bound of 2.94 on the competitive ratio is given. All these bounds surpass the previously best known results. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
29. Algorithms for parameterized maximum agreement forest problem on multiple trees.
- Author
-
Shi, Feng, Wang, Jianxin, Chen, Jianer, Feng, Qilong, and Guo, Jiong
- Subjects
- *
ALGORITHMS , *PHYLOGENY , *INFORMATION science , *COMPUTER science , *BIOLOGICAL evolution , *PARAMETERIZED family , *PROBABILITY theory - Abstract
The Maximum Agreement Forest problem ( MAF ) asks for a largest common subforest of a collection of phylogenetic trees. The MAF problem on two binary phylogenetic trees has been studied extensively in the literature. In this paper, we present a group of fixed-parameter tractable algorithms for the MAF problem on multiple (i.e., two or more) binary phylogenetic trees. Our techniques work fine for the problem for both rooted trees and unrooted trees. The computational complexity of our algorithms is comparable with that of the known algorithms for two trees, and is independent of the number of phylogenetic trees for which a maximum agreement forest is constructed. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
30. Schedules for marketing products with negative externalities.
- Author
-
Cao, Zhigang, Chen, Xujin, and Wang, Changjun
- Subjects
- *
SOCIAL networks , *POLYNOMIAL time algorithms , *DECISION making , *CONSUMERS , *ALGORITHMS , *COMPUTER science , *MARKETING - Abstract
With the fast development of social network services, network marketing of products with externalities has been attracting more and more attention from both academia and business. The extensive study on network marketing mainly concerns positive externalities. The focus of this paper is on the much less understood counterpart for negative externalities, where a consumer has lower incentive to buy a product as the product is possessed by more social network neighbors. For a seller who markets these products, it is desirable to have a good schedule which specifies the order in which he approaches consumers. We design polynomial time algorithms that find marketing schedules for products with negative externalities. The goals are two-fold: maximizing the product sale and ensuring consumer regret-free decisions. We show that the maximization is NP-hard. Our algorithms achieve satisfactory performance guarantees, approximating the maximum within constant factors in most of the cases. Two of these algorithms provide regret-proof schedules, reaching an equilibrium state where no consumers regret their previous decisions. Our work is the first attempt to address these marketing problems from an algorithmic point of view. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
31. A practical decision procedure for Propositional Projection Temporal Logic with infinite models.
- Author
-
Duan, Zhenhua and Tian, Cong
- Subjects
- *
DECISION making , *ALGORITHMS , *C++ , *LOGIC , *COMPUTER science , *INFORMATION science , *PROPOSITIONAL calculus - Abstract
This paper presents a practical decision procedure for Propositional Projection Temporal Logic with infinite models. First, a set Prop_l of labels l_i, 0 ≤ i ≤ n ∈ N_0, is used to mark nodes of an LNFG of a formula, and a node with l_i is treated as an accepting state as in an automaton. Further, the generalized Büchi accepting condition for automata is employed to identify a path (yielding a word) in an LNFG as a model of the formula. In addition, the implementation details of the decision procedure and relevant algorithms, including pre-processing, LNFG construction and circle-finding algorithms, are presented; as a matter of fact, all algorithms are implemented by C++ programs. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
32. The string guessing problem as a method to prove lower bounds on the advice complexity.
- Author
-
Böckenhauer, Hans-Joachim, Hromkovič, Juraj, Komm, Dennis, Krug, Sacha, Smula, Jasmin, and Sprock, Andreas
- Subjects
- *
ALGORITHMS , *ORACLE software , *COMPUTER science , *INFORMATION science , *SET theory - Abstract
The advice complexity of an online problem describes the additional information both necessary and sufficient for online algorithms to compute solutions of a certain quality. In this model, an oracle inspects the input before it is processed by an online algorithm. Depending on the input string, the oracle prepares an advice bit string that is accessed sequentially by the algorithm. The number of advice bits that are read to achieve some specific solution quality can then serve as a fine-grained complexity measure. The main contribution of this paper is to study a powerful method for proving lower bounds on the number of advice bits necessary. To this end, we consider the string guessing problem as a generic online problem and show a lower bound on the number of advice bits needed to obtain a good solution. We use special reductions from string guessing to improve the best known lower bound for the online set cover problem and to give a lower bound on the advice complexity of the online maximum clique problem. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
33. 2-Connecting outerplanar graphs without blowing up the pathwidth.
- Author
-
Babu, Jasine, Basavaraju, Manu, Chandran, L. Sunil, and Rajendraprasad, Deepak
- Subjects
- *
GRAPH theory , *ALGORITHMS , *PLANAR graphs , *COMPUTER science , *INFORMATION science - Abstract
Given a connected outerplanar graph G of pathwidth p , we give an algorithm to add edges to G to get a supergraph of G , which is 2-vertex-connected, outerplanar and of pathwidth O ( p ) . This settles an open problem raised by Biedl [1] , in the context of computing minimum height planar straight line drawings of outerplanar graphs, with their vertices placed on a two-dimensional grid. In conjunction with the result of this paper, the constant factor approximation algorithm for this problem obtained by Biedl [1] for 2-vertex-connected outerplanar graphs will work for all outerplanar graphs. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
34. A formal proof of the deadline driven scheduler in PPTL axiomatic system.
- Author
-
Zhang, Nan, Duan, Zhenhua, Tian, Cong, and Du, Dingzhu
- Subjects
- *
ALGORITHMS , *SIMULATION methods & models , *AUTOMATIC theorem proving , *COMPUTER science , *TECHNOLOGY , *LAMMA language , *MATHEMATICS theorems - Abstract
This paper presents an approach for verifying the correctness of the feasibility theorem on the deadline driven scheduler (DDS) with the axiom system of Propositional Projection Temporal Logic (PPTL). To do so, the deadline driven scheduling algorithm is modeled by an MSVL (Modeling, Simulation and Verification Language) program and the feasibility theorem is formulated by PPTL formulas with two parts: a necessary part and a sufficient part. Then, several lemmas are abstracted and proved by means of the axiom system of PPTL. With the help of the lemmas, two parts of the theorem are deduced respectively. This case study convinces us that some real-time properties of systems can be formally verified by theorem proving using the axiom system of PPTL. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
35. Efficient CTL model-checking for pushdown systems.
- Author
-
Song, Fu and Touili, Tayssir
- Subjects
- *
ALGORITHMS , *PROGRAMMING languages , *COMPUTER software , *COMPUTER science , *COMPUTER engineering - Abstract
Pushdown systems (PDS) are well adapted to model sequential programs with (possibly recursive) procedure calls. Therefore, it is important to have efficient model checking algorithms for PDSs. We consider in this paper CTL model checking for PDSs. We consider the “standard” CTL model checking problem where whether a configuration of a PDS satisfies an atomic proposition or not depends only on the control state of the configuration. We consider also CTL model checking with regular valuations, where the set of configurations in which an atomic proposition holds is a regular language. We reduce these problems to the emptiness problem in Alternating Büchi Pushdown Systems, and we give an algorithm to solve this emptiness problem. Our algorithms are more efficient than the other existing algorithms for CTL model checking for PDSs in the literature. We implemented our techniques in a tool, and we applied it to different case studies. Our results are encouraging. In particular, we were able to confirm the existence of known bugs in Linux source code. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
36. RCELF: A residual-based approach for Influence Maximization Problem.
- Author
-
Zhang, Shiqi, Zeng, Xinxun, and Tang, Bo
- Subjects
- *
SOCIAL networks , *ALGORITHMS , *VIRAL marketing , *COMPUTER science , *NEW product development - Abstract
The Influence Maximization Problem (IMP) asks for a seed set of nodes in a social network that spreads influence as widely as possible. It has many applications in multiple domains; e.g., viral marketing is frequently used to advertise new products or activities. While it is a classic and well-studied problem in computer science, unfortunately, all proposed techniques compromise among time efficiency, memory consumption, and result quality. In this paper, we conduct comprehensive experimental studies on the state-of-the-art IMP approximate approaches to reveal the underlying trade-off strategies. Interestingly, we find that even the state-of-the-art approaches are impractical when the propagation probability of the network is taken into consideration. With the findings on existing approaches, we propose a novel residual-based approach (i.e., RCELF) for IMP, which (i) overcomes the deficiencies of existing approximate approaches, and (ii) provides theoretically guaranteed results with high efficiency in both time and space. We demonstrate the superiority of our proposal by extensive experimental evaluation on real datasets. • We reveal the trade-off strategies among the approximate approaches for IMP. • An effective and efficient approximate algorithm called RCELF is proposed. • The performance of RCELF is extensively evaluated on 5 real-world datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
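RCELF builds on the CELF family of lazy-greedy heuristics. The sketch below shows generic CELF-style lazy evaluation over a monotone submodular coverage surrogate; real IMP solvers estimate spread by Monte-Carlo simulation, and the `reach` dictionary here is a toy stand-in, not the paper's residual-based machinery:

```python
import heapq

def celf_greedy(reach, candidates, k):
    """CELF-style lazy greedy seed selection.
    reach[v] is the set of nodes v influences; the coverage of a seed set
    is the size of the union of their reach sets -- a monotone submodular
    stand-in for the Monte-Carlo spread estimate used in real IMP solvers."""
    covered, seeds = set(), []
    # Min-heap of (negated marginal gain, node, round when gain was computed).
    heap = [(-len(reach[v]), v, 0) for v in candidates]
    heapq.heapify(heap)
    rnd = 0
    while heap and len(seeds) < k:
        neg_gain, v, stamp = heapq.heappop(heap)
        if stamp == rnd:          # gain is fresh: by submodularity no other
            seeds.append(v)       # node can beat it, so select v outright
            covered |= reach[v]
            rnd += 1
        else:                     # stale gain: recompute lazily, push back
            heapq.heappush(heap, (-len(reach[v] - covered), v, rnd))
    return seeds
```

The lazy re-evaluation avoids recomputing every marginal gain each round, which is the efficiency trick the CELF lineage (and hence RCELF) exploits.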
37. A Lyapunov analysis of the continuous-time adaptive Bellman–Ford algorithm.
- Author
-
Mo, Yuanqiu and Yu, Lanlin
- Subjects
- *
ALGORITHMS , *GLOBAL asymptotic stability , *ARTIFICIAL intelligence , *LYAPUNOV functions , *COMPUTER science - Abstract
The shortest path problem, one of the most classical graph problems, has been addressed in many different ways suitable for various settings in the fields of computer science and artificial intelligence. In this paper, we revisit a distributed control solution, namely the continuous-time adaptive Bellman–Ford algorithm, to the shortest path problem. While previous work only concerned its global asymptotic stability, we not only prove its global asymptotic stability by formulating a Lyapunov function, but also characterize the initial conditions under which the algorithm converges exponentially, and show that the algorithm is globally ultimately bounded under persistent bounded perturbations based on the proposed Lyapunov function. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
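The continuous-time adaptive Bellman–Ford algorithm studied above is the ODE analogue of the classical discrete update x_i ← min_j (w_ij + x_j). A stdlib-Python sketch of that discrete counterpart (not the paper's continuous-time system or its Lyapunov analysis):

```python
def bellman_ford(edges, n, source):
    """Fixed-point form of Bellman-Ford: the discrete-time counterpart of
    the continuous-time adaptive update analysed in the paper.
    edges[i] is a list of (j, w_ij) neighbours of node i; weights >= 0."""
    INF = float('inf')
    x = [INF] * n
    x[source] = 0.0
    for _ in range(n - 1):  # distance estimates stabilise within n-1 sweeps
        for i in range(n):
            if i == source:
                continue
            # Each non-source node tracks the cheapest route through a neighbour.
            x[i] = min((w + x[j] for j, w in edges[i]), default=INF)
    return x
```

The continuous-time variant replaces this synchronous sweep with a differential equation driving each x_i toward the same min-plus fixed point, which is what the Lyapunov function certifies.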
38. Weight evaluation for features via constrained data-pairs.
- Author
-
Liu, Ming, Wu, Chong, and Liu, Yuanchao
- Subjects
- *
ALGORITHMS , *PROBABILITY theory , *INCONSISTENCY (Logic) , *DISTRIBUTION (Probability theory) , *COMPUTER science - Abstract
Facing the massive amount of data appearing on the web, automatic analysis tools have become essential for web users to discover valuable information online. Precise similarity measurement plays a decisive role in enabling analysis tools to achieve high-quality performance. Because different features contribute diversely to similarity calculation, it is necessary to use weights to measure each feature's contribution and import them into similarity measurement. To accurately assign feature weights, constrained data-pairs provided by users are usually imported into the weight evaluation procedure, whereas conventional approaches all fail to consider two challenges: (a) the asymmetrical distribution of constrained data-pairs, and (b) the inconsistency contained in constrained data-pairs. If these two issues occur, conventional approaches are incompetent at addressing them or are even unable to work. Thus, this paper proposes a novel constraint-based weight evaluation to address these two issues. For the former, constrained data-pairs are partitioned into several equivalence classes, and distribution parameters are assigned to constrained data-pairs to balance their distributions. For the latter, constrained data-pairs are connected one after another, and belief values are thereby formed to indicate their probability of being inconsistent. Experimental results demonstrate that this type of evaluation is independent of any algorithm. With this evaluation, similarities can be calculated more accurately. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
39. Intragroup and intergroup secret image sharing based on homomorphic Lagrange interpolation.
- Author
-
Yang, Ching-Nung, Wu, Xiaotian, Lin, Hsuan-Yu, and Kim, Cheonshik
- Subjects
- *
INFORMATION sharing , *INTERPOLATION , *HOMOMORPHISMS , *ALGORITHMS , *COMPUTER science - Abstract
In this paper, a (k, n, m)-intragroup and intergroup secret image sharing (I2SIS) scheme for the multi-group (i.e., m-group) environment is investigated. (2m − 1) secret images are shared within groups (intragroup) and between groups (intergroup). Differing from existing SIS methods, any k or more qualified participants among different groups (i.e., intergroup) can collaboratively recover the respective secret image via homomorphic Lagrange interpolation. The proposed scheme consists of three major procedures: preprocessing, sharing, and recovering. In the preprocessing procedure, an approach is designed to partition the coefficients in the m (k − 1)-degree polynomials for sharing the (2m − 1) secret images. In addition, an optimal algorithm for the preprocessing is given to make the percentages of recovering the secret images as equal as possible. Shadows belonging to all participants in the m groups are generated in the sharing procedure. In the recovering procedure, the secret images can be reconstructed in intragroups and intergroups. Moreover, experiments and comparisons are demonstrated to show the merits of the proposed scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
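The homomorphic intergroup construction is specific to the paper, but its base layer, Shamir sharing with Lagrange interpolation over a prime field, is standard. A minimal single-group sketch, with an arbitrarily chosen prime P (real SIS schemes pick P near the pixel value range):

```python
import random

P = 2**31 - 1  # an arbitrary Mersenne prime, chosen for illustration

def share(secret, k, n, seed=0):
    """Shamir (k, n) sharing over GF(P): hide `secret` as the constant
    term of a random degree-(k-1) polynomial, hand out n evaluations."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Modular inverse of den via Fermat's little theorem (P is prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any k of the n shadows reconstruct the secret; fewer reveal nothing. The paper layers group structure and homomorphic combination on top of exactly this interpolation step.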
40. Solving the k-influence region problem with the GPU.
- Author
-
Fort, M. and Sellarès, J.A.
- Subjects
- *
GRAPHICS processing units , *DECISION making , *EUCLIDEAN domains , *PROBLEM solving , *INFORMATION processing , *ALGORITHMS - Abstract
In this paper we study a problem that arises in the competitive facility location field. Facilities and customers are represented by points of a planar Euclidean domain. We associate a weighted distance to each facility to reflect that customers select facilities depending on distance and importance. We define, by considering weighted distances, the k-influence region of a facility as the set of points of the domain that have the given facility among their k-nearest/farthest neighbors. On the other hand, we partition the domain into subregions so that each subregion has a non-negative weight associated with it which measures a characteristic related to the area of the subregion. Given a weighted partition of the domain, the k-influence region problem finds the points of the domain where a new facility should be opened. This is done considering the known weight associated with the new facility and ensuring a minimum weighted area of its k-influence region. We present a GPU parallel approach, designed under the CUDA architecture, for approximately solving the k-influence region problem. In addition, we describe how to visualize the solutions, which improves the understanding of the problem and reveals complicated structures that would be hard to capture otherwise. Integration of computation and visualization provides decision makers with an iterative what-if analysis process to acquire more information and obtain an approximately optimal location. Finally, we provide and discuss experimental results showing the efficiency and scalability of our approach. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
41. Using deep neural networks for kinematic analysis: Challenges and opportunities.
- Author
-
Cronin, Neil J.
- Subjects
- *
ARTIFICIAL intelligence , *SUPERVISED learning , *COMPUTER science , *COACHING psychology , *MOTION analysis , *DATA quality , *ALGORITHMS - Abstract
Kinematic analysis is often performed in a lab using optical cameras combined with reflective markers. With the advent of artificial intelligence techniques such as deep neural networks, it is now possible to perform such analyses without markers, making outdoor applications feasible. In this paper I summarise 2D markerless approaches for estimating joint angles, highlighting their strengths and limitations. In computer science, so-called "pose estimation" algorithms have existed for many years. These methods involve training a neural network to detect features (e.g. anatomical landmarks) using a process called supervised learning, which requires "training" images to be manually annotated. Manual labelling has several limitations, including labeller subjectivity, the requirement for anatomical knowledge, and issues related to training data quality and quantity. Neural networks typically require thousands of training examples before they can make accurate predictions, so training datasets are usually labelled by multiple people, each of whom has their own biases, which ultimately affects neural network performance. A recent approach, called transfer learning, involves modifying a model trained to perform a certain task so that it retains some learned features and is then re-trained to perform a new task. This can drastically reduce the required number of training images. Although development is ongoing, existing markerless systems may already be accurate enough for some applications, e.g. coaching or rehabilitation. Accuracy may be further improved by leveraging novel approaches and incorporating realistic physiological constraints, ultimately resulting in low-cost markerless systems that could be deployed both in and outside of the lab. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
42. Three viewpoints on null-collision Monte Carlo algorithms.
- Author
-
El Hafi, Mouna, Blanco, Stephane, Dauchet, Jérémi, Fournier, Richard, Galtier, Mathieu, Ibarrart, Loris, Tregan, Jean-Marc, and Villefranque, Najda
- Subjects
- *
MONTE Carlo method , *ALGORITHMS , *COMPUTER graphics , *INTEGRAL operators , *RADIATIVE transfer equation , *PROBABILITY theory , *COMPUTER science - Abstract
• Monte Carlo Method and computer graphics tools are combined. • Non-linearity of the integral operator in the double randomization is bypassed. • Three viewpoints allow classification of ideas, flexibility and intuition. • Data-algorithm orthogonality is preserved using the null-collision algorithm. • Hierarchical grids combined with the null-collision algorithm allow fast path tracking. In 2013, Galtier et al. [1] theoretically revisited a numerical trick that had been used since the very beginning of linear-transport Monte-Carlo simulation: introducing "null" scatterers into a heterogeneous field to make it virtually homogeneous. The rigorous connection between null-collision algorithms and integral formulations of the radiative transfer equation led to null-collision algorithms being used in distinct contexts, from atmospheric or combustion sciences to computer graphics, addressing questions that may strongly depart from the initial objective of handling heterogeneous fields (handling large spectroscopic databases, non-linearly coupling radiation with other physics). We briefly describe here some of this research and we classify it by proposing three alternative viewpoints on the very same null-collision concept: an intuitive, physical point of view, called similitude; a viewpoint built on probability theory, where the null-collision method is seen as rejection sampling; and a more formal writing where the nonlinear exponential function is expanded into an infinite sum of linear terms. By formulating the null-collision concept under three distinct formalisms, our intention is to increase the reader's awareness of its flexibility. The idea defended and illustrated in this paper is that the ability to explore null-collision algorithms under their different forms has often led to a broadening of the solution space when facing difficult problems, including ones where the Monte Carlo method was consensually considered inapplicable. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
43. A hybrid scheme for enhancing recovered image quality in polynomial based secret image sharing by modify-and-recalculate strategy.
- Author
-
Wu, Xiaotian, Yang, Ching-Nung, and Yang, Yi-Yun
- Subjects
- *
IMAGE quality analysis , *IMAGE processing , *ALGORITHMS , *POLYNOMIALS , *COMPUTER science - Abstract
A hybrid approach for enhancing the recovered image quality in Shamir's polynomial based secret image sharing (SIS) is proposed in this paper. To retain computational efficiency, simple modular arithmetic is still used. The fundamental operation in the proposed method is mod P calculation, where P is a prime. A framework for the proposed method is first given. Mod P_G and mod P_S sharing algorithms are included. Then, a modify-and-recalculate strategy for mod P_S sharing is proposed to make sure that each generated modular result is within the interval [0, 2^N − 1]. Theoretical analysis of recovered image quality and extensive experiments are conducted. A recommendation on choosing the optimal sharing method for different values of N (8 ≤ N ≤ 17) is given. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF