12,438 results
Search Results
2. The Application of Optimized Particle Swarm Algorithm in Non-paper Examination.
- Author
-
Liang, Zhou, Lixin, Ke, Wu, Kaijun, Jianmin, Gong, and Jian, Hua
- Abstract
To address the impact of test-sheet composition algorithms on exam quality in non-paper (computer-based) examinations, we propose a new test-sheet composition algorithm. By comparing a variety of existing intelligent algorithms as applied to test-sheet composition, we identify their shortcomings, such as premature convergence caused by poor local search ability and a low convergence rate. The PSO algorithm has no crossover or mutation operators; it directly applies velocity and position update formulas and evaluates candidates with a fitness function at each iteration, so its principles and mechanisms are simpler. Building on the standard PSO algorithm, we propose a probability-based Binary Particle Swarm Optimization (BPSO) algorithm, in which the Bayes formula is used to mitigate the impact of human factors on convergence speed. The validity of the algorithm is demonstrated in simulation experiments implemented in Java. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
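The velocity/position update the abstract describes can be made concrete with a minimal binary-PSO sketch: velocities are pulled toward personal and global bests, and a sigmoid turns each velocity into a bit-flip probability. The difficulty data, fitness function, and parameter values below are invented for illustration; the paper's BPSO additionally uses a Bayes-formula probability step that is not reproduced here.

```python
import math
import random

random.seed(0)

# Hypothetical fitness: how close a selected subset of questions gets to a
# target total difficulty (real test-sheet composition uses richer criteria).
difficulties = [3, 1, 4, 1, 5, 9, 2, 6]
TARGET = 12

def fitness(bits):
    total = sum(d for d, b in zip(difficulties, bits) if b)
    return -abs(total - TARGET)  # higher is better

def bpso(n_particles=20, n_iters=50, w=0.7, c1=1.5, c2=1.5):
    n = len(difficulties)
    pos = [[random.randint(0, 1) for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=fitness)[:]
    for _ in range(n_iters):
        for i in range(n_particles):
            for j in range(n):
                r1, r2 = random.random(), random.random()
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))
                # Sigmoid turns velocity into a bit probability (binary PSO).
                prob = 1.0 / (1.0 + math.exp(-vel[i][j]))
                pos[i][j] = 1 if random.random() < prob else 0
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) > fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

best = bpso()
```

Each particle encodes a candidate question selection; no crossover or mutation operator is needed, matching the simplicity the abstract highlights.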
3. Item bank system and the test paper generation algorithm.
- Author
-
Yan, Song and Guoxing, Yang
- Abstract
Item bank system has been developed with Java and MySQL. The system contains course management module, examination questions management module, test paper management module and examination management module. The test paper generation algorithm used in this system was also discussed. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
4. The new automated IEEE INFOCOM review assignment system.
- Author
-
Li, Baochun and Hou, Y. Thomas
- Subjects
COMPUTER networks, SCALABILITY, SYSTEMS design, SYSTEMS engineering, LARGE scale systems
- Abstract
In academic conferences, the structure of the review process has always been considered a critical aspect of ensuring the quality of the conferences. Assigning reviews manually, by either the TPC chairs or Area Chairs, is time-consuming, and the process does not scale well to the number of submitted papers. Organizing a conference with multiple symposia (or tracks) helps its scalability, but predetermined boundaries between tracks may lead to inefficient use of reviewer expertise and suboptimal review assignment. Inspired by a rich literature on the problem of automated review assignment, we have designed and implemented a new review assignment system, called Erie, and successfully deployed it for IEEE INFOCOM 2015 and INFOCOM 2016. Implemented in Python, Erie is designed to use Latent Semantic Indexing to compute the suitability score between a submitted paper and a reviewer's representative papers, and to solve an optimization problem that maximizes the total suitability score across all submitted papers to the conference. Anecdotal evidence shows that Erie outperformed the accuracy of manual assignments by Area Chairs, and helped to improve the percentage of expert reviewers by a substantial margin. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
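The suitability-scoring step can be illustrated with a minimal Latent Semantic Indexing sketch: build a term-document matrix, truncate its SVD, and compare documents by cosine similarity in the reduced space. The matrix and rank below are toy assumptions; Erie additionally solves an assignment optimization over such scores, which is omitted here.

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents); in Erie a
# "document" would be a submission or a reviewer's representative papers.
A = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 0, 1],
    [0, 0, 1, 2],
], dtype=float)

k = 2  # reduced rank of the latent semantic space
U, s, Vt = np.linalg.svd(A, full_matrices=False)
docs = (np.diag(s[:k]) @ Vt[:k]).T  # each row: a document in k-dim latent space

def suitability(i, j):
    """Cosine similarity between documents i and j in the latent space."""
    a, b = docs[i], docs[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

score = suitability(0, 1)
```

A full review-assignment system would then maximize the total suitability over all paper-reviewer pairs subject to load constraints, e.g. with an assignment solver.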
5. Paper Classification by Topic Grouping in Citation Networks.
- Author
-
Su, Yi-Jen, Wun, Jian-Cheng, Hsu, Wei-Lin, and Chen, Yue-Qun
- Abstract
The enormous popularity of Web 2.0 social network services has led to much research on social network analysis (SNA). These studies focus on analyzing the complex interactive activities between users in the world of virtual networks. SNA has shown great potential in automatic document classification, especially in identifying citation networks of research papers and the references among them. This research adopts the Clique Percolation Method (CPM) to identify all overlapping subgroups in a citation network. In the grouping process, research papers with similar topics are grouped into the same topic group. Two papers are regarded as related when the common citation rate between them is higher than a threshold. A modified TF-IDF calculates the weight of each keyword in the topic groups. The keyword-weight vector represents the main features of each group, while the category of a newly arriving document is determined by a novel similarity function. All the papers under study are collected from the journal IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) published from 1979 to 2011. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
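The keyword-weighting and classification steps can be sketched with a simplified group-level TF-IDF; the paper's modified TF-IDF and similarity function are richer, and the token data below is invented for illustration.

```python
import math
from collections import Counter

# Toy topic groups: each group is a list of documents; each document is a
# list of tokens (stand-ins for extracted keywords).
groups = {
    "vision": [["image", "pixel", "edge"], ["edge", "filter", "image"]],
    "nlp":    [["word", "parse", "corpus"], ["corpus", "token", "word"]],
}

def group_tfidf(groups):
    """TF-IDF per group, treating each group's concatenated text as one
    document (a simplification of the paper's modified TF-IDF)."""
    docs = {g: Counter(t for d in ds for t in d) for g, ds in groups.items()}
    n = len(docs)
    df = Counter()
    for c in docs.values():
        for term in c:
            df[term] += 1
    weights = {}
    for g, c in docs.items():
        total = sum(c.values())
        weights[g] = {t: (f / total) * math.log(n / df[t]) for t, f in c.items()}
    return weights

def classify(doc, weights):
    """Assign a new document to the group with the highest weight overlap."""
    scores = {g: sum(w.get(t, 0.0) for t in doc) for g, w in weights.items()}
    return max(scores, key=scores.get)

w = group_tfidf(groups)
label = classify(["image", "edge", "noise"], w)
```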
6. An Apparatus and Method for Real-Time Stacked Sheets Counting With Line-Scan Cameras.
- Author
-
Chen, Tiejian, Wang, Yaonan, and Xiao, Changyan
- Subjects
PRINTING industry, PACKAGING industry, QUALITY control, ALGORITHMS, PRINTING paper
- Abstract
To satisfy the quality-control requirements of the printing and packaging industry, a sheet counting apparatus is developed that uses a line-scan camera to image the fringes of a sheet stack and provides a real-time, noncontact measurement of sheet quantity. After a brief introduction of the system architecture, our main work focuses on the sheet counting algorithms. The basic principle is to identify each sheet profile in the 1-D image with a robust ridge strength measurement. First, a multiscale bi-Gaussian ridge likelihood measurement and a ridge-valley descriptor are used to improve the detection of adjacent objects by increasing local contrast around sheet fringes. Then, a sheet recognition scheme, which integrates a peak detection algorithm with ridge-region criteria for verification, is proposed to discriminate true sheets in the obtained ridgeness measure. Experiments and tests on real production lines show that the apparatus achieves very high measuring accuracy for printing papers or cards with a thickness of no less than 0.2 mm. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
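The core counting step, identifying one peak per sheet in the 1-D ridge profile, can be sketched with a simple threshold-and-spacing peak detector; the signal, threshold, and minimum-gap criterion below are illustrative stand-ins for the paper's multiscale ridge measures.

```python
# Synthetic 1-D intensity profile scanned across a stack fringe: each bump
# corresponds to one sheet edge.
profile = [0, 1, 5, 1, 0, 2, 6, 2, 0, 1, 4, 1, 0]

def count_sheets(signal, threshold=3, min_gap=2):
    """Count local maxima above `threshold` that are at least `min_gap`
    samples apart (a stand-in for the paper's ridge-region criteria)."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if (signal[i] >= threshold
                and signal[i] > signal[i - 1]
                and signal[i] >= signal[i + 1]):
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return len(peaks)

n = count_sheets(profile)
```

Here the three bumps in the synthetic profile yield a count of three sheets.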
7. Submodular Memetic Approximation for Multiobjective Parallel Test Paper Generation.
- Author
-
Nguyen, Minh Luan, Hui, Siu Cheung, and Fong, Alvis C. M.
- Abstract
Parallel test paper generation is a biobjective distributed resource optimization problem, which aims to generate multiple similarly optimal test papers automatically according to multiple user-specified assessment criteria. Generating high-quality parallel test papers is challenging due to the NP-hardness of both collective objective functions. In this paper, we propose a submodular memetic approximation algorithm for solving this problem. The proposed algorithm is an adaptive memetic algorithm (MA), which exploits the submodular property of the collective objective functions to design greedy-based approximation algorithms for enhancing steps of the multiobjective MA. Synergizing the intensification of the submodular local search mechanism with the diversification of the population-based submodular crossover operator, our algorithm can jointly optimize the total quality maximization objective and the fairness quality maximization objective. Our MA can achieve provable near-optimal solutions in a huge search space of large datasets in efficient polynomial runtime. Performance results on various datasets have shown that our algorithm has drastically outperformed the current techniques in terms of paper quality and runtime efficiency. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
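The greedy building block that submodularity enables can be shown on the classic cardinality-constrained coverage problem, which admits a (1 - 1/e)-approximation. The skill-coverage objective and data below are toy assumptions, not the paper's actual objectives.

```python
# Each question covers a set of "skills"; coverage is monotone submodular,
# so greedy selection of the largest marginal gain is provably near-optimal.
questions = {
    "q1": {"algebra", "geometry"},
    "q2": {"geometry"},
    "q3": {"calculus", "algebra"},
    "q4": {"statistics"},
}

def greedy_cover(items, k):
    """Pick up to k items, each time maximizing the marginal coverage gain."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(items,
                   key=lambda q: len(items[q] - covered) if q not in chosen else -1)
        if not items[best] - covered:
            break  # no remaining marginal gain
        chosen.append(best)
        covered |= items[best]
    return chosen, covered

sel, cov = greedy_cover(questions, 2)
```

The memetic algorithm in the paper embeds greedy steps like this inside its local search and crossover operators.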
8. Research of paper surface defects detection system based on blob algorithm.
- Author
-
Qi, Xingguang, Li, Xiaoting, and Zhang, Hailun
- Abstract
Effective recognition and localization of paper defects based on machine vision are key issues for a paper-defect detection system. This paper proposes an improved algorithm that combines Blob analysis with an image preprocessing approach to detect paper defects in images captured by a linear charge-coupled device (CCD) camera. First, the captured images are preprocessed (image denoising, image segmentation, and connectivity analysis); effective paper features are then extracted, including defect count, regional area, long axis, short axis, and central position, and the minimum bounding rectangles are drawn. Compared with the traditional morphology, threshold-segmentation, and fractal-feature algorithms, the improved algorithm is validated by extensive experimental results, achieving high detection efficiency and defect-localization accuracy. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
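The blob-analysis step, labeling connected regions in a thresholded image and measuring their area and bounding box, can be sketched with a plain flood fill; the binary image below is invented, and real systems also extract axes and centroids.

```python
from collections import deque

# Binary image after thresholding (1 = candidate defect pixel).
img = [
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 1],
]

def blobs(image):
    """4-connected component labeling with area and bounding box per blob."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    out = []
    for r in range(h):
        for c in range(w):
            if image[r][c] and not seen[r][c]:
                q, pix = deque([(r, c)]), []
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    pix.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                ys = [p[0] for p in pix]
                xs = [p[1] for p in pix]
                out.append({"area": len(pix),
                            "bbox": (min(ys), min(xs), max(ys), max(xs))})
    return out

regions = blobs(img)
```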
9. Extraction of Answer Image of Choice Questions in Examination Paper.
- Author
-
Zhipeng, Xu
- Abstract
First, the skewed image of the examination paper is corrected by the Hough transform, and mathematical morphology methods are then used to locate the frame lines of the table. Connected-component analysis is then used to locate the form region based on the maximum-area feature. Finally, the form frame lines are removed according to the vertical gradient, and the answer image is obtained by analyzing the coordinates of the objects. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
10. Outstanding Paper Award: Fair Lateness Scheduling: Reducing Maximum Lateness in G-EDF-Like Scheduling.
- Author
-
Erickson, Jeremy P. and Anderson, James H.
- Abstract
Existing research in soft real-time scheduling has focused on determining tardiness bounds for a given scheduling algorithm. In this paper, we study lateness bounds, which are related to tardiness bounds, and propose a scheduling algorithm that minimizes lateness bounds, namely the global fair lateness (G-FL) algorithm. G-FL is a G-EDF-like scheduler, but has lower maximum lateness bounds than G-EDF. Due to its G-EDF-like nature, it can be used within existing systems that implement arbitrary-deadline G-EDF, and with existing synchronization protocols. Therefore, we argue that G-FL should replace G-EDF for SRT applications. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
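The "G-EDF-like" aspect is concrete: G-FL prioritizes jobs by a priority point that shifts each deadline earlier in proportion to the job's worst-case execution time, Y_i = D_i - ((m-1)/m) * C_i, where m is the processor count. The task values below are illustrative.

```python
m = 4  # number of processors (illustrative)

tasks = [  # (name, relative deadline D_i, worst-case execution time C_i)
    ("t1", 10.0, 4.0),
    ("t2", 10.0, 1.0),
]

def gfl_priority_point(deadline, wcet, m):
    """G-FL priority point: the deadline pulled earlier by ((m-1)/m)*C_i."""
    return deadline - ((m - 1) / m) * wcet

points = {name: gfl_priority_point(d, c, m) for name, d, c in tasks}
```

Under G-EDF the two tasks above would tie (equal deadlines); under G-FL the longer job t1 gets the earlier priority point, which is what evens out lateness across tasks.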
11. Online Coverage of Planar Environments by a Battery Powered Autonomous Mobile Robot.
- Author
-
Shnaps, Iddo and Rimon, Elon
- Subjects
MOBILE robots, STORAGE batteries, ALGORITHMS, BATTERY charge measurement, AUTONOMOUS robots
- Abstract
This paper is concerned with online coverage of unknown planar environments by a mobile robot of size D operating with a limited-capacity battery. The battery capacity is represented by the path length L that the robot can travel under a full charge. Starting at S, the robot has to cover a planar environment containing unknown obstacles and return to S upon task completion. During task execution, the robot may return to S at any time to recharge its battery. This paper first describes a battery powered offline coverage methodology, then introduces the battery powered coverage (BPC) algorithm, which performs online battery powered coverage using position and local obstacle detection sensors. The performance of the BPC algorithm is measured by its competitiveness, determined by comparing the mobile robot's total online path length, l, with the optimal offline solution l_opt. This paper establishes that the BPC algorithm has a competitive performance of l ≤ (L/D)·l_opt. It additionally establishes a universal lower bound of l ≥ log(L/(4D))·l_opt over all online battery powered coverage algorithms. An execution example illustrates the usefulness of the BPC algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2016
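The two competitive bounds can be made concrete with toy numbers; the values of L, D, and l_opt below are invented purely to show the arithmetic of the upper bound l ≤ (L/D)·l_opt and the universal lower bound l ≥ log(L/(4D))·l_opt.

```python
import math

L_batt = 100.0  # path length per full battery charge (illustrative)
D = 2.0         # robot size (illustrative)
l_opt = 50.0    # optimal offline coverage path length (illustrative)

# Worst-case online path length guaranteed by the BPC algorithm:
upper = (L_batt / D) * l_opt
# No online algorithm can beat this in the worst case:
lower = math.log(L_batt / (4 * D)) * l_opt
```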
12. Introduction to the Special Section on Nature Inspired Methods in Industry Applications.
- Author
-
Slowik, Adam and Kwasnicka, Halina
- Abstract
In this paper, we present a short introduction to the special section on nature-inspired optimization methods and their industry applications. The focus of this paper is on a brief presentation of the main idea (topics, algorithms, engineering problems) of the papers which were accepted for the publication in this special section. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
13. Optimized Multiagent Routing for a Class of Guidepath-Based Transport Systems.
- Author
-
Daugherty, Greyson, Reveliotis, Spyros, and Mohler, Greg
- Subjects
MULTIAGENT systems, QUANTUM computing, STRUCTURAL optimization, RESOURCE management, TRAFFIC congestion
- Abstract
This paper presents a heuristic algorithm for minimizing the makespan required to route a set of agents inhabiting a shared guidepath network from their initial locations to their respective destinations while observing a set of regulations that seek to ensure the safety and the integrity of the generated traffic. From an application standpoint, the presented developments are motivated by the traffic coordination challenges that arise in the context of many automated unit-load material handling systems and also in the transport of the ionized atoms that takes place in the context of quantum computing. From a methodological standpoint, our developments constitute a customization of the general “local-search” framework of combinatorial optimization theory to the traffic management problem that is considered in this paper. Hence, the presented results include a rigorous characterization of the considered problem and its solution space, detailed algorithms for the construction of the necessary initial solutions and the improving step for the pursued search, a complexity analysis of these algorithms, and a set of computational experiments that reveal and assess the computational efficiency of the presented algorithms and the efficacy of the derived solutions. The paper concludes with some suggestions for potential extensions of the presented results. Note to Practitioners—In many contemporary applications of automation science and engineering, a number of entities—or “agents”—must be transported expediently from their initial locations to certain destinations using a set of links that define the underlying “guidepath network.” Furthermore, various safety considerations require that the agents must be adequately separated during these transports, and the imposed restrictions turn the corresponding traffic coordination problem into a complex resource allocation problem, where the contested resources are the guidepath-network links. 
This paper presents a set of algorithms that can provide high-quality schedules for the resulting traffic-scheduling problems in a computationally efficient manner. These properties of our algorithms are established through the necessary theoretical analysis, but they are also demonstrated through a series of numerical experiments in which the algorithms are shown to be capable of providing near-optimal solutions for some very complex problem instances in no more than a few seconds. In addition, our algorithms are "complete," i.e., they will always provide a feasible schedule for any instantiation of the traffic-scheduling problem considered in this paper. Hence, they can effectively address the needs for "real-time" traffic management that arise in the context of the considered applications. [ABSTRACT FROM AUTHOR]
- Published
- 2019
14. Efficient Top-k Retrieval on Massive Data.
- Author
-
Han, Xixian, Li, Jianzhong, and Gao, Hong
- Subjects
INFORMATION retrieval, DATA analysis, APPLICATION software, QUERY (Information retrieval system), COMPUTER algorithms, DATA structures
- Abstract
In many applications, the top-k query is an important operation that returns a set of interesting points from a potentially huge data space. This paper shows that the existing algorithms cannot process top-k queries on massive data efficiently, and proposes a novel table-scan-based T2S algorithm to compute top-k results on massive data. T2S first constructs the presorted table, whose tuples are arranged in the order of the round-robin retrieval on the sorted lists. T2S maintains only a fixed number of tuples to compute results. The early-termination check for T2S is presented in this paper, along with an analysis of the scan depth. Selective retrieval is devised to skip the tuples in the presorted table which are not top-k results, and theoretical analysis proves that it can reduce the number of retrieved tuples significantly. The construction and incremental-update/batch-processing methods for the structures used are also proposed. Extensive experimental results, conducted on synthetic and real-life data sets, show that T2S has a significant advantage over the existing algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2015
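T2S's round-robin scan over presorted lists with an early-termination check echoes the classic threshold-algorithm idea for top-k queries. The sketch below implements that generic idea (not T2S itself) on toy sorted lists, with an object's full score taken as the sum of its per-list scores.

```python
# Two attribute lists, each sorted by descending score (toy data).
scores_a = [("p3", 9), ("p1", 7), ("p2", 3)]
scores_b = [("p1", 8), ("p2", 6), ("p3", 2)]

def topk(lists, k):
    """Round-robin depth-by-depth scan with threshold-based early stop."""
    lookup = [dict(l) for l in lists]
    seen, results = set(), {}
    for depth in range(max(len(l) for l in lists)):
        for l in lists:
            if depth < len(l):
                seen.add(l[depth][0])
        for obj in seen:
            results[obj] = sum(d.get(obj, 0) for d in lookup)
        # Threshold: the best possible score of any object not yet seen.
        threshold = sum(l[depth][1] for l in lists if depth < len(l))
        top = sorted(results.items(), key=lambda kv: -kv[1])[:k]
        # Early termination: the k-th result already beats the threshold.
        if len(top) == k and top[-1][1] >= threshold:
            return top
    return sorted(results.items(), key=lambda kv: -kv[1])[:k]

best = topk([scores_a, scores_b], 1)
```

The scan stops after two depths here, without ever fully scoring every object, which is the effect T2S's presorted table and selective retrieval achieve at scale.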
15. Network Resource Allocation via Stochastic Subgradient Descent: Convergence Rate.
- Author
-
Bedi, Amrit Singh and Rajawat, Ketan
- Subjects
RESOURCE allocation, STOCHASTIC processes, WIRELESS sensor networks, COGNITIVE radio, SMART power grids, CROSS layer optimization, RANDOM variables, SUBGRADIENT methods
- Abstract
This paper considers a general stochastic resource allocation problem that arises widely in wireless networks, cognitive radio networks, smart-grid communications, and cross-layer design. The problem formulation involves expectations with respect to a collection of random variables with unknown distributions, representing exogenous quantities such as channel gain, user density, or spectrum occupancy. We consider the constant step-size stochastic dual subgradient descent (SDSD) method that has been widely used for online resource allocation in networks. The problem is solved in the dual domain, which yields a primal resource allocation subproblem at each time instant. The goal here is to characterize the non-asymptotic behavior of such stochastic resource allocations in an almost sure sense. It is well known that with a step size of ε, SDSD converges to an O(ε)-sized neighborhood of the optimum. In practice, however, there exists a trade-off between the rate of convergence and the choice of ε. This paper establishes a convergence rate result for the SDSD algorithm that precisely characterizes this trade-off. Toward this end, a novel stochastic bound on the gap between the objective function and the optimum is developed. The asymptotic behavior of the stochastic term is characterized in an almost sure sense, thereby generalizing the existing results for stochastic subgradient methods. For the stochastic resource allocation problem at hand, the result explicates the rate at which the allocated resources become near-optimal. As an application, the power- and user-allocation problem in device-to-device networks is formulated and solved using the SDSD algorithm. Further intuition on the rate results is obtained from the verification of the regularity conditions and the accompanying simulation results. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
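The SDSD loop, solve a primal subproblem per time instant, then take a constant-step dual subgradient step, can be sketched on a toy average-power problem: maximize E[log(1 + h·p)] over p ≥ 0 subject to E[p] ≤ P_avg, with a random channel gain h. The step size, channel distribution, and budget below are all illustrative assumptions.

```python
import random

random.seed(1)

P_avg = 1.0   # average-power budget (illustrative)
eps = 0.05    # constant step size: the epsilon of the O(eps) neighborhood
lam = 1.0     # dual variable pricing the average-power constraint
avg_p = 0.0   # running average of the allocated power

for t in range(1, 2001):
    h = random.uniform(0.5, 2.0)               # exogenous random channel gain
    # Primal subproblem argmax_p log(1+h*p) - lam*p has a closed form:
    p = max(0.0, 1.0 / lam - 1.0 / h)
    # Dual subgradient step on the constraint violation, kept positive:
    lam = max(1e-6, lam + eps * (p - P_avg))
    avg_p += (p - avg_p) / t
```

With a constant step size the dual variable hovers in a neighborhood of its optimum rather than converging exactly, which is the trade-off the paper's rate result quantifies.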
16. A Framework for Large-Scale Multiobjective Optimization Based on Problem Transformation.
- Author
-
Zille, Heiner, Ishibuchi, Hisao, Mostaghim, Sanaz, and Nojima, Yusuke
- Subjects
MATHEMATICAL optimization, METAHEURISTIC algorithms, BENCHMARK testing (Engineering), OPTIMIZERS (Computer software), DIMENSIONAL reduction algorithms
- Abstract
In this paper, we propose a new method for solving multiobjective optimization problems with a large number of decision variables. The proposed method, called the weighted optimization framework, is intended to serve as a generic method that can be used with any population-based metaheuristic algorithm. After explaining some general issues of large-scale optimization, we introduce a problem transformation scheme that is used to reduce the dimensionality of the search space and search for improved solutions in the reduced subspace. This involves so-called weights that are applied to alter the decision variables and are themselves subject to optimization. Our method relies on grouping mechanisms and employs a population-based algorithm as an optimizer for both the original variables and the weight variables. Different grouping mechanisms and transformation functions within the framework are explained, and their advantages and disadvantages are examined. Our experiments use test problems with 2–3 objectives and 40–5000 variables. Applying our approach to three well-known algorithms and comparing its performance with other large-scale optimizers, we show that our method can significantly outperform most existing methods in terms of solution quality as well as convergence rate on almost all tested problems for many-variable instances. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
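The problem-transformation idea can be sketched in miniature: instead of searching all n decision variables, fix a candidate solution, partition its variables into a few groups, and optimize only one weight per group. The single sphere objective and plain random search below are deliberate simplifications (the framework is multiobjective and uses a population-based optimizer); all names and values are illustrative.

```python
import random

random.seed(2)

n = 40
n_groups = 4  # 40 variables reduced to 4 weight variables
x = [random.uniform(-5, 5) for _ in range(n)]  # fixed candidate solution

def sphere(v):
    """Toy objective to minimize (the paper targets multiple objectives)."""
    return sum(t * t for t in v)

def apply_weights(x, w):
    """Scale each group of variables by its group weight."""
    size = len(x) // len(w)
    return [w[i // size] * xi for i, xi in enumerate(x)]

# Search the 4-dimensional weight space instead of the 40-dim original space.
best_w = [1.0] * n_groups
best_f = sphere(apply_weights(x, best_w))
for _ in range(500):
    w = [random.uniform(-2, 2) for _ in range(n_groups)]
    f = sphere(apply_weights(x, w))
    if f < best_f:
        best_w, best_f = w, f
```

Even this crude search improves the candidate, illustrating why optimizing a handful of group weights is a cheap way to make progress in a huge search space.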
17. Outstanding Paper Award: Global Mixed-Criticality Scheduling on Multiprocessors.
- Author
-
Li, Haohan and Baruah, Sanjoy
- Abstract
The scheduling of mixed-criticality implicit-deadline sporadic task systems on identical multiprocessor platforms is considered, when inter-processor migration is permitted. A scheduling algorithm is derived and proved correct, and its properties investigated. Theoretical analysis (in the form of both a speedup factor and sufficient schedulability conditions) as well as extensive simulation experiments serve to demonstrate its effectiveness. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
18. A Comparison of Three Uniquely Different State of the Art and Two Classical Multiobjective Optimization Algorithms as Applied to Electromagnetics.
- Author
-
Nagar, Jogender and Werner, Douglas H.
- Subjects
GENETIC algorithms, APERTURE antennas, WAVEGUIDE antennas, COMMUNITIES, ELECTROMAGNETIC compatibility
- Abstract
This paper compares three modern and two classical multiobjective optimizers (MOOs) as applied to real-world problems in electromagnetics. The behavior of sophisticated optimizers on simple test functions has been studied exhaustively. In contrast, the algorithms here are tested on practical applications, where the function evaluations are computationally expensive, making the convergence rate a crucial factor. The examples considered include the optimization of a narrowband slot antenna, a mushroom-type electromagnetic bandgap structure, and an ultrawideband Vivaldi antenna. Another popular topic in the literature is comparing classical MOOs on electromagnetics problems. The modern optimizers chosen in this paper are state of the art, and each has a distinct design philosophy. This paper introduces two unique MOOs to the electromagnetics community: BORG, an auto-adaptive genetic algorithm, and the Multi-Objective Covariance Matrix Adaptation Evolutionary Strategy (MO-CMA-ES), an extension of the popular single-objective CMA-ES. These algorithms are compared to the Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D), a Chebyshev scalarization algorithm, and two classical MOOs. This paper studies the behavior of these algorithms on electromagnetics problems with a limited number of function evaluations using five distinct metrics, and provides useful guidelines and recommended optimizer settings. [ABSTRACT FROM AUTHOR]
- Published
- 2017
19. A computation and energy reduction technique for HEVC intra prediction.
- Author
-
Azgin, Hasan, Kalali, Ercan, and Hamzaoglu, Ilker
- Subjects
VIDEO coding, ENERGY consumption of computers, COMPUTER algorithms, PREDICTION models, HOUSEHOLD electronics
- Abstract
The intra prediction algorithm used in the High Efficiency Video Coding (HEVC) standard has very high computational complexity. In this paper, a novel technique is therefore proposed for reducing the amount of computation performed by the HEVC intra prediction algorithm and, thereby, the energy consumption of HEVC intra prediction hardware. The proposed technique significantly reduces the amount of computation performed by the 4x4, 8x8, 16x16, and 32x32 luminance angular prediction modes, without affecting PSNR or bit rate. A low-energy HEVC intra prediction hardware design for these angular prediction modes is also implemented using Verilog HDL, and the proposed technique significantly reduces its energy consumption. It can therefore be used in portable consumer electronics products that require a real-time HEVC encoder. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
20. Learning Graphical Models From the Glauber Dynamics.
- Author
-
Bresler, Guy, Gamarnik, David, and Shah, Devavrat
- Subjects
GRAPHICAL modeling (Statistics), MARKOV random fields, LEARNING modules, ESTIMATION theory, MATHEMATICAL variables, ALGORITHMS
- Abstract
In this paper, we consider the problem of learning undirected graphical models from data generated according to the Glauber dynamics (also known as the Gibbs sampler). The Glauber dynamics is a Markov chain that sequentially updates individual nodes (variables) in a graphical model and it is frequently used to sample from the stationary distribution (to which it converges given sufficient time). Additionally, the Glauber dynamics is a natural dynamical model in a variety of settings. This paper deviates from the standard formulation of graphical model learning in the literature, where one assumes access to independent identically distributed samples from the distribution. Much of the research on graphical model learning has been directed toward finding algorithms with low computational cost. As the main result of this paper, we establish that the problem of reconstructing binary pairwise graphical models is computationally tractable when we observe the Glauber dynamics. Specifically, we show that a binary pairwise graphical model on p nodes with maximum degree d can be learned in time f(d)·p^2·log p, for a function f(d) defined explicitly in this paper, using nearly the information-theoretic minimum number of samples. [ABSTRACT FROM AUTHOR]
- Published
- 2018
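The data-generating process the paper observes can be simulated directly: Glauber dynamics on a binary pairwise model repeatedly picks a node and resamples it conditioned on its neighbors. The 4-cycle graph and coupling value below are toy assumptions; the learner in the paper would receive exactly this (node, new value) trajectory.

```python
import math
import random

random.seed(3)

# Binary pairwise (Ising-style) model on a 4-cycle with uniform coupling.
edges = {(0, 1), (1, 2), (2, 3), (3, 0)}
theta = 0.8
nodes = [0, 1, 2, 3]

def neighbors(i):
    return [j for j in nodes if (i, j) in edges or (j, i) in edges]

state = [random.choice([-1, 1]) for _ in nodes]
trace = []  # the observed Glauber trajectory: (updated node, new value)
for _ in range(1000):
    i = random.choice(nodes)  # pick a node uniformly at random
    field = theta * sum(state[j] for j in neighbors(i))
    # Conditional P(x_i = +1 | neighbors) for a pairwise binary model:
    p_plus = 1.0 / (1.0 + math.exp(-2 * field))
    state[i] = 1 if random.random() < p_plus else -1
    trace.append((i, state[i]))
```

Because each observed update exposes a node's conditional distribution given its current neighborhood, the trajectory carries the structural information the reconstruction algorithm exploits.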
21. Solving the Group Multirole Assignment Problem by Improving the ILOG Approach.
- Author
-
Zhu, Haibin, Liu, Dongning, Zhang, Siqin, Teng, Shaohua, and Zhu, Yu
- Subjects
SCHEDULING software
- Abstract
Role assignment is a critical element of the role-based collaboration process, and many different requirements must be considered when undertaking this task. This correspondence paper formalizes the group multirole assignment (GMRA) problem, proves the necessary and sufficient condition for the problem to have a feasible solution, provides an improved IBM ILOG CPLEX optimization package solution, and verifies the proposed solution with experiments. The contributions of this paper include: 1) the formalization of an important engineering problem, i.e., the GMRA problem; 2) a theoretical proof of the necessary and sufficient condition for GMRA to have a feasible solution; and 3) an improved ILOG solution to this problem. [ABSTRACT FROM AUTHOR]
- Published
- 2017
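The objective behind role assignment can be shown on a heavily simplified instance: given a qualification matrix Q[agent][role], find the assignment maximizing total qualification. GMRA is far more general (agents may take multiple roles, and roles have occupancy ranges, which is why an ILP solver like CPLEX is used); the brute-force sketch and the matrix below are illustrative only.

```python
from itertools import permutations

# Toy qualification matrix: Q[agent][role] in [0, 1].
Q = [
    [0.8, 0.2, 0.5],
    [0.4, 0.9, 0.1],
    [0.6, 0.3, 0.7],
]

def best_assignment(Q):
    """Exhaustively try every one-agent-per-role assignment and keep the
    one with the highest total qualification."""
    n = len(Q)
    best, best_val = None, float("-inf")
    for perm in permutations(range(n)):  # perm[r] = agent assigned to role r
        val = sum(Q[perm[r]][r] for r in range(n))
        if val > best_val:
            best, best_val = perm, val
    return best, best_val

assign, total = best_assignment(Q)
```

Brute force is factorial in the number of roles; the ILP formulation in the paper is what makes realistic instance sizes tractable.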
22. Introduction to the IEEE Journal on Selected Topics in Signal Processing and IEEE Transactions on Signal and Information Processing Over Networks Joint Special Issue on Graph Signal Processing.
- Author
-
Frossard, Pascal, Dragotti, Pier Luigi, Ortega, Antonio, Rabbat, Michael G., and Ribeiro, Alejandro
- Abstract
The papers in this special issue are intended to address some of the main research challenges in Graph Signal Processing by presenting a collection of the latest advances in the domain. These papers examine key representation, learning and processing aspects for signals living on graphs and networks, as well as new methods and applications in graph signal processing. Numerous applications rely on the processing of high dimensional data that reside on irregular or otherwise unordered structures that are naturally modeled as networks. The need for new tools to process such data has led to the emergence of the field of graph signal processing, which merges algebraic and spectral graph theoretic concepts with computational harmonic analysis to process signals on structures such as graphs. This important new paradigm in signal processing research, coupled with its numerous applications in very different domains, has fueled the rapid development of an inter-disciplinary research community that has been working on theoretical aspects of graph signal processing and applications to diverse problems such as big data analysis, coding and compression of 3D point clouds, biological data processing, and brain network analysis. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
23. On Qualitative Analysis of Fault Trees Using Structurally Persistent Nets.
- Author
-
Rodriguez, Ricardo J.
- Subjects
FAULT trees (Reliability engineering), PETRI nets, LINEAR programming
- Abstract
A fault tree (FT) defines an undesired top event, characterizing it using logic combinations of lower-level undesired events. In this paper, we focus on coherent FTs, i.e., the logic is restricted to AND/OR formulas. FT analysis is used to identify and assess the minimal cut sets (MCSs) of an FT, which define the minimal set of events leading to the undesired state. The dual of MCS is minimal path set (MPS). MCS and MPS are commonly used for qualitative evaluation of FTs in safety and reliability engineering. This paper explores computation of the MCS/MPS of an FT by means of structural analysis (namely, computation of minimal p-semiflows) of a Petri net (PN) that represents the FT. To this end, we propose a formal definition of a coherent FT and a transformation from this model to a PN subclass (namely, structurally persistent nets). We also prove the relationship between minimal p-semiflows and MCS/MPS in an FT. In addition, we propose an algorithm that uses linear programming techniques to compute the MCS/MPS in an FT. Finally, we put our findings into practice by qualitatively evaluating the FT of a pressure tank system. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
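What "minimal cut sets of a coherent (AND/OR) fault tree" means can be shown with a small direct computation, independent of the paper's Petri-net method: OR gates union their children's cut sets, AND gates take cross-products, and non-minimal (superset) sets are pruned. The tree and event names below are invented.

```python
# A coherent fault tree as nested tuples: ("AND"/"OR", child, child, ...);
# leaves are basic-event names (illustrative).
tree = ("OR",
        ("AND", "pump_fails", "valve_stuck"),
        ("AND", "pump_fails", ("OR", "sensor_dead", "valve_stuck")))

def cut_sets(node):
    """Bottom-up minimal cut sets: OR unions children's cut sets, AND
    cross-multiplies them, then supersets are pruned."""
    if isinstance(node, str):
        return [frozenset([node])]
    op, *kids = node
    kid_sets = [cut_sets(k) for k in kids]
    if op == "OR":
        out = [s for ks in kid_sets for s in ks]
    else:  # AND: every combination of one cut set per child
        out = kid_sets[0]
        for ks in kid_sets[1:]:
            out = [a | b for a in out for b in ks]
    out = set(out)  # deduplicate before minimality pruning
    return [s for s in out if not any(t < s for t in out)]

mcs = cut_sets(tree)
```

Direct expansion like this blows up combinatorially on large trees, which motivates the structural (p-semiflow) computation the paper proposes.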
24. Observer-Based Consensus for Multiagent Systems Under Stochastic Sampling Mechanism.
- Author
-
Du, Sheng-Li, Xia, Weiguo, Ren, Wei, Sun, Xi-Ming, and Wang, Wei
- Subjects
MULTIAGENT systems, TIME delay systems
- Abstract
This paper is concerned with the consensus problem of general linear dynamic multiagent systems with stochastic sampling. In this paper, the sampling intervals randomly switch between two different values. The communication topology between agents is fixed and directed. Full- and reduced-order observers are designed based on neighboring agents' relative output information, and algorithms to construct such observers are provided. By using the estimated states of the agents, the observer-based consensus protocol with stochastic sampling is presented. Sufficient conditions to ensure consensus in mean square are derived by using Lyapunov stability theory. Finally, simulations are given to examine the effectiveness of the proposed methods. [ABSTRACT FROM AUTHOR]
- Published
- 2018
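The baseline dynamics underlying such protocols is the sampled consensus update x ← x - ε·L·x, where L is the graph Laplacian of the (directed) communication topology. The sketch below shows only this basic update on a toy 3-agent cycle; the paper's contribution, observers estimating the state from relative outputs under stochastic sampling intervals, is not reproduced.

```python
import numpy as np

# Directed 3-cycle adjacency (strongly connected, illustrative).
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A   # graph Laplacian

x = np.array([1.0, 5.0, 9.0])    # initial agent states
eps = 0.3                        # sampling-period-like step size

for _ in range(200):
    x = x - eps * (L @ x)        # each agent moves toward its neighbor

spread = x.max() - x.min()       # disagreement among agents
```

Because the cycle is balanced, the state average is conserved and all agents converge to it.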
25. Concise Planning and Filtering: Hardness and Algorithms.
- Author
-
O'Kane, Jason M. and Shell, Dylan A.
- Subjects
CONJOINT analysis, MARKETING research, COMPUTATIONAL acoustics, COMPUTATIONAL physics, BANDWIDTH allocation
- Abstract
Motivated by circumstances with severe computational resource limits (e.g., settings with strong constraints on memory or communication), this paper addresses the problem of concisely representing and processing information for estimation and planning tasks. In this paper, conciseness is a measure of explicit representational complexity: for filtering, we are concerned with maintaining as little state as possible to perform a given task; for the planning case, we wish to generate the plan graph (or policy graph) with the fewest vertices that is correct and also complete. We present hardness results showing that both filtering and planning are NP-hard to perform in an optimally concise way, and that the related decision problems are NP-complete. We also describe algorithms for filter reduction and concise planning, for which these hardness results justify the potentially suboptimal output. The filter-reduction algorithm accepts as input an arbitrary combinatorial filter, expressed as a transition graph, and outputs an equivalent filter that uses fewer I-states to complete the same filtering task. The planning algorithm, using the filter-reduction algorithm as a subroutine, generates concise plans for planning problems that may involve both nondeterminism and partial observability. Both algorithms are governed by parameters that encode tradeoffs between computational efficiency and solution quality. We describe implementation of both algorithms and present a series of experiments evaluating their effectiveness. 
Note to Practitioners—The reduced filters and plans explored in this paper are of practical interest in several contexts, including: 1) on robot platforms with severely limited computational power; 2) communication over low-bandwidth noisy channels; 3) a special instance of the previous case includes human-robot interaction settings where interfaces constrain information transfer; and 4) understanding the size and the structure of concise plans or filters for given problems provides insights into those problems (e.g., to assess the value of a particular sensor by comparing the size of filters with or without it). [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
26. On the Polygon Containment Problem on an Isometric Grid.
- Author
-
Wei, Xiangzhi, Zhao, Bao, Joneja, Ajay, and Xi, Juntong
- Subjects
ISOMETRICS (Mathematics) ,COMPUTER algorithms ,PACKING problem (Mathematics) ,GRID computing - Abstract
This paper addresses the issue of placing a simple polygon (under translation and rotation) on an isometric triangular grid such that the polygon contains the maximum number of triangles in its closure. This solves the problem left open in two recent papers titled “On the problem of the automated design of large-scale robot skin” and “An improved algorithm for the automated design of large scale robot skin” published in the IEEE Transactions on Automation Science and Engineering. Based on the properties of the grid, an improved algorithm is also presented. We also present some experimental results describing the use of this algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
27. A Parallel Proximal Algorithm for Anisotropic Total Variation Minimization.
- Author
-
Kamilov, Ulugbek S.
- Subjects
PARALLEL algorithms ,INVERSE problems ,APPROXIMATION theory ,SIGNAL processing ,MATHEMATICAL optimization - Abstract
Total variation (TV) is one of the most popular regularizers for stabilizing the solution of ill-posed inverse problems. This paper proposes a novel proximal-gradient algorithm for minimizing TV-regularized least-squares cost functionals. Unlike traditional methods that require nested iterations for computing the proximal step of TV, our algorithm approximates the latter with several simple proximals that have closed-form solutions. We theoretically prove that the proposed parallel proximal method achieves the TV solution with arbitrarily high precision at a global rate of convergence that is equivalent to the fast proximal-gradient methods. The results in this paper have the potential to enhance the applicability of TV for solving very large-scale imaging inverse problems. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
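The proximal-gradient template underlying methods like the one above can be sketched briefly. This is not the paper's parallel TV algorithm; it is a generic ISTA iteration for an $\ell_1$-regularized least-squares problem, built on the elementwise soft-threshold, the kind of simple closed-form proximal the abstract mentions. All names are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    # Closed-form proximal operator of t * ||x||_1: elementwise shrinkage.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, step, iters=200):
    # Proximal-gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                         # gradient of smooth term
        x = soft_threshold(x - step * grad, step * lam)  # proximal step
    return x
```

With `A` equal to the identity the iteration reduces to a single shrinkage of `b`, which makes the fixed point easy to verify by hand.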
28. Generalized MUSIC-Like Array Processing for Underwater Environments.
- Author
-
Lim, Hock Siong, Ng, Boon Poh, and Reddy, Vinod V.
- Subjects
MULTIPLE Signal Classification ,UNDERWATER acoustics ,STATISTICAL correlation ,MONTE Carlo method ,DIRECTION of arrival estimation - Abstract
This paper proposes the generalized MUltiple SIgnal Classification (MUSIC)-like algorithm for robust MUSIC-like processing for underwater applications. The solution proposed in this paper is to generalize the noise correlation assumption and include a noise correlation model in its problem formulation. By doing so, the proposed generalized MUSIC-like algorithm is able to provide robust MUSIC-like performance in any noise condition, so long as the noise correlation property of the environment is partially known. Results from simulations and real data processing show that our proposed algorithm is able to suppress spurious peaks caused by mismatched noise assumptions in standard MUSIC-like algorithms. The bound of the controlling parameter $\beta$ for the proposed generalized MUSIC-like algorithm is also discussed in this paper. Performance study using Monte Carlo simulations shows that the proposed generalized MUSIC-like algorithm has the same resolving power as the MUSIC method but slightly poorer accuracy in direction-of-arrival (DOA) estimation. This paper also presents the results from real data processing by the generalized MUSIC-like algorithm and demonstrates better resolving power than the Capon and MUSIC algorithms used in the experiment. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
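As a point of reference for the baseline the generalized algorithm extends, here is a minimal sketch of the standard MUSIC pseudospectrum for a uniform linear array, assuming spatially white noise (the very assumption the paper relaxes). The function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def music_spectrum(R, n_sources, angles_deg, n_sensors, d=0.5):
    # Standard MUSIC pseudospectrum for a uniform linear array with
    # element spacing d (in wavelengths), assuming white noise.
    evals, evecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = evecs[:, : n_sensors - n_sources]      # noise subspace (smallest eigenvalues)
    spectrum = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d * np.arange(n_sensors) * np.sin(th))
        denom = np.real(a.conj() @ En @ En.conj().T @ a) + 1e-12
        spectrum.append(1.0 / denom)            # peaks at source directions
    return np.array(spectrum)
```

A steering vector orthogonal to the noise subspace drives the denominator toward zero, producing the sharp peak MUSIC is known for.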
29. On the Easiest and Hardest Fitness Functions.
- Author
-
He, Jun, Chen, Tianshi, and Yao, Xin
- Subjects
EVOLUTIONARY algorithms ,MATHEMATICAL functions ,GENETIC programming ,EVOLUTIONARY computation ,POLYNOMIALS - Abstract
The hardness of fitness functions is an important research topic in the field of evolutionary computation. In theory, such analysis can help with understanding the capability of evolutionary algorithms (EAs); in practice, it may provide a guideline for the design of benchmarks. The aim of this paper is to answer the following research questions. Given a fitness function class, which functions are the easiest with respect to an EA? Which are the hardest? How are these functions constructed? This paper provides theoretical answers to these questions. The easiest and hardest fitness functions are constructed for an elitist (1 + 1) EA to maximize a class of fitness functions with the same optima. It is demonstrated that unimodal functions are the easiest and deceptive functions are the hardest in terms of the time-based fitness landscape. This paper also reveals that, within a fitness function class, the easiest function for one algorithm may become the hardest for another, and vice versa. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
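The elitist (1+1) EA analyzed above is simple enough to sketch directly. The snippet below runs it on OneMax, a classic unimodal function of the "easiest" kind discussed in the abstract; the mutation rate 1/n and acceptance rule follow the standard textbook formulation, not any code from the paper.

```python
import random

def one_plus_one_ea(fitness, n, max_iters=5000, seed=1):
    # Elitist (1+1) EA on bit strings: flip each bit with probability 1/n
    # and keep the offspring if it is at least as fit as the parent.
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(max_iters):
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]  # standard bit mutation
        fy = fitness(y)
        if fy >= fx:                                       # elitist acceptance
            x, fx = y, fy
    return x, fx
```

On OneMax (fitness = number of ones) the expected optimization time is O(n log n), so a few thousand iterations comfortably suffice for small n.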
30. Anomaly Detection in Moving-Camera Video Sequences Using Principal Subspace Analysis.
- Author
-
Thomaz, Lucas A., Jardim, Eric, da Silva, Allan F., da Silva, Eduardo A. B., Netto, Sergio L., and Krim, Hamid
- Subjects
SEQUENCES (Motion pictures) ,ANOMALY detection (Computer security) ,SUBSPACE identification (Mathematics) - Abstract
This paper presents a family of algorithms based on sparse decompositions that detect anomalies in video sequences obtained from slow moving cameras. These algorithms start by computing the union of subspaces that best represents all the frames from a reference (anomaly free) video as a low-rank projection plus a sparse residue. Then, they perform a low-rank representation of a target (possibly anomalous) video by taking advantage of both the union of subspaces and the sparse residue computed from the reference video. Such algorithms provide good detection results while at the same time obviating the need for previous video synchronization. However, this is obtained at the cost of a large computational complexity, which hinders their applicability. Another contribution of this paper approaches this problem by using intrinsic properties of the obtained data representation in order to restrict the search space to the most relevant subspaces, providing computational complexity gains of up to two orders of magnitude. The developed algorithms are shown to cope well with videos acquired in challenging scenarios, as verified by the analysis of 59 videos from the VDAO database that comprises videos with abandoned objects in a cluttered industrial scenario. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
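The low-rank-plus-sparse split that the family of algorithms above relies on can be illustrated with a small alternating scheme in the spirit of robust PCA. This is not the authors' algorithm; the names and the update order are illustrative.

```python
import numpy as np

def low_rank_plus_sparse(M, rank, lam, iters=50):
    # Alternate between a truncated-SVD low-rank fit L and an
    # elementwise soft-thresholded sparse residue S, so M ~= L + S.
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]           # best rank-r fit
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # sparse residue
    return L, S
```

By construction, every entry of the residual M - L - S has magnitude at most `lam`, so the pair (L, S) reproduces M up to that tolerance.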
31. Context-Aware Dynamic Asset Allocation for Maritime Interdiction Operations.
- Author
-
Sidoti, David, Pattipati, Krishna R., Han, Xu, Zhang, Lingyi, Avvari, Gopi Vinod, Ayala, Diego Fernando Martinez, Mishra, Manisha, Sankavaram, Muni Sravanth, Kellmeyer, David L., and Hansen, James A.
- Subjects
ASSET allocation ,DYNAMIC programming ,MARITIME shipping ,DRUG traffic ,WORK structure ,MARITIME management - Abstract
This paper validates two approximate dynamic programming approaches on a maritime interdiction problem involving the allocation of multiple heterogeneous assets over a large area of responsibility to interdict multiple drug smugglers using heterogeneous types of transportation on the sea with varying contraband weights. The asset allocation is based on a probability of activity surface, which represents spatio-temporal target activity obtained by integrating intelligence data on drug smuggler whereabouts/waypoints for contraband transportation, behavior models, and meteorological and oceanographic information. We validate the proposed architectural and algorithmic concepts via several realistic mission scenarios. We conduct sensitivity analyses to quantify the robustness and proactivity of our approach, as well as to measure the value of information used in the allocation process. The contributions of this paper have been transitioned to and are currently being tested by Joint Interagency Task Force—South, an organization tasked with providing the initial line of defense against drug trafficking in the East Pacific and Caribbean Oceans. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
32. Adaptive Quantized Estimation Fusion Using Strong Tracking Filtering and Variational Bayesian.
- Author
-
Ge, Quanbo, Wei, Zhongliang, Liu, Mingxin, Yu, Junzhi, and Wen, Chenglin
- Subjects
KALMAN filtering ,FILTERS & filtration ,ARTIFICIAL satellite tracking - Abstract
In this paper, adaptive quantized state estimation fusion is studied in depth. To address the model-mismatch problem induced by random quantization, several quantized Kalman filters have been presented in previous work, such as the quantized Kalman filter with strong tracking filtering (QKF-STF), the variational Bayesian adaptive quantized Kalman filter (VB-AQKF), and a centralized-fusion-frame-based complex quantized filter called the variational Bayesian adaptive QKF-STF (VB-AQKF-STF). Building on this previous work for single-sensor systems, a distributed complex quantized filter is designed in this paper: a novel quantized Kalman filter based on a multiple-method fusion scheme (QKF-MMF). Similar to the VB-AQKF-STF, the QKF-MMF can also realize joint estimation of the state and the quantization error covariance under the distributed fusion frame. Furthermore, it extends the single-sensor results to multisensor tracking systems by using centralized and distributed fusion frames. Two multisensor quantized fusion estimators are proposed for a parallel structure with main-secondary processors in the fusion center. Weighted fusion and embedded integration are applied to design the multisensor quantized fusion methods. The proposed work improves the family of quantized estimation algorithms and provides different choices for practical engineering applications. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
33. The Improved BP Algorithm Based on MapReduce and Genetic Algorithm.
- Author
-
Chenje, Zhu and Ruonan, Rao
- Abstract
The traditional BP neural network training method processes the training dataset serially on one machine, so its efficiency is quite low. The massive data that need to be explored pose a great challenge for the BP neural network: the traditional serial training method encounters many problems, such as excessive training time and insufficient memory to finish the training process. To solve these problems, this paper proposes a new parallel training method based on MapReduce and a genetic algorithm, called MR-GAIBP (MapReduce-based Genetic Algorithm Improved Back Propagation). MR-GAIBP includes two parts: a MapReduce-based BP algorithm and a MapReduce-based genetic algorithm. The genetic algorithm is first iterated for a few generations to find appropriate initial weights for the BP neural network; then the BP algorithm is used to find weights that meet the requirement. In the BP phase, local iteration is used to speed up convergence. Experiment results demonstrate that MR-GAIBP has a faster convergence rate and higher accuracy compared with previous MapReduce-based algorithms proposed in other papers. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
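The map/reduce structure of training described above can be sketched generically. The toy below data-parallelizes gradient computation for a plain least-squares model rather than a BP network, purely to show the map (per-shard gradients) and reduce (averaging) phases; it is not the MR-GAIBP code, and all names are illustrative.

```python
import numpy as np

def shard_gradient(w, X, y):
    # "Map" step: mean-squared-error gradient on one data shard.
    return X.T @ (X @ w - y) / len(y)

def mapreduce_gd(w, shards, lr=0.1, epochs=500):
    # "Reduce" step: average the shard gradients, then take one global update.
    for _ in range(epochs):
        grads = [shard_gradient(w, X, y) for X, y in shards]  # parallelizable
        w = w - lr * np.mean(grads, axis=0)
    return w
```

With equal-size shards the averaged gradient equals the full-batch gradient, so the parallel scheme follows the same trajectory as serial gradient descent.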
34. Fast Recommendation on Bibliographic Networks.
- Author
-
Kucuktunc, Onur, Kaya, Kamer, Saule, Erik, and Catalyurek, Umit V.
- Abstract
Graphs and matrices are widely used in algorithms for social network analyses. Since the number of interactions is much less than the possible number of interactions, the graphs and matrices used in the analyses are usually sparse. In this paper, we propose an efficient implementation of a sparse-matrix computation which arises in our publicly available citation recommendation service called the advisor. The recommendation algorithm uses a sparse matrix generated from the citation graph. We observed that the nonzero pattern of this matrix is highly irregular and the computation suffers from high number of cache misses. We propose techniques for storing the matrix in memory efficiently and reducing the number of cache misses. Experimental results show that our techniques are highly efficient on reducing the query processing time which is highly crucial for a web service. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
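The cache behavior discussed above is tied to how the sparse matrix is laid out in memory. A minimal compressed sparse row (CSR) layout, the common starting point for such optimizations, looks like this (illustrative code, not the advisor service's implementation):

```python
import numpy as np

def dense_to_csr(A):
    # CSR layout: contiguous nonzero values, their column indices,
    # and row pointers delimiting each row's slice of the arrays.
    data, indices, indptr = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)
                indices.append(j)
        indptr.append(len(data))
    return np.array(data, dtype=float), np.array(indices, dtype=int), np.array(indptr)

def csr_matvec(data, indices, indptr, x):
    # y = A @ x touching only the stored nonzeros, row by row.
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        s, e = indptr[i], indptr[i + 1]
        y[i] = data[s:e] @ x[indices[s:e]]
    return y
```

Because each row's nonzeros are stored contiguously, the traversal in `csr_matvec` is sequential over `data`; the irregular accesses that cause cache misses are confined to the gathers from `x`.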
35. Stability of Evolving Fuzzy Systems Based on Data Clouds.
- Author
-
Rong, Hai-Jun, Angelov, Plamen P., Gu, Xiaowei, and Bai, Jian-Ming
- Subjects
FUZZY systems ,CLOUD storage ,LYAPUNOV stability ,MEMBERSHIP functions (Fuzzy logic) ,MACHINE learning - Abstract
Evolving fuzzy systems (EFSs) are now well developed and widely used, thanks to their ability to self-adapt both their structures and parameters online. Since the concept was first introduced two decades ago, many different types of EFSs have been successfully implemented. However, there are only very few works considering the stability of EFSs, and these studies were limited to certain types of membership functions with specifically predefined parameters, which largely increases the complexity of the learning process. At the same time, stability analysis is of paramount importance for control applications and provides the theoretical guarantees for the convergence of the learning algorithms. In this paper, we introduce the stability proof of a class of EFSs based on data clouds, which are grounded in the AnYa-type fuzzy systems and the recently introduced empirical data analytics (EDA) methodological framework. By employing data clouds, the class of EFSs of AnYa type considered in this paper avoids the traditional way of defining membership functions for each input variable in an explicit manner, and its learning process is entirely data driven. The stability of the considered EFS of AnYa type is proven through the Lyapunov theory, and the proof of stability shows that the average identification error converges to a small neighborhood of zero. Although the stability proof presented in this paper is elaborated specifically for the considered EFS, it is also applicable to general EFSs. The proposed method is illustrated with the Box–Jenkins gas furnace problem, one nonlinear system identification problem, the Mackey–Glass time series prediction problem, eight real-world benchmark regression problems, as well as a high-frequency trading prediction problem. Compared with other EFSs, the numerical examples show that the considered EFS in this paper provides guaranteed stability as well as a better approximation accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
36. An Optimal Robotic Assembly Sequence Planning by Assembly Subsets Detection Method Using Teaching Learning-Based Optimization Algorithm.
- Author
-
Gunji, A. Balamurali, Deepak, B. B. B. V. L., Bahubalendruni, C. M. V. A. Raju, and Biswal, D. Bibhuti Bhushan
- Subjects
ROBOTIC assembly ,COMBINATORIAL optimization ,BOREL subsets ,ENERGY consumption ,PROBLEM solving - Abstract
In recent years, manufacturing industries have developed many products with intricately interacting shapes for applications in fields such as defense, aerospace, and space centers. Assembly operations account for about 30% of total manufacturing time, more than any other manufacturing process. It is very difficult to obtain an optimal sequence because assembly sequence planning is a multimodal optimization problem. As the number of parts in the assembly increases, the possible number of sequences increases exponentially; therefore, obtaining the optimal assembly sequence becomes more difficult and time consuming. Many mathematical algorithms exist to obtain optimal assembly sequences, but recent studies indicate that they perform poorly when it comes to multiobjective optimal assembly sequencing. In recent years, researchers have developed several soft-computing-based algorithms for solving assembly sequence problems. In this paper, an assembly subset detection method is introduced. The proposed method is applied for the first time to solve assembly sequence problems. This method eliminates those assembly sets that have more directional changes and require more energy. The method is compared with other algorithms, namely, genetic algorithm (GA), enhanced GA, ant colony optimization (ACO), memetic algorithm, imperialistic harmonic search algorithm, and flower pollination algorithm (FPA), and is found to be successful in achieving the optimal assembly sequence for an industrial product with a smaller number of iterations. Note to Practitioners—This paper is motivated by the redesign of the helicopter cowling of a Canadian aircraft company using concepts of design for assembly. Though we could reduce the number of parts using advanced composite materials and manufacturing processes, obtaining a feasible assembly for the new assembly structure required a lot of computation time.
Hence, the researchers studied the existing literature on assembly sequence generation methods and their limitations, and came up with efficient automated optimal sequence generation method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
37. Cost Pattern Value Method for Local Search Algorithms Applied to Optimal FEA-Based Design of Induction Motors.
- Author
-
Lee, Dongsu and Jung, Ho-Chang
- Subjects
INDUCTION motors ,SEARCH algorithms ,TORQUE control ,FINITE element method ,OPTIMAL designs (Statistics) - Abstract
This paper presents an approach to obtain high torque density in induction motor (IM) design by optimizing the search algorithms and processes. The optimal design of electrical machines involves many sub-optimal peaks and requires significant computation time. To solve this problem, this paper proposes a new cost pattern value method (CPVM) for local search algorithms employed in optimal design. The CPVM concept is an intelligent strategy that digitizes the gradient over the search region by sharing information between neighboring points. When the algorithm converges quickly to a local optimum by a normalized weight method, it provides a mechanism to escape from that region. The validity of the proposed approach was evaluated using a benchmark function, and it was applied to the optimal design of a 260 kW IM to maximize torque production based on finite-element analysis. For better accuracy, we employ slip-frequency analysis compatible with vector control for the IM numerical analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
38. A Global Bayesian Optimization Algorithm and Its Application to Integrated System Design.
- Author
-
Torun, Hakki Mert, Swaminathan, Madhavan, Kavungal Davis, Anto, and Bellaredj, Mohamed Lamine Faycal
- Subjects
INTEGRATED circuit design ,BAYESIAN analysis ,ALGORITHMS - Abstract
Increasing levels of system integration pose difficulties in meeting design specifications for high-performance systems. Oftentimes, increased complexity, nonlinearity, and multiple tradeoffs need to be handled simultaneously during the design cycle. Since components in such systems are highly correlated with each other, codesign and co-optimization of the complete system are required. Machine learning (ML) provides opportunities for analyzing such systems with multiple control parameters, where techniques based on Bayesian optimization (BO) can be used to meet or exceed design specifications. In this paper, we propose a new BO-based global optimization algorithm titled Two-Stage BO (TSBO). TSBO can be applied to black-box optimization problems where the computational time can be reduced through a reduction in the number of simulations required. Empirical analysis on a set of popular challenge functions with several local extrema and dimensions shows TSBO to have a faster convergence rate compared with other optimization methods. In this paper, TSBO has been applied to clock skew minimization in 3-D integrated circuits and multiobjective co-optimization for maximizing efficiency in integrated voltage regulators. The results show that TSBO is between $2\times$ and $4\times$ faster than previously published BO algorithms and other non-ML-based techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
39. AntMapper: An Ant Colony-Based Map Matching Approach for Trajectory-Based Applications.
- Author
-
Gong, Yue-Jiao, Chen, En, Zhang, Xinglin, Ni, Lionel M., and Zhang, Jun
- Abstract
Many trajectory-based applications require an essential step of mapping raw GPS trajectories onto the digital road network accurately. This task, commonly referred to as map matching, is challenging due to the measurement error of GPS devices in critical environment and the sampling error caused by long sampling intervals. Traditional algorithms focus on either a local or a global perspective to deal with the problem. To further improve the performance, this paper develops a novel map matching model that considers local geometric/topological information and a global similarity measure simultaneously. To accomplish the optimization goal in this complex model, we adopt an ant colony optimization algorithm that mimics the path finding process of ants transporting food in nature. The algorithm utilizes both local heuristic and global fitness to search the global optimum of the model. Experimental results verify that the proposed algorithm is able to provide accurate map matching results within a relatively short execution time. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
40. Efficient Planar Caging Test Using Space Mapping.
- Author
-
Wan, Weiwei and Fukui, Rui
- Subjects
PLANAR motion ,MOBILE robots ,ROBOT dynamics ,ROBOT motion ,PLANAR transistors - Abstract
This paper presents an efficient algorithm to test whether a planar object can be caged by a formation of point agents (point fingertips or point mobile robots). The algorithm is based on a space mapping between the 2-D work space ($\mathcal{W}$-space) and the 3-D configuration space ($\mathcal{C}$-space) of the given agent formation. When performing a caging test on a planar object, the algorithm looks up the space mapping to recover the $\mathcal{C}$-space of the given agent formation, labels the recovered $\mathcal{C}$-space, and counts the number of labeled surfaces to judge the success of caging. The algorithm is able to work with various planar shapes, including objects with convex boundaries, concave boundaries, or holes. It can also respond quickly to varying agent formations and different object shapes. Experiments and analysis on different objects and fingertip formations demonstrate the completeness, robustness, and efficiency of our proposal.
Note to Practitioners—This paper proposes an algorithm to solve a geometric problem—find whether a given formation of planar points can constrain (or cage) a planar shape. Users can use the proposed algorithm to actuate a formation of robotic fingertips to perform caging-based grasping tasks or use the proposed algorithm to actuate a formation of mobile robots to perform cooperative transportation tasks. The algorithm inherits the merits of caging and helps users to avoid explicit force analysis. It offers robustness to avoid uncertainty in the tasks. The code of our work is in the supplementary material. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
41. $\lambda$-Domain Optimal Bit Allocation Algorithm for High Efficiency Video Coding.
- Author
-
Li, Li, Bin, Li, Houqiang, and Chen, Chang Wen
- Subjects
MATHEMATICAL optimization ,THEORY of knowledge ,BIT allocation analysis ,ALGORITHMS ,VIDEO coding - Abstract
Rate control typically involves two steps: bit allocation and bitrate control. The bit allocation step can be implemented in various fashions depending on how many levels of allocation are desired and whether or not an optimal rate–distortion (R-D) performance is pursued. The bitrate control step has a simple aim in achieving the target bitrate as precisely as possible. In our recent research, we have developed a $\lambda$-domain rate control algorithm that is capable of controlling the bitrate precisely for High Efficiency Video Coding (HEVC). The initial research showed that the bitrate control in the $\lambda$-domain can be more precise than the conventional schemes. However, the simple bit allocation scheme adopted in this initial research is unable to achieve an optimal R-D performance reflecting the inherent R-D characteristics governed by the video content. In order to achieve an optimal R-D performance, the bit allocation algorithms need to be developed taking into account the video content of a given sequence. The key issue in deriving the video-content-guided optimal bit allocation algorithm is to build a suitable R-D model to characterize the R-D behavior of the video content. In this paper, to complement the R-$\lambda$ model developed in our initial work, a D-$\lambda$ model is properly constructed to complete a comprehensive framework of $\lambda$-domain R-D analysis. Based on this comprehensive $\lambda$-domain R-D analysis framework, a suite of optimal bit allocation algorithms is developed. In particular, we design both picture-level and basic-unit-level bit allocation algorithms based on the fundamental R-D optimization theory to take full advantage of the content-guided principles. The proposed algorithms are implemented in the HEVC reference software, and the experimental results demonstrate that they can achieve an obvious R-D performance improvement with a smaller bitrate control error.
The proposed bit allocation algorithms have already been adopted by the Joint Collaborative Team on Video Coding and integrated into the HEVC reference software. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
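The R-$\lambda$ relationship at the core of $\lambda$-domain rate control can be sketched in a few lines. The power-law form below follows the published model; the numeric constants are only illustrative initial values (encoders adapt them from coding statistics), and the proportional picture-level allocation is a simplification of the content-guided schemes the paper develops.

```python
def lambda_from_bpp(bpp, alpha=3.2003, beta=-1.367):
    # R-lambda model: lambda = alpha * bpp**beta, with bpp = bits per pixel.
    # alpha and beta here are illustrative starting values; real encoders
    # update them after each coded picture.
    return alpha * bpp ** beta

def allocate_picture_bits(total_bits, weights):
    # Simplified picture-level allocation: share the bit budget in
    # proportion to per-picture weights.
    s = float(sum(weights))
    return [total_bits * w / s for w in weights]
```

Because `beta` is negative, a tighter bit budget maps to a larger $\lambda$, i.e., heavier rate weighting in the R-D cost.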
42. MultiComm: Finding Community Structure in Multi-Dimensional Networks.
- Author
-
Li, Xutao, Ng, Michael K., and Ye, Yunming
- Subjects
COMMUNITY organization ,DATA mining ,SOCIAL media ,COMPUTER networks ,PROBABILITY theory - Abstract
The main aim of this paper is to develop a community discovery scheme in a multi-dimensional network for data mining applications. In online social media, networked data consist of multiple dimensions/entities such as users, tags, photos, comments, and stories. We are interested in finding a group of users who interact significantly on these media entities. In a co-citation network, we are interested in finding a group of authors who relate to other authors significantly on publication information in titles, abstracts, and keywords as multiple dimensions/entities in the network. The main contribution of this paper is to propose a framework (MultiComm) to identify a seed-based community in a multi-dimensional network by evaluating the affinity between two items in the same type of entity (same dimension) or different types of entities (different dimensions) from the network. Our idea is to calculate the probabilities of visiting each item in each dimension, and compare their values to generate communities from a set of seed items. In order to evaluate the quality of the communities generated by the proposed algorithm, we develop and study a local modularity measure of a community in a multi-dimensional network. Experiments based on synthetic and real-world data sets suggest that the proposed framework is able to find communities effectively. Experimental results have also shown that the proposed algorithm is more accurate than the other tested algorithms in finding communities in multi-dimensional networks. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
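The visiting-probability idea described in the abstract can be illustrated with a generic random walk with restart on a single-dimension graph (MultiComm's multi-dimensional affinity computation is more involved; this sketch and its names are illustrative only).

```python
import numpy as np

def visit_probabilities(adj, seeds, restart=0.15, iters=100):
    # Random walk with restart: visiting probabilities biased
    # toward the seed items.
    n = adj.shape[0]
    P = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic transitions
    r = np.zeros(n)
    r[list(seeds)] = 1.0 / len(seeds)          # restart distribution on seeds
    p = r.copy()
    for _ in range(iters):
        p = (1 - restart) * (P.T @ p) + restart * r
    return p
```

Items with the highest visiting probabilities form the candidate community around the seeds.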
43. An Efficient Framework for Data-Plane Verification With Geometric Windowing Queries.
- Author
-
Inoue, Takeru, Chen, Richard, Mano, Toru, Mizutani, Kimihiro, Nagata, Hisashi, and Akashi, Osamu
- Abstract
Modern networks have complex configurations to provide advanced functions. Network softwarization, a promising new movement in the networking community, could make networks even more complexly configured due to the nature of software. Since these complexities make the networks error-prone, network verification is attracting attention as a key technology to detect inconsistencies between a configuration and an operational policy. Existing verifiers are, unfortunately, either inefficient or incomplete (operational policies are not rigorously checked). This paper presents a novel framework for data-plane verification. To efficiently manage the large search space defined by packet headers, our framework formalizes the consistency check by applying simple set operations defined in a small quotient space of packet headers. This paper also reveals that the two spaces can be connected via windowing queries in computational geometry. Two windowing algorithms are proposed and backed by solid theoretical analyses. Experiments on real network datasets show that our framework with the windowing algorithms is surprisingly fast; when verifying policy compliance in a real network with thousands of switches, our framework reduces the verification time of all-pairs reachability from ten hours to ten minutes. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
44. Dead Reckoning in Structured Environments for Human Indoor Navigation.
- Author
-
Giorgi, Giada, Frigo, Guglielmo, and Narduzzi, Claudio
- Abstract
Provision for unrestricted movement by everybody, including disabled people, is among the basic aims of a Smart City. In this paper, we deal with human indoor navigation based on inertial sensors that are commonly found in devices, such as smartphones. The approach has gathered broad interest within the scientific community, since it does not require installation of external devices and allows the use of a smartphone both as measurement platform and user interface. Thus, it can be seen as an inclusive low-cost, low-energy human navigation aid. We focus this paper on the implementation of algorithms for estimating and tracking the heading direction of a user walking within a structured environment, e.g., a building. The main feature of the proposed method is that the estimation of direction is not referred to absolute headings based on the four cardinal directions, as usually done in the literature. To achieve sufficient reliability and, at the same time, preserve simplicity, our approach is based on detecting relative changes in the user direction with respect to a reference system obtained during an initial calibration phase. The fundamental direction, which exhibits the minimum distance with respect to the raw measured values, is then provided as output. Experimental results reported in this paper show that a user path can be traced with sufficient accuracy within four steps in the worst case. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
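The heading-snapping step described above (choosing the calibrated reference direction at minimum distance from the raw measurement) can be sketched with a simple circular distance; the function below is an illustrative reconstruction, not the authors' code.

```python
def snap_heading(measured_deg, reference_dirs_deg):
    # Return the calibrated reference direction closest to the raw
    # heading, using circular (wrap-around) distance in degrees.
    def circ_dist(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return min(reference_dirs_deg, key=lambda r: circ_dist(measured_deg, r))
```

The wrap-around distance matters near 0°/360°: a raw heading of 358° should snap to the 0° reference, not to 270°.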
45. Data-Driven Partitioning of Power Networks Via Koopman Mode Analysis.
- Author
-
Raak, Fredrik, Susuki, Yoshihiko, and Hikihara, Takashi
- Subjects
COHERENCE (Physics) ,GRAPH theory ,ELECTRIC generators ,EIGENVECTORS ,ELECTRIC power systems - Abstract
This paper applies a new technique for modal decomposition, based solely on measurements, to test systems and demonstrates the technique's capability for partitioning a power network, which determines the points of separation in an islanding strategy. The mathematical technique is called Koopman mode analysis (KMA) and stems from a spectral analysis of the so-called Koopman operator. Here, KMA is numerically approximated with an Arnoldi-like algorithm recently applied to power system dynamics for the first time. In this paper we propose a practical data-driven algorithm incorporating KMA for network partitioning. Comparisons are made with two techniques previously applied to network partitioning: spectral graph theory, which is based on the eigenstructure of the graph Laplacian, and slow coherency, which identifies coherent groups of generators for a specified number of low-frequency modes. The partitioning results share common features with those obtained with graph-theory and slow-coherency-based techniques. The suggested partitioning method is evaluated on two test systems, and similarities between Koopman modes and Laplacian eigenvectors are shown numerically and elaborated theoretically. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
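The measurement-only decomposition described above can be sketched with a simple DMD-style least-squares operator (a common numerical stand-in for the paper's Arnoldi-like KMA approximation); the data and names below are made up for illustration.

```python
import numpy as np

def koopman_modes(snapshots):
    """snapshots: (n_states, n_samples) matrix of sequential measurements."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    A = Y @ np.linalg.pinv(X)   # best linear one-step operator fitting the data
    return np.linalg.eig(A)     # Koopman eigenvalues and modes (columns)

def partition_by_mode(mode):
    """Split states into two groups by the sign of the mode's real part."""
    return np.real(mode) >= 0

# Two groups of states swinging against each other: rows 0-1 vs. rows 2-3.
t = np.linspace(0, 10, 200)
data = np.vstack([np.cos(t), np.cos(t), -np.cos(t), -np.cos(t)])
eigvals, modes = koopman_modes(data)
dominant = np.argmax(np.abs(eigvals))
print(partition_by_mode(modes[:, dominant]))  # rows 0-1 in one group, rows 2-3 in the other
```

The sign pattern of a selected mode plays the role that Laplacian eigenvector signs play in spectral graph partitioning, which is why the two methods produce related cuts.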
46. Distributed Resource Management for Cognitive Ad Hoc Networks With Cooperative Relays.
- Author
-
Guan, Zhangyu, Melodia, Tommaso, Yuan, Dongfeng, and Pados, Dimitris A.
- Subjects
COGNITIVE radio ,ELECTRIC relays ,DISTRIBUTED resources (Electric utilities) ,DISTRIBUTED algorithms ,NASH equilibrium ,EQUIPMENT & supplies - Abstract
It is well known that the data transport capacity of a wireless network can be increased by leveraging the spatial and frequency diversity of the wireless transmission medium. This has motivated the recent surge of research in cooperative and dynamic-spectrum-access (which we also refer to as cognitive spectrum access) networks. Still, as of today, a key open research challenge is to design distributed control strategies to dynamically and jointly assign: 1) portions of the spectrum and 2) cooperative relays to different traffic sessions to maximize the resulting network-wide data rate. In this paper, we make a significant contribution in this direction. First, we mathematically formulate the problem of joint spectrum management and relay selection for a set of sessions concurrently utilizing an interference-limited infrastructure-less wireless network. We then study distributed solutions to this (nonlinear and nonconvex) problem. The overall problem is separated into two subproblems: 1) spectrum management through power allocation with a given relay selection strategy; and 2) relay selection for a given spectral profile. Distributed solutions for each of the two subproblems are proposed, which are then analyzed based on notions from variational inequality (VI) theory. The distributed algorithms can be proven to converge, under certain conditions, to VI solutions, which are also Nash equilibrium (NE) solutions of the equivalent NE problems. A distributed algorithm based on iterative solution of the two subproblems is then designed. Performance and price of anarchy of the distributed algorithm are then studied by comparing it to the globally optimal solution obtained with a newly designed centralized algorithm. Simulation results show that the proposed distributed algorithm achieves performance that is within a few percentage points of the optimal solution. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
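The alternating two-subproblem structure described above can be caricatured with a tiny best-response sketch: two sessions split a power budget over two channels, each best-responding to the other's interference until the profile stabilizes. The gains, budget, and grid search are illustrative assumptions, not the paper's VI-based algorithm.

```python
import math

def best_response(budget, own_gain, interference, grid=101):
    """Split a power budget over two channels to maximize own sum-rate,
    treating the other session's interference as fixed."""
    best_p1, best_rate = 0.0, -1.0
    for k in range(grid):
        p1 = budget * k / (grid - 1)
        p2 = budget - p1
        rate = (math.log2(1 + own_gain[0] * p1 / (1 + interference[0])) +
                math.log2(1 + own_gain[1] * p2 / (1 + interference[1])))
        if rate > best_rate:
            best_p1, best_rate = p1, rate
    return best_p1

gains = [[2.0, 0.5], [0.5, 2.0]]   # session i is strong on channel i (assumed)
cross = 0.2                        # cross-interference coefficient (assumed)
p = [0.5, 0.5]                     # each session's power on channel 1
for _ in range(20):                # Gauss-Seidel best-response sweeps
    p[0] = best_response(1.0, gains[0], [cross * p[1], cross * (1 - p[1])])
    p[1] = best_response(1.0, gains[1], [cross * p[0], cross * (1 - p[0])])
print(p)  # [1.0, 0.0]: each session concentrates power on its strong channel
```

The fixed point reached here is a Nash equilibrium of the toy game; the abstract's VI machinery is what guarantees such convergence in the general, nonconvex setting.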
47. Decentralized Learning-Based Relay Assignment for Cooperative Communications.
- Author
-
Chen, Zanyu, Lin, Tsungnan, and Wu, Chunlin
- Subjects
WIRELESS cooperative communication ,LONG-Term Evolution (Telecommunications) ,REINFORCEMENT learning ,SELF-organizing systems ,STOCHASTIC learning models - Abstract
Cooperative communication exploits spatial diversity via relay node antennas to increase data rates in wireless networks. Relay node selection, therefore, plays a critical role in system performance. This paper examines the problem of relay node selection in cooperative networks, in which one relay node can be used by multiple source–destination transmission pairs, and all transmission pairs share the same set of relay nodes. Centralized approaches may exhibit higher complexity as the number of source nodes increases. This paper gives source nodes self-optimizing and self-learning abilities and enables them to autonomously select relay nodes. A fully distributed approach, called the “Decentralized Learning-based Relay Assignment” (DLRA) algorithm, is proposed to achieve this goal. DLRA uses a reinforcement learning technique called stochastic learning automata, with each source node attaining the self-learning ability to find an appropriate relay node according to environmental feedback. This paper shows the convergence, optimality, and performance of DLRA via mathematical analysis and evaluates the performance of DLRA in two different network systems: a cooperative ad hoc network and a Long-Term Evolution (LTE)-Advanced relay network. The experimental results present several properties of DLRA: the effectiveness of DLRA in cooperative communication systems, the significant improvements made by DLRA in fairness, and the capacity of each node in LTE-Advanced systems. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
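A minimal sketch of the stochastic-learning-automata idea that DLRA builds on, assuming a linear reward-inaction rule with binary environmental feedback; the relay qualities and step size are made up for illustration and are not the paper's parameters.

```python
import random

def choose(probs):
    """Sample a relay index from the probability vector."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def update(probs, action, reward, step=0.05):
    """Linear reward-inaction: reinforce the chosen relay only on success."""
    if reward:
        return [p + step * (1.0 - p) if j == action else p - step * p
                for j, p in enumerate(probs)]
    return probs

random.seed(0)
quality = [0.2, 0.8, 0.4]          # per-relay success probability (made up)
probs = [1 / 3, 1 / 3, 1 / 3]
for _ in range(2000):
    a = choose(probs)
    probs = update(probs, a, random.random() < quality[a])
print(probs)  # probs[1] is typically close to 1 after learning
```

Each source runs this loop independently with only its own success/failure feedback, which is what makes the scheme fully decentralized.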
48. Energy-Efficient Dynamic Packet Downloading for Medical IoT Platforms.
- Author
-
Kim, Joongheon
- Abstract
This paper proposes a polynomial-time algorithm for energy-efficient dynamic packet downloading from medical cloud storage to medical Internet-of-Things (IoT) devices. The medical cloud can distribute its own medical data to medical IoT devices via access points. Therefore, network disconnection can happen between the medical cloud and medical IoT devices when power/energy management in each access point is not efficient. This situation is especially harmful in in-hospital network architectures, because such architectures usually have strict reliability requirements. Therefore, this paper proposes a dynamic energy-efficient algorithm, which computes the power allocation of each access point based on the buffer backlog size and channel states while taking buffer stability into account. With the proposed adaptive algorithm, each access point calibrates its own parameters for more adaptive power/energy management. The performance of the proposed algorithm is evaluated in terms of network lifetime, and it is observed that the proposed algorithm achieves the desired performance. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
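The backlog-and-channel-aware allocation described above resembles a drift-plus-penalty rule, sketched below; the rate model, power levels, and trade-off parameter V are assumptions for illustration, not the paper's exact algorithm.

```python
import math

def allocate_power(backlog, channel_gain, levels, V=10.0):
    """Pick the power level maximizing backlog * rate - V * power:
    a large queue justifies spending energy, a small one does not."""
    def rate(p):
        return math.log2(1.0 + channel_gain * p)   # Shannon-style rate proxy
    return max(levels, key=lambda p: backlog * rate(p) - V * p)

levels = [0.0, 0.5, 1.0, 2.0]   # candidate power levels (assumed)
print(allocate_power(backlog=1,  channel_gain=1.0, levels=levels))  # 0.0: small queue, save energy
print(allocate_power(backlog=50, channel_gain=1.0, levels=levels))  # 2.0: large queue, transmit hard
```

Raising V biases every decision toward energy saving at the cost of longer queues, which is the standard lifetime-versus-delay knob in this family of algorithms.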
49. Randomized Voting-Based Rigid-Body Motion Segmentation.
- Author
-
Jung, Heechul, Ju, Jeongwoo, and Kim, Junmo
- Subjects
MOTION analysis ,VOTING ,ALGORITHMS ,PROBABILITY theory ,NOISE - Abstract
In this paper, we propose a novel rigid-body motion segmentation algorithm that uses randomized voting to assign high scores to correctly estimated motion models and low scores to wrongly estimated ones. The algorithm is based on an epipolar-geometry representation of the camera motion, and computes scores using the distance between each feature point and the corresponding epipolar line. These scores are accumulated and utilized for motion segmentation. To evaluate the efficacy of our algorithm, we conduct a series of experiments using the Hopkins 155 data set and the UdG data set, which are representative test sets for rigid motion segmentation. Compared with several state-of-the-art algorithms, ours achieves the most accurate motion segmentation results and, in the presence of measurement noise, achieves results comparable to the other algorithms. Finally, we analyze why our motion segmentation algorithm works, using probabilistic and theoretical analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
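The scoring step described in this abstract can be sketched as follows: given a fundamental matrix F for one motion hypothesis, a correspondence is scored by its distance to the epipolar line, and votes are accumulated for points that fit the model. The threshold and example F are illustrative assumptions.

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Distance from point x2 to the epipolar line F @ x1 (homogeneous coords)."""
    line = F @ x1                              # epipolar line [a, b, c] in image 2
    return abs(line @ x2) / np.hypot(line[0], line[1])

def vote(F, pairs, scores, threshold=1.0):
    """Add a vote to each feature whose residual fits this motion model."""
    for i, (x1, x2) in enumerate(pairs):
        if epipolar_distance(F, x1, x2) < threshold:
            scores[i] += 1
    return scores

# Fundamental matrix of a pure x-translation: epipolar lines are y = const.
F = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
x1 = np.array([2.0, 3.0, 1.0])
x2 = np.array([5.0, 3.2, 1.0])
print(epipolar_distance(F, x1, x2))  # ~0.2: nearly on its epipolar line
```

Repeating the vote over many randomly estimated models is what lets correct hypotheses accumulate high scores while wrong ones stay low.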
50. Linear Beamformer Design for Interference Alignment via Rank Minimization.
- Author
-
Sridharan, Gokul and Yu, Wei
- Subjects
BEAMFORMING ,MIMO systems ,INTERFERENCE (Telecommunication) ,SIGNAL processing ,ALGORITHMS ,FROBENIUS algebras - Abstract
This paper proposes a new framework for the design of transmit and receive beamformers for interference alignment (IA) without symbol extensions in multi-antenna cellular networks. We consider IA in a G cell network with K users/cell, N antennas at each base station (BS) and M antennas at each user. The proposed framework is developed by recasting the conditions for IA as two sets of rank constraints, one on the rank of interference matrices, and the other on the transmit beamformers in the uplink. The interference matrix consists of all the interfering vectors received at a BS from the out-of-cell users in the uplink. Using these conditions and the crucial observation that the rank of interference matrices under alignment can be determined beforehand, this paper develops two sets of algorithms for IA. The first part of this paper develops rank minimization algorithms for IA by iteratively minimizing a weighted matrix norm of the interference matrix. Different choices of matrix norms lead to reweighted nuclear norm minimization (RNNM) or reweighted Frobenius norm minimization (RFNM) algorithms with significantly different per-iteration complexities. Alternately, the second part of this paper devises an alternating minimization (AM) algorithm where the rank-deficient interference matrices are expressed as a product of two lower-dimensional matrices that are then alternately optimized. Simulation results indicate that RNNM, which has a per-iteration complexity of a semidefinite program, is effective in designing aligned beamformers for proper-feasible systems with or without redundant antennas, while RFNM and AM, which have a per-iteration complexity of a quadratic program, are better suited for systems with redundant antennas. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF