1,733 results
Search Results
2. Paper Classification by Topic Grouping in Citation Networks.
- Author
-
Su, Yi-Jen, Wun, Jian-Cheng, Hsu, Wei-Lin, and Chen, Yue-Qun
- Abstract
The enormous popularity of Web 2.0 social network services has led to much research on social network analysis (SNA). These studies focus on analyzing the complex interactive activities between users in the world of virtual networks. SNA has shown great potential in automatic document classification, especially in identifying citation networks of research papers and the references among them. This research adopts the Clique Percolation Method (CPM) to identify all overlapping subgroups in a citation network. In the grouping process, research papers with similar topics are grouped into the same topic group. Two papers are regarded as related when the common citation rate between them exceeds a threshold. A modified TF-IDF calculates the weight of each keyword in the topic groups. The keyword-weight vector represents the main features of each group, while the category of a newly arriving document is determined by a novel similarity function. All the papers under study are collected from the journal IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) published from 1979 to 2011. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
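The grouping-and-classification pipeline in the abstract above can be illustrated with a minimal sketch: plain TF-IDF keyword weights per topic group plus cosine similarity for assigning a new document. Note this uses the standard TF-IDF and cosine formulas as stand-ins; the paper's modified TF-IDF and its novel similarity function are not specified in the abstract.

```python
import math
from collections import Counter

def tfidf_vectors(groups):
    """Compute a TF-IDF weight vector per topic group.

    `groups` maps a group name to the list of keywords (with repeats)
    drawn from that group's papers. Standard TF-IDF is used here; the
    paper's modified variant is not given in the abstract."""
    n = len(groups)
    df = Counter()                       # in how many groups each keyword occurs
    for kws in groups.values():
        df.update(set(kws))
    vectors = {}
    for name, kws in groups.items():
        tf = Counter(kws)
        total = sum(tf.values())
        vectors[name] = {w: (c / total) * math.log(n / df[w]) for w, c in tf.items()}
    return vectors

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def classify(doc_keywords, group_vectors):
    """Assign a new document to the most similar topic group."""
    tf = Counter(doc_keywords)
    total = sum(tf.values())
    doc_vec = {w: c / total for w, c in tf.items()}
    return max(group_vectors, key=lambda g: cosine(doc_vec, group_vectors[g]))
```

The group names and keywords are illustrative only; in the paper the groups come from CPM over the citation network.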
3. Coding With Noiseless Feedback Over the Z-Channel.
- Author
-
Deppe, Christian, Lebedev, Vladimir, Maringer, Georg, and Polyanskii, Nikita
- Subjects
ERROR-correcting codes, BOUND states, PARALLEL algorithms - Abstract
In this paper, we consider encoding strategies for the Z-channel with noiseless feedback. We analyze the combinatorial setting where the maximum number of errors inflicted by an adversary is proportional to the number of transmissions, which goes to infinity. Without feedback, it is known that the rate of optimal asymmetric-error-correcting codes for the error fraction $\tau \ge 1/4$ vanishes as the blocklength grows. In this paper, we give an efficient feedback encoding scheme with $n$ transmissions that achieves a positive rate for any fraction of errors $\tau < 1$ and $n\to \infty $. Additionally, we state an upper bound on the rate of asymptotically long feedback asymmetric error-correcting codes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
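The defining asymmetry of the Z-channel in the abstract above (a transmitted 1 may be received as 0, but a 0 is never corrupted) can be sketched as a toy channel model. The paper studies adversarial errors with noiseless feedback; this sketch only illustrates the channel itself, with a random error probability as an assumption for the demo.

```python
import random

def z_channel(bits, p, rng=None):
    """Transmit bits over a Z-channel: each 1 flips to 0 with
    probability p; a 0 is always received correctly. (The paper's
    setting is adversarial, with up to a tau-fraction of such
    1-to-0 errors, rather than i.i.d. random errors.)"""
    rng = rng or random.Random(0)
    return [0 if b == 1 and rng.random() < p else b for b in bits]
```

For example, with `p = 1.0` every 1 is erased to 0, which is why codes without feedback lose all rate once the adversary's error fraction is large enough.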
4. Optimal Modification of Peak-Valley Period Under Multiple Time-of-Use Schemes Based on Dynamic Load Point Method Considering Reliability.
- Author
-
Yang, Hejun, Gao, Yuan, Ma, Yinghao, and Zhang, Dabo
- Subjects
DYNAMIC loads, RELIABILITY in engineering, TEST systems, POWER resources, ELECTRIC power distribution grids, BACK propagation - Abstract
Time-of-use (TOU) is an effective price-based demand response strategy. A reasonable design of the TOU strategy can effectively reduce the peak-valley difference and in turn produce substantial benefits (such as delaying power grid investment, reducing interruption cost, and improving reliability). However, changing the peak-valley period has a great influence on the peak-valley difference and power supply reliability of a power system. Therefore, this paper aims to investigate the optimal modification of the peak-valley period considering reliability loss under multiple TOU schemes. Firstly, this paper presents a clustering model and algorithm for the optimal load curve based on a minimum error iteration method. Secondly, an optimal modification of the peak-valley period based on a dynamic load point method is proposed, and the traditional peak-valley difference is replaced by the global peak-valley difference to calculate the objective function. Thirdly, this paper establishes a load–reliability relation fitting model based on the back propagation neural network. Finally, the effectiveness and correctness of the proposed method are investigated on the Roy Billinton test system and the reliability test system. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
5. CPiX: Real-Time Analytics Over Out-of-Order Data Streams by Incremental Sliding-Window Aggregation.
- Author
-
Bou, Savong, Kitagawa, Hiroyuki, and Amagasa, Toshiyuki
- Subjects
ELECTRONIC data processing, PARALLEL algorithms, BIG data, DIGITAL watermarking - Abstract
Stream processing is used in various fields. In the field of big data, stream aggregation is a popular processing technique, but it suffers serious setbacks when the order in which events (e.g., stream elements) occur differs from the order in which they arrive at the system. Such data streams are called “non-FIFO streams”. This phenomenon usually occurs in a distributed environment due to many factors, such as network disruptions, delays, etc. Many analysis scenarios require efficient processing of such non-FIFO streams to meet various data processing requirements. This paper proposes an efficient, scalable checkpoint-based bidirectional indexing approach, called CPiX, for faster real-time analysis over non-FIFO streams. CPiX maintains the partial aggregation results in an on-demand manner per checkpoint. CPiX needs less time and space than the state-of-the-art approach. Extensive experiments confirm that CPiX can deal with out-of-order streams very efficiently and is, on average, about 3.8 times faster than the state-of-the-art approach while consuming less memory. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
6. Finite-Length Construction of High Performance Spatially-Coupled Codes via Optimized Partitioning and Lifting.
- Author
-
Esfahanizadeh, Homa, Hareedy, Ahmed, and Dolecek, Lara
- Subjects
GRAPH theory, COMBINATORICS, RANDOM noise theory, MATHEMATICAL optimization, PERFORMANCE evaluation - Abstract
Spatially-coupled (SC) codes are a family of graph-based codes that have attracted significant attention, thanks to their capacity approaching performance and low decoding latency. An SC code is constructed by partitioning an underlying block code into a number of components and coupling their copies together. In this paper, we first introduce a general approach for the enumeration of detrimental combinatorial objects in the graph of finite-length SC codes. Our approach is general in the sense that it effectively works for SC codes with various partitioning schemes, column weights, and memories. Next, we present a two-stage framework for the construction of high performance binary SC codes optimized for the additive white Gaussian noise channels; we aim at minimizing the number of detrimental combinatorial objects in the error floor region. In the first stage, we deploy a novel partitioning scheme, called the optimal overlap partitioning, to produce the optimal partitioning corresponding to the smallest number of detrimental objects. In the second stage, we apply a new circulant power optimizer to further reduce the number of detrimental objects in the lifted graph. SC codes constructed by our new framework have up to two orders of magnitude error floor performance improvement and up to 0.6 dB SNR gain compared to prior state-of-the-art SC codes. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
7. Fast Power Grid Partition for Voltage Control With Balanced-Depth-Based Community Detection Algorithm.
- Author
-
Yang, Yang, Sun, Yichao, Wang, Qi, Liu, Fusuo, and Zhu, Ling
- Subjects
ELECTRIC power distribution grids, VOLTAGE control, ALGORITHMS, BUS transportation, PARALLEL algorithms, COMMUNITIES, HIERARCHICAL clustering (Cluster analysis) - Abstract
Network partition in complex power networks is essential for var-voltage control. Traditional partition methods such as the Ward method are applied in practical power networks, but they are unable to evaluate the quality of partition results. Moreover, they lack efficiency when dealing with large-scale networks. Since complex operation characteristics and topologies have emerged in recent power systems, power grid partition requires higher efficiency and quality. Therefore, this paper proposes a fast network partition method with a balanced-depth-based community detection algorithm. Its aim is to significantly improve the efficiency of partition while maintaining high partition quality, such that inter-zone coupling is minimized while intra-zone coupling is maximized. In the meantime, a surrogate-optimization-based selection algorithm is proposed to select the zonal pilot bus, based on which the secondary voltage control method is used to evaluate the quality of partition. Results from four case studies conducted on power networks of different sizes, as compared to other partition methods, validate the high efficiency and high quality of the proposed power grid partition approach. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
8. ESA-Stream: Efficient Self-Adaptive Online Data Stream Clustering.
- Author
-
Li, Yanni, Li, Hui, Wang, Zhi, Liu, Bing, Cui, Jiangtao, and Fei, Hang
- Subjects
STREAMING video & television, PARALLEL algorithms, BIG data, HEURISTIC algorithms, ALGORITHMS, ADAPTIVE natural resource management - Abstract
Many big data applications produce a massive amount of high-dimensional, real-time, and evolving streaming data. Clustering such data streams with both effectiveness and efficiency is critical for these applications. Although there are well-known data stream clustering algorithms based on the popular online-offline framework, these algorithms still face some major challenges. Several critical questions have still not been answered satisfactorily: How to perform dimensionality reduction effectively and efficiently in the online dynamic environment? How to enable the clustering algorithm to achieve complete real-time online processing? How to make algorithm parameters learn in a self-supervised or self-adaptive manner to cope with high-speed evolving streams? In this paper, we focus on tackling these challenges by proposing a fully online data stream clustering algorithm (called ESA-Stream) that can learn parameters online dynamically in a self-adaptive manner, speed up dimensionality reduction, and cluster data streams effectively and efficiently in an online and dynamic environment. Experiments on a wide range of synthetic and real-world data streams show that ESA-Stream outperforms state-of-the-art baselines considerably in both effectiveness and efficiency. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
9. A Space-Efficient Fair Cache Scheme Based on Machine Learning for NVMe SSDs.
- Author
-
Liu, Weiguang, Cui, Jinhua, Li, Tiantian, Liu, Junwei, and Yang, Laurence T.
- Subjects
MACHINE learning, SOLID state drives, RANDOM access memory, WRITING processes - Abstract
Non-volatile memory express (NVMe) solid-state drives (SSDs) have been widely adopted in multi-tenant cloud computing environments and multi-programming systems. The on-board DRAM cache inside NVMe SSDs can efficiently reduce disk accesses and extend the lifetime of SSDs. Current SSD cache management research either improves the cache hit ratio while ignoring fairness, or improves fairness while sacrificing overall performance. In this paper, we present MLCache, a space-efficient shared cache management scheme for NVMe SSDs. By learning the impact of reuse distance on cache allocation, a workload-generic neural network model is built. At runtime, MLCache continuously monitors the reuse distance distribution for the neural network module to obtain space-efficient allocation decisions. MLCache also proposes an efficient parallel write-back strategy based on hit ratio and response time to improve fairness. Experimental results show that MLCache improves the write hit ratio compared to the baseline, and that MLCache strongly safeguards the fairness of SSDs with parallel write-back while maintaining a low level of degradation. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
10. F-FM: Fixed-Outline Floorplanning Methodology for Mixed-Size Modules Considering Voltage-Island Constraint.
- Author
-
Lin, Jai-Ming and Wu, Ji-Heng
- Subjects
POWER aware computing, ELECTRIC potential, SYSTEM integration, ELECTRICAL engineering, ELECTRONIC systems - Abstract
This paper presents a two-stage approach, named F-FM, to handle fixed-outline floorplanning for mixed-size modules. F-FM combines the advantages of the analytical approach and the slicing tree representation. Thus, it is not only suitable for handling fixed-outline floorplanning but can also be extended to handle other important issues in floorplanning, such as routability or thermal effects, in addition to wirelength. Recently, low power has become a major challenge in very large-scale integration designs, which makes voltage-island-driven floorplanning more important than ever. Although the problem has been discussed in previous works, no paper considers signal wirelength, powerplanning, and voltage drop at the same time under the fixed-outline constraint. Thus, this paper extends F-FM to handle this problem and considers these issues by properly dividing the modules in a voltage domain into several islands. The experimental results show that our approach obtains the best results on these problems. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
11. Integrating Community Context Information Into a Reliably Weighted Collaborative Filtering System Using Soft Ratings.
- Author
-
Nguyen, Van-Doan, Huynh, Van-Nam, and Sriboonchitta, Songsak
- Subjects
RECOMMENDER systems, DEMPSTER-Shafer theory, SOCIAL networks, COMMUNITIES, PARALLEL algorithms - Abstract
In this paper, we aim at developing a new collaborative filtering recommender system using soft ratings, which is capable of dealing with both imperfect information about user preferences and the sparsity problem. On the one hand, Dempster–Shafer theory is employed for handling the imperfect information due to its advantage in providing not only a flexible framework for modeling uncertain, imprecise, and incomplete information, but also powerful operations for fusing information from multiple sources. On the other hand, in dealing with the sparsity problem, community context information extracted from the social network containing all users is used for predicting unprovided ratings. As predicted ratings are not a hundred percent accurate, while the provided ratings are actually evaluated by users, we also develop a new method for calculating user–user similarities, in which provided ratings are considered more significant than predicted ones. In the experiments, the developed recommender system is tested on two different data sets, and the experimental results indicate that this system is more effective than CoFiDS, a typical recommender system offering soft ratings. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
12. LinkBlackHole*: Robust Overlapping Community Detection Using Link Embedding.
- Author
-
Kim, Jungeun, Lim, Sungsu, Lee, Jae-Gil, and Lee, Byung Suk
- Subjects
EMBEDDINGS (Mathematics), COMMUNITIES, STATISTICAL sampling, PARALLEL algorithms - Abstract
This paper proposes LinkBlackHole*, a novel algorithm for finding communities that are (i) overlapping in nodes and (ii) mixing (not separating clearly) in links. There has been a small body of work in each category, but this paper is the first to address both. LinkBlackHole* is a merger of our earlier two algorithms, LinkSCAN* and BlackHole, inheriting their advantages in support of highly-mixed overlapping communities. The former is used to handle overlapping nodes, and the latter to handle mixing links in finding communities. Like LinkSCAN and its more efficient variant LinkSCAN*, this paper presents LinkBlackHole and its more efficient variant LinkBlackHole*, which reduces the number of links through random sampling. Thorough experiments show superior quality of the communities detected by LinkBlackHole* and LinkBlackHole to those detected by other state-of-the-art algorithms. In addition, LinkBlackHole* shows high resilience to the link sampling effect, and its running time scales almost linearly with the number of links in a network. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
13. Fast Recommendation on Bibliographic Networks.
- Author
-
Kucuktunc, Onur, Kaya, Kamer, Saule, Erik, and Catalyurek, Umit V.
- Abstract
Graphs and matrices are widely used in algorithms for social network analyses. Since the number of interactions is much smaller than the number of possible interactions, the graphs and matrices used in the analyses are usually sparse. In this paper, we propose an efficient implementation of a sparse-matrix computation which arises in our publicly available citation recommendation service called the advisor. The recommendation algorithm uses a sparse matrix generated from the citation graph. We observed that the nonzero pattern of this matrix is highly irregular and the computation suffers from a high number of cache misses. We propose techniques for storing the matrix in memory efficiently and reducing the number of cache misses. Experimental results show that our techniques are highly efficient at reducing the query processing time, which is crucial for a web service. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
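The sparse-matrix computation in the abstract above starts from a compressed storage format. A minimal sketch of the standard compressed sparse row (CSR) layout and a matrix-vector product over it is shown below; the paper's specific cache-conscious reorderings go beyond this baseline, which is only the usual starting point.

```python
def to_csr(dense):
    """Convert a dense matrix (list of rows) to CSR arrays.
    Only nonzeros are stored: `values` and their column indices,
    plus `row_ptr` marking where each row's nonzeros begin/end."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, x in enumerate(row):
            if x != 0:
                values.append(x)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x over the CSR arrays. Each row's nonzeros are
    visited contiguously in memory, which is cache-friendly for
    `values`; accesses into `x` remain irregular, which is the
    cache-miss problem the paper targets."""
    y = []
    for r in range(len(row_ptr) - 1):
        s = 0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y
```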
14. Density Peak Clustering Algorithm and Optimization Based on Measurements of Unlikeness Properties in Position Sensor Environment.
- Author
-
Yao, Zhe and Gao, Kun
- Abstract
With the increasing popularity of position sensors, the rapid development of the mobile Internet and increasingly high quality of communication facilities based on 5G networks, all walks of life are producing moving object trajectory data at an increasingly rapid rate. The density peak clustering algorithm (DPC) is an effective method based on the attributes of local density and relative distance. DPC can find a peak density by identifying the clustering center using a decision graph without specifying the quantity of clusters beforehand and can identify clusters with arbitrary shapes. Unfortunately, because the computations of local density and relative distance only rely on the likeness matrix based on distance measurements, DPC results are unsatisfactory in the following cases: 1) when the data dimension is high, and the distribution is uneven; 2) no uniform measurement exists for the computation of local density, requiring different degrees to be selected based on different data sets; and 3) the measurement of the truncated distance ${d}_{c}$ focuses on global data and disregards local information, allowing variations in ${d}_{c}$ to affect the result. This paper proposes an optimized density peak clustering algorithm based on measurements of unlikeness properties (UDPC) to solve these problems. UDPC uses the block-based unlikeness measurement method to compute the likeness matrix, determines k-nearest neighbor information based on the newly formed likeness matrix, and determines the local density measurement method based on the k-nearest neighbor information. Experiments on classic datasets show that the density peak clustering algorithm based on measurements of unlikeness properties performs better than the DPC, FKNN-DPC and DPC-KNN algorithms. The proposed algorithm unifies the measurement of local density and mitigates the effects of the truncated distance ${d}_{c}$ on clustering. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
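The two DPC attributes that the abstract above builds on can be sketched directly. This is the standard distance-based formulation that the paper replaces with its unlikeness-based measurements, so treat it as the baseline being improved upon, not the proposed UDPC algorithm.

```python
import math

def dpc_scores(points, d_c):
    """Compute the two standard DPC attributes for each point:
    local density rho_i  = number of neighbors within cutoff d_c,
    relative distance delta_i = distance to the nearest point of
    higher density (for the densest point, the largest distance).
    Cluster centers are the points where both values are large,
    read off a rho-delta decision graph."""
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    rho = [sum(1 for j in range(n) if j != i and dist[i][j] < d_c)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(dist[i]))
    return rho, delta
```

The sensitivity to the cutoff `d_c` visible here (it directly sets every `rho_i`) is exactly the weakness the abstract says UDPC mitigates.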
15. Phase Partition and Online Monitoring for Batch Process Based on Multiway BEAM.
- Author
-
Guo, Runxia, Guo, Kai, and Dong, Jiankang
- Subjects
PHASE partition, PHASE equilibrium, ONLINE monitoring systems, SECURITY systems, COVARIANCE matrices - Abstract
A batch process can exhibit significantly different characteristics across different phases; hence it is important to partition it reasonably and set up corresponding subphase models for online monitoring. Unlike traditional phase-partition algorithms that customarily exploit the result of the PCA algorithm for further analysis, an innovative algorithm which directly extracts effective information from the covariance matrix is presented in this paper, called multiway beacon exception analysis for maintenance (MBEAM). Its theoretical and statistical characteristics are demonstrated adequately. Based on the accurate capture of the change in variable correlation caused by characteristic variance of the process, the algorithm can separate the process into major phases and transition patterns automatically. The time-varying characteristics then remain relatively stable in each independent subphase and are supervised by a homologous monitoring model that reflects the inherent phase feature. Due to its simple and intuitive format, MBEAM has superior performance in computation efficiency and fault interpretation, which is illustrated later in this paper. Synthetic illustrations are given concerning the influences of major parameters on the monitoring performance. A comparison with the step-wise sequential phase partition algorithm is conducted for clearer insight. Experiments are carried out to further confirm the validity of the proposed method. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
16. Unsupervised Learning Based Emission-Aware Uplink Resource Allocation Scheme for Non-Orthogonal Multiple Access Systems.
- Author
-
Jamshed, Muhammad Ali, Heliot, Fabien, and Brown, Tim W. C.
- Subjects
RESOURCE allocation, QUALITY of service, MACHINE learning, ELECTROMAGNETIC fields, MULTIPLE access protocols (Computer network protocols), PARALLEL algorithms - Abstract
The densification of wireless infrastructure to meet ever-increasing quality of service (QoS) demands, and the ever-growing number of wireless devices, may lead to higher levels of electromagnetic field (EMF) exposure in the environment in the 5G era. The possible long-term health effects related to EMF radiation are still an open debate and require attention. Therefore, in this paper, we propose a novel EMF-aware resource allocation scheme based on power domain non-orthogonal multiple access (PD-NOMA) and machine learning (ML) technologies for reducing the EMF exposure in the uplink of cellular systems. More specifically, we use the K-means approach (an unsupervised ML approach) to create clusters of users to be allocated together and then strategically group and assign them to subcarriers based on their associated channel properties. Finding the best number of clusters in the PD-NOMA environment is a key challenge, and in this paper we use the elbow method in conjunction with the F-test to effectively control the maximum number of users to be allocated at the same time per subcarrier. We also derive an EMF-aware power allocation by formulating and solving a convex optimization problem. Based on the simulation results, our proposed ML-based strategy effectively reduces the EMF exposure in comparison with state-of-the-art techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
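The clustering step in the abstract above, K-means over user channel features with the elbow method guiding the number of clusters, can be sketched with plain Lloyd's algorithm and a within-cluster inertia function. The feature points below are placeholders; the paper additionally combines the elbow criterion with an F-test, which is not reproduced here.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means (Lloyd's algorithm): alternate assigning each
    point to its nearest center and recomputing centers as means."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # Recompute each center; keep the old one if a cluster emptied.
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

def inertia(clusters, centers):
    """Within-cluster sum of squared distances. Plotting this
    against k and looking for the 'elbow' suggests how many
    clusters to use."""
    return sum(math.dist(p, c) ** 2
               for cl, c in zip(clusters, centers) for p in cl)
```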
17. Urban Road Network Partitioning Based on Bi-Modal Traffic Flows With Multiobjective Optimization.
- Author
-
Chen, Saifei, Wu, NaiQi, Fu, Hui, Wang, Yefei, and Qiao, Yan
- Abstract
The recent extension of the macroscopic fundamental diagram (MFD) into a bi-modal MFD (or 3D-MFD) provides the relationship among the total network circulating flows and the accumulations of private vehicles and public buses. The 3D-MFD reveals the contribution of large-occupancy vehicles such as buses to improving urban transportation efficiency. Many bi-modal traffic management techniques based on the 3D-MFD have been introduced to improve urban traffic efficiency without using detailed origin-destination (OD) information. However, similar to the MFD, the 3D-MFD is also highly affected by the heterogeneity of a road network. In order to form 3D-MFDs with low scatter to be utilized for further bi-modal traffic management, this paper proposes a partition method that clusters road links into several homogeneous regions for a bi-modal urban network. It comprises three layers: initial partition, merging, and boundary adjusting. At the initial partition layer, Seeded Region Growing (SRG) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) are integrated to obtain a number of subregions. A modified Genetic Algorithm (GA) is developed to merge the subregions into larger regions at the merging layer. Then, boundary adjusting is performed by changing the region to which a boundary is clustered, to optimize the result. Multi-sensor data collected from Shenzhen, China, are utilized to verify the effectiveness of the proposed partition method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
18. Data-Driven Partitioning of Power Networks Via Koopman Mode Analysis.
- Author
-
Raak, Fredrik, Susuki, Yoshihiko, and Hikihara, Takashi
- Subjects
COHERENCE (Physics), GRAPH theory, ELECTRIC generators, EIGENVECTORS, ELECTRIC power systems - Abstract
This paper applies a new technique for modal decomposition based solely on measurements to test systems and demonstrates the technique's capability for partitioning a power network, which determines the points of separation in an islanding strategy. The mathematical technique is called Koopman mode analysis (KMA) and stems from a spectral analysis of the so-called Koopman operator. Here, KMA is numerically approximated by applying an Arnoldi-like algorithm recently applied to power system dynamics for the first time. In this paper we propose a practical data-driven algorithm incorporating KMA for network partitioning. Comparisons are made with two techniques previously applied to network partitioning: spectral graph theory, which is based on the eigenstructure of the graph Laplacian, and slow coherency, which identifies coherent groups of generators for a specified number of low-frequency modes. The partitioning results share common features with results obtained with graph-theoretic and slow-coherency-based techniques. The suggested partitioning method is evaluated on two test systems, and similarities between Koopman modes and Laplacian eigenvectors are shown numerically and elaborated theoretically. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
19. Sequence-Level Speaker Change Detection With Difference-Based Continuous Integrate-and-Fire.
- Author
-
Fan, Zhiyun, Dong, Linhao, Cai, Meng, Ma, Zejun, and Xu, Bo
- Subjects
PARALLEL algorithms, GENETIC transduction, TASK analysis, CORPORA, VIDEO coding - Abstract
Speaker change detection is an important task in multi-party interactions such as meetings and conversations. In this paper, we address the speaker change detection task from the perspective of sequence transduction. Specifically, we propose a novel encoder-decoder framework that directly converts the input feature sequence to the speaker identity sequence. The difference-based continuous integrate-and-fire mechanism is designed to support this framework. It detects speaker changes by integrating the speaker difference between the encoder outputs frame-by-frame and transfers encoder outputs to segment-level speaker embeddings according to the detected speaker changes. The whole framework is supervised by the speaker identity sequence, a weaker label than the precise speaker change points. The experiments on the AMI and DIHARD-I corpora show that our sequence-level method consistently outperforms a strong frame-level baseline that uses the precise speaker change labels. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. Optimal Task Allocation Algorithms for Energy Constrained Multihop Wireless Networks.
- Author
-
Yu, Wanli, Huang, Yanqiu, and Garcia-Ortiz, Alberto
- Abstract
In recent years, multihop wireless networks have been playing a key role in many Internet of Things applications. Due to the limited resources of wireless nodes, extending the network lifetime is one of the most crucial issues that must be addressed. This paper aims to maximize the network lifetime by appropriately distributing the tasks of the applications across the nodes in the network. First, a centralized optimal task allocation algorithm for multihop wireless networks (COTAM) is proposed by modeling the problem of maximizing the network lifetime as a linear programming (LP) problem. As the centralized algorithm requires knowing all the network parameters in advance, COTAM is mostly restricted to off-line optimization in known environments. To extend the usability of the approach, this paper further proposes a distributed optimal task allocation algorithm (DOTAM) based on Dantzig–Wolfe decomposition. DOTAM divides the centralized large-sized LP problem into small-sized subproblems, which are independently executed by each node. The proposed COTAM and DOTAM are tested on both artificially generated applications and a realistic application. The extensive results demonstrate that DOTAM achieves the same performance as COTAM. Compared with existing methods, they provide significant improvements in extending the network lifetime. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
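The objective in the abstract above, distributing tasks so that the battery-limited network survives as long as possible, reduces in its simplest form to a max-min problem. The two-node grid search below is only a toy illustration of that idea under assumed energy budgets and per-task power costs; the paper solves the general multihop case as an LP (COTAM) and distributes it via Dantzig–Wolfe decomposition (DOTAM).

```python
def best_split(E1, E2, c1, c2, steps=10_000):
    """Split a unit workload between two nodes (fraction x to node 1)
    to maximize the network lifetime min_i(E_i / power_i), where node i
    has energy budget E_i and spends c_i per unit of assigned work.
    A simple grid search stands in for the LP solver here."""
    best_x, best_life = 0.0, float("-inf")
    for s in range(steps + 1):
        x = s / steps
        p1, p2 = c1 * x, c2 * (1.0 - x)
        life = min(E1 / p1 if p1 else float("inf"),
                   E2 / p2 if p2 else float("inf"))
        if life > best_life:
            best_x, best_life = x, life
    return best_x, best_life
```

With equal budgets and costs the optimum is the even split `x = 0.5`; skewing `E1` upward shifts more work onto node 1, which is the balancing behavior the LP formalizes network-wide.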
21. Multiset-Partition Distribution Matching.
- Author
-
Fehenberger, Tobias, Millar, David S., Koike-Akino, Toshiaki, Kojima, Keisuke, and Parsons, Kieran
- Subjects
AMPLITUDE estimation, ALGORITHMS, BOREL subsets, QUADRATURE domains, RANDOM noise theory - Abstract
Distribution matching is a fixed-length invertible mapping from a uniformly distributed bit sequence to shaped amplitudes and plays an important role in the probabilistic amplitude shaping framework. With conventional constant-composition distribution matching (CCDM), all output sequences have identical composition. In this paper, we propose multiset-partition distribution matching (MPDM), where the composition need not be constant over all output sequences. When considering the desired distribution as a multiset, MPDM corresponds to partitioning this multiset into equal-sized subsets. We show that MPDM allows addressing more output sequences and, thus, has a lower rate loss than CCDM in all nontrivial cases. By imposing some constraints on the partitioning, a constructive MPDM algorithm is proposed which comprises two parts. A variable-length prefix of the binary data word determines the composition to be used, and the remainder of the input word is mapped with a conventional CCDM algorithm, such as arithmetic coding, according to the chosen composition. Simulations of 64-ary quadrature amplitude modulation over the additive white Gaussian noise channel demonstrate that the block-length saving of MPDM over CCDM for a fixed gap to capacity is approximately a factor of 2.5–5 at medium to high signal-to-noise ratios. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
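The rate-loss argument in the abstract above rests on a counting fact: a CCDM with a single composition can address only as many sequences as there are permutations of that composition, a multinomial coefficient, while MPDM addresses more by allowing several compositions. The counting and the resulting CCDM rate loss can be sketched as follows (standard definitions, not the paper's full MPDM construction).

```python
import math

def num_cc_sequences(composition):
    """Number of distinct output sequences a constant-composition
    distribution matcher can address for one composition
    (k_1, ..., k_m) of blocklength n = sum(k_i):
    the multinomial coefficient n! / (k_1! * ... * k_m!)."""
    n = sum(composition)
    count = math.factorial(n)
    for k in composition:
        count //= math.factorial(k)
    return count

def ccdm_rate_loss(composition):
    """CCDM rate loss in bits per symbol:
    H(P) - log2(#sequences) / n, where P is the empirical
    distribution of the composition. MPDM shrinks this loss by
    addressing sequences from several compositions whose average
    matches the target distribution."""
    n = sum(composition)
    entropy = -sum((k / n) * math.log2(k / n) for k in composition if k)
    return entropy - math.log2(num_cc_sequences(composition)) / n
```

The loss vanishes as the blocklength grows, which is why the abstract reports MPDM's advantage as a block-length saving at a fixed gap to capacity.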
22. Exponential Error Rates of SDP for Block Models: Beyond Grothendieck’s Inequality.
- Author
-
Fei, Yingjie and Chen, Yudong
- Subjects
SEMIDEFINITE programming, ERROR rates, GRAPHIC methods, STATISTICS, EQUALITY - Abstract
In this paper, we consider the cluster estimation problem under the stochastic block model. We show that the semidefinite programming (SDP) formulation for this problem achieves an error rate that decays exponentially in the signal-to-noise ratio. The error bound implies weak recovery in the sparse graph regime with bounded expected degrees, as well as exact recovery in the dense regime. An immediate corollary of our results yields error bounds under the censored block model. Moreover, these error bounds are robust, continuing to hold under heterogeneous edge probabilities and a form of the so-called monotone attack. Significantly, this error rate is achieved by the SDP solution itself without any further pre- or post-processing and improves upon existing polynomially decaying error bounds proved using Grothendieck's inequality. Our analysis builds on two key ingredients: 1) showing that the graph has a well-behaved spectrum, even in the sparse regime, after discounting an exponentially small number of edges, and 2) an order-statistics argument that governs the final error rate. Both arguments highlight the implicit regularization effect of the SDP formulation. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
23. Distributed Contaminant Detection and Isolation for Intelligent Buildings.
- Author
-
Kyriacou, Alexis, Michaelides, Michalis P., Reppa, Vasso, Timotheou, Stelios, Panayiotou, Christos G., and Polycarpou, Marios M.
- Subjects
ARTIFICIAL intelligence ,SMART cities - Abstract
The automatic preservation of the indoor air quality (IAQ) is an important task of the intelligent building design in order to ensure the health and safety of the occupants. The IAQ, however, is often compromised by various airborne contaminants that penetrate the indoor environment as a result of accidents or planned attacks. In this paper, we provide the detailed analysis, implementation, and evaluation of a distributed methodology for detecting and isolating multiple contaminant events in large-scale buildings. Specifically, we consider the building as a collection of interconnected subsystems, and we design a contaminant event monitoring software agent for each subsystem. Each monitoring agent aims to detect the contaminant and isolate the zone where the contaminant source is located, while it is allowed to exchange information with its neighboring agents. For configuring the subsystems, we implement both exact and heuristic partitioning solutions. A main contribution of this paper is the investigation of the impact of the partitioning solution on the performance of the distributed contaminant detection and isolation (CDI) scheme with respect to the detectability and isolability of the contaminant sources. The performance of the proposed distributed CDI methodology is demonstrated using the models of real building case studies created on CONTAM, a multizone simulation program developed by the U.S. National Institute of Standards and Technology. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
24. Skyline Diagram: Efficient Space Partitioning for Skyline Queries.
- Author
-
Liu, Jinfei, Yang, Juncheng, Xiong, Li, Pei, Jian, Luo, Jun, Guo, Yuzhang, Ma, Shuaicheng, and Fan, Chenglin
- Subjects
VORONOI polygons ,CHARTS, diagrams, etc. ,POINT set theory ,PARALLEL algorithms ,HEURISTIC algorithms - Abstract
Skyline queries are important in many application domains. In this paper, we propose a novel structure Skyline Diagram, which, given a set of points, partitions the plane into a set of regions, referred to as skyline polyominos. All query points in the same skyline polyomino have the same skyline query results. Similar to the $k$th-order Voronoi diagram commonly used to facilitate $k$ nearest neighbor ($k$NN) queries, the skyline diagram can be used to facilitate skyline queries and many other applications. However, it may be computationally expensive to build the skyline diagram. By exploiting some interesting properties of skyline, we present several efficient algorithms for building the diagram with respect to three kinds of skyline queries, quadrant, global, and dynamic skylines. In addition, we propose an approximate skyline diagram which can significantly reduce the space cost. Experimental results on both real and synthetic datasets show that our algorithms are efficient and scalable. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
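For readers unfamiliar with skyline queries, the dominance test underlying the abstract above is simple: a point belongs to the skyline if no other point is at least as small in every dimension and strictly smaller in at least one. A minimal naive $O(n^2)$ sketch (illustrative only; the paper's contribution is the diagram structure that precomputes results for all query points, not this baseline):

```python
def dominates(p, q):
    """True if p dominates q under minimization:
    p <= q coordinate-wise and p differs from q."""
    return all(a <= b for a, b in zip(p, q)) and p != q

def skyline(points):
    """Return the points not dominated by any other point."""
    return [q for q in points if not any(dominates(p, q) for p in points)]

pts = [(1, 9), (3, 3), (4, 2), (6, 6), (2, 8)]
result = skyline(pts)  # (6, 6) is dominated by (3, 3) and drops out
```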
25. Association Rules Mining for Academic Cooperation Based on Time Extension and Duration Accumulation.
- Author
-
Huang, Fang, Zou, Zhike, Liu, Xinmin, and He, Jiefeng
- Abstract
Academic relations that originate from academic activities have distinctive temporal features. Academic activities that happen more recently, last longer, and occur more often are more likely to form valuable academic relationships and are more meaningful to analyze. To handle the accumulated duration and vague validity boundaries of academic cooperation, the strategy of time extension and duration accumulation is introduced into a progressive-partition association rules mining algorithm. In the proposed method, the time attributes of academic activity transaction records are mapped and extended on a time series, and the duration period is marked according to the time partition. Through progressive partition processing, academic association rules mining with continuous time accumulation is realized. This paper presents the basic principle and realization process in detail, and takes information about research projects and co-published papers as cases for mining academic relationships. The experimental comparison and results analysis show the effectiveness of the proposed algorithm. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
26. A Robust Algorithm Based on Link Label Propagation for Identifying Functional Modules From Protein-Protein Interaction Networks.
- Author
-
Jiang, Hao, Zhan, Fei, Wang, Congtao, Qiu, Jianfeng, Su, Yansen, Zheng, Chunhou, Zhang, Xingyi, and Zeng, Xiangxiang
- Abstract
Identifying functional modules in protein-protein interaction (PPI) networks elucidates cellular organization and mechanisms. Various methods have been proposed to identify the functional modules in PPI networks, but most of these methods do not consider the noisy links in PPI networks. They achieve a competitive performance on PPI networks without noisy links, but their performance deteriorates considerably on noisy PPI networks. Furthermore, noisy links are inevitable in PPI networks. In this paper, we propose a novel link-driven label propagation algorithm (LLPA) to identify functional modules in PPI networks. The LLPA first finds link clusters in PPI networks, from which the functional modules are then identified. Two strategies are proposed to ensure the robustness of LLPA. In the first, LLPA updates the link labels in accordance with a designed link weight, which can reduce the incidence of noisy links. In the second, some noisy labels are filtered from the link clusters to further reduce the influence of noisy links. The performance evaluation on three real PPI networks shows that LLPA outperforms eight other state-of-the-art detection algorithms in terms of accuracy and robustness. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
27. A Novel Localization Approach for Irregular Wireless Sensor Networks Based on Anchor Segmentation.
- Author
-
Wang, Jing, Cheng, Li, Tu, Yuanfei, and Gu, Shenkai
- Abstract
Source localization has been a crucial fundamental service in wireless sensor networks (WSNs). Existing algorithms assume a regular region or controlled deployment. In practice, however, irregular network topologies often occur, which greatly degrades localization performance. In this paper, we propose a new distributed localization approach based on Anchor Segmentation and Projection for Irregular networks (ASPI). The new framework is composed of three phases: anchor segmentation border construction, convex hull identification, and projection-based localization. An anchor-based approximate convex segmentation method for the network is proposed to reduce the consumption of network resources in the first two phases, and an improved gift-wrapping based convex hull identification method is provided to reduce the complexity. In the localization phase, we formulate localization as a convex feasibility problem to avoid the multimodality of maximum likelihood techniques, and an alternative procedure is provided for inconsistent situations in the projection-based scheme. Experiments are conducted and the results demonstrate that our algorithm outperforms other existing solutions in irregular-shaped networks, achieving higher accuracy with lower complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
28. Low-Complexity CTU Partition Structure Decision and Fast Intra Mode Decision for Versatile Video Coding.
- Author
-
Yang, Hao, Shen, Liquan, Dong, Xinchao, Ding, Qing, An, Ping, and Jiang, Gangyi
- Subjects
VIDEO coding ,COMPUTATIONAL complexity ,BLOCK codes ,RECURSIVE partitioning ,FORECASTING ,PARALLEL algorithms - Abstract
Quadtree with nested multi-type tree (QTMT) partition structure is an efficient improvement in versatile video coding (VVC) over the quadtree (QT) structure in the advanced high-efficiency video coding (HEVC) standard. In addition to the recursive QT partition structure, recursive multi-type tree partitioning is applied to each leaf node, which generates more flexible block sizes. Besides, intra prediction modes are extended from 35 to 67 so as to accommodate various texture patterns. These newly developed techniques achieve high coding efficiency but also result in very high computational complexity. To tackle this problem, we propose a fast intra-coding algorithm consisting of a low-complexity coding tree unit (CTU) structure decision and a fast intra mode decision in this paper. The contributions of the proposed algorithm lie in the following aspects: 1) the new block size and coding mode distribution features are first explored for a reasonable fast coding scheme; 2) a novel fast QTMT partition decision framework is developed, which can determine the partition decision on both QT and multi-type tree with a novel cascade decision structure; and 3) fast intra mode decision with gradient descent search is introduced, while the best initial search point and search step are also investigated in this paper. The simulation results show that the complexity reduction of the proposed algorithm is up to 70% compared to the VVC reference software (VTM), and an average 63% encoding time saving is achieved with a 1.93% BDBR increase. Such results demonstrate that our method yields a superior performance in terms of computational complexity and compression quality compared to the state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
29. Efficient Generalized Surrogate-Assisted Evolutionary Algorithm for High-Dimensional Expensive Problems.
- Author
-
Cai, Xiwen, Gao, Liang, and Li, Xinyu
- Subjects
HIGH-dimensional model representation ,BENCHMARK problems (Computer science) ,GENETIC algorithms ,MATHEMATICAL optimization ,EVOLUTIONARY computation ,PARALLEL algorithms ,EVOLUTIONARY algorithms - Abstract
Engineering optimization problems usually involve computationally expensive simulations and many design variables. Solving such problems in an efficient manner is still a major challenge. In this paper, a generalized surrogate-assisted evolutionary algorithm is proposed to solve such high-dimensional expensive problems. The proposed algorithm is based on the optimization framework of the genetic algorithm (GA). This algorithm proposes to use a surrogate-based trust region local search method, a surrogate-guided GA (SGA) updating mechanism with a neighbor region partition strategy and a prescreening strategy based on the expected improvement infilling criterion of a simplified Kriging in the optimization process. The SGA updating mechanism is a special characteristic of the proposed algorithm. This mechanism makes a fusion between surrogates and the evolutionary algorithm. The neighbor region partition strategy effectively retains the diversity of the population. Moreover, multiple surrogates used in the SGA updating mechanism make the proposed algorithm optimize robustly. The proposed algorithm is validated by testing several high-dimensional numerical benchmark problems with dimensions varying from 30 to 100, and an overall comparison is made between the proposed algorithm and other optimization algorithms. The results show that the proposed algorithm is very efficient and promising for optimizing high-dimensional expensive problems. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
30. A Coverage Maintenance Algorithm for Mobile WSNs With Adjustable Sensing Range.
- Author
-
Khalifa, Banafsj, Khedr, Ahmed M., and Al Aghbari, Zaher
- Abstract
The work of wireless sensor networks (WSNs) depends on reliable coverage of the area to be monitored. The problem of coverage holes arises when one or more nodes fail due to energy depletion or harsh physical environments. Furthermore, random deployment of nodes could lead to a high degree of overlapping coverage among the sensor nodes. While there are many research papers on optimal deployment of sensor nodes, coverage holes that appear post-deployment are not often considered. In this paper, we propose an algorithm that employs adjustable sensing ranges and exploits node mobility to repair emergent coverage holes. The algorithm selects suitable nodes by gauging the degree of overlap and the residual energy of each node in the vicinity of the coverage hole. We have evaluated the performance of the algorithm via simulation, and compared with baseline approaches in terms of coverage performance and energy cost. Simulations have shown that our approach outperforms others in terms of coverage and network lifetime. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
31. A Novel Supervised Clustering Algorithm for Transportation System Applications.
- Author
-
Almannaa, Mohammed H., Elhenawy, Mohammed, and Rakha, Hesham A.
- Abstract
This paper proposes a novel supervised clustering algorithm to analyze large datasets. The proposed clustering algorithm models the problem as a matching problem between two disjoint sets of agents, namely, centroids and data points. This novel view of the clustering problem allows the proposed algorithm to be multi-objective, where each agent may have its own objective function. The proposed algorithm is used to maximize the purity and similarity in each cluster simultaneously. Our algorithm shows promising performance when tested using two different transportation datasets. The first dataset includes speed measurements along a section of Interstate 64 in the state of Virginia, while the second dataset includes the bike station status of a bike sharing system (BSS) in the San Francisco Bay Area. We clustered each dataset separately to examine how traffic and bike patterns change within clusters and then determined when and where the system would be congested or imbalanced, respectively. Using a spatial analysis of these congestion states or imbalance points, we propose potential solutions for decision makers and agencies to improve the operations of I-64 and the BSS. We demonstrate that the proposed algorithm produces better results than classical $k$-means clustering algorithms when applied to our datasets with respect to a time event. The contributions of our paper are: 1) we developed a multi-objective clustering algorithm; 2) the algorithm is scalable (polynomial order), fast, and simple; and 3) the algorithm simultaneously identifies a stable number of clusters and clusters the data. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
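The purity objective mentioned in the abstract above has a standard definition that is easy to state in code: for each cluster, count the most common ground-truth label, sum those counts over clusters, and divide by the number of points. A minimal sketch on assumed toy data (not the transportation datasets used in the paper):

```python
from collections import Counter

def purity(cluster_ids, labels):
    """Fraction of points covered by each cluster's majority label."""
    by_cluster = {}
    for cid, lab in zip(cluster_ids, labels):
        by_cluster.setdefault(cid, []).append(lab)
    majority = sum(Counter(labs).most_common(1)[0][1]
                   for labs in by_cluster.values())
    return majority / len(labels)

clusters = [0, 0, 0, 1, 1, 1]
labels   = ["a", "a", "b", "b", "b", "a"]
score = purity(clusters, labels)  # (2 + 2) / 6
```

A purity of 1.0 means every cluster is label-pure; the supervised algorithm above optimizes this jointly with within-cluster similarity.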
32. hbtSOTLearner: A Hierarchical-Backtracking-Based Parameter Learner for Small Outline Transistor.
- Author
-
Sun, Hao, Yu, Jinyong, Liu, Weihua, and Yang, Xianqiang
- Subjects
SURFACE mount technology ,GAUSSIAN mixture models ,TRANSISTORS ,PACKAGING waste - Abstract
This paper is concerned with parameter learning for chips with the small outline transistor (SOT) package, which is one of the most widely used packages in surface mount technology (SMT) and has various subcategories. Previously learned parameters are crucial to most SOT-related industrial applications, such as location and defect inspection. However, parameter learning is challenging because of package diversity and image-quality deterioration in practical industrial applications. The conventional methods, checking the data sheet or manual measurement, cannot meet the accuracy requirements of SMT. This paper proposes a hierarchical-backtracking-based parameter learner for SOT chips. A Gaussian mixture model based clustering algorithm and a random walker algorithm are first applied to extract the lead regions of the SOT chip; then, chip models are inferred by grouping these lead regions with a hierarchical backtracking algorithm. Finally, redundant models are eliminated with root set pyramids and the valid chip model is obtained. The experimental results show that the proposed parameter learner performs well on SOT chips and is robust to noisy sets. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
33. Algorithm Design and Analysis for Wireless Relay Network Deployment Problem.
- Author
-
Gao, Xiaofeng, Lu, Junwei, Wang, Haotian, Wu, Fan, and Chen, Guihai
- Subjects
LOCATION problems (Programming) ,SUBMODULAR functions ,APPROXIMATION algorithms ,ALGORITHMS ,HEURISTIC ,ARTIFICIAL membranes ,MULTICASTING (Computer networks) ,WIRELESS sensor networks - Abstract
Wireless relay networks have been widely used in many applications to improve wireless service. In this paper, we aim to maximize users' satisfaction by deploying a limited number of relays in a target region to form a wireless relay network, and define the Deployment of Cooperative Relay (DoCR) problem, which is proved to be NP-complete. We first propose two approximation algorithms: an $O(\log n)$ algorithm that utilizes the algorithms for the budget weighted Steiner tree problem with a novel position weighting assignment, and an $O(\sqrt{k})$ algorithm that iteratively scans potential positions and determines the relay placement plan with the help of submodular function theory, a partition technique, and a greedy strategy. We name them the Relay Effective Deployment Algorithm (REDA) and the Submodular Iterative Deployment Algorithm (SIDA), respectively. We further propose the Gradient-Descent Based Algorithm (GDBA), a heuristic method, to solve the DoCR problem with relaxed potential-location constraints. Our extensive experiments indicate that the algorithms we propose can significantly improve the total satisfaction of the network. Furthermore, we establish a testbed using USRP to showcase our designs in real scenarios. To the best of our knowledge, we are the first to propose approximation algorithms for the relay placement problem to maximize user satisfaction, which has both theoretical and practical significance in the related area. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
34. HEVC Encoding Optimization Using Multicore CPUs and GPUs.
- Author
-
Xiao, Wei, Li, Bin, Xu, Jizheng, Shi, Guangming, and Wu, Feng
- Subjects
DECODERS & decoding ,ENCODING ,VIDEO coding ,VIDEO codecs ,GRAPHICS processing units - Abstract
Although the High Efficiency Video Coding (HEVC) standard significantly improves the coding efficiency of video compression, it is unacceptable even in offline applications to spend several hours compressing 10 s of high-definition video. In this paper, we propose using a multicore central processing unit (CPU) and an off-the-shelf graphics processing unit (GPU) with 3072 streaming processors (SPs) for HEVC fast encoding, so that the speed optimization does not result in loss of coding efficiency. There are two key technical contributions in this paper. First, we propose an algorithm that is both parallel and fast for the GPU, which can utilize 3072 SPs in parallel to estimate the motion vector (MV) of every prediction unit (PU) in every combination of the coding unit (CU) and PU partitions. Furthermore, the proposed GPU algorithm can avoid coding efficiency loss caused by the lack of a MV predictor (MVP). Second, we propose a fast algorithm for the CPU, which can fully utilize the results from the GPU to significantly reduce the number of possible CU and PU partitions without any coding efficiency loss. Our experimental results show that compared with the reference software, we can encode high-resolution video that consumes 1.9% of the CPU time and 1.0% of the GPU time, with only a 1.4% rate increase. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
35. A Scalable Tile-Based Framework for Region-Merging Segmentation.
- Author
-
Lassalle, Pierre, Inglada, Jordi, Michel, Julien, Grizonnet, Manuel, and Malik, Julien
- Subjects
REMOTE sensing ,BIG data ,ENVIRONMENTAL monitoring ,NATURAL resources management ,DATA structures - Abstract
Processing large very high-resolution remote sensing images on resource-constrained devices is a challenging task because of the large size of these data sets. For applications such as environmental monitoring or natural resources management, complex algorithms have to be used to extract information from the images. The memory required to store the images and the data structures of such algorithms may be very high (hundreds of gigabytes) and therefore leads to infeasibility on commonly available computers. Segmentation algorithms constitute an essential step for the extraction of objects of interest in a scene and will be the topic of the investigation in this paper. The objective of the present work is to adapt image segmentation algorithms for large amounts of data. To overcome the memory issue, large images are usually divided into smaller image tiles, which are processed independently. Region-merging algorithms do not cope well with image tiling since artifacts are present on the tile edges in the final result due to the incoherencies of the regions across the tiles. In this paper, we propose a scalable tile-based framework for region-merging algorithms to segment large images, while ensuring identical results, with respect to processing the whole image at once. We introduce the original concept of the stability margin for a tile. It allows ensuring identical results to those obtained if the whole image had been segmented without tiling. Finally, we discuss the benefits of this framework and demonstrate the scalability of this approach by applying it to real large images. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
36. Probabilistic Voltage Management Using OLTC and dSTATCOM in Distribution Networks.
- Author
-
Pezeshki, Houman, Ledwich, Gerard, Arefi, Ali, and Wolfs, Peter
- Subjects
POWER distribution networks ,ELECTRIC potential ,PHOTOVOLTAIC power generation ,PARTICLE swarm optimization ,ENERGY storage ,PROBABILITY theory - Abstract
Low-voltage (LV) feeder voltage magnitude and unbalance are often the constraining factors on a feeder's capacity to absorb rooftop photovoltaic (PV) generation. This paper presents a new probabilistic method for voltage management in distribution networks through the placement of distribution static compensators (dSTATCOMs) and on-load tap changers (OLTCs) considering the reactive capability of PV inverters in multiple LV and medium-voltage distribution networks. The method uses a modified particle swarm optimization. In this paper, several scenarios for the placements of multiple dSTATCOMs with and without embedded energy storage systems using both reactive and real power compensation are investigated in combination with an OLTC equipped with independent per-phase tap-changing control. The voltage constraints in the proposed method are statistically defined using three duration curves. These are the voltage unbalance, maximum voltage, and minimum voltage duration curves. The method is comprehensively tested for varying load and PV generation based on data from a real Australian distribution network with considerable unbalance and distributed PV generation. The results show that PV hosting capacity increases where the proposed approach is applied. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
37. Lossless In-Network Processing and Its Routing Design in Wireless Sensor Networks.
- Author
-
Guo, Peng, Liu, Xuefeng, Cao, Jiannong, and Tang, Shaojie
- Abstract
In many domain-specific monitoring applications of wireless sensor networks (WSNs), such as structural health monitoring, volcano tomography, and machine diagnosis, the raw data in WSNs are required to be losslessly gathered to the sink, where a specialized centralized algorithm is then executed to extract some global features or model parameters. To reduce the large raw data transmission, in-network processing is usually employed. However, different from most existing in-network processing works that pre-assume some common computation/aggregation functions, in-network processing of a given centralized algorithm requires exact partitioning of the algorithm first and then appropriately assigning the partitioned computations into WSNs. We call this lossless in-network processing, which has not been studied much. Lossless in-network processing raises two questions: 1) what pattern should a centralized algorithm be partitioned into so that the partitioned computations can be flexibly assigned into a WSN with arbitrary topology? and 2) for each partition pattern, how should efficient routing for the resource-limited sensor nodes be designed? These two questions can be referred to as a topology-constrained computation partition problem and a computation-constrained routing design problem, respectively. In this paper, we first introduce some general patterns on the topology-constrained computation partition. Then, with the computation constraints in the patterns, we present a series of novel routing schemes customized for different cases of computation results. The work in this paper can also serve as a guideline for distributed computing of big data, where the data spreads in a large network. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
38. An Efficient Multiresolution Clustering for Motif Discovery in Complex Networks.
- Author
-
Pursalim, Mahdi and Keong, Kwoh Chee
- Abstract
Motif discovery and network clustering in complex networks have received a lot of attention in recent years, yet they remain challenging tasks in bioinformatics, big data analytics, and data mining applications. Motif discovery in big data networks has many important applications in different domains such as engineering, bioinformatics, cheminformatics, genomics, sociology, and ecology for revealing hidden frequent structures, functional building blocks, or knowledge discovery. In this paper, a motif localization method based on a novel clustering algorithm in complex networks is presented. In our method, for each complex network, a novel structure called an Augmented Multiresolution Network (AMN) is generated and then adaptively partitioned into several clusters and their corresponding subnets. Top-ranked subnets are then chosen to discover network motifs. We show that the proposed method provides an efficient solution for clustering and motif discovery: it speeds up current motif discovery algorithms by pruning non-promising regions of complex networks. Experimental results show our algorithm efficiently deals with complex networks representing large, high-dimensional datasets such as big scientific data. Our method also provides motivation for future studies in big data and complex networks. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
39. A Bi-Level Consensus ADMM-Based Fully Distributed Inverter-Based Volt/Var Control Method for Active Distribution Networks.
- Author
-
Ju, Yuntao, Zhang, Zifeng, Wu, Wenchuan, Liu, Wenwu, and Zhang, Ruosi
- Subjects
DISTRIBUTED algorithms ,DATA privacy ,COMPUTATIONAL complexity ,MATHEMATICAL optimization ,PARALLEL algorithms - Abstract
The distributed Volt/Var control solution is a promising one for active distribution networks (ADNs), but a critical issue is how to improve the distributed algorithm's convergence with less communication burden. In this paper, we propose a bi-level consensus alternating direction method of multipliers-based fully distributed Volt/Var optimization algorithm (B-FADMM) for ADNs by exploiting inverter-based rapid control devices. In the first level, the Volt/Var optimal solution of subpartitions is obtained in parallel, and the second level completes the variable increment of each partition to correct the search direction of the first level. In the algorithm, first, an adaptive matrix $\omega $ initialization strategy is proposed to adapt to the different magnitudes of the variables, which avoids non-convergence or slow convergence of the algorithm. Second, the null-space method is used in the second level of the algorithm to obtain the null-space basis matrix on the active constraints of the subpartition, which reduces the dimensions of the problem and the computational complexity. In addition, the conjugate gradient (CG) algorithm is used to update the dual multiplier of each partition only by exchanging coupling information among partitions, which protects information privacy. Experiments showed that the proposed approach attains the same result as the centralized method, while its computational performance is far superior to several popular distributed methods in terms of accuracy, computational efficiency, and scalability. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
40. Exact Recovery and Sharp Thresholds of Stochastic Ising Block Model.
- Subjects
ISING model ,RANDOM graphs ,STOCHASTIC models ,PARALLEL algorithms ,STOCHASTIC processes - Abstract
The stochastic block model (SBM) is a random graph model in which the edges are generated according to the underlying cluster structure on the vertices. The (ferromagnetic) Ising model, on the other hand, assigns ±1 labels to vertices according to an underlying graph structure in a way that if two vertices are connected in the graph then they are more likely to be assigned the same label. In SBM, one aims to recover the underlying clusters from the graph structure while in Ising model, an extensively-studied problem is to recover the underlying graph structure based on i.i.d. samples (labelings of the vertices). In this paper, we propose a natural composition of SBM and the Ising model, which we call the Stochastic Ising Block Model (SIBM). In SIBM, we take SBM in its simplest form, where $n$ vertices are divided into two equal-sized clusters and the edges are connected independently with probability $p$ within clusters and $q$ across clusters. Then we use the graph $G$ generated by the SBM as the underlying graph of the Ising model and draw $m$ i.i.d. samples from it. The objective is to exactly recover the two clusters in SBM from the samples generated by the Ising model, without observing the graph $G$. As the main result of this paper, we establish a sharp threshold $m^\ast $ on the sample complexity of this exact recovery problem in a properly chosen regime, where $m^\ast $ can be calculated from the parameters of SIBM. We show that when $m\ge m^\ast $ , one can recover the clusters from $m$ samples in $O(n)$ time as the number of vertices $n$ goes to infinity. When $m < m^\ast $ , we further show that for almost all choices of parameters of SIBM, the success probability of any recovery algorithms approaches 0 as $n\to \infty $. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
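The SBM in its simplest form, as used in the SIBM abstract above, is straightforward to simulate: split $n$ vertices into two equal clusters and connect each pair independently with probability $p$ within a cluster and $q$ across. A minimal sketch of the graph-generation step only (the paper additionally draws i.i.d. Ising samples on the resulting graph; parameters here are illustrative):

```python
import random

def sbm_two_clusters(n, p, q, seed=0):
    """Generate an edge list for a two-cluster SBM on vertices 0..n-1.
    Vertices below n//2 carry label +1, the rest label -1."""
    rng = random.Random(seed)
    labels = [1 if v < n // 2 else -1 for v in range(n)]
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            prob = p if labels[u] == labels[v] else q
            if rng.random() < prob:
                edges.append((u, v))
    return labels, edges

labels, edges = sbm_two_clusters(40, p=0.5, q=0.05)
# With p >> q, most edges fall within clusters.
within = sum(1 for u, v in edges if labels[u] == labels[v])
```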
41. Efficient Link Scheduling Solutions for the Internet of Things Under Rayleigh Fading.
- Author
-
Yu, Kan, Yu, Jiguo, Cheng, Xiuzhen, Yu, Dongxiao, and Dong, Anming
- Subjects
DISTRIBUTED algorithms ,INTERNET of things ,RAYLEIGH model ,SCHEDULING - Abstract
Link scheduling is an appealing solution for ensuring the reliability and latency requirements of the Internet of Things (IoT). Most existing results on the link scheduling problem were based on the graph or SINR (Signal-to-Interference-plus-Noise-Ratio) models, which ignore the impact of the random fading gain on signal strength. In this paper, we address the link scheduling problem under the Rayleigh fading model. Both the Shortest Link Scheduling (SLS) and Maximum Link Scheduling (MLS) problems are studied. In particular, we show that a set of links can be activated simultaneously under the Rayleigh fading model if all link SINR constraints are satisfied. Based on an analysis of the previous Link Diversity Partition (LDP) algorithm, we propose an Improved LDP (ILDP) algorithm and a centralized algorithm that localizes the global interference (denoted by CLT), building on which we design a distributed CLT algorithm (denoted by RCRDCLT) that converges to a constant approximation factor of the optimum with a time complexity of $O(\ln n)$ , where $n$ is the number of links. Furthermore, executing RCRDCLT repeatedly can solve the SLS with an approximation factor of $\Theta (\ln n)$. Extensive simulations indicate that CLT is more effective than six popular previous link scheduling algorithms, and that RCRDCLT has the lowest time complexity while losing only a constant fraction of the optimum schedule. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
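The core feasibility question in the abstract above, whether a set of links can be active simultaneously once Rayleigh fading randomizes the channel gains, can be illustrated with a small sketch. Under Rayleigh fading the received power gain is exponentially distributed; everything else here (path-loss model, parameter values, function names) is an illustrative assumption, not the paper's model.

```python
import math
import random

def sinr_feasible(links, power=1.0, alpha=3.0, noise=1e-9, beta=2.0, seed=None):
    """Test whether all links meet their SINR threshold at once under one
    Rayleigh-fading realization (each power gain is an i.i.d. Exp(1) draw).
    links: list of ((sx, sy), (rx, ry)) sender/receiver coordinate pairs."""
    rng = random.Random(seed)
    n = len(links)

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Received power from sender i at receiver j: fading gain * path loss.
    recv = {}
    for i in range(n):
        for j in range(n):
            fading = rng.expovariate(1.0)
            recv[i, j] = power * fading * dist(links[i][0], links[j][1]) ** (-alpha)

    for j in range(n):
        interference = sum(recv[i, j] for i in range(n) if i != j)
        if recv[j, j] / (noise + interference) < beta:
            return False
    return True
```

A scheduler in this spirit would repeatedly pick candidate link sets and keep those that pass such a check.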
42. Optimal Spectrum Partitioning and Licensing in Tiered Access Under Stochastic Market Models.
- Author
-
Saha, Gourav and Abouzeid, Alhussein A.
- Subjects
STOCHASTIC models ,MONTE Carlo method ,ALGORITHMS - Abstract
We consider the problem of partitioning a spectrum band into $M$ channels of equal bandwidth, and then further assigning these $M$ channels into $P$ licensed channels and $M-P$ unlicensed channels. Licensed channels can be accessed both for licensed and opportunistic use following a tiered structure that has a higher priority for licensed use. Unlicensed channels can be accessed only for opportunistic use. We address the following question in this paper. Given a market setup, what values of $M$ and $P$ maximize the net spectrum utilization of the spectrum band? While this problem is fundamental, it is highly relevant practically, e.g., in the context of partitioning the recently proposed Citizens Broadband Radio Service band. If $M$ is too high or too low, it may decrease spectrum utilization due to limited channel capacity or due to wastage of channel capacity, respectively. If $P$ is too high (low), it will not incentivize the wireless operators who are primarily interested in unlicensed channels (licensed channels) to join the market. These tradeoffs are captured in our optimization problem which manifests itself as a two-stage Stackelberg game. We design an algorithm to solve the Stackelberg game and hence find the optimal $M$ and $P$. The algorithm design also involves an efficient Monte Carlo integrator to evaluate the expected value of the involved random variables like spectrum utilization and operators’ revenue. We also benchmark our algorithms using numerical simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
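The expected quantities fed into the Stackelberg solver above (spectrum utilization, operators' revenue) have no closed form in general, hence the Monte Carlo integrator. A generic estimator with a toy stand-in integrand follows; the integrand and all parameter values are our own illustration, not the paper's market model.

```python
import math
import random

def mc_expectation(sample_fn, n_samples=20000, seed=0):
    """Plain Monte Carlo estimate of E[f(X)], returned with its standard error."""
    rng = random.Random(seed)
    vals = [sample_fn(rng) for _ in range(n_samples)]
    mean = sum(vals) / n_samples
    var = sum((v - mean) ** 2 for v in vals) / (n_samples - 1)
    return mean, math.sqrt(var / n_samples)

def toy_utilization(rng, capacity=5.0, rate=0.5):
    """Stand-in integrand: utilization min(demand, capacity) with
    exponentially distributed demand."""
    return min(rng.expovariate(rate), capacity)

mean, se = mc_expectation(toy_utilization)
# Closed form for this toy case: E = (1 - exp(-rate * capacity)) / rate
```

In the paper's setting such an estimator would be called inside the optimization loop for each candidate $(M, P)$.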
43. Group Partition and Dynamic Rate Adaptation for Scalable Capacity-Region-Aware Device-to-Device Communications.
- Author
-
Liou, Yi-Shing, Gau, Rung-Hung, and Chang, Chung-Ju
- Abstract
In this paper, we propose using group partition and dynamic rate adaptation for scalable throughput optimization of capacity-region-aware device-to-device communications. We adopt network information theory that allows a receiving device to simultaneously decode multiple packets from multiple transmitting devices, as long as the vector of transmitting rates is inside the capacity region. Based on graph theory, devices are first partitioned into subgroups. To optimize the throughput of a subgroup, instead of directly solving an integer-linear programming problem, we propose using a fast iterative algorithm to select active devices and using aggression levels for rate adaptation based on channel state information. Simulation results show that the proposed algorithm is scalable and could significantly outperform the greedy algorithm by more than 50%. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
44. A New Selective Clustering Ensemble Algorithm.
- Author
-
Limin, Liu and Xiaoping, Fan
- Abstract
Selective clustering ensemble methods usually rely on a reference partition to select members of the ensemble. The common way to generate the reference partition is to use a preliminary ensemble result, yet this cannot eliminate the influence of inferior clustering partitions, and the final clustering result is unsatisfactory. To solve this problem, the paper proposes a new selective clustering ensemble algorithm. The new algorithm makes two contributions: (1) selecting the best reference partition based on a clustering validity evaluation, and (2) putting forward a new selection strategy and a method for weighting members. The experimental results show that the new algorithm is effective and that clustering accuracy can be significantly improved. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
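The member-selection step described above can be illustrated with a pairwise-agreement (Rand-style) score against the reference partition. The threshold rule and the agreement-based weighting here are illustrative stand-ins, not the paper's exact criteria.

```python
from itertools import combinations

def pair_agreement(a, b):
    """Fraction of point pairs on which two labelings agree about
    same-cluster vs. different-cluster (the Rand index)."""
    pairs = list(combinations(range(len(a)), 2))
    same = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return same / len(pairs)

def select_members(members, reference, threshold=0.7):
    """Keep only ensemble members that agree with the reference partition
    above a threshold; weight each survivor by its agreement score."""
    kept = []
    for m in members:
        w = pair_agreement(m, reference)
        if w >= threshold:
            kept.append((m, w))
    return kept
```

Because agreement is computed over pairs, relabeled copies of the same partition score 1.0, so label permutations do not affect selection.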
45. Ant Colony optimization technique to Solve the min-max multi depot vehicle routing problem.
- Author
-
Narasimha, Koushik S. Venkata, Kivelevitch, Elad, and Kumar, Manish
- Abstract
In this paper, we extend our work on solving the min-max single-depot vehicle routing problem, published in the proceedings of the ACC 2011, to the min-max multi-depot vehicle routing problem. The min-max multi-depot vehicle routing problem involves minimizing the maximum distance travelled by any vehicle when vehicles start from multiple depots and must visit each customer location (or city) at least once. This problem is of specific significance in time-critical applications such as emergency response in large-scale disasters and server-client network latency. In this paper we extend the ant-colony-based algorithm proposed in our previous paper and introduce a novel way to address the min-max multi-depot vehicle routing problem. The approach uses a region partitioning method developed by Carlsson et al. to convert the multi-depot problem into multiple single-depot instances. A computer simulation model was developed in MATLAB. In addition, a comparison with the existing Carlsson model has been carried out in terms of solution optimality and computational time. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
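The reduction from multi-depot to single-depot instances can be sketched as follows. Note this uses a simple nearest-depot assignment as a stand-in; the paper uses Carlsson et al.'s region partitioning method, which additionally balances the regions.

```python
import math

def partition_by_nearest_depot(customers, depots):
    """Split customers among depots by nearest-depot assignment, turning the
    multi-depot problem into independent single-depot instances.
    (Simplified stand-in for Carlsson et al.'s equitable region partition.)"""
    groups = {i: [] for i in range(len(depots))}
    for c in customers:
        i = min(range(len(depots)),
                key=lambda k: math.hypot(c[0] - depots[k][0], c[1] - depots[k][1]))
        groups[i].append(c)
    return groups
```

Each resulting group can then be handed to a single-depot min-max solver (e.g., the ant colony routine) independently.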
46. SAP: Improving Continuous Top-K Queries Over Streaming Data.
- Author
-
Zhu, Rui, Wang, Bin, Yang, Xiaochun, Zheng, Baihua, and Wang, Guoren
- Subjects
DATA analysis ,ALGORITHMS ,LOGARITHMIC functions ,LOGARITHMS ,NUMERICAL analysis - Abstract
Continuous top-$k$
- Published
- 2017
- Full Text
- View/download PDF
47. Spherically Punctured Reed–Muller Codes.
- Author
-
Dumer, Ilya and Kapralova, Olga
- Subjects
HYPERCUBES ,POLYNOMIALS ,HAMMING weight ,TYMPANIC membrane perforation ,REED-Muller codes - Abstract
Consider a binary Reed–Muller code $RM(r,m)$ defined on the $m$-dimensional hypercube $\mathbb{F}_2^m$. In this paper, we study punctured Reed–Muller codes $P_r(m,b)$, whose positions are restricted to the $m$-tuples of a given Hamming weight $b$. In combinatorial terms, this paper concerns $m$-variate Boolean polynomials of any degree $r$, which are evaluated on a Hamming sphere of some radius $b$. Codes $P_r(m,b)$ inherit some recursive properties of RM codes. In particular, they can be built from shorter codes by decomposing a spherical $b$-layer into sub-layers of smaller dimensions. However, these sub-layers have different sizes and do not form the classical Plotkin construction. We analyze the recursive properties of the spherically punctured codes $P_r(m,b)$ and find their distances for arbitrary values of the parameters $r$, $m$, and $b$. Finally, we describe recursive (successive cancellation) decoding of these codes. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
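The spherical puncturing described above is easy to make concrete: keep only the evaluation points of Hamming weight $b$, and evaluate a degree-$r$ Boolean polynomial there. A minimal sketch (representing a polynomial as a list of monomials, each a tuple of variable indices, is our own encoding choice):

```python
from itertools import combinations

def sphere_positions(m, b):
    """All binary m-tuples of Hamming weight b: the positions kept by P_r(m,b)."""
    return [tuple(1 if i in ones else 0 for i in range(m))
            for ones in combinations(range(m), b)]

def evaluate(monomials, point):
    """Evaluate a Boolean polynomial over GF(2) at a 0/1 point.
    monomials: list of tuples of variable indices, e.g. [(0,), (1, 2)]."""
    return sum(all(point[i] for i in mono) for mono in monomials) % 2

# A codeword of the punctured code: the polynomial restricted to the sphere.
def codeword(monomials, m, b):
    return [evaluate(monomials, p) for p in sphere_positions(m, b)]
```

The code length is $\binom{m}{b}$ rather than $2^m$, which is exactly what the sphere restriction buys.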
48. Partition Search Revisited.
- Author
-
Beling, Piotr
- Abstract
Partition search is a form of game search proposed by Matthew L. Ginsberg in 1996, who wrote that the method “incorporates dependency analysis, allowing substantial reductions in the portion of the tree that needs to be expanded.” In this paper, some improvements of the partition search algorithm are proposed. The effectiveness of the most important extension we contribute, which we call local partition search, has been verified experimentally. The results obtained (which we present in the paper) show that using this extension leads, in the case of bridge, to a significant reduction (almost by half) of the search tree size and calculation time. Another extension we propose allows more effective usage of the transposition table (using it to narrow the search window or to cut more than one entry). Additionally, we contribute a formal proof of the correctness of all presented partition search variants. We draw conclusions from it about a possible generalization of partition search by making the definition of a partition system less restrictive. We also provide a formal definition of a partition system for double dummy bridge. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
49. Efficient Client Assignment for Client-Server Systems.
- Author
-
Zhu, Yuqing, Wu, Weili, and Li, Deying
- Abstract
Many distributed systems use a client-server model in which the client assignment strategy plays an important role in system performance. Two criteria are used to evaluate server loads: 1) total load and 2) load balance. The total load increases when the load balance decreases, and vice versa. It has been proved that finding the best client assignment is NP-hard. In this paper, we propose a new model for the client assignment problem and design algorithms based on semidefinite programming. We study the identical-server case and the general-server case, present two algorithms (BSP and ABSP), and analyze their bounds. In simulations, we show that our client assignment strategies achieve satisfactory total load and load balance in reasonable time compared to the state of the art, demonstrating the effectiveness of our algorithms. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
50. Guaranteed Matrix Completion via Non-Convex Factorization.
- Author
-
Sun, Ruoyu and Luo, Zhi-Quan
- Subjects
MATRICES (Mathematics) ,FACTORIZATION ,MATHEMATICAL optimization ,ALGORITHMS ,RESAMPLING (Statistics) - Abstract
Matrix factorization is a popular approach for large-scale matrix completion. The optimization formulation based on matrix factorization, even with huge size, can be solved very efficiently through the standard optimization algorithms in practice. However, due to the non-convexity caused by the factorization model, there is a limited theoretical understanding of whether these algorithms will generate a good solution. In this paper, we establish a theoretical guarantee for the factorization-based formulation to correctly recover the underlying low-rank matrix. In particular, we show that under similar conditions to those in previous works, many standard optimization algorithms converge to the global optima of a factorization-based formulation and recover the true low-rank matrix. We study the local geometry of a properly regularized objective and prove that any stationary point in a certain local region is globally optimal. A major difference of this paper from the existing results is that we do not need resampling (i.e., using independent samples at each iteration) in either the algorithm or its analysis. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF