571 results on "MENGCHU ZHOU"
Search Results
2. Multi-Swarm Co-Evolution Based Hybrid Intelligent Optimization for Bi-Objective Multi-Workflow Scheduling in the Cloud
- Author
-
MengChu Zhou, Yushun Fan, Yuanqing Xia, Huifang Li, and Wang Danjing
- Subjects
Job shop scheduling, Computer science, Distributed computing, Swarm behaviour, Cloud computing, Dynamic priority scheduling, Scheduling (computing), Local optimum, Workflow, Computational Theory and Mathematics, Hardware and Architecture, Signal Processing, Simulated annealing
- Abstract
Many scientific applications can be well modeled as large-scale workflows. Cloud computing has become a suitable platform for hosting and executing them. Workflow scheduling has gained much attention in recent years. However, since cloud service providers must offer services for multiple users with various QoS demands, scheduling multiple applications with different QoS requirements is highly challenging. This work proposes a Multi-swarm Co-evolutionary-based Hybrid Optimization (MCHO) algorithm for multiple-workflow scheduling to minimize total makespan and cost while meeting workflow deadline constraints. First, we design a multi-swarm co-evolutionary mechanism where three swarms are adopted to sufficiently search for various elite solutions. Second, to improve global search and convergence performance, we embed local and global guiding information into the updating process of a Particle Swarm Optimizer, and develop a swarm cooperation technique. Third, we propose a Genetic Algorithm-based elite enhancement strategy to exploit more non-dominated individuals, and apply the Metropolis Acceptance rule of Simulated Annealing to update the local guiding solution for each swarm so as to prevent it from being stuck in a local optimum at an early stage. Extensive experimental results demonstrate that MCHO outperforms the state-of-the-art scheduling algorithms with better-distributed non-dominated solutions.
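To make the Metropolis Acceptance rule mentioned above concrete, here is a minimal sketch (illustrative, not the authors' code): a worse candidate may still replace a swarm's local guide with a temperature-dependent probability, which is what lets the search escape local optima early on.

```python
# Minimal sketch of the Metropolis acceptance rule from simulated
# annealing, as used to update a swarm's local guiding solution.
import math
import random

def metropolis_accept(current_cost, candidate_cost, temperature):
    """Return True if the candidate should replace the local guide."""
    if candidate_cost <= current_cost:      # better solution: always accept
        return True
    delta = candidate_cost - current_cost   # worse solution: accept with
    return random.random() < math.exp(-delta / temperature)  # prob e^(-d/T)

# At a high temperature a worse candidate is often accepted;
# at a low temperature it almost never is.
print(metropolis_accept(10.0, 12.0, temperature=5.0))
print(metropolis_accept(10.0, 12.0, temperature=0.1))
```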
- Published
- 2022
3. Energy Consumption and Performance Optimized Task Scheduling in Distributed Data Centers
- Author
-
MengChu Zhou, Jing Bi, Jia Zhang, and Haitao Yuan
- Subjects
Mathematical optimization, Computer science, Response time, Cloud computing, Energy consumption, Computer Science Applications, Scheduling (computing), Human-Computer Interaction, Reduction (complexity), Task (computing), Control and Systems Engineering, Differential evolution, Electrical and Electronic Engineering, Software, Efficient energy use
- Abstract
A growing number of organizations are hosting their software applications in distributed data centers (DCs) in the cloud, for faster response time and higher energy efficiency. The dramatic increase of user tasks, however, poses a significant challenge for DC providers to meet users' expectations in both aspects. To tackle this challenge, this work first formulates the problem as a constrained biobjective optimization problem. A biobjective algorithm, named simulated-annealing-based adaptive differential evolution (SADE), is presented to simultaneously reduce both the response time of tasks and the energy cost. Meanwhile, a method of minimal Manhattan distance is adopted to search for a final knee solution, for achieving a good balance between response time minimization and energy cost reduction. Experimental results on real-life datasets, i.e., electricity prices and tasks collected from a Google cluster trace, prove that SADE yields shorter task response time and lower energy cost than state-of-the-art algorithms.
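The knee selection by minimal Manhattan distance can be sketched in a few lines (an assumed form, not the paper's code): normalize both objectives, then pick the non-dominated solution closest to the ideal point under the L1 norm.

```python
# Minimal sketch of knee-point selection via minimal Manhattan distance.
import numpy as np

def knee_by_manhattan(front):
    """front: array of shape (n, 2) holding (response_time, energy_cost)."""
    f = np.asarray(front, dtype=float)
    lo, hi = f.min(axis=0), f.max(axis=0)
    norm = (f - lo) / np.where(hi > lo, hi - lo, 1.0)  # scale to [0, 1]
    return int(np.argmin(norm.sum(axis=1)))            # minimal L1 distance

pareto = [(2.0, 9.0), (3.0, 4.0), (8.0, 1.0)]
print(pareto[knee_by_manhattan(pareto)])  # balanced trade-off: (3.0, 4.0)
```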
- Published
- 2022
4. Precedence-Constrained Colored Traveling Salesman Problem: An Augmented Variable Neighborhood Search Approach
- Author
-
Xinghuo Yu, Xiangping Xu, MengChu Zhou, and Jun Li
- Subjects
Travel, Mathematical optimization, Computer science, Heuristic (computer science), Constrained optimization, Travelling salesman problem, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Topological sorting, Local search (optimization), Electrical and Electronic Engineering, Greedy algorithm, Metaheuristic, Algorithms, Software, Variable neighborhood search, Information Systems
- Abstract
A colored traveling salesman problem (CTSP), a generalization of the well-known multiple traveling salesman problem, utilizes colors to distinguish the accessibility of individual cities to salesmen. This work formulates a precedence-constrained CTSP (PCTSP) over hypergraphs with asymmetric city distances. It is capable of modeling problems whose operations or activities are constrained by precedence relationships in many applications. Two types of precedence constraints are taken into account, i.e., 1) among individual cities and 2) among city clusters. An augmented variable neighborhood search (VNS) called POPMUSIC-based VNS (PVNS) is proposed as the main framework for solving PCTSP. It harnesses a partial optimization metaheuristic under special intensification conditions to prepare candidate sets. Moreover, a topological sort-based greedy algorithm is developed to obtain a feasible solution at the initialization phase. Next, mutation and multi-insertion of constraint-preserving exchanges are combined to produce different neighborhoods of the current solution. Two kinds of constraint-preserving k-exchange are adopted to serve as a strong local search means. Extensive experiments are conducted on 34 cases. For the sake of comparison, the Lin-Kernighan heuristic, two genetic algorithms, and three VNS methods are adapted to PCTSP and fine-tuned by using an automatic algorithm configurator, the irace package. The experimental results show that PVNS outperforms them in terms of both search ability and convergence rate. In addition, a study of four PVNS variants, each lacking an important operator, reveals that all operators play significant roles in PVNS.
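A topological sort-based greedy initialization of the kind named above can be sketched as follows (hypothetical, for illustration): respect the precedence DAG among cities, and among the currently available cities greedily visit the nearest one.

```python
# Minimal sketch of a topological sort-based greedy initial tour.
from collections import deque

def greedy_topo_tour(n, dist, precedences):
    """n: city count; dist[i][j]: asymmetric distances;
    precedences: list of (a, b) meaning city a must precede city b."""
    succ = {i: [] for i in range(n)}
    indeg = [0] * n
    for a, b in precedences:
        succ[a].append(b)
        indeg[b] += 1
    ready = deque(i for i in range(n) if indeg[i] == 0)
    tour, cur = [], None
    while ready:
        # greedy choice: nearest ready (precedence-free) city
        nxt = min(ready, key=lambda c: 0 if cur is None else dist[cur][c])
        ready.remove(nxt)
        tour.append(nxt)
        cur = nxt
        for s in succ[nxt]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return tour  # a feasible, precedence-respecting initial solution

dist = [[0, 2, 9], [2, 0, 4], [9, 4, 0]]
print(greedy_topo_tour(3, dist, [(0, 2)]))  # city 0 before city 2
```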
- Published
- 2022
5. Surrogate-Assisted Autoencoder-Embedded Evolutionary Optimization Algorithm to Solve High-Dimensional Expensive Problems
- Author
-
MengChu Zhou, Li Li, Abdullah Abusorrah, and Meiji Cui
- Subjects
Optimization algorithm, Computer science, Evolutionary algorithm, High dimensional, Machine learning, Autoencoder, Theoretical Computer Science, Data modeling, Model management, Computational Theory and Mathematics, Benchmark (computing), High dimensional optimization, Artificial intelligence, Software
- Abstract
Surrogate-assisted evolutionary algorithms have been intensively used to solve computationally expensive problems with some success. However, traditional evolutionary algorithms are not suitable for high-dimensional expensive problems (HEPs) with a high-dimensional search space even if their fitness evaluations are assisted by surrogate models. The recently proposed autoencoder-embedded evolutionary optimization (AEO) framework is highly appropriate for high-dimensional problems. This work incorporates surrogate models into it to further boost its performance, resulting in surrogate-assisted autoencoder-embedded evolutionary optimization (SAEO). It proposes a novel model management strategy that guarantees a reasonable number of re-evaluations, so that the accuracy of surrogate models can be enhanced by updating them with newly evaluated samples. Moreover, to ensure enough data samples before constructing surrogates, a problem-dimensionality-dependent activation condition is developed for incorporating surrogates into the SAEO framework. SAEO is tested on seven commonly used benchmark functions and compared with state-of-the-art algorithms for HEPs. The experimental results show that SAEO further enhances the performance of AEO in most cases and performs significantly better than the other algorithms. Therefore, SAEO has great potential for HEPs.
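A dimensionality-dependent activation condition of the kind mentioned above could look like the following sketch (the logic and the `factor` multiplier are assumptions, not the paper's rule): surrogates are used only once the archive of exactly evaluated samples is large enough relative to the search-space dimension.

```python
# Minimal sketch of a problem-dimensionality-dependent activation condition.
def surrogate_active(num_evaluated, dim, factor=2):
    """Activate surrogate modeling once enough samples exist;
    `factor` is a hypothetical multiplier on the dimensionality."""
    return num_evaluated >= factor * dim

archive_size, dimension = 150, 100
if surrogate_active(archive_size, dimension):
    print("train/update surrogate and pre-screen offspring")
else:
    print("keep evaluating with the true (expensive) function")
```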
- Published
- 2022
6. Kullback–Leibler Divergence-Based Fuzzy C-Means Clustering Incorporating Morphological Reconstruction and Wavelet Frames for Image Segmentation
- Author
-
Witold Pedrycz, Zhiwu Li, MengChu Zhou, and Cong Wang
- Subjects
Kullback–Leibler divergence, Computer science, Feature vector, Wavelet Analysis, Wavelet, Fuzzy Logic, Image Processing, Computer-Assisted, Cluster Analysis, Segmentation, Electrical and Electronic Engineering, Divergence (statistics), Cluster analysis, Pattern recognition, Image segmentation, Magnetic Resonance Imaging, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Feature (computer vision), Artificial intelligence, Algorithms, Software, Information Systems
- Abstract
Although spatial information of images usually enhances the robustness of the Fuzzy C-Means (FCM) algorithm, it greatly increases the computational costs for image segmentation. To achieve a sound trade-off between segmentation performance and clustering speed, we come up with a Kullback-Leibler (KL) divergence-based FCM algorithm by incorporating a tight wavelet frame transform and a morphological reconstruction operation. To enhance FCM's robustness, an observed image is first filtered by using morphological reconstruction. A tight wavelet frame system is employed to decompose the observed and filtered images so as to form their feature sets. Considering these feature sets as the data to cluster, a modified FCM algorithm is proposed, which introduces a KL divergence term on the partition matrix into its objective function. The KL divergence term aims to make the membership degrees of each image pixel closer to those of its neighbors, which makes the membership partition more suitable and simplifies the parameter setting of FCM. On the basis of the obtained partition matrix and prototypes, the segmented feature set is reconstructed by minimizing the inverse process of the modified objective function. To correct abnormal features produced in the reconstruction process, each reconstructed feature is reassigned to the closest prototype. As a result, the segmentation accuracy of KL divergence-based FCM is further improved. Moreover, the segmented image is reconstructed by using a tight wavelet frame reconstruction operation. Finally, supporting experiments coping with synthetic, medical, and color images are reported. Experimental results exhibit that the proposed algorithm works well and comes with better segmentation performance than the comparative algorithms, while requiring less time than most FCM-related algorithms.
Comment: This paper has been withdrawn by the author due to a crucial definition error in the objective function.
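For orientation, one plausible shape of such a KL-regularized FCM objective is sketched below; the notation (memberships, prototypes, neighbor averages, trade-off weight) is assumed for illustration and not copied from the paper.

```latex
% A plausible form of an FCM objective with a KL divergence regularizer
% on the partition matrix: u_{ij} is the membership of pixel j in
% cluster i, v_i a prototype, \bar{u}_{ij} the average membership over
% pixel j's neighbors, and \lambda trades off data fidelity against
% membership smoothness.
\[
J = \sum_{i=1}^{c}\sum_{j=1}^{n} u_{ij}^{m}\,\lVert x_j - v_i\rVert^2
  + \lambda \sum_{i=1}^{c}\sum_{j=1}^{n} u_{ij}\,
    \log\frac{u_{ij}}{\bar{u}_{ij}}
\]
```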
- Published
- 2022
7. Green Energy Forecast-Based Bi-Objective Scheduling of Tasks Across Distributed Clouds
- Author
-
Jing Bi, Jia Zhang, MengChu Zhou, and Haitao Yuan
- Subjects
Mathematical optimization, Control and Optimization, Computational Theory and Mathematics, Hardware and Architecture, Renewable Energy, Sustainability and the Environment, Computer science, Bi-objective, Scheduling (production processes), Software, Renewable energy
- Published
- 2022
8. A Hybrid Prediction Method for Realistic Network Traffic With Temporal Convolutional Network and LSTM
- Author
-
Haitao Yuan, Jia Zhang, Jing Bi, Xiang Zhang, and MengChu Zhou
- Subjects
Artificial neural network, Computer science, Deep learning, Feature extraction, Power (physics), Support vector machine, Control and Systems Engineering, Filter (video), Artificial intelligence, Noise (video), Data mining, Electrical and Electronic Engineering, Time series
- Abstract
Accurate and real-time prediction of network traffic can not only help system operators allocate resources rationally according to their actual business needs but also help them assess the performance of a network and analyze its health status. In recent years, neural networks have proven suitable for predicting time series data, represented by the long short-term memory (LSTM) neural network and the temporal convolutional network (TCN). This article proposes a novel hybrid prediction method named SG and TCN-based LSTM (ST-LSTM) for network traffic prediction, which synergistically combines the power of the Savitzky-Golay (SG) filter, the TCN, and the LSTM. ST-LSTM employs a three-phase end-to-end methodology for time series prediction. It first eliminates noise in raw data using the SG filter, then extracts short-term features from sequences with the TCN, and finally captures the long-term dependence in the data with the LSTM. Experimental results over real-world datasets demonstrate that the proposed ST-LSTM outperforms state-of-the-art algorithms in terms of prediction accuracy.
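The three-phase pipeline can be sketched as follows (architecture assumed from the abstract, not the authors' code): Savitzky-Golay smoothing, a dilated 1-D convolution standing in for a TCN block, then an LSTM over the extracted features.

```python
# Minimal sketch of an SG-filter -> TCN-style conv -> LSTM pipeline.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import savgol_filter

class STLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # TCN-style block: dilated 1-D convolution for short-term features
        self.tcn = nn.Conv1d(1, hidden, kernel_size=3, dilation=2, padding=4)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                                 # x: (batch, 1, time)
        h = torch.relu(self.tcn(x))[..., :x.size(-1)]     # trim to input length
        h, _ = self.lstm(h.transpose(1, 2))               # (batch, time, hidden)
        return self.head(h[:, -1])                        # predict next value

raw = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.randn(200)
smooth = savgol_filter(raw, window_length=11, polyorder=3)  # phase 1: denoise
x = torch.tensor(smooth, dtype=torch.float32).view(1, 1, -1)
print(STLSTM()(x).shape)  # torch.Size([1, 1])
```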
- Published
- 2022
9. Spatiotemporal Analysis of Mobile Phone Network Based on Self-Organizing Feature Map
- Author
-
MengChu Zhou, Naiqi Wu, Mohammadhossein Ghahramani, and Yan Qiao
- Subjects
Computer Networks and Communications, Hardware and Architecture, Computer science, Feature (computer vision), Mobile phone, Signal Processing, Spatio-Temporal Analysis, Pattern recognition, Artificial intelligence, Computer Science Applications, Information Systems
- Published
- 2022
10. Geography-Aware Task Scheduling for Profit Maximization in Distributed Green Data Centers
- Author
-
Haitao Yuan, MengChu Zhou, and Jing Bi
- Subjects
Operations research, Computer Networks and Communications, Hardware and Architecture, Computer science, For profit, Cloud computing, Maximization, Software, Computer Science Applications, Information Systems, Scheduling (computing)
- Published
- 2022
11. MO4: A Many-Objective Evolutionary Algorithm for Protein Structure Prediction
- Author
-
Zhenyu Lei, Shangce Gao, MengChu Zhou, Zhiming Zhang, and Jiujun Cheng
- Subjects
Source code, Computer science, Deep learning, Evolutionary algorithm, Function (mathematics), Protein structure prediction, Machine learning, Evolutionary computation, Theoretical Computer Science, Protein structure, Computational Theory and Mathematics, Artificial intelligence, Software, Energy (signal processing)
- Abstract
Protein structure prediction (PSP) problems are a major biocomputing challenge, owing to their scientific value in helping researchers understand the relationship between amino acid sequences and protein structures and study the function of proteins. Although computational resources have increased substantially over the last decade, a complete solution to PSP problems by computational methods has not yet been obtained. Using only one energy function is insufficient to characterize proteins because of their complexity. Diverse protein energy functions and evolutionary computation algorithms have been extensively studied to assist the prediction of protein structures in different ways. Such algorithms are able to provide a better protein with a lower computational resource requirement than deep learning methods. For the first time, this study proposes a many-objective protein structure prediction (MaOPSP) problem with four types of objectives to alleviate the impact of imprecise energy functions for predicting protein structures. A many-objective evolutionary algorithm (MaOEA) is utilized to solve MaOPSP. The proposed method is compared with existing methods by examining thirty-four proteins. An analysis of the objectives demonstrates that the generated conformations are more reasonable than those generated by single/multi-objective optimization methods. Experimental results indicate that solving a PSP problem as an MaOPSP problem with four objectives yields better protein structure predictions, in terms of both accuracy and efficiency. The source code of the proposed method can be found at https://toyamaailab.github.io/sourcedata.html.
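At the core of any many-objective method is the Pareto dominance test over the objective vectors; a minimal sketch for four objectives follows (illustrative only, assuming all four energy terms are minimized).

```python
# Minimal sketch of the Pareto dominance test with four objectives:
# a dominates b if it is no worse in every objective and strictly
# better in at least one.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

conf_a = (1.2, 0.8, 3.1, 0.5)   # four objective values of conformation a
conf_b = (1.5, 0.8, 3.4, 0.9)
print(dominates(conf_a, conf_b))  # True: a is at least as good everywhere
```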
- Published
- 2022
12. Endpoint Communication Contention-Aware Cloud Workflow Scheduling
- Author
-
Quanwang Wu, MengChu Zhou, and Junhao Wen
- Subjects
Schedule, Job shop scheduling, Computer science, Heuristic (computer science), Distributed computing, Cloud computing, Scheduling (computing), Task (computing), Workflow, Control and Systems Engineering, Virtual machine, Electrical and Electronic Engineering
- Abstract
Cloud platforms have recently become a popular target execution environment for numerous workflow applications. Hence, effective workflow scheduling strategies in cloud environments are in high demand. However, existing scheduling algorithms are grounded on an idealized target platform model where virtual machines are fully connected and all communications can be performed concurrently. A significant aspect neglected by them is endpoint communication contention when executing workflows, which has a large impact on workflow makespan. This article investigates how to incorporate contention awareness into cloud workflow scheduling and proposes a new practical scheduling model. An endpoint communication contention-aware List Scheduling Heuristic (ELSH) is designed to minimize workflow makespan. It uses a novel task ranking property and schedules data communications to communication resources besides scheduling tasks to computing resources. Moreover, a rescheduling technique is employed to improve the schedule. In experiments, ELSH is evaluated against a traditional contention-oblivious list scheduling algorithm, which is adapted to address contention during execution in practice. The experimental results reveal that ELSH performs more efficaciously than the adapted traditional one.
- Published
- 2022
13. Intelligent Fault Diagnosis for Large-Scale Rotating Machines Using Binarized Deep Neural Networks and Random Forests
- Author
-
MengChu Zhou, Jianqiang Li, Guangzheng Hu, and Huifang Li
- Subjects
Artificial neural network, Computer science, Feature extraction, Process (computing), Pattern recognition, Fault (power engineering), Random forest, Control and Systems Engineering, Classifier (linguistics), Artificial intelligence, Enhanced Data Rates for GSM Evolution, Electrical and Electronic Engineering, Edge computing
- Abstract
Recently, deep neural network (DNN) models have worked incredibly well, and edge computing has achieved great success in real-world scenarios, such as fault diagnosis for large-scale rotating machinery. However, DNN training takes a long time due to its complex calculation, which makes it difficult to optimize and retrain models. To address such an issue, this work proposes a novel fault diagnosis model by combining binarized DNNs (BDNNs) with improved random forests (RFs). First, a BDNN-based feature extraction method with binary weights and activations in the training process is designed to reduce the model runtime without losing the accuracy of feature extraction. Its generated features are used to train an RF-based fault classifier to relieve the information loss caused by binarization. Second, considering the possible classification accuracy reduction resulting from very similar binarized features of two instances with different classes, we replace the Gini index with ReliefF as the attribute evaluation measure in training RFs to further enhance the separability of the fault features extracted by the BDNN and accordingly improve the fault identification accuracy. Third, an edge computing-based fault diagnosis mode is proposed to increase diagnostic efficiency, where our diagnosis model is deployed distributedly on a number of edge nodes close to the end rotating machines in distinct locations. Extensive experiments are conducted to validate the proposed method on data sets from rolling element bearings, and the results demonstrate that, in almost all cases, its diagnostic accuracy is competitive with state-of-the-art DNNs and even higher in some cases due to a form of regularization. Benefiting from the relatively low computing and storage requirements of BDNNs, the model is easy to deploy on edge nodes to realize real-time fault diagnosis concurrently.
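Binarized feature extraction of the kind described above is commonly built on a sign activation with a straight-through estimator for gradients; a minimal sketch (illustrative, not the paper's BDNN) follows.

```python
# Minimal sketch of a binarized linear layer: weights and activations
# are constrained to {-1, +1} in the forward pass, while a
# straight-through estimator lets gradients flow during training.
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, g):
        (x,) = ctx.saved_tensors
        return g * (x.abs() <= 1).float()   # pass gradient inside [-1, 1]

class BinaryLinear(nn.Linear):
    def forward(self, x):
        return nn.functional.linear(BinarizeSTE.apply(x),
                                    BinarizeSTE.apply(self.weight), self.bias)

feats = BinaryLinear(8, 4)(torch.randn(2, 8))
print(feats.shape)  # binarized features, e.g. fed to a random forest
```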
- Published
- 2022
14. Safety-Guaranteed and Development Cost- Minimized Scheduling of DAG Functionality in an Automotive System
- Author
-
Shengjie Xu, MengChu Zhou, Zhengcai Cao, and Biao Hu
- Subjects
Schedule, Computer science, Mechanical Engineering, Reliability (computer networking), Automotive industry, Directed acyclic graph, Automotive Safety Integrity Level, Computer Science Applications, Reliability engineering, Scheduling (computing), Automotive Engineering, Benchmark (computing), Heuristics
- Abstract
It is important to sufficiently guarantee an automotive system's safety, because failures may otherwise have terrible consequences. Generally, safety in an automotive system includes two aspects: reliability and timeliness. Previous studies have proposed many approaches to improving them. However, few of them consider the development cost along with this improvement. In this study, we aim to propose a method that can build a safety-guaranteed and development cost-minimized schedule for functionality modeled as a directed acyclic graph running on an automotive system. Unlike previous studies that tightly couple development cost minimization with other requirements, we start by building a schedule with the minimum development cost while ignoring the safety requirement. Then, reliability and real-time requirements are subsequently taken into consideration. Together with the automotive safety integrity level decomposition options provided by the international standard ISO 26262, decomposition is evaluated for each task to improve its safety, and tasks are then successively chosen to adjust the schedule, such that its safety can be maximized while incurring the least extra development cost. This procedure continues until a schedule that meets the safety requirement is built. Experiments on a real-life automotive benchmark and extensive synthetic functionality demonstrate that our proposed heuristics outperform a state-of-the-art heuristic algorithm and a typical intelligent optimization algorithm.
- Published
- 2022
15. Energy-Efficient and QoS-Optimized Adaptive Task Scheduling and Management in Clouds
- Author
-
Haitao Yuan, MengChu Zhou, and Jing Bi
- Subjects
Computer science, Quality of service, Distributed computing, Cloud computing, Multi-objective optimization, Scheduling (computing), Smart grid, Control and Systems Engineering, Server, Simulated annealing, Electrical and Electronic Engineering, Efficient energy use
- Abstract
The enormous energy consumed by clouds becomes a significant challenge for cloud providers and smart grid operators. Due to performance concerns, applications typically run in different clouds located at multiple sites. In different clouds, many factors, including electricity prices, available servers, and task service rates, exhibit spatial variations. Therefore, it is important to manage and schedule tasks among multiple clouds in a high-quality-of-service and low-energy-cost manner. This work proposes a task scheduling method to jointly minimize the energy cost and average task loss possibility (ATLP) of clouds. A problem is formulated and tackled with an adaptive biobjective differential evolution algorithm based on simulated annealing to determine a real-time and near-optimal set of solutions. A final knee solution is further chosen to specify suitable servers in clouds and task allocation among web portals. Simulation results based on realistic data prove that it achieves a lower average task loss possibility and a smaller energy cost than its widely used peers.
- Published
- 2022
16. Online Scheduling and Route Planning for Shared Buses in Urban Traffic Networks
- Author
-
Ricky Y. K. Kwok, Bin Hu, Lei Guo, MengChu Zhou, Sun Shouming, Zhaolong Ning, Xiaojie Wang, and Xiping Hu
- Subjects
Computer science, Mechanical Engineering, Dynamic priority scheduling, Multi-objective optimization, Computer Science Applications, Scheduling (computing), System model, User experience design, Automotive Engineering, Last mile, Online algorithm, Operating cost, Computer network
- Abstract
It is critical to reduce the operating cost of shared buses for bus companies and improve the user experience of passengers. However, existing studies focus on either bus scheduling or route planning, which cannot accomplish the above-mentioned goals concurrently. In this paper, we construct a joint bus scheduling and route planning framework to maximize the number of passengers, minimize the total length of routes and the number of required buses, and guarantee a good user experience for passengers. First, we establish a system model based on a real-world scenario and formulate a multi-objective combinatorial optimization problem. Then, based on the extracted traffic topology of urban traffic networks and the generated candidate line set, we propose an offline algorithm to cope with similar passenger flow distributions, e.g., the morning or evening peak of every day. To cope with dynamic real-time passenger flows, an online algorithm is designed. Experiments are carried out based on real-world scenarios. The results show that, compared with several existing methods on real-world scheduling data, the proposed algorithms can greatly reduce the operating cost of bus companies and guarantee a good user experience.
- Published
- 2022
17. A Distributed Framework for Large-scale Protein-protein Interaction Data Analysis and Prediction Using MapReduce
- Author
-
Huaqiang Yuan, Khaled Sedraoui, MengChu Zhou, Shicheng Yang, Lun Hu, and Xin Luo
- Subjects
Sequence, Computer science, Noise reduction, Scale (chemistry), Deep learning, Data structure, Task (project management), Tree (data structure), Memory management, Artificial Intelligence, Control and Systems Engineering, Data mining, Artificial intelligence, Information Systems
- Abstract
Protein-protein interactions are of great significance for humans to understand the functional mechanisms of proteins. With the rapid development of high-throughput genomic technologies, massive protein-protein interaction (PPI) data have been generated, making it very difficult to analyze them efficiently. To address this problem, this paper presents a distributed framework by reimplementing one of the state-of-the-art algorithms, i.e., CoFex, using MapReduce. To do so, an in-depth analysis of its limitations is conducted from the perspectives of efficiency and memory consumption when applying it to large-scale PPI data analysis and prediction. Respective solutions are then devised to overcome these limitations. In particular, we adopt a novel tree-based data structure to reduce the heavy memory consumption caused by the huge sequence information of proteins. After that, its procedure is modified by following the MapReduce framework to perform the prediction task distributively. A series of extensive experiments have been conducted to evaluate the performance of our framework in terms of both efficiency and accuracy. Experimental results well demonstrate that the proposed framework can improve the computational efficiency by more than two orders of magnitude while retaining the same high accuracy.
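The map/reduce split that such a distributed reimplementation relies on can be sketched in plain Python (illustrative only; the pair key, toy feature, and aggregation are assumptions, not the paper's jobs).

```python
# Minimal sketch of a MapReduce-style pipeline: mappers emit
# (protein_pair, feature) records extracted from sequence fragments,
# reducers aggregate the features per pair before scoring.
from collections import defaultdict

def mapper(record):
    """record: (protein_a, protein_b, sequence_fragment)."""
    a, b, seq = record
    yield (a, b), seq.count("K")      # a toy sequence feature

def reducer(key, values):
    return key, sum(values)           # aggregate features per pair

records = [("P1", "P2", "MKKL"), ("P1", "P2", "KKA"), ("P2", "P3", "MAL")]
shuffled = defaultdict(list)
for rec in records:                    # map + shuffle phases
    for k, v in mapper(rec):
        shuffled[k].append(v)
print([reducer(k, vs) for k, vs in shuffled.items()])  # reduce phase
```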
- Published
- 2022
18. G-Image Segmentation: Similarity-Preserving Fuzzy C-Means With Spatial Information Constraint in Wavelet Space
- Author
-
Shuzhi Sam Ge, MengChu Zhou, Zhiwu Li, Cong Wang, and Witold Pedrycz
- Subjects
Computer science, Fuzzy logic, Wavelet, Artificial Intelligence, Euclidean geometry, Segmentation, Spatial analysis, Pixel, Applied Mathematics, Pattern recognition, Image segmentation, Computational Theory and Mathematics, Control and Systems Engineering, Graph (abstract data type), Artificial intelligence
- Abstract
G-images refer to image data defined on irregular graph domains. This work elaborates a similarity-preserving Fuzzy C-Means (FCM) algorithm for G-image segmentation and aims to develop techniques and tools for segmenting G-images. To preserve the membership similarity between an arbitrary image pixel and its neighbors, a Kullback-Leibler divergence term on the membership partition is introduced as a part of FCM. As a result, similarity-preserving FCM is developed by considering the spatial information of image pixels for its robustness enhancement. Due to the superior characteristics of a wavelet space, the proposed FCM is performed in this space rather than the Euclidean one used in conventional FCM to secure its high robustness. Experiments on synthetic and real-world G-images demonstrate that it indeed achieves higher robustness and performance than the state-of-the-art FCM algorithms. Moreover, it requires less computation than most of them.
Comment: This paper has been withdrawn by the author since some statements are not correct, as raised by other researchers.
- Published
- 2021
19. Novel Analog Implementation of a Hyperbolic Tangent Neuron in Artificial Neural Networks
- Author
-
Fatemeh Mohammadi Shakiba and MengChu Zhou
- Subjects
Artificial neural network, Computer science, Hyperbolic function, Activation function, Memristor, Software, CMOS, Neuromorphic engineering, Control and Systems Engineering, Electronic engineering, Electrical and Electronic Engineering, MNIST database
- Abstract
Recently, enormous datasets have placed power dissipation and area usage at the heart of designs for artificial neural networks (ANNs). Considering the significant role of activation functions in neurons and the growth of hardware-based neural networks such as memristive neural networks, this work proposes a novel design for a hyperbolic tangent activation function (Tanh) to be used in memristive-based neuromorphic architectures. The purpose of implementing a CMOS-based design for Tanh is to decrease power dissipation and area usage. This design also increases the overall speed of computation in ANNs, while keeping the accuracy in an acceptable range. The proposed design is one of the first analog designs for the hyperbolic tangent, and its performance is analyzed by using two well-known datasets, the Modified National Institute of Standards and Technology (MNIST) dataset and Fashion-MNIST. The direct implementation of the proposed design for Tanh is presented and investigated via software and hardware modeling.
- Published
- 2021
20. Multiobjective Optimized Cloudlet Deployment and Task Offloading for Mobile-Edge Computing
- Author
-
MengChu Zhou and Xiaojian Zhu
- Subjects
Mobile edge computing, Computer Networks and Communications, Computer science, Distributed computing, Network delay, Population, Cloud computing, Multi-objective optimization, Computer Science Applications, Hardware and Architecture, Software deployment, Signal Processing, Cloudlet, Edge computing, Information Systems
- Abstract
Mobile-edge computing provides an effective approach to reducing the workload of smart devices and the network delay induced by data transfer through deploying computational resources in the proximity of the devices. In a mobile-edge computing system, it is of great importance to improve the quality of experience of users and reduce the deployment cost for service providers. This article investigates a joint cloudlet deployment and task offloading problem with the objectives of minimizing energy consumption and task response delay of users and the number of deployed cloudlets. Since it is a multiobjective optimization problem, a set of tradeoff solutions ought to be found. After formulating this problem as a mixed-integer nonlinear program and proving its NP-completeness, we propose a modified guided population archive whale optimization algorithm to solve it. The superiority of our devised algorithm over other methods is confirmed through extensive simulations.
- Published
- 2021
21. Rapid Detection of Blind Roads and Crosswalks by Using a Lightweight Semantic Segmentation Network
- Author
-
Biao Hu, MengChu Zhou, Zhengcai Cao, and Xiaowen Xu
- Subjects
Computer science, Mechanical Engineering, Feature extraction, Pooling, Context (language use), Image segmentation, Computer Science Applications, Kernel (image processing), Feature (computer vision), Automotive Engineering, Segmentation, Computer vision, Pyramid (image processing), Artificial intelligence
- Abstract
Achieving high accuracy in recognizing blind roads and crosswalks is important for blind-guiding equipment to help blind people sense the surrounding environment. A lightweight semantic segmentation network is proposed to quickly and accurately segment blind roads and crosswalks in a complex road environment. Specifically, a lightweight network with depthwise separable convolution as a component is used as a basic module to reduce the number of parameters of the model and increase the speed of semantic segmentation. To ensure the segmentation accuracy of the network, we use a densely connected atrous spatial pyramid pooling module to extract feature information of different angles and context feature modules to enhance the effectiveness of fusing different levels of feature information. To verify the effectiveness of the proposed method, we collect and produce a data set from a real environment, which contains two objects: blind roads and crosswalks (available at https://github.com/qweawq/Blind-road-and-crosswalk-dataset). Experimental results demonstrate that, compared to some state-of-the-art approaches, the proposed approach greatly improves the segmentation speed while achieving better or similar accuracy, which shows that it provides a better basis for the application of devices for guiding the blind.
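The depthwise separable convolution used as the basic module above factorizes a standard convolution into a per-channel spatial convolution and a 1x1 pointwise convolution; a minimal sketch follows (layer sizes are illustrative).

```python
# Minimal sketch of a depthwise separable convolution block: a
# per-channel 3x3 (depthwise) convolution followed by a 1x1 pointwise
# convolution, which cuts parameters roughly by the kernel area.
import torch
import torch.nn as nn

class DepthwiseSeparable(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)

    def forward(self, x):
        return torch.relu(self.pointwise(self.depthwise(x)))

block = DepthwiseSeparable(32, 64)
print(block(torch.randn(1, 32, 56, 56)).shape)        # (1, 64, 56, 56)
print(sum(p.numel() for p in block.parameters()))     # far fewer than a full 3x3 conv
```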
- Published
- 2021
22. Adjusting Learning Depth in Nonnegative Latent Factorization of Tensors for Accurately Modeling Temporal Patterns in Dynamic QoS Data
- Author
-
MengChu Zhou, Hao Wu, Zhigang Liu, Huaqiang Yuan, Chen Minzhi, and Xin Luo
- Subjects
Computer science, Multiplicative function, Big data, Missing data, Data modeling, Factorization, Control and Systems Engineering, Convergence (routing), Tensor, Electrical and Electronic Engineering, Representation (mathematics), Algorithm
- Abstract
A nonnegative latent factorization of tensors (NLFT) model precisely represents the temporal patterns hidden in multichannel data emerging from various applications. It often adopts a single latent factor-dependent, nonnegative and multiplicative update on tensor (SLF-NMUT) algorithm. However, the learning depth in this algorithm is not adjustable, resulting in frequent training fluctuations or poor model convergence caused by overshooting. To address this issue, this study carefully investigates the connections between the performance of an NLFT model and its learning depth via SLF-NMUT and presents a joint learning-depth-adjusting scheme for it. Based on this scheme, a Depth-adjusted Multiplicative Update on tensor algorithm is proposed, thereby achieving a novel depth-adjusted nonnegative latent-factorization-of-tensors (DNL) model. Empirical studies on two industrial data sets demonstrate that, compared with state-of-the-art NLFT models, a DNL model achieves a significant accuracy gain when performing missing data estimation on a high-dimensional and incomplete tensor with high efficiency. Note to Practitioners — Multichannel data are often encountered in various big-data-related applications. It is vital for a data analyzer to correctly capture the temporal patterns hidden in them for efficient knowledge acquisition and representation. This article focuses on analyzing temporal QoS data, a representative kind of multichannel data. To correctly extract their temporal patterns, an analyzer should correctly describe their nonnegativity. Such a purpose can be achieved by building a nonnegative latent factorization of tensors (NLFT) model relying on a single latent factor-dependent, nonnegative and multiplicative update on tensor (SLF-NMUT) algorithm. But its learning depth is not adjustable, making an NLFT model frequently suffer from severe fluctuations in its training error or even fail to converge. To address this issue, this study carefully investigates the learning rules for an NLFT model's decision parameters using an SLF-NMUT and proposes a joint learning-depth-adjusting scheme. This scheme manipulates the multiplicative terms in SLF-NMUT-based learning rules linearly and exponentially, thereby making the learning depth adjustable. Based on it, this study builds a novel depth-adjusted nonnegative latent-factorization-of-tensors (DNL) model. Compared with existing NLFT models, a DNL model better represents multichannel data. It meets industrial needs well and can be used to achieve high performance in data analysis tasks like temporal-aware missing data estimation.
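The exponential manipulation of multiplicative terms described above can be sketched as follows (the form is assumed from the abstract, not the exact DNL rules): raising the multiplicative ratio to a power makes each step shallower or deeper while preserving nonnegativity.

```python
# Minimal sketch of a depth-adjusted nonnegative multiplicative update:
# gamma < 1 damps overshooting (shallower steps), gamma > 1 deepens them.
import numpy as np

def depth_adjusted_update(U, numerator, denominator, gamma=0.8):
    """SLF-NMUT-style step U *= num/den, with adjustable depth gamma."""
    ratio = numerator / np.maximum(denominator, 1e-12)  # keep nonnegative
    return U * ratio ** gamma

U = np.full(3, 0.5)
num, den = np.array([2.0, 0.5, 1.0]), np.array([1.0, 1.0, 1.0])
print(depth_adjusted_update(U, num, den, gamma=1.0))  # plain update
print(depth_adjusted_update(U, num, den, gamma=0.5))  # damped (shallower)
```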
- Published
- 2021
23. Digital Technologies and Automation: The Human and Eco-Centered Foundations for the Factory of the Future [TC Spotlight]
- Author
-
George Q. Huang, Birgit Vogel-Heuser, MengChu Zhou, and Paolo Dario
- Subjects
Engineering, Emerging technologies, Production efficiency, Automation, Manufacturing engineering, Computer Science Applications, Personalization, Control and Systems Engineering, High productivity, Robot, Factory (object-oriented programming), Digital manufacturing, Electrical and Electronic Engineering
- Abstract
Reports on the factory of the future, where automation dominates operations. Digital manufacturing processes have been so extensively transformed by ubiquitous connectivity and collaborative robots that the industry of the future pursues high productivity, efficiency, and customization by becoming increasingly collaborative, connected, and cognitive (IC3). In fact, these new technologies have fostered a remarkable increase in production efficiency and exceptional economic growth worldwide.
- Published
- 2021
24. Temporal Task Scheduling of Multiple Delay-Constrained Applications in Green Hybrid Cloud
- Author
-
Haitao Yuan, Jing Bi, and MengChu Zhou
- Subjects
Information Systems and Management, Computer Networks and Communications, Computer science, Distributed computing, Particle swarm optimization, Cloud computing, Maximization, Profit (economics), Computer Science Applications, Scheduling (computing), Outsourcing, Hardware and Architecture, Task analysis, Revenue
- Abstract
A growing number of global companies select Green Data Centers (GDCs) to manage their delay-constrained applications. The fast growth of users' tasks dramatically increases the energy consumed by GDCs owned by a company, e.g., Google and Amazon. The random nature of tasks brings a big challenge of scheduling the tasks of each application with the limited infrastructure resources of GDCs. Therefore, hybrid clouds are widely employed to smartly outsource some tasks to public clouds. However, the temporal variation in many factors, including revenue, the price of the power grid, solar irradiance, wind speed, and the prices of public clouds, makes it challenging to schedule all tasks of each application in a cost-effective way while strictly meeting their expected delay constraints. This work proposes a temporal task scheduling algorithm investigating the temporal variation in a green hybrid cloud to schedule all tasks within their delay constraints. Besides, it explicitly presents a mathematical formulation of service rates and task refusal. The maximization problem is formulated and tackled by the proposed hybrid optimization algorithm called Genetic Simulated-annealing-based Particle Swarm Optimization. Trace-driven experiments demonstrate that a larger profit is achieved than with several existing scheduling algorithms.
- Published
- 2021
25. Effective Visual Domain Adaptation via Generative Adversarial Distribution Matching
- Author
-
Kai Zhang, Qi Kang, Abdullah Abusorrah, MengChu Zhou, and Siya Yao
- Subjects
Matching (statistics), Artificial neural network, Contextual image classification, Computer Networks and Communications, Computer science, Machine learning, Field (computer science), Computer Science Applications, Visualization, Artificial Intelligence, Artificial intelligence, Adaptation (computer science), Software
- Abstract
In the field of computer vision, it is challenging to train an accurate model without sufficient labeled images. However, through visual adaptation from source to target domains, a relevant labeled dataset can help solve such a problem. Many methods apply adversarial learning to diminish cross-domain distribution differences and can greatly enhance the performance on target classification tasks. A generative adversarial network (GAN) loss is widely used in adversarial adaptation learning methods to reduce the cross-domain distribution difference. However, it becomes difficult to reduce such a distribution difference if the generator or discriminator in a GAN fails to work as expected and degrades its performance. To solve such cross-domain classification problems, we put forward a novel adaptation framework called generative adversarial distribution matching (GADM). In GADM, we improve the objective function by taking the cross-domain discrepancy distance into consideration and further minimize the difference through the competition between a generator and a discriminator, thereby greatly decreasing the cross-domain distribution difference. Experimental results and comparison with several state-of-the-art methods verify GADM's superiority in image classification across domains.
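One common way to quantify the kind of cross-domain discrepancy distance mentioned above is maximum mean discrepancy (MMD); the sketch below is illustrative of that general idea and is not the GADM implementation.

```python
# Minimal sketch of RBF-kernel maximum mean discrepancy (MMD), a
# standard cross-domain discrepancy measure that adaptation methods
# often minimize alongside an adversarial loss.
import torch

def rbf_mmd(x, y, sigma=1.0):
    """x, y: (n, d) and (m, d) feature batches from two domains."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

src = torch.randn(64, 16)
tgt = torch.randn(64, 16) + 0.5    # shifted target features
print(float(rbf_mmd(src, tgt)))    # larger value = bigger domain gap
```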
- Published
- 2021
26. Making accurate object detection at the edge: review and new approach
- Author
-
MengChu Zhou, Chen Lin, Zhenhua Huang, Yang Shunzhi, Zheng Huang, Gong Zheng, and Abdullah Abusorrah
- Subjects
Linguistics and Language, Edge device, Computer science, Object (computer science), Convolutional neural network, Language and Linguistics, Object detection, Task (computing), Computer engineering, Artificial Intelligence, Enhanced Data Rates for GSM Evolution, Internet of Things, Edge computing
- Abstract
With the development of the Internet of Things (IoT), data increasingly appear at the edge of a network. Processing tasks at the network edge can effectively solve the problems of personal privacy leakage and server overloading, and has therefore attracted a great deal of attention. A number of efficient convolutional neural network (CNN) models have been proposed to do so. However, since they require much computing and memory resources, none of them can be deployed on such typical edge computing devices as the Raspberry Pi 3B+ and 4B+ to meet the real-time requirements of user tasks. Considering that a traditional machine learning method can precisely locate an object with a highly acceptable calculation load, this work reviews state-of-the-art literature and then proposes a CNN with reduced input size for an object detection system that can be deployed on edge computing devices. It splits an object detection task into object positioning and classification. In particular, this work proposes a CNN model with 44×44-pixel inputs, instead of the much larger inputs (e.g., 224×224 pixels) of many existing methods, for edge computing devices with slow memory access and limited computing resources. Its overall performance has been verified via a facial expression detection system realized on a Raspberry Pi 3B+ and 4B+. The work makes accurate object detection at the edge possible.
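A small-input classifier in the spirit of this design can be sketched as follows (layer sizes and the seven-class head are hypothetical): the 44×44 crop comes from a separate, classical object locator, so the CNN only classifies.

```python
# Minimal sketch of a small CNN over 44x44-pixel crops, sized for
# resource-constrained edge devices.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes=7):      # e.g., 7 facial expressions
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 22x22
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 11x11
        )
        self.classifier = nn.Linear(32 * 11 * 11, num_classes)

    def forward(self, x):                    # x: (batch, 1, 44, 44) crop
        return self.classifier(self.features(x).flatten(1))

crop = torch.randn(1, 1, 44, 44)             # region found by the locator
print(TinyClassifier()(crop).shape)          # torch.Size([1, 7])
```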
- Published
- 2021
27. An Improved Discriminative Model Prediction Approach to Real-Time Tracking of Objects With Camera as Sensors
- Author
-
Abdullah Abusorrah, MengChu Zhou, Yusuf Al-Turki, Hua Han, and Luyao Zhang
- Subjects
Computer science, BitTorrent tracker, Deep learning, Motion blur, Feature extraction, Tracking (particle physics), Sensor fusion, Discriminative model, Feature (computer vision), Computer vision, Artificial intelligence, Electrical and Electronic Engineering, Instrumentation
- Abstract
Generic person tracking is a basic task in visual surveillance using cameras as sensors. Many deep learning-based trackers have obtained outstanding performance. Among them, trackers based on Siamese networks have drawn great attention and are promising. Their training methods are competitive, but training data can be more effectively augmented to improve person tracking performance. Many other trackers use only one layer to extract semantic features, likely hindering their discriminative learning. In this paper, we propose an enhanced discriminative model prediction method with efficient data augmentation and robust feature fusion. Specifically, we implement an effective data augmentation strategy (e.g., color jitter and motion blur) to unleash the greater potential of the original training data. We also adopt multi-layer feature fusion to obtain a more discriminative feature map. Thus, the proposed tracker can discriminate an object in complicated scenarios in real time. We conduct extensive experiments on two datasets, i.e., VOT2018 and UAV123. Objective evaluation on VOT2018 demonstrates that, with an expected average overlap value of 0.430, our tracker outperforms a state-of-the-art tracker by 4.88%. On UAV123, it does so by 4.5% in success rate and 4.4% in precision rate. In addition, our further experimental results reveal that our algorithm can reach a speed high enough to meet the real-time tracking requirement when cameras are used as sensors.
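An augmentation pipeline of the kind described (color jitter plus blur) can be sketched with torchvision; the parameters are hypothetical, and since torchvision has no motion-blur transform, Gaussian blur stands in for it here.

```python
# Minimal sketch of a color-jitter + blur augmentation pipeline for
# training frames.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
])

frame = torch.rand(3, 128, 128)      # a training frame (C, H, W) in [0, 1]
print(augment(frame).shape)          # same shape, perturbed appearance
```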
- Published
- 2021
28. Deep Learning-Based Model Predictive Control for Continuous Stirred-Tank Reactor System
- Author
-
Jing Bi, Gongming Wang, Qing-Shan Jia, MengChu Zhou, and Junfei Qiao
- Subjects
Computer Networks and Communications, Computer science, Deep learning, Stability (learning theory), System identification, Continuous stirred-tank reactor, Computer Science Applications, Model predictive control, Deep belief network, Artificial Intelligence, Control theory, Artificial intelligence, Quadratic programming, Software
- Abstract
A continuous stirred-tank reactor (CSTR) system is widely applied in wastewater treatment processes. Its control is a challenging industrial-process-control problem due to the great difficulty of achieving accurate system identification. This work proposes a deep learning-based model predictive control (DeepMPC) method to model and control the CSTR system. The proposed DeepMPC consists of a growing deep belief network (GDBN) and an optimal controller. First, the GDBN can automatically determine its size with transfer learning to achieve high performance in system identification, and it serves as a predictive model of the controlled system. The model can accurately approximate the dynamics of the controlled system with a uniformly ultimately bounded error. Second, quadratic optimization is conducted to obtain an optimal controller. This work analyzes the convergence and stability of DeepMPC. Finally, DeepMPC is used to model and control a second-order CSTR system. In the experiments, DeepMPC shows better performance in modeling, tracking, and antidisturbance than the other state-of-the-art methods.
- Published
- 2021
29. Profit-Maximized Collaborative Computation Offloading and Resource Allocation in Distributed Cloud and Edge Computing Systems
- Author
-
MengChu Zhou and Haitao Yuan
- Subjects
Schedule, Edge device, Computer science, Distributed computing, Cloud computing, Control and Systems Engineering, Server, Computation offloading, Resource allocation, Enhanced Data Rates for GSM Evolution, Electrical and Electronic Engineering, Edge computing
- Abstract
Edge computing is a new architecture to provide computing, storage, and networking resources for realizing the Internet of Things. It brings computation to the network edge in close proximity to users. However, nodes in the edge have limited energy and resources, and running tasks entirely in the edge may cause poor performance. Cloud data centers (CDCs) have rich resources for executing tasks, but they are located in places far away from users, leading to long transmission delays and large financial costs for utilizing resources. Therefore, it is essential to smartly offload users' tasks between a CDC layer and an edge computing layer. This work proposes a cloud and edge computing system that has a terminal layer, an edge computing layer, and a CDC layer. Based on it, this work designs a profit-maximized collaborative computation offloading and resource allocation algorithm to maximize the profit of systems and guarantee that the response time limits of tasks are strictly met. In each time slot, this work jointly considers CPU, memory, and bandwidth resources, the load balance of all heterogeneous nodes in the edge layer, the maximum amount of energy, the maximum number of servers, and task queue stability in the CDC layer. Considering these factors, a single-objective constrained optimization problem is formulated and solved by a proposed simulated-annealing-based migrating birds optimization procedure to obtain a close-to-optimal solution. The proposed method achieves joint optimization of computation offloading between CDC and edge, and resource allocation in CDC. Realistic data-based simulation results demonstrate that it realizes higher profit than its peers. Note to Practitioners — This work considers the joint optimization of computation offloading between Cloud data center (CDC) and edge computing layers, and resource allocation in CDC. It is important to maximize the profit of distributed cloud and edge computing systems by optimally scheduling all tasks between them given user-specific response time limits of tasks. It is challenging to execute them in nodes in the edge computing layer because their computation resources and battery capacities are often constrained and heterogeneous. Current offloading methods fail to jointly optimize computation offloading and resource allocation for nodes in the edge and servers in CDC, and are too coarse-grained to schedule arriving tasks well. In this work, a novel algorithm is proposed to maximize the profit of distributed cloud and edge computing systems while meeting the response time limits of tasks. It explicitly specifies the task service rate and the selected node for each task in each time slot by considering resource limits, the load balance requirement, and the processing capacities of nodes in the edge, and server and energy constraints in CDC. Real-life data-driven simulations show that the proposed method realizes a larger profit than several typical offloading strategies. It can be readily implemented and incorporated into large-scale industrial computing systems.
- Published
- 2021
30. A Deep Latent Factor Model for High-Dimensional and Sparse Matrices in Recommender Systems
- Author
-
Di Wu, MengChu Zhou, Xin Luo, Mingsheng Shang, Yi He, and Guoyin Wang
- Subjects
Artificial neural network, Computer science, Deep learning, RSS, Recommender system, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Artificial intelligence, Data mining, Electrical and Electronic Engineering, Software, Sparse matrix
- Abstract
Recommender systems (RSs) commonly adopt a user-item rating matrix to describe users' preferences on items. With the numbers of users and items exploding, such a matrix is usually high-dimensional and sparse (HiDS). Recently, the idea of deep learning has been applied to RSs. However, current deep-structured RSs suffer from high computational complexity. Enlightened by the idea of deep forest, this paper proposes a deep latent factor model (DLFM) for building a deep-structured RS on an HiDS matrix efficiently. Its main idea is to construct a deep-structured model by sequentially connecting multiple latent factor (LF) models through a nonlinear activation function, instead of using multilayered neural networks. Thus, the computational complexity grows linearly with the layer count, which is easy to handle in practice. The experimental results on four HiDS matrices from industrial RSs demonstrate that, compared with state-of-the-art LF models and deep-structured RSs, DLFM can well balance prediction accuracy and computational efficiency, which well fits the desire of industrial RSs for fast and accurate recommendations.
- Published
- 2021
31. A Performance-Optimized Consensus Mechanism for Consortium Blockchains Consisting of Trust-Varying Nodes
- Author
-
MengChu Zhou, PeiYun Zhang, Omaimah Bamasag, QiXi Zhao, and Abdullah Abusorrah
- Subjects
Immutability, Computer Networks and Communications, Computer science, Fault tolerance, Field (computer science), Computer Science Applications, Mechanism (engineering), Control and Systems Engineering, Node (computer science), Throughput (business), Block (data storage), Anonymity
- Abstract
Blockchain technology has wide applications in the fields of finance, public welfare, and the Internet of Things. Owing to a blockchain's characteristics, which include decentralization, openness, autonomy, immutability, and anonymity, it is difficult to quickly reach a reliable consensus result among its nodes. This work proposes a performance-optimized consensus mechanism based on node classification. Nodes are classified into accounting, validating, and propagating ones based on their trust values. All accounting nodes form an accounting node group, from which one is selected as the current accounting node to package transactions into a block; the remaining nodes in the group can validate the block quickly, owing to their high trust values. Validating and propagating nodes are responsible for validating and propagating transactions, respectively. All nodes' trust values are dynamically updated according to their behavior and performance. Corresponding algorithms are designed to realize the proposed consensus mechanism. The experimental results show that the proposed consensus mechanism provides higher throughput, lower resource consumption, and higher fault tolerance than some popularly used methods, thereby advancing the field of consortium blockchains.
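The trust-based node classification can be sketched in a few lines (the thresholds and update increments are hypothetical, not taken from the paper): roles follow trust values, and trust is revised after each observed behavior.

```python
# Minimal sketch of classifying nodes by trust value and updating trust
# from observed behavior: good behavior earns a small reward, while
# misbehavior is punished strongly.
def classify(trust, t_account=0.8, t_validate=0.5):
    if trust >= t_account:
        return "accounting"
    return "validating" if trust >= t_validate else "propagating"

def update_trust(trust, behaved_well, reward=0.05, penalty=0.2):
    t = trust + reward if behaved_well else trust - penalty
    return min(1.0, max(0.0, t))   # clamp to [0, 1]

nodes = {"n1": 0.9, "n2": 0.6, "n3": 0.3}
print({n: classify(t) for n, t in nodes.items()})
print(update_trust(0.9, behaved_well=False))  # trust drops after misbehavior
```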
- Published
- 2021
32. Reliability-Aware and Deadline-Constrained Mobile Service Composition Over Opportunistic Networks
- Author
-
Yuandou Wang, Shu Wang, Mingwei Lin, MengChu Zhou, Chunrong Wu, Qinglan Peng, Shanchen Pang, Xin Luo, and Yunni Xia
- Subjects
Service (business) ,business.industry ,Computer science ,Quality of service ,Mobile computing ,Services computing ,Cloud computing ,Control and Systems Engineering ,Mobile telephony ,Electrical and Electronic Engineering ,business ,Mobile device ,Mobile service ,Computer network - Abstract
An opportunistic link between two mobile devices or nodes can be constructed when they are within each other’s communication range. Typically, cyber–physical environments consist of a number of mobile devices that are potentially able to establish opportunistic contacts and serve mobile applications in a cost-effective way. Opportunistic mobile service computing is a promising paradigm capable of utilizing the pervasive mobile computational resources around users. Mobile users are thus allowed to exploit nearby mobile services to boost their computing capabilities without investing in their own resource pools. Nevertheless, various challenges, especially quality-of-service and reliability-aware scheduling, are yet to be addressed. Existing studies and related scheduling strategies assume mobile users to be fully stable and available. In this article, we propose a novel method for reliability-aware and deadline-constrained service composition over opportunistic networks. We leverage a Krill–Herd-based algorithm to yield a deadline-constrained, reliability-aware, and well-executable service composition schedule based on the estimated completion time and reliability of schedule candidates. We carry out extensive case studies based on some well-known mobile service composition templates and a real-world opportunistic contact data set. The comparison results suggest that the proposed approach outperforms existing ones in terms of success rate and completion time of composed services. Note to Practitioners —Recently, the rapid development of mobile devices and mobile communication has led to the prosperity of mobile service computing. Services running on mobile devices within a limited range can be composed through wireless communication technologies to coordinate and perform complex tasks and business processes. Despite its great potential, mobile service composition remains a challenge, since the mobility of users and devices imposes high unpredictability on the execution of tasks. A careful investigation into existing methods has found various limitations, e.g., assuming time-invariant availability of mobile services. This article presents a novel reliability-aware and deadline-constrained service composition method for mobile opportunistic networks. Instead of assuming time-invariant availability of mobile nodes, the proposed method estimates service availability at run-time and leverages a Krill–Herd-based algorithm to yield deadline-constrained, reliability-aware, and well-executable service composition schedules. Case studies based on well-known service composition templates and real-world data sets suggest that it outperforms traditional ones in terms of success rate and completion time of composed services. It can thus aid the design and optimization of composite services as well as their smooth execution in a mobile environment, and help practitioners better manage the reliability and performance of real-world applications built upon mobile services.
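A minimal sketch of how a schedule candidate might be screened against the deadline and reliability requirements before the Krill–Herd search ranks it; a serial composition whose per-service reliabilities multiply is assumed here as an illustrative simplification:

    def candidate_ok(candidate, deadline, min_reliability):
        # candidate: list of services, each with estimated execution/transfer
        # times and an availability-derived reliability in (0, 1].
        total_time = sum(s['exec_time'] + s['transfer_time'] for s in candidate)
        reliability = 1.0
        for s in candidate:
            reliability *= s['reliability']     # serial composition assumption
        return total_time <= deadline and reliability >= min_reliability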
- Published
- 2021
33. Privacy-Preserving Behavioral Correctness Verification of Cross-Organizational Workflow With Task Synchronization Patterns
- Author
-
Hua Duan, MengChu Zhou, Jiujun Cheng, Cong Liu, Long Cheng, and Qingtian Zeng
- Subjects
0209 industrial biotechnology ,Correctness ,Computer science ,Process (engineering) ,business.industry ,02 engineering and technology ,Petri net ,Task (computing) ,020901 industrial engineering & automation ,Workflow ,Computer engineering ,Computer security ,Task analysis ,Synchronization ,Organizations ,Petri nets ,Privacy ,Standards organizations ,Behavioral correctness verification ,business privacy preservation ,cross-organizational workflow ,discrete event systems ,task synchronization pattern ,Control and Systems Engineering ,Synchronization (computer science) ,Key (cryptography) ,Electrical and Electronic Engineering ,Software engineering ,business - Abstract
Workflow management technology has become a key means to improve enterprise productivity. More and more workflow systems cross organizational boundaries and may involve multiple interacting organizations. This article focuses on a type of loosely coupled workflow architecture with collaborative tasks, i.e., each business partner owns its private business process and is able to operate independently, and all involved organizations need to be synchronized at certain points to complete certain public tasks. Because of privacy considerations, organizations are unwilling to share their business details with others. As a result, traditional correctness verification approaches via reachability analysis are not practical, since a global business process model is unavailable under privacy preservation. To ensure globally correct execution, this work establishes a correctness verification approach for cross-organizational workflows with task synchronization patterns. Its core idea is to use the local correctness of each suborganizational workflow process to guarantee global correctness. We prove that the proposed approach can be used to investigate behavioral property preservation when synthesizing suborganizational workflows via collaborative tasks. A medical diagnosis running case is used to illustrate the applicability of the proposed approach. Note to Practitioners —Cross-organizational workflow verification techniques play an increasingly important role in ensuring the correct execution of collaborative enterprise businesses. This work addresses the issue of correctness verification for loosely coupled interactive workflows with collaborative tasks. To ensure globally correct execution, a behavioral correctness verification approach is established. All proposed concepts and techniques are supported by open-source tools, and evaluation on a medical diagnosis process case has shown their applicability. The proposed methodology is readily applicable to industrial-size workflow correctness verification problems.
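For context, the global reachability analysis that privacy preservation rules out requires a complete Petri net model; a toy sketch of that analysis under an assumed encoding (markings as place-count maps, transitions as consume/produce dictionaries), purely for illustration:

    from collections import deque

    def fire(marking, consume, produce):
        # Return the successor marking if the transition is enabled, else None.
        m = dict(marking)
        for p, n in consume.items():
            if m.get(p, 0) < n:
                return None
        for p, n in consume.items():
            m[p] = m.get(p, 0) - n
        for p, n in produce.items():
            m[p] = m.get(p, 0) + n
        return frozenset((p, n) for p, n in m.items() if n > 0)

    def reachable(initial, transitions, limit=100000):
        # Breadth-first exploration of a Petri net's reachability graph;
        # transitions is a list of (consume, produce) dicts over place names.
        start = frozenset(initial.items())
        seen, frontier = {start}, deque([start])
        while frontier and len(seen) < limit:
            m = frontier.popleft()
            for consume, produce in transitions:
                nxt = fire(m, consume, produce)
                if nxt is not None and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen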
- Published
- 2021
34. Sparse Regularization-Based Fuzzy C-Means Clustering Incorporating Morphological Grayscale Reconstruction and Wavelet Frames
- Author
-
MengChu Zhou, Cong Wang, Witold Pedrycz, and Zhiwu Li
- Subjects
Computer science ,business.industry ,Applied Mathematics ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Pattern recognition ,02 engineering and technology ,Image segmentation ,Iterative reconstruction ,Real image ,Grayscale ,ComputingMethodologies_PATTERNRECOGNITION ,Wavelet ,Computational Theory and Mathematics ,Kernel (image processing) ,Artificial Intelligence ,Control and Systems Engineering ,Computer Science::Computer Vision and Pattern Recognition ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Segmentation ,Artificial intelligence ,Cluster analysis ,business - Abstract
The conventional fuzzy C-means (FCM) algorithm is not robust to noise, and its rate of convergence is generally impacted by data distribution. Consequently, it is challenging to develop FCM-related algorithms that perform well and require little computing time. In this article, we elaborate on a comprehensive FCM-related algorithm for image segmentation. To make FCM robust, we first utilize a morphological grayscale reconstruction (MGR) operation to filter observed images before clustering, which guarantees noise immunity and image-detail preservation. Since real images can generally be approximated by sparse coefficients in a tight wavelet frame system, feature spaces of observed and filtered images can be obtained. Taking such features as the data to be clustered, we investigate an improved FCM model in which a sparse regularization term is introduced into the objective function of FCM. We design a three-step iterative algorithm to solve the sparse regularization-based FCM model, constructed from the Lagrangian multiplier method, a hard-threshold operator, and a normalization operator, respectively. Such an algorithm not only performs well for image segmentation, but also comes with high computational efficiency. To further enhance the segmentation accuracy, we use MGR to filter the label set generated by clustering. Finally, a large number of supporting experiments and comparative studies with other FCM-related algorithms available in the literature are provided. The obtained results for synthetic, medical, and color images indicate that the proposed algorithm has a good ability for multiphase image segmentation and performs better than the alternative FCM-related algorithms. Moreover, the proposed algorithm requires less time than most of the existing algorithms.
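A minimal sketch of the standard FCM iteration together with a hard-threshold operator of the kind the three-step solver applies to wavelet-frame coefficients; how the sparse term couples into the objective is simplified away, and all parameters are illustrative:

    import numpy as np

    def hard_threshold(coeffs, lam):
        # Zero every coefficient whose magnitude falls below sqrt(2 * lam).
        out = coeffs.copy()
        out[np.abs(out) < np.sqrt(2.0 * lam)] = 0.0
        return out

    def fcm(X, c=3, m=2.0, iters=50, tol=1e-6, seed=0):
        # Plain fuzzy C-means on feature vectors X (rows are samples).
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(iters):
            Um = U ** m
            V = (Um.T @ X) / Um.sum(axis=0)[:, None]            # center update
            d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
            a = d ** (2.0 / (m - 1.0))
            U_new = (1.0 / a) / (1.0 / a).sum(axis=1, keepdims=True)
            if np.abs(U_new - U).max() < tol:
                U = U_new
                break
            U = U_new
        return U, V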
- Published
- 2021
35. Artificial neural networks for water quality soft-sensing in wastewater treatment: a review
- Author
-
Jing Bi, Gongming Wang, Qing-Shan Jia, Abdullah Abusorrah, Junfei Qiao, and MengChu Zhou
- Subjects
Linguistics and Language ,Computational complexity theory ,Artificial neural network ,business.industry ,Computer science ,Computer Science::Neural and Evolutionary Computation ,Language and Linguistics ,Deep belief network ,Artificial Intelligence ,Radial basis function neural network ,Soft sensing ,Artificial intelligence ,Water quality ,Echo state network ,business - Abstract
This paper presents a comprehensive survey on water quality soft-sensing for the wastewater treatment process (WWTP) based on artificial neural networks (ANNs). We mainly present the problem formulation of water quality soft-sensing, common soft-sensing models, practical soft-sensing examples, and a discussion on the performance of soft-sensing models. In detail, the problem formulation covers the characteristic analysis and modeling principles of water quality soft-sensing. The common soft-sensing models mainly include the back-propagation neural network, radial basis function neural network, fuzzy neural network (FNN), echo state network (ESN), growing deep belief network, and deep belief network with event-triggered learning (DBN-EL). They are compared in terms of accuracy, efficiency, and computational complexity with partial-least-square-regression DBN (PLSR-DBN), growing ESN, sparse deep belief FNN, self-organizing DBN, wavelet-ANN, and self-organizing cascade neural network (SCNN). In addition, this paper discusses and explains the factors that affect the accuracy of ANN-based soft-sensing models. Finally, this paper points out several challenges in soft-sensing models for WWTP, which may help researchers and practitioners explore future solutions for their particular applications.
- Published
- 2021
36. A Hybrid Probabilistic Multiobjective Evolutionary Algorithm for Commercial Recommendation Systems
- Author
-
MengChu Zhou, Guoshuai Wei, and Quanwang Wu
- Subjects
Linear programming ,business.industry ,Computer science ,RSS ,Evolutionary algorithm ,Probabilistic logic ,02 engineering and technology ,computer.file_format ,Recommender system ,Machine learning ,computer.software_genre ,Multi-objective optimization ,Evolutionary computation ,Human-Computer Interaction ,Cold start ,020204 information systems ,Modeling and Simulation ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,computer ,Social Sciences (miscellaneous) - Abstract
As big-data-driven complex systems, commercial recommendation systems (RSs) have been widely used at companies such as Amazon and eBay. Their core aim is to maximize total profit, which relies on both recommendation accuracy and the profits of recommended items. It is also important for them to treat new items fairly over the long run. However, traditional recommendation techniques mainly focus on recommendation accuracy and suffer from a cold-start problem (i.e., new items cannot be recommended). Differing from them, this work designs a multiobjective RS that considers item profit and novelty besides accuracy. Then, a hybrid probabilistic multiobjective evolutionary algorithm (MOEA) is proposed to optimize these conflicting metrics. In it, some specifically designed genetic operators are proposed, and two classical MOEA frameworks are adaptively combined so that the algorithm owns their complementary advantages. The experimental results reveal that it outperforms some state-of-the-art algorithms, achieving a higher hypervolume value than they do.
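A minimal sketch of the three conflicting metrics and the Pareto-dominance test a MOEA would apply to them; each formula is an illustrative stand-in, not the paper's exact definition:

    def objectives(rec_list, pred_rating, profit, popularity):
        # Accuracy proxy, total profit, and novelty for one recommendation list.
        accuracy = sum(pred_rating[i] for i in rec_list) / len(rec_list)
        total_profit = sum(profit[i] for i in rec_list)
        novelty = sum(1.0 - popularity[i] for i in rec_list) / len(rec_list)
        return accuracy, total_profit, novelty

    def dominates(a, b):
        # Pareto dominance for maximization: no worse everywhere, better somewhere.
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))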
- Published
- 2021
37. Sparse Common Feature Representation for Undersampled Face Recognition
- Author
-
Shicheng Yang, MengChu Zhou, Lianghua He, and Ying Wen
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Deep learning ,Feature extraction ,Pattern recognition ,02 engineering and technology ,010501 environmental sciences ,Mixture model ,01 natural sciences ,Facial recognition system ,Computer Science Applications ,Discriminative model ,Hardware and Architecture ,Face (geometry) ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Feature (machine learning) ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,0105 earth and related environmental sciences ,Information Systems ,Test data - Abstract
This work investigates the problem of undersampled face recognition (i.e., insufficient training data) encountered in practical Internet-of-Things (IoT) applications. Insufficient and uncertain samples captured by IoT devices may include background and facial disguise, which makes face recognition more challenging than with sufficient and reliable images. Many models work well in face recognition on a big data set, but when training data are insufficient, they achieve unsatisfactory performance. This work proposes a novel method named sparse common feature-based representation (SCFR) that provides a unique and stable result and completely avoids the very time-consuming training required by a deep learning model. Specifically, it constructs a common feature dictionary using both training and test images. In it, a common feature is based on a discriminative common vector and is learned by a Gaussian mixture model for both training and test images in a semi-supervised learning manner, which reduces the differences among samples in each class. In the optimization, the latent indicator of test data is initialized by the estimated label. This can avoid learning invalid information and leads to good prototype images. A new variation dictionary characterizes variations that can be shared by different classes. Finally, this work adopts minimum reconstruction residuals to recognize test images, thus bringing about a substantial improvement in SCFR’s performance. Extensive results on benchmark face databases demonstrate that the proposed method is better than the state-of-the-art methods handling undersampled face recognition.
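A minimal sketch of the final SCFR step, classification by minimum reconstruction residual; the common-feature dictionary learning itself is omitted, and the per-class dictionary layout is an assumption:

    import numpy as np

    def classify_by_residual(y, class_dicts):
        # y: a feature vector; class_dicts: {label: dictionary matrix D with
        # one column per atom}. Pick the class whose atoms reconstruct y best.
        best_label, best_residual = None, np.inf
        for label, D in class_dicts.items():
            coef, *_ = np.linalg.lstsq(D, y, rcond=None)
            residual = np.linalg.norm(y - D @ coef)
            if residual < best_residual:
                best_label, best_residual = label, residual
        return best_label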
- Published
- 2021
38. Biobjective Task Scheduling for Distributed Green Data Centers
- Author
-
Jing Bi, MengChu Zhou, Qing Liu, Ahmed Chiheb Ammari, and Haitao Yuan
- Subjects
0209 industrial biotechnology ,Operations research ,Computer science ,business.industry ,Quality of service ,Cloud computing ,02 engineering and technology ,Multi-objective optimization ,Scheduling (computing) ,020901 industrial engineering & automation ,Green computing ,Smart grid ,Control and Systems Engineering ,Differential evolution ,Task analysis ,Electrical and Electronic Engineering ,business - Abstract
The industry of data centers is the fifth largest energy consumer in the world. Distributed green data centers (DGDCs) consume 300 billion kWh per year to provide different types of heterogeneous services to global users. Users around the world bring revenue to DGDC providers according to the actual quality of service (QoS) of their tasks. Their tasks are delivered to DGDCs through multiple Internet service providers (ISPs) with different bandwidth capacities and unit bandwidth prices. In addition, the prices of power grid, wind, and solar energy in different GDCs vary with their geographical locations. Therefore, it is highly challenging to schedule tasks among DGDCs in a high-profit and high-QoS way. This work designs a multiobjective optimization method for DGDCs to maximize the profit of DGDC providers and minimize the average task loss probability of all applications by jointly determining the split of tasks among multiple ISPs and the task service rates of each GDC. A problem is formulated and solved with a simulated-annealing-based biobjective differential evolution (SBDE) algorithm to obtain an approximate Pareto-optimal set. The method of minimum Manhattan distance is adopted to select a knee solution that specifies the Pareto-optimal task service rates and task split among ISPs for DGDCs in each time slot. Real-life data-based experiments demonstrate that the proposed method achieves lower task loss for all applications and larger profit than several existing scheduling algorithms. Note to Practitioners —This work aims to maximize the profit and minimize the task loss of DGDCs powered by renewable energy and smart grid by jointly determining the split of tasks among multiple ISPs. Existing task scheduling algorithms fail to jointly consider and optimize the profit of DGDC providers and the QoS of tasks. Therefore, they fail to intelligently schedule tasks of heterogeneous applications and allocate infrastructure resources within their response time bounds. In this work, a new method that tackles the drawbacks of existing algorithms is proposed. It is achieved by adopting the proposed SBDE algorithm, which solves a multiobjective optimization problem. Simulation experiments demonstrate that, compared with three typical task scheduling approaches, it increases profit and decreases task loss. It can be readily integrated and implemented in real-life industrial DGDCs. Future work needs to investigate real-time green energy prediction with historical data and further combine prediction and task scheduling to achieve greener and even net-zero-energy data centers.
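A minimal sketch of knee-solution selection by minimum Manhattan distance on a normalized Pareto front, with both objectives cast as minimization; the normalization scheme is a common choice and an assumption here:

    import numpy as np

    def knee_by_manhattan(front):
        # front: rows are Pareto solutions, columns are objectives (minimize).
        # Normalize each objective to [0, 1], then pick the point with the
        # smallest Manhattan distance to the ideal point (0, ..., 0).
        F = np.asarray(front, dtype=float)
        lo, hi = F.min(axis=0), F.max(axis=0)
        N = (F - lo) / np.where(hi > lo, hi - lo, 1.0)
        return int(np.argmin(N.sum(axis=1)))

For instance, on the toy front [[0.2, 0.9], [0.5, 0.5], [0.9, 0.1]] the middle, balanced solution (index 1) is returned.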
- Published
- 2021
39. Revenue and Energy Cost-Optimized Biobjective Task Scheduling for Green Cloud Data Centers
- Author
-
MengChu Zhou, Haitao Yuan, Liu Heng, and Jing Bi
- Subjects
0209 industrial biotechnology ,Mathematical optimization ,Computer science ,business.industry ,Profit maximization ,Evolutionary algorithm ,Cloud computing ,02 engineering and technology ,Energy consumption ,Multi-objective optimization ,Scheduling (computing) ,020901 industrial engineering & automation ,Control and Systems Engineering ,Revenue ,Minification ,Electrical and Electronic Engineering ,business - Abstract
The significant growth in the number and types of tasks of heterogeneous applications in green cloud data centers (GCDCs) dramatically increases their providers’ revenue from users as well as their energy consumption. It is a big challenge to maximize such revenue while minimizing energy cost in a market where the prices of electricity, the availability of renewable power generation, and behind-the-meter renewable generation contract models differ among the geographical sites of GCDCs. A multiobjective optimization method that investigates such spatial differences in GCDCs is proposed for the first time to trade off these two objectives by cost-effectively executing all tasks while meeting their delay constraints. In each time slot, a constrained biobjective optimization problem is formulated and solved by an improved multiobjective evolutionary algorithm based on decomposition. Realistic data-based simulations prove that the proposed method achieves a larger total profit with faster convergence than two state-of-the-art algorithms. Note to Practitioners —This article considers the tradeoff between profit maximization and energy cost minimization for green cloud data center (GCDC) providers while meeting the delay constraints of all tasks. Current task-scheduling methods fail to take advantage of spatial variations in many factors, e.g., the prices of electricity and the availability of renewable power generation at geographically distributed GCDC locations. As a result, they fail to execute all tasks of heterogeneous applications within their delay bounds in a high-revenue and low-energy-cost manner. In this article, a multiobjective optimization method that addresses the disadvantages of the existing methods is proposed. It is realized by a proposed intelligent optimization algorithm. Simulations demonstrate that, in comparison with two state-of-the-art scheduling algorithms, the proposed one increases the profit and reduces the convergence time. It can be readily implemented and integrated into actual industrial GCDCs.
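A minimal sketch of the decomposition step at the core of MOEA/D-style algorithms like the one used here: each weight vector converts an objective pair such as (negated revenue, energy cost) into one scalar subproblem via the Tchebycheff function. Weights and the ideal point below are illustrative assumptions:

    def tchebycheff(f, weights, ideal):
        # Scalarize an objective vector f (minimization form) against the ideal
        # point; neighboring weight vectors define neighboring subproblems.
        return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, ideal))

    # Example subproblem: tchebycheff((0.4, 0.7), weights=(0.5, 0.5), ideal=(0.0, 0.0))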
- Published
- 2021
40. Residual-driven Fuzzy C-Means Clustering for Image Segmentation
- Author
-
Cong Wang, MengChu Zhou, Witold Pedrycz, and Zhiwu Li
- Subjects
FOS: Computer and information sciences ,Computer science ,Computer Vision and Pattern Recognition (cs.CV) ,Computer Science - Computer Vision and Pattern Recognition ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,02 engineering and technology ,Residual ,Regularization (mathematics) ,Fuzzy logic ,030218 nuclear medicine & medical imaging ,03 medical and health sciences ,0302 clinical medicine ,Artificial Intelligence ,FOS: Electrical engineering, electronic engineering, information engineering ,0202 electrical engineering, electronic engineering, information engineering ,Cluster analysis ,business.industry ,Image and Video Processing (eess.IV) ,I.4.6 ,Pattern recognition ,Image segmentation ,Electrical Engineering and Systems Science - Image and Video Processing ,Weighting ,Noise ,Control and Systems Engineering ,Computer Science::Computer Vision and Pattern Recognition ,Outlier ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,62H30 ,Information Systems - Abstract
Due to its inferior characteristics, the direct use of an observed (noisy) image gives rise to poor segmentation results. Intuitively, using its noise-free counterpart can favorably impact image segmentation. Hence, the accurate estimation of the residual between observed and noise-free images is an important task. To do so, we elaborate on residual-driven Fuzzy C-Means (FCM) for image segmentation, which is the first approach that realizes accurate residual estimation and enables the noise-free image to participate in clustering. We propose a residual-driven FCM framework by integrating into FCM a residual-related fidelity term derived from the distribution of different types of noise. Built on this framework, we present a weighted $\ell_{2}$-norm fidelity term by weighting the mixed noise distribution, thus resulting in a universal residual-driven FCM algorithm in the presence of mixed or unknown noise. Besides, with the constraint of spatial information, the residual estimation becomes more reliable than when considering an observed image alone. Supporting experiments on synthetic, medical, and real-world images are conducted. The results demonstrate the superior effectiveness and efficiency of the proposed algorithm over existing FCM-related algorithms.
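A minimal sketch of the weighted-residual idea: per-pixel weights damp outlier-like residuals so that the recovered image, rather than the raw noisy observation, drives clustering. The weighting rule below is an assumption for illustration, not the paper's derivation:

    import numpy as np

    def weighted_residual_step(observed, estimate, eps=1e-6):
        # Residual between the noisy observation and the current clean estimate.
        r = observed - estimate
        # Large residuals (likely impulse/outlier pixels) get small weights.
        w = 1.0 / (r ** 2 + eps)
        w = w / w.max()
        # Weighted blend: trusted pixels follow the observation, outliers the estimate.
        return w * observed + (1.0 - w) * estimate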
- Published
- 2021
41. 3D-RVP: A method for 3D object reconstruction from a single depth view using voxel and point
- Author
-
MengChu Zhou, Gang Xiong, Zhao Meihua, Fei-Yue Wang, and Zhen Shen
- Subjects
0209 industrial biotechnology ,Computer science ,business.industry ,Cognitive Neuroscience ,Deep learning ,02 engineering and technology ,Virtual reality ,Object (computer science) ,computer.software_genre ,Computer Science Applications ,020901 industrial engineering & automation ,Artificial Intelligence ,Voxel ,Metric (mathematics) ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Point (geometry) ,Artificial intelligence ,business ,computer ,Algorithm - Abstract
Three-dimensional object reconstruction technology has a wide range of applications, such as augmented reality, virtual reality, industrial manufacturing, and intelligent robotics. Although deep learning-based 3D object reconstruction technology has developed rapidly in recent years, important problems remain to be solved. One of them is that the resolution of reconstructed 3D models is hard to improve because of the limitations of memory and computational efficiency when deployed on resource-limited devices. In this paper, we propose 3D-RVP to reconstruct a complete and accurate 3D geometry from a single depth view, where R, V and P represent Reconstruction, Voxel and Point, respectively. It is a novel two-stage method that combines a 3D encoder-decoder network with a point prediction network. In the first stage, we propose a 3D encoder-decoder network with residual learning to output coarse prediction results. In the second stage, we propose an iterative subdivision algorithm to predict the labels of adaptively selected points. The proposed method can output high-resolution 3D models while adding only a small number of parameters. Experiments are conducted on widely used benchmarks of the ShapeNet dataset, in which four categories of models are selected to test the performance of the neural networks. Experimental results show that our proposed method outperforms the state-of-the-art methods and achieves about a 2.7% improvement in terms of the intersection-over-union metric.
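A minimal sketch in the spirit of the adaptive point selection in the second stage: the point network refines exactly those voxel centers whose occupancy is most ambiguous. The uncertainty criterion (distance from probability 0.5) is an assumption for illustration:

    import numpy as np

    def select_uncertain_points(voxel_probs, k=1024):
        # voxel_probs: 3D array of occupancy probabilities from the coarse stage.
        # Return the integer coordinates of the k most ambiguous voxel centers.
        flat = voxel_probs.reshape(-1)
        idx = np.argsort(np.abs(flat - 0.5))[:k]
        return np.stack(np.unravel_index(idx, voxel_probs.shape), axis=1)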
- Published
- 2021
42. Delaunay-Triangulation-Based Variable Neighborhood Search to Solve Large-Scale General Colored Traveling Salesman Problems
- Author
-
Jun Li, MengChu Zhou, and Xiangping Xu
- Subjects
Divide and conquer algorithms ,050210 logistics & transportation ,Mathematical optimization ,Hypergraph ,Delaunay triangulation ,Generalization ,business.industry ,Computer science ,Mechanical Engineering ,05 social sciences ,Travelling salesman problem ,Computer Science Applications ,0502 economics and business ,Automotive Engineering ,Genetic algorithm ,Local search (optimization) ,business ,Variable neighborhood search - Abstract
A colored traveling salesman problem (CTSP) is a generalization of the well-known multiple traveling salesman problem. It utilizes colors to differentiate the accessibility of its cities to its salesmen. In our prior work, CTSPs are formulated over graphs associated with a city-color matrix. This work redefines a general colored traveling salesman problem (GCTSP) in the framework of hypergraphs and reveals several important properties of GCTSP. In GCTSP, the setting of city colors is richer than that in CTSPs. As a result, it can be used to model and address various complex scheduling problems. Then, a Delaunay-triangulation-based Variable Neighborhood Search (DVNS) algorithm is developed to solve large-scale GCTSPs. At the beginning stage of DVNS, a divide-and-conquer algorithm is exploited to prepare a Delaunay candidate set for lean insertion. Next, the incumbent solution is perturbed by utilizing greedy multi-insertion and exchange mutation to obtain a variety of neighborhoods. Subsequently, 2-opt and 3-opt are used in turn for local search. Extensive experiments are conducted on many large-scale GCTSP cases, the two largest of which involve 33000+ cities with 4 salesmen and 240 salesmen with 11000+ cities, respectively. The results show that the proposed method outperforms four existing genetic algorithms and two VNS methods in terms of search ability and convergence rate.
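A minimal sketch of the 2-opt move used in DVNS's local-search phase (the Delaunay candidate preparation and the 3-opt pass are omitted); dist is assumed to be a symmetric distance matrix over one salesman's tour:

    def two_opt(tour, dist):
        # Repeatedly reverse a segment whenever swapping two edges shortens the tour.
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 2):
                for j in range(i + 1, len(tour) - 1):
                    a, b = tour[i - 1], tour[i]
                    c, d = tour[j], tour[j + 1]
                    if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                        tour[i:j + 1] = reversed(tour[i:j + 1])
                        improved = True
        return tour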
- Published
- 2021
43. A Double-Blind Anonymous Evaluation-Based Trust Model in Cloud Computing Environments
- Author
-
PeiYun Zhang, MengChu Zhou, and Yang Kong
- Subjects
Service (systems architecture) ,media_common.quotation_subject ,Reliability (computer networking) ,Data_MISCELLANEOUS ,0211 other engineering and technologies ,Cloud computing ,02 engineering and technology ,Computer security ,computer.software_genre ,User requirements document ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,Set (psychology) ,media_common ,021110 strategic, defence & security studies ,business.industry ,Node (networking) ,020206 networking & telecommunications ,Deception ,Computer Science Applications ,Human-Computer Interaction ,Control and Systems Engineering ,Collusion ,business ,computer ,Software - Abstract
In the last ten years, cloud services have provided many applications in various areas. Most of them are hosted in heterogeneous, distributed, large-scale cloud computing environments and face inherent uncertainty, unreliability, and malicious attacks that trouble both service users and providers. To solve the problems of malicious attacks (including solo and collusion deception) in a public cloud computing environment, we propose, for the first time, a double-blind anonymous evaluation-based trust model. Based on it, cloud service providers and users are anonymously matched according to user requirements. It can be used to effectively handle malicious attacks that intend to distort trust evaluations. Providers may secretly hide gain-sharing information in service results and send the results to users to ask for higher trust evaluations than the deserved ones. This paper proposes to adopt checking nodes to help detect such behavior. It then conducts a gain–loss analysis for providers who intend to perform provider–user collusion deception. The proposed trust model can effectively help one recognize collusion deception behavior and allows policy-makers to set suitable losses to punish malicious providers. Consequently, provider-initiated collusion deception behavior can be greatly discouraged in public cloud computing systems. Simulation results show that the proposed method outperforms two recent methods, one based on fail-stop signatures and another based on fuzzy mathematics, in terms of malicious node detection ratio and speed.
- Published
- 2021
44. Energy-Optimized Partial Computation Offloading in Mobile-Edge Computing With Genetic Simulated-Annealing-Based Particle Swarm Optimization
- Author
-
Shuaifei Duanmu, Jing Bi, Haitao Yuan, MengChu Zhou, and Abdullah Abusorrah
- Subjects
Schedule ,Mobile edge computing ,Computer Networks and Communications ,Computer science ,business.industry ,Distributed computing ,020206 networking & telecommunications ,02 engineering and technology ,Computer Science Applications ,Hardware and Architecture ,Server ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Overhead (computing) ,Wireless ,Resource allocation ,Computation offloading ,020201 artificial intelligence & image processing ,Data center ,Enhanced Data Rates for GSM Evolution ,business ,Mobile device ,Metaheuristic ,Information Systems - Abstract
Smart mobile devices (SMDs) can meet users’ high expectations by executing computationally intensive applications, but they have only limited resources, including CPU, memory, battery power, and wireless medium. To tackle this limitation, partial computation offloading can be used as a promising method to schedule some tasks of applications from resource-limited SMDs to high-performance edge servers. However, it brings communication overhead issues caused by limited bandwidth and inevitably increases the latency of tasks offloaded to edge servers. Therefore, it is highly challenging to achieve a balance between high resource consumption in SMDs and high communication cost, while providing energy-efficient and low-latency services to users. This work proposes a partial computation offloading method that minimizes the total energy consumed by SMDs and edge servers by jointly optimizing the offloading ratio of tasks, the CPU speeds of SMDs, the allocated bandwidth of available channels, and the transmission power of each SMD in each time slot. It jointly considers the execution time of tasks performed in SMDs and edge servers and the transmission time of data. It also respects latency limits, CPU speeds, transmission power limits, the available energy of SMDs, and the maximum numbers of CPU cycles and memories in edge servers. Considering these factors, a nonlinear constrained optimization problem is formulated and solved by a novel hybrid metaheuristic algorithm named genetic simulated-annealing-based particle swarm optimization (GSP) to produce a close-to-optimal solution. GSP achieves joint optimization of computation offloading between a cloud data center and the edge, and of resource allocation in the data center. Real-life data-based experimental results prove that it achieves lower energy consumption in less convergence time than its three typical peers.
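A minimal sketch of the hybrid flavor behind GSP: a standard particle swarm update followed by a simulated-annealing Metropolis test on the move (the genetic crossover/mutation operators are omitted); all coefficients are illustrative assumptions:

    import math, random

    def pso_sa_step(x, v, pbest, gbest, cost, T, w=0.7, c1=1.5, c2=1.5):
        # Standard PSO velocity and position update for one particle.
        for d in range(len(x)):
            v[d] = (w * v[d]
                    + c1 * random.random() * (pbest[d] - x[d])
                    + c2 * random.random() * (gbest[d] - x[d]))
        candidate = [xi + vi for xi, vi in zip(x, v)]
        # Metropolis acceptance: always take improvements; occasionally take
        # uphill moves so the swarm can escape local optima early on.
        delta = cost(candidate) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / max(T, 1e-12)):
            return candidate, v
        return x, v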
- Published
- 2021
45. A Novel Semi-Supervised Learning Approach to Pedestrian Reidentification
- Author
-
Wenjin Ma, Abdullah Abusorrah, MengChu Zhou, Qiang Guo, and Hua Han
- Subjects
Computer Networks and Communications ,Computer science ,Sample (statistics) ,02 engineering and technology ,Semi-supervised learning ,010501 environmental sciences ,01 natural sciences ,Field (computer science) ,Image (mathematics) ,Task (project management) ,Discriminative model ,0202 electrical engineering, electronic engineering, information engineering ,0105 earth and related environmental sciences ,Training set ,business.industry ,Supervised learning ,Pattern recognition ,Computer Science Applications ,ComputingMethodologies_PATTERNRECOGNITION ,Hardware and Architecture ,Signal Processing ,Metric (mathematics) ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Information Systems - Abstract
One of the important Internet-of-Things applications is to use images and videos to realize automatic people monitoring, surveillance, tracking, and reidentification (Re-ID). Despite some recent advances, pedestrian Re-ID remains a challenging task. Existing algorithms based on fully supervised learning usually require numerous labeled image and video data, while often ignoring the problem of data imbalance. This work proposes a method based on unlabeled samples generated by cycle generative adversarial networks. For a newly generated unlabeled sample, it learns the pseudo-relationships between unlabeled samples and labeled ones in a low-dimensional space by using a self-paced learning approach. Then, the unlabeled samples having pseudo-relationships with labeled ones are added to the training set to better mine the discriminative information between positive and negative samples, which is in turn used to learn a more effective metric. We name this method a semi-supervised learning approach based on the built pseudo-pairwise relations between labeled data and unlabeled data. It can greatly enhance the performance of pedestrian Re-ID when labeled images are insufficient. By using only about 10% of the labeled images in a given database, the proposed method obtains higher accuracy than state-of-the-art supervised learning methods using all labeled ones, e.g., deep-learning ones, thus greatly advancing the field of pedestrian Re-ID.
- Published
- 2021
46. Cost-Profit Trade-Off for Optimally Locating Automotive Service Firms Under Uncertainty
- Author
-
Cheng-Hu Yang, Peng Wu, MengChu Zhou, Khaled Sedraoui, Feng Chu, and Fahad S. Al Sokhiry
- Subjects
050210 logistics & transportation ,Mathematical optimization ,Computer science ,Stochastic process ,business.industry ,Mechanical Engineering ,05 social sciences ,Monte Carlo method ,Automotive industry ,Solver ,Profit (economics) ,Computer Science Applications ,Nonlinear system ,0502 economics and business ,Automotive Engineering ,Probability distribution ,Location-allocation ,business - Abstract
This work investigates the problem of optimally locating an automotive service firm (ASF) subject to stochastic customer demands, varying setup costs, and regional constraints. The goal is to minimize the transportation cost while maintaining the specified profit of the ASF. This work studies two variants of the problem: ASF location with known demand probability distributions and with partial demand information, i.e., only the support and mean of the customer demands are known. For the former, a chance-constrained program is formulated that improves an existing model, and then an equivalent deterministic nonlinear program is constructed based on our property analysis results. For the latter, a novel distribution-free model is developed. The proposed models are solved by the solver LINGO. Computational results on the benchmark examples show that: i) for the first variant, the proposed approach outperforms the existing one; ii) for the second, the proposed distribution-free model can effectively handle stochastic customer demands without complete probability distributions; and iii) the results of the distribution-free model are slightly worse than those of the deterministic nonlinear one, but the former is more cost-efficient for practical ASF location as it is less expensive to obtain the demand information it needs. Moreover, the proposed models and approaches are extended to address multi-ASF location allocation under demand uncertainty.
- Published
- 2021
47. Integrated deep learning method for workload and resource prediction in cloud systems
- Author
-
MengChu Zhou, Shuang Li, Jing Bi, and Haitao Yuan
- Subjects
0209 industrial biotechnology ,Computer science ,business.industry ,Cognitive Neuroscience ,Deep learning ,Cloud computing ,Workload ,02 engineering and technology ,Grid ,computer.software_genre ,Computer Science Applications ,020901 industrial engineering & automation ,Resource (project management) ,Artificial Intelligence ,0202 electrical engineering, electronic engineering, information engineering ,Resource allocation ,020201 artificial intelligence & image processing ,Artificial intelligence ,Data mining ,business ,computer ,Network model - Abstract
Cloud computing providers face several challenges in precisely forecasting large-scale workload and resource time series. Such prediction can help them achieve intelligent resource allocation that guarantees users’ performance needs are strictly met without wasting computing, network, and storage resources. This work applies a logarithmic operation to reduce the standard deviation before smoothing workload and resource sequences. Then, noise interference and extreme points are removed via a powerful filter. A Min-Max scaler is adopted to standardize the data. An integrated deep learning method for time series prediction is then designed. It incorporates both bi-directional and grid long short-term memory networks to achieve high-quality prediction of workload and resource time series. The experimental comparison demonstrates that the prediction accuracy of the proposed method is better than that of several widely adopted approaches on datasets from a Google cluster trace.
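A minimal sketch of the preprocessing pipeline this abstract walks through; the Savitzky-Golay filter below stands in for the unnamed smoothing filter (an assumption), and the window length and polynomial order are illustrative:

    import numpy as np
    from scipy.signal import savgol_filter

    def preprocess(series):
        # 1) Logarithmic transform shrinks the standard deviation of the series.
        logged = np.log1p(np.asarray(series, dtype=float))
        # 2) Smoothing filter suppresses noise spikes and extreme points.
        smooth = savgol_filter(logged, window_length=11, polyorder=3)
        # 3) Min-Max scaling standardizes the data to [0, 1] for the LSTM stack.
        lo, hi = smooth.min(), smooth.max()
        scaled = (smooth - lo) / (hi - lo if hi > lo else 1.0)
        return scaled, (lo, hi)   # keep (lo, hi) to invert the scaling later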
- Published
- 2021
48. Inversion Based on a Detached Dual-Channel Domain Method for StyleGAN2 Embedding
- Author
-
MengChu Zhou, Xiwang Guo, Liang Qi, Nan Yang, and Bingjie Xia
- Subjects
Channel (digital image) ,Computer science ,Image quality ,business.industry ,Applied Mathematics ,Deep learning ,020206 networking & telecommunications ,02 engineering and technology ,Iterative reconstruction ,Image (mathematics) ,Signal Processing ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,Embedding ,Artificial intelligence ,Electrical and Electronic Engineering ,business ,Encoder ,Algorithm - Abstract
A style-based generative adversarial network (StyleGAN2) yields remarkable results in image-to-latent embedding. This work proposes a Detached Dual-channel Domain Encoder as an effective and robust method to embed an image into a latent code, i.e., to perform GAN inversion. It infers a latent code from two aspects: a) a detached dual-channel design that supports faithful image reconstruction; and b) a local skip connection that conveys information about image details. We further introduce a hierarchical progressive training strategy that allows the proposed encoder to separately capture different semantic features. The qualitative and quantitative experimental results show that the well-trained encoder can embed an image into a latent code in the StyleGAN2 latent space in less time than its peers, while preserving facial identity and image details well.
- Published
- 2021
49. Internet of Things as System of Systems: A Review of Methodologies, Frameworks, Platforms, and Tools
- Author
-
MengChu Zhou, Giancarlo Fortino, Claudio Savaglio, and Giandomenico Spezzano
- Subjects
Computer science ,media_common.quotation_subject ,IoT platform ,Interoperability ,02 engineering and technology ,IoT methodology ,0202 electrical engineering, electronic engineering, information engineering ,Electrical and Electronic Engineering ,system of systems ,Baseline (configuration management) ,media_common ,System of systems ,Class (computer programming) ,business.industry ,020206 networking & telecommunications ,020207 software engineering ,IoT framework ,Data science ,Internet of Things (IoT) ,Computer Science Applications ,Human-Computer Interaction ,Control and Systems Engineering ,Scalability ,IoT tool ,Internet of Things ,business ,Software ,Autonomy - Abstract
The Internet of Things (IoT) is the latest example of a System of Systems (SoS), demanding both innovative and evolutionary approaches to tame its multifaceted aspects. Over the years, different IoT methodologies, frameworks, platforms, and tools have been proposed by industry and academia, but the jumbled abundance of such development products has resulted in a high (and disheartening) entry barrier to IoT system engineering. In this survey, we steer IoT developers by: 1) providing baseline definitions to identify the most suitable class of development products - methodologies, frameworks, platforms, and tools - for their purposes and 2) reviewing seventy relevant products through a comparative and practical approach, based on general SoS engineering features revised in light of the main IoT system desiderata (i.e., interoperability, scalability, smartness, and autonomy). Indeed, we aim to lessen the confusion related to IoT methodologies, frameworks, platforms, and tools, as well as to freeze their current state, to eventually ease the approach towards IoT system engineering.
- Published
- 2021
50. Inductive Representation Learning via CNN for Partially-Unseen Attributed Networks
- Author
-
Liang Qi, Zhongying Zhao, Liang Chang, Hui Zhou, and MengChu Zhou
- Subjects
Computer Networks and Communications ,Computer science ,business.industry ,Deep learning ,Node (networking) ,02 engineering and technology ,Complex network ,Convolutional neural network ,Computer Science Applications ,Control and Systems Engineering ,Complete information ,020204 information systems ,0202 electrical engineering, electronic engineering, information engineering ,Task analysis ,Embedding ,020201 artificial intelligence & image processing ,Artificial intelligence ,business ,Feature learning - Abstract
Network embedding aims to map a complex network into a low-dimensional vector space while maximally preserving the properties of the original network. An attributed network is a typical real-world network that models the relationships and attributes of real-world entities. Its analysis is of great significance in many applications. However, most such networks are incomplete, with only partially known attributes, links, and labels. Traditional network embedding methods are designed for complete networks and cannot be applied to networks with incomplete information. Thus, this work proposes an inductive embedding model to learn robust representations for a partially-unseen attributed network. It is designed based on a multi-core convolutional neural network and a semi-supervised learning mechanism, which can preserve the properties of such a network and generate effective representations for unseen nodes during model training. We evaluate its performance on the tasks of inductive node classification and community detection using three real-world attributed networks. Experimental results show that it significantly outperforms the state-of-the-art methods.
- Published
- 2021