34 results for '"Hai J"'
Search Results
2. Comprehensive Analysis of RF Hot-Carrier Reliability Sensitivity and Design Explorations for 28GHz Power Amplifier Applications
- Author
-
Hai, J., primary, Cacho, F., additional, Divay, A., additional, Lauga-Larroze, E., additional, Arnould, J.-D., additional, Forest, J., additional, Knopik, V., additional, and Garros, X., additional
- Published
- 2022
- Full Text
- View/download PDF
3. 65nm RFSOI Power Amplifier Transistor Ageing at mmW frequencies, 14 GHz and 28 GHz
- Author
-
Divay, A., primary, Forest, J., additional, Knopik, V., additional, Hai, J., additional, Revil, N., additional, Antonijevic, J., additional, Michard, A., additional, Cacho, F., additional, Vincent, E., additional, Gaillard, F., additional, and Garros, X., additional
- Published
- 2021
- Full Text
- View/download PDF
4. A Model Combining Multi Branch Spectral-Temporal CNN, Efficient Channel Attention, and LightGBM for MI-BCI Classification
- Author
-
Hai Jia, Shiqi Yu, Shunjie Yin, Lanxin Liu, Chanlin Yi, Kaiqing Xue, Fali Li, Dezhong Yao, Peng Xu, and Tao Zhang
- Subjects
Motor imagery, deep learning, spectral-temporal, attention mechanism, LightGBM, Medical technology, R855-855.5, Therapeutics. Pharmacology, RM1-950
- Abstract
Accurately decoding motor imagery (MI) brain-computer interface (BCI) tasks has remained a challenge for both neuroscience research and clinical diagnosis. Unfortunately, limited subject information and the low signal-to-noise ratio of MI electroencephalography (EEG) signals make it difficult to decode the movement intentions of users. In this study, we proposed an end-to-end deep learning model, a multi-branch spectral-temporal convolutional neural network with channel attention and LightGBM model (MBSTCNN-ECA-LightGBM), to decode MI-EEG tasks. We first constructed a multi-branch CNN module to learn spectral-temporal domain features. Subsequently, we added an efficient channel attention mechanism module to obtain more discriminative features. Finally, LightGBM was applied to decode the MI multi-classification tasks. The within-subject cross-session training strategy was used to validate classification results. The experimental results showed that the model achieved an average accuracy of 86% on the two-class MI-BCI data and an average accuracy of 74% on the four-class MI-BCI data, outperforming current state-of-the-art methods. The proposed MBSTCNN-ECA-LightGBM can efficiently decode the spectral and temporal domain information of EEG, improving the performance of MI-based BCIs.
- Published
- 2023
- Full Text
- View/download PDF
5. Multifunctionality of maghemite nanoparticles functionalized by HSA for drug delivery
- Author
-
Hai, J., primary, Piraux, H., additional, Mazario, E., additional, Volatron, J., additional, Ha-Duong, N., additional, Decorse, P., additional, Espinosa, A., additional, Whilem, C., additional, Verbeke, P., additional, Gazeau, F., additional, Ammar, S., additional, El Hage Chahine, J., additional, and Hemadi, M., additional
- Published
- 2017
- Full Text
- View/download PDF
6. A Malware Detection Approach Using Autoencoder in Deep Learning
- Author
-
Xiaofei Xing, Xiang Jin, Haroon Elahi, Hai Jiang, and Guojun Wang
- Subjects
Malware detection, autoencoders, malware images, mobile application security, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Today, the growing limitations of traditional malware detection methods and the increasing accuracy of detection methods built on artificial intelligence algorithms are shifting research in this field in favour of the latter. Therefore, we propose a novel malware detection model in this paper. This model combines a grey-scale image representation of malware with an autoencoder network in a deep learning model, analyses the feasibility of the grey-scale image approach based on the reconstruction error of the autoencoder, and uses the dimensionality-reduction features of the autoencoder to separate malware from benign software. The proposed detection model achieved an accuracy of 96% and a stable F-score of about 96% by using the Android-side dataset we collected, outperforming some traditional machine learning detection algorithms.
- Published
- 2022
- Full Text
- View/download PDF
7. MH UNet: A Multi-Scale Hierarchical Based Architecture for Medical Image Segmentation
- Author
-
Parvez Ahmad, Hai Jin, Roobaea Alroobaea, Saqib Qamar, Ran Zheng, Fady Alnajjar, and Fathia Aboudi
- Subjects
BraTS, convolutions, dense connections, encoder-decoder, ISLES, MICCAI, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
UNet and its variations achieve state-of-the-art performance in medical image segmentation. In end-to-end learning, training with high-resolution medical images achieves higher accuracy for medical image segmentation. However, the network depth, a massive number of parameters, and low receptive fields are issues in developing deep architectures. Moreover, the lack of multi-scale contextual information degrades the segmentation performance due to the different sizes and shapes of regions of interest. The extraction and aggregation of multi-scale features play an important role in improving medical image segmentation performance. This paper introduces MH UNet, a multi-scale hierarchical architecture for medical image segmentation that addresses the challenges of heterogeneous organ segmentation. To reduce the number of training parameters and promote efficient gradient flow, we implement densely connected blocks. Residual-Inception blocks are used to obtain full contextual information. A hierarchical block is introduced between the encoder and decoder for acquiring and merging features to extract multi-scale information in the proposed architecture. We implement and validate our proposed architecture on four challenging MICCAI datasets. Our proposed approach achieves state-of-the-art performance on the BraTS 2018, 2019, and 2020 Magnetic Resonance Imaging (MRI) validation datasets. Our approach is 14.05 times lighter than the best method of BraTS 2018. In the meantime, our proposed approach has 2.2 times fewer training parameters than the top 3D approach on the ISLES 2018 Computed Tomographic Perfusion (CTP) testing dataset. MH UNet is available at https://github.com/parvezamu/MHUnet.
- Published
- 2021
- Full Text
- View/download PDF
8. ATCS: Auto-Tuning Configurations of Big Data Frameworks Based on Generative Adversarial Nets
- Author
-
Mingyu Li, Zhiqiang Liu, Xuanhua Shi, and Hai Jin
- Subjects
Big data, generative adversarial nets, spark, genetic algorithm, automatic tune parameters, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Big data processing frameworks (e.g., Spark, Storm) have been extensively used for massive data processing in the industry. To improve the performance and robustness of these frameworks, developers provide users with highly configurable parameters. Due to the high-dimensional parameter space and complicated interactions of parameters, manual tuning of parameters is time-consuming and ineffective. Building performance-prediction models for big data frameworks is challenging for two reasons: (1) the significant time required to collect training data and (2) the poor accuracy of the prediction model when training data are limited. To meet this challenge, we propose an auto-tuning configuration parameters system (ATCS), a new auto-tuning approach based on Generative Adversarial Nets (GAN). ATCS can build a performance prediction model with less training data and without sacrificing model accuracy. Moreover, an optimized Genetic Algorithm (GA) is used in ATCS to explore the parameter space for optimum solutions. To prove the effectiveness of ATCS, we select five frequently used workloads in Spark, each of which runs on five differently sized data sets. The results demonstrate that ATCS improves the performance of the five frequently used Spark workloads compared to the default configurations. We achieved a performance increase of 3.5× on average, with a maximum of 6.9×. Experimental results also demonstrate that, to obtain similar model accuracy, ATCS requires only 6% of the training data needed by a Deep Neural Network (DNN), 13% of that needed by a Support Vector Machine (SVM), and 18% of that needed by a Decision Tree (DT). Moreover, compared to other machine learning models, the average performance increase of ATCS is 1.7× that of DNN, 1.6× that of SVM, and 1.7× that of DT on the five typical Spark programs.
- Published
- 2020
- Full Text
- View/download PDF
9. Numerical Studies of Electrokinetically Controlled Concentration of Diluted DNA Molecules in a T-Shaped Microchannel
- Author
-
Yanli Gong, Hai Jiang, Yunfei Bai, Zijian Wu, Bei Peng, and Xuan Weng
- Subjects
Sample concentration, electrokinetic concentration, electroosmotic flow, electroosmotic induced pressure flow, enrichment, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Sample concentration is extremely important in microfluidic detection systems, especially for the detection of trace substances. Among the various sample concentration techniques used in microfluidic devices, direct electrokinetic trapping is more convenient and easier to realize. In this paper, a T-shaped microchannel configuration was developed to achieve electrokinetic concentration. Numerical simulation analyses of a two-dimensional (2D) configuration model were performed. The microfluidic configuration for DNA enrichment was first optimized by analyzing various field distributions. Then, the influence of selected dimensional and electrical parameters on the enrichment rate was analyzed, including the sizes of the transition chamber and the enrichment chamber, the distance between the inlet branches, the length of the vertical inlet branches, the size of the electrode, and the electric field intensity. With optimized parameters, our model is able to achieve an optimal enrichment rate of 234.2 at an applied voltage of 20 V within a period of 1200 s. Our method provides valuable guidance for the design of an easily controlled microfluidic system with precise enrichment capability.
- Published
- 2020
- Full Text
- View/download PDF
10. ReGra: Accelerating Graph Traversal Applications Using ReRAM With Lower Communication Cost
- Author
-
Haoqiang Liu, Qiang-Sheng Hua, Hai Jin, and Long Zheng
- Subjects
Processing-in-memory, resistive memory, ReRAM, architecture, communication, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
There is a growing gap between the speed of data explosion and the improvement of graph processing systems on conventional architectures. The main reason lies in the large overhead of random access and data movement, as well as the unbalanced and unordered communication cost. The emerging metal-oxide resistive random access memory (ReRAM) has great potential to solve these issues in the context of processing-in-memory (PIM) technology. However, the unbalanced and irregular communication under different graph organizations is not well addressed. In this paper, we present ReGra, a PIM graph traversal accelerator using ReRAM with lower communication cost. ReGra optimizes the graph organization and communication efficiency in graph traversal. Benefiting from the high density and efficient access of ReRAM, graphs are organized compactly and partitioned into processing cubes by the proposed Interval-Block Hash Balance (IBHB) method to balance graph distribution. Moreover, remote cube updates in graph traversal are aggregated into batched messages and transferred in a concentrated period via the custom circular round communication phase. This eliminates irregular and unpredictable inter-cube communication and overlaps partial computation and communication. Comparative experiments with previous work such as Tesseract and RPBFS show that ReGra achieves better performance and yields a speedup of up to 2.2×. Besides, the communication cost is reduced by up to 76%, and ReGra achieves an average reduction in energy consumption of 70%.
- Published
- 2020
- Full Text
- View/download PDF
11. Secure Data Sharing and Search for Cloud-Edge-Collaborative Storage
- Author
-
Ye Tao, Peng Xu, and Hai Jin
- Subjects
Cloud-edge-collaborative storage, data sharing, data search, searchable encryption, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Cloud-edge-collaborative storage (CECS) is a promising framework to process data of the internet of things (IoT). It allows edge servers to process IoT data in real-time and stores them on a cloud server. Hence, it can rapidly respond to the requests of IoT devices, provide a massive volume of cloud storage for IoT data, and conveniently share IoT data with users. However, due to the vulnerability of edge and cloud servers, CECS suffers from the risk of data leakage. Existing secure CECS schemes are secure only if all edge servers are trusted. In other words, if any edge server is compromised, all cloud data (generated by IoT devices) will be leaked. Additionally, it is costly to request expected data from the cloud, which is linear with respect to the number of edge servers. To address the above problems, we propose a new secure data search and sharing scheme for CECS. Our scheme improves the existing secure CECS scheme in the following two ways. First, it enables users to generate a public-and-private key pair and manage private keys by themselves. In contrast, the existing solution requires edge servers to manage users' private keys. Second, it uses searchable public-key encryption to achieve more secure, efficient, and flexible data searching. In terms of security, our scheme ensures the confidentiality of cloud data and secure data sharing and searching and avoids a single point of breakthrough. In terms of performance, the experimental results show that our scheme significantly reduces users' computing costs by delegating most of the cryptographic operations to edge servers. Especially, our scheme reduces the computing and communication overhead for generating a search trapdoor compared with the existing secure CECS scheme.
- Published
- 2020
- Full Text
- View/download PDF
12. TLB Coalescing for Multi-Grained Page Migration in Hybrid Memory Systems
- Author
-
Xiaoyuan Wang, Haikun Liu, Xiaofei Liao, Hai Jin, and Yu Zhang
- Subjects
Virtual memory, hybrid memory system, page migration, TLB coalescing, multiple page size, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Superpages have long been proposed to enlarge the coverage of the translation lookaside buffer (TLB). They are extremely beneficial for reducing address translation overhead in big memory systems, such as hybrid memory systems composed of DRAM and non-volatile memories (NVMs). However, superpages conflict with fine-grained memory migration, one of the key techniques in hybrid memory systems for improving performance and energy efficiency. Fine-grained page migrations usually require splintering superpages, diminishing the benefit of TLB hardware for superpages. In this paper, we present Tamp, an efficient memory management mechanism to support multiple page sizes in hybrid memory systems. We manage large-capacity NVM using superpages, and use a relatively small amount of DRAM to cache hot base pages within the superpages. We find that remarkable contiguity exists among hot base pages in superpages. In response, we bind those contiguous hot pages together and migrate them to DRAM. We also propose multi-grained TLBs to coalesce multiple page address translations into a single TLB entry. Our experimental results show that Tamp can significantly reduce TLB misses by 62.4% on average, and improve application performance (IPC) by 16.2%, compared to a page migration policy without TLB coalescing support.
- Published
- 2020
- Full Text
- View/download PDF
13. Research on the aged coal mine identification and coal mine lifecycle system simulation
- Author
-
Lu, G., primary, Sun, Y. B., additional, Cheng, W., additional, and Hai, J., additional
- Published
- 2009
- Full Text
- View/download PDF
14. Echo: An Edge-Centric Code Offloading System With Quality of Service Guarantee
- Author
-
Li Lin, Peng Li, Xiaofei Liao, Hai Jin, and Yu Zhang
- Subjects
Code offloading, edge computing, offloading decision, quality of service, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Code offloading is a promising way to accelerate mobile applications and reduce the energy consumption of mobile devices by shifting some computation to the cloud. However, existing code offloading systems suffer from a long communication delay between mobile devices and the cloud. To address this challenge, in this paper, we consider deploying edge nodes in close proximity to mobile devices and study how they benefit code offloading. We design an edge-centric code offloading system, called Echo, over a three-layer computing hierarchy consisting of mobile devices, the edge, and the cloud. A critical problem that Echo needs to address is deciding which methods should be offloaded to which computing platform (the edge or the cloud). Different from existing offloading systems that let mobile devices individually make offloading decisions, Echo implements a centralized decision engine at the edge. This edge-centric design can fully exploit the limited hardware resources at the edge to provide offloading services with a quality-of-service guarantee. Furthermore, we propose some novel mechanisms, e.g., lazy object transmission and differential object update, to further improve system performance. The results of a small-scale real deployment and trace-driven simulations show that Echo significantly outperforms existing code offloading systems in both execution time and energy consumption.
- Published
- 2019
- Full Text
- View/download PDF
15. Mpchecker: Use-After-Free Vulnerabilities Protection Based on Multi-Level Pointers
- Author
-
Weizhong Qiang, Weifeng Li, Hai Jin, and Jayachander Surbiryala
- Subjects
Software security, dangling pointers, use-after-free, LLVM, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Highly efficient languages, such as C/C++, have low-level control over memory. Due to the lack of validity detection for pointers and garbage collection for memory, developers are responsible for dynamic memory management by explicitly allocating and deallocating memory. However, explicit memory management brings a large number of memory safety-related vulnerabilities, such as use-after-free. The threat of use-after-free vulnerabilities has become more and more serious due to their high severity and rapidly growing number. In this paper, a dynamic defense system against use-after-free exploits is proposed, based on multi-level pointers that insert intermediate pointers between a heap object and its related pointers. First, the relationship between a heap object to be protected and the related pointers pointing to it is established by combining them with intermediate pointers. Then, all accesses to this object via its related pointers can only be achieved through these intermediate pointers. Finally, to prevent dangling pointers from being dereferenced to this object, all the intermediate pointers related to this object are invalidated when it is freed, so that any access to a freed object is prevented by the invalidated intermediate pointers. The prototype system MPChecker is implemented, which can prevent use-after-free exploits for multi-threaded C/C++ programs. Compared with related methods, MPChecker can protect pointers that are copied in a type-unsafe way from being dereferenced to freed objects. In addition, it can also defend against dangling pointers located anywhere in memory, including the stack, the heap, and global memory, rather than the heap only. The defense capability is demonstrated by protecting against two exploits targeting a real-world program and by verifying the support for type-unsafe copies with a self-written program. The performance evaluation of MPChecker with benchmarks, multi-threaded programs, and real-world programs shows comparable efficiency.
- Published
- 2019
- Full Text
- View/download PDF
16. Optimal Slot Length Configuration in Cognitive Radio Networks
- Author
-
Ziling Wei and Hai Jiang
- Subjects
Cognitive radio, spectrum sensing, slot length, quality of service, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In cognitive radio networks, a slotted time structure is widely adopted. Accordingly, the slot length is a factor that can largely affect the performance of cognitive radio networks. In this paper, a slot length configuration scheme is proposed. In the proposed scheme, we assume imperfect spectrum sensing. The spectrum sensing result is considered when configuring the slot length. Therefore, slots with different sensing results have different slot lengths. This setting fully takes into account the fact that the sojourn time of channel idle state and busy state are usually different. An optimization problem to find out the optimal slot length configuration is formulated (which maximizes the secondary throughput) and analyzed. In the formulated problem, primary activities are protected by limiting the percentage of time that the primary activities are interfered with, and the energy efficiency of the secondary system is guaranteed by limiting the percentage of time for spectrum sensing. After a theoretical analysis of the problem, an algorithm is proposed to solve it. In the case of perfect spectrum sensing, another algorithm with low complexity is developed to solve the problem. The numerical results demonstrate that, by having different slot lengths with different sensing results, largely improved performance can be achieved. Impacts of system parameters on the secondary system performance are also discussed.
- Published
- 2019
- Full Text
- View/download PDF
17. ComQA: Question Answering Over Knowledge Base via Semantic Matching
- Author
-
Hai Jin, Yi Luo, Chenjing Gao, Xunzhu Tang, and Pingpeng Yuan
- Subjects
Question answering, knowledge graph, semantic matching, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Question answering over knowledge base (KBQA) is a powerful tool to extract answers from graph-like knowledge bases. Here, we present ComQA-a three-phase KBQA framework by which end-users can ask complex questions and get answers in a natural way. In ComQA, a complex question is decomposed into several triple patterns. Then, ComQA retrieves candidate subgraphs matching the triple patterns from the knowledge base and evaluates the semantic similarity between the subgraphs and the triple patterns to find the answer. It is a long-standing problem to evaluate the semantic similarity between the question and the heterogeneous subgraph containing the answer. To handle this problem, first, a semantic-based extension method is proposed to identify entities and relations in the question while considering the underlying knowledge base. The precision of identifying entities and relations determines the correctness of successive steps. Second, by exploiting the syntactic pattern in the question, ComQA constructs the query graphs for natural language questions so that it can filter out topology-mismatch subgraphs and narrow down the search space in knowledge bases. Finally, by incorporating the information from the underlying knowledge base, we fine-tune general word vectors, making them more specific to ranking possible answers in KBQA task. Extensive experiments over a series of QALD challenges confirm that the performance of ComQA is comparable to those state-of-the-art approaches with respect to precision, recall, and F1-score.
- Published
- 2019
- Full Text
- View/download PDF
18. Coverage, Capacity, and Error Rate Analysis of Multi-Hop Millimeter-Wave Decode and Forward Relaying
- Author
-
Khagendra Belbase, Chintha Tellambura, and Hai Jiang
- Subjects
5G, blockage, mmWave communication, multi-hop network, relay, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In this paper, we analyze the end-to-end (e2e) performance of a millimeter-wave (mmWave) multi-hop relay network. The relays in it are decode-and-forward (DF) type. As appropriate for mmWave bands, we incorporate path loss and blockages considering the links to be either line of sight (LOS) or non line of sight (NLOS). The links also experience Nakagami-m fading with different m-parameters for the LOS and NLOS states. We consider two scenarios, namely sparse and dense deployments. In the sparse case, the nodes (relays and the destination) are limited by additive noise only. We derive closed-form expressions for the distribution of equivalent e2e signal-to-noise-ratio (SNR), coverage probability, ergodic capacity, and symbol error rate (SER) for the three classes of digital modulation schemes, namely, binary phase shift keying (BPSK), differential BPSK (DBPSK), and square-quadrature amplitude modulation (QAM). In the dense case, the nodes are limited by interference only. Here, we consider two situations: 1) interference powers are independent and identically distributed (i.i.d.) and 2) they are independent but not identically distributed (i.n.i.d.). For the latter situation, closed-form analysis is exceedingly difficult. Therefore, we use the Welch-Satterthwaite Approximation for the sum of Gamma variables to derive the distribution of the total interference. For both situations, we derive the distribution of signal-to-interference ratio (SIR), coverage probability, ergodic capacity, and SERs for the DBPSK and BPSK. We study how these measures are affected by the number of hops. The accuracy of the analytical results is verified via Monte-Carlo simulation. We show that multi-hop relaying provides significant coverage improvements in blockage-prone mmWave networks.
- Published
- 2019
- Full Text
- View/download PDF
19. A Comparative Study of Deep Learning-Based Vulnerability Detection System
- Author
-
Zhen Li, Deqing Zou, Jing Tang, Zhihao Zhang, Mingqian Sun, and Hai Jin
- Subjects
Vulnerability detection, deep learning, source code, comparative study, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Source code static analysis has been widely used to detect vulnerabilities in the development of software products. Vulnerability patterns defined purely by human experts are laborious to produce and error-prone, which has motivated the use of machine learning for vulnerability detection. In order to relieve human experts of defining vulnerability rules or features, a recent study shows the feasibility of leveraging deep learning to detect vulnerabilities automatically. However, the impact of different factors on the effectiveness of vulnerability detection is unknown. In this paper, we collect two datasets from programs involving 126 types of vulnerabilities, on which we conduct the first comparative study to quantitatively evaluate the impact of different factors on the effectiveness of vulnerability detection. The experimental results show that accommodating control dependency can increase the overall effectiveness of vulnerability detection (F1-measure) by 20.3%; the imbalanced data processing methods are not effective for the dataset we create; bidirectional recurrent neural networks (RNNs) are more effective than unidirectional RNNs and convolutional neural networks, which in turn are more effective than the multi-layer perceptron; and using the output corresponding to the last time step of the bidirectional long short-term memory (BLSTM) can reduce the false negative rate by 2.0% at the price of increasing the false positive rate by 0.5%.
- Published
- 2019
- Full Text
- View/download PDF
20. NGraph: Parallel Graph Processing in Hybrid Memory Systems
- Author
-
Wei Liu, Haikun Liu, Xiaofei Liao, Hai Jin, and Yu Zhang
- Subjects
Graph processing, data placement, graph partitioning, DRAM/NVM hybrid memory, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Big data applications like graph processing place high demands on memory capacity. Byte-addressable non-volatile memory (NVM) technologies can offer much larger memory capacity and lower cost per bit relative to traditional DRAM. They are expected to play a crucial role in mitigating I/O operations for big data processing. However, since NVMs show higher access latency and lower bandwidth compared with DRAM, it is still challenging to fully exploit the advantages of both DRAM and NVM for graph processing. In this paper, we propose NGraph, a new parallel graph processing framework specially designed for hybrid memory systems. According to the different access patterns of graph data, NGraph exploits memory heterogeneity-aware data placement strategies to avoid random accesses and frequent updates to NVM. NGraph partitions the graph by destination vertices and exploits a task decomposition scheme to avoid data contention between multicores. Meanwhile, NGraph balances the execution time of parallel graph data processing on multicores through a work-stealing strategy. Moreover, NGraph employs software-based data prefetching to improve the cache hit rate, and supports huge pages to reduce address translation overhead. We evaluate NGraph using a hybrid memory emulator. The experimental results show that NGraph can achieve up to 48.28% performance improvement on several typical benchmarks compared with the state-of-the-art systems Ligra and Polymer.
- Published
- 2019
- Full Text
- View/download PDF
21. Multi-Objective Optimum Design of High-Speed Backplane Connector Using Particle Swarm Optimization
- Author
-
Wenjie Yu, Zhi Zeng, Bei Peng, Shuo Yan, Yueshuang Huang, Hai Jiang, Xunbo Li, and Tao Fan
- Subjects
High-speed backplane connector, contact pairs, insertion force, contact resistance, multi-objective connector design, particle swarm optimization algorithm, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
This paper outlines a new procedure for computer modeling and optimum design for the dynamic mechanical and electrical study of a high-speed backplane connector, which is a key electrical interconnection technology in large communications equipment, ultra-high performance servers, supercomputers, industrial computers, high-end storage devices, and so on. The optimum structure design of contact pairs is important for a backplane connector in meeting multiple challenges in terms of minimizing the maximum insertion force and the contact resistance. Current optimization schemes, such as the quadrature method, are relatively complex. Therefore, we designed the connector contact pairs for simultaneously obtaining the proper insertion force and the contact resistance through a multi-objective particle swarm optimization (MCDPSO) method with simpler settings and faster convergence speed. In this paper, the required insertion force was minimized during the entire process, and the minimum contact resistance was maintained after insertion. To this end, an MCDPSO algorithm was proposed for the connector design. A dynamic weight coefficient was developed to calculate the interval values of the reserved solutions for the selection of the operator, and an external archive update based on roulette wheel selection and gbest selection strategies was developed to increase the diversity of the solutions. A set of optimal structure solutions of the contact pairs was obtained for the subsequent design optimization. The feasibility and effectiveness of the proposed method were verified by comparing with the results from ANSYS finite element simulation.
- Published
- 2018
- Full Text
- View/download PDF
22. Entity-Based Language Model Smoothing Approach for Smart Search
- Author
-
Feng Zhao, Zeliang Tian, and Hai Jin
- Subjects
Language model smoothing, entity, knowledge base, semantic relevance, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Smart search plays an important role in all walks of life; for example, accurately searching for required knowledge from massive resources according to business needs is an important way to enhance industrial intelligence. Smoothing of the language model is essential for obtaining high-quality search results because it helps to reduce the mismatching and overfitting problems caused by data sparseness. Traditional smoothing methods lexically focus on the global corpus and locally clustered document information without semantic analysis, which leads to a deficiency in the semantic correlations between query statements and documents. In this paper, we propose an entity-based language model smoothing approach for smart search that uses semantic correlation and takes entities as bridges to build an entity semantic language model using a knowledge base. In this approach, entities in the documents are linked to an external knowledge base, such as Wikipedia. Then, the entity semantic language model is generated by using soft-fused and hard-fused methods. A two-level merging strategy is also presented to smooth the language model according to whether a given word is semantically relevant to the document or not, integrating the Dir-smoothing and JM-smoothing methods. Experimental results show that the smoothed language model more closely approximates the word probability distribution under the document's semantic theme and more accurately estimates the relevance between query and document.
- Published
- 2018
- Full Text
- View/download PDF
23. A Fine-Grained Multi-Tenant Permission Management Framework for SDN and NFV
- Author
-
Deqing Zou, Yu Lu, Bin Yuan, Haoyu Chen, and Hai Jin
- Subjects
Software-defined network, API misuse, permission control, network functions, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Although the software-defined network (SDN) has been widely recognized for several years, security issues regarding permission management on the SDN controller remain unaddressed; one gap is the lack of effective permission allocation of application programming interfaces (APIs) for multi-tenant requirements. Besides, the trend of integrating SDN and network function virtualization (NFV) introduces a new problem of permission management on both SDN and NFV APIs. This paper presents a fine-grained permission management framework for the secure sharing of APIs in multi-tenant networks. We propose a permission policy language, which provides a three-level permission abstraction to define the APIs available for multiple tenants to access both OpenFlow switches and network functions. We introduce a permission management framework to effectively enforce permissions and ensure user isolation. A prototype of the proposed framework is implemented on top of the RYU controller. Extensive experiments show that our system is effective for reducing API abuse and introduces only negligible overhead.
- Published
- 2018
- Full Text
- View/download PDF
24. Coverage Analysis of Millimeter Wave Decode-and-Forward Networks With Best Relay Selection
- Author
-
Khagendra Belbase, Zhang Zhang, Hai Jiang, and Chintha Tellambura
- Subjects
5G, decode-and-forward relay, millimeter wave communications, relay selection, stochastic geometry, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In this paper, we investigate the coverage probability improvement of a millimeter wave network due to the deployment of spatially random decode-and-forward (DF) relays. The source and receiver are located at a fixed distance and all the relay nodes are distributed as a 2-D homogeneous Poisson point process (PPP). We first derive the spatial distribution of the set of decoding relays whose received signal-to-noise ratio (SNR) are above the minimum SNR threshold. This set is a 2-D inhomogeneous PPP. From this set, we select a relay that has minimum path loss to the receiver and derive the achievable coverage due to this selection. The analysis is developed using tools from stochastic geometry and is verified using Monte-Carlo simulation. The coverage probabilities of the direct link without relaying, a randomly chosen relay link, and the selected relay link are compared to show the significant performance gain when relay selection is used. We also analyze the effects of beam misalignment and different power allocations at the source and relay on coverage probability. In addition, rate coverage and spectral efficiency are compared for direct and selected relay links to show impressive performance gains with relaying.
- Published
- 2018
- Full Text
- View/download PDF
25. BoundShield: Comprehensive Mitigation for Memory Disclosure Attacks via Secret Region Isolation
- Author
-
Hai Jin, Benxi Liu, Yajuan Du, and Deqing Zou
- Subjects
Memory disclosure attacks, execute-only memory, software security, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Address space layout randomization (ASLR) is now widely adopted in modern operating systems to thwart code reuse attacks. However, an adversary can still bypass fine-grained ASLR by exploiting memory corruption vulnerabilities and performing memory disclosure attacks. Although Execute-no-Read schemes have been proven to be an efficient solution against read-based memory disclosures, existing solutions need modifications to kernel or hypervisor. Besides, the defense of execution-based memory disclosures has been ignored. In this paper, we propose BoundShield, a self-protection scheme that provides comprehensive protection against memory disclosure attacks, especially against those based on executing arbitrary code by leveraging Intel Memory Protection Extension. BoundShield protects code memory by defending not only read-based memory disclosure attacks but also execution-based memory disclosure attacks. On one hand, read-based memory disclosures can be eliminated by hiding all code sections and code pointers in a secret region separated from the user address space. On the other hand, BoundShield prevents return addresses from being corrupted and ensures that all function pointers point to the legitimate entries whenever they are dereferenced, which significantly reduces the attack surface for execution-based memory disclosures. We have implemented a prototype of BoundShield based on a set of modifications to compiler toolchain and the standard C library. Our evaluation results show that the BoundShield can provide strong defenses against memory disclosure attacks while incurring a small performance overhead.
- Published
- 2018
- Full Text
- View/download PDF
26. Performance Analysis of Wireless Powered Incremental Relaying Networks With an Adaptive Harvest-Store-Use Strategy
- Author
-
Guoxin Li and Hai Jiang
- Subjects
Energy harvesting, diversity order, incremental relaying, power splitting, relays, wireless communication, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In this paper, we consider a wireless powered cooperative network, in which a source with constant power supply communicates with a destination under the assistance of an energy harvesting (EH) relay. From signals of the source, the relay can perform EH and information decoding simultaneously by using the power splitting (PS) technique. To increase the spectrum efficiency of the system and save energy consumption at the relay, an incremental decode-and-forward (IDF) relaying protocol is adopted to forward information. Inspired by the features of the IDF protocol, we propose a new energy harvesting and use strategy, named adaptive harvest-store-use (AHSU). In this proposed strategy, the relay adaptively sets its PS ratio according to a one-bit feedback from the destination, the channel estimation result for the source-to-relay link, and the relay's energy status. A finite-state Markov chain (MC) is employed to model the charging/discharging behavior of the relay's battery. The steady-state distribution of the MC is first derived, and then used to calculate the exact outage probability. In order to gain further insights, we investigate the outage performance of the system when the transmit signal-to-noise ratio of the source is high. Based on the asymptotic outage probability expression, the diversity order and coding gain are characterized, which demonstrates that a full diversity order is achieved by our proposed AHSU strategy in the considered network.
- Published
- 2018
- Full Text
- View/download PDF
27. A Large-Scale Study of I/O Workload’s Impact on Disk Failure
- Author
-
Song Wu, Yusheng Yi, Jiang Xiao, Hai Jin, and Mao Ye
- Subjects
Disk failure, I/O workload, duty cycle, bandwidth, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In large-scale data centers, disk failure is the norm rather than an exception. Frequent disk failure noticeably hurts user experience and, in the worst case, results in the unavailability of data. Previous research from both industry and academia has studied the reasons for disk failure; however, there is a lack of knowledge of the intrinsic relation between failed disks and their I/O workload. In this paper, we collect and investigate about four billion drive-hours of I/O traces over 500,000 disks in Tencent's data centers. Our focus is to first explore the key characteristics of I/O workload that influence disk reliability. We further present the impact of these I/O workload features on the lifespan of disks and uncover the root causes. Finally, we introduce a new metric to accurately identify the "dangerous" I/O workload that is extremely harmful to disk health. To the best of our knowledge, this research is by far the first in-depth analysis of the I/O workload's impact on disk reliability, and it opens up a new dimension for I/O scheduling policy in data centers.
- Published
- 2018
- Full Text
- View/download PDF
28. PrivGuard: Protecting Sensitive Kernel Data From Privilege Escalation Attacks
- Author
-
Weizhong Qiang, Jiawei Yang, Hai Jin, and Xuanhua Shi
- Subjects
Kernel, non-control-data, credential, privilege escalation, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Kernels of operating systems are written in low-level unsafe languages, which makes them inevitably vulnerable to memory corruption attacks. Most existing kernel defense mechanisms focus on preventing control-data attacks. Recently, attackers have turned to non-control-data attacks by hijacking data flow, so as to bypass current defense mechanisms. Previous work has proved that non-control-data attacks are a critical threat to kernels. One of the important purposes of these attacks is to achieve privilege escalation by overwriting sensitive kernel data. The goal of our research is to develop a lightweight protection mechanism to mitigate non-control-data attacks that compromise sensitive kernel data. We propose an approach that enforces the integrity of sensitive kernel data by preventing illegal writes to these data, so as to mitigate privilege escalation attacks. The main challenge of the proposed approach is to validate the modification of sensitive kernel data at runtime. The validation routine must verify the legitimacy of the duplicated sensitive data and ensure the credibility of the verification. To address this challenge, we modify the system call entry point to monitor changes to the sensitive kernel data without any change to the Linux access control mechanism. Then, we use stack canaries to protect the duplicates of sensitive kernel data that are used for integrity checking. In addition, we protect the integrity of sensitive kernel data by forbidding illegal updates to them. We have implemented a prototype for the Linux kernel on the Ubuntu Linux platform. The evaluation results of our prototype demonstrate that it can mitigate privilege escalation attacks, and its performance overhead is moderate.
- Published
- 2018
- Full Text
- View/download PDF
29. SBLWT: A Secure Blockchain Lightweight Wallet Based on Trustzone
- Author
-
Weiqi Dai, Jun Deng, Qinyuan Wang, Changze Cui, Deqing Zou, and Hai Jin
- Subjects
Bitcoin, SPV, wallet, Trustzone, blockchain, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Due to the increasing total value of digital currencies, the security of encryption wallets is becoming more and more important. The hardware-based wallet is safe, but it is inconvenient because users need to carry an additional physical device; the software-based wallet is convenient, but its safety cannot be guaranteed. All these wallets need to synchronize the blockchain, while most current mobile devices do not have the capability to store all blocks. To solve these problems, mobile devices can use simplified payment verification (SPV). Nevertheless, in existing methods, there is no good way to protect the verification process of a transaction. In this paper, we design a secure blockchain lightweight wallet based on Trustzone to protect SPV. It is more portable than the hardware wallet, and safer than the software wallet. Through isolation, it can also protect the private key and the wallet's address from being stolen by attackers, regardless of whether the Rich OS is malicious. Meanwhile, it can protect the verification process by verifying transactions in the secure execution environment (SEE), and it keeps the local block headers unreadable to the Rich OS through encryption. We deploy it on a Raspberry Pi 3 Model B development board. The experimental results show that it has little impact on the system.
- Published
- 2018
- Full Text
- View/download PDF
30. Solving Anomalies in NFV-SDN Based Service Function Chaining Composition for IoT Network
- Author
-
Deqing Zou, Zirong Huang, Bin Yuan, Haoyu Chen, and Hai Jin
- Subjects
Software defined network, network function virtualization, policy composition, service function chaining, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Service function chaining (SFC) is able to provide customizable network function services to the traffic flows of different IoT subjects. Nowadays, SFC has become essential for implementing the service requirements of different IoT devices, thanks to the flexibility and programmability provided by the emerging technologies of software defined network (SDN) and network function virtualization (NFV). These techniques play an increasingly important role in service deployment and allow the service requirements of a certain IoT device to be specified by different subjects, including SDN applications and network managers. However, the independent generation of SFC policies by multiple policy makers over the same device may introduce several problems in the process of deploying SFCs to the IoT network. Turning the individual considerations into coherent global SFC policies can be challenging. It requires a special process of composition and transition, considering the scenario of combining policies with different concerns specified by different entities that have no insight into each other's policies. In this paper, we propose a composition method to solve the anomalies existing in the process of composing distinct policies in an IoT network environment with multiple IoT service managers. We design two algorithms for the proposed anomaly-free policy composition method, and implement a prototype. Extensive experiment results show that our proposed method can eliminate the anomalies between policies and induces only trivial overhead in the process of generating data plane rules.
- Published
- 2018
- Full Text
- View/download PDF
31. Optimal Offloading in Fog Computing Systems With Non-Orthogonal Multiple Access
- Author
-
Ziling Wei and Hai Jiang
- Subjects
Computation offloading, multiaccess communication, non-orthogonal multiple access (NOMA), power allocation, resource management, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Fog computing has recently become a promising method to meet the increasing computation demands from mobile applications in the Internet of Things (IoT). In fog computing, the computation tasks of an IoT device can be offloaded to fog nodes. Due to the limited computation capacity of a fog node, the IoT device may try to offload its tasks to multiple fog nodes. In this paper, to improve the offloading efficiency, downlink non-orthogonal multiple access is applied in fog computing systems such that the IoT device can perform simultaneous offloading to multiple fog nodes. Then, to maximize the long-term average system utility, a task and power allocation problem for computation offloading is formulated subject to task delay and energy cost constraints. By the Lyapunov optimization method, the original problem is transformed to an online optimization problem in each time slot, which is non-convex. Accordingly, we propose an algorithm to solve the non-convex online optimization problem with polynomial complexity.
- Published
- 2018
- Full Text
- View/download PDF
32. CloudVMI: A Cloud-Oriented Writable Virtual Machine Introspection
- Author
-
Weizhong Qiang, Gongping Xu, Weiqi Dai, Deqing Zou, and Hai Jin
- Subjects
Virtual machine introspection, cloud management, security monitoring, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
IoT generates considerable amounts of data, which often requires leveraging cloud computing to effectively scale the costs of transferring and computing these data. The concern regarding cloud security is more severe because many devices are connected to the cloud. It is important to automatically monitor and control these resources and services to efficiently and securely deliver cloud computing. The writable virtual machine introspection (VMI) technique can not only detect the runtime state of a guest VM from the outside but also update the state from the outside without any need for administrator effort. Thus, the writable VMI technique can provide the benefit of high automation, which is helpful for automated cloud management. However, the existing writable VMI technique produces high overhead, fails to monitor VMs distributed on different host nodes, and fails to monitor multiple VMs with heterogeneous guest OSes within a cloud; therefore, it cannot be applied for automated and centralized cloud management. In this paper, we present CloudVMI, a writable and cross-node monitoring VMI framework that overcomes the aforementioned issues. CloudVMI solves the semantic gap problem by redirecting the critical execution of system calls issued by the VMI program into the monitored VM. It has strong practicability by allowing one introspection program to inspect heterogeneous guest OSes and to monitor VMs distributed on remote host nodes. Thus, CloudVMI can be directly applied for automated and centralized cloud management. Moreover, we implement some defensive measures to secure CloudVMI itself. To highlight the writable capability and practical usefulness of CloudVMI, we implement four applications based on CloudVMI. CloudVMI is designed, implemented, and systematically evaluated. The experimental results demonstrate that CloudVMI is effective and practical for cloud management and that its performance overhead is acceptable compared with existing VMI systems.
- Published
- 2017
- Full Text
- View/download PDF
33. Fast and Parallel Keyword Search Over Public-Key Ciphertexts for Cloud-Assisted IoT
- Author
-
Peng Xu, Xiaolan Tang, Wei Wang, Hai Jin, and Laurence T. Yang
- Subjects
Cloud-assisted Internet-of-Things, searchable public-key ciphertexts, hidden relationship, semantic security, parallel search, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Cloud-assisted Internet of Things (IoT) is a popular system model that merges the advantages of both the cloud and IoT. In this model, IoT collects the real-world data, and the cloud maximizes the value of these data by sharing and analyzing them. Due to the sensitivity of the collected data, maintaining the security of these data is one of the main requirements in practice. Searchable public-key encryption is a fundamental tool to achieve secure delegated keyword search over ciphertexts in the cloud. To accelerate search performance, Xu et al. proposed the concept of searchable public-key ciphertexts with hidden structures (SPCHS) and constructed an SPCHS instance to achieve search complexity that is sublinear in the total number of ciphertexts, rather than the linear complexity of traditional works. However, that instance cannot achieve parallel keyword search due to its inherent limitations, which makes it impractical. To address this problem, we propose a new instance of SPCHS to achieve fast and parallel keyword search over public-key ciphertexts. In contrast to the work by Xu et al., the new instance constructs a new type of hidden relationship among searchable ciphertexts, where every searchable ciphertext has a hidden relationship with a common and public parameter. Upon receiving a keyword search trapdoor, one can disclose all corresponding relationships in parallel and then find all matching ciphertexts. Hence, the new relationship allows a keyword search task to be performed in parallel. In addition, due to the limited capability of IoT devices, the new instance provides a more efficient encryption algorithm to save time and communication cost.
- Published
- 2017
- Full Text
- View/download PDF
34. Patch-Related Vulnerability Detection Based on Symbolic Execution
- Author
-
Weizhong Qiang, Yuehua Liao, Guozhong Sun, Laurence T. Yang, Deqing Zou, and Hai Jin
- Subjects
Patch testing, symbolic execution, memory vulnerability, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
During the lifecycle of a software system, software patches are committed to software repositories to fix discovered bugs or append new features. Unfortunately, the patches may bring new bugs or vulnerabilities, which could break the stability and security of the software system. A study shows that more than 15% of software patches are erroneous due to poor testing. In this paper, we present a novel approach for automatically determining whether a patch brings new vulnerabilities. Our approach combines symbolic execution with data flow analysis and static analysis, which allows a quick check of patch-related codes. We focus on typical memory-related vulnerabilities, including buffer overflows, memory leaks, uninitialized data, and dangling pointers. We have implemented our approach as a tool called KPSec, which we used to test a set of real-world software patches. Our experimental results show that our approach can effectively identify typical memory-related vulnerabilities introduced by the patches and improve the security of the updated software.
- Published
- 2017
- Full Text
- View/download PDF