8 results for '"YaJuan Du"'
Search Results
2. Reducing tail latency of LSM-tree based key-value store via limited compaction
- Author
-
Yongchao Hu and Yajuan Du
- Subjects
Write amplification, Computer science, Reading (computer), Real-time computing, Compaction, Associative array, Data access, User experience design, Latency (engineering), Stall (engine) - Abstract
Key-value stores based on the log-structured merge-tree (LSM-tree), e.g. LevelDB by Google and RocksDB by Facebook, have been widely used in many cyber-physical system scenarios to provide flexible data access and high performance. Compaction is a necessary operation that deletes invalid data, reduces data size, and improves read and write efficiency. However, compaction stalls write requests, which significantly degrades system performance. This paper studies the effect of compaction on tail latency, which shows frequent latency spikes. These spikes cause poor user experience and have already drawn the attention of industrial researchers. Furthermore, the height of a latency spike is proportional to the number of SSTables involved. We propose limited compaction, which allows only a subset of SSTables to take part in each compaction. We implement limited compaction with a basic method that chooses the SSTables randomly and a selective method that chooses the SSTables according to their overlapping ranges with the next level. Experimental results on LevelDB with comprehensive benchmarks show that the proposed methods can effectively reduce tail latency while inducing acceptable write amplification.
- Published
- 2021
- Full Text
- View/download PDF
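The selective method described in the abstract above could be sketched as follows. This is a minimal illustration, not code from LevelDB; the names `SSTable`, `overlap_count`, and `select_limited` are assumptions for the sketch.

```python
# Sketch of "selective" limited compaction: instead of compacting every
# candidate SSTable, keep only a limited number, preferring the ones
# whose key ranges overlap the fewest tables in the next level
# (i.e. the ones that are cheapest to merge).

from typing import List, Tuple

SSTable = Tuple[int, int]  # (min_key, max_key), illustrative model

def overlap_count(table: SSTable, next_level: List[SSTable]) -> int:
    """Count next-level tables whose key range intersects `table`."""
    lo, hi = table
    return sum(1 for (nlo, nhi) in next_level if not (hi < nlo or nhi < lo))

def select_limited(candidates: List[SSTable],
                   next_level: List[SSTable],
                   limit: int) -> List[SSTable]:
    """Keep at most `limit` candidates, smallest overlap first."""
    ranked = sorted(candidates, key=lambda t: overlap_count(t, next_level))
    return ranked[:limit]
```

Bounding the number of participating SSTables directly bounds the height of the latency spike, at the price of leaving some overlap (write amplification) for later compactions.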
3. PreGC: Pre-migrating valid pages to relieve performance cliff of 3D solid-state drives
- Author
-
Wei Liu, Jason Xue, Yao Zhou, Rui Wang, and Yajuan Du
- Subjects
Write amplification, Computer science, Solid-state, Response time, Parallel computing, Work in process, Cliff, Latency (engineering) - Abstract
To address the growing concern over SSD performance, this paper studies garbage collection (GC) from the perspective of performance cliffs and tail latency. First, our preliminary experiments show that increased page migration is the root cause of the performance cliff. Then, building on existing work targeting GC performance in 2D SSDs, including partial GC and aggressive GC, a novel GC-assisting method, PreGC, is proposed to relieve GC-induced performance cliffs. The key idea of PreGC is to pre-migrate a portion of the pages in victim blocks shortly before GC is invoked and while the system is idle. Thus, normal page migrations induced by GC are reduced and response-time peaks are lowered. Experimental results show that PreGC efficiently relieves the performance cliff by reducing the number of pages migrated in normal GC, as well as the tail latency, while inducing negligible write amplification.
- Published
- 2019
- Full Text
- View/download PDF
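The pre-migration idea above can be illustrated with a toy model. All names and the cost figures here are assumptions for the sketch, not values from the paper.

```python
# Sketch of PreGC's core idea: during idle time, move some valid pages
# out of a likely victim block ahead of GC, so that when GC actually
# fires, fewer migrations sit on the critical path of foreground I/O.

def pre_migrate(valid_pages, idle_budget):
    """Move up to `idle_budget` valid pages during idle time.
    Returns (pages moved early, pages left for normal GC)."""
    return valid_pages[:idle_budget], valid_pages[idle_budget:]

def gc_cost_us(remaining_pages, migrate_cost_us=50, erase_cost_us=3000):
    """Foreground GC cost: migrate the leftover valid pages, then erase.
    The per-operation latencies are illustrative placeholders."""
    return len(remaining_pages) * migrate_cost_us + erase_cost_us
```

Every page moved in the idle window is one migration removed from the GC burst, which is exactly what flattens the response-time peak.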
4. Adapting Layer RBERs Variations of 3D Flash Memories via Multi-granularity Progressive LDPC Reading
- Author
-
Yajuan Du, Yao Zhou, Wei Liu, Shengwu Xiong, and Meng Zhang
- Subjects
Flash (photography), Network on a chip, Computer science, Reading (computer), Latency (audio), Low-density parity-check code, Computer hardware - Abstract
Existing studies have uncovered significant Raw Bit Error Rate (RBER) variations among different layers of 3D flash memories due to manufacturing process variation. These RBER variations cause significantly varied read latencies when reading data with traditional Low-Density Parity-Check (LDPC) codes designed for planar flash memories, which leads to sub-optimal read performance of flash-based Solid-State Drives (SSDs). To investigate this latency diversity, this paper first performs a preliminary experiment and observes that LDPC read levels, which are proportional to read latencies, increase at different speeds across layers as data retention time grows. Then, by exploiting this observation, a Multi-Granularity LDPC (MG-LDPC) read method is proposed to adapt the level-increase speed for each layer. Five LDPC engines with different increase granularities are designed to match layers' RBER growth rates. Finally, two implementations of MG-LDPC are applied to assign LDPC engines to each flash layer, either in a fixed way or dynamically according to prior read levels. Experimental results show that the two proposed implementations can reduce SSD read response time by 21% and 47% on average, respectively.
- Published
- 2019
- Full Text
- View/download PDF
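The two assignment policies in the abstract above can be caricatured as follows. The engine granularities and both mapping rules are invented for illustration; the paper's actual mappings are not reproduced here.

```python
# Toy sketch of MG-LDPC engine assignment: each "engine" advances the
# LDPC read level in steps of a different size, so high-RBER layers can
# reach the level they need in fewer sensing rounds.

ENGINE_GRANULARITIES = [1, 2, 3, 4, 5]  # read-level step per retry (assumed)

def assign_fixed(layer, num_layers):
    """Fixed policy: map layer position to an engine once, offline."""
    idx = layer * len(ENGINE_GRANULARITIES) // num_layers
    return ENGINE_GRANULARITIES[idx]

def assign_dynamic(prior_read_level):
    """Dynamic policy: a layer that recently needed many read levels
    gets a coarser engine so it converges in fewer rounds."""
    idx = min(prior_read_level // 2, len(ENGINE_GRANULARITIES) - 1)
    return ENGINE_GRANULARITIES[idx]

def reads_needed(required_level, granularity):
    """Sensing rounds until the accumulated level covers the requirement."""
    return -(-required_level // granularity)  # ceiling division
```

A coarser granularity trades sensing precision for fewer rounds, which is why matching it to each layer's RBER growth rate pays off.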
5. United SSD block cleaning via constrained victim block selection
- Author
-
Yu Zhu, Meng Zhang, Yajuan Du, and Wei Liu
- Subjects
Computer science, Write amplification, Flash (manufacturing), Embedded system, Selection (genetic algorithm), Wear leveling, Garbage collection, Block (data storage) - Abstract
Solid-state drives (SSDs), widely used in cyber-physical systems, have to use block cleaning operations to reclaim storage space because of their out-of-place update characteristics. As these operations often incur high time costs and harm flash lifetime through write amplification, they largely degrade SSD system performance. Garbage collection and wear leveling, the two typical block cleaning techniques, are invoked without enough communication with each other, which may cause one block to be repeatedly cleaned within a short period, worsening both the performance and the lifetime of SSDs. This paper proposes a unified block cleaning method called UniBC that considers the two invoking conditions jointly to reduce the overall cost of block cleaning. In detail, UniBC enables garbage collection to consider the wear degree of victim blocks through constrained victim block selection, and makes wear leveling aware of the utilization degree of victim blocks. By exploiting UniBC, repetitive block cleaning operations can be avoided and overall system performance can be improved. Experimental results show that UniBC improves block cleaning performance by 10% on average.
- Published
- 2019
- Full Text
- View/download PDF
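One way to picture the united victim selection described above is a single score combining both invoking conditions. The scoring function and its weight are assumptions for this sketch, not UniBC's actual policy.

```python
# Sketch of united victim selection: score each candidate block by both
# its invalid-page count (the GC benefit of cleaning it) and its erase
# count (the wear-leveling cost), rather than letting GC and wear
# leveling pick victims independently and clean the same block twice.

def select_victim(blocks, wear_weight=0.5):
    """blocks: list of (block_id, invalid_pages, erase_count).
    Prefer many invalid pages and low wear, so one cleaning pass
    serves both garbage collection and wear leveling."""
    def score(block):
        _, invalid, erases = block
        return invalid - wear_weight * erases
    return max(blocks, key=score)
```

With independent policies, greedy GC would pick the block with the most invalid pages even if it is already heavily worn; the combined score steers cleaning away from such blocks.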
6. FastGC
- Author
-
Changsheng Xie, Shunzhuo Wang, Fei Wu, Jiaona Zhou, Chengmo Yang, and Yajuan Du
- Subjects
Scheme (programming language), Computer science, Reliability (computer networking), Operating system, Data corruption, Data migration, Garbage collection - Abstract
Copyback is an advanced command that accelerates data migration in garbage collection (GC). Unfortunately, detecting copyback feasibility (whether copyback can be carried out with assured reliability) against data corruption in traditional copyback-based GC incurs an expensive performance penalty. This paper first explores copyback error characteristics on real NAND flash chips, then proposes a fast garbage collection scheme called FastGC. It utilizes copyback error characteristics to efficiently detect the copyback feasibility of data instead of transferring out all valid data for detection. Experimental results in SSDsim show that the proposed FastGC greatly improves write response time and read response time, by up to 44.2% and 66.3% respectively, compared to traditional copyback-based GC.
- Published
- 2018
- Full Text
- View/download PDF
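The feasibility check above can be sketched with a simple per-page counter. The threshold value and the counter model are assumptions for illustration; FastGC's actual detection is derived from measured copyback error characteristics.

```python
# Sketch of copyback-aware migration: on-chip copyback is fast because
# data never crosses the channel, but errors accumulate over chained
# copybacks (no ECC correction on-chip). Track the chain length and
# fall back to an off-chip read-correct-write once a limit is reached.

COPYBACK_LIMIT = 3  # assumed max chained copybacks before ECC risk

def migrate(chain_len, limit=COPYBACK_LIMIT):
    """Return (migration mode, page's new chain length)."""
    if chain_len < limit:
        return "copyback", chain_len + 1   # cheap on-chip move
    return "off-chip", 0                   # read out, correct, reprogram
```

The win is that feasibility is decided from bookkeeping rather than by transferring every valid page out just to check it.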
7. A PV aware data placement scheme for read performance improvement on LDPC based flash memory
- Author
-
Yejia Di, Qingfeng Zhuge, Yajuan Du, Kaijie Wu, Liang Shi, Chun Jason Xue, Edwin H.-M. Sha, and Qiao Li
- Subjects
Computer science, Work in process, Partition (database), Flash memory, Process variation, Computer engineering, Embedded system, Low-density parity-check code, Performance improvement, Wear leveling, Data placement - Abstract
This paper proposes to improve the read performance of LDPC-based flash memory by exploiting process variation (PV). The work includes three parts. First, a block grouping approach is proposed to classify flash blocks based on their reliability. Second, based on the grouping approach, a read data placement scheme is proposed, designed to place read-hot data on flash blocks with high reliability. However, simply placing read-hot data on highly reliable blocks conflicts with recent PV-aware wear leveling schemes. In the third part of the work, a grouping partition scheme is proposed to limit the number of highly reliable blocks reserved for read-hot data. In this way, read performance can be substantially improved with little impact on lifetime improvement.
- Published
- 2017
- Full Text
- View/download PDF
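The three parts above can be sketched together in a few lines. The RBER threshold, the cap, and all names are illustrative assumptions.

```python
# Sketch of PV-aware placement: (1) group blocks by measured RBER,
# (2) cap the high-reliability group so wear leveling still has
# low-RBER blocks to work with (the "grouping partition"), and
# (3) route read-hot data to the capped reliable group.

def group_blocks(blocks, rber_threshold, cap):
    """blocks: list of (block_id, rber). Returns (reliable, normal)."""
    ranked = sorted(blocks, key=lambda b: b[1])  # most reliable first
    reliable = [b for b in ranked if b[1] <= rber_threshold][:cap]
    reliable_ids = {b[0] for b in reliable}
    normal = [b for b in ranked if b[0] not in reliable_ids]
    return reliable, normal

def place(data_is_read_hot, reliable, normal):
    """Read-hot data goes to a high-reliability block when one remains."""
    pool = reliable if data_is_read_hot and reliable else normal
    return pool.pop(0)
```

Reliable blocks need fewer LDPC sensing levels, so parking read-hot data there is what converts the reliability margin into read latency savings.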
8. Reducing LDPC Soft Sensing Latency by Lightweight Data Refresh for Flash Read Performance Improvement
- Author
-
Yajuan Du, Chun Jason Xue, Hai Jin, Liang Shi, Qiao Li, and Deqing Zou
- Subjects
Computer science, Soft sensing, Latency (engineering), Performance improvement, Low-density parity-check code, Error detection and correction, Computer hardware - Abstract
To relieve the reliability problems caused by technology scaling, LDPC codes have been widely applied in flash memories to provide high error correction capability. However, the slowdown of LDPC reads as data retention time grows largely weakens the access-speed advantage of flash memories. This paper applies the concept of refresh, previously used for flash lifetime improvement, to optimize flash read performance. Exploiting data read characteristics, this paper proposes LDR, a lightweight data refresh method that aggressively corrects errors in read-hot pages with long read latency and reprograms the error-free data into new pages. Experimental results show that LDR achieves a 29% read performance improvement with only 0.2% extra P/E cycles on average, which imposes negligible overhead on flash lifetime.
- Published
- 2017
- Full Text
- View/download PDF
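LDR's selection policy above can be sketched with a simple page model. The thresholds and the dictionary model are assumptions for this sketch, not values from the paper.

```python
# Sketch of lightweight data refresh: refresh only pages that are both
# read-hot (frequently read) and slow (many LDPC soft-sensing levels).
# Refreshing corrects the accumulated retention errors and reprograms
# the page, so later reads succeed with a single fast sensing level.

def refresh_pages(pages, hot_reads=100, slow_levels=4):
    """pages: list of dicts with 'reads' and 'ldpc_levels'.
    Refreshes qualifying pages in place; returns how many were
    refreshed (each refresh costs one extra P/E cycle)."""
    refreshed = 0
    for page in pages:
        if page["reads"] >= hot_reads and page["ldpc_levels"] >= slow_levels:
            page["ldpc_levels"] = 1   # reprogrammed: fast read again
            refreshed += 1
    return refreshed
```

Restricting refresh to the read-hot, high-latency pages is what keeps the extra P/E overhead small while still capturing most of the latency benefit.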
Discovery Service for Jio Institute Digital Library