101 results for "Yiyu Shi"
Search Results
2. 'One-Shot' Reduction of Additive Artifacts in Medical Images
- Author
-
Yu-Jen Chen, Yen-Jung Chang, Shao-Cheng Wen, Xiaowei Xu, Meiping Huang, Haiyun Yuan, Jian Zhuang, Yiyu Shi, and Tsung-Yi Ho
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), Computer Science - Computer Vision and Pattern Recognition, FOS: Electrical engineering, electronic engineering, information engineering, Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Medical images may contain various types of artifacts with different patterns and mixtures, which depend on many factors such as scan settings, machine condition, patients' characteristics, and the surrounding environment. However, existing deep-learning-based artifact reduction methods are restricted by their training sets with specific predetermined artifact types and patterns, which limits their clinical adoption. In this paper, we introduce One-Shot medical image Artifact Reduction (OSAR), which exploits the power of deep learning without using pre-trained general networks. Specifically, we train a light-weight, image-specific artifact reduction network using data synthesized from the input image at test time. Without requiring any prior large training data set, OSAR can work with almost any medical image containing varying additive artifacts that appear in no existing data set. We use Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) as vehicles to show that the proposed method reduces artifacts better than the state of the art, both qualitatively and quantitatively, in a shorter test time.
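The abstract's test-time training idea (synthesize corrupted/target pairs from the one input image, then fit a light-weight model) can be sketched roughly as follows. The stripe synthesizer and the 3-tap linear "network" are hypothetical stand-ins for the paper's artifact model and architecture, not its implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pairs(test_img, n_pairs=8):
    # Training data synthesized from the test image itself: the target is
    # the input image; the network input adds one extra random additive
    # stripe artifact of the family we want to remove (illustrative only).
    pairs = []
    for _ in range(n_pairs):
        freq = rng.uniform(2.0, 6.0)
        cols = np.arange(test_img.shape[1])
        stripe = 0.5 * np.sin(2 * np.pi * freq * cols / test_img.shape[1])
        pairs.append((test_img + stripe[None, :], test_img))
    return pairs

def conv_rows(img, k):
    # A 3-tap horizontal filter: a toy "light-weight image-specific network".
    p = np.pad(img, ((0, 0), (1, 1)))
    return k[0] * p[:, :-2] + k[1] * p[:, 1:-1] + k[2] * p[:, 2:]

def train(pairs, lr=1e-5, steps=200):
    k = np.array([0.0, 1.0, 0.0])          # start as the identity filter
    losses = []
    for _ in range(steps):
        grad, loss = np.zeros(3), 0.0
        for c, t in pairs:
            p = np.pad(c, ((0, 0), (1, 1)))
            win = [p[:, :-2], p[:, 1:-1], p[:, 2:]]
            err = k[0] * win[0] + k[1] * win[1] + k[2] * win[2] - t
            loss += (err ** 2).sum()
            for j in range(3):
                grad[j] += 2.0 * (err * win[j]).sum()
        losses.append(loss)
        k -= lr * grad
    return k, losses

clean = np.ones((16, 32))
artifact = 0.5 * np.sin(2 * np.pi * 4 * np.arange(32) / 32)
test_img = clean + artifact[None, :]        # the only image we ever see

k, losses = train(make_pairs(test_img))
restored = conv_rows(test_img, k)           # apply the test-time-trained filter
```

No pre-trained network or external data set is involved; everything is derived from `test_img` at test time.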
- Published
- 2021
- Full Text
- View/download PDF
3. Invited: Hardware-aware Real-time Myocardial Segmentation Quality Control in Contrast Echocardiography
- Author
-
Xiaowei Xu, Jian Zhuang, Meiping Huang, Jingtong Hu, Haiyun Yuan, Yiyu Shi, Dewen Zeng, and Yukun Ding
- Subjects
Artificial neural network, Computer science, Regularization (mathematics), Term (time), Resource (project management), Data acquisition, Segmentation, Quality (business), Latency (engineering), Computer hardware
- Abstract
Automatic myocardial segmentation of contrast echocardiography has shown great potential in the quantification of myocardial perfusion parameters. Segmentation quality control is an important step to ensure the accuracy of segmentation results, both for research and for clinical application. Usually, segmentation quality control happens after data acquisition, so at acquisition time the operator cannot know the quality of the segmentation results. On-the-fly segmentation quality control would let the operator adjust the ultrasound probe or retake data if the quality is unsatisfactory, greatly reducing the effort of time-consuming manual correction. However, it is infeasible to deploy state-of-the-art DNN-based models, because the segmentation module and the quality control module must fit in the limited hardware resources of the ultrasound machine while satisfying strict latency constraints. In this paper, we propose a hardware-aware neural architecture search framework for automatic myocardial segmentation and quality control of contrast echocardiography. We explicitly incorporate hardware latency as a regularization term in the loss function during training. The proposed method searches for the best neural network architectures for the segmentation module and the quality prediction module under strict latency constraints.
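The core idea of treating latency as a regularization term can be illustrated with a minimal sketch; the candidate architectures, losses, latencies, and the hinge-style penalty are all illustrative assumptions, not the paper's search space or exact loss:

```python
# Hypothetical candidate architectures: (name, task_loss, latency_ms).
candidates = [
    ("small",  0.30,  8.0),
    ("medium", 0.22, 15.0),
    ("large",  0.18, 40.0),
]

def regularized_loss(task_loss, latency_ms, lam, budget_ms):
    # Latency enters the training objective as a regularizer; here only
    # the amount exceeding the hardware budget is penalized.
    return task_loss + lam * max(0.0, latency_ms - budget_ms)

best = min(candidates,
           key=lambda c: regularized_loss(c[1], c[2], lam=0.01, budget_ms=20.0))
```

Under this toy penalty, the search prefers the architecture with the best accuracy among those that respect the latency budget.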
- Published
- 2021
- Full Text
- View/download PDF
4. Enabling On-Device Model Personalization for Ventricular Arrhythmias Detection by Generative Adversarial Networks
- Author
-
Feng Hong, Zhenge Jia, Yiyu Shi, Lichuan Ping, and Jingtong Hu
- Subjects
Edge device, Defibrillation, Computer science, Process (computing), Inference, Implantable cardioverter-defibrillator, Machine learning, Convolutional neural network, Personalization, Generative model, Artificial intelligence
- Abstract
An Implantable Cardioverter Defibrillator (ICD) is an ultra-low-power device that monitors heart rate and delivers timely defibrillation upon detecting ventricular arrhythmias (VAs). The parameters of the VA detection mechanism on each recipient's ICD must be fine-tuned to obtain accurate detection, owing to the individual's unique rhythm features. However, this process relies heavily on clinical expertise and thus must be conducted manually and routinely by cardiologists diagnosing massive amounts of rhythm data. In this paper, we introduce a novel self-supervised on-device personalization of convolutional neural networks (CNNs) for VA detection. We first propose a computing framework consisting of an edge device and an ICD to enable efficient on-device CNN personalization and real-time inference, respectively. Then, we propose a generative model that learns to synthesize patient-specific intracardiac EGM signals, which can be used as personalized training data to improve patient-specific VA detection performance on ICDs. Evaluations on three detection models show that the self-supervised on-device personalization significantly improves VA detection performance in a patient-specific setting.
- Published
- 2021
- Full Text
- View/download PDF
5. Exploration of Quantum Neural Architecture by Mixing Quantum Neuron Designs: (Invited Paper)
- Author
-
Zhepeng Wang, Zhiding Liang, Shanglin Zhou, Caiwen Ding, Yiyu Shi, and Weiwen Jiang
- Published
- 2021
- Full Text
- View/download PDF
6. Contrastive Learning with Temporal Correlated Medical Images: A Case Study using Lung Segmentation in Chest X-Rays (Invited Paper)
- Author
-
Dewen Zeng, John N. Kheir, Peng Zeng, and Yiyu Shi
- Published
- 2021
- Full Text
- View/download PDF
7. Can Noise on Qubits Be Learned in Quantum Neural Network? A Case Study on QuantumFlow (Invited Paper)
- Author
-
Zhiding Liang, Zhepeng Wang, Junhuan Yang, Lei Yang, Yiyu Shi, and Weiwen Jiang
- Published
- 2021
- Full Text
- View/download PDF
8. Federated Contrastive Learning for Dermatological Disease Diagnosis via On-device Learning (Invited Paper)
- Author
-
Yawen Wu, Dewen Zeng, Zhepeng Wang, Yi Sheng, Lei Yang, Alaina J. James, Yiyu Shi, and Jingtong Hu
- Published
- 2021
- Full Text
- View/download PDF
9. A mining pool solution for novel proof-of-neural-architecture consensus
- Author
-
Boyang Li, Yiyu Shi, Weiwen Jiang, Qing Lu, and Taeho Jung
- Subjects
Service (systems architecture), Cryptocurrency, Computer science, Deep learning, Hash function, Workload, Space (commercial competition), Machine learning, Task (computing), Artificial intelligence, Architecture
- Abstract
In many recent blockchain consensuses, the deep learning training procedure becomes the task by which miners prove their workload, so the computation power of miners is not spent purely on hash puzzles; the hardware and energy support the blockchain service and deep learning training at the same time. Miners are incentivized to earn tokens, and individual miners will find mining pools more competitive. To the best of our knowledge, we are the first to demonstrate a mining pool solution for novel consensuses based on deep learning. This work adopts the existing Proof-of-Deep-Learning (PoDL) as the consensus and Neural Architecture Search (NAS) as the workload. The mining pool manager partitions the full search space into subspaces, and all miners contribute to the NAS task within their assigned subspaces: strong miners are assigned to exploration and weak miners to exploitation. Section IV shows that this mining pool is more competitive than an individual miner in conducting NAS as the workload.
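The manager's partition-and-assign step can be sketched as below; the miner names, power threshold, and subspace labels are hypothetical, and the round-robin assignment is just one plausible policy:

```python
def assign_subspaces(miners, unexplored, promising, power_threshold):
    """Match NAS search subspaces to miners by measured compute power:
    strong miners explore fresh subspaces, weak miners refine (exploit)
    already promising ones. Illustrative pool-manager policy."""
    strong = sorted(m for m, p in miners.items() if p >= power_threshold)
    weak = sorted(m for m, p in miners.items() if p < power_threshold)
    explore = {m: unexplored[i % len(unexplored)] for i, m in enumerate(strong)}
    exploit = {m: promising[i % len(promising)] for i, m in enumerate(weak)}
    return explore, exploit

miners = {"m1": 120, "m2": 30, "m3": 95, "m4": 10}
explore, exploit = assign_subspaces(miners, ["S3", "S4"], ["S1"],
                                    power_threshold=50)
```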
- Published
- 2021
- Full Text
- View/download PDF
10. CT Image Denoising With Encoder-Decoder Based Graph Convolutional Networks
- Author
-
Yu-Jen Chen, Tsung-Yi Ho, Xiaowei Xu, Jian Zhuang, Meiping Huang, Cheng-Yen Tsai, Haiyun Yuan, and Yiyu Shi
- Subjects
Similarity (geometry), Computer science, Shot noise, Process (computing), Pattern recognition, Convolutional neural network, Feature (computer vision), Gaussian noise, Key (cryptography), Graph (abstract data type), Artificial intelligence
- Abstract
Denoising of low-dose CT images is a key problem in modern medical practice. Recently, several works adopted Convolutional Neural Networks (CNNs) to precisely capture the similarity between local features, resulting in significant improvements. However, we find that the main drawback of existing works is the lack of non-local feature processing. Graph convolutional networks (GCNs), on the other hand, have been widely used to process non-Euclidean data by considering both local and non-local features. Motivated by this property of GCNs, we propose an encoder-decoder-based graph convolutional network (ED-GCN) for CT image denoising. In particular, we combine local convolutions and graph convolutions to process both local and non-local features. We collected seven CT volumes with Gaussian and Poisson noise for the experiments. Experimental results show that the proposed method significantly outperforms existing CNN-based approaches.
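A single graph-convolution step, the building block that lets non-local image patches exchange information, can be sketched as follows. The normalization scheme shown (self-loops plus symmetric degree normalization) is the common GCN formulation, used here as an assumption since the abstract does not specify the exact layer:

```python
import numpy as np

def graph_conv(H, A, W):
    # Symmetrically normalized propagation D^{-1/2}(A+I)D^{-1/2} H W:
    # each node (image patch) aggregates its non-local neighbors.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return A_norm @ H @ W

# Ring graph over 4 patches (3-regular once self-loops are added).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = np.ones((4, 2))                 # identical patch features
out = graph_conv(H, A, np.eye(2))   # constant features stay constant
```

On a regular graph, constant features are preserved by the normalized propagation, a quick sanity check that the averaging is properly weighted.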
- Published
- 2021
- Full Text
- View/download PDF
11. Do Noises Bother Human and Neural Networks In the Same Way? A Medical Image Analysis Perspective
- Author
-
Qianjun Jia, Yu-Jen Chen, Tsung-Yi Ho, Jian Zhuang, Shao-Cheng Wen, Wujie Wen, Zihao Liu, Meiping Huang, Yiyu Shi, and Xiaowei Xu
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer science, Computer Vision and Pattern Recognition (cs.CV), Noise reduction, Computer Science - Computer Vision and Pattern Recognition, Machine Learning (cs.LG), FOS: Electrical engineering, electronic engineering, information engineering, Discrete cosine transform, Segmentation, Artificial neural network, Deep learning, Image and Video Processing (eess.IV), Perspective (graphical), Pattern recognition, Image segmentation, Electrical Engineering and Systems Science - Image and Video Processing, Noise (video), Artificial intelligence
- Abstract
Deep learning has already demonstrated its power in medical imaging, including denoising, classification, and segmentation. All these applications aim to analyze medical images automatically, bringing radiologists more information during clinical assessment and improving accuracy. Recently, many medical denoising methods have shown significant artifact reduction and noise removal, both quantitatively and qualitatively. However, these existing methods are developed around human vision, i.e., they are designed to minimize the noise that human eyes can perceive. In this paper, we introduce an application-guided denoising framework, which focuses on denoising for the downstream neural networks. In our experiments, we apply the proposed framework to different datasets, models, and use cases. Experimental results show that our framework achieves better results than human-vision-oriented denoising networks.
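The difference between human-vision denoising and application-guided denoising can be made concrete with a toy loss that compares images in a downstream model's output space rather than in pixel space. The fixed linear "model" and the candidate images are illustrative assumptions:

```python
import numpy as np

def downstream(x, w):
    # Stand-in for the downstream network that will consume the images.
    return x @ w

def app_guided_loss(denoised, clean, w):
    # Distance measured in the downstream model's output space,
    # not in pixel space.
    return float(np.mean((downstream(denoised, w) - downstream(clean, w)) ** 2))

w = np.array([[1.0], [0.0]])              # this model ignores feature 2
clean = np.array([[1.0, 1.0]])
cand_a = clean + np.array([[0.0, 0.5]])   # residual the model cannot see
cand_b = clean + np.array([[0.5, 0.0]])   # residual the model does see

loss_a = app_guided_loss(cand_a, clean, w)
loss_b = app_guided_loss(cand_b, clean, w)
pixel_a = float(np.mean((cand_a - clean) ** 2))
pixel_b = float(np.mean((cand_b - clean) ** 2))
```

Both candidates have identical pixel-space error, yet the application-guided loss correctly prefers the one whose residual the downstream model ignores.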
- Published
- 2020
- Full Text
- View/download PDF
12. Intermittent Inference with Nonuniformly Compressed Multi-Exit Neural Network for Energy Harvesting Powered Devices
- Author
-
Zhepeng Wang, Yiyu Shi, Zhenge Jia, Jingtong Hu, and Yawen Wu
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Artificial neural network, Computer science, Real-time computing, Inference, Machine Learning (stat.ML), Machine Learning (cs.LG), Power (physics), Microcontroller, Statistics - Machine Learning, Available energy, Energy harvesting, Energy (signal processing)
- Abstract
This work aims to enable persistent, event-driven sensing and decision capabilities for energy-harvesting (EH)-powered devices by deploying lightweight DNNs onto them. However, harvested energy is usually weak and unpredictable, and even lightweight DNNs take multiple power cycles to finish one inference. To eliminate the indefinitely long wait to accumulate energy for one inference and to optimize accuracy, we developed a power-trace-aware and exit-guided network compression algorithm that compresses and deploys multi-exit neural networks on EH-powered microcontrollers (MCUs) and selects exits during execution according to the available energy. Experimental results show superior accuracy and latency compared with state-of-the-art techniques.
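The exit-selection step can be sketched as a simple greedy rule; the cumulative per-exit energy costs are hypothetical numbers, and this is only one plausible reading of "select exits according to available energy":

```python
def select_exit(cumulative_costs, available_energy):
    # Pick the deepest (typically most accurate) exit whose cumulative
    # energy cost still fits in the harvested budget; None means the
    # device must wait for more energy before running even the first exit.
    best = None
    for i, cost in enumerate(cumulative_costs):
        if cost <= available_energy:
            best = i
    return best

costs = [1.0, 2.5, 5.0, 9.0]   # hypothetical energy per exit, in mJ
```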
- Published
- 2020
- Full Text
- View/download PDF
13. Zero-Shot Medical Image Artifact Reduction
- Author
-
Xiaowei Xu, Yu-Jen Chen, Yen-Jung Chang, Meiping Huang, Qianjun Jia, Yiyu Shi, Jian Zhuang, Shao-Cheng Wen, and Tsung-Yi Ho
- Subjects
Computer science, Computed tomography, Magnetic resonance imaging, Image Artifact, Computer vision, Artificial intelligence
- Abstract
Medical images may contain various types of artifacts with different patterns and mixtures, which depend on many factors such as scan settings, machine condition, patients' characteristics, and the surrounding environment. However, existing deep-learning-based artifact reduction methods are restricted by their training sets with specific predetermined artifact types and patterns, which limits their clinical adoption. In this paper, we introduce a "Zero-Shot" medical image Artifact Reduction (ZSAR) framework, which leverages the power of deep learning without using general pre-trained networks or any clean image reference. Specifically, we exploit the low internal visual entropy of an image and train a light-weight, image-specific artifact reduction network to reduce artifacts at test time. We use Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) as vehicles to show that ZSAR can reduce artifacts better than the state of the art, both qualitatively and quantitatively, in a shorter test time. To the best of our knowledge, this is the first deep learning framework that reduces artifacts in medical images without using an a priori training set.
- Published
- 2020
- Full Text
- View/download PDF
14. Multi-Cycle-Consistent Adversarial Networks for CT Image Denoising
- Author
-
Meiping Huang, Qianjun Jia, Bike Xie, Yukun Ding, Jian Zhuang, Yiyu Shi, Jinglan Liu, Jinjun Xiong, and Chunchen Liu
- Subjects
FOS: Computer and information sciences, Computer science, Computer Vision and Pattern Recognition (cs.CV), Noise reduction, Image and Video Processing (eess.IV), Computer Science - Computer Vision and Pattern Recognition, Process (computing), Electrical Engineering and Systems Science - Image and Video Processing, Translation (geometry), Domain (software engineering), Task (project management), Adversarial system, FOS: Electrical engineering, electronic engineering, information engineering, Noise (video), Image denoising, Algorithm
- Abstract
CT image denoising can be treated as an image-to-image translation task where the goal is to learn the transform between a source domain $X$ (noisy images) and a target domain $Y$ (clean images). Recently, the cycle-consistent adversarial denoising network (CCADN) has achieved state-of-the-art results by enforcing a cycle-consistent loss without the need for paired training data. Our detailed analysis of CCADN raises a number of interesting questions. For example, if the noise is large, leading to a significant difference between domain $X$ and domain $Y$, can we bridge $X$ and $Y$ with an intermediate domain $Z$ such that both the denoising process between $X$ and $Z$ and that between $Z$ and $Y$ are easier to learn? As such intermediate domains lead to multiple cycles, how do we best enforce cycle-consistency? Driven by these questions, we propose a multi-cycle-consistent adversarial network (MCCAN) that builds intermediate domains and enforces both local and global cycle-consistency. The global cycle-consistency couples all generators together to model the whole denoising process, while the local cycle-consistency imposes effective supervision on the process between adjacent domains. Experiments show that both local and global cycle-consistency are important for the success of MCCAN, which outperforms the state of the art. (Accepted at ISBI 2020; 5 pages, 4 figures.)
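The local versus global cycle-consistency terms described above can be written down schematically; the toy generators below are arbitrary invertible functions standing in for the learned networks, chosen only to show where each consistency term is measured:

```python
import numpy as np

def cycle_losses(x, f_xz, f_zy, f_yz, f_zx):
    # Local cycles supervise adjacent domain pairs (X<->Z and Z<->Y);
    # the global cycle couples every generator along X -> Z -> Y -> Z -> X.
    local_xz = np.mean((f_zx(f_xz(x)) - x) ** 2)
    local_zy = np.mean((f_yz(f_zy(f_xz(x))) - f_xz(x)) ** 2)
    global_x = np.mean((f_zx(f_yz(f_zy(f_xz(x)))) - x) ** 2)
    return local_xz, local_zy, global_x

x = np.linspace(0.0, 1.0, 5)
# Toy generators that happen to be exact inverses of each other,
# so every cycle-consistency term vanishes.
losses = cycle_losses(x,
                      f_xz=lambda a: a + 1.0, f_zx=lambda a: a - 1.0,
                      f_zy=lambda a: 2.0 * a, f_yz=lambda a: a / 2.0)
```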
- Published
- 2020
- Full Text
- View/download PDF
15. Statistical Training for Neuromorphic Computing using Memristor-based Crossbars Considering Process Variations and Noise
- Author
-
Grace Li Zhang, Tsung-Yi Ho, Ulf Schlichtmann, Tianchen Wang, Yiyu Shi, Ying Zhu, and Bing Li
- Subjects
Artificial neural network, Computer science, Process (computing), Memristor, Noise, Neuromorphic engineering, Computer engineering
- Abstract
Memristor-based crossbars are an attractive platform to accelerate neuromorphic computing. However, process variations during manufacturing and noise in memristors cause significant accuracy loss if not addressed. In this paper, we propose to model process variations and noise as correlated random variables and incorporate them into the cost function during training. Consequently, the weights after this statistical training become more robust and, together with global variation compensation, provide a stable inference accuracy. Simulation results demonstrate that the mean value and the standard deviation of the inference accuracy can be improved significantly, by up to 54% and 31%, respectively, in a two-layer fully connected neural network.
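The statistical cost function can be approximated by Monte Carlo sampling of correlated weight perturbations; the covariance matrix and linear "network" below are illustrative assumptions, not the paper's variation model:

```python
import numpy as np

rng = np.random.default_rng(0)

def statistical_loss(w, x, t, cov, n_samples=64):
    # Expected loss under correlated weight perturbations that model
    # process variations and noise in the memristor crossbar: sample
    # dw ~ N(0, cov) and average the resulting task loss.
    chol = np.linalg.cholesky(cov)
    total = 0.0
    for _ in range(n_samples):
        dw = chol @ rng.standard_normal(w.shape)
        total += np.mean((x @ (w + dw) - t) ** 2)
    return total / n_samples

x = np.eye(2)
w = np.array([1.0, 2.0])
t = x @ w                                      # nominal weights fit exactly
cov = np.array([[0.04, 0.02], [0.02, 0.04]])   # correlated variations
nominal = float(np.mean((x @ w - t) ** 2))     # 0 by construction
robust = statistical_loss(w, x, t, cov)        # > 0: variation-induced loss
```

Minimizing this expectation instead of the nominal loss is what pushes the trained weights toward regions that tolerate the hardware's randomness.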
- Published
- 2020
- Full Text
- View/download PDF
16. Co-Exploring Neural Architecture and Network-on-Chip Design for Real-Time Artificial Intelligence
- Author
-
Lei Yang, Weiwen Jiang, Weichen Liu, Yiyu Shi, Edwin H.-M. Sha, and Jingtong Hu
- Subjects
Network-on-chip design, Computer science, Space (commercial competition), Accuracy improvement, Bottleneck, Artificial intelligence, Architecture, Design space, Throughput (business)
- Abstract
Hardware-aware Neural Architecture Search (NAS), which automatically finds an architecture that works best on a given hardware design, has prevailed in response to the ever-growing demand for real-time Artificial Intelligence (AI). However, in many situations the underlying hardware is not pre-determined. We argue that simply assuming an arbitrary yet fixed hardware design leads to inferior solutions, and that it is best to co-explore the neural architecture space and the hardware design space for the best pair of neural architecture and hardware design. To demonstrate this, we employ Network-on-Chip (NoC) as the infrastructure and propose a novel framework, namely NANDS, to co-explore the NAS space and the NoC Design Search (NDS) space with the objective of maximizing accuracy and throughput. Since the two metrics are tightly coupled, we develop a multi-phase manager to guide NANDS to gradually converge to solutions with the best accuracy-throughput tradeoff. On top of it, we propose techniques to detect and alleviate timing performance bottlenecks, which allows better and more efficient exploration of the NDS space. Experimental results on common datasets, CIFAR-10, CIFAR-100, and STL-10, show that compared with state-of-the-art hardware-aware NAS, NANDS can achieve 42.99% higher throughput along with a 1.58% accuracy improvement. There are cases where hardware-aware NAS cannot find any feasible solution while NANDS can.
- Published
- 2020
- Full Text
- View/download PDF
17. Accurate Congenital Heart Disease Model Generation for 3D Printing
- Author
-
Haiyun Yuan, Tianchen Wang, Qianjun Jia, Dewen Zeng, Jian Zhuang, Yiyu Shi, Xiaowei Xu, and Meiping Huang
- Subjects
FOS: Computer and information sciences, Heart disease, Blood pool, Computer science, Computer Vision and Pattern Recognition (cs.CV), Deep learning, Image and Video Processing (eess.IV), Computer Science - Computer Vision and Pattern Recognition, 3D printing, Pattern recognition, Image segmentation, Electrical Engineering and Systems Science - Image and Video Processing, Great vessels, FOS: Electrical engineering, electronic engineering, information engineering, Leverage (statistics), Segmentation, Artificial intelligence
- Abstract
3D printing has been widely adopted for clinical decision making and interventional planning in congenital heart disease (CHD), where whole heart and great vessel segmentation is the most significant but time-consuming step in model generation for 3D printing. While various automatic whole heart and great vessel segmentation frameworks have been developed in the literature, they are ineffective when applied to medical images in CHD, which show significant variations in heart structure and great vessel connections. To address this challenge, we leverage the power of deep learning in processing regular structures and that of graph algorithms in dealing with large variations, and propose a framework that combines both for whole heart and great vessel segmentation in CHD. In particular, we first use deep learning to segment the four chambers and myocardium, followed by the blood pool, where variations are usually small. We then extract the connection information and apply graph matching to determine the categories of all the vessels. Experimental results using 68 3D CT images covering 14 types of CHD show that our method can increase the Dice score by 11.9% on average compared with the state-of-the-art whole heart and great vessel segmentation method for normal anatomy. The segmentation results were also printed out using 3D printers for validation. (Accepted by the IEEE International Workshop on Signal Processing Systems; 6 figures, 2 tables.)
- Published
- 2019
- Full Text
- View/download PDF
18. Power Delivery Resonant Virus: Concept and Applications
- Author
-
Cheng Zhuo, Tianhao Shen, Yiyu Shi, and Di Gao
- Subjects
Computer science, Workload, Chip, Capacitance, Power (physics), Inductance, Noise, Software, Electronic engineering, Degradation (telecommunications)
- Abstract
Various hardware attacks have recently emerged that can fail chips in critical civil and military infrastructures. However, most of them jeopardize circuit functionality through additional hardware, against which several countermeasures have been developed. In this paper, we present a very interesting yet powerful virus that can cause chip failure. Instead of directly injecting hardware sub-circuits, which requires layout modification or split manufacturing, we use resonant noise in the power delivery system as the weapon. We show that, with simple but particular manipulations at the software layer, repetitive excitations can be created. As their period approaches the resonance of the power delivery system, set by on-chip capacitance and package inductance, significant voltage overshoot and undershoot can occur, preventing the regular operation of phase-locked loops and other sensitive components. In short, the virus can hide deep within software programs but is easy to activate and imposes severe impacts. Experimental results show that the proposed resonant virus may produce noise of up to 33-53% of the nominal supply level, doubling the noise generated by the PARSEC3 workload. Moreover, the virus brings 8-19% more performance degradation than the regular workload.
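The resonance mechanism can be illustrated with a toy second-order power-delivery model: package inductance feeding on-chip capacitance, driven by a square-wave load current that the software toggles. All component values are illustrative assumptions, not measurements from the paper:

```python
import numpy as np

# Crude power-delivery-network model: package inductance L, on-chip
# capacitance C, series resistance R (all values illustrative).
L, C, R, VDD = 1e-9, 1e-7, 1e-2, 1.0
F0 = 1.0 / (2 * np.pi * np.sqrt(L * C))   # PDN resonant frequency

def supply_swing(f_load, n_res_periods=40, steps_per_period=600):
    """Semi-implicit Euler simulation of the chip supply voltage when
    software toggles the current draw as a square wave at f_load."""
    dt = 1.0 / (F0 * steps_per_period)
    n = int(n_res_periods * steps_per_period)
    v, i_l = VDD, 0.0
    trace = np.empty(n)
    for k in range(n):
        t = k * dt
        i_load = 0.5 if np.sin(2 * np.pi * f_load * t) >= 0 else -0.5
        i_l += dt * (VDD - v - R * i_l) / L      # inductor current update
        v += dt * (i_l - i_load) / C             # on-chip node voltage
        trace[k] = v
    tail = trace[int(0.6 * n):]                  # skip start-up transient
    return tail.max() - tail.min()               # peak-to-peak swing

swing_res = supply_swing(F0)       # workload period hits the resonance
swing_off = supply_swing(F0 / 4)   # same workload amplitude, off resonance
```

Driving the same current amplitude at the resonant frequency produces a far larger supply swing than the off-resonance drive, which is exactly the amplification the virus exploits.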
- Published
- 2019
- Full Text
- View/download PDF
19. When Neural Architecture Search Meets Hardware Implementation: from Hardware Awareness to Co-Design
- Author
-
Yiyu Shi, Xinyi Zhang, Jingtong Hu, and Weiwen Jiang
- Subjects
Flexibility (engineering), Network architecture, Optimization problem, Artificial neural network, Computer science, System on a chip, Applications of artificial intelligence, Architecture, Field-programmable gate array, Computer hardware
- Abstract
Neural Architecture Search (NAS), which automatically identifies the best network architecture, is a promising technique to respond to the ever-growing demand for application-specific Artificial Intelligence (AI). On the other hand, a large body of research has been devoted to implementing and optimizing AI applications on hardware. Among the leading computation platforms, Field Programmable Gate Arrays (FPGAs) stand out due to their flexibility and versatility over ASICs and their efficiency over CPUs and GPUs. To identify the best pair of neural architecture and hardware implementation, a number of research works are emerging that involve awareness of hardware efficiency in the NAS process, which is called "hardware-aware NAS". Unlike conventional NAS with the single criterion of accuracy, hardware-aware NAS is a multi-objective optimization problem, which aims to identify the best network and hardware pair that maximizes accuracy with guaranteed hardware efficiency. Most recently, the co-design of neural architecture and hardware has been put forward to further push the Pareto frontier of the accuracy-efficiency trade-off. This paper reviews and discusses the current progress in neural architecture search and its implementation on hardware.
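The Pareto-frontier notion at the heart of this multi-objective view can be sketched in a few lines; the (accuracy, throughput) pairs are invented example points, not results from any paper surveyed:

```python
# Hypothetical candidate (accuracy %, throughput) pairs from a search.
designs = [(90.0, 0.2), (85.0, 0.5), (80.0, 0.4), (70.0, 0.6)]

def pareto_front(points):
    # Keep only points that no other point dominates in BOTH objectives;
    # these form the accuracy-efficiency trade-off frontier that
    # co-design tries to push outward.
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p
                       for q in points)]

front = pareto_front(designs)
```

Here (80.0, 0.4) is dominated by (85.0, 0.5), so it drops off the frontier even though it is not the worst point in either objective alone.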
- Published
- 2019
- Full Text
- View/download PDF
20. Exploiting Computation Power of Blockchain for Biomedical Image Segmentation
- Author
-
Boyang Li, Taeho Jung, Yiyu Shi, Xiaowei Xu, and Changhao Chenli
- Subjects
Artificial neural network, Computer science, Computation, Hash function, Image segmentation, Computer engineering, Digital signature, Overhead (computing), Segmentation, Artificial intelligence
- Abstract
Biomedical image segmentation based on deep neural networks (DNNs) is a promising approach that assists clinical diagnosis. This approach demands enormous computation power because these DNN models are complicated and the training data are usually huge. As blockchain technology based on Proof-of-Work (PoW) has become widely used, an immense amount of computation power is consumed to maintain the PoW consensus. In this paper, we propose a design that exploits the computation power of blockchain miners for biomedical image segmentation, letting miners perform image segmentation as Proof-of-Useful-Work (PoUW) instead of calculating useless hash values. This work distinguishes itself by addressing various limitations of related PoUW designs. As the overhead evaluation in Section 5 indicates, for U-Net and FCN, the average overhead of the digital signature is 1.25 seconds and 0.98 seconds, respectively, and the average network overhead is 3.77 seconds and 3.01 seconds, respectively. These quantitative results show that the overhead of the digital signature and the network is small and comparable to other existing PoUW designs.
- Published
- 2019
- Full Text
- View/download PDF
21. Energy-recycling Blockchain with Proof-of-Deep-Learning
- Author
-
Taeho Jung, Boyang Li, Yiyu Shi, and Changhao Chenli
- Subjects
FOS: Computer and information sciences, Cryptocurrency, Computer Science - Cryptography and Security, Blockchain, Computer science, Distributed computing, Deep learning, Computer Science - Distributed, Parallel, and Cluster Computing, Cash, Benchmark (computing), Distributed, Parallel, and Cluster Computing (cs.DC), Artificial intelligence, Energy recycling, Cryptography and Security (cs.CR), Block (data storage)
- Abstract
An enormous amount of energy is wasted in the Proof-of-Work (PoW) mechanisms adopted by popular blockchain applications (e.g., PoW-based cryptocurrencies), because miners must conduct a large amount of computation. A serious and growing concern is that this energy waste not only dilutes the value of the blockchain but also hinders its further application. In this paper, we propose a novel blockchain design that fully recycles the energy required for facilitating and maintaining it, re-investing that energy in deep learning computation. We realize this by proposing Proof-of-Deep-Learning (PoDL), such that a valid proof for a new block can be generated if and only if a proper deep learning model is produced. We present a proof-of-concept design of PoDL that is compatible with the majority of cryptocurrencies based on hash-based PoW mechanisms. Our benchmark and simulation results show that the proposed design is feasible for various popular cryptocurrencies such as Bitcoin, Bitcoin Cash, and Litecoin. (5 pages.)
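The "valid proof if and only if a proper model is produced" condition can be sketched as a block-validation predicate; the threshold, byte layout, and accuracy callback are hypothetical simplifications of the paper's protocol:

```python
import hashlib

THRESHOLD = 0.90   # accuracy a block's model must reach (illustrative)

def validate_block(prev_hash: bytes, model_bytes: bytes, accuracy_fn) -> bool:
    # PoDL-style check: instead of comparing a hash against a difficulty
    # target, the block is valid only if the bundled deep learning model
    # really reaches the required test accuracy.
    _block_id = hashlib.sha256(prev_hash + model_bytes).hexdigest()
    return accuracy_fn(model_bytes) >= THRESHOLD

good = validate_block(b"\x00" * 32, b"model-v1", lambda m: 0.93)
bad = validate_block(b"\x00" * 32, b"model-v1", lambda m: 0.70)
```

In a real deployment `accuracy_fn` would re-evaluate the submitted model on a held-out test set, which is what makes the expended energy useful.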
- Published
- 2019
- Full Text
- View/download PDF
22. Quantization of Fully Convolutional Networks for Accurate Biomedical Image Segmentation
- Author
-
Yu Hu, Sharon Hu, Danny Z. Chen, Yiyu Shi, Qing Lu, Xiaowei Xu, and Lin Yang
- Subjects
FOS: Computer and information sciences, Artificial neural network, Computer science, Computer Vision and Pattern Recognition (cs.CV), Quantization (signal processing), Computer Science - Computer Vision and Pattern Recognition, Process (computing), Pattern recognition, Image segmentation, Overfitting, Reduction (complexity), Segmentation, Artificial intelligence, Quantization (image processing)
- Abstract
With pervasive applications of medical imaging in health-care, biomedical image segmentation plays a central role in quantitative analysis, clinical diagnosis, and medical intervention. Since manual annotation suffers limited reproducibility, arduous efforts, and excessive time, automatic segmentation is desired to process increasingly larger scale histopathological data. Recently, deep neural networks (DNNs), particularly fully convolutional networks (FCNs), have been widely applied to biomedical image segmentation, attaining much improved performance. At the same time, quantization of DNNs has become an active research topic, which aims to represent weights with less memory (precision) to considerably reduce memory and computation requirements of DNNs while maintaining acceptable accuracy. In this paper, we apply quantization techniques to FCNs for accurate biomedical image segmentation. Unlike existing literature on quantization which primarily targets memory and computation complexity reduction, we apply quantization as a method to reduce overfitting in FCNs for better accuracy. Specifically, we focus on a state-of-the-art segmentation framework, suggestive annotation [22], which judiciously extracts representative annotation samples from the original training dataset, obtaining an effective small-sized balanced training dataset. We develop two new quantization processes for this framework: (1) suggestive annotation with quantization for highly representative training samples, and (2) network training with quantization for high accuracy. Extensive experiments on the MICCAI Gland dataset show that both quantization processes can improve the segmentation performance, and our proposed method exceeds the current state-of-the-art performance by up to 1%. In addition, our method has a reduction of up to 6.4x on memory usage., Comment: 9 pages, 11 Figs, 1 Table, Accepted by CVPR
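As a toy illustration of the weight quantization idea this abstract describes, the sketch below applies symmetric uniform quantization to a list of weights. This is a common generic scheme, not necessarily the exact quantization method used in the paper; the function name and interface are hypothetical.

```python
def quantize(values, bits):
    """Symmetric uniform quantization of weights to 2^(bits-1) - 1 levels
    per sign (a generic illustration; the paper's scheme may differ)."""
    scale = max(abs(v) for v in values)
    if scale == 0:
        return list(values)
    levels = 2 ** (bits - 1) - 1
    # Map each weight to the nearest of the discrete levels, then back.
    return [round(v / scale * levels) / levels * scale for v in values]
```

Lower `bits` means fewer representable weight values, which is the regularizing (overfitting-reducing) effect the paper exploits.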
- Published
- 2018
- Full Text
- View/download PDF
23. Resource constrained cellular neural networks for real-time obstacle detection using FPGAs
- Author
-
Yiyu Shi, Qing Lu, Tianchen Wang, and Xiaowei Xu
- Subjects
Speedup ,Computer science ,Resource constrained ,Advanced driver assistance systems ,02 engineering and technology ,020202 computer hardware & architecture ,Computer engineering ,Cellular neural network ,Obstacle ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Quantization (image processing) ,Field-programmable gate array ,Implementation - Abstract
Due to the fast-growing industry of smart cars and autonomous driving, advanced driver assistance systems (ADAS) and their applications have attracted a lot of attention. As a crucial part of ADAS, obstacle detection has been a challenge due to real-time and resource-constrained requirements. Cellular neural networks (CeNNs) have been popular for obstacle detection, but suffer from high computation complexity. In this paper, we propose a compressed CeNN framework for real-time ADAS obstacle detection on embedded FPGAs. In particular, parameter quantization is adopted: the numbers in CeNN templates are quantized to powers of two, so that complex and expensive multiplications can be converted to simple and cheap shift operations, which only require a minimum number of registers and LEs. Experimental results on FPGAs show that our approach can significantly improve resource utilization, and as a direct consequence a speedup of up to 7.8x can be achieved with no performance loss compared with state-of-the-art implementations.
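The multiply-to-shift conversion described above can be sketched as follows. This is a minimal illustration of the general power-of-two quantization trick, not the paper's FPGA implementation; the function names are hypothetical.

```python
import math

def quantize_pow2(w):
    """Round a nonzero weight to the nearest signed power of two."""
    if w == 0:
        return 0
    sign = 1 if w > 0 else -1
    k = round(math.log2(abs(w)))
    return sign * (2 ** k)

def mul_by_shift(x, w_q):
    """Multiply integer x by a power-of-two weight using only shifts.
    Negative exponents use an arithmetic right shift (floor division)."""
    if w_q == 0:
        return 0
    sign = 1 if w_q > 0 else -1
    k = int(math.log2(abs(w_q)))
    return sign * (x << k) if k >= 0 else sign * (x >> -k)
```

On hardware, a constant shift costs essentially just wiring and a few registers, which is why this substitution frees up FPGA resources.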
- Published
- 2018
- Full Text
- View/download PDF
24. Combating Data Leakage Trojans in Sequential Circuits Through Randomized Encoding
- Author
-
Charles A. Kamhoua, Travis E. Schulze, Daryl G. Beetner, Laurent Njilla, Yiyu Shi, and Kevin Kwiat
- Subjects
Combinational logic ,Hardware security module ,Sequential logic ,business.industry ,Computer science ,Cryptography ,02 engineering and technology ,Chip ,Encryption ,020202 computer hardware & architecture ,Trojan ,Sequential analysis ,Embedded system ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,business - Abstract
Globalization of micro-chip fabrication has opened a new avenue of cyber-crime. It is now possible to insert hardware Trojans directly into the chip during the manufacturing process. These hardware Trojans are capable of destroying a chip, reducing performance or even capturing sensitive data. This paper presents a modification to a recently presented method of Trojan defense known as RECORD: Randomized Encoding of COmbinational Logic for Resistance to Data Leakage. RECORD aims to prevent data leakage through a randomized encoding and split manufacturing scheme. Its weakness, however, is that it is only applicable to combinational circuits. Sequential RECORD proposes a method to extend RECORD concepts to sequential designs. Experimental work with Sequential RECORD on a Data Encryption Standard circuit shows that it is effective, at the cost of a 3.75x area overhead, a 4.5x power overhead and only a 3% decrease in performance.
- Published
- 2017
- Full Text
- View/download PDF
25. Edge segmentation: Empowering mobile telemedicine with compressed cellular neural networks
- Author
-
Xiaowei Xu, Qing Lu, Tianchen Wang, Jinglan Liu, Cheng Zhuo, Xiaobo Sharon Hu, and Yiyu Shi
- Published
- 2017
- Full Text
- View/download PDF
26. Application of machine learning methods in post-silicon yield improvement
- Author
-
Baris Yigit, Grace Li Zhang, Yiyu Shi, Ulf Schlichtmann, and Bing Li
- Subjects
Yield (engineering) ,business.industry ,Computer science ,020208 electrical & electronic engineering ,Work (physics) ,Process (computing) ,Sampling (statistics) ,Scale (descriptive set theory) ,Sample (statistics) ,02 engineering and technology ,Machine learning ,computer.software_genre ,020202 computer hardware & architecture ,0202 electrical engineering, electronic engineering, information engineering ,Artificial intelligence ,State (computer science) ,business ,Integer programming ,computer - Abstract
In nanometer scale manufacturing, process variations have a significant impact on circuit performance. To handle them, post-silicon clock tuning buffers can be included in the circuit to balance timing budgets of neighboring critical paths. The state of the art is a sampling-based approach, in which an integer linear programming (ILP) problem must be solved for every sample. The runtime complexity of this approach is the number of samples multiplied by the time required for an ILP solution. Existing work tries to reduce the number of samples but still leaves the problem of long runtime unsolved. In this paper, we propose a machine learning approach to reduce the runtime by learning the positions and sizes of post-silicon tuning buffers. Experimental results demonstrate that we can predict buffer locations and sizes with very good accuracy (90% and higher) and achieve a significant yield improvement (up to 18.8%) with a significant speed-up (up to almost 20 times) compared to existing work.
- Published
- 2017
- Full Text
- View/download PDF
27. Generative adversarial network based scalable on-chip noise sensor placement
- Author
-
Jianlei Yang, Jinglan Liu, Yukun Ding, Ulf Schlichtmann, and Yiyu Shi
- Subjects
Computer science ,Heuristic (computer science) ,Power integrity ,02 engineering and technology ,010501 environmental sciences ,01 natural sciences ,020202 computer hardware & architecture ,Power (physics) ,Reduction (complexity) ,Noise margin ,Noise ,Computer engineering ,Scalability ,0202 electrical engineering, electronic engineering, information engineering ,System on a chip ,0105 earth and related environmental sciences - Abstract
The relentless efforts towards power reduction of integrated circuits have led to the prevalence of near-threshold computing paradigms. With the significantly reduced noise margin, therefore, it is no longer possible to fully assure power integrity at design time. As a result, designers seek to contain noise violations, commonly known as voltage emergencies, through various runtime techniques. All these techniques require accurate capture of voltage emergencies through noise sensors. Although existing approaches have explored the optimal placement of noise sensors, they all exploited the statistical modeling of noise, which requires a large number of samples in a high-dimensional space. For large scale power grids, these techniques may not work due to the very long simulation time required to get the samples. In this paper, we explore a novel approach based on generative adversarial network (GAN), which only requires a small number of samples to train. Experimental results show that compared with a simple heuristic which takes in the same number of samples, our approach can reduce the miss rate of voltage emergency detection by up to 65.3% on an industrial design.
- Published
- 2017
- Full Text
- View/download PDF
28. CN-SIM: A cycle-accurate full system power delivery noise simulator
- Author
-
Shih-Chieh Chang, Cheng Zhuo, Chung-Han Chou, Yiyu Shi, and Kassan Unda
- Subjects
010302 applied physics ,Engineering ,business.industry ,Suite ,0211 other engineering and technologies ,02 engineering and technology ,01 natural sciences ,Application layer ,Power (physics) ,Parsec ,Noise ,0103 physical sciences ,Electronic engineering ,Granularity ,Layer (object-oriented design) ,business ,Electrical impedance ,Simulation ,021106 design practice & management - Abstract
This paper introduces CN-SIM, a cycle-accurate, full-system power delivery (PD) noise simulator. CN-SIM provides cross-layer connectivity from the application layer, to the architecture layer, to the circuit layer, which is much needed to realistically estimate PD noise. This makes it easier for system architects to explore multilayer design optimizations. CN-SIM's granularity at its deepest is at the functional unit (FU) level. Experimental results from running PARSEC suite benchmarks for different system configurations and different industrial PD designs illustrate CN-SIM's capability to capture the cross-layer impact on PD noise.
- Published
- 2017
- Full Text
- View/download PDF
29. RECORD: Temporarily Randomized Encoding of COmbinational Logic for Resistance to Data Leakage from hardware Trojan
- Author
-
Shih-Chieh Chang, Travis E. Schulze, Kevin Kwiat, Yiyu Shi, and Charles A. Kamhoua
- Subjects
Combinational logic ,Engineering ,business.industry ,Advanced Encryption Standard ,020206 networking & telecommunications ,02 engineering and technology ,Chip ,020202 computer hardware & architecture ,Hardware Trojan ,Quilt packaging ,Logic gate ,Embedded system ,Dynamic demand ,0202 electrical engineering, electronic engineering, information engineering ,Side channel attack ,business - Abstract
Many design companies have gone fabless and rely on external fabrication facilities to produce chips due to the increasing cost of semiconductor manufacturing. However, not all of these facilities can be considered trustworthy; some may inject hardware Trojans and jeopardize the security of the system. One common objective of hardware Trojans is to establish a side channel for data leakage. While extensive literature exists on various defensive measures, almost all of them focus on preventing the establishment of side channels, and can be compromised if attackers gain access to the physical chip and can perform reverse engineering between multiple fabrication runs. In this paper, we propose RECORD: Temporarily Randomized Encoding of COmbinational Logic for Resistance to Data Leakage, a novel scheme of temporarily randomized encoding for combinational logic that, with the aid of Quilt Packaging, aims to prevent attackers from interpreting the data. Experimental results on a 45 nm 8-bit Advanced Encryption Standard (AES) Substitution Box (Sbox) show that RECORD can effectively hide information with 2.3× area overhead, 2.77× dynamic power increase and negligible delay overhead.
- Published
- 2016
- Full Text
- View/download PDF
30. On the measurement of power grid robustness under load uncertainties
- Author
-
Yiyu Shi, Jie Wu, and Ulf Schlichtmann
- Subjects
Engineering ,Normal conditions ,business.industry ,020209 energy ,Real-time computing ,Load balancing (electrical power) ,02 engineering and technology ,Grid ,Reliability engineering ,Electric power transmission ,Upgrade ,Smart grid ,Dynamic demand ,0202 electrical engineering, electronic engineering, information engineering ,business ,Pace - Abstract
Power grids are unarguably the backbone of national infrastructures, providing vital support to economic growth. On the other hand, economic growth and the integration of plug-in hybrid electric vehicles have led to relentless increases in power demand. Unfortunately, the upgrade of power grids cannot always keep pace with power demand growth, which poses potential threats to the reliable operation of power grids. When this occurs, the grids may still operate well under normal conditions, but when some loads change suddenly, they may fail much more easily. It is, therefore, imperative to establish a quantitative method to evaluate the robustness of power grids under load uncertainties, which can benefit grid planning, upgrade, and optimization. Unfortunately, despite the importance of this problem, it remains open in the literature. In this paper, we formally formulate the problem of power grid robustness and propose an efficient framework to measure it based on two sampling techniques: StopSign and Interval. The performance of the proposed framework has been evaluated on IEEE standard bus systems. To the best of the authors' knowledge, this is the first in-depth work on power grid robustness estimation under load uncertainties.
- Published
- 2016
- Full Text
- View/download PDF
31. PWM-controlled DC-DC converter designs in 3D ICs using through-silicon-via inductors
- Author
-
Umamaheswara Rao Tida, Cheng Zhou, Yiyu Shi, and Prateek N. Kankonkar
- Subjects
010302 applied physics ,Engineering ,Through-silicon via ,business.industry ,Buck converter ,Overhead (engineering) ,Electrical engineering ,02 engineering and technology ,Converters ,Inductor ,01 natural sciences ,020202 computer hardware & architecture ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Electronic engineering ,business ,Dc dc converter ,Pulse-width modulation ,Voltage - Abstract
There has been a tremendous research effort in recent years to move DC-DC converters on chip for enhanced performance. To reduce the large area overhead induced by conventional spiral inductors, existing literature has used through-silicon-via (TSV) inductors in basic DC-DC converters without feedback control. In this paper, we further study the possible application of TSV inductors in DC-DC converters with Pulse Width Modulation (PWM)-controlled feedback. The DC-DC converter with a TSV inductor is on par, in terms of performance, with other conventional designs in the literature. Experimental results show that by replacing conventional spiral inductors with TSV inductors, up to 4.3x inductor area reduction can be achieved with almost the same efficiency and output voltage for 45 nm buck converters.
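The control loop this abstract refers to can be sketched with the standard idealized buck converter relation (Vout = D x Vin for duty cycle D) and a toy proportional feedback step. This is a textbook illustration of PWM-controlled regulation, not the paper's circuit; function names and the gain are hypothetical.

```python
def buck_output(vin, duty):
    """Ideal (lossless, continuous-conduction) buck converter: Vout = D * Vin."""
    return duty * vin

def pwm_step(duty, v_ref, v_out, k=0.05):
    """One proportional feedback step nudging the duty cycle toward v_ref,
    clamped to the valid range [0, 1]."""
    return min(1.0, max(0.0, duty + k * (v_ref - v_out)))
```

Iterating `pwm_step` drives the output toward the reference voltage, which is the role of the PWM-controlled feedback regardless of whether the inductor is a spiral or a TSV inductor.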
- Published
- 2016
- Full Text
- View/download PDF
32. Novel applications of deep learning hidden features for adaptive testing
- Author
-
Jinjun Xiong, Bingjun Xiao, and Yiyu Shi
- Subjects
Engineering ,Artificial neural network ,business.industry ,Deep learning ,Online machine learning ,02 engineering and technology ,Integrated circuit ,Statistical process control ,Machine learning ,computer.software_genre ,020202 computer hardware & architecture ,law.invention ,Test (assessment) ,law ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Artificial intelligence ,Computerized adaptive testing ,business ,computer ,Dynamic testing - Abstract
Adaptive test of integrated circuits (ICs) promises to increase the quality and yield of products with reduced manufacturing test cost compared to traditional static test flows. The two most widely used techniques are Statistical Process Control (SPC) and Part Average Testing (PAT), whose capabilities to capture the complex correlation between test measurements and the underlying IC's physical and electrical properties are, however, limited. Based on recent progress in machine learning, this paper proposes a novel deep learning based method for adaptive test. Compared to most machine learning techniques, deep learning has the distinctive advantage of being able to capture the underlying key features automatically from data without manual intervention. In this paper, we start from a trained deep neural network (DNN) with a much higher accuracy than the conventional test flow for pass and fail prediction. We further develop two novel applications by leveraging the features learned by the DNN: one to enable partial testing, i.e., making decisions on pass and fail without finishing the entire test flow, and the other to enable dynamic test ordering, i.e., changing the sequence of tests adaptively. Experimental results show significant improvement in the accuracy and effectiveness of our proposed method.
- Published
- 2016
- Full Text
- View/download PDF
33. Multi-threading based parallel dynamic simulator for transient behavior analysis of power systems
- Author
-
Peter Feldmann, Jinjun Xiong, Jie Wu, and Yiyu Shi
- Subjects
Dynamic simulation ,Electric power system ,Smart grid ,Software ,Computer science ,business.industry ,Embedded system ,Transient (computer programming) ,Power-flow study ,business ,Power system simulator for engineering ,Bottleneck ,Simulation - Abstract
Power grid reliability and security are influenced by the transient behaviors of various power system components. However, analyzing these transient behaviors requires estimating millions of time-domain state variables with high computational efficiency. Most existing simulators are not based on parallel software platforms, which have been proven to be powerful in many other applications. They are thus incapable of simulating the transient behaviors of large-scale power systems efficiently. The lack of a powerful dynamic simulation tool has become a major bottleneck towards smarter power grids. Building on a novel, fully parallel software topology of power systems, we propose a multi-threading based parallel simulator for transient behavior analysis in this paper. The accuracy and efficiency of the proposed simulator have been evaluated on IEEE and Polish standard bus systems, against the state-of-the-art open source software package MatDyn. The results show that the proposed simulator attains up to 80.49x speedup compared with MatDyn on 12-thread CPUs.
- Published
- 2015
- Full Text
- View/download PDF
34. 1-bit compressed sensing based framework for built-in resonance frequency prediction using on-chip noise sensors
- Author
-
Tao Wang, Jinglan Liu, Cheng Zhuo, and Yiyu Shi
- Published
- 2015
- Full Text
- View/download PDF
35. Effective CAD research in the sea of papers
- Author
-
Jinglan Liu, Da-Cheng Juan, and Yiyu Shi
- Published
- 2015
- Full Text
- View/download PDF
36. A novel entropy production based full-chip TSV fatigue analysis
- Author
-
Tianchen Wang, Sandeep Kumar Samal, Sung Kyu Lim, and Yiyu Shi
- Published
- 2015
- Full Text
- View/download PDF
37. Optimal selected phasor measurement units for identifying multiple line outages in smart grid
- Author
-
Jie Wu, Yiyu Shi, Jinjun Xiong, and Prasenjit Shil
- Subjects
Units of measurement ,Engineering ,Smart grid ,business.industry ,Real-time computing ,Global Positioning System ,Phasor ,Electronic engineering ,Benchmark (computing) ,Transmission system ,business ,Greedy algorithm ,Clock synchronization - Abstract
Thanks to the advent of clock synchronization via the global positioning system (GPS), phasor measurement units (PMUs), which collect both magnitude and phase angle with high precision, have become instrumental for wide-area monitoring of smart grids. Owing to high PMU installation costs, developing an optimal PMU placement strategy is important for monitoring transmission system status across the wide area. To characterize the performance of line outage identification, this paper first proposes a statistical model describing the average identification capability for multiple line outages. Using this model, we develop a globally optimal PMU placement strategy that maximizes the average identification capability under a given PMU budget. Furthermore, a greedy heuristic strategy is developed to bypass the combinatorial search and achieve a sub-optimal solution at reduced complexity. The proposed methods are used to benchmark the optimal PMU placement solutions for the IEEE 14- and 57-bus systems. The experimental study demonstrates that the proposed techniques successfully select optimal PMU locations that maximize the average identification capability. The results show that the proposed PMU placement strategies improve the location identification performance for multiple line outages by about 10% compared to a random PMU placement method.
- Published
- 2015
- Full Text
- View/download PDF
38. Obstacle-avoiding wind turbine placement for power-loss and wake-effect optimization
- Author
-
Sudip Roy, Yiyu Shi, Tsung-Yi Ho, and Yu-Wei Wu
- Subjects
Power optimizer ,Power loss ,Engineering ,Wind power ,Electricity generation ,business.industry ,Obstacle ,Wake ,business ,Turbine ,Simulation ,Marine engineering ,Renewable energy - Abstract
As finite energy resources are being consumed at a faster rate than they can be replaced, renewable energy resources have drawn extensive attention. Wind power development is one such example, which is growing significantly throughout the world. The main difficulty in wind power development is that wind turbines interfere with each other. The turbulence they produce, known as the wake effect, directly reduces power generation. In addition, wirelength among wind turbines is not merely an economic factor; it also determines power loss in the wind farm. Moreover, in reality, unavoidable obstacles exist in the wind farm, e.g., private land, lakes, and so on. Nevertheless, to the best of our knowledge, none of the existing works consider the wake effect, wirelength, and obstacle avoidance at the same time in the wind turbine placement problem. In this paper, we propose an analytical method to obtain an obstacle-avoiding placement of wind turbines that optimizes both power loss and wake effect. Simulation results show that the wind power produced by our tool is similar to that of the industrial tool AWS OpenWind. Besides, our algorithm can reduce wirelength and avoid obstacles successfully while finding the locations of wind turbines at the same time.
- Published
- 2015
- Full Text
- View/download PDF
39. Opportunistic through-silicon-via inductor utilization in LC resonant clocks: Concept and algorithms
- Author
-
Umamaheswara Rao Tida, Varun Mittapalli, Cheng Zhuo, and Yiyu Shi
- Published
- 2014
- Full Text
- View/download PDF
40. A novel grid load management technique using electric water heaters and Q-learning
- Author
-
Don C. Wunsch, Yiyu Shi, Jinjun Xiong, and Khalid Al-jabery
- Subjects
Load management ,Mathematical optimization ,Engineering ,Smart grid ,business.industry ,Q factor ,Q-learning ,Load balancing (electrical power) ,Control engineering ,Customer satisfaction ,Grid ,business ,Fuzzy logic - Abstract
This paper describes a novel technique for demand-side management (DSM) that optimizes the power consumed by Domestic Electric Water Heaters (DEWHs) while maintaining customer satisfaction. The system has 18 states based on three factors: instantaneous grid load, water consumption, and the temperature of the water supplied. The current state of the system is defined based on its fuzzy membership for each factor. The resulting model represents a semi-Markov decision process (SMDP) with two possible actions, “On” and “Off.” Rewards are assigned for each action-state pair proportionally to the fuzzy membership of the system in the new state. A simulation study was conducted to compare the proposed method with three previous approaches. The proposed method demonstrated better performance in reducing the overall grid power demand and flattening its peaks. Furthermore, it provides a higher rate of customer satisfaction than uncontrolled operation.
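The learning step behind such a controller can be sketched with the standard tabular Q-learning update. The sketch below uses a plain discrete state set and ignores the paper's fuzzy memberships and semi-Markov sojourn times; the function name and parameters are hypothetical.

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Standard Q-learning update:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).
    Q is a dict mapping state -> {action: value}."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]
```

In the paper's setting, the actions would be “On”/“Off” for the water heater, and the reward would reflect grid load and customer comfort.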
- Published
- 2014
- Full Text
- View/download PDF
41. Variation aware optimal threshold voltage computation for on-chip noise sensors
- Author
-
Tao Wang, Chun Zhang, Jinjun Xiong, Pei-Wen Luo, Liang-Chia Cheng, and Yiyu Shi
- Published
- 2014
- Full Text
- View/download PDF
42. Fast and accurate emissivity and absolute temperature maps measurement for integrated circuits
- Author
-
Tzu-Yi Liao, Tianchen Wang, Hsueh-Ling Yu, Yih-Lang Li, Yiyu Shi, and Shu-Fei Tsai
- Subjects
Measure (data warehouse) ,021103 operations research ,Materials science ,Pixel ,Noise measurement ,Infrared ,0211 other engineering and technologies ,02 engineering and technology ,Temperature measurement ,020202 computer hardware & architecture ,Wavelength ,Hardware and Architecture ,0202 electrical engineering, electronic engineering, information engineering ,Emissivity ,Calibration ,Electrical and Electronic Engineering ,Software ,Remote sensing - Abstract
The comparison of temperatures between measurement and simulation (i.e., temperature correlation) is commonly needed in many thermal studies. However, existing methods to measure temperature maps are either inaccurate or inconvenient due to the various assumptions or measurement conditions they require. How to accurately and flexibly measure temperature maps remains a missing piece in the literature. Toward this, we propose a practical and feasible method for emissivity map measurement. Two reference plates are utilized to obtain an emissivity map, from which the real emissivity value of each pixel of the infrared thermographer is obtained. According to the experimental results herein, the deviation of the emissivity measured using this method is on the order of 0.01, consistent with the minimum resolution of all currently available infrared thermographic instruments. With the emissivity map, a highly accurate temperature map is then obtained. The method can be flexibly applied to various test samples, whether or not the emissivity of the test sample changes with the wavelength. Experimental results on real ICs indicate that, compared with the commonly used approaches of an infrared thermographer with a uniform emissivity setting or black coating, our method obtains significantly better temperature correlation.
- Published
- 2014
- Full Text
- View/download PDF
43. Real time anomaly detection in wide area monitoring of smart grids
- Author
-
Jie Wu, Jinjun Xiong, Prasenjit Shil, and Yiyu Shi
- Published
- 2014
- Full Text
- View/download PDF
44. Random walk based capacitance extraction for 3D ICs with cylindrical inter-tier-vias
- Author
-
Wenjian Yu, Chao Zhang, Qing Wang, and Yiyu Shi
- Published
- 2014
- Full Text
- View/download PDF
45. Optimal PMU placement for identification of multiple power line outages in smart grids
- Author
-
Yiyu Shi, Prasenjit Shil, Jinjun Xiong, and Jie Wu
- Subjects
Engineering ,Identification (information) ,Smart grid ,business.industry ,Electronic engineering ,Control engineering ,Line (text file) ,business ,Power (physics) - Published
- 2014
- Full Text
- View/download PDF
46. 'Green' On-chip Inductors in Three-Dimensional Integrated Circuits
- Author
-
Varun Mittapalli, Cheng Zhuo, Umamaheswara Rao Tida, and Yiyu Shi
- Subjects
Limiting factor ,Engineering ,Buck converter ,business.industry ,Overhead (engineering) ,Electrical engineering ,Integrated circuit ,Converters ,Inductor ,law.invention ,law ,Electronic engineering ,Spiral inductor ,business ,Electronic circuit - Abstract
Through-silicon-vias (TSVs) are the enabling technique for three-dimensional integrated circuits (3D ICs). However, their large area significantly reduces the benefits that can be obtained by 3D ICs. On the other hand, a major limiting factor for the implementation of many on-chip circuits such as DC-DC converters and resonant clocking is the large area overhead induced by spiral inductors. Several works have been proposed in the literature to make inductors out of idle TSVs. In this paper, we will demonstrate the effectiveness of such TSV inductors in addressing both challenges. Experimental results show that by replacing conventional spiral inductors with TSV inductors, the inductor area can be reduced by up to 4.3x and 7.7x for a single-phase buck converter design and an LC resonant clocking design respectively, under the same performance constraints.
- Published
- 2014
- Full Text
- View/download PDF
47. Ambiguity group based location recognition for multiple power line outages in smart grids
- Author
-
Yiyu Shi, Jinjun Xiong, and Jie Wu
- Subjects
Cost reduction ,Engineering ,Units of measurement ,Smart grid ,Computational complexity theory ,business.industry ,Distributed computing ,Line (geometry) ,Phasor ,Electronic engineering ,Algorithm design ,business ,Cascading failure - Abstract
The efficient location recognition of multiple outage lines is critical not only to cascading failure elimination but also to repair cost reduction. Most existing methods, however, fail to handle location recognition well. This failure typically occurs because the methods cannot overcome three challenges: the very limited number of Phasor Measurement Units (PMUs), the high computational complexity, and the possibility of concurrent multiple outage lines. This paper presents an efficient algorithm inspired by ambiguity group theory to recognize the most likely outage line locations with limited PMUs. Using the IEEE 14-, 57-, and 118-bus systems, the experimental study demonstrates that the proposed technique can successfully recognize the most likely multiple outage line locations with reasonable computational complexity.
- Published
- 2014
- Full Text
- View/download PDF
48. Phasor Measurement Unit Placement for Identifying Power Line Outages in Wide-Area Transmission System Monitoring
- Author
-
Yiyu Shi and Hao Zhu
- Subjects
Electric power system ,Mathematical optimization ,Electric power transmission ,Linear programming ,Computer science ,Phasor ,Condition monitoring ,Transmission system ,Grid ,Greedy algorithm ,Phasor measurement unit - Abstract
Phasor measurement units (PMUs) are widely recognized to be instrumental for enhancing power system situational awareness - a key step toward the future power grid. With limited PMU resources and high installation costs, it is of great importance to develop strategic PMU deployment methods. This paper focuses on optimally selecting PMU locations for monitoring transmission line status across the wide-area grid. To bypass the combinatorial search involved, a linear programming reformulation is first developed to provide an upper bound estimate for the global optimum. Furthermore, a greedy heuristic method is adopted with complexity only linear in the number of PMUs, while, leveraging the upper bound estimate, a branch-and-bound (BB) algorithm is also developed to achieve near-optimal performance at reduced complexity. Numerical tests on various IEEE test cases demonstrate the validity of the proposed methods, strongly advocating the satisfactory performance of the simple greedy method at only linear complexity. This suggests a hybrid method that uses the greedy solution as a warm start to reduce the number of BB iterations.
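The greedy heuristic described above follows the usual pattern for budgeted coverage: repeatedly place the next PMU at the bus that newly observes the most lines. The sketch below illustrates that pattern; the `coverage` interface (bus to the set of lines it observes) and the function name are hypothetical simplifications of the paper's identification-performance objective.

```python
def greedy_placement(candidates, budget, coverage):
    """Greedily select up to `budget` buses, each round picking the bus
    whose PMU would observe the most not-yet-covered lines.
    coverage: dict mapping bus -> set of observable lines (assumed interface)."""
    placed, covered = [], set()
    for _ in range(budget):
        best = max((b for b in candidates if b not in placed),
                   key=lambda b: len(coverage[b] - covered), default=None)
        if best is None:
            break
        placed.append(best)
        covered |= coverage[best]
    return placed, covered
```

Each round scans the remaining candidates once, so the total cost grows only linearly with the PMU budget, matching the linear complexity claimed for the greedy method.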
- Published
- 2014
- Full Text
- View/download PDF
49. Through-silicon-via inductor: Is it real or just a fantasy?
- Author
-
Cheng Zhuo, Yiyu Shi, and Umamaheswara Rao Tida
- Subjects
Footprint (electronics) ,Inductance ,Engineering ,Substrate (building) ,Through-silicon via ,business.industry ,Q factor ,Electronic engineering ,Electrical engineering ,Three-dimensional integrated circuit ,Point (geometry) ,business ,Inductor - Abstract
Through-silicon-vias (TSVs) can potentially be used to implement inductors in three-dimensional (3D) integrated systems for minimal footprint and large inductance. However, different from conventional 2D spiral inductors, TSV inductors are fully buried in the lossy substrate, thus suffering from a low quality factor. In this paper, we propose a novel shield mechanism utilizing the micro-channel, a technique conventionally used for heat removal, to reduce the substrate loss. This technique increases the quality factor and the inductance of the TSV inductor by up to 21x and 17x respectively. It enables us to implement TSV inductors with up to 38x smaller area and 33% higher quality factor, compared with spiral inductors of the same inductance. To the best of the authors' knowledge, this is the first proposal for improving the quality factor of TSV inductors. We hope our study will point out a new and exciting research direction for 3D IC designers.
- Published
- 2014
- Full Text
- View/download PDF
50. Novel crack sensor for TSV-based 3D integrated circuits: Design and deployment perspectives
- Author
-
Chun Zhang, Moongon Jung, Sung Kyu Lim, and Yiyu Shi
- Published
- 2013
- Full Text
- View/download PDF