44 results for "Amith Singhee"
Search Results
2. Incremental Analysis of Legacy Applications Using Knowledge Graphs for Application Modernization
- Author
-
Saravanan Krishnan, Alex Mathai, Amith Singhee, Atul Kumar, Shivali Agarwal, Keerthi Narayan Raghunath, and David Wenk
- Published
- 2022
3. Rightsizing Clusters for Time-Limited Tasks
- Author
-
Seshadri Padmanabha Venkatagiri, Pooja Aggarwal, Anamitra R. Choudhury, Amith Singhee, Ashok Kumar, Yogish Sabharwal, and Venkatesan T. Chakaravarthy
- Subjects
FOS: Computer and information sciences ,Linear programming ,Computer science ,Bin packing problem ,Node (networking) ,Time-sharing ,Approximation algorithm ,Task (computing) ,Resource (project management) ,Computer Science - Distributed, Parallel, and Cluster Computing ,Computer Science - Data Structures and Algorithms ,Data Structures and Algorithms (cs.DS) ,Distributed, Parallel, and Cluster Computing (cs.DC) ,Cluster analysis ,Algorithm - Abstract
In conventional public clouds, designing a suitable initial cluster for a given application workload is important in reducing the computational footprint during run-time. In edge or on-premise clouds, cold-start rightsizing the cluster at the time of installation is crucial in avoiding recurrent capital expenditure. In both cases, rightsizing has to balance the cost-performance trade-off for a given application with multiple tasks, where each task can demand multiple resources and the cloud offers nodes with different capacities and costs. Multidimensional bin-packing can address this cold-start rightsizing problem, but assumes that every task is always active. In contrast, real-world tasks (e.g., load bursts, batch and deadline-bound tasks with time limits) may be active only during specific time periods or may have dynamic load profiles. The cluster cost can be reduced by reusing resources via time sharing and optimal packing. This motivates our generalized problem of cold-start rightsizing for time-limited tasks: given a timeline, and time periods and resource demands for tasks, the objective is to place the tasks on a minimum-cost cluster of nodes without violating node capacities at any time instance. We design a baseline two-phase algorithm that performs penalty-based mapping of tasks to node types and then solves each node type independently. We prove that the algorithm has an approximation ratio of O(D min(m, T)), where D, m and T are the number of resources, node types and timeslots, respectively. We then present an improved linear-programming-based mapping strategy, enhanced further with a cross-node-type filling mechanism. Our experiments on synthetic and real-world cluster traces show significant cost reduction by LP-based mapping compared to the baseline, and the filling mechanism improves further to produce solutions within 20% of (a lower bound to) the optimal solution. An abridged version appears in IEEE Cloud 2021.
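As a concrete illustration of the packing constraint at the heart of this problem, here is a minimal sketch (not the paper's two-phase or LP-based algorithm; the task data and node capacity are made up) of a greedy first-fit placement of time-limited, multi-resource tasks that checks node capacity at every timeslot a task is active:

```python
# Minimal sketch (not the paper's algorithm): greedy first-fit placement of
# time-limited, multi-resource tasks onto nodes of a single node type,
# checking capacity at every timeslot a task is active. Data is illustrative.

# task = (demand per resource, active timeslots)
tasks = [
    ((2, 4), range(0, 3)),   # e.g. a batch job active in slots 0-2
    ((3, 1), range(2, 6)),
    ((1, 2), range(0, 6)),
]
node_capacity = (4, 6)       # per-resource capacity of one node
num_slots = 6

def fits(usage, demand, slots, capacity):
    """True if adding `demand` during `slots` keeps every resource within capacity."""
    return all(usage[t][r] + demand[r] <= capacity[r]
               for t in slots for r in range(len(capacity)))

nodes = []  # each node: usage[t][r]
for demand, slots in tasks:
    for usage in nodes:
        if fits(usage, demand, slots, node_capacity):
            break
    else:
        usage = [[0] * len(node_capacity) for _ in range(num_slots)]
        nodes.append(usage)
    for t in slots:
        for r in range(len(node_capacity)):
            usage[t][r] += demand[r]

print(f"greedy placement used {len(nodes)} node(s)")
```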
- Published
- 2021
4. Monolith to Microservice Candidates using Business Functionality Inference
- Author
-
Hiroaki Nakamuro, Amith Singhee, Utkarsh Desai, Raunak Sinha, Pratap Das, Shivali Agarwal, Giriprasad Sridhara, and Srikanth G. Tamilselvam
- Subjects
Structure (mathematical logic) ,Class (computer programming) ,Code refactoring ,Computer science ,Application domain ,Formal concept analysis ,Data mining ,Web service ,Cluster analysis ,computer.software_genre ,computer ,Modularity - Abstract
In this paper, we propose a novel approach for monolith decomposition that maps the implementation structure of a monolith application to a functional structure, which in turn can be mapped to business functionality. First, we infer the classes in the monolith application that are distinctively representative of the business functionality in the application domain. This is done using formal concept analysis on statically determined code flow structures, in a completely automated manner. Then, we apply a clustering technique, guided by the inferred representatives, to the classes belonging to the monolith to group them into different types of partitions: 1) functional groups representing microservice candidates, 2) a utility class group, and 3) a group of classes that require significant refactoring to enable a clean microservice architecture. This results in microservice candidates that are naturally aligned with the different business functions exposed by the application. A detailed evaluation on four publicly available applications shows that our approach is able to determine better-quality microservice candidates than other existing state-of-the-art techniques. We also conclusively show that clustering quality metrics like modularity are not reliable indicators of microservice candidate goodness.
- Published
- 2021
5. Konveyor Move2Kube: Automated Replatforming of Applications to Kubernetes
- Author
-
Amith Singhee, Ashok Kumar, Akash Nayak, Pablo Loyola, Harikrishnan Balagopal, Seshadri Padmanabha Venkatagiri, Mudit Verma, and Chander Govindarajan
- Subjects
Reduction (complexity) ,Pipeline transport ,Community project ,Software deployment ,business.industry ,Computer science ,Cloud computing ,Architecture ,business ,Software engineering ,Pipeline (software) ,Personalization - Abstract
We present Move2Kube, a replatforming framework that automates the transformation of the deployment specification and development pipeline of an application from a non-Kubernetes platform to a Kubernetes-based one, minimizing changes to the application's functional implementation and architecture. Our contributions include: (1) a standardized intermediate representation to which diverse application deployment artifacts can be translated, and (2) an extension framework for adding support for new source platforms and target artifacts, while allowing customization per organizational standards. We provide initial evidence of its effectiveness in terms of effort reduction, and highlight current research challenges and future lines of work. Move2Kube is being developed as an open-source community project and is available at https://move2kube.konveyor.io/
- Published
- 2021
6. Robust resource demand estimation using hierarchical Bayesian model in a distributed service system
- Author
-
Amith Singhee, Digbalay Bose, Sumanta Mukherjee, Nupur Aggarwal, and Krishnasuri Narayanam
- Subjects
Identification (information) ,Resource (project management) ,Operations research ,Computer science ,Service (economics) ,media_common.quotation_subject ,Volume (computing) ,Variance (accounting) ,Bayesian inference ,Categorical variable ,Task (project management) ,media_common - Abstract
Robust resource demand prediction is crucial for efficient allocation of resources to service requests in a distributed service delivery system. There are two problems in resource demand prediction: first, estimating the volume of service requests that arrive at different time points and geo-locations; and second, estimating the resource demand given the estimated volume of service requests. While a large body of literature addresses the first problem, in this work we propose a data-driven statistical method for robust resource demand prediction that addresses the second. The method automates the identification of the system operational characteristics and contributing factors that influence system behavior, to generate an adaptive, low-variance resource demand prediction model. Factors can be either continuous or categorical in nature. The method assumes that each service request resolution involves multiple tasks and that each task is composed of multiple activities. Each task belongs to a task type, based on the type of resource it requires. The method supports configurable tasks per service request and configurable activities per task. The demand prediction model produces the aggregate resource demand required to resolve all activities under a task, via activity-sequence modeling, and the aggregate resource demand by resource type required to resolve all activities under a service request, via task-sequence modeling.
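The abstract does not spell out the model; the following is a minimal, illustrative sketch (synthetic data, not the authors' hierarchical Bayesian formulation) of the partial-pooling idea behind such models: per-task-type demand estimates are shrunk toward a global mean, so sparsely observed task types borrow strength from the rest of the data.

```python
# Minimal sketch (illustrative, not the authors' model): hierarchical partial
# pooling of per-task-type resource demand, shrinking noisy per-type means
# toward the global mean to reduce variance. All data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
# synthetic: observed resource hours for 4 task types with unequal sample sizes
observations = {
    "install": rng.normal(5.0, 2.0, size=50),
    "repair":  rng.normal(8.0, 2.0, size=8),
    "inspect": rng.normal(3.0, 2.0, size=3),    # few samples -> strong shrinkage
    "upgrade": rng.normal(6.0, 2.0, size=20),
}

global_mean = np.mean(np.concatenate(list(observations.values())))
between_var = np.var([obs.mean() for obs in observations.values()])        # across-type spread
within_var = np.mean([obs.var(ddof=1) for obs in observations.values()])   # per-type noise

for task_type, obs in observations.items():
    n = len(obs)
    # shrinkage weight: more samples -> trust the per-type mean more
    w = between_var / (between_var + within_var / n)
    estimate = w * obs.mean() + (1 - w) * global_mean
    print(f"{task_type:8s} n={n:3d} raw={obs.mean():5.2f} shrunk={estimate:5.2f}")
```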
- Published
- 2021
7. SiLVR: Projection Pursuit for Response Surface Modeling
- Author
-
Amith Singhee
- Published
- 2019
8. Enterprise Scale Privacy Aware Occupancy Sensing
- Author
-
Satyam Dwivedi, Ashok Pon Kumar Sree Prakash, Rohun Tripathi, Surya Shravan Kumar Sajja, Marnik Vermeulen, and Amith Singhee
- Subjects
Occupancy ,Edge device ,Computer science ,business.industry ,Fingerprint (computing) ,Computer security ,computer.software_genre ,Scalability ,Location-based service ,Enhanced Data Rates for GSM Evolution ,IBM ,business ,computer ,Building automation - Abstract
Location-based services inside smart buildings depend on scalable localization methods. However, for enterprises, the privacy of individual employees is a major concern. In this paper, we present a privacy-aware occupancy sensing mechanism for large enterprises with multiple floors across multiple buildings and cities. This is achieved through Wi-Fi fingerprint-based localization methods implemented on edge devices. We present some preliminary results on occupancy sensing from our pilot study inside the office spaces of IBM India.
- Published
- 2018
9. Probabilistic forecasts of service outage counts from severe weather in a distribution grid
- Author
-
Amith Singhee and Haijing Wang
- Subjects
Severe weather ,Computer science ,020209 energy ,Probabilistic logic ,Weather forecasting ,Storm ,02 engineering and technology ,computer.software_genre ,Data modeling ,Statistics ,0202 electrical engineering, electronic engineering, information engineering ,Tobit model ,Distribution grid ,computer ,Random variable - Abstract
In this work we develop a machine learning based model for computing a probabilistic prediction of the number of distribution grid customers that will lose power during a given severe weather event. The model takes as input a prediction of damage counts at the level of substation regions or service regions, as proposed in earlier work, and generates the customer count impact forecast in aggregate for the entire service territory (or a large region thereof). The relationship between damage count and customer count is highly noisy in general, given the branching structure of distribution grids. Here we exploit the fact that the noise reduces as the damage count increases and develop a Tobit model applicable for severe weather events. We validate the forecasting system using data from a utility.
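A minimal sketch of fitting a left-censored Tobit regression of customer-outage counts on predicted damage counts, assuming synthetic data and a simple linear latent model (not necessarily the paper's exact formulation):

```python
# Minimal sketch (assumed form, not the paper's exact model): a left-censored
# Tobit regression of customer-outage impact on predicted damage counts.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
damage = rng.gamma(2.0, 5.0, size=200)                 # synthetic damage counts
latent = 0.8 * damage - 4.0 + rng.normal(0, 3, 200)    # latent customer impact
y = np.maximum(latent, 0.0)                            # observed counts, censored at 0

def neg_log_lik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * damage
    censored = y <= 0
    ll = np.where(censored,
                  norm.logcdf(-mu / sigma),             # P(latent <= 0)
                  norm.logpdf(y, loc=mu, scale=sigma))  # density for observed counts
    return -ll.sum()

fit = minimize(neg_log_lik, x0=[0.0, 1.0, 1.0], method="Nelder-Mead")
print("estimated (b0, b1, sigma):", fit.x[0], fit.x[1], np.exp(fit.x[2]))
```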
- Published
- 2017
10. A Simulation Study of Oxygen Vacancy-Induced Variability in HfO2/Metal Gated SOI FinFET
- Author
-
Pranita Kerber, David J. Frank, Takashi Ando, Emrah Acar, Amit Ranjan Trivedi, Saibal Mukhopadhyay, and Amith Singhee
- Subjects
Materials science ,business.industry ,Silicon on insulator ,Dielectric ,Electronic, Optical and Magnetic Materials ,Logic gate ,MOSFET ,Electronic engineering ,Energy level ,Optoelectronics ,Work function ,Electrical and Electronic Engineering ,business ,AND gate ,High-κ dielectric - Abstract
Deposition of a metal gate on the high-K dielectric HfO2 is known to generate oxygen vacancy (OV) defects. Positively charged OVs in the dielectric affect the gate electrostatics and modulate the effective gate workfunction (WF). The count and spatial placement of OVs vary from device to device and induce significant local variability in WF and Vth. This paper presents statistical models to simulate OV concentration and placement depending on the gate formation conditions. OV-induced variability is studied for SOI FinFETs and compared against other sources of variability across technologies. The implications of gate-first and gate-last processes for the OV concentration and distribution are studied. Simulations show that with channel-length and gate-dielectric-thickness scaling, OV-induced variability becomes a significant concern.
- Published
- 2014
11. Unlocking the hidden potential of data towards efficient buildings: Findings from a pilot study in India
- Author
-
Sridhar R, Sunil Kumar Ghai, Vijay Arya, Megha Nawhal, Amith Singhee, Heena Bansal, Vikas Chandan, Deva P. Seetharam, Zainul Charbiwala, Harshad Khadilkar, Ashok Kumar, and B. Ramesh
- Subjects
Building management system ,Engineering ,Occupancy ,Scope (project management) ,business.industry ,Energy management ,05 social sciences ,020207 software engineering ,02 engineering and technology ,Energy consumption ,Risk analysis (engineering) ,Backup ,0202 electrical engineering, electronic engineering, information engineering ,0501 psychology and cognitive sciences ,business ,Telecommunications ,Operating expense ,Building management ,050107 human factors - Abstract
Energy cost is one of the significant contributors to the operational expenses of commercial buildings. In developing countries facing frequent power outages and deficient grid connectivity, diesel generators are used as a backup power source, which significantly increases the cost of managing commercial establishments. Integrating information and communication technologies into building management systems provides a reliable platform to analyze various aspects of a building, such as energy consumption trends and occupancy inferences, and thereby to propose reactive or proactive strategies directed toward efficient and cost-effective building management. Usually, this potential of the data available to building management agencies stays untapped in developing countries. In this paper, we take a data-driven approach to understand various operational aspects of a commercial establishment. To demonstrate the scope for optimizing building operations by exploiting energy consumption data, a pilot study was conducted in an IT office building in India.
- Published
- 2016
12. Two Fast Methods for Estimating the Minimum Standby Supply Voltage for Large SRAMs
- Author
-
Amith Singhee, Benton H. Calhoun, Jiajing Wang, and Rob A. Rutenbar
- Subjects
Computer science ,Monte Carlo method ,Orders of magnitude (voltage) ,Computer Graphics and Computer-Aided Design ,Noise (electronics) ,symbols.namesake ,Generalized Pareto distribution ,Statistics ,symbols ,Probability distribution ,Pareto distribution ,Static random-access memory ,Electrical and Electronic Engineering ,Algorithm ,Software ,Quantile - Abstract
The data retention voltage (DRV) defines the minimum supply voltage for an SRAM cell to hold its state. Intra-die variation causes a statistical distribution of DRV for individual cells in a memory array. We present two fast and accurate methods to estimate the tail of the DRV distribution. The first method uses a new analytical model based on the relationship between DRV and static noise margin. The second method extends the statistical blockade technique to a recursive formulation. It uses conditional sampling for rapid statistical simulation and fits the results to a generalized Pareto distribution (GPD) model. Both the analytical DRV model and the generic GPD model show a good match with Monte Carlo simulation results and offer speedups of up to four or five orders of magnitude over Monte Carlo at the 6σ point. In addition, the two models show a very close agreement with each other at the tail up to 8σ. For error within 5% with a confidence of 95%, the analytical DRV model and the GPD model can predict DRV quantiles out to 8σ and 6.6σ respectively; and for the mean of the estimate, both models offer within 1% error relative to Monte Carlo at the 4σ point.
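A minimal sketch of the peaks-over-threshold tail fit described above, using synthetic stand-in samples and scipy's generalized Pareto distribution (not the authors' code):

```python
# Minimal sketch: fit a generalized Pareto distribution (GPD) to tail
# exceedances over a threshold, as in peaks-over-threshold tail modeling.
# Synthetic stand-in data; a real flow would use simulated DRV samples.
import numpy as np
from scipy.stats import genpareto, norm

rng = np.random.default_rng(2)
drv = rng.normal(loc=150.0, scale=15.0, size=200_000)  # synthetic DRV samples (mV)

threshold = np.quantile(drv, 0.99)           # tail threshold at the 99th percentile
exceedances = drv[drv > threshold] - threshold

shape, loc, scale = genpareto.fit(exceedances, floc=0.0)

# Tail quantile estimate: P(DRV > x) = P(exceed threshold) * GPD survival at x
p_thresh = (drv > threshold).mean()
x = 210.0
tail_prob = p_thresh * genpareto.sf(x - threshold, shape, loc=0.0, scale=scale)
print(f"P(DRV > {x} mV) ~ {tail_prob:.2e}  (exact for this synthetic data: {norm.sf(x, 150, 15):.2e})")
```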
- Published
- 2010
13. Why Quasi-Monte Carlo is Better Than Monte Carlo or Latin Hypercube Sampling for Statistical Circuit Analysis
- Author
-
Amith Singhee and Rob A. Rutenbar
- Subjects
Polynomial ,Theoretical computer science ,Computer science ,Quantum Monte Carlo ,Monte Carlo method ,Rejection sampling ,Markov chain Monte Carlo ,Computer Graphics and Computer-Aided Design ,Hybrid Monte Carlo ,symbols.namesake ,Nonlinear system ,Latin hypercube sampling ,symbols ,Probability distribution ,Monte Carlo integration ,Monte Carlo method in statistical physics ,Quasi-Monte Carlo method ,Analysis of variance ,Kinetic Monte Carlo ,Electrical and Electronic Engineering ,Algorithm ,Software ,Monte Carlo molecular modeling - Abstract
At the nanoscale, no circuit parameters are truly deterministic; most quantities of practical interest present themselves as probability distributions. Thus, Monte Carlo techniques comprise the strategy of choice for statistical circuit analysis. There are many challenges in applying these techniques efficiently: circuit size, nonlinearity, simulation time, and required accuracy often conspire to make Monte Carlo analysis expensive and slow. Are we, the integrated circuit community, alone in facing such problems? As it turns out, the answer is "no." Problems in computational finance share many of these characteristics: high dimensionality, profound nonlinearity, stringent accuracy requirements, and expensive sample evaluation. We perform a detailed experimental study of how one celebrated technique from that domain, quasi-Monte Carlo (QMC) simulation, can be adapted effectively for fast statistical circuit analysis. In contrast to traditional pseudorandom Monte Carlo sampling, QMC uses a (shorter) sequence of deterministically chosen sample points. We perform rigorous comparisons with both Monte Carlo and Latin hypercube sampling across a set of digital and analog circuits, in 90 and 45 nm technologies, varying in size from 30 to 400 devices. We consistently see superior performance from QMC, giving 2× to 8× speedup over conventional Monte Carlo for roughly 1% accuracy levels. We present rigorous theoretical arguments that support and explain this superior performance of QMC. The arguments also reveal insights regarding the (low) latent dimensionality of these circuit problems; for example, we observe that over half of the variance in our test circuits is from unidimensional behavior. This analysis provides quantitative support for recent enthusiasm in dimensionality reduction of circuit problems.
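A small sketch of the comparison, using a scrambled Sobol' sequence from scipy.stats.qmc against pseudorandom sampling on a toy 10-dimensional integrand (not the paper's circuit benchmarks):

```python
# Minimal sketch: pseudorandom Monte Carlo vs. a scrambled Sobol' low-discrepancy
# sequence on a toy 10-dimensional integration problem. The integrand's exact
# mean is 0, so the absolute estimate is the integration error.
import numpy as np
from scipy.stats import qmc, norm

dim, n = 10, 2**12
def f(u):
    # map uniforms to standard normals, sum an odd (zero-mean) function of each
    z = norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))
    return np.sin(z).sum(axis=1)

rng = np.random.default_rng(3)
mc_points = rng.random((n, dim))
qmc_points = qmc.Sobol(d=dim, scramble=True, seed=3).random(n)

print("MC  |error|:", abs(f(mc_points).mean()))
print("QMC |error|:", abs(f(qmc_points).mean()))
```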
- Published
- 2010
14. Statistical Blockade: Very Fast Statistical Simulation and Modeling of Rare Circuit Events and Its Application to Memory Design
- Author
-
Rob A. Rutenbar and Amith Singhee
- Subjects
Theoretical computer science ,Computer science ,Monte Carlo method ,Sampling (statistics) ,Computer Graphics and Computer-Aided Design ,Computer engineering ,Rare events ,Probability distribution ,Electrical and Electronic Engineering ,Discrete event simulation ,Extreme value theory ,Software ,Random access ,Parametric statistics - Abstract
Circuit reliability under random parametric variation is an area of growing concern. For highly replicated circuits, e.g., static random access memories (SRAMs), a rare statistical event for one circuit may induce a not-so-rare system failure. Existing techniques perform poorly when tasked to generate both efficient sampling and sound statistics for these rare events. Statistical blockade is a novel Monte Carlo technique that allows us to efficiently filter, i.e., block, unwanted samples that are insufficiently rare in the tail distributions we seek. The method synthesizes ideas from data mining and extreme value theory and, for the challenging application of SRAM yield analysis, shows speedups of 10-100× over standard Monte Carlo.
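A minimal sketch of the blockade idea, assuming a cheap analytic surrogate in place of a SPICE simulation and a scikit-learn classifier as the blocking filter (illustrative only, not the authors' implementation):

```python
# Minimal sketch of the statistical blockade idea: train a cheap classifier on
# a small pilot sample, then run the "expensive" simulation only on candidates
# predicted to fall in the tail. The quadratic surrogate below stands in for a
# SPICE simulation; the relaxed 97% training threshold mimics the blockade filter.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
simulate = lambda x: x @ np.array([1.0, 0.6, -0.4]) + 0.2 * x[:, 0] ** 2  # surrogate metric

# Pilot run: simulate a modest sample and label points above a relaxed threshold
pilot_x = rng.normal(size=(5_000, 3))
pilot_y = simulate(pilot_x)
relaxed_cut = np.quantile(pilot_y, 0.97)
clf = LogisticRegression().fit(pilot_x, pilot_y > relaxed_cut)

# Production run: generate many candidates, simulate only the predicted tail
candidates = rng.normal(size=(500_000, 3))
predicted_tail = clf.predict(candidates).astype(bool)
tail_values = simulate(candidates[predicted_tail])

true_cut = np.quantile(pilot_y, 0.99)   # the stricter tail we actually care about
print(f"simulated {predicted_tail.sum()} of {len(candidates)} candidates")
print(f"tail points found beyond the 99th-percentile cut: {(tail_values > true_cut).sum()}")
```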
- Published
- 2009
15. HAMS
- Author
-
Fook-Luen Heng, Mark A. Lavin, Ulrich Finkler, Amith Singhee, Steven Hirsch, and Jun Mei Qu
- Subjects
Power graph analysis ,Alpha-numeric grid ,Electric power system ,Smart grid ,Grid network ,Computer science ,Distributed computing ,Grid ,Network topology ,Electrical grid - Abstract
Advanced analytical applications that will enable the smart grid need to analyze the connectivity of the power grid under multiple operating scenarios, taking into account the time-varying topology of the grid. This paper proposes a highly memory-efficient representation of the power grid that enables efficient construction of multiple topological and operational states in memory for high-performance graph analysis. The proposed representation exploits repeating patterns in the grid and uses a hierarchical graph as the core model. Time-varying topology and operational conditions are modeled as mapping functions on this hierarchical graph, so as to avoid constructing multiple graphs to represent multiple topologies. The efficiency and performance of the proposed representation are demonstrated on a large real-world electrical distribution grid.
- Published
- 2015
16. Optimal spatio-temporal emergency crew planning for a distribution system
- Author
-
Ashish Sabharwal, Haijing Wang, Amith Singhee, Gerard Labut, Richard Mueller, and Ali Koc
- Subjects
Service (systems architecture) ,Engineering ,Operations research ,Business process ,business.industry ,Crew ,Constraint programming ,Staffing ,Scenario analysis ,business ,Task (project management) ,Scheduling (computing) - Abstract
A common problem that distribution utilities grapple with is planning crew levels on a day-to-day basis, especially in the face of large weather events, while accounting for complex business constraints. This paper proposes a method for optimally planning hourly crew staffing levels across different organizations (service centers, local contractors, mutual-aid crews) and different crew types. The goal is to estimate these staffing levels over different shifts on a time range of days, so as to optimize the overall Estimated Time to Restoration (ETR) while maximizing crew efficiency and honoring business constraints such as labor rules, organizational structure, business processes, and public safety. The proposed method uses constraint-programming-based task scheduling to capture these complex business constraints and objectives and solve for an optimal solution. The paper demonstrates how this crew-planning tool can be used for what-if scenario analysis to evaluate different escalation scenarios and aid in decision-making.
- Published
- 2015
17. Spatio-temporal forecasting of weather-driven damage in a distribution system
- Author
-
Zhiguo Li, Haijing Wang, Abhishek Raman, Amith Singhee, Richard Mueller, Fook-Luen Heng, Gerard Labut, and Stuart A. Siegel
- Subjects
Global Forecast System ,Engineering ,Operations research ,Emergency management ,business.industry ,Weather forecasting ,computer.software_genre ,Data modeling ,Model output statistics ,Hindcast ,Probabilistic forecasting ,business ,computer ,Rapid Refresh - Abstract
A major ongoing effort by utilities is to improve their emergency preparedness process for weather events, in order to: 1) reduce outage time, 2) reduce repair and restoration costs, and 3) improve customer satisfaction. This paper proposes a method for forecasting the number of damages of different types that will result from a weather event, up to 3 days before the event actually occurs. The proposed method overcomes practical issues with sparsity of historical damage and weather records by 1) using a spatial clustering-based scheme that works even where there are very few historical incidents of damage, 2) combining data from multiple weather observation networks, 3) using weather hindcast data, and 4) accounting for variability in damage susceptibility across different substation regions. The performance of the method is evaluated on real utility data.
- Published
- 2015
18. OPRO: Precise emergency preparedness for electric utilities
- Author
-
Younghun Kim, G. M. Gauthier, Richard Mueller, Haijing Wang, Zhiguo Li, Ali Koc, James P. Cipriani, Gerard Labut, Lloyd A. Treinish, Amith Singhee, Ashok Kumar, and R. A. Foltman
- Subjects
education.field_of_study ,General Computer Science ,Emergency management ,Business process ,Computer science ,business.industry ,Process (engineering) ,020209 energy ,Population ,Context (language use) ,02 engineering and technology ,Reliability engineering ,Hotspot (Wi-Fi) ,Resource (project management) ,Risk analysis (engineering) ,0202 electrical engineering, electronic engineering, information engineering ,Customer satisfaction ,business ,education - Abstract
Electric utilities spend a large amount of their resources and budget on managing unplanned outages, the majority of which are driven by weather. Weather is the largest contributing factor to power outages faced by the population in the United States and several other countries. A major ongoing effort by utilities is to improve their emergency preparedness process in order to 1) reduce outage time, 2) reduce repair and restoration costs, and 3) improve customer satisfaction. We present an approach called Outage Prediction and Response Optimization (OPRO) to improve emergency preparedness by combining a) localized and highly accurate weather prediction, b) damage prediction, c) infrastructure health-aware damage hotspot analysis, and d) optimal resource planning. The combination of these capabilities can enable utilities to initiate their storm preparation process 1 to 2 days in advance of a storm and precisely plan their resource schedules and escalation stance. This would be a profound change to the business process of utilities, which today tends to be reactive once the storm hits. We describe these capabilities and their effectiveness in terms of metrics relevant to a utility, the related use cases, and the overall business process that brings them together in the context of a real U.S. utility.
- Published
- 2016
19. Enabling coupled models to predict the business impact of weather on electric utilities
- Author
-
Jean-Baptiste Fiot, Praino Anthony Paul, Amith Singhee, Bryant Chen, Mathieu Sinn, Vincent P. A. Lonij, Haijing Wang, Lloyd A. Treinish, and James P. Cipriani
- Subjects
Engineering ,General Computer Science ,Meteorology ,business.industry ,Stochastic modelling ,020209 energy ,Weather forecasting ,02 engineering and technology ,Transmission system ,Numerical weather prediction ,computer.software_genre ,Reliability engineering ,Renewable energy ,Electric utility ,Electricity generation ,0202 electrical engineering, electronic engineering, information engineering ,Predictability ,business ,computer - Abstract
Efficient, resilient, and safe operation of an electric utility is dependent on the local weather conditions at the scale of its infrastructure. This sensitivity to weather includes such factors as damage to distribution or transmission systems due to relative extremes in precipitation or wind, determining electricity demand and load, and power generation from renewable facilities. Hence, the availability of highly focused weather predictions has the potential to enable proactive planning for the effect of weather on utility systems. Often, such information is simply unavailable. The initial step to address this gap is the application of state-of-the-art physical weather models at the spatial scale of the utility's infrastructure, calibrated to avoid this mismatch in predictability. The results of such a model are then coupled to a data-driven stochastic model to represent the weather impacts. The deployment of such methods requires an abstraction of the weather forecasting component to drive the model coupling.
- Published
- 2016
20. A platform for the next generation of smarter energy applications
- Author
-
E. Pelletier, Ulrich Finkler, Amith Singhee, Mark A. Lavin, Steven Hirsch, Ashok Kumar, Fook-Luen Heng, and Jun Mei Qu
- Subjects
010302 applied physics ,General Computer Science ,business.industry ,Computer science ,Distributed computing ,Context (language use) ,02 engineering and technology ,Grid ,01 natural sciences ,Data science ,Visualization ,Data model ,020204 information systems ,Distributed generation ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Programming paradigm ,Data as a service ,business ,Host (network) - Abstract
A number of key technological, social, and business disruptions will drive a new generation of smarter energy applications. The disruptions include the following: 1) large sensor deployments, resulting in a huge increase in data volumes and variety, 2) a move toward clean energy and intermittent renewable energy sources, and 3) a move to highly distributed energy resources. Enabling resilient and efficient power delivery under these disruptions will require a host of new applications that analyze large amounts and varieties of data in the context of the connected grid and perform analysis, visualization, and control in real time with very low latency. In this paper, we present a set of capabilities that enable such applications, and a software and hardware platform that combines these capabilities to enable rapid development of a wide array of high-performance and analytics-rich applications. These capabilities include: 1) high-performance time-series ingestion, 2) a flexible data model that spans multiple contexts, 3) high-performance, in-memory analysis of time-varying, hierarchical graphs, 4) a data service for co-presenting real-time and static spatiotemporal data for real-time web-based visualization, and 5) a seamless combination of event-based and service-oriented programming models.
- Published
- 2016
21. A dynamic method for efficient random mismatch characterization of standard cells
- Author
-
Jinjun Xiong, Chandu Visweswariah, Amol A. Joshi, Wangyang Zhang, Amith Singhee, James E. Sundquist, and Peter A. Habitz
- Subjects
Statistical static timing analysis ,Computer science ,Electronic engineering ,Function (mathematics) ,Dynamic method ,Algorithm ,Characterization (materials science) - Abstract
To enable statistical static timing analysis, for each cell in a digital library, a timing model that considers variations must be characterized. In this paper, we propose a dynamic method to accurately and efficiently characterize a cell's delay and output slew as a function of random mismatch variations. Based on a tight error bound for characterization using partial devices, our method sequentially performs simulations based on decreasing importance of devices and stops when the error requirement is met. Results on an industrial 32nm library demonstrate that the proposed method achieves significantly better accuracy-efficiency trade-off compared to other partial finite differencing approaches.
- Published
- 2012
22. DRC-free high density layout exploration with layout morphing and patterning quality assessment, with application to SRAM
- Author
-
Emrah Acar, Aditya Bansal, Mohammad I. Younus, Amith Singhee, and Rama Nand Singh
- Subjects
Engineering drawing ,Design rule checking ,Engineering ,business.industry ,Page layout ,Image processing ,computer.software_genre ,Integrated circuit layout ,Morphing ,Static random-access memory ,Physical design ,business ,computer ,IC layout editor - Abstract
A system for layout exploration without design-rule checking is presented. It comprises two new key capabilities: 1) layout morphing to generate multi-mask-layer layout variants, given basis layouts, and 2) feature-driven layout quality evaluation using through-process patterning simulations. The former uses morphing techniques inspired by image processing. The latter uses design-specific marker shapes to identify relevant features, and efficient geometric operations on the simulated contours and these markers. The methodology is useful for high-density layout design and for design-rule development where design rules are insufficient or irrelevant, especially at the 22 nm technology node and beyond. We demonstrate it on an aggressive 22 nm SRAM bitcell design.
- Published
- 2012
23. Stream computing based synchrophasor application for power grids
- Author
-
Deva P. Seetharam, Kaushik Das, Jagabondhu Hazra, and Amith Singhee
- Subjects
Computer science ,Analytics ,business.industry ,Stream ,Real-time computing ,Phasor ,Benchmark (computing) ,Volume (computing) ,Grid ,business ,Power (physics) - Abstract
This paper proposes an application of a stream computing analytics framework to high-speed synchrophasor data for real-time monitoring and control of the electric grid. High-volume streaming synchrophasor data from geographically distributed grid sensors (namely, phasor measurement units) are collected, synchronized, aggregated when required, and analyzed using a stream computing platform to estimate grid stability in real time. This real-time stability monitoring scheme will help grid operators take preventive or corrective measures ahead of time to mitigate a disturbance before it becomes widespread. A prototype of the scheme is demonstrated on a benchmark 3-machine, 9-bus system and on the IEEE 14-bus test system.
- Published
- 2011
24. Pareto sampling
- Author
-
Amith Singhee and Pamela Castalino
- Subjects
Mathematical optimization ,Optimization problem ,Iterative method ,Convex optimization ,Nonuniform sampling ,Pareto principle ,Partial derivative ,Sampling (statistics) ,Multi-objective optimization ,Mathematics - Abstract
The convex weighted-sum method for multi-objective optimization has the desirable property of not worsening the difficulty of the optimization problem, but can lead to very nonuniform sampling. This paper explains the relationship between the weights and the partial derivatives of the tradeoff surface, and shows how to use it to choose the right weights and uniformly sample largely convex tradeoff surfaces. It proposes a novel method, Derivative Pursuit (DP), that iteratively refines a simplicial approximation of the tradeoff surface by using partial derivative information to guide the weights generation. We demonstrate the improvements offered by DP on both synthetic and circuit test cases, including a 22 nm SRAM bitcell design problem with strict read and write yield constraints, and power and performance objectives.
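The weight-derivative relationship the abstract refers to can be written out for two objectives under standard weighted-sum assumptions (a sketch in our own notation, not quoted from the paper):

```latex
% Weighted-sum scalarization for two objectives f_1, f_2 (notation ours):
\[
  x^{*}(w) \;=\; \arg\min_{x}\; w_1 f_1(x) + w_2 f_2(x), \qquad w_1 + w_2 = 1 .
\]
% First-order optimality along a smooth, convex tradeoff surface gives
\[
  w_1\,\mathrm{d}f_1 + w_2\,\mathrm{d}f_2 = 0
  \quad\Longrightarrow\quad
  \frac{\mathrm{d}f_2}{\mathrm{d}f_1} = -\,\frac{w_1}{w_2},
\]
% i.e., the weight vector is normal to the tradeoff surface at the solution it
% selects, which is why weights chosen from the surface's partial derivatives
% (rather than uniformly spaced weights) can sample the front uniformly.
```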
- Published
- 2010
25. Extreme Value Theory: Application to Memory Statistics
- Author
-
Robert C. Aitken, Rob A. Rutenbar, and Amith Singhee
- Subjects
Bit cell ,Sense amplifier ,Margin (machine learning) ,Computer science ,media_common.quotation_subject ,Cumulative distribution function ,Assertion ,Quality (business) ,Function (mathematics) ,Arithmetic ,Extreme value theory ,media_common - Abstract
Device variability is increasingly important in memory design, and a fundamental question is how much design margin is enough to ensure high quality and robust operation without overconstraining performance. For example, it is very unlikely that the “worst” bit cell is associated with the “worst” sense amplifier, making an absolute “worst-case” margin method overly conservative, but this assertion needs to be formalized and tested. Standard statistical techniques tend to be of limited use for this type of analysis, for two primary reasons: First, worst-case values by definition involve the tails of distributions, where data is limited and normal approximations can be poor. Second, the worst-case function itself does not lend itself to simple computation (the worst case of a sum is not the sum of worst cases, for example). These concepts are elaborated later in this chapter.
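A tiny simulation makes the chapter's point concrete: the worst observed sum of two independent quantities sits well below the sum of their individual worst cases (synthetic normal data standing in for the margins):

```python
# Tiny illustration of the point above: the worst case of a sum is not the sum
# of worst cases. Synthetic normal data stands in for bit-cell and sense-amp margins.
import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000
cell_margin = rng.normal(0.0, 1.0, n)   # per-instance bit-cell margin loss
amp_margin = rng.normal(0.0, 1.0, n)    # independently drawn sense-amp margin loss

worst_sum = (cell_margin + amp_margin).max()
sum_worst = cell_margin.max() + amp_margin.max()
print(f"worst observed sum: {worst_sum:.2f}")
print(f"sum of worst cases: {sum_worst:.2f}  (pessimistic by {sum_worst - worst_sum:.2f} sigma)")
```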
- Published
- 2010
26. Introduction
- Author
-
Amith Singhee
- Published
- 2010
27. Extreme Statistics in Nanoscale Memory Design
- Author
-
Amith Singhee and Rob A. Rutenbar
- Subjects
Hardware_MEMORYSTRUCTURES ,Computer science ,Computation ,Probabilistic logic ,Computer Science::Hardware Architecture ,Computer Science::Emerging Technologies ,Margin (machine learning) ,Nano cmos ,Statistics ,Electronic engineering ,Point (geometry) ,Static random-access memory ,Extreme value theory ,Importance sampling - Abstract
Contents: Extreme Statistics in Memories; Statistical Nano CMOS Variability and Its Impact on SRAM; Importance Sampling-Based Estimation: Applications to Memory Design; Direct SRAM Operation Margin Computation with Random Skews of Device Characteristics; Yield Estimation by Computing Probabilistic Hypervolumes; Most Probable Point-Based Methods; Extreme Value Theory: Application to Memory Statistics.
- Published
- 2010
28. Extreme Statistics in Memories
- Author
-
Amith Singhee
- Subjects
Memory cell ,Computer science ,Yield (finance) ,Process (computing) ,Extreme value theory ,Memory array ,Reliability engineering ,Power (physics) ,Event (probability theory) ,Voltage - Abstract
Memory design specifications typically include yield requirements, apart from performance and power requirements. These yield requirements are usually specified for the entire memory array at some supply voltage and temperature conditions. For example, the designer may be comfortable with an array failure probability of one in a thousand at 100 °C and 1 V supply, i.e., F_f,array ≤ 10^-3. However, how does this translate to a yield requirement for the memory cell? How do we even estimate the statistical distribution of memory cell performance metrics in this extreme rare-event regime? We will answer these questions and in the process see the application of certain machine learning techniques and extreme value theory in memory design.
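The array-to-cell translation that these questions point to follows, under the usual assumption of independent, nominally identical cells (a first-order sketch, not quoted from the chapter):

```latex
% Array-to-cell failure-probability translation for N nominally identical,
% independent cells (a standard first-order sketch):
\[
  F_{f,\mathrm{array}} \;=\; 1 - \bigl(1 - F_{f,\mathrm{cell}}\bigr)^{N}
  \;\approx\; N\,F_{f,\mathrm{cell}}
  \qquad \text{when } N\,F_{f,\mathrm{cell}} \ll 1 .
\]
% For a 1 Mb array (N = 2^{20}) with F_{f,array} <= 10^{-3}, this requires
% F_{f,cell} on the order of 10^{-9}, i.e., roughly a 6-sigma cell.
```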
- Published
- 2010
29. Yield estimation of SRAM circuits using 'Virtual SRAM Fab'
- Author
-
Amith Singhee, Ching-Te Chuang, Koushik K. Das, Rama Nand Singh, Jin-Fuw Lee, Fook-Luen Heng, Sani R. Nassif, Saibal Mukhopadhyay, Aditya Bansal, Emrah Acar, Rouwaida Kanj, and Keunwoo Kim
- Subjects
Very-large-scale integration ,Engineering ,Hardware_MEMORYSTRUCTURES ,business.industry ,Silicon on insulator ,Schematic ,Process design ,Integrated circuit layout ,Bottleneck ,Hardware_INTEGRATEDCIRCUITS ,Electronic engineering ,Static random-access memory ,business ,Random access - Abstract
Static random access memories (SRAMs) are key components of modern VLSI designs and a major bottleneck to technology scaling, as they use the smallest-size devices with high sensitivity to manufacturing details. Analysis performed at the "schematic" level can be deceiving, as it ignores the interdependence between the implementation layout and the resulting electrical performance. We present a computational framework, referred to as the "Virtual SRAM Fab", for analyzing and estimating pre-Si SRAM array manufacturing yield considering both lithographic and electrical variations. The framework is demonstrated for SRAM design and optimization at the 45 nm node and is currently being used for the 32 nm and 22 nm technology nodes. The application and merit of the framework are illustrated using two different SRAM cells in a 45 nm PD/SOI technology, which have been designed for similar stability and performance but exhibit different parametric yields due to layout/lithographic variations. We also demonstrate the application of the Virtual SRAM Fab for prediction of layout-induced imbalance in an 8T cell, which is a popular candidate for SRAM implementation at the 32-22 nm technology nodes.
- Published
- 2009
30. Robust Circuit Design: Challenges and Solutions
- Author
-
Saurabh K. Tiwary, Amith Singhee, and Vikas Chandra
- Subjects
Digital electronics ,Moore's law ,business.industry ,Computer science ,media_common.quotation_subject ,Circuit design ,Design flow ,Integrated circuit design ,Circuit reliability ,Design for manufacturability ,Electronic engineering ,Electronic design automation ,business ,media_common - Abstract
Scaling with Moore’s law is taking us to feature sizes of 32nm and smaller. At these technology nodes designers are faced with an explosion in design complexity at all levels. In this tutorial we discuss three somewhat novel and particularly confounding dimensions of this complexity: • Electrical complexity: Digital circuit designers have particularly benefited from abstractions of the underlying MOS devices while designing circuits. For them, a transistor is an ideal switch and a wire is a perfect short between two nodes. This simplified abstraction is the driving force behind our capability to design and verify chips with about a billion transistors today. However, with aggressive device scaling, the properties of the devices that are being manufactured today are moving further away from the abstractions that we have been using to verify our designs. In this section of the tutorial, we look at some of the recent trends along these lines and some of the techniques that designers use to extract ideal functionality from non-ideal devices. We use design examples, both from analog/mixed-signal (PLL, ADC design) and digital domain (clock tree, power network, static timing, etc.), as illustrative cases studies. • Manufacturing complexity: Minimum feature sizes at 45nm are already a quarter of the wavelength of light used for lithography. Consequently, imperfections in manufacturing are unavoidable and large enough to significantly change the intended design, resulting in dreaded yield loss. Any design today has to satisfy stringent manufacturability and yield requirements. At the same time, the complexity of critical variation mechanisms renders any simplified methods, like corner analysis, ineffective. Design methods and tools are being changed at all levels of the design flow to improve yield prediction and increase manufacturing robustness. In this vein, this tutorial will cover a broad spectrum of topics: 1) relevant state-of-the-art manufacturing process steps at 45 nm (193 nm lithography, ion implantation, etc.) and the physical mechanisms resulting in electrical performance variations, 2) recently proposed design techniques for mitigating the electrical variability, and 3) recently proposed design tools for increasing robustness and predicting the yield impact of this variability. We will look at various design phases from circuit architecture down to post-layout, and at several applications from SRAMs to ASICs to analog. • Reliability complexity: With nearly three decades of continued CMOS scaling, the devices have now been pushed to their physical and reliability limits. Transistors on the latest chips in 45nm technology are so small that some of their parts are just a few atoms apart. Designs manufactured correctly may become unreliable over time because of mechanisms like NBTI, gate oxide breakdown and soft errors. The impact of unreliability manifests as time-dependent variability where the electrical characteristics of the devices vary statistically in a temporal manner, directly translating into design uncertainty in manufactured chips. Scaling to sub45nm technology nodes changes the nature of reliability effects from abrupt functional problems to progressive degradation of the performance characteristics. The material presented in this section of the tutorial is intended for designers to form a thorough
- Published
- 2009
31. Novel Algorithms for Fast Statistical Analysis of Scaled Circuits
- Author
-
Amith Singhee and Rob A. Rutenbar
- Published
- 2009
32. Concluding Observations
- Author
-
Amith Singhee and Rob A. Rutenbar
- Published
- 2009
33. Statistical Blockade: Estimating Rare Event Statistics
- Author
-
Rob A. Rutenbar and Amith Singhee
- Subjects
Normal distribution ,Computer science ,Monte Carlo method ,Statistics ,Rare events ,Failure rate ,Static random-access memory ,Extreme value theory ,Event (probability theory) ,Term (time) - Abstract
Consider the case of a 1-megabit (Mb) SRAM array, which has 1 million "identical" instances of an SRAM cell. These instances are designed to be identical, but due to manufacturing variations, they usually differ. Suppose we desire a chip yield of 99%; that is, no more than one chip per 100 should fail. This means that, on average, no more than (approximately) one per 100 × 1 million SRAM cells, i.e., 10 per billion, should fail. This translates to a required circuit yield of 99.999999%, or a maximum failure rate of 0.01 ppm for the SRAM cell. This failure probability is the same as for a 5.6σ point on the standard normal distribution. If we want to estimate the yield of such an SRAM cell in the design phase, a standard Monte Carlo approach would require at least 100 million SPICE simulations on average to obtain just one failing sample point! Even then, the estimate of the yield or failure probability will be suspect because of the lack of statistical confidence, the estimate being computed using only one failing example. Such a large number of simulations is utterly intractable. This example clearly illustrates the widespread problem with designing robust memories in the presence of process variations: we need to simulate rare or extreme events and estimate the statistics of these rare events. The problem of simulating and modeling rare events arises for any circuit that has a large number of identical replications on the same chip, as in DRAM arrays and non-volatile memories. We term such circuits high-replication circuits (HRCs).
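The arithmetic in this passage can be checked directly against the standard normal tail (a quick sketch using scipy):

```python
# Quick check of the arithmetic above using the standard normal tail.
from scipy.stats import norm

chip_fail = 0.01                         # 99% chip yield
cells_per_chip = 1_000_000               # 1 Mb array
cell_fail = chip_fail / cells_per_chip   # ~1e-8, i.e. 0.01 ppm

print(f"required cell failure rate: {cell_fail:.1e}  ({cell_fail * 1e6:.2f} ppm)")
print(f"equivalent standard-normal point: {norm.isf(cell_fail):.2f} sigma")
print(f"expected MC samples per failing point: {1 / cell_fail:,.0f}")
```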
- Published
- 2009
34. Statistical Blockade: Estimating Rare Event Statistics for Memories
- Author
-
Amith Singhee and Rob A. Rutenbar
- Subjects
Non-volatile memory ,Computer science ,Statistics ,Monte Carlo method ,Process (computing) ,Sensitivity (control systems) ,Static random-access memory ,Extreme value theory ,Parametric statistics ,Electronic circuit - Abstract
As we move deeper into sub-65 nm technology nodes, uncontrollable random parametric variations have become a critical hurdle for achieving high yield. This problem is particularly crippling for high-replication circuits (HRCs), circuits such as SRAM cells, nonvolatile memory cells, and other memory cells that are replicated millions of times on the same chip, because of aggressive cell design, the requirement of meeting very high (>5σ) levels of yield, and the usually higher sensitivity of such circuits to process variations. However, it has proved difficult to even estimate such high yield values efficiently, making it very difficult for designers to adopt an accurate, variation-aware design methodology. This chapter develops a general statistical methodology to estimate parametric memory yields. The keystone of the methodology is a technique called statistical blockade, which combines Monte Carlo simulation, machine learning, and extreme value theory to simulate very rare failure events and to compute analytical models for the tail distributions of circuit performance metrics. Several circuit examples are analyzed in detail to enable a deep understanding of the theory and its practical use in a real-world setting. The treatment is directed toward both the memory designer and the EDA engineer.
- Published
- 2009
35. Quasi-Monte Carlo for Fast Statistical Simulation of Circuits
- Author
-
Rob A. Rutenbar and Amith Singhee
- Subjects
Combinational logic ,Computer science ,Circuit design ,Monte Carlo method ,Dynamic Monte Carlo method ,Monte Carlo method in statistical physics ,Quasi-Monte Carlo method ,Process corners ,Algorithm ,Monte Carlo molecular modeling - Abstract
Continued device scaling has dramatically increased the statistical variability with which circuit designers must contend in order to ensure that a circuit remains reliable under these variations. As discussed in the introduction to this thesis, traditional process corner analysis is no longer reliable because the variations are numerous and much more complex than can be handled by such simple techniques. Going forward, it is increasingly important that we account accurately for the statistics of these variations during circuit design. In a few special cases, we have analytical methods that can cast this inherently statistical problem into a deterministic formulation, e.g., optimal transistor sizing and threshold assignment in combinational logic under statistical yield and timing constraints, as in Mani et al. (Proc. IEEE/ACM Design Autom. Conf., 2005). Unfortunately, such analytical solutions remain rare. In the general case, some combination of complex statistics, high dimensionality, profound nonlinearity or non-normality, stringent accuracy, and expensive performance evaluation (e.g., SPICE simulation) thwarts our analytical aspirations. This is where Monte Carlo methods (Glasserman, Monte Carlo Methods in Financial Engineering, Springer, Berlin, 2004) come to our rescue as true statistical methods.
- Published
- 2009
36. SiLVR: Projection Pursuit for Response Surface Modeling
- Author
-
Amith Singhee and Rob A. Rutenbar
- Published
- 2009
37. Exploiting correlation kernels for efficient handling of intra-die spatial correlation, with application to statistical timing
- Author
-
Amith Singhee, Sonia Singhal, and Rob A. Rutenbar
- Subjects
Karhunen–Loève theorem ,Spatial correlation ,Mathematical optimization ,Random field ,Computer science ,Robustness (computer science) ,Numerical analysis ,Monte Carlo method ,Algorithm design ,Galerkin method ,Random variable ,Algorithm - Abstract
Intra-die manufacturing variations are unavoidable in nanoscale processes. These variations often exhibit strong spatial correlation. Standard grid-based models assume model parameters (grid size, regularity) in an ad hoc manner and can have high measurement cost. The random field model overcomes these issues. However, no general algorithm has been proposed for the practical use of this model in statistical CAD tools. In this paper, we propose a robust and efficient numerical method, based on the Galerkin technique and the Karhunen-Loève expansion, that enables effective use of the model. We test the effectiveness of the technique using a Monte Carlo-based statistical static timing analysis algorithm, and see errors of less than 0.7% while reducing the number of random variables from thousands to 25, resulting in speedups of up to 100×.
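A minimal numpy sketch of the truncation idea, building an exponential correlation kernel on a grid of die locations and keeping the 25 dominant Karhunen-Loève modes (illustrative; not the paper's Galerkin construction):

```python
# Minimal sketch (illustrative; not the paper's Galerkin construction): truncated
# Karhunen-Loeve expansion of a spatially correlated random field on a die grid.
import numpy as np

n, corr_length = 40, 8.0                      # 40x40 grid of die locations (made up)
xy = np.stack(np.meshgrid(np.arange(n), np.arange(n)), -1).reshape(-1, 2).astype(float)
dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
cov = np.exp(-dist / corr_length)             # exponential correlation kernel

eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]
k = 25                                        # keep the 25 dominant modes
phi = eigvecs[:, order[:k]] * np.sqrt(eigvals[order[:k]])

captured = eigvals[order[:k]].sum() / eigvals.sum()
print(f"{k} modes capture {captured:.1%} of the total variance")

# One field realization now needs only k independent standard normals
z = np.random.default_rng(6).normal(size=k)
field = phi @ z                               # length n*n vector of correlated variations
```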
- Published
- 2008
38. Recursive Statistical Blockade: An Enhanced Technique for Rare Event Simulation with Application to SRAM Circuit Design
- Author
-
Amith Singhee, Jiajing Wang, Benton H. Calhoun, and Rob A. Rutenbar
- Subjects
Computer science ,Circuit design ,Monte Carlo method ,Parallel computing ,Static random-access memory ,Integrated circuit design ,Statistical process control ,Circuit reliability ,FLOPS ,Electronic circuit - Abstract
Circuit reliability under statistical process variation is an area of growing concern. For highly replicated circuits such as SRAMs and flip-flops, a rare statistical event for one circuit may induce a not-so-rare system failure. Statistical blockade was proposed as a Monte Carlo technique that allows us to efficiently filter, i.e., block, unwanted samples insufficiently rare in the tail distributions we seek. However, there are significant practical problems with the technique. In this work, we show common scenarios in SRAM design where these problems render statistical blockade ineffective. We then propose significant extensions to make statistical blockade practically usable in these common scenarios. We show speedups of 10^2+ over standard statistical blockade and 10^4+ over standard Monte Carlo, for an SRAM cell in an industrial 90 nm technology.
- Published
- 2008
39. Statistical Blockade: A Novel Method for Very Fast Monte Carlo Simulation of Rare Circuit Events, and its Application
- Author
-
Amith Singhee and Rob A. Rutenbar
- Published
- 2008
40. Statistical modeling for the minimum standby supply voltage of a full SRAM array
- Author
-
Jiajing Wang, Rob A. Rutenbar, Benton H. Calhoun, and Amith Singhee
- Subjects
Engineering ,business.industry ,Monte Carlo method ,Statistical model ,Upper and lower bounds ,Reduction (complexity) ,symbols.namesake ,Control theory ,symbols ,Electronic engineering ,Pareto distribution ,Static random-access memory ,Cache ,business ,Voltage - Abstract
This paper presents two fast and accurate methods to estimate the lower bound of supply voltage scaling for standby SRAM/cache leakage power reduction of an SRAM array. The data retention voltage (DRV) defines the minimum supply voltage for a cell to hold its state. Within-die variation causes a statistical distribution of DRV for individual cells in a memory array, and cells far out in the tail (i.e., >6σ) limit the array DRV for large memories. We present two statistical methods to estimate the tail of the DRV distribution. First, we develop a new statistical model based on the connection between DRV and static noise margin (SNM). Second, we apply our statistical blockade tool to obtain fast Monte Carlo simulation and a generalized Pareto distribution (GPD) model for comparison. Both the new model and the GPD model offer high accuracy and large speedups (10^4× for a 1 Gb memory) over Monte Carlo simulation. In addition, both models show very close agreement with each other at the tail, even beyond 7σ.
- Published
- 2007
41. From Finance to Flip Flops: A Study of Fast Quasi-Monte Carlo Methods from Computational Finance Applied to Statistical Circuit Analysis
- Author
-
Rob A. Rutenbar and Amith Singhee
- Subjects
Digital electronics ,Mathematical optimization ,Analogue electronics ,Computer science ,business.industry ,Computational finance ,Monte Carlo method ,Sample (statistics) ,Quasi-Monte Carlo method ,business ,Algorithm ,Parametric statistics ,Network analysis - Abstract
Problems in computational finance share many of the characteristics that challenge us in statistical circuit analysis: high dimensionality, profound nonlinearity, stringent accuracy requirements, and expensive sample simulation. We offer a detailed experimental study of how one celebrated technique from this domain, quasi-Monte Carlo (QMC) analysis, can be used for fast statistical circuit analysis. In contrast with traditional pseudo-random Monte Carlo sampling, QMC substitutes a (shorter) sequence of deterministically chosen sample points. Across a set of digital and analog circuits, in 90 nm and 45 nm technologies, varying in size from 30 to 400 devices, we obtain speedups in parametric yield estimation from 2× to 50×.
- Published
- 2007
42. Beyond low-order statistical response surfaces
- Author
-
Rob A. Rutenbar and Amith Singhee
- Subjects
Nonlinear system ,Mathematical optimization ,Quadratic equation ,Artificial neural network ,Computer science ,Dimensionality reduction ,Regression analysis ,Limit (mathematics) ,Latent variable ,Algorithm - Abstract
The number and magnitude of process variation sources are increasing as we scale further into the nano regime. Today's most successful response surface methods limit us to low-order forms, linear or quadratic, to make the fitting tractable. Unfortunately, not all variational scenarios are well modeled with low-order surfaces. We show how to exploit latent variable regression ideas to support efficient extraction of arbitrarily nonlinear statistical response surfaces. An implementation of these ideas called SiLVR, applied to a range of analog and digital circuits in technologies from 90 to 45 nm, shows significant improvements in prediction, with errors reduced by up to 21×, at very reasonable runtime costs.
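A small sketch of the projection-pursuit idea behind such latent-variable response surfaces: search for a single latent direction and fit a 1-D nonlinear ridge function along it (a cubic here, on synthetic data; SiLVR's actual sigmoid-network form is different):

```python
# Small sketch of the projection-pursuit idea: find one latent direction w and
# fit a 1-D nonlinear (cubic) ridge function of t = w.x. Synthetic data;
# this is not SiLVR's actual model form.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
X = rng.normal(size=(2_000, 6))                             # 6 statistical parameters
true_w = np.array([0.8, -0.5, 0.3, 0.0, 0.0, 0.1])
y = np.tanh(X @ true_w) + 0.05 * rng.normal(size=len(X))    # nonlinear 1-D behavior

def residual(w):
    norm_w = np.linalg.norm(w)
    if norm_w < 1e-8:                       # guard against degenerate directions
        return 1e9
    t = X @ (w / norm_w)
    coeffs = np.polyfit(t, y, deg=3)        # cubic ridge function along the projection
    return np.mean((np.polyval(coeffs, t) - y) ** 2)

fit = minimize(residual, x0=np.ones(6), method="Nelder-Mead",
               options={"maxiter": 5_000, "xatol": 1e-6, "fatol": 1e-9})
w_hat = fit.x / np.linalg.norm(fit.x)
print("recovered direction (up to sign):", np.round(w_hat, 2))
print("relative RMS error:", np.sqrt(residual(fit.x)) / y.std())
```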
- Published
- 2007
43. Probabilistic interval-valued computation
- Author
-
James D. Ma, Amith Singhee, Claire F. Fang, and Rob A. Rutenbar
- Subjects
Mathematical optimization ,Monte Carlo method ,Probabilistic logic ,Statistical model ,Computer Graphics and Computer-Aided Design ,Confidence interval ,Interval arithmetic ,Range (mathematics) ,Statistics ,Graph (abstract data type) ,Interval (graph theory) ,Affine transformation ,Electrical and Electronic Engineering ,Software ,Mathematics - Abstract
Interval methods offer a general fine-grain strategy for modeling correlated range uncertainties in numerical algorithms. We present a new improved interval algebra that extends the classical affine form to a more rigorous statistical foundation. Range uncertainties now take the form of confidence intervals. In place of pessimistic interval bounds, we minimize the probability of numerical "escape"; this can tighten interval bounds by an order of magnitude while yielding 10-100 times speedups over Monte Carlo. The formulation relies on the following three critical ideas: liberating the affine model from the assumption of symmetric intervals; a unifying optimization formulation; and a concrete probabilistic model. We refer to these as probabilistic intervals for brevity. Our goal is to understand where we might use these as a surrogate for expensive explicit statistical computations. Results from sparse matrices and graph delay algorithms demonstrate the utility of the approach and the remaining challenges.
- Published
- 2006
44. Remembrance of circuits past
- Author
-
Hongzhou Liu, Amith Singhee, L.R. Carley, and Rob A. Rutenbar
- Subjects
Theoretical computer science ,Computer science ,Parameterized complexity ,Algorithm design ,Topology (electrical circuits) ,Integrated circuit design ,Data mining ,Space (mathematics) ,computer.software_genre ,computer ,Electronic circuit - Abstract
The introduction of simulation-based analog synthesis tools creates a new challenge for analog modeling. These tools routinely visit 10^3 to 10^5 fully simulated circuit solution candidates. What might we do with all this circuit data? We show how to adapt recent ideas from large-scale data mining to build models that capture significant regions of this visited performance space, parameterized by variables manipulated by synthesis, trained by the data points visited during synthesis. Experimental results show that we can automatically build useful nonlinear regression models for large analog design spaces.
- Published
- 2002