17 results for "David Thompson"
Search Results
2. A mathematical model for electricity scarcity pricing in ERCOT real-time energy-only market
- Author
-
Jian Chen, Sean Chang, David Thompson, Resmi Surendran, Jiachun Guo, Dave Maggio, Cong Liu, Zhengguo Chu, and Hailong Hui
- Subjects
Optimization problem, Operations research, Operating reserve, Computer science, Economic dispatch, Scarcity, Reservation price, Order (exchange), Demand curve, Production (economics)
- Abstract
In order to support an appropriate level of resource adequacy in the long term and achieve reliable operations in the short term, a better scarcity pricing design and mechanism will be a key factor for success. This paper proposes an optimization-based mathematical model to deduce the different price-adder components under scarcity conditions in the ERCOT real-time energy-only market. Two successive optimization problems, dispatch correction and extended security-constrained economic dispatch (SCED), are presented. The extended SCED minimizes the production costs minus the expected social welfare of real-time online/offline reserve capacity associated with the constructed operating reserve demand curves (ORDC). The derivatives of the social welfare are formulated as real-time online reserve price adders. In case studies, we calculate the price adders and compare them with the production results in the ERCOT real-time energy-only market.
- Published
- 2017
- Full Text
- View/download PDF
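The scarcity-pricing entry above derives reserve price adders from ORDC curves inside an extended SCED. As a hedged illustration only, the sketch below uses the commonly cited simplified closed form, adder = (VOLL - lambda) * LOLP(reserve), with an assumed normal reserve-error distribution; the VOLL, reserve threshold, and error statistics are made-up values, not the paper's model or data.

```python
# Hedged, simplified illustration of an ORDC-style scarcity price adder.
# The paper derives adders from an extended SCED; this sketch only shows the
# commonly cited form adder = (VOLL - lambda) * LOLP(reserve), with an
# assumed normal distribution for the reserve error. All numbers are made up.
from math import erf, sqrt

VOLL = 9000.0             # assumed value of lost load, $/MWh (illustrative)
MIN_CONTINGENCY = 2000.0  # assumed reserve level at which LOLP approaches 1, MW
MU, SIGMA = 0.0, 900.0    # assumed forecast/outage error statistics, MW

def lolp(reserve_mw: float) -> float:
    """Probability that reserves net of error fall below the minimum level."""
    z = (reserve_mw - MIN_CONTINGENCY - MU) / (SIGMA * sqrt(2.0))
    return 0.5 * (1.0 - erf(z))  # P[error > reserve - minimum]

def reserve_price_adder(reserve_mw: float, system_lambda: float) -> float:
    """ORDC-style online reserve adder: scarcity value above the energy price."""
    return max(0.0, (VOLL - system_lambda) * lolp(reserve_mw))

if __name__ == "__main__":
    for r in (2500.0, 3500.0, 5000.0):
        print(f"reserve={r:6.0f} MW  adder={reserve_price_adder(r, 35.0):8.2f} $/MWh")
```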
3. Implantable neurostimulator lead transfer function based on the transmission line model
- Author
-
David Thompson, Hjalti H. Sigmarsson, and Sattar Atash-bahar
- Subjects
Materials science, Magnetic resonance imaging, Diagnostic tools, Transfer function, Transmission line, Lead (electronics), MRI scan, Energy (signal processing), Biomedical engineering
- Abstract
Magnetic Resonance Imaging (MRI) has become one of the most important clinical diagnostic tools. However, millions of patients with implanted medical devices are either excluded or require special MRI scan conditions due to the possibility of heating caused by energy coupled between the Lead and the RF portion of the MRI system. A Transmission Line (TL) model and transfer function for the Lead are necessary to understand its behavior during MRI scanning and to reduce the induced current coupled onto the Lead. In this paper, a novel method to generate the Lead transfer function is introduced. The measurement results confirm the accuracy of the TL model.
- Published
- 2017
- Full Text
- View/download PDF
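The lead transfer-function concept in the entry above is commonly used to estimate RF-induced tip current as the integral of the transfer function against the tangential electric field along the lead. The sketch below illustrates that numerically; the decaying complex transfer-function profile, the uniform 100 V/m field, and the 60 cm lead length are all assumptions for illustration, not the paper's measured model.

```python
# Hedged sketch: using a lead transfer function S(z) to estimate the RF-induced
# tip current from the tangential E-field along an implanted lead,
#   I_tip = | integral_0^L S(z) * E_tan(z) dz |
# The transfer function here is a made-up decaying complex profile; the paper
# builds its model from transmission-line theory and measurements.
import numpy as np

def tip_current(s_of_z: np.ndarray, e_tan: np.ndarray, dz: float) -> float:
    """Magnitude of the induced tip current for sampled S(z) and E_tan(z)."""
    return abs(np.sum(s_of_z * e_tan) * dz)

if __name__ == "__main__":
    length_m, n = 0.6, 601                     # assumed 60 cm lead, 1 mm samples
    z = np.linspace(0.0, length_m, n)
    dz = z[1] - z[0]
    # Assumed transfer function: exponentially decaying magnitude, linear phase.
    s = np.exp(-z / 0.3) * np.exp(-1j * 2 * np.pi * z / 0.45)
    # Assumed uniform tangential field of 100 V/m at the MRI RF frequency.
    e_tan = np.full(n, 100.0 + 0j)
    print(f"estimated |I_tip| (arbitrary units): {tip_current(s, e_tan, dz):.3f}")
```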
4. Virtually Unplugged: Rich Data Capture to Evaluate CS Pedagogy in 3D Virtual Worlds
- Author
-
Tim Bell and David Thompson
- Subjects
Data collection, Multimedia, Virtual world, Computer science, Teaching method, Automatic identification and data capture, Metaverse, Test (assessment), Software, Human–computer interaction, Server
- Abstract
Being able to teach students about CS concepts effectively is a growing priority in primary schools. Teachers need to develop not just their content knowledge but also best-practice pedagogical approaches. Direct observation is one way of evaluating new teaching methods, but research with video/screen capture is typically time-consuming and difficult to get quick insights from. In this study we explore using 3D virtual worlds to quickly collect richer information from students working on an educational activity in a simulated 3D environment such as Second Life. We report on a case study exploring the insights that could be gleaned from this data collection, and compare how students learned in a 3D virtual world with simpler 2D map-based versions of a CS Unplugged problem-solving activity. The activity chosen was initially designed for physical use, and did not intrinsically favour either computer-based environment. Students using the 2D condition found a non-optimal solution faster, but the amount of extra time taken to find the optimal solution was similar to the 3D condition, and students had similar post-test scores in both conditions. Students using the 3D condition appeared to choose their actions more carefully, reflecting the higher physical load and slower response times of the virtual world environment. More students thought the 3D activity was fun and would choose to do it again, and reported that it felt more like they were learning. While authoring tools are improving, virtual world content can still be challenging to develop and deploy. Care is needed to design engaging activities, and to avoid creating environments that are immersive but a poor fit for learning.
- Published
- 2015
- Full Text
- View/download PDF
5. Combining in-situ and in-transit processing to enable extreme-scale scientific analysis
- Author
-
Janine C. Bennett, Hasan Abbasi, Peer-Timo Bremer, Ray Grout, Attila Gyulassy, Tong Jin, Scott Klasky, Hemanth Kolla, Manish Parashar, Valerio Pascucci, Philippe Pebay, David Thompson, Hongfeng Yu, Fan Zhang, and Jacqueline Chen
- Published
- 2012
- Full Text
- View/download PDF
6. The ParaView Coprocessing Library: A scalable, general purpose in situ visualization library
- Author
-
Berk Geveci, Nathan Fabian, Kenneth E. Jansen, Andrew Bauer, Kenneth Moreland, David Thompson, Michel Rasquin, and Pat Marion
- Subjects
Computer science, Feature extraction, Supercomputer, Data modeling, Visualization, Workflow, Data visualization, Computer architecture, Scalability, Operating system, Data compression
- Abstract
As high performance computing approaches exascale, CPU capability far outpaces disk write speed, and in situ visualization becomes an essential part of an analyst's workflow. In this paper, we describe the ParaView Coprocessing Library, a framework for in situ visualization and analysis coprocessing. We describe how coprocessing algorithms (building on many from VTK) can be linked and executed directly from within a scientific simulation or other applications that need visualization and analysis. We also describe how the ParaView Coprocessing Library can write out partially processed, compressed, or extracted data readable by a traditional visualization application for interactive post-processing. Finally, we demonstrate the library's scalability in a number of real-world scenarios.
- Published
- 2011
- Full Text
- View/download PDF
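The coprocessing entry above describes linking analysis pipelines directly into a running simulation. The sketch below shows that general in situ pattern (the simulation loop periodically handing its data to an analysis adaptor that runs pipelines and writes extracts) in library-agnostic form; the names CoprocessingAdaptor, should_process, process, and write_extract are hypothetical stand-ins, not the ParaView Coprocessing Library's actual API.

```python
# Hedged sketch of the in situ coprocessing pattern described above. The class
# and method names (CoprocessingAdaptor, should_process, process) are
# hypothetical stand-ins, NOT the actual ParaView Coprocessing Library API.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class CoprocessingAdaptor:
    # Analysis pipelines are plain callables taking (grid, step, time).
    pipelines: List[Callable] = field(default_factory=list)
    frequency: int = 10          # run analysis every N simulation steps

    def should_process(self, step: int) -> bool:
        """Cheap check the simulation makes before handing over its data."""
        return step % self.frequency == 0

    def process(self, grid, step: int, time: float) -> None:
        """Run every registered pipeline on the simulation's current state."""
        for pipeline in self.pipelines:
            pipeline(grid, step, time)

def write_extract(grid, step, time):
    # Stand-in for writing a compressed/extracted dataset for post-processing.
    print(f"step {step:4d} t={time:6.3f}: wrote extract with {len(grid)} cells")

if __name__ == "__main__":
    adaptor = CoprocessingAdaptor(pipelines=[write_extract], frequency=25)
    grid, dt = [0.0] * 1000, 1e-3          # toy "grid" and time step
    for step in range(101):                # toy simulation loop
        # ... advance the simulation state here ...
        if adaptor.should_process(step):
            adaptor.process(grid, step, step * dt)
```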
7. Analysis of large-scale scalar data using hixels
- Author
-
Joshua A. Levine, Janine C. Bennett, Attila Gyulassy, Philippe Pierre Pebay, David Thompson, Peer-Timo Bremer, and Valerio Pascucci
- Subjects
Computer science, Feature extraction, Data structure, External Data Representation, Visualization, Data modeling, Data visualization, Histogram, Probability distribution, Data mining
- Abstract
One of the greatest challenges for today's visualization and analysis communities is the massive amounts of data generated from state of the art simulations. Traditionally, the increase in spatial resolution has driven most of the data explosion, but more recently ensembles of simulations with multiple results per data point and stochastic simulations storing individual probability distributions are increasingly common. This paper introduces a new data representation for scalar data, called hixels, that stores a histogram of values for each sample point of a domain. The histograms may be created by spatial down-sampling, binning ensemble values, or polling values from a given distribution. In this manner, hixels form a compact yet information-rich approximation of large-scale data. In essence, hixels trade off data size and complexity for scalar-value “uncertainty”. Based on this new representation we propose new feature detection algorithms using a combination of topological and statistical methods. In particular, we show how to approximate topological structures from hixel data, extract structures from multi-modal distributions, and render uncertain isosurfaces. In all three cases we demonstrate how using hixels compares to traditional techniques and provide new capabilities to recover prominent features that would otherwise be either infeasible to compute or ambiguous to infer. We use a collection of computed tomography data and large-scale combustion simulations to illustrate our techniques.
- Published
- 2011
- Full Text
- View/download PDF
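The hixel entry above stores a histogram of values per sample point. A minimal sketch of the spatial down-sampling case follows: a 2D scalar field is divided into blocks and each block keeps a histogram instead of a single value. The block size and bin count are arbitrary illustrative choices, not the paper's parameters.

```python
# Hedged sketch of the hixel idea: down-sample a 2D scalar field into blocks
# and store a histogram of values per block instead of a single scalar.
import numpy as np

def hixelize(field: np.ndarray, block: int = 8, bins: int = 16) -> np.ndarray:
    """Return an array of shape (ny//block, nx//block, bins) of per-block histograms."""
    ny, nx = field.shape
    lo, hi = float(field.min()), float(field.max())
    out = np.zeros((ny // block, nx // block, bins))
    for j in range(ny // block):
        for i in range(nx // block):
            vals = field[j*block:(j+1)*block, i*block:(i+1)*block].ravel()
            out[j, i], _ = np.histogram(vals, bins=bins, range=(lo, hi))
    return out

if __name__ == "__main__":
    y, x = np.mgrid[0:128, 0:128]
    scalar = np.sin(x / 9.0) * np.cos(y / 13.0)      # toy scalar field
    hixels = hixelize(scalar)
    print("hixel grid:", hixels.shape)               # (16, 16, 16)
    # Each hixel can then feed topological/statistical feature detection.
```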
8. Computing Contingency Statistics in Parallel: Design Trade-Offs and Limiting Cases
- Author
-
David Thompson, Janine C. Bennett, and Philippe Pierre Pebay
- Subjects
Contingency table, Speedup, Theoretical computer science, Descriptive statistics, Computer science, Embarrassingly parallel, Mutual information, Statistics, Scalability, Principal component analysis, Entropy (information theory), Algorithm design, Marginal distribution, Random variable, Numerical stability
- Abstract
Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, pointwise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics (which we discussed in [1]), where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speedup and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
- Published
- 2010
- Full Text
- View/download PDF
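The entry above lists the statistics that can be read off a contingency table. The sketch below computes those quantities (joint and marginal probabilities, pointwise mutual information, entropy, and the χ² independence statistic) for a toy serial table; it is only a definition-level illustration, not the paper's parallel implementation.

```python
# Hedged sketch of the statistics derived from a contingency table:
# joint/marginal probabilities, pointwise mutual information, entropy,
# and the chi-squared independence statistic. Serial toy version only.
import numpy as np

def contingency_stats(table: np.ndarray):
    n = table.sum()
    p_xy = table / n                       # joint probabilities
    p_x = p_xy.sum(axis=1, keepdims=True)  # row marginals
    p_y = p_xy.sum(axis=0, keepdims=True)  # column marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.where(p_xy > 0, np.log2(p_xy / (p_x * p_y)), 0.0)
        entropy = -np.nansum(np.where(p_xy > 0, p_xy * np.log2(p_xy), 0.0))
    expected = p_x * p_y * n
    chi2 = ((table - expected) ** 2 / expected).sum()
    return pmi, entropy, chi2

if __name__ == "__main__":
    counts = np.array([[30, 10], [5, 55]], dtype=float)   # toy 2x2 table
    pmi, h, chi2 = contingency_stats(counts)
    print("joint entropy:", round(h, 3), "bits;  chi^2:", round(chi2, 2))
```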
9. Quantifying effectiveness of failure prediction and response in HPC systems: Methodology and example
- Author
-
Diana C. Roe, Frank Xiaoxiao Chen, Philippe Pierre Pebay, Matthew H. Wong, James M. Brandt, Ann C. Gentile, David Thompson, Vincent De Sapio, and Jackson R. Mayo
- Subjects
Mean time between failures, Resource (project management), Computer science, Outlier, Probabilistic logic, Resource allocation, Resilience (network), Process migration, System software, Reliability engineering
- Abstract
Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors.
- Published
- 2010
- Full Text
- View/download PDF
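The entry above quantifies the cost-benefit of failure predictors. A hedged back-of-the-envelope sketch of that kind of accounting follows: expected lost work per failure under checkpointing alone versus checkpointing plus prediction-triggered migration, as a function of predictor recall and false-alarm rate. All parameter values and the simple loss model are illustrative assumptions, not the paper's methodology or production data.

```python
# Hedged sketch: compare expected lost work per failure under periodic
# checkpointing alone vs. checkpointing plus prediction-triggered migration.
# All parameters (recall, false alarms, costs) are illustrative assumptions.

def lost_work_per_failure(checkpoint_interval_h: float,
                          migration_cost_h: float,
                          false_alarm_cost_h: float,
                          recall: float,
                          false_alarms_per_failure: float) -> float:
    """Expected node-hours lost per failure with a predictor in the loop."""
    # Missed failures roll back half a checkpoint interval on average.
    missed = (1.0 - recall) * (checkpoint_interval_h / 2.0)
    # Caught failures cost only a proactive migration.
    caught = recall * migration_cost_h
    # Each false alarm triggers an unnecessary migration.
    alarms = false_alarms_per_failure * false_alarm_cost_h
    return missed + caught + alarms

if __name__ == "__main__":
    baseline = lost_work_per_failure(4.0, 0.1, 0.1, recall=0.0,
                                     false_alarms_per_failure=0.0)
    with_predictor = lost_work_per_failure(4.0, 0.1, 0.1, recall=0.7,
                                           false_alarms_per_failure=1.5)
    print(f"baseline loss/failure: {baseline:.2f} h, "
          f"with predictor: {with_predictor:.2f} h")
```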
10. Combining virtualization, resource characterization, and resource management to enable efficient high performance compute platforms through intelligent dynamic resource allocation
- Author
-
Jim Brandt, Matthew H. Wong, V. De Sapio, Frank Xiaoxiao Chen, Philippe Pierre Pebay, Jackson R. Mayo, Ann C. Gentile, David Thompson, and Diana C. Roe
- Subjects
Resource (project management), Computer science, Distributed computing, Control reconfiguration, Resource allocation, Resource management, Orchestration (computing), System monitoring, Virtualization
- Abstract
Improved resource utilization and fault tolerance of large-scale HPC systems can be achieved through fine-grained, intelligent, and dynamic resource (re)allocation. We explore components and enabling technologies applicable to creating a system to provide this capability, specifically: 1) scalable fine-grained monitoring and analysis to inform resource allocation decisions, 2) virtualization to enable dynamic reconfiguration, 3) resource management for the combined physical and virtual resources, and 4) orchestration of the allocation, evaluation, and balancing of resources in a dynamic environment. We discuss both general and HPC-centric issues that impact the design of such a system. Finally, we present our prototype system, giving both design details and examples of its application in real-world scenarios.
- Published
- 2010
- Full Text
- View/download PDF
11. Using Cloud Constructs and Predictive Analysis to Enable Pre-Failure Process Migration in HPC Systems
- Author
-
Vincent De Sapio, Jackson R. Mayo, Frank Xiaoxiao Chen, Philippe Pierre Pebay, James M. Brandt, Ann C. Gentile, Matthew H. Wong, David Thompson, and Diana C. Roe
- Subjects
Computer science, Distributed computing, Probabilistic logic, Cloud computing, Fault tolerance, Virtualization, Supercomputer, Grid computing, Process control, Process migration
- Abstract
Accurate failure prediction in conjunction with efficient process migration facilities including some Cloud constructs can enable failure avoidance in large-scale high performance computing (HPC) platforms. In this work we demonstrate a prototype system that incorporates our probabilistic failure prediction system with virtualization mechanisms and techniques to provide a whole system approach to failure avoidance. This work utilizes a failure scenario based on a real-world HPC case study.
- Published
- 2010
- Full Text
- View/download PDF
12. Numerically stable, single-pass, parallel statistics algorithms
- Author
-
Janine C. Bennett, Philippe Pierre Pebay, David Thompson, Diana C. Roe, and Ray Grout
- Subjects
Covariance matrix, Computer science, Group method of data handling, Parallel algorithm, Method of moments (statistics), Robustness (computer science), Principal component analysis, Statistics, Scalability, Concurrent computing, Probability distribution, Algorithm design, Pairwise comparison, Statistical theory, Algorithm, Numerical stability
- Abstract
Statistical analysis is widely used for countless scientific applications in order to analyze and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. In this paper we derive a series of formulas that allow for single-pass, yet numerically robust, pairwise parallel and incremental updates of both arbitrary-order centered statistical moments and co-moments. Using these formulas, we have built an open source parallel statistics framework that performs principal component analysis (PCA) in addition to computing descriptive, correlative, and multi-correlative statistics. The results of a scalability study demonstrate numerically stable, near-optimal scalability on up to 128 processes, and results are presented in which the statistical framework is used to process large-scale turbulent combustion simulation data with 1500 processes.
- Published
- 2009
- Full Text
- View/download PDF
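The entry above rests on pairwise-mergeable, single-pass moment updates. The sketch below shows the standard numerically stable update for count, mean, and the second centered moment (M2), both as a streaming push and as a pairwise merge of two partial results; the paper generalizes this to arbitrary-order moments and co-moments, so only variance is illustrated here.

```python
# Hedged sketch of a single-pass, pairwise-mergeable moment accumulator
# (count, mean, M2), the standard numerically stable update underlying
# parallel descriptive statistics. Only variance is shown; the paper
# derives the arbitrary-order moment and co-moment generalizations.
from dataclasses import dataclass

@dataclass
class Moments:
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0          # sum of squared deviations from the mean

    def push(self, x: float) -> None:
        """Online (streaming) update with one new sample."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def merge(self, other: "Moments") -> "Moments":
        """Pairwise merge of two partial results (e.g., from two processes)."""
        n = self.n + other.n
        if n == 0:
            return Moments()
        delta = other.mean - self.mean
        mean = self.mean + delta * other.n / n
        m2 = self.m2 + other.m2 + delta * delta * self.n * other.n / n
        return Moments(n, mean, m2)

    @property
    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

if __name__ == "__main__":
    a, b = Moments(), Moments()
    for x in (1.0, 2.0, 3.0):  a.push(x)     # "process 0" partial result
    for x in (10.0, 11.0):     b.push(x)     # "process 1" partial result
    total = a.merge(b)
    print(total.n, round(total.mean, 3), round(total.variance, 3))  # 5 5.4 22.3
```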
13. A low-loss DC-DC converter for a renewable energy converter
- Author
-
O.A. Eno and David Thompson
- Subjects
Forward converter, Materials science, Flyback converter, Buck converter, Boost converter, Ćuk converter, Buck–boost converter, Electrical engineering, Inverter, Renewable energy
- Abstract
The maximization of energy transfer from a low-voltage source to a grid is investigated. The converter has two stages: a DC-DC converter followed by an inverter. Analysis of the DC-DC converter losses shows how its energy transfer can be maximized, and the resulting efficiency is found to be acceptable.
- Published
- 2008
- Full Text
- View/download PDF
14. Digital Control of Two Stage High Power Inverter
- Author
-
David Thompson and Otu A. Eno
- Subjects
Engineering, Distributed generation, Power inverter, Frequency grid, Photovoltaic system, Electrical engineering, Energy transformation, Digital control, Converters, Renewable energy
- Abstract
With increasing worldwide demand for electrical energy and the desire to reduce greenhouse-gas emissions, attention is increasingly directed at sources of renewable energy, such as photovoltaic and wind, and the development of clean distributed generation becomes increasingly important. The electrical output from such sources is small and DC. Coupling such outputs to a constant-voltage, constant-frequency grid requires two stages of energy conversion: DC-DC and DC-AC. As small-scale inverters tend to have lower efficiency than larger inverters, it is important to optimize the control circuits and to choose a topology with the lowest possible power dissipation if they are to compete with larger-scale converters. This contribution focuses on such a system and its overall control for the efficient and reliable injection of energy into a grid.
- Published
- 2006
- Full Text
- View/download PDF
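The entry above concerns digital control of a two-stage (DC-DC plus DC-AC) grid-connected inverter. The sketch below is a generic discrete-time PI current regulator of the kind a digital controller could use for grid current injection; the gains, 10 kHz sample rate, and first-order L-R output-filter model are illustrative assumptions, not the controller or hardware described in the paper.

```python
# Hedged sketch of a generic discrete-time PI current regulator for grid
# current injection. Gains, sample time, and the first-order plant model are
# illustrative assumptions, not the paper's controller or hardware.
KP, KI, TS = 8.0, 400.0, 1e-4     # assumed PI gains and 10 kHz sample time
R, L = 0.5, 2e-3                  # assumed output filter resistance/inductance

def simulate(i_ref: float, steps: int = 1000) -> float:
    i, integ = 0.0, 0.0
    for _ in range(steps):
        err = i_ref - i
        integ += err * TS
        v_cmd = KP * err + KI * integ          # PI voltage command to the bridge
        # First-order L-R plant discretized with forward Euler: L di/dt = v - R i
        i += TS * (v_cmd - R * i) / L
    return i

if __name__ == "__main__":
    print(f"current after 100 ms: {simulate(i_ref=5.0):.3f} A (target 5 A)")
```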
15. Digital Control of Two Stage High Power Inverter
- Author
-
Otu Eno and David Thompson
- Published
- 2006
- Full Text
- View/download PDF
16. Framework for Visualizing Higher-Order Basis Functions
- Author
-
R. O'Bara, F. Bertel, William J. Schroeder, M. Malaterre, Philippe Pierre Pebay, Saurabh Tendulkar, and David Thompson
- Subjects
Tessellation (computer graphics), Data visualization, Computer simulation, Computer science, Basis function, Computational geometry, Software architecture, Finite element method, Computational science, Visualization
- Abstract
Techniques in numerical simulation such as the finite element method depend on basis functions for approximating the geometry and variation of the solution over discrete regions of a domain. Existing visualization systems can visualize these basis functions if they are linear, or for a small set of simple non-linear bases. However, newer numerical approaches often use basis functions of elevated and mixed order or complex form; hence existing visualization systems cannot directly process them. In this paper we describe an approach that supports automatic, adaptive tessellation of general basis functions using a flexible and extensible software architecture in conjunction with an on-demand, edge-based recursive subdivision algorithm. The framework supports the use of functions implemented in external simulation packages, eliminating the need to reimplement the bases within the visualization system. We demonstrate our method on several examples, and have implemented the framework in the open-source visualization system VTK.
- Published
- 2006
- Full Text
- View/download PDF
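The entry above uses edge-based recursive subdivision for adaptive tessellation. A minimal 1D sketch of that idea follows: an edge is split at its midpoint whenever the higher-order field there deviates from the linear interpolation of its endpoints by more than a tolerance. The quadratic toy shape function and the simple midpoint error metric are illustrative assumptions, not the tessellator actually implemented in VTK.

```python
# Hedged 1D sketch of edge-based recursive subdivision for adaptive
# tessellation: split an edge at its midpoint when the higher-order field
# there deviates from linear interpolation of the endpoints by more than tol.
from typing import Callable, List, Tuple

def tessellate_edge(f: Callable[[float], float], a: float, b: float,
                    tol: float = 1e-2, depth: int = 0,
                    max_depth: int = 8) -> List[Tuple[float, float]]:
    """Return a piecewise-linear approximation of f on [a, b] as (x, f(x)) points."""
    mid = 0.5 * (a + b)
    linear_mid = 0.5 * (f(a) + f(b))          # what a linear edge would predict
    if depth >= max_depth or abs(f(mid) - linear_mid) <= tol:
        return [(a, f(a)), (b, f(b))]         # edge is flat enough: keep it
    left = tessellate_edge(f, a, mid, tol, depth + 1, max_depth)
    right = tessellate_edge(f, mid, b, tol, depth + 1, max_depth)
    return left[:-1] + right                  # drop the duplicated midpoint sample

if __name__ == "__main__":
    quadratic = lambda x: 4.0 * x * (1.0 - x)     # toy higher-order shape function
    pts = tessellate_edge(quadratic, 0.0, 1.0, tol=0.01)
    print(f"{len(pts)} points after adaptive subdivision")
```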
17. A Framework for the Design of a Novel Haptic-Based Medical Training Simulator.
- Author
-
Amir M. Tahmasebi, Keyvan Hashtrudi-Zaad, David Thompson, and Purang Abolmaesumi
- Subjects
SYNTHETIC training devices, MEDICAL imaging systems, IMAGE databases, MAGNETIC resonance imaging, ELECTRONIC systems, ERGONOMICS, MEDICAL radiology
- Abstract
This paper presents a framework for the design of a haptic-based medical ultrasound training simulator. The proposed simulator is composed of a PHANToM haptic device and a modular software package that allows for visual feedback and kinesthetic interactions between an operator and multimodality image databases. The system provides real-time ultrasound images in the same fashion as a typical ultrasound machine, enhanced with corresponding augmented computerized tomographic (CT) and/or MRI images. The proposed training system allows trainees to develop radiology techniques and knowledge of the patient's anatomy with minimum practice on live patients, or in places or at times when radiology devices or patients with rare cases may not be available. Low-level details of the software structure that can be migrated to other similar medical simulators are described. A preliminary human factors study, conducted on the prototype of the developed simulator, demonstrates the potential usage of the system for clinical training. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
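The simulator entry above combines visual feedback with kinesthetic interaction through a PHANToM device. The sketch below illustrates only the basic kinesthetic idea: rendering a spring-damper penalty force when the virtual probe penetrates a surface. The flat "skin" plane, stiffness, and damping values are assumptions for illustration, not the simulator's actual force model or the PHANToM device API.

```python
# Hedged sketch of kinesthetic feedback in a haptic training simulator: render
# a spring-damper penalty force when the virtual probe penetrates a surface.
# The flat "skin" plane, stiffness, and damping are illustrative assumptions.
STIFFNESS = 600.0     # assumed N/m
DAMPING = 2.0         # assumed N*s/m
SKIN_Z = 0.0          # assumed surface height in device coordinates (m)

def probe_force(z: float, vz: float) -> float:
    """Force along z pushing the probe back out of the virtual tissue."""
    penetration = SKIN_Z - z
    if penetration <= 0.0:
        return 0.0                         # no contact, no force
    return STIFFNESS * penetration - DAMPING * vz

if __name__ == "__main__":
    # Toy servo-loop samples: probe position (m) and velocity (m/s) along z.
    for z, vz in ((0.002, -0.01), (-0.001, -0.02), (-0.004, 0.00)):
        print(f"z={z:+.3f} m  force={probe_force(z, vz):6.2f} N")
```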