21 results for "Jeremy Johnson"
Search Results
2. Designing AR Systems to Explore Point-of-View, Bias, and Trans-cultural Conflict
- Author
- Laureen L. Hill, Maribeth Gandy, Tony Lemieux, Jeremy Johnson, Jeff Wilson, Scott P. Robertson, Michele Sumler, Darlene Mashman, Susan Tamasi, and Laura Levy
- Subjects
- Teamwork, Applied psychology, Deliberation, Cultural conflict, Jury, User experience design, Human–computer interaction, Terrorism, Augmented reality, Computer science
- Abstract
Over ten years ago, we created a novel dramatic augmented reality (AR) experience exploring bias and point-of-view (PoV) based upon the classic film “Twelve Angry Men,” which allowed a user to experience a dramatic jury room deliberation from the PoV of each of four different characters. Recently, informed by this previous work, we have created a new AR platform for engaging users in different PoVs, exposing forms of biases, and studying cultural conflicts. We are currently using this system for training and assessment in two domains: healthcare and psychological studies of terrorism. In this paper we present the requirements we have identified for this type of user experience, the co-design of both AR environments with domain experts, and the results of an initial user study of technology acceptance that yielded positive feedback from participants.
- Published
- 2016
3. Information Technology Availability and Use in the United States: A Multivariate and Geospatial Analysis by State
- Author
- James B. Pick, Avijit Sarkar, and Jeremy Johnson
- Subjects
- Geospatial analysis, Information technology, Information and communications technology, Empirical research, Openness to experience, Conceptual model, Regional science, Social media, Marketing, Digital divide
- Abstract
Exploratory empirical studies of the digital divide exist for various nations, including the United States. The contribution of this paper is to enhance understanding of factors associated with availability and utilization of information and communication technologies (ICTs) at the state level in the US. In our conceptual model of technology utilization, eight dependent technology availability and utilization factors are posited to be associated with twelve independent socio-economic, demographic, innovation, and societal openness factors. Technology utilization variables are spatially analyzed to determine the extent of agglomeration or randomness, and regression residuals are examined to eliminate spatial bias. We find that societal openness, urbanization, and ethnicities are significantly associated with higher ICT utilization. We report interesting findings for the social media communication technologies Facebook and Twitter. Implications for policymakers at both the federal and state levels are discussed.
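The spatial step described above (testing residuals for agglomeration versus randomness) is commonly done with Moran's I. A minimal sketch, assuming a simple binary contiguity weight matrix and stand-in residuals; the paper's actual weights and data are not reproduced here:

```python
import numpy as np

def morans_i(z, W):
    """Moran's I: positive for spatial clustering, near -1/(n-1) for randomness."""
    z = z - z.mean()
    n = len(z)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# Stand-in data: regression residuals for 4 states and a binary contiguity matrix.
residuals = np.array([0.5, 0.3, -0.4, -0.6])
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(morans_i(residuals, W))
```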
- Published
- 2014
4. Using model-based assurance to strengthen diagnostic procedures
- Author
- Ann Patterson-Hine, Robyn R. Lutz, and Jeremy Johnson
- Subjects
- Computer science, Software engineering, Software system, Quality (business)
- Abstract
In previous work we described Diagnostic Tree for Verification (DTV), a partially automated software engineering technique by which diagnostic trees generated from system models are used to help check out diagnostic procedures. Diagnostic procedures are instructions used to isolate failures during operations. Assuring such procedures manually is time-consuming and costly. This paper reports our recent experience in applying DTV to diagnostic procedures for lighting failures in NASA's Habitat Demonstration Unit (HDU), a prototype for astronauts' living quarters. DTV identified missing and inconsistent instructions, as well as more-efficient sequences of diagnostic steps. Unexpectedly, the most significant benefit was finding assumptions that will not remain true as the system evolves. We describe both the challenges faced in applying DTV and how its independent perspective helped in assuring the procedures' adequacy and quality. Finally, the paper discusses more generally how software systems that are model-based, rapidly evolving and safety-critical appear most likely to benefit from this approach.
- Published
- 2011
5. Reconfigurable multicore architecture for power flow calculation
- Author
- Jeremy Johnson, Prawat Nagvajara, and Kevin Cunningham
- Subjects
- Multi-core processor, Speedup, Computer science, Parallel computing, Reconfigurable computing, LU decomposition, Electric power system, Software, Field-programmable gate array, Direct memory access
- Abstract
This paper investigates the advantages of using multicore architectures, comprising high-performance processors and reconfigurable cores, for sparse Lower-Upper (LU) triangular decomposition, used in power flow calculations and contingency analysis. The proposed architecture combines a general-purpose processor with a custom row-reduction accelerator, sending streams of data to the accelerator through the use of a direct memory access module. The simple accelerator provides a speedup of 1.29X over existing high-performance sparse LU software on power system applications. As architectures with tightly-coupled processor cores and reconfigurable cores start to appear on the market, techniques presented in this paper provide a simple way to improve performance in important computations, such as those needed for power system analysis.
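As context for the offloaded kernel, here is a minimal software sketch of the same computation, sparse LU factorization followed by triangular solves, using SciPy in place of the paper's row-reduction accelerator; the matrix values are illustrative stand-ins:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small stand-in for a sparse power-system matrix (solve A x = b).
A = sp.csc_matrix(np.array([[ 4.0, -1.0,  0.0, -1.0],
                            [-1.0,  4.0, -1.0,  0.0],
                            [ 0.0, -1.0,  4.0, -1.0],
                            [-1.0,  0.0, -1.0,  4.0]]))
b = np.array([1.0, 2.0, 0.0, 1.0])

lu = spla.splu(A)   # sparse LU factorization: the row-reduction work offloaded to hardware
x = lu.solve(b)     # forward and backward triangular solves
print(x)
```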
- Published
- 2011
6. SDC testbed: Software defined communications testbed for wireless radio and optical networking
- Author
- Jeremy Johnson, James Chacko, Nagarajan Kandasamy, Timothy P. Kurzweg, Boris Shishkin, Danh H. Nguyen, Kapil R. Dandekar, Kevin Wanuga, and Doug Pfeil
- Subjects
- Software, Computer architecture, Computer science, Testbed, Optical networking, Software development, Wireless, Radio frequency, Software-defined radio, Modular design, Computer network
- Abstract
This paper describes the development of a new Software Defined Communications (SDC) testbed architecture. SDC generalizes software defined radio to propagation media not limited to radio frequencies (optical, ultrasonic, etc.). The SDC platform combines existing and custom hardware with reference software applications to provide a complete research and development platform. The platform can be used to implement current and future standards that rely on highly demanding communications techniques, including ultrawideband (UWB) radio and free-space optical communications. This paper describes the commercial and custom hardware being integrated into the platform, including the baseband hardware and the modular transceiver frontends. It also describes the software development currently in progress, including the integration of available open-source designs and the development of custom IP for scalable OFDM PHY implementations in radio and optical communications. We seek to create a complete research platform for the commercial and academic wireless communities, capable of delivering the highest possible performance and flexibility while providing the development tools and reference designs needed to minimize the learning curve and development cost.
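To make the OFDM PHY work concrete, below is a minimal sketch of one OFDM symbol's modulation in software: QPSK mapping, an IFFT across subcarriers, and a cyclic prefix. The subcarrier count and prefix length are assumptions for illustration, not the testbed's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, cp_len = 64, 16                       # illustrative subcarrier count and prefix length
bits = rng.integers(0, 2, size=2 * n_sub)

# Map bit pairs to QPSK constellation points, one per subcarrier.
qpsk = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

time_domain = np.fft.ifft(qpsk)              # OFDM modulation: IFFT across subcarriers
symbol = np.concatenate([time_domain[-cp_len:], time_domain])  # prepend cyclic prefix
```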
- Published
- 2011
7. Singular Value Decomposition Hardware for MIMO: State of the Art and Custom Design
- Author
- Yue Wang, Kevin Cunningham, Jeremy Johnson, and Prawat Nagvajara
- Subjects
- Software, Orthogonal frequency-division multiplexing, Computer science, Pipeline (computing), MIMO, Singular value decomposition, Linear algebra, Clock rate, Field-programmable gate array, Computer hardware
- Abstract
This paper presents a custom hardware design for computing the Singular Value Decomposition (SVD) of the radio communication channel characteristic matrix. The custom hardware was implemented to reduce the SVD computing time. The pipelined hardware is suitable for computing the SVD of a sequence of 2 × 2 complex-valued matrices used in MIMO-OFDM standards such as IEEE 802.11n, and achieves an optimal pipeline rate equal to the maximum hardware clock rate. The proposed architecture provides performance gains over standard software libraries, such as the ZGESVD function of the Linear Algebra Package (LAPACK), when running on standard processors.
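A software rendering of the pipeline's job, assuming random stand-in channel matrices: NumPy's batched SVD over a stream of 2 × 2 complex matrices, with a check of the MIMO use of the result (precoding with V and combining with U^H diagonalizes each channel):

```python
import numpy as np

rng = np.random.default_rng(1)
# A stream of 2x2 complex channel matrices, one per subcarrier (values are stand-ins).
H = rng.standard_normal((64, 2, 2)) + 1j * rng.standard_normal((64, 2, 2))

U, s, Vh = np.linalg.svd(H)                  # batched SVD: the hardware's job
V = Vh.conj().transpose(0, 2, 1)
D = U.conj().transpose(0, 2, 1) @ H @ V      # U^H H V should be diag(singular values)
assert np.allclose(D, s[:, :, None] * np.eye(2), atol=1e-10)
```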
- Published
- 2010
8. FPGA hardware results for power system computation
- Author
- Jeremy Johnson, P. Vachranukunkiet, T. Chagnon, Prawat Nagvajara, and C.O. Nwankpa
- Subjects
- Electric power system, Software, Speedup, SCADA, Computer science, Embedded system, Benchmark (computing), Pentium, Field-programmable gate array, Computer hardware, Power (physics)
- Abstract
This paper presents preliminary computational results for an alternative computing platform comprising a host computer interconnected to a Field Programmable Gate Array (FPGA). These results represent load flow calculations performed on realistically sized power systems. The motivation behind this work lies in the influence of computational time reduction on reliable power system operation. Operators work with a variety of power system analytical packages aimed at ensuring real-time processing of data transmitted from the SCADA system via network and telemetry. Results presented in this paper are compared to performance measures obtained from modeling these processes on benchmark power grids. The observations further confirm that algorithm-specific hardware can, in certain cases, provide up to an order of magnitude speedup over a software implementation of the same algorithm running on Pentium-based PCs.
- Published
- 2009
9. Special purpose hardware for power system computation
- Author
- Z. Lin, P. Vachranukunkiet, M. Murach, Prawat Nagvajara, Jeremy Johnson, and C.O. Nwankpa
- Subjects
- Energy management system, Electric power system, Speedup, Computer science, Energy management, Linear system, State (computer science), Field-programmable gate array, Computer hardware
- Abstract
This paper presents a cost-effective approach, using special-purpose hardware implemented on field programmable gate arrays (FPGAs) - commodity programmable hardware - to accelerate the performance of key components of an energy management system (EMS). These components include state estimation, optimal power flow, and load flow computation. All of these components have at their core the solution of large sparse linear systems, which can be effectively accelerated by the proposed hardware. Predicted performance on a series of power system benchmarks suggests up to an order of magnitude speedup over conventional approaches. The improved performance should enhance the day-to-day operation of the power grid by providing the operator with more complete and accurate information.
- Published
- 2008
10. State Estimation Using Sparse Givens Rotation Field Programmable Gate Array
- Author
- Z. Lin, Chika O. Nwankpa, Prawat Nagvajara, and Jeremy Johnson
- Subjects
- Speedup, Software, SCADA, Computer science, Electronic engineering, Benchmark (computing), Givens rotation, Pentium, State (computer science), Field-programmable gate array, Computer hardware
- Abstract
In the on-line assessment of the power grid to assure reliable operation, state estimation is the front-end real-time processing of measurement data transmitted from the SCADA system via network and telemetry. An alternative high-performance computing platform comprising a host computer interconnected to a Field Programmable Gate Array (FPGA) via a PCI-Express bus is proposed. The predicted performance obtained from the state estimation of benchmark power grids (118- and 1648-bus systems) shows that the row-oriented Givens algorithm-specific hardware can provide an order of magnitude speedup over the software program for the same Givens rotation algorithm running on a 1.7 GHz Pentium 4 M processor with 1 GB of RAM. This result indicates that algorithm-specific hardware on an FPGA may provide a cost-effective solution for high-performance state estimation.
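For readers unfamiliar with the kernel: a Givens rotation zeroes one matrix entry at a time, and a row-oriented sweep of rotations triangularizes the measurement Jacobian behind the least-squares state estimate. A minimal dense sketch with stand-in values (the paper's hardware targets the sparse case):

```python
import numpy as np

def givens(a, b):
    """Return (c, s) so that [c s; -s c] applied to rows maps (a, b) to (r, 0)."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0 else (a / r, b / r)

# Small stand-in measurement Jacobian; rotations reduce it to upper-triangular R.
R = np.array([[3.0, 1.0],
              [4.0, 2.0],
              [0.0, 5.0]])
for j in range(R.shape[1]):
    for i in range(R.shape[0] - 1, j, -1):
        c, s = givens(R[j, j], R[i, j])
        R[[j, i]] = np.array([[c, s], [-s, c]]) @ R[[j, i]]
print(np.round(R, 6))   # entries below the diagonal are now zero
```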
- Published
- 2007
11. Rapid Prototyping of Large-scale Analog Circuits With Field Programmable Analog Array
- Author
- Paolo D'Alberto, Franz Franchetti, Jose M. F. Moura, Peter Milder, Jeremy Johnson, Markus Püschel, Aliaksei Sandryhaila, and James C. Hoe
- Subjects
- Computer science, Heuristic (computer science), PowerPC, Algorithm design, Parallel computing, Field-programmable gate array, Discrete Fourier transform, Efficient energy use
- Abstract
We present a domain-specific approach to generating high-performance hardware-software partitioned implementations of the discrete Fourier transform (DFT) in fixed-point precision. The partitioning strategy is a heuristic based on the DFT's divide-and-conquer algorithmic structure, fine-tuned by feedback-driven exploration of candidate designs. We have integrated this approach into the Spiral linear-transform code-generation framework to support push-button automatic implementation. We present evaluations of hardware-software DFT implementations running on the embedded PowerPC processor and the reconfigurable fabric of the Xilinx Virtex-II Pro FPGA. In our experiments, the FPGA-accelerated 1D and 2D DFT libraries exhibit between 2 and 7.5 times higher performance (operations per second) and up to 2.5 times better energy efficiency (operations per Joule) than the software-only version.
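The divide-and-conquer structure the partitioning heuristic exploits is the Cooley-Tukey factorization, which splits DFT_n into stages of smaller DFTs that can land on either side of the hardware/software boundary. A floating-point sketch (the paper works in fixed point, and the split below is an arbitrary choice):

```python
import numpy as np

def dft_split(x, n1):
    """DFT of size n via the Cooley-Tukey split n = n1 * n2; each stage of
    smaller DFTs is a candidate for hardware or software in a partitioned design."""
    n = x.size
    n2 = n // n1
    a = x.reshape(n1, n2)
    b = np.fft.fft(a, axis=0)                                     # n2 DFTs of size n1
    b = b * np.exp(-2j * np.pi * np.outer(np.arange(n1), np.arange(n2)) / n)  # twiddles
    return np.fft.fft(b, axis=1).T.reshape(n)                     # n1 DFTs of size n2

x = np.random.default_rng(2).standard_normal(16)
assert np.allclose(dft_split(x, 4), np.fft.fft(x))
```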
- Published
- 2007
12. Performance Analysis of a Family of WHT Algorithms
- Author
- M. Andrews and Jeremy Johnson
- Subjects
- Divide and conquer algorithms, Computer science, Cache, Cache miss, Cache-oblivious algorithm, Cache algorithms, Supercomputer, Algorithm
- Abstract
This paper explores the correlation of instruction counts and cache misses to runtime performance for a large family of divide and conquer algorithms to compute the Walsh-Hadamard transform (WHT). Previous work showed how to compute instruction counts and cache misses from a high-level description of the algorithm and proved theoretical results about their minimum, maximum, mean, and distribution. While the models themselves do not accurately predict performance, it is shown that they are statistically correlated to performance and thus can be used to prune the search space for fast implementations. When the size of the transform fits in cache the instruction count itself is used; however, when the transform no longer fits in cache, a linear combination of instruction counts and cache misses is used. Thus for small transforms it is safe to ignore algorithms which have a high instruction count and for large transforms it is safe to ignore algorithms with a high value in the combined instruction count/cache miss model. Since the models can be computed from a high-level description of the algorithms, they can be obtained without runtime measurement and the previous theoretical results on the models can be applied to limit empirical search.
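The pruning rule can be restated as a small sketch: model cost by instruction count alone when the transform fits in cache, and by a linear combination of instruction count and cache misses otherwise, then time only the model's cheapest candidates. The coefficients and names below are illustrative assumptions, not the paper's fitted values:

```python
def model_cost(instr_count, cache_misses, size, cache_size,
               alpha=1.0, beta=60.0):
    """Predicted relative cost of one WHT algorithm in the family."""
    if size <= cache_size:
        return instr_count                    # in-cache: instruction count suffices
    return alpha * instr_count + beta * cache_misses

def prune(candidates, keep_fraction=0.25):
    """Keep the cheapest fraction under the model; only these get timed."""
    ranked = sorted(candidates, key=lambda c: model_cost(*c))
    return ranked[:max(1, int(keep_fraction * len(ranked)))]
```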
- Published
- 2007
13. Optimal reconfigurable HW/SW co-design of load flow and optimal power flow computation
- Author
- Prawat Nagvajara, M. Murach, Jeremy Johnson, P. Vachranukunkiet, and Chika O. Nwankpa
- Subjects
- Speedup, Computer science, Triangular matrix, Pentium, Power-flow study, Parallel computing, Field-programmable gate array, Matrix multiplication, Sparse matrix, Matrix decomposition
- Abstract
Load flow and optimal power flow (OPF) constitute core computations in energy market operation. We considered different partitions of these computational tasks between a desktop computer and an attached field programmable gate array (FPGA). Both load flow and OPF require lower-upper (LU) triangular matrix decomposition. The number of clock cycles required for data transfer and for floating-point operations was used as the performance measure in determining the optimal hardware/software partition for each problem. Optimal partition performance is achieved by assigning the LU decomposition and matrix multiplication operations to custom hardware cores. A comparison between the proposed partition and software using a state-of-the-art sparse matrix package running on a 3.2 GHz Pentium 4 shows a six-fold speedup.
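The partitioning criterion reduces to a small cycle-count model: cycles for floating-point work on each side of the boundary plus cycles for data transfer across it, minimized over candidate partitions. All constants below are illustrative assumptions:

```python
def partition_cycles(flops_hw, flops_sw, words_transferred,
                     cyc_per_flop_hw=1.0, cyc_per_flop_sw=4.0,
                     cyc_per_word=2.0):
    """Estimated cycles for one candidate hardware/software partition."""
    return (flops_hw * cyc_per_flop_hw
            + flops_sw * cyc_per_flop_sw
            + words_transferred * cyc_per_word)

# Offloading LU and matrix multiplication pays off when the saved
# floating-point cycles outweigh the added transfer cycles.
offload = partition_cycles(flops_hw=1e6, flops_sw=1e5, words_transferred=2e5)
all_sw  = partition_cycles(flops_hw=0,   flops_sw=1.1e6, words_transferred=0)
print(offload < all_sw)
```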
- Published
- 2006
14. Jacobi load flow accelerator using FPGA
- Author
- Prawat Nagvajara, J. Foertsch, and Jeremy Johnson
- Subjects
- Computer science, Numerical analysis, Jacobi method, Parallel computing, Solver, Electric power system, Software, Field-programmable gate array, Power (physics)
- Abstract
Full-AC load flow is a crucial task in power system analysis. Solving full-AC load flow relies on iterative numerical methods such as Jacobi, Gauss-Seidel, or Newton-Raphson. Newton-Raphson is currently the preferred solver in industrial packages such as PowerWorld and PSS/E due to its faster convergence than either Jacobi or Gauss-Seidel. In this paper, we reexamine the Jacobi method for use in a fully pipelined hardware implementation on a field programmable gate array (FPGA) as an alternative to Newton-Raphson. Using benchmark data from representative power systems, we compare the operation counts of Newton-Raphson software to the proposed Jacobi FPGA hardware. Our studies show that the Jacobi method implemented on an FPGA has the potential, for sufficiently large power systems, to be a state-of-the-art full-AC load flow engine.
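For reference, the Jacobi iteration itself: every component's update depends only on the previous iterate, which is the independence that makes a fully pipelined datapath attractive. A dense stand-in sketch (convergence assumes diagonal dominance; real load-flow systems are sparse):

```python
import numpy as np

def jacobi(A, b, iters=50):
    """Jacobi iteration x <- D^{-1} (b - (A - D) x)."""
    d = np.diag(A)
    R = A - np.diag(d)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / d        # all components update independently
    return x

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b), np.linalg.solve(A, b))   # should agree closely
```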
- Published
- 2005
15. Optimal power flow utilizing FPGA technology
- Author
- Prawat Nagvajara, M. Murach, Chika O. Nwankpa, and Jeremy Johnson
- Subjects
- Floating point, Workstation, Energy management, Computer science, Computation, Data structure, Power flow, Electric power system, Field-programmable gate array, Computer hardware
- Abstract
Optimal power flow (OPF) is a performance-driven application in energy management systems that is currently run on high-performance workstations. However, general-purpose processors perform poorly on it due to the irregular data structures commonly encountered in power system analysis. We propose the use of FPGA hardware to accelerate floating-point performance in the evaluation of OPF. Our results indicate that LU performance can be enhanced by 6x, and overall large-scale OPF computation by at least 3x, using FPGA technology over general-purpose workstations.
- Published
- 2005
16. A self-adapting distributed memory package for fast signal transforms
- Author
- Jeremy Johnson and K. Chen
- Subjects
- Matrix (mathematics), Distribution (mathematics), Computer science, Walsh function, Message passing, Fast Fourier transform, Distributed memory, Parallel computing, Matrix decomposition
- Abstract
We present a self-adapting distributed memory package for computing the Walsh-Hadamard transform (WHT), a prototypical fast signal transform similar to the fast Fourier transform. A family of distributed memory algorithms is derived from different factorizations of the WHT matrix. Different factorizations correspond to different data distributions and communication patterns, so searching over the space of factorizations leads to the best data distribution and communication pattern for a given platform. The distributed memory WHT package provides a framework for converting factorizations of the WHT matrix into MPI programs and exploring their performance by searching the space of factorizations.
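The factorization family can be made concrete: WHT_{rs} = (WHT_r ⊗ I_s)(I_r ⊗ WHT_s), so each choice of split r at each level is a different algorithm, and in the distributed setting a different data distribution. A sequential sketch with one fixed split choice (illustrative, not the package's code):

```python
import numpy as np

def wht(x):
    """WHT via the factorization WHT_{rs} = (WHT_r ⊗ I_s)(I_r ⊗ WHT_s)."""
    n = x.size
    if n == 1:
        return x
    if n == 2:
        return np.array([x[0] + x[1], x[0] - x[1]])
    r = 2                                   # one split choice; any n = r*s works
    y = x.reshape(r, n // r).astype(float)
    y = np.apply_along_axis(wht, 1, y)      # I_r ⊗ WHT_s: independent row transforms
    y = np.apply_along_axis(wht, 0, y)      # WHT_r ⊗ I_s: independent column transforms
    return y.reshape(n)

print(wht(np.arange(8, dtype=float)))
```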
- Published
- 2004
17. A recursive implementation of the dimensionless FFT
- Author
- Jeremy Johnson and Xu Xu
- Subjects
- Divide and conquer algorithms, Signal processing, Theoretical computer science, Dimension (vector space), Computer science, Prime-factor FFT algorithm, Fast Fourier transform, Decomposition (computer science), Algorithm, Discrete Fourier transform, Dimensionless quantity
- Abstract
A divide and conquer algorithm is presented for computing arbitrary multi-dimensional discrete Fourier transforms. In contrast to standard approaches such as the row-column algorithm, this algorithm allows an arbitrary decomposition, based solely on the size of the transform independent of the dimension of the transform. Only minor modifications are required to compute transforms with different dimension. These modifications were incorporated into the FFTW package so that the algorithm for computing one-dimensional transforms can be used to compute arbitrary dimensional transforms. This reduced the runtime of many multi-dimensional transforms.
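For contrast, the fixed decomposition the dimensionless approach avoids is the row-column method, which computes a multi-dimensional DFT as 1D DFTs along each axis; the sketch below just verifies that baseline with NumPy:

```python
import numpy as np

# Row-column method: a 2D DFT as 1D DFTs along each dimension. The
# dimensionless FFT instead decomposes by transform size alone.
X = np.random.default_rng(3).standard_normal((4, 8))
row_column = np.fft.fft(np.fft.fft(X, axis=0), axis=1)
assert np.allclose(row_column, np.fft.fft2(X))
```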
- Published
- 2003
18. A methodology for generating data distributions to optimize communication
- Author
- Sandeep K. S. Gupta, Chua-Huang Huang, Jeremy Johnson, R. W. Johnson, P. Sadayappan, and Sandeep Kaushik
- Subjects
- Tensor product, Theoretical computer science, Computer science, Fortran, High Performance Fortran, Fast Fourier transform, Algorithm design, Massively parallel, Block (data storage), Algorithm
- Abstract
The authors present an algebraic theory, based on the tensor product, for describing the semantics of regular data distributions such as block, cyclic, and block-cyclic distributions. These distributions have been proposed in High Performance Fortran, an ongoing effort to develop a Fortran extension for massively parallel computing. This algebraic theory has been used for designing and implementing block recursive algorithms on shared-memory and vector multiprocessors. In the present work, the authors extend the theory to generate programs with explicit data distribution commands from tensor product formulas. A methodology to generate data distributions that optimize communication is described and demonstrated by generating efficient programs, with data distribution, for the fast Fourier transform.
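The three regular distributions the theory covers have simple owner maps: which of p processors holds element i of a length-n array. A sketch under the usual conventions (function names are mine, not the paper's):

```python
def block_owner(i, n, p):
    return i // (n // p)             # contiguous chunks, assuming p divides n

def cyclic_owner(i, p):
    return i % p                     # round-robin, element by element

def block_cyclic_owner(i, b, p):
    return (i // b) % p              # round-robin in blocks of size b

# Block-cyclic with b = n/p degenerates to block; with b = 1, to cyclic.
print([block_cyclic_owner(i, 2, 3) for i in range(12)])
```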
- Published
- 2003
19. Design, optimization, and implementation of a universal FFT processor
- Author
- Jeremy Johnson, P. Kumhom, and Prawat Nagvajara
- Subjects
- Computer science, Fast Fourier transform, Prime-factor FFT algorithm, Split-radix FFT algorithm, Hardware description language, Integrated circuit design, Parallel computing, Algorithm design, Twiddle factor
- Abstract
There exist fast Fourier transform (FFT) algorithms, called dimensionless FFTs, that work independent of dimension. These algorithms can be configured to compute DFTs of different dimensions simply by relabeling the input data and changing the values of the twiddle factors occurring in the butterfly operations. This observation allows us to design an FFT processor which, with minor reconfiguring, can compute one-, two-, and three-dimensional DFTs. In this paper we design a family of FFT processors, parameterized by the number of points, the dimension, the number of processors, and the internal dataflow, and show how to map different dimensionless FFTs onto this hardware design. Different dimensionless FFTs have different dataflows and consequently different performance characteristics. Using a performance model, we search for the optimal algorithm for the family of processors considered. The resulting algorithm and corresponding hardware design were implemented on an FPGA.
- Published
- 2002
20. In search of the optimal Walsh-Hadamard transform
- Author
- Jeremy Johnson and Markus Püschel
- Subjects
- Signal processing, Multidimensional signal processing, Computer science, Hadamard transform, Algorithm
- Abstract
This paper describes an approach to implementing and optimizing fast signal transforms. Algorithms for computing signal transforms are expressed as symbolic expressions, which can be automatically generated and translated into programs. Optimizing an implementation involves searching for the fastest program obtained from one of the possible expressions. We apply this methodology to the implementation of the Walsh-Hadamard transform. An environment, accessible from MATLAB, is provided for generating and timing WHT algorithms. These tools are used to search for the fastest WHT algorithm. The fastest algorithm found is substantially faster than standard approaches to implementing the WHT. The work reported in this paper is part of the SPIRAL project, an ongoing effort to automate the implementation and optimization of signal processing algorithms.
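The search itself is conceptually a small loop: generate candidate expressions, translate each to a program, time it, keep the fastest. A schematic sketch in which candidates are stubbed as plain callables (the SPIRAL machinery is far richer):

```python
import time

def fastest(candidates, inputs):
    """Time each candidate implementation on the same inputs; keep the best."""
    best, best_t = None, float("inf")
    for run in candidates:                 # each candidate is a callable program
        t0 = time.perf_counter()
        run(inputs)
        elapsed = time.perf_counter() - t0
        if elapsed < best_t:
            best, best_t = run, elapsed
    return best, best_t
```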
- Published
- 2002
21. A prototypical self-optimizing package for parallel implementation of fast signal transforms
- Author
- Jeremy Johnson and Kang Chen
- Subjects
- Signal processing, Speedup, Computer science, Walsh function, Fast Fourier transform, Concurrent computing, Multiprocessing, Parallel computing, Digital signal processing
- Abstract
This paper presents a self-adapting parallel package for computing the Walsh-Hadamard transform (WHT), a prototypical fast signal transform similar to the fast Fourier transform. Using a search over a space of mathematical formulas representing different algorithms to compute the WHT, the package finds the best parallel implementation on a given shared-memory multiprocessor. The search automatically finds the best combination of sequential and parallel code, leading to the most effective granularity, load balance, and cache utilization. Experimental results show the optimizations required to obtain nearly linear speedup on a sample symmetric multiprocessor.
- Published
- 2002