1,356 results
Search Results
2. Announcement of the principal findings and value addition in Computer Science research papers.
- Author
-
Shehzad, Wasima
- Subjects
COMPUTER science research, RHETORICAL analysis, DISCURSIVE practices, CORPORA, DISCOURSE analysis, COMPUTER training, COMPUTER systems, TECHNICAL writing, COMPUTER architecture
- Published
- 2010
3. Title Paper: Natural computing: A problem solving paradigm with granular information processing.
- Author
-
Pal, Sankar K. and Meher, Saroj K.
- Subjects
NATURAL computation, PROBLEM solving, INFORMATION processing, APPLICATION software, COMPUTER systems, COMPUTER science
- Abstract
Highlights: • Granular computing aspects of natural computing. • Review of different granular soft computing research. • Biological motivation, design principles, application areas, open research problems and challenging issues of these models.
- Published
- 2013
- Full Text
- View/download PDF
4. Desktop publishing and medical imaging: Paper as hardcopy medium for digital images
- Author
-
Stewart Denslow
- Subjects
Diagnostic Imaging ,Paper ,Time Factors ,Computer science ,computer.software_genre ,Digital image ,Software ,Microcomputers ,Computer Systems ,Digital image processing ,Computer Graphics ,Image Processing, Computer-Assisted ,Medical imaging ,Radiology, Nuclear Medicine and imaging ,Image analysis ,Publishing ,Radiological and Ultrasound Technology ,Multimedia ,Point (typography) ,business.industry ,computer.file_format ,Local Area Networks ,Magnetic Resonance Imaging ,Desktop publishing ,Computer Science Applications ,Radiology Information Systems ,Printing ,Image file formats ,Word Processing ,business ,computer
- Abstract
Desktop-publishing software and hardware have progressed to the point that many widely used word-processing programs can print high-quality digital images with many shades of gray from black to white. Accordingly, it should be relatively easy to print digital medical images on paper for reports, instructional materials, and research notes. Components were assembled that were necessary for extracting image data from medical imaging devices and converting the data to a form usable by word-processing software. A system incorporating these components was implemented in a medical setting and has been operating for 18 months. The use of this system by medical staff has been monitored.
- Published
- 1994
5. Reviewer bias in single- versus double-blind peer review.
- Author
-
Tomkins, Andrew, Min Zhang, and Heavlin, William D.
- Subjects
DATA mining, COMPUTER science, WEB search engines, INFORMATION asymmetry, COMPUTER systems
- Abstract
Peer review may be "single-blind," in which reviewers are aware of the names and affiliations of paper authors, or "double-blind," in which this information is hidden. Noting that computer science research often appears first or exclusively in peer-reviewed conferences rather than journals, we study these two reviewing models in the context of the 10th Association for Computing Machinery International Conference on Web Search and Data Mining, a highly selective venue (15.6% acceptance rate) in which expert committee members review full-length submissions for acceptance. We present a controlled experiment in which four committee members review each paper. Two of these four reviewers are drawn from a pool of committee members with access to author information; the other two are drawn from a disjoint pool without such access. This information asymmetry persists through the process of bidding for papers, reviewing papers, and entering scores. Reviewers in the single-blind condition typically bid for 22% fewer papers and preferentially bid for papers from top universities and companies. Once papers are allocated to reviewers, single-blind reviewers are significantly more likely than their double-blind counterparts to recommend papers from famous authors, top universities, and top companies for acceptance. The estimated odds multipliers are tangible, at 1.63, 1.58, and 2.10, respectively.
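To make the reported odds multipliers concrete, a probability can be converted to odds, scaled, and converted back. The sketch below is purely illustrative (the function name and the use of the venue's 15.6% base rate as the starting probability are assumptions, not part of the study):

```python
def apply_odds_multiplier(p, multiplier):
    """Scale the odds of an event by a multiplier and return the new probability."""
    odds = p / (1.0 - p)                  # probability -> odds
    new_odds = odds * multiplier          # apply the estimated odds multiplier
    return new_odds / (1.0 + new_odds)    # odds -> probability

# At a 15.6% base acceptance rate, a 1.63 odds multiplier corresponds
# to roughly a 23% acceptance probability.
boosted = apply_odds_multiplier(0.156, 1.63)
```

This shows why the paper calls the multipliers "tangible": under these illustrative assumptions, the 1.63 multiplier for famous authors lifts acceptance odds by well over half.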
- Published
- 2017
- Full Text
- View/download PDF
6. The impact of paper prototyping on card sorting: A case study
- Author
-
Slegers, Karin and Donoso, Verónica
- Subjects
SORTING (Electronic computers), COMPUTER users, COMPUTER engineering, MENTAL models theory (Communication), NEAR field communication, COMPUTER simulation, COMPUTER science, COMPUTER systems
- Abstract
Combining the techniques of paper prototyping and card sorting into a single session has the benefits of helping users to understand a new technology on the one hand, and of gaining insight into the users' mental models of that technology on the other. However, acquainting users with a new technology via a paper prototype might affect their mental models, as assessed with the card sorting technique. The aim of this paper was to explore the possibility of combining the two techniques in a single research session. Thirty-seven users participated in a study concerning a payment system based on Near Field Communication (NFC). Eight group sessions were organized, each including both a paper prototyping exercise and a card sorting exercise. The order of the exercises was alternated. The findings of this case study suggest that the paper prototyping exercise led to deeper insights into the participants' mental models in the subsequent card sorting exercise. At the same time, paper prototyping seemed to prevent participants from coming up with new names for their card sorting categories.
- Published
- 2012
- Full Text
- View/download PDF
7. 2013 International Symposium on Computer Architecture Influential Paper Award.
- Author
-
Martonosi, Margaret
- Subjects
COMPUTER architecture, COMPUTER input-output equipment, SYSTEMS development, COMPUTER science, COMPUTER systems
- Abstract
This column discusses the 2013 SIGARCH/TCCA Influential ISCA Paper Award, given to the authors of the paper "Pipeline Gating: Speculation Control for Energy Reduction."
- Published
- 2014
- Full Text
- View/download PDF
8. Underrepresentation of women in computer systems research.
- Author
-
Frachtenberg, Eitan and Kaner, Rhody D.
- Subjects
COMPUTER systems, WOMEN authors, GENDER inequality, COMPUTER science, ACQUISITION of data
- Abstract
The gender gap in computer science (CS) research is a well-studied problem, with an estimated ratio of 15%–30% women researchers. However, far less is known about gender representation in specific fields within CS. Here, we investigate the gender gap in one large field, computer systems. To this end, we collected data from 72 leading peer-reviewed CS conferences, totalling 6,949 accepted papers and 19,829 unique authors (2,946 women, 16,307 men, the rest unknown). We combined these data with external demographic and bibliometric data to evaluate the ratio of women authors and the factors that might affect this ratio. Our main findings are that women represent only about 10% of systems researchers, and that this ratio is not associated with various conference factors such as size, prestige, double-blind reviewing, and inclusivity policies. Author research experience also does not significantly affect this ratio, although author country and work sector do. The 10% ratio of women authors is significantly lower than the 16% in the rest of CS. Our findings suggest that focusing on inclusivity policies alone cannot address this large gap. Increasing women's participation in systems research will require addressing the systemic causes of their exclusion, which are even more pronounced in systems than in the rest of CS.
- Published
- 2022
- Full Text
- View/download PDF
9. Selecting Computer Software Packages-A Self Help Guide: Discussion Paper
- Author
-
G Stevens
- Subjects
Quality Assurance, Health Care ,business.industry ,Computer science ,General Medicine ,030227 psychiatry ,World Wide Web ,Self-help ,03 medical and health sciences ,0302 clinical medicine ,Software ,England ,Computer Systems ,Computer software ,Methods ,030212 general & internal medicine ,Family Practice ,business ,Research Article
- Published
- 1988
10. Deep Learning for Low-Dose CT Denoising Using Perceptual Loss and Edge Detection Layer
- Author
-
Paul Babyn, Javad Alirezaie, and Maryam Gholizadeh-Ansari
- Subjects
Computer science ,Noise reduction ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Residual ,Edge detection ,030218 nuclear medicine & medical imaging ,Convolution ,03 medical and health sciences ,Deep Learning ,0302 clinical medicine ,Computer Systems ,Humans ,Radiology, Nuclear Medicine and imaging ,Original Paper ,Radiological and Ultrasound Technology ,Artificial neural network ,business.industry ,Deep learning ,Pattern recognition ,Computer Science Applications ,Dilation (morphology) ,Neural Networks, Computer ,Artificial intelligence ,Artifacts ,Tomography, X-Ray Computed ,business ,030217 neurology & neurosurgery ,Smoothing
- Abstract
Low-dose CT denoising is a challenging task that has been studied by many researchers. Some studies have used deep neural networks to improve the quality of low-dose CT images and achieved fruitful results. In this paper, we propose a deep neural network that uses dilated convolutions with different dilation rates instead of standard convolutions, helping to capture more contextual information in fewer layers. We have also employed residual learning, creating shortcut connections that transmit image information from the early layers to later ones. To further improve the performance of the network, we have introduced a non-trainable edge detection layer that extracts edges in horizontal, vertical, and diagonal directions. Finally, we demonstrate that optimizing the network with a combination of mean-square error loss and perceptual loss preserves many structural details in the CT image. This objective function does not suffer from the over-smoothing and blurring effects caused by per-pixel loss, or from the grid-like artifacts resulting from perceptual loss. The experiments show that each modification to the network improves the outcome while changing the complexity of the network only minimally.
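The dilation idea in this abstract can be illustrated with a minimal 1-D example: a dilated kernel skips `dilation - 1` samples between taps, so its receptive field grows without adding weights. This is a generic sketch of the operation, not the authors' network:

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Valid-mode 1-D cross-correlation with a dilated kernel."""
    k = len(w)
    span = (k - 1) * dilation + 1  # effective receptive field of one layer
    return np.array([
        sum(w[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

x = np.arange(10.0)
y = dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=2)  # taps at offsets 0, 2, 4
```

Stacking three such 3-tap layers with dilations 1, 2, and 4 covers a 15-sample window, versus 7 samples for three undilated layers, which is the "more contextual information in fewer layers" effect the abstract describes.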
- Published
- 2019
11. Repeatability in Computer Systems Research.
- Author
-
COLLBERG, CHRISTIAN and PROEBSTING, TODD A.
- Subjects
COMPUTER systems, RESEARCH, COMPUTER science, DATA, COMPUTER file sharing
- Abstract
The article discusses the factors that affect the sharing of research artifacts relating to computer systems, according to the authors. Topics covered include the importance of sharing research artifacts for repeatability and benefaction, research studies that evaluated the willingness of computer science researchers to share code and data, and details relating to the three measures of weak repeatability determined by the authors.
- Published
- 2016
- Full Text
- View/download PDF
12. Special Issue: Selected papers of the 7th and 8th workshops on Logical and Semantic Frameworks with Applications (LSFA).
- Author
-
Finger, Marcelo and Kesner, Delia
- Subjects
SPECIAL issues of periodicals, SEMANTICS, COMPUTER systems, INFORMATION theory, COMPUTER science
- Published
- 2015
- Full Text
- View/download PDF
13. Microcomputer networking in the hospital environment: powerful computing for the common man? Discussion paper
- Author
-
Michael M. Webb-Peploe, Stephen W. Hughes, A. Crowther, and I. C. Cooper
- Subjects
Operations research ,business.industry ,Computer science ,Local area network ,Cardiology ,General Medicine ,Local Area Networks ,World Wide Web ,Software ,Microcomputers ,Computer Systems ,Microcomputer ,London ,Hospital Information Systems ,Humans ,Forms and Records Control ,business ,Research Article
- Published
- 1989
14. Estimating parameters of nonlinear dynamic systems in pharmacology using chaos synchronization and grid search
- Author
-
Sorell L. Schwartz, Aris Dokoumetzidis, Thang Ho, Nikhil Pillai, Robert R. Bies, and I. Freedman
- Subjects
Bridging (networking) ,Computer science ,030226 pharmacology & pharmacy ,Least squares ,03 medical and health sciences ,0302 clinical medicine ,Computer Systems ,Chaos synchronization ,Synchronization (computer science) ,Parameter estimation ,Humans ,Computer Simulation ,Chaotic system ,Pharmacology ,Original Paper ,Models, Statistical ,Estimation theory ,Explained sum of squares ,Delay differential equation ,Nonlinear system ,Nonlinear Dynamics ,030220 oncology & carcinogenesis ,Hyperparameter optimization ,Algorithm ,Algorithms
- Abstract
Bridging fundamental approaches to model optimization for pharmacometricians, systems pharmacologists, and statisticians is a critical issue. These fields rely primarily on Maximum Likelihood and Extended Least Squares metrics with iterative estimation of parameters. Our research combines adaptive chaos synchronization and grid search to estimate physiological and pharmacological systems with emergent properties, exploring deterministic methods that are more appropriate than, and potentially superior in performance to, classical numerical approaches that minimize the sum of squares or maximize the likelihood. We illustrate these issues with an established model of cortisol in humans with nonlinear dynamics. The model describes cortisol kinetics over time, including its chaotic oscillations, by a delay differential equation. We demonstrate that chaos synchronization helps to avoid the tendency of gradient-based optimization algorithms to end up in a local minimum. The subsequent analysis illustrates that hybrid adaptive chaos synchronization for estimation of linear parameters, with coarse-to-fine grid search for optimal values of nonlinear parameters, can be applied iteratively to accurately estimate parameters and effectively track trajectories for a wide class of noisy chaotic systems. Electronic supplementary material: The online version of this article (10.1007/s10928-019-09629-4) contains supplementary material, which is available to authorized users.
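The coarse-to-fine grid search mentioned in this abstract can be sketched generically: scan a grid, then repeatedly narrow the window around the best point. The function name, grid sizes, and objective below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def coarse_to_fine_search(objective, lo, hi, levels=3, points=11):
    """Minimize a 1-D objective by scanning a grid and zooming in on the best cell."""
    best = lo
    for _ in range(levels):
        grid = np.linspace(lo, hi, points)
        best = grid[np.argmin([objective(g) for g in grid])]
        step = (hi - lo) / (points - 1)
        lo, hi = best - step, best + step  # shrink the window around the minimum
    return best

# Toy example: recover the minimum of a quadratic within [0, 1].
estimate = coarse_to_fine_search(lambda k: (k - 0.37) ** 2, 0.0, 1.0)
```

Each level shrinks the search window by a constant factor, so precision improves geometrically with the number of levels while the total number of objective evaluations stays `levels * points`.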
- Published
- 2019
15. Guest Editor's Introduction.
- Author
-
Ayguade, Eduard
- Subjects
COMPUTER programming, PARALLEL processing, PARALLEL programming, COMPUTER systems, COMPUTER science
- Abstract
This article introduces the articles in volume 31, number 3 of the "International Journal of Parallel Programming". The volume is devoted to a collection of papers on the parallel programming API OpenMP. The papers were selected from presentations at the second International Workshop on OpenMP: Experiences and Implementations (WOMPEI 2002), held as part of the ISHPC-IV International Symposium on High-Performance Computing in Kansai Science City, Japan. The collection includes four papers from some of the active research groups working on OpenMP-related issues and its evaluation. It is the author's hope that the papers in the collection will prove interesting and useful to readers, and that the issues raised will stimulate many of them to further research on OpenMP.
- Published
- 2003
- Full Text
- View/download PDF
16. Integrating Option Grid Patient Decision Aids in the Epic Electronic Health Record: Case Study at 5 Health Systems
- Author
-
Danielle Schubbe, Paul Barr, Peter Scalia, Marie-Anne Durand, Rachel C Forcino, Farhan Ahmad, and Glyn Elwyn
- Subjects
Process management ,020205 medical informatics ,Computer science ,Interoperability ,Computer applications to medicine. Medical informatics ,shared decision making ,R858-859.7 ,Health Informatics ,02 engineering and technology ,Troubleshooting ,Decision Support Techniques ,03 medical and health sciences ,0302 clinical medicine ,Computer Systems ,Health care ,0202 electrical engineering, electronic engineering, information engineering ,Decision aids ,HL7 SMART on FHIR ,Electronic Health Records ,Humans ,030212 general & internal medicine ,patient decision aids ,implementation ,Protected health information ,Original Paper ,business.industry ,Timeline ,electronic health record ,Workflow ,Facilitator ,Public aspects of medicine ,RA1-1270 ,business ,Software
- Abstract
Background: Some researchers argue that the successful implementation of patient decision aids (PDAs) into clinical workflows depends on their integration into electronic health records (EHRs). Anecdotally, we know that EHR integration is a complex and time-consuming task; yet, the process has not been examined in detail. As part of an implementation project, we examined the work involved in integrating an encounter PDA for symptomatic uterine fibroids into Epic EHR systems. Objective: This study aims to identify the steps and time required to integrate a PDA into the Epic EHR system and to examine facilitators and barriers to the integration effort. Methods: We conducted a case study at 5 academic medical centers in the United States. A clinical champion at each institution liaised with their Epic EHR team to initiate the integration of the uterine fibroid Option Grid PDAs into clinician-facing menus. We scheduled regular meetings with the Epic software analysts and an expert Epic technologist to discuss how best to integrate the tools into Epic for use by clinicians with patients. The meetings were recorded and transcribed. Two researchers independently coded the transcripts and field notes before categorizing the codes and conducting a thematic analysis to identify the facilitators and barriers to EHR integration. The steps were reviewed and edited by an Epic technologist to ensure their accuracy. Results: Integrating the uterine fibroid Option Grid PDA into clinician-facing menus required an 18-month timeline and a 6-step process: task priority negotiation with Epic software teams, security risk assessment, technical review, Epic configuration, troubleshooting, and launch. The key facilitators of the process were the clinical champions, who advocated for integration at the institutional level, and the presence of an experienced technologist who guided Epic software analysts during the build. Another facilitator was the use of an emerging industry-standard app platform (Health Level 7 Substitutable Medical Applications and Reusable Technologies on Fast Healthcare Interoperability Resources) as a means of integrating the Option Grid into existing systems. This standard platform enabled clinicians to access the tools by using single sign-on credentials and prevented protected health information from leaving the EHR. Key barriers were the lack of control over the Option Grid product developed by EBSCO (Elton B Stephens Company) Health; the periodic Epic upgrades that can result in a pause on new software configurations; and the unforeseen software problems with Option Grid (ie, inability to print the PDA), which delayed the launch of the PDA. Conclusions: The integration of PDAs into the Epic EHR system required a 6-step process and an 18-month timeline. The process required support and prioritization from a clinical champion, guidance from an experienced technologist, and a willing EHR software developer team.
- Published
- 2020
17. Performance of digital contact tracing tools for COVID-19 response in Singapore : cross-sectional study
- Author
-
Yee Mun Lee, Huiling Guo, Zhilian Huang, Eu Chin Ho, Angela Chow, Hou Ang, and Lee Kong Chian School of Medicine (LKCMedicine)
- Subjects
020205 medical informatics ,Coronavirus disease 2019 (COVID-19) ,Cross-sectional study ,Computer science ,infectious disease ,Pneumonia, Viral ,Wearable computer ,Real-time Locating Systems ,Health Informatics ,Infectious Disease ,02 engineering and technology ,contact tracing ,Wearable Electronic Devices ,03 medical and health sciences ,COVID-19 Testing ,0302 clinical medicine ,Time frame ,Computer Systems ,real-time locating systems ,electronic medical records ,0202 electrical engineering, electronic engineering, information engineering ,medicine ,Electronic Health Records ,Humans ,Medicine [Science] ,030212 general & internal medicine ,Android (operating system) ,Pandemics ,Original Paper ,Physician-Patient Relations ,Singapore ,Clinical Laboratory Techniques ,Medical record ,public health ,COVID-19 ,Reproducibility of Results ,medicine.disease ,Mobile Applications ,Real-time locating system ,Cross-Sectional Studies ,Medical emergency ,Coronavirus Infections ,Contact tracing
- Abstract
Background: Effective contact tracing is labor intensive and time sensitive during the COVID-19 pandemic, but also essential in the absence of effective treatment and vaccines. Singapore launched the first Bluetooth-based contact tracing app, TraceTogether, in March 2020 to augment Singapore's contact tracing capabilities. Objective: This study aims to compare the performance of the contact tracing app TraceTogether with that of a wearable tag-based real-time locating system (RTLS) and to validate both against the electronic medical records at the National Centre for Infectious Diseases (NCID), the national referral center for COVID-19 screening. Methods: All patients and physicians in the NCID screening center were issued RTLS tags (CADI Scientific) for contact tracing. In total, 18 physicians were deployed to the NCID screening center from May 10 to May 20, 2020. The physicians activated the TraceTogether app (version 1.6; GovTech) on their smartphones during shifts and urged their patients to use the app. We compared patient contacts identified by TraceTogether with those identified by RTLS tags within the NCID vicinity during the physicians' 10-day posting. We also validated both digital contact tracing tools by verifying physician-patient contacts against the electronic medical records of 156 patients who attended the NCID screening center over a 24-hour time frame within the study period. Results: RTLS tags had a high sensitivity of 95.3% for detecting patient contacts identified either by the system or by TraceTogether, while TraceTogether had an overall sensitivity of 6.5% and performed significantly better on Android phones than iPhones (Android: 9.7%, iPhone: 2.7%). Conclusions: TraceTogether had a much lower sensitivity than RTLS tags for identifying patient contacts in a clinical setting. Although the tag-based RTLS performed well for contact tracing in a clinical setting, its implementation in the community would be more challenging than TraceTogether. Given the uncertainty of the adoption and capabilities of contact tracing apps, policy makers should be cautioned against overreliance on such apps for contact tracing. Nonetheless, leveraging technology to augment conventional manual contact tracing is a necessary move for returning some normalcy to life during the long haul of the COVID-19 pandemic.
- Published
- 2020
18. E2mC: Improving Emergency Management Service Practice through Social Media and Crowdsourcing Analysis in Near Real Time
- Author
-
Clemens Havas, Jose Luis Fernandez-Marquez, Chiara Francalanci, Gabriele Scalia, M.R. Mondardini, Milan Kalas, Birgit Kirsch, Barbara Pernici, Bernd Resch, Tim Van Achte, Domenico Grandoni, Valerio Lorini, Gunter Zeug, Stefan Rüping, and Publica
- Subjects
Service (systems architecture) ,Emergency Medical Services ,Geospatial analysis ,architecture ,Time Factors ,Civil defense ,Computer science ,social media ,0211 other engineering and technologies ,02 engineering and technology ,crowdsourcing ,geospatial analysis ,machine learning ,image classification ,geolocation ,3D reconstruction ,disaster management ,near real time ,computer.software_genre ,Crowdsourcing ,lcsh:Chemical technology ,Biochemistry ,Analytical Chemistry ,Disasters ,Computer Systems ,0202 electrical engineering, electronic engineering, information engineering ,Emergency medical services ,Social media ,lcsh:TP1-1185 ,Electrical and Electronic Engineering ,Instrumentation ,021101 geological & geomatics engineering ,Emergency management ,Event (computing) ,business.industry ,Concept Paper ,Data science ,Atomic and Molecular Physics, and Optics ,Systems architecture ,020201 artificial intelligence & image processing ,business ,computer
- Abstract
In the first hours of a disaster, up-to-date information about the area of interest is crucial for effective disaster management. However, due to the delay induced by collecting and analysing satellite imagery, disaster management systems like the Copernicus Emergency Management Service (EMS) are currently not able to provide information products until 48–72 h after a disaster event has occurred. While satellite imagery is still a valuable source for disaster management, information products can be improved by complementing them with user-generated data like social media posts or crowdsourced data. The advantage of these new kinds of data is that they are continuously produced in a timely fashion, because users actively participate throughout an event and share related information. The research project Evolution of Emergency Copernicus services (E2mC) aims to integrate these novel data into a new EMS service component called Witness, which is presented in this paper. In this way, the timeliness and accuracy of geospatial information products provided to civil protection authorities can be improved by leveraging user-generated data. This paper sketches the developed system architecture, describes applicable scenarios, and presents several preliminary case studies, providing evidence that the scientific and operational goals have been achieved.
- Published
- 2017
- Full Text
- View/download PDF
19. A Survey on Quantum Computing for Recommendation Systems.
- Author
-
Pilato, Giovanni and Vella, Filippo
- Subjects
RECOMMENDER systems, QUANTUM computing, COMPUTER systems, SCIENTIFIC community, COMPUTER science
- Abstract
Recommendation systems play a key role in everyday life; they are used to suggest items that are selected among many candidates that usually belong to huge datasets. The recommendations require a good performance both in terms of speed and the effectiveness of the provided suggestions. At the same time, one of the most challenging approaches in computer science is quantum computing. This computational paradigm can provide significant acceleration for resource-demanding and time-consuming algorithms. It has become very popular in recent years, thanks to the different tools available to the scientific and technical communities. Since performance has great relevance in recommendation systems, many researchers in the scientific community have recently proposed different improvements that exploit quantum approaches to provide better performance in recommendation systems. This paper gives an overview of the current state of the art in the literature, outlining the different proposed methodologies and techniques and highlighting the challenges that arise from this new approach to the recommendation systems domain.
- Published
- 2023
- Full Text
- View/download PDF
20. Automatic motion compensation for structured illumination endomicroscopy using a flexible fiber bundle
- Author
-
Michael Hughes and Andrew D. Thrapp
- Subjects
Paper ,Laser scanning ,Optical sectioning ,Computer science ,Biomedical Engineering ,mosaicking ,Image registration ,Field of view ,01 natural sciences ,Digital micromirror device ,law.invention ,010309 optics ,Biomaterials ,Motion ,Computer Systems ,law ,0103 physical sciences ,Image Processing, Computer-Assisted ,Endomicroscopy ,Fiber Optic Technology ,Computer vision ,QC355 ,digital micromirror device ,Pattern orientation ,Lighting ,Microscopy ,Motion compensation ,business.industry ,Equipment Design ,endomicroscopy ,Atomic and Molecular Physics, and Optics ,Electronic, Optical and Magnetic Materials ,motion compensation ,structured illumination ,Artificial intelligence ,Artifacts ,business
- Abstract
Significance: Confocal laser scanning enables optical sectioning in clinical fiber bundle endomicroscopes, but lower-cost, simplified endomicroscopes use widefield incoherent illumination instead. Optical sectioning can be introduced in these simple systems using structured illumination microscopy (SIM), a multiframe digital subtraction process. However, SIM results in artifacts when the probe is in motion, making the technique difficult to use in vivo and preventing the use of mosaicking to synthesize a larger effective field of view (FOV). Aim: We report and validate an automatic motion compensation technique to overcome motion artifacts and allow generation of mosaics in SIM endomicroscopy. Approach: Motion compensation is achieved using image registration and real-time pattern orientation correction via a digital micromirror device. We quantify the similarity of moving-probe reconstructions to those acquired with a stationary probe using the relative mean of the absolute differences (MAD). We further demonstrate mosaicking with a moving probe in mechanical and freehand operation. Results: Reconstructed SIM images show an improvement in the MAD from 0.85 to 0.13 for lens paper and from 0.27 to 0.12 for bovine tissue. Mosaics also show vastly reduced artifacts. Conclusion: The reduction in motion artifacts in individual SIM reconstructions leads to mosaics that more faithfully represent the morphology of tissue, giving clinicians a larger effective FOV than the probe itself can provide.
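A plausible reading of the relative MAD metric reported above is the mean absolute difference between the moving- and stationary-probe reconstructions, normalized by the reference image; the exact normalization used below is an assumption for illustration, not taken from the paper:

```python
import numpy as np

def relative_mad(moving, stationary):
    """Mean absolute difference, normalized by the mean magnitude of the reference."""
    moving = np.asarray(moving, float)
    stationary = np.asarray(stationary, float)
    return np.mean(np.abs(moving - stationary)) / np.mean(np.abs(stationary))

# Identical reconstructions score 0; larger values mean the moving-probe
# reconstruction diverges more from the stationary reference.
```

Under this reading, the reported drop from 0.85 to 0.13 for lens paper means the compensated moving-probe reconstruction deviates from the stationary reference by only about 13% of the reference signal, versus 85% without compensation.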
- Published
- 2020
21. Development of a real-time internal and external marker tracking system for particle therapy: a phantom study using patient tumor trajectory data
- Author
-
Wonjoong Cheon, Junsang Cho, Hyunuk Jung, Sanghee Ahn, Heesoon Sheen, Hee Chul Park, and Youngyih Han
- Subjects
marker tracking ,medicine.medical_specialty ,Time Factors ,Computer science ,Health, Toxicology and Mutagenesis ,medicine.medical_treatment ,education ,Simulation system ,patient external surface ,Imaging phantom ,internal/external fiducials ,030218 nuclear medicine & medical imaging ,Motion ,03 medical and health sciences ,0302 clinical medicine ,Computer Systems ,Neoplasms ,Regular Paper ,medicine ,Humans ,Radiology, Nuclear Medicine and imaging ,Medical physics ,Computer vision ,Radiation ,Particle therapy ,Phantoms, Imaging ,business.industry ,Marker tracking ,Process (computing) ,Reproducibility of Results ,Tracking system ,Radiation therapy ,respiratory gating ,correlation ,030220 oncology & carcinogenesis ,Trajectory ,Artificial intelligence ,business
- Abstract
Target motion–induced uncertainty in particle therapy is more complicated than that in X-ray therapy, requiring more accurate motion management. Therefore, a hybrid motion-tracking system that can track internal tumor motion as well as an external surrogate of tumor motion was developed. Recently, many correlation tests between internal and external markers in X-ray therapy have been developed; however, the accuracy of such internal/external marker tracking systems, especially in particle therapy, has not yet been sufficiently tested. In this article, the process of installing an in-house hybrid internal/external motion-tracking system is described, and the accuracy level of the tracking system was acquired. Our results demonstrated that the developed in-house external/internal combined tracking system has submillimeter accuracy and can be used clinically in a particle therapy system as well as in a simulation system for moving-tumor treatment.
- Published
- 2017
22. Adapting State-of-the-Art Deep Language Models to Clinical Information Extraction Systems: Potentials, Challenges, and Solutions
- Author
-
Zhou, Liyuan, Suominen, Hanna, and Gedeon, Tom
- Subjects
020205 medical informatics ,Computer science ,Health Informatics ,Context (language use) ,02 engineering and technology ,information storage and retrieval ,computer.software_genre ,Health informatics ,Task (project management) ,Domain (software engineering) ,03 medical and health sciences ,0302 clinical medicine ,Health Information Management ,computer systems ,0202 electrical engineering, electronic engineering, information engineering ,medical informatics ,030212 general & internal medicine ,nursing records ,Original Paper ,business.industry ,Deep learning ,deep learning ,artificial intelligence ,patient handoff ,Information extraction ,Artificial intelligence ,Language model ,Transfer of learning ,business ,computer ,Natural language processing - Abstract
Background: Deep learning (DL) has been applied successfully to speech recognition, visual object recognition, and object detection, and in domains such as drug discovery and genomics. Natural language processing has likewise made noticeable progress in artificial intelligence, creating an opportunity to improve the accuracy and human-computer interaction of clinical informatics. However, because of differences in vocabulary and context between a clinical environment and generic English, transplanting language models directly from state-of-the-art methods to real-world health care settings is not always satisfactory. Moreover, legal restrictions on using privacy-sensitive patient records hinder progress in applying machine learning (ML) to clinical language processing. Objective: The aim of this study was to investigate 2 ways to adapt state-of-the-art language models to extracting patient information from free-form clinical narratives, so as to populate a handover form at a nursing shift change automatically for proofing and revision by hand: first, by using domain-specific word representations and, second, by using transfer learning models to adapt knowledge from general to clinical English. We describe the practical problem, frame it as an ML task known as information extraction, propose methods for solving the task, and evaluate their performance. Methods: First, word representations trained on different domains served as the input to a DL system for information extraction. Second, transfer learning was applied to adapt knowledge learned from general text sources to the task domain. The goal was to improve extraction performance, especially for classes that were topically related but did not have a sufficient number of model solutions available for ML directly from the target domain.
A total of 3 independent datasets were generated for this task, and they were used as the training (101 patient reports), validation (100 patient reports), and test (100 patient reports) sets in our experiments. Results: Our system is now the state-of-the-art in this task. Domain-specific word representations improved the macroaveraged F1 by 3.4%. Transferring the knowledge from general English corpora to the task-specific domain contributed a further 7.1% improvement. The best performance in populating the handover form with 37 headings was the macroaveraged F1 of 41.6% and F1 of 81.1% for filtering out irrelevant information. Performance differences between this system and its baseline were statistically significant (P
- Published
- 2019
23. Editorial Definition for Computing Practices.
- Author
-
Sibley, Edgar H. and Aiken, Robert M.
- Subjects
NEWSPAPER sections, columns, etc. ,COMPUTER systems ,ELECTRONIC data processing ,COMPUTER science - Abstract
This article offers information on the Computing Practices section of the magazine "Communications of the ACM." The section focuses on articles with a message about general ideas and techniques that can be transferred from one place to another. Instead of technical papers, the section concentrates on discussions of how, what, why and when to use the computing methods and systems. Topics that cover areas of interest to readers of the section include improvement of the systems design process, future of computing, and implication of new theory to application systems.
- Published
- 1980
24. A grid-enabled web service for low-resolution crystal structure refinement
- Author
-
Daniel O'Donovan, Axel T. Brunger, Ian Stokes-Rees, Stephen C. Blacklow, Yunsun Nam, Gunnar F. Schröder, and Piotr Sliz
- Subjects
Open science ,Computer science ,Distributed computing ,ComputerApplications_COMPUTERSINOTHERSYSTEMS ,Crystallography, X-Ray ,computer.software_genre ,GeneralLiterature_MISCELLANEOUS ,deformable elastic network restraints ,User-Computer Interface ,03 medical and health sciences ,0302 clinical medicine ,Software ,Cyberinfrastructure ,Computer Systems ,Structural Biology ,ComputingMilieux_COMPUTERSANDEDUCATION ,030304 developmental biology ,Internet ,0303 health sciences ,SIMPLE (military communications protocol) ,business.industry ,Computational Biology ,General Medicine ,Grid ,Research Papers ,DEN refinement ,The Internet ,low-resolution refinement ,User interface ,Web service ,business ,computer ,030217 neurology & neurosurgery - Abstract
The deformable elastic network (DEN) method for reciprocal-space crystallographic refinement improves crystal structures, especially at resolutions lower than 3.5 Å. The DEN web service presented here intends to provide structural biologists with access to resources for running computationally intensive DEN refinements., Deformable elastic network (DEN) restraints have proved to be a powerful tool for refining structures from low-resolution X-ray crystallographic data sets. Unfortunately, optimal refinement using DEN restraints requires extensive calculations and is often hindered by a lack of access to sufficient computational resources. The DEN web service presented here intends to provide structural biologists with access to resources for running computationally intensive DEN refinements in parallel on the Open Science Grid, the US cyberinfrastructure. Access to the grid is provided through a simple and intuitive web interface integrated into the SBGrid Science Portal. Using this portal, refinements combined with full parameter optimization that would take many thousands of hours on standard computational resources can now be completed in several hours. An example of the successful application of DEN restraints to the human Notch1 transcriptional complex using the grid resource, and summaries of all submitted refinements, are presented as justification.
- Published
- 2012
25. COVID-19 Automatic Detection Using Deep Learning.
- Author
-
Sanajalwe, Yousef, Anbar, Mohammed, and Al-E'mari, Salam
- Subjects
DEEP learning ,COMPUTER systems ,INTERNET of things ,COMPUTER science ,COVID-19 pandemic - Abstract
The novel coronavirus disease 2019 (COVID-19) is a pandemic disease that is currently affecting over 200 countries around the world and impacting billions of people. The first step to mitigate and control its spread is to identify and isolate the infected people. Because reverse transcription polymerase chain reaction (RT-PCR) tests are in short supply, it is important to discover suspected COVID-19 cases as early as possible by other means, such as radiologists' analysis of CT scans and chest X-rays. However, chest X-ray analysis is relatively time-consuming, requiring more than 15 minutes per case. In this paper, a novel automated detection model is proposed to perform real-time detection of COVID-19 cases. The proposed model consists of three main stages: image segmentation using the Harris Hawks optimizer, synthetic image augmentation using an enhanced Wasserstein and Auxiliary Classifier Generative Adversarial Network, and image classification using a Convolutional Neural Network. Raw chest X-ray image datasets are used to train and test the proposed model. Experiments demonstrate that the proposed model is very efficient in the automatic detection of COVID-19 positive cases, achieving 99.4% accuracy, 99.15% precision, 99.35% recall, 99.25% F-measure, and 98.5% specificity. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
26. The caBIG® Life Science Business Architecture Model
- Author
-
Anna T. Fernandez, Lauren Becnel Boyd, Stephen Goldstein, Benjamin Tycko, Juli Klemm, Michele Ehlman, Elaine T. Freund, Scott P. Hunicke-Smith, Uma R. Chandran, David Steffen, Robert A. Dennis, and Grace A. Stafford
- Subjects
Statistics and Probability ,Biomedical Research ,Computer science ,Databases and Ontologies ,Biochemistry ,Software ,Resource (project management) ,Unified Modeling Language ,Computer Systems ,Neoplasms ,Business architecture ,Architecture ,Molecular Biology ,computer.programming_language ,business.industry ,Software development ,Computational Biology ,Original Papers ,Data science ,National Cancer Institute (U.S.) ,United States ,Computer Science Applications ,Computational Mathematics ,Vocabulary, Controlled ,Computational Theory and Mathematics ,Software engineering ,business ,computer - Abstract
Motivation: Business Architecture Models (BAMs) describe what a business does, who performs the activities, where and when activities are performed, how activities are accomplished and which data are present. The purpose of a BAM is to provide a common resource for understanding business functions and requirements and to guide software development. The cancer Biomedical Informatics Grid (caBIG®) Life Science BAM (LS BAM) provides a shared understanding of the vocabulary, goals and processes that are common in the business of LS research. Results: LS BAM 1.1 includes 90 goals and 61 people and groups within Use Case and Activity Unified Modeling Language (UML) Diagrams. Here we report on the model's current release, LS BAM 1.1, its utility and usage, and plans for future use and continuing development for future releases. Availability and Implementation: The LS BAM is freely available as UML, PDF and HTML (https://wiki.nci.nih.gov/x/OFNyAQ). Contact: lbboyd@bcm.edu; laurenbboyd@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online.
- Published
- 2011
27. New paradigm for macromolecular crystallography experiments at SSRL: automated crystal screening and remote data collection
- Author
-
Henk van den Bedem, Irimpan I. Mathews, Pete W. Dunten, S.E. McPhillips, Ashley M. Deacon, J. Song, R. Paul Phizackerley, Guenter Wolf, Paul J. Ellis, Clyde A. Smith, K. Sharp, Nicholas K. Sauter, Ana Gonzalez, Penjit Moorhead, Mitch Miller, Peter Kuhn, Hsui Chui, Michael Hollenbeck, Thomas Eriksson, Aina E. Cohen, Timothy M. McPhillips, S. Michael Soltis, and Irina Tsyba
- Subjects
Research groups ,Computer science ,030303 biophysics ,Crystallography, X-Ray ,Computer Communication Networks ,User-Computer Interface ,03 medical and health sciences ,Computer Systems ,Structural Biology ,Component (UML) ,Computer communication networks ,030304 developmental biology ,robotics ,Electronic Data Processing ,0303 health sciences ,remote crystallography data collection ,Data collection ,business.industry ,Data Collection ,Macromolecular crystallography ,General Medicine ,Research Papers ,Automation ,Crystallography ,Multiprotein Complexes ,User interface ,Crystallization ,business ,Computer hardware - Abstract
Through the combination of robust mechanized experimental hardware and a flexible control system with an intuitive user interface, SSRL researchers have screened over 200 000 biological crystals for diffraction quality in an automated fashion. Three quarters of SSRL researchers are using these data-collection tools from remote locations., Complete automation of the macromolecular crystallography experiment has been achieved at SSRL through the combination of robust mechanized experimental hardware and a flexible control system with an intuitive user interface. These highly reliable systems have enabled crystallography experiments to be carried out from the researchers’ home institutions and other remote locations while retaining complete control over even the most challenging systems. A breakthrough component of the system, the Stanford Auto-Mounter (SAM), has enabled the efficient mounting of cryocooled samples without human intervention. Taking advantage of this automation, researchers have successfully screened more than 200 000 samples to select the crystals with the best diffraction quality for data collection as well as to determine optimal crystallization and cryocooling conditions. These systems, which have been deployed on all SSRL macromolecular crystallography beamlines and several beamlines worldwide, are used by more than 80 research groups in remote locations, establishing a new paradigm for macromolecular crystallography experimentation.
- Published
- 2008
28. Seven Durable Ideas
- Author
-
John Glaser
- Subjects
Knowledge management ,Delivery of Health Care, Integrated ,Computer science ,business.industry ,Field (Bourdieu) ,Health Informatics ,Public relations ,Viewpoint Paper ,Organizational Affiliation ,Incrementalism ,Computer Systems ,Clinical information ,Health care ,Information system ,Architecture ,business ,Centrality ,Implementation ,Information Systems - Abstract
Partners Healthcare and its affiliated hospitals have a long track record of accomplishments in clinical information systems implementation and research. Seven ideas have shaped the information systems strategies and tactics at Partners: centrality of processes, organizational partnerships, progressive incrementalism, agility, architecture, embedded research, and engagement with the field. This article reviews these ideas and discusses the rationale for them and the steps taken to put them into practice.
- Published
- 2008
29. ConfVD: System Reactions Analysis and Evaluation Through Misconfiguration Injection.
- Author
-
Li, Shanshan, Li, Wang, Liao, Xiangke, Peng, Shaoliang, Zhou, Shulin, Jia, Zhouyang, and Wang, Teng
- Subjects
COMPUTER software ,OPEN source software ,COMPUTER systems ,SOFTWARE engineering ,COMPUTER science - Abstract
In recent years, misconfigurations have become one of the major causes of software system failures, resulting in numerous service outages. Worse, misconfigurations are also costly to diagnose and troubleshoot. It remains a great challenge for sysadmins (system administrators) to detect, diagnose, and troubleshoot these misconfigurations. Unlike software bugs, misconfigurations often stem from sysadmins' mistakes. Developers and researchers are attempting to improve system reactions to misconfigurations to ease the burden of sysadmins' diagnoses. Such efforts would greatly benefit from techniques that can comprehensively detect bad system reactions through injected misconfigurations. Unfortunately, few such studies have achieved this goal in the past, primarily because they relied only on generic alterations and lacked a way to systematically generate misconfigurations. In this paper, we study eight mature open-source and commercial software packages and derive a fine-grained classification of option types. Based on this classification, we use Augmented Backus–Naur Form to summarize and extract the syntactic and semantic constraints of each type. To generate comprehensive misconfigurations in the test systems, we propose misconfiguration generation methods for our constraints. We implement a tool named Configuration Vulnerability Detector (ConfVD) to conduct misconfiguration injection and further analyze the systems' reactions to various misconfigurations. We carried out comprehensive analyses of Apache Httpd, MySQL, PostgreSQL, and Yum. The results show that our option classification covers 96% of the 1582 options in the above-mentioned systems. Our constraints are more fine-grained than those of previous works, and their accuracy was found to be 91% (ascertained by manual verification).
Our technique improves on generic alteration approaches that lack constraints: we found that ConfVD detects nearly three times as many bad reactions as ConfErr. In total, we found 65 bad reactions in the systems tested, and our fine-grained constraints contributed 27.7% more bad reactions than techniques using only coarse-grained constraints. [ABSTRACT FROM AUTHOR]
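The constraint-violating injection idea in the abstract above can be sketched minimally. The option types, generator rules, and Httpd-like option names below are illustrative assumptions, not ConfVD's actual classification or code:

```python
import random

# One toy generator per option type: each returns a value that violates
# the type's syntactic constraints (range, character set, or enumeration).
GENERATORS = {
    "port":    lambda v: random.choice([-1, 70000, "eighty"]),  # out of range / wrong type
    "path":    lambda v: str(v) + "\x00",                       # illegal character in a path
    "boolean": lambda v: "maybe",                               # outside the {on, off} enum
}

def inject(config, option, opt_type):
    """Return a copy of `config` with one option replaced by a misconfiguration."""
    bad = dict(config)
    bad[option] = GENERATORS[opt_type](config[option])
    return bad

# Hypothetical Apache-Httpd-like configuration
cfg = {"Listen": 80, "DocumentRoot": "/var/www", "KeepAlive": "on"}
print(inject(cfg, "KeepAlive", "boolean"))
```

A test harness would feed each injected configuration to the system under test and record whether it reacts well (clear error message) or badly (silent misbehavior or crash).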
- Published
- 2018
- Full Text
- View/download PDF
30. Pivot Tracing: Dynamic Causal Monitoring for Distributed Systems.
- Author
-
Mace, Jonathan, Roelke, Ryan, and Fonseca, Rodrigo
- Subjects
COMPUTER systems ,DEBUGGING ,COMPUTER science ,ELECTRONIC data processing ,QUERY languages (Computer science) - Abstract
Monitoring and troubleshooting distributed systems are notoriously difficult; potential problems are complex, varied, and unpredictable. The monitoring and diagnosis tools commonly used today—logs, counters, and metrics—have two important limitations: what gets recorded is defined a priori, and the information is recorded in a component- or machine-centric way, making it extremely hard to correlate events that cross these boundaries. This paper presents Pivot Tracing, a monitoring framework for distributed systems that addresses both limitations by combining dynamic instrumentation with a novel relational operator: the happened-before join. Pivot Tracing gives users, at runtime, the ability to define arbitrary metrics at one point of the system, while being able to select, filter, and group by events meaningful at other parts of the system, even when crossing component or machine boundaries. Pivot Tracing does not correlate cross-component events using expensive global aggregations, nor does it perform offline analysis. Instead, Pivot Tracing directly correlates events as they happen by piggybacking metadata alongside requests as they execute. This gives Pivot Tracing low runtime overhead—less than 1% for many cross-component monitoring queries. [ABSTRACT FROM AUTHOR]
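The happened-before join described above can be illustrated with a minimal sketch: metadata recorded at one tracepoint travels with the request as "baggage," so a tracepoint elsewhere can group its metric by an attribute that was only known upstream. All names here are hypothetical; Pivot Tracing's real query language and dynamic instrumentation are far richer:

```python
from collections import defaultdict

class Request:
    """Carries piggybacked metadata ("baggage") along its execution path."""
    def __init__(self):
        self.baggage = {}

def client_tracepoint(req, client_id):
    # Record an attribute where it is known: at the client-facing tier.
    req.baggage["client"] = client_id

def disk_tracepoint(req, bytes_read, metrics):
    # Define a metric here, grouped by an attribute set at an earlier
    # tracepoint -- the essence of the happened-before join.
    metrics[req.baggage.get("client", "unknown")] += bytes_read

metrics = defaultdict(int)
for client, nbytes in [("A", 100), ("B", 50), ("A", 25)]:
    req = Request()
    client_tracepoint(req, client)         # upstream component
    disk_tracepoint(req, nbytes, metrics)  # downstream component
print(dict(metrics))  # {'A': 125, 'B': 50}
```

No global aggregation is needed: the join happens as each request executes, which is why the overhead stays low.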
- Published
- 2020
- Full Text
- View/download PDF
31. High-performance hardware implementation of a parallel database search engine for real-time peptide mass fingerprinting
- Author
-
Istvan Bogdan, Jenny Rivers, Daniel Coca, and Robert J. Beynon
- Subjects
Statistics and Probability ,Databases, Factual ,Computer science ,Proteolysis ,Information Storage and Retrieval ,Peptide ,Mass spectrometry ,Biochemistry ,Peptide Mapping ,Mass Spectrometry ,Software ,Peptide mass fingerprinting ,Computer Systems ,medicine ,Molecular Biology ,chemistry.chemical_classification ,medicine.diagnostic_test ,business.industry ,Peptide mapping ,Parallel database ,Fingerprint (computing) ,A protein ,Equipment Design ,Trypsin ,Original Papers ,Reconfigurable computing ,Computer Science Applications ,Equipment Failure Analysis ,Computational Mathematics ,Computational Theory and Mathematics ,chemistry ,Embedded system ,Mass spectrum ,Database Management Systems ,business ,Computer hardware ,medicine.drug - Abstract
Motivation: Peptide mass fingerprinting (PMF) is a method for protein identification in which a protein is fragmented by a defined cleavage protocol (usually proteolysis with trypsin), and the masses of these products constitute a ‘fingerprint’ that can be searched against theoretical fingerprints of all known proteins. In the first stage of PMF, the raw mass spectrometric data are processed to generate a peptide mass list. In the second stage this protein fingerprint is used to search a database of known proteins for the best protein match. Although current software solutions can typically deliver a match in a relatively short time, a system that can find a match in real time could change the way in which PMF is deployed and presented. In a paper published earlier we presented a hardware design of a raw mass spectra processor that, when implemented in Field Programmable Gate Array (FPGA) hardware, achieves almost 170-fold speed gain relative to a conventional software implementation running on a dual processor server. In this article we present a complementary hardware realization of a parallel database search engine that, when running on a Xilinx Virtex 2 FPGA at 100 MHz, delivers 1800-fold speed-up compared with an equivalent C software routine, running on a 3.06 GHz Xeon workstation. The inherent scalability of the design means that processing speed can be multiplied by deploying the design on multiple FPGAs. The database search processor and the mass spectra processor, running on a reconfigurable computing platform, provide a complete real-time PMF protein identification solution. Contact: d.coca@sheffield.ac.uk
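The second PMF stage described above reduces to a simple scoring loop, sketched here with made-up masses; the paper's contribution is executing exactly this kind of search massively in parallel on an FPGA:

```python
def pmf_score(observed, fingerprint, tol=0.2):
    """Count observed masses within `tol` Da of any theoretical peptide mass."""
    return sum(any(abs(m - t) <= tol for t in fingerprint) for m in observed)

# Hypothetical tryptic-peptide masses (Da) -- illustrative values only
database = {
    "protein_X": [500.3, 842.5, 1045.6, 2211.1],
    "protein_Y": [430.2, 842.5, 1500.7],
}
observed = [842.4, 1045.7, 2211.0]  # peptide mass list from stage one

best = max(database, key=lambda name: pmf_score(observed, database[name]))
print(best)  # protein_X (scores 3 matches vs 1)
```

In software this loop runs once per database entry; the hardware design evaluates many entries concurrently, which is where the reported 1800-fold speed-up comes from.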
- Published
- 2008
32. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy
- Author
-
Jannick P. Rolland, Cristina Canavesi, Anand P. Santhanam, Patrice Tankam, Kye-Sung Lee, and Jungeun Won
- Subjects
Computer science ,Biomedical Engineering ,Graphics processing unit ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Computational science ,Biomaterials ,Computer graphics ,Software ,Imaging, Three-Dimensional ,Computer Systems ,Computer Graphics ,Image Processing, Computer-Assisted ,Humans ,Throughput (business) ,Image resolution ,Skin ,Microscopy ,business.industry ,Computers ,Special Section Papers ,Atomic and Molecular Physics, and Optics ,Electronic, Optical and Magnetic Materials ,Visualization ,Refractometry ,Scalability ,business ,Algorithms ,Tomography, Optical Coherence - Abstract
Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000 × 1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1 × 1 × 0.6 mm³ skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real time to the examiner using parallelized GPU processing. © The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License.
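The per-GPU batching described above can be sketched as two small scheduling helpers. The function names, memory figures, and the even contiguous split are assumptions for illustration, not the authors' implementation:

```python
def ascans_per_batch(gpu_mem_bytes, bytes_per_ascan, reserve=0.1):
    """Largest A-scan batch that fits in (1 - reserve) of one GPU's memory."""
    return int(gpu_mem_bytes * (1 - reserve)) // bytes_per_ascan

def partition_ascans(n_ascans, n_gpus):
    """Split the volume into one contiguous (start, count) chunk per GPU."""
    base, extra = divmod(n_ascans, n_gpus)
    chunks, start = [], 0
    for g in range(n_gpus):
        count = base + (1 if g < extra else 0)  # spread the remainder
        chunks.append((start, count))
        start += count
    return chunks

# e.g. a 1000 x 1000 A-scan volume split across four GPUs
print(partition_ascans(1000 * 1000, 4))
```

Each worker would then process its chunk in batches of at most `ascans_per_batch(...)` A-scans, which is how memory usage and core throughput are kept near their maxima.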
- Published
- 2013
33. WikiIdRank++: EXTENSIONS AND IMPROVEMENTS OF THE WikiIdRank SYSTEM FOR ENTITY LINKING.
- Author
-
JIMÉNEZ, M. D., FERNÁNDEZ, N., ARIAS FISTEUS, J., and SÁNCHEZ, L.
- Subjects
INFORMATION theory ,WIKIS ,COMPUTER systems ,SEMANTIC Web ,AUTOMATION ,COMPUTER performance ,COMPUTER science - Abstract
The amount of information available on the Web has grown considerably in recent years, leading to the need to structure it in order to access it in a quick and accurate way. In order to develop techniques to automate the structuring process, the Knowledge Base Population (KBP) track of the Text Analysis Conference (TAC) was created. This forum aims to encourage research in automated systems capable of capturing knowledge in unstructured information. One of the tasks proposed in the context of the KBP track is named entity linking, and its goal is to link named entities mentioned in a document to instances in a reference knowledge base built from Wikipedia. This paper focuses on the entity linking task in the context of KBP 2010, where two different varieties of this task were considered, depending on whether the use of the text from Wikipedia was allowed or not. Specifically, the paper proposes a set of modifications to a system that participated in KBP 2010, named WikiIdRank, in order to improve its performance. The different modifications were evaluated in the official KBP 2010 corpus, showing that the best combination increases the accuracy of the initial system by 7.04%. Though the resultant system, named WikiIdRank++, is unsupervised and does not take advantage of Wikipedia text, a comparison with other approaches in KBP indicates that the system would rank 4th (out of 16) in the global comparison, outperforming other approaches that use human supervision and take advantage of Wikipedia textual contents. Furthermore, the system would rank 1st in the category of systems that do not use Wikipedia text. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
34. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization
- Author
-
Yu Zhou, Paul L. Carson, Xueding Wang, Guan Xu, Xiaojun Liu, Jie Yuan, and Yao Yu
- Subjects
Computer science ,Image quality ,Research Papers: Imaging ,Biomedical Engineering ,Graphics processing unit ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Iterative reconstruction ,Multimodal Imaging ,Sensitivity and Specificity ,Biomaterials ,Photoacoustic Techniques ,Elasticity Imaging Techniques ,Computer Systems ,Image Interpretation, Computer-Assisted ,Computer Graphics ,Computer vision ,Image restoration ,business.industry ,Phantoms, Imaging ,Reproducibility of Results ,Equipment Design ,Frame rate ,Image Enhancement ,Atomic and Molecular Physics, and Optics ,Electronic, Optical and Magnetic Materials ,Equipment Failure Analysis ,Artificial intelligence ,business ,Preclinical imaging ,Algorithms ,Software - Abstract
Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo were achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.
- Published
- 2013
35. Development of a Smartphone App for a Genetics Website : The Amyotrophic Lateral Sclerosis Online Genetics Database (ALSoD)
- Author
-
Peter M Andersen, Ashley R. Jones, Olubunmi Abel, John Powell, Ammar Al-Chalabi, and Aleksey Shatunov
- Subjects
amyotrophic lateral sclerosis ,Computer science ,Health Informatics ,Information technology ,Web-based ,computer.software_genre ,frontotemporal dementia ,03 medical and health sciences ,0302 clinical medicine ,Computer Systems ,medicine ,genetics ,Amyotrophic lateral sclerosis ,mobile website ,app ,database ,030304 developmental biology ,Genetics ,Original Paper ,0303 health sciences ,Database ,bioinformatics ,medicine.disease ,T58.5-58.64 ,3. Good health ,ALSoD ,Datorsystem ,Smartphone app ,Public aspects of medicine ,RA1-1270 ,computer ,030217 neurology & neurosurgery
Background: The ALS Online Genetics Database (ALSoD) website holds mutation, geographical, and phenotype data on genes implicated in amyotrophic lateral sclerosis (ALS) and links to bioinformatics resources, publications, and tools for analysis. On average, there are 300 unique visits per day, suggesting a high demand from the research community. To enable wider access, we developed a mobile-friendly version of the website and a smartphone app. Objective: We sought to compare data traffic before and after implementation of a mobile version of the website to assess utility. Methods: We identified the most frequently viewed pages using Google Analytics and our in-house analytic monitoring. For these, we optimized the content layout of the screen, reduced image sizes, and summarized available information. We used the Microsoft .NET framework mobile detection property (HttpRequest.IsMobileDevice in the Request.Browser object in conjunction with HttpRequest.UserAgent), which returns a true value if the browser is a recognized mobile device. For app development, we used the Eclipse integrated development environment with Android plug-ins. We wrapped the mobile website version with the WebView object in Android. Simulators were downloaded to test and debug the applications. Results: The website automatically detects access from a mobile phone and redirects pages to fit the smaller screen. Because the amount of data stored on ALSoD is very large, the available information for display using smartphone access is deliberately restricted to improve usability. Visits to the website increased from 2231 to 2820, yielding a 26% increase from the pre-mobile to post-mobile period and an increase from 103 to 340 visits (230%) using mobile devices (including tablets). The smartphone app is currently available on BlackBerry and Android devices and will be available shortly on iOS as well.
Conclusions: Further development of the ALSoD website has allowed access through smartphones and tablets, either through the website or directly through a mobile app, making genetic data stored on the database readily accessible to researchers and patients across multiple devices.
- Published
- 2013
36. General News and Notes.
- Subjects
INFORMATION technology ,COMPUTER science ,COMPUTER systems ,HIGH technology - Abstract
The article offers news briefs related to information technology. The Association for Systems Management (ASM) has announced a program to certify systems personnel and to recertify every three years those who continue their educations and comply with a code of conduct. The organization Information Age Institute has been formed to study the impact of information technology in Washington, D.C.
- Published
- 1984
37. Algorithms for Computing the Volume and Other Integral Properties of Solids. II. A Family of Algorithms Based on Representation Conversion and Cellular Approximation.
- Author
-
Yong Tsui Lee, Requicha, Aristides A. G., and Fritsch, Fred
- Subjects
COMPUTER algorithms ,SOLID geometry ,ELECTRONIC data processing ,COMPUTER science ,INFORMATION technology ,COMPUTER systems - Abstract
This paper discusses a family of algorithms for computing the volume, moments of inertia, and other integral properties of geometrically complex solids, e.g. typical mechanical parts. The algorithms produce approximate decompositions of solids into cuboid cells whose integral properties are easy to compute. The paper focuses on versions of the algorithms which operate on solids represented by Constructive Solid Geometry (CSG), i.e., as set-theoretical combinations of primitive solid "building blocks." Two known algorithms are summarized and a new algorithm is presented. The efficiencies and accuracies of the three algorithms are analyzed theoretically and compared experimentally. The new algorithm uses recursive subdivision to convert CSG representations of complex solids into approximate cellular decompositions based on variably sized blocks. Experimental data show that the new algorithm is efficient and has predictable accuracy. It also has other potential applications, e.g., in producing approximate octree representations of complex solids and in robot navigation. [ABSTRACT FROM AUTHOR]
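The recursive-subdivision idea in the abstract can be sketched for a solid given as a point-membership test. Real CSG cell classification is more involved than the corner-sampling heuristic used here, and the unit-sphere solid is an assumed example (the corner test is exact for fully-inside cells only because the sphere is convex):

```python
def inside(x, y, z):
    """Example solid: the unit sphere (any point-membership test would do)."""
    return x * x + y * y + z * z <= 1.0

def volume(cell, depth):
    """Estimate the solid's volume inside a cuboid cell by subdivision."""
    x0, y0, z0, x1, y1, z1 = cell
    v = (x1 - x0) * (y1 - y0) * (z1 - z0)
    corners = [inside(x, y, z)
               for x in (x0, x1) for y in (y0, y1) for z in (z0, z1)]
    if all(corners):
        return v  # convex solid: all corners in => whole cell in
    if depth == 0:
        return v / 2 if any(corners) else 0.0  # half-count boundary cells
    xm, ym, zm = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
    return sum(volume(sub, depth - 1) for sub in [
        (x0, y0, z0, xm, ym, zm), (xm, y0, z0, x1, ym, zm),
        (x0, ym, z0, xm, y1, zm), (xm, ym, z0, x1, y1, zm),
        (x0, y0, zm, xm, ym, z1), (xm, y0, zm, x1, ym, z1),
        (x0, ym, zm, xm, y1, z1), (xm, ym, zm, x1, y1, z1)])

estimate = volume((-1, -1, -1, 1, 1, 1), 5)  # approximates 4*pi/3 ≈ 4.18879
print(estimate)
```

Deeper recursion shrinks the boundary cells and hence the error, which mirrors the paper's point that accuracy is predictable from the cell size.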
- Published
- 1982
- Full Text
- View/download PDF
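As a rough, hedged illustration of the cellular-approximation idea summarized in the abstract above: the sketch below estimates a solid's volume by recursively subdividing a bounding box, pruning cells that a point-membership test classifies as fully inside or outside, and estimating leaf cells by their inside-corner fraction. It is a toy (pure Python, a unit sphere instead of a CSG solid, corner sampling as the classifier), not the paper's algorithm.

```python
import math

def cell_volume(inside, lo, hi, depth):
    """Approximate the volume of the solid defined by the point-membership
    test `inside` within the axis-aligned box lo..hi, by recursive
    subdivision into cuboid cells (illustrative sketch only)."""
    corners = [(x, y, z) for x in (lo[0], hi[0])
                         for y in (lo[1], hi[1])
                         for z in (lo[2], hi[2])]
    hits = sum(inside(*p) for p in corners)
    vol = (hi[0] - lo[0]) * (hi[1] - lo[1]) * (hi[2] - lo[2])
    mid = tuple((a + b) / 2 for a, b in zip(lo, hi))
    if hits == 8:                       # cell (approximately) fully inside
        return vol
    if hits == 0 and not inside(*mid):  # cell (approximately) fully outside
        return 0.0
    if depth == 0:                      # leaf: estimate by corner fraction
        return vol * hits / 8
    total = 0.0                         # mixed cell: subdivide into octants
    for x0, x1 in ((lo[0], mid[0]), (mid[0], hi[0])):
        for y0, y1 in ((lo[1], mid[1]), (mid[1], hi[1])):
            for z0, z1 in ((lo[2], mid[2]), (mid[2], hi[2])):
                total += cell_volume(inside, (x0, y0, z0), (x1, y1, z1),
                                     depth - 1)
    return total

# Unit sphere as the test solid; exact volume is 4*pi/3, about 4.18879.
sphere = lambda x, y, z: x * x + y * y + z * z <= 1.0
v = cell_volume(sphere, (-1.0, -1.0, -1.0), (1.0, 1.0, 1.0), depth=5)
```

The inside/outside pruning is what keeps the cell count manageable; the paper's CSG versions classify cells against the CSG tree rather than against a single membership predicate.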
38. Implementing Clenshaw-Curtis Quadrature, I Methodology and Experience.
- Author
-
Timlake, W. P. and Gentleman, W. Morven
- Subjects
NUMERICAL integration ,COMPUTER systems ,ALGORITHMS ,COMPUTER science ,NUMERICAL analysis ,ARITHMETIC - Abstract
Clenshaw-Curtis quadrature is a particularly important automatic quadrature scheme for a variety of reasons, especially the high accuracy obtained from relatively few integrand values. However, it has received little use because it requires the computation of a cosine transformation, and the arithmetic cost of this has been prohibitive. This paper is in two parts; a companion paper, "II Computing the Cosine Transformation," shows that this objection can be overcome by computing the cosine transformation by a modification of the fast Fourier transform algorithm. This first part discusses the strategy and various error estimates, and summarizes experience with a particular implementation of the scheme. [ABSTRACT FROM AUTHOR]
- Published
- 1972
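To make the scheme above concrete, here is a minimal sketch of the (n+1)-point Clenshaw-Curtis rule on [-1, 1] using the classical closed-form weights. This is the slow O(n²) route whose cosine-transform cost the paper discusses; the companion Part II replaces the inner sum with a fast-Fourier-transform variant.

```python
import math

def clenshaw_curtis(f, n):
    """Integrate f over [-1, 1] with the (n+1)-point Clenshaw-Curtis rule
    (n even), using the classical closed-form weights; O(n^2) sketch."""
    assert n >= 2 and n % 2 == 0
    total = 0.0
    for k in range(n + 1):
        s = 0.0
        for j in range(1, n // 2 + 1):
            b = 1.0 if j == n // 2 else 2.0
            s += b / (4 * j * j - 1) * math.cos(2 * j * k * math.pi / n)
        c = 1.0 if k in (0, n) else 2.0
        w = (c / n) * (1.0 - s)          # weight at node cos(k*pi/n)
        total += w * f(math.cos(k * math.pi / n))
    return total

# exp integrates to e - 1/e over [-1, 1]; nine points already do very well,
# which is the "high accuracy from few integrand values" the abstract notes.
approx = clenshaw_curtis(math.exp, 8)
```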
39. Implementing Clenshaw-Curtis Quadrature, II Computing the Cosine Transformation.
- Author
-
Timlake, W. P. and Gentleman, W. Morven
- Subjects
NUMERICAL integration ,COMPUTER systems ,ALGORITHMS ,NUMERICAL analysis ,COMPUTER science ,ARITHMETIC - Abstract
In a companion paper, "I Methodology and Experience," the automatic Clenshaw-Curtis quadrature scheme was described, and it was shown that each quadrature formula used in the scheme requires a cosine transformation of the integrand values. The high cost of these cosine transformations has been a serious drawback in using Clenshaw-Curtis quadrature. Two other problems related to the cosine transformation have also been troublesome. First, conventional computation of the cosine transformation by recurrence relation is numerically unstable, particularly at the low frequencies which have the largest effect upon the integral. Second, in case the automatic scheme should require refinement of the sampling, storage is required to save the integrand values after the cosine transformation is computed. This second part of the paper shows how the cosine transformation can be computed by a modification of the fast Fourier transform and all three problems overcome. The modification is also applicable in other circumstances requiring cosine or sine transformations, such as polynomial interpolation through the Chebyshev points. [ABSTRACT FROM AUTHOR]
- Published
- 1972
40. Using Metrics to Describe the Participative Stances of Members Within Discussion Forums
- Author
-
Christabel Owens, Tamsin Ford, Janet Smithson, Tobit Emmens, Bryony Sheaves, Elaine Hewis, Siobhan Sharkey, and Ray Jones
- Subjects
Adult ,Time Factors ,Adolescent ,Computer science ,Discourse analysis ,Internet privacy ,Health Informatics ,lcsh:Computer applications to medicine. Medical informatics ,Community Networks ,Online Systems ,self-harm ,participative stance ,World Wide Web ,metrics ,Young Adult ,discussion forums ,Online communities ,Computer Systems ,Surveys and Questionnaires ,Humans ,Cooperative Behavior ,Publication ,Retrospective Studies ,Original Paper ,Internet ,moderation ,business.industry ,lcsh:Public aspects of medicine ,Communication ,Social Support ,lcsh:RA1-1270 ,Focus Groups ,Middle Aged ,Moderation ,Focus group ,lcsh:R858-859.7 ,The Internet ,Metric (unit) ,business ,Self-Injurious Behavior ,Software - Abstract
Background: Researchers using forums and online focus groups need to ensure they are safe and need tools to make best use of the data. We explored the use of metrics that would allow better forum management and more effective analysis of participant contributions. Objective: To report retrospectively calculated metrics from self-harm discussion forums and to assess whether metrics add to other methods such as discourse analysis. We asked (1) which metrics are most useful to compare and manage forums, and (2) how metrics can be used to identify the participative stances of members to help manage discussion forums. Methods: We studied the use of metrics in discussion forums on self-harm. SharpTalk comprised five discussion forums, all using the same software but with different forum compositions. SharpTalk forums were similar to most moderated forums but combined support and general social chat with online focus groups discussing issues on self-harm. Routinely recorded time-stamp data were used to derive metrics of episodes, time online, pages read, and postings. We compared metrics from the forums with views from discussion threads and from moderators. We identified patterns of participants’ online behavior by plotting scattergrams and identifying outliers and clusters within different metrics. Results: In comparing forums, important metrics seem to be number of participants, number of active participants, total time of all participants logged on in each 24 hours, and total number of postings by all participants in 24 hours. In examining participative stances, the important metrics were individuals’ time logged per 24 hours, number of episodes, mean length of episodes, number of postings per 24 hours, and location within the forum of those postings. Metric scattergrams identified several participative stances: (1) the “caretaker,” who was “always around,” logged on for a much greater time than most other participants, posting but mainly in response to others and rarely initiating threads, (2) the “butterfly,” who “flitted in and out,” had a large number of short episodes, (3) two “discussants,” who initiated many more discussion threads than anybody else and posted proportionately less in the support room, (4) “here for you,” who posted frequently in the support room in response to other participants’ threads, and (5) seven “people in distress,” who posted many comments in the support room in comparison with their total postings and tended to post on their own threads. Conclusions: Real-time metrics may be useful: (1) by offering additional ways of comparing different discussion forums helping with their management, and (2) by identifying participative stances of individuals so allowing better moderation and support of forums, and more effective use of the data collected. For this to happen, researchers need to publish metrics for their discussion forums and software developers need to offer more real-time metrics facilities.
- Published
- 2011
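As an illustration of how such metrics can be derived from routinely recorded time stamps, the sketch below groups one member's events into "episodes" separated by long silences and totals time online. The 30-minute inactivity cutoff is an assumption for illustration, not a value taken from the study.

```python
from datetime import datetime, timedelta

def episodes(timestamps, gap=timedelta(minutes=30)):
    """Group event timestamps into episodes: runs of activity separated
    by silences longer than `gap` (the cutoff is an assumption)."""
    eps = []
    for t in sorted(timestamps):
        if eps and t - eps[-1][-1] <= gap:
            eps[-1].append(t)       # continues the current episode
        else:
            eps.append([t])         # starts a new episode
    return eps

def participation_metrics(timestamps):
    """Per-member metrics in the spirit of the paper: episode count,
    total time online, and mean episode length."""
    eps = episodes(timestamps)
    online = sum((e[-1] - e[0] for e in eps), timedelta())
    return {"episodes": len(eps),
            "time_online": online,
            "mean_episode_length": online / len(eps)}

# Two morning events ten minutes apart, then one isolated afternoon event:
# two episodes, ten minutes of measurable time online.
events = [datetime(2011, 1, 1, 9, 0), datetime(2011, 1, 1, 9, 10),
          datetime(2011, 1, 1, 13, 0)]
m = participation_metrics(events)
```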
41. An analysis of computer-related patient safety incidents to inform the development of a classification
- Author
-
Enrico Coiera, Mei-Sing Ong, Farah Magrabi, William B. Runciman, Magrabi, Farah, Ong, Mei-Sing, Runciman, William Ben, and Coiera, E.W
- Subjects
Evidence-based practice ,Computer science ,government.form_of_government ,Specialty ,MEDLINE ,Information Storage and Retrieval ,Health Informatics ,Computer security ,computer.software_genre ,Health informatics ,User-Computer Interface ,Patient safety ,Computer Systems ,medicine ,Humans ,Risk Management ,Medical Errors ,Descriptive statistics ,business.industry ,Australia ,Equipment Failure Analysis ,medicine.disease ,government ,Medical emergency ,business ,computer ,Medical Informatics ,Research Paper ,Incident report - Abstract
Objective To analyze patient safety incidents associated with computer use to develop the basis for a classification of problems reported by health professionals. Design Incidents submitted to a voluntary incident reporting database across one Australian state were retrieved and a subset (25%) was analyzed to identify ‘natural categories’ for classification. Two coders independently classified the remaining incidents into one or more categories. Free text descriptions were analyzed to identify contributing factors. Where available, medical specialty, time of day, and consequences were examined. Measurements Descriptive statistics; inter-rater reliability. Results A search of 42 616 incidents from 2003 to 2005 yielded 123 computer related incidents. After removing duplicate and unrelated incidents, 99 incidents describing 117 problems remained. A classification with 32 types of computer use problems was developed. Problems were grouped into information input (31%), transfer (20%), output (20%) and general technical (24%). Overall, 55% of problems were machine related and 45% were attributed to human–computer interaction. Delays in initiating and completing clinical tasks were a major consequence of machine related problems (70%) whereas rework was a major consequence of human–computer interaction problems (78%). While 38% (n=26) of the incidents were reported to have a noticeable consequence but no harm, 34% (n=23) had no noticeable consequence. Conclusion Only 0.2% of all incidents reported were computer related. Further work is required to expand our classification using incident reports and other sources of information about healthcare IT problems. Evidence based user interface design must focus on the safe entry and retrieval of clinical information and support users in detecting and correcting errors and malfunctions.
- Published
- 2010
42. Proposal for fulfilling strategic objectives of the U.S. Roadmap for national action on clinical decision support through a service-oriented architecture leveraging HL7 services
- Author
-
Kensaku Kawamoto and David F. Lobach
- Subjects
Decision support system ,Service (systems architecture) ,Knowledge management ,business.industry ,Computer science ,computer.internet_protocol ,Health Plan Implementation ,Health Informatics ,Service-oriented architecture ,Health Services ,Decision Support Systems, Clinical ,Clinical decision support system ,Health informatics ,United States ,Terminology ,Viewpoint Paper ,Health Planning ,Computer Systems ,Health care ,Orchestration (computing) ,business ,computer ,Editorial Comment ,Decision Making, Computer-Assisted ,Software - Abstract
Despite their demonstrated effectiveness, clinical decision support (CDS) systems are not widely used within the U.S. The Roadmap for National Action on Clinical Decision Support, published in June 2006 by the American Medical Informatics Association, identifies six strategic objectives for achieving widespread adoption of effective CDS capabilities. In this manuscript, we propose a Service-Oriented Architecture (SOA) for CDS that facilitates achievement of these six objectives. Within the proposed framework, CDS capabilities are implemented through the orchestration of independent software services whose interfaces are being standardized by Health Level 7 and the Object Management Group through their joint Healthcare Services Specification Project (HSSP). Core services within this framework include the HSSP Decision Support Service, the HSSP Common Terminology Service, and the HSSP Retrieve, Locate, and Update Service. Our experiences, and those of others, indicate that the proposed SOA approach to CDS could enable the widespread adoption of effective CDS within the U.S. health care system.
- Published
- 2007
43. Optimizing geometric accuracy of four-dimensional CT scans acquired using the wall- and couch-mounted Varian® Real-time Position Management™ camera systems
- Author
-
B.F. O'Connell, Aidan J Cole, Gerard G Hanna, Conor K. McGarry, and Denise M. Irvine
- Subjects
Lung Neoplasms ,Image quality ,Computer science ,Image processing ,Imaging phantom ,Motion ,Software ,Computer Systems ,Position (vector) ,Image Processing, Computer-Assisted ,Humans ,Radiology, Nuclear Medicine and imaging ,Computer vision ,Four-Dimensional Computed Tomography ,Radiation treatment planning ,Full Paper ,Phantoms, Imaging ,business.industry ,Respiration ,Reproducibility of Results ,Equipment Design ,General Medicine ,Amplitude ,Artificial intelligence ,Cube ,Artifacts ,business - Abstract
The aim of this study was to identify sources of anatomical misrepresentation owing to the location of camera mounting, tumour motion velocity and image processing artefacts in order to optimize the four-dimensional CT (4DCT) scan protocol and improve geometrical-temporal accuracy. A phantom with an imaging insert was driven with a sinusoidal superior-inferior motion of varying amplitude and period for 4DCT scanning. The length of a high-density cube within the insert was measured using treatment planning software to determine the accuracy of its spatial representation. Scan parameters were varied, including the tube rotation period and the cine time between reconstructed images. A CT image quality phantom was used to measure various image quality signatures under the scan parameters tested. No significant difference in spatial accuracy was found for 4DCT scans carried out using the wall- or couch-mounted camera for sinusoidal target motion. Greater spatial accuracy was found for 4DCT scans carried out using a tube rotation speed of 0.5 s rather than 1.0 s. The reduction in image quality when using a faster rotation speed was not enough to require an increase in patient dose. The 4DCT accuracy may be increased by optimizing scan parameters, including choosing faster tube rotation speeds. Peak misidentification in the recorded breathing trace may lead to spatial artefacts, and this risk can be reduced by using a couch-mounted infrared camera. This study explicitly shows that 4DCT scan accuracy is improved by scanning with a faster CT tube rotation speed.
- Published
- 2015
44. Parallel Cloth Simulation Using OpenGL Shading Language.
- Author
-
Hongly Va, Min-Hyung Choi, and Min Hong
- Subjects
CENTRAL processing units ,COMPUTER systems ,COMPUTER science ,STATISTICAL correlation ,COMPUTER software - Abstract
The primary goal of cloth simulation is to express object behavior in a realistic manner and achieve real-time performance while following the fundamental concepts of physics. In general, the mass-spring system is applied to real-time cloth simulation with three types of springs. However, hard-spring cloth simulation using the mass-spring system requires a small integration time-step in order to use a large stiffness coefficient. Furthermore, to obtain stable behavior, constraint enforcement is used instead of maintaining the force of each spring. Constraint force computation involves solving a large sparse linear system. Because of this large computation, we implement a cloth simulation using adaptive constraint activation and deactivation techniques that combine the mass-spring system and the constraint enforcement method to prevent excessive elongation of the cloth. When the length of a spring is stretched or compressed beyond a defined threshold, the adaptive constraint activation and deactivation method deactivates the spring and generates an implicit constraint. A traditional method that uses a serial process on the Central Processing Unit (CPU) to solve the system in every frame cannot handle a complex cloth model in real time. Our simulation utilizes Graphics Processing Unit (GPU) parallel processing with compute shaders in the OpenGL Shading Language (GLSL) to solve the system effectively. In this paper, we design and implement a parallel method for cloth simulation, and we experimentally compare the performance and behavior of the mass-spring system, constraint enforcement, and adaptive constraint activation and deactivation techniques using the GPU-based parallel method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
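The interplay the abstract above describes between spring forces and constraint activation can be sketched in one dimension. This is a deliberately tiny stand-in (plain Python, explicit Euler, unit masses, a positional projection, and a made-up 10% stretch threshold), not the paper's GLSL compute-shader implementation.

```python
def step(positions, velocities, springs, k, dt, max_stretch=1.1):
    """One explicit-Euler step of a 1-D mass-spring chain, followed by an
    adaptive constraint pass that projects overstretched springs back to
    the stretch limit (the threshold value is an illustrative assumption)."""
    forces = [0.0] * len(positions)
    for i, j, rest in springs:
        d = positions[j] - positions[i]
        f = k * (abs(d) - rest) * (1.0 if d >= 0 else -1.0)
        forces[i] += f                 # spring pulls/pushes both endpoints
        forces[j] -= f
    for i in range(len(positions)):    # unit masses, so a = F
        velocities[i] += forces[i] * dt
        positions[i] += velocities[i] * dt
    for i, j, rest in springs:         # activate constraint when overstretched
        d = positions[j] - positions[i]
        limit = max_stretch * rest
        if abs(d) > limit:
            excess = (abs(d) - limit) * (1.0 if d >= 0 else -1.0)
            positions[i] += excess / 2
            positions[j] -= excess / 2
    return positions, velocities

# A single spring stretched to twice its rest length is pulled back to the
# 10% stretch limit within one step by the constraint pass.
positions, velocities = step([0.0, 2.0], [0.0, 0.0], [(0, 1, 1.0)],
                             k=10.0, dt=0.01)
```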
45. Historical Reflections: Actually, Turing Did Not Invent the Computer.
- Author
-
Haigh, Thomas
- Subjects
COMPUTER science ,COMPUTERS ,TURING machines ,COMPUTER systems ,COMPUTER scientists - Abstract
The article provides information on the origins of computer science and technology, in light of the 100th anniversary of the birth of computer scientist Alan Turing in 2012. The author emphasizes that Turing did not invent the computer. In 1936, Turing launched the concept called the Turing Machine, which has become the main abstract model of computation used by computer scientists. After the Second World War, he designed an electronic computer called the automatic computing engine (ACE) for the National Physical Laboratory in London, England. It discusses the role of Turing in the founding of theoretical computer science.
- Published
- 2014
- Full Text
- View/download PDF
46. A Simple Experiment in Top-Down Design.
- Author
-
Comer, Douglas and Halstead, Maurice H.
- Subjects
COMPUTER software ,COMPUTER systems ,SOFTWARE engineering ,COMPUTER programming ,COMPUTER science - Abstract
In this paper we: 1) discuss the need for quantitatively reproducible experiments in the study of top-down design; 2) propose the design and writing of tutorial papers as a suitably general and inexpensive vehicle; 3) suggest the software science parameters as appropriate metrics; 4) report two experiments validating the use of these metrics on outlines and prose; and 5) demonstrate that the experiments tended toward the same optimal modularity. The last point appears to offer a quantitative approach to the estimation of the total length or volume (and the mental effort required to produce it) from an early stage of the top-down design process. If results of these experiments are validated elsewhere, then they will provide basic guidelines for the design process. [ABSTRACT FROM AUTHOR]
- Published
- 1979
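The "software science parameters" proposed as metrics in the abstract above are Halstead's. As a reference sketch (the standard textbook formulas, not data from the paper's experiments), they follow directly from operator and operand counts; the counts in the example are hypothetical.

```python
import math

def halstead(eta1, eta2, N1, N2):
    """Halstead software-science parameters: eta1/eta2 are the numbers of
    distinct operators/operands, N1/N2 their total occurrences."""
    vocabulary = eta1 + eta2
    length = N1 + N2
    est_length = eta1 * math.log2(eta1) + eta2 * math.log2(eta2)
    volume = length * math.log2(vocabulary)   # bits to encode the program
    difficulty = (eta1 / 2) * (N2 / eta2)
    return {"length": length, "estimated_length": est_length,
            "volume": volume, "effort": difficulty * volume}

m = halstead(10, 10, 50, 50)  # hypothetical counts, for illustration only
```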
47. Evaluation of Mutation Testing in a Nuclear Industry Case Study.
- Author
-
Delgado-Perez, Pedro, Habli, Ibrahim, Gregory, Steve, Alexander, Rob, Clark, John, and Medina-Bulo, Inmaculada
- Subjects
NUCLEAR industry ,COMPUTER systems ,COMPUTER software quality control ,FAILURE analysis ,COMPILERS (Computer programs) - Abstract
For software quality assurance, many safety-critical industries appeal to the use of dynamic testing and structural coverage criteria. However, there are reasons to doubt the adequacy of such practices. Mutation testing has been suggested as an alternative or complementary approach but its cost has traditionally hindered its adoption by industry, and there are limited studies applying it to real safety-critical code. This paper evaluates the effectiveness of state-of-the-art mutation testing on safety-critical code from within the U.K. nuclear industry, in terms of revealing flaws in test suites that already meet the structural coverage criteria recommended by relevant safety standards. It also assesses the practical feasibility of implementing such mutation testing in a real setting. We applied a conventional selective mutation approach to a C codebase supplied by a nuclear industry partner and measured the mutation score achieved by the existing test suite. We repeated the experiment using trivial compiler equivalence (TCE) to assess the benefit that it might provide. Using a conventional approach, it first appeared that the existing test suite only killed 82% of the mutants, but applying TCE revealed that it killed 92%. The difference was due to equivalent or duplicate mutants that TCE eliminated. We then added new tests to kill all the surviving mutants, increasing the test suite size by 18% in the process. In conclusion, mutation testing can potentially improve fault detection compared to structural-coverage-guided testing, and may be affordable in a nuclear industry context. The industry feedback on our results was positive, although further evidence is needed from application of mutation testing to software with known real faults. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
48. Systems science and complexity: some proposals for future development.
- Author
-
Leleur, Steen
- Subjects
COMPUTER systems ,SYSTEMS development ,SYSTEMS theory ,SYSTEMS design ,COMPUTATIONAL complexity ,OPERATIONS research ,COMPUTER science ,INFORMATION technology ,SYSTEMS engineering - Abstract
In this paper some new, conceptual ideas referred to as a complexity orientation are presented on the basis of systemic planning (SP), which has been developed as a proactive, multi-methodology approach for complex planning tasks. Specifically, the complexity orientation is compared to other systems science research orientations represented by the following labels: functionalist, interpretive, emancipatory and postmodern. After presentation and discussion of the research orientation framework, SP is compared to other current systems and management practice. Finally, the potential of a complexity orientation is treated more generally and also with a focus on its epistemological interpretation, which is carried out with a special emphasis on the work of Luhmann. General findings of the paper are that a complexity orientation ought to complement the other four research orientations to serve as a methodological platform for management practice and that Luhmann's theories ought to be addressed by the systems community as a potential for informing further development of systems science and thinking. Copyright © 2008 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
49. Event-related functional MRI: past, present, and future
- Author
-
Bruce R. Rosen, Randy L. Buckner, and Anders M. Dale
- Subjects
Brain Mapping ,Multidisciplinary ,medicine.diagnostic_test ,Computer science ,Haemodynamic response ,Brain activity and meditation ,Brain ,Magnetic resonance imaging ,Sensory system ,Cognition ,Brain mapping ,Magnetic Resonance Imaging ,Positron emission tomography ,Computer Systems ,Temporal resolution ,Cerebrovascular Circulation ,Colloquium Paper ,medicine ,Humans ,Neuroscience - Abstract
The past two decades have seen an enormous growth in the field of human brain mapping. Investigators have extensively exploited techniques such as positron emission tomography and MRI to map patterns of brain activity based on changes in cerebral hemodynamics. However, until recently, most studies have investigated equilibrium changes in blood flow measured over time periods upward of 1 min. The advent of high-speed MRI methods, capable of imaging the entire brain with a temporal resolution of a few seconds, allows for brain mapping based on more transient aspects of the hemodynamic response. Today it is now possible to map changes in cerebrovascular parameters essentially in real time, conferring the ability to observe changes in brain state that occur over time periods of seconds. Furthermore, because robust hemodynamic alterations are detectable after neuronal stimuli lasting only a few tens of milliseconds, a new class of task paradigms designed to measure regional responses to single sensory or cognitive events can now be studied. Such “event related” functional MRI should provide for fundamentally new ways to interrogate brain function, and allow for the direct comparison and ultimately integration of data acquired by using more traditional behavioral and electrophysiological methods.
- Published
- 1998
50. Relating research to practice: imperative or circumstance?
- Author
-
Strand, Dixi
- Subjects
INFORMATION technology ,RESEARCH ,INFORMATION resources management ,COMPUTER systems ,SYSTEMS development ,COMPUTER science - Abstract
This paper provides a starting point for thinking beyond a research–practice divide and discusses possible new conceptualizations of intervention and the role of IT research in contemporary organizational settings. ‘IT research’ denotes a conglomerate of overlapping research conducted under the headings of Information Systems, Systems Development, Critical IS Research and Participatory Design. The paper applies this joint notion of IT research and the IT researcher to draw parallels across these niches of research regarding the question of intervention. Through an analysis of selected field study events, a prominent notion of intervention (as being active as opposed to being passive) is reworked in terms of intervention as circumstance, a circumstantial interplay of situated practices. In closing, subsequent possibilities for repositioning the IT researcher are discussed in terms of reflexivity, facilitation or being a trickster. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF