39 results for "Jesus Bisbal"
Search Results
2. Towards Collaborative Data Management in the VPH-Share Project.
- Author
-
Siegfried Benkner, Jesus Bisbal, Gerhard Engelbrecht, Rod D. Hose, Yuriy Kaniovskyi, Martin Koehler, Carlos Pedrinaci, and Steven Wood
- Published
- 2011
- Full Text
- View/download PDF
3. Prediction of Cerebral Aneurysm Rupture Using Hemodynamic, Morphologic and Clinical Features: A Data Mining Approach.
- Author
-
Jesus Bisbal, Gerhard Engelbrecht, Maria-Cruz Villa-Uriol, and Alejandro F. Frangi
- Published
- 2011
- Full Text
- View/download PDF
4. Archetype-based semantic mediation: Incremental provisioning of data services.
- Author
-
Jesus Bisbal, Gerhard Engelbrecht, and Alejandro F. Frangi
- Published
- 2010
- Full Text
- View/download PDF
5. Towards negotiable SLA-based QoS support for biomedical data services.
- Author
-
Gerhard Engelbrecht, Jesus Bisbal, Siegfried Benkner, and Alejandro F. Frangi
- Published
- 2010
- Full Text
- View/download PDF
6. A model for fast web mining prototyping.
- Author
-
Álvaro R. Pereira Jr., Ricardo Baeza-Yates, Nivio Ziviani, and Jesus Bisbal
- Published
- 2009
- Full Text
- View/download PDF
7. Resource-based approach to feature interaction in adaptive software.
- Author
-
Jesus Bisbal and Betty H. C. Cheng
- Published
- 2004
- Full Text
- View/download PDF
8. Clinical coverage of an archetype repository over SNOMED-CT.
- Author
-
Sheng Yu, Damon Berry, and Jesus Bisbal
- Published
- 2012
- Full Text
- View/download PDF
9. A Service-Oriented Distributed Semantic Mediator: Integrating Multiscale Biomedical Information.
- Author
-
Oscar Mora, Gerhard Engelbrecht, and Jesus Bisbal
- Published
- 2012
- Full Text
- View/download PDF
10. Building Consistent Sample Databases to Support Information System Evolution and Migration.
- Author
-
Jesus Bisbal, Bing Wu, Deirdre Lawless, and Jane Grimson
- Published
- 1998
- Full Text
- View/download PDF
11. The Butterfly Methodology: A Gateway-free Approach for Migrating Legacy Information Systems.
- Author
-
Bing Wu, Deirdre Lawless, Jesus Bisbal, Ray Richardson, Jane Grimson, Vincent Wade, and Donie O'Sullivan
- Published
- 1997
- Full Text
- View/download PDF
12. A formal framework for database sampling.
- Author
-
Jesus Bisbal, Jane Grimson, and David A. Bell
- Published
- 2005
- Full Text
- View/download PDF
13. Consistent database sampling as a database prototyping approach.
- Author
-
Jesus Bisbal and Jane Grimson
- Published
- 2002
- Full Text
- View/download PDF
14. Database sampling with functional dependencies.
- Author
-
Jesus Bisbal and Jane Grimson
- Published
- 2001
- Full Text
- View/download PDF
15. Legacy Information Systems: Issues and Directions.
- Author
-
Jesus Bisbal, Deirdre Lawless, Bing Wu, and Jane Grimson
- Published
- 1999
- Full Text
- View/download PDF
16. An Overview of Legacy System Migration.
- Author
-
Jesus Bisbal, Deirdre Lawless, Bing Wu, Jane Grimson, Vincent Wade, Ray Richardson, and Donie O'Sullivan
- Published
- 1997
- Full Text
- View/download PDF
17. Legacy Systems Migration: A Method and its Tool-Kit Framework.
- Author
-
Bing Wu, Deirdre Lawless, Jesus Bisbal, Jane Grimson, Vincent Wade, Donie O'Sullivan, and Ray Richardson
- Published
- 1997
- Full Text
- View/download PDF
18. Mercury: using the QuPreSS reference model to evaluate predictive services
- Author
-
Silverio Martínez-Fernández, Jesus Bisbal, and Xavier Franch
- Subjects
Operations research, Computer science, Weather forecasting, Tool development, Predictive services, Reference model, Forecast verification, Service provider, Service monitoring, Data mining, Software
- Abstract
Nowadays, many service providers offer predictive services that forecast a condition or occurrence in advance. As a consequence, it becomes necessary for service customers to select the predictive service that best satisfies their needs. The QuPreSS reference model provides a standard solution for the selection of predictive services based on the quality of their predictions. QuPreSS has been designed to be applicable in any predictive domain (e.g., weather forecasting, economics, and medicine). This paper presents Mercury, a tool based on the QuPreSS reference model and customized to the weather forecast domain. Mercury measures the quality of weather predictive services, and automates the context-dependent selection of the most accurate predictive service to satisfy a customer query. To do so, candidate predictive services are monitored so that their predictions can eventually be compared to real observations obtained from a trusted source (a minimal sketch of this verification step follows this entry). Mercury is a proof-of-concept of QuPreSS that aims to show that the selection of predictive services can be driven by the quality of their predictions. Throughout the paper, we show how Mercury was built from the QuPreSS reference model and how it can be installed and used.
- Published
- 2017
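The verification step described in this entry, scoring each monitored service against later observations from a trusted source, can be illustrated with a small sketch. The data, the mean-absolute-error metric, and all names below are illustrative assumptions, not Mercury's actual implementation.

from statistics import mean

# Hypothetical data: temperature forecasts per candidate service, plus the
# observations later obtained from a trusted source.
predictions = {
    "service_a": {"2017-05-01": 21.0, "2017-05-02": 19.5, "2017-05-03": 23.0},
    "service_b": {"2017-05-01": 22.5, "2017-05-02": 18.0, "2017-05-03": 24.5},
}
observations = {"2017-05-01": 21.5, "2017-05-02": 19.0, "2017-05-03": 24.0}

def mean_absolute_error(forecast, observed):
    """Average absolute difference over the dates both series cover."""
    common = forecast.keys() & observed.keys()
    return mean(abs(forecast[d] - observed[d]) for d in common)

# Score every monitored service and pick the one closest to reality.
scores = {name: mean_absolute_error(f, observations) for name, f in predictions.items()}
best_service = min(scores, key=scores.get)
print(scores)                 # roughly {'service_a': 0.67, 'service_b': 0.83}
print("best:", best_service)  # service_a, whose predictions were most accurate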
19. An Analysis Framework for Electronic Health Record Systems
- Author
-
Damon Berry and Jesus Bisbal
- Subjects
Advanced and Specialized Nursing, Systems Analysis, Management science, Interoperability, Information technology, Health Informatics, Models, Theoretical, Semantic interoperability, Data science, Systems Integration, Health Information Management, Electronic Health Records, Humans, Medicine, Medical Record Linkage, Cooperative Behavior
- Abstract
Summary. Background: The timely provision of complete and up-to-date patient data to clinicians has for decades been one of the most pressing objectives to be fulfilled by information technology in the healthcare domain. The so-called electronic health record (EHR), which provides a unified view of all relevant clinical data, has received much attention in this context from both research and industry. This situation has given rise to a large number of research projects and commercial products that aim to address this challenge. Different projects and initiatives have attempted to address this challenge from various points of view, which are not easily comparable. Objectives: This paper aims to clarify the challenges, concepts, and approaches involved, which is essential in order to consistently compare existing solutions and objectively assess progress in the field. Methods: This is achieved by two different means. Firstly, the paper will identify the most significant issues that differentiate the points of view and intended scope of existing approaches. As a result, a framework for the analysis of EHR systems will be produced. Secondly, the most representative EHR-related projects and initiatives will be described and compared within the context of this framework. Results: The main result of the present paper is an analysis framework for EHR systems. This is intended as an initial step towards structuring research in this field, which clearly lacks sound principles for evaluating and comparing results, and towards ultimately focusing its efforts and objectively evaluating scientific progress. Conclusions: Evaluation and comparison of results in medical informatics, and specifically of EHR systems, must address technical and non-technical aspects. It is challenging to condense into a single framework all potential views of such a field, and any chosen approach is bound to have its limitations. That being said, any well-structured comparison approach, such as the framework presented here, is better than no comparison framework at all, as has been the situation to date. This paper presents the first attempt known to the authors to define such a framework.
- Published
- 2011
- Full Text
- View/download PDF
20. Virtual physiological human: training challenges
- Author
-
Patricia V. Lawford, Andrew V. Narracott, Keith McCormack, Jesus Bisbal, Carlos Martin, Bart Bijnens, Bindi Brook, Margarita Zachariou, Jordi Villà i Freixa, Peter Kohl, Katherine Fletcher, and Vanessa Diaz-Zuccarini
- Subjects
Physiology, Computer science, Emerging technologies, General Mathematics, General Physics and Astronomy, Translational research, Models, Biological, User-Computer Interface, Knowledge extraction, Humans, Computer Simulation, Management science, General Engineering, Virtual Physiological Human, Data science, Visualization, Europe, Interdisciplinary Communication
- Abstract
The virtual physiological human (VPH) initiative encompasses a wide range of activities, including structural and functional imaging, data mining, knowledge discovery tool and database development, biomedical modelling, simulation and visualization. The VPH community is developing from a multitude of relatively focused, but disparate, research endeavours into an integrated effort to bring together, develop and translate emerging technologies for application, from academia to industry and medicine. This process initially builds on the evolution of multi-disciplinary interactions and abilities, but addressing the challenges associated with the implementation of the VPH will require, in the very near future, a translation of quantitative changes into a new quality of highly trained multi-disciplinary personnel. Current strategies for undergraduate and on-the-job training may soon prove insufficient for this. The European Commission seventh framework VPH network of excellence is exploring this emerging need, and is developing a framework of novel training initiatives to address the predicted shortfall in suitably skilled VPH-aware professionals. This paper reports first steps in the implementation of a coherent VPH training portfolio.
- Published
- 2010
- Full Text
- View/download PDF
21. A formal framework for database sampling
- Author
-
David A. Bell, Jesus Bisbal, and Jane Grimson
- Subjects
Spatiotemporal database, Computer science, Database schema, Database design, Database tuning, Database testing, Computer Science Applications, Database theory, Data mining, Software, Information Systems, Database model
- Abstract
Database sampling is commonly used in applications like data mining and approximate query evaluation in order to achieve a compromise between the accuracy of the results and the computational cost of the process. The authors have recently proposed the use of database sampling in the context of populating a prototype database, that is, a database used to support the development of data-intensive applications. Existing methods for constructing prototype databases commonly populate the resulting database with synthetic data values. A more realistic approach is to sample an existing database so that the resulting sample satisfies a predefined set of integrity constraints (a toy illustration of such constraint-preserving sampling follows this entry). The resulting database, with domain-relevant data values and semantics, is expected to better support the software development process. This paper presents a formal study of database sampling. A Denotational Semantics description of database sampling is first discussed. The paper then characterises the types of integrity constraints that must be considered during sampling. Lastly, the sampling strategy presented here is applied to improve the data quality of a (legacy) database. In this context, database sampling is used to incrementally identify the set of tuples which are the cause of inconsistencies in the database, and therefore should be the ones addressed by the data cleaning process.
- Published
- 2005
- Full Text
- View/download PDF
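As a rough illustration of sampling under integrity constraints, the sketch below draws a few rows from one table and then pulls in the rows they reference, so that a foreign-key constraint still holds in the sample. The tables, column names, and sampling policy are invented for the example and are not taken from the paper's formal semantics.

import random

# Toy source database: a referenced table and a referencing table.
patients = {1: "Alice", 2: "Bob", 3: "Carol"}
visits = [  # each visit references a patient_id (foreign key into `patients`)
    {"visit_id": 10, "patient_id": 1}, {"visit_id": 11, "patient_id": 3},
    {"visit_id": 12, "patient_id": 2}, {"visit_id": 13, "patient_id": 1},
]

def sample_with_referential_integrity(visits, patients, k, seed=0):
    """Pick k visits at random, then include every patient row they reference."""
    rng = random.Random(seed)
    sampled_visits = rng.sample(visits, k)
    needed_patients = {v["patient_id"] for v in sampled_visits}
    sampled_patients = {pid: patients[pid] for pid in needed_patients}
    return sampled_visits, sampled_patients

v, p = sample_with_referential_integrity(visits, patients, k=2)
print(v)  # a consistent sample: no visit points to a missing patient row
print(p)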
22. Consistent database sampling as a database prototyping approach
- Author
-
Jane Grimson and Jesus Bisbal
- Subjects
Database, Computer science, Database schema, Database theory, Database design, Software, Database tuning, Database testing, Operational database, Database model, Data administration
- Abstract
Requirements elicitation has been reported to be the stage of software development when errors have the most expensive consequences. Users usually find it difficult to articulate a consistent and complete set of requirements at the beginning of a development project. Prototyping is considered a powerful technique to ease this problem by exposing a partial implementation of the software system to the user, who can then identify required modifications. When prototyping data-intensive applications, a so-called prototype database is needed. This paper investigates how a prototype database can be built. Two different approaches are analysed, namely test databases and sample databases; the former populates the resulting database with synthetic values, while the latter uses data values from an existing database. The application areas that require prototype databases, in addition to requirements analysis, are also identified. The paper reports on existing research into the construction of both types of prototype databases, and indicates to which type of application area each is best suited. This paper advocates the use of sample databases when an operational database is available, as is commonly the case in software maintenance and evolution. Domain-relevant data values and integrity constraints will produce a prototype database which will support the information system development process better than synthetic data. The process of extracting a sample database is also investigated.
- Published
- 2002
- Full Text
- View/download PDF
23. Electronic Health Record Systems
- Author
-
Jesus Bisbal
- Published
- 2013
- Full Text
- View/download PDF
24. Electronic Health Records
- Author
-
Jesus Bisbal
- Published
- 2013
- Full Text
- View/download PDF
25. Interoperability
- Author
-
Jesus Bisbal
- Published
- 2013
- Full Text
- View/download PDF
26. A service-oriented distributed semantic mediator: integrating multiscale biomedical information
- Author
-
Gerhard Engelbrecht, O. Mora, and Jesus Bisbal
- Subjects
Biomedical Research, Databases, Factual, Computer science, Information Storage and Retrieval, Semantics, World Wide Web, SPARQL, Electrical and Electronic Engineering, RDF, Semantic Web, Internet, Information retrieval, Distributed database, Biomedical information, Computational Biology, General Medicine, Service-oriented architecture, Computer Science Applications, Semantic technology, Algorithms, Software, Biotechnology, Data integration
- Abstract
Biomedical research continuously generates large amounts of heterogeneous and multimodal data spread over multiple data sources. These data, if appropriately shared and exploited, could dramatically improve the research practice itself, and ultimately the quality of health care delivered. This paper presents the DIstributed Semantic MEDiator (DISMED), an open source semantic mediator that provides a unified view of a federated environment of multiscale biomedical data sources. DISMED is a Web-based software application to query and retrieve information distributed over a set of registered data sources, using semantic technologies (a toy query of the kind such a mediator resolves is sketched after this entry). It also offers a user-friendly interface specifically designed to simplify the usage of these technologies by non-expert users. Although the architecture of the software mediator is generic and domain independent, in the context of this paper DISMED has been evaluated for managing biomedical environments and facilitating research with respect to the handling of scientific data distributed in multiple heterogeneous data sources. As part of this contribution, a quantitative evaluation framework has been developed. It consists of a benchmarking scenario and the definition of five realistic use-cases. This framework, created entirely with public datasets, has been used to compare the performance of DISMED against other available mediators. It is also available to the scientific community in order to evaluate progress in the domain of semantic mediation, in a systematic and comparable manner. The results show an average improvement in execution time by DISMED of 55% compared to the second best alternative in four out of the five use-cases of the experimental evaluation.
- Published
- 2012
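Since this entry's subject terms mention semantic technologies such as RDF and SPARQL, the sketch below shows the kind of query a semantic mediator ultimately resolves, here against a single in-memory graph rather than a federation. It assumes the rdflib Python library is available; the vocabulary and data are invented and have no connection to DISMED's actual sources.

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/biomed#")
g = Graph()
# Toy "data source": two patient resources, one with a diagnosis attached.
g.add((EX.p1, RDF.type, EX.Patient))
g.add((EX.p1, EX.diagnosis, Literal("cerebral aneurysm")))
g.add((EX.p2, RDF.type, EX.Patient))

query = """
PREFIX ex: <http://example.org/biomed#>
SELECT ?patient ?dx WHERE {
    ?patient a ex:Patient ;
             ex:diagnosis ?dx .
}
"""
for row in g.query(query):
    print(row.patient, row.dx)  # only ex:p1 matches, since it has a diagnosis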
27. QuPreSS: A service-oriented framework for predictive services quality assessment
- Author
-
Silverio Martínez-Fernández, Xavier Franch, and Jesus Bisbal
- Subjects
Monitoring, Computer science, Interoperability, Services computing, Predictive services, Service-level agreement, Service-oriented architecture, Forecast verification, Service quality, Stock market, Framework architecture, Systematic literature review, Weather forecasting, Body of knowledge, Data mining, Software engineering
- Abstract
Nowadays there are many predictive services in domains such as stock markets and bookmaking. The value delivered by these services relies on the quality of their predictions. This paper presents QuPreSS, a general framework which measures predictive service quality and guides the selection of the most accurate predictive service. To do so, services are monitored and their predictions are compared against observations over time by means of forecast verification. A systematic literature review was performed to design a service-oriented framework architecture that fits into the current body of knowledge. The service-oriented nature of the framework makes it extensible and interoperable, being able to integrate existing services regardless of their heterogeneity of platforms and languages. Finally, we also present an instantiation of the generic framework architecture for the weather forecast domain, freely available at http://gessi.lsi.upc.edu/qupress/
- Published
- 2012
28. Integrating volumetric biomedical data in the virtual physiological human
- Author
-
Albert Burger, Bernard de Bono, Duncan Davidson, Alejandro F. Frangi, Jesus Bisbal, Richard Baldock, Xu Gu, Corné Hoogendoorn, and Peter Hunter
- Subjects
Computational model, Physiome, Computer science, Interoperability, eHealth, Use case, Virtual Physiological Human, Data science, Data integration
- Abstract
Biomedical imaging has become ubiquitous in both basic research and the clinical context. Technology advances and the resulting multitude of imaging modalities have led to a sharp rise in the quantity and quality of such images. In addition, computational models are increasingly used to study biological processes involving spatio-temporal changes in organisms, e.g. the growth of a tumor, and models and images are extensively described in natural language, for example in research publications and patient records. Together this leads to a major spatio-temporal data and model integration challenge for the next generation of biomedical and eHealth information systems. In this paper, we discuss a pilot study of volumetric data integration in the context of the Virtual Physiological Human initiative. Three types of spatio-temporal biomedical data sources are briefly introduced and the motivation for their integration is presented in use case scenarios. The sources include a computational model of the human heart from the heart physiome project, a statistical atlas of the human heart, and a 3D framework for the developing mouse embryo. We report on our experiences of integrating these resources and discuss the wider requirements of volumetric data integration in the biomedical research and eHealth domain.
- Published
- 2011
- Full Text
- View/download PDF
29. Performance Analysis and Assessment of a TF-IDF Based Archetype-SNOMED-CT Binding Algorithm
- Author
-
Damon Berry, Sheng Yu, and Jesus Bisbal
- Subjects
Clinical archetype, SNOMED-CT, Information retrieval, Computer science, Terminology, Algorithm analysis, Computer Engineering, tf–idf, Archetype, Algorithm
- Abstract
Term bindings in archetypes are at a boundary between health information models and health terminology for dual model-based electronic health-care record (EHR) systems. The development of archetypes and the population of archetypes with bound terms is in its infancy. Terminological binding is currently performed “manually” by the teams who create archetypes. This process could be made more efficient if it were supported by automatic tools. This paper presents a method for evaluating the performance of automatic code search approaches. In order to assess the quality of the automatic search, the authors extracted all the unique bound codes from 1133 archetypes in an archetype repository. These “manually bound” SNOMED-CT codes were compared against the codes suggested by the authors' automatic search and used for assessing the algorithm's performance in terms of accuracy and category matching (a toy TF-IDF ranking of candidate codes is sketched after this entry). The result of this study is a sensitivity analysis of a set of parameters relevant to the matching process.
- Published
- 2011
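The sketch below shows one way a TF-IDF based binding step can rank candidate terminology descriptions against an archetype node label, which is the general technique named in this entry's title. It assumes scikit-learn is installed; the codes and descriptions are placeholder values chosen for illustration, not taken from the paper's repository or algorithm.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

archetype_term = "systolic blood pressure"
candidates = {                      # hypothetical code -> description pairs
    "CODE-A": "Systolic blood pressure",
    "CODE-B": "Diastolic blood pressure",
    "CODE-C": "Heart rate measured at systemic artery",
}

texts = [archetype_term] + list(candidates.values())
tfidf = TfidfVectorizer().fit_transform(texts)             # one row per text
scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()  # term vs. each candidate

ranked = sorted(zip(candidates.keys(), scores), key=lambda pair: -pair[1])
for code, score in ranked:
    print(code, candidates[code], round(float(score), 3))  # best match first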
30. Prediction of Cerebral Aneurysm Rupture Using Hemodynamic, Morphologic and Clinical Features: A Data Mining Approach
- Author
-
Alejandro F. Frangi, Maria-Cruz Villa-Uriol, Gerhard Engelbrecht, and Jesus Bisbal
- Subjects
Computer science, Hemodynamics, Feature selection, Blood flow, Machine learning, Aneurysm rupture, Aneurysm, Data mining, Artificial intelligence, Cerebral aneurysm rupture, Biomedicine
- Abstract
Cerebral aneurysms pose a major clinical threat and the current practice upon diagnosis is a complex, lengthy, and costly multicriteria analysis, which to date is not fully understood. This paper reports the development of several classifiers predicting whether a given clinical case is likely to rupture, taking into account available information about the patient and characteristics of the aneurysm. The dataset used included 157 cases, with 294 features each. The broad range of features includes basic demographics and clinical information, morphological characteristics computed from the patient's medical images, as well as results gained from personalised blood flow simulations. In this first attempt, the wealth of aneurysm-related information gained from multiple heterogeneous sources and complex simulation processes is used to systematically apply different data-mining algorithms and assess their predictive accuracy in this domain (a schematic version of such a classification setup is sketched after this entry). The promising results show up to 95% classification accuracy. Moreover, the analysis also makes it possible to confirm or reject risk factors commonly accepted or suspected in the domain.
- Published
- 2011
- Full Text
- View/download PDF
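To make the classification setup concrete, the sketch below fits a classifier to a feature matrix of the shape mentioned in the abstract (157 cases, 294 features) and estimates accuracy by cross-validation. The data are synthetic and the choice of random forests is an assumption made for illustration; this is not the paper's pipeline, feature set, or results.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(157, 294))      # 157 cases x 294 features, as in the abstract
y = rng.integers(0, 2, size=157)     # synthetic labels: 1 = ruptured, 0 = unruptured

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print("mean cross-validated accuracy:", scores.mean())  # near 0.5 here, labels are random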
31. Towards negotiable SLA-based QoS support for biomedical data services
- Author
-
Alejandro F. Frangi, Jesus Bisbal, Siegfried Benkner, and Gerhard Engelbrecht
- Subjects
Data access, Database, Computer science, Service level, Quality of service, Data quality, Service level objective, Data as a service, Web service, Data science, Data modeling
- Abstract
Researchers in data-intensive domains, like the Virtual Physiological Human initiative (VPH-I), are commonly overwhelmed with the vast and increasing amount of data available. Advanced studies in biomedicine and other domains often require a considerable amount of effort to achieve access to a critical mass of relevant data to analyze the problem at hand. We aim to improve this situation and propose a novel application of Quality of Service (QoS) mechanisms for data services. This enables scientists to obtain exactly the data they require, rather than having to guess which data source might contain suitable data. The proposed QoS support includes a negotiation model based on service level agreements (SLAs), which in turn comprise data-related service level objectives (SLOs) to express the required guarantees about the quantity or quality of data (a minimal illustration of such SLO-based matching follows this entry). Moreover, a corresponding QoS management model is presented which resolves the complex process of SLA generation within data access and data mediation services. The benefits of this approach are materialized in the context of the @neurIST data environment, and an initial experimental evaluation demonstrates promising performance improvements in a real-world scenario.
- Published
- 2010
- Full Text
- View/download PDF
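A minimal sketch of the SLO idea in this entry: a customer states data-related objectives and only the services whose advertised guarantees satisfy them are kept. The attribute names, thresholds, and matching rule are assumptions made for the example, not the negotiation model of the paper or the @neurIST environment.

from dataclasses import dataclass

@dataclass
class DataServiceOffer:
    name: str
    min_records: int         # guaranteed minimum number of matching records
    max_staleness_days: int  # guaranteed maximum age of the data

offers = [
    DataServiceOffer("aneurysm_db_eu", min_records=500, max_staleness_days=30),
    DataServiceOffer("aneurysm_db_us", min_records=200, max_staleness_days=7),
]

requested = {"min_records": 300, "max_staleness_days": 60}  # the customer's SLOs

def satisfies(offer, slo):
    """True if the offer's guarantees meet every requested objective."""
    return (offer.min_records >= slo["min_records"]
            and offer.max_staleness_days <= slo["max_staleness_days"])

acceptable = [o.name for o in offers if satisfies(o, requested)]
print(acceptable)  # ['aneurysm_db_eu']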
32. A model for fast web mining prototyping
- Author
-
Nivio Ziviani, Ricardo Baeza-Yates, Álvaro Pereira, and Jesus Bisbal
- Subjects
Database, Computer science, Data stream mining, Conceptual model (computer science), Concept mining, Web mining, Web mapping, Web intelligence, Software architecture, Web modeling
- Abstract
Web mining is a computation-intensive task, even after the mining tool itself has been developed. Most mining software is developed ad hoc and is usually neither scalable nor reusable for other mining tasks. The objective of this paper is to present a model for fast Web mining prototyping, referred to as WIM -- Web Information Mining. The underlying conceptual model of WIM provides its users with a level of abstraction appropriate for prototyping and experimentation throughout the Web data mining task. Abstracting from the idiosyncrasies of raw Web data representations facilitates the inherently iterative mining process. We present the WIM conceptual model, its associated algebra, and the WIM tool software architecture, which implements the WIM model. We also illustrate how the model can be applied to real Web data mining tasks. Experimentation with WIM in real use cases has shown that it significantly facilitates Web mining prototyping.
- Published
- 2009
- Full Text
- View/download PDF
33. From passive to active electronic healthcare records
- Author
-
Damon Berry, Jane Grimson, Lucy Hederman, Jesus Bisbal, and William Grimson
- Subjects
Advanced and Specialized Nursing, System of record, Medical Records Systems, Computerized, End user, Information Storage and Retrieval, Health Informatics, Active database, World Wide Web, User-Computer Interface, Health Information Management, Middleware (distributed applications), Information system, Medicine, Callback, Humans, Implementation, Software
- Abstract
Summary. Objectives: The provision of patient data to clinicians as and when it becomes available is a general objective of information systems in healthcare. It is known that the timely receipt of patient data can have a significant bearing on healthcare outcomes. One of the ongoing tasks is to provide this data in the form of an Electronic Healthcare Record according to some agreed standard. The aim in this paper is to provide patient data in electronic form by pushing the information to the end users as soon as it becomes available, in advance of any explicit request from the users. Methods: This paper describes how an existing record system, the Synapses Federated Healthcare Records Server, has been extended to incorporate active functionality to facilitate pushing the information to end-user applications. Users must specify the information of interest to them, so that the system pushes only information useful to the final user. The approach proposed here relies solely on the use of callbacks through the middleware layer being used, a mechanism available in all existing middleware implementations (a minimal publish/subscribe sketch of this idea follows this entry). Results: The Synapses Federated Healthcare Records Server which has resulted from this research is a more flexible and scalable system, capable of fulfilling the needs of a wider range of healthcare organisations than when a strictly passive approach is used. Conclusions: It is shown that healthcare organisations can incorporate a healthcare record system with active functionality without any large investment or significant risk to their existing information systems.
- Published
- 2003
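The callback-based push described in this entry can be illustrated with a small publish/subscribe sketch. The class, method names, and record format below are invented for the example and do not reflect the Synapses server's actual middleware interfaces.

from typing import Callable, Dict

class ActiveRecordServer:
    """Toy 'active' record server: pushes new data to interested subscribers."""

    def __init__(self):
        self._subscribers = []   # list of (interest_filter, callback) pairs

    def subscribe(self, interest: Callable[[Dict], bool], callback: Callable[[Dict], None]):
        # Register a filter describing the data of interest and the callback to invoke.
        self._subscribers.append((interest, callback))

    def publish(self, record: Dict):
        # Called when new patient data arrives; push it to every matching subscriber.
        for interest, callback in self._subscribers:
            if interest(record):
                callback(record)

server = ActiveRecordServer()
server.subscribe(lambda r: r["type"] == "lab_result",
                 lambda r: print("clinician notified:", r))
server.publish({"type": "lab_result", "patient": "p42", "value": "HbA1c 7.1%"})
server.publish({"type": "admin_note", "patient": "p42"})   # no callback fires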
34. An overview of legacy information system migration
- Author
-
Bing Wu, Jesus Bisbal, Ray Richardson, Jane Grimson, Vincent Wade, Declan O'Sullivan, and Deirdre Lawless
- Subjects
Computer science, Legacy system, Software maintenance, Application software, Open system (systems theory), Data science, Management information systems, Software portability, Information system, Information flow, Software engineering
- Abstract
Legacy information systems typically form the backbone of the information flow within an organisation and are the main vehicle for consolidating information about the business. These systems also pose considerable problems: brittleness, inflexibility, isolation, non-extensibility, lack of openness, etc. This so-called legacy system problem opens up a new research topic, legacy system migration. This paper provides a brief review of the main issues involved in legacy information system migration.
- Published
- 2002
- Full Text
- View/download PDF
35. The Butterfly Methodology: a gateway-free approach for migrating legacy information systems
- Author
-
Vincent Wade, Deirdre Lawless, Donie O'Sullivan, Jane Grimson, Ray Richardson, Jesus Bisbal, and Bing Wu
- Subjects
Computer science, Interoperability, Legacy system, Gateway (computer program), Computer security, Consistency (database systems), Documentation, Information system, Isolation (database systems), Software engineering, Internetworking
- Abstract
The problems posed by mission-critical legacy systems (e.g., brittleness, inflexibility, isolation, non-extensibility, lack of openness) are well known, but practical solutions have been slow to emerge. Generally, organisations attempt to keep their legacy systems operational while developing mechanisms which allow the legacy systems to interoperate with new, modern systems that provide additional functionality. The most mature approach employs gateways to provide this interoperability. However, gateways introduce considerable complexity in their attempt to maintain consistency between the legacy and target systems. This paper presents an innovative gateway-free approach to migrating legacy information systems in a mission-critical environment: the Butterfly Methodology. The fundamental premise of this methodology is to question the need for the parallel operation of the legacy and target systems during migration.
- Published
- 2002
- Full Text
- View/download PDF
36. Legacy systems migration-a method and its tool-kit framework
- Author
-
Ray Richardson, Jesus Bisbal, Jane Grimson, Deirdre Lawless, Bing Wu, Vincent Wade, and Donie O'Sullivan
- Subjects
System of systems, Interoperation, Software modernization, Computer science, Legacy system, Information system, Systems design, Software engineering, Data migration
- Abstract
The problems posed by mission-critical legacy systems (brittleness, inflexibility, isolation, non-extensibility, lack of openness, etc.) are well known, but practical solutions have been slow to emerge. Most approaches are ad hoc and tailored to the peculiarities of individual systems. This paper presents an approach to mission-critical legacy system migration: the Butterfly methodology, its data migration engine, and its supporting toolkit framework. Data migration is the primary focus of the Butterfly methodology; however, it is placed in the overall context of a complete legacy system migration. The fundamental premise of the Butterfly methodology is to question the need for parallel operation of the legacy and target systems during migration. Much of the complexity of current migration methodologies is eliminated by removing this interoperation assumption.
- Published
- 2002
- Full Text
- View/download PDF
37. Building consistent sample databases to support information system evolution and migration
- Author
-
Deirdre Lawless, Bing Wu, Jesus Bisbal, and Jane Grimson
- Subjects
Spatiotemporal database, Database, Computer science, Relational database, Knowledge engineering, Database schema, Probabilistic database, Database design, Synthetic data, Database tuning, Database testing, Data modeling, Knowledge-based systems, Data integrity, Information system, Database theory, Intelligent database, Database model, Data administration
- Abstract
Prototype databases are needed in any information system development process to support data-intensive applications development. It is common practice to populate these databases using synthetic data. This data usually bears little relation to the application's domain and considers only a very reduced subset of the integrity constraints the database will hold during operation.
- Published
- 1998
- Full Text
- View/download PDF
38. Virtual physiological human: training challenges
- Author
-
Vanessa Diaz-Zuccarini, Margarita Zachariou, Peter Kohl, Keith McCormack, Andrew V. Narracott, Jesus Bisbal, Bart Bijnens, Patricia V. Lawford, Carlos Martin, Katherine Fletcher, Jordi Villà i Freixa, and Bindi S. Brook
- Subjects
Human–computer interaction, Computer science, General Mathematics, General Engineering, General Physics and Astronomy, Virtual Physiological Human, Engineering physics
- Published
- 2011
- Full Text
- View/download PDF
39. Clinical coverage of an archetype repository over SNOMED-CT
- Author
-
Damon Berry, Sheng Yu, and Jesus Bisbal
- Subjects
Clinical archetypes, Conceptual modelling, Ontologies, Terminology systems, SNOMED-CT, Term-binding, Computer science, Computational Engineering, Health Informatics, Terminology, User-Computer Interface, Terminology as Topic, Electronic Health Records, Humans, Archetype, Translational Medical Research, Hierarchy, Systematized Nomenclature of Medicine, Data science, Computer Science Applications, Semantics
- Abstract
Highlights:
- Terminology systems can be used to measure clinical concept coverage in archetypes.
- The coverage in archetypes shows unbalanced development across disciplines.
- The results of the coverage analysis may help guide the development of archetypes.
- The approach is independent of the binding algorithm used to generate the coverage.
Clinical archetypes provide a means for health professionals to design what should be communicated as part of an Electronic Health Record (EHR). An ever-growing number of archetype definitions follow this health information modelling approach, and this international archetype resource will eventually cover a large number of clinical concepts. On the other hand, clinical terminology systems that can be referenced by archetypes also have a wide coverage over many types of health-care information. No existing work measures the clinical content coverage of archetypes using terminology systems as a metric. Archetype authors require guidance to identify under-covered clinical areas that may need to be the focus of further modelling effort according to this paradigm. This paper develops a first map of SNOMED-CT concepts covered by archetypes in a repository by creating a so-called terminological Shadow. This is achieved by mapping appropriate SNOMED-CT concepts from all nodes that contain archetype terms, finding the top two category levels of the mapped concepts in the SNOMED-CT hierarchy, and calculating the coverage of each category (a toy version of this coverage computation is sketched after this entry). A quantitative study of the results compares the coverage of different categories to identify relatively under-covered as well as well-covered areas. The results show that the coverage of the well-known National Health Service (NHS) Connecting for Health (CfH) archetype repository over all categories of SNOMED-CT is not equally balanced. Categories worth investigating emerged at different points on the coverage spectrum, including well-covered categories such as Attributes and Qualifier value, under-covered categories such as Microorganism and Kingdom animalia, and categories that are not covered at all, such as Cardiovascular drug (product).
- Full Text
- View/download PDF
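The coverage calculation described in this entry (collect the bound codes, find their top-level SNOMED-CT categories, and compute per-category coverage) can be sketched as follows. The codes, the category mapping, and the category sizes are placeholders; a real implementation would traverse the SNOMED-CT hierarchy instead.

from collections import Counter

bound_codes = ["C1", "C2", "C3", "C4", "C5"]           # codes extracted from archetypes
top_category = {                                       # top-level ancestor of each code
    "C1": "Clinical finding", "C2": "Clinical finding",
    "C3": "Qualifier value",  "C4": "Procedure", "C5": "Clinical finding",
}
category_size = {"Clinical finding": 100, "Qualifier value": 20, "Procedure": 50}

# Count bound concepts per category, then normalise by the category's size.
bound_per_category = Counter(top_category[c] for c in bound_codes)
coverage = {cat: bound_per_category.get(cat, 0) / size
            for cat, size in category_size.items()}
print(coverage)  # {'Clinical finding': 0.03, 'Qualifier value': 0.05, 'Procedure': 0.02}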