25 results for "Svátek V"
Search Results
2. Improving WWW Access - from Single-Purpose Systems to Agent Architectures?
- Author
-
Sramek, D., Berka, P., Kosek, J., and Svátek, V.
- Published
- 2000
- Full Text
- View/download PDF
3. Improving Editorial Workflow and Metadata Quality at Springer Nature
- Author
-
Salatino, A. A., Osborne, F., Birukou, A., and Motta, E. (proceedings editors: Ghidini, C., Hartig, O., Maleshkova, M., Svátek, V., Cruz, I., Hogan, A., Song, J., Lefrançois, M., and Gandon, F.)
- Abstract
Identifying the research topics that best describe the scope of a scientific publication is a crucial task for editors, in particular because the quality of these annotations determines how effectively users are able to discover the right content in online libraries. For this reason, Springer Nature, the world’s largest academic book publisher, has traditionally entrusted this task to their most expert editors. These editors manually analyse all new books, possibly including hundreds of chapters, and produce a list of the most relevant topics. Hence, this process has traditionally been very expensive, time-consuming, and confined to a few senior editors. For these reasons, back in 2016 we developed Smart Topic Miner (STM), an ontology-driven application that assists the Springer Nature editorial team in annotating the volumes of all books covering conference proceedings in Computer Science. Since then STM has been regularly used by editors in Germany, China, Brazil, India, and Japan, for a total of about 800 volumes per year. Over the past three years the initial prototype has iteratively evolved in response to feedback from the users and evolving requirements. In this paper we present the most recent version of the tool and describe the evolution of the system over the years, the key lessons learnt, and the impact on the Springer Nature workflow. In particular, our solution has drastically reduced the time needed to annotate proceedings and significantly improved their discoverability, resulting in 9.3 million additional downloads. We also present a user study involving 9 editors, which yielded excellent results in terms of usability, and report an evaluation of the new topic classifier used by STM, which outperforms previous versions in recall and F-measure.
- Published
- 2019
4. Towards improving the quality of knowledge graphs with data-driven ontology patterns and SHACL
- Author
-
Spahiu, B., Maurino, A., and Palmonari, M. (editors: Skjæveland, M. G., Hu, Y., Hammar, K., Svátek, V., and Ławrynowicz, A.)
- Abstract
As the amount of Linked Data available on the Web continues to grow, understanding its structure and assessing its quality remain challenging tasks and a bottleneck for reuse. ABSTAT is an online semantic profiling tool that helps data consumers better understand the data by extracting data-driven ontology patterns and statistics about the data. The SHACL Shapes Constraint Language helps users capture quality issues in the data by means of constraints. In this paper we propose a methodology to improve the quality of different versions of the data by means of SHACL constraints learned from the semantic profiles produced by ABSTAT.
- Published
- 2018
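The data-driven constraint learning sketched in the abstract above can be illustrated with a toy example: profile one dataset version to learn min/max cardinality patterns per (type, predicate), then check a later version against those bounds. This is plain Python, not the actual ABSTAT or SHACL toolchain, and all names are illustrative:

```python
from collections import defaultdict

def _pattern_counts(triples):
    """Per-subject occurrence counts for each (type, predicate) pattern."""
    types = {s: o for s, p, o in triples if p == "rdf:type"}
    counts = defaultdict(lambda: defaultdict(int))
    for s, p, o in triples:
        if p != "rdf:type" and s in types:
            counts[(types[s], p)][s] += 1
    return counts

def profile(triples):
    """Minimal ABSTAT-style profile: (min, max) occurrences per subject
    for each (type, predicate) pattern observed in the data."""
    return {pat: (min(c.values()), max(c.values()))
            for pat, c in _pattern_counts(triples).items()}

def check(triples, constraints):
    """Validate a (possibly newer) dataset version against the learned
    cardinality bounds; returns (subject, predicate, count) violations."""
    counts = _pattern_counts(triples)
    violations = []
    for (t, p), (lo, hi) in constraints.items():
        for s, n in counts.get((t, p), {}).items():
            if not lo <= n <= hi:
                violations.append((s, p, n))
    return violations
```

A real pipeline would emit the learned bounds as SHACL `sh:minCount`/`sh:maxCount` shapes and run a SHACL validator instead of the hand-rolled `check`.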
5. Modeling fiscal data with the Data Cube Vocabulary
- Author
-
Mynarz, J., Svátek, V., Karampatakis, S., Klímek, J., and Bratsas, C.
- Subjects
data modelling, Resource Description Framework
- Abstract
We present a fiscal data model based on the Data Cube Vocabulary, which we developed for the OpenBudgets.eu project. The model defines component properties out of which data structure definitions for concrete datasets can be composed. Based on initial usage experiments, simple validation constraints have been formulated.
- Published
- 2016
- Full Text
- View/download PDF
6. Graph Kernels for Task 1 and 2 of the Linked Data Data-Mining Challenge 2013
- Author
-
de Vries, G. K. D., d'Amato, C., Berka, P., Svátek, V., and Wecel, K.
- Abstract
In this paper we present the application of two RDF graph kernels to tasks 1 and 2 of the linked data data-mining challenge. Both graph kernels use term vectors to handle RDF literals. Based on experiments with the task data, we use the Weisfeiler-Lehman RDF graph kernel for task 1 and the intersection path tree kernel for task 2 in our final classifiers for the challenge. Applying these graph kernels is very straightforward and requires (almost) no preprocessing of the data.
- Published
- 2013
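The Weisfeiler-Lehman kernel mentioned above works by iteratively compressing each node's label together with its neighbours' labels, then comparing the resulting label histograms. A minimal sketch for plain undirected labelled graphs follows (the RDF variant additionally handles directed edges and literals; everything here is a simplification for illustration):

```python
from collections import Counter

def wl_histogram(adj, labels, iterations=2):
    """Weisfeiler-Lehman relabeling: each node's label is repeatedly
    replaced by a compressed label (own label, sorted neighbour labels).
    Returns the multiset of all labels seen across all iterations."""
    hist = Counter(labels.values())
    for _ in range(iterations):
        labels = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
                  for v in adj}
        hist.update(labels.values())
    return hist

def wl_kernel(h1, h2):
    """Kernel value = dot product of the two label histograms."""
    return sum(h1[k] * h2[k] for k in h1 if k in h2)
```

The kernel value grows with the amount of labelled substructure two graphs share, which is what makes it usable as a similarity measure for classifiers such as SVMs.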
7. A Fast and Simple Graph Kernel for RDF
- Author
-
de Vries, G. K. D., de Rooij, S., d'Amato, C., Berka, P., Svátek, V., and Wecel, K.
- Abstract
In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster in terms of computation time. Prediction performance is worse than the state-of-the-art Weisfeiler-Lehman RDF kernel, but our kernel is a factor of 10 faster to compute. Thus, we consider this kernel a very suitable baseline for learning from RDF data. Furthermore, we extend this kernel to handle RDF literals as bag-of-words feature vectors, which increases performance in two of the four experiments.
- Published
- 2013
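The path-counting idea from this abstract — build a depth-limited tree rooted at the instance and count the label paths in it — can be sketched as follows (illustrative only, not the authors' implementation; node and label names are invented):

```python
from collections import Counter

def path_counts(adj, root, depth):
    """Count edge-label paths that start at `root`, up to `depth` edges.
    `adj` maps a node to a list of (edge_label, child) pairs."""
    counts = Counter()
    def walk(node, prefix, d):
        if d == 0:
            return
        for label, child in adj.get(node, []):
            path = prefix + (label,)
            counts[path] += 1
            walk(child, path, d - 1)
    walk(root, (), depth)
    return counts

def path_kernel(c1, c2):
    """Kernel value = dot product of the two path-count vectors."""
    return sum(c1[p] * c2[p] for p in c1 if p in c2)
```

Because each instance only needs one tree traversal, this kind of kernel is cheap to compute, which matches the speed claim in the abstract.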
8. Results of the Ontology Alignment Evaluation Initiative 2008
- Author
-
Caracciolo, C., Euzenat, J., Hollink, L., Ichise, R., Isaac, A., Malaisé, V., Meilicke, C., Pane, J., Shvaiko, P., Stuckenschmidt, H., Šváb-Zamazal, O., and Svátek, V.
- Subjects
QA076 Computer software, [INFO.INFO-WB] Computer Science [cs]/Web, [INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI]
- Abstract
Ontology matching consists of finding correspondences between ontology entities. OAEI campaigns aim at comparing ontology matching systems on precisely defined test sets. Test sets can use ontologies of different nature (from expressive OWL ontologies to simple directories) and use different modalities, e.g., blind evaluation, open evaluation, consensus. OAEI-2008 builds on previous campaigns by having 4 tracks with 8 test sets followed by 13 participants. Following the trend of previous years, more participants reach the forefront. The official results of the campaign are those published on the OAEI web site.
- Published
- 2008
9. Results of the Ontology Alignment Evaluation Initiative 2007
- Author
-
Euzenat, J., Isaac, A., Meilicke, C., Shvaiko, P., Stuckenschmidt, H., Šváb, O., Svátek, V., van Hage, W. R., and Yatskevich, M.
- Subjects
[INFO.INFO-WB] Computer Science [cs]/Web, [INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI]
- Abstract
We present the Ontology Alignment Evaluation Initiative 2007 campaign as well as its results. The OAEI campaign aims at comparing ontology matching systems on precisely defined test sets. OAEI-2007 builds on previous campaigns by having 4 tracks with 7 test sets followed by 17 participants. This is a major increase in the number of participants compared to the previous years. Also, the evaluation results demonstrate that more participants are at the forefront. The final and official results of the campaign are those published on the OAEI web site.
- Published
- 2007
10. Description of alignment evaluation and benchmarking results
- Author
-
Shvaiko, P., Euzenat, J., Stuckenschmidt, H., Mochol, M., Giunchiglia, F., Yatskevich, M., Avesani, P., van Hage, W. R., Šváb, O., and Svátek, V.
- Published
- 2007
11. Results of the Ontology Alignment Evaluation Initiative
- Author
-
Euzenat, J., Mochol, M., Shvaiko, P., Stuckenschmidt, H., Šváb, O., Svátek, V., van Hage, W. R., and Yatskevich, M.
- Published
- 2007
12. Content Collection for the Labelling of Health-Related Web Content
- Author
-
Stamatakis, K., Metsis, V., Karkaletsis, V., Ruzicka, M., Svátek, V., Amigó, E., Pöllä, M., and Spyropoulos, C.
- Full Text
- View/download PDF
13. Content Collection for the Labelling of Health-Related Web Content.
- Author
-
Stamatakis, K., Metsis, V., Karkaletsis, V., Ruzicka, M., Svátek, V., Amigó, E., Pöllä, M., and Spyropoulos, C.
- Abstract
As the number of health-related web sites in various languages increases, so does the need for control mechanisms that give the users adequate guarantee on whether the web resources they are visiting meet a minimum level of quality standards. Based upon state-of-the-art technology in the areas of the semantic web, content analysis and quality labelling, the MedIEQ project integrates existing technologies and tests them in a novel application: the automation of the labelling process in health-related web content. MedIEQ provides tools that crawl the web to locate unlabelled health web resources, label them according to pre-defined labelling criteria, and monitor them. This paper focuses on content collection and discusses our experiments in the English language.
- Published
- 2007
- Full Text
- View/download PDF
14. Searching the Internet Using Topological Analysis of Web Pages
- Author
-
Skopal, T., Snášel, V., Krátký, M., and Svátek, V.
15. Architecture for enhancing video analysis results using complementary resources
- Author
-
Nemrava, J., Buitelaar, P., Declerck, T., Svátek, V., Petrák, J., Cobet, A., Zeiner, H., Sadlier, D., and O'Connor, N.
- Subjects
Information storage and retrieval systems, Digital video
- Abstract
In this paper we present different sources of information complementary to audio-visual (A/V) streams and propose their usage for enriching A/V data with semantic concepts in order to bridge the gap between low-level video analysis and high-level analysis. Our aim is to extract cross-media feature descriptors from semantically enriched and aligned resources so as to detect finer-grained events in video. We introduce an architecture for complementary resources analysis and discuss domain dependency aspects of this approach connected to our initial domain of soccer broadcasts.
16. Semantics, Web and Mining: Preface
- Author
-
Berendt, B., Hotho, A., Mladenić, D., Semeraro, G., Spiliopoulou, M., Stumme, G., van Someren, M., Ackermann, M., Grobelnik, M., and Svátek, V.
17. Improving Editorial Workflow and Metadata Quality at Springer Nature
- Author
-
Salatino, A. A., Osborne, F., Birukou, A., and Motta, E. (proceedings editors: Ghidini, C., Hartig, O., Maleshkova, M., Svátek, V., Cruz, I., Hogan, A., Song, J., Lefrançois, M., and Gandon, F.)
- Subjects
Scholarly ontologies, Topic classification, Topic detection, Scholarly data, Bibliographic metadata, Usability, Discoverability, Workflow, Publishing, Computer Science - Digital Libraries (cs.DL), Machine Learning (cs.LG), Artificial Intelligence (cs.AI), Information Retrieval (cs.IR)
- Abstract
Identifying the research topics that best describe the scope of a scientific publication is a crucial task for editors, in particular because the quality of these annotations determines how effectively users are able to discover the right content in online libraries. For this reason, Springer Nature, the world's largest academic book publisher, has traditionally entrusted this task to their most expert editors. These editors manually analyse all new books, possibly including hundreds of chapters, and produce a list of the most relevant topics. Hence, this process has traditionally been very expensive, time-consuming, and confined to a few senior editors. For these reasons, back in 2016 we developed Smart Topic Miner (STM), an ontology-driven application that assists the Springer Nature editorial team in annotating the volumes of all books covering conference proceedings in Computer Science. Since then STM has been regularly used by editors in Germany, China, Brazil, India, and Japan, for a total of about 800 volumes per year. Over the past three years the initial prototype has iteratively evolved in response to feedback from the users and evolving requirements. In this paper we present the most recent version of the tool and describe the evolution of the system over the years, the key lessons learnt, and the impact on the Springer Nature workflow. In particular, our solution has drastically reduced the time needed to annotate proceedings and significantly improved their discoverability, resulting in 9.3 million additional downloads. We also present a user study involving 9 editors, which yielded excellent results in terms of usability, and report an evaluation of the new topic classifier used by STM, which outperforms previous versions in recall and F-measure. (In: The Semantic Web - ISWC 2019. Lecture Notes in Computer Science, vol. 11779. Springer, Cham.)
- Published
- 2019
18. A Pattern-Based Core Ontology for Product Lifecycle Management based on DUL
- Author
-
Schönteich, F., Kasten, A., and Scherp, A. (editors: Skjæveland, M. G., Hu, Y., Hammar, K., Svátek, V., and Ławrynowicz, A.)
- Subjects
pattern-based ontologies ,product lifecycle management - Abstract
A major challenge in Product Lifecycle Management (PLM) is the exchange of product data across lifecycle phases, information systems, and parties, as data formats, structure, and quality can vary vastly. Existing approaches focus only on certain phases of PLM, predominantly design and manufacturing, while the subsequent phases of usage/maintenance and disposal are often ignored. However, especially the usage phase is becoming increasingly important for revenue, as customer expectation of services beyond the initial purchase of a product is growing. This paper proposes CO-PLM, an ontology based on the foundational ontology DOLCE+DnS Ultralite, to provide a formal basis for PLM. In contrast to existing approaches, CO-PLM follows a holistic approach covering all phases of PLM and integrates patterns from existing core ontologies.
- Published
- 2018
19. Impact of COVID-19 research: a study on predicting influential scholarly documents using machine learning and a domain-independent knowledge graph.
- Author
-
Rabby G, D'Souza J, Oelen A, Dvorackova L, Svátek V, and Auer S
- Subjects
- Humans, Machine Learning, Algorithms, Language, Pattern Recognition, Automated, COVID-19
- Abstract
Multiple studies have investigated bibliometric features and uncategorized scholarly documents for the influential scholarly document prediction task. In this paper, we describe our work that attempts to go beyond bibliometric metadata to predict influential scholarly documents. Furthermore, this work also examines the influential scholarly document prediction task over categorized scholarly documents. We also introduce a new approach that enhances the document representation method with a domain-independent knowledge graph in order to find influential scholarly documents using categorized scholarly content. As the input collection, we use the WHO corpus of scholarly documents on the theme of COVID-19. This study examines different document representation methods for machine learning, including TF-IDF, BOW, and embedding-based language models (BERT); the TF-IDF document representation method worked better than the others. Of the various machine learning methods tested, logistic regression outperformed the others for scholarly document category classification, and the random forest algorithm obtained the best results for influential scholarly document prediction when a domain-independent knowledge graph, specifically DBpedia, was used to enhance the document representation for categorized scholarly content. In this case, our study combines state-of-the-art machine learning methods with the BOW document representation method, enhanced with the direct type (RDF type) and unqualified relations from DBpedia. From this experiment, we did not find any impact of the enhanced document representation on scholarly document category classification, but we did find an effect on influential scholarly document prediction with categorical data.
- Published
- 2023
- Full Text
- View/download PDF
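As context for the representation comparison in the abstract above, here is a bare-bones TF-IDF (term frequency × inverse document frequency) computation in plain Python. This is a generic illustration, not the study's actual pipeline, and it omits the smoothing and normalisation that standard ML tooling applies:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF vectors for a list of tokenised documents.
    TF = raw term count in the document; IDF = log(N / document frequency)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # each term counted once per document
    return [{t: c * math.log(n / df[t]) for t, c in Counter(doc).items()}
            for doc in docs]
```

Terms that occur in every document get weight zero, while rare terms are up-weighted, which is why TF-IDF often beats plain BOW counts for classification.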
20. Tool-supported Interactive Correction and Semantic Annotation of Narrative Clinical Reports.
- Author
-
Zvára K, Tomečková M, Peleška J, Svátek V, and Zvárová J
- Subjects
- Data Accuracy, Guidelines as Topic, International Classification of Diseases, Meaningful Use standards, Software, User-Computer Interface, Electronic Health Records standards, Machine Learning, Natural Language Processing, Semantics, Vocabulary, Controlled, Word Processing standards, Writing standards
- Abstract
Objectives: Our main objective is to design a method of, and supporting software for, interactive correction and semantic annotation of narrative clinical reports, which would allow for their easier and less erroneous processing outside their original context: first, by physicians unfamiliar with the original language (and possibly also the source specialty), and second, by tools requiring structured information, such as decision-support systems. Our additional goal is to gain insights into the process of narrative report creation, including the errors and ambiguities arising therein, and also into the process of report annotation by clinical terms. Finally, we also aim to provide a dataset of ground-truth transformations (specific for Czech as the source language), set up by expert physicians, which can be reused in the future for subsequent analytical studies and for training automated transformation procedures.
Methods: A three-phase preprocessing method has been developed to support secondary use of the narrative clinical reports often stored in electronic health records. In the first phase a narrative clinical report is tokenized. In the second phase the tokenized report is normalized; the normalized report is easily readable for health professionals who know the language used in it. In the third phase the normalized report is enriched with extracted structured information, yielding a semi-structured normalized report in which the extracted clinical terms are matched to codebook terms. Software tools for interactive correction, expansion, and semantic annotation of narrative clinical reports have been developed, and the three-phase preprocessing method has been validated in the cardiology area.
Results: The three-phase preprocessing method was validated on 49 anonymous Czech narrative clinical reports in the field of cardiology. Descriptive statistics from the database of accomplished transformations have been calculated. Two cardiologists participated in the annotation phase. The first cardiologist annotated 1500 clinical terms found in the 49 reports to codebook terms using the classification systems ICD-10, SNOMED CT, LOINC, and LEKY. The second cardiologist validated the first cardiologist's annotations. The correct clinical terms and the codebook terms have been stored in a database.
Conclusions: We extracted structured information from Czech narrative clinical reports using the proposed three-phase preprocessing method and linked it to electronic health records. The software tool, although generic, is tailored for Czech as the specific language of the electronic health record pool under study. This provides a potential etalon for porting the approach to dozens of other less-spoken languages. Structured information can support medical decision making, quality assurance tasks, and further medical research.
- Published
- 2017
- Full Text
- View/download PDF
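The three-phase preprocessing described above (tokenise, normalise, annotate against a codebook) can be caricatured in a few lines of Python. The abbreviation table and codebook below are invented stand-ins, not the real Czech clinical vocabulary or the actual ICD-10/SNOMED CT codebooks:

```python
import re

# Hypothetical abbreviation expansions and codebook; the real system maps
# Czech clinical terms to ICD-10 / SNOMED CT / LOINC codes.
EXPANSIONS = {"htn": "hypertension", "pt": "patient"}
CODEBOOK = {"hypertension": "I10"}

def tokenize(report):
    """Phase 1: split the raw report into lowercase word tokens."""
    return re.findall(r"\w+", report.lower())

def normalize(tokens):
    """Phase 2: expand abbreviations so the text is readable as-is."""
    return [EXPANSIONS.get(t, t) for t in tokens]

def annotate(tokens):
    """Phase 3: match normalised tokens to codebook terms."""
    return [(t, CODEBOOK[t]) for t in tokens if t in CODEBOOK]
```

In the actual system each phase is interactive — a physician can correct the output before the next phase runs — which this sketch does not model.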
21. Mark-up based analysis of narrative guidelines with the Stepper tool.
- Author
-
Růžička M and Svátek V
- Subjects
- Humans, Programming Languages, User-Computer Interface, Artificial Intelligence, Decision Support Systems, Clinical, Practice Guidelines as Topic
- Abstract
The Stepper tool was developed to assist a knowledge engineer in developing a computable version of narrative guidelines. The system is document-centric: it formalises the initial text in multiple user-definable steps corresponding to interactive XML transformations. In this paper, we report on experience obtained by applying the tool on a narrative guideline document addressing unstable angina pectoris. Possible role of the tool and associated methodology in developing a guideline-based application is also discussed.
- Published
- 2004
22. Analysis of guideline compliance--a data mining approach.
- Author
-
Svátek V, Říha A, Peleška J, and Rauch J
- Subjects
- Decision Making, Computer-Assisted, Evidence-Based Medicine, Humans, Practice Patterns, Physicians', Software, Decision Support Systems, Clinical, Guideline Adherence, Practice Guidelines as Topic
- Abstract
While guideline-based decision support is safety-critical and typically requires human interaction, offline analysis of guideline compliance can be performed to a large extent automatically. We examine the possibility of automatic detection of potential non-compliance followed up with (statistical) association mining. Only frequent associations of non-compliance patterns with various patient data are submitted to a medical expert for interpretation. The initial experiment was carried out in the domain of hypertension management.
- Published
- 2004
23. Step-by-step mark-up of medical guideline documents.
- Author
-
Svátek V and Růžička M
- Subjects
- Cardiology standards, Decision Trees, Humans, Hypertension therapy, Artificial Intelligence, Decision Support Systems, Clinical, Medical Records Systems, Computerized, Practice Guidelines as Topic
- Abstract
Approaches to formalization of medical guidelines can be divided into model-centric and document-centric. While model-centric approaches dominate in the development of clinical decision support applications, document-centric, mark-up-based formalization is suitable for application tasks requiring the 'literal' content of the document to be transferred into the formal model. Examples of such tasks are logical verification of the document or compliance analysis of health records. The quality and efficiency of document-centric formalization can be improved using a decomposition of the whole process into several explicit steps. We present a methodology and software tool supporting the step-by-step formalization process. The knowledge elements can be marked up in the source text, refined to a tree structure with increasing level of detail, rearranged into an XML knowledge base, and, finally, exported into the operational representation. User-definable transformation rules make it possible to automate a large part of the process. The approach is being tested in the domain of cardiology. For parts of the WHO/ISH Guidelines for Hypertension, the process has been carried out through all the stages, to the form of an executable application generated automatically from the XML knowledge base.
- Published
- 2003
- Full Text
- View/download PDF
24. Step-by-step mark-up of medical guideline documents.
- Author
-
Svátek V and Růžička M
- Subjects
- Czech Republic, Programming Languages, Documentation, Practice Guidelines as Topic
- Abstract
The quality of document-centric formalisation of medical guidelines can be improved using a decomposition of the whole process into several explicit steps. We present a methodology and a software tool supporting the step-by-step formalisation process. The knowledge elements can be marked up in the text with increasing level of detail, rearranged into an XML knowledge base and exported into the operational representation. Semi-automated transitions can be specified by means of rules. The approach has been tested in a hypertension application.
- Published
- 2002
25. [Methods of collection and storage of porcine dermoepidermal grafts].
- Author
-
Moserová J, Běhounková E, Vrabec R, and Svátek V
- Subjects
- Animals, Burns surgery, Humans, Swine, Skin Transplantation, Surgery, Plastic, Tissue Preservation, Transplantation, Heterologous
- Published
- 1974
Discovery Service for Jio Institute Digital Library