7 results for "Jan Bláha"
Search Results
2. Epidemiology and definition of PPH worldwide
- Author
- Jan Bláha and Tereza Bartošová
- Subjects
- Anesthesiology and Pain Medicine, Maternal Mortality, Pregnancy, Risk Factors, Incidence, Postpartum Hemorrhage, Peripartum Period, Humans, Female
- Abstract
Postpartum/peripartum hemorrhage (PPH) is an obstetric emergency complicating 1-10% of all deliveries and is a leading cause of maternal mortality and morbidity worldwide. However, the incidence of PPH differs widely according to the definition and criteria used, the method of measuring postpartum blood loss, and the population being studied, with the highest numbers in developing countries. Despite significant progress in healthcare, the incidence of PPH is rising due to incomplete implementation of guidelines, resulting in treatment delays and suboptimal care. A consensus clinical definition of PPH is needed to enable awareness, early recognition, and initiation of appropriate intensive treatment. Unfortunately, the most widely used definition of PPH, based on blood loss ≥500 ml after delivery, suffers from inaccuracies in blood loss quantification and is not clinically relevant in most cases, as the amount of blood loss does not fully reflect the severity of bleeding.
- Published
- 2022
3. Toward Benchmarking of Long-Term Spatio-Temporal Maps of Pedestrian Flows for Human-Aware Navigation
- Author
- Tomáš Vintr, Jan Blaha, Martin Rektoris, Jiří Ulrich, Tomáš Rouček, George Broughton, Zhi Yan, and Tomáš Krajník
- Subjects
- long-term navigation, planning, spatio-temporal modeling, human-aware navigation, scheduling, pedestrian flows, Mechanical engineering and machinery, TJ1-1570, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Despite the advances in mobile robotics, the introduction of autonomous robots into human-populated environments is rather slow. One of the fundamental reasons is the acceptance of robots by people directly affected by a robot’s presence. Understanding human behavior and dynamics is essential for planning when and how robots should traverse busy environments without disrupting people’s natural motion and causing irritation. Research has exploited various techniques to build spatio-temporal representations of people’s presence and flows and compared their applicability to planning optimal paths in the future. Published comparisons of dynamic map-building techniques typically show how one method fares against another on a particular dataset, but without consistent datasets and high-quality comparison metrics, it is difficult to assess how these methods compare as a whole and in specific tasks. This article proposes a methodology for creating high-quality criteria with interpretable results for comparing long-term spatio-temporal representations for human-aware path planning and human-aware navigation scheduling. Two criteria derived from the methodology are then applied to compare the representations built by the techniques found in the literature. The approaches are compared on a real-world, long-term dataset, and the concept is validated in a field experiment on a robotic platform deployed in a human-populated environment. Our results indicate that continuous spatio-temporal methods that model spatial and temporal phenomena independently outperformed the other modeling approaches. The results provide a baseline for future comparisons of the wide range of methods employed for long-term navigation and give researchers an understanding of how these methods perform in various scenarios.
- Published
- 2022
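The abstract above reports that continuous spatio-temporal methods modeling spatial and temporal phenomena independently performed best. As a minimal sketch of what such a temporal model can look like, the snippet below fits periodic components to a binary presence signal, in the spirit of the frequency-based map enhancement from this group's earlier work; it is not the paper's implementation, and the function names and choice of periodicities are illustrative only.

```python
import numpy as np

def fit_temporal_model(timestamps, present, periods=(86400.0, 604800.0)):
    """Fit a frequency-based temporal model of pedestrian presence.

    For each candidate period (default: one day and one week, in seconds),
    estimate the amplitude and phase of the corresponding periodic
    component of the binary presence signal, plus its long-term mean.
    """
    t = np.asarray(timestamps, dtype=float)
    s = np.asarray(present, dtype=float)
    mean = s.mean()
    components = []
    for period in periods:
        omega = 2.0 * np.pi / period
        # Complex Fourier coefficient of the signal at this frequency.
        coef = np.mean((s - mean) * np.exp(-1j * omega * t))
        components.append((omega, 2.0 * np.abs(coef), np.angle(coef)))
    return mean, components

def predict_presence(t, mean, components):
    """Predicted probability that people are present at time t."""
    p = mean + sum(a * np.cos(w * t + phi) for w, a, phi in components)
    return float(np.clip(p, 0.0, 1.0))
```

A human-aware planner could then query predict_presence along candidate routes and departure times, preferring places and moments with low predicted pedestrian presence.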
4. Embedding Weather Simulation in Auto-Labelling Pipelines Improves Vehicle Detection in Adverse Conditions
- Author
- George Broughton, Jiří Janota, Jan Blaha, Tomáš Rouček, Maxim Simon, Tomáš Vintr, Tao Yang, Zhi Yan, and Tomáš Krajník
- Subjects
- long-term autonomy, machine learning, self-supervised learning, inclement weather conditions, Chemical technology, TP1-1185
- Abstract
The performance of deep learning-based detection methods has made them an attractive option for robotic perception. However, their training typically requires large volumes of data covering all the various situations the robots may potentially encounter during their routine operation. Thus, the workforce required for data collection and annotation is a significant bottleneck when deploying robots in the real world. This applies especially to outdoor deployments, where robots have to face various adverse weather conditions. We present a method that allows an autonomous car transporter to train its neural networks for vehicle detection without human supervision or annotation. We provide the robot with a hand-coded algorithm for detecting cars in LiDAR scans in favourable weather conditions and complement this algorithm with a tracking method and a weather simulator. As the robot traverses its environment, it can collect data samples, which can be subsequently processed into training samples for the neural networks. Because the tracking method is applied offline, it can exploit both the detections made before the currently processed scan and any subsequent detections of the current scene, so the quality of the annotations exceeds that of the raw detections. Along with the acquisition of the labels, the weather simulator can alter the raw sensory data, which are then fed into the neural network together with the labels. We show how this pipeline, run offline, can exploit off-the-shelf weather simulation for the auto-labelling training scheme in a simulator-in-the-loop manner, and that the resulting framework produces an effective detector whose robustness benefits from the weather simulator in the loop. Thus, our automatic data annotation pipeline significantly reduces not only the data annotation but also the data collection effort. This allows the integration of deep learning algorithms into existing robotic systems without the need for tedious data annotation and collection in all possible situations. Moreover, the method provides annotated datasets that can be used to develop other methods. To promote the reproducibility of our research, we provide our datasets, codes and models online.
- Published
- 2022
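The auto-labelling pipeline described in the abstract above can be summarized in a short sketch. The code is reconstructed from the abstract alone; every name (detector, tracker, weather_sim) is a hypothetical placeholder rather than the authors' API.

```python
def build_training_set(scans, detector, tracker, weather_sim):
    """Offline auto-labelling with a weather simulator in the loop.

    scans       -- LiDAR scans recorded in favourable weather
    detector    -- hand-coded clear-weather car detector (per scan)
    tracker     -- offline tracker that sees detections both before and
                   after each scan, so its smoothed labels are better
                   than the raw per-scan detections
    weather_sim -- off-the-shelf simulator that corrupts a clear scan
                   with rain, fog, snow, etc.
    """
    raw_detections = [detector(scan) for scan in scans]
    labels = tracker.smooth(raw_detections)
    training_set = []
    for scan, label in zip(scans, labels):
        training_set.append((scan, label))               # original sample
        training_set.append((weather_sim(scan), label))  # adverse-weather copy
    return training_set  # fed to the neural detector for training
```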
5. Contrastive Learning for Image Registration in Visual Teach and Repeat Navigation
- Author
- Zdeněk Rozsypálek, George Broughton, Pavel Linder, Tomáš Rouček, Jan Blaha, Leonard Mentzl, Keerthy Kusumam, and Tomáš Krajník
- Subjects
- visual teach and repeat navigation, long-term autonomy, machine learning, contrastive learning, image representations, Chemical technology, TP1-1185
- Abstract
Visual teach and repeat navigation (VT&R) is popular in robotics thanks to its simplicity and versatility. It enables mobile robots equipped with a camera to traverse learned paths without the need to create globally consistent metric maps. Although teach and repeat frameworks have been reported to be relatively robust to changing environments, they still struggle with day-to-night and seasonal changes. This paper aims to find the horizontal displacement between prerecorded and currently perceived images required to steer a robot towards the previously traversed path. We employ a fully convolutional neural network to obtain dense representations of the images that are robust to changes in the environment and variations in illumination. The proposed model achieves state-of-the-art performance on multiple datasets with seasonal and day/night variations. In addition, our experiments show that it is possible to use the model to generate additional training examples that can be used to further improve the original model’s robustness. We also conducted a real-world experiment on a mobile robot to demonstrate the suitability of our method for VT&R.
- Published
- 2022
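The registration step at the heart of the abstract above, estimating the horizontal displacement between the prerecorded and currently perceived images from dense neural representations, can be sketched as a 1D cross-correlation. The descriptor shapes and the assumption that image height is pooled away are simplifications for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def horizontal_shift(desc_map: torch.Tensor, desc_live: torch.Tensor) -> int:
    """Estimate the horizontal displacement aligning two dense descriptors.

    desc_map, desc_live -- tensors of shape (C, W): one C-dimensional
    descriptor per image column, produced by a fully convolutional
    network (the image height is assumed to be pooled away).
    Returns the shift, in columns, that best aligns live onto map.
    """
    pad = desc_live.shape[-1] // 2
    # PyTorch's conv1d computes cross-correlation (no kernel flip),
    # which is exactly the alignment score we want at every shift.
    corr = F.conv1d(
        desc_map.unsqueeze(0),   # input:  (1, C, W)
        desc_live.unsqueeze(0),  # weight: (1, C, W), one correlation map
        padding=pad,
    ).squeeze()
    return int(corr.argmax()) - pad  # 0 means the images already align
```

In a VT&R controller, the estimated shift would then be converted into a steering correction that pulls the robot back onto the taught path.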
6. Self-Supervised Robust Feature Matching Pipeline for Teach and Repeat Navigation
- Author
- Tomáš Rouček, Arash Sadeghi Amjadi, Zdeněk Rozsypálek, George Broughton, Jan Blaha, Keerthy Kusumam, and Tomáš Krajník
- Subjects
- visual teach and repeat navigation, long-term autonomy, self-supervised machine learning, computer vision, mobile robot, artificial neural network, Chemical technology, TP1-1185
- Abstract
The performance of deep neural networks and the low cost of computational hardware have made computer vision a popular choice in many robotic systems. An attractive feature of deep-learned methods is their ability to cope with appearance changes caused by day–night cycles and seasonal variations. However, training deep neural networks typically relies on large numbers of hand-annotated images, which requires significant effort for data collection and annotation. We present a method that allows autonomous, self-supervised training of a neural network in visual teach-and-repeat (VT&R) tasks, where a mobile robot has to traverse a previously taught path repeatedly. Our method is based on a fusion of two image registration schemes: one based on a Siamese neural network and another on point-feature matching. As the robot traverses the taught paths, it uses the results of feature-based matching to train the neural network, which, in turn, provides coarse registration estimates to the feature matcher. We show that as the neural network gets trained, the accuracy and robustness of the navigation increase, making the robot capable of dealing with significant changes in the environment. This method can significantly reduce the data annotation effort when designing new robotic systems or introducing robots into new environments. Moreover, the method provides annotated datasets that can be deployed in other navigation systems. To promote the reproducibility of the research presented herein, we provide our datasets, codes and trained models online.
- Published
- 2022
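The mutual bootstrapping between the two registration schemes described above can be sketched as follows; siamese_net and matcher are hypothetical stand-ins for the two components the abstract names, with assumed methods estimate_shift, match, and train_on.

```python
def self_supervised_traversal(taught_imgs, live_imgs, siamese_net, matcher):
    """One teach-and-repeat traversal with mutual bootstrapping.

    Point-feature matching produces displacement labels used to train
    the Siamese network; the network in turn gives the matcher a coarse
    prior that narrows its search window.
    """
    training_triplets = []
    for taught, live in zip(taught_imgs, live_imgs):
        # Coarse estimate from the (possibly still untrained) network.
        coarse = siamese_net.estimate_shift(taught, live)
        # Precise estimate from point-feature matching near that prior.
        precise = matcher.match(taught, live, search_center=coarse)
        if precise is not None:  # feature matching succeeded
            training_triplets.append((taught, live, precise))
    # After the traversal, refine the network on the matcher's labels.
    siamese_net.train_on(training_triplets)
```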
7. Praktické postupy v anestezii: 3., přepracované a doplněné vydání [Practical Procedures in Anesthesia: 3rd, revised and expanded edition]
- Author
- Barbora Jindrová, Jan Kunstýř, and Jan Bláha
- Published
- 2024