13 results on "Tolsgaard, Martin G"
Search Results
2. AI supported fetal echocardiography with quality assessment
- Author
-
Taksoee-Vester, Caroline A., Mikolaj, Kamil, Bashir, Zahra, Christensen, Anders N., Petersen, Olav B., Sundberg, Karin, Feragen, Aasa, Svendsen, Morten B. S., Nielsen, Mads, and Tolsgaard, Martin G.
- Abstract
This study aimed to develop a deep learning model to assess the quality of fetal echocardiography and to perform prospective clinical validation. The model was trained on data from the 18-22-week anomaly scan conducted in seven hospitals from 2008 to 2018. Prospective validation involved 100 patients from two hospitals. A total of 5363 images from 2551 pregnancies were used for training and validation. The model's segmentation accuracy depended on image quality measured by a quality score (QS). It achieved an overall average accuracy of 0.91 (SD 0.09) across the test set, with images having above-average QS scoring 0.97 (SD 0.03). During prospective validation of 192 images, clinicians rated 44.8% (SD 9.8) of images as equal in quality, 18.69% (SD 5.7) favoring auto-captured images and 36.51% (SD 9.0) preferring manually captured ones. Images with above-average QS showed better agreement on segmentations (p < 0.001) and QS (p < 0.001) with fetal medicine experts. Auto-capture saved additional planes beyond protocol requirements, resulting in more comprehensive echocardiographies. Low QS had an adverse effect on both model performance and clinicians' agreement with model feedback. The findings highlight the importance of developing and evaluating AI models based on 'noisy' real-life data rather than pursuing the highest accuracy possible with retrospective academic-grade data.
- Published
- 2024
3. Technical Skills Curriculum in Neonatology: A Modified European Delphi Study
- Author
-
Bay, Emma Therese, Breindahl, Niklas, Nielsen, Mathilde M., Roehr, Charles C., Szczapa, Tomasz, Gagliardi, Luigi, Vento, Maximo, Visser, Douwe H., Stoen, Ragnhild, Klotz, Daniel, Rakow, Alexander, Breindahl, Morten, Tolsgaard, Martin G., and Aunsholt, Lise
- Abstract
Introduction: Simulation-based training (SBT) aids healthcare providers in acquiring the technical skills necessary to improve patient outcomes and safety. However, since SBT may require significant resources, training all skills to a comparable extent is impractical. Hence, a strategic prioritization of technical skills is necessary. While the European Training Requirements in Neonatology provide guidance on necessary skills, they lack prioritization. We aimed to identify and prioritize technical skills for a SBT curriculum in neonatology. Methods: A three-round modified Delphi process of expert neonatologists and neonatal trainees was performed. In round one, the participants listed all the technical skills newly trained neonatologists should master. The content analysis excluded duplicates and non-technical skills. In round two, the Copenhagen Academy for Medical Education and Simulation Needs Assessment Formula (CAMES-NAF) was used to preliminarily prioritize the technical skills according to frequency, importance of competency, SBT impact on patient safety, and feasibility for SBT. In round three, the participants further refined and reprioritized the technical skills. Items achieving consensus (agreement of ≥75%) were included. Results: We included 168 participants from 10 European countries. The response rates in rounds two and three were 80% (135/168) and 87% (117/135), respectively. In round one, the participants suggested 1964 different items. Content analysis revealed 81 unique technical skills prioritized in round two. In round three, 39 technical skills achieved consensus and were included. Conclusion: We reached a European consensus on a prioritized list of 39 technical skills to be included in a SBT curriculum in neonatology.
- Published
- 2024
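The consensus rule in the Delphi abstract above reduces to a simple threshold check per item. A minimal Python sketch, assuming per-item agreement tallies from the 117 round-three respondents reported; the skill names and vote counts are invented for illustration:

```python
# Minimal sketch of the Delphi consensus rule described above: an item
# is retained when >= 75% of round-three respondents endorse it.
# All skill names and vote counts below are illustrative, not study data.

CONSENSUS_THRESHOLD = 0.75

def reaches_consensus(agree_votes: int, respondents: int) -> bool:
    """Return True if the agreement fraction meets the 75% cut-off."""
    return agree_votes / respondents >= CONSENSUS_THRESHOLD

# Hypothetical round-three tallies (117 respondents, as reported above).
round3_votes = {
    "endotracheal intubation": 112,
    "umbilical vein catheterisation": 101,
    "needle thoracocentesis": 96,
    "laryngeal mask placement": 80,   # 80/117 = 68% -> excluded
}

included = [skill for skill, votes in round3_votes.items()
            if reaches_consensus(votes, 117)]
print(included)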
4. Validity evidence supporting clinical skills assessment by artificial intelligence compared with trained clinician raters
- Author
-
Johnsson, Vilma, Søndergaard, Morten Bo, Kulasegaram, Kulamakan, Sundberg, Karin, Tiblad, Eleonor, Herling, Lotta, Petersen, Olav Bjørn, and Tolsgaard, Martin G.
- Abstract
Background: Artificial intelligence (AI) is increasingly used in medical education, but our understanding of the validity of AI-based assessments (AIBA) as compared with traditional clinical expert-based assessments (EBA) is limited. In this study, the authors aimed to compare and contrast the validity evidence for the assessment of a complex clinical skill based on scores generated from an AI and trained clinical experts, respectively. Methods: The study was conducted between September 2020 and October 2022. The authors used Kane's validity framework to prioritise and organise their evidence according to the four inferences: scoring, generalisation, extrapolation and implications. The context of the study was chorionic villus sampling performed within the simulated setting. AIBA and EBA were used to evaluate performances of experts, intermediates and novices based on video recordings. The clinical experts used a scoring instrument developed in a previous international consensus study. The AI used convolutional neural networks for capturing features on video recordings, motion tracking and eye movements to arrive at a final composite score. Results: A total of 45 individuals participated in the study (22 novices, 12 intermediates and 11 experts). The authors demonstrated validity evidence for scoring, generalisation, extrapolation and implications for both EBA and AIBA. The plausibility of assumptions related to scoring, evidence of reproducibility and relation to different training levels was examined. Issues relating to construct underrepresentation, lack of explainability, and threats to robustness were identified as potential weak links in the AIBA validity argument compared with the EBA validity argument. Conclusion: There were weak links in the use of AIBA compared with EBA, mainly in their representation of the underlying construct but also regarding their explainability and ability to transfer to other datasets. However, combining AI and clinical expert-based assessments may offer complementary benefits, which is a promising subject for future research.
- Published
- 2024
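The AI described in the abstract above fuses several signal streams (CNN video features, motion tracking, eye movements) into a final composite score. A minimal sketch of one plausible fusion step, a weighted average of per-stream scores; the streams, weights, and values are invented and are not the study's actual method:

```python
# Hypothetical fusion of per-stream skill scores into a composite,
# illustrating the kind of final scoring step described above.
# Streams, weights, and values are invented; the study may differ.

STREAM_WEIGHTS = {
    "video_features": 0.5,   # CNN-derived image features
    "motion_tracking": 0.3,  # instrument/hand motion metrics
    "eye_movements": 0.2,    # gaze-based metrics
}

def composite_score(stream_scores: dict[str, float]) -> float:
    """Weighted average of per-stream scores normalised to [0, 1]."""
    total = sum(STREAM_WEIGHTS[name] * score
                for name, score in stream_scores.items())
    return total / sum(STREAM_WEIGHTS.values())

print(composite_score({"video_features": 0.8,
                       "motion_tracking": 0.6,
                       "eye_movements": 0.7}))  # -> 0.72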
5. Surgical gestures can be used to assess surgical competence in robot-assisted surgery: A validity investigating study of simulated RARP
- Author
-
Olsen, Rikke Groth, Svendsen, Morten Bo Søndergaard, Tolsgaard, Martin G., Konge, Lars, Røder, Andreas, and Bjerrum, Flemming
- Abstract
To collect validity evidence for the assessment of surgical competence through the classification of general surgical gestures for a simulated robot-assisted radical prostatectomy (RARP). We used 165 video recordings of novice and experienced RARP surgeons performing three parts of the RARP procedure on the RobotiX Mentor. We annotated the surgical tasks with different surgical gestures: dissection, hemostatic control, application of clips, needle handling, and suturing. The gestures were analyzed using idle time (periods with minimal instrument movements) and active time (whenever a surgical gesture was annotated). The distribution of surgical gestures was described using a one-dimensional heat map ('snail tracks'). All surgeons had a similar percentage of idle time, but novices had longer phases of idle time (mean time: 21 vs. 15 s, p < 0.001). Novices used a higher total number of surgical gestures (number of phases: 45 vs. 35, p < 0.001) and each phase was longer compared with those of the experienced surgeons (mean time: 10 vs. 8 s, p < 0.001). There was a different pattern of gestures between novices and experienced surgeons as seen by a different distribution of the phases. General surgical gestures can be used to assess surgical competence in simulated RARP and can be displayed as a visual tool to show how performance is improving. The established pass/fail level may be used to ensure the competence of the residents before proceeding with supervised real-life surgery. The next step is to investigate if the developed tool can optimize automated feedback during simulator training.
- Published
- 2024
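The idle-time/active-time analysis described in the abstract above can be derived directly from timestamped gesture annotations. A minimal Python sketch, assuming annotations as (start, end, gesture) intervals for one recording; all values are invented:

```python
# Sketch of the idle/active-time analysis described above: active time
# is the annotated gesture spans; idle phases are the gaps between
# consecutive annotated gestures. The annotation format is an assumption.

from statistics import mean

# (start_s, end_s, gesture) -- hypothetical annotations for one video
annotations = [
    (0.0, 12.5, "dissection"),
    (18.0, 27.0, "hemostatic control"),
    (27.0, 33.5, "needle handling"),
    (51.0, 62.0, "suturing"),
]

active_phases = [end - start for start, end, _ in annotations]
idle_phases = [b_start - a_end
               for (_, a_end, _), (b_start, _, _) in zip(annotations, annotations[1:])
               if b_start > a_end]

print(f"gesture phases: {len(active_phases)}, mean {mean(active_phases):.1f} s")
print(f"mean idle phase: {mean(idle_phases):.1f} s")
print(f"idle fraction: {sum(idle_phases) / (sum(idle_phases) + sum(active_phases)):.0%}")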
6. Automated performance metrics and surgical gestures: two methods for assessment of technical skills in robotic surgery.
- Author
-
Olsen, Rikke Groth, Svendsen, Morten Bo Søndergaard, Tolsgaard, Martin G., Konge, Lars, Røder, Andreas, and Bjerrum, Flemming
- Abstract
The objective of this study is to compare automated performance metrics (APM) and surgical gestures for technical skills assessment during simulated robot-assisted radical prostatectomy (RARP). Ten novices and six experienced RARP surgeons performed simulated RARPs on the RobotiX Mentor (Surgical Science, Sweden). Simulator APM were automatically recorded, and surgical videos were manually annotated with five types of surgical gestures. The consequences of the pass/fail levels, which were based on the contrasting groups method, were compared for APM and surgical gestures. Intra-class correlation coefficient (ICC) analysis and a Bland–Altman plot were used to explore the correlation between APM and surgical gestures. Pass/fail levels for both APM and surgical gestures could fully distinguish between the skill levels of the surgeons with a specificity and sensitivity of 100%. The overall ICC (one-way, random) was 0.70 (95% CI: 0.34–0.88), showing moderate agreement between the methods. The Bland–Altman plot showed a high agreement between the two methods for assessing experienced surgeons but disagreed on the novice surgeons' skill level. APM and surgical gestures could both fully distinguish between novices and experienced surgeons in a simulated setting. Both methods of analyzing technical skills have their advantages and disadvantages and, as of now, those are available only to a limited extent in the clinical setting. The development of assessment methods in a simulated setting enables testing before implementing them in a clinical setting.
- Published
- 2024
- Full Text
- View/download PDF
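The agreement statistics reported in the abstract above (one-way random ICC and a Bland–Altman plot) can be computed directly from two score series. A minimal Python sketch with invented scores; this is not the study's code or data:

```python
# Sketch of the agreement analysis described above: a one-way random
# ICC (single measures) and Bland-Altman limits of agreement between
# two score series, e.g. APM-based vs. gesture-based scores per surgeon.

import numpy as np

apm = np.array([62.0, 55.0, 71.0, 40.0, 48.0, 66.0, 35.0, 59.0])       # invented
gestures = np.array([60.0, 52.0, 75.0, 45.0, 50.0, 64.0, 30.0, 57.0])  # invented

# --- ICC(1): one-way random effects, single measures -----------------
scores = np.stack([apm, gestures], axis=1)   # n subjects x k=2 methods
n, k = scores.shape
grand = scores.mean()
ms_between = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)
ms_within = np.sum((scores - scores.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1) = {icc1:.2f}")

# --- Bland-Altman: bias and 95% limits of agreement ------------------
diff = apm - gestures
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.1f}, limits of agreement = [{bias - loa:.1f}, {bias + loa:.1f}]")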
7. Data sharing and big data in health professions education: Ottawa consensus statement and recommendations for scholarship
- Author
-
Kulasegaram, Kulamakan (Mahan), Grierson, Lawrence, Barber, Cassandra, Chahine, Saad, Chou, Fremen Chichen, Cleland, Jennifer, Ellis, Ricky, Holmboe, Eric S., Pusic, Martin, Schumacher, Daniel, Tolsgaard, Martin G., Tsai, Chin-Chung, Wenghofer, Elizabeth, and Touchie, Claire
- Published
- 2024
- Full Text
- View/download PDF
8. Surgical gestures can be used to assess surgical competence in robot-assisted surgery
- Author
-
Olsen, Rikke Groth, Svendsen, Morten Bo Søndergaard, Tolsgaard, Martin G., Konge, Lars, Røder, Andreas, and Bjerrum, Flemming
- Published
- 2024
- Full Text
- View/download PDF
9. Simulation-based assessment of upper abdominal ultrasound skills
- Author
-
Teslak, Kristina E., Post, Julie H., Tolsgaard, Martin G., Rasmussen, Sten, Purup, Mathias M., and Friis, Mikkel L.
- Published
- 2024
- Full Text
- View/download PDF
10. How to Use and Report on p-values
- Author
-
Boscardin, Christy K., Sewell, Justin L., Tolsgaard, Martin G., and Pusic, Martin V.
- Published
- 2024
- Full Text
- View/download PDF
11. Technical Skills Curriculum in Neonatology: A Modified European Delphi Study.
- Author
-
Bay, Emma Therese, Breindahl, Niklas, Nielsen, Mathilde M., Roehr, Charles C., Szczapa, Tomasz, Gagliardi, Luigi, Vento, Maximo, Visser, Douwe H., Stoen, Ragnhild, Klotz, Daniel, Rakow, Alexander, Breindahl, Morten, Tolsgaard, Martin G., and Aunsholt, Lise
- Subjects
NEONATOLOGY, MEDICAL personnel, MEDICAL education, MEDICAL simulation, CURRICULUM
- Abstract
Introduction: Simulation-based training (SBT) aids healthcare providers in acquiring the technical skills necessary to improve patient outcomes and safety. However, since SBT may require significant resources, training all skills to a comparable extent is impractical. Hence, a strategic prioritization of technical skills is necessary. While the European Training Requirements in Neonatology provide guidance on necessary skills, they lack prioritization. We aimed to identify and prioritize technical skills for a SBT curriculum in neonatology. Methods: A three-round modified Delphi process of expert neonatologists and neonatal trainees was performed. In round one, the participants listed all the technical skills newly trained neonatologists should master. The content analysis excluded duplicates and non-technical skills. In round two, the Copenhagen Academy for Medical Education and Simulation Needs Assessment Formula (CAMES-NAF) was used to preliminarily prioritize the technical skills according to frequency, importance of competency, SBT impact on patient safety, and feasibility for SBT. In round three, the participants further refined and reprioritized the technical skills. Items achieving consensus (agreement of ≥75%) were included. Results: We included 168 participants from 10 European countries. The response rates in rounds two and three were 80% (135/168) and 87% (117/135), respectively. In round one, the participants suggested 1964 different items. Content analysis revealed 81 unique technical skills prioritized in round two. In round three, 39 technical skills achieved consensus and were included. Conclusion: We reached a European consensus on a prioritized list of 39 technical skills to be included in a SBT curriculum in neonatology.
- Published
- 2024
- Full Text
- View/download PDF
12. AI supported fetal echocardiography with quality assessment.
- Author
-
Taksoee-Vester CA, Mikolaj K, Bashir Z, Christensen AN, Petersen OB, Sundberg K, Feragen A, Svendsen MBS, Nielsen M, and Tolsgaard MG
- Subjects
- Female, Pregnancy, Humans, Retrospective Studies, Echocardiography
- Abstract
This study aimed to develop a deep learning model to assess the quality of fetal echocardiography and to perform prospective clinical validation. The model was trained on data from the 18-22-week anomaly scan conducted in seven hospitals from 2008 to 2018. Prospective validation involved 100 patients from two hospitals. A total of 5363 images from 2551 pregnancies were used for training and validation. The model's segmentation accuracy depended on image quality measured by a quality score (QS). It achieved an overall average accuracy of 0.91 (SD 0.09) across the test set, with images having above-average QS scoring 0.97 (SD 0.03). During prospective validation of 192 images, clinicians rated 44.8% (SD 9.8) of images as equal in quality, 18.69% (SD 5.7) favoring auto-captured images and 36.51% (SD 9.0) preferring manually captured ones. Images with above-average QS showed better agreement on segmentations (p < 0.001) and QS (p < 0.001) with fetal medicine experts. Auto-capture saved additional planes beyond protocol requirements, resulting in more comprehensive echocardiographies. Low QS had an adverse effect on both model performance and clinicians' agreement with model feedback. The findings highlight the importance of developing and evaluating AI models based on 'noisy' real-life data rather than pursuing the highest accuracy possible with retrospective academic-grade data.
- Published
- 2024
- Full Text
- View/download PDF
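The quality-gated evaluation described in the abstract above amounts to stratifying segmentation accuracy by the quality score. A minimal Python sketch with simulated values; the QS distribution and the QS-accuracy relationship are invented for illustration only:

```python
# Sketch of the quality-gated evaluation described above: segmentation
# accuracy is summarised overall and for the subset of images whose
# quality score (QS) is above the dataset average. All values invented.

import numpy as np

rng = np.random.default_rng(0)
qs = rng.uniform(0.2, 1.0, size=200)                     # per-image quality scores
accuracy = 0.75 + 0.22 * qs + rng.normal(0, 0.02, size=200)

print(f"overall accuracy: {accuracy.mean():.2f} (SD {accuracy.std(ddof=1):.2f})")

high_qs = qs > qs.mean()                                 # 'above-average QS' subset
print(f"above-average-QS accuracy: {accuracy[high_qs].mean():.2f} "
      f"(SD {accuracy[high_qs].std(ddof=1):.2f})")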
13. Validity evidence supporting clinical skills assessment by artificial intelligence compared with trained clinician raters.
- Author
-
Johnsson V, Søndergaard MB, Kulasegaram K, Sundberg K, Tiblad E, Herling L, Petersen OB, and Tolsgaard MG
- Subjects
- Humans, Educational Measurement, Artificial Intelligence, Reproducibility of Results, Clinical Competence, Education, Medical
- Abstract
Background: Artificial intelligence (AI) is increasingly used in medical education, but our understanding of the validity of AI-based assessments (AIBA) as compared with traditional clinical expert-based assessments (EBA) is limited. In this study, the authors aimed to compare and contrast the validity evidence for the assessment of a complex clinical skill based on scores generated from an AI and trained clinical experts, respectively. Methods: The study was conducted between September 2020 and October 2022. The authors used Kane's validity framework to prioritise and organise their evidence according to the four inferences: scoring, generalisation, extrapolation and implications. The context of the study was chorionic villus sampling performed within the simulated setting. AIBA and EBA were used to evaluate performances of experts, intermediates and novices based on video recordings. The clinical experts used a scoring instrument developed in a previous international consensus study. The AI used convolutional neural networks for capturing features on video recordings, motion tracking and eye movements to arrive at a final composite score. Results: A total of 45 individuals participated in the study (22 novices, 12 intermediates and 11 experts). The authors demonstrated validity evidence for scoring, generalisation, extrapolation and implications for both EBA and AIBA. The plausibility of assumptions related to scoring, evidence of reproducibility and relation to different training levels was examined. Issues relating to construct underrepresentation, lack of explainability, and threats to robustness were identified as potential weak links in the AIBA validity argument compared with the EBA validity argument. Conclusion: There were weak links in the use of AIBA compared with EBA, mainly in their representation of the underlying construct but also regarding their explainability and ability to transfer to other datasets. However, combining AI and clinical expert-based assessments may offer complementary benefits, which is a promising subject for future research.
- Published
- 2024
- Full Text
- View/download PDF