121 results for "Bodenstedt S"
Search Results
2. Ensuring privacy protection in the era of big laparoscopic video data: development and validation of an inside outside discrimination algorithm (IODA)
- Author
-
Schulze, A., Tran, D., Daum, M. T. J., Kisilenko, A., Maier-Hein, L., Speidel, S., Distler, M., Weitz, J., Müller-Stich, B. P., Bodenstedt, S., and Wagner, M.
- Published
- 2023
- Full Text
- View/download PDF
3. Aliado - A design concept of AI for decision support in oncological liver surgery
- Author
-
Schulze, A., Haselbeck-Köbler, M., Brandenburg, J.M., Daum, M.T.J., März, K., Hornburg, S., Maurer, H., Myers, F., Reichert, G., Bodenstedt, S., Nickel, F., Kriegsmann, M., Wielpütz, M.O., Speidel, S., Maier-Hein, L., Müller-Stich, B.P., Mehrabi, A., and Wagner, M.
- Published
- 2024
- Full Text
- View/download PDF
4. Does speed equal quality? Time pressure impairs minimally invasive surgical skills in a prospective crossover trial
- Author
-
von Bechtolsheim, F., Schmidt, S., Abel, S., Schneider, A., Wekenborg, M., Bodenstedt, S., Speidel, S., Weitz, J., Oehme, F., and Distler, M.
- Published
- 2022
- Full Text
- View/download PDF
5. Technische Innovationen und Blick in die Zukunft
- Author
-
Wagner, M., Schulze, A., Bodenstedt, S., Maier-Hein, L., Speidel, S., Nickel, F., Berlth, F., Müller-Stich, B. P., and Grimminger, Peter
- Published
- 2022
- Full Text
- View/download PDF
6. „Cognition-Guided Surgery“ – computergestützte intelligente Assistenzsysteme für die onkologische Chirurgie: Wo stehen wir?
- Author
-
Müller-Stich, Beat, Wagner, M., Schulze, A., Bodenstedt, S., Maier-Hein, L., Speidel, S., Nickel, F., and Büchler, M. W.
- Published
- 2022
- Full Text
- View/download PDF
7. Why is the Winner the Best?
- Author
-
Eisenmann, M., Reinke, A., Weru, V., Tizabi, M. D., Isensee, F., Adler, T. J., Ali, S., Andrearczyk, V., Aubreville, M., Baid, U., Bakas, S., Balu, N., Bano, S., Bernal, J., Bodenstedt, S., Casella, A., Cheplygina, V., Daum, M., De Bruijne, M., Depeursinge, A., Dorent, R., Egger, J., Ellis, D. G., Engelhardt, S., Ganz, M., Ghatwary, N., Girard, G., Godau, P., Gupta, A., Hansen, L., Harada, K., Heinrich, M., Heller, N., Hering, A., Huaulmé, A., Jannin, P., Kavur, A. E., Kodym, O., Kozubek, M., Li, J., Li, H., Ma, J., Martín-Isla, C., Menze, B., Noble, A., Oreiller, V., Padoy, N., Pati, S., Payette, K., Rädsch, T., Rafael-Patiño, J., Bawa, V. Singh, Speidel, S., Sudre, C. H., Van Wijnen, K., Wagner, M., Wei, D., Yamlahi, A., Yap, M. H., Yuan, C., Zenk, M., Zia, A., Zimmerer, D., Aydogan, D., Bhattarai, B., Bloch, L., Brüngel, R., Cho, J., Choi, C., Dou, Q., Ezhov, I., Friedrich, C. M., Fuller, C., Gaire, R. R., Galdran, A., García Faura, Á., Grammatikopoulou, M., Hong, S., Jahanifar, M., Jang, I., Kadkhodamohammadi, A., Kang, I., Kofler, F., Kondo, S., Kuijf, H., Li, M., Luu, M., Martinčič, T., Morais, P., Naser, M. A., Oliveira, B., Owen, D., Pang, S., Park, J., Park, S., Płotka, S., Puybareau, E., Rajpoot, N., Ryu, K., Saeed, N., Shephard, A., Shi, P., Štepec, D., Subedi, R., Tochon, G., Torres, H. R., Urien, H., Vilaça, J. L., Wahid, K. A., Wang, H., Wang, J., Wang, L., Wang, X., Wiestler, B., Wodzinski, M., Xia, F., Xie, J., Xiong, Z., Yang, S., Yang, Y., Zhao, Z., Maier-Hein, K., Jäger, P. F., Kopp-Schneider, A., and Maier-Hein, L.
- Published
- 2023
- Full Text
- View/download PDF
8. Crowd-Algorithm Collaboration for Large-Scale Endoscopic Image Annotation with Confidence
- Author
-
Maier-Hein, L., Ross, T., Gröhl, J., Glocker, B., Bodenstedt, S., Stock, C., Heim, E., Götz, M., Wirkert, S., Kenngott, H., Speidel, S., and Maier-Hein, K.; edited by Ourselin, Sebastien, Joskowicz, Leo, Sabuncu, Mert R., Unal, Gozde, and Wells, William
- Published
- 2016
- Full Text
- View/download PDF
9. Kognitive Chirurgie/Chirurgie 4.0: Der Weg zur individualisierten Chirurgie
- Author
-
Speidel, S., Bodenstedt, S., Maier-Hein, L., and Kenngott, H.
- Published
- 2018
- Full Text
- View/download PDF
10. Why is the Winner the Best?
- Author
-
Eisenmann, M., Reinke, A., Weru, V., Tizabi, M. D., Isensee, F., Adler, T. J., Ali, S., Andrearczyk, V., Aubreville, M., Baid, U., Bakas, S., Balu, N., Bano, S., Bernal, J., Bodenstedt, S., Casella, A., Cheplygina, V., Daum, M., De Bruijne, M., Depeursinge, A., Dorent, R., Egger, J., Ellis, D. G., Engelhardt, S., Ganz, M., Ghatwary, N., Girard, G., Godau, P., Gupta, A., Hansen, L., Harada, K., Heinrich, M., Heller, N., Hering, A., Huaulmé, A., Jannin, P., Kavur, A. E., Kodym, O., Kozubek, M., Li, J., Li, H., Ma, J., Martín-Isla, C., Menze, B., Noble, A., Oreiller, V., Padoy, N., Pati, S., Payette, K., Rädsch, T., Rafael-Patiño, J., Bawa, V. Singh, Speidel, S., Sudre, C. H., Van Wijnen, K., Wagner, M., Wei, D., Yamlahi, A., Yap, M. H., Yuan, C., Zenk, M., Zia, A., Zimmerer, D., Aydogan, D., Bhattarai, B., Bloch, L., Brüngel, R., Cho, J., Choi, C., Dou, Q., Ezhov, I., Friedrich, C. M., Fuller, C., Gaire, R. R., Galdran, A., García Faura, A., Grammatikopoulou, M., Hong, S., Jahanifar, M., Jang, I., Kadkhodamohammadi, A., Kang, I., Kofler, F., Kondo, S., Kuijf, H., Li, M., Luu, M., Martinčič, T., Morais, P., Naser, M. A., Oliveira, B., Owen, D., Pang, S., Park, J., Park, S., Płotka, S., Puybareau, E., Rajpoot, N., Ryu, K., Saeed, N., Shephard, A., Shi, P., Štepec, D., Subedi, R., Tochon, G., Torres, H. R., Urien, H., Vilaça, J. L., Wahid, K. A., Wang, H., Wang, J., Wang, L., Wang, X., Wiestler, B., Wodzinski, M., Xia, F., Xie, J., Xiong, Z., Yang, S., Yang, Y., Zhao, Z., Maier-Hein, K., Jäger, P. F., Kopp-Schneider, A., and Maier-Hein, L.
- Published
- 2023
11. In-vitro Evaluation von endoskopischer Oberflächenrekonstruktion mittels Time-of-Flight-Kameratechnik
- Author
-
Groch, A., Hempel, S., Speidel, S., Höller, K., Engelbrecht, R., Penne, J., Seitel, A., Röhl, S., Yung, K., Bodenstedt, S., Pflaum, F., Kilgus, T., Meinzer, H.-P., Hornegger, J., and Maier-Hein, L.; edited by Handels, Heinz, Ehrhardt, Jan, Deserno, Thomas M., Meinzer, Hans-Peter, and Tolxdorff, Thomas
- Published
- 2011
- Full Text
- View/download PDF
12. Crowd-Algorithm Collaboration for Large-Scale Endoscopic Image Annotation with Confidence
- Author
-
Maier-Hein, L., Ross, T., Gröhl, J., Glocker, B., Bodenstedt, S., Stock, C., Heim, E., Götz, M., Wirkert, S., Kenngott, H., Speidel, S., and Maier-Hein, K.
- Published
- 2016
- Full Text
- View/download PDF
13. Simultaneous localisation and mapping for laparoscopic liver navigation: a comparative evaluation study
- Author
-
Docea, R., Pfeiffer, M., Bodenstedt, S., Kolbinger, F., Höller, L., Wittig, I., Hoffmann, R., Troost, E. G. C., Riediger, C., Weitz, J., and Speidel, S.
- Abstract
Computer-Assisted Surgery (CAS) aids the surgeon by enriching the surgical scene with additional information in order to improve patient outcome. One such aid may be the superimposition of important structures (such as blood vessels and tumors) over a laparoscopic image stream. In liver surgery, this may be achieved by creating a dense map of the abdominal environment surrounding the liver, registering a preoperative model (CT scan) to the liver within this map, and tracking the relative pose of the camera. Thereby, known structures may be rendered into images from the camera perspective. This intraoperative map of the scene may be constructed, and the relative pose of the laparoscope camera estimated, using Simultaneous Localisation and Mapping (SLAM). The intraoperative scene poses unique challenges, such as homogeneous surface textures, sparse visual features, specular reflections and camera motions specific to laparoscopy. This work compares the efficacies of two state-of-the-art SLAM systems in the context of laparoscopic surgery, on a newly collected phantom dataset with ground truth trajectory and surface data. The SLAM systems chosen contrast strongly in implementation: one sparse and feature-based, ORB-SLAM3 [1–3], and one dense and featureless, ElasticFusion [4]. We find that ORB-SLAM3 greatly outperforms ElasticFusion in trajectory estimation and is more stable on sequences from laparoscopic surgeries. However, when extended to give a dense output, ORB-SLAM3 performs surface reconstruction comparably to ElasticFusion. Our evaluation of these systems serves as a basis for expanding the use of SLAM algorithms in the context of laparoscopic liver surgery and Minimally Invasive Surgery (MIS) more generally.
- Published
- 2021
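The trajectory comparison described in the abstract above is commonly scored with the absolute trajectory error (ATE): rigidly align the estimated camera positions to the ground truth, then take the RMSE of the position residuals. A minimal sketch of that metric, assuming corresponding position pairs; the code and function names are illustrative, not the authors' evaluation pipeline:

```python
import numpy as np

def align_rigid(est, gt):
    """Least-squares rigid alignment (Kabsch) of estimated to ground-truth points.

    est, gt: (N, 3) arrays of corresponding camera positions.
    Returns rotation R and translation t mapping est onto gt.
    """
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_e
    return R, t

def absolute_trajectory_error(est, gt):
    """RMSE of position residuals after rigid alignment (ATE)."""
    R, t = align_rigid(est, gt)
    residuals = gt - (est @ R.T + t)
    return np.sqrt((residuals ** 2).sum(axis=1).mean())

# Toy check: a rotated and translated copy of a trajectory aligns back exactly,
# so its ATE is (numerically) zero.
gt = np.random.default_rng(0).normal(size=(100, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
est = gt @ Rz.T + np.array([0.5, -0.2, 1.0])
print(absolute_trajectory_error(est, gt))
```

Because the alignment removes any global rigid offset, ATE only measures drift and shape error of the trajectory, which is why it is a standard choice for comparing SLAM systems.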
14. In-vitro Evaluation von endoskopischer Oberflächenrekonstruktion mittels Time-of-Flight-Kameratechnik
- Author
-
Groch, A., Hempel, S., Speidel, S., Höller, K., Engelbrecht, R., Penne, J., Seitel, A., Röhl, S., Yung, K., Bodenstedt, S., Pflaum, F., Kilgus, T., Meinzer, H.-P., Hornegger, J., and Maier-Hein, L.
- Published
- 2011
- Full Text
- View/download PDF
15. Nanoscale Spin Manipulation with Pulsed Magnetic Gradient Fields from a Hard Disc Drive Writer
- Author
-
Bodenstedt, S., Jakobi, I., Michl, J., Gerhardt, I., Neumann, P., and Wrachtrup, J.
- Published
- 2018
- Full Text
- View/download PDF
16. Kognitive Chirurgie/Chirurgie 4.0
- Author
-
Speidel, S., Bodenstedt, S., Maier-Hein, L., and Kenngott, H.
- Published
- 2018
- Full Text
- View/download PDF
17. Comparison of Methods for Bowel Length Measurement in Bariatric Surgery: Results of a Phantom Trial
- Author
-
Mayer, BFB, Wagner, M, Bodenstedt, S, Speidel, S, Linke, G, Fischer, L, Müller, BP, and Kenngott, HG
- Subjects
- surgical procedures, operative, ddc: 610, nutritional and metabolic diseases, 610 Medical sciences, Medicine
- Abstract
Background: Bariatric surgery is the recommended treatment option for patients suffering from morbid obesity. Of the existing bariatric procedures, laparoscopic Roux-en-Y gastric bypass (LRYGB) is the most commonly performed. During LRYGB surgery a Roux-en-Y anastomosis is constructed that consists [for full text, please go to the a.m. URL], 133. Kongress der Deutschen Gesellschaft für Chirurgie
- Published
- 2016
- Full Text
- View/download PDF
18. Quantitative laparoscopy for bowel length measurement in bariatric surgery – from bench to bedside
- Author
-
Wagner, M, Mayer, BFB, Bodenstedt, S, Speidel, S, Linke, G, Fischer, L, Müller, BP, and Kenngott, HG
- Subjects
- ddc: 610, 610 Medical sciences, Medicine
- Abstract
Background: Bariatric surgery is the recommended treatment option for patients suffering from morbid obesity. Of the existing bariatric procedures, laparoscopic Roux-en-Y gastric bypass (LRYGB) is the most commonly performed. However, only about 53% of bariatric surgeons measure limb length. [for full text, please go to the a.m. URL], 133. Kongress der Deutschen Gesellschaft für Chirurgie
- Published
- 2016
- Full Text
- View/download PDF
19. Big Data in der Chirurgie: Realisierung einer Echtzeit-Sensordatenanalyse im vernetzten Operationssaal
- Author
-
Wagner, M, Mietkowski, P, Schneider, G, Apitz, M, Mayer, B, Bodenstedt, S, Speidel, S, Bergh, B, Müller-Stich, B, and Kenngott, H
- Published
- 2017
- Full Text
- View/download PDF
20. Sensor- und Expertenmodellgestütztes Trainingssystem für laparoskopisches Nähen und Knoten mit kontinuierlichem individuellem Feedback
- Author
-
Kowalewski, KF, Nickel, F, Bodenstedt, S, Kenngott, HG, Wagner, M, Wekerle, AL, Hendrie, J, Speidel, S, Dillmann, R, and Müller-Stich, BP
- Subjects
- ddc: 610, 610 Medical sciences, Medicine
- Abstract
Introduction: Training systems offer the opportunity to practice surgical techniques in a safe environment. However, current systems are limited in that a supervisor must be present for instruction and assessment, or in that virtual systems base their feedback purely on metric data [for full text, please go to the a.m. URL], 132. Kongress der Deutschen Gesellschaft für Chirurgie
- Published
- 2015
- Full Text
- View/download PDF
21. Towards learning robots in surgery: First experience with a cognition-guided camera-robot in laparoscopy
- Author
-
Mietkowski, P, Wagner, M, Bihlmaier, A, Bodenstedt, S, Speidel, S, Wörn, H, Müller, BP, and Kenngott, HG
- Published
- 2016
22. Sensor-OR: Towards Data-Driven Workflow-Recognition in the Connected Operating Room
- Author
-
Kenngott, HG, Wagner, M, Mietkowski, P, Bodenstedt, S, Speidel, S, Wörn, H, Schneider, G, Bergh, B, and Müller, BP
- Published
- 2016
23. Entwicklung eines sensor- und modellgestützten Trainingssystems für die Minimal Invasive Chirurgie
- Author
-
Miloloza, K, Nickel, F, Bodenstedt, S, Kenngott, HG, Speidel, S, Dillmann, R, and Müller-Stich, BP
- Subjects
- ddc: 610, 610 Medical sciences, Medicine
- Abstract
Introduction: Minimally invasive surgery (MIS) has many advantages for the patient but entails an additional learning curve for the surgeon. Current training and assessment systems either use purely metric criteria or evaluate on the basis of subjective impressions from observation [for full text, please go to the a.m. URL], 131. Kongress der Deutschen Gesellschaft für Chirurgie
- Published
- 2014
- Full Text
- View/download PDF
24. Image-based tracking of the suturing needle during laparoscopic interventions
- Author
-
Speidel, S., Kroehnert, A., Bodenstedt, S., Kenngott, H., Müller-Stich, B., and Dillmann, R.
- Published
- 2015
- Full Text
- View/download PDF
25. Intraoperative on-the-fly organ-mosaicking for laparoscopic surgery
- Author
-
Bodenstedt, S., Reichard, D., Suwelack, S., Wagner, M., Kenngott, H., Müller-Stich, B., Dillmann, R., and Speidel, S.
- Published
- 2015
- Full Text
- View/download PDF
26. Comparative Validation of Single-Shot Optical Techniques for Laparoscopic 3-D Surface Reconstruction
- Author
-
Maier-Hein, L., Heim, E., Hornegger, J., Jannin, P., Kenngott, H., Kilgus, T., Muller-Stich, B., Oladokun, D., Rohl, S., dos Santos, T. R., Schlemmer, H.-P., Groch, A., Seitel, A., Speidel, S., Wagner, M., Stoyanov, D., Bartoli, A., Bodenstedt, S., Boissonnat, G., Chang, P.-L., Clancy, N. T., Elson, D. S., and Haase, S.
- Published
- 2014
- Full Text
- View/download PDF
27. Visual tracking of da Vinci instruments for laparoscopic surgery
- Author
-
Speidel, S., Kuhn, E., Bodenstedt, S., Röhl, S., Kenngott, H., Müller-Stich, B., and Dillmann, R.
- Published
- 2014
- Full Text
- View/download PDF
28. Robust feature tracking for endoscopic pose estimation and structure recovery
- Author
-
Speidel, S., Krappe, S., Röhl, S., Bodenstedt, S., Müller-Stich, B., and Dillmann, R.
- Published
- 2013
- Full Text
- View/download PDF
29. Fusion of intraoperative force sensoring, surface reconstruction and biomechanical modeling
- Author
-
Röhl, S., Bodenstedt, S., Küderle, C., Suwelack, S., Kenngott, H., Müller-Stich, B. P., Dillmann, R., and Speidel, S.
- Published
- 2012
- Full Text
- View/download PDF
30. 3D surface reconstruction for laparoscopic computer-assisted interventions: comparison of state-of-the-art methods
- Author
-
Groch, A., Seitel, A., Hempel, S., Speidel, S., Engelbrecht, R., Penne, J., Höller, K., Röhl, S., Yung, K., Bodenstedt, S., Pflaum, F., dos Santos, T. R., Mersmann, S., Meinzer, H.-P., Hornegger, J., and Maier-Hein, L.
- Published
- 2011
- Full Text
- View/download PDF
31. Real-time surface reconstruction from stereo endoscopic images for intraoperative registration
- Author
-
Röhl, S., Bodenstedt, S., Suwelack, S., Kenngott, H., Mueller-Stich, B. P., Dillmann, R., and Speidel, S.
- Published
- 2011
- Full Text
- View/download PDF
32. Intraoperative on-the-fly organ-mosaicking for laparoscopic surgery
- Author
-
Yaniv, Ziv R., Webster, Robert J., Bodenstedt, S., Reichard, D., Suwelack, S., Wagner, M., Kenngott, H., Müller-Stich, B., Dillmann, R., and Speidel, S.
- Published
- 2015
- Full Text
- View/download PDF
33. Image-based tracking of the suturing needle during laparoscopic interventions
- Author
-
Yaniv, Ziv R., Webster, Robert J., Speidel, S., Kroehnert, A., Bodenstedt, S., Kenngott, H., Müller-Stich, B., and Dillmann, R.
- Published
- 2015
- Full Text
- View/download PDF
34. Fusion of intraoperative force sensoring, surface reconstruction and biomechanical modeling
- Author
-
Röhl, S., Bodenstedt, S., Küderle, C., Suwelack, S., Kenngott, H., Müller-Stich, B. P., Dillmann, R., and Speidel, S.
- Abstract
Minimally invasive surgery is medically complex and can heavily benefit from computer assistance. One way to help the surgeon is to integrate preoperative planning data into the surgical workflow. This information can be represented as a customized preoperative model of the surgical site. To use it intraoperatively, it has to be updated during the intervention due to the constantly changing environment. Hence, intraoperative sensor data has to be acquired and registered with the preoperative model. Haptic information, which could complement the visual sensor data, is still not established. In addition, biomechanical modeling of the surgical site can help in reflecting the changes which cannot be captured by intraoperative sensors. We present a setting where a force sensor is integrated into a laparoscopic instrument. In a test scenario using a silicone liver phantom, we register the measured forces with a reconstructed surface model from stereo endoscopic images and a finite element model. The endoscope, the instrument and the liver phantom are tracked with a Polaris optical tracking system. By fusing this information, we can transfer the deformation onto the finite element model. The purpose of this setting is to demonstrate the principles needed and the methods developed for intraoperative sensor data fusion. One emphasis lies on the calibration of the force sensor with the instrument and first experiments with soft tissue. We also present our solution and first results concerning the integration of the force sensor as well as the accuracy of the fusion of force measurements, surface reconstruction and biomechanical modeling.
- Published
- 2012
- Full Text
- View/download PDF
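The fusion described in the abstract above hinges on chaining tracked poses: a force measured at the instrument tip has to be expressed in the frame of the liver model before it can drive the biomechanical simulation. A minimal sketch with homogeneous transforms; the pose values and frame names are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Poses as an optical tracking system might report them (illustrative values, meters):
# T_world_instrument: instrument marker in the tracker/world frame
# T_world_phantom:    phantom (liver model) marker in the tracker/world frame
T_world_instrument = hom(np.eye(3), np.array([0.10, 0.00, 0.30]))
T_world_phantom = hom(np.eye(3), np.array([0.05, 0.02, 0.28]))

# Contact point (tool-tip offset) and force, both measured in the instrument frame
p_instr = np.array([0.0, 0.0, 0.15, 1.0])  # homogeneous point
f_instr = np.array([0.0, 0.0, -1.5])       # 1.5 N along the tool axis

# Chain: instrument frame -> world -> phantom/model frame
T_phantom_instr = np.linalg.inv(T_world_phantom) @ T_world_instrument
p_model = T_phantom_instr @ p_instr          # points use the full transform
f_model = T_phantom_instr[:3, :3] @ f_instr  # free vectors rotate only
print(p_model[:3], f_model)
```

Note the distinction in the last two lines: positions are transformed with rotation and translation, whereas force vectors are only rotated, since translating a frame does not change a direction.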
35. Visual tracking of da Vinci instruments for laparoscopic surgery
- Author
-
Yaniv, Ziv R., Holmes, David R., Speidel, S., Kuhn, E., Bodenstedt, S., Röhl, S., Kenngott, H., Müller-Stich, B., and Dillmann, R.
- Published
- 2014
- Full Text
- View/download PDF
36. Can you feel the force just right? Tactile force feedback for training of minimally invasive surgery-evaluation of vibration feedback for adequate force application.
- Author
-
von Bechtolsheim F, Bielert F, Schmidt S, Buck N, Bodenstedt S, Speidel S, Lüneburg LM, Müller T, Fan Y, Bobbe T, Oppici L, Krzywinski J, Dobroschke J, Weitz J, Distler M, and Oehme F
- Subjects
- Humans, Female, Male, Suture Techniques education, Adult, Feedback, Sensory, Vibration, Laparoscopy education, Clinical Competence, Cross-Over Studies, Touch
- Abstract
Background: Tissue handling is a crucial skill for surgeons and is challenging to learn. The aim of this study was to develop laparoscopic instruments with different integrated tactile vibration feedback by varying different tactile modalities and assess its effect on tissue handling skills., Methods: Standard laparoscopic instruments were equipped with a vibration effector, which was controlled by a microcomputer attached to a force sensor platform. One of three different vibration feedbacks (F1: double vibration > 2 N; F2: increasing vibration relative to force; F3: one vibration > 1.5 N and double vibration > 2 N) was applied to the instruments. In this multicenter crossover trial, surgical novices and expert surgeons performed two laparoscopic tasks (Peg transfer, laparoscopic suture, and knot) each with all three vibration feedback modalities and once without any feedback, in a randomized order. The primary endpoint was force exertion., Results: A total of 57 subjects (15 surgeons, 42 surgical novices) were included in the trial. In the Peg transfer task, there were no differences between the tactile feedback modalities in terms of force application. However, in subgroup analysis, the use of F2 resulted in a significantly lower mean-force application (p-value = 0.02) among the student group. In the laparoscopic suture and knot task, all participants exerted significantly lower mean and peak forces using F2 (p-value < 0.01). These findings remained significant after subgroup analysis for both the student and surgeon groups individually. The condition without tactile feedback led to the highest mean and peak force exertion compared to the three other feedback modalities., Conclusion: Continuous tactile vibration feedback decreases the mean and peak force applied during laparoscopic training tasks. This effect is more pronounced in demanding tasks such as laparoscopic suturing and knot tying and might be more beneficial for students. Laparoscopic tasks without feedback lead to increased force application.
- Published
- 2024
- Full Text
- View/download PDF
38. [The digital operating room: Chances and risks of artificial intelligence].
- Author
-
Wierick A, Schulze A, Bodenstedt S, Speidel S, Distler M, Weitz J, and Wagner M
- Subjects
- Humans, Surgery, Computer-Assisted ethics, Surgery, Computer-Assisted methods, Surgery, Computer-Assisted instrumentation, Robotic Surgical Procedures ethics, Artificial Intelligence, Operating Rooms
- Abstract
As the surgeon's central workplace, the operating room is where digitalization has particular consequences for surgical work. Starting with intraoperative cross-sectional imaging and sonography, through functional imaging, minimally invasive and robot-assisted surgery, up to digital surgical and anesthesiological documentation, the vast majority of operating rooms are now at least partially digitalized. The increasing digitalization of the whole process chain enables not only the collection but also the analysis of big data. Current research focuses on artificial intelligence for the analysis of intraoperative data as the prerequisite for assistance systems that support surgical decision-making or warn of risks; however, these technologies raise new ethical questions for the surgical community that affect the core of surgical work.
- Published
- 2024
- Full Text
- View/download PDF
38. AIxSuture: vision-based assessment of open suturing skills.
- Author
-
Hoffmann H, Funke I, Peters P, Venkatesh DK, Egger J, Rivoir D, Röhrig R, Hölzle F, Bodenstedt S, Willemer MC, Speidel S, and Puladi B
- Subjects
- Humans, Benchmarking, Clinical Competence, Suture Techniques education, Video Recording
- Abstract
Purpose: Efficient and precise surgical skills are essential in ensuring positive patient outcomes. By continuously providing real-time, data-driven, and objective evaluation of surgical performance, automated skill assessment has the potential to greatly improve surgical skill training. Whereas machine learning-based surgical skill assessment is gaining traction for minimally invasive techniques, this cannot be said for open surgery skills. Open surgery generally has more degrees of freedom when compared to minimally invasive surgery, making it more difficult to interpret. In this paper, we present novel approaches for skill assessment for open surgery skills., Methods: We analyzed a novel video dataset for open suturing training. We provide a detailed analysis of the dataset and define evaluation guidelines, using state-of-the-art deep learning models. Furthermore, we present novel benchmarking results for surgical skill assessment in open suturing. The models are trained to classify a video into three skill levels based on the global rating score. To obtain initial results for video-based surgical skill classification, we benchmarked a temporal segment network with both an I3D and a Video Swin backbone on this dataset., Results: The dataset is composed of 314 videos of approximately five minutes each. Model benchmarking results are an accuracy and F1 score of up to 75% and 72%, respectively. This is similar to the performance achieved by the individual raters, regarding inter-rater agreement and rater variability. We present the first end-to-end trained approach for skill assessment for open surgery training., Conclusion: We provide a thorough analysis of a new dataset as well as novel benchmarking results for surgical skill assessment. This opens the doors to new advances in skill assessment by enabling video-based skill assessment for classic surgical techniques with the potential to improve the surgical outcome of patients.
- Published
- 2024
- Full Text
- View/download PDF
39. One model to use them all: training a segmentation model with complementary datasets.
- Author
-
Jenke AC, Bodenstedt S, Kolbinger FR, Distler M, Weitz J, and Speidel S
- Subjects
- Humans, Surgery, Computer-Assisted methods, Image Processing, Computer-Assisted methods, Datasets as Topic, Databases, Factual, Machine Learning
- Abstract
Purpose: Understanding surgical scenes is crucial for computer-assisted surgery systems to provide intelligent assistance functionality. One way of achieving this is via scene segmentation using machine learning (ML). However, such ML models require large amounts of annotated training data, containing examples of all relevant object classes, which are rarely available. In this work, we propose a method to combine multiple partially annotated datasets, providing complementary annotations, into one model, enabling better scene segmentation and the use of multiple readily available datasets., Methods: Our method aims to combine available data with complementary labels by leveraging mutual exclusive properties to maximize information. Specifically, we propose to use positive annotations of other classes as negative samples and to exclude background pixels of these binary annotations, as we cannot tell if a positive prediction by the model is correct., Results: We evaluate our method by training a DeepLabV3 model on the publicly available Dresden Surgical Anatomy Dataset, which provides multiple subsets of binary segmented anatomical structures. Our approach successfully combines 6 classes into one model, significantly increasing the overall Dice Score by 4.4% compared to an ensemble of models trained on the classes individually. By including information on multiple classes, we were able to reduce the confusion between classes, e.g. a 24% drop for stomach and colon., Conclusion: By leveraging multiple datasets and applying mutual exclusion constraints, we developed a method that improves surgical scene segmentation performance without the need for fully annotated datasets. Our results demonstrate the feasibility of training a model on multiple complementary datasets. This paves the way for future work further alleviating the need for one specialized large, fully segmented dataset but instead the use of already existing datasets.
- Published
- 2024
- Full Text
- View/download PDF
40. NMRduino: A modular, open-source, low-field magnetic resonance platform.
- Author
-
Tayler MCD and Bodenstedt S
- Abstract
The NMRduino is a compact, cost-effective, sub-MHz NMR spectrometer that utilizes readily available open-source hardware and software components. One of its aims is to simplify the processes of instrument setup and data acquisition control to make experimental NMR spectroscopy accessible to a broader audience. In this introductory paper, the key features and potential applications of NMRduino are described to highlight its versatility both for research and education., Competing Interests: Declaration of competing interest The authors declare no competing interests., (Copyright © 2024 The Authors. Published by Elsevier Inc. All rights reserved.)
- Published
- 2024
- Full Text
- View/download PDF
41. The development of tissue handling skills is sufficient and comparable after training in virtual reality or on a surgical robotic system: a prospective randomized trial.
- Author
-
von Bechtolsheim F, Franz A, Schmidt S, Schneider A, La Rosée F, Radulova-Mauersberger O, Krause-Jüttler G, Hümpel A, Bodenstedt S, Speidel S, Weitz J, Distler M, and Oehme F
- Subjects
- Humans, Female, Male, Prospective Studies, Adult, Simulation Training methods, Learning Curve, Young Adult, Robotic Surgical Procedures education, Clinical Competence, Virtual Reality
- Abstract
Background: Virtual reality is a frequently chosen method for learning the basics of robotic surgery. However, it is unclear whether tissue handling is adequately trained in VR training compared to training on a real robotic system., Methods: In this randomized controlled trial, participants were split into two groups for "Fundamentals of Robotic Surgery (FRS)" training on either a DaVinci VR simulator (VR group) or a DaVinci robotic system (Robot group). All participants completed four tasks on the DaVinci robotic system before training (Baseline test), after proficiency in three FRS tasks (Midterm test), and after proficiency in all FRS tasks (Final test). Primary endpoints were forces applied across tests., Results: This trial included 87 robotic novices, of which 43 and 44 participants received FRS training in the VR group and Robot group, respectively. The Baseline test showed no significant differences in force application between the groups, indicating sufficient randomization. In the Midterm and Final test, the force application was not different between groups. Both groups displayed sufficient learning curves with significant improvement of force application. However, the Robot group needed significantly fewer repetitions in the three FRS tasks Ring tower (Robot: 2.48 vs. VR: 5.45; p < 0.001), Knot Tying (Robot: 5.34 vs. VR: 8.13; p = 0.006), and Vessel Energy Dissection (Robot: 2 vs. VR: 2.38; p = 0.001) until reaching proficiency., Conclusion: Robotic tissue handling skills improve significantly and comparably after both VR training and training on a real robotic system, but training on a VR simulator might be less efficient., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
42. The Dresden in vivo OCT dataset for automatic middle ear segmentation.
- Author
-
Liu P, Steuer S, Golde J, Morgenstern J, Hu Y, Schieffer C, Ossmann S, Kirsten L, Bodenstedt S, Pfeiffer M, Speidel S, Koch E, and Neudert M
- Subjects
- Humans, Algorithms, Neural Networks, Computer, Ear, Middle diagnostic imaging, Tomography, Optical Coherence methods
- Abstract
Endoscopic optical coherence tomography (OCT) offers a non-invasive approach to perform the morphological and functional assessment of the middle ear in vivo. However, interpreting such OCT images is challenging and time-consuming due to the shadowing of preceding structures. Deep neural networks have emerged as a promising tool to enhance this process in multiple aspects, including segmentation, classification, and registration. Nevertheless, the scarcity of annotated datasets of OCT middle ear images poses a significant hurdle to the performance of neural networks. We introduce the Dresden in vivo OCT Dataset of the Middle Ear (DIOME) featuring 43 OCT volumes from both healthy and pathological middle ears of 29 subjects. DIOME provides semantic segmentations of five crucial anatomical structures (tympanic membrane, malleus, incus, stapes and promontory), and sparse landmarks delineating the salient features of the structures. The availability of these data facilitates the training and evaluation of algorithms regarding various analysis tasks with middle ear OCT images, e.g. diagnostics., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
43. Non-rigid point cloud registration for middle ear diagnostics with endoscopic optical coherence tomography.
- Author
-
Liu P, Golde J, Morgenstern J, Bodenstedt S, Li C, Hu Y, Chen Z, Koch E, Neudert M, and Speidel S
- Subjects
- Humans, Child, Endoscopy, Tomography, Optical Coherence methods, Ear, Middle diagnostic imaging, Ear, Middle pathology
- Abstract
Purpose: Middle ear infection is the most prevalent inflammatory disease, especially among the pediatric population. Current diagnostic methods are subjective and depend on visual cues from an otoscope, which is limited for otologists to identify pathology. To address this shortcoming, endoscopic optical coherence tomography (OCT) provides both morphological and functional in vivo measurements of the middle ear. However, due to the shadow of prior structures, interpretation of OCT images is challenging and time-consuming. To facilitate fast diagnosis and measurement, improvement in the readability of OCT data is achieved by merging morphological knowledge from ex vivo middle ear models with OCT volumetric data, so that OCT applications can be further promoted in daily clinical settings., Methods: We propose C2P-Net: a two-staged non-rigid registration pipeline for complete to partial point clouds, which are sampled from ex vivo and in vivo OCT models, respectively. To overcome the lack of labeled training data, a fast and effective generation pipeline in Blender3D is designed to simulate middle ear shapes and extract in vivo noisy and partial point clouds., Results: We evaluate the performance of C2P-Net through experiments on both synthetic and real OCT datasets. The results demonstrate that C2P-Net is generalized to unseen middle ear point clouds and capable of handling realistic noise and incompleteness in synthetic and real OCT data., Conclusions: In this work, we aim to enable diagnosis of middle ear structures with the assistance of OCT images. We propose C2P-Net: a two-staged non-rigid registration pipeline for point clouds to support the interpretation of in vivo noisy and partial OCT images for the first time. Code is available at: https://gitlab.com/nct_tso_public/c2p-net., (© 2023. The Author(s).)
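C2P-Net itself is a learned non-rigid pipeline (code at the GitLab link above). As a much simpler point of comparison, the classical first step of aligning a complete ex vivo model to partial in vivo points is a rigid least-squares fit, which can be computed in closed form with the Kabsch algorithm. This sketch is a deliberately simplified stand-in, not part of C2P-Net:

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid alignment mapping src onto dst.

    src, dst: corresponding point sets of shape (N, 3).
    Returns rotation R (3, 3) and translation t (3,) such that
    dst_i ~ R @ src_i + t.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)        # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

Non-rigid methods such as the one in the paper go beyond this by additionally deforming the source shape, which is what makes partial, noisy in vivo OCT point clouds tractable.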
- Published
- 2024
- Full Text
- View/download PDF
44. Active learning for extracting surgomic features in robot-assisted minimally invasive esophagectomy: a prospective annotation study.
- Author
-
Brandenburg JM, Jenke AC, Stern A, Daum MTJ, Schulze A, Younis R, Petrynowski P, Davitashvili T, Vanat V, Bhasker N, Schneider S, Mündermann L, Reinke A, Kolbinger FR, Jörns V, Fritz-Kebede F, Dugas M, Maier-Hein L, Klotz R, Distler M, Weitz J, Müller-Stich BP, Speidel S, Bodenstedt S, and Wagner M
- Subjects
- Humans, Bayes Theorem, Machine Learning, Minimally Invasive Surgical Procedures methods, Prospective Studies, Esophagectomy methods, Robotics
- Abstract
Background: With Surgomics, we aim for personalized prediction of the patient's surgical outcome using machine-learning (ML) on multimodal intraoperative data to extract surgomic features as surgical process characteristics. As high-quality annotations by medical experts are crucial, but still a bottleneck, we prospectively investigate active learning (AL) to reduce annotation effort and present automatic recognition of surgomic features., Methods: To establish a process for development of surgomic features, ten video-based features related to bleeding, as a highly relevant intraoperative complication, were chosen. They comprise the amount of blood and smoke in the surgical field, six instruments, and two anatomic structures. Annotation of selected frames from robot-assisted minimally invasive esophagectomies was performed by at least three independent medical experts. To test whether AL reduces annotation effort, we performed a prospective annotation study comparing AL with equidistant sampling (EQS) for frame selection. Multiple Bayesian ResNet18 architectures were trained on a multicentric dataset, consisting of 22 videos from two centers., Results: In total, 14,004 frames were tag annotated. A mean F1-score of 0.75 ± 0.16 was achieved for all features. The highest F1-score was achieved for the instruments (mean 0.80 ± 0.17). This result is also reflected in the inter-rater-agreement (1-rater-kappa > 0.82). Compared to EQS, AL showed better recognition results for the instruments with a significant difference in the McNemar test comparing correctness of predictions. Moreover, in contrast to EQS, AL selected more frames of the four less common instruments (1512 vs. 607 frames) and achieved higher F1-scores for common instruments while requiring less training frames., Conclusion: We presented ten surgomic features relevant for bleeding events in esophageal surgery automatically extracted from surgical video using ML.
AL showed the potential to reduce annotation effort while keeping ML performance high for selected features. The source code and the trained models are published open source., (© 2023. The Author(s).)
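The AL-versus-EQS comparison in this study hinges on how frames are picked for annotation. A common way to drive active learning from a Bayesian network (such as the Bayesian ResNet18 used here) is to rank frames by the predictive entropy of Monte-Carlo samples; the exact acquisition function used by the authors is not stated in the abstract, so this is an illustrative sketch with assumed names:

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Entropy of the mean class distribution over Monte-Carlo samples.

    mc_probs: array (n_mc, n_frames, n_classes) of softmax outputs
    from repeated stochastic forward passes (e.g. MC dropout).
    """
    mean = mc_probs.mean(axis=0)                       # (n_frames, n_classes)
    return -(mean * np.log(mean + 1e-12)).sum(axis=1)  # (n_frames,)

def select_frames_al(mc_probs, k):
    """Active learning: pick the k most uncertain frames for annotation."""
    ent = predictive_entropy(mc_probs)
    return np.argsort(ent)[::-1][:k]

def select_frames_eqs(n_frames, k):
    """Equidistant sampling baseline: k evenly spaced frame indices."""
    return np.linspace(0, n_frames - 1, k).astype(int)
```

Uncertainty-driven selection naturally oversamples rare, ambiguous content, which is consistent with the reported finding that AL selected more frames of the four less common instruments than EQS.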
- Published
- 2023
- Full Text
- View/download PDF
45. Anatomy segmentation in laparoscopic surgery: comparison of machine learning and human expertise - an experimental study.
- Author
-
Kolbinger FR, Rinner FM, Jenke AC, Carstens M, Krell S, Leger S, Distler M, Weitz J, Speidel S, and Bodenstedt S
- Subjects
- Humans, Algorithms, Image Processing, Computer-Assisted methods, Machine Learning, Laparoscopy
- Abstract
Background: Lack of anatomy recognition represents a clinically relevant risk in abdominal surgery. Machine learning (ML) methods can help identify visible patterns and risk structures; however, their practical value remains largely unclear., Materials and Methods: Based on a novel dataset of 13 195 laparoscopic images with pixel-wise segmentations of 11 anatomical structures, we developed specialized segmentation models for each structure and combined models for all anatomical structures using two state-of-the-art model architectures (DeepLabv3 and SegFormer) and compared segmentation performance of algorithms to a cohort of 28 physicians, medical students, and medical laypersons using the example of pancreas segmentation., Results: Mean Intersection-over-Union for semantic segmentation of intra-abdominal structures ranged from 0.28 to 0.83 and from 0.23 to 0.77 for the DeepLabv3-based structure-specific and combined models, and from 0.31 to 0.85 and from 0.26 to 0.67 for the SegFormer-based structure-specific and combined models, respectively. Both the structure-specific and the combined DeepLabv3-based models are capable of near-real-time operation, while the SegFormer-based models are not. All four models outperformed at least 26 out of 28 human participants in pancreas segmentation., Conclusions: These results demonstrate that ML methods have the potential to provide relevant assistance in anatomy recognition in minimally invasive surgery in near-real-time. Future research should investigate the educational value and subsequent clinical impact of the respective assistance systems., (Copyright © 2023 The Author(s). Published by Wolters Kluwer Health, Inc.)
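The Intersection-over-Union values reported above (and the Dice scores used elsewhere in these results) are standard overlap metrics for binary segmentation masks; for reference, they can be computed as follows (a generic sketch, not the study's evaluation code):

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-Union (Jaccard index) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0  # empty masks count as a match

def dice(pred, gt):
    """Dice score; for binary masks this equals 2*IoU / (1 + IoU)."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2 * inter / denom if denom else 1.0
```

Both metrics range from 0 (no overlap) to 1 (perfect overlap); Dice weights the intersection more heavily, so Dice values are always at least as large as the corresponding IoU.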
- Published
- 2023
- Full Text
- View/download PDF
46. Magnetometer-Detected Nuclear Magnetic Resonance of Photochemically Hyperpolarized Molecules.
- Author
-
Chuchkova L, Bodenstedt S, Picazo-Frutos R, Eills J, Tretiak O, Hu Y, Barskiy DA, de Santis J, Tayler MCD, Budker D, and Sheberstov KF
- Abstract
Photochemically induced dynamic nuclear polarization (photo-CIDNP) enables nuclear spin ordering by irradiating samples with light. Polarized spins are conventionally detected via high-field chemical-shift-resolved NMR (above 0.1 T). In this Letter, we demonstrate in situ low-field photo-CIDNP measurements using a magnetically shielded fast-field-cycling NMR setup detecting Larmor precession via atomic magnetometers. For solutions comprising mM concentrations of the photochemically polarized molecules, hyperpolarized ¹H magnetization is detected by pulse-acquired NMR spectroscopy. The observed NMR line widths are about 5 times narrower than normally anticipated in high-field NMR and are systematically affected by light irradiation during the acquisition period, reflecting a reduction of the transverse relaxation time constant, T₂*, on the order of 10%. Magnetometer-detected photo-CIDNP spectroscopy enables straightforward observation of spin-chemistry processes in the ambient field range from a few nT to tens of mT. Potential applications of this measuring modality are discussed.
- Published
- 2023
- Full Text
- View/download PDF
47. Does practice make perfect? Laparoscopic training mainly improves motion efficiency: a prospective trial.
- Author
-
von Bechtolsheim F, Petzsch S, Schmidt S, Schneider A, Bodenstedt S, Funke I, Speidel S, Radulova-Mauersberger O, Distler M, Weitz J, Mees ST, and Oehme F
- Subjects
- Humans, Prospective Studies, Curriculum, Minimally Invasive Surgical Procedures, Learning Curve, Clinical Competence, Laparoscopy education
- Abstract
Training improves skills in minimally invasive surgery. This study aimed to investigate the learning curves of complex motion parameters for both hands during a standardized training course using a novel measurement tool. An additional focus was placed on the parameters representing surgical safety and precision. Fifty-six laparoscopic novices participated in a training course on the basic skills of minimally invasive surgery based on a modified Fundamentals of Laparoscopic Surgery (FLS) curriculum. Before, twice during, and once after the practical lessons, all participants had to perform four laparoscopic tasks (peg transfer, precision cut, balloon resection, and laparoscopic suture and knot), which were recorded and analyzed using an instrument motion analysis system. Participants significantly improved the time per task for all four tasks (all p < 0.001). The individual instrument path length decreased significantly for the dominant and non-dominant hands in all four tasks. Similarly, both hands became significantly faster in all tasks, with the exception of the non-dominant hand in the precision cut task. In terms of relative idle time, only in the peg transfer task did both hands improve significantly, while in the precision cut task, only the dominant hand performed better. In contrast, the motion volume of both hands combined was reduced in only one task (precision cut, p = 0.01), whereas no significant improvement in the relative time of instruments being out of view was observed. FLS-based skills training increases motion efficiency primarily by increasing speed and reducing idle time and path length. Parameters relevant for surgical safety and precision (motion volume and relative time of instruments being out of view) are minimally affected by short-term training. 
Consequently, surgical training should also focus on safety and precision-related parameters, and assessment of these parameters should be incorporated into basic skill training accordingly., (© 2023. The Author(s).)
- Published
- 2023
- Full Text
- View/download PDF
48. Artificial Intelligence for context-aware surgical guidance in complex robot-assisted oncological procedures: An exploratory feasibility study.
- Author
-
Kolbinger FR, Bodenstedt S, Carstens M, Leger S, Krell S, Rinner FM, Nielen TP, Kirchberg J, Fritzmann J, Weitz J, Distler M, and Speidel S
- Abstract
Introduction: Complex oncological procedures pose various surgical challenges including dissection in distinct tissue planes and preservation of vulnerable anatomical structures throughout different surgical phases. In rectal surgery, violation of dissection planes increases the risk of local recurrence and autonomous nerve damage resulting in incontinence and sexual dysfunction. This work explores the feasibility of phase recognition and target structure segmentation in robot-assisted rectal resection (RARR) using machine learning., Materials and Methods: A total of 57 RARR were recorded and subsets of these were annotated with respect to surgical phases and exact locations of target structures (anatomical structures, tissue types, static structures, and dissection areas). For surgical phase recognition, three machine learning models were trained: LSTM, MSTCN, and Trans-SVNet. Based on pixel-wise annotations of target structures in 9037 images, individual segmentation models based on DeepLabv3 were trained. Model performance was evaluated using F1 score, Intersection-over-Union (IoU), accuracy, precision, recall, and specificity., Results: The best results for phase recognition were achieved with the MSTCN model (F1 score: 0.82 ± 0.01, accuracy: 0.84 ± 0.03). Mean IoUs for target structure segmentation ranged from 0.14 ± 0.22 to 0.80 ± 0.14 for organs and tissue types and from 0.11 ± 0.11 to 0.44 ± 0.30 for dissection areas. Image quality, distorting factors (e.g. blood, smoke), and technical challenges (e.g. lack of depth perception) considerably impacted segmentation performance., Conclusion: Machine learning-based phase recognition and segmentation of selected target structures are feasible in RARR. In the future, such functionalities could be integrated into a context-aware surgical guidance system for rectal surgery., (© 2023 Published by Elsevier Ltd.)
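Phase recognition here is a frame-wise classification problem, and the F1 score reported for the MSTCN model is conventionally a macro average over the surgical phases. How the paper averages is not specified in the abstract, so the following is one standard formulation, not necessarily the study's exact metric:

```python
import numpy as np

def phase_f1(pred, gt, n_phases):
    """Macro-averaged frame-wise F1 score over surgical phases.

    pred, gt: sequences of integer phase labels, one per video frame.
    Phases absent from both prediction and ground truth are skipped
    so they do not distort the average.
    """
    pred, gt = np.asarray(pred), np.asarray(gt)
    scores = []
    for p in range(n_phases):
        tp = np.sum((pred == p) & (gt == p))
        fp = np.sum((pred == p) & (gt != p))
        fn = np.sum((pred != p) & (gt == p))
        if tp + fp + fn == 0:  # phase never occurs in this video
            continue
        scores.append(2 * tp / (2 * tp + fp + fn))
    return float(np.mean(scores))
```

Macro averaging gives every phase equal weight regardless of duration, which matters in surgery where some phases span minutes and others hours.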
- Published
- 2023
- Full Text
- View/download PDF
49. Comparative validation of machine learning algorithms for surgical workflow and skill analysis with the HeiChole benchmark.
- Author
-
Wagner M, Müller-Stich BP, Kisilenko A, Tran D, Heger P, Mündermann L, Lubotsky DM, Müller B, Davitashvili T, Capek M, Reinke A, Reid C, Yu T, Vardazaryan A, Nwoye CI, Padoy N, Liu X, Lee EJ, Disch C, Meine H, Xia T, Jia F, Kondo S, Reiter W, Jin Y, Long Y, Jiang M, Dou Q, Heng PA, Twick I, Kirtac K, Hosgor E, Bolmgren JL, Stenzel M, von Siemens B, Zhao L, Ge Z, Sun H, Xie D, Guo M, Liu D, Kenngott HG, Nickel F, Frankenberg MV, Mathis-Ullrich F, Kopp-Schneider A, Maier-Hein L, Speidel S, and Bodenstedt S
- Subjects
- Humans, Workflow, Algorithms, Machine Learning, Artificial Intelligence, Benchmarking
- Abstract
Purpose: Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance or improve training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open data single-center video dataset. In this work we investigated the generalizability of phase recognition algorithms in a multicenter setting including more difficult recognition tasks such as surgical action and surgical skill., Methods: To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge, sub-challenge for surgical workflow and skill analysis. Here, 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument and/or skill assessment., Results: F1-scores were achieved for phase recognition between 23.9% and 67.7% (n = 9 teams), for instrument presence detection between 38.5% and 63.8% (n = 8 teams), but for action recognition only between 21.8% and 23.3% (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team)., Conclusion: Surgical workflow and skill analysis are promising technologies to support the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work.
In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery., Competing Interests: Declaration of Competing Interest M. Wagner, B.-P. Müller-Stich, S. Speidel and S. Bodenstedt worked with medical device manufacturer KARL STORZ SE & Co. KG in the projects “InnOPlan”, funded by the German Federal Ministry of Economic Affairs and Energy (grant number BMWI 01MD15002E) and “Surgomics”, funded by the German Federal Ministry of Health (grant number BMG 2520DAT82D and BMG 2520DAT82A). Lars Mündermann is an employee of KARL STORZ SE & Co. KG. A. Reinke works with the Helmholtz Imaging Platform (HIP), a platform of the Helmholtz Incubator on Information and Data Science. S. Kondo was an employee of Konica Minolta Inc. when this work was done. Wolfgang Reiter is an employee of Wintegral GmbH, a subsidiary of medical device manufacturer Richard Wolf GmbH. I. Twick, K. Kirtac, E. Hosgor, J. Lindström Bolmgren, M. Stenzel and B. von Siemens are employees of Caresyntax GmbH. Felix Nickel received travel support for conference participation as well as equipment provided for laparoscopic surgery courses by KARL STORZ SE & Co. KG, Johnson & Johnson, Intuitive Surgical, Cambridge Medical Robotics, and Medtronic. The other authors have no conflicts of interest., (Copyright © 2023. Published by Elsevier B.V.)
- Published
- 2023
- Full Text
- View/download PDF
50. CholecTriplet2021: A benchmark challenge for surgical action triplet recognition.
- Author
-
Nwoye CI, Alapatt D, Yu T, Vardazaryan A, Xia F, Zhao Z, Xia T, Jia F, Yang Y, Wang H, Yu D, Zheng G, Duan X, Getty N, Sanchez-Matilla R, Robu M, Zhang L, Chen H, Wang J, Wang L, Zhang B, Gerats B, Raviteja S, Sathish R, Tao R, Kondo S, Pang W, Ren H, Abbing JR, Sarhan MH, Bodenstedt S, Bhasker N, Oliveira B, Torres HR, Ling L, Gaida F, Czempiel T, Vilaça JL, Morais P, Fonseca J, Egging RM, Wijma IN, Qian C, Bian G, Li Z, Balasubramanian V, Sheet D, Luengo I, Zhu Y, Ding S, Aschenbrenner JA, van der Kar NE, Xu M, Islam M, Seenivasan L, Jenke A, Stoyanov D, Mutter D, Mascagni P, Seeliger B, Gonzalez C, and Padoy N
- Subjects
- Humans, Algorithms, Operating Rooms, Workflow, Deep Learning, Benchmarking, Laparoscopy
- Abstract
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets of ‹instrument, verb, target› combination delivers more comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and the assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from the competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison between them, in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. 
Our analysis shows that surgical workflow analysis is not yet solved, and also highlights interesting directions for future research on fine-grained surgical activity recognition which is of utmost importance for the development of AI in surgery., Competing Interests: Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (Copyright © 2023 Elsevier B.V. All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF