
Enhancing fall risk assessment: instrumenting vision with deep learning during walks.

Authors :
Moore J
Catena R
Fournier L
Jamali P
McMeekin P
Stuart S
Walker R
Salisbury T
Godfrey A
Source :
Journal of neuroengineering and rehabilitation [J Neuroeng Rehabil] 2024 Jun 22; Vol. 21 (1), pp. 106. Date of Electronic Publication: 2024 Jun 22.
Publication Year :
2024

Abstract

Background: Falls are common in a range of clinical cohorts, where routine risk assessment often comprises subjective visual observation only. Typically, observational assessment involves evaluation of an individual's gait during scripted walking protocols within a lab to identify deficits that potentially increase fall risk, but subtle deficits may not be (readily) observable. Therefore, objective approaches (e.g., inertial measurement units, IMUs) are useful for quantifying high resolution gait characteristics, enabling more informed fall risk assessment by capturing subtle deficits. However, IMU-based gait instrumentation alone is limited, failing to consider participant behaviour and details within the environment (e.g., obstacles). Video-based eye-tracking glasses may provide additional insight into fall risk, clarifying how people traverse environments based on head and eye movements. Recording head and eye movements can provide insights into how the allocation of visual attention to environmental stimuli influences successful navigation around obstacles. Yet, manual review of video data to evaluate head and eye movements is time-consuming and subjective. An automated approach is needed but none currently exists. This paper proposes a deep learning-based object detection algorithm (VARFA) to instrument vision and video data during walks, complementing instrumented gait.

Method: The approach automatically labels video data captured in a gait lab to assess visual attention and details of the environment. The proposed algorithm uses a YoloV8 model trained on a novel lab-based dataset.

Results: VARFA achieved excellent evaluation metrics (mAP50 of 0.93), identifying and localizing static objects (e.g., obstacles in the walking path) with an average accuracy of 93%. Similarly, a U-NET based track/path segmentation model achieved good metrics (IoU 0.82), suggesting that the predicted tracks (i.e., walking paths) align closely with the actual track, with an overlap of 82%. Notably, both models achieved these metrics while processing at real-time speeds, demonstrating efficiency and effectiveness for pragmatic applications.

Conclusion: The instrumented approach improves the efficiency and accuracy of fall risk assessment by evaluating the visual allocation of attention (i.e., information about when and where a person is attending) during navigation, improving the breadth of instrumentation in this area. VARFA could be used to instrument vision and thereby better inform fall risk assessment by providing behaviour and context data to complement instrumented gait (e.g., IMU) data during gait tasks. That may have notable (e.g., personalized) rehabilitation implications across a wide range of clinical cohorts where poor gait and increased fall risk are common.

(© 2024. The Author(s).)
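For readers unfamiliar with the segmentation metric reported above, the following is a minimal sketch of Intersection-over-Union (IoU, also called the Jaccard index) computed over binary masks. The function name and toy masks are illustrative assumptions, not taken from the paper or the VARFA codebase.

```python
# Illustrative sketch: IoU between a predicted binary mask and a ground-truth
# mask, the metric reported for the U-NET track/path segmentation (IoU 0.82).
# Masks here are toy flat lists; real masks would be 2D image arrays.

def iou(pred, truth):
    """Compute IoU between two flat binary masks of equal length."""
    if len(pred) != len(truth):
        raise ValueError("masks must be the same size")
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    # Convention: two empty masks overlap perfectly.
    return intersection / union if union else 1.0

# Toy example: 3 overlapping pixels out of 5 in the union.
pred  = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 0, 1, 1, 0, 0]
print(iou(pred, truth))  # → 0.6
```

An IoU of 0.82, as reported, means that 82% of the combined predicted-plus-actual track area is shared by both masks, which is why the abstract describes the predicted walking paths as aligning closely with the actual track.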

Details

Language :
English
ISSN :
1743-0003
Volume :
21
Issue :
1
Database :
MEDLINE
Journal :
Journal of neuroengineering and rehabilitation
Publication Type :
Academic Journal
Accession number :
38909239
Full Text :
https://doi.org/10.1186/s12984-024-01400-2