48 results for "simulated prosthetic vision"
Search Results
2. RASPV: A Robotics Framework for Augmented Simulated Prosthetic Vision
- Author
-
Alejandro Perez-Yus, Maria Santos-Villafranca, Julia Tomas-Barba, Jesus Bermudez-Cameo, Lorenzo Montano-Olivan, Gonzalo Lopez-Nicolas, and Jose J. Guerrero
- Subjects
Computer vision, navigation, RGB-D, simulated prosthetic vision, visually impaired assistance, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
One of the main challenges of visual prostheses is to augment the perceived information to improve the experience of their wearers. Given the limited access to implanted patients, new techniques are often evaluated via Simulated Prosthetic Vision (SPV) with sighted people. In this work, we introduce a novel SPV framework and implementation that presents major advantages with respect to previous approaches. First, it is integrated into a robotics framework, which allows us to benefit from a wide range of methods and algorithms from the field (e.g. object recognition, obstacle avoidance, autonomous navigation, deep learning). Second, we go beyond traditional image processing by processing 3D point clouds from an RGB-D camera, allowing us to robustly detect the floor, obstacles, and the structure of the scene. Third, it works either with a real camera or in a virtual environment, which gives us endless possibilities for immersive experimentation through a head-mounted display. Fourth, we incorporate a validated temporal phosphene model that replicates time effects into the generation of visual stimuli. Finally, we have proposed, developed, and tested several applications within this framework, such as avoiding moving obstacles, providing a general understanding of the scene, detecting staircases, helping the subject navigate an unfamiliar space, and detecting objects and people. We provide experimental results in real and virtual environments. The code is publicly available at https://www.github.com/aperezyus/RASPV
- Published
- 2024
- Full Text
- View/download PDF
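Frameworks like the one above detect the floor from RGB-D point clouds; the paper itself relies on more robust geometric methods (e.g. plane fitting). As a minimal illustrative sketch only, not the authors' implementation, floor points can be labeled by their height above a robust ground estimate; `detect_floor` and `height_tol` are hypothetical names:

```python
import numpy as np

def detect_floor(points, height_tol=0.05):
    """Label points lying near the lowest supporting surface as floor.

    points: (N, 3) array of x, y, z coordinates with z pointing up.
    Returns a boolean mask marking floor points.
    """
    z = points[:, 2]
    floor_z = np.percentile(z, 5)   # robust estimate of ground height
    return np.abs(z - floor_z) <= height_tol

# Toy cloud: 100 ground points at z = 0 plus one obstacle point at z = 0.5
cloud = np.zeros((101, 3))
cloud[:100, :2] = np.random.rand(100, 2)
cloud[100] = [0.5, 0.5, 0.5]
mask = detect_floor(cloud)         # True for the 100 ground points only
```

A percentile rather than the minimum guards against a few spurious low outliers from depth-sensor noise.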
3. Immersive Virtual Reality Simulations of Bionic Vision.
- Author
-
Kasowski, Justin and Beyeler, Michael
- Subjects
retinal implant, simulated prosthetic vision, virtual reality, Networking and Information Technology R&D (NITRD), Neurosciences, Assistive Technology, Eye Disease and Disorders of Vision, Rehabilitation, Bioengineering, Eye
- Abstract
Bionic vision uses neuroprostheses to restore useful vision to people living with incurable blindness. However, a major outstanding challenge is predicting what people "see" when they use their devices. The limited field of view of current devices necessitates head movements to scan the scene, which is difficult to simulate on a computer screen. In addition, many computational models of bionic vision lack biological realism. To address these challenges, we present VR-SPV, an open-source virtual reality toolbox for simulated prosthetic vision that uses a psychophysically validated computational model to allow sighted participants to "see through the eyes" of a bionic eye user. To demonstrate its utility, we systematically evaluated how clinically reported visual distortions affect performance in a letter recognition and an immersive obstacle avoidance task. Our results highlight the importance of using an appropriate phosphene model when predicting visual outcomes for bionic vision.
- Published
- 2022
4. Towards biologically plausible phosphene simulation for the differentiable optimization of visual cortical prostheses
- Author
-
Maureen van der Grinten, Jaap de Ruyter van Steveninck, Antonio Lozano, Laura Pijnacker, Bodo Rueckauer, Pieter Roelfsema, Marcel van Gerven, Richard van Wezel, Umut Güçlü, and Yağmur Güçlütürk
- Subjects
simulated prosthetic vision, cortical stimulation, bionic vision, blindness, deep learning, neurotechnology, Medicine, Science, Biology (General), QH301-705.5
- Abstract
Blindness affects millions of people around the world. A promising solution for restoring a form of vision to some individuals is the cortical visual prosthesis, which bypasses part of the impaired visual pathway by converting camera input to electrical stimulation of the visual system. The artificially induced visual percept (a pattern of localized light flashes, or ‘phosphenes’) has limited resolution, and a great portion of the field’s research is devoted to optimizing the efficacy, efficiency, and practical usefulness of the encoding of visual information. A commonly exploited method is non-invasive functional evaluation in sighted subjects or with computational models by using simulated prosthetic vision (SPV) pipelines. An important challenge in this approach is to balance perceptual realism, biological plausibility, and real-time performance in the simulation of cortical prosthetic vision. We present a biologically plausible, PyTorch-based phosphene simulator that runs in real time and uses differentiable operations to allow gradient-based computational optimization of phosphene encoding models. The simulator integrates a wide range of clinical results with neurophysiological evidence in humans and non-human primates. The pipeline includes a model of the retinotopic organization and cortical magnification of the visual cortex. Moreover, the quantitative effects of stimulation parameters and temporal dynamics on phosphene characteristics are incorporated. Our results demonstrate the simulator’s suitability both for computational applications, such as end-to-end deep learning-based prosthetic vision optimization, and for behavioral experiments. The modular and open-source software provides a flexible simulation framework for computational, clinical, and behavioral neuroscientists working on visual neuroprosthetics.
- Published
- 2024
- Full Text
- View/download PDF
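The phosphene simulators described in these records map electrode activations to rendered percepts. As a much-simplified sketch of the core idea only (not the authors' model: no retinotopic warping, temporal dynamics, or differentiable PyTorch operations), each active electrode can be rendered as a Gaussian blob on a regular grid; `render_phosphenes` and its parameters are hypothetical:

```python
import numpy as np

def render_phosphenes(stimulus, out_size=96, sigma=1.5):
    """Render an electrode activation grid as Gaussian phosphenes.

    stimulus: 2D array of activations in [0, 1], one entry per electrode.
    Returns an out_size x out_size image with values clipped to [0, 1].
    """
    n_rows, n_cols = stimulus.shape
    ys, xs = np.mgrid[0:out_size, 0:out_size].astype(float)
    image = np.zeros((out_size, out_size))
    for r in range(n_rows):
        for c in range(n_cols):
            a = stimulus[r, c]
            if a == 0:
                continue
            # Electrode centres on a regular grid -- an assumption; real
            # retinotopic maps are distorted, which these papers model.
            cy = (r + 0.5) * out_size / n_rows
            cx = (c + 0.5) * out_size / n_cols
            image += a * np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2)
                                / (2 * sigma ** 2))
    return np.clip(image, 0.0, 1.0)

grid = np.zeros((6, 6))
grid[2, 1:5] = 1.0              # a horizontal bar of four phosphenes
percept = render_phosphenes(grid)
```

A differentiable version of the same mapping is what allows the gradient-based optimization of encoding models described in the abstract.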
5. Enhanced Artificial Vision for Visually Impaired Using Visual Implants
- Author
-
Hossein Mahvash Mohammadi, Mohammad Hadi Edrisi, and Yvon Savaria
- Subjects
Argus II, artificial vision for visually impaired people, simulated prosthetic vision, scene understanding, visual prosthesis, visual implants, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Argus II is the most advanced retinal implant approved by the US FDA, and almost 350 visually impaired people are using it. This implant uses 60 microelectrodes implanted in the retina. Its goal is to improve the mobility and quality of life of its users. However, user satisfaction is not very high due to the very low resolution of the phosphene images and features created by this device. This article proposes a system to improve the artificial vision created by visual implants. The proposed method extracts information about the people around the visually impaired person using image processing and machine vision algorithms. This information includes the number of people in the scene, whether they are known or unknown, their gender, estimated ages, facial emotions, and approximate distance from the visually impaired person. It is extracted from frames received by a camera mounted on the user's glasses to generate signals that are fed into a visual stimulator, and is presented to the user as a schematic vision created from pre-trained phosphene patterns reflecting the information communicated. The proposed system is validated with a simulated prosthetic vision comprising 150 microelectrodes that is compatible with retina and visual cortex implants. A low-cost and energy-efficient implementation of the proposed method running on a Raspberry Pi 4 B at 4.5 frames/second shows the feasibility of using it in portable systems.
- Published
- 2023
- Full Text
- View/download PDF
6. Object recognition and localization enhancement in visual prostheses: a real-time mixed reality simulation
- Author
-
Reham H. Elnabawy, Slim Abdennadher, Olaf Hellwich, and Seif Eldawlatly
- Subjects
Simulated prosthetic vision, Object recognition, Object localization, Real-time mixed reality simulation, Medical technology, R855-855.5
- Abstract
Abstract Blindness is a major impairment that affects the daily life activities of any human. Visual prostheses have been introduced to provide artificial vision to the blind, with the aim of helping them regain confidence and independence. In this article, we propose an approach that involves four image enhancement techniques to facilitate object recognition and localization for visual prosthesis users: clip art representation of objects, edge sharpening, corner enhancement, and electrode dropout handling. The proposed techniques are tested in a real-time mixed reality simulation environment that mimics the vision perceived by visual prosthesis users. Twelve experiments were conducted to measure participants' performance in object recognition and localization, involving single objects, multiple objects, and navigation. To evaluate performance in object recognition, we measured recognition time, recognition accuracy, and confidence level. For object localization, two metrics were used: grasping attempt time and grasping accuracy. The results demonstrate that using all enhancement techniques simultaneously gives higher accuracy, higher confidence, and less time for recognizing and grasping objects than applying no enhancement or pair-wise combinations of the techniques. Visual prostheses could benefit from the proposed approach to provide users with an enhanced perception.
- Published
- 2022
- Full Text
- View/download PDF
8. An infrared image‐enhancement algorithm in simulated prosthetic vision: Enlarging working environment of future retinal prostheses.
- Author
-
Liang, Junling, Li, Heng, Chen, Jianpin, Zhai, Zhenzhen, Wang, Jing, Di, Liqing, and Chai, Xinyu
- Subjects
PROSTHETICS, INFRARED cameras, VISION, ALGORITHMS, VISIBLE spectra
- Abstract
Background: Most existing retinal prostheses contain a built-in visible-light camera module that captures images of the surrounding environment. Thus, when visible light is insufficient or absent, the camera fails to work, and the retinal prosthesis enters a dormant or "OFF" state. A simple and effective solution is replacing the visible-light camera with a dual-mode camera. The present research aimed to achieve two main purposes: (1) to explore whether a dual-mode camera works for prosthesis recipients under no visible-light conditions and (2) to assess its performance. Methods: To accomplish these aims, we enrolled subjects in a psychophysical experiment under simulated prosthetic vision conditions. We found that the subjects could complete some simple visual tasks, but recognition performance under the infrared mode was significantly inferior to that under the visible-light mode. These results inspired us to develop and propose a feasible infrared image-enhancement processing algorithm. Another psychophysical experiment was performed to verify the feasibility of the algorithm. Results: The average efficiency of subjects completing visual tasks using our enhancement algorithm (0.014 ± 0.001) was significantly higher (p < 0.001) than that of subjects using direct pixelization (0.007 ± 0.001). Conclusions: We concluded that a dual-mode camera could be a feasible solution for improving the performance of retinal prostheses, as the camera adapts better to the existing ambient light conditions. Dual-mode cameras combined with this infrared image-enhancement algorithm could provide a promising direction for the design of future retinal prostheses.
- Published
- 2022
- Full Text
- View/download PDF
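"Direct pixelization", the baseline that this and several later records compare against, simply reduces the camera image to the resolution of the electrode grid. A minimal sketch of that baseline (my own illustration under stated assumptions, not any paper's exact pipeline), using block averaging over a grayscale image:

```python
import numpy as np

def direct_pixelization(image, grid=(10, 10)):
    """Reduce a grayscale image to a coarse phosphene grid by block averaging.

    image: 2D float array; grid: (rows, cols) of simulated electrodes.
    """
    h, w = image.shape
    gh, gw = grid
    # Trim so the image divides evenly into blocks.
    image = image[: h - h % gh, : w - w % gw]
    blocks = image.reshape(gh, image.shape[0] // gh,
                           gw, image.shape[1] // gw)
    return blocks.mean(axis=(1, 3))

img = np.zeros((100, 100))
img[30:70, 45:55] = 1.0         # a bright vertical bar
percept = direct_pixelization(img)   # 10 x 10 phosphene intensities
```

Enhancement strategies such as the infrared algorithm above are applied to the image before this downsampling step, so that the few available phosphenes carry more task-relevant contrast.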
9. Semantic translation of face image with limited pixels for simulated prosthetic vision.
- Author
-
Xia, Xuan, He, Xing, Feng, Lu, Pan, Xizhou, Li, Nan, Zhang, Jingfei, Pang, Xufang, Yu, Fengqi, and Ding, Ning
- Subjects
ARTIFICIAL neural networks, FACE perception, GENERATIVE adversarial networks, INTELLIGIBILITY of speech, PIXELS, PROSTHESIS design & construction, SOCIAL skills
- Abstract
Facial perception and cognition are among the most critical functions of retinal prostheses for blind people. However, owing to the limitations of the electrode array, simulated prosthetic vision can only provide images with limited pixels, which seriously weakens the expression of image semantics. To improve the intelligibility of face images with limited pixels, we constructed a face semantic information transformation model, named F2Pnet (face to pixel networks), to transform real faces into pixel faces based on the analogy between human and artificial intelligence. This is the first attempt at face pixelation using deep neural networks for prosthetic vision. Furthermore, we established a pixel face database designed for prosthetic vision and proposed a new training strategy for generative adversarial networks for image-to-image translation tasks, aiming to solve the problem of semantic loss under limited pixels. The results of psychophysical experiments and user studies show that the identifiability of pixel faces in terms of identity and expression is much better than that of comparable methods, which is significant for improving the social abilities of blind people. Real-time operation (17.7 fps) on a Raspberry Pi 4 B shows that F2Pnet has reasonable practicability.
- Published
- 2022
- Full Text
- View/download PDF
10. The application of computer vision to visual prosthesis.
- Author
-
Wang, Jing, Zhu, Haiyi, Liu, Jianyun, Li, Heng, Han, Yanling, Zhou, Ruyan, and Zhang, Yun
- Subjects
ARTIFICIAL vision, COMPUTER vision, APPLICATION software, VISUAL perception, PROSTHETICS, IMAGE processing
- Abstract
A visual prosthesis is an auxiliary device for patients with blinding diseases that cannot be treated with conventional surgery or drugs. It converts captured images into corresponding electrical stimulation patterns, according to which phosphenes are generated through the action of internal electrodes on the visual pathway to form visual perception. However, due to some restrictions such as the few implantable electrodes that the biological tissue can accommodate, the induced perception is far from ideal. Therefore, an important issue in visual prosthesis research is how to detect and present useful information in low-resolution prosthetic vision to improve the visual function of the wearer. In recent years, with the development and broad application of computer vision methods, researchers have investigated the possibility of their utilization in visual prostheses by simulating prosthetic visual percepts. Through the optimization of visual perception by image processing, the efficiency of visual prosthesis devices can be further improved to better meet the needs of prosthesis wearers. In this article, recent works on prosthetic vision centering on implementing computer vision methods are reviewed. Differences, strengths, and weaknesses of the mentioned methods are discussed. The development directions of optimizing prosthetic vision and improving methods of visual perception are analyzed.
- Published
- 2021
- Full Text
- View/download PDF
11. Retinal Prostheses: Functional Outcomes and Visual Rehabilitation
- Author
-
Dagnelie, Gislin, Singh, Arun D., Series editor, Humayun, Mark S., editor, and Olmos de Koo, Lisa C., editor
- Published
- 2018
- Full Text
- View/download PDF
12. RGB-D Computer Vision Techniques for Simulated Prosthetic Vision
- Author
-
Bermudez-Cameo, Jesus, Badias-Herbera, Alberto, Guerrero-Viu, Manuel, Lopez-Nicolas, Gonzalo, Guerrero, Jose J., Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Alexandre, Luís A., editor, Salvador Sánchez, José, editor, and Rodrigues, João M. F., editor
- Published
- 2017
- Full Text
- View/download PDF
13. A Fast and Flexible Computer Vision System for Implanted Visual Prostheses
- Author
-
Li, Wai Ho, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Agapito, Lourdes, editor, Bronstein, Michael M., editor, and Rother, Carsten, editor
- Published
- 2015
- Full Text
- View/download PDF
15. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision.
- Author
-
Li, Heng, Su, Xiaofan, Wang, Jing, Kan, Han, Han, Tingting, Zeng, Yajie, and Chai, Xinyu
- Subjects
PHOSPHENES, ARTIFICIAL vision, IMAGE processing, VISUAL perception, PSYCHOPHYSIOLOGY
- Abstract
Background and Objective: Current retinal prostheses can only generate low-resolution visual percepts constituted of limited phosphenes, which are elicited by an electrode array with uncontrollable color and restricted grayscale. Under this visual perception, prosthetic recipients can complete only simple visual tasks; more complex tasks like face identification or object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies to optimize the recipients' visual perception. This study focuses on recognition of the object of interest under simulated prosthetic vision. Method: We used a saliency segmentation method, based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive iterative optimization framework, to automatically extract foreground objects. On this basis, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. Results: i) Psychophysical experiments verified that under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was affected positively by paired-interrelated objects in the scene. Conclusion: The saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density implant.
- Published
- 2018
- Full Text
- View/download PDF
16. Pixelized Images Recognition in Simulated Prosthetic Vision
- Author
-
Zhao, Ying, Tian, Yukun, Liu, Huwei, Ren, Qiushi, Chai, Xinyu, Magjarevic, R., editor, Nagel, J. H., editor, Peng, Yi, editor, and Weng, Xiaohong, editor
- Published
- 2008
- Full Text
- View/download PDF
17. A novel simulation paradigm utilising MRI-derived phosphene maps for cortical prosthetic vision.
- Author
-
Wang HZ and Wong YT
- Subjects
- Humans, Phosphenes, Visual Perception, Magnetic Resonance Imaging, Visual Prosthesis, Visual Cortex
- Abstract
Objective. We developed a realistic simulation paradigm for cortical prosthetic vision and investigated whether we can improve visual performance using a novel clustering algorithm. Approach. Cortical visual prostheses have been developed to restore sight by stimulating the visual cortex. To investigate the visual experience, previous studies have used uniform phosphene maps, which may not accurately capture generated phosphene map distributions of implant recipients. The current simulation paradigm was based on the Human Connectome Project retinotopy dataset and the placement of implants on the cortices from magnetic resonance imaging scans. Five unique retinotopic maps were derived using this method. To improve performance on these retinotopic maps, we enabled head scanning and a density-based clustering algorithm was then used to relocate centroids of visual stimuli. The impact of these improvements on visual detection performance was tested. Using spatially evenly distributed maps as a control, we recruited ten subjects and evaluated their performance across five sessions on the Berkeley Rudimentary Visual Acuity test and the object recognition task. Main results. Performance on control maps is significantly better than on retinotopic maps in both tasks. Both head scanning and the clustering algorithm showed the potential of improving visual ability across multiple sessions in the object recognition task. Significance. The current paradigm is the first that simulates the experience of cortical prosthetic vision based on brain scans and implant placement, which captures the spatial distribution of phosphenes more realistically. Utilisation of evenly distributed maps may overestimate the performance that visual prosthetics can restore. This simulation paradigm could be used in clinical practice when making plans for where best to implant cortical visual prostheses.
- Published
- 2023
- Full Text
- View/download PDF
18. Real-world indoor mobility with simulated prosthetic vision: The benefits and feasibility of contour-based scene simplification at different phosphene resolutions
- Author
-
Jaap de Ruyter van Steveninck, Tom van Gestel, Paula Koenders, Guus van der Ham, Floris Vereecken, Umut Güçlü, Marcel van Gerven, Yağmur Güçlütürk, and Richard van Wezel
- Subjects
Mobility, Neuroprosthetics, Phosphenes, Vision Disorders, Biophysics, Simulated prosthetic vision, Artificial Intelligence (AI), Cognitive artificial intelligence, Obstacle avoidance, Prosthetic vision, Sensory Systems, Form Perception, Ophthalmology, Deep learning (DL), Surface boundary detection, Feasibility Studies, Humans, Edge detection, Vision, Ocular
- Abstract
Neuroprosthetic implants are a promising technology for restoring some form of vision in people with visual impairments via electrical neurostimulation in the visual pathway. Although an artificially generated prosthetic percept is relatively limited compared with normal vision, it may provide some elementary perception of the surroundings, re-enabling daily living functionality. For mobility in particular, various studies have investigated the benefits of visual neuroprosthetics in a simulated prosthetic vision paradigm with varying outcomes. The previous literature suggests that scene simplification via image processing, and particularly contour extraction, may potentially improve the mobility performance in a virtual environment. In the current simulation study with sighted participants, we explore both the theoretically attainable benefits of strict scene simplification in an indoor environment by controlling the environmental complexity, as well as the practically achieved improvement with a deep learning-based surface boundary detection implementation compared with traditional edge detection. A simulated electrode resolution of 26 x 26 was found to provide sufficient information for mobility in a simple environment. Our results suggest that, for a lower number of implanted electrodes, the removal of background textures and within-surface gradients may be beneficial in theory. However, the deep learning-based implementation for surface boundary detection did not improve mobility performance in the current study. Furthermore, our findings indicate that, for a greater number of electrodes, the removal of within-surface gradients and background textures may deteriorate, rather than improve, mobility. Therefore, finding a balanced amount of scene simplification requires a careful tradeoff between informativity and interpretability that may depend on the number of implanted electrodes.
- Published
- 2022
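The "traditional edge detection" baseline this study compares against is typically a Sobel-style gradient filter. A self-contained sketch of that baseline (an illustration only, not the study's implementation), computing a gradient-magnitude edge map with zero padding:

```python
import numpy as np

def sobel_edges(image):
    """Gradient-magnitude edge map using 3x3 Sobel kernels (zero padding)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # horizontal gradient kernel
    ky = kx.T                                  # vertical gradient kernel
    padded = np.pad(image, 1)
    h, w = image.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Accumulate the correlation one kernel tap at a time (no SciPy needed).
    for i in range(3):
        for j in range(3):
            patch = padded[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

scene = np.zeros((20, 20))
scene[:, 10:] = 1.0             # a vertical surface boundary
edges = sobel_edges(scene)      # strong response only along the boundary
```

A contour-based scene simplification pipeline would then pass such an edge map, rather than the raw image, into the phosphene simulation.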
19. Image Processing Strategies Based on a Visual Saliency Model for Object Recognition Under Simulated Prosthetic Vision.
- Author
-
Wang, Jing, Li, Heng, Fu, Weizhen, Chen, Yao, Li, Liming, Lyu, Qing, Han, Tingting, and Chai, Xinyu
- Subjects
PROSTHETICS, ARTIFICIAL eyes, ARTIFICIAL implants, ARTIFICIAL vision, RETINA
- Abstract
Retinal prostheses have the potential to restore partial vision. Object recognition in scenes of daily life is one of the essential tasks for implant wearers. Still limited by the low-resolution visual percepts provided by retinal prostheses, it is important to investigate and apply image processing methods to convey more useful visual information to the wearers. We proposed two image processing strategies based on Itti's visual saliency map, region of interest (ROI) extraction, and image segmentation. Itti's saliency model generated a saliency map from the original image, in which salient regions were grouped into an ROI by fuzzy c-means clustering. Grabcut then generated a proto-object from the ROI-labeled image, which was recombined with the background and enhanced in two ways: 8-4 separated pixelization (8-4 SP) and background edge extraction (BEE). Results showed that both 8-4 SP and BEE had significantly higher recognition accuracy than direct pixelization (DP). Each saliency-based image processing strategy was sensitive to the quality of image segmentation: under good and perfect segmentation conditions, BEE and 8-4 SP obtained noticeably higher recognition accuracy than DP, and under bad segmentation conditions, only BEE boosted performance. The application of saliency-based image processing strategies was verified to be beneficial to object recognition in daily scenes under simulated prosthetic vision. They are hoped to help the development of the image processing module for future retinal prostheses, and thus provide more benefit to patients.
- Published
- 2016
- Full Text
- View/download PDF
20. Adaptation to Phosphene Parameters Based on Multi-Object Recognition Using Simulated Prosthetic Vision.
- Author
-
Xia, Peng, Hu, Jie, and Peng, Yinghong
- Subjects
ARTIFICIAL vision, PSYCHOPHYSICS, PHOTORECEPTORS, NEUROPLASTICITY, OBJECT recognition (Computer vision)
- Abstract
Retinal prostheses for the restoration of functional vision are under development, and visual prostheses targeting proximal stages of the visual pathway are also being explored. To investigate the experience with visual prostheses, psychophysical experiments using simulated prosthetic vision in normally sighted individuals are necessary. In this study, a helmet display showing real-time images from a camera attached to the helmet provided the simulated vision, and recognition and discrimination of multiple objects were used to evaluate visual performance under different parameters (gray scale, distortion, and dropout). The process of fitting and training with visual prostheses was simulated and estimated by adaptation to the parameters over time. The results showed that increasing the number of gray levels and decreasing phosphene distortion and dropout rate improved recognition performance significantly; recognition accuracy was 61.8 ± 7.6% under the optimum condition (gray scale: 8, distortion: k = 0, dropout: 0%). The adaptation experiments indicated that recognition performance improved with time and that the effect of adaptation to distortion was greater than to dropout, which implies a difference in the adaptation mechanisms for the two parameters.
- Published
- 2015
- Full Text
- View/download PDF
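The gray-scale and dropout parameters varied in this study can be mimicked on a phosphene grid by quantizing intensities to a few levels and randomly deleting a fraction of phosphenes. The helper below is a hedged sketch; the function name, defaults, and zero-for-dropped convention are ours, not the paper's:

```python
import random

def degrade_phosphenes(grid, gray_levels=8, dropout=0.1, seed=0):
    """Quantize each phosphene intensity (0-255) to `gray_levels` values
    and randomly drop a fraction `dropout` of phosphenes (illustrative)."""
    rng = random.Random(seed)          # seeded for reproducible dropout
    step = 256 / gray_levels
    out = []
    for row in grid:
        new = []
        for v in row:
            if rng.random() < dropout:
                new.append(0)          # dropped phosphene: no percept
            else:
                level = min(int(v / step), gray_levels - 1)
                new.append(int(level * 255 / (gray_levels - 1)))
        out.append(new)
    return out
```

With `gray_levels=2` this reduces the grid to binary phosphenes, the hardest condition along that axis.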
21. A real-time image optimization strategy based on global saliency detection for artificial retinal prostheses.
- Author
-
Li, Heng, Han, Tingting, Wang, Jing, Lu, Zhuofan, Cao, Xiaofei, Chen, Yao, Li, Liming, Zhou, Chuanqing, and Chai, Xinyu
- Subjects
- *
MATHEMATICAL optimization , *RETINA transplants , *VISUAL perception , *PHOSPHENES , *ELECTRODES , *IMAGE processing - Abstract
Current retinal prostheses can only generate low-resolution visual percepts, constituted of inadequate phosphenes that are elicited by a limited number of stimulating electrodes, with poorly controlled color and restricted grayscale. Fortunately, most retinal prostheses employ an external camera and a video processing unit as essential components, which allows image processing to improve visual perception for recipients. Some studies have used a variety of sophisticated image processing algorithms to improve prosthetic vision perception. However, most of them cannot achieve real-time processing due to the complexity of the algorithms and the limited processing power of the platform, which greatly curbs their practical application on retinal prostheses. In this study, we propose a real-time image processing strategy based on a novel bottom-up saliency detection algorithm, aiming to detect and enhance foreground objects in a scene. Results from two eye-hand-coordination visual tasks demonstrate that, under simulated prosthetic vision, our proposed strategy has noticeable advantages in terms of accuracy, efficiency, and head motion range. The study aims to help develop image processing modules in retinal prostheses, and is expected to provide more benefit to prosthesis recipients. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
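The paper's specific saliency algorithm is not given in this abstract. As a stand-in, one classic bottom-up cue is center-surround contrast: a pixel is salient when it differs from its local mean. The sketch below illustrates only that generic idea, not the authors' algorithm:

```python
def center_surround_saliency(img, k=1):
    """Crude bottom-up saliency: |pixel - local mean| over a
    (2k+1)^2 neighbourhood, clipped at image borders. Illustrative
    stand-in for a real-time saliency detector, not the paper's method."""
    h, w = len(img), len(img[0])
    sal = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nb = [img[yy][xx]
                  for yy in range(max(0, y - k), min(h, y + k + 1))
                  for xx in range(max(0, x - k), min(w, x + k + 1))]
            sal[y][x] = abs(img[y][x] - sum(nb) / len(nb))
    return sal
```

A foreground-enhancement stage could then brighten pixels whose saliency exceeds a threshold before phosphene rendering.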
22. A spiking neural network model for obstacle avoidance in simulated prosthetic vision.
- Author
-
Chenjie Ge, Nikola Kasabov, Zhi Liu, and Jie Yang
- Subjects
- *
ARTIFICIAL neural networks , *ARTIFICIAL intelligence , *MULTILAYER perceptrons , *NEURAL chips , *NEURAL computers - Abstract
Limited by the visual percepts elicited by existing visual prostheses, it is necessary to enhance their functionality to fulfill challenging tasks for the blind such as obstacle avoidance. This paper argues that spiking neural networks (SNN) are effective techniques for object recognition and introduces, for the first time, an SNN model for obstacle recognition to assist blind people wearing prosthetic vision devices by modelling and classifying spatio-temporal (ST) video data. The proposed methodology is based on a novel spiking neural network architecture, called NeuCube, as a general framework for video data modelling in simulated prosthetic vision. As an integrated environment covering spike train encoding, input variable mapping, unsupervised reservoir training, and supervised classifier training, NeuCube consists of a spiking neural network reservoir (SNNr) and a dynamic evolving spiking neural network classifier (deSNN). First, input data is captured by the visual prosthesis; then, ST features are extracted from the low-resolution prosthetic vision generated by the prosthesis. Finally, these ST features are fed to the NeuCube, which outputs a classification result for obstacle analysis so that an early warning system can be activated. Experiments on collected video data and comparison with other computational intelligence methods indicate promising results. This makes it possible to directly utilize available neuromorphic hardware chips, embedded in visual prostheses, to enhance their functionality significantly. The proposed NeuCube-based obstacle avoidance methodology provides useful guidance to the blind, thus offering a significant improvement over current prostheses and potentially benefiting future prosthesis wearers. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
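A common front end for SNN pipelines such as NeuCube is temporal-contrast (threshold-based) spike encoding, which turns a continuous per-pixel signal into sparse spike events. The sketch below shows only that general encoding idea, with our own names and threshold; it is not the paper's encoder:

```python
def threshold_encode(signal, theta=10):
    """Temporal-contrast spike encoding: emit a +1/-1 event whenever the
    signal has risen/fallen by more than `theta` since the last event,
    else 0. One common SNN front end; parameters are illustrative."""
    spikes, ref = [], signal[0]
    for v in signal[1:]:
        if v - ref > theta:
            spikes.append(1)     # positive spike: signal rose past threshold
            ref = v
        elif ref - v > theta:
            spikes.append(-1)    # negative spike: signal fell past threshold
            ref = v
        else:
            spikes.append(0)     # no event: change within threshold
    return spikes
```

Such spike trains are what a reservoir like the SNNr would consume, one channel per input pixel or feature.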
23. MEMS-based system and image processing strategy for epiretinal prosthesis.
- Author
-
Peng Xia, Jie Hu, Jin Qi, Chaochen Gu, and Yinghong Peng
- Subjects
- *
PROSTHETICS , *RETINAL degeneration , *BIOMECHANICS , *BIOMEDICAL engineering , *BIOMEDICAL materials - Abstract
Retinal prostheses have the potential to restore some level of visual function to patients suffering from retinal degeneration. In this paper, an epiretinal approach with active stimulation devices is presented. The MEMS-based processing system consists of an external micro-camera, an information processor, an implanted electrical stimulator, and a microelectrode array. An image processing strategy combining image clustering and enhancement techniques was proposed and evaluated in psychophysical experiments. The results indicated that the image processing strategy improved visual performance compared with directly merging pixels to a low resolution. These image processing methods assist the epiretinal prosthesis in vision restoration. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
24. Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.
- Author
-
Macé, Marc J.‐M., Guivarch, Valérian, Denis, Grégoire, and Jouffrais, Christophe
- Subjects
- *
CLINICAL trials , *BLIND people , *PROSTHETICS , *ARTIFICIAL implants , *IMAGE processing , *VISUOMOTOR coordination - Abstract
Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored by current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very small number of electrodes (e.g., nine) is sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, both in terms of accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high-level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprostheses. We suggest that this method, that is, localization of targets of interest in the scene, may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
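Rendering an object's location on a nine-electrode (3 × 3) array, as in the localization condition above, amounts to mapping the detected centroid to one grid cell. A minimal sketch, with illustrative names (the detector producing the centroid is assumed, not shown):

```python
def localize_on_grid(cx, cy, w, h, grid=3):
    """Map a detected object's centroid (cx, cy) in a w x h image to a
    grid x grid phosphene frame with a single lit phosphene -- a sketch
    of localization-based rendering, not the study's implementation."""
    gx = min(int(cx * grid / w), grid - 1)   # column index, clamped
    gy = min(int(cy * grid / h), grid - 1)   # row index, clamped
    frame = [[0] * grid for _ in range(grid)]
    frame[gy][gx] = 1                        # light the phosphene at the target
    return frame
```

The wearer then reaches toward the single lit phosphene, which is why so few electrodes suffice for this task.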
25. Virtual Reality Study for the Influence of a Visual Prosthetic’s Phosphene Positioning and Morphologies on Patients’ Mobility Performance
- Author
-
Wang, Xing
- Subjects
simulated prosthetic vision ,phosphenes ,visual prosthesis ,simulation - Abstract
This thesis sets out to determine the extent to which the appearance of phosphenes, with positioning and morphologies equivalent to those identified in clinical studies, influences simulated prosthetic vision (SPV) performance, compared with conventional techniques that use an idealized (regular) phosphene layout. In this work, sighted subjects were presented with a live (real-time) SPV depiction of a visual scene via a virtual reality headset, rendered with either idealized or randomized phosphenes, and were required to perform two tasks: (a) a line-following task and (b) an obstacle avoidance task. The total time each of the nine subjects took to complete both tasks, along with the number of contacts they made with obstacles, was recorded and analyzed for comparison. The results identified no major differences in task performance between regular phosphene grids and phosphene rendering that more closely follows clinical observation. Key elements of the literature review informed the recommended methodology for assessing how clinically observed phosphene positioning and morphology influence SPV performance relative to idealized (regular) phosphene techniques. It is concluded that the balance of evidence is not sufficient to indicate significant differences in subjects' performance between idealized/simplified SPV and randomized phosphene structures.
- Published
- 2021
26. Virtual reality validation of naturalistic modulation strategies to counteract fading in retinal stimulation
- Author
-
Sandrine Hinrichs, Diego Ghezzi, Naïg Aurelia Ludmilla Chenais, Marion Chatelain, and Jacob Thorn
- Subjects
bipolar cells ,Retinal Ganglion Cells ,vision ,retinal prostheses ,genetic structures ,Computer science ,Retinal implant ,electrical-stimulation ,Phosphenes ,Biomedical Engineering ,Virtual reality ,blind patients ,Retinal ganglion ,Retina ,models ,Cellular and Molecular Neuroscience ,scanning ,mobility performance ,Perception ,Humans ,Fading ,Computer vision ,Vision, Ocular ,simulated prosthetic vision ,Virtual Reality ,fading ,eye diseases ,Electric Stimulation ,Visual field ,Visual Prosthesis ,artificial vision ,Phosphene ,Artificial intelligence
Objective. Temporal resolution is a key challenge in artificial vision. Several prosthetic approaches are limited by the perceptual fading of evoked phosphenes upon repeated stimulation from the same electrode. Therefore, implanted patients are forced to perform active scanning, via head movements, to refresh the visual field viewed by the camera. However, active scanning is a draining task, and it is crucial to find compensatory strategies to reduce it. Approach. To address this question, we implemented perceptual fading in simulated prosthetic vision using virtual reality. Then, we quantified the effect of fading on two indicators: the time to complete a reading task and the head rotation during the task. We also tested whether stimulation strategies previously proposed to increase the persistence of retinal ganglion cell responses to electrical stimulation could improve these indicators. Main results. This study shows that stimulation strategies based on interrupted pulse trains and randomisation of the pulse duration allow a significant reduction of both the time to complete the task and the head rotation during the task. Significance. The stimulation strategy used in retinal implants is crucial to counteract perceptual fading and to reduce active head scanning during prosthetic vision. In turn, less active scanning might improve the patient's comfort in artificial vision.
- Published
- 2022
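The interrupted pulse trains with randomized pulse durations tested above can be illustrated with a small generator that drops every n-th pulse and draws each pulse's duration at random. All names and parameter values here are illustrative, not those used in the study:

```python
import random

def interrupted_pulse_train(n_pulses, period_ms=50.0, gap_every=5,
                            dur_range=(0.1, 1.0), seed=0):
    """Return (onset_ms, duration_ms) pairs for a pulse train that skips
    every `gap_every`-th pulse (interruption) and randomises each pulse
    duration -- a sketch in the spirit of the strategies above."""
    rng = random.Random(seed)                  # seeded for reproducibility
    train = []
    for i in range(1, n_pulses + 1):
        if i % gap_every:                      # drop every gap_every-th pulse
            onset = (i - 1) * period_ms
            train.append((onset, rng.uniform(*dur_range)))
    return train
```

The hypothesis these strategies test is that breaking the regularity of stimulation slows retinal adaptation, so phosphenes fade less.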
27. Recognition of Similar Objects Using Simulated Prosthetic Vision.
- Author
-
Hu, Jie, Xia, Peng, Gu, Chaochen, Qi, Jin, Li, Sheng, and Peng, Yinghong
- Subjects
- *
ARTIFICIAL vision , *VISUAL pathways , *PSYCHOPHYSICS , *PHOSPHENES , *IMAGE processing - Abstract
Due to the limitations of existing techniques, even the most advanced visual prostheses, which use several hundred electrodes to transmit signals to the visual pathway, provide restricted sensory function and visual information. To identify the bottlenecks and guide prosthesis design, psychophysical simulations of a visual prosthesis in normally sighted individuals are desirable. In this study, psychophysical experiments of discriminating objects with similar profiles were used to test the effects of phosphene array parameters (spatial resolution, gray scale, distortion, and dropout rate) on visual information using simulated prosthetic vision. The results showed that increasing the spatial resolution and the number of gray levels and decreasing phosphene distortion and dropout rate improved recognition performance; accuracy was 78.5% under the optimum condition (resolution: 32 × 32, gray level: 8, distortion: k = 0, dropout: 0%). In combined parameter tests, significant facial recognition accuracy was achieved for all the images with k = 0.1 distortion and 10% dropout. Compared with other experiments, we find that different objects do not show specific sensitivity to changes in the parameters, and that the visual information is not nearly enough even under the optimum condition. The results suggest that higher spatial resolution and more gray levels are required for visual prosthetic devices, and that further research on image processing strategies to improve prosthetic vision is necessary, especially when wearers have to accomplish more than simple visual tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
28. Recognition of Objects in Simulated Irregular Phosphene Maps for an Epiretinal Prosthesis.
- Author
-
Lu, Yanyu, Wang, Jing, Wu, Hao, Li, Liming, Cao, Xun, and Chai, Xinyu
- Subjects
- *
ARTIFICIAL vision , *PHOSPHENES , *BLINDNESS , *OBJECT recognition (Computer vision) , *OPTICAL resolution , *THERAPEUTICS - Abstract
Visual prostheses offer the possibility of restoring vision to the blind. It is necessary to determine minimum requirements for daily visual tasks. To investigate the recognition of common objects in daily life based on simulated irregular phosphene maps, the effect of four parameters (resolution, distortion, dropout percentage, and gray scale) on object recognition was investigated. The results showed that object recognition accuracy increased significantly with an increase in resolution. Distortion and dropout percentage had a significant impact on object recognition; as distortion level and dropout percentage increased, recognition decreased considerably. Accuracy decreased significantly only at gray level 2, whereas the other three gray levels showed no obvious difference. The two image processing methods (merging pixels to lower the resolution, and edge extraction before lowering resolution) showed a significant difference in object recognition when there was a high degree of distortion or dot dropout. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
29. Complexity Analysis Based on Image-Processing Method and Pixelized Recognition of Chinese Characters Using Simulated Prosthetic Vision.
- Author
-
Kun Yang, Chuanqing Zhou, Qiushi Ren, Jin Fan, Leilei Zhang, and Xinyu Chai
- Subjects
- *
LINGUISTIC complexity , *OPTICAL resolution , *PIXELS , *CHINESE language , *BIOMEDICAL materials - Abstract
The influence of complexity and the minimum resolution necessary for recognition of pixelized Chinese characters (CCs) was investigated using simulated prosthetic vision. An image-processing method was used to evaluate the complexity of CCs, defined as the frequency of black pixels and analyzed with a black-pixel-statistic complexity algorithm. A total of 631 of the most commonly used CCs, which deliver 80% of the information in Chinese daily reading, were chosen as the testing database in order to avoid negative effects due to illegibility and incognizance. CCs in the Hei font style were captured as images and pixelized as 6 × 6, 8 × 8, 10 × 10, and 12 × 12 pixel arrays with square dots. Recognition accuracy of CCs with different complexity and different pixel array sizes was tested using simulated prosthetic vision. The results indicate that both pixel array size and complexity have a significant impact on pixelized reading of CCs. Recognition accuracy of pixelized CCs drops as complexity increases and pixel number decreases. More than 80% of CCs of any complexity can be recognized correctly with a 10 × 10 pixel array, which can therefore sufficiently support pixelized reading of CCs for a visual prosthesis. Pixelized reading of CCs at lower resolution is possible only for characters with low complexity (complexity less than 0.16 for a 6 × 6 pixel array and less than 0.24 for an 8 × 8 pixel array). [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
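The black-pixel-statistic complexity measure described above reduces to the fraction of black pixels in the binarized character bitmap. A one-function sketch (the function name is ours):

```python
def cc_complexity(bitmap):
    """Complexity of a binarized character bitmap (2D list of 0/1):
    the fraction of black (1) pixels, per the black-pixel-statistic
    measure described above."""
    total = sum(len(row) for row in bitmap)
    black = sum(sum(row) for row in bitmap)
    return black / total   # e.g. [[1,0],[0,0]] -> 0.25
```

Against the thresholds reported above, a character scoring below 0.16 would remain readable even on a 6 × 6 array, while one above 0.24 would need at least 10 × 10.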
30. Simulating prosthetic vision: II. Measuring functional capacity
- Author
-
Chen, Spencer C., Suaning, Gregg J., Morley, John W., and Lovell, Nigel H.
- Subjects
- *
ARTIFICIAL vision , *SIMULATION methods & models , *MEDICAL function tests , *MICROELECTRONICS , *MEDICAL electronics , *VISUAL fields , *PHOSPHENES , *IMAGE processing , *VISUAL acuity - Abstract
Abstract: Investigators of microelectronic visual prosthesis devices have found that some aspects of vision can be restored in the form of spots of light in the visual field, so-called "phosphenes", from which richer and more complex scenes may be composed. However, questions still surround how such a form of vision can allow its recipients to "see" and to carry out everyday activities. Through simulations of prosthetic vision, researchers can experience first-hand many performance and behavioral aspects of prosthetic vision, and studies conducted on a larger population can inform performance and behavioral preferences in general and in individual cases. This review examines the findings of the various investigations of the functional capacity of prosthetic vision conducted through simulations, especially on the topics of letter acuity, reading, navigation, learning, and visual scanning adaptation. Central to the review, letter acuity is posited as a reference measurement so that results and performance trends across the various simulation models and functional assessment tasks can be more readily compared and generalized. Future directions for simulation-based research are discussed with respect to designing a functional visual prosthesis, improving functional vision in near-term low-phosphene-count devices, and pursuing image processing strategies to impart the most comprehensible prosthetic vision. [Copyright Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
31. Simulating prosthetic vision: I. Visual models of phosphenes
- Author
-
Chen, Spencer C., Suaning, Gregg J., Morley, John W., and Lovell, Nigel H.
- Subjects
- *
ARTIFICIAL vision , *PHOSPHENES , *CLINICAL trials , *PSYCHOPHYSIOLOGY , *SIMULATION methods & models , *VIRTUAL reality , *ELECTRIC stimulation - Abstract
Abstract: With increasing research advances and clinical trials of visual prostheses, there is significant demand to better understand the perceptual and psychophysical aspects of prosthetic vision. In prosthetic vision, a visual scene is composed of relatively large, isolated spots of light, so-called "phosphenes", very much like a magnified pictorial print. The utility of prosthetic vision has been studied by investigators in the form of virtual-reality visual models (simulations) of prosthetic vision administered to normally sighted subjects. In this review, the simulations from these investigations are examined with respect to how they visually render the phosphenes and the virtual-reality apparatus involved. A comparison is made between these simulations and the actual descriptions of phosphenes reported from human trials of visual prosthesis devices. For the results from these simulation studies to be relevant to the experience of visual prosthesis recipients, it is important that the simulated phosphenes be consistent with the descriptions from human trials. A standardized simulation and reporting framework is proposed so that future simulations may be configured to be more realistic to the experience of implant recipients, and so that the simulation parameters from different investigators may be more readily extracted and study results more fittingly compared. [Copyright Elsevier]
- Published
- 2009
- Full Text
- View/download PDF
32. Recognition of Pixelized Chinese Characters Using Simulated Prosthetic Vision.
- Author
-
Xinyu Chai, Wei Yu, Jia Wang, Ying Zhao, Changsi Cai, and Qiushi Ren
- Subjects
- *
ARTIFICIAL vision , *ELECTRODES , *DIGITAL image processing , *PHOSPHENES , *MICROELECTRODES , *ARTIFICIAL organs , *NEUROSCIENCES - Abstract
The rehabilitation of the reading ability of the blind with a limited number of stimulating electrodes is regarded as one of the major functions of the envisioned visual prosthesis. This article systematically studied how many pixels of individual Chinese characters are needed for correct and economical recognition by blind Chinese subjects. In this study, 40 normally sighted subjects were tested on a self-developed platform, HanziConvertor (Institute for Laser Medicine & Bio-photonics, Shanghai Jiaotong University, China), with digital image processing capabilities to convert images of printed text into various pixelized patterns made up of discrete dots and present them in order on a computer screen. It was found that various factors such as pixel number, character typeface, and stroke number clearly affect recognition accuracy. It was also found that optimal recognition accuracy occurs at a specific size of binary pixel array, due to a trade-off between a strictly limited number of stimulation electrodes and character sampling resolution. The results showed that (i) recognition accuracy of pixelized characters is optimal with at least 12 × 12 binary pixels, and it is therefore recommended to apply a minimum of 150 discrete, functioning electrodes for restoring the reading ability of blind Chinese individuals with a visual prosthesis; (ii) the Song Ti and Hei Ti fonts are clearer and more effective than other typefaces; and (iii) characters with fewer strokes lead to better accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
33. Visually Guided Performance of Simple Tasks Using Simulated Prosthetic Vision.
- Author
-
Hayes, Jasmine S., Yin, Vivian T., Piyathaisere, Duke, Weiland, James D., Humayun, Mark S., and Dagnelie, Gislin
- Subjects
- *
VISUAL acuity , *PHOTORECEPTORS , *BLINDNESS , *RETINAL (Visual pigment) , *ARTIFICIAL vision , *ELECTRICITY , *THERAPEUTICS - Abstract
Loss of photoreceptor cells is one of the major causes of blindness. Several groups are exploring the functional replacement of photoreceptors by a retinal prosthesis. The goal of this study was to simulate the vision levels that recipients of retinal prostheses with 4 × 4, 6 × 10, and 16 × 16 electrode arrays may experience, and to test the functionality of this vision. A PC video camera captured images that were converted in real time into dots (“pixels”). The PC monitor and a head-mounted display worn by test subjects displayed the pixelized images. To assess performance of normally sighted individuals with each array, we designed a set of tasks including: four-choice orientation discrimination of a Sloan letter E, object recognition and discrimination, a cutting task, a pouring task, symbol recognition, and two reading tasks. In the letter E task, subjects were found to have visual acuities of 20/1,810, 20/1,330, and 20/420 with the 4 × 4, 6 × 10, and 16 × 16 arrays, respectively. Most subjects were able to read fonts as small as 36 point with the 16 × 16 array, corresponding with a visual acuity of 20/600 in our system. The test subjects partially overcame the visual limitation of the system by scanning the video camera over the letters allowing spatial and temporal integration of visual information. In all categories, subjects performed best with the 16 × 16 array and least well with the 4 × 4 array. Even with the lowest resolution array, however, some subjects could recognize simple objects and symbols. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
34. Apport des dispositifs de restauration de la vision et de la résolution temporelle
- Author
-
Poujade, Mylène, Institut de la Vision, Centre National de la Recherche Scientifique (CNRS)-Sorbonne Université (SU)-Institut National de la Santé et de la Recherche Médicale (INSERM), Sorbonne Université, and Ryad Benosman
- Subjects
[SCCO.NEUR]Cognitive science/Neuroscience ,Rétinite pigmentaire ,Visual neuroprosthesis ,Simulated prosthetic vision ,Cécité ,Optogenetic therapy ,Blindness ,Retinitis pigmentosa ,Temporal resolution ,Résolution temporelle ,Neuroprothèses visuelles ,Thérapie optogénétique ,Vision prothétique simulée ,[INFO.INFO-BT]Computer Science [cs]/Biotechnology - Abstract
Retinitis pigmentosa is an inherited retinal degenerative disease leading to blindness. Vision restoration techniques have been developed, such as visual neuroprostheses and optogenetic therapy. The limitation of these devices is their spatial resolution. The IRIS I visual neuroprosthesis developed by Pixium Vision and GenSight Biologics' optogenetic therapy allow visual information to be captured and stimulation delivered with a high temporal resolution. Increasing the temporal resolution leads to more natural vision and should compensate for the low spatial resolution. Our study evaluates the contribution of these techniques, and of temporal resolution, towards useful vision. Healthy subjects wearing goggles simulating the vision arising from the devices were asked to perform everyday tasks at 60 Hz and 1440 Hz. The devices allow the tasks to be carried out, with greater ease for patients who would be treated with optogenetic therapy. Patients could thus regain autonomy in performing daily tasks. We also show that the quality of stimulation influences tasks requiring relatively good acuity. We did not identify any facilitation of these tasks through increased temporal resolution. According to the literature, an improvement in visual perception should accompany the increase in temporal resolution. We therefore set up a parametric study of temporal frequency through a directional discrimination task at three different speeds. From 120 Hz, the temporal resolution facilitates the task at medium and high speed. Based on these results, the speeds of the visual scenes in our previous experiment were too low for temporal resolution to improve perception.
- Published
- 2019
35. Contribution of vision restoration devices and temporal resolution
- Author
-
Poujade, Mylène, Institut de la Vision, Centre National de la Recherche Scientifique (CNRS)-Sorbonne Université (SU)-Institut National de la Santé et de la Recherche Médicale (INSERM), Sorbonne Université, Ryad Benosman, STAR, ABES, and Institut National de la Santé et de la Recherche Médicale (INSERM)-Sorbonne Université (SU)-Centre National de la Recherche Scientifique (CNRS)
- Subjects
[SCCO.NEUR]Cognitive science/Neuroscience ,[SCCO.NEUR] Cognitive science/Neuroscience ,Rétinite pigmentaire ,Visual neuroprosthesis ,Simulated prosthetic vision ,Cécité ,Optogenetic therapy ,Blindness ,Retinitis pigmentosa ,Temporal resolution ,Résolution temporelle ,[INFO.INFO-BT] Computer Science [cs]/Biotechnology ,Neuroprothèses visuelles ,Thérapie optogénétique ,Vision prothétique simulée ,[INFO.INFO-BT]Computer Science [cs]/Biotechnology - Abstract
Retinitis pigmentosa is an inherited retinal degenerative disease leading to blindness. Vision restoration techniques have been developed, such as visual neuroprostheses and optogenetic therapy. The limitation of these devices is their spatial resolution. The IRIS I visual neuroprosthesis developed by Pixium Vision and GenSight Biologics' optogenetic therapy allow visual information to be captured and stimulation delivered with a high temporal resolution. Increasing the temporal resolution leads to more natural vision and should compensate for the low spatial resolution. Our study evaluates the contribution of these techniques, and of temporal resolution, towards useful vision. Healthy subjects wearing goggles simulating the vision arising from the devices were asked to perform everyday tasks at 60 Hz and 1440 Hz. The devices allow the tasks to be carried out, with greater ease for patients who would be treated with optogenetic therapy. Patients could thus regain autonomy in performing daily tasks. We also show that the quality of stimulation influences tasks requiring relatively good acuity. We did not identify any facilitation of these tasks through increased temporal resolution. According to the literature, an improvement in visual perception should accompany the increase in temporal resolution. We therefore set up a parametric study of temporal frequency through a directional discrimination task at three different speeds. From 120 Hz, the temporal resolution facilitates the task at medium and high speed. Based on these results, the speeds of the visual scenes in our previous experiment were too low for temporal resolution to improve perception.
- Published
- 2019
36. Moving object recognition under simulated prosthetic vision using background-subtraction-based image processing strategies.
- Author
-
Wang, Jing, Lu, Yanyu, Gu, Liujun, Zhou, Chuanqing, and Chai, Xinyu
- Subjects
- *
OBJECT recognition (Computer vision) , *SIMULATION methods & models , *IMAGE processing , *MATHEMATICAL optimization , *INFORMATION theory , *STRATEGIC planning - Abstract
Abstract: A visual prosthesis that applies electrical stimulation to different parts of the visual pathway has been proposed as a viable approach to restoring functional vision. However, the created percept is currently limited due to the low-resolution images elicited by a limited number of stimulating electrodes. Thus, methods to optimize the visual percepts by providing useful visual information are being considered. We used two image-processing strategies based on a novel background subtraction technique to optimize the content of dynamic scenes of daily life. Psychophysical results showed that background reduction, or background reduction with foreground enhancement, increased response accuracy compared with methods that directly merged pixels to a lower resolution. By adding more gray-scale information, a background reduction/foreground enhancement strategy resulted in the best performance and highest recognition accuracy. Further development of image-processing modules for a visual prosthesis based on these results will assist implant recipients in avoiding dangerous situations and attaining independent mobility in daily life. [Copyright Elsevier]
- Published
- 2014
- Full Text
- View/download PDF
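The background-reduction / foreground-enhancement idea described in the abstract above can be sketched in a few lines. This is a toy NumPy illustration, not the authors' implementation; the function names (`background_reduce`, `downsample`) and all parameter values (difference threshold, dimming and boost factors, a 32×32 phosphene grid) are hypothetical:

```python
import numpy as np

def downsample(img, grid=(32, 32)):
    """Average-pool an image to the low-resolution phosphene grid."""
    h, w = img.shape
    gh, gw = grid
    return (img[:h - h % gh, :w - w % gw]
            .reshape(gh, h // gh, gw, w // gw)
            .mean(axis=(1, 3)))

def background_reduce(frame, background, thresh=25.0, dim=0.2,
                      boost=1.5, grid=(32, 32)):
    """Dim the static background, enhance the moving foreground,
    then reduce the result to the phosphene grid resolution."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    fg = diff > thresh                   # foreground = large deviation from model
    out = frame.astype(float) * dim      # suppress static background
    out[fg] = np.clip(frame[fg] * boost, 0, 255)  # emphasise moving objects
    return downsample(out, grid)
```

In a real pipeline the `background` frame would come from a running background model updated over time; here it is simply a reference image.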
37. Virtual reality validation of naturalistic modulation strategies to counteract fading in retinal stimulation.
- Author
-
Thorn JT, Chenais NAL, Hinrichs S, Chatelain M, and Ghezzi D
- Subjects
- Electric Stimulation, Humans, Phosphenes, Retina, Retinal Ganglion Cells, Vision, Ocular, Virtual Reality, Visual Prosthesis
- Abstract
Objective. Temporal resolution is a key challenge in artificial vision. Several prosthetic approaches are limited by the perceptual fading of evoked phosphenes upon repeated stimulation from the same electrode. Therefore, implanted patients are forced to perform active scanning, via head movements, to refresh the visual field viewed by the camera. However, active scanning is a draining task, and it is crucial to find compensatory strategies to reduce it. Approach. To address this question, we implemented perceptual fading in simulated prosthetic vision using virtual reality. Then, we quantified the effect of fading on two indicators: the time to complete a reading task and the head rotation during the task. We also tested whether stimulation strategies previously proposed to increase the persistence of retinal ganglion cell responses to electrical stimulation could improve these indicators. Main results. This study shows that stimulation strategies based on interrupted pulse trains and randomisation of the pulse duration allow a significant reduction in both the time to complete the task and the head rotation during the task. Significance. The stimulation strategy used in retinal implants is crucial to counteract perceptual fading and to reduce active head scanning during prosthetic vision. In turn, less active scanning might improve the patient's comfort in artificial vision. (Creative Commons Attribution license.)
- Published
- 2022
- Full Text
- View/download PDF
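A minimal toy model can illustrate why interrupting a pulse train counteracts perceptual fading: continuous stimulation accumulates adaptation that suppresses the phosphene, while gaps let adaptation recover. The dynamics below (`k_fade`, `tau_rec`, exponential recovery) are illustrative assumptions, not the retinal model used in the study:

```python
import math

def simulate_fading(stim_pattern, dt=0.05, k_fade=0.3, tau_rec=0.5):
    """Toy fading model: each pulse builds adaptation, which dims the
    evoked phosphene; gaps in the pulse train let adaptation decay."""
    a = 0.0                  # adaptation level in [0, 1]
    percept = []
    for s in stim_pattern:   # s = 1 (pulse on) or 0 (gap)
        if s:
            percept.append(1.0 - a)       # brightness suppressed by adaptation
            a += k_fade * (1.0 - a)       # adaptation accumulates
        else:
            percept.append(0.0)
            a *= math.exp(-dt / tau_rec)  # adaptation recovers during gaps
    return percept

# Continuous stimulation vs. an interrupted pulse train of the same rate:
continuous = simulate_fading([1] * 20)
interrupted = simulate_fading(([1] * 4 + [0] * 2) * 4)
```

Under these toy parameters the interrupted train keeps the evoked brightness well above the level reached by continuous stimulation, which is the qualitative effect the study exploits.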
38. Immersive Virtual Reality Simulations of Bionic Vision.
- Author
-
Kasowski J and Beyeler M
- Abstract
Bionic vision uses neuroprostheses to restore useful vision to people living with incurable blindness. However, a major outstanding challenge is predicting what people "see" when they use their devices. The limited field of view of current devices necessitates head movements to scan the scene, which is difficult to simulate on a computer screen. In addition, many computational models of bionic vision lack biological realism. To address these challenges, we present VR-SPV, an open-source virtual reality toolbox for simulated prosthetic vision that uses a psychophysically validated computational model to allow sighted participants to "see through the eyes" of a bionic eye user. To demonstrate its utility, we systematically evaluated how clinically reported visual distortions affect performance in a letter recognition and an immersive obstacle avoidance task. Our results highlight the importance of using an appropriate phosphene model when predicting visual outcomes for bionic vision.
- Published
- 2022
- Full Text
- View/download PDF
39. Image processing based recognition of images with a limited number of pixels using simulated prosthetic vision
- Author
-
Zhao, Ying, Lu, Yanyu, Tian, Yukun, Li, Liming, Ren, Qiushi, and Chai, Xinyu
- Subjects
- *
PIXELS , *ARTIFICIAL vision , *IMAGE processing , *BIOMEDICAL engineering , *SIMULATION methods & models , *OPTICAL resolution - Abstract
Abstract: Visual prostheses based on micro-electronic technologies and biomedical engineering have been demonstrated to restore vision to blind individuals. It is necessary to determine the minimum requirements for useful artificial vision in image recognition. To find the primary factors in the recognition of common object and scene images, and to optimize recognition accuracy on low-resolution images using image processing strategies, we investigated the effects of two kinds of image processing methods, two common pixel shapes (square and circular) and six resolutions (8×8, 16×16, 24×24, 32×32, 48×48 and 64×64). The results showed that mean recognition accuracy increased with the number of pixels. The recognition threshold for objects was within the interval of 16×16 to 24×24 pixels; for simple scenes, it was between 32×32 and 48×48 pixels. Near the recognition threshold, different image modes had a great impact on recognition accuracy. The images with "threshold pixel number and binarization-circular points" produced the best recognition results. [Copyright Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
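The pixelization procedure compared in this study (reducing an image to an n×n grid, then rendering each cell as a square or circular pixel) can be sketched as follows. The helper names and the 9-pixel dot size are hypothetical choices for illustration:

```python
import numpy as np

def pixelate(img, n):
    """Reduce an image to an n×n grid by block averaging
    (the 'direct pixel merging' baseline)."""
    h, w = img.shape
    return (img[:h - h % n, :w - w % n]
            .reshape(n, h // n, n, w // n)
            .mean(axis=(1, 3)))

def render_circular(grid, dot=9):
    """Render each grid cell as a circular dot, one of the two
    phosphene shapes (square vs. circular) compared in the study."""
    n = grid.shape[0]
    canvas = np.zeros((n * dot, n * dot))
    yy, xx = np.mgrid[:dot, :dot]
    r = dot // 2
    mask = (yy - r) ** 2 + (xx - r) ** 2 <= r ** 2
    for i in range(n):
        for j in range(n):
            cell = canvas[i * dot:(i + 1) * dot, j * dot:(j + 1) * dot]
            cell[mask] = grid[i, j]      # writes through the view into canvas
    return canvas
```

Sweeping `n` over 8, 16, 24, 32, 48 and 64 reproduces the six resolutions tested; a binarization step (thresholding `grid`) would give the "binarization-circular points" mode.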
40. Naviguer en vision prothétique simulée : apport de la vision par ordinateur pour augmenter les rendus prothétiques de basse résolution
- Author
-
Vergnieux, Victor, Institut de recherche en informatique de Toulouse (IRIT), Université Toulouse 1 Capitole (UT1), Université Fédérale Toulouse Midi-Pyrénées-Université Fédérale Toulouse Midi-Pyrénées-Université Toulouse - Jean Jaurès (UT2J)-Université Toulouse III - Paul Sabatier (UT3), Université Fédérale Toulouse Midi-Pyrénées-Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique (Toulouse) (Toulouse INP), Université Fédérale Toulouse Midi-Pyrénées, Université Paul Sabatier - Toulouse III, Christophe Jouffrais, and Marc Macé
- Subjects
Visual Neuroprosthesis, Virtual Reality, Spatial Cognition, Simulated Prosthetic Vision, [INFO.EIAH]Computer Science [cs]/Technology for Human Learning, Navigation
Blindness affects thirty-nine million people in the world and creates numerous difficulties in everyday life. In particular, navigation abilities (which include wayfinding and mobility) are heavily diminished, which leads blind people to limit, and eventually stop, walking outside. Visual neuroprostheses are being developed to restore such "visual" perception and help them regain some autonomy. These implants deliver electrical micro-stimulations to the retina, the optic nerve or the visual cortex, eliciting blurry dots called "phosphenes" that are mainly white, grey or yellow. The complete device comprises a wearable camera, a small computer, and the implant connected to the computer. The implant's resolution and position directly determine the quality of the restored visual perception. Current implants include fewer than a hundred electrodes, so the resolution of the visual stream must be drastically reduced to match the implant resolution. For instance, the Argus II implant from Second Sight (Sylmar, California), the most advanced commercialized visual implant, uses only sixty electrodes, meaning that blind Argus II wearers can perceive only sixty phosphenes simultaneously. This restored vision is therefore quite poor, and signal optimization is required to make implant use functional. Implanted blind patients are involved in restricted clinical trials and are difficult to reach. However, the possibilities offered by these implants can be studied by simulating prosthetic vision in a head-mounted display worn by sighted subjects; this is the field of simulated prosthetic vision (SPV). Navigation has never been studied with implanted patients, and only a few SPV studies have approached the topic. This thesis focuses on the study of navigation in SPV.
Computer vision allowed us to select which scene elements to display in order to help subjects navigate and build a spatial representation of the environment. We used psychological models of navigation to design and evaluate SPV renderings. Subjects had to find their way and collect elements in an SPV navigation task inspired by video games for the blind. To evaluate their performance we used a performance index based on completion time; to evaluate their mental representation, we asked them to draw the environment layout after the task for each rendering. This double evaluation allowed us to identify which elements can and should be displayed in low-resolution SPV in order to navigate. In particular, the results show that to be understandable in low vision, a scene must be simple and the structure of the environment should not be hidden. When implanted blind patients become more accessible, these results can be confirmed or refuted by evaluating their navigation in virtual and then real environments.
- Published
- 2015
41. Simulating prosthetic vision: II. Measuring functional capacity
- Author
-
Gregg J. Suaning, John W. Morley, Spencer C. Chen, and Nigel H. Lovell
- Subjects
Visual acuity, Population, Phosphenes, Functional vision, Blindness, Human-computer interaction, Reading (process), Psychophysics, Humans, Adaptation (computer science), Visual search, Communication, Simulated prosthetic vision, Prostheses and Implants, Prosthetic vision, Sensory Systems, Visual field, Vision science, Ophthalmology, Treatment Outcome, Pattern Recognition, Visual, Reading, Visual prosthesis, Psychology
Investigators of microelectronic visual prosthesis devices have found that some aspects of vision can be restored in the form of spots of light in the visual field, so-called "phosphenes", from which richer and more complex scenes may be composed. However, questions remain about how well such a form of vision allows its recipients to "see" and to carry out everyday activities. Through simulations of prosthetic vision, researchers can experience first-hand many performance and behavioral aspects of prosthetic vision, and studies conducted on larger populations can inform performance and behavioral preferences both in general and in individual cases. This review examines the findings of the various investigations of the functional capacity of prosthetic vision conducted through simulations, especially on the topics of letter acuity, reading, navigation, learning and visual scanning adaptation. Central to the review, letter acuity is posited as a reference measurement so that results and performance trends across the various simulation models and functional assessment tasks can be more readily compared and generalized. Future directions for simulation-based research are discussed with respect to designing a functional visual prosthesis, improving functional vision in near-term low-phosphene-count devices, and pursuing image processing strategies to impart the most comprehensible prosthetic vision.
- Published
- 2009
- Full Text
- View/download PDF
42. Simulated prosthetic vision: The benefits from computer-based object recognition and localization
- Author
-
Macé, Marc J.-M., Guivarch, Valérian, Denis, Grégoire, Jouffrais, Christophe, Institut National Polytechnique de Toulouse - Toulouse INP (FRANCE), Centre National de la Recherche Scientifique - CNRS (FRANCE), Université Toulouse III - Paul Sabatier - UT3 (FRANCE), Université Toulouse - Jean Jaurès - UT2J (FRANCE), Université Toulouse 1 Capitole - UT1 (FRANCE), Institut National Polytechnique de Toulouse - INPT (FRANCE), Etude de L’Interaction Personne SystèmE (IRIT-ELIPSE), Institut de recherche en informatique de Toulouse (IRIT), Université Toulouse 1 Capitole (UT1), Université Fédérale Toulouse Midi-Pyrénées-Université Fédérale Toulouse Midi-Pyrénées-Université Toulouse - Jean Jaurès (UT2J)-Université Toulouse III - Paul Sabatier (UT3), Université Fédérale Toulouse Midi-Pyrénées-Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique (Toulouse) (Toulouse INP), Université Fédérale Toulouse Midi-Pyrénées-Université Toulouse 1 Capitole (UT1), Université Fédérale Toulouse Midi-Pyrénées, Centre National de la Recherche Scientifique (CNRS), and Systèmes Multi-Agents Coopératifs (IRIT-SMAC)
- Subjects
genetic structures, Blindness-visual neuroprosthesis, Simulated prosthetic vision, [INFO.INFO-CV]Computer Science [cs]/Computer Vision and Pattern Recognition [cs.CV], Computer vision, Visual impairment
International audience; Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored by current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of upcoming generations of visual neuroprostheses. Recent SPV studies showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene, and evaluated its usability for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) is sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, both in terms of accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high-level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprostheses. We suggest that this method, i.e. localization of targets of interest in the scene, may restore various visuomotor behaviors, and could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly.
- Published
- 2015
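The localization-based rendering evaluated above can be illustrated with a deliberately simplified sketch: a stand-in detector returns an object position, and the single closest electrode of a 3×3 (nine-electrode) grid is activated. Both functions are hypothetical simplifications; the study relied on real object-recognition algorithms, not brightness thresholding:

```python
import numpy as np

def locate_object(frame, thresh=128):
    """Hypothetical stand-in for the object-localization step:
    returns the normalized centroid of bright pixels, or None."""
    ys, xs = np.nonzero(frame > thresh)
    if len(xs) == 0:
        return None
    return xs.mean() / frame.shape[1], ys.mean() / frame.shape[0]

def to_electrode_grid(pos, n=3):
    """Light the single electrode (of n×n) closest to the detected
    object, mimicking the nine-electrode condition of the study."""
    grid = np.zeros((n, n))
    if pos is not None:
        x, y = pos
        grid[min(int(y * n), n - 1), min(int(x * n), n - 1)] = 1.0
    return grid
```

The point of the sketch is that once localization is done off-camera, a single phosphene is enough to guide a reaching movement toward the target.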
43. Development of assistive peripheral prosthetic vision designs for guiding mobility using virtual-reality simulations
- Author
-
Zapf, Marc Patrick
- Subjects
simulated prosthetic vision, genetic structures, retinitis pigmentosa, phosphenes, peripheral vision, virtual reality, visual prosthesis, electrical stimulation, eye diseases
Retinal prosthetic devices evoke rudimentary vision, i.e. dot-shaped percepts ("phosphenes") that may be combined to produce pattern vision. Such devices may be appropriate for persons blinded by retinodegenerative diseases like retinitis pigmentosa (RP), and aim at guiding daily activities such as mobility. Due to their novelty, current devices are limited to persons with end-stage RP and no more than bare light perception. They are further designed to elicit phosphenes in the central visual field (VF). Yet, persons commonly experience mobility impairment in intermediate stages of RP, and mid-peripheral VF zones have been identified as crucial for mobility. This thesis explores alternative electrode array layouts for the peripheral VF to assess their utility prior to the full progression of the retinodegenerative disease. Beneficial effects of peripheral phosphene vision for mobility when coexisting with residual central vision of 10 degrees of visual angle (legal blindness), such as that experienced in mid-stage RP, are assessed. Further, the efficacy of peripheral phosphene vision in the absence of residual vision, as in end-stage RP, is evaluated against a central phosphene layout. A photorealistic mobility-testing framework for simulated prosthetic and residual vision was developed. In subsequent stages of the project, the performance of normally sighted participants (N=25) carrying out navigational tasks in a virtual environment, using various layouts of peripheral and central prosthetic vision, was assessed. Peripheral phosphene layouts complementing central residual vision were found particularly useful when circumventing low-lying obstacles, increasing walking speeds by up to 14% and reducing collisions by up to 21% compared to plain residual vision. Visual-scanning-related head movements (VSRHM) were reduced by up to 42.1%.
In the absence of residual vision, peripheral phosphene layouts yielded up to 5.6% higher walking speeds, up to 73% fewer collisions, and up to 60.7% less VSRHM compared to a central phosphene layout. While the benefits of peripheral phosphene layouts were task-specific, future adjustments in retinal prosthesis designs to realise a peripheral prosthesis might increase the target group and time frame for retinal prosthetic therapy, and thus the benefit to persons with visual impairment.
- Published
- 2015
- Full Text
- View/download PDF
44. Development of assistive peripheral prosthetic vision designs for guiding mobility using virtual-reality simulations
- Author
-
Suaning, Gregg, Graduate School of Biomedical Engineering, Faculty of Engineering, UNSW, Boon, Mei-Ying, Optometry & Vision Science, Faculty of Science, UNSW, Lovell, Nigel, Graduate School of Biomedical Engineering, Faculty of Engineering, UNSW, and Zapf, Marc Patrick, Graduate School of Biomedical Engineering, Faculty of Engineering, UNSW
- Abstract
Retinal prosthetic devices evoke rudimentary vision, i.e. dot-shaped percepts ("phosphenes") that may be combined to produce pattern vision. Such devices may be appropriate for persons blinded by retinodegenerative diseases like retinitis pigmentosa (RP), and aim at guiding daily activities such as mobility. Due to their novelty, current devices are limited to persons with end-stage RP and no more than bare light perception. They are further designed to elicit phosphenes in the central visual field (VF). Yet, persons commonly experience mobility impairment in intermediate stages of RP, and mid-peripheral VF zones have been identified as crucial for mobility. This thesis explores alternative electrode array layouts for the peripheral VF to assess their utility prior to the full progression of the retinodegenerative disease. Beneficial effects of peripheral phosphene vision for mobility when coexisting with residual central vision of 10 degrees of visual angle (legal blindness), such as that experienced in mid-stage RP, are assessed. Further, the efficacy of peripheral phosphene vision in the absence of residual vision, as in end-stage RP, is evaluated against a central phosphene layout. A photorealistic mobility-testing framework for simulated prosthetic and residual vision was developed. In subsequent stages of the project, the performance of normally sighted participants (N=25) carrying out navigational tasks in a virtual environment, using various layouts of peripheral and central prosthetic vision, was assessed. Peripheral phosphene layouts complementing central residual vision were found particularly useful when circumventing low-lying obstacles, increasing walking speeds by up to 14% and reducing collisions by up to 21% compared to plain residual vision. Visual-scanning-related head movements (VSRHM) were reduced by up to 42.1%. In the absence of residual vision, peripheral phosphene layouts yielded up to 5.6% higher walking speeds
- Published
- 2015
45. Simulating prosthetic vision: I. Visual models of phosphenes
- Author
-
Nigel H. Lovell, Spencer C. Chen, Gregg J. Suaning, and John W. Morley
- Subjects
Visual perception, genetic structures, Models, Neurological, Models, Psychological, Phosphenes, Blindness, Vision disorder, User-Computer Interface, Human-computer interaction, Perception, Psychophysics, Humans, Electric stimulation, Simulated prosthetic vision, Prostheses and Implants, eye diseases, Sensory Systems, Ophthalmology, Visual prosthesis, Virtual reality, Psychology
With increasing research advances and clinical trials of visual prostheses, there is significant demand to better understand the perceptual and psychophysical aspects of prosthetic vision. In prosthetic vision, a visual scene is composed of relatively large, isolated spots of light, so-called "phosphenes", very much like a magnified pictorial print. The utility of prosthetic vision has been studied by administering virtual-reality visual models (simulations) of prosthetic vision to normally sighted subjects. In this review, the simulations from these investigations are examined with respect to how they visually render the phosphenes and the virtual-reality apparatus involved. A comparison is made between these simulations and the actual descriptions of phosphenes reported from human trials of visual prosthesis devices. For the results of these simulation studies to be relevant to the experience of visual prosthesis recipients, it is important that the simulated phosphenes be consistent with the descriptions from human trials. A standardized simulation and reporting framework is proposed so that future simulations may be configured to be more realistic to the experience of implant recipients, the simulation parameters from different investigators more readily extracted, and study results more fittingly compared.
- Published
- 2009
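A common way such simulations render phosphenes is as a regular grid of isolated Gaussian light spots. The sketch below is one generic example of such a visual model, with arbitrary size and spread parameters; it is not the standardized framework the review proposes:

```python
import numpy as np

def render_phosphenes(levels, size=20, sigma=0.3):
    """Render a grid of brightness levels as isolated Gaussian light
    spots, one simple visual model of phosphenes used in SPV work."""
    n = levels.shape[0]
    # Normalized coordinates in [-0.5, 0.5] within each phosphene cell.
    y, x = np.mgrid[:size, :size] / (size - 1) - 0.5
    spot = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    out = np.zeros((n * size, n * size))
    for i in range(n):
        for j in range(n):
            out[i * size:(i + 1) * size,
                j * size:(j + 1) * size] = levels[i, j] * spot
    return out
```

Richer models add per-phosphene position jitter, size variation, and dropout to match the irregular percepts reported in human trials.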
46. MEMS-based system and image processing strategy for epiretinal prosthesis.
- Author
-
Xia P, Hu J, Qi J, Gu C, and Peng Y
- Subjects
- Electrodes, Implanted, Equipment Failure Analysis, Prosthesis Design, Wireless Technology instrumentation, Electric Stimulation Therapy instrumentation, Image Interpretation, Computer-Assisted instrumentation, Implantable Neurostimulators, Micro-Electrical-Mechanical Systems instrumentation, Signal Processing, Computer-Assisted instrumentation, Visual Prosthesis
- Abstract
Retinal prostheses have the potential to restore some level of visual function to patients suffering from retinal degeneration. In this paper, an epiretinal approach with active stimulation devices is presented. The MEMS-based processing system consists of an external micro-camera, an information processor, an implanted electrical stimulator and a microelectrode array. An image processing strategy combining image clustering and enhancement techniques was proposed and evaluated in psychophysical experiments. The results indicated that this strategy improved visual performance compared with directly merging pixels to a low resolution. Such image processing methods can assist epiretinal prostheses in vision restoration.
- Published
- 2015
- Full Text
- View/download PDF
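The image-clustering step can be approximated by quantizing gray levels with a simple 1-D k-means, as sketched below. This is an illustrative stand-in that assumes nothing about the authors' actual clustering or enhancement algorithms:

```python
import numpy as np

def quantize_gray(img, k=4, iters=10):
    """Cluster gray levels into k groups with a simple 1-D k-means,
    so each region of the image maps to one representative level."""
    vals = img.ravel().astype(float)
    centers = np.linspace(vals.min(), vals.max(), k)  # spread initial centers
    for _ in range(iters):
        # Assign every pixel to its nearest center, then recompute centers.
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            members = vals[labels == c]
            if members.size:
                centers[c] = members.mean()
    labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
    return centers[labels].reshape(img.shape)
```

After quantization, an enhancement pass (e.g. contrast stretching of the k levels) and downsampling to the electrode grid would complete a pipeline of the kind the abstract describes.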
47. Simulated Prosthetic Vision: Improving text accessibility with retinal prostheses
- Author
-
Corinne Mailhes, Christophe Jouffrais, Grégoire Denis, Marc J.-M. Macé, Etude de L’Interaction Personne SystèmE (IRIT-ELIPSE), Institut de recherche en informatique de Toulouse (IRIT), Université Toulouse 1 Capitole (UT1), Université Fédérale Toulouse Midi-Pyrénées-Université Fédérale Toulouse Midi-Pyrénées-Université Toulouse - Jean Jaurès (UT2J)-Université Toulouse III - Paul Sabatier (UT3), Université Fédérale Toulouse Midi-Pyrénées-Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique (Toulouse) (Toulouse INP), Université Fédérale Toulouse Midi-Pyrénées-Université Toulouse 1 Capitole (UT1), Université Fédérale Toulouse Midi-Pyrénées, Centre National de la Recherche Scientifique (CNRS), Signal et Communications (IRIT-SC), Institut National Polytechnique (Toulouse) (Toulouse INP), Université Toulouse 1 Capitole (UT1)-Université Toulouse - Jean Jaurès (UT2J)-Université Toulouse III - Paul Sabatier (UT3), Université Fédérale Toulouse Midi-Pyrénées-Université Fédérale Toulouse Midi-Pyrénées-Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique (Toulouse) (Toulouse INP), Université Fédérale Toulouse Midi-Pyrénées-Université Toulouse 1 Capitole (UT1)-Université Toulouse - Jean Jaurès (UT2J)-Université Toulouse III - Paul Sabatier (UT3), Laboratoire Cherchons Pour Voir (CPV), Université Fédérale Toulouse Midi-Pyrénées-Université Fédérale Toulouse Midi-Pyrénées-Institut des Jeunes Aveugles [Toulouse] (IJA)-Centre National de la Recherche Scientifique (CNRS)-Institut National Polytechnique (Toulouse) (Toulouse INP), Institut National Polytechnique de Toulouse - Toulouse INP (FRANCE), Centre National de la Recherche Scientifique - CNRS (FRANCE), Université Toulouse III - Paul Sabatier - UT3 (FRANCE), Université Toulouse - Jean Jaurès - UT2J (FRANCE), Université Toulouse 1 Capitole - UT1 (FRANCE), and Institut National Polytechnique de Toulouse - INPT (FRANCE)
- Subjects
Male, Female, Time Factors, Computer science, [SCCO.COMP]Cognitive science/Computer science, Image processing, Retina, Rendering (computer graphics), Pattern Recognition, Automated, [SCCO]Cognitive science, User-Computer Interface, [INFO.INFO-IM]Computer Science [cs]/Medical Imaging, Image Processing, Computer-Assisted, Humans, Computer vision, Computer Simulation, [INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC], Vision, Ocular, Medical imaging, Clinical Trials as Topic, Eye, Artificial, [SCCO.NEUR]Cognitive science/Neuroscience, Neurosciences, Simulated prosthetic vision, Reproducibility of Results, Retinal, Text accessibility, Visual Prosthesis, [INFO.INFO-TI]Computer Science [cs]/Image Processing [eess.IV], [SCCO.PSYC]Cognitive science/Psychology, [INFO.INFO-ET]Computer Science [cs]/Emerging Technologies [cs.ET], Artificial intelligence, Algorithms
International audience; Image processing can improve significantly the every-day life of blind people wearing current and upcoming retinal prostheses relying on an external camera. We propose to use a real-time text localization algorithm to improve text accessibility. An augmented text-specific rendering based on automatic text localization has been developed. It has been evaluated in comparison to the classical rendering through a Simulated Prosthetic Vision (SPV) experiment with 16 subjects. Subjects were able to detect text in natural scenes much faster and further with the augmented rendering compared to the control rendering. Our results show that current and next generation of low resolution retinal prostheses may benefit from real-time text detection algorithms.
48. Wayfinding with simulated prosthetic vision: performance comparison with regular and structure-enhanced rendering
- Author
-
Christophe Jouffrais, Marc J.-M. Macé, and Victor Vergnieux; Etude de L'Interaction Personne SystèmE (IRIT-ELIPSE), Institut de recherche en informatique de Toulouse (IRIT); Laboratoire Cherchons Pour Voir (CPV), Institut des Jeunes Aveugles [Toulouse] (IJA); Centre National de la Recherche Scientifique (CNRS); Institut National Polytechnique de Toulouse (Toulouse INP); Université Toulouse 1 Capitole (UT1); Université Toulouse - Jean Jaurès (UT2J); Université Toulouse III - Paul Sabatier (UT3); Université Fédérale Toulouse Midi-Pyrénées
- Subjects
Adult ,Male ,Artificial vision algorithms ,Computer science ,Phosphenes ,[INFO.INFO-DS]Computer Science [cs]/Data Structures and Algorithms [cs.DS] ,[SCCO.COMP]Cognitive science/Computer science ,Models, Biological ,Rendering (computer graphics) ,Young Adult ,[SCCO]Cognitive science ,Humans ,Computer Simulation ,Computer vision ,[INFO.INFO-HC]Computer Science [cs]/Human-Computer Interaction [cs.HC] ,Vision, Ocular ,Cognitive map ,Eye, Artificial ,Simulated prosthetic vision ,Cognition ,Computer vision techniques ,Phosphene ,Performance comparison ,[SCCO.PSYC]Cognitive science/Psychology ,Female ,[INFO.EIAH]Computer Science [cs]/Technology for Human Learning ,[INFO.INFO-ET]Computer Science [cs]/Emerging Technologies [cs.ET] ,Artificial intelligence ,Algorithms - Abstract
International audience; In this study, we used a simulation of upcoming low-resolution visual neuroprostheses to evaluate the benefit of embedded computer vision techniques in a wayfinding task. We showed that augmenting the classical phosphene rendering with the basic structure of the environment - displaying the ground plane with a different level of brightness - increased both wayfinding performance and cognitive mapping. In spite of the low resolution of current and upcoming visual implants, the improvement of these cognitive functions may already be possible with embedded artificial vision algorithms.
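The structure-enhanced rendering described in this abstract (displaying the ground plane at its own brightness level, distinct from structure edges and background) could be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a per-pixel semantic label map is already available from a computer vision pipeline, and the grid size, brightness values, and the name `structure_enhanced_render` are illustrative.

```python
import numpy as np

def structure_enhanced_render(labels, grid=(8, 8),
                              ground_level=0.4, edge_level=1.0):
    """Render a coarse phosphene grid from a per-pixel label map
    (0 = background, 1 = ground plane, 2 = structure edge),
    giving the ground plane a distinct intermediate brightness."""
    h, w = labels.shape
    gh, gw = grid
    out = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            # label-map block behind this phosphene
            block = labels[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw]
            if (block == 2).any():
                out[i, j] = edge_level      # structure edges dominate
            elif (block == 1).mean() > 0.5:
                out[i, j] = ground_level    # mostly ground plane
            # otherwise left dark (background)
    return out
```

The design point is that with only a handful of phosphene brightness levels, reserving one level for the walkable ground plane conveys scene structure without adding resolution.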