5,596 results for "Monocular vision"
Search Results
2. A Reference-Based Approach for Tumor Size Estimation in Monocular Laparoscopic Videos
- Author
- Mousavi, Seyed Amir, Tozzi, Francesca, Park, Homin, Anzaku, Esla Timothy, Van Liefferinge, Matthias, Rashidian, Nikdokht, Willaert, Wouter, De Neve, Wesley, Wu, Jia, editor, Qin, Wenjian, editor, Li, Chao, editor, and Kim, Boklye, editor
- Published
- 2025
3. Research on distance measurement of vehicles in front of campus patrol vehicles based on monocular vision.
- Author
- Zheng, Lei, Liu, Lei, Lu, Jingyu, Tian, Jie, Cheng, Yong, and Yin, Wei
- Abstract
Vision-sensor-based perception systems for intelligent vehicles are maturing, and many monocular distance measurement methods have been proposed. However, little attention has been paid to experiments and applications measuring the distance to the vehicle ahead of a campus intelligent patrol vehicle. This paper proposes two monocular distance measurement methods, one based on the width of the detected target and one based on the target's contact point with the ground. The two measurement models are derived from the camera imaging model and coordinate system transformations, and the corresponding experiments are designed and compared. The results show that the method based on the target's ground contact point performs worst, though its absolute error stays below 0.67 m. The method based on license plate width has the smallest overall error, with an absolute error below 0.15 m; using the width of the vehicle body detection box instead yields a larger error, kept within 0.31 m. Both methods meet the detection requirements for measuring the distance to the vehicle ahead of a campus patrol vehicle. This work supports research on forward-vehicle distance measurement for campus patrol vehicles and helps enhance their operational safety. [ABSTRACT FROM AUTHOR]
- Published
- 2024
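Under a pinhole camera model, the license-plate-width method described in this abstract reduces to a similar-triangles relation. A minimal sketch (the focal length and plate width below are illustrative assumptions, not values from the paper):

```python
def distance_from_width(focal_px: float, real_width_m: float, pixel_width: float) -> float:
    """Estimate the range to an object of known physical width.

    Pinhole model, similar triangles: Z = f * W / w, where f is the focal
    length in pixels, W the real width, and w the imaged width in pixels.
    """
    return focal_px * real_width_m / pixel_width

# Illustrative values: a 0.44 m license plate imaged 44 px wide by a
# camera with a 1000 px focal length lies about 10 m ahead.
print(distance_from_width(1000.0, 0.44, 44.0))
```

The same relation explains why the plate-width variant outperforms the body-width one: the plate has a tightly standardized physical width, while vehicle bodies vary.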
4. LR-SLAM: Visual Inertial SLAM System with Redundant Line Feature Elimination.
- Author
- Jiang, Hao, Cang, Naimeng, Lin, Yuan, Guo, Dongsheng, and Zhang, Weidong
- Abstract
The present study focuses on the simultaneous localization and mapping (SLAM) system based on point and line features. Aiming to address the prevalent issue of repeated detection during line feature extraction in low-texture environments, a novel method for merging redundant line features is proposed. This method effectively mitigates the problem of increased initial pose estimation error that arises when the same line is erroneously detected as multiple lines in adjacent frames. Furthermore, recognizing the potential for the introduction of line features to prolong the marginalization process of the information matrix, optimization strategies are employed to accelerate this process. Additionally, to tackle the issue of insufficient point feature accuracy, subpixel technology is introduced to enhance the precision of point features, thereby further reducing errors. Experimental results on the European Robotics Challenge (EUROC) public dataset demonstrate that the proposed LR-SLAM system exhibits significant advantages over mainstream SLAM systems such as ORB-SLAM3, VINS-Mono, and PL-VIO in terms of accuracy, efficiency, and robustness. [ABSTRACT FROM AUTHOR]
- Published
- 2024
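The abstract does not state LR-SLAM's exact merging criterion; a common heuristic for detecting the redundant line features it describes (a sketch, with hypothetical thresholds) flags two segments as duplicates when their directions agree and one lies close to the infinite line through the other:

```python
import math

def _direction(seg):
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi  # direction modulo pi

def is_redundant(seg_a, seg_b, angle_tol=0.05, dist_tol=2.0):
    """Heuristic duplicate test for 2D line segments (pixel units).

    Merge candidates when directions agree within angle_tol radians and
    the midpoint of seg_b lies within dist_tol pixels of the infinite
    line through seg_a.
    """
    da = abs(_direction(seg_a) - _direction(seg_b))
    if min(da, math.pi - da) > angle_tol:
        return False
    (x1, y1), (x2, y2) = seg_a
    mx = (seg_b[0][0] + seg_b[1][0]) / 2.0
    my = (seg_b[0][1] + seg_b[1][1]) / 2.0
    # Perpendicular distance from (mx, my) to the line through seg_a.
    num = abs((y2 - y1) * mx - (x2 - x1) * my + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1) <= dist_tol
```

Merging segments that pass such a test prevents the same physical edge from contributing multiple (conflicting) constraints to the initial pose estimate.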
5. Evaluating the small aperture intraocular lens: depth of focus and the role of refraction and preoperative corneal astigmatism in visual performance.
- Author
- Vukich, John, Thompson, Vance, Yeu, Elizabeth, Wiley, William F., Bafna, Shamik, Koch, Douglas D., Ling Lin, and Michna, Magda
- Subjects
- INTRAOCULAR lenses, BINOCULAR vision, MONOCULAR vision, VISUAL acuity, ASTIGMATISM, PHOTOREFRACTIVE keratectomy, PHACOEMULSIFICATION
- Abstract
Purpose: To evaluate depth of focus (DOF) and visual acuities (VAs) by manifest refractive spherical equivalent (MRSE) and degree of preoperative corneal astigmatism with the IC-8 small aperture intraocular lens (SA IOL) (Apthera). Setting: 21 investigational sites in the United States. Design: Prospective, multicenter, open-label, parallel-group, nonrandomized, examiner-masked, 1-year clinical study. Methods: Included patients had cataract and ≤1.5 diopters (D) preoperative corneal astigmatism. Patients received either the SA IOL in 1 eye targeted to -0.75 D and a monofocal or monofocal toric IOL in the other targeted to plano (SA IOL group) or bilateral monofocal/monofocal toric IOLs targeted to plano (control group). Monocular and binocular assessments included defocus curves and uncorrected VAs (distance, intermediate, and near) by postoperative MRSE; monocular VAs were assessed by degree of preoperative corneal astigmatism. Results: The SA IOL group (n = 343) achieved 0.82 D additional binocular DOF vs the control group (n = 110), and SA IOL eyes achieved 0.91 D additional monocular DOF over fellow eyes. Across all MRSEs, the SA IOL group achieved monocular uncorrected VAs of 20/40 or better and binocular uncorrected VAs of 20/32 or better across all distances. In addition, SA IOL eyes with higher (1.0-1.5 D) vs lower (<1.0 D) preoperative corneal astigmatism achieved equivalent monocular uncorrected VAs. Conclusions: The SA IOL provides increased DOF vs monofocal/monofocal toric IOLs and consistent monocular and binocular vision across several postoperative MRSEs and up to 1.5 D of preoperative corneal astigmatism, giving patients with cataract and mild astigmatism the potential for an extended range of vision and reliable visual outcomes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
6. Clinical Spectrum of Ophthalmic Manifestations in Myelin Oligodendrocyte Glycoprotein-Associated Disease (MOGAD): A Comprehensive Case Report.
- Author
- Dhoot, Sanjeev Kumar, Lakhanpal, Vikas, Peer, Sameer, and Prakash, Sugandha
- Subjects
- MYELIN oligodendrocyte glycoprotein, MONOCULAR vision, MAGNETIC resonance imaging, OPTICAL coherence tomography, OCULAR manifestations of general diseases
- Abstract
Purpose: To describe diverse ocular manifestations in a patient with myelin oligodendrocyte glycoprotein-associated disease (MOGAD). Methods: A 15-year-old Indian male had severe loss of vision in one eye, followed by a recurrent attack of optic neuritis in the fellow eye a few weeks later. He had a history of vision loss, speech disturbances, and altered sensorium, and was a confirmed case of MOGAD. Apart from optic neuritis, other rare ophthalmic associations were noted, namely macular neuroretinopathy, retinal haemorrhages, severe optic nerve head edema, perineuritis, and orbital enhancement on magnetic resonance imaging (MRI). Results: He responded dramatically to treatment with intravenous pulse steroids, and relapses were controlled with long-term immunomodulation therapy. Conclusion: This case report reiterates the need for early treatment with pulse steroids in MOGAD and depicts the heterogeneous involvement of various ocular structures in the disease. [ABSTRACT FROM AUTHOR]
- Published
- 2024
7. From Pixels to Prepositions: Linking Visual Perception with Spatial Prepositions Far and Near.
- Author
- Raj S R, Krishna, Chakravarthy V, Srinivasa, and Sahoo, Anindita
- Abstract
Human language is influenced by sensory-motor experiences. Sensory experiences gathered in a spatiotemporal world are used as raw material to create more abstract concepts. In language, one way to encode spatial relationships is through spatial prepositions. Spatial prepositions that specify the proximity of objects in space, like far and near or their variants, are found in most languages. The mechanism for determining the proximity of another entity is a useful evolutionary trait, found in almost all organisms, from the taxic behavior of unicellular organisms like bacteria to tropism in the plant kingdom. In humans, vision plays a critical role in spatial localization and navigation. This computational study analyzes the relationship between vision and spatial prepositions using an artificial neural network. A synthetic image dataset was created, with each image featuring a 2D projection of an object placed in 3D space; the objects vary in shape, size, and color. A convolutional neural network is trained to classify the object in each image as far or near based on a set threshold. The study mainly explores two visual scenarios, objects confined to a plane (grounded) and objects not confined to a plane (ungrounded), while also analyzing the influence of camera placement. Classification performance is high for the grounded case, demonstrating that far/near classification is well-defined for grounded objects and that depth can be determined from monocular cues alone with high accuracy, provided the camera is at sufficient height. The difference in the network's performance between the grounded and ungrounded cases can be explained by the physical properties of the retinal imaging system.
Determining the distance of an object from individual images in the dataset is challenging because they lack background cues. Still, the network's performance shows the influence of the spatial constraints placed on the image generation process in determining depth. The results show that monocular cues contribute significantly to depth perception when all objects are confined to a single plane. A set of sensory inputs (images) and a specific task (far/near classification) allowed us to obtain these results. The visual task, along with reaching and motion, may enable humans to carve space into spatial prepositional categories like far and near. The network's performance, and how it learns to distinguish far from near, provides insights into certain visual illusions involving size constancy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
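The geometric cue behind the grounded case can be made explicit: for a camera at height h with a level optical axis, the image row of an object's ground contact point determines its depth. A sketch of that relation and the resulting far/near rule (camera parameters and threshold are illustrative assumptions):

```python
def grounded_depth(focal_px, cam_height_m, y_contact, horizon_row):
    """Depth of a ground-plane point from its image row.

    For a camera at height h whose optical axis is parallel to the ground,
    a ground point imaged (y - y0) pixels below the horizon row y0 lies at
    depth Z = f * h / (y - y0).
    """
    dy = y_contact - horizon_row
    if dy <= 0:
        raise ValueError("contact point must lie below the horizon")
    return focal_px * cam_height_m / dy

def far_or_near(depth_m, threshold_m=5.0):
    """Binary label analogous to the paper's thresholded classification."""
    return "far" if depth_m > threshold_m else "near"
```

This is why a sufficient camera height matters in the study: as the camera approaches the ground plane, (y - y0) shrinks for all objects and the depth signal degenerates.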
8. Vision-based autonomous robots calibration for large-size workspace using ArUco map and single camera systems.
- Author
- Yin, Yuanhao, Gao, Dong, Deng, Kenan, and Lu, Yong
- Subjects
- COMPUTER vision, AUTONOMOUS robots, LASER measurement, GLOBAL optimization, UNITS of measurement, INDUSTRIAL robots
- Abstract
The low positioning accuracy of industrial robots limits their application in industry. Vision-based kinematic calibration, known for its rapid processing and economic efficiency, is an effective way to enhance this accuracy. However, most such methods are constrained by the camera's field of view, limiting their effectiveness in large workspaces. This paper proposes a novel calibration framework that combines monocular vision with ArUco markers. First, a robot positioning error model was established from the kinematic error based on the Modified Denavit-Hartenberg model. Subsequently, a calibrated camera was used to create an ArUco map as an alternative to traditional single calibration targets. The map was constructed by stitching images of ArUco markers with unique identifiers, and its accuracy was enhanced through closed-loop detection and a global optimization that minimizes reprojection error. Initial hand-eye parameters were then determined, and the robot's end-effector pose was acquired through the ArUco map. The Levenberg-Marquardt algorithm was employed for calibration, iteratively refining the hand-eye and kinematic parameters. Finally, experimental validation was conducted on a KUKA KR500 industrial robot, with laser tracker measurements as the reference standard. Compared to the traditional checkerboard method, the new approach not only expands the calibration space but also significantly reduces the robot's absolute positioning error, from 1.359 mm to 0.472 mm.
• A new calibration framework based on an ArUco map and vision system expands the workspace and improves positioning accuracy.
• A high-precision ArUco map is created via local and global optimization, serving as an effective alternative to single targets.
• The robot's kinematic parameters are identified by the monocular camera in a fully autonomous and robust process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
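The quantity the map optimization drives down — reprojection error — is simple to state. A numpy sketch of projecting a 3D point with a pinhole model and scoring a pose against observed pixels (the intrinsics used in the test are illustrative, not the paper's):

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D point X into pixels with intrinsics K and pose (R, t)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def reprojection_rmse(K, R, t, pts3d, pts2d):
    """Root-mean-square reprojection error over corresponding points —
    the residual minimized by the global map optimization and by the
    Levenberg-Marquardt refinement of hand-eye/kinematic parameters."""
    errs = [np.linalg.norm(project(K, R, t, X) - u) for X, u in zip(pts3d, pts2d)]
    return float(np.sqrt(np.mean(np.square(errs))))
```

In the paper's pipeline this residual is summed over every ArUco corner in every stitched view, so lowering it tightens the whole map jointly rather than one target at a time.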
9. High Precision Pose Estimation for Uncooperative Targets Based on Monocular Vision and 1D Laser Fusion.
- Author
- Wang, Yishi, Zhang, Zexu, Huang, Yefei, and Su, Yu
- Abstract
Robust pose estimation of uncooperative space targets is a significant and challenging technology for on-orbit service. This work proposes a pose-recovery method for uncooperative targets that fuses a one-dimensional (1D) laser rangefinder with a monocular camera. Specifically, the laser projection on the camera's pixel plane and the feature points extracted from images are used in a weight-based fusion method to estimate the absolute scale accurately. To improve accuracy further, the camera and laser rangefinder are tightly coupled to optimize the estimated sequential poses. The method overcomes the known absolute-scale deficiency of monocular cameras and maintains accurate pose tracking without the baseline constraint of RGB-D cameras. Validated on synthetic images and in real-world experiments, the proposed method is more robust and accurate than other fusion methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
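The core of such a fusion — recovering the metric scale a monocular camera cannot observe — can be posed as a weighted least-squares fit between up-to-scale monocular depths and metric laser ranges at the same image points. A sketch with a closed-form solution (the paper's actual weighting scheme is not reproduced here):

```python
def estimate_scale(mono_depths, laser_ranges, weights):
    """Scale s minimizing sum_i w_i * (s * d_i - r_i)^2.

    d_i: up-to-scale monocular depths, r_i: metric laser ranges,
    w_i: per-measurement confidence weights.
    Closed form: s = sum(w d r) / sum(w d^2).
    """
    num = sum(w * d * r for w, d, r in zip(weights, mono_depths, laser_ranges))
    den = sum(w * d * d for w, d in zip(weights, mono_depths))
    return num / den
```

Once s is known, every monocular depth d becomes a metric depth s * d, which is what allows pose tracking without an RGB-D baseline.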
10. Monocular visual obstacle avoidance method for autonomous vehicles based on YOLOv5 in multi lane scenes
- Author
- Junhua Wang, Laiquan Han, Yuan Jiang, Yongjun Qi, and Khuder Altangerel
- Subjects
- Vehicle engineering, Autonomous driving, Driving obstacle avoidance methods, YOLOv5, Monocular vision, Multiple lanes, Engineering (General). Civil engineering (General), TA1-2040
- Abstract
This study explores a more effective obstacle avoidance method for autonomous driving based on a YOLOv5 monocular vision system. The YOLOv5 model detects obstacles and road signs in the environment in real time, including vehicles, pedestrians, and traffic signals, identifying objects of different sizes and viewing angles and providing accurate inputs for obstacle avoidance decisions. A path planning algorithm based on deep reinforcement learning then combines the detected obstacle information with the current state of the vehicle to dynamically generate safe and efficient driving paths. To further improve obstacle avoidance, a monocular visual obstacle avoidance aggregation network was introduced and the MMA (multi-lane monocular visual autonomous driving) obstacle avoidance method was established. The future movement of obstacles is predicted with an improved monocular obstacle avoidance loss function that incorporates historical data and the environment. The experimental results show that MMA significantly enhances obstacle avoidance and driving efficiency: its accuracy fluctuates between 78.76% and 88.26%, the best overall performance, and rises as the data sample size increases.
- Published
- 2024
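Downstream of detection, the multi-lane decision can be illustrated as picking lanes whose horizontal extent overlaps no detected box. This is a toy sketch, not the paper's MMA network; the lane geometry in image columns is assumed known:

```python
def overlaps(a, b):
    """True if 1D intervals a = (lo, hi) and b = (lo, hi) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def free_lanes(lane_bounds, obstacle_boxes):
    """Return indices of lanes not overlapped by any obstacle box.

    lane_bounds: list of (x_left, x_right) per lane, in image columns.
    obstacle_boxes: list of (x1, y1, x2, y2) detections, e.g. from a
    YOLOv5-style detector.
    """
    return [i for i, lane in enumerate(lane_bounds)
            if not any(overlaps(lane, (x1, x2)) for x1, _, x2, _ in obstacle_boxes)]
```

A real planner would additionally weight lanes by estimated obstacle distance and predicted motion, which is the role the MMA loss function plays in the paper.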
11. Target recognition and localization in the autonomous docking process of UAVs combined with machine learning (结合机器学习的无人机自主对接过程的目标识别定位).
- Author
- 姚垚, 李军府, 胡志勇, and 艾俊强
- Subjects
- MONOCULAR vision, DRONE aircraft, FEATURE extraction, SLEEVES, PROBLEM solving
- Published
- 2024
12. Modified tectonic corneoscleral graft technique for treating devastating corneoscleral infections.
- Author
- Zhang, Xiaoyu, Qi, Xiaolin, Lu, Xiuhai, and Gao, Hua
- Subjects
- MONOCULAR vision, PATIENT compliance, INTRAOCULAR pressure, GRAFT survival, SURFACE stability
- Abstract
Background: This study aims to evaluate the clinical outcomes and efficacy of a modified tectonic corneoscleral graft (TCG) in patients suffering from devastating corneoscleral infections. Methods: Thirty-eight eyes of 38 patients who underwent the modified TCG were included. The outcomes measured were recurrence rates, best-corrected visual acuity (BCVA), ocular surface stability, postoperative complications, and graft survival. Results: Among the 38 patients, 23 had fungal infections, 9 had bacterial infections, and 6 had Pythium insidiosum infections. At the final follow-up, after an average duration of 25.1 ± 8.6 months, the rate of monocular blindness had decreased from 100% to 58%. Significant improvements in logMAR BCVA were observed from preoperative to postoperative measurements (P < 0.001). Thirty-two eyes (84.2%) maintained a stable ocular surface; the survival rate of ocular surface stability was 84.2% ± 5.9% at one year and 57.7% ± 9.7% at three years post-surgery. Twenty eyes (52.6%) retained a clear graft, with a graft clarity survival rate of 81.6% ± 6.3% at one year and 36.0% ± 10.8% at three years post-surgery. The incidence of immune rejection was 36.8%. Corneal epithelial defects were observed in ten patients, and choroidal detachment occurred in four; no cases of elevated intraocular pressure were detected. Conclusions: The modified TCG is effective in eradicating infection, preserving the eyeball, and maintaining useful vision in devastating corneoscleral infections. Regular use of tacrolimus, timely administration of glucocorticoids, and good patient compliance can help mitigate postoperative challenges. [ABSTRACT FROM AUTHOR]
- Published
- 2024
13. Relationship between binocular vision and Govetto's stage in monocular idiopathic epiretinal membrane.
- Author
- Tsuda, Kanae, Miyata, Manabu, Kawai, Kentaro, Nakao, Shinya, Yamamoto, Akinari, Suda, Kenji, Nakano, Eri, Tagawa, Miho, Muraoka, Yuki, Sakata, Ryo, and Tsujikawa, Akitaka
- Subjects
- BINOCULAR vision, VISION, MONOCULARS, MONOCULAR vision, OPTICAL coherence tomography, CHOROID
- Abstract
Govetto's staging system (stages 1–4) for epiretinal membrane (ERM) based on optical coherence tomography images is a useful predictor of monocular visual function; however, an association between Govetto's stage and binocular vision has not been reported. This study aimed to investigate the factors associated with Govetto's stage among the monocular and binocular parameters. This retrospective study included consecutive patients with treatment-naïve eyes with unilateral ERM without pseudo-hole. We investigated Govetto's stage, degrees of aniseikonia and metamorphopsia, foveal avascular zone area, central retinal and choroidal thickness, vertical ocular deviation, stereopsis, and binocular single vision (BSV). We compared the parameters between the BSV-present and BSV-absent groups and investigated correlations between Govetto's stage and the monocular and binocular parameters. Twenty-eight eyes of 28 patients were examined (age, 66.6 ± 10.2 years). In multivariate correlation analyses, Govetto's stage correlated with BSV (P = 0.04, β = − 0.36) and central retinal thickness (P < 0.001, β = 0.74). Of the eyes, 18 were assigned to the BSV-present group and 10 to the BSV-absent group. Govetto's stage was significantly more advanced in the BSV-absent group than in the BSV-present group (3.2 ± 0.8 vs 2.5 ± 0.7, P = 0.03). Of the 28 patients, 11 (39%) showed small-angle vertical deviations (1–12Δ). In conclusion, our findings showed that Govetto's stage correlated with binocular vision in patients with monocular ERM. Govetto's staging is a useful parameter for predicting not only monocular but also binocular vision. [ABSTRACT FROM AUTHOR]
- Published
- 2024
14. Can repetitive transcranial magnetic stimulation influence the visual cortex of adults with amblyopia? – systematic review.
- Author
- Tuna, Ana Rita, Pinto, Nuno, Fernandes, Andresa, Brardo, Francisco Miguel, and Pato, Maria Vaz
- Subjects
- CONTRAST sensitivity (Vision), TRANSCRANIAL magnetic stimulation, MONOCULAR vision, BINOCULAR vision, CONSCIOUSNESS raising, ANISOMETROPIA
- Abstract
Amblyopia is the most frequent cause of monocular vision loss. Transcranial Magnetic Stimulation (TMS) is a neuromodulation technique capable of changing cortical excitability, and in the last decade it has been used to improve visual parameters of the amblyopic eye in adulthood. The main goal of this systematic review is to evaluate the influence of TMS on the visual parameters of adult amblyopic patients and to highlight the need for further research. Searches were performed in the PubMed and Embase databases using a combined strategy of MeSH, EMBASE, and free-text keywords: 'Amblyopia', 'Transcranial Magnetic Stimulation', and 'theta burst stimulation'. The review included randomised controlled studies, descriptive cases, and clinical case studies with adult amblyopes. Articles were excluded if they involved children or animals, were reviews, concerned pathologies other than amblyopia, or used techniques other than repetitive TMS (rTMS) or Theta Burst Stimulation (TBS). A total of 42 articles were found, of which only four studies (46 amblyopes) met these criteria. Three of the articles found significant improvement after one session of continuous TBS (cTBS) in visual parameters such as visual acuity, contrast sensitivity, suppressive imbalance, and stereoacuity; one study found significant visual improvement with 10 Hz rTMS. Only one stimulation-related dropout was reported. The few existing studies suggest that high-frequency rTMS and cTBS can help re-balance the eyes of adult amblyopes. However, despite the promising results, larger randomised double-blind studies are needed for a better understanding of this process. [ABSTRACT FROM AUTHOR]
- Published
- 2024
15. Automatic hierarchical background virtualization method for monocular vision image based on depth information extraction.
- Author
- Peng, Mingcheng and Xie, Wenda
- Subjects
- MONOCULAR vision, DATA mining, HEAT equation, IMAGE fusion, THERMOGRAPHY
- Abstract
Illumination, noise, distortion, and other factors degrade monocular vision images, making image information difficult to extract and introducing errors and uncertainty into background segmentation, which in turn impairs background virtualization. This paper therefore studies a new automatic hierarchical background virtualization method for monocular vision images based on depth information extraction. The depth information map is extracted with an anisotropic thermal diffusion equation; morphological operations fill its tiny holes, and smoothing determines the image depth range, allowing the depth map to be layered automatically into a foreground layer and a background layer. The background layer is virtualized with a Gaussian blur operation, and a pyramid image fusion method fuses the foreground layer with the blurred background layer to complete the background virtualization. Experimental results show that this method effectively improves the clarity of depth map edges, preserves a large amount of image edge information, and achieves high structural similarity, averaging 0.96. It is also efficient, with a background virtualization time of only 15 ms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
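The layering-and-blur pipeline reduces to: threshold the depth map into foreground/background masks, blur the background, and composite. A numpy sketch (a box blur stands in for the paper's Gaussian blur, and a single depth threshold for its automatic layering):

```python
import numpy as np

def box_blur(img, k=5):
    """Simple box blur on an (H, W, C) float image; a cheap stand-in
    for the Gaussian blur used in the paper."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    h, w = img.shape[:2]
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def virtualize_background(image, depth, thresh):
    """Keep pixels nearer than `thresh` sharp; blur everything else."""
    fg = depth < thresh  # foreground mask from the depth layer split
    return np.where(fg[..., None], image.astype(float), box_blur(image))
```

The pyramid fusion step in the paper replaces the hard `np.where` composite here with a multi-scale blend, which avoids visible seams along the foreground boundary.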
16. 3D Location of Indoor Fire Ignition with a Multilevel Constraint Based on Monocular Vision.
- Author
- Xie, Yakun, Zhan, Ni, Zhu, Jun, Guo, Yukun, Feng, Dejun, and Baik, Sung Wook
- Subjects
- MONOCULAR vision, FIRE prevention, SPATIAL orientation, FIREFIGHTING, TOPOLOGICAL spaces, FIRE detectors, VIDEO surveillance
- Abstract
Accurate spatial location of a fire's ignition point can serve video-based automatic fire suppression. Although current fire detection systems based on monocular surveillance video can detect fires quickly, the 3D position cannot be obtained directly because of the ambiguity of 2D images. To further promote the universal application of automatic fire suppression, we propose a 3D indoor fire ignition location method based on monocular vision, the first study on the spatial orientation of fire with a monocular camera. First, the indoor scene is quickly reconstructed as the basis of the scene. Second, building on our previous research on fire detection, the refined position of the fire in 2D images and its topological relationship with objects in the space are analyzed, and hierarchical constraints from 2D to 3D are established for locating the indoor ignition point. The experimental results show an average absolute error of only 4.82 cm and an average relative error of 1.71%. In addition, our method can be embedded into existing fire prevention and control systems at low cost, further promoting the development of intelligent fire prevention and control. [ABSTRACT FROM AUTHOR]
- Published
- 2024
17. Vergence and accommodation deficits in paediatric and adolescent patients during sub‐acute and chronic phases of concussion recovery.
- Author
- Marusic, Sophia, Vyas, Neerali, Chinn, Ryan N., O'Brien, Michael J., Roberts, Tawna L., and Raghuram, Aparna
- Subjects
- BINOCULAR vision, MONOCULAR vision, VISION, NONPARAMETRIC statistics, BRAIN concussion
- Abstract
Introduction: Visual function deficits have been reported in adolescents following concussion. We compared vergence and accommodation deficits in paediatric and adolescent patients at a tertiary medical centre in the sub‐acute (15 days to 12 weeks) and chronic (12 weeks to 1 year) phases of concussion recovery. Methods: The study included patients aged 7 to <18 years seen between 2014 and 2021, who had a binocular vision (BV) examination conducted within 15 days and 1 year of their concussion injury. Included patients had to have 0.10 logMAR monocular best‐corrected vision or better in both eyes and be wearing a habitual refractive correction. BV examinations at near included measurements of near point of convergence, convergence and divergence amplitudes, vergence facility, monocular accommodative amplitude and monocular accommodative facility. Vergence and accommodation deficits were diagnosed using established clinical criteria. Group differences were assessed using nonparametric statistics and ANCOVA modelling. Results: A total of 259 patients were included with 111 in the sub‐acute phase and 148 in the chronic phase of concussion recovery. There was no significant difference in the rates of vergence deficits between the two phases of concussion recovery (sub‐acute = 48.6%; chronic = 49.3%). There was also no significant difference in the rates of accommodation deficits between the two phases of concussion recovery (sub‐acute = 82.0%; chronic = 77.0%). Conclusion: Patients in both the sub‐acute and chronic phases of concussion recovery exhibited a high frequency of vergence and accommodation deficits, with no significant differences between groups. Results indicate that patients exhibiting vision deficits in the sub‐acute phase may not resolve without intervention, though a prospective, longitudinal study is required to test the hypothesis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
18. Research on a surgical robot based on a monocular vision tracking and positioning system (基于单目视觉跟踪定位系统的手术机器人研究).
- Author
- 黃心怡, 李文皓, 雷云丹, 黃景揚, and 于洪波
- Published
- 2024
19. A Novel Method for Extracting DBH and Crown Base Height in Forests Using Small Motion Clips.
- Author
- Yang, Shuhang, Xing, Yanqiu, Yin, Boqing, Wang, Dejun, Chang, Xiaoqing, and Wang, Jiaqi
- Subjects
- FOREST management, STANDARD deviations, FOREST surveys, COST functions, MONOCULAR vision
- Abstract
The diameter at breast height (DBH) and crown base height (CBH) are important indicators in forest surveys. To enhance the accuracy and convenience of DBH and CBH extraction for standing trees, a method based on understory small motion clips (a series of images captured with slight viewpoint changes) has been proposed. Histogram equalization and quadtree uniformization algorithms are employed to extract image features, improving the consistency of feature extraction. Additionally, the accuracy of depth map construction and point cloud reconstruction is improved by minimizing the variance cost function. Six 20 m × 20 m square sample plots were selected to verify the effectiveness of the method. Depth maps and point clouds of the sample plots were reconstructed from small motion clips, and the DBH and CBH of standing trees were extracted using a pinhole imaging model. The results indicated that the root mean square error (RMSE) for DBH extraction ranged from 0.60 cm to 1.18 cm, with relative errors ranging from 1.81% to 5.42%. Similarly, the RMSE for CBH extraction ranged from 0.08 m to 0.21 m, with relative errors ranging from 1.97% to 5.58%. These results meet the accuracy standards required for forest surveys. The proposed method enhances the efficiency of extracting tree structural parameters in close-range photogrammetry (CRP) for forestry. A rapid and accurate method for DBH and CBH extraction is provided by this method, laying the foundation for subsequent forest resource management and monitoring. [ABSTRACT FROM AUTHOR]
- Published
- 2024
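Once a depth map is available, the DBH measurement described here is the pinhole relation in reverse: a trunk spanning w pixels at depth Z has physical width W = Z * w / f. A sketch with illustrative numbers (not values from the paper):

```python
def dbh_from_pixels(depth_m, trunk_pixel_width, focal_px):
    """Diameter at breast height from metric depth and imaged trunk width.

    Pinhole inverse of w = f * W / Z, i.e. W = Z * w / f, where f is the
    focal length in pixels.
    """
    return depth_m * trunk_pixel_width / focal_px

# A trunk imaged 30 px wide at 10 m depth with a 1000 px focal length
# corresponds to a 0.30 m DBH.
print(dbh_from_pixels(10.0, 30.0, 1000.0))
```

The relation also explains why depth-map accuracy dominates the reported RMSE: any relative error in Z propagates one-to-one into the estimated diameter.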
20. Development and validation of a collaborative robotic platform based on monocular vision for oral surgery: an in vitro study.
- Author
- Huang, Jingyang, Bao, Jiahao, Tan, Zongcai, Shen, Shunyao, and Yu, Hongbo
- Abstract
Purpose: Surgical robots effectively improve the accuracy and safety of surgical procedures. Current optical-navigated oral surgical robots are typically developed based on binocular vision positioning systems, which are susceptible to factors including obscured visibility, limited workspace, and ambient light interference. Hence, the purpose of this study was to develop a lightweight robotic platform based on monocular vision for oral surgery that enhances the precision and efficiency of surgical procedures. Methods: A monocular optical positioning system (MOPS) was applied to oral surgical robots, and a semi-autonomous robotic platform was developed utilizing monocular vision. A series of in vitro experiments was designed to simulate dental implant procedures, evaluate the performance of the optical positioning system, and assess the accuracy of the robotic system. The singular configuration detection and avoidance test, the collision detection and processing test, and the drilling test under slight movement were conducted to validate the safety of the robotic system. Results: The position error and rotation error of MOPS were 0.0906 ± 0.0762 mm and 0.0158 ± 0.0069 degrees, respectively. The attitude angles of the robotic arm calculated by the forward and inverse solutions were accurate. Additionally, the robot's surgical calibration point exhibited an average error of 0.42 mm, with a maximum error of 0.57 mm. Meanwhile, the robot system effectively avoided singularities and demonstrated robust safety in the presence of minor patient movements and collisions during the in vitro experiments. Conclusion: The results of this in vitro study demonstrate that the accuracy of MOPS meets clinical requirements, making it a promising alternative in the field of oral surgical robots. Further studies are planned to make the monocular vision oral robot suitable for clinical application. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Research on a Vision-Based Terminal Guidance Method for AUV Recovery.
- Author
-
普勇博, 齐向东, 张海龙, and 张涛
- Abstract
Copyright of Computer Measurement & Control is the property of Magazine Agency of Computer Measurement & Control and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
22. Research Progress on 3D Reconstruction Technology Based on Passive Vision.
- Author
-
王兆庆, 牛朝一, 佘维, 宰光军, 梁波, 易建锋, and 李英豪
- Abstract
Copyright of Journal of Zhengzhou University (Natural Science Edition) is the property of Journal of Zhengzhou University (Natural Science Edition) Editorial Office and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
23. Monocular-Vision-Based Method for Locating the Center of Anchor Holes on Steel Belts in Coal Mine Roadways.
- Author
-
Lei, Mengyu, Zhang, Xuhui, and Chen, Xin
- Subjects
MONOCULAR vision ,COAL mining ,MEASUREMENT errors ,FEATURE extraction ,INDUSTRIAL lasers - Abstract
The precise positioning of anchoring-hole centers on the steel belts used for anchor support in coal mines is essential for improving the automation and efficiency of roadway support. To address the poor positioning accuracy and low support efficiency caused by manual determination of anchoring-hole-center positions, this paper proposes a monocular-vision-based method for locating anchoring-hole centers. Firstly, a laser pointer and an industrial camera are used to build an anchoring-hole positioning device, and its visual positioning model is constructed to achieve automatic and precise localization of the anchoring-hole center. Secondly, to overcome the difficulty of obtaining high-precision spot centers with spot extraction methods based on edge and grayscale information, a spot center extraction method based on two-dimensional arctangent function fitting is proposed, achieving high-precision, stable acquisition of spot pixel coordinates. The experimental results show that the average measurement errors of the anchoring-hole centers in the camera's coordinate system along the X-axis, Y-axis, and Z-axis are 3.36 mm, 3.30 mm, and 5.75 mm, respectively, with maximum errors of 4.23 mm, 4.39 mm, and 6.63 mm. The average measurement errors of the steel belt's pitch, yaw, and roll angles in the camera's coordinate system are 0.16°, 0.16°, and 0.08°, respectively, with maximum errors of 0.21°, 0.27°, and 0.13°. The proposed method achieves precise localization of anchoring holes, improves the efficiency of roadway support, and provides new insights for the automation and intelligent operation of roadway anchor support. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
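The two-dimensional arctangent fitting in entry 23 can be sketched as a least-squares fit of a radially symmetric arctangent edge profile to the spot's intensity image. The abstract does not give the authors' exact functional form, so the model below (a bright plateau with an arctangent falloff) is an assumption for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def spot_model(coords, x0, y0, r0, k, amp, offset):
    """Hypothetical radially symmetric arctangent spot profile: intensity
    plateaus inside radius r0 of (x0, y0) and falls off with sharpness k."""
    x, y = coords
    r = np.hypot(x - x0, y - y0)
    return amp * (np.arctan(k * (r0 - r)) / np.pi + 0.5) + offset

# Synthetic noiseless spot on a 40x40 grid, true center at (17.3, 21.8)
yy, xx = np.mgrid[0:40, 0:40]
coords = np.vstack([xx.ravel(), yy.ravel()])
truth = (17.3, 21.8, 6.0, 2.0, 200.0, 10.0)
image = spot_model(coords, *truth)

# Fit from a rough initial guess; the optimized x0, y0 give a sub-pixel center
popt, _ = curve_fit(spot_model, coords, image, p0=(20, 20, 5, 1, 150, 0))
x_c, y_c = popt[0], popt[1]
```

On real images one would fit a cropped region around each laser spot and add an initial guess from the intensity centroid.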
24. A vision-based end pose estimation method for excavator manipulator.
- Author
-
Li, Jinguang, Liu, Yu, Wang, Linwei, and Sun, Yumeng
- Subjects
EXCAVATING machinery ,MONOCULARS ,MONOCULAR vision ,COMPUTER vision - Abstract
End pose detection of the excavator manipulator is one of the key components in the manufacture of automatic excavators. Vision-based pose estimation has been identified as a potential low-cost alternative to mechanical automation systems and is gradually being applied to excavators. This paper presents an end pose estimation method for an excavator manipulator based on monocular vision, with a network that consists of two stages. In the first stage, a monocular RGB image is used as the input, and the DeepLabv3+ network is utilized to segment the target, obtaining an excavator manipulator image without the background. In the second stage, pose estimation is treated as a regression problem that takes the segmentation results as inputs. End pose estimation is performed using the proposed pose regression network, P-ResNet, ensuring independence from background influence. For evaluation, we collected a new dataset containing 2000 images based on a KOMATSU excavator. The results demonstrate that this approach exhibits strong robustness and accuracy: its position error is less than 15 mm, and its attitude error is less than 3 degrees. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. ReliTalk: Relightable Talking Portrait Generation from a Single Video.
- Author
-
Qiu, Haonan, Chen, Zhaoxi, Jiang, Yuming, Zhou, Hang, Fan, Xiangyu, Yang, Lei, Wu, Wayne, and Liu, Ziwei
- Subjects
- *
MONOCULAR vision , *IMPLICIT functions , *VIDEOS , *MONOCULARS - Abstract
Recent years have witnessed great progress in creating vivid audio-driven portraits from monocular videos. However, how to seamlessly adapt the created video avatars to other scenarios with different backgrounds and lighting conditions remains unsolved. On the other hand, existing relighting studies mostly rely on dynamically lighted or multi-view data, which are too expensive for creating video portraits. To bridge this gap, we propose ReliTalk, a novel framework for relightable audio-driven talking portrait generation from monocular videos. Our key insight is to decompose the portrait's reflectance from implicitly learned audio-driven facial normals and images. Specifically, we involve 3D facial priors derived from audio features to predict delicate normal maps through implicit functions. These initially predicted normals then play a crucial part in reflectance decomposition by dynamically estimating the lighting condition of the given video. Moreover, the stereoscopic face representation is refined using the identity-consistent loss under simulated multiple lighting conditions, addressing the ill-posed problem caused by limited views available from a single monocular video. Extensive experiments validate the superiority of our proposed framework on both real and synthetic datasets. Our code is released at https://github.com/arthur-qiu/ReliTalk. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. A Multi-Plant Height Detection Method Based on Ruler-Free Monocular Computer Vision.
- Author
-
Tian, Haitao, Song, Mengmeng, Xie, Zhiming, and Li, Yuqiang
- Subjects
MONOCULAR vision ,COMPUTER vision ,HEIGHT measurement ,COLORIMETRY ,PLANT indicators - Abstract
Plant height is an important parameter of plant phenotype and an indicator of plant growth. In view of the complexity and scale limitations of current measurement systems, a scaleless method is proposed for the automatic measurement of plant height based on monocular computer vision. In this study, four peppers planted side by side were used as the measurement objects. Two color images of the measurement objects were obtained with a monocular camera at different shooting heights. Binary images were obtained by processing the images with super-green grayscale and the Otsu method. The binarized images were transformed into horizontal one-dimensional data by counting vertical pixels, and the boundary points between adjacent plants were found by filtering and valley searching, segmenting the image into single-plant binarized images. The pixel height was extracted from each segmented single-plant image, and the pixel displacement of the height between the two shots was calculated; substituted into the calculation together with the reference height displacement, this yields the real height of each plant, completing the height measurement of multiple plants. Within the range of 2–3 m and under illumination of 279 lx and 324 lx, this method achieves rapid, high-precision detection of multi-plant phenotypic parameters and accurate plant height measurements. The absolute error of plant height measurement is no more than ±10 mm, and the absolute proportional error is no more than ±4%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
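The scaleless calculation in entry 26 follows from similar triangles: a known vertical camera displacement and the pixel shift it induces give the metres-per-pixel scale at the plant's depth, with no need for a physical ruler in the scene. A minimal sketch of that relation (the function name and example numbers are illustrative, not from the paper):

```python
def plant_height(pixel_height, pixel_shift, camera_shift_m):
    """Scaleless monocular height: moving the camera vertically by
    camera_shift_m shifts the plant image by pixel_shift pixels, so the
    scale at the plant's depth is camera_shift_m / pixel_shift metres/px.
    """
    metres_per_pixel = camera_shift_m / pixel_shift
    return pixel_height * metres_per_pixel

# Example: an 800 px tall plant; raising the camera 0.5 m shifted its image 250 px
h = plant_height(800, 250, 0.5)  # 1.6 m
```

The same ratio cancels the unknown focal length, which is what makes the method scaleless.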
27. A laser-assisted depth detection method for underwater monocular vision.
- Author
-
Tang, Zhijie, Xu, Congqi, and Yan, Siyu
- Subjects
MONOCULAR vision ,UNDERWATER cameras ,INCLINED planes ,REMOTE submersibles ,MONOCULARS - Abstract
Underwater three-dimensional target detection is crucial for effectively recognizing and acquiring target information in complex water environments. Underwater robotic systems, as conventional underwater operating platforms, are generally equipped with a binocular or monocular camera; how to use an underwater monocular camera to acquire three-dimensional target information with high precision and efficiency is the main starting point of this paper. To this end, this paper proposes a laser-assisted monocular method for underwater target depth detection, which uses three cross lasers to assist the monocular camera in capturing depth data at different positions on the target plane in a single shot. A four-point laser calibration method corrects the image distortion caused by the unstable underwater environment and the lens, as well as the laser angle deviation caused by tilting of the underwater robot. After correction, the depth between the target and the robot is calculated from the geometric relationship between the imaginary rectangle formed by the laser dots and laser lines in the image and the imaginary rectangle formed between the lasers on the device. The method obtains target depth information from a single image and can measure not only horizontal planes but also multiple and inclined planes. Experiments show that, compared to traditional methods, the algorithm improves accuracy in both underwater and land environments and obtains depth information for the entire plane at once. The method provides a theoretical and practical basis for underwater monocular 3D information acquisition. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. A multi-center analysis of visual outcomes following open globe injury.
- Author
-
Greenfield, Jason A., Malek, Davina A., Anant, Shruti, Antonietti, Michael, Jammal, Alessandro, Casella, Alicia, Miller, Sarah C., Wang, Kristine, Momenaei, Bita, Lee, Karen, Mansour, Hana A., Justin, Grant A., Makhoul, Kevin G., Bitar, Racquel A., Lorch, Alice C., Armstrong, Grayson W., Wakabayashi, Taku, Yonekawa, Yoshihiro, Woreta, Fasika, and Cavuoto, Kara
- Subjects
- *
MONOCULAR vision , *ELECTRONIC health records , *RACE , *DEMOGRAPHIC characteristics , *VISUAL acuity , *CHILDREN'S injuries - Abstract
The purpose of this study was to examine how demographics, etiology, and clinical examination findings are related to visual outcomes in subjects with open globe injury (OGI) across a large and generalizable sample. A retrospective cohort analysis was performed using data collected from the electronic medical records of four tertiary university centers for subjects with OGI presenting from 2018 to 2021. Demographic information, injury mechanisms, clinical exam findings, and visual acuity (VA) at presentation and most recent follow-up were recorded. In subjects with bilateral OGIs, only right eyes were included. A modified ocular trauma score (OTS) using presenting VA, the presence of perforating injury, OGI, and afferent pupillary defect was calculated. The associations of demographic characteristics, ocular trauma etiology, clinical findings, and modified OTS with monocular blindness at follow-up were assessed using univariable and multivariable regression models. 1426 eyes were identified. The mean age was 48.3 years (SD: ± 22.4 years) and the majority of subjects were men (N = 1069, 75.0%). Univariable analysis demonstrated that subjects of Black race were 66% (OR: 1.66 [1.25–2.20]; P < 0.001) more likely to have monocular blindness relative to White race at follow-up. OTS Class 1 was the strongest predictor of blindness (OR: 38.35 [21.33–68.93]; P < 0.001). Based on multivariable analysis, lower OTS category (OTS Class 1 OR: 23.88 [16.44–45.85]; P < 0.001) moderately predicted visual outcomes (R2 = 0.275, P < 0.001). The risk of poor visual outcome after OGI varies across patient groups by demographic category, mechanism of injury, and clinical presentation. Our findings validate that a modified OTS remains a strong predictor of visual prognosis following OGI in a large and generalizable sample. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Monocular Vision Sensor-Based Indoor Road and Stair Detection.
- Author
-
Liu, Pengzhan, Jin, Wei, An, Chunhua, and Zhao, Jianqiang
- Subjects
- *
HOUGH transforms , *IMAGE sensors , *GEOGRAPHICAL perception , *STAIRS , *MONOCULAR vision , *IMAGE segmentation - Abstract
Road detection is an essential component of indoor robot navigation. Vision sensors have great advantages in road detection as they provide rich information for environmental perception. In this paper, a monocular vision sensor-based method for indoor road and stair detection is proposed, which rapidly detects traversable areas in indoor environments without relying on detailed features of walls or other obstacles. More specifically, for a given indoor road image captured by an on-board vision sensor, an approach based on the simple linear iterative clustering (SLIC) algorithm is introduced for efficient image segmentation. Then, using the DBSCAN algorithm, the generated superpixels are clustered into larger regions of the view. The initial road area is obtained through a safe window at the middle bottom of the image. To achieve more accurate road segmentation, the image is further processed by binary search, edge detection based on the Canny operator, and straight-line detection and localization based on the Hough transform, which integrates edge and stair information into road detection. Several experiments are performed to evaluate the performance of the proposed method. The experimental results demonstrate that the proposed method accurately detects road and staircase information in images and succeeds in addressing the indoor road-detection problem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
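The segmentation front end of entry 29 (SLIC superpixels, DBSCAN clustering, seeding from a safe window at the middle bottom) can be approximated as below. The parameter values and the choice to cluster superpixels by mean colour are assumptions, and the Canny/Hough refinement stage is omitted:

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import DBSCAN

def detect_road(image, n_segments=200):
    """Return a boolean road mask: SLIC superpixels are clustered by mean
    colour with DBSCAN, and the cluster under a safe window at the middle
    bottom of the frame is taken as the initial road area."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    n_sp = labels.max() + 1
    means = np.array([image[labels == i].mean(axis=0) for i in range(n_sp)])
    clusters = DBSCAN(eps=15.0, min_samples=1).fit_predict(means)
    h, w = labels.shape
    seed_sp = labels[h - 5, w // 2]          # superpixel under the safe window
    road_ids = np.where(clusters == clusters[seed_sp])[0]
    return np.isin(labels, road_ids)

# Synthetic frame: grey "road" on the lower half, green elsewhere
img = np.zeros((80, 80, 3), dtype=np.uint8)
img[:, :] = (40, 160, 40)
img[40:, :] = (120, 120, 120)
mask = detect_road(img)
```

A real pipeline would then intersect this mask with Canny edges and Hough lines to recover stair boundaries, as the abstract describes.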
30. All about traumatic cataracts: narrative review.
- Author
-
Soleimani, Mohammad, Cheraqpour, Kasra, Salari, Farhad, Fadakar, Kaveh, Habeel, Samer, Baharnoori, Seyed Mahbod, Banz, Soraya, Tabatabaei, Seyed Ali, Woreta, Fasika A., and Djalilian, Ali R.
- Subjects
- *
VISION disorders , *CATARACT , *MONOCULAR vision , *THERAPEUTICS , *PENETRATING wounds , *EYE inflammation - Abstract
This review describes various aspects of traumatic cataracts in order to provide a thorough understanding of this condition. Ocular trauma is an important cause of monocular blindness worldwide. Injury to the lens after blunt or penetrating trauma is common and can result in vision impairment. Selecting the most appropriate therapeutic approaches depends on factors such as patients' age, mechanism of trauma, and underlying clinical conditions. Early management, especially within childhood, is essential because of the difficulties involved in examination; anatomical variations; as well as accompanying intraocular inflammation, amblyopia, or vitreoretinal adhesions. The objective of this study was to provide a comprehensive review of the epidemiology and clinical management of traumatic cataract, highlighting the significance of accurate diagnosis and selection of the optimal therapeutic approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. A Robust Monocular and Binocular Visual Ranging Fusion Method Based on an Adaptive UKF.
- Author
-
Wang, Jiake, Guan, Yong, Kang, Zhenjia, and Chen, Pengzhan
- Subjects
- *
MONOCULARS , *BINOCULAR vision , *MONOCULAR vision , *INFORMATION measurement , *KALMAN filtering , *SENSITIVITY analysis , *PIXELS , *DIGITAL cameras - Abstract
Visual ranging technology holds great promise in various fields such as unmanned driving and robot navigation. However, complex dynamic environments pose significant challenges to its accuracy and robustness. Existing monocular visual ranging methods are susceptible to scale uncertainty, while binocular visual ranging is sensitive to changes in lighting and texture. To overcome the limitations of single visual ranging, this paper proposes a fusion method for monocular and binocular visual ranging based on an adaptive Unscented Kalman Filter (AUKF). The proposed method first utilizes a monocular camera to estimate the initial distance based on the pixel size, and then employs the triangulation principle with a binocular camera to obtain accurate depth. Building upon this foundation, a probabilistic fusion framework is constructed to dynamically fuse monocular and binocular ranging using the AUKF. The AUKF employs nonlinear recursive filtering to estimate the optimal distance and its uncertainty, and introduces an adaptive noise-adjustment mechanism to dynamically update the observation noise based on fusion residuals, thus suppressing outlier interference. Additionally, an adaptive fusion strategy based on depth hypothesis propagation is designed to autonomously adjust the noise prior of the AUKF by combining current environmental features and historical measurement information, further enhancing the algorithm's adaptability to complex scenes. To validate the effectiveness of the proposed method, comprehensive evaluations were conducted on large-scale public datasets such as KITTI and complex scene data collected in real-world scenarios. The quantitative results demonstrate that the fusion method significantly improves the overall accuracy and stability of visual ranging, reducing the average relative error within an 8 m range by 43.1% and 40.9% compared to monocular and binocular ranging, respectively. 
Compared to traditional methods, the proposed method significantly enhances ranging accuracy and exhibits stronger robustness against factors such as lighting changes and dynamic targets. The sensitivity analysis further confirmed the effectiveness of the AUKF framework and adaptive noise strategy. In summary, the proposed fusion method effectively combines the advantages of monocular and binocular vision, significantly expanding the application range of visual ranging technology in intelligent driving, robotics, and other fields while ensuring accuracy, robustness, and real-time performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
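The adaptive fusion idea in entry 31 can be illustrated with a 1-D linear Kalman filter whose observation noise is re-estimated from the innovations, so that outlier measurements are automatically down-weighted. This is a deliberately simplified stand-in for the paper's unscented (AUKF) formulation, and all constants are invented for the example:

```python
class AdaptiveFusionFilter:
    """1-D Kalman fusion of monocular and binocular range measurements.

    A linear sketch of the adaptive idea: the observation noise r is blended
    with the squared innovation, so measurements far from the current
    estimate (outliers) receive a small Kalman gain."""

    def __init__(self, d0, p0=1.0, q=0.01):
        self.d, self.p, self.q = d0, p0, q    # state, covariance, process noise

    def update(self, z, r, alpha=0.3):
        self.p += self.q                      # predict (constant-distance model)
        innov = z - self.d
        r = (1 - alpha) * r + alpha * innov**2   # adapt noise from the residual
        k = self.p / (self.p + r)             # Kalman gain
        self.d += k * innov
        self.p *= (1 - k)
        return self.d

f = AdaptiveFusionFilter(d0=5.0)
pairs = [(5.2, 5.05), (5.3, 5.10), (9.0, 5.08)]   # third monocular reading is an outlier
for mono, bino in pairs:
    f.update(mono, r=0.25)   # monocular: larger prior observation noise
    f.update(bino, r=0.04)   # binocular: smaller prior observation noise
estimate = f.d
```

Even with the 9.0 m outlier, the estimate stays near the true ~5.1 m distance because the inflated adaptive noise shrinks that measurement's gain.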
32. Short-Term Modulation of Online Monocular Visuomotor Function.
- Author
-
Oancea, Gabriela, Manzone, Damian M., and Tremblay, Luc
- Subjects
- *
MONOCULARS , *NEUROPLASTICITY , *GAZE , *MONOCULAR vision - Abstract
Previous literature suggests that correcting ongoing movements is more effective when using the dominant limb and seeing with the dominant eye. Specifically, individuals are more effective at adjusting their movement to account for an imperceptibly perturbed or changed target location (i.e., online movement correction) when vision is available to the dominant eye. However, less is known about whether visuomotor functions based on monocular information can undergo short-term neuroplastic changes after a bout of practice to improve online correction processes. Participants (n = 12) performed pointing movements monocularly, and their ability to correct their movement towards an imperceptibly displaced target was assessed. On the first day, the eye associated with smaller correction amplitudes was exclusively trained during acquisition. While correction amplitude was assessed again with both eyes monocularly, only the eye with smaller correction amplitudes in the pre-test showed significant improvement in delayed retention. These results indicate that monocular visuomotor pathways can undergo short-term neuroplastic changes. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Differentiating Intracranial Pathology Using Visual Fields: a Teaching Case Report.
- Author
-
Gaiser, Hilary and Lyons, Stacy
- Subjects
ANTIPHOSPHOLIPID syndrome ,CEREBRAL arteriovenous malformations ,MEDICAL personnel ,MONOCULAR vision ,LATERAL geniculate body ,VISUAL fields - Abstract
This article discusses a case report that highlights the importance of visual field testing in diagnosing neurological conditions. The case involves a patient with antiphospholipid syndrome and a cerebral vascular malformation who presented with neurological visual field loss. The article emphasizes the need for communication between different medical disciplines in managing patients with conflicting systemic diagnoses. The patient's visual field testing revealed a left superior paracentral scotoma in both eyes, indicating new-onset intracranial pathology in the right occipital lobe. An MRI confirmed the presence of a stable cavernous malformation and a new occipital lobe infarct. The patient was admitted to the hospital for further care. The document also provides education guidelines and learning objectives for optometry students, emphasizing the importance of visual field testing in identifying neurological field loss and differentiating conditions for appropriate referral. Optometrists can play a crucial role in diagnosing and managing these conditions and communicating with other healthcare providers. [Extracted from the article]
- Published
- 2024
34. Visual Outcomes and Patient Satisfaction After Bilateral Refractive Lens Exchange With a Trifocal Intraocular Lens in 5,226 Patients With Presbyopia.
- Author
-
Llovet-Rausell, Andrea, Ortega-Usobiaga, Julio, Albarrán-Diego, César, Beltrán-Sanz, Jaime, Bilbao-Calabuig, Rafael, and Llovet-Osuna, Fernando
- Subjects
PRESBYOPIA ,INTRAOCULAR lenses ,VISUAL acuity ,MONOCULAR vision ,VISION disorders - Abstract
Purpose: To assess visual and refractive outcomes and visual function after bilateral RayOne Trifocal toric and nontoric intraocular lens (IOL) (Rayner) implantation in patients with presbyopia. Methods: Charts of patients with presbyopia who underwent refractive lens exchange with bilateral implantation of the RayOne Trifocal IOL (toric and non-toric) were retrospectively reviewed. Visual and refractive outcomes were evaluated at 3 months. Patient satisfaction, spectacle independence, and visual disturbance profile were assessed by questionnaires. Results: A total of 5,226 patients were assigned to one of two groups: 1,010 patients had toric IOL implantation (toric group) and 4,216 patients received the non-toric model (non-toric group). Mean ± standard deviation visual acuity at 3 months for the toric group was binocular uncorrected distance visual acuity (UDVA) of 0.07 ± 0.11 logMAR, monocular corrected distance visual acuity (CDVA) of 0.05 ± 0.07 logMAR, binocular uncorrected near visual acuity (UNVA) at 40 cm of 0.10 ± 0.09 logMAR, binocular uncorrected intermediate visual acuity (UIVA) at 40 cm of 0.13 ± 0.12 logMAR, postoperative spherical equivalent (SE) of −0.21 ± 0.47 diopters (D), and cylinder of −0.34 ± 0.40 D. The non-toric group had binocular UDVA of 0.04 ± 0.08 logMAR, monocular CDVA of 0.05 ± 0.07 logMAR, binocular UNVA of 0.10 ± 0.08 logMAR, binocular UIVA of 0.13 ± 0.11 logMAR, SE of −0.08 ± 0.38 D, and cylinder of −0.28 ± 0.34 D. No statistically significant differences were found in achieving spectacle independence and there were high levels of satisfaction in both groups. Conclusions: In this retrospective analysis with more than 5,000 patients, both the toric and non-toric RayOne Trifocal IOL models provided good visual performance at all distances, resulting in excellent levels of spectacle independence and patient satisfaction. [J Refract Surg. 2024;40(7):e468–e479.] [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Optic Perineuritis Presenting with Transient Monocular Vision Loss (TMVL): Case Report.
- Author
-
Tawengi, Mohamed M, Fael, Mohamad, Hourani, Rizeq F, Alyaarabi, Tamader, Tawengi, Abdelaziz M, and Alfitori, Gamal
- Subjects
MONOCULAR vision ,VISION disorders ,MAGNETIC resonance imaging ,VISUAL acuity ,DELAYED diagnosis - Abstract
Optic perineuritis is an inflammatory condition that presents with reduced visual acuity and painful eye movement. Its presentation is similar to that of optic neuritis, which can result in delayed diagnosis and management. To date, we found only a single reported case of optic neuritis presenting with transient monocular vision loss (TMVL), and no reported cases of optic perineuritis associated with TMVL. Here, we report a case of a 30-year-old woman who presented with recurrent attacks of painless vision loss in her left eye, reaching up to 30 attacks per day. Ophthalmological examination was otherwise unremarkable. Lab investigations were normal. Magnetic resonance imaging showed left optic nerve sheath enhancement suggestive of left-sided focal optic perineuritis. The patient was managed with 1 mg IV methylprednisolone for 3 days. We report this case to shed light on the importance of accurate and early diagnosis of optic perineuritis presenting with TMVL. Prompt management of optic perineuritis is crucial in reducing morbidity and the risk of relapse. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Vision SLAM-based UAV obstacle avoidance system.
- Author
-
Li, Ruoxuan, Shang, Hang, Shen, Minghui, and Xiao, Yueyi
- Subjects
- *
VISUAL odometry , *GLOBAL optimization , *DEEP learning , *COMPUTATIONAL complexity , *DRONE aircraft , *MONOCULAR vision - Abstract
The application of simultaneous localization and mapping (SLAM) technology enables Unmanned Aerial Vehicles (UAVs) to autonomously perform obstacle avoidance in unknown environments. In this context, this review introduces frequently used visual SLAM (V-SLAM) methods, including the original MonoSLAM, oriented FAST and rotated BRIEF (ORB)-SLAM, semi-direct monocular visual odometry (SVO), and direct sparse odometry (DSO), categorizes them according to their front-end and back-end approaches, and compares their robustness in different environments and whether they support loop closure detection and global optimization. The application and development of visual SLAM techniques on UAVs in recent years are also reviewed, along with an outlook on the future of the V-SLAM field, including research on multi-UAV collaboration, the application of deep learning, and semantic SLAM. In addition, obstacle-avoidance algorithms frequently used in related research are introduced and compared, including the artificial potential field method, the rapidly-exploring random tree (RRT) algorithm, and the A-star (A*) algorithm, with properties such as determinism, guarantee of a globally optimal path, and computational complexity compared. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
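Of the planners compared in entry 36, A* is the one that guarantees a globally optimal path on a discretized map when its heuristic is admissible. A minimal grid-world sketch, not tied to any particular UAV implementation:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle, 0 = free).
    The Manhattan-distance heuristic never overestimates on this grid,
    so the returned path is a shortest path."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and not grid[r][c]:
                heapq.heappush(open_set,
                               (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None   # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))   # detours right around the obstacle row
```

RRT, by contrast, trades this optimality guarantee for fast exploration of continuous spaces, which is one of the trade-offs the review tabulates.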
37. Rapid 3D reconstruction of constant-diameter straight pipelines via single-view perspective projection
- Author
-
Jiasui Yao, Xiaoqi Cheng, Haishu Tan, Xiaosong Li, and Hengxing Zhao
- Subjects
monocular vision ,3D reconstruction ,constant-diameter straight pipeline ,apparent contour ,geometric constraint ,Physics ,QC1-999 - Abstract
Regular inspections of pipelines are of great significance to ensure their long-term safe and stable operation, and the rapid 3D reconstruction of constant-diameter straight pipelines (CDSP) based on monocular images plays a crucial role in tasks such as positioning and navigation for pipeline inspection drones, as well as defect detection on the pipeline surface. Most of the traditional 3D reconstruction methods for pipelines rely on marked poses or circular contours of end faces, which are complex and difficult to apply, while some existing 3D reconstruction methods based on contour features for pipelines have the disadvantage of slow reconstruction speed. To address the above issues, this paper proposes a rapid 3D reconstruction method for CDSP. This method solves for the spatial pose of the pipeline axis based on the geometric constraints between the projected contour lines and the axis, provided that the radius is known. These constraints are derived from the perspective projection imaging model of the single-view CDSP. Compared with traditional methods, the proposed method improves the reconstruction speed by 99.907% while maintaining similar accuracy.
- Published
- 2024
- Full Text
- View/download PDF
38. Design of an Amphibious Drone based on Monocular Vision for Lake Monitoring and Rescue
- Author
-
Ma, Qiancheng, Qi, Jundong, Fournier-Viger, Philippe, Series Editor, and Wang, Yulin, editor
- Published
- 2024
- Full Text
- View/download PDF
39. Research on the Parsing Algorithm of Monocular Visual Structured Data Based on YOLOv5
- Author
-
Lu, Wanli, Zhang, Wen, Sun, Mingrui, Zhang, Jindong, Howlett, Robert J., Series Editor, Jain, Lakhmi C., Series Editor, Kountchev (Deceased), Roumen, editor, Patnaik, Srikanta, editor, Wang, Wenfeng, editor, and Kountcheva, Roumiana, editor
- Published
- 2024
- Full Text
- View/download PDF
40. On Measuring Ship Height by Using Monocular Vision
- Author
-
Sun, Zhichao, Yu, Chang, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Tan, Kay Chen, Series Editor, Yu, Jianglong, editor, Liu, Yumeng, editor, and Li, Qingdong, editor
- Published
- 2024
- Full Text
- View/download PDF
41. UCorr: Wire Detection and Depth Estimation for Autonomous Drones
- Author
-
Kolbeinsson, Benedikt, Mikolajczyk, Krystian, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, and Röning, Juha, editor
- Published
- 2024
- Full Text
- View/download PDF
42. Motion Parameter Estimation and Synchronous Approximation Scheme of Non-cooperative Target
- Author
-
Yuan, Jingyi, Zhu, Yiman, Guo, Yu, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Tan, Kay Chen, Series Editor, Qu, Yi, editor, Gu, Mancang, editor, Niu, Yifeng, editor, and Fu, Wenxing, editor
- Published
- 2024
- Full Text
- View/download PDF
43. UAV Autonomous Landing Pose Estimation Using Monocular Vision Based on Cooperative Identification and Scene Reconstruction
- Author
-
Zhao, Xinyan, Ma, Lin, Qin, Danyang, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Wang, Wei, editor, Liu, Xin, editor, Na, Zhenyu, editor, and Zhang, Baoju, editor
- Published
- 2024
- Full Text
- View/download PDF
44. Simulation of Machine Vision AGV Autonomous Navigation Based on Monocular SLAM
- Author
-
Zhou, Sicheng, Liu, Peng, Zhou, MinYing, Zhang, Xuefan, Yao, YuJuan, Lv, ZhongRun, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Li, Yong, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Oneto, Luca, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zamboni, Walter, Series Editor, Zhang, Junjie James, Series Editor, Tan, Kay Chen, Series Editor, Song, Huihui, editor, Xu, Min, editor, Yang, Li, editor, Zhang, Linghao, editor, and Yan, Shu, editor
- Published
- 2024
- Full Text
- View/download PDF
45. Multi-scene application of intelligent inspection robot based on computer vision in power plant
- Author
-
Lulu Lin, Jianxian Guo, and Lincheng Liu
- Subjects
Monocular vision ,Automatic recognition ,Positioning technology ,Machine vision ,Medicine ,Science - Abstract
Abstract As industries develop, power plants are becoming increasingly automated and intelligent, and patrol robots are being deployed ever more widely. This research combines computer vision technology with the particle swarm optimization algorithm to build an obstacle recognition model and an obstacle avoidance model for an intelligent power-plant patrol robot. First, the traditional convolutional recurrent neural network is optimized and combined with the connectionist temporal classification algorithm to build the obstacle recognition model. Then, the traditional particle swarm optimization algorithm is improved with the artificial potential field method to build the obstacle avoidance model. Performance tests show that the recognition model achieved highest precision, recall, and F1 values of 0.978, 0.974, and 0.975, and the obstacle avoidance model achieved 0.97, 0.96, and 0.96, respectively. Both models outperform traditional methods in recognition effect and obstacle avoidance efficiency, providing an effective technical scheme for intelligent patrol inspection of power plants.
- Published
- 2024
- Full Text
- View/download PDF
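The obstacle avoidance model above pairs particle swarm optimization with the artificial potential field method. As a minimal sketch of the potential-field component only (gains, step size, and scenario are illustrative, not the paper's parameters):

```python
import math

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=1.0, d0=1.0, step=0.05):
    """One step of an artificial potential field planner: an attractive
    pull toward the goal plus a repulsive push from every obstacle
    closer than the influence radius d0. Gains and names are illustrative."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            # gradient of the standard repulsive potential 0.5*k*(1/d - 1/d0)^2
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 3
            fx += mag * dx
            fy += mag * dy
    norm = math.hypot(fx, fy) or 1.0  # move a fixed step along the net force
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

pos = (0.0, 0.0)
for _ in range(200):  # walk toward the goal while skirting the obstacle
    pos = apf_step(pos, goal=(5.0, 0.0), obstacles=[(2.5, 0.3)])
```

In the paper the particle swarm layer tunes the search globally; the pure potential-field step shown here is the local reactive part it augments.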
46. Untangling the genetic secrets of retinoblastoma.
- Author
-
Charters, Lynda and Marković, Leon
- Subjects
- *
MONOCULAR vision , *GENETIC testing , *CELL-free DNA , *AQUEOUS humor , *GENE silencing - Abstract
Retinoblastoma is a rare childhood cancer caused by a mutation in the RB1 gene. Despite decades of research, there is still much to learn about the genetic makeup of the disease. The availability of treatment options varies by country, with underdeveloped countries having limited access to advanced therapies. Genetic testing plays a crucial role in understanding the inheritance and predisposition to retinoblastoma. Newer techniques, such as cell-free DNA analysis, offer promising alternatives for diagnosis and monitoring. Further research is needed to identify biomarkers and improve diagnostic and treatment options. [Extracted from the article]
- Published
- 2024
47. Monocular Vision Guidance for Unmanned Surface Vehicle Recovery.
- Author
-
Li, Zhongguo, Xi, Qian, Shi, Zhou, and Wang, Qi
- Subjects
AUTONOMOUS vehicles ,MONOCULAR vision ,BINOCULAR vision ,MONOCULARS ,MOTHERS - Abstract
The positioning error of the GPS method at close distances is relatively large, rendering it incapable of accurately guiding unmanned surface vehicles (USVs) back to the mother ship. Therefore, this study proposes a near-distance recovery method for USVs based on monocular vision. By deploying a monocular camera on the USV to identify artificial markers on the mother ship and subsequently leveraging the geometric relationships among these markers, precise distance and angle information can be extracted. This enables effective guidance for the USVs to return to the mother ship. The experimental results validate the effectiveness of this approach, with positioning distance errors of less than 40 mm within a 10 m range and positioning angle errors of less than 5 degrees within a range of ±60 degrees. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
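Marker-based monocular ranging of this kind ultimately rests on the pinhole relation between a marker's known physical width and its measured pixel width. A minimal sketch assuming an idealized pinhole camera (the paper's method additionally exploits the geometric relations among several markers to recover angle as well; all numbers here are illustrative):

```python
def marker_distance_mm(focal_px, marker_width_mm, pixel_width_px):
    """Pinhole-model range to a planar, fronto-parallel marker of known
    physical width: Z = f * W / w, with the focal length f in pixels,
    the marker width W in mm, and the imaged width w in pixels."""
    return focal_px * marker_width_mm / pixel_width_px

# e.g. an 800 px focal length and a 500 mm marker imaged 50 px wide
z = marker_distance_mm(800.0, 500.0, 50.0)
```

The quoted sub-40 mm error at 10 m implies the markers' pixel widths are localized to well under a pixel, which is why the paper leans on geometric relations among multiple markers rather than a single width measurement.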
48. Object Detection, Tracking, and Prediction by Fusing Camera and LiDAR.
- Author
-
黄远宪, 周 剑, 黄 琦, 李必军, 王兰兰, and 朱佳琳
- Subjects
- *
CONVOLUTIONAL neural networks , *OPTICAL radar , *LIDAR , *STANDARD deviations , *POINT cloud - Abstract
Objectives: A real-time and robust 3D dynamic object perception module is a key part of an autonomous driving system. Methods: This paper fuses a monocular camera and light detection and ranging (LiDAR) to detect 3D objects. First, we use a convolutional neural network to detect 2D bounding boxes and generate a 3D frustum region of interest (ROI) according to the geometric projection relation between the camera and LiDAR. Then, we cluster the point cloud in the frustum ROI and fit the 3D bounding box of each object. After detecting 3D objects, we re-identify objects between adjacent frames using appearance features and the Hungarian algorithm, and propose a tracker management model based on a quad-state machine. Finally, a novel prediction model is proposed that leverages lane lines to constrain vehicle trajectories. Results: The experimental results show that in the target detection stage, the accuracy and recall of the proposed algorithm reach 92.5% and 86.7%, respectively. The root mean square error of the proposed trajectory prediction algorithm is smaller than that of existing algorithms on simulation datasets including straight lines, arcs, and spiral curves. The whole algorithm takes only approximately 25 ms, which meets the real-time requirements. Conclusions: The proposed algorithm is effective and efficient, and performs well on different lane lines. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
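The frustum-ROI step the abstract describes amounts to projecting LiDAR points through the camera extrinsics and intrinsics and keeping those that land inside a 2D detection box. A minimal sketch under idealized calibration (the matrices, box, and points are illustrative, not the paper's configuration):

```python
import numpy as np

def points_in_frustum(points_lidar, K, R, t, bbox):
    """Keep LiDAR points whose camera projection falls inside a 2D
    detection box (x_min, y_min, x_max, y_max) -- the frustum-ROI
    selection step. K is the 3x3 intrinsic matrix and (R, t) the
    LiDAR-to-camera extrinsics."""
    pts_cam = R @ points_lidar.T + t.reshape(3, 1)  # 3 x N, camera frame
    in_front = pts_cam[2] > 0                       # keep only points ahead
    uvw = K @ pts_cam                               # homogeneous pixel coords
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    x0, y0, x1, y1 = bbox
    mask = in_front & (u >= x0) & (u <= x1) & (v >= y0) & (v <= y1)
    return points_lidar[mask]

K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 10.0], [5.0, 0.0, 10.0]])  # one in view, one far right
roi = points_in_frustum(pts, K, R, t, bbox=(300, 220, 340, 260))
```

The points surviving this mask are what the paper then clusters and fits with a 3D bounding box.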
49. Early versus Delayed Vitrectomy for Open Globe Injuries: A Systematic Review and Meta-Analysis.
- Author
-
Quiroz-Reyes, Miguel A, Quiroz-Gonzalez, Erick A, Quiroz-Gonzalez, Miguel A, and Lima-Gómez, Virgilio
- Subjects
- *
MONOCULAR vision , *PROLIFERATIVE vitreoretinopathy , *VISUAL acuity , *RANDOMIZED controlled trials , *VITRECTOMY , *DATABASE searching - Abstract
Background: Open globe injuries (OGIs) are a leading cause of monocular blindness worldwide and require prompt intervention to prevent proliferative vitreoretinopathy (PVR) and endophthalmitis when serious intraocular damage occurs. The management of OGIs involves initial wound closure within 24 hours, followed by vitrectomy as a secondary surgery. However, there is a lack of consensus regarding the optimal timing of vitrectomy for maximizing visual outcomes. This meta-analysis aimed to investigate whether early or delayed vitrectomy leads to better outcomes in patients with OGIs. Methods: This review was conducted based on PRISMA guidelines. The Medline, Embase, Scopus, Cochrane Central Register of Controlled Trials, and ClinicalTrials.gov databases were searched (October 23, 2023). Clinical studies that used vitrectomy to manage OGIs as early (within 7 days) or delayed (8–14 days) interventions were included. Randomized controlled trials (RCTs) and non-RCTs were appraised using the Cochrane risk of bias and JBI tools, respectively. Results: Eleven studies met the inclusion criteria and were included in the quantitative analyses. There were 235 patients with OGIs who received early intervention and 211 patients who received delayed intervention. The retina was reattached in 91% and 76% of the patients after early and delayed intervention, respectively. Traumatic PVR was present in 9% and 41% of the patients in the early and delayed groups, respectively. The odds of retinal reattachment after vitrectomy were greater in the early group (OR = 3.42, p = 0.010, 95% CI = 1.34–8.72), and the odds of visual acuity ≥ 5/200 were 2.4 times greater in the early group. The incidence of PVR was significantly greater in the delayed surgery group (OR = 0.16, p < 0.0001; 95% CI = 0.06–0.39), which also required more than one vitrectomy surgery.
Conclusion: Early vitrectomy results in better postoperative visual acuity, a greater proportion of retinal reattachment, and a decreased incidence of PVR. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
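The pooled effect sizes in the abstract above are odds ratios with log-scale confidence intervals. The arithmetic can be sketched as follows, with purely hypothetical 2x2 counts rather than the review's pooled data:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table, (a/b) / (c/d), with a 95% CI
    computed on the log scale (Woolf method). The counts passed in
    below are purely illustrative."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# hypothetical reattached / not-reattached counts for two groups
or_, lo, hi = odds_ratio(90, 10, 75, 25)
```

A CI that excludes 1, as in the abstract's reattachment result, is what licenses the claim of a significant group difference.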
50. Central Retinal Artery Occlusion in Young Adults at High Altitude: Thin Air, High Stakes.
- Author
-
Rana, Vipin, Kumar, Pradeep, Bandopadhyay, Sandeepan, Sharma, Vijay K., Dangi, Meenu, Joshi, Dattakiran, Mishra, Sanjay Kumar, Srikumar, Satyabrat, and Arun, V.A.
- Subjects
- *
RETINAL artery occlusion , *RETINAL artery , *INTERNAL carotid artery , *YOUNG adults , *MONOCULAR vision , *ALTITUDES - Abstract
Rana, Vipin, Pradeep Kumar, Sandeepan Bandopadhyay, Vijay K. Sharma, Meenu Dangi, Dattakiran Joshi, Sanjay Kumar Mishra, Satyabrat Srikumar, and V.A. Arun. Central retinal artery occlusion in young adults at high altitude: thin air, high stakes. High Alt Med Biol. 00:000–000, 2024.—We present five cases of young security personnel who were posted at high altitude (HA) for a duration of at least 6 months and presented with a sudden decrease of vision in one eye. The diagnosis of central retinal artery occlusion (CRAO) was made in all patients. Fundus fluorescein angiography and optical coherence tomography of the macula supported the diagnosis. None of these cases had any preexisting comorbidities. Erythrocytosis was noticed in all patients, and two of them had hyperhomocysteinemia. Four out of five patients showed either middle cerebral artery or internal carotid artery (ICA) thrombosis on computed tomography angiography. The patients were managed by a team of ophthalmologist, hematologist, vascular surgeon, and neurologist. In cases of incomplete ICA occlusion, patients were managed surgically. However, in the case of complete ICA occlusion, management was conservative with antiplatelet drugs. This case series highlights HA-associated erythrocytosis and hyperhomocysteinemia as important risk factors for CRAO in young individuals stationed at HA. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF