50 results for "Sang X"
Search Results
2. Real-time representation and rendering of high-resolution 3D light field based on texture-enhanced optical flow prediction.
- Author
Li N, Yu X, Gao X, Yan B, Li D, Hong J, Tong Y, Wang Y, Hu Y, Ning C, He J, Ji L, and Sang X
- Abstract
Three-dimensional (3D) light field displays can provide an immersive visual perception and have attracted widespread attention, especially in 3D light field communications, where they can provide face-to-face communication experiences. However, due to limitations in 3D reconstruction and dense-view rendering efficiency, generating high-quality 3D light field content in real time remains a challenge. Traditional 3D light field capturing and reconstruction methods suffer from high reconstruction complexity and low rendering efficiency. Here, a real-time optical flow representation for the high-resolution light field is proposed. Based on the principle of the 3D light field display, we use optical flow to ray-trace and multiplex sparse-view pixels, synthesizing 3D light field images during the real-time view interpolation process. In addition, we built a complete capturing-display system to verify the effectiveness of our method. Experimental results show that the proposed method can synthesize 8K 3D light field videos containing 100 views in real time: the PSNR of the virtual views is around 32 dB, the SSIM is over 0.99, and the rendered frame rate is 32 fps. Qualitative experimental results show that this method can be used for high-resolution 3D light field communication.
- Published
- 2024
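The view interpolation pipeline this abstract describes (estimate optical flow between sparse views, then multiplex pixels into dense virtual views) can be illustrated with a minimal backward-warping sketch. This is a generic NumPy illustration under simple assumptions (bilinear sampling, border clamping), not the authors' implementation:

```python
import numpy as np

def backward_warp(src, flow):
    """Synthesize a virtual view by sampling the source view at
    positions displaced by a per-pixel optical-flow field.

    src  : (H, W, 3) float array, source view
    flow : (H, W, 2) float array, (dx, dy) displacement per pixel
    """
    H, W = src.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    # Target pixel (x, y) looks up source pixel (x + dx, y + dy).
    sx = np.clip(xs + flow[..., 0], 0, W - 1)
    sy = np.clip(ys + flow[..., 1], 0, H - 1)
    # Bilinear interpolation between the four neighbouring pixels.
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = (sx - x0)[..., None], (sy - y0)[..., None]
    top = src[y0, x0] * (1 - wx) + src[y0, x1] * wx
    bot = src[y1, x0] * (1 - wx) + src[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

An intermediate view at fraction t between two cameras is then approximated by warping with t times the flow field before light-field multiplexing.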
3. Arbitrary stylized light-field generation for three-dimensional light-field displays based on radiance fields.
- Author
Ji L, Sang X, Xing S, Yu X, Yan B, Shen S, Yang Z, Yang J, and Li D
- Abstract
Three-dimensional (3D) light-field display technology can reconstruct the spatial characteristics of 3D scenes and provide users with an immersive visual experience without the need for any additional external devices. Here, an arbitrary stylized light-field generation method for 3D light-field displays is presented, and the tripartite constraints are addressed by conducting style transformation in a refined feature space. A multi-dimensional feature refinement module is designed to learn which aspects and regions should be prioritized within the high-level feature grid of the scene, which allows content and style patterns to be better matched and blended. To preserve more structural details and reduce artifacts, a novel joint loss function combining global quality and local structure is used for optimization. A mask-guided light-field coding method based on ray casting is employed to accelerate the synthesis of stylized light-field images. Experimental results demonstrate that the proposed method can generate higher-quality stylized 3D light-field content with any given style in a zero-shot manner. Additionally, the method provides two user-control extensions that further enrich personalized editing of 3D content on 3D light-field displays.
- Published
- 2024
4. Vertically spliced tabletop light field cave display with extended depth content and separately optimized compound lens array.
- Author
Yu X, Dong H, Gao X, Li H, Zhang Z, Fu B, Pei X, Wen X, Zhao S, Yan B, and Sang X
- Abstract
Tabletop three-dimensional light field display is a compelling display technology that can simultaneously provide stereoscopic vision to multiple viewers surrounding the lateral side of the device. However, if a flat-panel light field display device is simply placed horizontally and viewed from directly above, the visual frustum is tilted and 3D content outside the display panel becomes invisible; the large oblique viewing angle also leads to serious aberrations. In this paper, we demonstrate what we believe to be a new vertically spliced light field cave display system with extended-depth content. Separate optimization of the different compound lens arrays attenuates the aberrations arising at different oblique viewing angles, and a local heating fitting method ensures the accuracy of the fabrication process. The image coding method and the correction of the multiple viewpoints realize the correct construction of spliced voxels. In the experiment, a high-definition, precisely spliced 3D city terrain scene is demonstrated on the prototype with a correct oblique perspective over a 100-degree horizontal viewing range. We envision that this research will provide inspiration for future immersive large-scale glasses-free virtual reality display technologies.
- Published
- 2024
5. Smooth motion parallax method for 3D light-field displays with a narrow pitch based on optimizing the light beam divergence angle.
- Author
Yu X, Li J, Gao X, Yan B, Li H, Wang Y, and Sang X
- Abstract
The three-dimensional (3D) light field display (LFD) with dense views can provide smooth motion parallax for the human eye. However, increasing the number of views widens the lens pitch, which in turn decreases the view resolution. In this paper, an approach to smooth motion parallax based on optimizing the divergence angle of the light beam (DALB) for 3D LFDs with a narrow pitch is proposed. The DALB is controlled by the lens design. A view-fitting optimization algorithm is established based on a mathematical model relating the DALB to the view distribution, and the lens is then reverse-designed from the optimization results. A co-designed convolutional neural network (CNN) is used to implement the algorithm. Optical experiments show that smooth-motion-parallax 3D images are achievable with the proposed method.
- Published
- 2024
6. 360-degree directional micro prism array for tabletop flat-panel light field displays.
- Author
Yu X, Dong H, Gao X, Fu B, Pei X, Zhao S, Yan B, and Sang X
- Abstract
Tabletop light field displays are compelling display technologies that offer stereoscopic vision and can present annular viewpoint distributions to multiple viewers around the display device. When a lens array is employed to realize an integral-imaging tabletop light field display, there is a critical trade-off between angular resolution and spatial resolution. Moreover, because the viewers are around the device, the central viewing range of the reconstructed 3D images is wasted. In this paper, we explore what we believe to be a new method for realizing tabletop flat-panel light field displays that improves pixel-utilization efficiency and the angular resolution of the tabletop 3D display. A 360-degree directional micro prism array is newly designed to refract the collimated light rays to different viewing positions and form viewpoints, so that a uniform 360-degree annular viewpoint distribution can be accurately formed. In the experiment, a micro prism array sample is fabricated to verify the performance of the proposed tabletop flat-panel light field display system. One hundred viewpoints are uniformly distributed in the 360-degree viewing area, providing a full-color, smooth-parallax 3D scene.
- Published
- 2023
7. Portrait stylized rendering for 3D light-field display based on radiation field and example guide.
- Author
Shen S, Xing S, Sang X, Yan B, Xie X, Fu B, Zhong C, and Zhang S
- Abstract
With the development of three-dimensional (3D) light-field display technology, 3D scenes with correct location and depth information can be perceived without wearing any external device. Traditional portrait stylization methods can only generate 2D stylized portrait images, and it is difficult to produce high-quality stylized portrait content for 3D light-field displays, which require content with accurate depth and spatial information that 2D images alone cannot provide. New portrait stylization techniques are therefore needed to meet the requirements of 3D light-field displays. A portrait stylization method for 3D light-field displays is proposed that maintains the consistency of dense views on the light-field display when the 3D stylized portrait is generated. An example-based portrait stylization method is used to migrate the designated style image to the portrait image, which prevents the loss of contour information in 3D light-field portraits. To minimize the diversity in color information and further constrain the contour details of portraits, a Laplacian loss function is introduced into the pre-trained deep learning model. The three-dimensional representation of the stylized portrait scene is reconstructed, and the stylized 3D light-field image of the portrait is generated with the mask-guided light-field coding method. Experimental results demonstrate the effectiveness of the proposed method, which can use real portrait photos to generate high-quality 3D light-field portrait content.
- Published
- 2023
8. True-color light-field display system with large depth-of-field based on joint modulation for size and arrangement of halftone dots.
- Author
Yu X, Zhang Z, Liu B, Gao X, Qi H, Hu Y, Zhang K, Liu K, Zhang T, Wang H, Yan B, and Sang X
- Abstract
A true-color light-field display system with a large depth-of-field (DOF) is demonstrated. Reducing crosstalk between viewpoints and increasing viewpoint density are the keys to realizing a light-field display system with a large DOF. The aliasing and crosstalk of light beams in the light control unit (LCU) are reduced by adopting a collimated backlight and reversely placing the aspheric cylindrical lens array (ACLA). One-dimensional (1D) light-field encoding of halftone images increases the number of controllable beams within the LCU and improves viewpoint density, but it also decreases the color depth of the light-field display system. Joint modulation for the size and arrangement of halftone dots (JMSAHD) is therefore used to increase color depth. In the experiment, a three-dimensional (3D) model was constructed using halftone images generated by JMSAHD, and a light-field display system with a viewpoint density of 1.45 viewpoints per degree of view and a DOF of 50 cm was achieved at a 100° viewing angle.
- Published
- 2023
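The halftone light-field encoding in this abstract builds on binary halftone images. As background, classic Floyd-Steinberg error diffusion shows how a grayscale image is reduced to binary dots while preserving local tone; the paper's JMSAHD jointly modulates dot size and arrangement, which this generic sketch does not attempt:

```python
import numpy as np

def floyd_steinberg(gray):
    """Binarize a grayscale image (values in [0, 1]) by diffusing
    each pixel's quantization error to its unprocessed neighbours."""
    img = gray.astype(np.float64).copy()
    H, W = img.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            # Standard Floyd-Steinberg weights: 7/16 right,
            # 3/16 below-left, 5/16 below, 1/16 below-right.
            if x + 1 < W:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < H:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < W:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

On a flat mid-gray input the output is a binary dot pattern whose mean stays close to the input gray level, which is the tone-preservation property halftone light-field coding relies on.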
9. Image edge smoothing method for light-field displays based on joint design of optical structure and elemental images.
- Author
Yu X, Li H, Su X, Gao X, Sang X, and Yan B
- Abstract
Image visual quality is of fundamental importance for three-dimensional (3D) light-field displays. The pixels of a light-field display are enlarged after imaging through the light-field system, increasing the graininess of the image, which leads to a severe decline in image edge smoothness and overall image quality. In this paper, a joint optimization method is proposed to minimize the "sawtooth edge" phenomenon of reconstructed images in light-field display systems. In the joint optimization scheme, neural networks are used to simultaneously optimize the point spread functions of the optical components and the elemental images, and the optical components are designed based on the results. Simulations and experimental data show that a less grainy 3D image is achievable with the proposed joint edge-smoothing method.
- Published
- 2023
10. Real-time light-field generation based on the visual hull for the 3D light-field display with free-viewpoint texture mapping.
- Author
Yang Z, Sang X, Yan B, Chen D, Wang P, Wan H, Chen S, and Li J
- Abstract
Real-time dense view synthesis based on three-dimensional (3D) reconstruction of real scenes is still a challenge for 3D light-field displays: reconstructing an entire model and then synthesizing the target views by volume rendering is time-consuming. To address this issue, the Light-field Visual Hull (LVH) is presented with free-viewpoint texture mapping for 3D light-field display; it directly produces synthetic images from the 3D reconstruction of real scenes in real time based on forty free-viewpoint RGB cameras. An end-to-end subpixel calculation procedure for the synthetic image is demonstrated, which defines a rendering ray for each subpixel based on light-field image coding. During ray propagation, only the essential spatial point of the target model is located for the corresponding subpixel by projecting the frontmost point of the ray to all the free viewpoints, and the color of each subpixel is identified in one pass. A dynamic free-viewpoint texture mapping method is proposed to resolve the correct texture given the free-viewpoint cameras. To improve efficiency, only the visible 3D positions and textures that contribute to the synthetic image are calculated, based on backward ray tracing, rather than computing the entire 3D model and generating all elemental images. In addition, an incremental calibration method that divides the cameras into groups is proposed to ensure accuracy. Experimental results show the validity of the method: all rendered views are analyzed to justify the texture mapping method, and the PSNR is improved by an average of 11.88 dB. Finally, the LVH achieves a natural and smooth viewing effect at 4K resolution and a frame rate of 25-30 fps with a large viewing angle.
- Published
- 2023
11. Fast virtual view synthesis for an 8K 3D light-field display based on cutoff-NeRF and 3D voxel rendering.
- Author
Chen S, Yan B, Sang X, Chen D, Wang P, Yang Z, Guo X, and Zhong C
- Abstract
Three-dimensional (3D) light-field displays can provide an immersive visual experience, which has attracted significant attention. However, generating high-quality 3D light-field content of the real world is still a challenge because it is difficult to capture dense high-resolution viewpoints of a real scene with a camera array. CNN-based novel view synthesis can generate dense high-resolution viewpoints from sparse inputs but suffers from high computational resource consumption, low rendering speed, and a limited camera baseline. Here, a two-stage virtual view synthesis method based on cutoff-NeRF and 3D voxel rendering is presented, which can quickly synthesize dense novel views with smooth parallax and 3D images with a resolution of 7680 × 4320 for the 3D light-field display. In the first stage, an image-based cutoff-NeRF is proposed to implicitly represent the distribution of scene content and improve the quality of the virtual views. In the second stage, a 3D voxel-based image rendering and coding algorithm is presented, which quantizes the scene content distribution learned by cutoff-NeRF to quickly render high-resolution virtual views and output high-resolution 3D images. Within it, a coarse-to-fine 3D voxel rendering method is proposed to effectively improve the accuracy of the voxel representation, and a 3D voxel-based off-axis pixel encoding method is proposed to speed up 3D image generation. Finally, a sparse-view dataset was built to analyze the effectiveness of the proposed method. Experimental results demonstrate the method's effectiveness: it can quickly synthesize high-resolution novel views and 3D images in real 3D scenes and physical simulation environments. The PSNR of the virtual views is about 29.75 dB, the SSIM is about 0.88, and the synthesis time for an 8K 3D image is about 14.41 s. We believe this fast high-resolution virtual viewpoint synthesis method can effectively advance the application of 3D light-field displays.
- Published
- 2022
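The PSNR figures quoted throughout these abstracts (about 29.75 dB here, around 32 dB in entry 2) are the standard full-reference metric and can be computed in a few lines; the peak value of 1.0 for normalized images is an assumption of this sketch:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and a test image, both scaled to [0, peak]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For 8-bit images, pass `peak=255`. SSIM, the other metric these abstracts report, additionally compares local luminance, contrast, and structure statistics rather than raw pixel error.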
12. Real-time realistic computer-generated hologram with accurate depth precision and a large depth range.
- Author
Zhong C, Sang X, Yan B, Li H, Chen D, and Qin X
- Abstract
Holographic display is an ideal technology for near-eye display to realize virtual and augmented reality applications, because it can provide all depth perception cues. However, existing computer-generated hologram (CGH) methods sacrifice depth performance for real-time calculation. In this paper, a volume representation and an improved ray tracing algorithm are proposed for real-time CGH generation with enhanced depth performance. Using the single fast Fourier transform (S-FFT) method, the volume representation enables a low calculation burden and is efficient for a Graphics Processing Unit (GPU) to implement the diffraction calculation. The improved ray tracing algorithm accounts for accurate depth cues in complex 3D scenes with reflection and refraction, which are represented by adding extra shapes in the volume. Numerical evaluation is used to verify the depth precision, and experiments show that the proposed method provides a real-time interactive holographic display with accurate depth precision and a large depth range: a CGH of a 3D scene with 256 depth values is calculated at 30 fps, and the depth range can reach hundreds of millimeters. Depth cues of reflected and refracted images are also reconstructed correctly. The proposed method significantly outperforms existing fast methods by achieving a more realistic 3D holographic display with ideal depth performance and real-time calculation at the same time.
- Published
- 2022
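The single fast Fourier transform (S-FFT) diffraction calculation named in this abstract is the textbook Fresnel propagation that needs only one FFT between two quadratic-phase (chirp) multiplications. A minimal sketch, with constant amplitude and phase factors dropped and a square grid assumed; this is generic Fourier optics, not the authors' GPU implementation:

```python
import numpy as np

def fresnel_sfft(u0, wavelength, z, dx):
    """Single-FFT Fresnel diffraction of a complex field u0 sampled
    at pitch dx, propagated over distance z.  Returns the propagated
    field and the output-plane pitch, which the S-FFT method scales
    to wavelength * z / (N * dx)."""
    N = u0.shape[0]
    k = 2 * np.pi / wavelength
    n = np.arange(N) - N // 2
    x0 = n * dx                          # input-plane coordinates
    X0, Y0 = np.meshgrid(x0, x0)
    dx_out = wavelength * z / (N * dx)   # output-plane pitch
    x1 = n * dx_out
    X1, Y1 = np.meshgrid(x1, x1)
    # Pre-chirp, one FFT, post-chirp (constant factors omitted).
    pre = np.exp(1j * k / (2 * z) * (X0 ** 2 + Y0 ** 2))
    post = np.exp(1j * k / (2 * z) * (X1 ** 2 + Y1 ** 2))
    U = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u0 * pre)))
    return post * U, dx_out
```

Because both chirps have unit magnitude, the unnormalized FFT's Parseval relation still holds, which makes the routine easy to sanity-check.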
13. Real-time dense-view imaging for three-dimensional light-field display based on image color calibration and self-supervised view synthesis.
- Author
Guo X, Sang X, Yan B, Wang H, Ye X, Chen S, Wan H, Li N, Zeng Z, Chen D, Wang P, and Xing S
- Abstract
Three-dimensional (3D) light-field displays have achieved promising improvements in recent years. However, since dense-view images cannot be collected quickly in real-world 3D scenes, real-time 3D light-field display is still challenging to achieve in real scenes, especially at high display resolutions. Here, a real-time dense-view 3D light-field display method is proposed based on image color correction and self-supervised optical flow estimation, realizing high quality and a high frame rate simultaneously. A sparse camera array is first used to capture sparse-view images. To eliminate the color deviation of the sparse views, the imaging process of the camera is analyzed, and a practical multi-layer perceptron (MLP) network is proposed to perform color calibration. Given sparse views with consistent color, the optical flow is estimated at high speed by a lightweight convolutional neural network (CNN) that learns the flow from the input image pairs in a self-supervised manner. Finally, dense-view images are synthesized with an inverse warping operation. Quantitative and qualitative experiments are performed to evaluate the feasibility of the proposed method. Experimental results show that over 60 dense-view images at a resolution of 1024 × 512 can be generated from 11 input views at a frame rate over 20 fps, which is 4× faster than the previous optical flow estimation methods PWC-Net and LiteFlowNet3. Finally, large viewing angles and high-quality 3D light-field display at 3840 × 2160 resolution are achieved in real time.
- Published
- 2022
14. Automatic co-design of light field display system based on simulated annealing algorithm and visual simulation.
- Author
Chen Y, Sang X, Xing S, Guan Y, Zhang H, and Wang K
- Abstract
Accurate, fast, and reliable modeling and optimization methods play a crucial role in designing light field display (LFD) systems. Here, an automatic co-design method for LFD systems based on simulated annealing and visual simulation is proposed. The processes of LFD content acquisition and optical reconstruction are modeled and simulated, and an objective function for evaluating the display effect of the LFD system is established from the simulation results. Simulated annealing is then used to find the LFD system parameters that maximize the objective function. The validity of the proposed method is confirmed through optical experiments.
- Published
- 2022
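The simulated-annealing loop this abstract describes (perturb the design parameters, always accept improvements, accept worse designs with a temperature-dependent probability, and cool down) can be sketched generically. The objective below is a toy stand-in with a known peak, since the paper's actual objective comes from visual simulation:

```python
import math
import random

def simulated_annealing(objective, x0, step, t0=1.0, cooling=0.995,
                        iters=5000, seed=0):
    """Generic simulated-annealing maximizer: perturb the current
    parameters, accept improvements unconditionally, and accept
    worse moves with probability exp(delta / T) while T cools
    geometrically."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    T = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = objective(cand)
        delta = fc - fx
        if delta >= 0 or rng.random() < math.exp(delta / T):
            x, fx = cand, fc
            if fx > fbest:
                best, fbest = list(x), fx
        T *= cooling
    return best, fbest

# Toy stand-in for the display-quality objective: peak at (1, 2).
obj = lambda p: -((p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2)
params, score = simulated_annealing(obj, [0.0, 0.0], step=0.5)
```

Geometric cooling means the search behaves like a random walk early on and like hill climbing near the end, which is what lets it escape local optima of a noisy simulated objective.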
15. Improvement of a floating 3D light field display based on a telecentric retroreflector and an optimized 3D image source.
- Author
Gao X, Yu X, Sang X, Liu L, and Yan B
- Abstract
For a floating three-dimensional (3D) display system using a prism-type retroreflector, non-retroreflected light and a blurred 3D image source are the two key causes of deteriorated image quality. In the present study, ray tracing is used to analyze the light distribution of a retroreflector at different incident angles. Based on this analysis, a telecentric retroreflector (TCRR) is proposed to suppress non-retroreflected light without sacrificing the viewing angle, and a contrast transfer function (CTF) is used to evaluate its optical performance. To improve the 3D image source, the relationship between the root mean square (RMS) of the voxels and the 3D image quality is discussed, and an aspheric lens array is designed to reduce aberrations. Computational simulations show that the structural similarity (SSIM) of the 3D image source increases to 0.9415. An experimental prototype combining the TCRR and the optimized 3D image source is then built. Experimental analysis demonstrates that the proposed method suppresses non-retroreflected light and improves the 3D image source; in particular, a clear floating 3D image with a floating distance of 70 mm and a viewing angle of 50° is achieved.
- Published
- 2021
16. Real-time optical reconstruction for a three-dimensional light-field display based on path-tracing and CNN super-resolution.
- Author
Guo X, Sang X, Chen D, Wang P, Wang H, Liu X, Li Y, Xing S, and Yan B
- Abstract
Three-dimensional (3D) light-field displays play a vital role in realizing 3D display, but real-time, high-quality 3D light-field display is difficult because super-high-resolution 3D light-field images are hard to generate in real time. Although extensive research has been carried out on fast 3D light-field image generation, no existing method satisfies real-time 3D image generation and display at super-high resolutions such as 7680×4320. To fulfill real-time 3D light-field display with super-high resolution, a two-stage 3D image generation method based on path tracing and image super-resolution (SR) is proposed, which takes less time to render 3D images than previous methods. In the first stage, path tracing is used to generate low-resolution 3D images with sparse views based on Monte Carlo integration. In the second stage, a lite SR algorithm based on a generative adversarial network (GAN) up-samples the low-resolution 3D images to high-resolution 3D images of dense views with photo-realistic image quality. To implement the second stage efficiently and effectively, the elemental images (EIs) are super-resolved individually for better image quality and geometric accuracy, and a foreground selection scheme based on ray casting is developed to improve rendering performance. Finally, the output EIs from the CNN are used to recompose the high-resolution 3D images. Experimental results demonstrate that real-time 3D light-field display over 30 fps at 8K resolution can be realized while the structural similarity (SSIM) remains over 0.90. It is hoped that the proposed method will contribute to the field of real-time 3D light-field display.
- Published
- 2021
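Path tracing, which the first stage above uses, estimates pixel radiance by Monte Carlo integration: averaging the integrand at random samples. The estimator itself is simple, shown here on a one-dimensional toy integral rather than the rendering equation:

```python
import random

def monte_carlo_integrate(f, a, b, n, seed=0):
    """Estimate the integral of f over [a, b] by averaging f at n
    uniform random samples -- the same estimator path tracing applies,
    per pixel, to the rendering equation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f(rng.uniform(a, b))
    return (b - a) * total / n

# Integral of x^2 over [0, 1] is 1/3; the error shrinks as O(1/sqrt(n)),
# which is why path-traced images start noisy and need many samples.
est = monte_carlo_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
```

The O(1/sqrt(n)) convergence also motivates the paper's two-stage design: render few-sample, low-resolution views cheaply, then let the SR network recover the detail.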
17. 3D light-field display with an increased viewing angle and optimized viewpoint distribution based on a ladder compound lenticular lens unit.
- Author
Liu L, Sang X, Yu X, Gao X, Wang Y, Pei X, Xie X, Fu B, Dong H, and Yan B
- Abstract
Three-dimensional (3D) light-field displays (LFDs) suffer from a narrow viewing angle, limited depth range, and low spatial information capacity, which limit their diversified application. Because the number of pixels used to construct 3D spatial information is limited, increasing the viewing angle reduces the viewpoint density, which degrades the 3D performance. A solution based on a holographic functional screen (HFS) and a ladder-compound lenticular lens unit (LC-LLU) is proposed to increase the viewing angle while optimizing viewpoint utilization. The LC-LLU and HFS are used to create 160 non-uniformly distributed viewpoints with low crosstalk, which increases the viewpoint density in the middle viewing zone and provides clear monocular depth cues; the corresponding coding method is presented as well. The optimized compound lenticular lens array balances aberration suppression against display quality. Simulations and experiments show that the proposed 3D LFD can present natural 3D images with correct perspective and occlusion relationships within a 65° viewing angle.
- Published
- 2021
18. Aberration correction based on a pre-correction convolutional neural network for light-field displays.
- Author
Yu X, Li H, Sang X, Su X, Gao X, Liu B, Chen D, Wang Y, and Yan B
- Abstract
Lens aberrations degrade the image quality and limit the viewing angle of light-field displays. In the present study, an approach to aberration reduction based on a pre-correction convolutional neural network (CNN) is demonstrated. The pre-correction CNN, built and trained on the aberrations of the lens array, transforms the elemental image array (EIA) generated by a virtual camera array into a pre-corrected EIA (PEIA). The PEIA, rather than the EIA, is presented on the liquid crystal display, and higher-quality 3D images are obtained via the optical transformation of the lens array. The validity of the proposed method is confirmed through simulations and optical experiments, and a 70-degree viewing-angle light field display with improved image quality is demonstrated.
- Published
- 2021
19. Virtual view synthesis for 3D light-field display based on scene tower blending.
- Author
Chen D, Sang X, Wang P, Yu X, Gao X, Yan B, Wang H, Qi S, and Ye X
- Abstract
Three-dimensional (3D) light-field displays have achieved great improvements. However, the collection of dense viewpoints in real 3D scenes is still a bottleneck. Virtual views can be generated by unsupervised networks, but the quality of different views is inconsistent because the networks are trained separately on each posed view. Here, a virtual view synthesis method for the 3D light-field display based on scene tower blending is presented, which can synthesize high-quality virtual views with correct occlusions by blending all tower results, so that dense viewpoints with smooth motion parallax can be provided on the 3D light-field display. Posed views are combinatorially input into diverse unsupervised CNNs to predict the respective input-view towers, and towers of the same viewpoint are fused together. All posed-view towers are blended into a scene color tower and a scene selection tower, so that the 3D scene distributions at different depth planes can be accurately estimated. The blended scene towers are soft-projected to synthesize virtual views with correct occlusions, and a denoising network is used to improve the image quality of the final synthetic views. Experimental results demonstrate the validity of the proposed method, which shows outstanding performance under various disparities: the PSNR of the virtual views is about 30 dB and the SSIM is above 0.91. We believe this view synthesis method will be helpful for future applications of the 3D light-field display.
- Published
- 2021
20. Analysis and removal of crosstalk in a time-multiplexed light-field display.
- Author
Liu B, Sang X, Yu X, Ye X, Gao X, Liu L, Gao C, Wang P, Xie X, and Yan B
- Abstract
Time-multiplexed light-field displays (TMLFDs) can provide natural and realistic three-dimensional (3D) performance with a wide 120° viewing angle, which offers broad potential applications in 3D electronic sand table (EST) technology. However, current TMLFDs suffer from severe crosstalk, which can lead to image aliasing and distortion of the depth information. In this paper, the mechanisms underlying the emergence of crosstalk in TMLFD systems are identified and analyzed. The results indicate that the specific structure of the slanted lenticular lens array (LLA) and the non-uniformity of the emergent light distribution in the lens elements are the two main factors responsible for the crosstalk. To produce clear depth perception and improve image quality, a novel ladder-type LCD sub-pixel arrangement and a compound lens with three aspheric surfaces are proposed and introduced into a TMLFD to reduce the two types of crosstalk, respectively. Crosstalk simulations demonstrate the validity of the proposed methods, and structural similarity (SSIM) simulations together with light-field reconstruction experiments indicate that aliasing is effectively reduced and depth quality is significantly improved over the entire viewing range. In addition, a tabletop 3D EST based on the proposed TMLFD is presented. The proposed approaches to crosstalk reduction are also compatible with other lenticular-lens-based 3D displays.
- Published
- 2021
21. Space-division-multiplexed catadioptric integrated backlight and symmetrical triplet-compound lenticular array based on ORM criterion for 90-degree viewing angle and low-crosstalk directional backlight 3D light-field display.
- Author
Gao C, Sang X, Yu X, Gao X, Du J, Liu B, Liu L, Wang P, and Yan B
- Abstract
A novel optical reverse mapping (ORM) method and an ORM criterion are proposed to evaluate the relationship between the aberrations of a directional-backlight (DB) 3D light-field display system and its crosstalk. Based on the ORM criterion, a space-division-multiplexed catadioptric integrated backlight (SCIB) and a symmetrical triplet-compound lenticular array (triplet LA) are designed. The SCIB is composed of a hybrid Fresnel integrated backlight unit (hybrid Fresnel unit) and a space-division-multiplexed microprism unit (microprism unit). The hybrid Fresnel unit provides directional light with a divergence angle of 2.4 degrees, achieving an average uniformity of 83.02%. The microprism unit modulates the directional light distribution into three predetermined directions to establish a 90-degree viewing area. Combined with the SCIB, the triplet LA suppresses aberrations and reduces crosstalk. In the experiment, a DB 3D light-field display system based on the SCIB and triplet LA is set up, and the displayed light-field 3D image can be observed over a 90-degree viewing angle. Compared to a conventional DB 3D display system, the light-field 3D image is aberration-suppressed, and the SSIM value is improved from 0.8462 to 0.9618. Crosstalk measurements show an average crosstalk of 3.49%, a minimum of 2.31%, and a maximum of 4.52%; the crosstalk values across the 90-degree viewing angle are all below 5%.
- Published
- 2020
22. Parallel multi-view polygon rasterization for 3D light field display.
- Author
Guan Y, Sang X, Xing S, Chen Y, Li Y, Chen D, Yu X, and Yan B
- Abstract
Three-dimensional (3D) light field displays require image data captured from a large number of regularly spaced camera viewpoints to produce a 3D image. Generating these images sequentially is generally inefficient because a large number of rendering operations are repeated across different viewpoints, and current 3D image generation algorithms based on traditional single-viewpoint computer graphics techniques are not well suited to generating images for light field displays. A highly parallel multi-view polygon rasterization (PMR) algorithm for 3D multi-view image generation is presented. Based on the coherence of the triangle rasterization calculations among different viewpoints, the related rasterization algorithms, including primitive setup, plane functions, and barycentric coordinate interpolation in screen space, are derived. To verify the proposed algorithm, a hierarchical soft rendering pipeline with the GPU is designed and implemented. Several groups of images of 3D objects are used to verify the performance of the PMR method, and correct 3D light field images can be achieved in real time.
- Published
- 2020
- Full Text
- View/download PDF
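The plane-function and barycentric-coordinate machinery referred to in the abstract above can be sketched in a few lines. This is a generic illustration of edge-function rasterization, not the authors' PMR pipeline; the function names and the per-view horizontal-shift model are assumptions for illustration:

```python
def edge(a, b, p):
    # "Plane function": signed area, > 0 when p lies left of edge a->b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(v0, v1, v2, width, height):
    """Cover-test every pixel center against one triangle and return
    the barycentric weights of the covered pixels."""
    area = edge(v0, v1, v2)
    frags = {}
    if area == 0:
        return frags
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)
            w0 = edge(v1, v2, p)
            w1 = edge(v2, v0, p)
            w2 = edge(v0, v1, p)
            # Accept both windings so the test is orientation-agnostic.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                frags[(x, y)] = (w0 / area, w1 / area, w2 / area)
    return frags

def shift_vertex(v, disparity):
    # Multi-view coherence: for a horizontal-parallax display each view
    # only shifts vertex x by its disparity, so triangle setup can be
    # done once and the per-view work reduced to this shift.
    return (v[0] + disparity, v[1])
```

The coherence exploited by such a scheme is that `area` and the per-vertex attributes are shared among views; only the shifted edge tests change.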
23. Design, characterization, and fabrication of a 90-degree viewing angle catadioptric retroreflector floating device used in a 3D floating light-field display system.
- Author
-
Gao C, Sang X, Yu X, Gao X, Du J, Liu B, Liu L, and Wang P
- Abstract
A novel catadioptric retroreflector floating device (CRA) used in the 3D floating light-field system is proposed. The floating light-field image constructed by the CRA is aberration-suppressed, and the luminance and contrast of the image are substantially improved over a 90-degree viewing angle. The CRA is composed of the designed catadioptric retroreflector (CR). The CR consists of three lenses: the first and second lenses refract the light, and the rear surface of the third lens is coated with a reflective coating to reflect the incident light. The CRA is manufacturable, and the fabrication process using UV embossing is also described. A spectrophotometer is used to measure the retroreflective efficiency of the CRA, which averages 80.1%. A beam quality analyzer is used to measure the beam spot quality of the CRA, and the image quality satisfies the requirements of human eye observation. In the experiment, compared to the floating light-field image constructed by the micro-bead type retroreflector floating device (MRA), the image quality of the floating light-field image constructed by the CRA is significantly enhanced. In the quantitative computer simulation, the PSNR values of the images are increased from 23.0185 to 32.1958.
- Published
- 2020
- Full Text
- View/download PDF
24. Time-multiplexed light field display with 120-degree wide viewing angle.
- Author
-
Liu B, Sang X, Yu X, Gao X, Liu L, Gao C, Wang P, Le Y, and Du J
- Abstract
The light field display can provide vivid and natural 3D performance, which can find many applications, such as relics research and exhibition. However, current light field displays are constrained by the viewing angle, which falls short of expectations. With three groups of directional backlights and a fast-switching LCD panel, a time-multiplexed light field display with a 120-degree wide viewing angle is demonstrated. Up to 192 views are constructed within the viewing range to ensure correct geometric occlusion and smooth motion parallax. Clear 3D images can be perceived across the entire viewing range. Additionally, the designed holographic functional screen is used to recompose the light distribution, and the compound aspheric lens array is optimized to balance the aberrations and improve the 3D display quality. Experimental results verify that the proposed light field display has the capability to present realistic 3D images of historical relics within a 120-degree wide viewing angle.
- Published
- 2019
- Full Text
- View/download PDF
25. Demonstration of a low-crosstalk super multi-view light field display with natural depth cues and smooth motion parallax.
- Author
-
Wang P, Sang X, Yu X, Gao X, Yan B, Liu B, Liu L, Gao C, Le Y, Li Y, and Du J
- Abstract
Due to the lack of an accommodation stimulus, an inherent drawback of the conventional glasses-free stereoscopic display is that precise depth cues for human monocular vision are absent, which results in the well-known convergence-accommodation conflict for the human visual system. Here, a super multi-view light field display with the vertically-collimated programmable directional backlight (VC-PDB) and the light control module (LCM) is demonstrated. The VC-PDB and the LCM form the super multi-view light field display with low crosstalk, which can provide a precisely detectable accommodation depth for human monocular vision. Meanwhile, the VC-PDB cooperates with the refreshable liquid-crystal display panel to provide a convergence depth matching the accommodation depth. In addition, the proposed light field pick-up and reconstruction method is implemented to ensure the perceived three-dimensional (3D) images have accurate depth cues and correct geometric occlusion, and an eye tracker is used to enlarge the viewing angle of the 3D images with smooth motion parallax. In the experiments, the reconstructed high-quality fatigue-free 3D images can be perceived with a clear focus depth of 13 cm within a viewing angle of ±20°, where 352 viewpoints with a viewpoint density of 1 mm⁻¹ and a crosstalk of less than 6% are presented.
- Published
- 2019
- Full Text
- View/download PDF
26. A flipping-free 3D integral imaging display using a twice-imaging lens array.
- Author
-
Zhang W, Sang X, Gao X, Yu X, Gao C, Yan B, and Yu C
- Abstract
Integral imaging is a promising 3D visualization technique for reconstructing 3D medical scenes to enhance medical analysis and diagnosis. However, the use of lens arrays inevitably introduces flipped images beyond the field of view, which cannot reproduce the correct parallax relation. To avoid the flipping effect in optical reconstruction, a twice-imaging lens array based integral display is presented. The proposed lens arrangement, which consists of a light-controlling lens array, a field lens array, and an imaging lens array, allows the light rays from each elemental image to pass only through its corresponding lens unit. The lens arrangement is optimized with a geometrical optics method, and the proposed display system is experimentally demonstrated. A full-parallax 3D medical scene showing continuous viewpoint information without flipping is reconstructed within a 45° field of view.
- Published
- 2019
- Full Text
- View/download PDF
27. Emission Editing in Eu/Tb binary complexes based on Au@SiO2 nanorods.
- Author
-
Wang Q, Sang X, Li S, Liu Y, Wang W, Wang Q, Liu K, An Z, and Huang W
- Abstract
The Au@SiO2 nanorods with two plasmonic resonance bands are used to enhance and tune the emission of binary lanthanide (Eu/Tb) complexes. The emissions of Tb and Eu ions are both enhanced; the maximum enhancement is over 100-fold. Meanwhile, the ratio and relative intensity of the red/green bands are altered by the strong coupling between the complexes and the nanorods, tuning the emission color from green to yellow under excitation at 292 nm and improving the color purity from orange to red under excitation at 360 nm. The underlying physics of the lanthanide complex-plasmonic nanorod composite system is analyzed, which deepens the understanding of the interaction between complexes and plasmonic nanoparticles.
- Published
- 2019
- Full Text
- View/download PDF
28. Backward ray tracing based high-speed visual simulation for light field display and experimental verification.
- Author
-
Guan Y, Sang X, Xing S, Li Y, Chen Y, Chen D, Yang L, and Yan B
- Abstract
Existing simulation methods cannot directly produce the three-dimensional (3D) display result of the light field display (LFD), which is important for design and optimization. Here, a high-speed visual simulation method to calculate the 3D image light field distribution is presented. Based on the backward ray tracing technique (BRT), the geometric and optical models of the LFD are constructed. The display result images are obtained, and the field of view (FOV) and depth of field (DOF) can be estimated, which are consistent with theoretical and experimental results. The simulation time is 1 s when the number of sampling rays is 3840×2160×100, and the computational speed of the method is at least 1000 times faster than that of a traditional physics-based renderer. A prototype was fabricated to evaluate the feasibility of the proposed method. From the results, our simulation method shows good potential for predicting the displayed image of the LFD for various positions of the observer's eye with sufficient calculation speed.
- Published
- 2019
- Full Text
- View/download PDF
29. 360-degree tabletop 3D light-field display with ring-shaped viewing range based on aspheric conical lens array.
- Author
-
Yu X, Sang X, Gao X, Yan B, Chen D, Liu B, Liu L, Gao C, and Wang P
- Abstract
When employing the light field method with a standard lens array and the holographic functional screen (HFS) to realize a tabletop three-dimensional (3D) display, the viewing area of the reconstructed 3D images is right above the screen. As the observers sit around the table, the viewpoints generated in the middle of the viewing area are wasted. Here, a 360-degree viewable light-field display system is demonstrated, which can present 3D images to multiple viewers within a ring-shaped viewing range. The proposed display system consists of the HFS, the aspheric conical lens array, a 27-inch LCD with a resolution of 3840×2160, the LED array, and the Fresnel lens array. By designing the aspheric conical lens, the light rays emitted from the elemental images form the viewpoints in a ring-type arrangement, and the corresponding coding method is given. Compared with the light field display with a standard lens array, the viewpoint density is increased and the aliasing phenomenon is reduced. In the experiment, the tabletop light-field display based on the aspheric conical lens array can present a high-quality 360-degree viewable 3D image with the correct perception and occlusion.
- Published
- 2019
- Full Text
- View/download PDF
30. Dense-view synthesis for three-dimensional light-field display based on unsupervised learning.
- Author
-
Chen D, Sang X, Wang P, Yu X, Yan B, Wang H, Ning M, Qi S, and Ye X
- Abstract
Three-dimensional (3D) light field display, as a potential future display method, has attracted considerable attention. However, certain issues remain to be addressed, especially the capture of dense views of real 3D scenes. Using sparse cameras combined with a view synthesis algorithm has become a practical approach. Supervised convolutional neural networks (CNNs) have been used to synthesize virtual views. However, such a large number of training target views is sometimes difficult to obtain, and the training positions are relatively fixed. Novel views can also be synthesized by the unsupervised network MPVN, but that method has strict requirements on capturing multiple uniform horizontal viewpoints, which is not practical. Here, a method of dense-view synthesis based on unsupervised learning is presented, which can synthesize arbitrary virtual views from multiple freely posed views captured in the real 3D scene. Multiple posed views are reprojected to the target position and input into the neural network. The network outputs a color tower and a selection tower indicating the scene distribution along the depth direction. A single image is yielded by the weighted summation of the two towers. The proposed network is trained end-to-end without supervision by minimizing errors during reconstruction of the posed views. A virtual view can be predicted in high quality by reprojecting posed views to the desired position. Additionally, a sequence of dense virtual views can be generated for 3D light-field display by repeated predictions. Experimental results demonstrate the validity of the proposed network. The PSNR of synthesized views is around 30 dB and the SSIM is over 0.90. Since the cameras can be placed at freely posed positions, there are no strict physical requirements, and the proposed method can be flexibly used for real scene capture. We believe this approach will contribute to the wide application of 3D light-field display in the future.
- Published
- 2019
- Full Text
- View/download PDF
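The PSNR and SSIM figures quoted in abstracts like the one above follow standard formulas. A minimal sketch (single-window SSIM without the usual sliding Gaussian window; function names are our own):

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    # Peak signal-to-noise ratio in dB over the whole image.
    mse = np.mean((ref - test) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def global_ssim(x, y, peak=1.0):
    # Single-window SSIM: luminance/contrast/structure terms computed
    # over the whole image, with the usual stabilizing constants.
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Production evaluations typically use a windowed SSIM (e.g. an 11×11 Gaussian window); the global form above is only a compact illustration of the metric.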
31. Real-time optical 3D reconstruction based on Monte Carlo integration and recurrent CNNs denoising with the 3D light field display.
- Author
-
Li Y, Sang X, Xing S, Guan Y, Yang S, Chen D, Yang L, and Yan B
- Abstract
A general integral imaging generation method based on path-traced Monte Carlo (MC) integration and recurrent convolutional neural network denoising is presented. According to the optical layer structure of the three-dimensional (3D) light field display, screen pixels are encoded to specific viewpoints, and directional rays are then cast from the viewpoints to the screen pixels to perform the path integral. In the process of integration, advanced illumination is used for high-quality elemental image array (EIA) generation. Recurrent convolutional neural networks are implemented as auxiliary post-processing for the EIA to eliminate the noise of the 3D image introduced by MC integration. 4K (3840 × 2160) resolution, 2 samples/pixel, and the ray path tracing method are realized in the experiment. Experimental results demonstrate that the structural similarity (SSIM) value and the peak signal-to-noise ratio (PSNR) gain between the reconstructed 3D image and the target 3D image exceed 90% and 10 dB within 10 frames, respectively. Moreover, the real-time frame rate exceeds 30 fps, demonstrating high efficiency and quality in optical 3D reconstruction.
- Published
- 2019
- Full Text
- View/download PDF
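The pixel-to-viewpoint encoding step described above (each screen pixel under the lens array carries one viewpoint's ray) can be illustrated with a simple interleaving rule. This is a generic sketch assuming an unslanted lens array and a lens pitch that is an integer number of pixels, not the authors' exact coding:

```python
import numpy as np

def view_index(x, y, n_views, pitch_px, slant=0.0):
    # Viewpoint carried by screen pixel (x, y): the position within the
    # lens pitch selects the view (slant = 0 for an unslanted array).
    phase = (x + slant * y) % pitch_px
    return int(phase / pitch_px * n_views) % n_views

def encode_eia(views, pitch_px):
    """Interleave a stack of view images (V, H, W) into one coded
    image, taking each pixel from the view its lens position assigns."""
    v, h, w = views.shape
    out = np.empty((h, w), views.dtype)
    for y in range(h):
        for x in range(w):
            out[y, x] = views[view_index(x, y, v, pitch_px), y, x]
    return out
```

Inverting this mapping is what lets a path tracer cast one ray per screen pixel directly, instead of rendering every view in full and interleaving afterwards.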
32. Viewing-angle and viewing-resolution enhanced integral imaging based on time-multiplexed lens stitching.
- Author
-
Yang L, Sang X, Yu X, Yan B, Wang K, and Yu C
- Abstract
A method for viewing angle and viewing resolution enhancement of integral imaging (InIm) based on time-multiplexed lens stitching is demonstrated using the directional time-sequential backlight (DTS-BL) and the compound lens-array. To increase the lens pitch of the compound lens-array and thereby enlarge the viewing angle of InIm, the DTS-BL is used to continuously stitch adjacent elemental lenses in a time-multiplexed way. Through the compound lens-array, with two lenses in each lens unit, the parallel light beams from the DTS-BL converge and form a uniformly distributed dense point light source array (PLSA). Light rays emitted from the PLSA are modulated by the liquid crystal display (LCD) panel and then integrated as volumetric pixels of the reconstructed three-dimensional (3D) image. Meanwhile, time-multiplexed generation of the point light sources (PLSs) in the array is realized by time-multiplexed lens stitching implemented with the DTS-BL. As a result, the number of PLSs, which serve as the pixels of the perceived 3D image, is increased, and the viewing resolution of the 3D image is markedly enhanced. Additionally, joint optical optimization of the DTS-BL and the compound lens-array is used to suppress the aberrations, and the imaging distortion can be decreased from 5.80% to 0.23%. In the experiment, a floating full-parallax 3D light-field image can be perceived with a 4-fold viewing resolution enhancement within a viewing angle of 50°, where 7056 viewpoints are presented.
- Published
- 2019
- Full Text
- View/download PDF
33. Dynamic three-dimensional light-field display with large viewing angle based on compound lenticular lens array and multi-projectors.
- Author
-
Yu X, Sang X, Gao X, Chen D, Liu B, Liu L, Gao C, and Wang P
- Abstract
High-resolution real-time terrain rendering has been a hot topic in computer graphics for many years and is widely used in electronic maps. However, the traditional two-dimensional display cannot convey the occlusion relationships between buildings, which restricts observers' judgment of spatial layout. With three projectors, a compound lenticular lens array, and a holographic functional screen, a dynamic three-dimensional (3D) light-field display with a 90° viewing angle is demonstrated. The three projectors provide views for the right 30 degrees, center 30 degrees, and left 30 degrees, respectively. The holographic functional screen recomposes the light distribution, and the compound lenticular lens array is optimized to balance the aberrations and improve the display quality. In our experiment, the 3D light-field image with 96 perspectives provides correct geometric occlusion and smooth parallax within the viewing range. By rendering 3D images and synchronizing the projectors, a dynamic light field display is obtained.
- Published
- 2019
- Full Text
- View/download PDF
34. A crosstalk-suppressed dense multi-view light-field display based on real-time light-field pickup and reconstruction.
- Author
-
Yang L, Sang X, Yu X, Liu B, Yan B, Wang K, and Yu C
- Abstract
A crosstalk-suppressed dense multi-view light-field display based on real-time light-field pickup and reconstruction is demonstrated, which is capable of realizing high view density in the horizontal direction with low crosstalk between micro-pitch viewing zones. The micro-pinhole unit array and the vertically-collimated backlight are specially developed and used, instead of refraction-based optical components such as lenticular lenses, to avoid aberrations and suppress crosstalk for accurately projecting multiple view perspectives into each eye pupil of the viewer. Additionally, the spatial information entropy is defined and investigated to improve 3D image perception by balancing resolution, which is generally applicable to better reconstructing 3D images with a limited number of resolution pixels. To enlarge the viewing angle of 3D images with smooth motion parallax, a highly efficient light-field pickup and reconstruction method based on the real-time position of the viewer's pupils is implemented with an eye tracker to scan 750 view perspectives with correct geometric occlusion in real time at a frame rate of 40 fps. In the experiment, a floating horizontal-parallax 3D light-field image with a view density of 0.75 mm⁻¹ and a micro-pitch crosstalk of less than 7% can be perceived with a clear floating focus depth of 10 cm and a high resolution of 1920 × 1080 within a viewing angle of 70°.
- Published
- 2018
- Full Text
- View/download PDF
35. 162-inch 3D light field display based on aspheric lens array and holographic functional screen.
- Author
-
Yang S, Sang X, Yu X, Gao X, Liu L, Liu B, and Yang L
- Abstract
Large-scale three-dimensional (3D) display can evoke a great sense of presence and immersion. Nowadays, most large-scale autostereoscopic displays are based on parallax barriers, which exhibit view-zone jumping, sacrifice much brightness, and lead to uneven illumination. With a 3840 × 2160 LED panel, a large-scale horizontal light field display based on an aspheric lens array (ALA) and a holographic functional screen (HFS) is demonstrated, which can display high-quality 3D images. The HFS recomposes the light distribution, while the ALA increases the quantity of perspective information in the horizontal direction by using vertical pixels and suppresses the aberration mainly caused by marginal light rays. The 162-inch horizontal light field display can reconstruct 3D images with a depth range of 1.5 m within a viewing angle of 40°. The feasibility of the proposed display method is verified by the experimental results.
- Published
- 2018
- Full Text
- View/download PDF
36. Multi-parallax views synthesis for three-dimensional light-field display using unsupervised CNN.
- Author
-
Chen D, Sang X, Peng W, Yu X, and Wang HC
- Abstract
Multi-view content is used in a wide range of applications, especially three-dimensional (3D) display. Since capturing dense multiple views for 3D light-field display is still difficult, view synthesis has become an accessible alternative. Convolutional neural networks (CNNs) have been used to synthesize new views of a scene. However, training targets are sometimes difficult to obtain, and views are very difficult to synthesize at arbitrary positions. Here, an unsupervised network, the Multi-Parallax View Net (MPVN), is proposed, which can synthesize multi-parallax views for 3D light-field display. Existing parallax views are re-projected to the target position to build input towers. The network operates on these towers and outputs a color tower and a selection tower. These two towers yield the final output image by per-pixel weighted summation. MPVN adopts end-to-end unsupervised training to minimize prediction errors at existing positions. It can predict virtual views in high quality at any parallax position between existing views. Experimental results demonstrate the validity of the proposed network, and the SSIM values of synthesized views are mostly over 0.95. We believe that this method can effectively provide enough views for 3D light-field display in future work.
- Published
- 2018
- Full Text
- View/download PDF
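The per-pixel weighted summation of a color tower and a selection tower, as described for MPVN above, is commonly realized as a softmax over depth planes. A minimal sketch under that assumption (the actual MPVN weighting may differ):

```python
import numpy as np

def combine_towers(color, selection):
    """color: (D, H, W, 3) candidate colors per depth plane;
    selection: (D, H, W) logits. A per-pixel softmax over the depth
    axis gives the weights; the output is the weighted sum of planes."""
    # Subtract the per-pixel max for numerical stability.
    s = np.exp(selection - selection.max(axis=0, keepdims=True))
    w = s / s.sum(axis=0, keepdims=True)
    return (w[..., None] * color).sum(axis=0)
```

When one plane's logit dominates at a pixel, its weight approaches 1 and the output reduces to that plane's color, which is what lets the selection tower act as a soft depth estimate.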
37. Wavefront aberration correction for integral imaging with the pre-filtering function array.
- Author
-
Zhang W, Sang X, Gao X, Yu X, Yan B, and Yu C
- Abstract
In integral imaging, the quality of a reconstructed image degrades with increasing viewing angle due to the wavefront aberrations introduced by the lens-array. A wavefront aberration correction method is proposed to enhance the image quality with a pre-filtering function array (PFA). To derive the PFA for an integral imaging display, the wavefront aberration characteristic of the lens-array is analyzed and the intensity distribution of the reconstructed image is calculated based on the wave optics theory. The minimum mean square error method is applied to manipulate the elemental image array (EIA) with a PFA. The validity of the proposed method is confirmed through simulations as well as optical experiments. A 45-degree viewing angle integral imaging display with enhanced image quality is achieved.
- Published
- 2018
- Full Text
- View/download PDF
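The minimum-mean-square-error pre-filtering idea above is closely related to Wiener deconvolution: pre-distort each elemental image so that the lens PSF blur approximately cancels. A frequency-domain sketch, assuming a known shift-invariant PSF per lens (a simplification of the per-lens PFA):

```python
import numpy as np

def wiener_prefilter(image, psf, nsr=0.01):
    """MMSE (Wiener) pre-filtering: pre-distort an elemental image so
    that, after blurring by the lens PSF, it approximates the original.
    'nsr' is an assumed noise-to-signal ratio acting as a regularizer."""
    # Zero-pad the PSF to the image size and move to frequency domain.
    H = np.fft.fft2(psf, s=image.shape)
    # Wiener filter: inverse filter damped where |H| is small.
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))
```

With a delta-function PSF and `nsr = 0` the filter is the identity; with a real lens PSF the `nsr` term keeps the pre-filter from amplifying frequencies the lens barely transmits.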
38. Interactive floating full-parallax digital three-dimensional light-field display based on wavefront recomposing.
- Author
-
Sang X, Gao X, Yu X, Xing S, Li Y, and Wu Y
- Abstract
Advanced three-dimensional (3D) imaging techniques can acquire high-resolution 3D biomedical and biological data, but available digital display methods show these data in restricted two dimensions. 3D light-field displays optically reconstruct realistic 3D images by carefully tailoring light fields, so a natural and comfortable 3D sense of real objects or scenes is expected. An interactive floating full-parallax 3D light-field display with all depth cues is demonstrated with 3D biomedical and biological data, achieving high efficiency and high image quality. A compound lens-array with two lenses in each lens unit is designed and fabricated to suppress the aberrations and increase the viewing angle. The optimally designed holographic functional screen is used to recompose the light distribution from the lens-array. The imaging distortion can be decreased from more than 20% to less than 1.9%. The real-time interactive floating full-parallax 3D light-field image with a clear displayed depth of 30 cm can be perceived with correct geometric occlusion and smooth parallax within a viewing angle of 45°, where 9216 viewpoints are used.
- Published
- 2018
- Full Text
- View/download PDF
39. Image quality improvement of multi-projection 3D display through tone mapping based optimization.
- Author
-
Wang P, Sang X, Zhu Y, Xie S, Chen D, Guo N, and Yu C
- Abstract
An optical 3D screen usually shows a certain diffuse reflectivity or diffuse transmission, and the multi-projection 3D display suffers from decreased local display contrast due to the crosstalk of multi-projection contents. A tone mapping based optimization method is proposed to suppress the crosstalk and improve the display contrast by minimizing the visible contrast distortions between the displayed light field and a target one with enhanced contrast. The contrast distortions are weighted according to the visibility predicted by a model of the human visual system, and the distortions are minimized for the given multi-projection 3D display model, which enforces constraints on the solution. The proposed method can adjust parallax images or parallax video content for optimum 3D display image quality, taking into account the display characteristics and ambient illumination. The validity of the method is evaluated and proved in experiments.
- Published
- 2017
- Full Text
- View/download PDF
40. High-efficient computer-generated integral imaging based on the backward ray-tracing technique and optical reconstruction.
- Author
-
Xing S, Sang X, Yu X, Duo C, Pang B, Gao X, Yang S, Guan Y, Yan B, Yuan J, and Wang K
- Abstract
A high-efficiency computer-generated integral imaging (CGII) method is presented based on the backward ray-tracing technique. In traditional CGII methods, the total rendering time is long because a large number of cameras must be established in the virtual world. The ray origin and the ray direction for every pixel in the elemental image array are calculated with the backward ray-tracing technique, and the total rendering time can be noticeably reduced. The method is suitable for creating high-quality integral images without the pseudoscopic problem. Real-time and non-real-time CGII rendering images and optical reconstruction are demonstrated, and the effectiveness is verified with different types of 3D object models. Real-time optical reconstruction with 90 × 90 viewpoints and a frame rate above 40 fps for the CGII 3D display is realized without the pseudoscopic problem.
- Published
- 2017
- Full Text
- View/download PDF
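The core of backward ray tracing for CGII, computing a ray origin and direction for each elemental-image pixel, can be sketched with a pinhole lens model. The geometry and parameter names here are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def eia_ray(px, py, pitch, gap, pixel_size):
    """Backward ray for one elemental-image pixel (pinhole lens model):
    origin at the covering lens center, direction from the pixel
    through that center. 'pitch' is the lens pitch and 'gap' the
    panel-to-lens distance, in the same units as 'pixel_size'."""
    # Pixel center on the panel plane (z = -gap behind the lens array).
    x = (px + 0.5) * pixel_size
    y = (py + 0.5) * pixel_size
    # Center of the lens covering this pixel.
    cx = (np.floor(x / pitch) + 0.5) * pitch
    cy = (np.floor(y / pitch) + 0.5) * pitch
    origin = np.array([cx, cy, 0.0])
    direction = np.array([cx - x, cy - y, gap])
    return origin, direction / np.linalg.norm(direction)
```

Tracing one such ray per EIA pixel replaces rendering one full camera per viewpoint, which is where the speedup claimed above comes from.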
41. Performance improvement of compressive light field display with the viewing-position-dependent weight distribution.
- Author
-
Chen D, Sang X, Yu X, Zeng X, Xie S, and Guo N
- Abstract
Compressive light field display with multilayer and multiframe decompositions is able to provide three-dimensional (3D) scenes with high spatial-angular resolution and without periodically repeating view zones. However, there are still some limitations on the display performance, such as poor image quality and limited field of view (FOV). A compressive light field display with a viewing-position-dependent weight distribution is presented. When relevant views are given high weights in the optimization, the display performance at the viewing position can be noticeably improved. Simulation and experimental results demonstrate the effectiveness of the proposed method. The peak signal-to-noise ratio (PSNR) is improved by 7 dB for the compressive light field display with a narrow FOV. The angle for the wide FOV can be extended to 70° × 60°, and multiple viewers are supported.
- Published
- 2016
- Full Text
- View/download PDF
42. Improved halftoning method for autostereoscopic display based on float grid-division multiplexing.
- Author
-
Chen D, Sang X, Yu X, Chen Z, Wang P, Gao X, Guo N, and Xie S
- Abstract
Autostereoscopic printing is one of the most common ways to achieve three-dimensional display, because it can present finer results by printing at higher dots per inch (DPI). However, current methods have some problems. First, errors caused by dislocation between integer grids and non-customized lenticular lenses severely degrade the visual quality. Second, the view number and gray level cannot be set arbitrarily. In this paper, an improved halftoning method for autostereoscopic printing based on float grid-division multiplexing (fGDM) is proposed. fGDM effectively addresses the above two problems, and a GPU-based implementation of fGDM computes the result very quickly. Films with a lenticular lens array are used in experiments to verify the effectiveness of the proposed method, which provides improved three-dimensional performance compared with AM screening and random screening.
- Published
- 2016
- Full Text
- View/download PDF
43. Augmented reality three-dimensional display with light field fusion.
- Author
-
Xie S, Wang P, Sang X, and Li C
- Abstract
A video see-through augmented reality three-dimensional display method is presented. The system, which is used for dense-viewpoint augmented reality presentation, naturally fuses the light fields of the real scene and the virtual model. Inherently benefiting from the rich information of the light field, depth sense and occlusion can be handled without any prior depth information of the real scene. A series of processes is proposed to optimize the augmented reality performance. Experimental results show that the reconstructed fused 3D light field is well presented on the autostereoscopic display. The virtual model is naturally integrated into the real scene with consistency between binocular parallax and monocular depth cues.
- Published
- 2016
- Full Text
- View/download PDF
44. Large viewing angle three-dimensional display with smooth motion parallax and accurate depth cues.
- Author
-
Yu X, Sang X, Gao X, Chen Z, Chen D, Duan W, Yan B, Yu C, and Xu D
- Abstract
A three-dimensional (3D) display with smooth motion parallax and a large viewing angle is demonstrated, which is based on a microlens array and a coded two-dimensional (2D) image on a 50-inch liquid crystal display (LCD) panel with a resolution of 3840 × 2160. Combined with accurate expression of depth cues, the flipping images of traditional integral imaging (II) are eliminated, and smooth motion parallax can be achieved. The image on the LCD panel is coded as an elemental image packed repeatedly, and the depth cue is determined by the repetition period of the elemental image. To construct a 3D image with a complex depth structure, a varying elemental-image period is required. Here, the detailed principle and coding method are presented. The shape and the texture of a target 3D image are designed by a structure image and an elemental image, respectively. In the experiment, two groups of structure images and their corresponding elemental images are used to construct a 3D scene of a football in a green net. The constructed 3D image exhibits obviously enhanced 3D perception and smooth motion parallax. The viewing angle is 60°, which is much larger than that of traditional II.
- Published
- 2015
- Full Text
- View/download PDF
45. Tunable fractional-order photonic differentiator based on the inverse Raman scattering in a silicon microring resonator.
- Author
-
Jin B, Yuan J, Yu C, Sang X, Wu Q, Li F, Wang K, Yan B, Farrell G, and Wai PK
- Abstract
A novel photonic fractional-order temporal differentiator is proposed based on inverse Raman scattering (IRS) in a side-coupled silicon microring resonator. By controlling the power of the pump light-wave, the intracavity loss is adjusted and the coupling state of the microring resonator can be changed, so a continuously tunable differentiation order is achieved. The influences of the input pulse width on the differentiation order and the output deviation are discussed. Due to the narrow bandwidth of IRS in silicon, the intracavity loss can be adjusted at a specific resonance while keeping the adjacent resonances undisturbed. It can be expected that the proposed scheme has the potential to realize different differentiation orders simultaneously at different resonant wavelengths.
- Published
- 2015
- Full Text
- View/download PDF
46. Aberration analyses for improving the frontal projection three-dimensional display.
- Author
-
Gao X, Sang X, Yu X, Wang P, Cao X, Sun L, Yan B, Yuan J, Wang K, Yu C, and Dou W
- Subjects
- Equipment Design, Humans, Computer-Aided Design, Image Enhancement instrumentation, Imaging, Three-Dimensional methods, Lens, Crystalline anatomy & histology, Lighting instrumentation
- Abstract
Crosstalk severely affects the viewing experience of auto-stereoscopic 3D displays based on a frontal projection lenticular sheet. Unclear stereo vision and ghosting are observed in the marginal viewing zones (MVZs); to suppress them, the aberration of the lenticular sheet combined with the frontal projector is analyzed and designed. Theoretical and experimental results show that increasing the radius of curvature (ROC) or decreasing the aperture of the lenticular sheet can suppress the aberration and reduce the crosstalk. A projector array with 20 micro-projectors is used to frontally project 20 parallax images onto one lenticular sheet with a ROC of 10 mm and a size of 1.9 m × 1.2 m. A 3D image of high quality is experimentally demonstrated in both the mid-viewing zone and the MVZs in the optimal viewing plane, and a clear 3D depth of 1.2 m can be perceived. To provide an excellent 3D image and enlarge the field of view at the same time, a novel lenticular sheet structure is presented to reduce the aberration, and the crosstalk is well suppressed.
- Published
- 2014
- Full Text
- View/download PDF
47. Resolution-enhanced all-optical analog-to-digital converter employing cascade optical quantization operation.
- Author
-
Kang Z, Zhang X, Yuan J, Sang X, Wu Q, Farrell G, and Yu C
- Abstract
In this paper, a cascade optical quantization scheme is proposed to realize an all-optical analog-to-digital converter with efficiently enhanced quantization resolution and an achievable analog bandwidth larger than 20 GHz. Employing a cascade structure of an unbalanced Mach-Zehnder modulator and a specially designed optical directional coupler, an enhancement of the number of bits of up to 1.59 bit is predicted. Simulation results show that a 25 GHz RF signal is efficiently digitized with a signal-to-noise ratio of 33.58 dB and an effective number of bits of 5.28.
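The reported figures are consistent with the standard conversion between signal-to-noise ratio and effective number of bits for an ideal quantizer, ENOB = (SNR[dB] − 1.76) / 6.02. A one-line check:

```python
def enob(snr_db):
    """Effective number of bits from SNR (or SINAD) in dB,
    using the ideal-quantizer relation SNR = 6.02*N + 1.76 dB."""
    return (snr_db - 1.76) / 6.02

print(enob(33.58))  # ~5.28 bit, matching the reported value
```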
- Published
- 2014
- Full Text
- View/download PDF
48. Efficient and broadband parametric wavelength conversion in a vertically etched silicon grating without dispersion engineering.
- Author
-
Jin B, Yuan J, Yu C, Sang X, Wei S, Zhang X, Wu Q, and Farrell G
- Abstract
An efficient and broadband parametric wavelength converter is proposed in a silicon-on-insulator (SOI) waveguide without dispersion engineering. A vertical grating is utilized to achieve quasi-phase-matching (QPM) of four-wave mixing (FWM). By alternating the phase mismatch between two values with opposite signs, the parametric attenuation is suppressed. The conversion efficiency at the designated signal wavelength is significantly improved, and the 3-dB conversion bandwidth is also extended effectively. It is demonstrated that the conversion bandwidth is insensitive to both the propagation length and the grating width, which alleviates the tradeoff between the conversion bandwidth and the peak conversion efficiency. For a continuous-wave (CW) pump at 1550 nm, a conversion bandwidth of 331 nm and a peak efficiency of -12.8 dB can be realized in a 1.5-cm-long grating despite the serious phase mismatch.
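The benefit of sign-alternating phase mismatch can be illustrated with a toy undepleted-pump model in which the idler amplitude grows as the path integral of exp(i·φ(z)), with φ accumulating the local mismatch. This is a minimal sketch under assumed parameters (Δβ = 2000 rad/m, L = 1.5 cm), not the paper's coupled-mode model:

```python
import numpy as np

def idler_growth(L, dbeta, period=None, n=100000):
    """|∫_0^L exp(i*phi(z)) dz| for a toy FWM idler build-up.
    If `period` is given, the sign of the phase mismatch dbeta
    flips every half period (QPM by a vertical grating);
    otherwise dbeta is constant along the waveguide."""
    z = np.linspace(0, L, n)
    dz = z[1] - z[0]
    sign = np.ones(n)
    if period is not None:
        sign = np.where((z // (period / 2)) % 2 == 0, 1.0, -1.0)
    phi = np.cumsum(sign * dbeta) * dz  # accumulated phase mismatch
    return np.abs(np.sum(np.exp(1j * phi)) * dz)

dbeta = 2e3               # rad/m, assumed phase mismatch
Lc = np.pi / dbeta        # coherence length
grown = idler_growth(0.015, dbeta, period=2 * Lc)  # QPM grating
bounded = idler_growth(0.015, dbeta)               # no grating
```

With the sign flipped every coherence length, each segment adds in phase and the idler grows roughly linearly with length, while the unpatterned case oscillates and stays bounded by 2/Δβ.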
- Published
- 2014
- Full Text
- View/download PDF
49. The modulation function and realizing method of holographic functional screen.
- Author
-
Yu C, Yuan J, Fan FC, Jiang CC, Choi S, Sang X, Lin C, and Xu D
- Subjects
- Computer-Aided Design, Equipment Design, Equipment Failure Analysis, Algorithms, Holography instrumentation, Holography methods, Image Interpretation, Computer-Assisted methods, Imaging, Three-Dimensional instrumentation, Imaging, Three-Dimensional methods
- Abstract
The modulation function of the holographic functional screen (HFS) in a real-time, large-size, full-color (RLF) three-dimensional (3D) display system is derived from angular spectrum analysis. A directional laser speckle (DLS) method to realize the HFS is proposed. An HFS fabricated by the DLS method was used in the experiment. Experimental results show that the HFS is valid in the RLF 3D display and that the derived modulation function is valuable for the design of the HFS. These results are important for realizing the RLF 3D display system, which will find many applications such as holographic video.
- Published
- 2010
- Full Text
- View/download PDF
50. Gain and noise characteristics of high-bit-rate silicon parametric amplifiers.
- Author
-
Sang X and Boyraz O
- Subjects
- Equipment Design, Equipment Failure Analysis, Amplifiers, Electronic, Computer-Aided Design, Lasers, Signal Processing, Computer-Assisted instrumentation, Telecommunications instrumentation
- Abstract
We report a numerical investigation of parametric amplification of high-bit-rate signals, and the associated noise figure, in silicon waveguides in the presence of two-photon absorption (TPA), TPA-induced free-carrier absorption, free-carrier-induced dispersion, and linear loss. Different pump parameters are considered to achieve net gain and a low noise figure. We show that at high repetition rates, net gain can only be achieved in the anomalous dispersion regime if short pulses are used. An evaluation of the noise properties of parametric amplification in silicon waveguides is presented. By choosing a pulsed pump in suitably designed silicon waveguides, parametric amplification can be a chip-scale solution for high-speed optical communication and optical signal processing systems.
- Published
- 2008
- Full Text
- View/download PDF