5 results for "JIN Weizheng"
Search Results
2. EPGNet: Enhanced Point Cloud Generation for 3D Object Detection
- Author
-
Qingsheng Chen, Lian Zou, Cien Fan, Xiaopeng Li, Weizheng Jin, Fangyu Li, Yifeng Liu, Minyuan Wu, and Hao Jiang
- Subjects
enhanced point cloud, 3D object detection, point cloud, autonomous driving, voxel, computer vision, lidar, object detection, symmetry - Abstract
Three-dimensional object detection from point cloud data is becoming increasingly important, especially for autonomous driving applications. However, it is difficult for lidar to obtain the complete structure of an object in a real scene due to its scanning characteristics. Although existing methods have made great progress, most of them ignore prior information about object structure, such as symmetry. In this paper, we therefore use the symmetry of the object to complete the missing part of the point cloud and then detect it. Specifically, we propose a two-stage detection framework. In the first stage, we adopt an encoder–decoder structure to generate the symmetry points of the foreground points, and the symmetry points together with the non-empty voxel centers form an enhanced point cloud. In the second stage, the enhanced point cloud is input into the baseline, an anchor-based region proposal network, to generate the detection results. Extensive experiments on the challenging KITTI benchmark show the effectiveness of our method, which achieves better performance on both 3D and BEV (bird's eye view) object detection than some previous state-of-the-art methods.
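The first stage described above — reflecting foreground points across a symmetry plane and merging the result with the non-empty voxel centers — can be sketched as follows. This is a minimal illustration, not the paper's encoder–decoder network: the symmetry plane here is assumed to be given (the paper learns the symmetry points), and `mirror_points` / `enhance_point_cloud` are hypothetical helper names.

```python
import numpy as np

def mirror_points(foreground, plane_normal, plane_point):
    """Reflect foreground points across a symmetry plane defined by a
    normal vector and a point on the plane (point-reflection formula)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (foreground - plane_point) @ n          # signed distance of each point to the plane
    return foreground - 2.0 * d[:, None] * n    # mirrored copies of the points

def enhance_point_cloud(foreground, voxel_centers, plane_normal, plane_point):
    """Form an 'enhanced point cloud': mirrored symmetry points
    stacked with the non-empty voxel centers."""
    symmetry_pts = mirror_points(foreground, plane_normal, plane_point)
    return np.vstack([symmetry_pts, voxel_centers])
```

In the paper the enhanced cloud produced this way is then fed to an anchor-based region proposal network; the sketch stops at constructing the input.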
- Published
- 2020
3. Lightning Detection and Imaging Based on VHF Radar Interferometry.
- Author
-
Yin, Wenjie, Jin, Weizheng, Zhou, Chen, Liu, Yi, Tang, Qiong, Liu, Moran, Chen, Guanyi, and Zhao, Zhengyu
- Subjects
-
RADAR interferometry, LIGHTNING, INTERFEROMETRY, ELECTROMAGNETIC radiation, ANTENNA arrays, SHORTWAVE radio, THREE-dimensional imaging - Abstract
In this study, detection and three-dimensional (3D) imaging of lightning plasma channels are presented using radar interferometry. Experiments were carried out in Leshan, China, with a 48.2 MHz VHF radar configured with an interferometric antenna array. The typical characteristics of lightning echoes are studied in the form of amplitude, phase, and Doppler spectra derived from the raw in-phase/quadrature (I/Q) data. In addition, the 3D structure of lightning channels is reconstructed using the interferometry technique. The localization results are verified against the locations reported by lightning detection networks operating in the VLF range, which indicates the feasibility of using VHF radar for lightning mapping. The interpretation of the observational results is complicated by the dendritic structure of the lightning channel and the overlap between passive electromagnetic radiation and return echoes. Nevertheless, some characteristics of lightning are still evident. The observed return echoes show good consistency with the overdense assumption of lightning channels, and the transition from the overdense channel to the underdense channel is clearly observed in both amplitude and phase. This technique is promising for revealing the typical characteristics of lightning return echoes and the structure of lightning propagation processes.
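The core of the interferometric localization described above is recovering an echo's angle of arrival from the phase difference measured between antenna pairs. A minimal sketch of that step, assuming the standard two-element relation Δφ = 2π·d·sin(θ)/λ (the study's full array geometry and 3D reconstruction are not reproduced here; `angle_of_arrival` is a hypothetical helper):

```python
import numpy as np

C = 3.0e8         # speed of light, m/s
FREQ = 48.2e6     # radar frequency used in the study, Hz
LAM = C / FREQ    # wavelength, about 6.2 m

def angle_of_arrival(phase_diff_rad, baseline_m):
    """Estimate the angle of arrival (radians from broadside) of an echo
    from the phase difference between two antennas spaced baseline_m apart,
    using dphi = 2*pi*d*sin(theta)/lambda."""
    s = phase_diff_rad * LAM / (2.0 * np.pi * baseline_m)
    return np.arcsin(np.clip(s, -1.0, 1.0))  # clip guards against noise pushing |s| > 1
```

With two (ideally three) non-collinear baselines, two such angles fix the direction to the scatterer, and range from the radar completes the 3D position.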
- Published
- 2021
- Full Text
- View/download PDF
4. PSANet: Pyramid Splitting and Aggregation Network for 3D Object Detection in Point Cloud.
- Author
-
Li, Fangyu, Jin, Weizheng, Fan, Cien, Zou, Lian, Chen, Qingsheng, Li, Xiaopeng, Jiang, Hao, and Liu, Yifeng
- Subjects
-
POINT cloud, PYRAMIDS, OPTICAL scanners, AUGMENTED reality, CONVOLUTIONAL neural networks - Abstract
3D object detection in LiDAR point clouds has been extensively used in autonomous driving, intelligent robotics, and augmented reality. Although one-stage 3D detectors offer satisfactory training and inference speed, performance problems remain due to insufficient utilization of bird's eye view (BEV) information. In this paper, a new backbone network is proposed to perform cross-layer fusion of multi-scale BEV feature maps, making full use of the available information for detection. Specifically, the proposed backbone network is divided into a coarse branch and a fine branch. In the coarse branch, we use the pyramidal feature hierarchy (PFH) to generate multi-scale BEV feature maps, which retain the advantages of different levels and serve as the input of the fine branch. In the fine branch, our proposed pyramid splitting and aggregation (PSA) module deeply integrates the different levels of the multi-scale feature maps, thereby improving the expressive power of the final features. Extensive experiments on the challenging KITTI-3D benchmark show that our method achieves better performance in both 3D and BEV object detection than some previous state-of-the-art methods. Experimental results with average precision (AP) confirm the effectiveness of our network.
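The cross-layer fusion idea in the abstract — bringing pyramid levels of different resolutions together so one feature map carries information from all scales — can be sketched in a few lines. This is a simplified stand-in, not the paper's PSA module: it merely upsamples each coarser BEV map to the finest resolution and concatenates channels, and `upsample_nearest` / `fuse_pyramid` are hypothetical names.

```python
import numpy as np

def upsample_nearest(fmap, factor):
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    return fmap.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_pyramid(feature_maps):
    """Bring multi-scale BEV feature maps (finest first, each level half the
    resolution of the previous) to the finest resolution and concatenate them
    along the channel axis -- a minimal form of cross-layer fusion."""
    target_h = feature_maps[0].shape[1]
    upsampled = [upsample_nearest(f, target_h // f.shape[1]) for f in feature_maps]
    return np.concatenate(upsampled, axis=0)
```

In the actual network the concatenation would be followed by learned convolutions that weight and mix the levels; the sketch only shows why the fused map has access to every scale at once.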
- Published
- 2021
- Full Text
- View/download PDF