Accurate three-dimensional (3D) point cloud modeling is a crucial step in extracting the physical phenotyping parameters of plants, such as plant height, leaf count, and leaf area. In this study, an automatic point cloud registration method based on multiple calibration balls was proposed to realize the 3D modeling of plants against complex field backgrounds. A low-cost depth sensor (Kinect V3) was selected to capture the images, and the registration performance was evaluated on multi-view point clouds of several plants in both indoor and in-field scenes. The method comprised four procedures (illustrative code sketches of the four steps are given below): point cloud filtering and downsampling, multiple calibration ball extraction, correspondence point matching, and calculation and application of the transformation matrix. 1) Pass-through filtering with a few boundary thresholds was used to remove noise, while bounding-box compression was used to downsample the point clouds. 2) Random Sample Consensus (RANSAC) was combined with a proposed point cloud subtraction strategy to automatically extract the multiple calibration balls: RANSAC extracted a single ball at a time, and its points were then subtracted from the cloud before the next extraction. The center coordinates of each calibration ball were calculated as feature points. 3) The distances between all pairs of feature points were calculated, and an automatic correspondence matching scheme based on this distance information was proposed to realize self-matching of the calibration balls. 4) Singular value decomposition (SVD) was adopted to solve the transformation (rotation and translation) matrix, with which the registration of two point clouds was finally realized.
The experiments were carried out in two scenes: an indoor scene with a flat, clean surface, and a field scene with uneven, complex conditions. Five plants were chosen as research objects: one sugarcane (indoor), one sorghum (indoor), and three young banana plants (in-field). The Kinect V3 sensor captured the point clouds of each object from five orientations (horizontal 0°, 90°, 180°, and 270°, plus one vertical orientation). The commercial point cloud processing software LiDAR360 was adopted as a manually aided registration baseline for comparison. The object point clouds of each plant (horizontal 90°, 180°, 270°, and vertical) were then registered into the coordinate system of the source point cloud (horizontal 0°). Registration performance was evaluated by calculating the axial errors and point position errors between the transformed and source coordinates of the calibration ball centers.
The results showed that point clouds from different orientations were successfully registered in both indoor and in-field environments, and the generated 3D plant models were clear in shape. Specifically, the average axial error was 5.8-17.4 mm and the average point position error was 13.1-28.9 mm across the different scenes, comparable to manual registration in LiDAR360.
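The abstract does not give implementation details for step 1; the following NumPy sketch shows one plausible form of the pass-through filtering, with a voxel-grid reduction standing in for the bounding-box compression. The boundary thresholds and voxel size are hypothetical values, not from the paper.

```python
import numpy as np

def pass_through_filter(points, x_lim, y_lim, z_lim):
    """Keep only points inside the axis-aligned boundary thresholds."""
    (x0, x1), (y0, y1), (z0, z1) = x_lim, y_lim, z_lim
    mask = ((points[:, 0] >= x0) & (points[:, 0] <= x1) &
            (points[:, 1] >= y0) & (points[:, 1] <= y1) &
            (points[:, 2] >= z0) & (points[:, 2] <= z1))
    return points[mask]

def voxel_downsample(points, voxel=0.005):
    """Approximate bounding-box compression with a voxel grid:
    keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor((points - points.min(axis=0)) / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse).astype(float)
    out = np.zeros((inverse.max() + 1, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out
```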
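Step 2 can be sketched as a sphere-fitting RANSAC loop combined with point cloud subtraction: one ball is extracted per pass, its inliers are removed, and the loop repeats. The ball radius (40 mm), distance threshold, and iteration count below are assumptions for illustration, not values reported in the paper.

```python
import numpy as np

def fit_sphere_4pts(p):
    """Sphere through 4 points: solve 2*c.p + d = |p|^2, where d = r^2 - |c|^2."""
    A = np.hstack([2.0 * p, np.ones((4, 1))])
    b = (p ** 2).sum(axis=1)
    sol = np.linalg.solve(A, b)
    center, d = sol[:3], sol[3]
    return center, np.sqrt(d + center @ center)

def ransac_sphere(points, r_known=0.04, r_tol=0.01, thresh=0.005,
                  iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    best_center, best_inliers = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 4, replace=False)]
        try:
            center, radius = fit_sphere_4pts(sample)
        except np.linalg.LinAlgError:
            continue  # degenerate (near-coplanar) sample
        if abs(radius - r_known) > r_tol:
            continue  # cannot be a calibration ball of the known size
        err = np.abs(np.linalg.norm(points - center, axis=1) - radius)
        inliers = err < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_center, best_inliers = center, inliers
    return best_center, best_inliers

def extract_ball_centers(points, n_balls):
    """Extract one ball at a time, then subtract its points from the cloud."""
    centers = []
    for _ in range(n_balls):
        center, inliers = ransac_sphere(points)
        centers.append(center)
        points = points[~inliers]  # point cloud subtraction
    return np.array(centers)
```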
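Step 3 exploits the fact that a rigid motion preserves pairwise distances, so each ball can be identified across views by its sorted vector of distances to the other balls. A minimal sketch, assuming the same set of balls is visible in both views; the tolerance tol is a hypothetical value:

```python
import numpy as np

def match_by_distances(src_centers, dst_centers, tol=0.01):
    """Match ball centers across two views using pairwise-distance signatures."""
    def signatures(c):
        d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)
        return np.sort(d, axis=1)[:, 1:]  # drop the zero self-distance

    sig_s, sig_d = signatures(src_centers), signatures(dst_centers)
    pairs = []
    for i, s in enumerate(sig_s):
        # Greedy nearest-signature match; a one-to-one check could be added.
        j = int(np.argmin(np.abs(sig_d - s).sum(axis=1)))
        if np.abs(sig_d[j] - s).max() < tol:
            pairs.append((i, j))
    return pairs
```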
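Step 4 is the standard SVD-based (Kabsch/Umeyama) least-squares solution of the rigid transformation from the matched ball centers; a minimal sketch (the variable names in the usage comments are illustrative):

```python
import numpy as np

def svd_rigid_transform(P, Q):
    """Rotation R and translation t minimizing sum |R @ P_i + t - Q_i|^2.
    P, Q are matched (n, 3) arrays of ball center coordinates."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # fix an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = q_bar - R @ p_bar
    return R, t

# Registration: transform a view (e.g. horizontal 90°) into the coordinate
# system of the source view (horizontal 0°):
#   registered = view_points @ R.T + t
# Evaluation against the source-view ball centers:
#   axial_err = np.abs(centers_90 @ R.T + t - centers_0)
#   point_err = np.linalg.norm(centers_90 @ R.T + t - centers_0, axis=1)
```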
In terms of computation time, the automatic method took about 50 s on average for the whole process from calibration ball extraction to registration, whereas LiDAR360 took about 150 s with manual selection of the correspondence points, indicating a 67% reduction in processing time. The proposed method can therefore be expected to serve as a high-precision, automatic registration approach for plant point clouds acquired by low-cost depth cameras in complex field environments. The findings provide a low-cost, feasible solution for the 3D modeling and the extraction of phenotyping parameters of field plants.