A method based on multiple images captured under different light sources at different incident angles was developed in this study to recognize the coal density range. The innovation is that two new images were constructed from the images captured under four single light sources. Reconstruction image 1 was formed by fusing greyscale versions of the original images into a single image, and Reconstruction image 2 was built from the differences between the images captured under the different light sources. The four original images and the two reconstructed images were then input into the convolutional neural network AlexNet to recognize the density range in three cases: −1.5 g/cm³ (clean coal) and +1.5 g/cm³ (non-clean coal); −1.8 g/cm³ (non-gangue) and +1.8 g/cm³ (gangue); and −1.5 g/cm³ (clean coal), 1.5–1.8 g/cm³ (middlings), and +1.8 g/cm³ (gangue). The results show the following: (1) The reconstructed images, especially Reconstruction image 2, effectively improve the recognition accuracy for the coal density range compared with images captured under a single light source. (2) The recognition accuracies for gangue versus non-gangue, clean coal versus non-clean coal, and clean coal, middlings, and gangue reached 88.44%, 86.72%, and 77.08%, respectively. (3) The recognition accuracy increases as the coal density moves further from the boundary density.
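A minimal sketch of how the two reconstructed images might be formed from the four single-light-source captures is given below. The abstract does not specify the exact fusion or difference operations, so a pixel-wise mean is assumed for the greyscale fusion and averaged absolute pairwise differences for the difference image; the file names are placeholders.

```python
import cv2
import numpy as np

# Placeholder file names for the four images captured under single light
# sources at different incident angles (the actual capture setup is not
# described in this abstract).
paths = ["light_0.png", "light_1.png", "light_2.png", "light_3.png"]

# Load each image as greyscale and cast to float for arithmetic.
greys = [cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(np.float32) for p in paths]

# Reconstruction image 1: fuse the four greyscale images into one.
# The fusion rule is not stated; a pixel-wise mean is assumed here.
recon1 = np.mean(greys, axis=0)

# Reconstruction image 2: built from differences between images taken
# under different light sources. The exact pairing is an assumption;
# absolute differences of consecutive pairs are averaged here.
diffs = [np.abs(greys[i] - greys[i + 1]) for i in range(len(greys) - 1)]
recon2 = np.mean(diffs, axis=0)

# Rescale to 8-bit so the images can be saved or fed to CNN preprocessing.
recon1 = cv2.normalize(recon1, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
recon2 = cv2.normalize(recon2, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

cv2.imwrite("reconstruction_1.png", recon1)
cv2.imwrite("reconstruction_2.png", recon2)
```

In this sketch the six inputs to AlexNet would be the four original captures plus `reconstruction_1.png` and `reconstruction_2.png`; the choice of mean fusion and consecutive-pair differencing is illustrative only.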