Rendering realistic virtual scenes from images has been a long-standing research goal in computer graphics and computer vision. The neural radiance field (NeRF) is an emerging method based on deep neural networks that achieves realistic rendering by learning the radiance field at each point in a scene. Neural radiance fields can generate not only realistic images but also realistic three-dimensional scenes, giving them broad application prospects in virtual reality, augmented reality, and computer games. However, the basic model suffers from low training efficiency, poor generalization ability, limited interpretability, susceptibility to lighting and material changes, and an inability to handle dynamic scenes, all of which can lead to suboptimal rendering results in certain situations. As this field has continued to attract attention, a large body of research has been carried out, yielding impressive results in terms of efficiency and accuracy. To track the latest research in this field, this paper reviews and summarizes the key algorithms of recent years. It first outlines the background and principles of neural radiance fields and briefly introduces the evaluation metrics and public datasets used in the field. It then presents a classified discussion of the key improvements to the model, mainly covering: optimization of the basic NeRF model parameters, improvements in rendering speed and inference ability, enhancement of spatial representation and lighting ability, improved camera-pose estimation and sparse-view synthesis methods for static scenes, and developments in dynamic scene modeling. Subsequently, the speed and performance of the various models are categorized, compared, and analyzed.
Finally, future development trends of neural radiance fields are discussed.
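For reference, the radiance-field rendering principle summarized in this abstract is the volume-rendering integral of the original NeRF formulation: an MLP maps a 3D position and viewing direction $\mathbf{d}$ to a volume density $\sigma$ and color $\mathbf{c}$, and the color of a camera ray $\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}$ is accumulated along the ray (notation follows the standard formulation, not this survey's own):

```latex
% Expected color of ray r(t) = o + t d between near and far bounds t_n, t_f
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,
                \mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
% where T(t) is the accumulated transmittance along the ray up to t
\quad T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right)
```

In practice this integral is approximated by stratified sampling along each ray and alpha compositing of the sampled densities and colors, which is the step that many of the speed-oriented improvements surveyed here accelerate.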