This paper presents an exact pruning algorithm with an adaptive pruning interval for general dynamic neural networks (GDNNs). GDNNs are artificial neural networks with internal dynamics: every layer has feedback connections with time delays to itself and to all other layers. Since the structure of the plant is unknown, the identification process starts with a larger network architecture than necessary. During parameter optimization with the Levenberg-Marquardt (LM) algorithm, irrelevant weights of the dynamic neural network are deleted in order to find as simple a model of the plant as possible. The weights to be pruned are found by direct evaluation of the training data within a sliding time window. The influence of pruning on the identification system depends on the network architecture at pruning time and on the selected weight to be deleted. Because the architecture of the model changes drastically during the identification and pruning process, it is suggested to adapt the pruning interval online. Two system identification examples demonstrate the architecture selection ability of the proposed pruning approach.
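The phrase "direct evaluation of the training data" suggests that, unlike the Taylor-series saliency estimates of Optimal Brain Damage or Optimal Brain Surgeon, each candidate weight is temporarily set to zero and the resulting error increase is measured exactly on the most recent window of training data. The sketch below illustrates one plausible reading of such a step; `net.w`, `net.active`, and `eval_error` are hypothetical names introduced here for illustration, not the paper's notation.

```python
import numpy as np

def exact_pruning_step(net, eval_error, window_inputs, window_targets):
    """Rank every remaining weight by the exact error increase caused by
    removing it, evaluated directly on a sliding window of training data,
    and delete the least relevant one.

    Assumes `net` exposes a flat weight vector `net.w` (NumPy array) and a
    boolean mask `net.active` of weights not yet pruned, and that
    `eval_error(net, u, y)` returns the summed squared output error of the
    network on the data window. All of these names are assumptions.
    """
    base_error = eval_error(net, window_inputs, window_targets)
    best_idx, best_increase = None, np.inf
    for i in np.flatnonzero(net.active):
        saved = net.w[i]
        net.w[i] = 0.0                      # tentatively remove weight i
        increase = eval_error(net, window_inputs, window_targets) - base_error
        net.w[i] = saved                    # restore it
        if increase < best_increase:
            best_idx, best_increase = i, increase
    # permanently delete the weight whose removal hurts the window error least
    net.w[best_idx] = 0.0
    net.active[best_idx] = False
    return best_idx, best_increase
```

Evaluating the true error over the window for every candidate is more expensive than a second-order saliency estimate, but it is exact for the current data window, which matches the paper's claim of an exact pruning criterion.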
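The abstract does not spell out how the pruning interval is adapted online. One plausible scheme, shown below purely as an assumption, couples the interval to how strongly the last deletion disturbed the identification error: if pruning hurt the model, more LM retraining iterations are allowed before the next deletion; if it was absorbed easily, pruning happens more often. All constants are illustrative.

```python
def adapt_pruning_interval(interval, error_before, error_after,
                           grow=1.5, shrink=0.8,
                           min_interval=5, max_interval=200):
    """Heuristic online adaptation of the number of LM iterations between
    two pruning steps (a sketch, not the paper's rule).

    error_before / error_after: window error just before and just after the
    last weight deletion.
    """
    if error_after > 1.1 * error_before:    # pruning hurt: retrain longer
        interval = min(int(interval * grow), max_interval)
    else:                                   # pruning was harmless: prune sooner
        interval = max(int(interval * shrink), min_interval)
    return interval
```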