Multi-modality medical image fusion integrates the complementary information of medical images from different modalities to obtain a more precise, reliable, and comprehensive description of lesions. Dictionary-learning-based image fusion has attracted considerable attention from researchers owing to its high performance. The standard learning scheme uses the entire image for dictionary training. However, in medical images the informative region occupies only a small proportion of the whole image, and most image patches carry limited or redundant information. Using all image patches for dictionary learning therefore introduces much uninformative and redundant content, which degrades fusion quality. In this paper, a novel dictionary learning approach is proposed for medical image fusion. The proposed approach consists of three steps. First, a new image patch sampling scheme is proposed to select informative patches. Second, a local-density-peaks-based clustering algorithm groups patches with similar structural information into several patch groups, and each group is trained into a compact sub-dictionary by K-SVD. Finally, the sub-dictionaries are combined into a complete, informative, and compact dictionary, so that only the important and useful information that effectively describes the medical images is retained. To demonstrate the effectiveness of the proposed dictionary learning approach, the sparse coefficient vectors are estimated by the simultaneous orthogonal matching pursuit (SOMP) algorithm over the trained dictionary and fused by the max-L1 rule. Comparative experimental results and analyses show that the proposed method achieves better fusion quality than existing state-of-the-art methods.
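The sketch below illustrates the overall pipeline summarized above (informative patch selection, patch clustering, per-cluster sub-dictionary learning, joint sparse coding, and max-L1 fusion) in Python with NumPy and scikit-learn. It is not the paper's implementation: a simple variance threshold stands in for the proposed patch sampling scheme, KMeans for density-peaks clustering, MiniBatchDictionaryLearning for K-SVD, and per-source OMP for SOMP. Patch size, thresholds, and the numbers of clusters and atoms are assumed values chosen for illustration, and patch-to-image reassembly is omitted.

```python
# Minimal sketch of the fusion pipeline, under the substitutions noted above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder


def extract_patches(img, size=8, stride=4):
    """Slide a window over the image and return flattened patches."""
    h, w = img.shape
    patches = [img[i:i + size, j:j + size].ravel()
               for i in range(0, h - size + 1, stride)
               for j in range(0, w - size + 1, stride)]
    return np.asarray(patches, dtype=float)


def learn_joint_dictionary(patches, var_thresh=1e-3, n_clusters=4, n_atoms=32):
    """Keep informative patches, cluster them, train one sub-dictionary per
    cluster, and stack the sub-dictionaries into a joint dictionary."""
    keep = patches[patches.var(axis=1) > var_thresh]   # stand-in for informative sampling
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(keep)
    subs = []
    for c in range(n_clusters):
        group = keep[labels == c]
        if len(group) < n_atoms:                        # skip clusters too small to train on
            continue
        dl = MiniBatchDictionaryLearning(n_components=n_atoms)  # stand-in for K-SVD
        subs.append(dl.fit(group).components_)          # rows are dictionary atoms
    return np.vstack(subs)


def fuse(img_a, img_b, size=8, stride=4, sparsity=5):
    """Sparse-code both sources over the joint dictionary and fuse the codes
    patch-wise with the max-L1 rule."""
    pa = extract_patches(img_a, size, stride)
    pb = extract_patches(img_b, size, stride)
    D = learn_joint_dictionary(np.vstack([pa, pb]))
    coder = SparseCoder(dictionary=D, transform_algorithm='omp',
                        transform_n_nonzero_coefs=sparsity)  # OMP as a stand-in for SOMP
    ca, cb = coder.transform(pa), coder.transform(pb)
    pick_a = np.abs(ca).sum(axis=1) >= np.abs(cb).sum(axis=1)  # max-L1 rule per patch
    fused_codes = np.where(pick_a[:, None], ca, cb)
    return fused_codes @ D                               # fused patches (reassembly omitted)
```

In this sketch the fusion decision is made per patch by comparing the L1 norms of the two sources' sparse coefficient vectors, which mirrors the max-L1 rule described in the abstract; a full implementation would additionally average overlapping reconstructed patches back into the fused image.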