The support vector machine (SVM) is known for its strong generalization performance and has been applied widely across fields. Despite this success, the learning efficiency of SVM degrades significantly as the number of training samples grows rapidly. Consequently, the traditional SVM with standard optimization methods faces excessive memory requirements and slow training speed, especially on large-scale training sets. To address this issue, this paper draws inspiration from the fuzzy support vector machine (FSVM). Considering that each sample contributes differently to the decision plane, we propose an effective SVM sample reduction method based on the fuzzy membership function (FMF). The method uses an FMF to compute the fuzzy membership of each training sample and then deletes samples with low membership. Specifically, we propose SVM sample reduction algorithms based on class center distance, kernel target alignment, centered kernel alignment, slack factor, entropy, and bilateral weighted FMFs. Comprehensive experiments on UCI and KEEL datasets demonstrate that the proposed algorithms outperform competing methods in terms of accuracy, F-measure, and hinge loss.

• An effective SVM sample reduction method based on the fuzzy membership function is presented.
• It employs the fuzzy membership function to assess the significance of each training sample.
• It improves SVM classification performance on both balanced and imbalanced datasets.
• It is demonstrated on several benchmark datasets under different classification measures.
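The abstract only outlines the approach, so the following is a minimal Python sketch of the general pipeline (compute memberships, drop low-membership samples, train a standard SVM), using the class-center-distance variant as the membership: samples far from their class center get membership near zero. The `keep_ratio` parameter and function names are hypothetical illustrations, not the paper's API, and scikit-learn's `SVC` is assumed as the base classifier.

```python
import numpy as np
from sklearn.svm import SVC

def class_center_membership(X, y, delta=1e-6):
    """Class-center-distance fuzzy membership: for each class, membership
    decreases linearly with distance from the class mean, normalized by the
    class radius (the largest such distance). Values lie in (0, 1]."""
    s = np.empty(len(y), dtype=float)
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        center = X[idx].mean(axis=0)
        dist = np.linalg.norm(X[idx] - center, axis=1)
        radius = dist.max()
        s[idx] = 1.0 - dist / (radius + delta)  # delta avoids division by zero
    return s

def reduce_and_train(X, y, keep_ratio=0.7, **svc_kwargs):
    """Delete the (1 - keep_ratio) fraction of training samples with the
    lowest fuzzy membership, then train a standard SVM on the reduced set.
    keep_ratio is an assumed knob; the paper's selection rule may differ."""
    s = class_center_membership(X, y)
    threshold = np.quantile(s, 1.0 - keep_ratio)
    mask = s >= threshold
    clf = SVC(**svc_kwargs).fit(X[mask], y[mask])
    return clf, mask

# Usage on synthetic data:
if __name__ == "__main__":
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=2000, random_state=0)
    clf, kept = reduce_and_train(X, y, keep_ratio=0.7, kernel="rbf", C=1.0)
    print(f"trained on {kept.sum()} of {len(y)} samples")
```

The other proposed variants (kernel target alignment, centered kernel alignment, slack factor, entropy, bilateral weighting) would slot in as alternative implementations of the membership function while the reduce-then-train step stays the same.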