Background: Deep learning methods for radiomics/computer-aided diagnosis (CADx) are often hindered by small datasets, long computation times, and the need for extensive image preprocessing.

Aims: We aim to develop a breast CADx methodology that addresses these issues by exploiting the efficiency of pretrained convolutional neural networks (CNNs) and by reusing pre-existing handcrafted CADx features.

Materials & Methods: We present a methodology that extracts and pools low- to mid-level features using a pretrained CNN and fuses them with handcrafted radiomic features computed using conventional CADx methods. The methodology is tested on three clinical imaging modalities: dynamic contrast-enhanced MRI (DCE-MRI; 690 cases), full-field digital mammography (FFDM; 245 cases), and ultrasound (1125 cases).

Results: In ROC analysis of the task of distinguishing malignant from benign lesions, the fusion-based method demonstrates statistically significant improvements in area under the ROC curve (AUC) over previous breast cancer CADx methods on all three imaging modalities: DCE-MRI, AUC = 0.89 (SE = 0.01); FFDM, AUC = 0.86 (SE = 0.01); ultrasound, AUC = 0.90 (SE = 0.01).

Discussion/Conclusion: We proposed a novel breast CADx methodology that characterizes breast lesions more effectively than existing methods. Furthermore, the proposed methodology is computationally efficient and circumvents the need for image preprocessing.
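To make the fusion pipeline concrete, a minimal sketch is given below. It assumes a VGG16 backbone from torchvision, global average pooling over three early convolutional blocks, and a linear SVM classifier; the backbone, the tapped layers, the pooling choice, and all variable names are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: pretrained-CNN feature extraction fused with handcrafted
# radiomic features for a binary (malignant vs. benign) classifier.
# VGG16, the tapped layers, and the SVM are illustrative assumptions.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.to(device).eval()

# Outputs of the first three conv blocks, treated here as the
# "low- to mid-level" features (an assumption; the abstract does
# not specify which layers are pooled).
TAP_LAYERS = {4, 9, 16}  # ends of blocks 1-3 in torchvision's VGG16

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def cnn_features(img):
    """Run one PIL image through VGG16, spatially average-pool the
    tapped feature maps, and concatenate them into one vector."""
    x = preprocess(img).unsqueeze(0).to(device)
    pooled = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in TAP_LAYERS:
            pooled.append(x.mean(dim=(2, 3)).squeeze(0))  # global average pool
    return torch.cat(pooled).cpu().numpy()

def fuse(img, handcrafted_vec):
    """Concatenate CNN features with a precomputed handcrafted radiomic
    feature vector (e.g., lesion size, shape, texture descriptors)."""
    return np.concatenate([cnn_features(img), handcrafted_vec])

# Hypothetical usage, assuming `images`, `handcrafted`, and train/test
# splits exist; the classifier choice is also an assumption:
# X = np.stack([fuse(im, h) for im, h in zip(images, handcrafted)])
# clf = SVC(kernel="linear", probability=True).fit(X_train, y_train)
# print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

Because the CNN weights stay frozen and only a shallow classifier is trained on the fused vectors, a pipeline of this shape avoids end-to-end training on small datasets, which is consistent with the efficiency claim in the abstract.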