Cancer, including breast cancer, is a major cause of death brought on by abnormal cell proliferation in the body, and it poses a significant threat to the health and safety of people worldwide. Several imaging methods, such as mammography, CT scans, MRI, ultrasound, and biopsies, can help detect breast cancer. A biopsy followed by histopathological examination of the tissue image is commonly used to assist in diagnosing breast cancer. However, accurately identifying the appropriate Region of Interest (ROI) remains challenging because of the complexity of the pre-processing, feature extraction, segmentation, and other conventional machine learning stages, which reduces the system's efficiency and accuracy. To reduce inter-observer variability, this work aims to build superior deep-learning algorithms for these stages. This research introduces a classifier that detects and classifies images simultaneously, without human involvement. It employs a transfer-driven ensemble learning approach in which the framework comprises two main phases: the production of pseudo-color images, and detection and segmentation based on an ROI Pooling CNN, whose output is then fed to ensemble models such as EfficientNet, ResNet101, and VGG19. Before feature extraction, data augmentation is applied, involving minor adjustments such as random cropping, horizontal flipping, and color space augmentations. Implementing and simulating the proposed segmentation and classification algorithms within a decision-making framework could decrease the frequency of incorrect diagnoses and enhance classification accuracy, helping pathologists obtain a second opinion and facilitating the early identification of disease. With a prediction accuracy of 98.3%, the proposed method outperforms the individual pre-trained models, namely EfficientNet, ResNet101, VGG16, and VGG19, by 2.3%, 1.71%, 2.01%, and 1.47%, respectively.
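To make the transfer-driven ensemble and augmentation steps concrete, the following is a minimal sketch assuming a PyTorch/torchvision setup. It shows ImageNet-pretrained EfficientNet, ResNet101, and VGG19 backbones with replaced classification heads, the augmentations named above (random cropping, horizontal flipping, color-space jitter), and a soft-voting fusion rule. The pseudo-color image generation and ROI Pooling CNN segmentation stages are not shown, and the two-class head, weight identifiers, and averaging-based fusion are illustrative assumptions rather than the authors' exact implementation.

```python
# Illustrative sketch (not the authors' released code): a soft-voting ensemble of
# ImageNet-pretrained backbones with the augmentations named in the abstract.
# The pseudo-color generation and ROI Pooling CNN segmentation stages are omitted;
# the two-class head (benign vs. malignant) and the fusion rule are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Data augmentation: random cropping, horizontal flipping, and color-space jitter,
# followed by standard ImageNet normalization for the pretrained backbones.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.05),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def build_backbones(num_classes: int = 2) -> nn.ModuleList:
    """Load ImageNet-pretrained backbones and replace their heads for transfer learning."""
    effnet = models.efficientnet_b0(weights="IMAGENET1K_V1")
    effnet.classifier[1] = nn.Linear(effnet.classifier[1].in_features, num_classes)

    resnet = models.resnet101(weights="IMAGENET1K_V1")
    resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)

    vgg = models.vgg19(weights="IMAGENET1K_V1")
    vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, num_classes)

    return nn.ModuleList([effnet, resnet, vgg])

class SoftVotingEnsemble(nn.Module):
    """Averages the softmax outputs of the fine-tuned members (one possible fusion rule)."""

    def __init__(self, members: nn.ModuleList):
        super().__init__()
        self.members = members

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        probs = [torch.softmax(m(x), dim=1) for m in self.members]
        return torch.stack(probs, dim=0).mean(dim=0)

if __name__ == "__main__":
    ensemble = SoftVotingEnsemble(build_backbones(num_classes=2)).eval()
    dummy_patch = torch.randn(1, 3, 224, 224)  # stands in for a segmented ROI patch
    with torch.no_grad():
        print(ensemble(dummy_patch))  # class probabilities, e.g. [benign, malignant]
```

In this sketch the segmented ROI patches produced by the upstream stage would be fed through `train_transform` during fine-tuning, and the ensemble prediction is simply the mean of the members' class probabilities; the paper's own fusion strategy and training schedule may differ.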