The objective of our study was to explore the feasibility of integrating artificial intelligence (AI) algorithms for breast cancer detection into a portable point-of-care ultrasound (POCUS) device. This proof-of-concept implementation demonstrates a platform for integrating AI algorithms into a POCUS device, with a performance benchmark of at least 15 frames per second (FPS). We applied five AI object-detection models (FasterRCNN+MobileNetV3, FasterRCNN+ResNet50, RetinaNet+ResNet50, SSD300+VGG16, and SSDLite320+MobileNetV3), each pretrained on public datasets of natural images and fine-tuned on a dataset of gelatin-based breast phantom images containing both anechoic and hyperechoic lesions that mimic real tissue characteristics. We created several gelatin-based ultrasound phantoms containing ten simulated lesions ranging from 4 to 20 mm in size. Our experimental setup used the Clarius L15 scanning probe, connected via Wi-Fi to both a tablet and a laptop, as the core of our development platform. The phantom data were divided into training, validation, and held-out testing sets on a per-video basis. We executed 200 timing trials for each fine-tuned AI model while streaming scanning video from the ultrasound probe in real time. SSDLite320+MobileNetV3 was the standout, with a mean frame-to-frame time of 0.068 seconds (SD = 0.005), approximately 14.71 FPS, followed by FasterRCNN+MobileNetV3 at a mean of 0.123 seconds (SD = 0.016), about 8.13 FPS. Both models showed acceptable performance in lesion localization. Measured against our goal of 15 FPS, only the SSDLite320+MobileNetV3 architecture evaluated quickly enough for real-time use. Our findings demonstrate the necessity of AI architectures designed for edge devices for real-time use, as well as the potential need for hardware acceleration to run AI models on POCUS devices.
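The five detection architectures named above correspond to models available in torchvision, which makes the transfer-learning setup straightforward to illustrate. The following is a minimal sketch, not the authors' actual training code: the two-class head (background plus lesion), the optimizer settings, and the input conventions are all assumptions for illustration.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Load FasterRCNN+MobileNetV3, pretrained on COCO (natural images).
model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")

# Replace the classification head for two classes: background and lesion.
num_classes = 2
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# One hypothetical fine-tuning step on phantom frames: `images` is a list of
# CHW float tensors, `targets` a list of dicts with "boxes" (N x 4) and
# "labels" (N,) per image, as torchvision's detection API expects.
def train_step(images, targets):
    model.train()
    images = [img.to(device) for img in images]
    targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
    loss_dict = model(images, targets)  # detection models return a loss dict in train mode
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```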
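The reported throughput figures follow directly from the frame-to-frame intervals (e.g., a 0.068 s mean interval is 1/0.068 ≈ 14.71 FPS). The sketch below shows one way such a 200-trial timing run could be structured; `get_next_frame()` is a hypothetical stand-in for whatever callback delivers frames from the Clarius probe's Wi-Fi stream, and the loop is an assumption about the measurement, not the authors' benchmarking code.

```python
import time
import statistics
import torch

N_TRIALS = 200  # number of timed frames per model, per the study design

@torch.no_grad()
def timing_trials(model, get_next_frame, device):
    model.eval().to(device)
    intervals = []
    prev = time.perf_counter()
    for _ in range(N_TRIALS):
        frame = get_next_frame()       # CHW float tensor from the live stream (assumed)
        model([frame.to(device)])      # torchvision detectors take a list of images
        now = time.perf_counter()
        intervals.append(now - prev)   # frame-to-frame time, inference included
        prev = now
    mean_t = statistics.mean(intervals)
    sd_t = statistics.stdev(intervals)
    return mean_t, sd_t, 1.0 / mean_t  # e.g., mean 0.068 s -> ~14.71 FPS

# Usage: mean_t, sd_t, fps = timing_trials(model, get_next_frame, device)
```

Measuring the interval between consecutive frame completions, rather than inference time alone, captures the end-to-end latency a sonographer would actually experience, which is the quantity that matters against the 15 FPS real-time benchmark.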