Physical adversarial attacks face significant challenges in transferring across different object detection models, especially under real-world conditions. This is primarily because variations in model architectures, training data, and detection strategies make adversarial examples highly model-specific. This study introduces a multi-model adversarial training approach that improves the transferability of adversarial textures across diverse detection models, including one-stage, two-stage, and transformer-based architectures. Using the Truck Adversarial Camouflage Optimization (TACO) framework and optimizing jointly against a novel combination of YOLOv8n, YOLOv5m, and YOLOv3, our approach reduces the AP@0.5 detection score to 0.0972, over 50% lower than textures trained on single models alone. This result underscores the value of multi-model training for producing adversarial textures that remain effective across object detectors.
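
The joint optimization at the core of this approach can be illustrated with a minimal sketch (not the TACO implementation): the adversarial texture is updated to minimize the detection confidence summed over several detectors at once. The detector wrappers, `render_fn`, tensor shapes, and hyperparameters below are illustrative assumptions rather than details taken from the paper.

```python
import torch

def multi_model_attack_step(texture, detectors, render_fn, images, optimizer):
    """One step of a (sketched) multi-model texture attack: minimize the
    summed target-class confidence across all detectors."""
    optimizer.zero_grad()
    rendered = render_fn(texture, images)                # texture applied to the scene (assumed differentiable)
    loss = torch.zeros((), device=texture.device)
    for det in detectors:
        scores = det(rendered)                           # assumed: [batch, num_boxes] target-class confidences
        loss = loss + scores.max(dim=-1).values.mean()   # suppress the strongest detection per image
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        texture.clamp_(0.0, 1.0)                         # keep texture in a valid RGB range
    return loss.item()

# Hypothetical usage with three frozen, differentiable detector wrappers:
# texture = torch.rand(3, 1024, 1024, requires_grad=True)
# optimizer = torch.optim.Adam([texture], lr=1e-2)
# loss = multi_model_attack_step(texture, [yolov8n, yolov5m, yolov3],
#                                render_fn, images, optimizer)
```

Averaging (or summing) the per-detector losses is one straightforward way to share a single texture across models; the key point is that the gradient signal comes from all detectors in every update, which is what discourages overfitting the texture to any one architecture.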