Time-sensitive and safety-critical networked vehicular applications, such as autonomous driving, require deterministically guaranteed network resources, achieved through bandwidth reservations placed in advance. Timing a vehicle's decision to place a cost-efficient reservation request is crucial, as vehicles typically lack sufficient information about future bandwidth availability and costs. Bandwidth costs are often predicted with time-series machine learning models such as Long Short-Term Memory (LSTM) networks. However, standard LSTM models typically require long input data durations to achieve high accuracy, whereas in certain scenarios quick decisions must be made, even if that means sacrificing some accuracy. We propose a batched LSTM model that assists vehicles in placing bandwidth reservation requests with limited input data for an upcoming driving path. The model divides the data into batches during training to enhance computational efficiency and model performance. We validated our model using historical Amazon price data, providing a real-world experimental scenario. The results demonstrate that the batched LSTM model not only achieves higher accuracy within a short input data duration but also reduces bandwidth costs by up to 27% compared to traditional time-series machine learning models.
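To make the batching idea concrete, the sketch below shows one plausible way a cost time series could be divided into short input windows and grouped into mini-batches before being fed to an LSTM. The function name, window length, and batch size are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def make_batched_windows(series, window=12, batch_size=4):
    """Slice a 1-D cost series into (input-window, next-step-target) pairs,
    then group them into mini-batches of the kind a batched LSTM consumes.
    Window and batch sizes here are hypothetical, chosen for illustration."""
    # Each sample is a short window of past costs; its target is the next cost.
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    # Group consecutive samples into mini-batches for training.
    return [
        (X[i:i + batch_size], y[i:i + batch_size])
        for i in range(0, len(X), batch_size)
    ]

# Example: a toy "bandwidth price" series of 20 points
prices = np.arange(20, dtype=float)
batches = make_batched_windows(prices, window=12, batch_size=4)
# 20 - 12 = 8 training samples, grouped into 2 batches of 4
```

Short windows keep the input data duration small, which matches the paper's goal of supporting quick reservation decisions with limited data.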