“…In this study, the neural network was built and trained with TensorFlow under various hyperparameter settings, and whether the model reached the global optimum was judged from the loss values on the training set, as illustrated in Figure 10. The graph shows that the three batch sizes examined (16, 32, and 64) had minimal impact on the training loss. With initial learning rates of 0.0001, 0.0002, and 0.0004, however, the loss still had not reached its minimum after 100 epochs, indicating that the model had not converged to the global optimum.…”
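A minimal sketch of the hyperparameter sweep described above might look as follows. The batch sizes, learning rates, and 100-epoch budget match the text, but the network architecture, dataset, optimizer, and mean-squared-error loss are illustrative assumptions, since the excerpt does not specify them.

```python
# Hedged sketch of the described sweep; architecture, data, optimizer,
# and loss are placeholders, NOT the authors' actual configuration.
import numpy as np
import tensorflow as tf

# Synthetic regression data standing in for the study's training set.
rng = np.random.default_rng(0)
x_train = rng.normal(size=(1024, 8)).astype("float32")
y_train = rng.normal(size=(1024, 1)).astype("float32")

def build_model() -> tf.keras.Model:
    """A small fully connected network; the real architecture is unknown."""
    return tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(8,)),
        tf.keras.layers.Dense(1),
    ])

batch_sizes = [16, 32, 64]                 # batch sizes compared in the study
learning_rates = [0.0001, 0.0002, 0.0004]  # initial learning rates compared

histories = {}
for bs in batch_sizes:
    for lr in learning_rates:
        model = build_model()
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                      loss="mse")
        # Train for 100 epochs and record the training-loss curve,
        # mirroring the convergence check against Figure 10.
        hist = model.fit(x_train, y_train, batch_size=bs, epochs=100,
                         verbose=0)
        histories[(bs, lr)] = hist.history["loss"]

# Inspect the final training loss per setting to see whether any
# configuration has plateaued at a minimum within 100 epochs.
for (bs, lr), losses in sorted(histories.items()):
    print(f"batch={bs:2d}  lr={lr:.4f}  final loss={losses[-1]:.4f}")
```

Comparing the recorded loss curves across settings is what supports both observations in the passage: nearly overlapping curves across batch sizes, and curves that are still decreasing at epoch 100 for all three learning rates.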