“…Different optimizers were considered for hyperparameter tuning of the hydroponic systems, namely the Adam and SGD optimizers, with learning rates ranging from zero (0) to one (1), in order to obtain the optimal convergence. Figs. 15–23 show, respectively, the AER loss using the Adam optimizer (learning rate = 0.0000001), the AER loss using the Adam optimizer (learning rate = 0.1), the AER loss using the SGD optimizer (learning rate = 0.0000001), the AG loss using the Adam optimizer (learning rate = 0.01), the AG loss using the SGD optimizer (learning rate = 0.0000001), the Float loss using the Adam optimizer (learning rate = 0.01), the Float loss using the SGD optimizer (learning rate = 0.000001), the NFT loss using the Adam optimizer (learning rate = 0.01), and the NFT loss using the SGD optimizer (learning rate = 0.000001). From these results it can be inferred that the floating hydroponic system produced the optimal convergence using the Adam optimizer at a learning rate of 0.01, indicating that the floating system is the preferred hydroponic system for onion bulb diameter prediction using a decentralised split learning network.…”
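As a minimal sketch of the optimizer and learning-rate sweep described above, the following Python/PyTorch snippet trains a small regression model with each optimizer at several learning rates and reports the final loss. The model architecture, the synthetic features, and the targets are hypothetical stand-ins for the paper's per-system (AER, AG, Float, NFT) bulb-diameter models, not the authors' actual implementation.

```python
# Hedged sketch: compare Adam vs. SGD convergence across learning rates.
# All data and model choices here are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for one hydroponic system's sensor features (X)
# and onion bulb diameter targets (y).
X = torch.randn(256, 8)
y = X @ torch.randn(8, 1) + 0.1 * torch.randn(256, 1)

def train_once(optimizer_name: str, lr: float, epochs: int = 100) -> float:
    """Train a small regression model and return the final MSE loss."""
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    if optimizer_name == "Adam":
        opt = torch.optim.Adam(model.parameters(), lr=lr)
    else:
        opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

# Sweep both optimizers over learning rates in (0, 1), as in the text;
# the specific values mirror those reported for Figs. 15-23.
for name in ("Adam", "SGD"):
    for lr in (1e-7, 1e-6, 1e-2, 1e-1):
        final = train_once(name, lr)
        print(f"{name:4s} lr={lr:.0e} final loss={final:.4f}")
```

In a sweep of this form, very small learning rates (e.g., 1e-7) typically leave the loss nearly unchanged after the budgeted epochs, while overly large ones can diverge, which is consistent with the paper's finding that Adam at 0.01 gave the best convergence.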