pixels, and Pe is the expected accuracy.

2.2.7. Parameter Settings

The BiLSTM-Attention model was built with the PyTorch framework. The Python version was 3.7, and the PyTorch version used in this study was 1.2.0. All processes were performed on a Windows 7 workstation with an NVIDIA GeForce GTX 1080 Ti graphics card. The batch size was set to 64 and the initial learning rate to 0.001; the learning rate was then adjusted according to the number of training epochs, with a decay step of 10 epochs and a multiplication factor of 0.1. The Adam optimizer was employed, and the optimized loss function was cross entropy, which is the standard loss function for multi-class classification tasks and also gives acceptable results in binary classification tasks [57].

3. Results

To verify the effectiveness of the proposed method, we carried out three experiments: (1) a comparison of our proposed method with the BiLSTM model and the RF classification method; (2) a comparative evaluation before and after optimization using FROM-GLC10; and (3) a comparison between our experimental results and agricultural statistics.

3.1. Comparison of Rice Classification Approaches

In this experiment, the BiLSTM method and the classical machine learning method RF were chosen for comparative analysis, and the five evaluation indexes introduced in Section 2.2.5 were used for quantitative evaluation. To ensure the fairness of the comparison, the BiLSTM model had the same BiLSTM layers and parameter settings as the BiLSTM-Attention model, and it was also built with the PyTorch framework. Random forest, as its name implies, consists of a large number of individual decision trees that operate as an ensemble.
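An ensemble classifier of this kind can be sketched with scikit-learn. This is a minimal sketch on synthetic stand-in data, not the study's SAR time series; only the 100-tree, depth-22 configuration reported in this section is taken from the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-pixel time-series features (not the real data).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))               # 1000 pixels, 20 time steps
y = (X[:, :5].mean(axis=1) > 0).astype(int)   # 1 = "rice", 0 = "non-rice"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Settings reported in the text: 100 trees, maximum tree depth 22.
rf = RandomForestClassifier(n_estimators=100, max_depth=22, random_state=0)
rf.fit(X_train, y_train)

# Each tree votes on a class; the majority class becomes the prediction.
pred = rf.predict(X_test)
accuracy = (pred == y_test).mean()
```

Limiting `max_depth` (and, optionally, the minimum samples per node) stops tree growth early, which is what keeps the computational cost and the correlation between sub-samples in check.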
Each individual tree in the random forest outputs a class prediction, and the class with the most votes becomes the model's prediction. The implementation of the RF method is described in [58]. By setting the maximum depth and the number of samples per node, tree construction can be stopped early, which reduces the computational complexity of the algorithm and the correlation between sub-samples. In our experiment, RF and its parameter tuning were implemented in Python with the scikit-learn library (version 0.24.2). The number of trees was 100, and the maximum tree depth was 22.

The quantitative results of the different methods on the test dataset described in Section 2.2.3 are shown in Table 2. The accuracy of BiLSTM-Attention was 0.9351, which was substantially better than that of BiLSTM (0.9012) and RF (0.8809). This result shows that, compared with BiLSTM and RF, the BiLSTM-Attention model achieved higher classification accuracy. A test area was selected for detailed comparative analysis, as shown in Figure 11. Figure 11b shows the RF classification results, which contained some fragmented missing areas; it is possible that the structure of RF itself limited its ability to learn the temporal characteristics of rice. The areas missed in the BiLSTM classification results shown in Figure 11c were reduced, and the plots were comparatively complete. It was found that the time-series curves of the rice missed by the BiLSTM and RF models had a clear flooding-period signal; when the harvest-period signal is not clear, the model classifies the pixel as non-rice, resulting in missed detection of rice. Compared with the classification results of BiLSTM and RF.
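The learning-rate schedule used to train the BiLSTM-Attention model in Section 2.2.7 (initial rate 0.001, multiplied by 0.1 every 10 epochs, as PyTorch's `StepLR` scheduler does) can be sketched as:

```python
def step_lr(epoch, base_lr=0.001, step_size=10, gamma=0.1):
    """Learning rate after `epoch` epochs under a step-decay schedule.

    Mirrors torch.optim.lr_scheduler.StepLR: the base rate is multiplied
    by `gamma` once every `step_size` epochs.
    """
    return base_lr * gamma ** (epoch // step_size)

# The first decay happens at epoch 10, the second at epoch 20, and so on.
schedule = [step_lr(e) for e in (0, 9, 10, 19, 20)]
```

With these settings the rate stays at 0.001 for the first ten epochs, then drops to 0.0001, then to 0.00001.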