pixels, and Pe is the expected accuracy.

2.2.7. Parameter Settings
The BiLSTM-Attention model was constructed with the PyTorch framework. The version of Python is 3.7, and the version of PyTorch employed in this study is 1.2.0. All processes were performed on a Windows 7 workstation with an NVIDIA GeForce GTX 1080 Ti graphics card. The batch size was set to 64, the initial learning rate was 0.001, and the learning rate was adjusted according to the number of training epochs. The decay step of the learning rate was 10, and the multiplication factor for updating the learning rate was 0.1. The Adam optimizer was employed, and the optimized loss function was cross entropy, which is the standard loss function applied in multiclass classification tasks and also gives acceptable results in binary classification tasks [57].

3. Results
In order to verify the effectiveness of our proposed method, we carried out three experiments: (1) a comparison of our proposed method with the BiLSTM model and the RF classification method; (2) a comparative evaluation before and after optimization using FROM-GLC10; (3) a comparison between our experimental results and agricultural statistics.

3.1. Comparison of Rice Classification Methods
In this experiment, the BiLSTM method and the classical machine learning method RF were selected for comparative analysis, and the five evaluation indexes introduced in Section 2.2.5 were used for quantitative evaluation. To ensure the fairness of the comparison, the BiLSTM model had the same BiLSTM layers and parameter settings as the BiLSTM-Attention model. The BiLSTM model was also constructed with the PyTorch framework. Random forest, as its name implies, consists of a large number of individual decision trees that operate as an ensemble.
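The step-decay schedule described in Section 2.2.7 (initial learning rate 0.001, decay step 10, multiplication factor 0.1) corresponds to PyTorch's `torch.optim.lr_scheduler.StepLR`. As a minimal sketch (the function name below is ours, not the paper's), the learning rate used at a given epoch is:

```python
def stepped_lr(initial_lr: float, epoch: int,
               step_size: int = 10, gamma: float = 0.1) -> float:
    """Learning rate after `epoch` epochs under a step-decay schedule
    (equivalent to PyTorch StepLR with step_size=10, gamma=0.1)."""
    return initial_lr * gamma ** (epoch // step_size)

# With the paper's settings: 0.001 for epochs 0-9, 0.0001 for epochs 10-19, ...
lr_epoch_0 = stepped_lr(0.001, 0)    # → 0.001
lr_epoch_15 = stepped_lr(0.001, 15)  # → 0.0001
```

In a PyTorch training loop, the same behavior is obtained by calling `scheduler.step()` once per epoch on a `StepLR(optimizer, step_size=10, gamma=0.1)` wrapped around the Adam optimizer.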
Each individual tree in the random forest produces a class prediction, and the class with the most votes becomes the model's prediction. The implementation of the RF method is shown in [58]. By setting the maximum depth and the number of samples per node, tree construction can be stopped, which reduces the computational complexity of the algorithm and the correlation between sub-samples. In our experiment, RF and parameter tuning were realized using Python and the Scikit-learn library. The version of Scikit-learn was 0.24.2. The number of trees was 100, and the maximum tree depth was 22.

The quantitative results of the different methods on the test dataset mentioned in Section 2.2.3 are shown in Table 2. The accuracy of BiLSTM-Attention was 0.9351, which was significantly better than that of BiLSTM (0.9012) and RF (0.8809). This result showed that, compared with BiLSTM and RF, the BiLSTM-Attention model achieved higher classification accuracy. A test area was selected for detailed comparative analysis, as shown in Figure 11. Figure 11b shows the RF classification results, in which there were some broken missing areas. It is probable that the structure of RF itself restricted its ability to learn the temporal characteristics of rice. The areas missed in the classification results of BiLSTM, shown in Figure 11c, were reduced, and the plots were relatively complete. It was found that the time series curves of the rice missed in the classification results of the BiLSTM model and RF had a clear flooding-period signal. When the signal in the harvest period is not obvious, the model discriminates it as non-rice, resulting in missed detection of rice. Compared with the classification results of the BiLSTM and RF.
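The majority-vote rule that turns the individual tree predictions into the forest's prediction can be sketched in a few lines of plain Python (the function name and toy labels below are illustrative, not the paper's; the actual experiments used Scikit-learn's `RandomForestClassifier` with `n_estimators=100` and `max_depth=22`):

```python
from collections import Counter

def forest_predict(tree_predictions):
    """Majority vote over the class predictions of the individual trees:
    the class receiving the most votes becomes the forest's prediction."""
    return Counter(tree_predictions).most_common(1)[0][0]

# Toy example with 5 trees; 3 of the 5 vote "rice".
forest_predict(["rice", "non-rice", "rice", "rice", "non-rice"])  # → "rice"
```

In Scikit-learn 0.24.2, the equivalent prediction step is performed internally by `RandomForestClassifier.predict`, which aggregates the per-tree class probabilities rather than raw votes; the hard-vote version above is the textbook formulation described in the text.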