Fluctuations in agricultural commodity prices affect the supply and demand of agricultural commodities and have a significant impact on consumers. Accurate prediction of agricultural commodity prices would help reduce the risk caused by price fluctuations. This paper proposes a model called dual input attention long short-term memory (DIA-LSTM) for the efficient prediction of agricultural commodity prices. DIA-LSTM is trained on various variables that affect agricultural commodity prices, such as meteorological data and trading volume data, and can identify feature correlations and temporal relationships in multivariate time series input data. Further, whereas conventional models predominantly focus on the static main production area (selected for each agricultural commodity beforehand based on statistical data), DIA-LSTM utilizes the dynamic main production area (selected based on the production of each commodity in each region). To evaluate DIA-LSTM, it was applied to monthly price prediction for cabbage and radish in the South Korean market. Using meteorological information for the dynamic main production area, it achieved a 2.8% to 5.5% lower mean absolute percentage error (MAPE) than the conventional model that uses meteorological information for the static main production area. Furthermore, it achieved a 1.41% to 4.26% lower MAPE than benchmark models. Thus, it offers a new approach to agricultural commodity price forecasting and has the potential to help stabilize the supply and demand of agricultural products.
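The dual-attention idea behind DIA-LSTM can be illustrated with a minimal NumPy sketch: a first (input) attention stage reweights the driving series at each time step, and a second (temporal) attention stage pools the encoder states into a context vector for the forecast. All dimensions, weight matrices, and the stand-in for the LSTM hidden states below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical dimensions: T time steps, D driving series (features), H hidden units
T, D, H = 12, 5, 8
rng = np.random.default_rng(0)
X = rng.normal(size=(T, D))        # multivariate input (e.g. price, weather, volume)

# Stage 1 -- input (feature) attention: score each driving series per step,
# then reweight the inputs before they enter the LSTM encoder.
W_e = rng.normal(size=(D, D))
alpha = softmax(X @ W_e, axis=1)   # (T, D) feature attention weights, rows sum to 1
X_tilde = alpha * X                # attended inputs

# Stage 2 -- temporal attention: score hidden states across time and pool
# them into a single context vector used for the price forecast.
# (A tanh projection stands in for the LSTM recurrence here.)
W_h = rng.normal(size=(D, H))
Hs = np.tanh(X_tilde @ W_h)        # stand-in hidden states, shape (T, H)
beta = softmax(Hs @ rng.normal(size=(H,)))  # (T,) temporal weights, sum to 1
context = beta @ Hs                # (H,) context vector fed to the output layer
```

In the full model both attention stages are learned jointly with the LSTM; the sketch only shows how the two softmax-normalized weightings act on the feature and time axes, respectively.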
Owing to the frequent amendment and increasing complexity of tax laws, most taxpayers lack the knowledge required to apply them, which causes problems in everyday life. To use tax counseling services through the internet, a person must first select the category of tax law corresponding to their question. However, a layperson without prior knowledge of tax laws may not know which category to select in the first place. Therefore, a model capable of automatically classifying the categories of tax laws is needed. Recently, BERT-based models have frequently been used for text classification; however, they are generally trained on open domains and often suffer degraded performance on domain-specific technical terms, such as those in tax laws. Furthermore, because BERT is a large-scale model, a significant amount of time is required to train it. To address these issues, this study proposes Korean tax law-BERT (KTL-BERT) for the automatic classification of the categories of tax questions. For the proposed KTL-BERT, a new pre-trained language model was constructed by learning from scratch, applying a static masking method based on DistilRoBERTa. The pre-trained language model was then fine-tuned to classify five categories of tax law. A total of 327,735 tax law questions were used to verify the performance of the proposed KTL-BERT. The F1-score of the proposed KTL-BERT was approximately 91.06%, higher than that of the benchmark models by approximately 1.07%-15.46%, and its training speed was approximately 0.89%-56.07% higher.

INDEX TERMS: BERT, domain-specific, Korean tax law, pre-trained language model, text classification.
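The static masking mentioned above means the masked positions are fixed once during preprocessing and reused every epoch (as in original BERT), rather than regenerated on the fly (as in RoBERTa's dynamic masking). A minimal sketch of that step, under assumed names; it omits the 80/10/10 replacement rule of full BERT-style masking and is not the paper's implementation:

```python
import random

def static_mask(tokens, mask_token="[MASK]", mask_prob=0.15, seed=0):
    """BERT-style static masking: with a fixed seed, the same positions
    are masked on every call, so every training epoch sees one mask."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(mask_token)
            labels.append(tok)    # the MLM objective must recover this token
        else:
            masked.append(tok)
            labels.append(None)   # no loss on unmasked positions
    return masked, labels
```

Because the mask is deterministic, calling `static_mask` twice on the same token sequence yields identical masked inputs and labels; dynamic masking would instead draw a fresh mask each epoch.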