2022
DOI: 10.3390/e24111597
Entropy as a High-Level Feature for XAI-Based Early Plant Stress Detection

Abstract: This article is devoted to searching for high-level explainable features that can remain explainable for a wide class of objects or phenomena and become an integral part of explainable AI (XAI). The present study involved a 25-day experiment on early diagnosis of wheat stress using drought stress as an example. The state of the plants was periodically monitored via thermal infrared (TIR) and hyperspectral image (HSI) cameras. A single-layer perceptron (SLP)-based classifier was used as the main instrument in t…
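The pipeline the abstract describes, a scalar entropy value used as a high-level image feature and fed to a single-layer perceptron, can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, the histogram-based Shannon entropy, and the threshold-unit SLP are assumptions about the general technique, not details taken from the paper.

```python
import numpy as np

def shannon_entropy(image, bins=256):
    """Shannon entropy (in bits) of an image's intensity histogram.

    A constant image yields 0 bits; a maximally mixed histogram
    approaches log2(bins) bits.
    """
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    return float(-np.sum(p * np.log2(p)))

def slp_predict(features, weights, bias):
    """Toy single-layer perceptron: linear combination + hard threshold."""
    return (features @ weights + bias > 0).astype(int)
```

As a usage example, an entropy value could be computed per TIR or HSI frame and the resulting feature vectors classified as stressed/unstressed by the perceptron; the decision boundary here is a single learned threshold on the entropy axis.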

Cited by 3 publications
(3 citation statements)
References 29 publications
“…In the examples of providing explainability in the practice of diagnosing plant stress based on DL [2,3], we can so far speak only of a partial implementation of three principles, (1) Explanation, (2) Meaningful, and (3) Accuracy of explanation, in order to overcome some of the limitations of the black-box nature of DL models. In examples of providing explainability based on ML methods [6,13,26], one can see the implementation of the same three XAI principles, but with a high degree of content, as well as part of the content of the fourth principle, which has not yet received due attention in practice.…”
Section: Notions of XAI and Explainability in AI
confidence: 99%
“…The HSI 'explanator' should be centered on data scientists, and the TIR 'explanator' on biologists and agro-scientists. As a result, such an XAI block-explanator solves the universal problem of transforming one 'explanator' into another and can be used at the input of any network for the early diagnosis of plant stress, as in [13] or [26], for example.…”
Section: Construction of an XAI Early Diagnostics Network with Explanators ...
confidence: 99%