2022
DOI: 10.3390/agriculture12071024
Visually Explaining Uncertain Price Predictions in Agrifood: A User-Centred Case-Study

Abstract: The rise of ‘big data’ in agrifood has increased the need for decision support systems that harvest the power of artificial intelligence. While many such systems have been proposed, their uptake is limited, for example because they often lack uncertainty representations and are rarely designed in a user-centred way. We present a prototypical visual decision support system that incorporates price prediction, uncertainty, and visual analytics techniques. We evaluated our prototype with 10 participants who are ac…
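As an illustrative aside only: the abstract does not disclose how the prototype produces its uncertain price predictions, but one common approach is quantile regression, which yields a prediction interval that a visual (e.g., a fan chart or error bars) could display. The sketch below assumes scikit-learn and entirely hypothetical feature and price data; it is not the authors' method.

```python
# Minimal sketch of uncertain price prediction via quantile regression.
# Assumption: the paper's actual model is unknown; data below is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical weekly features (e.g., supply volume, season index) and produce prices.
X = rng.normal(size=(500, 2))
y = 40 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=4, size=500)

# Fit one model per quantile: lower bound, median, upper bound.
models = {}
for q in (0.1, 0.5, 0.9):
    models[q] = GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=200).fit(X, y)

# Predict a price band for a new week; the band width conveys uncertainty
# and could back a visual such as a fan chart or error bars.
x_new = np.array([[0.5, -1.0]])
low, mid, high = (models[q].predict(x_new)[0] for q in (0.1, 0.5, 0.9))
print(f"Predicted price: {mid:.2f} (80% interval: [{low:.2f}, {high:.2f}])")
```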

Cited by 5 publications (2 citation statements). References: 87 publications.
“…In another between-subjects study with high and low accuracy advice, in which model accuracy could be deduced from comparing the AI advice to the correct answer (ground-truth) after each task, participants trusted and relied significantly more on highly accurate advice [32]. Finally, trust may be affected not only by what participants know about the system but also by what they (subliminally) expect from it: a mismatch in expectations could lead to a decline in trust as Ooge and Verbert found [48].…”
Section: Effects of AI Failure Over Time on Reliance and Trust
Mentioning confidence: 99%
“…Interpretable and explainable models diminish the black-box nature of AI, enable understanding of the reasons behind any specific decision, and promote the acceptance of AgriDSS (Dara et al., 2022). Ooge and Verbert (2022) suggest that improved model transparency with tailored explanations and designed visualizations can foster user trust and acceptance of AI.…”
Section: Systems Roadblocks
Mentioning confidence: 99%