2013
DOI: 10.1002/ev.20064
Editors' Notes

Cited by 2 publications (8 citation statements)
References 0 publications
“…These kinds of discrete generative tasks with verifiable answers are well‐suited to AI applications. Tasking AI with producing more complex or richer qualitative narratives, however, can be less reliable (Azzam, 2023). Investigative journalists have demonstrated that generative AI may exhibit confusing responses, react inappropriately to prompts, or reflect the worst facets of the internet in its attempts to match human conversation (Roose, 2023).…”
Section: AI and Evaluation
“…The most important consideration in machine learning models is this—what the machine learns depends on what you teach it. These models can only ever be as strong as the data on which they are trained and are subject to an array of validity concerns (Azzam, 2023). Training a machine learning model can exponentiate even very minute errors, particularly in supervised models where humans shape the algorithm's decision‐making.…”
Section: Limitations of AI That Evaluators Should Consider
“…Azzam's (2023) discussion of validity types and the ability of evaluators versus AI to support them.…”
Section: Proposed Criteria for AI Use in Evaluation
“…The key question this criterion is centered on is: Are claims generated during an evaluation using AI valid or trustworthy? One aspect of this criterion is the validity or trustworthiness of inferences resulting from quantitative and/or qualitative AI methods, measurement, and analysis (Azzam, 2023; Sabarre et al., 2023; Tilton et al., 2023). Another consideration is whether and to what extent AI methods, measurement, and analysis are consistent with theory, best practices, principles, and standards (Azzam, 2023; Head et al., 2023).…”
Section: Proposed Criteria for AI Use in Evaluation