1987
DOI: 10.1002/bs.3830320404
Artificial intelligence expert systems for clinical diagnosis: Are they worth the effort?

Abstract: Modeling the decision-making processes of human experts has been studied by scientists who call themselves psychologists and by scientists who say they are students of artificial intelligence (AI). The psychological research literature suggests that experts' decision-making processes can be adequately captured by simple mathematical models. On the other hand, those in AI who are preoccupied with human expertise maintain that complex computer models, in the form of expert systems, are required to do justice to …

Cited by 23 publications (8 citation statements) | References 40 publications
“…This problem relates to the difficulty of finding available experts that are recognized in the field as knowledgeable. Often intelligent systems' developers assume human expertise to be 'the ideal' and do not question the suggestions of the experts and, similarly, clinicians also seem to have high confidence in their decisions (Carroll, 1994;Ridderikhoff & van Herk, 1999).…”
Section: Acquiring Information (mentioning)
confidence: 98%
“…Such situations arise because even post hoc it may not be clear what the proper answer should have been (e.g., what negotiation strategy should be adopted, what clause should be inserted into a contract). The most apparent example of differences in success metrics is that knowledge-based systems research tends virtually to ignore technical validity (Carroll, 1987; Turban & Aronson, 1998). This is a consequence of the assumption that human experts are the normative standard and that the experts are able to achieve an outstanding level of decision-making performance (Hayes-Roth et al., 1983), and also reflects the difficulty of validating output in knowledge-based environments.…”
Section: Success in the Knowledge-Based System Tradition (mentioning)
confidence: 99%
“…In addition, researchers in knowledge-based systems use other criteria that are not used by vanilla DSS researchers. For example, a major goal of knowledge-based systems is to mimic the reasoning process that an expert would have used (Carroll, 1987; Davis & Lenat, 1980; Rangaswamy et al., 1987, 1989; Turban & Aronson, 1998).…”
Section: Success in the Knowledge-Based System Tradition (mentioning)
confidence: 99%
“…As a consequence, much research effort has been spent on rule-based or cognitive expert systems. Statistical systems, such as that of de Dombal [13][14][15] for the diagnosis of abdominal pain, can only offer statistical probabilities and no detailed explanations despite their greater diagnostic accuracy [16].…”
Section: Construction of the Expert Systems (mentioning)
confidence: 99%