2013
DOI: 10.1002/asi.22758
A comparison of techniques for measuring sensemaking and learning within participant‐generated summaries

Abstract: While it is easy to identify whether someone has found a piece of information during a search task, it is much harder to measure how much someone has learned during the search process. Searchers who are learning often exhibit exploratory behaviors, and so current research is often focused on improving support for exploratory search. Consequently, we need effective measures of learning to demonstrate better support for exploratory search. Some approaches, such as quizzes, measure recall when learning from a fix…

Cited by 71 publications (60 citation statements)
References 47 publications
“…Research in search as learning has proposed several ways to measure the learning outcome after searching (Bhattacharya & Gwizdka, ; Collins‐Thompson, Rieh, Haynes, & Syed, ; Gadiraju, Yu, Dietze, & Holtz, ; Wilson & Wilson, ; Yu et al, ); and the relationship between search interactions and learning performance or knowledge change (Bhattacharya & Gwizdka, ; Ghosh, Rath, & Shah, ; Liu & Song, ).…”
Section: Introduction
confidence: 99%
“…Summary quality was evaluated based on topic-specific criteria including the number of reasonable topics, overall quality of the topic description, and number of arguments. Wilson and Wilson [25] developed systematic techniques to measure the depth of learning at three levels: quality of facts, interpretation of data into statements, and use of critique. These previous studies indicate that traditional measures in information retrieval, such as recall and precision, can be effectively complemented with alternative measures that pay more attention to search behavior, process, and task outcomes beyond basic search results.…”
Section: Introduction
confidence: 99%
“…The interesting research questions lie around the impact that incomplete prior knowledge has on assessing whether new information correctly resolves an information need. Answer correctness, or a measure of an answer's strength, is often used in studies of searching behaviour, whilst also trying to measure pre- and post-search knowledge on a topic [55]. The Tetris Model may help to draw new, finer-grained questions about the sequence in which information is found during search.…”
Section: Opportunities for Future Research
confidence: 99%