26th International Conference on Intelligent User Interfaces 2021
DOI: 10.1145/3397481.3450649
ForSense: Accelerating Online Research Through Sensemaking Integration and Machine Research Support

Abstract: Online research is a frequent and important activity people perform on the Internet, yet current support for this task is basic, fragmented, and not well integrated into web browser experiences. Guided by sensemaking theory, we present ForSense, a browser extension for accelerating people's online research experience. The two primary sources of novelty of ForSense are the integration of multiple stages of online research and providing machine assistance to the user by leveraging recent advances in neural-driven…

Cited by 7 publications (6 citation statements). References 22 publications.
“…Similar to what was reported in prior work [99], since our participants were not explicitly told how the system worked to automatically collect and rank information, they had to form their own mental models and hypotheses about how the system works and how they could affect it with their behavior. For example, P8 noticed that "it looks like if I spend a little bit more time on a particular place on a page, the corresponding criterion would get picked up and bumped up quickly; and if I click on that part a bunch of times, which happens to be what I typically would do when I try to focus my attention on something now that I'm thinking about it, it's [the corresponding criterion] going to go up even faster."…”
Section: Evaluation Discussion
Confidence: 99%
“…Similar to previous systems [61,81,99], the sidebar can be toggled in and out like a drawer by clicking the extension icon (Figure 1h) or using a keyboard shortcut. Developers can passively monitor the sidebar as they are searching and browsing to make sure the system performs correctly, and quickly correct or dismiss the mistakes that the system makes.…”
Section: Crystalline, 4.1 System Overview
Confidence: 99%
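The drawer-style toggle described in the excerpt above is a common browser-extension pattern. As a rough illustration only (not the Crystalline or ForSense implementation), the following TypeScript sketch shows how such a toggle might be wired in a hypothetical Manifest V3 Chrome extension; the "toggle-sidebar" command name, the message type, and the element id are all assumptions.

// Hypothetical background service worker (background.ts) for a Manifest V3 extension.
// Assumes the manifest declares a "toggle-sidebar" keyboard command; nothing here is
// taken from the ForSense or Crystalline papers.

function toggleInTab(tabId: number | undefined): void {
  if (tabId !== undefined) {
    chrome.tabs.sendMessage(tabId, { type: "TOGGLE_SIDEBAR" });
  }
}

// Clicking the toolbar icon toggles the sidebar in the active tab.
chrome.action.onClicked.addListener((tab) => toggleInTab(tab.id));

// A keyboard shortcut mapped to the assumed "toggle-sidebar" command does the same.
chrome.commands.onCommand.addListener((command) => {
  if (command === "toggle-sidebar") {
    chrome.tabs.query({ active: true, currentWindow: true }, ([tab]) => toggleInTab(tab?.id));
  }
});

// Hypothetical content script (content.ts): slides a sidebar element in and out.
const SIDEBAR_ID = "example-research-sidebar"; // assumed element id

function ensureSidebar(): HTMLElement {
  let el = document.getElementById(SIDEBAR_ID);
  if (!el) {
    el = document.createElement("div");
    el.id = SIDEBAR_ID;
    el.style.cssText =
      "position:fixed;top:0;right:0;width:320px;height:100vh;background:#fff;" +
      "border-left:1px solid #ccc;z-index:2147483647;transition:transform .2s;" +
      "transform:translateX(100%);";
    document.body.appendChild(el);
  }
  return el;
}

chrome.runtime.onMessage.addListener((msg) => {
  if (msg?.type === "TOGGLE_SIDEBAR") {
    const sidebar = ensureSidebar();
    const hidden = sidebar.style.transform === "translateX(100%)";
    sidebar.style.transform = hidden ? "translateX(0%)" : "translateX(100%)";
  }
});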
“…In their work, they explain that how something is introduced can have a direct impact on how users frame and continue their analysis. Intelligence analysis [9,18], medical diagnosis [23,39], and even humble internet research [41,42,49] all share tasks where complex data requires thoughtful consideration and artifact synthesis to arrive at and present a formal conclusion [58]. So, while common analytic settings clearly employ parts of the sensemaking process, they also define operations involving collaborative teaming or hierarchical units [37].…”
Section: Collaborative Sensemaking
Confidence: 99%
“…Without the transcription of accurate mental schemes, details can be forgotten, leading to false conclusions and inaccuracies [51,52]. By partnering with computers, human analysts can focus less on annotation tasks, like recording how they arrived at different concepts, and shift their attention toward directing the analysis and hypothesizing relationships between discovered ideas [11,19,49]. In this study, we examine how the representation of a user's process impacts the sensemaking processes of new analysts.…”
Section: Collaborative Sensemaking
Confidence: 99%
“…CiteSense [68] developed an information-rich environment which provides various features for searching, appraising, and managing the different tasks involved in a literature review. While prior work in supporting scholarly literature review primarily focused on a standalone information environment detached from users' flow of reading, recent notable exceptions in online sensemaking support tools are ForSense [50] and Fuse [38].…”
Section: Tools for the Literature Review Workflow
Confidence: 99%