Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3290605.3300892
VizNet: Towards a Large-Scale Visualization Learning and Benchmarking Repository

Abstract (excerpt): … and developing benchmark models and algorithms for automating visual analysis. To demonstrate VizNet's utility as a platform for conducting online crowdsourced experiments at scale, we replicate a prior study assessing the influence of user task and data distribution on visual encoding effectiveness, and extend it by considering an additional task: outlier detection. To contend with running such studies at scale, we demonstrate how a metric of perceptual effectiveness can be learned from experimental results, …

Cited by 73 publications (20 citation statements). References 70 publications.

Citation statements:
“…The first group examines how data distributions and user tasks influence the effectiveness of charts [32,33] and visual encodings [26]. Hu et al [22] proposed to view every realized visualization as a tuple of (data 𝐷, visual form 𝑉 , task 𝑇 ).…”
Section: Structure in Visualization (mentioning)
confidence: 99%
“…We synthesize the ideas from Demiralp et al., Kindlmann and Scheidegger, and Hu et al. [16,22,27] and form the following model for reasoning about visualizations: Data 𝐷 ↔ Visual forms 𝑉 ↔ Tasks 𝑇. Given a particular data set 𝑑 in the space of all possible data sets 𝐷, 𝑑 could be mapped to a set of different visual configurations 𝑉_𝑑 ⊆ 𝑉, some more structure-preserving (or having better "data–visual" correspondence) than others. Each feasible visual configuration 𝑣 ∈ 𝑉_𝑑 can be used to answer tasks, be they low-level [2] or high-level [38].…”
Section: A Model for Y-axis Truncation (mentioning)
confidence: 99%
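To make the 𝐷 ↔ 𝑉 ↔ 𝑇 framing quoted above concrete, here is a minimal Python sketch. It is not taken from VizNet or any of the cited papers; the names (VisualForm, candidate_forms, effectiveness) and the scoring heuristic are hypothetical stand-ins for a learned perceptual effectiveness metric such as the one mentioned in the abstract.

```python
# Hypothetical sketch of the (data D, visual form V, task T) framing.
# None of these names come from the cited papers; they only illustrate how a
# dataset d can map to several candidate visual configurations V_d, each of
# which can be scored per task.
from dataclasses import dataclass


@dataclass(frozen=True)
class VisualForm:
    mark: str        # e.g. "bar", "point", "line"
    x_encoding: str  # column name mapped to the x channel
    y_encoding: str  # column name mapped to the y channel


@dataclass
class Visualization:
    data: dict        # d: column name -> list of values
    form: VisualForm  # v in V_d
    task: str         # t: e.g. "read value", "compare", "find outlier"


def candidate_forms(data: dict) -> list[VisualForm]:
    """Enumerate a (toy) set V_d of visual configurations feasible for d."""
    columns = list(data)
    return [
        VisualForm(mark, x, y)
        for mark in ("bar", "point")
        for x in columns
        for y in columns
        if x != y
    ]


def effectiveness(vis: Visualization) -> float:
    """Stand-in for a learned or hand-crafted perceptual effectiveness metric."""
    # Purely illustrative heuristic: prefer bars for comparison tasks,
    # points for outlier detection.
    if vis.task == "compare" and vis.form.mark == "bar":
        return 1.0
    if vis.task == "find outlier" and vis.form.mark == "point":
        return 1.0
    return 0.5


# Usage: pick the highest-scoring configuration in V_d for a given task t.
d = {"category": ["a", "b", "c"], "value": [3, 7, 42]}
best = max(
    (Visualization(d, v, "find outlier") for v in candidate_forms(d)),
    key=effectiveness,
)
print(best.form)
```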
“…Note that crossfilter applications generally involve only a small number of attributes [17,41]. Typically, visualization datasets have fewer than 10 attributes, as reported in [27, Figure 2].…”
Section: Datasets and Exploration Environments (mentioning)
confidence: 99%
“…However, most existing works focus on pre-training itself, relying only on the general distribution information of corpora without considering the difference between semantic and irregular characters. Unlike natural language, most tabular data is organized as non-semantic items, including numbers, strings, or symbols, which account for approximately 70% of tabular pre-training datasets such as WikiTables and Common Crawl tables [229,4]. Therefore, the semantic table entities, including both headers and cells, make up the remaining 30% but play an important role in high-level table understanding, such as column type prediction or table classification based on semantic attributes.…”
Section: Motivation (mentioning)
confidence: 99%
“…"type 0" Table Type Classification Outlier Detection … Figure 3.9: Overview of our model. The proposed model is pretrained by the tabular corpora [4] and external well strctured data such as knowledge graph [5]. Then the model will be applied to multiple downstream tasks by either zero-shot learning or direct finetuning.…”
Section: Columns Type Predictionmentioning
confidence: 99%
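The pretrain-then-adapt workflow described in the quotation above can be sketched as follows. This is only an illustrative outline under assumed names: TableEncoder, encode_column, and predict_column_type do not come from the cited work, and the zero-shot scoring is a deliberately trivial placeholder.

```python
# Hypothetical sketch of the workflow quoted above: a model is pretrained on
# tabular corpora, then reused for downstream tasks (e.g. column type
# prediction) via fine-tuning or zero-shot inference. All names are invented.
from typing import Iterable


class TableEncoder:
    """Stand-in for a tabular pretrained model (headers + cells -> vectors)."""

    def pretrain(self, tables: Iterable[list[list[str]]]) -> None:
        # e.g. masked-cell / masked-header objectives over large table corpora
        ...

    def encode_column(self, header: str, cells: list[str]) -> list[float]:
        # Returns a fixed-size embedding for one column (placeholder values).
        return [0.0] * 128


def predict_column_type(encoder: TableEncoder,
                        header: str,
                        cells: list[str],
                        label_names: list[str]) -> str:
    """Zero-shot variant: pick the label 'closest' to the column.

    A fine-tuning variant would instead train a classification head on top of
    encode_column() using labeled columns.
    """
    embedding = encoder.encode_column(header, cells)  # would feed a real scorer
    # Toy scoring by character overlap; real systems would compare the column
    # embedding against label (or label-description) embeddings.
    scores = {label: len(set(header) & set(label)) for label in label_names}
    return max(scores, key=scores.get)


encoder = TableEncoder()
encoder.pretrain([])  # pretraining corpus omitted in this sketch
print(predict_column_type(encoder, "country", ["France", "Japan"],
                          ["location", "person", "date"]))
```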