2021
DOI: 10.1016/j.visinf.2021.06.002
Visualizing large-scale high-dimensional data via hierarchical embedding of KNN graphs

Cited by 17 publications (4 citation statements)
References 16 publications
“…Experts noted that the tabular view may not facilitate effective comparisons when there were too many rows or columns, while excessive dots in feature projections may cause visual clutter [100]. Therefore, it is promising to design more effective visualizations to support the workflow at a larger scale. Additionally, our system currently employs SHAP values to evaluate feature contribution, which can be computationally expensive, especially given the vast volume of manuscript submissions encountered by leading conferences and journals.…”
Section: Discussion
confidence: 99%
“…It is predicated on the notion that comparable data points typically have similar labels or values [16]. The KNN method employs the complete training dataset as a reference throughout the training phase [17], [18]. It uses a selected distance metric, such as Euclidean distance, to determine the distance between each training example and the input data point before making predictions [13], [19].…”
Section: K-Nearest Neighbors (KNN)
confidence: 99%
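The KNN description above (keep the full training set as the reference, measure Euclidean distance to each training example, then predict) can be sketched in a few lines. This is an illustrative toy implementation under our own assumptions, not code from the cited papers; the function name `knn_predict` and the sample data are made up for the example.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Minimal KNN classifier sketch: majority vote among the k
    training points closest to x under Euclidean distance."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training example
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # most common label wins

# Tiny synthetic reference set: two points per class
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.05, 0.1])))   # → 0
```

Because the whole training set is scanned at prediction time, the cost per query grows linearly with the dataset size, which is exactly why approximate KNN-graph constructions matter at scale.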
“…Representation learning embeds data into a high-dimensional space by vectorization [161]. In this space, adversarial examples can also be generated for adversarial learning [75,76].…”
Section: Representation Learning
confidence: 99%
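The idea of generating adversarial examples in a learned vector space can be illustrated with a fast-gradient-sign-style perturbation on a simple logistic model. This sketch is our own assumption for illustration (the cited survey does not specify a method); the weights, inputs, and the helper `fgsm_perturb` are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(w, b, x, y, eps=0.5):
    """Perturb x one signed-gradient step in the direction that
    increases the logistic loss for true label y."""
    grad = (sigmoid(w @ x + b) - y) * w   # dLoss/dx for logistic regression
    return x + eps * np.sign(grad)

w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.0, 0.5])                  # originally scored positive: w @ x + b = 1.5
x_adv = fgsm_perturb(w, b, x, y=1.0)
print(w @ x_adv + b)                      # score drops after the perturbation
```

Training on such perturbed points alongside the originals is the basic loop behind adversarial learning in an embedding space.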