2022
DOI: 10.1016/j.trc.2022.103661
Worldwide city transport typology prediction with sentence-BERT based supervised learning via Wikipedia

Cited by 8 publications (3 citation statements) · References 39 publications
“…These findings suggest that CoSENT may be better suited for capturing semantic similarity between sentences [79], particularly when more powerful pre-trained models are employed. However, it is noteworthy that all combinations of models yielded Pearson's and Spearman's coefficients above 0.75, indicating that pre-trained language models and sentence embedding schemes are effective for capturing semantic similarity between sentences, corroborating the view of studies such as [66–68].…”
Section: Results (supporting)
confidence: 74%
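The quoted study scores sentence-embedding models by correlating predicted pair similarities with human labels. As a minimal sketch of that evaluation step, the snippet below computes Pearson's and Spearman's coefficients with SciPy; the score arrays are hypothetical placeholders, not data from the cited work.

```python
# Minimal sketch: correlating predicted sentence-pair similarities
# with human-annotated gold scores, as in STS-style evaluations.
# Both arrays below are hypothetical placeholders.
from scipy.stats import pearsonr, spearmanr

gold_scores = [0.9, 0.1, 0.6, 0.8, 0.3]      # human similarity labels
pred_scores = [0.85, 0.2, 0.55, 0.75, 0.35]  # model cosine similarities

pearson, _ = pearsonr(gold_scores, pred_scores)
spearman, _ = spearmanr(gold_scores, pred_scores)
print(f"Pearson: {pearson:.3f}, Spearman: {spearman:.3f}")
```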
“…Such approaches require significant computational resources and struggle to make real-time inferences, which can be alleviated by the representation-based twin-tower structure with sentence embedding at its core [66]. Sentence-BERT [67] obtains sentence embeddings through siamese and triplet network structures, and the semantic similarity of two sentences can be gauged by the cosine similarity between their embeddings [68]. A simple contrastive learning framework for sentence embeddings (SimCSE), with both unsupervised and supervised versions, was proposed by Gao et al. [69], achieving state-of-the-art sentence-level semantic representation based on contrastive learning and dropout data augmentation [70].…”
Section: Related Work (mentioning)
confidence: 99%
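To make the representation-based setup described above concrete, here is a minimal sketch of scoring sentence similarity with Sentence-BERT embeddings and cosine similarity, assuming the sentence-transformers package; the all-MiniLM-L6-v2 checkpoint and the example sentences are illustrative choices, not details from the cited papers.

```python
# Minimal sketch: Sentence-BERT embeddings + cosine similarity.
# Assumes `pip install sentence-transformers`; the checkpoint name
# is an illustrative choice, not the one used in the cited work.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The city has an extensive metro network.",
    "Public transit in this city relies heavily on subways.",
]
# Encode both sentences once with the shared (twin-tower) encoder.
embeddings = model.encode(sentences, convert_to_tensor=True)

# Semantic similarity as cosine similarity between the embeddings.
score = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"cosine similarity: {score:.3f}")
```

Because each sentence is encoded independently, embeddings can be precomputed and cached, which is what makes this twin-tower setup suitable for the real-time inference the excerpt mentions.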
“…Initially, many doubted the quality of Wikipedia's content amid allegations of misinformation, so several institutions prohibited teachers and students from using Wikipedia [7], [9], [10]. However, this assumption is gradually changing because Wikipedia is supported by millions of editors and moderators of multi-sourced content [11], [12]. This research focuses on whether using Wikipedia as a digital library in biology learning contributes to improving students' scientific literacy, supporting biology learning with innovative digital media that has not been used previously.…”
Section: Introduction (mentioning)
confidence: 99%