Findings of the Association for Computational Linguistics: EMNLP 2021
DOI: 10.18653/v1/2021.findings-emnlp.34

WHOSe Heritage: Classification of UNESCO World Heritage Statements of “Outstanding Universal Value” with Soft Labels

Abstract: The UNESCO World Heritage List (WHL) includes the exceptionally valuable cultural and natural heritage to be preserved for mankind. Evaluating and justifying the Outstanding Universal Value (OUV) is essential for each site inscribed in the WHL, and yet a complex task, even for experts, since the selection criteria of OUV are not mutually exclusive. Furthermore, manual annotation of heritage values and attributes from multi-source textual data, which is currently dominant in heritage studies, is knowledge-demanding…
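The "soft labels" in the title refer to training targets that spread probability mass over several OUV criteria rather than committing to a single class, since, as the abstract notes, the criteria are not mutually exclusive. A minimal sketch of a loss over such targets, assuming a 10-way classifier over the ten WHL selection criteria; the label space and the example weights are illustrative, not the paper's exact setup:

    import torch
    import torch.nn.functional as F

    def soft_label_loss(logits, soft_targets):
        # Cross-entropy against a soft target distribution:
        # logits       (batch, n_criteria) raw classifier scores
        # soft_targets (batch, n_criteria) rows summing to 1
        log_probs = F.log_softmax(logits, dim=-1)
        return -(soft_targets * log_probs).sum(dim=-1).mean()

    # Illustrative target: a sentence mainly about criterion (ii),
    # with some probability mass on criterion (iv)
    soft = torch.zeros(1, 10)
    soft[0, 1] = 0.8  # criterion (ii)
    soft[0, 3] = 0.2  # criterion (iv)
    print(soft_label_loss(torch.randn(1, 10), soft))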

Cited by 3 publications (8 citation statements)
References 17 publications
“…, 2018] has also been trained and fine-tuned, reaching similar performance in accuracy. Furthermore, it has been found that the average confidence of both the BERT and ULMFiT models on the prediction task showed significant correlation with expert evaluation, even on social media data [Bai et al., 2021a]. This suggests that it may be possible to use both trained models to generate labels about heritage values in a semi-supervised active learning setting [Prince, 2004; Zhu and Goldberg, 2009], as this is a task too knowledge-demanding for crowd-workers, yet too time-consuming for experts [Pustejovsky and Stubbs, 2012].…”
Section: Contextual Features (citation type: mentioning, confidence: 99%)
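The statement above, that average model confidence tracks expert evaluation, implies a simple triage rule for semi-supervised active learning: auto-accept high-confidence predictions as pseudo-labels and queue the rest for expert annotation. A hedged sketch; the 0.9 threshold and the use of the maximum softmax probability as the confidence score are assumptions for illustration, not the cited works' procedure:

    import torch
    import torch.nn.functional as F

    def triage_by_confidence(logits, threshold=0.9):
        # logits: (n_sentences, n_criteria) classifier outputs on unlabeled text.
        # Returns pseudo-labels for confident rows and indices needing review.
        probs = F.softmax(logits, dim=-1)
        confidence, predictions = probs.max(dim=-1)
        auto = confidence >= threshold              # accept as pseudo-labels
        review = (~auto).nonzero(as_tuple=True)[0]  # route to expert annotators
        return predictions[auto], auto, review

    # Illustrative run: 5 unlabeled sentences, 10 criteria
    pseudo, mask, review_idx = triage_by_confidence(torch.randn(5, 10))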
“…Specifically, the output on the [CLS] token of BERT models is regarded as an effective representation of the entire input sentence, and is used extensively for classification tasks [Clark et al., 2019; Sun et al., 2019]. In the heritage studies domain, Bai et al. [2021a] fine-tuned BERT on the dataset WHOSe Heritage, which they constructed from UNESCO World Heritage inscription documents, followed by a Multi-Layer Perceptron (MLP) classifier to predict the OUV selection criteria that a sentence is concerned with, showing top-1 accuracy of around 71% and top-3 accuracy of around 94%.…”
Section: Textual Features (citation type: mentioning, confidence: 99%)
“…Specifically, the output on the [CLS] token of BERT models is regarded as an effective representation of the entire input sentence, being used extensively for classification tasks [77,78]. In the heritage studies domain, Bai et al. [79] fine-tuned BERT on the dataset WHOSe Heritage that they constructed from the UNESCO World Heritage inscription document, followed by a Multi-Layer Perceptron (MLP) classifier to predict the OUV selection criteria that a sentence is concerned with, showing top-1 accuracy of around 71% and top-3 accuracy of around 94%.…”
Section: Textual Features (citation type: mentioning, confidence: 99%)
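Both statements describe the same architecture: a fine-tuned BERT encoder whose [CLS] output feeds a small MLP that scores the OUV selection criteria. A minimal sketch with Hugging Face transformers; the hidden size of the head, the dropout rate, and the 10-class label space are illustrative assumptions, not the configuration of Bai et al. [79]:

    import torch
    from torch import nn
    from transformers import AutoModel, AutoTokenizer

    class OUVClassifier(nn.Module):
        # BERT encoder with an MLP head on the [CLS] representation.
        def __init__(self, n_criteria=10, encoder_name="bert-base-uncased"):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(encoder_name)
            hidden = self.encoder.config.hidden_size
            self.head = nn.Sequential(
                nn.Linear(hidden, 256), nn.ReLU(), nn.Dropout(0.1),
                nn.Linear(256, n_criteria),
            )

        def forward(self, input_ids, attention_mask):
            out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
            cls = out.last_hidden_state[:, 0]  # [CLS] token embedding
            return self.head(cls)

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    batch = tok(["Criterion (ii): an important interchange of human values."],
                return_tensors="pt")
    logits = OUVClassifier()(batch["input_ids"], batch["attention_mask"])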
“…However, for the pragmatic purposes of demonstrating a framework, this study omits this distinction and considers the OUV selection criteria as a proxy of HV during label generation. A group of ML models was trained and fine-tuned to make such predictions by Bai et al. [79], as introduced in Section 2.4.2. Besides BERT, already used to generate textual features as mentioned above, a Universal Language Model Fine-tuning (ULMFiT) [88] model has also been trained and fine-tuned, reaching similar performance in accuracy.…”
Section: Pseudo-Label Generation, 2.5.1 Heritage Values as OUV Selection… (citation type: mentioning, confidence: 99%)
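Since BERT and ULMFiT reach similar accuracy, pseudo-label generation can require the two models to agree before a label is kept. A sketch of one plausible ensemble rule, assuming both models expose per-criterion softmax probabilities; the agreement-plus-threshold rule is an illustration, not the procedure of the citing study:

    import torch

    def ensemble_pseudo_labels(p_bert, p_ulmfit, threshold=0.8):
        # p_bert, p_ulmfit: (n, n_criteria) softmax outputs of the two models.
        # Keep a pseudo-label only when both models pick the same top criterion
        # and their averaged confidence clears the threshold.
        agree = p_bert.argmax(dim=-1) == p_ulmfit.argmax(dim=-1)
        avg = (p_bert + p_ulmfit) / 2
        confidence, labels = avg.max(dim=-1)
        keep = agree & (confidence >= threshold)
        return labels[keep], keep

    # Illustrative run on random probability rows for 4 sentences
    p1 = torch.softmax(torch.randn(4, 10), dim=-1)
    p2 = torch.softmax(torch.randn(4, 10), dim=-1)
    labels, keep = ensemble_pseudo_labels(p1, p2)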