2013
DOI: 10.1007/978-3-319-03680-9_37

Propositionalisation of Multi-instance Data Using Random Forests

Abstract: Multi-instance learning is a generalisation of attribute-value learning in which each example is a labelled bag (i.e. a multiset) of instances. This setting is more computationally challenging than attribute-value learning and is a natural fit for important application areas of machine learning such as classification of molecules and image classification. One approach to solving multi-instance learning problems is propositionalisation, where bags of data are converted into vec…
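The abstract describes propositionalising multi-instance bags with random forests. A minimal sketch of one such scheme, assuming a leaf-membership encoding (train an instance-level forest, then summarise each bag by how its instances distribute over tree leaves); the toy data and the `propositionalise` helper are illustrative, not the paper's exact method:

```python
# Illustrative sketch: propositionalise multi-instance bags via
# random-forest leaf membership (assumed encoding, not the paper's method).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy multi-instance data: each bag is an (n_instances, n_features) array
# carrying a single bag-level label.
bags = [rng.normal(size=(rng.integers(2, 6), 4)) for _ in range(10)]
labels = rng.integers(0, 2, size=10)

# Step 1: train an instance-level forest, giving every instance its bag's label.
X = np.vstack(bags)
y = np.concatenate([[lab] * len(b) for b, lab in zip(bags, labels)])
forest = RandomForestClassifier(n_estimators=5, random_state=0).fit(X, y)

# Step 2: forest.apply maps each instance to one leaf per tree; counting
# leaf memberships per bag yields a fixed-length propositional vector.
def propositionalise(bag):
    leaves = forest.apply(bag)            # shape: (n_instances, n_trees)
    feats = []
    for t, est in enumerate(forest.estimators_):
        counts = np.bincount(leaves[:, t], minlength=est.tree_.node_count)
        feats.append(counts / len(bag))   # normalised membership per leaf
    return np.concatenate(feats)

prop = np.array([propositionalise(b) for b in bags])
print(prop.shape)  # one fixed-length row per bag
```

Any standard propositional learner can then be trained on `prop` against the bag labels.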


Cited by 15 publications (8 citation statements) · References 17 publications
“…The selected features were filtered by propositionalization and partition using the Partition-Membership filter (Partition Membership Filter with option Random Committee in Weka) on Lung 2 train and test sets. It can apply any partition generator to a given feature vector to get these filtered vectors for all instances, and the filtered instances are composed of these values plus class attribute and make as sparse instances (29).…”
Section: Feature Selection (mentioning)
confidence: 99%
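The citing papers quoted here use Weka's partition-membership filtering: each feature vector is replaced by a sparse indicator of which partitions (e.g. tree leaves) of a partition generator it falls into, plus the class attribute. A rough scikit-learn analogue of that idea, assuming a random forest as the partition generator (this is a hedged sketch, not the Weka `PartitionMembership` filter itself):

```python
# Rough scikit-learn analogue of partition-membership filtering
# (assumption: tree leaves serve as the partitions).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=100, n_features=6, random_state=0)

# Partition generator: a forest whose leaves partition the feature space.
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
leaves = forest.apply(X)                  # (n_samples, n_trees) leaf ids

# One-hot encode the leaf ids: each row becomes a sparse vector of
# partition-membership indicators, one nonzero per tree.
enc = OneHotEncoder()                     # sparse output by default
membership = enc.fit_transform(leaves)
print(membership.shape)
```

The resulting sparse matrix, with the class column appended, corresponds to the "sparse instances" the quoted passages describe.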
“…PMF was used for transforming features and CFS is good at picking the most representative minimum feature subset. It has been proved that PMF can not only solve the problem of binary classification but also improve the accuracy of classification(29,30). Meanwhile, in order to avoid over-fitting as much as possible, the train and the test sets were divided with stratified random sampling to keep them balanced.…”
(mentioning)
confidence: 99%
“…The WEKA 3.8.1 has the partition‐membership filter that can apply any partition generator to a given feature vector to get these filtered vectors for all instances . The partition‐membership filter uses a partition generator to generate partition membership values; filtered instances are composed of these values plus class attribute and make as sparse instances …”
Section: Methods (mentioning)
confidence: 99%
“…Then the partition-Membership filter (PMF, PartitionMembershipFilter with option Random Committee in Weka) used to transform the normalized 2 PETR and 2 CTR features into sparse instances to improve the model performance (34,35).…”
Section: Model Building and Performance Evaluation (mentioning)
confidence: 99%