2020
DOI: 10.3390/e22070794

Modelling and Recognition of Protein Contact Networks by Multiple Kernel Learning and Dissimilarity Representations

Abstract: Multiple kernel learning is a paradigm which employs a properly constructed chain of kernel functions able to simultaneously analyse different data or different representations of the same data. In this paper, we propose a hybrid classification system based on a linear combination of multiple kernels defined over multiple dissimilarity spaces. The core of the training procedure is the joint optimisation of the kernel weights and the selection of representatives in the dissimilarity spaces. This equips the system with a …
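To make the combination concrete, the following is a minimal sketch of a linear combination of kernels over dissimilarity spaces, assuming two hypothetical dissimilarity matrices D1 and D2 (patterns × prototypes, here random placeholders) and fixed kernel weights; the paper optimises the weights and the prototype set jointly, which this sketch does not do.

import numpy as np
from sklearn.svm import SVC

def rbf_from_vectors(X, gamma=1.0):
    # RBF kernel on dissimilarity vectors (each row: distances to prototypes)
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
D1 = rng.random((60, 10))   # dissimilarity space 1: 60 patterns x 10 prototypes
D2 = rng.random((60, 10))   # dissimilarity space 2 (another dissimilarity measure)
y = rng.integers(0, 2, 60)  # binary class labels

w = np.array([0.7, 0.3])    # kernel weights; the paper optimises these jointly
K = w[0] * rbf_from_vectors(D1) + w[1] * rbf_from_vectors(D2)

clf = SVC(kernel="precomputed").fit(K, y)   # a convex sum of PD kernels is PD

Because each kernel is positive definite and the weights are non-negative, the combined matrix K remains a valid kernel and can be passed directly to any kernel machine that accepts precomputed Gram matrices.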

Cited by 7 publications (9 citation statements)
References 112 publications (236 reference statements)
“…For this reason, graphs are commonly used to model a plethora of real-world, possibly complex, systems [1]. Notable examples include biological systems and chemistry [2,3,4,5,6,7,8,9,10,11,12,13], social and collaboration networks [14], computer vision and image processing [15,16,17,18], natural language processing [19,20,21,22], and energy distribution networks [23].…”
Section: Introduction
confidence: 99%
“…Building an embedding space consists in moving the pattern recognition problem from the graph domain towards a possibly metric (Euclidean) space: instead of using the former, one can use the latter, which can be equipped with algebraic structures. To this family belong dissimilarity spaces [32], where each pattern is described by its pairwise dissimilarities with respect to all training data (or a properly-chosen subset [3,33,34]), and embedding via information granulation, where recurrent and meaningful entities called information granules (e.g., network motifs, graphlets, and the like) are extracted from the training data and each pattern is cast into an integer-valued vector (symbolic histogram) which counts the number of occurrences of each granule within the pattern itself [35,36,37,38,39,40].…”
Section: Introduction
confidence: 99%
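The symbolic histogram described in this statement reduces to a counting operation over extracted substructures. A minimal sketch follows, assuming a hypothetical granule alphabet of named substructures; in the cited works the granules are mined automatically from the training graphs.

from collections import Counter

def symbolic_histogram(pattern_granules, alphabet):
    # Integer-valued vector counting how often each information granule
    # (e.g., a motif or graphlet identifier) occurs within the pattern
    counts = Counter(pattern_granules)
    return [counts.get(g, 0) for g in alphabet]

alphabet = ["triangle", "star3", "path4"]     # hypothetical granule alphabet
pattern = ["triangle", "path4", "triangle"]   # substructures found in one graph
print(symbolic_histogram(pattern, alphabet))  # -> [2, 0, 1]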
See 1 more Smart Citation
“…• ad-hoc dissimilarities in the input space, where custom dissimilarity measures (e.g., edit distances [19,20,21,22]) are designed in order to directly process patterns in the input space (without moving towards Euclidean spaces);
• dissimilarity representations [18,23], where each pattern is described by the pairwise distances with other patterns or with respect to a properly chosen subset of pivotal training patterns [23,24,25,26];
• kernel methods, where the mapping between the original input space and the Euclidean space exploits positive-definite kernel functions [27,28,29,30,31,32];
• embedding via information granulation.…”
Section: Introduction
confidence: 99%
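As an illustration of the first family in the list above, a string edit (Levenshtein) distance is a minimal analogue of the graph edit distances cited there: it compares patterns directly in the input space, with no Euclidean embedding. A sketch:

def edit_distance(a, b):
    # Levenshtein distance via dynamic programming: an ad-hoc dissimilarity
    # that operates directly on sequences in the input space
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # -> 3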
“…In the paper “Modelling and Recognition of Protein Contact Networks by Multiple Kernel Learning and Dissimilarity Representations” by Martino et al. [6], the authors focused on predicting the proteins’ functional role, proposing a hybrid classification system based on a linear combination of multiple kernels defined over multiple dissimilarity spaces. Here, the training procedure jointly optimized the kernel weights and the representatives’ selection in the dissimilarity spaces.…”
confidence: 99%
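The joint optimisation this statement refers to can be sketched as a search over both the prototype subset and the kernel weights, scored by cross-validation. The random search and the synthetic dissimilarity data below are stand-ins for illustration only; the paper uses its own optimiser and real protein contact network dissimilarities.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical data: symmetric pairwise dissimilarities among 80 patterns
D = rng.random((80, 80)); D = (D + D.T) / 2; np.fill_diagonal(D, 0.0)
y = rng.integers(0, 2, 80)

def combined_kernel(D, prototypes, weights, gammas=(0.5, 2.0)):
    # Embed patterns by their dissimilarities to the selected prototypes,
    # then linearly combine RBF kernels built on that representation
    X = D[:, prototypes]                           # dissimilarity representation
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return sum(w * np.exp(-g * sq) for w, g in zip(weights, gammas))

best = (-np.inf, None)
for _ in range(30):                                # random search stands in for
    protos = rng.choice(80, size=10, replace=False)  # the paper's joint optimiser
    w = rng.dirichlet(np.ones(2))                  # convex kernel weights
    K = combined_kernel(D, protos, w)
    score = cross_val_score(SVC(kernel="precomputed"), K, y, cv=3).mean()
    if score > best[0]:
        best = (score, (protos, w))

Scoring each candidate (prototype subset, weight vector) pair with the same kernel machine couples the two choices, which is the essential point of optimising them jointly rather than one after the other.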