2018
DOI: 10.1162/neco_a_01119

Learning Data Manifolds with a Cutting Plane Method

Abstract: We consider the problem of classifying data manifolds in which each manifold represents invariances parameterized by continuous degrees of freedom. Conventional data augmentation methods rely on sampling large numbers of training examples from these manifolds. Instead, we propose an iterative algorithm, [Formula: see text], based on a cutting-plane approach that efficiently solves a quadratic semi-infinite programming problem to find the maximum-margin solution. We provide a proof of convergence as well …
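The cutting-plane idea sketched in the abstract can be illustrated with a small example. This is not the paper's algorithm, only a generic hard-margin cutting-plane loop on hypothetical toy manifolds (labeled line segments in the plane): solve a finite quadratic program on an active set of points, then add each manifold's worst-margin point, and repeat until no constraint is violated. The toy data, the `worst_point` helper, and the SLSQP-based QP solver are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy "manifolds": line segments x(t) = a + t*(b - a), t in [0, 1],
# each carrying a binary label (not data from the paper).
segments = [
    (np.array([1.0, 2.0]), np.array([2.0, 3.0]), +1.0),
    (np.array([-1.0, -2.0]), np.array([-2.0, -1.0]), -1.0),
]

def worst_point(w, bias, seg):
    """Segment point with the smallest signed margin. The margin y*(w.x + b)
    is linear in t, so the minimum is attained at one of the two endpoints."""
    a, b, y = seg
    return min((a, b), key=lambda x: y * (w @ x + bias))

def solve_qp(points):
    """Hard-margin SVM on the finite active set:
    minimize 0.5*||w||^2  subject to  y_i * (w . x_i + b) >= 1."""
    d = len(points[0][0])
    cons = [{'type': 'ineq',
             'fun': (lambda z, x=x, y=y: y * (z[:d] @ x + z[d]) - 1.0)}
            for x, y in points]
    res = minimize(lambda z: 0.5 * z[:d] @ z[:d], np.zeros(d + 1),
                   constraints=cons, method='SLSQP')
    return res.x[:d], res.x[d]

# Cutting-plane loop: start with one point per manifold, then repeatedly add
# each manifold's most-violating point until all margins reach ~1.
active = [(a, y) for a, b, y in segments]
for _ in range(20):
    w, bias = solve_qp(active)
    violated = False
    for seg in segments:
        x = worst_point(w, bias, seg)
        if seg[2] * (w @ x + bias) < 1.0 - 1e-4:
            active.append((x, seg[2]))
            violated = True
    if not violated:
        break
```

Because the margin is linear along each segment, the inner "find the most-violating point" step reduces to checking endpoints; for curved manifolds it would itself be an optimization over the manifold's parameters.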

Cited by 14 publications (16 citation statements). References 33 publications.
“…Our theory has shown that for manifolds occupying D ≫ 1 dimensions (as in most cases of interest), R_M and D_M determine the classification capacity; in fact, the capacity is similar to that of balls with radius and dimension equal to R_M and D_M, respectively (see Methods). Furthermore, using statistical mechanical mean-field techniques, we derive algorithms for measuring the capacity, R_M, and D_M for manifolds given by either empirical data samples or parametric generative models [36,41]. This theory assumed that the positions and orientations of different manifolds are uncorrelated.…”
Section: Results
confidence: 99%
“…The results presented so far were obtained using algorithms derived from a mean-field theory, which is exact in the limit of a large number of neurons and manifolds under additional simplifying statistical assumptions (Supplementary Note 3.1). To test the agreement between the theory and the capacity of finite-sized networks with realistic data, we have numerically computed the capacity at each layer of the network, using recently developed efficient algorithms for manifold linear classification [41] (see Methods). Comparing the numerically measured values to theory shows good agreement for both point-cloud manifolds (Fig.…”
Section: Comparison of Theory with Numerically Measured Capacity
confidence: 99%
“…We have recently developed an efficient algorithm for finding the maximum-margin solution in manifold classification and have used this method in the present work [see Ref. [22] and SM [16] (Sec. S5)].…”
Section: E. Numerical Solution of the Mean-Field Equations
confidence: 99%
“…But can the mean-field equations describe the capacity of finite-sized networks with realistic data? To address this question, we have numerically computed the capacity at each layer of the network, using a linear classifier trained to classify object manifolds with random binary labels (using recently developed efficient algorithms for manifold classification [39]; see Methods). Comparing the numerically measured values to theory shows good agreement for both point-cloud manifolds (figure 8a) and smooth manifolds (figure 8b for smooth 2-d manifolds in AlexNet and figure SI11 for smooth 2-d and 1-d manifolds in AlexNet, VGG-16, and ResNet-50).…”
Section: Comparison of Theory with Numerically Measured Capacity
confidence: 99%
“…Testing for linear separability of manifolds can be done using regular optimization procedures (i.e., quadratic optimization) or using efficient algorithms developed specifically for the task of manifold classification [39]. As n increases, the fraction of separable dichotomies goes from 0 to 1, and we numerically measure the classification capacity as α_c = P/n_c, where the fraction surpasses 50%; a binary search is used to find the exact transition value n_c.…”
Section: Inhomogeneous Ensemble of Manifolds
confidence: 99%
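The capacity-measurement recipe described in the quote above (fraction of linearly separable random dichotomies, binary search for the transition n_c, then α_c = P/n_c) can be sketched for the simplest case of point manifolds, where Cover's classical result α_c ≈ 2 provides a sanity check. The function names and the LP-feasibility separability test below are assumptions for illustration, not the algorithm of Ref. [39].

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def separable(X, y):
    """LP feasibility: does some w satisfy y_i * (w . x_i) >= 1 for all i?
    (Any strictly separating w can be rescaled to meet the margin of 1.)"""
    P, n = X.shape
    A = -(y[:, None] * X)  # y_i * w.x_i >= 1  <=>  -(y_i x_i) . w <= -1
    res = linprog(np.zeros(n), A_ub=A, b_ub=-np.ones(P),
                  bounds=[(None, None)] * n)
    return res.status == 0  # 0 = feasible optimum found, 2 = infeasible

def frac_separable(P, n, trials=20):
    """Fraction of random dichotomies of P Gaussian points in n dimensions
    that are linearly separable."""
    hits = 0
    for _ in range(trials):
        X = rng.standard_normal((P, n))
        y = rng.choice([-1.0, 1.0], size=P)
        hits += separable(X, y)
    return hits / trials

def capacity(P):
    """Binary search for the smallest n_c with >= 50% separable dichotomies;
    returns alpha_c = P / n_c."""
    lo, hi = 1, P
    while lo < hi:
        mid = (lo + hi) // 2
        if frac_separable(P, mid) >= 0.5:
            hi = mid
        else:
            lo = mid + 1
    return P / lo
```

For points in general position, Cover's counting argument puts the 50% crossing at n ≈ P/2, so `capacity(P)` should return a value near 2; for manifolds, the per-dichotomy separability test would be replaced by a manifold classifier such as the cutting-plane method of the present paper.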