DOI: 10.11606/t.45.2019.tde-02042019-231050

Model selection for learning boolean hypothesis

Abstract: Keywords: machine learning, Boolean hypotheses, domain partitions, learnability, VC dimension.

Cited by 3 publications (16 citation statements) | References 31 publications
“…If H is given by the set of Boolean functions h : {0, 1}^d → {0, 1} we have the special case of a Boolean Partition Lattice, which is studied in [6], where simulation studies involving the U-curve property may be found. Observe that the Feature Selection Learning Space when X ⊂ R^d, |X| < ∞ and Y = {0, 1} is a sub-lattice of the Partition Lattice Learning Space.…”

Section: Examples of Learning Spaces
confidence: 99%
“…Nonetheless, even when no U-curve property is satisfied, one may still apply a U-curve algorithm and obtain a suitable suboptimal solution. This has been done successfully in feature selection (see [3,6,19,20,21,22] for more details). As it is outside the scope of this paper, which is to define a general framework for model selection via Learning Spaces, an interesting topic for future research would be to find conditions on L(H) under which the U-curve properties hold, especially in cases of interest such as linear classifiers, feature selection and neural networks.…”

Section: Definition 5: A Learning Space L(H) Under Loss Function Satis...
confidence: 99%
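The U-curve property referenced above says that, along a chain of nested models in the lattice, the estimated error first decreases and then increases, so a search along a chain may stop as soon as the error rises. The sketch below illustrates that stopping rule; the `chain` and `error` names are hypothetical stand-ins for illustration, not the thesis's actual algorithm or API.

```python
def u_curve_chain_search(chain, error):
    """Return the node of `chain` with minimal estimated error,
    stopping early once the error increases (valid when the
    U-curve property holds along the chain)."""
    best_node, best_err = chain[0], error(chain[0])
    for node in chain[1:]:
        err = error(node)
        if err > best_err:
            break  # under the U-curve property, error will not decrease again
        best_node, best_err = node, err
    return best_node

# Toy example: a chain of nested feature subsets with a U-shaped error profile.
chain = [frozenset(), frozenset({0}), frozenset({0, 1}), frozenset({0, 1, 2})]
errors = {frozenset(): 0.40, frozenset({0}): 0.25,
          frozenset({0, 1}): 0.30, frozenset({0, 1, 2}): 0.35}
print(u_curve_chain_search(chain, errors.__getitem__))  # frozenset({0})
```

When no U-curve property is guaranteed, the same rule still yields a suboptimal but often useful solution, which is the point made in the citation above.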