2010
DOI: 10.1016/j.artint.2010.03.004

Partial observability and learnability

Cited by 24 publications (27 citation statements)
References 22 publications
“…Although the ultimate objective of such work would be to strengthen these results to the distribution-free PAC setting, any work that handled a class of distributions that exhibited such biases would also be of interest. A similar direction would be to obtain results for a more general class of masking processes [34]; although it seems that our results generalize to masking distributions that simultaneously reveal any width-w set of literals with non-negligible probability (for w = Ω(log n)), such as w-wise independent distributions (Wigderson and Yehudayoff [41] make a similar …). [Footnote 8:] The intuition here is that our masking process is "leaking" some of the secrets of the two parties, so we use a leakage-resilient encoding of these values. Since masking completely at random is such a weak form of leakage, the parity encoding is secure.…”
Section: Directions For Future Research (mentioning)
confidence: 94%
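To make the quoted terminology concrete, the sketch below (Python, with all names hypothetical) shows a masking process that hides each attribute completely at random, together with the kind of parity encoding the excerpt calls leakage-resilient: any proper subset of parity shares is uniformly random, so the secret leaks only in the rare event that the masking reveals every share simultaneously.

```python
import random

def parity_shares(secret_bit: int, n: int) -> list[int]:
    """Encode a secret bit as n shares whose XOR equals the secret.
    Any proper subset of the shares is uniformly distributed, so the
    secret is recoverable only if every share is revealed at once."""
    shares = [random.randint(0, 1) for _ in range(n - 1)]
    last = secret_bit
    for s in shares:
        last ^= s
    return shares + [last]

def mask_completely_at_random(example: list[int], p: float) -> list:
    """Hide each attribute independently with probability p; None
    plays the role of a masked (unobserved) value."""
    return [None if random.random() < p else x for x in example]

# With n shares each hidden independently with probability p, all
# shares are visible only with probability (1 - p)^n, which shrinks
# exponentially in n: masking completely at random is weak leakage.
random.seed(0)
enc = parity_shares(secret_bit=1, n=8)
view = mask_completely_at_random(enc, p=0.3)
print(view, "secret recoverable:", all(v is not None for v in view))
```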
“…There are several related articles considering learning under various restrictions in Gold's model (Goldman et al. 2003), Valiant's model (Ben-David and Dichterman 1998; Decatur and Gennaro 1995), and other learning contexts (Khardon and Roth 1999). Moreover, learning from partial examples, or examples with missing information, has recently attracted much attention in Valiant's learning model (Michael 2010, 2011). In this paper we also consider learning from examples with missing information, which are truncated finite sequences.…”
Section: Related Work (mentioning)
confidence: 99%
“…Another approach to MI has also been studied in the noise-free, deterministic rule-learning setting of the Probably Approximately Correct (PAC) learning framework. It has been shown [21] that the PAC learning semantics can be extended to deal with arbitrarily missing information, and that certain PAC learning algorithms can easily be modified to cope with such experiences [9]. Other studies present the benefits of simultaneously learning multiple predictors (rules) from a common dataset [23] and the benefits of rule chaining [22].…”
Section: Imputing Values Through Learning (mentioning)
confidence: 99%
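As a rough illustration of imputing values through learning, the sketch below fills a missing attribute using a predictor trained on the rows where that attribute is observed. The 1-nearest-neighbour rule is a stand-in chosen for brevity, not the actual method of [21], [22], or [23].

```python
def impute_by_learning(rows, target):
    """Fill missing values (None) in column `target` with a simple
    learned predictor: a 1-nearest-neighbour rule over attributes
    observed in both rows. Illustrative only; the cited works learn
    per-attribute predictors (rules) from a common dataset."""
    train = [r for r in rows if r[target] is not None]
    if not train:
        return rows  # nothing to learn from

    def agreement(a, b):
        # count non-target attributes observed in both rows and equal
        return sum(1 for i, (x, y) in enumerate(zip(a, b))
                   if i != target and None not in (x, y) and x == y)

    for r in rows:
        if r[target] is None:
            r[target] = max(train, key=lambda t: agreement(r, t))[target]
    return rows

# The third row's missing attribute is imputed from the most
# similar row in which it is observed.
data = [[1, 0, 1], [0, 1, 0], [1, 0, None]]
print(impute_by_learning(data, target=2))  # [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
```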
“…It has been shown that the PAC learning semantics can be extended to deal with arbitrarily missing information [21], and that PAC learning algorithms can easily be modified to cope with such experiences [9]. In our experiments we tested two algorithms modified to cope with incompleteness: Winnow2 [8] and a back-propagation neural network algorithm [18].…”
Section: Base Learning Algorithms (mentioning)
confidence: 99%
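Since the excerpt names Winnow2 as one of the algorithms modified to cope with incompleteness, here is a minimal sketch of Winnow2 with one natural such modification, assumed for illustration rather than taken from [8] or [9]: a masked attribute (None) contributes nothing to the prediction sum and receives no weight update.

```python
class Winnow2:
    """Winnow2 (multiplicative-update) learner over n binary
    attributes, tolerating masked inputs: a hidden attribute (None)
    neither contributes to the prediction nor gets updated. This is
    one plausible reading of the modification the excerpt mentions,
    not the exact algorithm of reference [9]."""

    def __init__(self, n: int, alpha: float = 2.0):
        self.w = [1.0] * n   # weights start at 1
        self.alpha = alpha   # promotion/demotion factor
        self.theta = n       # standard Winnow2 threshold

    def predict(self, x) -> int:
        # only attributes observed to be 1 contribute to the sum
        total = sum(w for w, xi in zip(self.w, x) if xi == 1)
        return 1 if total >= self.theta else 0

    def update(self, x, label: int) -> None:
        if self.predict(x) == label:
            return  # mistake-driven: update only on errors
        for i, xi in enumerate(x):
            if xi != 1:            # masked (None) or 0: leave alone
                continue
            if label == 1:         # false negative: promote
                self.w[i] *= self.alpha
            else:                  # false positive: demote
                self.w[i] /= self.alpha

# Usage on a short stream of partially masked examples:
learner = Winnow2(n=4)
for x, y in [([1, 0, None, 0], 1), ([0, 0, 0, 1], 0),
             ([None, 1, 1, 0], 1), ([0, 1, None, 0], 0)]:
    learner.update(x, y)
print(learner.w)
```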