1998
DOI: 10.1142/0822

Stochastic Complexity in Statistical Inquiry

Cited by 947 publications (956 citation statements)
References 0 publications

“…A second result is that the shortest code can be used for prediction, with a high probability of 'convergence' on largely correct predictions. Finally, a third powerful line of justification for simplicity as an effective method of induction is its widespread use in machine learning [8,9] and statistics [10].…”
Section: Quantifying Simplicity (mentioning)
confidence: 99%
“…The answer is that the projection is much more stable for the perpendicular ellipse; for the highly skewed ellipse the angle of orientation must be specified more precisely, costing additional code length, to obtain an equally good fit with the data [10]. Formal measures of simplicity [37,38]: many pattern-finding problems have been successfully approached by mathematicians and computer scientists using a simplicity principle. In many of these areas, the simplicity principle has also been used as a starting point for modelling cognition.…”
Section: Box 1 Empirical Data (mentioning)
confidence: 99%
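
An editorial note on the code-length argument in the excerpt above: in standard MDL accounting, encoding a parameter to precision δ over a range of width R costs roughly log₂(R/δ) bits, so a fit that only holds for a narrowly specified orientation angle pays extra bits for the same goodness of fit. Below is a minimal Python sketch of that accounting; the ranges and precisions are illustrative assumptions, not values from the cited paper.

    import math

    def param_code_length(value_range: float, precision: float) -> float:
        """Bits to encode a parameter quantized to `precision` over an
        interval of width `value_range` (uniform grid code)."""
        return math.log2(value_range / precision)

    # Illustrative only: a stable fit tolerates a coarse angle grid,
    # a highly skewed fit needs a much finer grid for the same fit quality.
    stable_bits = param_code_length(value_range=math.pi, precision=0.1)
    skewed_bits = param_code_length(value_range=math.pi, precision=0.001)
    print(f"stable ellipse angle: {stable_bits:.1f} bits")
    print(f"skewed ellipse angle: {skewed_bits:.1f} bits")
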
“…In the process of acquiring a vocabulary, infants would be well advised to avoid any temptation to consider consonants, or consonant clusters, as potential words to be added to the lexicon. Brent and Cartwright (1996) conducted a series of computational studies of vocabulary acquisition, in which they used a Minimum Representation Length (Rissanen, 1989) technique to learn a vocabulary from selections of the CHILDES database (MacWhinney), from which […] markings were deleted. Brent and Cartwright showed that vocabulary acquisition was improved by adopting a strategy which insisted that all words contained a vowel, thus effectively ruling out consonants as feasible candidate words.…”
Section: Ruling Out Impossible Segmentations (mentioning)
confidence: 99%
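
The constraint described in the excerpt above is easy to state operationally: discard any candidate word that contains no vowel. Below is a minimal sketch of such a filter, assuming an orthographic vowel set; the helper name and the toy segmentations are illustrative, not Brent and Cartwright's actual implementation.

    VOWELS = set("aeiou")  # assumed orthographic vowel set, for illustration

    def plausible_word(candidate: str) -> bool:
        """Reject candidates with no vowel, e.g. lone consonants or clusters."""
        return any(ch in VOWELS for ch in candidate.lower())

    # Candidate segmentations of the unsegmented input "thisbook".
    segmentations = [["this", "book"], ["th", "isbook"], ["t", "hisbook"]]
    feasible = [seg for seg in segmentations
                if all(plausible_word(w) for w in seg)]
    print(feasible)  # [['this', 'book']]
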
“…in a setting where only the data received are of interest. In Rissanen's principle of Minimum Description Length ([17]), the criterion is the efficiency of the representation of a given data set. This again has two components, one being the complexity of the model that represents the data, or the length of the program needed to describe the model, the other one being the length of the representation of the data by the model.…”
Section: Learning (mentioning)
confidence: 99%
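
The two-part criterion in the excerpt above is commonly written L(M) + L(D | M): bits to describe the model plus bits to describe the data using the model's probabilities. Below is a minimal sketch under the assumption that each candidate model assigns a probability to every datum; the function name and the toy numbers are illustrative, not taken from the cited source.

    import math

    def description_length(model_bits: float, data_probs: list) -> float:
        """Two-part MDL score: L(M) + L(D|M), with L(D|M) = sum of -log2 p(x)."""
        data_bits = sum(-math.log2(p) for p in data_probs)
        return model_bits + data_bits

    # A simple model is cheap to describe but fits each datum a bit worse;
    # a complex model fits better but costs more bits to describe itself.
    simple_total = description_length(model_bits=10.0, data_probs=[0.2] * 20)
    complex_total = description_length(model_bits=60.0, data_probs=[0.4] * 20)
    print(f"simple model : {simple_total:.1f} bits")
    print(f"complex model: {complex_total:.1f} bits")  # prefer the smaller total

Under these toy numbers the simpler model wins; with enough additional data the better-fitting model would eventually pay for its larger description cost, which is exactly the trade-off the excerpt describes.
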
“…In principle, this quantification is also possible through other means, for example through the length of the representation of the data in the internal code of the system. If we assume optimal coding, however, which is a consequence of the minimization of internal complexity, then the length of the representation of a datum X_i(θ) behaves like −log₂ P(X_i(θ)) (a code is good if frequent inputs are represented by short code words, see [17]). How external complexity can be increased then depends on the time scale involved.…”
Section: Introduction: Complex Adaptive Systems (mentioning)
confidence: 99%
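
The relation invoked in the excerpt above is the Shannon code-length bound: under an optimal prefix code, a datum with probability p gets a code word of roughly −log₂ p bits, so frequent inputs get short code words. Below is a minimal sketch; the toy distribution is an assumption for illustration.

    import math

    # Assumed toy input distribution: frequent inputs should get short codes.
    probabilities = {"frequent": 0.5, "common": 0.25, "rare": 0.125, "very rare": 0.125}

    for datum, p in probabilities.items():
        ideal_bits = -math.log2(p)  # optimal (Shannon) code length for this datum
        print(f"{datum:>10}: p = {p:.3f} -> about {ideal_bits:.1f} bits")
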