2014
DOI: 10.7465/jkdi.2014.25.1.227

Noninformative priors for the log-logistic distribution

Abstract: In this paper, we develop noninformative priors for the scale parameter and the shape parameter of the log-logistic distribution. We derive the first and second order probability matching priors. It turns out that the second order matching prior matches the alternative coverage probabilities and is a highest posterior density matching prior. We also show that the derived reference prior is a second order matching prior for both parameters, whereas Jeffreys' prior is not a second order matching prior. We showed t…
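For context, a common parameterization of the log-logistic density with scale α and shape β is shown below; this is a standard textbook form and may differ in detail from the parameterization used in the paper itself.

f(x \mid \alpha, \beta) = \frac{\beta}{\alpha}\left(\frac{x}{\alpha}\right)^{\beta-1}\left[1 + \left(\frac{x}{\alpha}\right)^{\beta}\right]^{-2}, \qquad x > 0,\ \alpha > 0,\ \beta > 0.

Equivalently, log X follows a logistic distribution with location log α and scale 1/β; the noninformative priors discussed in the abstract are priors on the pair (α, β) of this model.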

Cited by 7 publications (6 citation statements) | References 30 publications
“…by Kang et al. (2014a). Then from the likelihood (3.2) and the reference prior (3.3), the element m_1^b(x, y) of the FBF under H_1 is given by…”
Section: Bayesian Hypothesis Testing Procedures Based on the Fractional Bayes Factor (mentioning, confidence: 99%)
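For readers unfamiliar with the notation in the quote above: the fractional Bayes factor (FBF) of O'Hagan compares hypotheses through ratios of full and fractional marginal likelihoods. The generic form is sketched below; the element m_1^b(x, y) in the quoted paper is one ingredient of such a ratio, and its exact definition is given in that paper, not here.

B^{b}_{10} = \frac{\displaystyle \int \pi_1(\theta_1)\, f_1(x \mid \theta_1)\, d\theta_1 \Big/ \int \pi_1(\theta_1)\, f_1(x \mid \theta_1)^{b}\, d\theta_1}{\displaystyle \int \pi_0(\theta_0)\, f_0(x \mid \theta_0)\, d\theta_0 \Big/ \int \pi_0(\theta_0)\, f_0(x \mid \theta_0)^{b}\, d\theta_0},

where b ∈ (0, 1) is the fraction of the likelihood used to turn the improper noninformative priors π_0 and π_1 into proper "partial" posteriors.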
“…So under minimal training sample, we only calculate the marginal densities for the hypotheses H_1 and H_2, respectively. The marginal densities of (X_{j_1}, X_{j_2}, Y_{k_1}, Y_{k_2}) are finite for all 1 ≤ j_1 < j_2 ≤ n and 1 ≤ k_1 < k_2 ≤ m under each hypothesis (see Theorem 3.1 of Kang et al. (2014a)). Thus we conclude that any training sample of size 4 is a minimal training sample.…”
Section: Bayesian Hypothesis Testing Procedures Based on the Intrinsic Bayes Factor (mentioning, confidence: 99%)
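As a rough aid to the quote above (this follows the standard Berger-Pericchi intrinsic Bayes factor setup, not necessarily the quoted paper's exact notation): a training sample x(l) is minimal if every marginal density m_i(x(l)) = ∫ π_i^N(θ_i) f_i(x(l) | θ_i) dθ_i is finite and nonzero under each hypothesis, while no proper subsample has this property. The arithmetic intrinsic Bayes factor then averages over all minimal training samples,

B^{AI}_{10} = B^{N}_{10}(x) \cdot \frac{1}{L} \sum_{l=1}^{L} B^{N}_{01}\big(x(l)\big),

where B^{N}_{10}(x) is the Bayes factor computed with the noninformative priors on the full data and the sum runs over the L minimal training samples. In the quoted two-sample setting, two observations from each sample (four in total) suffice to make the marginals finite, which is the "size 4" conclusion.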
“…Ghosh and Mukerjee (1992), and Bernardo (1989, 1992) give a general algorithm to derive a reference prior by splitting the parameters into several groups according to their order of inferential importance. This approach is very successful in various practical problems (Kang et al., 2013, 2014). Quite often reference priors satisfy the matching criterion described earlier.…”
Section: Introduction (mentioning, confidence: 98%)
“…Ghosh and Mukerjee (1992), and Bernardo (1989, 1992) give a general algorithm to derive a reference prior by splitting the parameters into several groups according to their order of inferential importance. This approach is very successful in various practical problems (Kang, 2013; Kang et al., 2013, 2014). Quite often reference priors satisfy the matching criterion described earlier.…”
Section: Introduction (mentioning, confidence: 99%)
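The "matching criterion" referred to in the two quotes above is the usual probability matching requirement. In the standard formulation (a generic statement, not taken verbatim from the cited papers), a prior π is a first order probability matching prior for a parameter of interest θ_1 if the frequentist coverage of its one-sided posterior credible bound satisfies

P_{\theta}\!\left\{\theta_1 \le \theta_1^{1-\alpha}(\pi; X)\right\} = 1 - \alpha + o(n^{-1/2}),

where θ_1^{1-α}(π; X) is the (1-α) posterior quantile of θ_1 under π; a second order matching prior strengthens the remainder to o(n^{-1}). The abstract's claim is that the reference prior for the log-logistic scale and shape parameters attains the second order property, while Jeffreys' prior does not.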