Statistical Methodologies 2020
DOI: 10.5772/intechopen.85627
A Comparative Study of Maximum Likelihood Estimation and Bayesian Estimation for Erlang Distribution and Its Applications

Abstract: In this chapter, the Erlang distribution is considered. For parameter estimation, the maximum likelihood method, the method of moments, and the Bayesian method of estimation are applied. In the Bayesian methodology, different prior distributions are employed under various loss functions to estimate the rate parameter of the Erlang distribution. Finally, a simulation study is conducted in R to compare these methods by mean squared error at varying sample sizes. Also, real-life applications are exami…
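The comparison the abstract describes can be illustrated with a small Monte Carlo sketch. Assuming a known shape k, the MLE of the Erlang rate is k divided by the sample mean, and under a conjugate Gamma(a, b) prior the posterior mean gives the Bayes estimator under squared error loss. All function names and the prior hyperparameters below are illustrative, not the chapter's own code:

```python
import random
import statistics

def erlang_sample(k, lam, n, rng):
    # Erlang(k, lam) is a Gamma distribution with shape k and scale 1/lam.
    return [rng.gammavariate(k, 1.0 / lam) for _ in range(n)]

def mle_rate(xs, k):
    # With known shape k, the MLE of the rate is k / sample mean.
    return k / statistics.mean(xs)

def bayes_rate(xs, k, a=1.0, b=1.0):
    # A Gamma(a, b) prior on the rate is conjugate: the posterior is
    # Gamma(a + n*k, b + sum(xs)), and its mean is the estimator under
    # squared error loss (illustrative hyperparameters a = b = 1).
    return (a + len(xs) * k) / (b + sum(xs))

def mse(estimates, true_value):
    # Empirical mean squared error over repeated simulations.
    return statistics.mean((e - true_value) ** 2 for e in estimates)

def simulate(k=2, lam=1.5, n=30, reps=2000, seed=1):
    rng = random.Random(seed)
    mles, bayes = [], []
    for _ in range(reps):
        xs = erlang_sample(k, lam, n, rng)
        mles.append(mle_rate(xs, k))
        bayes.append(bayes_rate(xs, k))
    return mse(mles, lam), mse(bayes, lam)

if __name__ == "__main__":
    mse_mle, mse_bayes = simulate()
    print(f"MSE(MLE)   = {mse_mle:.5f}")
    print(f"MSE(Bayes) = {mse_bayes:.5f}")
```

Repeating this at several sample sizes n reproduces the kind of MSE comparison table the chapter reports; the chapter itself considers additional priors and loss functions beyond squared error.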

Cited by 4 publications (2 citation statements)
References 28 publications
“…They concluded that SELF yields smaller estimates than PLF, and that the squared error loss function with the inverse exponential prior was the best among the Bayesian estimators. [9] used different estimation methods for the rate parameter, including the maximum likelihood method, the method of moments, and the Bayesian method of estimation. They derived Bayesian estimators under two different priors, Jeffreys' prior and a quasi prior, based on three different loss functions: the precautionary loss function (PLF), Al-Bayyati's loss function (ALF), and the LINEX loss function (LLF).…”
Section: Introduction (mentioning)
confidence: 99%
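For reference, the LINEX loss function named in the statement above has the standard asymmetric form (with shape constant $c \neq 0$), and its Bayes estimator is the well-known log-transformed posterior expectation:

```latex
L(\Delta) = e^{c\Delta} - c\Delta - 1, \qquad \Delta = \hat{\theta} - \theta,
\qquad
\hat{\theta}_{\mathrm{LINEX}} = -\frac{1}{c}\,\ln \mathbb{E}\!\left[e^{-c\theta} \mid \mathbf{x}\right].
```

For $c > 0$ the loss penalizes overestimation more heavily than underestimation, which is why LINEX is often compared against symmetric losses such as SELF in studies like the one cited.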
“…Currently, the dominant approaches are autoregressive models, such as the Recurrent Neural Network (Mikolov et al., 2011), the Transformer (Vaswani et al., 2017), and Convolutional Seq2Seq (Gehring et al., 2017), which have achieved impressive performance on the task of language generation using the Maximum Likelihood Estimation (MLE) method. Nevertheless, some studies reveal that such settings may have three main drawbacks. First, the MLE method makes the generative model extremely sensitive to rare samples, which results in the learned distribution being too conservative (Feng and McCulloch, 1992; Ahmad and Ahmad, 2019). Second, autoregressive generation models suffer from exposure bias due to their dependence on previously sampled outputs during the inference phase.…”
Section: Introduction (mentioning)
confidence: 99%