In this study, we consider bias reduction of the conditional maximum likelihood estimators for the unknown parameters of a Gaussian second-order moving average (MA(2)) model. In many cases, the maximum likelihood estimator is used because it is consistent. However, when the sample size is small, the estimator can be noticeably biased.

Estimators of unknown parameters should be consistent, and consistency is ensured when the sample is large; when the sample size is small, however, the estimators may be biased. In recent years, several computational methods have been developed to compute estimates of unknown parameters. An analytical expression for the bias, by contrast, lets us see the relationship between the unknown parameters and the bias; that is, it shows how the bias changes depending on the unknown parameters. Analytical evaluations of the bias for classes of nonlinear estimators in models with i.i.d. samples have been conducted for many years. Tanaka (1983) [

Practically, we often rely on the

In this study, we derive the bias of the conditional maximum likelihood estimators of the unknown parameters of a Gaussian second-order moving average (MA(2)) model, following the method in [

Let

In this study, we consider a Gaussian MA(2) model defined by
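As a concrete illustration, a Gaussian MA(2) series in the standard parametrization, X_t = ε_t + θ₁ε_{t−1} + θ₂ε_{t−2} with ε_t i.i.d. N(0, σ²), can be simulated as follows (a minimal sketch; the parameter values are illustrative and the notation is assumed to match the paper's):

```python
import random

def simulate_ma2(n, th1, th2, sigma=1.0, seed=0):
    """Draw a Gaussian MA(2) sample: X_t = eps_t + th1*eps_{t-1} + th2*eps_{t-2}."""
    rng = random.Random(seed)
    # n + 2 innovations: two extra presample values to start the recursion
    eps = [rng.gauss(0.0, sigma) for _ in range(n + 2)]
    return [eps[t] + th1 * eps[t - 1] + th2 * eps[t - 2] for t in range(2, n + 2)]

x = simulate_ma2(200, 0.5, 0.3)
```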

If we consider the maximum likelihood estimation problem with

The proof is given in the Appendix. We know that

Since the conditional likelihood function is expressed as a function of independent samples
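The conditional likelihood is built from innovations recovered recursively from the data, which makes it straightforward to evaluate. A minimal sketch, assuming (as is standard for the conditional approach) that the presample innovations are set to zero:

```python
import math

def cond_loglik(x, th1, th2, s2):
    """Conditional log-likelihood of a Gaussian MA(2): residuals are
    recovered recursively with presample innovations set to zero
    (the conditioning assumption)."""
    e1 = e2 = 0.0  # eps_{t-1}, eps_{t-2}, conditioned to start at zero
    ll = 0.0
    for xt in x:
        e = xt - th1 * e1 - th2 * e2  # eps_t = x_t - th1*eps_{t-1} - th2*eps_{t-2}
        ll += -0.5 * math.log(2 * math.pi * s2) - e * e / (2 * s2)
        e2, e1 = e1, e
    return ll
```

For an all-zero series the residuals are all zero, so the value reduces to the Gaussian normalizing constants.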

In this section, we compute the biases of the conditional maximum likelihood estimators of the unknown parameters of the Gaussian MA(2) model, and also propose new estimators for these parameters. Before obtaining the results on the bias, we examine the MSEs. The MSEs appear as the diagonal elements of the covariance matrix given by

The proof is given in Section

The proof is given in Section

We note that MA

We recently found notable results by Y. Bao (2016) [

Using (

In this section, we conduct a simulation study in order to verify Theorems

We evaluate how much the MSEs of the conditional maximum likelihood estimators change depending on the true values of the unknown parameters. Table

Comparisons of the estimated MSEs and the asymptotic MSEs. For each sample size n, the upper row gives the estimated MSEs and the lower row the corresponding asymptotic values; the left and right halves of each row are two parameter settings.

n = 50 (estimated) | 0.02046 | 0.01133 | 0.03102 | 0.03684 | 0.01412 | 0.01146 | 0.04705 | 0.02783
(asymptotic) | 0.02000 | 0.01000 | 0.01875 | 0.01875 | 0.01312 | 0.01000 | 0.01304 | 0.01304
n = 100 (estimated) | 0.01020 | 0.00526 | 0.01136 | 0.01233 | 0.00691 | 0.00578 | 0.01566 | 0.00944
(asymptotic) | 0.01000 | 0.00500 | 0.00938 | 0.00938 | 0.00656 | 0.00500 | 0.00652 | 0.00652
n = 150 (estimated) | 0.00675 | 0.00346 | 0.00706 | 0.00739 | 0.00453 | 0.00386 | 0.00921 | 0.00564
(asymptotic) | 0.00667 | 0.00333 | 0.00625 | 0.00625 | 0.00437 | 0.00333 | 0.00435 | 0.00435

n = 50 (estimated) | 0.00786 | 0.01148 | 0.03108 | 0.03334 | 0.05849 | 0.01099 | 0.01943 | 0.02612
(asymptotic) | 0.00720 | 0.01000 | 0.01395 | 0.01395 | 0.05780 | 0.01000 | 0.01395 | 0.01395
n = 100 (estimated) | 0.00379 | 0.00526 | 0.01051 | 0.01123 | 0.02907 | 0.00520 | 0.00814 | 0.00937
(asymptotic) | 0.00360 | 0.00500 | 0.00698 | 0.00698 | 0.02890 | 0.00500 | 0.00698 | 0.00698
n = 150 (estimated) | 0.00248 | 0.00346 | 0.00597 | 0.00620 | 0.01932 | 0.00343 | 0.00511 | 0.00559
(asymptotic) | 0.00240 | 0.00333 | 0.00465 | 0.00465 | 0.01927 | 0.00333 | 0.00465 | 0.00465

We conducted simulations under four settings. The top-left table checks performance when the true parameters lie well inside the invertibility region, whereas in the top-right table the true parameters are close to the boundary of the invertibility condition. The two bottom tables are designed to check the symmetry of
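A simulation of this kind can be sketched as follows; every numerical choice here (parameter values, grid resolution, replication count) is illustrative, not the paper's setting, and the grid-search conditional MLE is a deliberately crude stand-in for a proper optimizer:

```python
import random

def cond_mle_grid(x):
    """Crude conditional MLE for a Gaussian MA(2): sigma^2 is profiled out,
    so maximizing the conditional likelihood reduces to minimizing the
    residual sum of squares; (th1, th2) is searched on a coarse grid."""
    grid = [k / 10 - 0.9 for k in range(19)]  # -0.9, -0.8, ..., 0.9
    best = None
    for th1 in grid:
        for th2 in grid:
            e1 = e2 = 0.0
            ss = 0.0
            for xt in x:
                e = xt - th1 * e1 - th2 * e2  # recover innovations recursively
                ss += e * e
                e2, e1 = e1, e
            if best is None or ss < best[0]:
                best = (ss, th1, th2)
    ss, th1, th2 = best
    return th1, th2, ss / len(x)  # sigma^2 hat = mean squared residual

# Monte Carlo estimate of the MSE of th1-hat
rng = random.Random(1)
true_th1, true_th2 = 0.5, 0.3
reps, n = 50, 100
errs = []
for _ in range(reps):
    eps = [rng.gauss(0.0, 1.0) for _ in range(n + 2)]
    x = [eps[t] + true_th1 * eps[t - 1] + true_th2 * eps[t - 2]
         for t in range(2, n + 2)]
    th1_hat, th2_hat, s2_hat = cond_mle_grid(x)
    errs.append((th1_hat - true_th1) ** 2)
mse_th1 = sum(errs) / reps
```

Increasing n shrinks `mse_th1` roughly like 1/n, which is the convergence the tables above illustrate.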

Comparisons of

0.39307 | 0.57401 | 1.55412 | 1.66715
0.36061 | 0.51210 | 0.72833 | 0.71769
0.36000 | 0.50000 | 0.69750 | 0.69750

2.92462 | 0.54941 | 0.97164 | 1.30611
2.88205 | 0.51144 | 0.69979 | 0.71517
2.89000 | 0.50000 | 0.69750 | 0.69750

Generally,

Behavior of the estimated MSEs when the sample size is small

10 | 0.14485 | 0.09400 | 0.41133 | 0.36819
11 | 0.12914 | 0.08262 | 0.34886 | 0.33276
12 | 0.11453 | 0.07419 | 0.29700 | 0.29798
13 | 0.10000 | 0.06659 | 0.26646 | 0.27767
14 | 0.08955 | 0.06034 | 0.23022 | 0.25188
15 | 0.08262 | 0.05502 | 0.21125 | 0.23577
16 | 0.07426 | 0.05051 | 0.18974 | 0.21314
17 | 0.06889 | 0.04616 | 0.17351 | 0.19723
18 | 0.06413 | 0.04295 | 0.15869 | 0.18569
19 | 0.05969 | 0.04028 | 0.14624 | 0.17146
20 | 0.05529 | 0.03762 | 0.13552 | 0.16067

The estimated MSE becomes smaller as the sample size becomes larger.

We express the bias of

is the bias of the conditional maximum likelihood estimator,

is the bias of the conditional maximum likelihood estimator without the term
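Once an analytic O(1/n) bias coefficient is available, forming the corrected estimator is mechanical. A generic sketch (the coefficient function passed in below is a hypothetical placeholder, not the paper's expression):

```python
def bias_corrected(theta_hat, n, bias_coef):
    """First-order bias correction: subtract the estimated bias b(theta_hat)/n.

    bias_coef is a user-supplied function returning the O(1/n) bias
    coefficient b(theta); here it is a placeholder, not the paper's formula.
    """
    return theta_hat - bias_coef(theta_hat) / n

# illustrative placeholder coefficient b(theta) = -2*theta
corrected = bias_corrected(0.5, 100, lambda th: -2.0 * th)
```

Plugging the estimate itself into the bias coefficient changes the result only at order 1/n², so the correction removes the leading bias term without affecting the asymptotic variance.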

Evaluation of the estimated bias of

50 | | 50 | 0.00686
100 | 0.00115 | 100 | 0.00428
150 | 0.00132 | 150 | 0.00342

The bias of

Comparison of the bias and MSEs of

Bias | | MSE | | |
50 | | 50 | 0.01133 | 0.01195 | 0.01077
100 | 0.00087 | 100 | 0.00526 | 0.00531 | 0.00517
150 | 0.00120 | 150 | 0.00346 | 0.00347 | 0.00343

The estimated bias of

We express the bias of

Evaluation of the estimated bias of

50 | 50 | 0.04564
100 | 100 | 0.01046 | 0.03816
150 | 150 | 0.01191 | 0.03037

50 | 50 | 0.00769
100 | 100 | 0.00247
150 | 0.00104 | 150 | 0.00095

Except for the case where

Comparisons of estimated bias for the estimator of

50 | −0.03591 | −0.03166 | −0.01421 | −0.04913 | −0.05512 | −0.01118
100 | −0.01319 | −0.01087 | −0.00286 | −0.01969 | −0.02115 | −0.00160
150 | −0.00847 | −0.00716 | −0.00166 | −0.01271 | −0.01338 | −0.00079

50 | −0.13333 | −0.08952 | −0.10667 | −0.00976 | −0.05341 | 0.04623
100 | −0.07309 | −0.03134 | −0.06057 | 0.01046 | −0.01752 | 0.03784
150 | −0.05414 | −0.01796 | −0.04592 | 0.01191 | −0.01095 | 0.03014

50 | −0.06690 | −0.06346 | −0.03622 | −0.06690 | −0.09484 | −0.00988
100 | −0.02493 | −0.01971 | −0.01041 | −0.02765 | −0.03755 | −0.00032
150 | −0.01475 | −0.01109 | −0.00521 | −0.01663 | −0.02199 | 0.00137

50 | −0.00862 | −0.00733 | −0.00260 | 0.00769 | 0.02588 | −0.00577
100 | −0.00305 | −0.00265 | −0.00004 | 0.00247 | 0.00854 | −0.00410
150 | −0.00250 | −0.00223 | −0.00049 | 0.00095 | 0.00473 | −0.00340

The biases of the proposed estimators

Comparisons of the estimated MSEs for the estimators of

50 | 0.03102 | 0.03332 | 0.02822 | 0.03684 | 0.04005 | 0.03055
100 | 0.01136 | 0.01157 | 0.01090 | 0.01233 | 0.01264 | 0.01124
150 | 0.00706 | 0.00711 | 0.00686 | 0.00739 | 0.00748 | 0.00695

50 | 0.04705 | 0.03905 | 0.03866 | 0.02783 | 0.03581 | 0.02665
100 | 0.01566 | 0.01050 | 0.01364 | 0.00944 | 0.01016 | 0.01021
150 | 0.00921 | 0.00579 | 0.00826 | 0.00564 | 0.00569 | 0.00619

50 | 0.03108 | 0.03492 | 0.02635 | 0.03334 | 0.04263 | 0.02561
100 | 0.01051 | 0.01124 | 0.00972 | 0.01123 | 0.01288 | 0.00985
150 | 0.00597 | 0.00613 | 0.00568 | 0.00620 | 0.00668 | 0.00569

50 | 0.01943 | 0.01993 | 0.01848 | 0.02612 | 0.03126 | 0.02306
100 | 0.00814 | 0.00818 | 0.00795 | 0.00937 | 0.00993 | 0.00883
150 | 0.00511 | 0.00513 | 0.00503 | 0.00559 | 0.00577 | 0.00538

There is no difference between the estimated MSEs of

We show our main Theorems

The asymptotic covariance matrix is given by the inverse of the Fisher information matrix, and the components of the information matrix are obtained as the expectations of the second derivatives of the log-likelihood function. The expectations of the components are given in Proposition
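As a numerical check of this construction, the information matrix in (θ₁, θ₂) can be approximated by finite-difference second derivatives of the conditional log-likelihood and then inverted. A self-contained sketch on simulated data (parameter values and step size are illustrative assumptions):

```python
import math
import random

def cond_loglik(x, th1, th2, s2):
    """Conditional Gaussian MA(2) log-likelihood, zero presample innovations."""
    e1 = e2 = 0.0
    ll = 0.0
    for xt in x:
        e = xt - th1 * e1 - th2 * e2
        ll += -0.5 * math.log(2 * math.pi * s2) - e * e / (2 * s2)
        e2, e1 = e1, e
    return ll

def observed_info(x, th1, th2, s2, h=1e-3):
    """Negative Hessian in (th1, th2) via central finite differences."""
    f = lambda a, b: cond_loglik(x, a, b, s2)
    f0 = f(th1, th2)
    d11 = (f(th1 + h, th2) - 2 * f0 + f(th1 - h, th2)) / h**2
    d22 = (f(th1, th2 + h) - 2 * f0 + f(th1, th2 - h)) / h**2
    d12 = (f(th1 + h, th2 + h) - f(th1 + h, th2 - h)
           - f(th1 - h, th2 + h) + f(th1 - h, th2 - h)) / (4 * h**2)
    return [[-d11, -d12], [-d12, -d22]]

def inv2x2(m):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

# simulate a series at illustrative parameter values, invert the information
rng = random.Random(0)
th1, th2, sigma = 0.5, 0.3, 1.0
eps = [rng.gauss(0.0, sigma) for _ in range(202)]
x = [eps[t] + th1 * eps[t - 1] + th2 * eps[t - 2] for t in range(2, 202)]
cov = inv2x2(observed_info(x, th1, th2, sigma**2))
```

The diagonal of `cov` plays the role of the (approximate) MSEs discussed in the simulation section.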

We shall compute the limiting values for

Eq. (

Next, we show (

Now, we are ready to use Lemma

We provide a practical example using quarterly U.S. GNP from

We do not know the true values of the unknown parameters in the MA(2) model, but we assume that the GNP rate follows

MLE and conditional MLE with a correction for U.S. GNP using MA(2)

TRUE ( | 0.0083 | 0.0094 | 0.3028 | 0.2036
MLE ( | 0.0071 | 0.0060 | 0.1446 | 0.1370
QMLE ( | 0.0072 | 0.0060 | 0.1564 | 0.1321
corrected MLE | – | 0.0065 | 0.1824 | 0.1680
corrected QMLE | – | 0.0065 | 0.1939 | 0.1639
Bias for MLE | −0.0012 | −0.0035 | −0.1582 | −0.0666
Bias for QMLE | −0.0012 | −0.0035 | −0.1464 | −0.0714
Bias for corrected MLE | – | −0.0029 | −0.1204 | −0.0356
Bias for corrected QMLE | – | −0.0029 | −0.1089 | −0.0397

For an unknown parameter

We show the derivatives of the conditional log-likelihood function and their expectations. Let

The first derivatives of the conditional log-likelihood function (
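These first derivatives can be evaluated recursively, since differentiating the residual recursion gives ∂ε_t/∂θ₁ = −ε_{t−1} − θ₁ ∂ε_{t−1}/∂θ₁ − θ₂ ∂ε_{t−2}/∂θ₁. A sketch of this standard conditional-likelihood computation (not necessarily the paper's notation), checked against a finite-difference derivative:

```python
import math
import random

def cond_loglik(x, th1, th2, s2):
    e1 = e2 = 0.0
    ll = 0.0
    for xt in x:
        e = xt - th1 * e1 - th2 * e2
        ll += -0.5 * math.log(2 * math.pi * s2) - e * e / (2 * s2)
        e2, e1 = e1, e
    return ll

def score_th1(x, th1, th2, s2):
    """d/d th1 of the conditional log-likelihood, via the recursion
    de_t = -e_{t-1} - th1*de_{t-1} - th2*de_{t-2}."""
    e1 = e2 = 0.0
    de1 = de2 = 0.0
    score = 0.0
    for xt in x:
        e = xt - th1 * e1 - th2 * e2
        de = -e1 - th1 * de1 - th2 * de2
        score += -e * de / s2  # chain rule on -e^2/(2*s2)
        e2, e1 = e1, e
        de2, de1 = de1, de
    return score

# sanity check against a central finite difference
rng = random.Random(2)
x = [rng.gauss(0.0, 1.0) for _ in range(50)]
h = 1e-6
fd = (cond_loglik(x, 0.4 + h, 0.2, 1.0) - cond_loglik(x, 0.4 - h, 0.2, 1.0)) / (2 * h)
err = abs(score_th1(x, 0.4, 0.2, 1.0) - fd)
```

The same pattern, differentiating the residual recursion once more, yields the second derivatives used for the information matrix.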

Lemma

Similarly,

Since

We easily obtain the third derivatives in Lemma

We show that the solution of

We assume that

First, we show (

Next, we show (

Finally, we prove (

First, we show (

Next, we show (

We show only (

The log-likelihood functions appearing in the first line in Proposition

Unlike Proposition

Eqs. (

The authors would like to thank the anonymous referees for their valuable suggestions and comments. They provided not only technical comments but also practical insights. Based on their comments, we revised many equations and added Remark