A linear structural regression model is studied in which the covariate is observed with a mixture of classical and Berkson measurement errors. The variances of both error components are assumed known. Without normality assumptions, consistent estimators of the model parameters are constructed and conditions for their asymptotic normality are given. The estimators split into two asymptotically independent groups.

Regression models with measurement errors in covariates are widely studied [

We consider a linear regression model in the presence of both classical and Berkson errors in the covariate:

In model (
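To make the mixture concrete, one common formulation (used, e.g., in the radio-epidemiology literature on such models) writes the response as Y = b0 + b1*X + eps, where the true covariate X = T + U_B carries a Berkson error around a latent variable T, while the observation W = T + U_C carries a classical error; whether this matches the display above exactly is an assumption here. Under that assumed formulation, Cov(W, Y) = b1*Var(T) and Var(W) = Var(T) + Var(U_C), so the slope can be recovered by subtracting the known classical-error variance. A minimal simulation sketch (all numeric values are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values, for illustration only.
n = 100_000
beta0, beta1 = 1.0, 2.0
sigma_T = 1.5      # sd of the latent variable T
sigma_C = 0.5      # known sd of the classical error
sigma_B = 0.7      # known sd of the Berkson error
sigma_eps = 1.0    # sd of the regression error

T = rng.normal(0.0, sigma_T, n)        # latent carrier
X = T + rng.normal(0.0, sigma_B, n)    # true covariate: Berkson error around T
W = T + rng.normal(0.0, sigma_C, n)    # observation: classical error around T
Y = beta0 + beta1 * X + rng.normal(0.0, sigma_eps, n)

# Naive OLS slope is attenuated by the classical error ...
naive = np.cov(W, Y)[0, 1] / np.var(W, ddof=1)
# ... while the moment-corrected slope subtracts the known sigma_C^2;
# the Berkson component does not bias this correction.
corrected = np.cov(W, Y)[0, 1] / (np.var(W, ddof=1) - sigma_C**2)
```

Here `corrected` should be close to the true slope `beta1`, while `naive` is shrunk by the factor Var(T)/(Var(T) + sigma_C^2).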

Models with a mixture of classical and Berkson errors arise in radio-epidemiology. In [

In [

The presented linear model (

The goal of the present paper is to study asymptotic properties of estimators of model parameters in the linear regression (

The paper is organized as follows. In Section

We use the following notation. The symbol

We consider the structural model (

Random variables

Random variables

Variances of

Consider independent copies of model (

We allow

Now, we explain why we impose condition (

The distribution of the observed Gaussian vector

Notice that under conditions of Lemma

Now, besides conditions (

Random variables

Now, out of (

Suppose at the moment that

Next, in model (

It is remarkable that

In our model, we have to estimate five parameters

Though we derived the estimators under the normality assumption (

Here, we check the strong consistency of

We need the following moment assumption.

We follow the line of the proof of Theorem 2.22 [

If condition (

The convergence (

Analysis of formula (

We slightly reparametrize the model (

Theorem

It is convenient to deal with asymptotically independent estimators

(a) We prove (

1. Since all the variances in the underlying model are assumed positive, the true vector

As was mentioned above,

The unbiasedness of

2. It remains to prove that

Consider a random vector

We use the centralization

Consider two cases regarding the support of

2.1. Here we suppose that

2.2. Now, we suppose that for some

(b) Now, we rely additionally on the assumption (

Using assumption (

Theorem

We simulated test data in order to evaluate the coverage probability of the asymptotic confidence interval for the slope parameter, which is constructed based on Theorem
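To give a concrete flavor of such a coverage experiment, the sketch below pairs a moment-corrected slope estimator with a sandwich (M-estimation) standard error and counts how often the nominal 95% interval covers the true slope over Monte Carlo replications. This is a generic moment-based interval under the assumed mixture formulation X = T + U_B, W = T + U_C, not the paper's theorem-based construction, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative design (not the paper's simulation settings).
beta0, beta1 = 1.0, 2.0
sigma_T, sigma_C, sigma_B, sigma_eps = 1.5, 0.5, 0.7, 1.0
n, reps, z = 500, 1000, 1.959964  # sample size, MC replications, 97.5% normal quantile

def corrected_slope_ci(W, Y, sigma_C):
    """Moment-corrected slope with a sandwich standard error."""
    Wc, Yc = W - W.mean(), Y - Y.mean()
    m = len(W)
    denom = Wc @ Wc / (m - 1) - sigma_C**2          # estimates Var(T)
    b1 = (Wc @ Yc / (m - 1)) / denom
    # Estimating-function residuals for the moment equation defining b1.
    psi = Wc * Yc - b1 * (Wc**2 - sigma_C**2)
    se = np.sqrt(np.mean(psi**2) / m) / denom
    return b1 - z * se, b1 + z * se

hits = 0
for _ in range(reps):
    T = rng.normal(0.0, sigma_T, n)
    X = T + rng.normal(0.0, sigma_B, n)
    W = T + rng.normal(0.0, sigma_C, n)
    Y = beta0 + beta1 * X + rng.normal(0.0, sigma_eps, n)
    lo, hi = corrected_slope_ci(W, Y, sigma_C)
    hits += lo <= beta1 <= hi
# hits / reps is the empirical coverage, expected near the nominal 0.95
```

The sandwich variance here follows the standard M-estimator recipe for the moment equation mean[(W - W̄)(Y - Ȳ) - b1((W - W̄)² - σ_C²)] = 0, which is what makes the interval valid without normality assumptions.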

[Coverage probability plots: sample-size effect perspective; Berkson effect perspective.]

Figure

We dealt with a linear observation model (

Then we modified the model to an equivalent centralized form (

In the future, we intend to consider the prediction problem for the model (