Open Access

# Estimation of a linear model with two-parameter symmetric platykurtic distributed errors

Journal of Uncertainty Analysis and Applications 2013, 1:13

DOI: 10.1186/2195-5468-1-13

Accepted: 6 November 2013

Published: 26 November 2013

## Abstract

### Purpose

A linear regression model with Gaussian-distributed error terms is the most widely used method to describe the possible relationship between outcome and predictor variables. However, Gaussian errors have drawbacks, chief among them that the distribution is mesokurtic. In many practical situations, the variables under study are not mesokurtic but platykurtic. Hence, to analyze such platykurtic variables, a multiple regression model with symmetric platykurtic (SP) distributed errors is needed. In this paper, we introduce and develop a multiple linear regression model with symmetric platykurtic distributed errors for the first time.

### Methods

We used the methods of ordinary least squares (OLS) and maximum likelihood (ML) to estimate the model parameters. The properties of the ML estimators under symmetric platykurtic distributed errors are discussed. Model selection criteria such as the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are used to compare the models. The utility of the proposed model is demonstrated with both simulated and real data.

### Results

A comparative study of the symmetric platykurtic linear regression model with the Gaussian model revealed that the former gives a good fit to some data sets. The results also revealed that the ML estimators are more efficient than the OLS estimators in terms of the relative efficiency of the one-step-ahead forecast mean square error.

### Conclusions

The study shows that the symmetric platykurtic distribution serves as an alternative to the normal distribution. The developed model is useful for analyzing data sets arising from agricultural experiments, portfolio management, space experiments, and a wide range of other practical problems.

### Keywords

Maximum likelihood; Multiple linear regression model; Simulation; Symmetric platykurtic distribution

## Introduction

Regression analysis is one of the most commonly used statistical methodologies in many branches of science and engineering for discovering functional relationships between variables. The most typical example of regression analysis is multiple linear regression modeling, which is used for predicting the values of one or more response variables from factors of interest, the independent variables. It has received applications in almost every area of science, engineering, and medicine. Comprehensive accounts of the theory and applications of the linear regression model are given in Seber [1], Montgomery et al. [2], Grob [3], Sengupta and Jammalamadaka [4], Seber and Lee [5], Weisberg [6], and Yan and Su [7]. This technique is usually based on a statistical model in which the error terms are assumed to be independent and identically distributed random variables whose distribution is multivariate normal with a zero mean vector and a positive definite covariance matrix [8]. However, in many disciplines, scientific research based on empirical studies or theoretical reasoning has provided support for the presence of skewness or heavy tails in the distribution of the error terms. Departures from normality may also be caused by the presence of outlying values in the responses. Examples can be found, amongst others, in Fama [9] and Sutton [10]. For these reasons, several researchers have proposed to perform multivariate regression analysis using models that assume a different parametric distribution family for the error terms.

Zeckhauser and Thompson [11] studied a linear regression model with power distributions. Zellner [12] and Sutradhar and Ali [13] studied a regression model with a multivariate t error variable. Tiku et al. [14–16] investigated a linear regression model with symmetric innovations, discussed a first-order autoregressive model with symmetric innovations, and presented a linear model with the t distribution, respectively. Sengupta and Jammalamadaka [17] studied linear models. Liu and Bozdogan [18] studied power exponential (PE) multiple regression. Wong and Bian [19, 20] studied multiple regression coefficients in a linear model with errors following Student's t distribution and a linear regression model whose underlying distribution is a generalized logistic distribution, respectively. Liu and Bozdogan [21] studied multivariate regression models with PE random errors under various assumptions. Soffritti and Galimberti [22] discussed a multivariate linear regression model under the assumption that the error terms follow a finite mixture of normal distributions. Jafari and Hashemi [23] studied linear regression with the error term of a skew-normal distribution. Jahan and Khan [24] investigated the g-and-k distribution as the underlying assumption for the distribution of error in a simple linear regression model. Bian et al. [25] studied a multiple linear regression model with an underlying Student's t distribution.

No serious attempt has been made to develop and analyze multiple regression models with symmetric platykurtic (SP) errors. For this reason, to achieve more flexibility in statistical modeling and model selection, and to robustify many multiple statistical procedures, the purpose of this paper is to introduce and develop a multiple linear regression model in which the error terms are assumed to be independent and identically distributed SP random errors with mean 0 and constant variance σ².

### Model description

The multiple regression model assumes a linear (in parameters) relationship between a dependent variable $Y=\left({y}_{1},{y}_{2},\dots ,{y}_{n}\right)'$ and a set of independent variables ${X}_{i}'=\left({x}_{i0},{x}_{i1},{x}_{i2},\dots ,{x}_{ik}\right)$, where the first regressor ${x}_{i0}=1$ is a constant and $i=1,2,\dots ,n$. The model of interest is the standard linear regression model of the following form:
${y}_{i}={\beta }_{0}+{\beta }_{1}{x}_{i1}+\cdots +{\beta }_{k}{x}_{ik}+{\epsilon }_{i}\quad \left(i=1,2,\dots ,n\right),$
(1)
where ${y}_{i}$ is an observed dependent variable, ${x}_{i1},\dots ,{x}_{ik}$ are observed independent variables, ${\beta }_{0},{\beta }_{1},\dots ,{\beta }_{k}$ are unknown regression coefficients to be estimated, and ${\epsilon }_{i}$ are independently and identically distributed random errors. Using linear algebra notation, model (1) may be written alternatively in matrix form as
$\mathbf{y}=\mathbf{X}\beta +\epsilon ,\quad \epsilon \sim SP\left(0,{\sigma }^{2}I\right).$
(2)
In this model,
$\underset{n\times 1}{\mathbf{y}}=\begin{pmatrix}{y}_{1}\\ {y}_{2}\\ \vdots \\ {y}_{n}\end{pmatrix},\quad \underset{n\times \left(k+1\right)}{\mathbf{X}}=\begin{pmatrix}{x}_{10} & {x}_{11} & \cdots & {x}_{1k}\\ {x}_{20} & {x}_{21} & \cdots & {x}_{2k}\\ \vdots & \vdots & & \vdots \\ {x}_{n0} & {x}_{n1} & \cdots & {x}_{nk}\end{pmatrix},\quad \underset{\left(k+1\right)\times 1}{\beta }=\begin{pmatrix}{\beta }_{0}\\ {\beta }_{1}\\ \vdots \\ {\beta }_{k}\end{pmatrix},\quad \underset{n\times 1}{\epsilon }=\begin{pmatrix}{\epsilon }_{1}\\ {\epsilon }_{2}\\ \vdots \\ {\epsilon }_{n}\end{pmatrix},$
(3)

where y is a column vector of n elements, X is an n × (k + 1) (with k + 1 < n) nonrandom design matrix of covariates (its first column has all elements equal to 1, its second column holds the observed values of x₁, …, and its (k + 1)th column holds the observed values of x_k), β is a column vector of (k + 1) elements, σ is an unknown scale parameter, and ϵ is an n × 1 column vector of error terms with zero mean and constant variance σ²I.

### Model assumptions

In order to complete the description of the model, some assumptions about the nature of the errors are necessary. It is assumed that the errors are independent and identically distributed (i.i.d.) random variables whose distribution is two-parameter SP rather than normal, with zero mean and a positive definite variance-covariance matrix σ²I of dimension n × n, that is,
$\epsilon \sim SP\left(0,{\sigma }^{2}I\right).$
(4)
These assumptions are summarized in the matrix vector form as
$E\left(\epsilon \right)=0,\quad \mathit{Cov}\left(\epsilon \right)=E\left(\epsilon {\epsilon }'\right)={\sigma }^{2}\underset{n\times n}{I},$
(5)

where the notation E stands for the expected value and Cov represents an n × n variance-covariance matrix. The vector 0 is a column vector with n zero elements, and I is an identity matrix of order n × n. The parameter σ² is unspecified, along with the vector parameter β. The elements of β are real-valued, while σ is positive. The covariates are either nonrandom or independent of the errors. Hence E(y) = Xβ and Cov(y) = σ²I. We shall use the triplet (y, Xβ, σ²I) for linear model (2).

The manuscript is organized in eight sections. The ‘Introduction’ section frames the objective of the paper and reviews the related literature. In the ‘Properties of the two-parameter symmetric platykurtic distribution’ section, we introduce the two-parameter SP distribution, denoted SP(μ,σ). We derive the maximum likelihood (ML) estimators in the ‘Maximum likelihood estimation of the model parameters’ section; to obtain numerical solutions to the ML estimation problem, the Newton–Raphson (NR) iterative method is used. In the ‘Properties of the estimators through simulation study’ section, we show the asymptotic properties of the estimators. In the ‘Least squares estimation of the model parameters’ section, OLS estimation of the model parameters is studied. Comparison of the ML with the OLS estimators, and of the proposed model with the Gaussian model, is made in the ‘Comparative study of the model’ section. The ‘Application of the model’ section demonstrates the usefulness of the present model on real data. Finally, the ‘Summary and conclusions’ section concludes the paper.

### Properties of the two-parameter symmetric platykurtic distribution

Even though a sample frequency curve may be symmetric and bell shaped, it often has a flatter peak and lighter tails than the normal curve, and the almost universally used Gaussian distribution may fit such data badly. The family of symmetric platykurtic distributions can best model these features. The origin and genesis of this distribution are given by Srinivasa Rao et al. [26] for analyzing statistical data that arise from biological, sociological, agricultural, and environmental experiments. It has since received considerable attention as a generalization of the normal distribution in modeling image segmentation and economic and financial data (e.g., [27]). The probability density function (pdf) of this family of distributions is
${f}_{\gamma }\left(y\mid \mu ,\sigma ,\gamma \right)=\frac{{\left[2\gamma +{\left(\frac{y-\mu }{\sigma }\right)}^{2}\right]}^{\gamma }{e}^{-\frac{1}{2}{\left(\frac{y-\mu }{\sigma }\right)}^{2}}}{\sigma \sum _{j=0}^{\gamma }\binom{\gamma }{j}{\left(2\gamma \right)}^{\gamma -j}{2}^{j+\frac{1}{2}}\Gamma \left(j+\frac{1}{2}\right)}.$
(6)

The distribution depends on three parameters μ, σ, and γ. These parameters can be interpreted as follows:

• μ is a real number and may be thought of as a location measure.

• σ is positive and measures the dispersion or the scale of the distribution.

• γ is a kurtosis parameter which determines the shape of the distribution, taking nonnegative integer values γ = 0, 1, 2, …. If γ = 0, we retrieve the normal distribution N(μ,σ²). If we take γ = 1, we get the two-parameter symmetric platykurtic distribution.

In this section, we briefly present the two-parameter symmetric platykurtic distribution, which is appropriate for data having kurtosis β₂ = 2.52. A random variable Y is said to follow a two-parameter SP distribution if the density function of Y is
$f\left(y\mid \mu ,\sigma \right)=\frac{\left[2+{\left(\frac{y-\mu }{\sigma }\right)}^{2}\right]{e}^{-\frac{1}{2}{\left(\frac{y-\mu }{\sigma }\right)}^{2}}}{3\sigma \sqrt{2\pi }},\quad y\in \Re ,$
(7)
where − ∞ < μ < ∞ and σ > 0 are location and scale parameters, respectively. We denote this by Y ~ SP(μ,σ). The various shapes of the frequency curves are shown in Figure 1.
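To make the density concrete, the sketch below (assuming NumPy/SciPy; the names `sp_pdf` and `sp_rvs` are ours, not from the paper) evaluates the SP density and draws from it exactly. The sampler uses the observation, introduced here only for illustration, that the standardized density can be rewritten as the mixture (2/3)·φ(z) + (1/3)·z²φ(z), where z²φ(z) is the density of a random sign times the square root of a χ²₃ variate.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def sp_pdf(y, mu=0.0, sigma=1.0):
    """Two-parameter symmetric platykurtic density, Equation (7)."""
    z = (y - mu) / sigma
    return (2.0 + z**2) * np.exp(-0.5 * z**2) / (3.0 * sigma * np.sqrt(2.0 * np.pi))

def sp_rvs(n, mu=0.0, sigma=1.0, seed=None):
    """Exact draws: (7) equals (2/3)*phi(z) + (1/3)*z^2*phi(z), and z^2*phi(z)
    is the density of a random sign times sqrt of a chi-square(3) variate."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    heavy = rng.random(n) < 1.0 / 3.0
    k = int(heavy.sum())
    z[heavy] = rng.choice([-1.0, 1.0], size=k) * np.sqrt(rng.chisquare(3, size=k))
    return mu + sigma * z

total, _ = quad(sp_pdf, -np.inf, np.inf)   # the density integrates to one
sample = sp_rvs(200_000, seed=1)
m, v, b2 = sample.mean(), sample.var(), stats.kurtosis(sample, fisher=False)
```

The sample moments reproduce the mean μ, the variance 5σ²/3, and the kurtosis 2.52 associated with this distribution.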
Some important properties of the random variable Y in a univariate context and those which are needed for the simulation of the variate are as follows:
1. The distribution function of the random variable Y specified by the probability density function (7) is given by
${F}_{Y}\left(y\right)={\int }_{-\infty }^{y}\frac{\left[2+{\left(\frac{t-\mu }{\sigma }\right)}^{2}\right]{e}^{-\frac{1}{2}{\left(\frac{t-\mu }{\sigma }\right)}^{2}}}{3\sigma \sqrt{2\pi }}\,dt,$
which, on simplification, reduces to
${F}_{Y}\left(y\right)=\frac{1}{\sigma \sqrt{2\pi }}{\int }_{-\infty }^{y}{e}^{-\frac{1}{2}{\left(\frac{t-\mu }{\sigma }\right)}^{2}}\,dt-\frac{\left(y-\mu \right){e}^{-\frac{1}{2}{\left(\frac{y-\mu }{\sigma }\right)}^{2}}}{3\sigma \sqrt{2\pi }}={F}_{0}\left(y;\mu ,\sigma \right)-{F}_{1}\left(y;\mu ,\sigma \right),$
(8)

where ${F}_{0}\left(y;\mu ,\sigma \right)=\frac{1}{\sigma \sqrt{2\pi }}{\int }_{-\infty }^{y}{e}^{-\frac{1}{2}{\left(\frac{t-\mu }{\sigma }\right)}^{2}}\,dt$, $y\in \Re$, is the distribution function of a normal random variable with mean μ and variance σ², while ${F}_{1}\left(y\right)=\frac{\left(y-\mu \right)}{3\sigma \sqrt{2\pi }}{e}^{-\frac{1}{2}{\left(\frac{y-\mu }{\sigma }\right)}^{2}}$, $y\in \Re$, is not itself a distribution function, since it is negative for y < μ.

2. Numerical approximation of the two-parameter symmetric platykurtic cumulative distribution function (CDF): Following Marsaglia [28], who suggested a simple algorithm for the standard normal distribution based on a Taylor series expansion, we approximate the values of F(y;μ,σ) as follows. For the standard normal distribution, with arbitrary precision, $\Phi \left(y\right)=\frac{1}{2}+\phi \left(y\right)\left(y+\frac{{y}^{3}}{3}+\frac{{y}^{5}}{3\cdot 5}+\frac{{y}^{7}}{3\cdot 5\cdot 7}+\dots \right)$, where φ and Φ are the pdf and CDF of the standard normal distribution. Accordingly, after a little algebra, the standard symmetric platykurtic CDF, F(y), is approximated by
$F\left(y\right)=\frac{1}{2}+\phi \left(y\right)\left(-\frac{y}{3}+\sum _{i=1,3,5,\dots }^{n}\frac{{y}^{i}}{i!!}\right),$
(9)

where the sum runs over the odd indices i = 1, 3, 5,…, n and i!! denotes the double factorial, that is, the product of every odd number from 1 to i.
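For the standard case (μ = 0, σ = 1), (8) reduces to the closed form F(y) = Φ(y) − yφ(y)/3, and the truncated series above, with the multiplier read as the standard normal density φ(y), converges to it rapidly. A minimal check, assuming SciPy (the function names are ours):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def sp_cdf(y):
    """Closed form of (8) for SP(0,1): F(y) = Phi(y) - y*phi(y)/3."""
    return norm.cdf(y) - y * norm.pdf(y) / 3.0

def sp_cdf_series(y, terms=25):
    """Marsaglia-style series (9): 1/2 + phi(y) * (-y/3 + sum over odd i of y^i/i!!)."""
    s, term = 0.0, float(y)            # first summand: y^1 / 1!!
    for i in range(1, 2 * terms, 2):   # odd i = 1, 3, 5, ...
        s += term
        term *= y * y / (i + 2)        # builds y^(i+2)/(i+2)!! from y^i/i!!
    return 0.5 + norm.pdf(y) * (-y / 3.0 + s)

def sp_pdf(y):
    """Standard SP(0,1) density (7)."""
    return (2.0 + y**2) * np.exp(-0.5 * y**2) / (3.0 * np.sqrt(2.0 * np.pi))

direct, _ = quad(sp_pdf, -np.inf, 1.0)  # F(1) by brute-force integration
```

All three routes (closed form, series, quadrature) agree to high precision for moderate arguments.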

3. The cumulant-generating function is the logarithm of the moment-generating function:
$g\left(t;\mu ,{\sigma }^{2}\right)=\mu t+\frac{1}{2}{\sigma }^{2}{t}^{2}+\ln \left[1+\frac{{\left(\sigma t\right)}^{2}}{3}\right].$
(10)
The cumulants k n are extracted from the cumulant-generating function via differentiation (at zero) of g(t). That is, the cumulants appear as the coefficients in the Maclaurin series of g(t):
${k}_{1}={g}'\left(0\right)=\mu ,\quad {k}_{2}={g}''\left(0\right)=\frac{5{\sigma }^{2}}{3},\quad {k}_{3}={g}'''\left(0\right)=0,\quad {k}_{4}={g}^{\left(4\right)}\left(0\right)=-\frac{4{\sigma }^{4}}{3}.$
(11)

That is, the first two cumulants are equal to the mean μ and the variance $\frac{5{\sigma }^{2}}{3}$ of the two-parameter symmetric platykurtic distribution, respectively. All odd cumulants of order three and higher vanish by symmetry, while the fourth cumulant is negative, giving the kurtosis $3+{k}_{4}/{k}_{2}^{2}=3-\frac{12}{25}=2.52$ that characterizes the platykurtic shape of the distribution.
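Differentiating (10) symbolically confirms k₁ = μ and k₂ = 5σ²/3, and shows that the logarithmic term also contributes a nonzero fourth cumulant, k₄ = −4σ⁴/3, which yields exactly the kurtosis 2.52 quoted for this distribution. A short sympy sketch (taking μ = 0, since the location parameter only affects k₁):

```python
import sympy as sp

t = sp.symbols('t')
sigma = sp.symbols('sigma', positive=True)

# cumulant-generating function (10) with mu = 0
g = sigma**2 * t**2 / 2 + sp.log(1 + (sigma * t)**2 / 3)

def cumulant(n):
    """n-th cumulant k_n = g^(n)(0)."""
    return sp.diff(g, t, n).subs(t, 0)

k2, k3, k4 = cumulant(2), cumulant(3), cumulant(4)
kurtosis = sp.simplify(3 + k4 / k2**2)   # Pearson kurtosis beta_2 = 63/25
```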

4. Hazard rate function of the distribution: The hazard function h(y;μ,σ) of the two-parameter symmetric platykurtic distribution is utilized to characterize life phenomena and can be written as
$h\left(y;\mu ,\sigma \right)=\frac{\left(2+{\left(\frac{y-\mu }{\sigma }\right)}^{2}\right)\phi \left(\frac{y-\mu }{\sigma }\right)}{3\sigma Q\left(\frac{y-\mu }{\sigma }\right)+\left(y-\mu \right)\phi \left(\frac{y-\mu }{\sigma }\right)},$
(12)

where φ is the pdf of the standard normal distribution and the Q-function Q(z) = 1 − Φ(z) is the complement of the standard normal CDF, both evaluated at z = (y − μ)/σ.

Recently, it was observed by Gupta and Gupta [29] that the reversed hazard function plays an important role in the reliability analysis. The reversed hazard function of the two-parameter SP(μ,σ) is
$r\left(y;\mu ,\sigma \right)=\frac{f\left(y;\mu ,\sigma \right)}{F\left(y;\mu ,\sigma \right)}=\frac{\left(2+{\left(\frac{y-\mu }{\sigma }\right)}^{2}\right)\phi \left(\frac{y-\mu }{\sigma }\right)}{3\sigma \Phi \left(\frac{y-\mu }{\sigma }\right)-\left(y-\mu \right)\phi \left(\frac{y-\mu }{\sigma }\right)}.$
(13)

It is well known that the hazard function or the reversed hazard function uniquely determines the corresponding probability density function.
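Both (12) and (13) can be checked against their definitions h = f/(1 − F) and r = f/F, with F taken from (8). A numerical sketch assuming SciPy (the parameter values and function names are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 1.0, 2.0   # illustrative parameter values

def sp_pdf(y):
    """Density (7)."""
    z = (y - mu) / sigma
    return (2.0 + z**2) * norm.pdf(z) / (3.0 * sigma)

def sp_cdf(y):
    """Distribution function (8): Phi(z) - z*phi(z)/3 with z = (y - mu)/sigma."""
    z = (y - mu) / sigma
    return norm.cdf(z) - z * norm.pdf(z) / 3.0

def hazard(y):
    """Equation (12); norm.sf is the standard normal Q-function."""
    z = (y - mu) / sigma
    return (2.0 + z**2) * norm.pdf(z) / (3.0 * sigma * norm.sf(z) + (y - mu) * norm.pdf(z))

def reversed_hazard(y):
    """Equation (13)."""
    z = (y - mu) / sigma
    return (2.0 + z**2) * norm.pdf(z) / (3.0 * sigma * norm.cdf(z) - (y - mu) * norm.pdf(z))

grid = np.linspace(-4.0, 6.0, 101)
```

On the grid, the closed-form hazard and reversed hazard coincide with f/(1 − F) and f/F to floating-point accuracy.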

5. Entropy: The entropy of a two-parameter symmetric platykurtic distributed random variable Y with probability density function f(y) on the real line is
$h\left(y\right)=-\frac{529}{1500}+\ln \left(3\sigma \sqrt{2\pi }\right).$
(14)
It can be recalled that the entropy of the normal distribution is $\frac{1}{2}\ln \left(2\pi e{\sigma }^{2}\right)$. If f and g are the probability density functions of the symmetric platykurtic and normal distributions, respectively, then the relative entropy D(f||g) from f to g is
$D\left(f||g\right)={\int }_{-\infty }^{\infty }f\left(y\right)log\frac{f\left(y\right)}{g\left(y\right)}\mathit{dy}=0.7088.$
(15)

This gives a measure of something like the distance between the two probability distributions, in the sense that the relative entropy is always nonnegative, is zero if and only if the two distributions are the same, and increases as the distributions diverge. Some of the more important properties of the SP distribution are summarized in the Appendix.

### Maximum likelihood estimation of the model parameters

In this section, we consider the homoscedastic regression model (y, Xβ, σ²I) with ϵ ~ SP(0, σ²I). The unknown parameters of this model are the coefficient vector β and the error variance σ². We deal with the problem of ML estimation of these parameters, which requires some fairly intricate mathematics, from the observables y and X. The MLEs of β and σ² are the parameter values that maximize the likelihood function:
$L\left(y;\beta ,\sigma \right)=\prod _{i=1}^{n}f\left({y}_{i}\mid {x}_{i}'\beta ,{\sigma }^{2}\right)=\prod _{i=1}^{n}\frac{\left[2+{\left(\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }\right)}^{2}\right]{e}^{-\frac{1}{2}{\left(\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }\right)}^{2}}}{3\sigma \sqrt{2\pi }}={\left(3\sigma \sqrt{2\pi }\right)}^{-n}\prod _{i=1}^{n}\left[\left(2+{\left(\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }\right)}^{2}\right){e}^{-\frac{1}{2}{\left(\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }\right)}^{2}}\right].$
(16)
The log-likelihood function in the i.i.d. case, ignoring the additive constants, equals
$\ell =\ln L\left(y;\beta ,\sigma \right)=-\frac{n}{2}\ln {\sigma }^{2}+\sum _{i=1}^{n}\ln h\left(\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }\right),\quad \text{where}\quad h\left(\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }\right)=\left[2+{\left(\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }\right)}^{2}\right]{e}^{-\frac{1}{2}{\left(\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }\right)}^{2}}.$
(17)
The unknown parameters of this model are the coefficient vector β and the error variance σ². The MLE is the ${\stackrel{^}{\theta }}_{\mathrm{MLE}}=\left(\stackrel{^}{\beta },{\stackrel{^}{\sigma }}^{2}\right)$ which maximizes the log-likelihood. Taking the partial derivatives of the log-likelihood with respect to the (k + 1) × 1 vector β and the scalar σ² and setting the results equal to zero produces (18) and (19). The MLEs are the solutions of the equations:
$\frac{\partial \ell }{\partial \beta }=-\frac{1}{\sigma }\sum _{i=1}^{n}{x}_{i}\left(\frac{\partial \ln h\left(u\right)}{\partial u}\right){\Big|}_{u=\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }}=\frac{1}{\sigma }\sum _{i=1}^{n}{x}_{i}\left(\frac{{u}^{3}}{2+{u}^{2}}\right){\Big|}_{u=\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }}=0$
(18)
and
$\frac{\partial \ell }{\partial {\sigma }^{2}}=-\frac{n}{2{\sigma }^{2}}-\frac{1}{2{\sigma }^{2}}\sum _{i=1}^{n}\left(u\frac{\partial \ln h\left(u\right)}{\partial u}\right){\Big|}_{u=\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }}=-\frac{n}{2{\sigma }^{2}}+\frac{1}{2{\sigma }^{2}}\sum _{i=1}^{n}\left(\frac{{u}^{4}}{2+{u}^{2}}\right){\Big|}_{u=\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }}=0.$
(19)
Since there are no closed-form solutions to the likelihood equations, numerical methods such as the Fisher scoring or Newton–Raphson iterative method can be used to obtain the MLEs. The usual or standard procedure for implementing this solution is to use the Newton–Raphson iteration method given by
${\theta }^{\left(n+1\right)}={\theta }^{\left(n\right)}-{\left({H}^{\left(n\right)}\right)}^{-1}{S}^{\left(n\right)},\quad \theta =\left(\beta ,{\sigma }^{2}\right)',$
(20)

where H is the matrix of second derivatives and S is the vector of first derivatives of the log-likelihood function, both evaluated at the current value of the parameter vector θ.

Here we begin with some starting value, say θ(0), and improve it by finding some better approximation θ(1) to the required root. This procedure can be iterated to go from a current approximation θ(n) to a better approximation θ(n+1).
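As a sketch of this estimation step, the negative of the log-likelihood (17) can also be handed to a general-purpose quasi-Newton optimizer instead of a hand-rolled Newton–Raphson loop. The data-generating part below reuses a mixture representation of the SP density (our construction, introduced for illustration), and the sample size, design, and function names are all hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 5000
beta_true = np.array([4.0, 2.0, 3.0])
sigma_true = 1.0

# design: intercept plus two standard normal covariates (illustrative choice)
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])

# SP(0,1) errors via the mixture (2/3)*N(0,1) + (1/3)*sign*sqrt(chi-square_3)
eps = rng.standard_normal(n)
idx = rng.random(n) < 1.0 / 3.0
k = int(idx.sum())
eps[idx] = rng.choice([-1.0, 1.0], size=k) * np.sqrt(rng.chisquare(3, size=k))
y = X @ beta_true + sigma_true * eps

def negloglik(theta):
    """Negative of (17), up to constants; theta = (beta_0, beta_1, beta_2, log sigma)."""
    beta, sigma = theta[:-1], np.exp(theta[-1])
    u = (y - X @ beta) / sigma
    return n * np.log(sigma) + 0.5 * np.sum(u**2) - np.sum(np.log(2.0 + u**2))

fit = minimize(negloglik, x0=np.zeros(4), method="BFGS")  # theta^(0) = 0
beta_hat, sigma_hat = fit.x[:-1], np.exp(fit.x[-1])
```

Parameterizing the scale as log σ keeps the optimizer unconstrained; with this simulated sample the recovered coefficients land close to the prespecified values.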

### Variance-covariance matrix

To obtain the asymptotic variances and covariances of the estimates, it is required to construct the Hessian matrix of the log-likelihood. The negative expected values of the second-order partial derivatives of the log-likelihood equations (18) and (19) can be used to estimate the asymptotic covariance matrix of the parameter estimates and can be found as follows. Each summand on the right-hand sides of (18) and (19) has zero expectation. Therefore,
$E\left[\left(\frac{\partial \ell }{\partial \beta }\right){\left(\frac{\partial \ell }{\partial \beta }\right)}'\right]=\frac{1}{{\sigma }^{2}}\sum _{i=1}^{n}{x}_{i}{x}_{i}'{\int }_{-\infty }^{\infty }{\left(\frac{\partial \ln h\left(u\right)}{\partial u}\right)}^{2}{\Big|}_{u=\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }}\cdot \frac{1}{\sigma }h\left(\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }\right)d{y}_{i}=\frac{1}{{\sigma }^{2}}\sum _{i=1}^{n}{x}_{i}{x}_{i}'{\int }_{-\infty }^{\infty }{\left(\frac{\partial \ln h\left(u\right)}{\partial u}\right)}^{2}h\left(u\right)\,du={I}_{\mu }X'X,$
(21)
$\text{where}\quad {I}_{\mu }=\frac{1}{{\sigma }^{2}}{\int }_{-\infty }^{\infty }{\left(\frac{\partial \ln h\left(u\right)}{\partial u}\right)}^{2}h\left(u\right)\,du=\frac{1}{{\sigma }^{2}}{\int }_{-\infty }^{\infty }\frac{{u}^{6}}{{u}^{2}+2}{e}^{-\frac{1}{2}{u}^{2}}\,du=\frac{4.934}{{\sigma }^{2}}.$
(22)
Further,
$E\left[\left(\frac{\partial \ell }{\partial \beta }\right)\left(\frac{\partial \ell }{\partial {\sigma }^{2}}\right)\right]=\frac{1}{2{\sigma }^{3}}\sum _{i=1}^{n}{x}_{i}{\int }_{-\infty }^{\infty }u{\left(\frac{\partial \ln h\left(u\right)}{\partial u}\right)}^{2}{\Big|}_{u=\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }}\cdot \frac{1}{\sigma }h\left(\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }\right)d{y}_{i}=\frac{1}{2{\sigma }^{3}}\sum _{i=1}^{n}{x}_{i}{\int }_{-\infty }^{\infty }u{\left(\frac{\partial \ln h\left(u\right)}{\partial u}\right)}^{2}h\left(u\right)\,du=0,$
(23)

as the integrand in the last expression is an odd function of u.

Finally,
$E\left[{\left(\frac{\partial \ell }{\partial {\sigma }^{2}}\right)}^{2}\right]=\frac{{n}^{2}}{4{\sigma }^{4}}+\frac{n}{2{\sigma }^{4}}\sum _{i=1}^{n}E\left[\left(u\frac{\partial \ln h\left(u\right)}{\partial u}\right){\Big|}_{u=\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }}\right]+\frac{1}{4{\sigma }^{4}}\sum _{i=1}^{n}\sum _{j=1}^{n}E\left[\left(u\frac{\partial \ln h\left(u\right)}{\partial u}\right){\Big|}_{u=\frac{{y}_{i}-{x}_{i}'\beta }{\sigma }}\cdot \left(u\frac{\partial \ln h\left(u\right)}{\partial u}\right){\Big|}_{u=\frac{{y}_{j}-{x}_{j}'\beta }{\sigma }}\right].$
Simplification of this expression is aided by the identity $E\left[u\frac{\partial \ln h\left(u\right)}{\partial u}\right]=-1$, which gives
$E\left[{\left(\frac{\partial \ell }{\partial {\sigma }^{2}}\right)}^{2}\right]=\frac{{n}^{2}}{4{\sigma }^{4}}+\frac{{n}^{2}}{2{\sigma }^{4}}\left(-1\right)+\frac{{n}^{2}-n}{4{\sigma }^{4}}{\left(-1\right)}^{2}+\frac{n}{4{\sigma }^{4}}{\int }_{-\infty }^{\infty }{\left(u\frac{\partial \ln h\left(u\right)}{\partial u}\right)}^{2}h\left(u\right)\,du=n{I}_{{\sigma }^{2}},$
(24)
$\text{where}\quad {I}_{{\sigma }^{2}}=\frac{1}{4{\sigma }^{4}}\left[{\int }_{-\infty }^{\infty }{\left(u\frac{\partial \ln h\left(u\right)}{\partial u}\right)}^{2}h\left(u\right)\,du-1\right]=\frac{1}{4{\sigma }^{4}}\left[{\int }_{-\infty }^{\infty }\frac{{u}^{8}}{{u}^{2}+2}{e}^{-\frac{1}{2}{u}^{2}}\,du-1\right]=\frac{27.731-1}{4{\sigma }^{4}}=\frac{6.68275}{{\sigma }^{4}}.$
(25)
Therefore, the information matrix for θ = (β,σ2)' is
$I\left(\theta \right)=\begin{pmatrix}{I}_{\mu }X'X & 0\\ 0 & n{I}_{{\sigma }^{2}}\end{pmatrix}.$
(26)
The information for β is ${I}_{\mu }X'X$. Therefore, the design issues can be addressed with reference to the matrix X′X, just as in the normal case. The scalar ${I}_{\mu }$ is equal to $\frac{4.934}{{\sigma }^{2}}$ (greater than the value σ⁻² obtained when the components of y are normally distributed), whereas the scalar ${I}_{{\sigma }^{2}}$ is equal to $\frac{6.68275}{{\sigma }^{4}}$ (greater than the corresponding value in the normal case). The large sample variances and covariances of the estimates can be approximated by inverting the symmetric information matrix (26):
$\mathit{Var}\left(\theta \right)={\left[I\left(\theta \right)\right]}^{-1}=\begin{pmatrix}{\left({I}_{\mu }X'X\right)}^{-1} & 0\\ 0 & {\left(n{I}_{{\sigma }^{2}}\right)}^{-1}\end{pmatrix}.$
(27)

Thus, the square roots of the elements on the diagonal of this matrix give the standard errors associated with the coefficients.
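The numerical constants appearing in (22) and (25) can be reproduced by quadrature, in the same convention used above; a quick check assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad

# integrals appearing in (22) and (25)
i_mu_int, _ = quad(lambda u: u**6 / (u**2 + 2.0) * np.exp(-0.5 * u**2),
                   -np.inf, np.inf)
i_s2_int, _ = quad(lambda u: u**8 / (u**2 + 2.0) * np.exp(-0.5 * u**2),
                   -np.inf, np.inf)

sigma = 1.0
I_mu = i_mu_int / sigma**2                      # 4.934/sigma^2 in (22)
I_sigma2 = (i_s2_int - 1.0) / (4.0 * sigma**4)  # 6.68275/sigma^4 in (25)
```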

### Simulation and results

The proposed approach was evaluated through Monte Carlo experiments in which artificial data sets were generated from model (2) using Wolfram Mathematica 9. To facilitate exposition of the method of estimation, a data set with two independent variables and one dependent variable is simulated from a model with prespecified parameters for various sample sizes n = 100; 1,000; 3,000; 5,000; and 10,000. (The largest sample size, 10,000, is relatively large, so finite sample bias is less of an issue.) The dependent variable (Y) is simulated from the symmetric platykurtic distribution with mean 0 and variance 1, while the two predictor variables x₁ and x₂ are generated from normal and lognormal distributions, respectively, with prespecified mean and variance using the following simulation protocol:
$\begin{array}{l}{x}_{1}=\mathrm{RandomReal}\left[\mathrm{NormalDistribution}\left[10,5\right]\right];\\ {x}_{2}=\mathrm{RandomReal}\left[\mathrm{LogNormalDistribution}\left[0,3\right]\right]\end{array}$
(28)
and used as explanatory variables for the regression model. Without loss of generality, we considered the values of the model parameters as
$\begin{pmatrix}{\beta }_{0}\\ {\beta }_{1}\\ {\beta }_{2}\end{pmatrix}=\begin{pmatrix}4\\ 2\\ 3\end{pmatrix}\quad \text{and}\quad \begin{pmatrix}\mu \\ \sigma \end{pmatrix}=\begin{pmatrix}0\\ 1\end{pmatrix}.$
(29)
Summary statistics of estimations of the regression model with symmetric platykurtic distributed errors using the ML procedure are presented in Table 1.
The numerical results in Table 1 suggest that as the sample size increases, the estimates of the parameters become more precise. The ML method provides good estimates of the underlying model not only of the regression coefficients but also of the correlation matrix. The fitted linear regression model with symmetric platykurtic distributed error terms to the simulated data, based on the sample of size 10,000, is
$\stackrel{^}{Y}=4.0052+1.9981{X}_{1}+3.000{X}_{2}.$
(30)
Standard errors of the estimates were estimated by the square root of the diagonal elements of the inverse of the Hessian of the log-likelihood function. Thus, the estimated standard errors are
$\mathrm{s}.\mathrm{e}.\left({\stackrel{^}{\beta }}_{0}\right)=0.0220,\mathrm{s}.\mathrm{e}.\left({\stackrel{^}{\beta }}_{1}\right)=0.0020,\mathrm{s}.\mathrm{e}.\left({\stackrel{^}{\beta }}_{2}\right)=0.000,\mathrm{and}\phantom{\rule{0.25em}{0ex}}\mathrm{s}.\mathrm{e}.\left(\stackrel{^}{\sigma }\right)=0.0070.$
(31)

### Properties of the estimators through simulation study

If certain regularity conditions on the density are met, the MLEs are most attractive because they possess many asymptotic, or large sample, properties. Derivations of these asymptotic properties require some fairly intricate mathematics. Three properties of regular densities (moments of the derivatives of the log-likelihood) are used in establishing the properties of the MLEs. The properties of the ML estimators are as follows.

### Consistency

One of the basic properties of a good estimator is that it differs from a true value by a very small amount as n becomes large. This implies that we can reach the exact value of θ by indefinitely increasing the sample size. Mathematically, it is expressed as
$plim\stackrel{^}{\theta }=\theta .$
(32)
More precisely, a sufficient condition for consistency is that the estimator be asymptotically unbiased and that its variance tend to zero as the sample size grows. This leads to the two conditions:
$E\left(\stackrel{^}{\theta }\right)\underset{n\to \infty }{\to }\theta \quad \text{and}\quad V\left(\stackrel{^}{\theta }\right)\underset{n\to \infty }{\to }0.$
(33)

From (27), it is clear that the variance tends to zero as n → ∞ in each case; since the estimators are built from i.i.d. observations, we conclude that they are consistent.

### Asymptotic normality

Greene's derivation of the asymptotic normality of the MLE applies here. The first derivative of the log-likelihood evaluated at the MLE equals zero. So
$S\left(\stackrel{^}{\theta }\right)=0.$
(34)
Expand this set of equations in a Taylor series around the true parameters θ 0 using the mean value theorem to truncate the Taylor series at the second term:
$S\left(\stackrel{^}{\theta }\right)=S\left({\theta }_{0}\right)+H\left(\overline{\theta }\right)\left(\stackrel{^}{\theta }-{\theta }_{0}\right)=0.$
(35)
The Hessian is evaluated at a point $\overline{\theta }$ that is between $\stackrel{^}{\theta }$ and θ 0 $\left[\overline{\theta }=w\stackrel{^}{\theta }+\left(1-w\right){\theta }_{0}\phantom{\rule{0.25em}{0ex}}\mathrm{for}\phantom{\rule{0.25em}{0ex}}\mathrm{some}\phantom{\rule{0.25em}{0ex}}0<w<1\right]$. We then rearrange this equation and multiply the result by $\sqrt{n}$ to obtain
$\sqrt{n}\left(\stackrel{^}{\theta }-{\theta }_{0}\right)={\left[-H\left(\overline{\theta }\right)\right]}^{-1}\left[\sqrt{n}S\left({\theta }_{0}\right)\right]$
(36)
Using (32), the probability limits of $\left(\stackrel{^}{\theta }-{\theta }_{0}\right)$ and $\left(\overline{\theta }-{\theta }_{0}\right)$ go to zero because of the consistency of $\stackrel{^}{\theta }$ (i.e., $plim\left(\stackrel{^}{\theta }-{\theta }_{0}\right)=0\phantom{\rule{0.25em}{0ex}}\mathrm{and}\phantom{\rule{0.25em}{0ex}}plim\left(\overline{\theta }-{\theta }_{0}\right)=0$). The second derivatives are continuous functions. Therefore, if the limiting distribution exists, then
$\sqrt{n}\left(\stackrel{^}{\theta }-{\theta }_{0}\right)\stackrel{d}{\to }{\left[-H\left({\theta }_{0}\right)\right]}^{-1}\left[\sqrt{n}S\left({\theta }_{0}\right)\right].$
(37)
By dividing H(θ 0) and S(θ 0) by n, we obtain
$\sqrt{n}\left(\stackrel{^}{\theta }-{\theta }_{0}\right)\stackrel{d}{\to }{\left[-\frac{1}{n}H\left({\theta }_{0}\right)\right]}^{-1}\left[\sqrt{n}\overline{S}\left({\theta }_{0}\right)\right]$
(38)
We may apply the Lindeberg-Levy central limit theorem to $\left[\sqrt{n}\overline{S}\left({\theta }_{0}\right)\right]$ because it is $\sqrt{n}$ times the mean of a random sample. By virtue of $V\left[{S}_{i}\left(\theta \right)\right]=-E\left[\frac{1}{n}H\left(\theta \right)\right]$, the limiting variance of $\left[\sqrt{n}\overline{S}\left(\theta \right)\right]$ is $-E\left[\frac{1}{n}H\left(\theta \right)\right]$, so
$\sqrt{n}\overline{S}\left(\theta \right)\stackrel{d}{\to }N\left\{0,-E\left[\frac{1}{n}H\left(\theta \right)\right]\right\}$
(39)
By virtue of $E\left[{S}_{i}\left(\theta \right)\right]=0,plim\left[-\frac{1}{n}H\left(\theta \right)\right]=-E\left[\frac{1}{n}H\left(\theta \right)\right]$. This result is a constant matrix, so we can combine results to obtain
$\begin{array}{l}\phantom{\rule{0.12em}{0ex}}{\left[-\frac{1}{n}H\left(\theta \right)\right]}^{-1}\sqrt{n}\overline{S}\left(\theta \right)\stackrel{d}{\to }N\left[0,{\left\{-E\left[\frac{1}{n}H\left(\theta \right)\right]\right\}}^{-1}\left\{-E\left[\frac{1}{n}H\left(\theta \right)\right]\right\}{\left\{-E\left[\frac{1}{n}H\left(\theta \right)\right]\right\}}^{-1}\right]\phantom{\rule{0.25em}{0ex}}\mathrm{or}\\ \sqrt{n}\left(\stackrel{^}{\theta }-\theta \right)\stackrel{d}{\to }N\left[0,{\left\{-E\left[\frac{1}{n}H\left(\theta \right)\right]\right\}}^{-1}\right].\end{array}$
(40)
It follows that the MLEs are asymptotically normal with asymptotic distribution:
$\stackrel{^}{\theta }~N\left[\theta ,{\left\{I\left(\theta \right)\right\}}^{-1}\right].$
(41)
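The statement in (41) can be visualized with a generic Monte Carlo sketch. Here we use the MLE of a Gaussian mean as a stand-in, since its information is known exactly; the point is the shape of the limiting distribution, not the SP likelihood itself.

```python
import numpy as np

# Generic illustration of (41): for the MLE of a Gaussian mean, I(mu) = n / sigma^2,
# so mu_hat is approximately N(mu, sigma^2 / n) in large samples.
rng = np.random.default_rng(2)
mu, sigma, n, reps = 4.0, 2.0, 200, 5000
mle = np.array([rng.normal(mu, sigma, n).mean() for _ in range(reps)])

# Standardised by the inverse information: should be close to N(0, 1)
z = np.sqrt(n) * (mle - mu) / sigma
print(z.mean(), z.var())
```

Across replications the standardized estimator has mean near 0 and variance near 1, matching the N[θ, I(θ)⁻¹] approximation.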

### Asymptotic efficiency

The information matrix is the tool of interest for verifying efficiency, namely the attainment of the information bound by the variance of the estimator. An estimator whose variance attains the Cramer-Rao lower bound as the sample size tends to infinity is called asymptotically efficient; that is, it reaches 100% efficiency only in the limit n → ∞. It can be shown that the Cramer-Rao lower bounds for $\stackrel{^}{\theta }=\left({\stackrel{^}{\beta }}_{0},{\stackrel{^}{\beta }}_{1},{\stackrel{^}{\beta }}_{2}\right)\text{'}$ and ${\stackrel{^}{\sigma }}^{2}$, respectively, are
$\begin{array}{cl}\mathrm{Var}\left(\stackrel{^}{\theta }\right)\ge & \frac{1}{-\mathit{nE}\left(\frac{{\partial }^{2}lnf\left(y\right)}{\partial {\theta }^{2}}\right)}={\left({I}_{\mu }X\text{'}X\right)}^{-1}\phantom{\rule{0.12em}{0ex}}\mathrm{and}\\ \mathrm{Var}\left({\stackrel{^}{\sigma }}^{2}\right)\ge & \frac{1}{-\mathit{nE}\left(\frac{{\partial }^{2}lnf}{{\left(\partial {\sigma }^{2}\right)}^{2}}\right)}={\left(n{I}_{{\sigma }^{2}}\right)}^{-1},\end{array}$
(42)

where I μ and ${I}_{{\sigma }^{2}}$ are as defined in (22) and (25), respectively.

This means that any unbiased estimator that achieves this lower bound is efficient and no better unbiased estimator is possible. Now look back at the variance-covariance matrix (27). It is interesting to note that the variances of the estimators in the variance-covariance matrix do asymptotically coincide with the Cramer-Rao lower bound (42). This means that our MLEs are 100% asymptotically efficient. The asymptotic variance of the MLE is, in fact, equal to the Cramer-Rao lower bound for the variance of a consistent and asymptotically normally distributed estimator [30].

### Invariance

Last, the invariance property is a mathematical consequence of the method of computing MLEs rather than a statistical result as such. If we wish to analyze a continuous and continuously differentiable function of an MLE, then that function of $\stackrel{^}{\theta }$ is itself the MLE of the corresponding function of θ, since the MLE is invariant under one-to-one transformations of θ.

These four properties explain the prevalence of the ML technique. The second is a particularly powerful result. The third greatly facilitates hypothesis testing and the construction of interval estimates. The MLE has the minimum variance achievable by a consistent and asymptotically normally distributed estimator.

### Least squares estimation of the model parameters

The most widely used technique for estimating the unknown regression coefficients in a standard linear regression model is undeniably the method of ordinary least squares (OLS). The least squares estimates of β 0, β 1, and β 2 are the values which minimize
$\mathrm{SS}=\sum _{i=1}^{n}{\left({y}_{i}-{\beta }_{0}-{\beta }_{1}{x}_{1i}-{\beta }_{2}{x}_{2i}\right)}^{2}.$
(43)
This leads to a closed-form expression for the estimated value of the unknown parametric vector β:
$\begin{array}{l}\stackrel{^}{\beta }={\left({X}^{T}X\right)}^{-1}{X}^{T}y={\left(\frac{1}{n}{\sum _{1}^{n}{x}_{i}x}_{i}^{T}\right)}^{-1}\left(\frac{1}{n}\sum _{1}^{n}{x}_{i}{y}_{i}\right).\end{array}$
(44)
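A sketch of (44) in matrix form, cross-checked against a library least-squares solver; Gaussian noise stands in for the SP errors here, and the design and coefficients are illustrative.

```python
import numpy as np

# Illustrative design matrix with intercept column and two regressors
rng = np.random.default_rng(3)
n = 1000
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n), rng.uniform(0, 10, n)])
beta_true = np.array([4.0, 2.0, 3.0])
y = X @ beta_true + rng.normal(0, 1, n)  # Gaussian noise stands in for SP errors

# OLS via the normal equations (44): beta_hat = (X'X)^{-1} X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check with numpy's least-squares routine
beta_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]
```

Solving the normal equations with `np.linalg.solve` avoids explicitly forming the inverse of X'X, which is the numerically preferred route.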
Tables 2 and 3 summarize the results of the OLS estimation for a sample of size 10,000 using the same simulated observations obtained in the ‘Simulation and results’ section.

From Table 3, it is observed that the OLS estimates differ noticeably from the ML estimates, and the ML estimators are closer to the true parameter values than the OLS estimators.

## Comparative study of the model

### Comparison of estimators of the linear regression model

In this section, ML and OLS estimators are compared in fitting the multivariate linear model with two-parameter symmetric platykurtic error terms. One-step-ahead forecasting is commonly used to compare the performance of different models [31, 32]. For each estimation technique, bias and mean square error (MSE) are calculated for the sample size of 10,000 where
$\mathrm{MSE}\left(\stackrel{^}{\beta }\right)=var\left(\stackrel{^}{\beta }\right)+{\left[\mathrm{bias}\left(\stackrel{^}{\beta }\right)\right]}^{2}.$
(45)
The computational result is presented in Table 4.
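The decomposition (45) can be verified directly by Monte Carlo. The toy estimator below (a sample mean under Gaussian sampling) is only an illustrative stand-in for the regression estimators compared in Table 4.

```python
import numpy as np

# Monte Carlo check of the decomposition MSE = Var + bias^2 in (45)
rng = np.random.default_rng(4)
theta_true = 4.0
est = np.array([rng.normal(theta_true, 2.0, 50).mean() for _ in range(20000)])

bias = est.mean() - theta_true
mse_direct = np.mean((est - theta_true) ** 2)   # MSE computed directly
mse_decomp = est.var() + bias ** 2              # variance plus squared bias
```

The two quantities agree to numerical precision, since the decomposition is an algebraic identity when the variance uses the divisor n.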

As expected, the results reported in Table 4 show that the ML estimators have both smaller one-step-ahead forecast bias and smaller MSE than the OLS estimators. This reveals that the ML estimators outperform the OLS estimators and confirms that deviations from normality make OLS estimators perform poorly.

### Comparison of the SP-LRM with the N-LRM

The linear regression model with symmetric platykurtic errors (SP-LRM) and the linear regression model with normal errors (N-LRM) were applied to the simulated data sets, producing parameter estimates $\left({\stackrel{^}{\beta }}_{0},{\stackrel{^}{\beta }}_{1},{\stackrel{^}{\beta }}_{2},\stackrel{^}{\sigma }\right)$ for each model. The simulated multivariate data are used to compare the performance of the linear regression model with symmetric platykurtic error terms with that of the linear regression model with normal error terms. In order to determine the best linear model among the fitted ones, we computed the Akaike information criterion (AIC) and Bayesian information criterion (BIC) together with the root-mean-square error (RMSE) as a model diagnostic; the best model is the one with the minimum AIC, BIC, and RMSE. The output of the simulation study using various sample sizes is presented in Table 5.
Models with small criterion values are the preferred candidates, and from Table 5 it can be seen that the SP distribution provides the best fit to the data. That is, both the information criteria and the model diagnostics indicate that the linear model with symmetric platykurtic distributed error terms consistently performed best across all sample sizes in the simulation. The superiority of the SP-LRM over the N-LRM is also consistently evident from Figures 2, 3, and 4.
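The selection criteria used here follow the standard definitions AIC = −2ℓ + 2k and BIC = −2ℓ + k ln n, where ℓ is the maximized log-likelihood, k the number of parameters, and n the sample size. A minimal sketch (the log-likelihood value below is hypothetical):

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion: smaller is better."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian information criterion: smaller is better."""
    return -2.0 * loglik + k * np.log(n)

def rmse(y, y_hat):
    """Root-mean-square error of fitted values."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

# Hypothetical maximized log-likelihood for a model with k = 4 parameters, n = 202
print(aic(-500.0, 4), bic(-500.0, 4, 202))
```

Given two fitted models, the one with the smaller AIC, BIC, and RMSE would be preferred, which is how the SP-LRM versus N-LRM comparison in Table 5 is decided.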

### Application of the model

As an illustration of the proposed methodology, we considered a real data set concerning 202 athletes collected at the Australian Institute of Sport, courtesy of Richard Telford and Ross Cunningham [33]. It is also available within the package sn in R. The variables examined are body mass index (BMI), red cell count (RCC), white cell count (WCC), and plasma ferritin concentration (PFC). The first is a biometrical variable, while the remaining three concern blood composition. They are summarized in Table 6. We studied the linear dependence of the biometrical variable on the blood composition variables.
We compute the skewness, kurtosis, and Jarque-Bera statistic to test the normality hypothesis for the body mass index. The results are shown in Table 7. Several other statistics could be used to test normality, such as the modified Shapiro-Wilk statistic, the Anderson-Darling test, and the Kolmogorov-Smirnov test. However, as the Jarque-Bera statistic is one of the most powerful tests of normality and the results of the other statistics are similar, we report only the Jarque-Bera statistic together with the corresponding skewness and kurtosis. At the 1% level of significance, the results in the table lead to rejection of the null hypothesis of normality for the body mass index.
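The Jarque-Bera statistic is JB = (n/6)[S² + (K − 3)²/4], where S and K are the sample skewness and kurtosis; under normality it is approximately chi-square with 2 degrees of freedom. A small self-contained sketch on simulated data:

```python
import numpy as np

def jarque_bera(y):
    """Sample skewness, kurtosis, and the Jarque-Bera statistic
    JB = n/6 * (S^2 + (K - 3)^2 / 4), approximately chi-square(2) under normality."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    d = y - y.mean()
    m2 = np.mean(d ** 2)
    s = np.mean(d ** 3) / m2 ** 1.5   # skewness
    k = np.mean(d ** 4) / m2 ** 2     # kurtosis
    jb = n / 6.0 * (s ** 2 + (k - 3.0) ** 2 / 4.0)
    return s, k, jb

rng = np.random.default_rng(5)
s, k, jb = jarque_bera(rng.normal(size=5000))  # near-normal data: small JB
print(s, k, jb)
```

For near-normal data JB stays far below the 1% chi-square(2) critical value of 9.21, while heavy-tailed samples inflate the kurtosis term and hence JB.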
Table 8 shows the results of fitting multiple regression models with symmetric platykurtic distributed errors and Gaussian errors (SP-LRM and N-LRM) to the Australian Institute of Sport data set using the maximum likelihood method of estimation. The standard errors are the asymptotic standard errors based on the observed information matrix given in (26).
From Table 8, it is observed that the estimates differ slightly between the two models. Following Lachos et al. [34], we propose selecting the best fit between the N-LRM and the SP-LRM by inspection of information criteria such as AIC and BIC (the preferred model is the one with the smallest value of the criterion). The AIC and BIC values shown at the bottom of Table 8 indicate that the SP-LRM outperforms the N-LRM. Therefore, the fitted model for BMI is
$\stackrel{^}{Y}=4.7799+3.1472{X}_{1}+0.0899{X}_{2}+0.1988{X}_{3}$
(46)
with the estimated standard errors
$\begin{array}{l}\mathrm{s}.\mathrm{e}.\left({\stackrel{^}{\beta }}_{0}\right)=2.3617,\mathrm{s}.\mathrm{e}.\left({\stackrel{^}{\beta }}_{1}\right)=0.4575,\mathrm{s}.\mathrm{e}.\left({\stackrel{^}{\beta }}_{2}\right)=0.1018,\\ \mathrm{s}.\mathrm{e}.\left({\stackrel{^}{\beta }}_{3}\right)=0.0337,\mathrm{and}\phantom{\rule{0.25em}{0ex}}\mathrm{s}.\mathrm{e}.\left(\stackrel{^}{\sigma }\right)=0.1918.\end{array}$
(47)

## Summary and conclusions

A multiple linear regression model generalizes the simple linear regression model by allowing the response variable to depend on more than one explanatory variable. In this paper, we have explored the idea of using a symmetric platykurtic distribution for analyzing nonnormal errors in the multivariate linear regression model. The symmetric platykurtic distribution serves as an alternative to the normal distribution with a platykurtic nature. The maximum likelihood estimators of the model parameters are derived and found to be computationally feasible. Through simulation studies, the properties of these estimators are examined. Traditional OLS estimation is carried out in parallel and the results are compared. The simulation results reveal that the ML estimators are more efficient than the OLS estimators in terms of the relative efficiency of the one-step-ahead forecast mean square error. A comparative study of the developed regression model with the Gaussian model revealed that this model gives a good fit to some data sets. The asymptotic properties of the maximum likelihood estimators are studied, and the large-sample theory with respect to the regression coefficients is also presented. The utility of the proposed model is demonstrated with real-time data. This regression model is useful for analyzing data sets arising from agricultural experiments, portfolio management, space experiments, and a wide range of other practical problems. The calculations in this paper make considerable use of a combination of three popular statistical packages: Mathematica 9.0, Matlab R2012b, and SAS 9.0.

## Appendix

### Summary of properties of two-parameter symmetric platykurtic distribution

1. Notation:              NS(μ, σ 2)

2. Parameters:           $\begin{array}{l}\mu \in \Re -\mathrm{Mean}\phantom{\rule{0.12em}{0ex}}\left(\mathrm{location}\right)\\ {\sigma }^{2}>0-\mathrm{Variance}\phantom{\rule{0.12em}{0ex}}\left(\mathrm{squared}\phantom{\rule{0.12em}{0ex}}\mathrm{scale}\right)\end{array}$

3. Support:             $y\in \Re$

4. PDF:               $\frac{\left[2+{\left(\frac{y-\mu }{\sigma }\right)}^{2}\right]{e}^{-\frac{1}{2}{\left(\frac{y-\mu }{\sigma }\right)}^{2}}}{3\sigma \sqrt{2\pi }}$

5. CDF:               $\frac{1}{\sigma \sqrt{2\pi }}{\int }_{-\infty }^{y}{e}^{-\frac{1}{2}{\left(\frac{t-\mu }{\sigma }\right)}^{2}}\mathit{dt}-\frac{\left(y-\mu \right){e}^{-\frac{1}{2}{\left(\frac{y-\mu }{\sigma }\right)}^{2}}}{3\sigma \sqrt{2\pi }}$

6. Mean:                 μ

7. Median:               μ

8. Mode:               μ

9. Variance:            $\frac{5}{3}{\sigma }^{2}$

10. Skewness:           0

11. Kurtosis:            2.52

12. Entropy:             $\frac{-529}{1500}+\mathit{\text{ln}}\left(3\sigma \sqrt{2\pi }\right)$

13. MGF:             ${e}^{\mathit{\mu t}+\frac{{t}^{2}{\sigma }^{2}}{2}}\left[1+\frac{{\left(\mathit{\sigma t}\right)}^{2}}{3}\right]$

14. CF:              ${e}^{\mathit{\mu it}-\frac{1}{2}{\sigma }^{2}{t}^{2}}\left\{1+\frac{{\left(\mathit{\sigma it}\right)}^{2}}{3}\right\}$

15. CGF:               $\mathit{\mu t}+\frac{1}{2}{\sigma }^{2}{t}^{2}+\mathit{\text{ln}}\left[1+\frac{{\left(\mathit{\sigma t}\right)}^{2}}{3}\right]$

16. Central moments:    $\begin{array}{l}{\mu }_{2n}=\left[\frac{\left(n+\frac{3}{2}\right)\Gamma \left(n+\frac{1}{2}\right)}{3\sqrt{\pi }}\right]{2}^{n+1}{\sigma }^{2n},\mathrm{even}\phantom{\rule{0.12em}{0ex}}\mathrm{central}\phantom{\rule{0.12em}{0ex}}\mathrm{moments}\\ {\mu }_{2n+1}=0,\phantom{\rule{6.6em}{0ex}}\mathrm{odd}\phantom{\rule{0.12em}{0ex}}\mathrm{central}\phantom{\rule{0.12em}{0ex}}\mathrm{moments}\end{array}$

17. Fisher information:  $\left(\begin{array}{cc}\hfill {I}_{\mu }X\text{'}X\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill n{I}_{{\sigma }^{2}}\hfill \end{array}\right)$; ${I}_{\mu }=\frac{4.934}{{\sigma }^{2}}$, ${I}_{{\sigma }^{2}}=\frac{6.68275}{{\sigma }^{4}}$
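The closed-form entries above can be spot-checked by quadrature; a quick sketch for the standard case μ = 0, σ = 1, verifying that the pdf integrates to 1, that the variance equals 5/3, and that the kurtosis equals 63/25 = 2.52:

```python
import numpy as np

# Fine grid over a range wide enough that the Gaussian-type tails are negligible
y = np.linspace(-12.0, 12.0, 200001)
h = y[1] - y[0]
pdf = (2.0 + y**2) * np.exp(-0.5 * y**2) / (3.0 * np.sqrt(2.0 * np.pi))

total = pdf.sum() * h            # should be ~1 (pdf integrates to one)
m2 = (y**2 * pdf).sum() * h      # variance (the mean is 0 by symmetry)
m4 = (y**4 * pdf).sum() * h      # fourth central moment
kurtosis = m4 / m2**2            # should be ~2.52 (platykurtic, below 3)
print(total, m2, kurtosis)
```

The kurtosis of 2.52 < 3 is exactly the platykurtic property that motivates the model.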

## Declarations

### Acknowledgements

The authors are grateful to the Editor of JUAA, two anonymous referees, and SpringerOpen Copyediting Management for the helpful comments and suggestions on the earlier version of this article. The present version of the paper owes much to their precise and kind remarks.

## Authors’ Affiliations

(1)
Department of Statistics, Andhra University

## References

1. Seber GAF: Linear Regression Analysis. New York: Wiley; 1977.
2. Montgomery DC, Peck EA, Vining GG: Introduction to Linear Regression Analysis. 3rd edition. New York: Wiley; 2001.
3. Grob J: Linear Regression. In Lecture Notes in Statistics, vol. 175. Berlin: Springer; 2003.
4. Sengupta D, Jammalamadaka SR: Estimation in the linear model. In Linear Models: An Integrated Approach. River Edge: World Scientific; 2003:93–131.
5. Seber GAF, Lee AJ: Linear Regression Analysis. 2nd edition. New York: Wiley; 2003.
6. Weisberg S: Applied Linear Regression. 3rd edition. New York: Wiley; 2005.
7. Yan X, Su XG: Linear Regression Analysis: Theory and Computing. Hackensack: World Scientific; 2009.
8. Srivastava MS: Methods of Multivariate Statistics. New York: Wiley; 2002.
9. Fama EF: The behaviour of stock market prices. J. Bus 1965, 38: 34–105. 10.1086/294743
10. Sutton J: Gibrat’s legacy. J. Econ. Lit 1997, 35: 40–59.
11. Zeckhauser R, Thompson M: Linear regression with non-normal error terms. Rev. Econ. Stat 1970, 52(3):280–286. 10.2307/1926296
12. Zellner A: Bayesian and non-Bayesian analysis of the regression model with multivariate Student-t error terms. J. Am. Stat. Assoc 1976, 71(354):400–405.
13. Sutradhar BC, Ali MM: Estimation of the parameters of a regression model with a multivariate t error variable. Commun. Stat. Theory 1986, 15: 429–450. 10.1080/03610928608829130
14. Tiku ML, Wong WK, Bian G: Estimating parameters in autoregressive models in non-normal situations: symmetric innovations. Commun. Stat. Theory Methods 28(2):315–341.
15. Tiku ML, Wong WK, Vaughan DC, Bian G: Time series models with non-normal situations: symmetric innovations. J. Time Ser. Anal 2000, 2(5):571–596.
16. Tiku ML, Islam MQ, Selcuk AS: Non-normal regression, II: symmetric distributions. Commun. Stat. Theory Methods 2001, 30(6):1021–1045. 10.1081/STA-100104348
17. Sengupta D, Jammalamadaka SR: The symmetric non-normal case. In Linear Models: An Integrated Approach. River Edge: World Scientific; 2003:131–133.
18. Liu M, Bozdogan H: Power exponential multiple regression model selection with ICOMP and genetic algorithms. Working paper. Tokyo: Springer; 2004.
19. Wong WK, Bian G: Estimation of parameters in autoregressive models with asymmetric innovations. Stat. Prob. Lett 2005, 71(1):61–70. 10.1016/j.spl.2004.10.022
20. Wong WK, Bian G: Robust estimation of multiple regression model with asymmetric innovations and its applicability on asset pricing model. Euras. Rev. Econ. Financ 2005, 1(4):7.
21. Liu M, Bozdogan H: Multivariate regression models with power exponential random errors and subset selection using genetic algorithms with information complexity. Eur. J. Pure Appl. Math 2008, 1(1):4–37.
22. Soffritti G, Galimberti G: Multivariate linear regression with non-normal errors: a solution based on mixture models. Stat. Comput 2011, 21(4):523–536. 10.1007/s11222-010-9190-3
23. Jafari H, Hashemi R: Optimal designs in a simple linear regression with skew-normal distribution for error term. J. Appl. Math 2011, 1(2):65–68.
24. Jahan S, Khan A: Power of t-test for simple linear regression model with non-normal error distribution: a quantile function distribution approach. J. Sci. Res 2012, 4(3):609–622.
25. Bian G, McAleer M, Wong WK: Robust estimation and forecasting of the capital asset pricing model. Ann. Financ. Econ 2013, in press.
26. Srinivasa Rao K, Vijay Kumar CVSR, Lakshmi Narayana J: On a new symmetrical distribution. J. Indian Soc. Agric. Stat 1997, 50(1):95–102.
27. Seshashayee M, Srinivas Rao K, Satyanarayana CH, Srinivasa Rao P: Image segmentation based on a finite generalized symmetric platykurtic mixture model with K-means. Int. J. Comput. Sci. Issu 2011, 8(3):2.
28. Marsaglia G: Evaluating the normal distribution. J. Stat. Softw 2004, 11(4):1–7.
29. Gupta RC, Gupta RD: Proportional reversed hazard rate model and its applications. J. Stat. Plann. Inference 2007, 137(11):3525–3536. 10.1016/j.jspi.2007.03.029
30. Greene W: Econometric Analysis. 5th edition. Upper Saddle River: Prentice-Hall; 2003.
31. Clements MP, Hendry DF: An empirical study of seasonal unit roots in forecasting. Int. J. Forecast 1997, 13(3):341–355. 10.1016/S0169-2070(97)00022-8
32. Chiang TC, Qiao Z, Wong WK: New evidence on the relation between return volatility and trading volume. J. Forecast 2010, 29(5):502–515.
33. Ferreira JTAS, Steel MF: Bayesian multivariate regression analysis with a new class of skewed distributions. Statistics Research Report 419. University of Warwick: Department of Statistics; 2004.
34. Lachos VH, Bolfarine H, Arellano-Valle RB, Montenegro LC: Likelihood based inference for multivariate skew-normal regression models. Commun. Stat. Theory Methods 2007, 36: 1769–1786. 10.1080/03610920601126241