Research Article

# Square Root Transformation of the Quadratic Equation

Iheanyi S. Iwueze and Johnson Ohakwe

ABSTRACT

In this study, we determined the necessary and sufficient conditions on the coefficients of a quadratic equation such that its square root transformation remains approximately a quadratic equation or exactly a linear equation. Numerical examples are used to illustrate the results obtained.


 How to cite this article: Iheanyi S. Iwueze and Johnson Ohakwe, 2011. Square Root Transformation of the Quadratic Equation. Asian Journal of Mathematics & Statistics, 4: 186-199. DOI: 10.3923/ajms.2011.186.199 URL: https://scialert.net/abstract/?doi=ajms.2011.186.199

Received: February 21, 2011; Accepted: May 25, 2011; Published: August 10, 2011

INTRODUCTION

A quadratic equation is a polynomial equation of the second degree. The general form is:

 y = ax² + bx + c (1)

For a = 0, Eq. 1 becomes a linear equation. For our discussion, a, b and c (called coefficients) are real numbers and the domain of y is the set of real numbers.

The quadratic Eq. 1 has two (not necessarily distinct) solutions, called roots which may or may not be real, given by the quadratic formula:

 x = (-b ± √(b² - 4ac)) / (2a) (2)

Since a, b and c are real numbers and the domain of y is the set of real numbers, Eq. 1 can have one distinct real root (b² - 4ac = 0), sometimes called a double root, two distinct real roots (b² - 4ac > 0), or two distinct complex roots (b² - 4ac < 0) which are complex conjugates of each other.
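This classification can be checked directly from the sign of the discriminant. The sketch below (the function name and test values are ours, not from the paper) solves ax² + bx + c = 0 by the quadratic formula in each of the three cases:

```python
import math

def quadratic_roots(a, b, c):
    """Solve a*x**2 + b*x + c = 0 and classify the roots by the
    discriminant d = b**2 - 4*a*c."""
    d = b * b - 4 * a * c
    if d > 0:
        r = math.sqrt(d)
        return ("two distinct real roots",
                ((-b + r) / (2 * a), (-b - r) / (2 * a)))
    if d == 0:
        return ("one double root", (-b / (2 * a),))
    r = math.sqrt(-d)  # complex-conjugate pair
    return ("two complex conjugate roots",
            (complex(-b / (2 * a), r / (2 * a)),
             complex(-b / (2 * a), -r / (2 * a))))

print(quadratic_roots(1, -3, 2))  # two distinct real roots: 2.0 and 1.0
```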

Budd and Sangwin (2004a, b) have shown that the quadratic equation has many applications and has played a fundamental role in human history. The different and important applications include amongst others the grandfather clocks, areas, singing, tax, architecture, acceleration, paper, radio, telescope, shooting and jumping.

In pure mathematics, one important use of the quadratic equation is in the solution of higher-degree equations that can be brought into quadratic form. For example, the 8th-degree equation in x given by:

 ax⁸ + bx⁴ + c = 0 (3)

can be written as a quadratic equation in a new variable z:

 az² + bz + c = 0 (4)

where z = x⁴.
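To illustrate the reduction (the polynomial x⁸ - 17x⁴ + 16 = 0 is our own example, not from the paper): solving the quadratic in z = x⁴ and then inverting the substitution recovers all the real roots in x.

```python
import math

# Solve x**8 - 17*x**4 + 16 = 0 via the substitution z = x**4.
a, b, c = 1.0, -17.0, 16.0
d = math.sqrt(b * b - 4 * a * c)                     # discriminant 225, so d = 15
z_roots = [(-b + d) / (2 * a), (-b - d) / (2 * a)]   # z = 16 and z = 1
# each positive z contributes the two real solutions x = +/- z**(1/4)
x_roots = sorted(x for z in z_roots if z > 0 for x in (z ** 0.25, -(z ** 0.25)))
print(x_roots)
```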

A very important use of the quadratic equation is in the study of statistical relationships between variables. In many instances, the relationship between two random variables X and Y is non-linear and is said to be curvilinear. In such instances, a curved (or curvilinear) function is needed. The simplest such curvilinear function, which supports the idea of parsimony in modeling, is the quadratic statistical model:

 Yᵢ = β₀ + β₁Xᵢ + β₂Xᵢ² + eᵢ (5)

where eᵢ ~ N(0, σ²). Equation 5 is a regression model; X is called the independent or predictor variable while Y is called the dependent or response variable (Draper and Smith, 1981; Graybill and Iyer, 1994). Quadratic relationships in nature are numerous: for example, Wooten and Tsokos (2010) modeled carbon dioxide emission into the atmosphere as a quadratic function of time. Furthermore, Chauhan et al. (2006), in their study, "Passive Modified Atmosphere Packaging of Banana (Cv. Cavendish) Using Silicone Membrane", used response surface methodology to establish quadratic relationships between the variables fill weight and silicone membrane diffusion area (at a constant fill volume and storage temperature) and head-space oxygen, carbon dioxide and storage life.
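A minimal sketch of fitting the quadratic model in Eq. 5 by ordinary least squares; the coefficient values, noise level and use of `np.polyfit` are our own illustration, not the authors':

```python
import numpy as np

rng = np.random.default_rng(0)
b0, b1, b2 = 2.0, -1.5, 0.75                 # invented "true" coefficients
X = np.linspace(0.0, 10.0, 200)
Y = b0 + b1 * X + b2 * X ** 2 + rng.normal(0.0, 0.1, X.size)  # e ~ N(0, 0.1**2)

# least-squares fit of a degree-2 polynomial; polyfit returns the
# highest-degree coefficient first
b2_hat, b1_hat, b0_hat = np.polyfit(X, Y, 2)
print(round(b0_hat, 2), round(b1_hat, 2), round(b2_hat, 2))
```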

Sometimes a transformation of the response variable is desired. Transformation is a mathematical operation that changes the measurement scale of a variable and is usually done to make a set of variables usable with a particular statistical test or method (Iwueze et al., 2008). Reasons for transformation can be found in Iwueze and Akpanta (2007). Selecting the best transformation can be a complex issue and the usual statistical technique is to estimate both the transformation and the required model for the transformed variable (W) at the same time (Box and Cox, 1964):

 W = (Yᵠ - 1)/λ for λ ≠ 0;  W = ln Y for λ = 0, where φ = λ (6)

Akpanta and Iwueze (2009) have shown how to apply the Bartlett (1947) transformation technique to time series data without considering the time series model structure. For time series data that require transformation, we split the observed time series Yₜ, t = 1, 2, …, n chronologically into m fairly equal groups and compute the mean (X̄ᵢ) and standard deviation (σ̂ᵢ) of each group, i = 1, 2, …, m. Akpanta and Iwueze (2009) showed that Bartlett's transformation for time series data is to regress the natural logarithms of the group standard deviations against the natural logarithms of the group means and determine the slope, β, of the relationship:

 ln σ̂ᵢ = α + β ln X̄ᵢ, i = 1, 2, …, m (7)

Akpanta and Iwueze (2009) showed that Bartlett’s transformation may also be regarded as the power transformation:

 W = Y^(1-β) (8)

Applications of Eq. 7 and 8 often lead to the six transformations (often found in statistical literature for data analysis problems):

 β = 0: W = Y (no transformation); β = 1/2: W = √Y; β = 1: W = ln Y; β = 3/2: W = 1/√Y; β = 2: W = 1/Y; β = 3: W = 1/Y² (9)
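The grouping-and-regression procedure of Eq. 7 can be sketched as follows (the helper name and grouping scheme are ours); the returned slope β is then matched against the transformations above to pick one:

```python
import math
import statistics as st

def bartlett_slope(series, m):
    """Split the series chronologically into m groups, then regress
    ln(group standard deviation) on ln(group mean); the slope beta
    identifies the power transformation W = Y**(1 - beta)."""
    size = len(series) // m
    groups = [series[i * size:(i + 1) * size] for i in range(m)]
    xs = [math.log(st.mean(g)) for g in groups]    # ln of group means
    ys = [math.log(st.stdev(g)) for g in groups]   # ln of group std devs
    xbar, ybar = st.mean(xs), st.mean(ys)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    return sxy / sxx
```

For a series whose group standard deviation grows like the square root of the group mean, the slope comes out near 0.5, pointing to the √Y transformation.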

Transformations as outlined in Eq. 9 alter the fundamental nature of the curve associated with the response variable. For example, the square transformation of Eq. 1 gives:

 y² = (ax² + bx + c)² (10)

which can be written as:

 y² = α₄x⁴ + α₃x³ + α₂x² + α₁x + α₀ (11)

where α₄ = a², α₃ = 2ab, α₂ = b² + 2ac, α₁ = 2bc, α₀ = c². That is, the square of a quadratic equation can never be a quadratic equation but is rather a polynomial equation of order 4.
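These coefficient identities can be verified by squaring the polynomial directly; the helper and the values a = 2, b = 3, c = 5 are our own illustration:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

# y = a*x^2 + b*x + c with illustrative coefficients
a, b, c = 2, 3, 5
y_squared = poly_mul([c, b, a], [c, b, a])
# alpha_0..alpha_4 = c^2, 2bc, b^2 + 2ac, 2ab, a^2
assert y_squared == [c * c, 2 * b * c, b * b + 2 * a * c, 2 * a * b, a * a]
print(y_squared)  # [25, 30, 29, 12, 4]
```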

The purpose of this study is to examine the square root transformation of the quadratic equation with a view to placing necessary and sufficient conditions on its coefficients such that the transformed variable satisfies a quadratic or linear equation.

SQUARE ROOT TRANSFORMATION OF THE QUADRATIC EQUATION

As already stated, our aim is to take the square root transformation of the quadratic Eq. 1 and place necessary and sufficient conditions on its coefficients so that the transformed variable is approximately a quadratic equation or exactly a linear equation. Taking w = √y as the transformed variable (with c > 0) and using Maclaurin's series expansion, we obtain:

 w = √y = √(c + bx + ax²) = √c + (b/(2√c))x + ((4ac - b²)/(8c√c))x² + … (12)

 w ≈ γ + βx + αx² (13)

Equating corresponding coefficients in Eq. 12 and 13, we obtain:

 γ = √c (14)

 β = b/(2√c) (15)

 α = (4ac - b²)/(8c√c) (16)

To approximate Eq. 12 by a quadratic Eq. 13, we equate the coefficients of x3, x4, x5, x6, …, to zero and solve for the coefficient of the quadratic term needed for the required approximation. When we do this, we obtain the values of a given in Table 1. From Table 1, it does appear that with γ, β, α given by Eq. 14-16, respectively, if:

 a ≈ b²/(4c), then w = √(ax² + bx + c) ≈ γ + βx + αx² (17)

It is important to note that:

 a = b²/(4c)

is a common root to all the polynomials obtained as coefficients of x³, x⁴, x⁵, x⁶, …, in Eq. 12. More important is the fact that under the necessary condition:

 a = b²/(4c)

the square root transformation of the quadratic equation admits a straight line since:

 y = (b²/(4c))x² + bx + c = (√c + (b/(2√c))x)², so that w = √y = √c + (b/(2√c))x

 Table 1: Coefficient of the quadratic term needed for quadratic approximation of the quadratic equation: y = ax² + bx + c, a ≠ 0

This leads to an important result in the study of quadratic equations as stated in Theorem 1.

Theorem 1: Linearization of the quadratic equation: Let y = ax² + bx + c, a ≠ 0, b ≠ 0 and c > 0 be a quadratic equation. If:

 a = b²/(4c)

then w = √y is a straight line given by w = γ + βx, where γ = √c and β = b/(2√c), irrespective of the values of b and c.

Proof: Let y = ax² + bx + c, a ≠ 0, b ≠ 0 and c > 0 with a = b²/(4c). Then:

 y = (b²/(4c))x² + bx + c = (√c + (b/(2√c))x)² (18)

Therefore:

 √y = √c + (b/(2√c))x

Hence:

 w = √y = γ + βx, with γ = √c and β = b/(2√c)

It is easy to show that the quadratic Eq. 1 with a = kb²/c has:

 • One distinct real root when k = 1/4 (or equivalently b² - 4ac = 0), whose value is x = -2c/b
 • Two distinct real roots when k < 1/4
 • Two distinct complex roots when k > 1/4
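The k = 1/4 case is easy to verify numerically: with a = b²/(4c) (and c > 0), √y coincides with the line √c + (b/(2√c))x. The coefficients b = 4, c = 9 below are our own illustration:

```python
import math

b, c = 4.0, 9.0                 # illustrative coefficients, with c > 0
a = b * b / (4 * c)             # the linearizing choice, i.e., k = 1/4
gamma = math.sqrt(c)            # intercept sqrt(c) = 3
beta = b / (2 * math.sqrt(c))   # slope b/(2*sqrt(c)) = 2/3

for x in [0.0, 0.5, 1.0, 2.0, 5.0]:
    w = math.sqrt(a * x * x + b * x + c)
    assert abs(w - (gamma + beta * x)) < 1e-12
print("sqrt(y) is exactly the line gamma + beta*x when a = b^2/(4c)")
```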

DETERMINATION OF k FOR GIVEN b AND c

It is obvious that the problem we are trying to solve is that of choosing the quadratic coefficient (a) when the linear coefficient (b) and the constant term (c) are known, such that the square root transformation of the quadratic equation remains a quadratic or linear equation. Consider:

 y = ax² + bx + c = c(1 + (b/c)x + (a/c)x²) (19)

When a = kb²/c, we obtain:

 y = c(1 + (b/c)x + k(b/c)²x²) (20)

Hence, if:

 Δ = b/c (21)

and:

 y = c(1 + Δx + kΔ²x²) (22)

 y* = y/c = 1 + Δx + kΔ²x² (23)

Using Eq. 14 through Eq. 16 and the fact that w = √y = √c √(y*), we obtain from Eq. 23 the following:

 γ* = 1, β* = Δ/2 (24)

 α* = ((4k - 1)/8)Δ² (25)

From Eq. 23, our problem reduces to the consideration of the quadratic equation:

 y* = 1 + Δx + kΔ²x² (26)

whose square root transformation becomes:

 w* = √(y*) = √(1 + Δx + kΔ²x²) (27)

 w* ≈ γ* + β*x + α*x² (28)

where γ* = 1, β* = Δ/2 and α* = ((4k - 1)/8)Δ². That is, for given b and c, Δ = b/c and a = kb²/c. Given an available set of observations (Xᵢ, Yᵢ), i = 1, 2, …, n satisfying Eq. 27, we fit a quadratic model to Wᵢ = √Yᵢ. Let the fitted quadratic equation be:

 Ŵ = θ̂₀ + θ̂₁X + θ̂₂X² (29)

We accept the value of k for which the null hypothesis:

 H₀: θ₀ = γ*, θ₁ = β*, θ₂ = α* (30)

against the alternative hypothesis:

 H₁: at least one of the equalities in H₀ does not hold (31)

is not rejected at a suitable level of significance, δ. For a perfect fit we would require that R² = 1, where R² is the coefficient of multiple determination between W and the fitted values Ŵ = θ̂₀ + θ̂₁X + θ̂₂X² (Draper and Smith, 1981).

The model under consideration is

 Wᵢ = θ₀ + θ₁Xᵢ + θ₂Xᵢ² + eᵢ, i = 1, 2, …, n (32)

which in vector form becomes:

 W = Xθ + e (33)

where W = (W₁, W₂, …, Wₙ)ᵀ, θ = (θ₀, θ₁, θ₂)ᵀ, e = (e₁, e₂, …, eₙ)ᵀ and X is the n×3 matrix whose ith row is (1, Xᵢ, Xᵢ²).

Since XᵀX is nonsingular, we can estimate θ as:

 θ̂ = (XᵀX)⁻¹XᵀW (34)

The residual sum of squares for this analysis is given as:

 RSS₁ = (W - Xθ̂)ᵀ(W - Xθ̂) (35)

 Table 2: Computation of F for values of k when y* = 1.0 + Δx + kΔ²x², Δ = 0.20, n = 100

This sum of squares has (n - 3) degrees of freedom. The linear hypothesis (Eq. 30) to be tested has 3 degrees of freedom because it places three conditions on the parameters θ₀, θ₁, θ₂. The residual sum of squares under the linear hypothesis Eq. 30 is given by:

 RSS₂ = (W - Xθ*)ᵀ(W - Xθ*), where θ* = (γ*, β*, α*)ᵀ (36)

A test of the hypothesis Eq. 30 is made by considering the ratio:

 F = [(RSS₂ - RSS₁)/3] / [RSS₁/(n - 3)] (37)

and referring it to the F(3, n - 3) distribution. We reject H₀ when F exceeds the critical value of F(3, n - 3) at significance level δ.

SIMULATION RESULTS

Using the k values of Table 1 and values of k close to k = 1/4, 100 values of Eq. 26 were simulated for Δ = 0.2 and Δ = 0.1 (RSS₂ values are very large for large values of Δ). Our simulation results are summarized in Tables 2 and 3. From Tables 2 and 3, RSS₂ when compared with RSS₁ (and hence F) is very large for all values of k except k = 1/4.

From Tables 2 and 3, it is evident that the null hypothesis Eq. 30 is rejected for all k ≠ 1/4 but not rejected for k = 1/4.
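A sketch of this simulation (our reconstruction, not the authors' code): generate w = √(1 + Δx + kΔ²x²) at x = 1, …, n, fit the quadratic by least squares to get RSS₁, and evaluate RSS₂ at the hypothesised coefficients γ* = 1, β* = Δ/2, α* = ((4k - 1)/8)Δ²:

```python
import numpy as np

def rss_pair(k, d=0.2, n=100):
    """Return (RSS1, RSS2) for w = sqrt(1 + d*x + k*d**2*x**2), x = 1..n:
    RSS1 from the least-squares quadratic fit, RSS2 from the
    hypothesised coefficients of Eq. 30."""
    x = np.arange(1, n + 1, dtype=float)
    w = np.sqrt(1.0 + d * x + k * d * d * x * x)
    X = np.column_stack([np.ones(n), x, x * x])        # design matrix
    theta_hat, *_ = np.linalg.lstsq(X, w, rcond=None)  # fitted quadratic
    rss1 = float(np.sum((w - X @ theta_hat) ** 2))
    theta0 = np.array([1.0, d / 2.0, (4.0 * k - 1.0) * d * d / 8.0])
    rss2 = float(np.sum((w - X @ theta0) ** 2))
    return rss1, rss2
```

At k = 1/4 both sums vanish (w is exactly linear), so H₀ is retained; for k away from 1/4, RSS₂ dwarfs RSS₁ and F = ((RSS₂ - RSS₁)/3)/(RSS₁/(n - 3)) is huge, mirroring Tables 2 and 3.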

Equally important from the simulation results is the fact that, given y satisfying Eq. 1, the square root transformation may still assume the quadratic form (R² = 1). However, when k ≠ 1/4, the estimated coefficients do not attain the theoretical values of Eq. 14 through Eq. 16. This is illustrated in the real life example.

 Table 3: Computation of F for values of k when y* = 1.0 + Δx + kΔ²x², Δ = 0.10, n = 100

REAL LIFE EXAMPLE

The data under study (Table 4, Fig. 1) are the monthly gas production figures of Oiltest Nigeria Limited, Port Harcourt, Rivers State of Nigeria for the period January, 1984 to December, 1998. Here, the data are denoted by Yᵢ, i = 1, 2, …, 180. From Fig. 1 it is clear that the series has a quadratic trend curve, which is represented by:

 ŷ = -0.702x² + 131.545x + 617.976 (38)

Here a = -0.702, b = 131.545 and c = 617.976, which implies that k = ac/b² ≈ -0.0251 ≠ 1/4.

To determine the appropriate transformation we adopt Bartlett transformation technique as established by Akpanta and Iwueze (2009). Using Table 4 (m = 15):

 (39)

Comparing Eq. 8 and 39, the estimated slope could be approximated by 0.5, suggesting the square root transformation. To use the square root transformation, we must confirm this approximation by testing the null hypothesis H₀: β = 0.5 against the alternative hypothesis H₁: β ≠ 0.5, using the test statistic (Draper and Smith, 1981):

 t = (β̂ - 0.5)/SE(β̂) (40)

Under H₀, the test statistic (Eq. 40) has the Student t-distribution with (m - 2) degrees of freedom. The computed value of the test statistic is t = -0.2376 and the tabulated value is t(13, 0.975) = 2.16. Since |t| < 2.16, we cannot reject the null hypothesis and we conclude that the square root transformation is the most appropriate.
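The t-test of Eq. 40 is the usual simple-regression slope test. A sketch (the function name and example data are our own, not the gas-production figures) using the Bartlett regression of ln σ̂ᵢ on ln X̄ᵢ:

```python
import math
import statistics as st

def t_for_beta_half(xs, ys):
    """Given xs = ln group means and ys = ln group std devs, return the
    t statistic for H0: beta = 0.5 in the regression ys = alpha + beta*xs,
    referred to the t-distribution with m - 2 degrees of freedom."""
    m = len(xs)
    xbar, ybar = st.mean(xs), st.mean(ys)
    sxx = sum((x - xbar) ** 2 for x in xs)
    beta_hat = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    alpha_hat = ybar - beta_hat * xbar
    rss = sum((y - alpha_hat - beta_hat * x) ** 2 for x, y in zip(xs, ys))
    se = math.sqrt(rss / (m - 2) / sxx)   # standard error of beta_hat
    return (beta_hat - 0.5) / se
```

The null hypothesis is not rejected when |t| falls below the t(m - 2) critical value, as happens with the gas-production data above.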

 Fig. 1: A time plot (and its trend curve) of the monthly data on gas production in KSm³/d by Oiltest Nigeria Limited (January, 1984 through December, 1998)

 Table 4: Monthly data on gas production in KSm³/d by Oiltest Nigeria Limited (January, 1984 through December, 1998) Source: Biu (2010)

 Fig. 2: A time plot (and its trend curve) of the square root transformed monthly data on gas production in KSm³/d by Oiltest Nigeria Limited (January, 1984 through December, 1998)

Note that the theoretical coefficients implied by Eq. 14 through Eq. 16 for the trend curve (38) are γ = √c ≈ 24.86, β = b/(2√c) ≈ 2.65 and α = (4ac - b²)/(8c√c) ≈ -0.155.

The fitted quadratic equation to the square root transformed data (Fig. 2) is:

 (41)

which does not satisfy Eq. 14 through Eq. 16. Comparing Eq. 41 with the corresponding theoretical values of Eq. 14 through Eq. 16, we obtain the following results:

It is clear that the estimated coefficients (Eq. 41) after the square root transformation are 3 or more times larger than the theoretical values with respect to γ and β and 3 times smaller with respect to α.

This confirms our simulation results that the estimated coefficients after the square root transformation attain the theoretical values (Eq. 14 through Eq. 16) only when k = 1/4.

CONCLUSION

The fundamental finding of this study is that the square root transformation of the quadratic equation:

 y = ax² + bx + c, a ≠ 0 (42)

will be a linear equation:

 w = √y = γ + βx (43)

if and only if:

 a = b²/(4c)

(or b² - 4ac = 0). It is known that when b² - 4ac = 0, the quadratic Eq. 42 has one distinct real root. In addition to this, we stated that when a = b²/(4c), the quadratic Eq. 42 has one distinct root and its square root transformation is a straight line.

For the square root transformation of Eq. 42 to be represented by the quadratic equation:

 w = √y ≈ γ + βx + αx² (44)

it was shown that there exists a constant k such that:

 a = kb²/c (45)

and:

 γ = √c (46)

 β = b/(2√c) (47)

 α = (4ac - b²)/(8c√c) (48)

Our simulation results revealed that the square root transformation of Eq. 42 could be represented by a quadratic equation, but Eq. 46 through Eq. 48 would not hold except when k = 1/4. That is, we cannot predict the coefficients of the resultant quadratic equation for the square root transformation of a quadratic equation when k ≠ 1/4.

REFERENCES
Akpanta, A.C. and I.S. Iwueze, 2009. On applying the Bartlett transformation method to time series data. J. Math. Sci., 20: 227-243.

Bartlett, M.S., 1947. The use of transformations. Biometrics, 3: 39-52.

Biu, O.E., 2010. Application of intervention analysis to oil and gas production series in Niger Delta, Nigeria. M.Sc.Thesis, University of Port Harcourt, Port Harcourt, Nigeria.

Box, G.E.P. and D.R. Cox, 1964. An analysis of transformations. J. R. Stat. Soc. Ser. B, 26: 211-252.

Budd, C. and C. Sangwin, 2004. 101 uses of a quadratic equation. Part I. Plus Maths online Magazine, Living Mathematics, 29, 2004.

Budd, C. and C. Sangwin, 2004. 101 uses of a quadratic equation. Part II. Plus Maths online Magazine, Living Mathematics, 30, 2004.

Chauhan, O.P., P.S. Raju, D.K. Dasgupta and A.S. Bawa, 2006. Passive modified atmosphere packaging of banana (cv. Cavendish) using silicone membrane. Am. J. Food Technol., 1: 129-138.

Draper, N.R. and H. Smith, 1981. Applied Regression Analysis. John Wiley and Sons Inc., New York.

Graybill, F.A. and H.K. Iyer, 1994. Regression Analysis: Concepts and Applications. Duxbury Press, Belmont, CA, USA., ISBN: 0534198694.

Iwueze, I.S. and A.C. Akpanta, 2007. Effect of the logarithmic transformation on the trend-cycle component. J. Applied Sci., 7: 2414-2422.