INTRODUCTION
Consider a matched-pair design with p-dimensional responses. With θ =
(θ_{1}, θ_{2}, …, θ_{p})′ the
difference, treatment one minus treatment two, of the mean responses, one may
test the null hypothesis, H_{0}: θ_{1} = θ_{2}
= … = θ_{p} = 0, to determine if there is a significant difference
in the two treatments. If one believes that, for each coordinate, the mean response
for treatment one is at least as large as the mean response for treatment two,
then the alternative can be constrained by H_{1}: θ_{i}≥0
for i = 1, 2, …, p. Follmann (1996) discussed other
situations in which these order-restricted hypotheses are of interest.
Let H_{1}−H_{0} denote the hypothesis that H_{1} holds but H_{0} does not and let ~H_{0} denote the hypothesis that H_{0} does not hold. Let X_{1}, X_{2}, …, X_{n} be a random sample from the p-dimensional multivariate normal distribution with unknown mean θ = (θ_{1}, θ_{2}, …, θ_{p})′ and positive definite covariance matrix V. The sample mean and unbiased sample covariance are:

X̄ = n^{−1} Σ_{j=1}^{n} X_{j} and Ŝ = (n−1)^{−1} Σ_{j=1}^{n} (X_{j}−X̄)(X_{j}−X̄)′
It is well known that Ŝ is positive definite with probability one for
n>p. Kudo (1963), Shorack (1967)
and Perlman (1969) derive the Likelihood Ratio Test
(LRT) of H_{0} versus H_{1}−H_{0} if V is known, known
up to a multiplicative constant or completely unknown, respectively. By V known
up to a multiplicative constant, we mean V = σ^{2}V_{0}
with V_{0} known and σ unknown.
Tang et al. (1989) proposed approximate LRTs
and Follmann (1996) studied one-sided modifications
of the nondirectional χ^{2} and Hotelling’s T^{2}
tests of H_{0} versus ~H_{0}. Follmann’s tests reject H_{0}
if the appropriate nondirectional ones do with significance level 2α and:

X̄_{1} + X̄_{2} + … + X̄_{p} > 0 (1)
The tests that use (Eq. 1) or a variant of it are called
Follmann-type tests and they include those in Chongcharoen
et al. (2002) which incorporate information about the off-diagonal
elements of V in Eq. 1. The latter kind of Follmann-type tests
are called the new tests. All three of these procedures, approximate LRTs, Follmann’s
tests and new tests, are easier to implement than the LRTs but the two Follmann-type
tests are easier to use than the approximate LRT. In particular, the Follmann-type
tests utilize chi-square or F critical values but the null distributions of
the approximate LRT statistics are mixtures of chi-square or beta distributions.
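To make the Follmann-type construction concrete, the following is a minimal Python sketch of a one-sided modification of Hotelling's T^{2} for unknown V: reject when the usual two-sided F statistic exceeds its 2α critical value and the sample means sum to a positive value, as in Eq. 1. The function name and the simulated data are illustrative, not the exact tests studied in this paper.

```python
import numpy as np
from scipy import stats

def follmann_t2_test(X, alpha=0.05):
    """Follmann-type one-sided modification of Hotelling's T^2 (a sketch).

    Reject H0: theta = 0 when the usual two-sided T^2 test rejects at
    level 2*alpha AND the sample means sum to a positive value.
    """
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)                  # unbiased sample covariance
    t2 = n * xbar @ np.linalg.solve(S, xbar)     # Hotelling's T^2
    f_stat = (n - p) / (p * (n - 1)) * t2        # ~ F(p, n-p) under H0
    f_crit = stats.f.ppf(1 - 2 * alpha, p, n - p)
    return bool(f_stat > f_crit and xbar.sum() > 0)

rng = np.random.default_rng(0)
X = rng.multivariate_normal([1.0, 1.0, 1.0], np.eye(3), size=50)
print(follmann_t2_test(X))   # a clear positive shift, so the test rejects
```

Because the sum condition holds with probability one half under the symmetric null, pairing it with a level-2α two-sided test yields level α.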
It is clear that for most matched-pair designs, one wants the test to be invariant
under changes in the units of measurement for any or all of the response variables
as well as changes in the order of the response variables. The likelihood function
and the constraint region, H_{1}, are invariant under permutations of
the indices of the response variables and under scale changes for the response
variables. Thus, the LRTs are permutation and scale invariant. Chongcharoen
and Wright (2007) give modified approximate LRTs that are permutation and
scale invariant. In this note, Follmann-type tests that have these invariance
properties are considered.
Attention is focused on V that is known up to a multiplicative constant, i.e., V = σ^{2}V_{0}, or that is completely unknown, although permutation and scale invariant versions of the Follmann-type tests for the case of a known covariance matrix are also described briefly. A scale matrix is a diagonal matrix with positive diagonal elements. The versions of the Follmann-type tests of H_{0} versus H_{1}−H_{0} considered here reject H_{0} for X_{j}, j = 1, 2, …, n and covariance V if and only if they reject H_{0} for Y_{j} = DX_{j}, j = 1, 2, …, n and covariance DVD′ with D either a permutation matrix or a scale matrix.
In this setting, the usual χ^{2} and Hotelling’s T^{2}
tests are permutation and scale invariant. Thus, Follmann’s tests have
the desired invariance properties if one scales the sample means in Eq.
1, i.e., if one divides X̄_{i}
by the square root of V_{i,i}, (V_{0})_{i,i}, or Ŝ_{i,i}
when V is known, known up to a multiplicative constant or unknown, respectively.
These tests are called the invariant Follmann tests. Because the first two are
Follmann’s test applied to a fixed, nonsingular transformation of the
X_{j}, they have the desired significance levels; see Follmann
(1996) and Chongcharoen et al. (2002), respectively.
Theorem 3 shows that the one based on Ŝ also does.
Two permutation and scale invariant versions of the new tests that have significance
level α are considered. In both cases (Eq. 1) is modified.
For the first invariant new tests, the sample mean vector is scaled as above,
then premultiplied by the symmetric square root of the inverse of the correlation
or sample correlation matrix and finally summed. For the second invariant new
tests, the ith sample mean is scaled by multiplying by the square root of the
ith diagonal element of the inverse of V, V_{0}, or Ŝ and then summed. It
should be noted that in the second case, the scaling factors contain information
about the off-diagonal elements of V.
The second approach is shown to be equivalent to using the orthogonal
transformation of the Cholesky factor proposed by Tang et
al. (1989) in the test of Chongcharoen et al.
(2002). In the Monte Carlo study, it is shown that the second invariant
new tests have better powers than the first invariant new tests if one is concerned
about all of θ = (θ_{1}, θ_{2}, …, θ_{p})′
with θ_{i} = 0 or c, i = 1, 2, …, p with c>0.
The powers of all of these permutation and scale invariant tests including
the LRTs will be compared by Monte Carlo simulation elsewhere. There it will
be noted that for p = 3, there is little difference in the powers of the invariant
Follmann’s tests and the second invariant new tests. However, for p≥4,
if one is concerned about the entire alternative region, then the second invariant
new tests have better powers than the invariant Follmann’s tests.
By taking the appropriate differences, the hypotheses H_{0} and H_{1}
arise when testing homogeneity of normal means in the oneway analysis of variance
with an order-restricted alternative (Robertson et al.,
1988). Let n_{i} denote the size of the ith sample and σ_{i}^{2}
the variance of the ith population. If the weights w_{i} = n_{i}/σ_{i}^{2}
are equal, then the following correlation matrices are of interest for a simple
order restriction and a simple tree restriction, respectively:

(R_{S})_{i,j} = I(i = j) − ½I(|i−j| = 1) and (R_{T})_{i,j} = I(i = j) + ½I(i ≠ j) (2)

where I(A) denotes the indicator of A.
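These two correlation structures can be generated programmatically. The sketch below assumes the standard equal-weights forms: adjacent successive differences correlate −1/2 under the simple order, and all differences from the control correlate 1/2 under the simple tree. The function names are illustrative.

```python
import numpy as np

def simple_order_corr(p):
    # Correlations of the successive differences under equal weights:
    # -1/2 for adjacent differences, 0 otherwise.
    R = np.eye(p)
    for i in range(p - 1):
        R[i, i + 1] = R[i + 1, i] = -0.5
    return R

def simple_tree_corr(p):
    # Correlations of the differences from the control under equal weights:
    # 1/2 for every pair i != j.
    return 0.5 * (np.eye(p) + np.ones((p, p)))

print(simple_order_corr(3))
print(simple_tree_corr(3))
```

Both matrices are positive definite for any p, so they are valid covariance choices in the simulations below.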
The new tests: The permutation and scale invariant versions of the Follmann-type tests that incorporate information about the off-diagonal elements of V in condition (Eq. 1), i.e., the new tests, are presented. For the three types of covariance matrices considered here, define:
For the nondirectional tests of H_{0} versus ~H_{0} in these three cases, one may use the test statistics:
For V known up to a multiplicative constant, Chongcharoen
et al. (2002) discussed a version of Follmann’s test, a version
of the new test and the statistic F_{1}. The Follmann-type one-sided
modifications of these tests are considered. If Y_{j} = DX_{j}
with D nonsingular, then Y_{j} has covariance DVD′ and Ȳ = DX̄; the statistic χ^{2},
which is not changed by this transformation, is permutation and scale invariant.
Similarly, F_{1} and F_{2} are shown to be permutation and scale
invariant.
V known up to a multiplicative constant: Suppose V = σ^{2}V_{0}
with V_{0} known. For an arbitrary symmetric, nonsingular matrix B,
let B^{−1/2} denote the symmetric square root of B^{−1}. Following
Tang et al. (1989), let C denote the Cholesky
factor of V_{0}^{−1}, i.e., the unique upper triangular matrix
C with C′C = V_{0}^{−1}. With F_{2α;p,(n−1)p}
the (1−2α)th quantile of the F distribution with degrees of
freedom p and (n−1)p, respectively, let N_{1R} (N_{1C}) reject
H_{0} if F_{1}>F_{2α;p,(n−1)p} and:
The example in the appendix shows that N_{1R}, which was studied by
Chongcharoen et al. (2002), is not scale invariant
and N_{1C} is not permutation invariant. (Incidentally, the proofs of
Theorems 1 and 2 by Chongcharoen and Wright (2007) show
that N_{1C} is scale invariant and N_{1R} is permutation invariant).
To obtain a permutation and scale invariant test, the components of X̄ are first
scaled and then premultiplied by the symmetric square root of
R^{−1}, with R = M_{1}V_{0}M_{1} the correlation
matrix of an observation and M_{1} defined as in Eq. 3.
Thus, let N_{1S} reject H_{0} if:
Clearly N_{1S} is scale invariant and Theorem 1 shows that it is permutation invariant.
Another way to scale the sample means is to multiply by the square root of the diagonal elements of the inverse of V_{0}, which incorporates information about the off-diagonal elements of V. Thus the second invariant version of the new test, denoted by N_{1T}, rejects H_{0} if:
It is straightforward to show that U_{1T} and consequently N_{1T},
is permutation and scale invariant. Because N_{1R}, N_{1C},
N_{1S} and N_{1T} are the Follmann test developed by Chongcharoen
et al. (2002) applied to nonsingular transformations of the X_{j},
they all have significance level α.
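The two scalings can be sketched in Python. Here u1s and u1t are illustrative implementations of the sum statistics behind N_{1S} (scale by M_{1}, premultiply by the symmetric square root of R^{−1}, sum) and N_{1T} (weight each mean by the square root of the corresponding diagonal element of V_{0}^{−1}, sum); the function names and the numeric values of V_{0} and X̄ are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import sqrtm

def u1s(xbar, V0):
    # U_1S: scale each mean by 1/sqrt((V0)_ii), premultiply by the
    # symmetric square root of the inverse correlation matrix, then sum.
    M1 = np.diag(1.0 / np.sqrt(np.diag(V0)))
    R = M1 @ V0 @ M1                              # correlation matrix
    return float(np.sum(np.real(sqrtm(np.linalg.inv(R))) @ M1 @ xbar))

def u1t(xbar, V0):
    # U_1T: weight the i-th mean by sqrt of the i-th diagonal element of
    # V0^{-1}; these weights reflect the off-diagonal elements of V0.
    return float(np.sum(np.sqrt(np.diag(np.linalg.inv(V0))) * xbar))

V0 = np.array([[ 1.0, 0.4, -0.4],
               [ 0.4, 1.0,  0.4],
               [-0.4, 0.4,  1.0]])
xbar = np.array([0.3, 0.2, 0.1])
print(u1s(xbar, V0), u1t(xbar, V0))
```

Both statistics are unchanged when the coordinates are permuted or rescaled together with V_{0}, which is exactly the invariance that U_{1R} and U_{1C} lack.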
Let C^{o} be the orthogonal transformation of C recommended by Tang
et al. (1989) and U_{1C}^{o} be the sum of the components of C^{o}X̄.
Theorem 2 shows that U_{1T}>0 if and only if U_{1C}^{o}>0.
Thus, the Follmann-type tests based on U_{1T} and U_{1C}^{o}
are equivalent.
The powers of N_{1S} and N_{1T} are compared below. If one considers the entire alternative region, then based on the Monte Carlo study described there, N_{1T} seems to have better powers.
Unknown V: For V completely unknown, the analogues of N_{1S}
and N_{1T} are considered. With M_{2} as in Eq.
3 and R̂ = M_{2}ŜM_{2}
the sample correlation matrix, let N_{2S} reject H_{0} if:
Since U_{2S} is scale invariant, so is N_{2S}. Furthermore, a proof, like the one given for Theorem 1, shows that N_{2S} is permutation invariant. The test N_{2T} rejects H_{0} if:
Clearly, U_{2T} and N_{2T} are permutation and scale invariant. One could base a test on U_{2R} or U_{2C}, which are defined like U_{1R} and U_{1C} but use Ŝ^{−1} rather than V_{0}^{−1}. The former is not scale invariant and the latter is not permutation invariant.
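For the unknown-V case, the analogous sum statistics simply replace V_{0} by the sample covariance Ŝ. The sketch below is illustrative (the function name and data are assumptions); its invariance under permuting or rescaling the response coordinates can be checked directly.

```python
import numpy as np
from scipy.linalg import sqrtm

def u2_stats(X):
    """Sum statistics underlying N_2S and N_2T when V is unknown (a sketch)."""
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)                   # unbiased sample covariance
    M2 = np.diag(1.0 / np.sqrt(np.diag(S)))
    Rhat = M2 @ S @ M2                            # sample correlation matrix
    u2s = float(np.sum(np.real(sqrtm(np.linalg.inv(Rhat))) @ M2 @ xbar))
    u2t = float(np.sum(np.sqrt(np.diag(np.linalg.inv(S))) * xbar))
    return u2s, u2t

rng = np.random.default_rng(3)
X = rng.multivariate_normal([0.2, 0.1, 0.3], np.eye(3), size=25)
print(u2_stats(X))
```

Because Ŝ transforms as DŜD′ under Y_{j} = DX_{j}, the same cancellation that works for the known-V_{0} statistics applies here.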
For the following result, whose proof is straightforward, R^{k} denotes the k-dimensional reals.
Theorem 1: N_{1S}, defined by Eq. 6, is permutation invariant.
Proof: We only need to show that U_{1S}, given in Eq.
6, is permutation invariant. Let π be a permutation of {1, 2, …,
p} and D be the corresponding matrix, i.e., D_{i,j} = I(π(i) =
j) for 1≤i, j≤p. Note that for B an arbitrary p×p matrix, (DBD′)_{i,j}
= B_{π(i),π(j)}. With M_{1} defined in Eq.
3, recall that X_{j} has covariance and correlation matrices σ^{2}V_{0}
and R = M_{1}V_{0}M_{1}, respectively. Because DD′
= D′D = I, DX_{j} has covariance and correlation matrices σ^{2}DV_{0}D′
and R_{*} = DRD′, respectively. Corresponding to M_{1},
let M_{1*} denote the analogous diagonal scaling matrix for DX_{j}.
Then M_{1*} = DM_{1}D′.
The symmetric square root of R^{−1} is R^{−1/2} = O′E^{−1/2}O, where O is orthogonal
and E = ORO′ is a diagonal matrix with the eigenvalues of R on the diagonal.
Now O_{*} = DOD′ is orthogonal. With E_{*} = O_{*}R_{*}O_{*}′
= DED′, E_{*} is diagonal and its diagonal is a permutation of
the diagonal of E. Thus:

E_{*}^{−1/2} = DE^{−1/2}D′,

R_{*}^{−1/2} = O_{*}′E_{*}^{−1/2}O_{*} = DR^{−1/2}D′,

R_{*}^{−1/2}M_{1*}DX̄ = DR^{−1/2}M_{1}X̄

and, because summing the components of a vector is unaffected by premultiplication by D, U_{1S} is permutation invariant.
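The key matrix identity in this proof can be spot-checked numerically. The sketch below uses an arbitrary positive definite V_{0} and permutation (assumed values) and verifies R_{*}^{−1/2} = DR^{−1/2}D′.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
p = 4
A = rng.standard_normal((p, p))
V0 = A @ A.T + p * np.eye(p)              # an arbitrary positive definite V0
M1 = np.diag(1.0 / np.sqrt(np.diag(V0)))
R = M1 @ V0 @ M1                          # correlation matrix

D = np.eye(p)[[2, 0, 3, 1]]               # a permutation matrix D
R_star = D @ R @ D.T                      # correlation matrix after permuting

lhs = np.real(sqrtm(np.linalg.inv(R_star)))
rhs = D @ np.real(sqrtm(np.linalg.inv(R))) @ D.T
print(np.allclose(lhs, rhs))              # the identity R*^(-1/2) = D R^(-1/2) D'
```

The identity holds because the symmetric positive definite square root is unique, so conjugating by D commutes with taking the square root.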
Theorem 2: With U_{1T} and U_{1C}^{o} defined in Eq. 7 and after Eq. 7, U_{1T}>0 if and only if U_{1C}^{o}>0.
Proof: Let C be the Cholesky factor of the inverse of V_{0},
let d_{C} be defined as in Tang et al. (1989) and let J, e_{1},
e_{2}, …, e_{p} be p-dimensional vectors with J_{j}
= 1 and (e_{i})_{j} = I(i = j) for 1≤i, j≤p. Let r be the
permutation of {1, 2, …, p} given by Tang et al.
(1989) that is based on the columns of the inverse of V_{0}. (The
proof given is valid for any permutation of {1, 2, …, p}.) Let Q_{2}
be the orthogonal matrix determined by the Gram-Schmidt orthogonalization process
applied to J, e_{2}, e_{3}, …, e_{p} in the order
listed, which we denote by Q_{2} = GS(J, e_{2}, e_{3},
…, e_{p}). Similarly, define Q_{1} = GS(d_{C},
Ce_{r(1)}, Ce_{r(2)}, …, Ce_{r(p−1)}). Then C^{o}
= Q_{2}Q_{1}′C is the orthogonal transformation of the
Cholesky factor by Tang et al. (1989) and U_{1C}^{o}
is the sum of the components of C^{o}X̄. By the definition of d_{C}, for i = 1, 2,
…, p:

Writing the ith column of C as Ce_{i}, the following algebra completes
the proof:
Theorem 3: Let Q and L be real valued functions defined on R^{np},
with Q even and L odd, let c be real and let X be an np-dimensional random vector
with X and −X identically distributed. If P(Q(X)>c) = 2α and P(L(X) = 0) = 0, then P(Q(X)>c and L(X)>0) = α.

The np-dimensional data vector is symmetric under H_{0} and, as a function of the data vector, F_{2} is even and U_{2S} and U_{2T} are odd. Thus, N_{2S} and N_{2T} have significance level α. As mentioned earlier, based on the Monte Carlo study described below, if one considers the entire alternative region, then N_{2T} seems to have better powers than N_{2S}.
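The halving argument behind this level claim, an even statistic paired with an odd one under a symmetric null, can be checked by simulation. Here Q and L are generic stand-ins for an even and an odd function, not the paper's F_{2} or U_{2T}:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200_000, 3))     # symmetric null: X and -X same law

Q = np.sum(X**2, axis=1)                  # an even function of the data
L = np.sum(X, axis=1)                     # an odd function of the data
c = 7.81                                  # approx. 0.95 quantile of chi-square(3)

both = np.mean((Q > c) & (L > 0))         # P(Q > c and L > 0), estimated
half = 0.5 * np.mean(Q > c)               # half of P(Q > c), estimated
print(both, half)                         # the two estimates nearly agree
```

Because {Q(X)>c, L(X)>0} and {Q(X)>c, L(X)<0} are equiprobable under symmetry, the one-sided condition cuts the rejection probability exactly in half.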
Known V: Based on the results of the last two subsections, when V is known we recommend the new test, N_{0T}, which, with χ^{2} defined in Eq. 4 and χ^{2}_{2α,p} the (1−2α)th quantile of the chi-square distribution with p degrees of freedom, rejects the null hypothesis if:
Power comparisons: Monte Carlo techniques are used to compare N_{1S}
and N_{1T} as well as N_{2S} and N_{2T}. Following Chongcharoen
et al. (2002), with p = 3 and 6, we simulate multivariate normal
random vectors with covariance V = R for the following correlation matrices,
R = (ρ_{i, j}):
• R_{p,1} = R_{S}, R_{p,2} = R_{T} with R_{S} and R_{T} given in Eq. 2; R_{3,3} (R_{6,3}) with ρ_{i,j} = 0.4 (0.1) for i≠j
• R_{3,4} with ρ_{1,2} = ρ_{2,3} = −0.4 and ρ_{1,3} = 0.4; R_{3,5} with ρ_{1,2} = ρ_{2,3} = 0.4 and ρ_{1,3} = −0.4
• R_{6,4} with ρ_{1,2} = ρ_{1,4} = ρ_{2,5} = ρ_{2,6} = ρ_{3,5} = ρ_{3,6} = ρ_{4,5} = ρ_{4,6} = −0.4 and ρ_{i,j} = 0.4 for other i≠j (11)
Because scale invariant tests are studied, V_{i,i} = 1 for i = 1, 2, …, p is set. As expected, for each of the tests N_{1S}, N_{1T}, N_{2S} and N_{2T}, there is little difference in its power function for the correlation matrix with ρ_{1,2} = ρ_{1,3} = ρ_{2,3} = 0.4 and for R_{3,2} (ρ_{1,2} = ρ_{1,3} = ρ_{2,3} = 0.5). The former R is not discussed any further. Because the tests are permutation invariant, their overall performances for R_{3,4} (R_{3,5}) are the same as those for any R with ρ_{i,j} of magnitude 0.4 and exactly one (two) of the three correlations positive.
Sample sizes are n = 6, 20 and 100, except n = 6 is replaced by n = 10 for p = 6. Mean vectors of the form θ = cυ, with c a constant and υ a vector, are considered. The vector υ is called the direction and c is chosen so that the usual F test based on F_{1} or F_{2} has power equal to 0.70 provided υ is nonnull, i.e., υ≠0. Directions of the form (v_{1}, v_{2}, …, v_{p})′ with v_{i} = 0 or 1 for 1≤i≤p are considered. With 10,000 iterations, the proportion of times each test rejects the null hypothesis is recorded. Throughout, the level of significance is α = 0.05.
All of the tests considered are exact. For all of these tests, all n and all the correlation structures considered, the power estimates under the null hypothesis range from 0.046 to 0.053.
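The power estimation scheme just described can be sketched as follows. Here n2t_test is an illustrative stand-in for N_{2T} (the F statistic at level 2α combined with the condition U_{2T}>0), and the iteration count and settings are simplified relative to the study.

```python
import numpy as np
from scipy import stats

def n2t_test(X, alpha=0.05):
    # Illustrative stand-in for N_2T: the usual F statistic at level 2*alpha
    # combined with the one-sided sum condition U_2T > 0.
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    f_stat = (n - p) / (p * (n - 1)) * n * xbar @ np.linalg.solve(S, xbar)
    u2t = np.sum(np.sqrt(np.diag(np.linalg.inv(S))) * xbar)
    return bool(f_stat > stats.f.ppf(1 - 2 * alpha, p, n - p) and u2t > 0)

def estimate_power(test, theta, R, n, iters=2000, seed=0):
    # Proportion of simulated datasets on which `test` rejects H0.
    rng = np.random.default_rng(seed)
    return sum(test(rng.multivariate_normal(theta, R, size=n))
               for _ in range(iters)) / iters

R = np.identity(3)
null_level = estimate_power(n2t_test, np.zeros(3), R, n=20)
power = estimate_power(n2t_test, 0.5 * np.ones(3), R, n=20)
print(null_level, power)                  # level near 0.05; power well above it
```

In the study proper, c is calibrated so the usual F test has power 0.70 and 10,000 iterations are used, which makes the estimates accurate to roughly ±0.01.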
Now the two tests N_{1S} and N_{1T} are compared. Chongcharoen
et al. (2002) noted that if V_{i,i} = 1 and the V_{i,j}
have the same value for 1≤i≠j≤p, then with U_{1S} defined in
Eq. 6, Eq. 1 holds if and only if U_{1S}>0. For such
V, the diagonal elements of V^{−1} are the same and thus with U_{1T}
defined in Eq. 7, Eq. 1 holds if and only if U_{1T}>0.
For such V, Follmann’s test, N_{1S} and N_{1T} are identical.
It is noted that R_{3,2}, R_{3,3}, R_{6,2} and R_{6,3}
are of this type. For a given R and a given test Ψ of H_{0}
versus H_{1}−H_{0}, let a(Ψ) and m(Ψ) be the average
and minimum, respectively, of the power estimates of Ψ over the 2^{p}−1
nonnull directions considered here.
First, p = 3 is considered. For R_{3,1} and R_{3,4} with n = 6, 20 and 100, a(N_{1T})−a(N_{1S}) ranges from −0.002 to 0.000 and m(N_{1T})−m(N_{1S}) ranges from 0.001 to 0.005. The differences in the two tests are more noticeable for R_{3,5}. For p = 3, R_{3,5}, n = 6 and 100 and the seven nonnull directions considered, Table 1 gives the power estimates for N_{1S} and N_{1T}. (It also gives some power estimates for N_{2S} and N_{2T} for this R.) As n ranges from 6 to 100 for R_{3,5}, a(N_{1T})−a(N_{1S}) ranges from −0.006 to −0.003 and m(N_{1T})−m(N_{1S}) ranges from 0.149 to 0.155. N_{1T} is recommended over N_{1S} for this R. Using N_{1T} rather than N_{1S} may result in a slight loss in “average” power but will provide some protection against the low power of N_{1S} in the direction (0, 1, 0)′.
As in the study of Chongcharoen and Wright (2007), with
p = 3 we also consider correlation matrices for which the elements above the
diagonal have different magnitudes and can be positive or negative.
Table 1: For p = 3, α = 0.05, R_{3,5}, n = 6 and n = 100, the values of c and estimates of the powers of N_{1S} and N_{1T} are given for several directions.
c is chosen to make the power of the usual F test equal 0.70. The corresponding values for N_{2S} and N_{2T} are given for n = 6. For n = 100, the estimates for N_{1S} (N_{1T}) do not differ from those for N_{2S} (N_{2T}) by more than 0.006.
If they are all negative with large magnitudes, then the correlation matrix
will be singular. Thus, ρ_{1,2} = ±0.3, ρ_{1,3}
= ±0.4 and ρ_{2,3} = ±0.5 are considered.
For these eight correlation matrices with different magnitudes for the elements above the diagonal, N_{1S} and N_{1T} perform as they do for R_{3,j} with 1≤j≤5. In particular, for the four of the eight matrices with at most one of the three correlations being positive, N_{1S} and N_{1T} perform as they do for R_{3,1} and R_{3,4}, i.e., there is little difference in the power estimates of N_{1S} and N_{1T}. For the three matrices with two positive elements above the diagonal, N_{1S} and N_{1T} perform as they do for R_{3,5}. For these three matrices, as n ranges from 6 to 100, a(N_{1T})−a(N_{1S}) ranges from −0.009 to −0.002 and m(N_{1T})−m(N_{1S}) ranges from 0.143 to 0.157. Finally, for the matrix with all three correlations positive and n = 6, 20 and 100, a(N_{1T}) and a(N_{1S}) agree to three decimal places and m(N_{1T})−m(N_{1S}) ranges from 0.020 to 0.024. Recall, if ρ_{1,2} = ρ_{1,3} = ρ_{2,3}, then N_{1S} and N_{1T} are identical but for the matrix with different positive correlations, N_{1T} is slightly preferred over N_{1S}. Based on the correlation matrices studied here, N_{1T} is recommended over N_{1S} for p = 3. For the nonnull directions considered, it is believed that the possible loss in average power is offset by the possible gain in minimum power.
Next, p = 6 is considered and recall that N_{1S} and N_{1T} are identical for both R_{6,2} and R_{6,3}. For R_{6,1}, there is little difference in the power estimates of the two tests. As n ranges from 10 to 100, a(N_{1T})−a(N_{1S}) = 0 to three decimal places and m(N_{1T})−m(N_{1S}) ranges from 0.000 to 0.002. For R_{6,4}, as n ranges from 10 to 100, a(N_{1T})−a(N_{1S}) ranges from −0.053 to −0.007 and m(N_{1T})−m(N_{1S}) ranges from 0.002 to 0.019. There is not a substantial difference in the powers of these two tests in this case and one’s choice would depend on whether average power or minimum power is to be maximized. However, in the next paragraph, we study cases in which there is a substantial difference in the powers of the two tests.
To study the effect of the pattern of positive and negative correlations on
the powers of these two tests, we consider all 2^{15} cases with ρ_{i,j}
= ±0.35 for i<j.
Table 2: With p = 6, n = 10, V_{i,i} = 1.0 and V_{i,j} = ±0.35 for 1≤i≠j≤p and the number of V_{i,j} with i<j that are negative fixed in column 1, the number of such matrices that are positive definite and the minimum and maximum of a(N_{1T})−a(N_{1S}) as well as m(N_{1T})−m(N_{1S}) over all such positive definite correlation matrices and all nonnull directions of the form (v_{1}, v_{2}, …, v_{p})′ with v_{i} = 0 or 1 for 1≤i≤p are given.
*If the number of negatives exceeds 11, the correlation matrix is not positive definite.
Of course, not all such symmetric matrices with ones on the diagonal are positive
definite. In fact, the magnitude of the correlations is chosen to be 0.35, rather
than the 0.40 used in R_{6,4}, because it yields more positive
definite matrices. With l the number of negative correlations with i<j, 0≤l≤11
and n = 10, Table 2 gives the number of such matrices that
are positive definite and the minimum and maximum of both a(N_{1T})−a(N_{1S})
and m(N_{1T})−m(N_{1S}) over all such matrices and all nonnull
directions considered here. Estimates for n = 10 are given because the differences
in the two tests are more pronounced for small n. For l>11, there are no
such positive definite matrices. For l = 0 and 11, there is little difference
in the estimated powers of the two tests and for 2≤l≤10, it appears that
the averages of the power estimates for N_{1T} are somewhat smaller than
those for N_{1S} but the minimum of the power estimates for N_{1T}
may be substantially larger than that of N_{1S}.
To further explore the last conclusion, l = 8 is considered because it has
the greatest loss in average power from using N_{1T} rather than N_{1S}
and (about) the greatest gain in minimum power from using N_{1T}. Of
the 1,665 positive definite correlation matrices with l = 8, the following has
the greatest loss in average power from using N_{1T} rather than N_{1S}:
R_{6,5} = (ρ_{i,j}) with ρ_{i,j} = 0.35
for i≠j, except ρ_{1,2} = ρ_{1,5} = ρ_{2,3}
= ρ_{2,4} = ρ_{2,5} = ρ_{3,5} = ρ_{4,5}
= ρ_{5,6} = −0.35. For this R with n = 10 and n = 100, a(N_{1T})−a(N_{1S})
= −0.091 for both sample sizes and m(N_{1T})−m(N_{1S}) = 0.508
and 0.487, respectively. For this correlation matrix, we recommend N_{1T}
because, even though it has smaller average power than N_{1S}, it will
provide some protection against the low power in the direction (0, 0, 0, 0,
0, 1)′. Based on the results for all of the correlation matrices we have studied
with p = 3 and 6, N_{1T} is recommended over N_{1S}. However,
it should be noted that for some correlation matrices and some directions, its
power is smaller than that of the usual F test, which does not incorporate the information
that the θ_{i} are nonnegative.
Now N_{2T} and N_{2S}, the new tests for unknown V, are briefly compared. For p = 3, the powers of the two tests are considered for the same correlation structures and the same directions considered earlier. The largest differences in the tests due to the fact that V is unknown should occur for small n. For n = 6 and R_{3,1} through R_{3,5}, a(N_{2T})−a(N_{2S}) and m(N_{2T})−m(N_{2S}) are both positive but these differences do not exceed 0.032 for the first four R. The power estimates for the two tests with R_{3,5} are given in Table 1. For n = 6 and this R, the average power of N_{2T} is 0.036 larger than that of N_{2S} and the minimum power of N_{2T} is 0.287 larger than that of N_{2S}. As n increases, the results are more like those for N_{1T} and N_{1S}. For instance, for n = 100 and R_{3,5}, the power estimates for N_{1T} and N_{2T} do not differ by more than 0.006. The same is true for N_{1S} and N_{2S}. Based on these results, N_{2T} is recommended over N_{2S} for p = 3.
For p = 6 and n = 10, the powers of the two tests are considered for the same correlation matrices and the same directions considered earlier. For R_{6,1}, R_{6,2} and R_{6,3}, N_{2T} has the larger average power estimate and the larger minimum power estimate. For R_{6,4}, a(N_{2T})−a(N_{2S}) = 0.001 and m(N_{2T})−m(N_{2S}) = 0.120 and N_{2T} is preferred over N_{2S}. For R_{6,5}, a(N_{2T})−a(N_{2S}) = −0.013 and m(N_{2T})−m(N_{2S}) = 0.700. As with the comparison of N_{1S} and N_{1T}, for the latter R, the loss in average power resulting from using N_{2T} rather than N_{2S} is more than offset by the protection against the extremely low power of N_{2S} in the direction (0, 0, 0, 0, 0, 1)′. N_{2T} is recommended over N_{2S}.
APPENDIX
Example 1: (N_{1R} is not scale invariant and N_{1C} is not permutation invariant) For both examples, let p = 2, V_{0} = R_{T} and V_{*} = DV_{0}D′ with D a scale or a permutation matrix. To show that N_{1R}, defined by Eq. 5, is not scale invariant, let D = diag(1.0, 2.0) and X̄ = (1.0, −1.1)′. The needed symmetric square roots, transformed mean vectors and sums are:
Thus, N_{1R} is not scale invariant.
To show that N_{1C}, defined by Eq. 5, is not permutation invariant, let D correspond to the permutation that interchanges the two indices and X̄ = (1.0, −1.0)′. Clearly, V_{*} = V_{0}, which has Cholesky factor C given below. The common Cholesky factor and sums are:
Thus, N_{1C} is not permutation invariant.
ACKNOWLEDGMENT
The research of the first author was sponsored by the Thailand Research Fund.