
Journal of Applied Sciences

Year: 2014 | Volume: 14 | Issue: 19 | Page No.: 2292-2298
DOI: 10.3923/jas.2014.2292.2298
Some Bounds and Conditional Maximum Bounds for RIC in Compressed Sensing
Shiqing Wang, Yan Shi and Limin Su

Abstract: Compressed sensing seeks to recover an unknown sparse signal with p entries by making far fewer than p measurements. The Restricted Isometry Constant (RIC) has become a dominant tool in this setting: if the RIC satisfies a suitable bound, then sparse signals are guaranteed to be recovered exactly when no noise is present and can be estimated stably in the noisy case. During the last few years, a great deal of attention has been focused on bounds for the RIC, and finding such bounds has both theoretical and applied significance. In this study we obtain a bound on the RIC that improves existing results. Further, we discuss problems related to larger bounds of the RIC and give a conditional maximum bound.

Keywords: Compressed sensing, L1 minimization, restricted isometry property and sparse signal recovery

INTRODUCTION

Compressed sensing aims to recover high-dimensional sparse signals based on considerably fewer linear measurements. We consider:

y = Φβ + z        (1)

where Φ is an n×p matrix with n << p, z ∈ Rn is a vector of measurement errors and β ∈ Rp is the unknown signal. Our goal is to reconstruct β based on y and Φ.

A naive approach to this problem is L0 minimization, where the goal is to find the sparsest solution in the feasible set of possible solutions. However, this is NP-hard and thus computationally infeasible. It is then natural to consider L1 minimization, which can be viewed as a convex relaxation of L0 minimization. The L1 minimization method in this context is:

β̂ = arg min {||γ||1 : y − Φγ ∈ B}        (2)

where B is a set determined by the noise structure (B = {0} in the noiseless case).

This method has been successfully used as an effective way of reconstructing a sparse signal in many settings (Donoho and Huo, 2001; Donoho, 2006; Candes and Tao, 2005; Candes et al., 2006; Candes and Tao, 2006, 2007; Cai et al., 2010a, b).
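
As a concrete illustration (not part of the original analysis), the following minimal Python sketch solves the noiseless case of (2) (B = {0}, basis pursuit) by the standard linear-programming reformulation; the dimensions, random seed and SciPy solver are our own choices:

    import numpy as np
    from scipy.optimize import linprog

    # Write beta = u - v with u, v >= 0, so ||beta||_1 = 1^T(u + v) and
    # the constraint Phi*beta = y becomes [Phi, -Phi][u; v] = y.
    rng = np.random.default_rng(0)
    n, p, k = 40, 100, 5                      # n << p measurements, k-sparse signal
    Phi = rng.standard_normal((n, p)) / np.sqrt(n)
    beta = np.zeros(p)
    beta[rng.choice(p, size=k, replace=False)] = rng.standard_normal(k)
    y = Phi @ beta                            # model (1) with z = 0

    res = linprog(c=np.ones(2 * p), A_eq=np.hstack([Phi, -Phi]), b_eq=y,
                  bounds=(0, None))
    beta_hat = res.x[:p] - res.x[p:]
    print("recovery error:", np.linalg.norm(beta_hat - beta))

With these sizes the Gaussian matrix typically satisfies the recovery conditions discussed below, so the printed error is at the level of solver tolerance.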

Recovery of high-dimensional sparse signals is closely connected with the Lasso and the Dantzig selector (Candes and Tao, 2007; Bickel et al., 2009; Wang and Su, 2013a-c). One of the most commonly used frameworks for sparse recovery via L1 minimization is the Restricted Isometry Property (RIP) with a RIC introduced by Candes and Tao (2005). It has been shown that L1 minimization can recover a sparse signal with a small or zero error under various conditions on δk and θk,k'. For example, the condition δk + θk,k + θk,2k < 1 is used in (Candes and Tao, 2005), δ3k + 3δ4k < 2 in (Candes et al., 2006), δ2k + θk,2k < 1 in (Candes and Tao, 2007), δ1.5k + θk,1.5k < 1 in (Cai et al., 2009) and δ1.25k + θk,1.25k < 1 in (Cai et al., 2010b).

The RIP conditions are difficult to verify for a given matrix Φ. A widely used technique for avoiding checking the RIP directly is to generate Φ randomly and to show that the resulting random matrix satisfies the RIP with high probability using the well-known Johnson-Lindenstrauss Lemma (Baraniuk et al., 2008). This is typically done for conditions involving only the restricted isometry constant δ. Attention has focused on δ2k, as δ2k < 1 is obviously necessary for model identifiability. Davies and Gribonval (2009) constructed examples showing that if δ2k ≥ 0.7071, exact recovery of certain k-sparse signals can fail in the noiseless case. On the other hand, sufficient conditions on δ2k have been given. For example, δ2k < 0.4142 is used in (Candes, 2008), δ2k < 0.4531 in (Foucart and Lai, 2009), δ2k < 0.4652 in (Foucart, 2010), δ2k < 0.4721 in (Cai et al., 2010b), δ2k < 0.4734 in (Foucart, 2010) and δ2k < 0.4931 in (Mo and Li, 2011). Sufficient conditions on δk have also been given: δk < 0.307 is used in (Cai et al., 2010c) and δk < 0.308 in (Ji and Peng, 2012) when k is even. In this study, δk < 0.308 is established for any k and the conditional maximum bound δk < 0.5 is obtained.
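
Since computing δ2k exactly requires examining every support of size 2k, a common sanity check on the random-matrix heuristic is a Monte Carlo lower bound: sample random column subsets of a Gaussian matrix and record the worst spectral deviation from 1. A minimal sketch of this idea, with sizes and names of our own choosing (it only bounds δ2k from below):

    import numpy as np

    rng = np.random.default_rng(1)
    n, p, k, trials = 64, 128, 4, 2000
    Phi = rng.standard_normal((n, p)) / np.sqrt(n)    # columns have unit norm on average

    delta_lb = 0.0
    for _ in range(trials):
        S = rng.choice(p, size=2 * k, replace=False)  # random support of size 2k
        eig = np.linalg.eigvalsh(Phi[:, S].T @ Phi[:, S])
        delta_lb = max(delta_lb, abs(eig[0] - 1.0), abs(eig[-1] - 1.0))
    print("Monte Carlo lower bound on delta_2k:", delta_lb)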

There are several benefits to improving the bound on δk. First, it allows more measurement matrices to be used in compressed sensing. Second, for the same matrix Φ, it allows k to be larger, that is, it allows recovery of sparse signals with more nonzero elements. Furthermore, it gives better error estimates in the general problem of recovering compressible signals from noisy measurements.

PRELIMINARIES

Let ||u||0 be the number of nonzero elements of a vector u = (ui) ∈ Rp. A vector u is called k-sparse if ||u||0 ≤ k. For an n×p matrix Φ and an integer k, 1 ≤ k ≤ p, the k-restricted isometry constant δk(Φ) is the smallest constant such that:

(1 − δk(Φ))||u||2^2 ≤ ||Φu||2^2 ≤ (1 + δk(Φ))||u||2^2        (3)

for every k-sparse vector u. If k + k' ≤ p, the (k, k')-restricted orthogonality constant θk,k'(Φ) is the smallest number that satisfies:

|⟨Φu, Φu'⟩| ≤ θk,k'(Φ)||u||2||u'||2        (4)

for all u and u' such that u and u' are k-sparse and k'-sparse, respectively, and have disjoint supports. For notational simplicity we shall write δk for δk(Φ) and θk,k' for θk,k'(Φ) hereafter.
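
For a matrix small enough to enumerate all supports, the constants in (3) and (4) can be computed exactly, which makes the definitions concrete. A brute-force sketch; the function names and toy matrix are ours, and this is feasible only for very small p:

    import itertools
    import numpy as np

    def delta_k(Phi, k):
        # Smallest delta in (3): the largest deviation of the spectrum of
        # Phi_S^T Phi_S from 1 over all supports S with |S| = k.
        p = Phi.shape[1]
        dev = 0.0
        for S in itertools.combinations(range(p), k):
            eig = np.linalg.eigvalsh(Phi[:, list(S)].T @ Phi[:, list(S)])
            dev = max(dev, abs(eig[0] - 1.0), abs(eig[-1] - 1.0))
        return dev

    def theta_kk(Phi, k, kp):
        # Smallest theta in (4): the largest spectral norm of the cross
        # block Phi_S^T Phi_S' over disjoint supports, |S| = k, |S'| = k'.
        p = Phi.shape[1]
        t = 0.0
        for S in itertools.combinations(range(p), k):
            rest = [j for j in range(p) if j not in S]
            for Sp in itertools.combinations(rest, kp):
                t = max(t, np.linalg.norm(Phi[:, list(S)].T @ Phi[:, list(Sp)], 2))
        return t

    rng = np.random.default_rng(2)
    Phi = rng.standard_normal((6, 8)) / np.sqrt(6)
    print("delta_2 =", delta_k(Phi, 2), "theta_{2,2} =", theta_kk(Phi, 2, 2))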

The following monotone properties can be easily checked:

δk ≤ δk'  if k ≤ k'        (5)

θk1,k2 ≤ θk1',k2'  if k1 ≤ k1' and k2 ≤ k2'        (6)

Candes and Tao (2005) showed that the constants δ and θ are related by the following inequalities:

θk,k' ≤ δk+k' ≤ θk,k' + max(δk, δk')        (7)

Cai et al. (2010b) showed that for any a ≥ 1 and positive integers k, k' such that ak' is an integer:

θk,ak' ≤ √a·θk,k'        (8)
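
The relations (7) and (8) can be spot-checked numerically with the same brute-force enumeration as in the previous sketch (repeated here so the snippet is self-contained; the toy matrix and names are our own): the check verifies θ2,2 ≤ δ4 ≤ θ2,2 + δ2 and, for a = 2, θ2,4 ≤ √2·θ2,2.

    import itertools
    import numpy as np

    def delta_k(Phi, k):
        p = Phi.shape[1]
        return max(np.abs(np.linalg.eigvalsh(Phi[:, list(S)].T @ Phi[:, list(S)]) - 1.0).max()
                   for S in itertools.combinations(range(p), k))

    def theta_kk(Phi, k, kp):
        p = Phi.shape[1]
        return max(np.linalg.norm(Phi[:, list(S)].T @ Phi[:, list(Sp)], 2)
                   for S in itertools.combinations(range(p), k)
                   for Sp in itertools.combinations([j for j in range(p) if j not in S], kp))

    rng = np.random.default_rng(3)
    Phi = rng.standard_normal((6, 8)) / np.sqrt(6)
    d2, d4 = delta_k(Phi, 2), delta_k(Phi, 4)
    t22, t24 = theta_kk(Phi, 2, 2), theta_kk(Phi, 2, 4)
    print("(7):", t22 <= d4 <= t22 + d2)           # theta_{k,k'} <= delta_{k+k'} <= theta_{k,k'} + max(delta_k, delta_k')
    print("(8), a = 2:", t24 <= np.sqrt(2) * t22)  # theta_{k,ak'} <= sqrt(a) * theta_{k,k'}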

Cai et al. (2010c) showed that for any x ∈ Rk:

||x||2 ≤ ||x||1/√k + (√k/4)·(xmax − xmin)        (9)

Where xmax = maxi|xi| and xmin = mini|xi|.

NEW RIC BOUNDS OF COMPRESSED SENSING MATRICES

In this section, we consider new RIP conditions for sparse signal recovery. Suppose:

y = Φβ+z

with ||z||2 ≤ ε. Denote by β̂ the solution of the following L1 minimization problem:

β̂ = arg min {||γ||1 : ||y − Φγ||2 ≤ ε}        (10)
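
A minimal sketch of solving (10) numerically, assuming the CVXPY convex-optimization library (the paper does not prescribe a solver; dimensions, seed and names are our own):

    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(5)
    n, p, k, eps = 50, 120, 5, 0.1
    Phi = rng.standard_normal((n, p)) / np.sqrt(n)
    beta = np.zeros(p)
    beta[rng.choice(p, size=k, replace=False)] = rng.standard_normal(k)
    z = rng.standard_normal(n)
    z *= eps / (2.0 * np.linalg.norm(z))      # noise with ||z||_2 <= eps
    y = Phi @ beta + z

    gamma = cp.Variable(p)
    problem = cp.Problem(cp.Minimize(cp.norm(gamma, 1)),
                         [cp.norm(y - Phi @ gamma, 2) <= eps])
    problem.solve()
    print("||beta_hat - beta||_2 =", np.linalg.norm(gamma.value - beta))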

The following is one of the main results of this study.

Theorem 1: Suppose β is k-sparse with k > 1. Then under the condition:

δk < 0.308

the constrained L1 minimizer β̂ given in (10) satisfies:

In particular, in the noiseless case β̂ recovers β exactly.

This theorem improves the bound δk < 0.307 of (Cai et al., 2010c) to δk < 0.308 and extends the result of (Ji and Peng, 2012), which holds only for even k, to arbitrary k. The proof of the theorem is long but elementary.

Proof: Let s, k be positive integers with 1 ≤ s < k and:

Then from Theorem 3.1 in (Cai et al., 2010c), under the condition δk + tθk,s < 1, we have:

By (8):

(11)

We show below that:

(12)

Where:

The argument uses only elementary trigonometric functions, but it is quite delicate.

So:

It is easy to see that f(x) is increasing when:

and decreasing when:

Thus f(x) attains its minimum value:

That is, if k ≡ 0 (mod 9), let:

then under the condition δk < 0.309 we have (see Cai et al., 2010c):

(13)

If k is even, let , then:

(14)

If k≥9 is odd, let , then:

(15)

since f(x) is increasing when:

When k = 7, then:

(16)

When k = 5, then:

(17)

When k = 3, we note from the remark of Theorem 3.1 in (Cai et al., 2010c) that in this case s = 1 and , then:

(18)

Combining (11-18) yields:

if k is even and:

if k is odd. With the above relations we can also get:

Corollary 1: Suppose β is k-sparse with k ≡ 0 (mod 9). Then under the condition δk < 0.309 the constrained L1 minimizer β̂ given in (10) satisfies:

In particular, in the noiseless case β̂ recovers β exactly.

The proof follows from (11-13).

Corollary 2: Suppose β is k-sparse. If k ≥ 9 is odd, then under the condition δk < ck the constrained L1 minimizer β̂ given in (10) satisfies:

Where:

In particular, in the noiseless case β̂ recovers β exactly. The proof follows from (11), (12) and (15). Note that 0.308 < ck ≤ 0.309 by (15).

To the best of our knowledge, this is the first result for sparse recovery under conditions that involve only δk and k. In fact, conditions involving only δk and k and conditions involving only δk are equivalent.

THE CONDITIONAL MAXIMUM BOUND FOR RIC

Let h = β̂ − β. For any subset Q ⊂ {1, 2, ..., p} we define hQ = hIQ, where IQ denotes the indicator function of the set Q, i.e., IQ(j) = 1 if j ∈ Q and 0 if j ∉ Q. Let T be the index set of the k largest elements of h (in absolute value) and let Ω be the support of β. The following fact, which is based on the minimality of β̂, has been widely used (Candes et al., 2006):

||hΩc||1 ≤ ||hΩ||1        (19)

We shall show that:

||hΩ||1 ≤ ||hT||1        (20)

||hTc||1 ≤ ||hT||1        (21)

In fact, T consists of the k largest elements of h (in absolute value) and Ω has at most k elements, so (20) holds; by (19) we then have:

||hTc||1 = ||h||1 − ||hT||1 ≤ ||h||1 − ||hΩ||1 = ||hΩc||1 ≤ ||hΩ||1 ≤ ||hT||1

Definition 1: Let Tm be the index set of the m largest elements of h (in absolute value). The set Tm is called a sparse index set if:

||hTmc||1 ≤ ||hTm||1

and m ≤ k.

It is obvious that a sparse index set exists; in fact, Tk is a sparse index set by (21).
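
For a given residual h, checking which Tm are sparse index sets in the sense of Definition 1 amounts to one comparison per m. A sketch with a made-up test vector h and function name of our own; it also shows that an m strictly smaller than k can qualify, which is the point exploited below (for arbitrary h, only m = k is guaranteed by (21)):

    import numpy as np

    def sparse_index_sets(h, k):
        # Return all m <= k for which T_m (indices of the m largest |h_i|)
        # satisfies ||h_{T_m^c}||_1 <= ||h_{T_m}||_1.
        order = np.argsort(-np.abs(h))
        total = np.abs(h).sum()
        hits = []
        for m in range(1, k + 1):
            head = np.abs(h[order[:m]]).sum()   # ||h_{T_m}||_1
            if total - head <= head:            # ||h_{T_m^c}||_1 <= ||h_{T_m}||_1
                hits.append(m)
        return hits

    h = np.array([5.0, -4.0, 0.5, 0.3, -0.2, 0.1])
    print(sparse_index_sets(h, 3))              # -> [2, 3]: already m = 2 < k works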

Here we prove that, with any sparse index set Tm in place of Tk, Theorem 3.1 in (Cai et al., 2010c) can be improved.

Theorem 2: Suppose β is k-sparse and Tm is a sparse index set. Let k1, k2 be positive integers such that k1 ≥ m and 8(k1 − m) ≤ k2. Let:

Then under the condition δk1 + tθk1,k2 < 1 the L1 minimizer β̂ defined in (10) satisfies:

In particular, in the noiseless case where y = Φβ, L1 minimization recovers β exactly.

Proof: For any sparse index set Tm, let S0 ⊇ Tm be the index set of the k1 largest elements (in absolute value). Rearrange the indices of S0c if necessary according to the descending order of |hi|. Partition S0c into:

where , the last Si satisfies . If , then the theorem is trivially true, so here we assume that . Then it follows from (9) that:


Now:

Note that:

Also the next relation:

implies:

Putting them together we get:

If m = k, then Theorem 2 reduces to Theorem 3.1 in (Cai et al., 2010c).

Let m0 ≤ m be the smallest positive integer such that:

Then we have the following.

Theorem 3: Suppose β is k-sparse. Let k1, k2 be positive integers such that k1 ≥ k ≥ m0 and 8(k1 − m0) ≤ k2. Let:

Then under the condition δk + tθk1,k2 < 1 the L1 minimizer β̂ defined in (10) satisfies:

In particular, in the noiseless case where y = Φβ, L1 minimization recovers β exactly.

The proof is similar to that of Theorem 2.

Note that k is independent of h, but m and m0 depend on h, i.e., m = m(h) and m0 = m0(h).

The following is one of the main results of this study. It is a consequence of Theorem 2.

Theorem 4: Suppose β is k-sparse with k > 1. If k ≡ 0 (mod 5) and Tk/5 is a sparse index set, then under the condition δk < 0.5 the constrained L1 minimizer β̂ given in (10) satisfies:

In particular, in the noiseless case β̂ recovers β exactly.

Proof: If k ≡ 0 (mod 5) and Tk/5 is a sparse index set, then in Theorem 2, set:

Thus:

Then under the condition:

we have:

By (5) and (7) we get:

In this case:

An explicit example is constructed in (Cai et al., 2010c) in which δk < 0.5 but certain k-sparse signals cannot be recovered. Therefore, the bound for δk cannot go beyond 0.5 in general if stable recovery of k-sparse signals is to be guaranteed.

CONCLUSION

We observed that ||hT||1 may exceed ||hΩ||1 by a large margin: each ||hTs||1 (1 ≤ s ≤ k) may be greater than ||hΩ||1, and ||hT||1 is the largest among the ||hTs||1 (1 ≤ s ≤ k). We therefore seek an index set Ts0 (1 ≤ s0 ≤ k) such that ||hΩ||1 ≤ ||hTs0||1. On the other hand, the bound in (11) is a function of δk; since δk is fixed, that bound cannot be tightened further. We thus propose to make the bound on the right-hand side a function of δs with s ≤ k. From Ω and T one immediately obtains four index sets Ωc∩T, Ω∩T, Ω∩Tc and Ωc∩Tc, with m1 = |Ωc∩T| = k − |Ω∩T|, m2 = |Ω∩T| and m3 = |Ω∩Tc| ≤ k − |Ω∩T|.
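
The sizes m1, m2, m3 follow directly from |T| = k and |Ω| ≤ k; a tiny illustration with made-up index sets of our own:

    # Toy index sets; T holds the k = 4 largest coordinates of h, Omega = supp(beta).
    T = {0, 1, 2, 3}
    Omega = {2, 3, 5}
    m1 = len(T - Omega)      # |Omega^c ∩ T| = k - |Omega ∩ T|
    m2 = len(T & Omega)      # |Omega ∩ T|
    m3 = len(Omega - T)      # |Omega ∩ T^c| <= k - |Omega ∩ T|
    print(m1, m2, m3)        # -> 2 2 1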

It is easy to show that the bound of Theorem 2 is tighter than the one in (Cai et al., 2010c) in special cases, as the following examples show.

Example 1: Suppose β is k-sparse and n ≥ 0. Let:

If Ω = Tq, then under the condition δq + t3θq,n < 1 the L1 minimizer β̂ defined in (10) satisfies:

In particular, in the noiseless case where y = Φβ, L1 minimization recovers β exactly.

In fact, the proof is similar to that of Theorem 2; note that:

Example 2: Suppose β is k-sparse and n ≥ 0, where k is even. Let:

If |Ω∩T| = k/2, then under the condition:

the L1 minimizer β̂ defined in (10) satisfies:

The proof is similar to that of Theorem 2.

REFERENCES

  • Donoho, D.L. and X. Huo, 2001. Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inform. Theory, 47: 2845-2862.

  • Donoho, D.L., 2006. Compressed sensing. IEEE Trans. Inform. Theory, 52: 1289-1306.

  • Candes, E.J. and T. Tao, 2005. Decoding by linear programming. IEEE Trans. Inform. Theory, 51: 4203-4215.

  • Candes, E., J. Romberg and T. Tao, 2006. Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Applied Math., 59: 1207-1223.

  • Candes, E.J. and T. Tao, 2006. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Inform. Theory, 52: 5406-5425.

  • Candes, E.J. and T. Tao, 2007. The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Statist., 35: 2313-2351.

  • Cai, T.T., L. Wang and G. Xu, 2010. Shifting inequality and recovery of sparse signals. IEEE Trans. Signal Process., 58: 1300-1308.

  • Cai, T.T., L. Wang and G. Xu, 2010. Stable recovery of sparse signals and an oracle inequality. IEEE Trans. Inform. Theory, 56: 3516-3522.

  • Bickel, P.J., Y. Ritov and A.B. Tsybakov, 2009. Simultaneous analysis of Lasso and Dantzig selector. Ann. Statist., 37: 1705-1732.

  • Wang, S.Q. and L.M. Su, 2013. The oracle inequalities on simultaneous Lasso and Dantzig selector in high-dimensional nonparametric regression. Math. Problems Eng., Vol. 2013.

  • Wang, S.Q. and L.M. Su, 2013. Recovery of high-dimensional sparse signals via L1-minimization. J. Applied Math., Vol. 2013.

  • Wang, S.Q. and L.M. Su, 2013. Simultaneous Lasso and Dantzig selector in high dimensional nonparametric regression. Int. J. Applied Math. Stat., 42: 103-118.

  • Cai, T.T., G. Xu and J. Zhang, 2009. On recovery of sparse signals via ℓ1 minimization. IEEE Trans. Inform. Theory, 55: 3388-3397.

  • Baraniuk, R., M. Davenport, R. DeVore and M. Wakin, 2008. A simple proof of the restricted isometry property for random matrices. Constr. Approx., 28: 253-263.

  • Davies, M.E. and R. Gribonval, 2009. Restricted isometry constants where lp sparse recovery can fail for 0<p≤1. IEEE Trans. Inform. Theory, 55: 2203-2214.

  • Candes, E.J., 2008. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 346: 589-592.

  • Foucart, S. and M. Lai, 2009. Sparsest solutions of underdetermined linear systems via lq-minimization for 0<q≤1. Applied Comput. Harmonic Anal., 26: 395-407.

  • Foucart, S., 2010. A note on guaranteed sparse recovery via l1 minimization. Applied Comput. Harmonic Anal., 29: 97-103.

  • Mo, Q. and S. Li, 2011. New bounds on the restricted isometry constant δ2k. Applied Comput. Harmonic Anal., 31: 460-468.

  • Cai, T.T., L. Wang and G. Xu, 2010. New bounds for restricted isometry constants. IEEE Trans. Inform. Theory, 56: 4388-4394.

  • Ji, J. and J. Peng, 2012. Improved bounds for restricted isometry constants. Discrete Dyn. Nat. Soc., Vol. 2012.
