Open Access

The average sensitivity of an intersection of half spaces

Research in the Mathematical Sciences 2014, 1:13

https://doi.org/10.1186/s40687-014-0013-6

Received: 12 February 2014

Accepted: 10 September 2014

Published: 11 November 2014

Abstract

We prove new bounds on the average sensitivity of the indicator function of an intersection of k halfspaces. In particular, we prove the optimal bound of $O\left(\sqrt{n\log(k)}\right)$. This generalizes a result of Nazarov, who proved the analogous result in the Gaussian case, and improves upon a result of Harsha, Klivans and Meka. Furthermore, our result has implications for the runtime required to learn intersections of halfspaces.

AMS Subject Classification

Primary: 52C45

Keywords

Boolean function; Halfspaces; Noise sensitivity; Machine learning

Background

One of the most important measures of the complexity of a Boolean function $f:\{\pm 1\}^n\rightarrow\{\pm 1\}$ is its average sensitivity, namely
$$\mathbb{AS}(f) := \mathbb{E}_{x\sim_u\{\pm 1\}^n}\left[\#\left\{i : f(x)\neq f\left(x^i\right)\right\}\right],$$

where $x^i$ above is x with the $i^{th}$ coordinate flipped. The average sensitivity and related measures of noise sensitivity of a Boolean function have found several applications, perhaps most notably to the area of machine learning (see for example [1]). It has thus become important to understand how large the average sensitivity of functions in various classes can be.
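To make the definition concrete, the following short script (a minimal sketch of our own, not part of the paper) estimates $\mathbb{AS}(f)$ by Monte Carlo sampling. The majority function is used as a test case since it is a single halfspace and its average sensitivity is known to be $\Theta(\sqrt{n})$.

```python
import random

def average_sensitivity(f, n, samples=2000):
    """Monte Carlo estimate of AS(f) = E_x #{i : f(x) != f(x^i)}."""
    total = 0
    for _ in range(samples):
        x = [random.choice((-1, 1)) for _ in range(n)]
        fx = f(x)
        for i in range(n):
            x[i] = -x[i]            # flip the i-th coordinate
            total += f(x) != fx
            x[i] = -x[i]            # flip it back
    return total / samples

# Majority on an odd number of variables is a single halfspace;
# its average sensitivity is roughly sqrt(2n/pi) (about 8 for n = 101).
majority = lambda x: 1 if sum(x) > 0 else -1
print(average_sensitivity(majority, n=101))
```

The estimate carries the usual $O(1/\sqrt{\text{samples}})$ sampling error, which is more than adequate for seeing the $\sqrt{n}$ scaling.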

Of particular interest is the study of the sensitivity of certain classes of algebraically defined functions. Gotsman and Linial [2] first studied the sensitivity of polynomial threshold functions (i.e., functions of the form f(x) = sgn(p(x)) for p a polynomial of bounded degree). They conjectured exact upper bounds on the sensitivity of polynomial threshold functions of limited degree, but were unable to prove them except in the case of linear threshold functions (when p is required to be of degree 1). Since then, significant progress has been made towards proving this conjecture. The first non-trivial bounds for large degree were proven in [3] by Diakonikolas et al. in 2010. Since then, progress has been rapid. In [4], the Gaussian analogue of the Gotsman-Linial Conjecture was proved, and in [5] the correct bound on average sensitivity was proved to within a polylogarithmic factor.

Another potential generalization of the degree-1 case of the Gotsman-Linial Conjecture (which bounds the sensitivity of the indicator function of a halfspace) would be to consider the sensitivity of the indicator function of the intersection of a bounded number of halfspaces. The Gaussian analogue of this question has already been studied. In particular, Nazarov has shown (see [6]) that the Gaussian surface area of an intersection of k halfspaces is at most $O\left(\sqrt{\log k}\right)$. This suggests that the average sensitivity of such a function should be bounded by $O\left(\sqrt{n\log k}\right)$. Although this bound has been believed for some time, attempts to prove it have been unsuccessful. Perhaps the closest attempt thus far was by Harsha, Klivans and Meka, who show in [7] that an intersection of k sufficiently regular halfspaces has noise sensitivity with parameter $\varepsilon$ at most $\log(k)^{O(1)}\varepsilon^{1/6}$. In this paper, we prove that the bound of $O\left(\sqrt{n\log(k)}\right)$ is in fact correct. In particular, we prove the following theorem:

Theorem 1.

Let f be the indicator function of an intersection of k halfspaces in n variables. Then
$$\mathbb{AS}(f) = O\left(\sqrt{n\log(k)}\right).$$

It should also be noted that Nazarov’s bound follows as a Corollary of Theorem 1, by replacing Gaussian random variables with averages of Bernoulli random variables. It is also not hard to show that this bound is tight up to constants. In particular:

Theorem 2.

If $k\le 2^n$, there exists a function f in n variables, given by the intersection of at most k halfspaces, so that
$$\mathbb{AS}(f) = \Omega\left(\sqrt{n\log(k)}\right).$$

Our proof of Theorem 1 actually uses very little information about halfspaces. In particular, we use only the fact that linear threshold functions are monotonic in the following sense:

Definition 1.

We say that a function $f:\{\pm 1\}^n\rightarrow\mathbb{R}$ is unate if, for all i, f is either increasing with respect to the $i^{th}$ coordinate or decreasing with respect to the $i^{th}$ coordinate.

We prove Theorem 1 by means of the following much more general statement:

Proposition 1.

Let $f_1,\ldots,f_k:\{\pm 1\}^n\rightarrow\{0,1\}$ be unate functions, and let $F:\{\pm 1\}^n\rightarrow\{0,1\}$ be defined by $F(x) = \vee_{i=1}^k f_i(x)$. Then
$$\mathbb{AS}(F) = O\left(\sqrt{n\log(k)}\right).$$

The application of Theorem 1 to machine learning is via a slightly different notion of noise sensitivity than the average sensitivity. In particular, we define the noise sensitivity as follows:

Definition 2.

Let f:{Β±1} n β†’{0,1} be a Boolean function. For a parameter Ρ∈(0,1) we define the noise sensitivity of f with parameter Ξ΅ to be
β„• π•Š Ξ΅ ( f ) : = Pr ( f ( x ) β‰  f ( y ) )

where x and y are Bernoulli random variables where y is obtained from x by randomly and independently flipping the sign of each coordinate with probability Ξ΅.
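For concreteness, this quantity can be estimated directly by sampling; the sketch below (our own illustration, not part of the proofs) does so for a single halfspace, where $\mathbb{NS}_\varepsilon$ is known to scale like $\sqrt{\varepsilon}$.

```python
import random

def noise_sensitivity(f, n, eps, samples=20000):
    """Monte Carlo estimate of NS_eps(f) = Pr(f(x) != f(y))."""
    disagree = 0
    for _ in range(samples):
        x = [random.choice((-1, 1)) for _ in range(n)]
        # y flips each coordinate of x independently with probability eps
        y = [-xi if random.random() < eps else xi for xi in x]
        disagree += f(x) != f(y)
    return disagree / samples

majority = lambda x: int(sum(x) > 0)
for eps in (0.01, 0.04, 0.16):      # quadrupling eps should double NS
    print(eps, noise_sensitivity(majority, n=101, eps=eps))
```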

Using this notation, we have that

Corollary 1.

If f:{Β±1} n β†’{0,1} is the indicator function of the intersection of k halfspaces, and Ρ∈(0,1) then
β„• π•Š Ξ΅ ( f ) = O Ξ΅ log ( k ) .

Remark 1.

This is false in general for intersections of unate functions, since if f is the tribes function on n variables (which is unate), then $\mathbb{NS}_\varepsilon(f) = \Omega(1)$ so long as $\varepsilon = \Omega\left(\log^{-1}(n)\right)$.

Finally, using the L1 polynomial regression algorithm of [1], we obtain the following:

Corollary 2.

The concept class of intersections of k halfspaces with respect to the uniform distribution on $\{\pm 1\}^n$ is agnostically learnable with error $\mathrm{opt}+\varepsilon$ in time $n^{O\left(\log(k)\varepsilon^{-2}\right)}$.

Proofs of the sensitivity bounds

The proof of Proposition 1 follows by a fairly natural generalization of one of the standard proofs for the case of a single unate function. In particular, if $f:\{\pm 1\}^n\rightarrow\{0,1\}$ is unate, we may assume without loss of generality that f is increasing in each coordinate. In such a case, it is easy to show that
$$\mathbb{AS}(f) = 2\,\mathbb{E}\left[f(x)\sum_{i=1}^n x_i\right] \le 2\,\mathbb{E}\left[\max\left(0,\sum_{i=1}^n x_i\right)\right] = O\left(\sqrt{n}\right).$$

In fact, this technique can be extended to prove bounds on the sensitivity of unate functions with a given expectation. In particular, Lemma 1 below provides an appropriate bound. Our proof of Proposition 1 turns out to be a relatively straightforward generalization of this technique. In particular, we show that, adding the $f_i$ one at a time, the change in sensitivity is bounded by a similar function of the change in total expectation.

Lemma 1.

Let S:{Β±1} n β†’{0,1} and let p = 𝔼 S ( x ) , then
𝔼 S ( x ) βˆ‘ i = 1 n x i = O p n log ( 1 / p ) .

Proof.

Note that:
$$\begin{aligned}
\mathbb{E}\left[S(x)\sum_{i=1}^n x_i\right] &\le \int_0^\infty \Pr\left(S(x)\sum_{i=1}^n x_i > y\right)dy \le \int_0^\infty \min\left(p, \Pr\left(\sum_{i=1}^n x_i > y\right)\right)dy \\
&\le \int_0^\infty \min\left(p, \exp\left(-\Omega\left(y^2/n\right)\right)\right)dy \le O\left(\int_0^\infty \min\left(p, \exp\left(-z^2/n\right)\right)dz\right) \\
&\le O\left(\int_0^{\sqrt{n\log(1/p)}} p\,dz + \int_{\sqrt{n\log(1/p)}}^\infty \exp\left(-z^2/n\right)dz\right) \le O\left(p\sqrt{n\log(1/p)}\right).
\end{aligned}$$

We now prove Proposition 1.

Proof.

Let $F_m(x) = \vee_{i=1}^m f_i(x)$, with $F_0 \equiv 0$. Let $S_m(x) = F_m(x) - F_{m-1}(x)$ and let $p_m = \mathbb{E}[S_m(x)]$. Our main goal will be to show that $\mathbb{AS}(F_m) \le \mathbb{AS}(F_{m-1}) + O\left(p_m\sqrt{n\log(1/p_m)}\right)$, from which our result follows easily.

Consider 𝔸 π•Š ( F m ) βˆ’ 𝔸 π•Š ( F m βˆ’ 1 ) . We assume without loss of generality that f m is increasing in every coordinate.
𝔸 π•Š ( F m ) βˆ’ 𝔸 π•Š ( F m βˆ’ 1 ) = βˆ‘ i = 1 n 𝔼 F m ( x ) βˆ’ F m ( x i ) βˆ’ F m βˆ’ 1 ( x ) βˆ’ F m βˆ’ 1 ( x i ) ,

where $x^i$ denotes x with the $i^{th}$ coordinate flipped. We make the following claim:

Claim. For each x and i,
$$\left|F_m(x) - F_m\left(x^i\right)\right| - \left|F_{m-1}(x) - F_{m-1}\left(x^i\right)\right| \le x_i\left[\left(F_m(x) - F_m\left(x^i\right)\right) - \left(F_{m-1}(x) - F_{m-1}\left(x^i\right)\right)\right]. \tag{1}$$

Proof. Our proof is based on considering two different cases.

Case 1: $f_m(x) = f_m\left(x^i\right) = 0$.

In this case, $F_m(x) = F_{m-1}(x)$ and $F_m\left(x^i\right) = F_{m-1}\left(x^i\right)$, and thus both sides of Equation 1 are 0.

Case 2: $f_m(x) = 1$ or $f_m\left(x^i\right) = 1$.

Note that replacing x by $x^i$ leaves both sides of Equation 1 unchanged. We may therefore assume without loss of generality that $x_i = 1$. Since $f_m$ is increasing with respect to the $i^{th}$ coordinate, $f_m(x) \ge f_m\left(x^i\right)$. Since at least one of them is 1, $f_m(x) = 1$, and therefore $F_m(x) = 1$. Therefore, since
$$x_i\left(F_m(x) - F_m\left(x^i\right)\right) \ge \left|F_m(x) - F_m\left(x^i\right)\right|,$$
and
$$-x_i\left(F_{m-1}(x) - F_{m-1}\left(x^i\right)\right) \ge -\left|F_{m-1}(x) - F_{m-1}\left(x^i\right)\right|,$$

Equation 1 follows.

By the claim, we have that
$$\begin{aligned}
\mathbb{AS}(F_m) - \mathbb{AS}(F_{m-1}) &\le \sum_{i=1}^n \mathbb{E}\left[x_i\left(\left(F_m(x) - F_m\left(x^i\right)\right) - \left(F_{m-1}(x) - F_{m-1}\left(x^i\right)\right)\right)\right] \\
&= \sum_{i=1}^n \mathbb{E}\left[x_i\left(S_m(x) - S_m\left(x^i\right)\right)\right] \\
&= \sum_{i=1}^n \mathbb{E}\left[x_i S_m(x)\right] - \sum_{i=1}^n \mathbb{E}\left[(-y_i) S_m(y)\right] \\
&= 2\,\mathbb{E}\left[S_m(x)\sum_{i=1}^n x_i\right] \\
&= O\left(p_m\sqrt{n\log(1/p_m)}\right),
\end{aligned}$$

where on the third line above we let $y = x^i$, and the last line follows from Lemma 1.

Hence, we have that
𝔸 π•Š ( F ) = βˆ‘ m = 1 k 𝔸 π•Š ( F m ) βˆ’ 𝔸 π•Š ( F m βˆ’ 1 ) = O n βˆ‘ m = 1 k p m log ( 1 / p m ) .
Let P = 𝔼 [ F ( x ) ] = βˆ‘ m = 1 k p m . By concavity of the function x log ( 1 / x ) for x∈(0,1), we have that
𝔸 π•Š ( F ) = O n P log ( k / P ) = O n log ( k ) .

This completes our proof.

Theorem 1 follows from Proposition 1 upon noting that 1βˆ’f is a disjunction of k linear threshold functions, each of which is unate. Our proof of Theorem 1 shows that the bound can be tight only if a large number of the halfspaces cut off an incremental volume of roughly 1/k. It turns out that this bound can be achieved when we take a random collection of halfspaces with such volumes. Before we begin to prove Theorem 2, we need the following Lemma:

Lemma 2.

For an integer n and $1/2 > \varepsilon > 2^{-n}$, there exists a linear threshold function $f:\{\pm 1\}^n\rightarrow\{0,1\}$ so that
$$\mathbb{E}_x[f(x)] \ge \varepsilon,$$

and
$$\mathbb{AS}(f) = \Omega\left(\mathbb{E}_x[f(x)]\sqrt{n\log(1/\varepsilon)}\right).$$

Proof.

This is easily seen to be the case if we let f(x) be the indicator function of the event $\sum_{i=1}^n x_i > t$, for t as large as possible so that this event takes place with probability at least $\varepsilon$.
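This construction can be checked exactly for moderate n, since the distribution of $\sum_i x_i$ is a shifted binomial. The script below is our own numerical check (with arbitrary parameter choices): it finds the largest threshold t with tail probability at least $\varepsilon$ and compares $\mathbb{AS}(f)$ against $\mathbb{E}[f(x)]\sqrt{n\log(1/\varepsilon)}$.

```python
import math
from math import comb

def threshold_stats(n, t):
    """Exact E[f] and AS(f) for f(x) = 1{sum x_i > t} on {-1,1}^n.
    If j coordinates equal +1, then sum x_i = 2j - n."""
    p = sum(comb(n, j) for j in range(n + 1) if 2 * j - n > t) / 2 ** n
    # Flipping a +1 coordinate lowers the sum by 2 and flipping a -1 raises
    # it by 2, so f is sensitive exactly when the sum straddles the threshold.
    as_f = sum(comb(n, j) * (j * (2 * j - n > t >= 2 * j - n - 2)
                             + (n - j) * (2 * j - n + 2 > t >= 2 * j - n))
               for j in range(n + 1)) / 2 ** n
    return p, as_f

n, eps = 100, 1 / 64
t = max(s for s in range(-n, n + 1) if threshold_stats(n, s)[0] >= eps)
p, as_f = threshold_stats(n, t)
print(p, as_f, p * math.sqrt(n * math.log(1 / eps)))  # AS within a constant
```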

Proof of Theorem 2.

We note that it suffices to show that there is such an f given as the indicator function of a union of at most k halfspaces, as 1βˆ’f will have the same average sensitivity and will be the indicator function of an intersection. Let $\varepsilon = 1/k$, and let f be the function given to us by Lemma 2. We note that if $\mathbb{E}[f(x)] > 1/4$, then f is sufficient and we are done. Otherwise, let $m = \lfloor 1/(4\,\mathbb{E}[f(x)])\rfloor \le k$. For $s\in\{\pm 1\}^n$, let $f_s(x) = f(s_1x_1,\ldots,s_nx_n)$. We note for each s that $f_s(x)$ is a linear threshold function with $\mathbb{E}[f_s(x)] = \mathbb{E}[f(x)]$ and $\mathbb{AS}(f_s) = \mathbb{AS}(f)$.

Let
$$F(x) = \vee_{i=1}^m f_{s_i}(x)$$
for $s_i$ independent random elements of $\{\pm 1\}^n$. We note that F(x) is always the indicator of a union of at most k halfspaces, but we also claim that
$$\mathbb{E}_{s_i}\left[\mathbb{AS}(F)\right] = \Omega\left(\sqrt{n\log(k)}\right).$$

This would imply our result for appropriately chosen values of the s i .

We note that 𝔸 π•Š ( F ) is 21βˆ’n times the number of pairs of adjacent elements x,y of the hypercube so that F(x)=1,F(y)=0. This in turn is at least 21βˆ’n times the sum over 1≀i≀m of the number of pairs of adjacent elements of the hypercube x,y so that f s i ( x ) = 1 , f s i ( y ) = 0 and so that f s j ( x ) = f s j ( y ) = 0 for all jβ‰ i.

On the other hand, for each i, $2^{1-n}$ times the number of pairs of adjacent elements x, y so that $f_{s_i}(x)=1$ and $f_{s_i}(y)=0$ is
$$\mathbb{AS}(f_{s_i}) = \mathbb{AS}(f) = \Omega\left(\mathbb{E}[f(x)]\sqrt{n\log(k)}\right) = \Omega\left(m^{-1}\sqrt{n\log(k)}\right).$$
For each of these pairs, we consider the probability over the choice of the $s_j$ that $f_{s_j}(x)=1$ or $f_{s_j}(y)=1$ for some $j\neq i$. We note that for each fixed x and j,
$$\Pr_{s_j}\left(f_{s_j}(x)=1\right) = \mathbb{E}_{s_j}\left[f_{s_j}(x)\right] = \mathbb{E}_{s_j}\left[f_x(s_j)\right] = \mathbb{E}_z[f(z)] \le \frac{1}{4m}.$$
Thus, by the union bound, the probability that either $f_{s_j}(x)=1$ or $f_{s_j}(y)=1$ for some $j\neq i$ is at most 1/2. Therefore, the expected number of adjacent pairs x, y with $f_{s_i}(x)=1$, $f_{s_i}(y)=0$, and $f_{s_j}(x)=f_{s_j}(y)=0$ for all $j\neq i$ is at least $\mathbb{AS}(f_{s_i})/2$. Therefore,
$$\mathbb{E}_{s_i}\left[\mathbb{AS}(F)\right] \ge \sum_{i=1}^m \mathbb{AS}(f)/2 = m\cdot\Omega\left(m^{-1}\sqrt{n\log(k)}\right) = \Omega\left(\sqrt{n\log(k)}\right),$$

as desired. This completes our proof.

Learning theory application

The proofs of Corollaries 1 and 2 are by what are now fairly standard techniques, but are included here for completeness. The proof of Corollary 1 is by a technique of Diakonikolas et al. in [8] for bounding the noise sensitivity in terms of the average sensitivity.

Proof.

As the noise sensitivity is an increasing function of $\varepsilon$ for $\varepsilon < 1/2$, we may round $\varepsilon$ down to $1/\lceil\varepsilon^{-1}\rceil$, and thus it suffices to consider $\varepsilon = 1/m$ for some integer m. We note that the pair of random variables x, y used to define the noise sensitivity with parameter $\varepsilon$ can be generated in the following way:
1. Randomly divide the n coordinates into m bins.

2. Randomly assign each coordinate a value in $\{\pm 1\}$ to obtain z.

3. For each bin i, randomly pick $b_i\in\{\pm 1\}$. Obtain x from z by multiplying all coordinates in the $i^{th}$ bin by $b_i$.

4. Obtain y from x by flipping the sign of all coordinates in a randomly chosen bin.

We note that this produces the same distribution on x and y, since x is clearly a uniform element of $\{\pm 1\}^n$ and the $i^{th}$ coordinate of y differs from the corresponding coordinate of x if and only if i lies in the bin selected in step 4. This happens independently and with probability 1/m for each coordinate.
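The four-step procedure is straightforward to implement, and the per-coordinate flip rate can be checked empirically; the following sketch is our own illustration, not code from the paper.

```python
import random

def binned_pair(n, m):
    """Generate (x, y) by the four-step procedure above."""
    bins = [random.randrange(m) for _ in range(n)]           # step 1
    z = [random.choice((-1, 1)) for _ in range(n)]           # step 2
    b = [random.choice((-1, 1)) for _ in range(m)]           # step 3
    x = [zi * b[bins[i]] for i, zi in enumerate(z)]
    chosen = random.randrange(m)                             # step 4
    y = [-xi if bins[i] == chosen else xi for i, xi in enumerate(x)]
    return x, y

# Each coordinate of y should differ from x independently with
# probability 1/m; here we check the marginal flip rate per coordinate.
n, m, trials = 8, 4, 40000
flips = [0] * n
for _ in range(trials):
    x, y = binned_pair(n, m)
    for i in range(n):
        flips[i] += x[i] != y[i]
print([round(c / trials, 3) for c in flips])   # all near 1/m = 0.25
```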

Next, let f be the indicator function of an intersection of at most k halfspaces. Note that after the bins and z are picked in steps 1 and 2 above, f(x) is given by g(b), where g is the indicator function of an intersection of at most k halfspaces in m variables. In the same notation, f(y) = g(b'), where b' is obtained from b by flipping the sign of a single random coordinate. Thus, by definition, $\Pr\left(g(b)\neq g(b')\right) = \frac{1}{m}\mathbb{AS}(g)$. Hence,
β„• π•Š Ξ΅ ( f ) = 𝔼 g 𝔸 π•Š ( g ) m ≀ O log ( k ) m m = log ( k ) m = Ξ΅ log ( k ) .

This completes our proof.

Corollary 2 will now follow by using this bound to bound the weight of the higher degree Fourier coefficients of such an f and then using the L1 polynomial regression algorithm of [1].

Proof.

Let f be the indicator function of an intersection of k halfspaces. Let f have Fourier transform given by
$$f = \sum_{S\subseteq[n]} \hat{f}(S)\chi_S.$$
It is well known that for $\rho\in(0,1)$,
$$\mathbb{NS}_\rho(f) = 2\sum_{S\subseteq[n]}\left(1 - (1-2\rho)^{|S|}\right)\hat{f}(S)^2.$$
Therefore, we have that
$$\mathbb{NS}_\rho(f) \gg \sum_{|S| > 1/\rho} \hat{f}(S)^2.$$
By Corollary 1, this tells us that
$$\sum_{|S| > 1/\rho} \hat{f}(S)^2 = O\left(\sqrt{\rho\log(k)}\right).$$
Setting $\rho = \varepsilon^2/(C\log(k))$ for sufficiently large values of C yields
$$\sum_{|S| > C\log(k)\varepsilon^{-2}} \hat{f}(S)^2 < \varepsilon.$$

Our claim now follows from [1], Remark 4.
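The Fourier identity for $\mathbb{NS}_\rho$ used above can be verified mechanically on a small example; the brute-force script below (our own check, with an arbitrarily chosen test function) computes both sides exactly.

```python
from itertools import product

n, rho = 4, 0.1
points = list(product((-1, 1), repeat=n))
f = {x: int(sum(x) > 0 or x[0] == 1) for x in points}   # arbitrary 0/1 function

subsets = [tuple(i for i in range(n) if mask >> i & 1) for mask in range(2 ** n)]

def chi(S, x):                     # character chi_S(x) = prod_{i in S} x_i
    out = 1
    for i in S:
        out *= x[i]
    return out

fhat = {S: sum(f[x] * chi(S, x) for x in points) / 2 ** n for S in subsets}

# Left side: NS_rho(f) = Pr(f(x) != f(y)), summed exactly over all pairs,
# weighting y by the probability of its noise pattern relative to x.
ns = sum((f[x] != f[y]) * rho ** d * (1 - rho) ** (n - d)
         for x in points for y in points
         for d in [sum(xi != yi for xi, yi in zip(x, y))]) / 2 ** n

# Right side: 2 * sum_S (1 - (1 - 2*rho)^|S|) * fhat(S)^2
rhs = 2 * sum((1 - (1 - 2 * rho) ** len(S)) * fhat[S] ** 2 for S in subsets)
print(ns, rhs)                     # the two agree up to floating point
```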

Declarations

Acknowledgements

This work was done with the support of an NSF postdoctoral research fellowship.

Authors’ Affiliations

Department of Computer Science and Engineering

References

1. Kalai, AT, Klivans, AR, Mansour, Y, Servedio, RA: Agnostically learning halfspaces. In: Proceedings of the 46th Foundations of Computer Science (2005)
2. Gotsman, C, Linial, N: Spectral properties of threshold functions. Combinatorica 14, 35–50 (1994). doi:10.1007/BF01305949
3. Diakonikolas, I, Raghavendra, P, Servedio, RA, Tan, LY: Bounding the average sensitivity and noise sensitivity of polynomial threshold functions. In: Proceedings of the 42nd ACM Symposium on Theory of Computing (2010)
4. Kane, DM: The Gaussian surface area and noise sensitivity of degree-d polynomial threshold functions. In: Proceedings of the 25th Annual Conference on Computational Complexity, pp. 205–210 (2010)
5. Kane, DM: The correct exponent for the Gotsman-Linial Conjecture. In: Proceedings of the 28th Annual Conference on Computational Complexity (2013)
6. Klivans, AR, O'Donnell, R, Servedio, RA: Learning geometric concepts via Gaussian surface area. In: Proceedings of the 49th Foundations of Computer Science, pp. 541–550 (2008)
7. Harsha, P, Klivans, AR, Meka, R: An invariance principle for polytopes. In: Proceedings of the 42nd ACM Symposium on Theory of Computing (2010)
8. Diakonikolas, I, Raghavendra, P, Tan, LY: Average sensitivity and noise sensitivity of polynomial threshold functions. Manuscript, http://arxiv.org/abs/0909.5011

Copyright

Β© Kane; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.