Algorithms for Boolean Function Query Properties
Abstract
We present new algorithms to compute fundamental properties of a Boolean function given in truth-table form. Specifically, we give an algorithm for block sensitivity, an algorithm for 'tree decomposition,' and an algorithm for 'quasisymmetry.' These algorithms are based on new insights into the structure of Boolean functions that may be of independent interest. We also give a subexponential-time algorithm for the space-bounded quantum query complexity of a Boolean function. To prove this algorithm correct, we develop a theory of limited-precision representation of unitary operators, building on work of Bernstein and Vazirani.
Keywords: algorithm; Boolean function; truth table; query complexity; quantum computation.
1 Introduction
The query complexity of Boolean functions, also called black-box or decision-tree complexity, has been well studied for years [3, 5, 13, 14, 16, 17]. Numerous Boolean function properties relevant to query complexity have been defined, such as sensitivity, block sensitivity, randomized and quantum query complexity, and degree as a real polynomial. But many open questions remain concerning the relationships between the properties. For example, are sensitivity and block sensitivity polynomially related? How small can quantum query complexity be, relative to randomized query complexity? Lacking answers to these questions, we may wish to gain insight into them by using computer analysis of small Boolean functions. But to perform such analysis, we need efficient algorithms to compute the properties in question. Such algorithms are the subject of the present paper.
Let f : {0,1}^n -> {0,1} be a Boolean function, and let N = 2^n be the size of the truth table of f. We seek algorithms that have modest running time as a function of N, given the truth table as input. The following table lists some properties important for query complexity, together with the sources of the most efficient algorithms for them of which we know. In the table, 'LP' stands for a linear programming reduction.
Query Property                        Source
Deterministic query complexity        [7]
Certificate complexity                [6]
Degree as a real polynomial           This paper
Approximate degree                    Obvious (LP)
Randomized query complexity           Obvious (LP)
Block sensitivity                     This paper
Quasisymmetry                         This paper
Tree decomposition                    This paper
Quantum query complexity              Obvious (exponential time)
  with qubit restriction              This paper
There is also a complexity-theory rationale for studying algorithmic problems such as those considered in this paper. Much effort has been devoted to finding Boolean function properties that do not naturalize in the sense of Razborov and Rudich [15], and that might therefore be useful for proving circuit lower bounds. In our view, it would help this effort to have a better general understanding of the complexity of problems on Boolean function truth tables—both upper and lower bounds. Such problems have been considered since the 1950s [19], but basic open questions remain, especially in the setting of circuit complexity [11]. This paper addresses the much simpler setting of query complexity.
We do not know of a polynomial-time algorithm to find quantum query complexity; we raise this as an open problem. However, even finding quantum query complexity via exhaustive search is nontrivial, since it involves representing unitary operators with limited-precision arithmetic. The problem is deeper than that of approximating unitary gates with bounded error, which was solved by Bernstein and Vazirani [4]. In Section 7 we resolve the problem, and give a constant-factor approximation algorithm for bounded-error quantum query complexity when the memory of the quantum computer is restricted to a bounded number of qubits.
We have implemented some of the algorithms discussed in this paper in a linkable C library [1], which is available for download.
2 Preliminaries
A Boolean function f is a total function from {0,1}^n onto {0,1}. We use x_1, ..., x_n to denote the variables of f, and use X, or alternatively x_1 ... x_n, to denote an input to f. If X is an input, |X| denotes the Hamming weight of X; if S is a set, |S| denotes the cardinality of S. Particular Boolean functions to which we refer are AND_n, OR_n, and XOR_n, the AND, OR, and XOR functions respectively on n inputs.
3 Previous Work
To our knowledge, no algorithms for block sensitivity, quasisymmetry, tree decomposition, or quantum query complexity have been previously published. But algorithms for simpler query properties have appeared in the literature.
Given a Boolean function f, the deterministic query complexity D(f) is the minimum height of a decision tree representing f. Guijarro et al. [7] give a simple dynamic programming algorithm to compute D(f). That f is given as a truth table is crucial: if f is nontotal and only the inputs for which f is defined are given, then computing D(f) is (when phrased as a decision problem) NP-complete [10].
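To make the recurrence behind such a dynamic program concrete, here is a minimal memoized sketch (our own encoding: the truth table is a Python list indexed by input bitmask, and the function name is ours; this is an illustration, not the algorithm of [7]):

```python
from functools import lru_cache

def deterministic_query_complexity(f, n):
    """D(f) for a truth table f of length 2**n (bit i of x encodes x_{i+1}),
    via: D = 0 on a constant subfunction, else
    min over variables i of 1 + max(D with x_i = 0, D with x_i = 1)."""
    full = (1 << n) - 1

    @lru_cache(maxsize=None)
    def d(free, fixed):
        # free: bitmask of still-unqueried variables; fixed: values of the rest
        mask = full ^ free
        inputs = [x for x in range(1 << n) if x & mask == fixed]
        if len({f[x] for x in inputs}) == 1:
            return 0
        return min(1 + max(d(free ^ (1 << i), fixed),
                           d(free ^ (1 << i), fixed | (1 << i)))
                   for i in range(n) if free >> i & 1)

    return d(full, 0)
```

For example, OR on two variables has D = 2, while a dictator function has D = 1.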
The certificate complexity C(f) is the maximum, over all inputs X, of the minimum number of input bits needed to prove the value of f(X). Equivalently, C(f) is the minimum height of a nondeterministic decision tree for f. Czort [6] gives an algorithm to compute C(f). Again, if f is not given as a full truth table, then computing C(f) is NP-complete [8].
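The definition translates directly into a brute-force reference implementation (a sketch in our own encoding, far slower than a serious algorithm but useful for cross-checking on small n):

```python
from itertools import combinations

def certificate_complexity(f, n):
    """C(f): the maximum over inputs X of the size of a smallest set S of
    variables whose values on X already force the value of f."""
    worst = 0
    for x in range(1 << n):
        for size in range(n + 1):
            # does some size-'size' set of variables certify x?
            certified = any(
                all(f[y] == f[x] for y in range(1 << n)
                    if y & mask == x & mask)
                for s in combinations(range(n), size)
                for mask in [sum(1 << i for i in s)]
            )
            if certified:
                worst = max(worst, size)
                break
    return worst
```

For OR_3 the all-zero input needs all three bits, so C = 3, even though every 1-input has a certificate of size 1.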
Let deg(f) be the minimum degree of an n-variate real multilinear polynomial p such that p(X) = f(X) for all X in {0,1}^n. The following lemma, adapted from Lemma 4 of [5], is easily seen to yield a dynamic programming algorithm for deg(f). Say that a function f obeys the parity property if the number of inputs X with odd parity for which f(X) = 1 equals the number of inputs X with even parity for which f(X) = 1.
Lemma 1 (Shi and Yao)
deg(f) equals the size of the largest restriction of f for which the parity property fails.
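Independently of the lemma, deg(f) can also be read off directly from the multilinear expansion of f, whose coefficients are computed by a Moebius (subset-sum) transform; a sketch with our own conventions:

```python
def degree(f, n):
    """deg(f): largest |S| with a nonzero coefficient in the (unique)
    multilinear representation of f, via an in-place Moebius transform."""
    coeff = list(f)  # coeff[S] becomes the coefficient of prod_{i in S} x_i
    for i in range(n):
        bit = 1 << i
        for s in range(1 << n):
            if s & bit:
                coeff[s] -= coeff[s ^ bit]
    return max((bin(s).count("1") for s in range(1 << n) if coeff[s] != 0),
               default=0)
```

For instance, AND_2 = x_1 x_2 has degree 2, a dictator has degree 1, and a constant has degree 0.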
4 Block Sensitivity
Block sensitivity, introduced in [13], is a Boolean function property that is used to establish lower bounds. There are several open problems that an efficient algorithm for block sensitivity might help to investigate [13, 3, 5].
Let X be an input to Boolean function f, and let B (a block) be a nonempty subset of {x_1, ..., x_n}. Let X^(B) be the input obtained from X by flipping the bits of B.
Definition 1
A block B is sensitive on X if f(X^(B)) != f(X), and minimal on X if B is sensitive and no proper subblock of B is sensitive. Then the block sensitivity bs_X(f) of X is the maximum number of disjoint minimal (or equivalently, sensitive) blocks on X. Finally bs(f) is the maximum of bs_X(f) over all inputs X.
The obvious algorithm to compute bs(f) (compute bs_X(f) for each input X using dynamic programming, then take the maximum) examines every sensitive block of every input, and uses about N^{2.585} time. Here we show how to reduce the complexity to O(N^{2.322} log N) by exploiting the structure of minimal blocks. Our algorithm has two main stages: one to identify minimal blocks and store them for fast lookup, another to compute bs_X(f) for each X using only minimal blocks. The analysis proceeds by showing that no Boolean function has too many minimal blocks, and therefore that if the algorithm is slow for some inputs (because of an abundance of minimal blocks), then it must be faster for other inputs.
Algorithm 1
(computes bs(f)) For each input X:

Identify all sensitive blocks of X; place them in an AVL tree T.

Loop over all sensitive blocks in T in lexicographic order (so that every block is visited after all of its subsets). For each block B, loop over all possible blocks that properly contain B. Remove from T all such blocks that are in the tree; such blocks have been identified as nonminimal.

Create 2^n - 1 lists, one list L_S for each nonempty subset S of variables. Then, for each minimal block B in T, insert a copy of B into each list L_S such that B is contained in S. The result is that, for each S, L_S contains exactly the minimal blocks in P(S), where P(S) is the power set of S.

Let a state be a partition of {x_1, ..., x_n} into (U, R). The set U represents a union of disjoint minimal blocks that have already been selected; the set R represents the set of variables not yet selected. Then bs_X(f) = G({x_1, ..., x_n}), where G is defined via the recursion G(R) = max over minimal blocks B in L_R of [1 + G(R \ B)]. Here the maximum evaluates to 0 if L_R is empty. Compute bs_X(f) using depth-first recursion, caching the values of G so that each needs to be computed only once.
The block sensitivity bs(f) is then the maximum of bs_X(f) over all inputs X.
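For reference, the recursion in step 4 can be prototyped directly; the sketch below (our own encoding, with blocks as bitmasks) searches over all sensitive blocks rather than only the minimal ones, which is correct but forgoes the speedup:

```python
from functools import lru_cache

def block_sensitivity(f, n):
    """bs(f): max over inputs X of the maximum number of disjoint
    sensitive blocks on X, via memoized search over remaining variables."""
    full = (1 << n) - 1
    best = 0
    for x in range(1 << n):
        sensitive = [b for b in range(1, 1 << n) if f[x ^ b] != f[x]]

        @lru_cache(maxsize=None)
        def g(remaining):
            # choose a sensitive block inside 'remaining', or stop
            return max((1 + g(remaining & ~b)
                        for b in sensitive if b & ~remaining == 0),
                       default=0)

        best = max(best, g(full))
    return best
```

On OR_3 the all-zero input has three disjoint singleton sensitive blocks, so bs = 3.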
Let b_k be the number of minimal blocks of f of size k, counted over all inputs. The analysis of Algorithm 1's running time depends on the following lemma, which shows that large minimal blocks are rare in any Boolean function.
Lemma 2
b_k <= N (n choose k) / 2^(k-1).
Proof
The number of positions that can be occupied by a minimal block of size k is (n choose k) for each input, or N (n choose k) for all inputs. Consider an input X with a minimal block B of size k, and consider the 2^k inputs X^(B_i) obtained by flipping the nonempty subsets B_1, ..., B_{2^k - 1} of B (together with X itself). By the minimality of B, for each proper nonempty subset B_i we have f(X^(B_i)) = f(X); it follows that B is not sensitive on X^(B_i), since flipping B from X^(B_i) lands on X^(B \ B_i), whose value is again f(X). On X^(B) itself, every proper nonempty subset of B is sensitive, so for k >= 2 the block B is sensitive but not minimal there. So of the N (n choose k) positions, only one out of 2^k can be occupied by a minimal block of size k >= 2. When k = 1 an additional factor of 2 is needed, since X^(B) has B as a minimal block.
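The counting in this lemma can be checked exhaustively for small n; the sketch below (our own test harness, with blocks encoded as bitmasks) verifies the bound b_k <= N (n choose k) / 2^(k-1) for every function on 3 variables:

```python
from itertools import product
from math import comb

def minimal_blocks_by_size(f, n):
    """counts[k] = number of (input, minimal block) pairs with |block| = k."""
    counts = [0] * (n + 1)
    for x in range(1 << n):
        sensitive = {b for b in range(1, 1 << n) if f[x ^ b] != f[x]}
        for b in sensitive:
            # minimal: no proper nonempty subset of b is sensitive
            if all(s not in sensitive for s in range(1, b) if s & b == s):
                counts[bin(b).count("1")] += 1
    return counts

n = 3
N = 1 << n
for bits in product([0, 1], repeat=N):
    counts = minimal_blocks_by_size(list(bits), n)
    assert all(counts[k] <= N * comb(n, k) / 2 ** (k - 1)
               for k in range(1, n + 1))
```

The bound is tight for XOR, where every singleton is a minimal block on every input.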
Theorem 4.1
Algorithm 1 takes O(N^{2.322} log N) time.
Proof
Step 1 takes time O(N^2 log N), totaled over all inputs: each input has 2^n - 1 candidate blocks, and each AVL insertion costs O(log N). Let us analyze step 2, which identifies the minimal blocks. For each input X, every block B that is selected is minimal, since each nonminimal block in T was removed in a previous iteration. Furthermore, for each block B of size k the number of removals of blocks is less than 2^{n-k}. Therefore the total number of removals is at most the sum over k of b_k 2^{n-k} <= N (n choose k) 2^{n-k} / 2^{k-1},
which sums to O(N^{2.322}), since the sum over k of (n choose k) / 4^k is (5/4)^n, and N^2 (5/4)^n = 5^n = N^{log_2 5}. Since each removal takes O(log N) time, the total time is O(N^{2.322} log N).
We next analyze step 3, which creates the lists L_S. Since each minimal block of size k is contained in 2^{n-k} sets of variables, the total number of insertions is at most the sum over k of b_k 2^{n-k}. So the time is O(N^{2.322} log N) by the previous calculation.
Finally we analyze step 4, which computes block sensitivity using the minimal blocks. Each evaluation of G(R) is performed at most once, and involves looping through a list L_R of minimal blocks contained in R, with each iteration taking O(log N) time. For each minimal block B of size k, the number of distinct sets R such that B appears in L_R is at most 2^{n-k}. Therefore, again, the total number of iterations is at most the sum over k of b_k 2^{n-k}, and a bound of O(N^{2.322} log N) follows.
5 Quasisymmetry
A Boolean function f is symmetric if its output depends only on |X|. Query complexity is well understood for symmetric functions: for example, for all nonconstant symmetric f, the deterministic query complexity is n, and the zero-error quantum query complexity is Theta(n) [3]. Thus, a program for analyzing Boolean functions might first check whether a function is symmetric, and if it is, dispense with many expensive tests. We call f quasisymmetric if some subset of input bits can be negated to make f symmetric. For example, the function x_1 AND NOT(x_2) is quasisymmetric but not symmetric. There is an obvious O(N^2)-time algorithm to test quasisymmetry (try all 2^n subsets of input bits); here we sketch a linear-time algorithm.
Call a restriction of f a left-restriction if, for some k, each variable x_i is fixed if and only if i <= k. Our algorithm recurses through all left-restrictions: when it is called on a restriction g with k variables fixed, it calls itself recursively on g with x_{k+1} = 0 and on g with x_{k+1} = 1. If either of these is not quasisymmetric, then the algorithm returns failure; otherwise, the algorithm tries to fit the two restrictions together in such a way that g itself is seen to be quasisymmetric. It does this by testing whether the two halves agree as symmetric functions after flipping a common set of bits, with separate routines for the special cases in which either half is a constant function or an AND or OR function. If the fitting-together process succeeds, then the algorithm returns both the output of g (encoded in compact form, as a symmetric function) and the set of input bits that must be flipped to make g symmetric. Crucially, these return values occupy only O(n) bits of space. The algorithm also has subroutines to handle the special cases in which g is a XOR function (possibly negated) or a constant function. In these cases g is symmetric no matter which set of input bits is flipped. Since the time used by each invocation is linear in the number of unfixed variables, the total time used is O(N). The following lemma shows that the algorithm deals with all of the ways in which a function can be quasisymmetric, which is key to the algorithm's correctness.
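The obvious quadratic-time test is easy to state in code, and is handy for validating the faster algorithm on small n (our own encoding: flip sets are bitmasks, and the function names are ours):

```python
def is_symmetric(f, n):
    """Symmetric: output depends only on the Hamming weight of the input."""
    by_weight = {}
    for x in range(1 << n):
        w = bin(x).count("1")
        if by_weight.setdefault(w, f[x]) != f[x]:
            return False
    return True

def quasisymmetry_witnesses(f, n):
    """All flip sets T (bitmasks) such that X -> f(X xor T) is symmetric;
    f is quasisymmetric iff this list is nonempty."""
    return [t for t in range(1 << n)
            if is_symmetric([f[x ^ t] for x in range(1 << n)], n)]
```

For f = x_1 AND NOT(x_2), flipping x_2 yields AND (and flipping x_1 yields NOR), so the function is quasisymmetric though not symmetric.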
Lemma 3
Let f be a Boolean function on n inputs. If two distinct (and noncomplementary) sets of input bits S and T can be flipped to make f symmetric, then f is either XOR_n, NOT(XOR_n), or a constant function.
Proof
Assume without loss of generality that S is empty; then f itself is symmetric, and T is nonempty with cardinality less than n (since S and T are distinct and noncomplementary). We know that f(X) depends only on |X|, and also that it depends only on |Y|, where y_i = 1 - x_i if x_i is in T and y_i = x_i otherwise. Choose any Hamming weight w with 0 < w <= n - 1, and consider an input X with |X| = w and with two variables x_i and x_j such that x_i is in T, x_j is not in T, x_i = 0, and x_j = 1. Let X' be X with the values of x_i and x_j exchanged. We have |X'| = |X|, but on the other hand |Y'| = |Y| - 2, so f(X') = f(X) by symmetry, and f takes equal values on the weights |Y| and |Y| - 2. Again applying symmetry, f(X) = f(X') whenever |X| and |X'| have the same parity. Therefore f is either XOR_n, NOT(XOR_n), or a constant function.
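Consistent with this lemma, only parity-like functions admit extra flip sets: for n = 3, every flip set works for XOR, while majority admits only the two complementary trivial ones. A quick check in our own encoding:

```python
def flip_sets(f, n):
    """Flip sets T (bitmasks) making X -> f(X xor T) depend only on |X|."""
    def symmetric(g):
        seen = {}
        return all(seen.setdefault(bin(x).count("1"), g[x]) == g[x]
                   for x in range(1 << n))
    return [t for t in range(1 << n)
            if symmetric([f[x ^ t] for x in range(1 << n)])]

xor3 = [bin(x).count("1") % 2 for x in range(8)]
maj3 = [1 if bin(x).count("1") >= 2 else 0 for x in range(8)]
```

Flipping any set of XOR's inputs yields XOR or its negation (both symmetric), so XOR has many noncomplementary witnesses; majority has only the empty set and its complement.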
6 Tree Decomposition
Many of the Boolean functions of most interest to query complexity are naturally thought of as trees of smaller Boolean functions: for example, AND-OR trees and majority trees. Thus, given a function f, one of the most basic questions we might ask is whether it has a tree decomposition and if so what it is. In this section we define a sense in which every Boolean function has a unique tree decomposition, and we prove its uniqueness. We also sketch an algorithm for finding the decomposition.
Definition 2
A distinct variable tree is a tree in which

Every leaf vertex is labeled with a distinct variable.

Every nonleaf vertex is labeled with a Boolean function having as many variables as the vertex has children, and depending on all of its variables.

Every nonleaf vertex has at least two children.
Such a tree represents a Boolean function in the obvious way. We call the tree trivial if it contains exactly one vertex.
A tree decomposition of f is a separation of f into the smallest possible components, with the exception of AND, OR, and XOR components (possibly with negated inputs or outputs), which are left intact. The choice of AND, OR, and XOR is not arbitrary; these are precisely the three components that "associate," so that, for example, AND(x_1, AND(x_2, x_3)) = AND(AND(x_1, x_2), x_3). Formally:
Definition 3
A tree decomposition of f is a distinct variable tree representing f such that:

No vertex is labeled with a function that can be represented by a nontrivial tree, unless that function is AND_k, OR_k, or XOR_k (up to negations) for some k.

No vertex labeled with an AND function has a child labeled with an AND function.

No vertex labeled with an OR function has a child labeled with an OR function.

No vertex labeled with a XOR function has a child labeled with a XOR function.

Any vertex labeled with a function that is constant on all but one input is labeled with an AND or OR function (of possibly negated inputs).
Let double-negation be the operation of negating the output of the function at some nonroot vertex v, then negating the corresponding input of the function at v's parent. Double-negation is a trivial way to obtain distinct decompositions. This caveat aside, we can assert uniqueness:
Theorem 6.1
Every Boolean function has a unique tree decomposition, up to double-negation.
Proof
Given a vertex v of a distinct variable tree, let S(v) be the set of variables in the subtree of which v is the root. Assume that f is represented by two distinct tree decompositions, T_1 and T_2, such that T_1 has a vertex v_1 and T_2 has a vertex v_2 with S(v_1) and S(v_2) incomparable (i.e. they intersect, but neither contains the other). Then let A = S(v_1) \ S(v_2), B = S(v_1) intersect S(v_2), C = S(v_2) \ S(v_1), and let D be the set of remaining variables. The crucial lemma is the following.
Lemma 4
f can be written as a function of g_A(A), g_B(B), g_C(C), and D, for some Boolean functions g_A, g_B, and g_C.
Proof
We can write f as f_1(g_1(A, B), C, D), where g_1 is Boolean; similarly we can write f as f_2(g_2(B, C), A, D). We have that, for all settings of the variables, f_1(g_1(A, B), C, D) = f_2(g_2(B, C), A, D). Consider a restriction that fixes all the variables in B, C, and D. This yields a function of A alone, equal on the one hand to f_1(g_1(A, B), C, D) and on the other to f_2(g_2(B, C), A, D). Therefore, for all such restrictions, f depends on only a single bit obtained from A, namely g_1(A, B). So we can write f as f_3(g_A(A), B, C, D) for some Boolean g_A; or, even more strongly, as f_3(g_A(A), g_2(B, C), D), since we know that f depends on B and C only through g_2. By analogous reasoning we can write f as f_4(g_1(A, B), g_C(C), D) for some functions f_4 and g_C. So we have f_3(g_A(A), g_2(B, C), D) = f_4(g_1(A, B), g_C(C), D). Next we restrict C, obtaining f_3(g_A(A), g_2(B, C), D) = f_4(g_1(A, B), g_C, D), which implies that, for all restrictions of A and C, f depends on only a single bit obtained from B, which we'll call g_B(B); note that g_B does not depend on A or C. This shows that g_1 and g_2 are equivalent up to negation of output, since each must depend on g_B(B) for some restriction of the remaining variables. Analogously, for both possible restrictions of g_A(A), f depends on only a single bit obtained from C, which can be taken equal to g_C(C). So we can write f = F(e_1(g_A(A), g_B(B)), e_2(g_B(B), g_C(C)), D), where e_1 and e_2 are two-input Boolean functions. We claim that e_1 and e_2 are, up to negations, the same function.
There must exist a setting of the variables in D such that f depends on both g_A(A) and g_C(C). Suppose there exists such a setting. e_1 must be a nonconstant function, so find a constant c such that e_1(g_A(A), c) depends on g_A(A), and choose a setting for B such that g_B(B) = c. (If e_1 is a XOR function, then either value of c will work, whereas if e_1 is an AND or OR function, then only one value of c will work.) For F to be well-defined, we need that whenever e_1(g_A(A), g_B(B)) is fixed, the value of f is determined (since F has no access to A and B themselves). This implies that F has the form F'(e_1(g_A(A), g_B(B)), g_C(C), D) for some function F'. Therefore f can be written as F'(e_1(g_A(A), g_B(B)), g_C(C), D) for some function F'.
Now repeat the argument for e_2. We obtain that f can be written as F''(g_A(A), e_2(g_B(B), g_C(C)), D) for some functions F'' and e_2. Therefore the two forms must agree on all inputs. So we can take e_1 = e_2 (equivalently, equal up to negations), and write f (or its negation) as a function of g_A(A), g_B(B), g_C(C), and D.
We now prove the main theorem: that f has a unique tree decomposition, up to double-negation. From Lemma 4, f effectively has as inputs the two bits g_A(A) and g_B(B), and the two bits g_B(B) and g_C(C). Thus we can check, by enumeration, that either v_1 and v_2 are labeled with the same function, and that function is AND, OR, or XOR (up to negations); or v_1 and v_2 are both labeled with either AND or OR (again up to negations). (Note that the negations can be different for v_1 and for v_2.)
In either case, for all settings of D there exists a function H, taking g_A(A), g_B(B), and g_C(C) as input, that captures all that needs to be known about f. Furthermore, since g_A, g_B, and g_C do not depend on D, neither does H, and we can write f as F(H(g_A(A), g_B(B), g_C(C)), D). Let u be the unique vertex in T_1 such that S(u) contains the union of A, B, and C and S(u) is minimal among all sets that do so. If u is labeled with AND, OR, or XOR, then v_2 cannot be a vertex of T_2. If u is labeled with some other function, then the function at u is represented by a nontrivial tree. Either way we obtain a contradiction.
Now that we have ruled out the possibility of incomparable subtrees, we can establish uniqueness. Call a set S of variables unifiable if there exists a vertex v, in some decomposition of f, such that S(v) = S. Let W be the collection of all unifiable sets. We have established that no pair S_1, S_2 in W is incomparable: either S_1 is contained in S_2, S_2 is contained in S_1, or S_1 and S_2 are disjoint. We claim that any decomposition must contain a vertex v with S(v) = S for every S in W. For suppose that S is not represented in some decomposition T. Certainly the full variable set is represented, so let P be the parent set of S in T: that is, the unique minimal set such that S is properly contained in P and there exists a vertex w in T with S(w) = P. Then the function at w is represented by a nontrivial tree containing a vertex u with S(u) = S; were it not, then S could not be a vertex set in any decomposition. Furthermore, the function at w cannot be AND, OR, or XOR. If it were, then again S could not be a vertex set in any decomposition, since the vertex would need to be labeled correspondingly with AND, OR, or XOR. Having determined the unique set of vertices that comprise any tree decomposition, the vertices' labels are also determined up to double-negation.
We now sketch an algorithm to construct the tree decomposition. In a distinct variable tree, let S(v) be the set of variables in the subtree of which v is the root. Then given a subset S of {x_1, ..., x_n}, we can clearly decide in linear time whether a distinct variable tree representing f could have a vertex v with S(v) = S. So we can construct a decomposition in O(N^2) time, by checking whether a vertex could have S(v) = S for each subset S with at least two elements.
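The linear-time test can be sketched as follows: a set S can serve as S(v) precisely when f factors as F(g(variables in S), remaining variables), i.e., when every subfunction on S obtained by fixing the outside variables is constant or equal, up to negation, to one common function g (a sketch under our own encoding; the helper name is ours):

```python
def can_be_subtree(f, n, s_mask):
    """Test whether f(X) can be written as F(g(X restricted to S), rest),
    where S is the variable set s_mask: every subfunction on S (one per
    setting of the outside variables) must be constant, or equal to a
    common nonconstant g or to its negation."""
    s_bits = [i for i in range(n) if s_mask >> i & 1]
    g = None  # the common nonconstant subfunction, once seen
    for z in range(1 << n):
        if z & s_mask:
            continue  # z ranges over settings of the outside variables only
        sub = []
        for a in range(1 << len(s_bits)):
            x = z
            for j, i in enumerate(s_bits):
                if a >> j & 1:
                    x |= 1 << i
            sub.append(f[x])
        if len(set(sub)) == 1:
            continue  # constant subfunctions impose no constraint
        if g is None:
            g = sub
        elif sub != g and [1 - v for v in sub] != g:
            return False
    return True
```

For f = (x_1 AND x_2) OR x_3, the set {x_1, x_2} passes (its subfunctions are AND and the constant 1), while {x_1, x_3} fails.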
The key insight for reducing the time to O(N^{1.585}) is to represent each restriction by a concise codeword, which takes up only O(n) bits rather than up to N bits. We create the codewords recursively, starting with the smallest restrictions and working up to larger ones. The codewords need to satisfy the following conditions:

Two restrictions g and h over the same set of variables get mapped to identical codewords if and only if g = h.

If g is the negation of h, then this fact is easy to tell given the codewords of g and h.

If a restriction is constant, then this fact is also easy to tell given its codeword.
We can satisfy these conditions by building up a binary search tree of restrictions at each recursive call, then assigning each restriction a codeword based on its position in the tree. For all k, each object inserted into the tree is a concatenation of two codewords of restrictions on k variables, representing a restriction on k + 1 variables.
After the codewords are created, a second phase of the algorithm deletes redundant AND, OR, and XOR vertices. This phase looks for vertices v_1 and v_2 with S(v_1) and S(v_2) incomparable, which, as a consequence of Theorem 6.1, can only have arisen by the associativity of AND, OR, or XOR. Both phases effectively perform a cheap operation for each pair (S', S) of subsets with S' contained in S, of which there are 3^n = N^{1.585}, so the complexity is O(N^{1.585}) up to a logarithmic factor.
7 Quantum Query Complexity
The quantum query complexity Q(f) of a Boolean function f is the minimum number of oracle queries needed by a quantum computer to evaluate f. Here we are concerned only with the bounded-error query complexity (defined in [3]), since approximating unitary matrices with finite precision introduces bounded error into any quantum algorithm. A quantum query algorithm proceeds by an alternating sequence of unitary transformations and query transformations, U_0, O, U_1, O, ..., O, U_T. Then Q(f) is the minimum of T over all such algorithms that compute f with bounded error.
There are several open problems that an efficient algorithm to compute Q(f) might help to investigate [2, 3, 5]. Unfortunately, we do not know of such an algorithm. Here we show that, if we limit the number of qubits, we can obtain a subexponential-time approximation algorithm via careful exhaustive search.
7.1 Overview of Result
For what follows, it will be convenient to extend the quantum oracle model to allow intermediate observations. With an unlimited workspace, this cannot change the number of queries needed [4]. In the space-bounded setting, however, it might make a difference.
We define a composite algorithm to be an alternating sequence A_1, d_1, A_2, d_2, ..., A_m, d_m. Each A_i is a quantum query algorithm that uses T_i queries and at most q qubits of memory. When A_i terminates, a basis state is observed. Each d_i is a decision point, which takes as input the sequence of basis states observed so far, and as output decides whether to (1) halt and return f = 0, (2) halt and return f = 1, or (3) continue to A_{i+1}. (The final decision point, d_m, must select between (1) and (2).) There are no computational restrictions placed on the decision points. However, a decision point cannot modify the quantum algorithms that come later in the sequence; it can only decide whether to continue with the sequence. For a particular input, let p_i be the probability, over all runs of the composite algorithm, that quantum algorithm A_i is invoked. Then the composite algorithm uses a total number of queries equal to the sum of p_i T_i over all i.
We define the space-bounded quantum query complexity of f to be the minimum number of queries used by any composite algorithm that computes f with error probability at most 1/3 and that is restricted to q qubits. We give a constant-factor approximation algorithm for this quantity whose running time is subexponential in N when q = O(log n). The difficulty in proving the result is as follows.
A unitary transformation is represented by a continuous-valued matrix, which might suggest that the quantum model of computation is analog rather than digital. But Bernstein and Vazirani [4] showed that, for a quantum computation taking T steps, the matrix entries need to be accurate only to within O(log T) bits of precision in the bounded-error model. However, when we try to represent unitary transformations on a computer with finite precision, a new problem arises. On the one hand, if we allow only matrices that are exactly unitary, we may not be able to approximate every unitary matrix. So we also need to admit matrices that are almost unitary. For example, we might admit a matrix if the norm of each row is sufficiently close to 1, and if the inner product of each pair of distinct rows is sufficiently close to 0. But how do we know that every such matrix is close to some actual unitary matrix? If it is not, then the transformation it represents cannot even approximately be realized by a quantum computer.
We resolve this issue as follows. First, we show that every almost-unitary matrix is close to some unitary matrix in a standard metric. Second, we show that every unitary matrix is close to some almost-unitary matrix representable with limited precision. Third, we upper-bound the precision that suffices for a quantum algorithm, given a fixed accuracy that the algorithm needs to attain.
An alternative approach to approximating quantum query complexity would be to represent each unitary matrix as a product of elementary gates. Kitaev [12] and independently Solovay [18] showed that a unitary matrix can be represented to within any desired accuracy by a product of gates drawn from a fixed finite set. But this approach yields a slower algorithm than ours. Perhaps the construction or its analysis can be improved; in any case, though, this approach is not as natural for the setting of query complexity.
7.2 AlmostUnitary Matrices
Let <u, v> denote the conjugate inner product of vectors u and v. The distance between matrices A and B is defined to be the maximum of |a_{ij} - b_{ij}| over all entries.
Definition 4
A matrix A is eps-almost-unitary if |<r_i, r_i> - 1| <= eps for every row r_i of A, and |<r_i, r_j>| <= eps for every pair of distinct rows r_i, r_j.
In the following lemma, we start with an almost-unitary matrix A and construct an actual unitary matrix that is close to A.
Lemma 5
Let A be an eps-almost-unitary M x M matrix, with eps sufficiently small. Then there exists a unitary matrix U such that the distance between U and A is at most 2^{O(M)} eps.
Proof
We first normalize each row r_i so that <r_i, r_i> = 1; this changes each entry by O(eps). We next form a unitary matrix from A by using the Classical Gram-Schmidt (CGS) orthogonalization procedure (see [9] for details). The idea is to project r_2 to make it orthogonal to r_1, then project r_3 to make it orthogonal to both r_1 and r_2, and so on. Initially we set u_1 = r_1. Then for each i, we set u_i = r_i minus the sum over j < i of (<r_i, u_j> / <u_j, u_j>) u_j, and normalize the result.

We need to show that the discrepancy between u_i and r_i does not increase too drastically as the recursion proceeds. Let d_i = |u_i - r_i|. By hypothesis, d_1 = 0. Assume by induction that d_j <= c_j eps for all j < i, where the c_j depend only on j. Each projection coefficient <r_i, u_j> / <u_j, u_j> then has magnitude O(c_j eps), since |<r_i, r_j>| <= eps and u_j differs from r_j by at most c_j eps. Summing the contributions of the at most M projections, one contribution per column, we obtain d_i <= c_i eps with c_i = 2^{O(i)}; the bound is maximized when every contribution attains its maximum. Adding the O(eps) from normalization yields a total discrepancy of 2^{O(M)} eps, as can be seen by working out the arithmetic for the worst case.
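The construction in the proof is easy to exercise numerically; a small real-valued sketch (the helper and the perturbation constants are ours):

```python
import math

def gram_schmidt(rows):
    """Classical Gram-Schmidt: orthogonalize each row against its
    predecessors, then normalize (real case for simplicity)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    ortho = []
    for r in rows:
        for u in ortho:
            c = dot(r, u)
            r = [a - c * b for a, b in zip(r, u)]
        norm = math.sqrt(dot(r, r))
        ortho.append([a / norm for a in r])
    return ortho

# A rotation matrix perturbed by ~1e-6 per entry is almost-unitary;
# CGS repairs it to an exactly unitary matrix that stays entrywise close.
theta = 0.7
exact = [[math.cos(theta), -math.sin(theta)],
         [math.sin(theta), math.cos(theta)]]
noisy = [[x + e for x, e in zip(row, errs)]
         for row, errs in zip(exact, [[1e-6, -2e-6], [3e-6, 1e-6]])]
repaired = gram_schmidt(noisy)
```

The repaired matrix is orthonormal to machine precision while remaining within a few multiples of the perturbation of the original.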
The next lemma, which is similar to Lemma 6.1.3 of [4], is a sort of converse to Lemma 5: we start with an arbitrary unitary matrix, and show that truncating its entries to p bits of precision produces an almost-unitary matrix.
Lemma 6
Let A and B be M x M matrices with |a_{ij} - b_{ij}| <= 2^{-p} for all i, j. If A is unitary, then B is (2 sqrt(M) 2^{-p} + M 2^{-2p})-almost-unitary.
Proof
First, where the ’s are entries of and the ’s are error terms satisfying . So by the CauchySchwarz inequality, differs from by at most . Second, for , where the ’s and ’s are error terms, and the argument proceeds analogously.
7.3 Searching for Quantum Algorithms
In this section we use the results on almostunitary matrices to construct an algorithm. First we need a lemma about error buildup in quantum algorithms, which is similar to Corollary 3.4.4 of [4] (though the proof technique is different).
Lemma 7
Let U_1, ..., U_T be M x M unitary matrices, let V_1, ..., V_T be arbitrary M x M matrices, and let v be an M-vector with |v| = 1. Suppose that, for all i, every entry of V_i - U_i has magnitude at most delta, where delta <= 1/(M T). Then V_T ... V_1 v differs from U_T ... U_1 v by at most 2 M T delta in the Euclidean norm.
Proof
For each i, let E_i = V_i - U_i. By hypothesis, every entry of E_i has magnitude at most delta; thus, each row or column of E_i has norm at most sqrt(M) delta, and the operator norm of E_i is at most M delta. Then V_T ... V_1 = (U_T + E_T) ... (U_1 + E_1). The right-hand side, when expanded, has 2^T terms. Any term containing k of the matrices E_i has norm at most (M delta)^k, and there are (T choose k) such terms, so the terms with k >= 1 can add at most (1 + M delta)^T - 1 to the discrepancy with U_T ... U_1 v. Since (1 + x)^T <= e^{Tx}, and since e^y - 1 <= 2y when y <= 1, the discrepancy is at most 2 M T delta in the Euclidean norm.
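In the regime M T delta <= 1 the error growth is essentially additive, which is easy to see numerically (a sketch with our own toy parameters, checked against the 2 M T delta form of the bound):

```python
import math

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def rotation(theta):
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta), math.cos(theta)]]

def perturbed(m, delta):
    # shift every entry by exactly delta (the worst single-entry error)
    return [[x + delta for x in row] for row in m]

T, M, delta = 5, 2, 1e-7
v_exact = [1.0, 0.0]
v_approx = [1.0, 0.0]
for t in range(T):
    u = rotation(0.1 + 0.2 * t)
    v_exact = matvec(u, v_exact)
    v_approx = matvec(perturbed(u, delta), v_approx)

err = math.sqrt(sum((a - b) ** 2 for a, b in zip(v_exact, v_approx)))
assert err <= 2 * M * T * delta
```

After five perturbed steps the accumulated error stays comfortably within the stated bound.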
Theorem 7.1
There exists an approximation algorithm for the q-qubit space-bounded quantum query complexity with a constant approximation ratio, whose running time is subexponential in N when q = O(log n).
Proof
Given f, we want, subject to the following two constraints, to find a composite algorithm that approximates f with a minimum number of queries. First, the algorithm uses at most q qubits, meaning that the relevant matrices are M x M with M = 2^q. Second, the correctness probability of the algorithm needs to be known only to a constant accuracy. Certainly the number of queries never needs to be more than n, for, although each quantum algorithm A_i is space-bounded, the composite algorithm need not be. Let delta be the error we can tolerate in the matrix entries, and let mu be the resultant error in the final states. Setting mu to a small constant, by Lemma 7 we have mu <= 2 M T delta. From the Cauchy-Schwarz inequality, one can show that an error of mu in a final state perturbs each observation probability by O(mu). Then solving for delta gives delta = O(1 / (M T)), which, since the query count per algorithm is at most n, is acceptable; we can verify that delta <= 1/(M T), as required by Lemma 7. If we generate almost-unitary matrices, they need to be within delta of actual unitary matrices; by Lemma 5 it suffices to use eps-almost-unitary matrices with eps = delta / 2^{O(M)}. Finally we need to ensure that we approximate every unitary matrix. Invoking Lemma 6, we choose the precision p so that 2 sqrt(M) 2^{-p} + M 2^{-2p} <= eps, and obtain that p = O(M + log(M n)) bits per entry is sufficient.
Therefore the number of bits of precision needed per entry, p, is modest, and only O(M^2 p) bits are needed to specify each matrix; so we can search through all candidate composite algorithms in time exponential in the description length. The amount of time needed to evaluate a composite algorithm is polynomial in N and M, and is absorbed into the exponent. The approximation algorithm is this: fix a constant error threshold somewhat below 1/2, find the smallest T such that the maximum probability of correctness over all T-query composite algorithms reaches that threshold (subject to the constant uncertainty in our estimates), and return T. The algorithm achieves a constant approximation ratio, for the following reason. First, the returned T cannot exceed the true space-bounded complexity at the weaker error threshold. Second, any algorithm whose correctness probability is a constant above 1/2 can be boosted: by repeating it until it returns the same answer twice (which takes either two or three repetitions), the correctness probability can be raised above 2/3. Finally, a simple calculation bounds the expected number of invocations before the same answer appears twice by a constant, which bounds the ratio between the two complexities.
8 Acknowledgments
I thank Umesh Vazirani for advice and encouragement, Rob Pike and Lorenz Huelsbergen for sponsoring the Bell Labs internship during which this work was done and for helpful discussions, Andris Ambainis and an anonymous reviewer for comments and corrections, Wim van Dam for a simplification in Section 7, and Peter Bro Miltersen for correspondence.
References
 [1] S. Aaronson, Boolean Function Wizard 1.0 (software library), http://www.cs.berkeley.edu/~aaronson/bfw, 2000.
 [2] A. Ambainis, Quantum lower bounds by quantum arguments, in Proceedings of the ThirtySecond Annual ACM Symposium on Theory of Computing, ACM, Portland, OR, 2000, pp. 636–643.
 [3] R. Beals, H. Buhrman, R. Cleve, M. Mosca, and R. de Wolf, Quantum lower bounds by polynomials, in Proc. 39th IEEE Symp. on Foundations of Comp. Sci., 1998, pp. 352–361.
 [4] E. Bernstein and U. Vazirani, Quantum complexity theory, SIAM J. Comput., 26:5(1997), pp. 1411–1473.
 [5] H. Buhrman and R. de Wolf, Complexity measures and decision tree complexity: a survey, to appear in Theoretical Comp. Sci.
 [6] S. L. A. Czort, The complexity of minimizing disjunctive normal form formulas, Master’s Thesis, University of Aarhus, 1999.
 [7] D. Guijarro, V. Lavín, and V. Raghavan, Exact learning when irrelevant variables abound, Information Proc. Lett., 70(1999), pp. 233–239.
 [8] T. Hancock, T. Jiang, M. Li, and J. Tromp, Lower bounds on learning decision lists and trees, Information and Computation, 126(1996), pp. 114–122.
 [9] K. Hoffman and R. Kunze, Linear Algebra, Prentice Hall, 1971.
 [10] L. Hyafil and R. L. Rivest, Constructing optimal binary decision trees is NPcomplete, Information Proc. Lett., 5(1976), pp. 15–17.
 [11] V. Kabanets and J.-Y. Cai, Circuit minimization problem, in Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, ACM, Portland, OR, 2000, pp. 73–79.
 [12] A. Yu. Kitaev, Quantum computations: algorithms and error correction, Russian Math. Surveys, 52:6(1997), pp. 1191–1249.
 [13] N. Nisan, CREW PRAMs and decision trees, SIAM J. Comput., 20:6(1991), pp. 999–1007.
 [14] N. Nisan and M. Szegedy, On the degree of Boolean functions as real polynomials, Comput. Complexity, 4:4(1994), pp. 301–313. Earlier version in STOC’92.
 [15] A. A. Razborov and S. Rudich, Natural proofs, J. Comput. System Sci., 55(1997), pp. 24–35.
 [16] D. Rubinstein, Sensitivity vs. block sensitivity of Boolean functions, Combinatorica, 15:2(1995), pp. 297–299.
 [17] M. Saks and A. Wigderson, Probabilistic Boolean decision trees and the complexity of evaluating game trees, in Proceedings of the TwentySeventh IEEE Symposium on Foundations of Computer Science, IEEE Computer Society, Ontario, Canada, 1986, pp. 29–38.
 [18] R. Solovay, Lie groups and quantum circuits, talk at workshop on Mathematics of Quantum Computation, Mathematical Sciences Research Institute, Spring 2000.
 [19] B. A. Trakhtenbrot, A survey of Russian approaches to perebor (bruteforce search) algorithms, Annals of the History of Computing, 6:4(1984), pp. 384–400.