The Goldman-Rota identity and the Grassmann scheme

We inductively construct an explicit (common) orthogonal eigenbasis for the elements of the Bose-Mesner algebra of the Grassmann scheme. The main step is a constructive, linear algebraic interpretation of the Goldman-Rota recurrence for the number of subspaces of a finite vector space. This interpretation shows that the up operator on subspaces has an explicitly given recursive structure. Using this we inductively construct an explicit orthogonal symmetric Jordan basis with respect to the up operator and write down the singular values, i.e., the ratios of the lengths of successive vectors in the Jordan chains. The collection of all vectors in this basis of a fixed rank forms a (common) orthogonal eigenbasis for the elements of the Bose-Mesner algebra of the Grassmann scheme. We also pose a bijective proof problem on the spanning trees of the Grassmann graphs.


Introduction
This paper presents constructive and explicit proofs of two basic linear algebraic results on the subspace lattice.
The first result concerns the recursive structure of the up operator on subspaces. It is an elementary observation that the up operator (or equivalently, the incidence matrices) on subsets of an $(n+1)$-set can be built from two copies of the up operator on subsets of an $n$-set. The main purpose of this paper is to extend this inductive approach to the subspace lattice. A classical identity of Goldman and Rota suggests that the up operator on subspaces of an $(n+1)$-dimensional vector space over $\mathbb{F}_q$ can be built from two copies of the up operator in dimension $n$ and $q^n - 1$ copies of the up operator in dimension $n-1$. Let us make this precise.
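For the subset case the two-copy structure is immediate to verify computationally. The sketch below (our own illustration, not notation from the paper) indexes subsets of an $(n+1)$-set by bitmasks, taking the last element as the new one, and checks that the up operator matrix of $B(n+1)$ has block form: two copies of $U_n$ on the diagonal, an identity block recording the covers obtained by adjoining the new element.

```python
import numpy as np

def up_matrix(n):
    """Up operator on subsets of {0,...,n-1}: entry (T, S) = 1 iff T covers S."""
    N = 1 << n
    U = np.zeros((N, N), dtype=int)
    for S in range(N):
        for i in range(n):
            if not S & (1 << i):
                U[S | (1 << i), S] = 1
    return U

n = 3
U, V = up_matrix(n), up_matrix(n + 1)
N = 1 << n
# order B(n+1) as: subsets of [n], then subsets containing the new element n;
# in this order U_{n+1} is block lower triangular
assert (V[:N, :N] == U).all()                      # top-left block: U_n
assert (V[:N, N:] == 0).all()                      # top-right block: 0
assert (V[N:, :N] == np.eye(N, dtype=int)).all()   # bottom-left: identity
assert (V[N:, N:] == U).all()                      # bottom-right: U_n
```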
Let $\mathbb{F}_q^n$ denote the $n$-dimensional vector space of all column vectors of length $n$ over $\mathbb{F}_q$ and let $B_q(n)$ denote the collection of all subspaces of $\mathbb{F}_q^n$, partially ordered by inclusion. The number of subspaces in $B_q(n)$ having dimension $k$ is the $q$-binomial coefficient $\binom{n}{k}_q$ and the total number of subspaces is the Galois number $G_q(n) = \sum_{k=0}^{n} \binom{n}{k}_q$. The Goldman-Rota identity is the recurrence
$$G_q(n+1) = 2\,G_q(n) + (q^n - 1)\,G_q(n-1). \qquad (1)$$
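The recurrence (1) is easy to confirm numerically. The helper names below are our own, not the paper's:

```python
def qint(m, q):
    """[m]_q = 1 + q + ... + q^(m-1)."""
    return sum(q**i for i in range(m))

def qbinom(n, k, q):
    """Gaussian binomial coefficient: number of k-dim subspaces of F_q^n."""
    if k < 0 or k > n:
        return 0
    num = den = 1
    for i in range(k):
        num *= qint(n - i, q)
        den *= qint(k - i, q)
    return num // den   # exact: the Gaussian binomial is an integer

def galois(n, q):
    """Galois number G_q(n): total number of subspaces of F_q^n."""
    return sum(qbinom(n, k, q) for k in range(n + 1))

# Goldman-Rota recurrence: G_q(n+1) = 2 G_q(n) + (q^n - 1) G_q(n-1)
for q in (2, 3, 4):
    for n in range(1, 8):
        assert galois(n + 1, q) == 2 * galois(n, q) + (q**n - 1) * galois(n - 1, q)
```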
We identify $\mathbb{F}_q^n$ with the subspace of all vectors in $\mathbb{F}_q^{n+1}$ with last component zero. Put $t = q^n - 1$. Motivated by the $B(n)$ case we can ask for the following poset theoretic interpretation of (1): is it possible to write $B_q(n+1)$ as a disjoint union $S_0 \cup S_1 \cup \cdots \cup S_t$, where $S_0, \ldots, S_t$ are intervals in $B_q(n+1)$, with $S_0$ order isomorphic to $B_q(n)$ and $S_1, \ldots, S_t$ order isomorphic to $B_q(n-1)$? At least for $q = 2$ and $n \ge 4$ the answer is no, as shown in [9].
We show that we can get a poset theoretic interpretation of (1) by considering a linear analog of (2). Moreover, the linear analog of the decomposition (2) can be explicitly given.
Let $P$ be a finite graded poset with rank function $r : P \to \mathbb{N} = \{0, 1, 2, \ldots\}$. The rank of $P$ is $r(P) = \max\{r(x) : x \in P\}$ and, for $i = 0, 1, \ldots, r(P)$, $P_i$ denotes the set of elements of $P$ of rank $i$. For a subset $S \subseteq P$, we set rankset$(S) = \{r(x) : x \in S\}$.
For a finite set $S$, let $V(S)$ denote the complex vector space with $S$ as basis. Let $v = \sum_{x \in S} \alpha_x x$, $\alpha_x \in \mathbb{C}$, be an element of $V(S)$. By the support of $v$ we mean the subset $\{x \in S : \alpha_x \neq 0\}$.
Let $P$ be a graded poset with $n = r(P)$. Then we have $V(P) = V(P_0) \oplus V(P_1) \oplus \cdots \oplus V(P_n)$ (vector space direct sum). An element $v \in V(P)$ is homogeneous if $v \in V(P_i)$ for some $i$, and if $v \neq 0$ we extend the notion of rank to nonzero homogeneous elements by writing $r(v) = i$. Given an element $v \in V(P)$, write $v = v_0 + \cdots + v_n$, $v_i \in V(P_i)$, $0 \le i \le n$. We refer to the $v_i$ as the homogeneous components of $v$. A subspace $W \subseteq V(P)$ is homogeneous if it contains the homogeneous components of each of its elements. For a homogeneous subspace $W \subseteq V(P)$ we set rankset$(W) = \{r(v) : v \text{ is a nonzero homogeneous element of } W\}$.
The up operator $U : V(P) \to V(P)$ is defined, for $x \in P$, by $U(x) = \sum_y y$, where the sum is over all $y$ covering $x$. We denote the up operator on $V(B_q(n))$ by $U_n$. For a finite vector space $X$ over $\mathbb{F}_q$ we denote by $B_q(X)$ the set of all subspaces of $X$ and by $U_X$ the up operator on $V(B_q(X))$.
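For small parameters the poset $B_q(n)$ and its up operator can be built by brute force. The following sketch (helper names are our own) enumerates subspaces as sets of vectors, recovers $G_2(3) = 16$, and checks that each $1$-dimensional subspace of $\mathbb{F}_2^3$ is covered by $[2]_2 = 3$ planes:

```python
from itertools import combinations, product

def subspaces(n, q):
    """All subspaces of F_q^n, each represented as a frozenset of vectors."""
    vecs = list(product(range(q), repeat=n))
    found = set()
    for r in range(n + 1):                    # spans of at most n generators
        for gens in combinations(vecs, r):
            span = {tuple([0] * n)}
            for g in gens:
                span = {tuple((a + c * b) % q for a, b in zip(v, g))
                        for v in span for c in range(q)}
            found.add(frozenset(span))
    return sorted(found, key=lambda s: (len(s), sorted(s)))

def up_operator(n, q):
    """Matrix of the up operator U_n: entry (Y, X) is 1 iff Y covers X."""
    subs = subspaces(n, q)
    U = [[1 if len(Y) == q * len(X) and X < Y else 0 for X in subs]
         for Y in subs]
    return subs, U

subs, U = up_operator(3, 2)
assert len(subs) == 16                        # the Galois number G_2(3)
for i, X in enumerate(subs):
    if len(X) == 2:                           # a 1-dim subspace: {0, v}
        assert sum(row[i] for row in U) == 3  # covered by [2]_2 = 3 planes
```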
Let $\langle\,,\,\rangle$ denote the standard inner product on $V(P)$, i.e., $\langle x, y \rangle = \delta(x, y)$ (Kronecker delta) for $x, y \in P$. Let $(V, f)$ be a pair consisting of a finite dimensional vector space $V$ (over $\mathbb{C}$) and a linear operator $f$ on $V$, and let $(W, g)$ be another such pair. By an isomorphism of the pairs $(V, f)$ and $(W, g)$ we mean a linear isomorphism $\theta : V \to W$ such that $\theta(f(v)) = g(\theta(v))$ for all $v \in V$. We give $V(B_q(n))$ (and $V(B_q(X))$ for a finite vector space $X$ over $\mathbb{F}_q$) the standard inner product. In Section 2 we prove the following result on the recursive structure of the pair $(V(B_q(n)), U_n)$; taking dimensions in it we recover (1).

Theorem 1.1 Set $t = q^n - 1$. There is an explicit orthogonal direct sum decomposition
$$V(B_q(n+1)) = V(B_q(n)) \oplus W(0) \oplus W(1) \oplus \cdots \oplus W(t), \qquad (3)$$
where
(i) $W(0), \ldots, W(t)$ are $U_{n+1}$-closed (i.e., closed under the action of $U_{n+1}$) homogeneous subspaces of $V(B_q(n+1))$, with rankset$(W(0)) = \{1, \ldots, n+1\}$ and rankset$(W(i)) = \{1, \ldots, n\}$, for $i = 1, \ldots, t$.
(ii) $V(B_q(n)) \oplus W(0)$ is $U_{n+1}$-closed and there is an explicit linear map $\theta_n : V(B_q(n)) \to W(0)$ that is an isomorphism of the pairs $(V(B_q(n)), qU_n)$ and $(W(0), U_{n+1})$, sending homogeneous elements to homogeneous elements, increasing rank by one, and satisfying
$$U_{n+1}(v) = U_n(v) + \theta_n(v), \quad v \in V(B_q(n)), \qquad (4)$$
and $\langle \theta_n(u), \theta_n(v) \rangle = q^{\,n-k}\, \langle u, v \rangle$ for homogeneous $u, v$ of rank $k$.

(iii) For $i = 1, \ldots, t$ there is an explicit linear map $\gamma_{n-1}(i) : V(B_q(n-1)) \to W(i)$ that is an isomorphism of the pairs $(V(B_q(n-1)), U_{n-1})$ and $(W(i), U_{n+1})$, sending homogeneous elements to homogeneous elements, increasing rank by one, and satisfying $\langle \gamma_{n-1}(i)(u), \gamma_{n-1}(i)(v) \rangle = q^{\,n+k}\, \langle u, v \rangle$ for homogeneous $u, v$ of rank $k$.

Our second main result concerns the explicit construction of orthogonal symmetric Jordan bases. Let $P$ be a finite graded poset with rank function $r$. A graded Jordan chain in $V(P)$ is a sequence
$$s = (v_1, \ldots, v_h) \qquad (7)$$
of nonzero homogeneous elements of $V(P)$ such that $U(v_{i-1}) = v_i$, for $i = 2, \ldots, h$, and $U(v_h) = 0$ (note that the elements of this sequence are linearly independent, being nonzero and of different ranks). We say that $s$ starts at rank $r(v_1)$ and ends at rank $r(v_h)$. A graded Jordan basis of $V(P)$ is a basis of $V(P)$ consisting of a disjoint union of graded Jordan chains in $V(P)$. The graded Jordan chain (7) is said to be a symmetric Jordan chain (SJC) if the sum of the starting and ending ranks of $s$ equals $r(P)$, i.e., $r(v_1) + r(v_h) = r(P)$. A symmetric Jordan basis (SJB) of $V(P)$ is a basis of $V(P)$ consisting of a disjoint union of symmetric Jordan chains in $V(P)$.
Using Theorem 1.1 we prove the following result in Section 3.

Theorem 1.2
There is an algorithm to inductively construct an explicit orthogonal SJB J q (n) of V (B q (n)). When expressed in the standard basis the vectors in J q (n) have coefficients that are integral multiples of qth roots of unity. In particular, the coefficients are integral when q = 2.
Let $0 \le k \le n/2$ and let $(x_k, \ldots, x_{n-k})$ be any SJC in $J_q(n)$ starting at rank $k$ and ending at rank $n-k$. Then we have, for $k \le u < n-k$,
$$\frac{\| x_{u+1} \|}{\| x_u \|} = \sqrt{q^{\,k}\, [u+1-k]_q\, [n-k-u]_q}, \qquad (8)$$
where $[m]_q = 1 + q + \cdots + q^{m-1}$. A standard argument (recalled in Section 3) shows that the set $\{v \in J_q(n) : r(v) = m\}$ forms a common orthogonal eigenbasis for the elements of the Bose-Mesner algebra of the Grassmann scheme of $m$-subspaces.
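The ratios in (8) are, equivalently, the singular values of the up operator restricted to consecutive ranks, and this can be confirmed numerically in a small case. The sketch below is our own check, for $q = 2$, $n = 3$: it computes the singular values of the incidence matrix between lines and planes of $\mathbb{F}_2^3$ and compares them with the values $\sqrt{q^k\,[u+1-k]_q\,[n-k-u]_q}$ predicted for chains starting at rank $k$, with multiplicities $\binom{n}{k}_q - \binom{n}{k-1}_q$.

```python
import numpy as np
from itertools import combinations

# subspaces of F_2^3: subsets of {0,...,7} containing 0 and closed under XOR
subs = []
for r in range(4):
    for gens in combinations(range(1, 8), r):
        S = {0}
        for g in gens:
            S |= {v ^ g for v in S}
        if S not in subs:
            subs.append(S)
lines  = [S for S in subs if len(S) == 2]   # 1-dim subspaces: 7 of them
planes = [S for S in subs if len(S) == 4]   # 2-dim subspaces: 7 of them

# up operator from rank 1 to rank 2: entry (P, L) = 1 iff the line L lies in P
M = np.array([[1 if L < P else 0 for L in lines] for P in planes])

sv = sorted(np.linalg.svd(M, compute_uv=False), reverse=True)
# predicted by (8) at u = 1: one value sqrt([2]_2 [2]_2) = 3 (chain with k = 0)
# and six values sqrt(2 [1]_2 [1]_2) = sqrt(2) (chains with k = 1, mult 7 - 1)
pred = sorted([3.0] + [np.sqrt(2)] * 6, reverse=True)
assert np.allclose(sv, pred)
```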
The numbers on the right hand side of (8) are called the singular values of the up operator. These are important for applications. The existence of an orthogonal SJB of V (B q (n)) satisfying (8) was first stated explicitly in [22], with a proof based on [7]. See [18] for a proof based on the sl(2, C) method [15]. Very closely related results are shown in [6,20,13,1]. The existence of an orthogonal SJB satisfying (8) has several applications: in [19] we showed that the commutant of the GL(n, F q )-action on V (B q (n)) block diagonalizes with respect to the orthonormal basis given by the normalization of J q (n) and we used (8) to make this block diagonalization explicit, thereby obtaining a q-analog of the formula from [16] for explicit block diagonalization of the commutant of the symmetric group action on V (B(n)). This includes, as a special case, a formula for the eigenvalues of the elements of the Bose-Mesner algebra of the Grassmann scheme [5,4]. For other approaches to explicit block diagonalization see [13,1], the latter of which also gives applications to bounds on projective codes using semidefinite programming. In [18] we used (8) to give a positive combinatorial formula for the number of spanning trees of the q-analog of the n-cube and to show that the Laplacian eigenvalues of the Grassmann graphs, known in principle since [5], admit an elegant closed form. For another approach to the Laplacian eigenvalues of the Grassmann graphs see [13]. At the end of this paper we pose a bijective proof problem on spanning trees of the Grassmann graphs.
From the point of view of applications (and especially that of polynomial time computation) the explicit construction of the basis $J_q(n)$ is not important, as even writing down $J_q(n)$ takes exponential time. But it is of interest from a mathematical standpoint, yielding useful additional insight into the linear structure of the subspace lattice. The situation is similar to that of bijective versus nonbijective proofs in enumeration. Substituting $q = 1$ in Theorem 1.2 we recover the explicit orthogonal SJB of $V(B(n))$ constructed in [17]. This basis was given a representation theoretic characterization in [17], namely, that it is the canonically defined symmetric Gelfand-Tsetlin basis of $V(B(n))$. Similarly, the basis $J_q(n)$ should also be studied from a representation theoretic viewpoint. We hope to return to this later.


Goldman-Rota recurrence

In this section we prove Theorem 1.1. As stated in the introduction, we identify $\mathbb{F}_q^k$, for $k < n$, with the subspace of $\mathbb{F}_q^n$ consisting of all vectors with the last $n-k$ components zero. We denote by $e_1, \ldots, e_n$ the standard basis vectors of $\mathbb{F}_q^n$. So $B_q(k)$ consists of all subspaces of $\mathbb{F}_q^n$ contained in the subspace spanned by $e_1, \ldots, e_k$. Define $A_q(n)$ to be the collection of all subspaces in $B_q(n)$ not contained in the hyperplane spanned by $e_1, \ldots, e_{n-1}$. For $1 \le k \le n$, let $A_q(n)_k$ denote the set of all subspaces in $A_q(n)$ with dimension $k$. We consider $A_q(n)$ as an induced subposet of $B_q(n)$.

Define a map $B_q(n-1) \to A_q(n)$, $X \mapsto \widetilde{X}$, as follows: for $X \in B_q(n-1)$, define $\widetilde{X}$ to be the subspace in $A_q(n)$ spanned by $X$ and $e_n$.
We shall now give a canonical orthogonal decomposition of $V(A_q(n+1))$. Let $H(n+1, q)$ denote the group of all $(n+1) \times (n+1)$ matrices over $\mathbb{F}_q$ of the form $\left(\begin{smallmatrix} I & a \\ 0 & 1 \end{smallmatrix}\right)$, $a \in \mathbb{F}_q^n$, where $I$ is the $n \times n$ identity matrix.
The additive abelian group $\mathbb{F}_q^n$ is isomorphic to $H(n+1, q)$ via $\phi : \mathbb{F}_q^n \to H(n+1, q)$ given by $\phi(a) = \left(\begin{smallmatrix} I & a \\ 0 & 1 \end{smallmatrix}\right)$. There is a natural (left) action of $H(n+1, q)$ on $A_q(n+1)$.

(ii) Suppose $Y$ covers $X$. Then the bipartite graph of the covering relations between $[Y]$ and $[X]$ is regular, with degree $q$ on the $[Y]$ side and degree $1$ on the $[X]$ side.
Proof (i) This is clear.
(ii) Since the action of $H(n+1, q)$ on $A_q(n+1)$ is clearly order preserving, it follows that the bipartite graph in the statement is regular. Let $Y' \in [Y]$ also cover $X$. Then $H(n+1)(Y') = H(n+1)(Y)$ and it follows from Lemma 2.1(iii) that $Y = Y'$. So the degree on the $[X]$ side is 1. Let $\dim(X) = k$. Then, by Lemma 2.1(iv), $|[Y]| = q^{\,n-k}$ and $|[X]| = q^{\,n+1-k}$ and hence, by regularity, the degree on the $[Y]$ side is $q$.
(iii) We may assume that $Y$ covers $X$. If $X \neq X' \in [X]$ then clearly $X \cap X' = H(n+1)(X)$. So, by part (ii) and Lemma 2.1(iii), we can write $Y$ as a union.

Let $I_q(n)$ denote the set of all distinct irreducible characters (all of degree 1) of $H(n+1, q)$ and let $N_q(n)$ denote the set of all distinct nontrivial irreducible characters of $H(n+1, q)$.
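The group $H(n+1, q)$ and the isomorphism $\phi$ are easy to realize concretely. The sketch below (our own, with illustrative values of $q$, $a$, $b$) checks that matrix multiplication mod $q$ corresponds to vector addition, and that each element of $H(n+1, q)$ fixes $\mathbb{F}_q^n$ (last coordinate zero) pointwise:

```python
import numpy as np

def phi(a, q):
    """phi(a): the (n+1) x (n+1) matrix [[I, a], [0, 1]] over F_q."""
    n = len(a)
    M = np.eye(n + 1, dtype=int)
    M[:n, n] = a
    return M

q = 3
a, b = np.array([1, 2, 0]), np.array([2, 2, 1])
# multiplication mod q corresponds to addition of the a-vectors,
# so phi is a group isomorphism (F_q^n, +) -> H(n+1, q)
assert ((phi(a, q) @ phi(b, q)) % q == phi((a + b) % q, q)).all()
# each phi(a) fixes the hyperplane F_q^n pointwise
v = np.array([2, 1, 0, 0])
assert ((phi(a, q) @ v) % q == v).all()
```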
Let $\psi_k$ (respectively, $\psi$) denote the character of the permutation representation of $H(n+1, q)$ on $V(A_q(n+1)_k)$ (respectively, $V(A_q(n+1))$) corresponding to the left action. Clearly $\psi = \sum_{k=1}^{n+1} \psi_k$. Below $[\,,\,]$ denotes the character inner product, and the $q$-binomial coefficient $\binom{n}{k}_q$ is taken to be zero when $n$ or $k$ is $< 0$.
Now assume $g \in H(n+1, q)$, $g \neq I$, and let $X \in A_q(n+1)_k$. Let the last column of $g$ be $(a_1, \ldots, a_n, 1)^t$ ($t$ = transpose), where not all the $a_i$'s are 0. Now note that $g$ sends a vector $(b_1, \ldots, b_n, 1)^t$ to $(a_1 + b_1, \ldots, a_n + b_n, 1)^t$.
Thus $g$ either fixes all elements of $[X]$ or none of them.
(d) The number of subspaces in $B_q(n)_{k-1}$ containing the nonzero vector $(a_1, \ldots, a_n, 0)^t$ is $\binom{n-1}{k-2}_q$.
It follows from items (b), (c), (d) above that $\psi_k(g) = q^{\,n-k+1} \binom{n-1}{k-2}_q$.
(ii) This follows from the well known result that the multiplicity of the trivial representation in a permutation representation is the number of orbits, which in the present case is $\binom{n}{k-1}_q$.
Thus (below the sum is over all ...) where in the last step we have used the $q$-Pascal triangle (see Section 1.7 in [21]).

(iv) Let $1 \le k \le n+1$. Restricting (9) to dimension $k$ we get the following orthogonal decomposition. Splitting $V(A_q(n+1)_k)$ into $H(n+1, q)$-irreducibles and taking dimensions using parts (ii) and (iii) we get the result. The initial conditions are easily verified. ✷

Let $W(0)$ (respectively, $W(0)_k$) denote the isotypical component of $V(A_q(n+1))$ (respectively, $V(A_q(n+1)_k)$) corresponding to the trivial representation of $H(n+1, q)$ and, for $\chi \in N_q(n)$, let $W(\chi)$ (respectively, $W(\chi)_k$) denote the isotypical component of $V(A_q(n+1))$ (respectively, $V(A_q(n+1)_k)$) corresponding to the irreducible representation of $H(n+1, q)$ with character $\chi$. We have the following orthogonal decompositions, the last of which is canonical (note that $W(\chi)_{n+1}$ is the zero module, by Theorem 2.3(iii)).
Proof Let $\{h_0 = 1, h_1, \ldots, h_t\}$ be a set of distinct coset representatives of $G_X$. Write $[X] = \{X = X_0, X_1, \ldots, X_t\}$ and assume without loss of generality that $h_i X = X_i$, $0 \le i \le t$. Note that $G_X$ is the stabilizer of all the elements of $[X]$.
The result follows since $\sum_{g \in G_X} \chi(g) = 0$ for every nontrivial character $\chi$ of $G_X$. ✷

Theorem 2.5 (i) Let $\chi \in I_q(n)$ and $X, Y \in A_q(n+1)$ with $X \sim Y$. Then $p(\chi)(X)$ is a nonzero multiple of $p(\chi)(Y)$.

(iii) Let $\chi \in I_q(n)$ and let $X, Y \in B_q(n)$ with $X$ covering $Y$.

(iv) Then $\theta_n$ is an isomorphism of the pairs $(V(B_q(n)), qU_n)$ and $(W(0), U_{n+1})$.

(v) Let $\chi \in N_q(n)$. From Theorem 2.3(iii) we have $\dim W(\chi)_n = 1$. It thus follows from part (ii) that there is a unique element. Then $\lambda(\chi)$ is an isomorphism of the pairs $(V(B_q(X)), U_X)$ and $(W(\chi), U_{n+1})$.

(vi) For $X \in B_q(n)_{n-1}$ the number of $\chi \in N_q(n)$ such that $p(\chi)(\widetilde{X}) \neq 0$ is $q - 1$.
(iv) By Theorem 2.3(ii) the dimensions of $V(B_q(n))$ and $W(0)$ are the same. For $X_1 \neq X_2 \in B_q(n)$ the supports of $\theta_n(X_1)$ and $\theta_n(X_2)$ are disjoint. It follows that $\theta_n$ is a vector space isomorphism.
We have (below the sum is over all $Z$ covering $X$ in $B_q(n)$) $\theta_n(qU_n(X)) = q \sum_Z \theta_n(Z)$. Similarly (in the second step below $T$ varies over all subspaces covering $\widetilde{Y}$ and in the third step $Z$ varies over all subspaces in $B_q(n)$ covering $X$; we have used Lemma 2.1(ii) and Lemma 2.2(ii) to go from the second to the third step).

(v) By part (iii) it follows that $\lambda(\chi)(Y) \neq 0$ for all $Y \in B_q(X)$. By Theorem 2.3(iii) the dimensions of $V(B_q(X))$ and $W(\chi)$ are the same. For $Y_1 \neq Y_2 \in B_q(X)$ the supports of $\lambda(\chi)(Y_1)$ and $\lambda(\chi)(Y_2)$ are disjoint. It follows that $\lambda(\chi)$ is a vector space isomorphism. Now, for $Y \in B_q(X)$, we have (below the sum is over all $Z$ covering $Y$ in $B_q(X)$). Let $Y \in B_q(X)$. Before calculating $U_{n+1}\lambda(\chi)(Y)$ we make the following observation. By Lemma 2.1(ii) every element covering $\widetilde{Y}$ is of the form $\widetilde{Z}$, for some $Z$ covering $Y$ in $B_q(n)$. Suppose $Z \in B_q(n) - B_q(X)$. Since $\dim(W(\chi)) = G_q(n-1)$ (by Theorem 2.3(iii)), it follows by parts (ii) and (iii) that $p(\chi)(\widetilde{Z}) = 0$.
We now calculate $U_{n+1}\lambda(\chi)(Y)$. In the second step below we have used the fact that $U_{n+1}$ is $H(n+1, q)$-linear, and in the third step, using the observation in the paragraph above, we may restrict the sum to all $Z$ covering $Y$ in $B_q(X)$.
We will now show that $\|p(\chi)(\widetilde{Y})\|^2 = q^{\,n+k}$ if $Y \in B_q(X)$ with $\dim(Y) = k$. This will prove (19).
Since, for $\chi \in N_q(n)$ and $X \in B_q(n)_{n-1}$, the support of $p(\chi)(\widetilde{X})$ is contained in $[\widetilde{X}]$ and $p(\chi)(\widetilde{X})$ is orthogonal to $p(\pi)(\widetilde{X})$ (where $\pi$ is the trivial character), the result now follows by part (ii). ✷

To use Theorem 2.5 for computations we need the character table of $H(n+1, q)$, which is easy to write down explicitly since $H(n+1, q)$ is a direct sum of $n$ cyclic groups of order $q$. We now give a small example to illustrate part (v) of Theorem 2.5.
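Since $H(n+1, q) \cong (\mathbb{Z}/q)^n$, its character table consists of the degree-1 characters $\chi_a(b) = \omega^{a \cdot b}$, $\omega$ a primitive $q$th root of unity. A sketch (with our own helper names) generating the table and checking orthogonality of the rows:

```python
import numpy as np
from itertools import product

q, n = 3, 2
omega = np.exp(2j * np.pi / q)
elems = list(product(range(q), repeat=n))   # H(n+1, q) ~ (Z/q)^n

def chi(a):
    """Degree-1 character indexed by a in (Z/q)^n: chi_a(b) = omega^(a.b)."""
    return np.array([omega ** (sum(x * y for x, y in zip(a, b)) % q)
                     for b in elems])

# q^n distinct characters, pairwise orthogonal, each of squared norm q^n
T = np.array([chi(a) for a in elems])
assert np.allclose(T @ T.conj().T, q**n * np.eye(q**n))
```

All coefficients appearing in the table are $q$th roots of unity, in line with the integrality statement of Theorem 1.2.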
Similarly we can check the remaining cases. We have now proved most of Theorem 1.1, except for one small part. Let $X \in B_q(n)_{n-1}$. The pairs $(V(B_q(X)), U_X)$ and $(V(B_q(n-1)), U_{n-1})$ are clearly isomorphic, with many possible isomorphisms. We now define a canonical isomorphism, based on the concept of a matrix in Schubert normal form.
An $n \times k$ matrix $M$ over $\mathbb{F}_q$ is in Schubert normal form (or column reduced echelon form) provided
(i) Every column is nonzero.
(ii) The first nonzero entry in every column is a 1. Let the first nonzero entry in column $j$ occur in row $r_j$.
(iii) We have $r_1 < r_2 < \cdots < r_k$ and the submatrix of $M$ formed by the rows $r_1, r_2, \ldots, r_k$ is the $k \times k$ identity matrix.
It is well known that every $k$-dimensional subspace of $\mathbb{F}_q^n$ is the column space of a unique $n \times k$ matrix in Schubert normal form (see Proposition 1.7.3 in [21], where the discussion is in terms of the row space).
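Schubert normal forms are easy to enumerate directly, giving a constructive check that there is exactly one per $k$-dimensional subspace, i.e., $\binom{n}{k}_q$ of them. The helper below is our own illustration:

```python
from itertools import combinations, product

def schubert_forms(n, k, q):
    """All n x k matrices over F_q in Schubert normal form, one per
    k-dimensional subspace of F_q^n (returned as tuples of rows)."""
    forms = []
    for pivots in combinations(range(n), k):  # rows r_1 < ... < r_k of leading 1s
        # free entries sit below the leading 1 of their column, outside pivot rows
        free = [(i, j) for j in range(k) for i in range(pivots[j] + 1, n)
                if i not in pivots]
        for vals in product(range(q), repeat=len(free)):
            M = [[0] * k for _ in range(n)]
            for j, r in enumerate(pivots):
                M[r][j] = 1
            for (i, j), v in zip(free, vals):
                M[i][j] = v
            forms.append(tuple(map(tuple, M)))
    return forms

# the counts match the q-binomial coefficients:
# [4 choose 2]_2 = 35 and [3 choose 1]_3 = 13
assert len(schubert_forms(4, 2, 2)) == 35
assert len(schubert_forms(3, 1, 3)) == 13
```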
Let $X \in B_q(n)_{n-1}$ and let $M(X)$ be the $n \times (n-1)$ matrix in Schubert normal form with column space $X$. The map $\tau(X) : \mathbb{F}_q^{n-1} \to X$ given by $e_j \mapsto$ column $j$ of $M(X)$ is clearly a linear isomorphism, and it gives rise to an isomorphism of the pairs $(V(B_q(n-1)), U_{n-1})$ and $(V(B_q(X)), U_X)$ given by $\mu(X)(Y) = \tau(X)(Y)$, $Y \in B_q(n-1)$.
Proof (of Theorem 1.1) It is convenient to write the orthogonal decomposition (3) as follows:
$$V(B_q(n+1)) = V(B_q(n)) \oplus W(0) \oplus \Bigl( \oplus_{\chi \in N_q(n)} W(\chi) \Bigr). \qquad (20)$$
Note that $|N_q(n)| = q^n - 1$.
(i) We have already shown that each of $W(0)$ and $W(\chi)$, $\chi \in N_q(n)$, is $U_{n+1}$-closed. The ranksets of $W(0)$ and $W(\chi)$ are also easily seen to be as stated.

Orthogonal symmetric Jordan basis
In this section we prove Theorem 1.2 and give an application to the Grassmann scheme. We also pose a bijective proof problem on the Grassmann graphs.
Proof (of Theorem 1.2) The proof is by induction on $n$, the result being clear for $n = 0, 1$.
Let $\chi \in N_q(n)$ and let $(x_k, \ldots, x_{n-1-k})$ be a SJC in $J_q(n-1)$ starting at rank $k$ and ending at rank $n-1-k$. Then, by Theorem 1.1 applied to the decomposition (20), $(y_{k+1}, \ldots, y_{n-k})$, where $y_{u+1} = \gamma_{n-1}(\chi)(x_u)$, $k \le u \le n-1-k$, is a SJC in $W(\chi)$ (with respect to $U_{n+1}$) starting at rank $k+1$ and ending at rank $n-k$. By the induction hypothesis and the norm condition in Theorem 1.1(iii) we have, for $k+1 \le u < n-k$,
$$\frac{\|y_{u+1}\|}{\|y_u\|} = \sqrt{q}\,\frac{\|x_u\|}{\|x_{u-1}\|} = \sqrt{q^{\,k+1}\,[u-k]_q\,[n-k-u]_q},$$
which is (8) for a chain starting at rank $k+1$ and ending at rank $n-k$ in $B_q(n+1)$. Doing the above procedure for every SJC in $J_q(n-1)$ we get an orthogonal SJB of $W(\chi)$ satisfying (8). Note that, by the definition of $\lambda(\chi)$, if the coefficients (in the standard basis) of the vectors in $J_q(n-1)$ are integral multiples of $q$th roots of unity then so are the coefficients of the vectors in the SJB of $W(\chi)$. Similarly, doing the above procedure for every $\chi \in N_q(n)$ we get an orthogonal SJB, with respect to $U_{n+1}$, of $\oplus_{\chi \in N_q(n)} W(\chi)$ satisfying (8).
Now we consider the subspace $V(B_q(n)) \oplus W(0)$. Let $(x_k, \ldots, x_{n-k})$ be a SJC in $J_q(n)$, starting at rank $k$ and ending at rank $n-k$, and satisfying (8). Set $\bar{x}_u = \theta_n(x_u)$, $k \le u \le n-k$. Then, by Theorem 1.1, $(w_{k+1}, \ldots, w_{n-k+1})$, where $w_{u+1} = q^{\,u-k}\,\bar{x}_u$, $k \le u \le n-k$, is a graded Jordan chain in $W(0)$ (with respect to $U_{n+1}$), starting at rank $k+1$ and ending at rank $n-k+1$. We have $U_{n+1}(q^{\,u-k}\,\bar{x}_u) = q^{\,u+1-k}\,\bar{x}_{u+1}$ and so
$$U_{n+1}(\bar{x}_u) = q\,\bar{x}_{u+1}. \qquad (21)$$
Also we have
$$\|\bar{x}_u\| = q^{(n-u)/2}\,\|x_u\|. \qquad (22)$$
For convenience we define $x_{k-1} = \bar{x}_{k-1} = \bar{x}_{n+1-k} = 0$. Note that (22) also holds for $u = k-1$.
Now, by (4), we have, for $k \le u \le n-k$,
$$U_{n+1}(x_u) = x_{u+1} + \bar{x}_u. \qquad (23)$$
Let $Z$ be the subspace spanned by $\{x_k, \ldots, x_{n-k}\}$ and $\{\bar{x}_k, \ldots, \bar{x}_{n-k}\}$. Clearly, by (21) and (23), $Z$ is $U_{n+1}$-closed. We shall now get an orthogonal SJB of $Z$ satisfying (8). We consider two cases:
(a) $k = n-k$: By (23), $(x_k, \bar{x}_k)$ is an orthogonal SJB of $Z$ going from rank $k$ to rank $k+1$. We have, from (22), $\|\bar{x}_k\| = q^{k/2}\,\|x_k\|$, and thus (8) is satisfied.
(b) k < n − k : Define the following vectors in Z.
Note that, using the induction hypothesis, the coefficients of y l , z l are also integral multiples of qth roots of unity. We claim that (y k , . . . , y n+1−k ) and (z k+1 , . . . , z n−k ) form an orthogonal SJB of Z satisfying (8).
Similarly, for $k+1 \le l \le n-k$. Note that $z_{n-k+1} = 0$. Now we check that condition (8) holds. For $k \le u < n+1-k$ we have, by the induction hypothesis (in the second step below we have used (22); note also the second term in the denominator after the fourth step below: it is a fraction with a factor $[u-k]_q$ in its denominator, which is zero for $u = k$; this is permissible here because of the presence of the factor $[u-k]_{q^2}$ in the numerator). Similarly, for $k+1 \le u < n-k$.

Since $\theta_n$ is an isomorphism, doing the procedure above for every SJC in $J_q(n)$ we get an orthogonal SJB of $V(B_q(n)) \oplus W(0)$ satisfying (8). That completes the proof. ✷

We now consider the application of Theorem 1.2 to the Bose-Mesner algebra of the Grassmann scheme of $m$-subspaces. For convenience we assume $0 \le m \le n/2$. We do not define this algebra here but instead work with the well known characterization that it equals the commutant of the $GL(n, q)$-action on $V(B_q(n)_m)$. For the proof of the following result see Chapter 29 of [10], where the $q = 1$ case is proven; the same proof works in general.
Thus $\mathrm{End}_{GL(n,q)}(V(B_q(n)_m))$ is a commutative $*$-algebra of dimension $m+1$, and so it can be unitarily diagonalized.
Set $J_q(n, m) = \{v \in J_q(n) : r(v) = m\}$. Then $J_q(n, m)$ is a common orthogonal eigenbasis for the elements of $\mathrm{End}_{GL(n,q)}(V(B_q(n)_m))$.
Let $J_q(n, i, k)$ denote the set of vectors in $J_q(n)$ of rank $i$ lying in SJCs starting at rank $k$, and let $W_q(n, i, k)$ be the subspace spanned by $J_q(n, i, k)$. Then, for $0 \le i \le n/2$, we have an orthogonal direct sum decomposition
$$V(B_q(n)_i) = W_q(n, i, 0) \oplus W_q(n, i, 1) \oplus \cdots \oplus W_q(n, i, i). \qquad (24)$$
Clearly $\dim(W_q(n, i, k)) = \binom{n}{k}_q - \binom{n}{k-1}_q$. We shall now show that, for $i = 0, 1, \ldots, m$, the subspaces $W_q(n, i, k)$, $k = 0, 1, \ldots, i$, are $GL(n, q)$-submodules of $V(B_q(n)_i)$. We do this by induction on $i$, the case $i = 0$ being clear.
We now have from Theorem 3.1 that (24) is the decomposition of $V(B_q(n)_i)$ into distinct irreducible modules. The result follows. ✷

Using (8) we can also determine the eigenvalues of the elements of $\mathrm{End}_{GL(n,q)}(V(B_q(n)_m))$. More generally, we can explicitly block diagonalize $\mathrm{End}_{GL(n,q)}(V(B_q(n)))$. We refer to [19] for details.
Finally, we pose a bijective proof problem on the spanning trees of the Grassmann and Johnson graphs. Actually, this application only requires the existence of an orthogonal SJB satisfying (8) and not the actual construction from the present paper.
The number of spanning trees of a graph $G$ is called the complexity of $G$ and is denoted $c(G)$. A rooted spanning tree is a spanning tree together with a choice of a vertex as root; the number of rooted spanning trees of $G$ is thus $|V(G)| \cdot c(G)$.
Let $0 \le m \le n/2$. The Johnson graph $C(n, m)$ is defined to be the graph with vertex set $B(n)_m$, the set of all subsets in $B(n)$ of cardinality $m$, and with two vertices $X, Y \in B(n)_m$ connected by an edge iff $|X \cap Y| = m - 1$.
Let $0 \le m \le n/2$. The Grassmann graph $C_q(n, m)$ is defined to be the graph with vertex set $B_q(n)_m$, and with two vertices $X, Y \in B_q(n)_m$ connected by an edge iff $\dim(X \cap Y) = m - 1$.
Let $T_q(n, m)$ and $T(n, m)$ denote, respectively, the sets of rooted spanning trees of $C_q(n, m)$ and $C(n, m)$. Theorem 3.3 asserts that two sets defined in its statement have the same cardinality.
Proof We give an algebraic proof. For $X \in B_q(n)_k$, $X' \in B_q(n)_{k-1}$, $1 \le k \le n$, note that $|UD(X)| = |DU(X')| = [k]_q\,[n-k+1]_q$. Now, using the existence of an orthogonal SJB of $V(B_q(n))$ satisfying (8), it was proved in [18] that the Laplacian eigenvalues of $C_q(n, m)$ are $[k]_q\,[n-k+1]_q$, $k = 0, 1, \ldots, m$, with respective multiplicities $\binom{n}{k}_q - \binom{n}{k-1}_q$. It now follows from the matrix-tree theorem (see [3]) that
$$c(C_q(n, m)) = \frac{1}{\binom{n}{m}_q}\, \prod_{k=1}^{m} \bigl([k]_q\,[n-k+1]_q\bigr)^{\binom{n}{k}_q - \binom{n}{k-1}_q}.$$
It follows that the sets in the statement of the theorem have the same cardinality. ✷

The following result is an immediate corollary of the theorem above. We use similar notations as above. For $m = 1$, Theorem 3.4 gives $n\,|T(n, 1)| = n^n$, a result for which there is a celebrated bijective proof [11].
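The eigenvalue product formula can be checked against the matrix-tree theorem in a small case. The sketch below (our own check, at $q = 1$) builds the Johnson graph $C(4, 2)$, whose Laplacian eigenvalues $k(n-k+1)$, $k = 0, 1, 2$, with multiplicities $1, 3, 2$, give $c = (4^3 \cdot 6^2)/6 = 384$:

```python
import numpy as np
from itertools import combinations

# Johnson graph C(4, 2): vertices are the 2-subsets of {0,1,2,3},
# with edges between subsets meeting in exactly 1 element
verts = [frozenset(s) for s in combinations(range(4), 2)]
N = len(verts)
L = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i != j and len(verts[i] & verts[j]) == 1:
            L[i][j] = -1
L -= np.diag(L.sum(axis=1))        # Laplacian: degrees on the diagonal

# matrix-tree theorem: c(G) = any cofactor of the Laplacian
c = round(np.linalg.det(L[1:, 1:]))
# eigenvalue product: (4^3 * 6^2) / 6 spanning trees
assert c == (4**3 * 6**2) // N == 384
```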
Problem Find bijective proofs of Theorems 3.3 and 3.4.
Recently, a related open problem, that of finding a combinatorial proof of the product formula for the complexity of the hypercube, was solved in [2].