An Algebra Associated with a Flag in a Subspace Lattice over a Finite Field and the Quantum Affine Algebra $U_q(\widehat{\mathfrak{sl}}_2)$

In this paper, we introduce an algebra $\mathcal{H}$ from a subspace lattice with respect to a fixed flag; this algebra contains the incidence algebra of the subspace lattice as a proper subalgebra. We then establish a relation between the algebra $\mathcal{H}$ and the quantum affine algebra $U_{q^{1/2}}(\widehat{\mathfrak{sl}}_2)$, where $q$ denotes the cardinality of the base field. This extends the well-known relation between the incidence algebra of a subspace lattice and the quantum algebra $U_{q^{1/2}}(\mathfrak{sl}_2)$. We show that there exists an algebra homomorphism from $U_{q^{1/2}}(\widehat{\mathfrak{sl}}_2)$ to $\mathcal{H}$ and that any irreducible module for $\mathcal{H}$ is irreducible as a $U_{q^{1/2}}(\widehat{\mathfrak{sl}}_2)$-module.


Introduction
By a subspace lattice, also known as a projective geometry, we mean the partially ordered set (poset) of all subspaces of a finite-dimensional vector space over a finite field, ordered by inclusion. In combinatorics, subspace lattices are regarded as $q$-analogs of Boolean lattices, and they have therefore been studied from many combinatorial points of view, for instance in connection with Grassmann codes and Grassmann graphs. On the other hand, the quantum affine algebras $U_q(\widehat{\mathfrak{sl}}_2)$ are Hopf algebras that are $q$-deformations of the universal enveloping algebra of the affine Lie algebra $\widehat{\mathfrak{sl}}_2$, and their representations are developed in [1, Section 5] as trigonometric solutions of the quantum Yang-Baxter equation. Recently, the author established in [7] a relation between an algebra associated with a subspace lattice and the quantum affine algebra $U_q(\widehat{\mathfrak{sl}}_2)$, extending the well-known relation between the incidence algebra of a subspace lattice and the quantum algebra $U_q(\mathfrak{sl}_2)$. In this paper, we introduce another algebra and establish its relation to the quantum affine algebra $U_q(\widehat{\mathfrak{sl}}_2)$; this relation is in some sense the opposite extreme to that obtained in [7].
Here we briefly recall the known facts; see [5], [6] and [7] for more detail. Let $H$ denote an $N$-dimensional vector space over a finite field $\mathbb{F}_q$ of $q$ elements and let $P$ denote the subspace lattice consisting of all subspaces of $H$. From the poset structure of $P$, we define the lowering matrix $L$ indexed by $P$, whose $(x,y)$-entry is $1$ if $y$ covers $x$ and $0$ otherwise for $x, y \in P$. Similarly, we define the raising matrix $R$ indexed by $P$, whose $(x,y)$-entry is $1$ if $x$ covers $y$ and $0$ otherwise for $x, y \in P$. The poset $P$ is graded: it is partitioned into the nonempty sets $P_i$ of subspaces of dimension $i$, for $0 \le i \le N$. From this grading, for $0 \le i \le N$ we define the $i$-th projection matrix $E^\star_i$ as the diagonal matrix indexed by $P$ whose $(x,x)$-entry is $1$ if $x \in P_i$ and $0$ otherwise for $x \in P$. By the incidence algebra, we mean the complex matrix algebra generated by the above three kinds of matrices $L$, $R$ and $E^\star_i$, where $0 \le i \le N$. It is known that there exists a surjective algebra homomorphism from the quantum algebra $U_{q^{1/2}}(\mathfrak{sl}_2)$ to the incidence algebra. Moreover, it is also known that any irreducible module for the incidence algebra induces an irreducible $U_{q^{1/2}}(\mathfrak{sl}_2)$-module of type $1$.
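To make the three kinds of generators concrete, the following small sketch (our own illustration, not from the paper) builds the lowering matrix $L$, the raising matrix $R$ and the projections $E^\star_i$ for the subspace lattice of $\mathbb{F}_2^2$ by brute-force enumeration; all variable names are ours.

```python
from itertools import product, combinations

q, N = 2, 2  # toy case: the subspace lattice of F_2^2

def span(vecs):
    """F_q-span of the given coordinate tuples, as a frozenset (q = 2 here)."""
    out = {(0,) * N}
    for coeffs in product(range(q), repeat=len(vecs)):
        out.add(tuple(sum(c * v[i] for c, v in zip(coeffs, vecs)) % q
                      for i in range(N)))
    return frozenset(out)

vectors = list(product(range(q), repeat=N))
P = sorted({span(c) for r in range(N + 1) for c in combinations(vectors, r)},
           key=lambda y: (len(y), sorted(y)))

def dim(y):  # a subspace y satisfies |y| = q^dim(y)
    d = 0
    while q ** d < len(y):
        d += 1
    return d

n = len(P)
# lowering matrix L: (x, y)-entry is 1 iff y covers x; R is its transpose
L = [[1 if P[i] < P[j] and dim(P[j]) == dim(P[i]) + 1 else 0
      for j in range(n)] for i in range(n)]
R = [[L[j][i] for j in range(n)] for i in range(n)]
# projection matrix E_i: diagonal 0/1 indicator of the rank-i level P_i
E = [[[1 if a == b and dim(P[a]) == k else 0 for b in range(n)]
      for a in range(n)] for k in range(N + 1)]
```

Here $P$ has $1 + (q+1) + 1 = 5$ elements, and the projections $E^\star_0$, $E^\star_1$, $E^\star_2$ sum to the identity matrix.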
In our previous paper [7], we extended this algebra homomorphism as follows. Fix a subspace $x \in P$ with $0 < \dim x < N$ and consider the following new "rectangle" partition (1) of $P$ with respect to $x$, indexed by pairs $(i,j)$ with $0 \le i \le \dim x$ and $0 \le j \le N - \dim x$. Remark that this partition is a refinement of the grading. Then define the new projection matrices with respect to this partition, and define the complex matrix algebra generated by the lowering matrix, the raising matrix and these new projection matrices. By construction, this new algebra contains the incidence algebra as a subalgebra. It is shown in [7] that there exists an algebra homomorphism from the quantum affine algebra $U_{q^{1/2}}(\widehat{\mathfrak{sl}}_2)$ to the new algebra which extends the above algebra homomorphism from $U_{q^{1/2}}(\mathfrak{sl}_2)$ to the incidence algebra. Moreover, it is also shown in [7] that any irreducible module for the new algebra induces an irreducible $U_{q^{1/2}}(\widehat{\mathfrak{sl}}_2)$-module of type $(1,1)$, which is more precisely a tensor product of two evaluation modules. Now we summarize the main results of this paper. We fix a (full) flag $\{x_i\}_{i=0}^N$ on $H$ instead of the subspace $x \in P$, and consider the following new "hyper-cubic" partition (2) of $P$ with respect to $\{x_i\}_{i=0}^N$, into the sets $P_\mu$ of subspaces at location $\mu$ for $\mu = (\mu_1, \mu_2, \ldots, \mu_N) \in \{0,1\}^N$ (the location is defined in Section 2). Then for $\mu \in \{0,1\}^N$, we define the projection matrix $E^*_\mu$ as the diagonal matrix indexed by $P$ whose $(y,y)$-entry is $1$ if $y \in P_\mu$ and $0$ otherwise for $y \in P$. We next define the complex matrix algebra $\mathcal{H}$ generated by the lowering matrix, the raising matrix and these new projection matrices $E^*_\mu$, where $\mu \in \{0,1\}^N$. By construction, the algebra $\mathcal{H}$ contains the incidence algebra as its subalgebra. We prove that there exists an algebra homomorphism from the quantum affine algebra $U_{q^{1/2}}(\widehat{\mathfrak{sl}}_2)$ to the algebra $\mathcal{H}$, which again extends the above algebra homomorphism from $U_{q^{1/2}}(\mathfrak{sl}_2)$ to the incidence algebra.
Moreover, we also prove that any irreducible module for the algebra $\mathcal{H}$ induces an irreducible $U_{q^{1/2}}(\widehat{\mathfrak{sl}}_2)$-module of type $(1,1)$, which is more precisely a tensor product of evaluation modules of dimension $2$. Our main results are Theorems 13.1 and 13.5. To prove the main theorems, we classify all the irreducible $\mathcal{H}$-modules up to isomorphism and determine the multiplicities appearing in the standard module.
Seen from the viewpoint of the action of the general linear group $GL(N, q)$ on the subspace lattice $P$, we may say that the results of this paper are "opposite" to those obtained in our previous paper [7]. (In this paper, however, we will not take this point of view in any essential way; we refer the reader to [3] for this viewpoint.) Indeed, the partitions (1) and (2) turn out to be the orbits of maximal and minimal parabolic subgroups of $GL(N, q)$, respectively. More precisely, the corresponding subgroups stabilize the fixed subspace $x$ and the fixed flag $\{x_i\}_{i=0}^N$, respectively. It is worth pointing out that our proofs involve a natural and intrinsic combinatorial characterization of the subspace lattice, while the method used in our previous paper [7] is rather oriented towards Lie theory and the representation theory of quantum groups. In this paper, we fix a basis $v_1, v_2, \ldots, v_N$ for $H$ such that $x_i$ is spanned by $v_1, v_2, \ldots, v_i$ for $1 \le i \le N$. With respect to this basis, we identify each subspace in $P$ with a certain matrix whose entries are in the base field $\mathbb{F}_q$. We then relate these matrices to classical combinatorial objects, such as Ferrers boards, rook placements and inversion numbers, and interpret algebraic properties of subspaces in terms of these matrices (and, moreover, of the other combinatorial objects above). Almost all the problems we deal with in this paper reduce to problems in these classical combinatorial fields. This type of argument is motivated by Delsarte [2], and the technique used in this paper is a generalized version of that in [2].
Comparing the partitions (1) and (2) again, one may ask whether the same kinds of results can still be obtained for a more general partition, defined by replacing the subspace or the full flag by a general (partial) flag. We will not develop this point here because the required computation is expected to be far more complicated. However, we emphasize that we have settled the two extremal and most essential cases, and we conjecture that similar results still hold in the general case.
We organize this paper as follows. In Section 2, we recall the basic notation and introduce a hyper-cubic structure in a subspace lattice. In Section 3, we recall some notation on Ferrers boards, rook placements and inversion numbers that is used in this paper. In Sections 4 and 5, we introduce a matrix representation of $P$ and interpret some properties of matrices in terms of rook placements and inversion numbers. In Sections 6 and 7, we introduce the main object of this paper, the algebra $\mathcal{H}$, and discuss its structure. In Sections 8, 9, 10 and 11, we discuss the $\mathcal{H}$-action on the standard module and classify all the irreducible $\mathcal{H}$-modules up to isomorphism. In Section 12, for the convenience of the reader, we repeat the relevant material from [1], including the definition of the quantum affine algebra $U_q(\widehat{\mathfrak{sl}}_2)$, without proofs, thus making our exposition self-contained. In Section 13, our main results are stated and proved.

A subspace lattice and its hyper-cubic structure
We now begin our formal argument. Let $\mathbb{Z} = \{0, \pm 1, \pm 2, \ldots\}$ denote the integers, let $\mathbb{N} = \{0, 1, 2, \ldots\}$ denote the natural numbers, and let $\mathbb{C}$ denote the complex field. The Kronecker delta is denoted by $\delta$. Throughout the paper except Section 12, we fix $N \in \mathbb{N} \setminus \{0\}$. Throughout the paper except Sections 3, 10 and 12, we fix a prime power $q$. Let $\mathbb{F}_q$ denote a finite field of $q$ elements and let $H$ denote a vector space over $\mathbb{F}_q$ of dimension $N$. Let $P$ denote the set of all subspaces of $H$. We view $P$ as a poset with the partial order given by inclusion. The poset $P$ is a graded lattice of rank $N$, where the rank function is given by dimension, and is called the subspace lattice. For two subspaces $y, z \in P$, we say $y$ covers $z$ whenever $z \subseteq y$ and $\dim z = \dim y - 1$. By a (full) flag on $H$ we mean a sequence of subspaces $x_0 \subset x_1 \subset \cdots \subset x_N$ with $\dim x_i = i$ for $0 \le i \le N$, so that $x_0 = 0$ and $x_N = H$. For the rest of this paper, we fix a flag $\{x_i\}_{i=0}^N$ on $H$. By the $N$-cube we mean the poset consisting of all $N$-tuples in $\{0,1\}^N$ with the partial order $\mu \le \nu$ defined by $\mu_m \le \nu_m$ for all $1 \le m \le N$, where $\mu = (\mu_1, \mu_2, \ldots, \mu_N)$, $\nu = (\nu_1, \nu_2, \ldots, \nu_N) \in \{0,1\}^N$. (We note that it is isomorphic to the Boolean lattice of all subsets of an $N$-set.) The $N$-cube is a graded lattice of rank $N$ with the rank function $|\mu| = \mu_1 + \mu_2 + \cdots + \mu_N$.

Proposition 2.1. There exists an order-preserving map from the subspace lattice $P$ to the $N$-cube which sends $y \in P$ to $(\mu_1, \mu_2, \ldots, \mu_N) \in \{0,1\}^N$, where $\dim(y \cap x_m) = \mu_1 + \mu_2 + \cdots + \mu_m$ for $1 \le m \le N$. Moreover, this map is surjective.
Proof. For $y \in P$ and $1 \le m \le N$, observe that $\mu_m = \dim(y \cap x_m) - \dim(y \cap x_{m-1})$ is either $0$ or $1$, since $x_{m-1} \subset x_m$ and $\dim x_m - \dim x_{m-1} = 1$. Therefore this correspondence defines a map from $P$ to the $N$-cube. It is clear that this map preserves the ordering. To show surjectivity, let $v_1, v_2, \ldots, v_N$ denote a basis for $H$ adapted to the flag $\{x_i\}_{i=0}^N$. For any $\mu = (\mu_1, \mu_2, \ldots, \mu_N) \in \{0,1\}^N$, consider the subspace $y \in P$ spanned by the vectors $\{v_i \mid 1 \le i \le N,\ \mu_i = 1\}$, and check that $y$ is mapped to $\mu$. Therefore the map is surjective.
Definition 2.2. If $\mu \in \{0,1\}^N$ is the image of $y \in P$ under the map in Proposition 2.1, we call $\mu$ the location of $y$. For $\mu \in \{0,1\}^N$, let $P_\mu$ denote the set of all subspaces at location $\mu$. For notational convenience, for $\mu \in \mathbb{Z}^N$ we set $P_\mu = \emptyset$ unless $\mu \in \{0,1\}^N$.
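The fibres $P_\mu$ can be computed directly in a small case. The sketch below is our own illustration (for $q = 2$, $N = 3$, with the coordinate flag): it computes the location of every subspace of $\mathbb{F}_2^3$, checks that every $\mu \in \{0,1\}^3$ occurs, and verifies the fibre sizes $|P_\mu| = q^{|B_\mu|}$, where we take $B_\mu$ to be the set of pairs $s < t$ with $\mu_s = 0$ and $\mu_t = 1$ (our reading of the Ferrers board of Section 3).

```python
from itertools import product, combinations
from collections import Counter

q, N = 2, 3  # toy case: subspaces of F_2^3 with the coordinate flag

def span(vecs):
    """F_q-span of the given coordinate tuples, as a frozenset (q = 2 here)."""
    out = {(0,) * N}
    for coeffs in product(range(q), repeat=len(vecs)):
        out.add(tuple(sum(c * v[i] for c, v in zip(coeffs, vecs)) % q
                      for i in range(N)))
    return frozenset(out)

vectors = list(product(range(q), repeat=N))
P = {span(c) for r in range(N + 1) for c in combinations(vectors, r)}

def dim(y):
    d = 0
    while q ** d < len(y):
        d += 1
    return d

basis = [tuple(1 if i == m else 0 for i in range(N)) for m in range(N)]
flag = [span(basis[:m]) for m in range(N + 1)]  # x_0 < x_1 < ... < x_N

def location(y):
    """mu_m = dim(y ∩ x_m) - dim(y ∩ x_{m-1}), as in Proposition 2.1."""
    dims = [dim(y & flag[m]) for m in range(N + 1)]
    return tuple(dims[m] - dims[m - 1] for m in range(1, N + 1))

sizes = Counter(location(y) for y in P)  # |P_mu| for each location mu

def board_size(mu):  # |B_mu|, assuming B_mu = {(s,t) : s < t, mu_s = 0, mu_t = 1}
    return sum(1 for s in range(N) for t in range(N)
               if s < t and mu[s] == 0 and mu[t] == 1)
```

For $q = 2$, $N = 3$ there are $16$ subspaces, and the eight fibre sizes $1, 1, 2, 4, 1, 2, 4, 1$ indeed sum to $16$.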
Definition 2.3. For $1 \le m \le N$ and $\mu, \nu \in \{0,1\}^N$, we say $\mu$ $m$-covers $\nu$ whenever $\nu_m < \mu_m$ and $\nu_n = \mu_n$ for all $1 \le n \le N$ with $n \ne m$. Similarly, for $y, z \in P$, we say $y$ $m$-covers $z$ whenever $y$ covers $z$ and the location of $y$ $m$-covers the location of $z$.
Lemma 2.4. Let $\mu \in \{0,1\}^N$ and let $1 \le m < n \le N$, where we write $\hat{m}$ for the $N$-tuple whose $m$-th coordinate is $1$ and whose other coordinates are $0$. The following hold.

(i) Given $z \in P_\mu$ and $y \in P_{\mu - \hat{m} - \hat{n}}$ with $y \subseteq z$, there exists a unique element in $P_{\mu - \hat{n}}$ which $m$-covers $y$ and which is $n$-covered by $z$.

(ii) Given $z \in P_\mu$ and $y \in P_{\mu - \hat{m} - \hat{n}}$ with $y \subseteq z$, there exist exactly $q$ elements in $P_{\mu - \hat{m}}$ which $n$-cover $y$ and which are $m$-covered by $z$.

(iii) Given $y \in P_{\mu - \hat{m}}$ and $z \in P_{\mu - \hat{n}}$, if there exists an element that is covered by both $y$ and $z$, then there exists a unique element that covers both $y$ and $z$.

(iv) Given $y \in P_{\mu - \hat{m}}$ and $z \in P_{\mu - \hat{n}}$, if there exists an element that covers both $y$ and $z$, then there exists a unique element that is covered by both $y$ and $z$.
Proof. (i) Set $w = y + (z \cap x_{n-1})$. It is easy to check that $w$ covers $y$ and is covered by $z$. Observe that the location of $w$ is $\mu - \hat{n}$. On the other hand, any $w' \in P_{\mu - \hat{n}}$ which covers $y$ and which is covered by $z$ contains both $y$ and $z \cap x_{n-1}$, so $w \subseteq w'$. By comparing dimensions, $w$ and $w'$ must coincide. The result follows.

(ii) There exist exactly $q + 1$ elements which cover $y$ and which are covered by $z$, since $\dim z - \dim y = 2$. Observe that each of them must belong to either $P_{\mu - \hat{n}}$ or $P_{\mu - \hat{m}}$. Therefore the result follows from (i).

(iii) Since $y$ and $z$ are distinct, the element that is covered by both $y$ and $z$ must be $y \cap z$, so $\dim(y \cap z) = \dim y - 1$ and hence $\dim(y + z) = \dim y + 1$. Therefore $y + z$ is the unique element which covers both $y$ and $z$.

(iv) Similar to the proof of (iii).

Ferrers boards
We introduce the notion of Ferrers boards. For the general theory of this topic, we refer the reader to [4, Chapters 1 and 2]. Note that we modify the notation of [4] to fit our setting. Let $\mu = (\mu_1, \mu_2, \ldots, \mu_N) \in \{0,1\}^N$. Then $\mu$ corresponds naturally to a bipartition of $\{1, 2, \ldots, N\}$ into the sets
$$S_\mu = \{ s \mid 1 \le s \le N,\ \mu_s = 0 \}, \qquad T_\mu = \{ t \mid 1 \le t \le N,\ \mu_t = 1 \}. \tag{3}$$
We remark that $S_\mu$ and $T_\mu$ are empty if and only if $\mu = 1 = (1,1,\ldots,1)$ and $\mu = 0 = (0,0,\ldots,0)$, respectively. The Ferrers board of shape $\mu$ is defined by
$$B_\mu = \{ (s,t) \in S_\mu \times T_\mu \mid s < t \}. \tag{4}$$
If both $S_\mu$ and $T_\mu$ are nonempty, i.e. if $\mu \ne 0, 1$, we can draw a Ferrers board as a two-dimensional subarray of a matrix whose rows are indexed by $S_\mu$ and whose columns are indexed by $T_\mu$, with a box in the $(s,t)$-entry for each $(s,t) \in B_\mu$. This subarray is also known as a Young diagram of shape $\mu$. Take a nonempty Ferrers board $B_\mu$ of shape $\mu$. For $(s_0, t_0) \in B_\mu$, the rectangle in $B_\mu$ with respect to $(s_0, t_0)$, denoted by $B_\mu(s_0, t_0)$, is defined by
$$B_\mu(s_0, t_0) = \{ (s,t) \in B_\mu \mid s \le s_0,\ t \ge t_0 \}. \tag{5}$$
It is the rectangle in the corresponding Young diagram which includes the top-right corner and has the $(s_0, t_0)$-th box as its bottom-left corner. We remark that such a rectangle is called the Durfee square if it is the largest square in $B_\mu$. To describe the rectangle structure, we use the following notation: for integers $s_0$ and $t_0$, set $S_\mu(s_0) = \{ s \in S_\mu \mid s \le s_0 \}$ and $T_\mu(t_0) = \{ t \in T_\mu \mid t \ge t_0 \}$, so that $B_\mu(s_0, t_0) = S_\mu(s_0) \times T_\mu(t_0)$ for $(s_0, t_0) \in B_\mu$. For example, if $s_0 = 4$, $t_0 = 6$, $S_\mu \cap \{1, \ldots, 4\} = \{1, 4\}$ and $T_\mu \cap \{6, \ldots, N\} = \{6, 8, 9, 12\}$, then $B_\mu(4, 6)$ consists of the pairs $(1,6)$, $(1,8)$, $(1,9)$, $(1,12)$, $(4,6)$, $(4,8)$, $(4,9)$, $(4,12)$.
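Rook placements on $B_\mu$ (sets of boxes, no two in the same row or column) can be enumerated directly. The following sketch is our own illustration, assuming $B_\mu = \{(s,t) \in S_\mu \times T_\mu : s < t\}$ with $S_\mu$ the zero positions and $T_\mu$ the one positions of $\mu$:

```python
from itertools import combinations

def board(mu):
    """Ferrers board B_mu: pairs (s, t) with s < t, mu_s = 0, mu_t = 1 (1-indexed)."""
    N = len(mu)
    return [(s, t) for s in range(1, N + 1) for t in range(1, N + 1)
            if s < t and mu[s - 1] == 0 and mu[t - 1] == 1]

def rook_placements(mu):
    """All subsets of B_mu with no repeated row index and no repeated column index."""
    B = board(mu)
    out = []
    for r in range(len(B) + 1):
        for sub in combinations(B, r):
            if (len({s for s, _ in sub}) == r and
                    len({t for _, t in sub}) == r):
                out.append(frozenset(sub))
    return out

counts = {}  # number of placements of each size for mu = (0, 1, 0, 1)
for p in rook_placements((0, 1, 0, 1)):
    counts[len(p)] = counts.get(len(p), 0) + 1
```

For $\mu = (0,1,0,1)$ the board is $\{(1,2), (1,4), (3,4)\}$, giving one empty placement, three single rooks and one pair of non-attacking rooks.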
(i) The cardinalities of $\pi_1$ and $\pi_2$ coincide.

(ii) Let $n$ denote the common value in (i). For $1 \le i \le n$, the $i$-th smallest element in $\pi_1$ is strictly smaller than the $i$-th smallest element in $\pi_2$.
Proof. (i) It is clear.
(ii) We may assume $\sigma \ne \emptyset$, since otherwise the assertion is clear. Let $\sigma$ also denote the corresponding permutation of $\{1, 2, \ldots, n\}$, defined so that the rooks of the placement are the pairs $(s_k, t_{\sigma(k)})$ for $1 \le k \le n$, where we write $s_i$, $t_i$ for the $i$-th smallest element in $\pi_1$, $\pi_2$, respectively. Fix $1 \le i \le n$. Since $\sigma$ is a permutation, there exists $i \le k \le n$ such that $\sigma(k) \le i$. So we have $(s_k, t_{\sigma(k)}) \in \sigma$. Therefore $s_i \le s_k < t_{\sigma(k)} \le t_i$, as desired.
Proof. We have shown in Lemma 3.3 that (i) implies (ii).
Proof. Immediate from Proposition 3.4.

(ii) the cardinality of $\lambda$ is even.
Then for $q \in \mathbb{C}$ with $q \ne 0, 1$, we have the following identity, where the sum is taken over all rook placements $\sigma$ on $B_\mu$ of type $\lambda$.
Proof. If $\lambda = \emptyset$, the assertion is clear. We assume $\lambda \ne \emptyset$. We claim that there exists a bijection between the following two sets: (i) rook placements $\sigma$ on $B_\mu$ of type $\lambda$ such that $\mathrm{inv}(\sigma) = \sum_{s \in \lambda \cap S_\mu} a_s$. Suppose for the moment that the claim is true. Then the result follows. Therefore, it remains to prove the claim. For a given rook placement $\sigma$ on $B_\mu$ of type $\lambda$ and for $s \in \lambda \cap S_\mu$, there exists a unique $t \in \lambda \cap T_\mu$ such that $(s, t) \in \sigma$. Then we set $a_s = \mathrm{inv}(\sigma, s, t)$. For $s \in \lambda \cap S_\mu$, the inequality follows from the fact that $s \le t$, and the last equality follows by direct calculation. Conversely, set $r = |\lambda \cap S_\mu|$ and consider the corresponding set $\sigma$. By (9), we have $\sigma(i) \ge i - a_i$; the above equality follows from the definition of $\rho(s_i, \mu, \lambda)$. This implies that $s_i < t_{\sigma(i)}$. This holds for any $1 \le i \le r$, and so $\sigma$ becomes a rook placement on $B_\mu$. It is clear that $\sigma$ is of type $\lambda$. Therefore our claim holds.

The matrix representation of P
For a field $K$ and for two finite nonempty sets $S$ and $T$, let $\mathrm{Mat}_{S,T}(K)$ denote the set of all matrices with rows indexed by $S$ and columns indexed by $T$ whose entries are in $K$. If $S = T$, we write $\mathrm{Mat}_S(K)$ for short. For $M \in \mathrm{Mat}_{S,T}(K)$, the support of $M$, denoted $\mathrm{Supp}(M)$, is the set of indices of nonzero entries: $\mathrm{Supp}(M) = \{ (s,t) \in S \times T \mid M_{s,t} \ne 0 \}$. For $\mu \in \{0,1\}^N$, recall the corresponding bipartition $S_\mu$, $T_\mu$ from (3) and the Ferrers board $B_\mu$ of shape $\mu$ from (4). We will assume $\mu \ne 0, 1$ in this section, so that both $S_\mu$ and $T_\mu$ are nonempty.
Recall the set P µ of subspaces at location µ ∈ {0, 1} N from Definition 2.2.
Write $w_t$ as a linear combination of the fixed basis $v_1, v_2, \ldots, v_t$ for $x_t$. Without loss of generality, we may assume that the coefficient of $v_t$ is $1$. Using linear operations on the basis vectors $w_t$, make the coefficient of $v_{t'}$ equal to $0$ for any $t' \in T_\mu$ with $t' \ne t$. Observe that the resulting vectors again form a basis. Therefore the subspace $y$ spanned by the vectors $w_t$ must belong to $P_\mu$.
We note that the matrix form of y depends on the basis v 1 , v 2 , . . . , v N for H.
Let $\mu \in \{0,1\}^N$ with $\mu \ne 0, 1$. For $s \in S_\mu$, we denote by $s^-$ the next smaller element in $S_\mu$; if there is no such element, we set $s^- = 0$. For $t \in T_\mu$, we denote by $t^+$ the next larger element in $T_\mu$; if there is no such element, we set $t^+ = N + 1$. Observe that for $(s,t) \in B_\mu$, we have $(s^-, t) \in B_\mu$ if $s^- \ne 0$, and $(s, t^+) \in B_\mu$ if $t^+ \ne N + 1$. For $M \in M_\mu(\mathbb{F}_q)$ and for $(s,t) \in B_\mu$, let $M(s,t)$ denote the submatrix of $M$ indexed by the rectangle with respect to $(s,t)$ in (5). Moreover, we set $\sigma(M)$ to be the subset of $B_\mu$ determined by the ranks of these submatrices (Definition 4.4). Since $\sigma(M)$ is a subset of $B_\mu$, it suffices to show that no two elements of $\sigma(M)$ share a row or a column index. To do this, take $(s_1, t), (s_2, t) \in \sigma(M)$ and assume $s_1 < s_2$. By the two equalities above, we have $s_2^- < s_1$, which contradicts $s_1 < s_2$. Therefore we must have $s_1 = s_2$. Similarly, if we take $(s, t_1), (s, t_2) \in \sigma(M)$, then one can show that $t_1 = t_2$. So the result follows.
Recall the local inversion numbers of a rook placement from (8).
Proof. Fix $(s, t) \in \sigma(M)$. Observe that $\mathrm{rank}(M(s,t))$ can be computed as follows. By the definition of $\sigma(M)$, each summand is $0$ if $(s', t') \notin \sigma(M)$ and is $1$ if $(s', t') \in \sigma(M)$. So $\mathrm{rank}(M(s,t))$ is equal to the cardinality of $\sigma(M) \cap B_\mu(s,t)$. The result follows from the definition of the local inversion numbers.
Assume we are given a rook placement $\sigma$ on $B_\mu$. Consider the matrix $M_\sigma \in M_\mu(\mathbb{F}_q)$ whose $(s,t)$-entry is $1$ if $(s,t) \in \sigma$ and $0$ otherwise. Then it is easy to check that $\sigma(M_\sigma) = \sigma$. So (ii) implies (i).

The number of matrices with given parameter
Let $\mu \in \{0,1\}^N$ with $\mu \ne 0, 1$. Recall from Lemma 4.7 that each matrix $M \in M_\mu(\mathbb{F}_q)$ corresponds to a rook placement $\sigma(M)$ on the Ferrers board $B_\mu$ of shape $\mu$. Recall the sets from (3) and (6).
To simplify the notation, we set for a subset π 1 ⊆ S µ .
. . , N} is said to be column-full with respect to µ whenever T µ ⊆ λ. Moreover, a rook placement σ on B µ is said to be column-full whenever the type of σ is column-full.
Let $\mu \in \{0,1\}^N$. We remark that a rook placement $\sigma$ on $B_\mu$ is column-full if and only if the column index set $\pi_2(\sigma)$, defined in (7), is maximal, i.e. $\pi_2(\sigma) = T_\mu$.
Proof. Let $t \in T_\mu$. We count the number of possibilities for the $t$-th column of $M$ with $\sigma = \sigma(M)$. Since $\sigma$ is a column-full rook placement, there exists a unique $s \in S_\mu$ such that $(s, t) \in \sigma$. Recall from the definition of $\sigma$ that we have $r^{-+}(M,s,t) = r^-(M,s,t)$ and $r^+(M,s,t) = \mathrm{rank}(M(s,t)) - 1$ in (10), (11) and (12). Then the number of possibilities for the $t$-th column of $M(s,t)$ can be computed, where the equality follows from the definition of $\sigma$ and Lemma 4. For the remaining entries, the choices are arbitrary. Therefore the total number of possibilities for the $t$-th column of $M$ follows. The result then follows from the definition of $\mathrm{inv}(\sigma)$, the column-full property, and the definition of $\rho(s, \mu, \lambda)$ in Lemma 3.9.
The algebra H

Recall $\mathrm{Mat}_P(\mathbb{C})$, the set of all matrices whose rows and columns are indexed by $P$ and whose entries are in $\mathbb{C}$; we regard it as a $\mathbb{C}$-algebra. We write $I \in \mathrm{Mat}_P(\mathbb{C})$ for the identity matrix and $O \in \mathrm{Mat}_P(\mathbb{C})$ for the zero matrix. In this section, we introduce a subalgebra $H$ of $\mathrm{Mat}_P(\mathbb{C})$ which represents the $N$-cube structure in $P$. Let $V = \mathbb{C}P$ denote the vector space over $\mathbb{C}$ consisting of the column vectors whose coordinates are indexed by $P$ and whose entries are in $\mathbb{C}$. Observe that $\mathrm{Mat}_P(\mathbb{C})$ acts on $V$ by left multiplication. We call $V$ the standard module for $\mathrm{Mat}_P(\mathbb{C})$. We equip $V$ with the standard Hermitian inner product defined by $\langle u, v \rangle = u^T \bar{v}$ for $u, v \in V$, where $T$ denotes transpose and the bar denotes complex conjugation.
Recall from Definition 2.2 that we have partitioned $P$ into the sets $P_\mu$ of all subspaces at location $\mu$, for $\mu \in \{0,1\}^N$. For $\mu \in \mathbb{Z}^N$, define a diagonal matrix $E^*_\mu \in \mathrm{Mat}_P(\mathbb{C})$ whose $(y,y)$-entry is $1$ if $y \in P_\mu$ and $0$ otherwise. Observe that $E^*_\mu = O$ unless $\mu \in \{0,1\}^N$. By construction, we have $E^*_\mu E^*_\nu = \delta_{\mu\nu} E^*_\mu$ and $\sum_{\mu \in \{0,1\}^N} E^*_\mu = I$. Moreover, we have a direct sum decomposition $V = \sum_{\mu \in \{0,1\}^N} E^*_\mu V$, where $E^*_\mu V$ is the subspace of $V$ with basis $P_\mu$. Thus the matrix $E^*_\mu$ is the projection from $V$ onto $E^*_\mu V$, and we call it the projection matrix.
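The defining properties of the projections (pairwise orthogonal idempotents summing to $I$) hold for any partition of the index set, as this minimal numpy sketch of our own (with a made-up six-element index set and made-up block names) illustrates:

```python
import numpy as np

# a toy partition of a 6-element index set into three "locations"
blocks = {"mu1": [0, 1], "mu2": [2, 3, 4], "mu3": [5]}
n = 6

E = {}
for name, idx in blocks.items():
    d = np.zeros(n)
    d[idx] = 1.0
    E[name] = np.diag(d)  # diagonal 0/1 projection onto the block
```

One checks that $E_\mu E_\nu = \delta_{\mu\nu} E_\mu$ and that the projections sum to the identity.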
Definition 6.1. By the above comments, the matrices E * µ , where µ ∈ {0, 1} N form a basis for a commutative subalgebra of Mat P (C). We denote this subalgebra by K.
Proof. Immediate from the construction.
Proposition 6.3. The algebra K in Definition 6.1 is generated by K m for 1 ≤ m ≤ N.
Proof. By Lemma 6.2, the matrices

Because of Lemma 6.4 (ii), we call $L_m$ the lowering matrices and $R_m$ the raising matrices.

Definition 6.5. Let $H$ denote the subalgebra of $\mathrm{Mat}_P(\mathbb{C})$ generated by $L_m$, $R_m$ ($1 \le m \le N$) and the algebra $K$ in Definition 6.1.
Proposition 6.6. The algebra H in Definition 6.5 is semisimple.
Proof. This follows since H is closed under the conjugate-transpose map.
We recall the incidence algebra, generated by $L$, $R$ and $E^\star_i$ ($0 \le i \le N$), from the second paragraph of Section 1. We remark that $H$ contains the incidence algebra as a subalgebra, because $L = \sum_{m=1}^N L_m$, $R = \sum_{m=1}^N R_m$ and $E^\star_i = \sum_{\mu \in \{0,1\}^N,\ |\mu| = i} E^*_\mu$. Moreover, if $N \ge 2$, the incidence algebra is a proper subalgebra of $H$.

The structure of the algebra H
In this section, we discuss the relations among the generators L m , R m , K m of the algebra H.
Proposition 7.1. For 1 ≤ m, n ≤ N with m = n, the following hold.
Proof. This lemma follows by combining Lemmas 6.2 and 6.4 (i).
Proposition 7.2. For $1 \le m, n \le N$, we have the following.

The $L_m$- and $R_m$-actions on V

We now describe a basis for $V$, which is the key to this paper. In this section, we fix a basis $v_1, v_2, \ldots, v_N$ for $H$ adapted to the flag $\{x_i\}_{i=0}^N$ and assume that the matrix forms in Definition 4.3 are always taken with respect to this basis $v_1, v_2, \ldots, v_N$.
Definition 8.1. Let χ denote a nontrivial character of the additive group F q and let µ ∈ {0, 1} N . For y ∈ P µ , define a vector χ y ∈ V as follows.
(i) If µ = 0 or 1, then for z ∈ P , the z-th entry of χ y is 1 if y = z and 0 otherwise.
(ii) If $\mu \ne 0, 1$, then for $z \in P$, the $z$-th entry of $\chi_y$ is defined in terms of the character $\chi$, where $Y, Z \in M_\mu(\mathbb{F}_q)$ are the matrix forms of $y, z$, respectively, in Definition 4.3. Here $T$ denotes transpose and $\mathrm{tr}$ denotes the trace map of matrices.
For the rest of this section, we fix a nontrivial character χ of the additive group F q .
Lemma 8.2. For $\mu \in \{0,1\}^N$, the vectors $\chi_y$ for $y \in P_\mu$ in Definition 8.1 form an orthogonal basis for the vector space $E^*_\mu V$.

Proof. Let $\mu \in \{0,1\}^N$. For $y \in P_\mu$, observe that $\chi_y \in E^*_\mu V$ by construction. If $\mu = 0$ or $1$, then the assertion is trivial since $\dim E^*_\mu V = 1$. Assume $\mu \ne 0, 1$ and take $y, y' \in P_\mu$. Consider the Hermitian inner product $\langle \chi_y, \chi_{y'} \rangle = \sum_{z \in P} \chi_y(z) \overline{\chi_{y'}(z)}$, where $\chi_y(z)$, $\chi_{y'}(z)$ denote the $z$-th entries of $\chi_y$, $\chi_{y'}$, respectively. By the definitions of $\chi_y(z)$, $\chi_{y'}(z)$ and by the orthogonality of the character $\chi$, we obtain $\langle \chi_y, \chi_{y'} \rangle = q^{|B_\mu|} = |P_\mu|$ if $y = y'$ and $0$ otherwise. Therefore the vectors $\chi_y$ for $y \in P_\mu$ form an orthogonal basis for a subspace $V_\mu$ of $E^*_\mu V$. By comparing dimensions, we have $V_\mu = E^*_\mu V$, and the result follows.
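The orthogonality used here is the standard one for additive characters: for a prime $q$, the map $\chi(a) = e^{2\pi i a/q}$ is a nontrivial additive character of $\mathbb{F}_q = \mathbb{Z}/q\mathbb{Z}$, and $\sum_{a \in \mathbb{F}_q} \chi(ab)$ equals $q$ if $b = 0$ and $0$ otherwise. A quick numerical check of our own, with $q = 5$:

```python
import cmath

q = 5  # a prime, so F_q = Z/qZ

def chi(a):
    """A nontrivial additive character of Z/qZ."""
    return cmath.exp(2j * cmath.pi * (a % q) / q)

def char_sum(b):
    """sum over a in F_q of chi(a * b)."""
    return sum(chi(a * b) for a in range(q))
```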
Recall the m-covering relation from Definition 2.3.

Then $y$ $m$-covers $z$ if and only if
$$Z_{s,t} = Y_{s,t} + Z_{m,t} Y_{s,m}$$
for all $s \in S_\mu$ and $t \in T_\nu$.
Proof. Recalling the bijection of Proposition 4.2, for $t \in T_\mu$ and $t' \in T_\nu$, we write $w_t(Y)$ and $w_{t'}(Z)$ for the corresponding basis vectors. Assume $y$ covers $z$. For each $t' \in T_\nu$, since $z \subseteq y$, the vector $w_{t'}(Z)$ is a linear combination of the $w_t(Y)$, $t \in T_\mu$. Comparing the coefficients of $v_t$ for $t \in T_\mu$, we have $w_{t'}(Z) = Z_{m,t'} w_m(Y) + w_{t'}(Y)$. Then, comparing the coefficients of $v_s$ for $s \in S_\mu$, we obtain the desired equality. Conversely, assume the equality $Z_{s,t'} = Y_{s,t'} + Z_{m,t'} Y_{s,m}$ for $s \in S_\mu$ and $t' \in T_\nu$. By the same argument as above, we have $w_{t'}(Z) \in y$ for all $t' \in T_\nu$. This implies that $y$ covers $z$, as desired.
Lemma 8.4. Let $1 \le m \le N$ and let $\mu, \nu \in \{0,1\}^N$ with $\mu, \nu \ne 0, 1$ such that $\mu$ $m$-covers $\nu$. Take $y \in P_\mu$, $z \in P_\nu$ and let $Y \in M_\mu(\mathbb{F}_q)$, $Z \in M_\nu(\mathbb{F}_q)$ denote the matrix forms of $y, z$, respectively, in Definition 4.3. Then the $z$-th entry of $L_m \chi_y$ is nonzero if $Y_{s,m} = \sum_{t \in T_\nu} Y_{s,t} Z_{m,t}$ for all $s \in S_\mu$ with $s < m$, and is $0$ otherwise.
Proof. By the definition of $L_m$, the $z$-th entry of $L_m \chi_y$ is the sum of $\chi_y(y')$ over all $y' \in P_\mu$ such that $y'$ $m$-covers $z$. By Definition 8.1 and Lemma 8.3, this becomes a character sum, where the third sum is taken over all $Y'_{s,m} \in \mathbb{F}_q$ for $s \in S_\mu$ with $s < m$, and where we set $Y'_{s,m} = 0$ for $s \in S_\mu$ with $s > m$. By the orthogonality of characters, the third sum does not vanish if and only if $Y_{s,m} = \sum_{t \in T_\nu} Y_{s,t} Z_{m,t}$ for all $s \in S_\mu$ with $s < m$. Moreover, in this case, the sum is the number of choices of $Y'_{s,m} \in \mathbb{F}_q$ for $s \in S_\mu$ with $s < m$, which is $q^{|S_\mu(m-1)|}$.
Proof. We count the number of possibilities for $Z_{s,t} \in \mathbb{F}_q$ for $s \in S_\nu$ and $t \in T_\nu$. If $s > t$, then $Z_{s,t} = 0$ since $\mathrm{Supp}(Z) \subseteq B_\nu$. If $s \ne m$ and $s < t$, then $Z_{s,t}$ is arbitrary, and therefore the number of possibilities is $q$ for each such pair. For the case $s = m$ and $m < t$, by the constraint, the sequence $(Z_{m,t})_{t \in T_\nu, t > m}$ must be a solution of the system of linear equations $Cu = c$ over $\mathbb{F}_q$, where $C = (Y_{s,t})_{s \in S_\mu, s < m,\ t \in T_\nu, t > m}$ is the coefficient matrix, $u = (u_t)_{t \in T_\nu, t > m}$ is the unknown vector and $c = (Y_{s,m})_{s \in S_\mu, s < m}$ is the constant vector. By linear algebra, the system $Cu = c$ has a solution if and only if the rank of the augmented matrix $[C, c]$ is equal to the rank of the coefficient matrix $C$. By Definition 4.4, this is also equivalent to $(s, m) \in \sigma(Y)$ for some $s \in S_\mu$ with $s < m$, which means $m \in \lambda$. Moreover, suppose there is a solution of the system $Cu = c$. Since $C$ has $|T_\nu(m+1)|$ columns, the number of solutions is $q^{|T_\nu(m+1)| - \mathrm{rank}(C)}$. By Lemma 4.6, the rank of $C$ is computed. Therefore the result follows.

if $m \in \lambda$, and $0$ otherwise.
Proof. Similar to the proof of Lemma 8.6.
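The solution count used in the proof above is the standard fact that a consistent linear system $Cu = c$ over $\mathbb{F}_q$ with $k$ unknowns has exactly $q^{k - \mathrm{rank}(C)}$ solutions. A brute-force check of our own over $\mathbb{F}_2$, with a made-up coefficient matrix:

```python
from itertools import product

q = 2
C = [[1, 0, 1],
     [0, 1, 1]]  # hypothetical 2x3 coefficient matrix over F_2
c = [1, 0]

def solutions(C, c):
    """All u in F_q^k with Cu = c, by exhaustive search."""
    m, k = len(C), len(C[0])
    return [u for u in product(range(q), repeat=k)
            if all(sum(C[i][j] * u[j] for j in range(k)) % q == c[i]
                   for i in range(m))]

def rank_gf2(M):
    """Row rank over F_2 by Gaussian elimination."""
    M = [row[:] for row in M]
    r = 0
    for j in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][j] % 2), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][j] % 2:
                M[i] = [(a + b) % 2 for a, b in zip(M[i], M[r])]
        r += 1
    return r

sols = solutions(C, c)
```

Here the system is consistent of rank $2$ with $3$ unknowns, so there are $q^{3-2} = 2$ solutions.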
Definition 8.8. Let µ ∈ {0, 1} N and take y ∈ P µ . If µ = 0, 1, then let Y ∈ M µ (F q ) denote the matrix form of y in Definition 4.3. Then the type of y is defined to be the type of σ(Y ) in Definitions 3.5 and 4.4. If µ = 0 or 1, then the type of y is defined to be the empty set. We note that the type of y depends on the basis v 1 , v 2 , . . . , v N for H since the matrix form does.
Proof. Similar to the proof of Lemma 8.9.
Lemma 8.11. For $\mu \in \{0,1\}^N$ and for $\lambda \subseteq \{1, 2, \ldots, N\}$ with even cardinality, the following are equivalent:

Proof. This is a matrix interpretation of Lemma 3.6.

The $L_m R_m$- and $R_m L_m$-actions on V

In this section, we fix a basis $v_1, v_2, \ldots, v_N$ for $H$ adapted to the flag $\{x_i\}_{i=0}^N$ and assume that the matrix forms in Definition 4.3 and the types in Definition 8.8 are always taken with respect to this basis. We also fix a nontrivial character $\chi$ of the additive group $\mathbb{F}_q$. Recall from Section 8 that the definition of $E_\lambda$ for $\lambda \subseteq \{1, 2, \ldots, N\}$ with even cardinality depends on the basis $v_1, v_2, \ldots, v_N$ and on the character $\chi$. We show in this section that $E_\lambda$ is in fact independent of the choice of the basis $v_1, v_2, \ldots, v_N$ adapted to the flag $\{x_i\}_{i=0}^N$ and of the nontrivial character $\chi$ of the additive group $\mathbb{F}_q$.
Then for $v \in E^*_\mu E_\lambda V$, we have the following.

Proof. Observe that $R_m L_m$ acts on $E^*_\mu V$ by Lemma 6.4 (ii). Fix $y \in P_\mu$ of type $\lambda$ in Definition 8.8. We show that $\chi_y$ is an eigenvector for $R_m L_m$. If $m \in S_\mu$ or $m \in \lambda$, then by Lemma 8.9 we have $L_m \chi_y = 0$, and so $\chi_y$ is an eigenvector for $R_m L_m$ with eigenvalue $0$. If $\mu = 1$, then $P_\mu = \{H\}$ and $\lambda = \emptyset$, so $\dim E^*_\mu V = 1$. Therefore $\chi_y$ is an eigenvector of $R_m L_m$, and the corresponding eigenvalue is the number of subspaces which are $m$-covered by $y = H$, which equals $q^{N-m} = q^{\kappa(m, 1, \emptyset)}$ by Lemma 2.4 (i). Set $\nu = \mu - \hat{m}$, so that $\mu$ $m$-covers $\nu$. If $m \in T_\mu$, $m \notin \lambda$ and $\nu = 0$, then $P_\nu = \{0\}$ and $\lambda = \emptyset$. In other words, the matrix form of $y$ in Definition 4.3 equals the zero matrix $O$, and so the $y'$-th entry $\chi_y(y')$ of $\chi_y$ is $1$ if $y' \in P_\mu$ and $0$ if $y' \notin P_\mu$. Since $P_\nu = \{0\}$, $\chi_y$ is an eigenvector of $R_m L_m$, and the corresponding eigenvalue is the number of subspaces which $m$-cover $z = 0$, which equals $q^{m-1} = q^{\kappa(m, \hat{m}, \emptyset)}$ by Lemma 2.4 (ii). If $m \in T_\mu$, $m \notin \lambda$, $\mu \ne 1$ and $\nu \ne 0$, then we argue as follows. Let $y' \in P_\mu$. Since $L_m$ and $R_m$ are (conjugate-)transpose to each other, we may compute the $y'$-th entry of $R_m L_m \chi_y$. Let $Y, Y' \in M_\mu(\mathbb{F}_q)$ and $Z \in M_\nu(\mathbb{F}_q)$ be the matrix forms of $y, y', z$, respectively, in Definition 4.3. Then by Lemma 8.4, the entry becomes a character sum, where the sum is taken over all $Z$ satisfying the condition of Lemma 8.4 for all $s \in S_\mu$ with $s < m$. Then, since $\mathrm{Supp}(Z) \subseteq B_\nu$, by the orthogonality of the characters the sum vanishes unless $Y_{s,t} = Y'_{s,t}$ for all $s \in S_\mu$ and $t \in T_\nu$ with $s < t$, which by (15) and Lemma 8.6 implies $Y = Y'$ and so $y = y'$. In particular, $\chi_y$ is an eigenvector of $R_m L_m$. Moreover, using Lemma 8.6 and $|P_\mu| = q^{|B_\mu|}$, we can easily show that the corresponding eigenvalue is $q^{\kappa(m, \mu, \lambda)}$.
We remark that the above proof of Proposition 9.3 also shows that the matrices E λ are independent of the basis v 1 , v 2 , . . . , v N for H adapted to the flag {x i } N i=0 and the nontrivial character χ of the additive group F q .
Then by Lemma 8.9, if $\lambda$ is column-full with respect to $\mu$, we have $E^*_\mu E_\lambda V \subseteq V_{\mathrm{new}}$. Suppose $\lambda$ is not column-full with respect to $\mu$. Then, since we assume $\lambda$ satisfies Lemma 3.6 (ii), there exists $1 \le m \le N$ such that $m \in T_\mu$ and $m \notin \lambda$. By Lemma 9.1, for any nonzero $v \in E^*_\mu E_\lambda V$, $R_m L_m v$ is a nonzero scalar multiple of $v$. In particular, $L_m v \ne 0$, and so $v \notin V_{\mathrm{new}}$. By the above comments and by the fact that $V$ is the direct sum of the $E^*_\mu E_\lambda V$, the result follows. Recall the column-full property in Definition 5.1. For $\mu \in \{0,1\}^N$ and $\lambda \subseteq \{1, 2, \ldots, N\}$ satisfying (ii) in Lemma 3.6, we say $\lambda$ is row-full with respect to $\mu$ if $S_\mu \subseteq \lambda$.
Lemma 9.5. Let V old denote the set of all v ∈ V such that R m v = 0 for all 1 ≤ m ≤ N. Then we have where the sum is taken over all pairs (µ, λ) with µ ∈ {0, 1} N and λ ⊆ {1, 2, . . . , N} satisfying (ii) in Lemma 3.6 such that λ is row-full with respect to µ.
Proof. Similar to the proof of Lemma 9.4.
where the sum is taken over all 1 ≤ m ≤ N with m ∈ λ.
Therefore, combining the above, the result follows.
In the next lemma, we do not assume q to be a prime power.
where the sum is taken over all 1 ≤ m ≤ N with m ∈ λ.
We call this a reduced $\kappa$-sequence from $a$. For a $\kappa$-sequence $a = (a_1, a_2, \ldots, a_n) \in \mathbb{Z}^n$ with respect to $\nu = (\nu_1, \nu_2, \ldots, \nu_n) \in \{0,1\}^n$, we define the value $f(\nu, a; q)$. Observe that the value $f(\nu, a; q)$ is invariant under the reducing process above. In particular, if $a'$ is a reduced $\kappa$-sequence with respect to $\nu'$ from a $\kappa$-sequence $a$ with respect to $\nu$, then we have $f(\nu, a; q) = f(\nu', a'; q)$. Set $n = N - |\lambda|$. Let $\nu = \nu(\mu, \lambda) \in \{0,1\}^n$ be the sequence obtained from $\mu$ by removing all the coordinates indexed by $\lambda$. Consider the sequence $a \in \mathbb{Z}^n$ defined by $a = ((-1)^{\mu_m} \kappa(m, \mu, \lambda))_{m \in \{1,2,\ldots,N\} \setminus \lambda}$, where the index $m$ increases from left to right. For $1 \le m < n \le N$ with $m, n \notin \lambda$, one checks the required compatibility; therefore the sequence $a$ is a $\kappa$-sequence with respect to $\nu$. Let $a'$ be a reduced $\kappa$-sequence with respect to $\nu'$ from $a$. Then the left-hand side of the desired identity becomes $f(\nu', a'; q)$.
To show this, since it is an arithmetic sequence, it suffices to show the equality of the sums of entries. This follows from Lemma 10.1, since the sum of the entries of $a'$ equals the sum of the entries of $a$. For the case $2|\mu| > N$, the proof is similar to that for the case $2|\mu| < N$. Hence the result follows.

The H-modules
Recall from Proposition 6.6 that the algebra H is semisimple. Thus the standard module V is a direct sum of irreducible H-modules, and every irreducible H-module appears in V up to isomorphism. We now discuss the H-submodules of V , which from now on we call H-modules for short.
Proposition 11.1. Any irreducible $\mathcal{H}$-module is generated by a nonzero vector $v \in V$ such that $L_m v = 0$ for all $1 \le m \le N$.
Proof. Let $W$ denote an irreducible $\mathcal{H}$-module and take a nonzero vector $w \in W$. If $\Phi(w) = \emptyset$, then $L_m w = 0$ for all $1 \le m \le N$, and by the irreducibility of $W$, the module $W$ is generated by $w$, so the result follows.
Suppose $\Phi(w) \ne \emptyset$. Let $m = \min \Phi(w)$ and set $w' = L_m w \in W$. By Proposition 7.2 (i) and (ii), we have $\Phi(w') \subsetneq \Phi(w)$. Continuing this process at most $|\Phi(w)|$ times, we obtain a nonzero vector $v \in W$ such that $\Phi(v) = \emptyset$. By the same argument as above, the assertion holds.
Recall from Sections 8 and 9 that there are the matrices $E_\lambda$ in $\mathcal{H}$ and that they turn out to be independent of the choice of the basis $v_1, v_2, \ldots, v_N$ and of the nontrivial character $\chi$ of the additive group $\mathbb{F}_q$. By Lemma 9.4 and Proposition 11.1, it suffices to consider the modules $\mathcal{H}v$ for $v \in \sum_{(\mu,\lambda)} E^*_\mu E_\lambda V$, where the sum is taken over all pairs $(\mu, \lambda)$ with $\mu \in \{0,1\}^N$ and $\lambda \subseteq \{1, 2, \ldots, N\}$ satisfying (ii) in Lemma 3.6 such that $\lambda$ is column-full with respect to $\mu$ in the sense of Definition 5.1.
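For intuition on the character $\chi$: when $q$ is prime, one concrete nontrivial additive character of $\mathbb{F}_q$ is $x \mapsto e^{2\pi i x/q}$. Below is a quick numerical check of two standard properties of such a character (this specific model is our illustration; the construction of $E_\lambda$ works for any nontrivial $\chi$):

```python
import cmath

q = 5  # a prime, so F_q = Z/qZ as an additive group

def chi(x):
    """A nontrivial additive character of F_q (illustrative model)."""
    return cmath.exp(2j * cmath.pi * x / q)

# chi is a homomorphism from (F_q, +) to the unit circle:
assert abs(chi(2 + 4) - chi(2) * chi(4)) < 1e-12
# A nontrivial character sums to zero over the group:
assert abs(sum(chi(x) for x in range(q))) < 1e-12
```

The second property is what makes character sums act as projections, which is the mechanism behind matrices defined via $\chi$.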
on which the generators $L_m$, $R_m$ ($1 \le m \le N$) act as follows:

where we set $w(\varepsilon) = 0$ if $\varepsilon$ is not of the form in (16).
Proof. Let $\mathcal{H}^+$ denote the subalgebra of $\mathcal{H}$ generated by $R_1, R_2, \ldots, R_N$. Consider $\mathcal{H}^+ v$, the $\mathcal{H}^+$-module generated by $v$. We show that $\mathcal{H}^+ v$ is an $\mathcal{H}$-module. Let $1 \le m \le N$.
Proposition 11.3. Referring to Proposition 11.2, the basis (16) for $\mathcal{H}v$ satisfies the following.
Proof. By Proposition 11.2, we have $w(\varepsilon) \in E^*_{\mu+\varepsilon} V$. The result follows from the definition of $K_m$.
Theorem 11.4. For any irreducible $\mathcal{H}$-module $W$, there exist a unique $\mu \in \{0,1\}^N$ and a unique $\lambda \subseteq \{1, 2, \ldots, N\}$ satisfying (ii) in Lemma 3.6 with $\lambda$ column-full with respect to $\mu$, such that $W$ is generated by a nonzero vector in $E^*_\mu E_\lambda V$. Moreover, $W$ is determined up to isomorphism by $\mu$ and $\lambda$.
Proof. By Proposition 11.1, there exists a nonzero vector $v \in W$ with $L_m v = 0$ for all $1 \le m \le N$ such that $W = \mathcal{H}v$. Consider the direct sum decomposition in Lemma 9.4. Since $v$ is nonzero, there exists a pair $(\mu, \lambda)$ such that $E^*_\mu E_\lambda v \ne 0$. By Proposition 9.3, $E^*_\mu E_\lambda v$ belongs to $W$, and so, by the irreducibility of $W$, the module $W$ is generated by $E^*_\mu E_\lambda v$. If $W$ is also generated by a nonzero vector in $E^*_{\mu'} E_{\lambda'} V$ for such a pair $(\mu', \lambda')$, then we have the two bases (16) for $W$; by comparing them, we obtain $(\mu', \lambda') = (\mu, \lambda)$, and the result follows.
Proof. Count the vectors in the basis (16) for W .
Proof. Take a nonzero vector $v \in E^*_\mu E_\lambda V$. We show that $W = \mathcal{H}v$ is irreducible. Consider an irreducible $\mathcal{H}$-module decomposition of $W$ as follows.
for some integer $r \ge 1$. According to this decomposition, we write $v = w_1 + w_2 + \cdots + w_r$ with $w_n \in W_n$ ($1 \le n \le r$). Since this sum is direct and $v \in E^*_\mu E_\lambda W$, we find that each $w_n$ is nonzero and $w_n \in E^*_\mu E_\lambda W$ for $1 \le n \le r$. However, by Proposition 11.2, we have $\dim E^*_\mu E_\lambda W = 1$. Since the vectors $w_n$ ($1 \le n \le r$) are linearly independent, this forces $r = 1$; that is, $W$ is irreducible.
The multiplicity of $W$ in $V$ is $\dim E^*_\mu E_\lambda V$, which is determined in Corollary 5.3.
12 The quantum affine algebra $U_q(\widehat{\mathfrak{sl}}_2)$

In this section, we fix a nonzero scalar $q \in \mathbb{C}$ which is not a root of unity. For $n \in \mathbb{N}$, we define $[n]_q = \frac{q^n - q^{-n}}{q - q^{-1}}$. We recall the definition of $U_q(\widehat{\mathfrak{sl}}_2)$ from [1] in terms of the Chevalley generators.
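As a sanity check on the bracket notation, the symmetric $q$-integer can be computed in exact arithmetic (this assumes the standard normalization $[n]_q = (q^n - q^{-n})/(q - q^{-1})$; function and variable names are ours):

```python
from fractions import Fraction

def q_int(n, q):
    """The symmetric q-integer [n]_q = (q^n - q^{-n}) / (q - q^{-1})."""
    q = Fraction(q)
    return (q**n - q**-n) / (q - q**-1)

# At q = 2: [1] = 1, [2] = q + q^{-1} = 5/2, [3] = q^2 + 1 + q^{-2} = 21/4.
values = [q_int(n, 2) for n in (1, 2, 3)]
```

Note that $[n]_q$ is a Laurent polynomial in $q$, so it is well defined for any nonzero $q$; the assumption that $q$ is not a root of unity is what keeps the representation theory semisimple.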
It is known that the quantum affine algebra $U_q(\widehat{\mathfrak{sl}}_2)$ has the following Hopf algebra structure. The comultiplication $\Delta$ satisfies

It is also known that there exists a family of finite-dimensional irreducible $U_q(\widehat{\mathfrak{sl}}_2)$-modules

We call $V_d(\alpha)$ the evaluation module for $U_q(\widehat{\mathfrak{sl}}_2)$ with evaluation parameter $\alpha$. We recursively define the algebra homomorphism $\Delta^{(N)}$:

This algebra homomorphism $\Delta^{(N)}$ is called the $N$-fold comultiplication. For each $N \ge 1$, via the $(N-1)$-fold comultiplication $\Delta^{(N-1)}$, a tensor product of $N$ evaluation modules again becomes a $U_q(\widehat{\mathfrak{sl}}_2)$-module. More precisely, a tensor product $V_{d_1}(\alpha_1) \otimes \cdots \otimes V_{d_N}(\alpha_N)$ has a basis on which the Chevalley generators act as follows:

where $\varepsilon = (\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_N) \in \mathbb{Z}^N$ and we define $u(\varepsilon) = 0$ if $\varepsilon$ is not of the form in (21).

Let $W$ denote a finite-dimensional irreducible $U_q(\widehat{\mathfrak{sl}}_2)$-module. By [1, Proposition 3.2], there exist scalars $\epsilon_0, \epsilon_1 \in \{-1, 1\}$ such that each eigenvalue of $k_i$ on $W$ is $\epsilon_i$ times an integral power of $q$ for $i = 0, 1$. The pair $(\epsilon_0, \epsilon_1)$ is called the type of $W$. For each pair $\epsilon_0, \epsilon_1 \in \{-1, 1\}$, there exists an algebra automorphism of $U_q(\widehat{\mathfrak{sl}}_2)$ that sends

By this automorphism, any finite-dimensional irreducible $U_q(\widehat{\mathfrak{sl}}_2)$-module of type $(\epsilon_0, \epsilon_1)$ becomes one of type $(1, 1)$.
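The weight basis of such a tensor product is indexed by tuples $\varepsilon$ with $0 \le \varepsilon_m \le d_m$, so its dimension is $\prod_{m}(d_m + 1)$. A small enumeration sketch (assuming this standard indexing; the function name is ours):

```python
from itertools import product

def basis_indices(dims):
    """Index set for a weight basis of V_{d_1}(a_1) x ... x V_{d_N}(a_N):
    all tuples eps = (eps_1, ..., eps_N) with 0 <= eps_m <= d_m."""
    return list(product(*(range(d + 1) for d in dims)))

# V_1(a_1) tensor V_2(a_2) has dimension (1+1) * (2+1) = 6.
indices = basis_indices([1, 2])
```

Any $\varepsilon$ outside this index set corresponds to the zero vector $u(\varepsilon) = 0$, which is the convention used in the action formulas above.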
Theorem 12.2 ([1, Theorem 4.11]). Every finite-dimensional irreducible $U_q(\widehat{\mathfrak{sl}}_2)$-module of type $(1,1)$ is isomorphic to a tensor product of evaluation modules. Moreover, two such tensor products are isomorphic if and only if one is obtained from the other by permuting the tensor factors.
With an evaluation module $V_d(\alpha)$, we associate a set $S_d(\alpha)$ of scalars. The set $S_d(\alpha)$ is called a $q$-string of length $d$. Two $q$-strings $S_{d_1}(\alpha_1)$, $S_{d_2}(\alpha_2)$ are said to be in general position if one of the following occurs: (i) $S_{d_1}(\alpha_1) \cup S_{d_2}(\alpha_2)$ is not a $q$-string; (ii) $S_{d_1}(\alpha_1) \subseteq S_{d_2}(\alpha_2)$ or $S_{d_2}(\alpha_2) \subseteq S_{d_1}(\alpha_1)$. Moreover, several $q$-strings are said to be in general position if every two of them are in general position.

13 The algebra $\mathcal{H}$ and the quantum affine algebra $U_{q^{1/2}}(\widehat{\mathfrak{sl}}_2)$

In this section, we return to the subspace lattice $P$ over $\mathbb{F}_q$. Recall the matrices $E_\lambda \in \mathcal{H}$ from Sections 8 and 9. Let $\mu \in \{0,1\}^N$ and $\lambda \subseteq \{1, 2, \ldots, N\}$ satisfy (ii) in Lemma 3.6. For $v \in E^*_\mu E_\lambda V$ and $1 \le m \le N$, if $L_m v \ne 0$, then we have $m \in T_\mu$ and $m \notin \lambda$ by Lemma 8.9, and so $(L_m R_m) L_m v = q^{\kappa(m,\mu,\lambda)} L_m v$ by Lemma 9.1. Therefore, we define the matrix $(L_m R_m)^{-1} L_m$ by

for $v \in V$. We remark that $(L_m R_m)^{-1} L_m$ does not mean the product of $(L_m R_m)^{-1}$ and $L_m$, since $L_m R_m$ is not invertible by Lemma 9.1. Similarly, we define the matrix $(R_m L_m)^{-1} R_m$ by

for $v \in V$.
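The general-position criterion for $q$-strings can be tested mechanically. Below is a Python sketch in which a $q$-string is represented by its set of exponents of $q$, using the common normalization $S_d(q^a) = \{q^{a+d-1}, q^{a+d-3}, \ldots, q^{a-d+1}\}$ (this normalization and all names are our assumptions, for illustration only):

```python
def q_string_exponents(d, a):
    """Exponent set of the q-string S_d(q^a) in the assumed
    normalization: {a+d-1, a+d-3, ..., a-d+1}."""
    return {a + d - 1 - 2 * i for i in range(d)}

def is_q_string(exponents):
    """A set of exponents forms a q-string iff, sorted, it is an
    arithmetic progression with common difference 2."""
    s = sorted(exponents)
    return all(y - x == 2 for x, y in zip(s, s[1:]))

def in_general_position(s1, s2):
    """Two q-strings are in general position iff their union is not
    a q-string, or one of them contains the other."""
    return (not is_q_string(s1 | s2)) or s1 <= s2 or s2 <= s1

# S_2(q^0) = {q, q^{-1}} and S_2(q^6) = {q^7, q^5}: the union has a
# gap, so it is not a q-string -- general position.
far_apart = in_general_position(q_string_exponents(2, 0),
                                q_string_exponents(2, 6))
# S_2(q^0) and S_2(q^2) overlap in one point and neither contains
# the other -- not in general position.
overlapping = in_general_position(q_string_exponents(2, 0),
                                  q_string_exponents(2, 2))
```

Containment also yields general position: $S_1(q^0) = \{q^0\}$ sits inside $S_3(q^0) = \{q^2, q^0, q^{-2}\}$, even though their union is a $q$-string.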
Proof. Use Propositions 11.2 and 11.3 and Corollary 13.2.

Then we have the following.
Proof. (i) By the definition of $d$, we have $|d| = N - |\lambda|$. By the assumption, we have $|\lambda| = 2|\mu|$, and so the result follows.
Hence the result follows from the above comments and $d_m = 1$.
(ii) $W_{\mu,\lambda}$ is isomorphic to the tensor product of the evaluation modules $V_1(\alpha_m)$, where $m$ runs over $1 \le m \le N$ with $m \notin \lambda$.