A combinatorial approach to the q,t-symmetry relation in Macdonald polynomials

Using the combinatorial formula for the transformed Macdonald polynomials of Haglund, Haiman, and Loehr, we investigate the combinatorics of the symmetry relation $\widetilde{H}_\mu(\mathbf{x};q,t) = \widetilde{H}_{\mu^\ast}(\mathbf{x};t,q)$. We provide a purely combinatorial proof of the relation in the case of Hall-Littlewood polynomials ($q=0$) when $\mu$ is a partition with at most three rows, and for the coefficients of the square-free monomials in $\mathbf{x}$ for all shapes $\mu$. We also provide a proof for the full relation in the case when $\mu$ is a hook shape, and for all shapes at the specialization $t=1$. Our work in the Hall-Littlewood case reveals a new recursive structure for the cocharge statistic on words.


Introduction
Let $\Lambda_{q,t}(\mathbf{x})$ denote the ring of symmetric functions in the countably many indeterminates $x_1, x_2, \ldots$, with coefficients in the field $\mathbb{Q}(q,t)$ of rational functions in two variables. The (transformed) Macdonald polynomials $\widetilde{H}_\mu(\mathbf{x};q,t) \in \Lambda_{q,t}(\mathbf{x})$, indexed by the partitions $\mu$, form an orthogonal basis of $\Lambda_{q,t}(\mathbf{x})$, and have specializations $\widetilde{H}_\mu(\mathbf{x};0,1) = h_\mu$ and $\widetilde{H}_\mu(\mathbf{x};1,1) = e_1^n$, where $h_\lambda$ and $e_\lambda$ are the homogeneous and elementary symmetric functions, respectively. The polynomials $\widetilde{H}_\mu$ are a transformation of the functions $P_\lambda$ originally defined by Ian Macdonald in [11], and have been the subject of much recent attention in combinatorics and algebraic geometry. (See [5], [7], and [8], for instance.) The symmetric functions $\widetilde{H}_\mu$ may be defined as the unique collection of polynomials that satisfy certain triangularity conditions. To state them, recall that the Schur functions $s_\lambda$ form a basis for $\Lambda$. We define the dominance order to be the partial order $\ge$ on partitions given by $\lambda \ge \mu$ if and only if $\lambda_1 + \cdots + \lambda_k \ge \mu_1 + \cdots + \mu_k$ for all $k > 0$. Finally, define $\mu^*$ to be the conjugate of a given partition $\mu$, formed by reflecting its Young diagram about the diagonal.

The Macdonald polynomials $\widetilde{H}_\mu$ are orthogonal with respect to an inner product on $\Lambda_{q,t}$ that deforms the Hall inner product. That is, $\langle \widetilde{H}_\mu, \widetilde{H}_\lambda \rangle_{q,t} = 0$ whenever $\mu \neq \lambda$. (See [8] for details.) Recall the well-known Schur expansion
$$h_\mu = \sum_\lambda K_{\lambda\mu}\, s_\lambda,$$
where the coefficients $K_{\lambda\mu}$ are the Kostka numbers, defined combinatorially as the number of semistandard Young tableaux of shape $\lambda$ and content $\mu$. Since $\widetilde{H}_\mu(\mathbf{x};0,1) = h_\mu$, it is natural to define a $q,t$-analog of the Kostka numbers by expanding the transformed Macdonald polynomials $\widetilde{H}_\mu(\mathbf{x};q,t)$ in terms of the Schur basis.
Definition 1.2. The $q,t$-Kostka polynomials are the coefficients $K_{\lambda\mu}(q,t)$ in the expansion
$$\widetilde{H}_\mu(\mathbf{x};q,t) = \sum_\lambda K_{\lambda\mu}(q,t)\, s_\lambda.$$
It was conjectured by Macdonald, and later proven by Haiman [9], that the $q,t$-Kostka polynomials $K_{\lambda\mu}(q,t)$ are polynomials in $q$ and $t$ with nonnegative integer coefficients. This fact is known as the Macdonald positivity conjecture. Haiman's proof involves showing that the polynomial $K_{\lambda\mu}(q,t)$ is the Hilbert series of a certain bi-graded module arising from the geometry of the Hilbert scheme of $n$ points in the plane, and relies heavily on geometric methods. The problem of finding a purely combinatorial explanation of this positivity is still open, in the sense that there is no known formula for the coefficients of the form $K_{\lambda\mu}(q,t) = \sum_T q^{s(T)} t^{r(T)}$, where $T$ ranges over all semistandard Young tableaux of shape $\lambda$ and $r$ and $s$ are appropriate statistics on tableaux.
However, a different combinatorial definition of the transformed Macdonald polynomials $\widetilde{H}_\mu$ has been found, and appeared in the literature in [7] in 2004. The authors prove that
$$\widetilde{H}_\mu(\mathbf{x};q,t) = \sum_{\sigma : \mu \to \mathbb{Z}_+} q^{\mathrm{inv}(\sigma)} t^{\mathrm{maj}(\sigma)} x^\sigma, \qquad (1)$$
where the sum ranges over all fillings $\sigma$ of the diagram of $\mu$ with positive integers, and $x^\sigma$ is the monomial $x_1^{m_1} x_2^{m_2} \cdots$ where $m_i$ is the number of times the letter $i$ occurs in $\sigma$. The statistics inv and maj are generalizations of the Mahonian statistics inv and maj on permutations. Their precise definitions can be stated as follows.
Definition 1.3. Given a word $w = w_1 \cdots w_n$ whose letters $w_i$ are taken from some totally ordered alphabet $\mathcal{A}$, a descent of $w$ is an index $i$ for which $w_i > w_{i+1}$. The major index of $w$, denoted $\mathrm{maj}(w)$, is the sum of the descents of $w$.
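As a quick illustration, the major index of a word takes only a few lines to compute; the following Python sketch (the function name `maj` is our own choice) implements Definition 1.3 directly.

```python
def maj(w):
    """Major index of a word w: the sum of the (1-indexed) descent
    positions i with w[i] > w[i+1] (Definition 1.3)."""
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])
```

For example, the word 3241 has descents at positions 1 and 3, so `maj([3, 2, 4, 1])` returns 4.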
Definition 1.4. Given a filling $\sigma$ of a Young diagram of shape $\mu$ drawn in French notation, let $w^{(1)}, \ldots, w^{(\mu_1)}$ be the words formed by the successive columns of $\sigma$, read from top to bottom. Then $\mathrm{maj}(\sigma) = \sum_s \mathrm{maj}(w^{(s)})$.
Example 1.1. The major index of the filling in Figure 1 is 7, since the first column has major index 6, the second has major index 0, and the third column, 1.
Remark 1.1. The major index restricts to the usual major index on words in the case that the partition is a single column.
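To make Definition 1.4 concrete, here is a Python sketch; the encoding of a filling as a list of rows, bottom row first (French notation), is our own choice, and we assume row lengths weakly decrease going up, as in a partition diagram.

```python
def maj_word(w):
    # Major index of a word: sum of the 1-indexed descent positions.
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def maj_filling(rows):
    """maj of a filling, given as a list of rows, bottom row first.
    Each column word is read from top to bottom, and the column
    major indexes are summed (Definition 1.4)."""
    ncols = len(rows[0])
    total = 0
    for c in range(ncols):
        # reversed(rows) visits the rows from top to bottom
        col_top_to_bottom = [row[c] for row in reversed(rows) if c < len(row)]
        total += maj_word(col_top_to_bottom)
    return total
```

For instance, a single column containing 3, 2, 1 from top to bottom has maj 1 + 2 = 3, and `maj_filling([[1], [2], [3]])` returns 3.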
For the statistic inv, we start with the definition provided in [7]. We use the notion of the arm of an entry, which is defined to be the number of squares strictly to the right of the entry. A descent is an entry which is strictly greater than the entry just below it.
Definition 1.5. An attacking pair in a filling σ of a Young diagram is a pair of entries u and v with u > v satisfying one of the following conditions:
1. u and v are in the same row, with u to the left of v, or
2. u is in the row above v and strictly to its right.
Definition 1.6. The quantity inv(σ) is defined to be the number of attacking pairs in σ minus the sum of the arms of the descents.
For our purposes, we will also need the following cleaner definition of the inv statistic. This more closely resembles the inv statistic on a permutation π, defined to be the number of pairs i < j for which π(i) > π(j).
Definition 1.7. Let σ be any filling of a Young diagram with letters from a totally ordered alphabet A, allowing repeated letters. A relative inversion of σ is a pair of entries u and v in the same row, with u to the left of v, such that if b is the entry directly below u, one of the following conditions is satisfied:
• u < v and b is between u and v in size, that is, u ≤ b < v, or
• u > v and b is not between u and v in size, that is, either b < v < u or v < u ≤ b.
If u and v are in the bottom row, we treat b as any fixed value less than min(u, v), say 0 in the case A = Z₊.
Remark 1.2. The conditions above for the triple (u, v, b) to form a relative inversion can also be thought of as saying that the ordering of the sizes of u, b, v orients the triple counterclockwise.
Example 1.3. In Figure 1, there are two relative inversions: one consists of the 5 and the rightmost 3 on the bottom row, and the other consists of the 3 and 6 on the second row.
In fact, the number of relative inversions in a filling σ is always equal to inv(σ). In [7], the authors introduce the related notion of an inversion triple. Our relative inversions are simply the inversion triples that contribute 1 to the total inv. The advantage of this description is that it allows us to think of the inv as being computed row by row (just as maj is computed column by column), except that determining each row's inversions depends on the entries in the previous row.
For completeness, we include here a proof that inv(σ) is equal to the number of relative inversions of σ.
Proposition 1.1. The inversion number inv(σ) is equal to the number of relative inversions of σ.
Proof. Recall that inv(σ) is defined as the total number of attacking pairs minus the arms of the descents. Each descent of the form u > b where b is the entry directly below u contributes −1 towards inv(σ) for each v to the right of u in the same row. Each attacking pair contributes +1 towards inv(σ).
Define a good triple to be a triple of entries (u, v, b) where u is directly above and adjacent to b and v is to the right of u in its row, where we also allow b to be in the row below the bottom row of the tableau, with a value of 0. Then each contribution of −1 from a descent or +1 from an attacking pair is a member of a unique good triple. Therefore, inv(σ) is the sum of the contributions of each such triple.
A simple case analysis shows that each good triple contributes a total of either 0 or 1, taking into account the −1 from a possible descent and the +1 from a possible attacking pair. In particular, its contribution is 1 if it is a relative inversion and 0 otherwise. Thus inv(σ) is the total number of relative inversions.
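The equality of Proposition 1.1 is easy to test by computing inv both ways. The Python sketch below encodes a filling as rows from bottom to top (our encoding): `inv_attacking` follows Definitions 1.5 and 1.6, and `inv_relative` follows Definition 1.7.

```python
def inv_attacking(rows):
    """inv(sigma) = (# attacking pairs) - (sum of arms of descents)."""
    cells = [(r, c, v) for r, row in enumerate(rows) for c, v in enumerate(row)]
    attacking = 0
    for (r1, c1, u) in cells:
        for (r2, c2, v) in cells:
            same_row = r1 == r2 and c1 < c2        # u left of v
            row_above = r1 == r2 + 1 and c1 > c2   # u above v, strictly right
            if u > v and (same_row or row_above):
                attacking += 1
    arms = 0
    for (r, c, v) in cells:
        # a descent is an entry strictly greater than the entry just below
        if r > 0 and c < len(rows[r - 1]) and v > rows[r - 1][c]:
            arms += len(rows[r]) - c - 1           # arm: squares to its right
    return attacking - arms

def inv_relative(rows):
    """inv(sigma) as the number of relative inversions (Definition 1.7)."""
    count = 0
    for r, row in enumerate(rows):
        for i in range(len(row)):
            for j in range(i + 1, len(row)):
                u, v = row[i], row[j]
                b = rows[r - 1][i] if r > 0 else 0  # 0 below the bottom row
                if (u < v and u <= b < v) or (u > v and (b < v or b >= u)):
                    count += 1
    return count
```

On any filling the two functions agree; for example both return 2 on the two-row filling with bottom row 2 1 and top row 1 3.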
Since this combinatorial formula is an expansion in terms of monomials rather than Schur functions, it does not give an immediate answer to the Macdonald positivity conjecture. Indeed, it perhaps raises more questions than it answers. For one, there is a well-known $q,t$-symmetry relation for the transformed Macdonald polynomials, namely $\widetilde{H}_\mu(\mathbf{x};q,t) = \widetilde{H}_{\mu^*}(\mathbf{x};t,q)$. This is obvious from the triangularity conditions that define $\widetilde{H}_\mu$, and is also clear from Haiman's geometric interpretation [9]. When combined with the combinatorial formula, however, we obtain a remarkable generating function identity:
$$\sum_{\sigma : \mu \to \mathbb{Z}_+} q^{\mathrm{inv}(\sigma)} t^{\mathrm{maj}(\sigma)} x^\sigma = \sum_{\rho : \mu^* \to \mathbb{Z}_+} t^{\mathrm{inv}(\rho)} q^{\mathrm{maj}(\rho)} x^\rho. \qquad (2)$$
Notice that setting $t = 1$ and $\mu = (n)$ (a single-row shape) and taking the coefficient of $x_1 \cdots x_n$ on both sides (considering fillings with distinct entries from 1 to $n$), we get the well-known equation
$$\sum_{w \in S_n} q^{\mathrm{inv}(w)} = \sum_{w \in S_n} q^{\mathrm{maj}(w)},$$
which demonstrates the equidistribution of the Mahonian statistics inv and maj on permutations. There are several known bijective proofs of this identity (see [1], [3], [12]).
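The equidistribution of inv and maj can be checked directly for small n; a short Python sketch:

```python
from itertools import permutations

def inv(w):
    # number of pairs i < j with w[i] > w[j]
    return sum(1 for i in range(len(w))
                 for j in range(i + 1, len(w)) if w[i] > w[j])

def maj(w):
    # sum of the 1-indexed descent positions
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

# The multisets {inv(w)} and {maj(w)} over S_n coincide.
n = 4
invs = sorted(inv(p) for p in permutations(range(1, n + 1)))
majs = sorted(maj(p) for p in permutations(range(1, n + 1)))
assert invs == majs
```

The sorted lists of values agree, which is exactly the statement that the two generating functions coincide.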
In light of this, it is natural to ask if there is an elementary combinatorial proof of Equation (2), in the following sense.
Definition 1.8. The content of a filling σ, denoted |σ|, is the sequence α = (α_1, . . . , α_k) where α_i is the number of i's used in the filling. We also define the symbols:
• F — the set of all fillings of Young diagrams with positive integers;
• F^α_µ — the set of fillings of shape µ and content α;
• F^α_µ|_{inv=a, maj=b} — the set of fillings σ ∈ F^α_µ for which inv(σ) = a and maj(σ) = b.
We also define a weighted set to be a set A equipped with a number of statistics stat_1, stat_2, . . ., and a morphism of weighted sets to be a map that preserves those statistics. We write (A; stat_1, stat_2, . . .) to denote the weighted set when the statistics are not clear from context.
Remark 1.3. In [7], the authors give a combinatorial proof of the fact that the polynomials $\widetilde{H}_\mu$ are symmetric in the variables x_i. We will make use of this fact repeatedly, rearranging the entries of α as needed. In other words, to solve the Symmetry Conjecture, it suffices to find a map φ that restricts to bijections
$$F^\alpha_\mu|_{\mathrm{inv}=a,\,\mathrm{maj}=b} \to F^{r(\alpha)}_{\mu^*}|_{\mathrm{inv}=b,\,\mathrm{maj}=a},$$
where r is some bijective map that rearranges the entries of α.
In this paper, we provide explicit bijections ϕ for several infinite families of values of a, b, α, and µ. In Section 2 we proceed to give a combinatorial proof of the symmetry relation for the specialization t = 1, and in Section 3, we give an explicit bijection ϕ in the case that µ is a hook shape.
The bulk of our results are developed in Section 4. Here we investigate the Hall-Littlewood specialization a = 0, which corresponds to setting q = 0 in the Macdonald polynomials. We give a combinatorial proof in this case for all shapes µ having at most three rows, and also for all shapes µ when the content α is fixed to be (1, 1, . . . , 1). We also conjecture a strategy for the general problem that draws on the work of Garsia and Procesi [4] on the S_n-modules R_µ, which arise as the cohomology rings of the Springer fibers in type A. In Section 5, we state some applications of the results on the Hall-Littlewood case to understanding the rings R_µ, in particular regarding the cocharge statistic of Lascoux and Schützenberger (see [4] or [8], for instance). Specifically, we demonstrate a new recursive structure exhibited by the cocharge statistic on words.

Specialization at t = 1
In this section, we give a combinatorial proof of the Symmetry Conjecture at t = 1, namely $\widetilde{H}_\mu(\mathbf{x};q,1) = \widetilde{H}_{\mu^*}(\mathbf{x};1,q)$.
By the combinatorial formula in [7], it suffices to prove that, for any content α,
$$\sum_{\sigma \in F^\alpha_\mu} q^{\mathrm{inv}(\sigma)} = \sum_{\rho \in F^\alpha_{\mu^*}} q^{\mathrm{maj}(\rho)}. \qquad (3)$$
To prove this, we build on Foata's well-known bijection on words. The bijection was originally defined on permutations in [3], as a map f : S_n → S_n for which inv(f(π)) = maj(π) for all π ∈ S_n. This map can be extended to a bijection on words of any fixed content α satisfying inv(f(w)) = maj(w), as follows.
Definition 2.1. (Foata's bijection on words.) Given a word w = w_1 ··· w_n, we define f(w) recursively as follows. If n = 1, set f(w) = w = w_1. Otherwise, let w′ = w_1 ··· w_{n−1} be the truncation of w, write f(w′) = v_1 ··· v_{n−1}, and define f(w) to be the word formed by the following process.
• Add w_n to the end of the word v_1 ··· v_{n−1}. If v_{n−1} ≤ w_n, place a vertical bar to the right of each v_i that is less than or equal to w_n, and also at the beginning of the word. Otherwise, if v_{n−1} > w_n, place a vertical bar to the right of each v_i that is strictly greater than w_n.
• In each subword separated by vertical bars, move the last entry to the beginning of the subword.
• Remove all vertical bars to obtain f (w).
For example, the process above applied to the word 4154223 yields the following.
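The recursive process can be sketched in Python as follows (the blocks between bars are handled implicitly; this version reproduces the examples computed later in Section 3, such as 5273 ↦ 5723, though block conventions in the literature sometimes differ).

```python
def foata(w):
    """Foata's bijection f on words, sending maj to inv:
    inv(foata(w)) == maj(w).  Built recursively as in Definition 2.1."""
    v = []
    for a in w:
        if v:
            # Split v into blocks: bars go after each letter <= a when the
            # last letter is <= a, otherwise after each letter > a; then
            # cycle the last letter of each block to the front.
            small = v[-1] <= a
            out, block = [], []
            for x in v:
                block.append(x)
                if (x <= a) == small:
                    out.extend([block[-1]] + block[:-1])
                    block = []
            v = out
        v.append(a)  # the new letter a ends the word
    return v
```

Applied to the word 4154223 (whose major index is 1 + 3 + 4 = 8), this produces a word whose inversion number is likewise 8, illustrating that f converts maj into inv.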

Definition 2.2. We say that a sequence of numbers a_1, ··· , a_n is in cyclic order if there exists an index i ∈ [n] for which a_{i+1} ≤ a_{i+2} ≤ ··· ≤ a_n ≤ a_1 ≤ a_2 ≤ ··· ≤ a_i.
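Equivalently, a sequence is in cyclic order exactly when some rotation of it is weakly increasing, which gives a short test (Python sketch, names our own):

```python
def in_cyclic_order(a):
    """True iff some rotation of a is weakly increasing (Definition 2.2)."""
    n = len(a)
    return any(
        all(r[k] <= r[k + 1] for k in range(n - 1))
        for r in (a[i:] + a[:i] for i in range(n))
    )
```

For example, 3, 4, 1, 2 is in cyclic order (rotate to 1, 2, 3, 4), while 1, 3, 2 is not.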
Proposition 2.1. For any fixed partition λ, we have
$$\sum_\sigma q^{\mathrm{maj}(\sigma)} = \sum_\rho q^{\mathrm{inv}(\rho)},$$
where the first sum ranges over all fillings σ : λ → Z₊ of λ with distinct entries, and the second ranges over all fillings ρ : λ* → Z₊ of the conjugate partition λ* with distinct entries.
Proof. We extend Foata's bijection as follows. Given a filling σ of λ, let v^{(1)}, v^{(2)}, . . . , v^{(k)} be the words formed by reading each of the columns of λ from top to bottom. Let w^{(i)} = f(v^{(i)}) for each i, so that maj(v^{(i)}) = inv(w^{(i)}). Notice that maj(σ) = Σ_{i=1}^k maj(v^{(i)}). We aim to construct a filling ρ of λ* such that inv(ρ) = Σ_{i=1}^k inv(w^{(i)}).
Let the bottom row of ρ be w^{(1)}. To construct the second row, let t_1 = w^{(1)}_1 be the corner letter. Let x_0, x_1, . . . , x_{r−1} be the unique ordering of the letters of w^{(2)} for which the sequence t_1, x_0, x_1, . . . , x_{r−1} is in cyclic order. Notice that if x_i is placed in the square above t_1, it would be the left element of exactly i relative inversions in its row, since x_0, . . . , x_{i−1} would form inversions with it and the others would not. Now, in w^{(2)}, let i_k be the number of inversions whose left element is the kth letter of w^{(2)}. Then write x_{i_1} in the square above t_1, in order to preserve the number of inversions the first letter is a part of. Then for the square above t_2 = w^{(1)}_2, similarly order the remaining x's besides x_{i_1} in cyclic order after t_2, and write down in this square the unique such x for which it is the left element of exactly i_2 inversions in its row. Continue this process for each k ≤ r to form the second row of the tableau.
Continue this process on each subsequent row, using the words w^{(3)}, w^{(4)}, . . ., to form a tableau ρ. We define f(σ) = ρ, and this construction is reversible: strip off the top row and rearrange according to inversion numbers, then strip off the next row, and so on. Thus we have extended the Foata bijection to tableaux of content α = (1, 1, . . . , 1), proving the result in this case.
Using this proposition, we prove two technical lemmata about the q-series involved. Define inv_w(R) to be the number of relative inversions in a row R given a filling w of the row directly beneath it.
Lemma 2.1. Let R be the (i + 1)st row in a partition diagram λ for some i ≥ 1. Let w = w_1, . . . , w_{λ_i} be a fixed filling of the ith row, underneath R. Let a_1, . . . , a_{λ_{i+1}} be any λ_{i+1} distinct positive integers. Then
$$\sum_R q^{\mathrm{inv}_w(R)} = (\lambda_{i+1})_q!,$$
where the sum ranges over all fillings of the row R with the integers a_1, . . . , a_{λ_{i+1}} in some order.
Proof. We know that
$$\sum_r q^{\mathrm{inv}(r)} = (\lambda_{i+1})_q!,$$
where r ranges over all orderings of a_1, . . . , a_{λ_{i+1}}. We use a similar process to that in Proposition 2.1 to construct a bijection φ from the set of permutations r of a_1, . . . , a_{λ_{i+1}} to itself such that inv_w(φ(r)) = inv(r). Namely, let r = r_1, . . . , r_{λ_{i+1}} be a permutation of a_1, . . . , a_{λ_{i+1}} and let i_k be the number of inversions of which r_k is the left element, for each k. Let x_0, . . . , x_{λ_{i+1}−1} be the ordering of the letters of r for which w_1, x_0, . . . , x_{λ_{i+1}−1} is in cyclic order. Let the first letter of φ(r) be x_{i_1}, remove x_{i_1} from the sequence, and repeat the process to form the entire row from the letters of r. Let φ(r) be this row.
The map φ can be reversed by using the all-0's word for w and applying the same process to recover r from φ(r). Thus φ is bijective. Moreover, inv_w(φ(r)) = inv(r) by construction. This completes the proof.
Lemma 2.2. Let R be the (i + 1)st row in a partition diagram λ for some i ≥ 1. Let w = w_1, . . . , w_{λ_i} be a fixed filling of the row directly underneath R. Let a_1, . . . , a_{λ_{i+1}} be positive integers, with multiplicities m_1, . . . , m_k. Then
$$\sum_R q^{\mathrm{inv}_w(R)} = \binom{\lambda_{i+1}}{m_1, \ldots, m_k}_q,$$
where the sum ranges over all distinct fillings of the row R with the integers a_1, . . . , a_{λ_{i+1}} in some order.
Proof. Multiplying both sides of the relation by (m_1)_q! ··· (m_k)_q!, we wish to show that
$$(m_1)_q! \cdots (m_k)_q! \sum_R q^{\mathrm{inv}_w(R)} = (\lambda_{i+1})_q!.$$
This follows immediately by interpreting (λ_{i+1})_q! and each (m_i)_q! as in Lemma 2.1, and assigning all possible orderings to the repeated elements and counting the total number of relative inversions in each case.
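Lemma 2.1 says that the distribution of inv_w over fillings of a row does not depend on the base row w, which can be verified numerically. In the Python sketch below (our own encoding), `inv_w` counts relative inversions against a base row, and the all-zero base row recovers the ordinary inversion number, as in the bottom-row convention of Definition 1.7.

```python
from itertools import permutations

def inv_w(row, below):
    """Number of relative inversions of `row` relative to the row `below`
    (Definition 1.7); `below` must be at least as long as `row`."""
    count = 0
    for i in range(len(row)):
        for j in range(i + 1, len(row)):
            u, v, b = row[i], row[j], below[i]
            if (u < v and u <= b < v) or (u > v and (b < v or b >= u)):
                count += 1
    return count

letters = (3, 7, 9)
# With the zero base row, inv_w is the ordinary inv, whose distribution
# over orderings of distinct letters is the q-factorial (3)_q!.
plain = sorted(inv_w(r, (0, 0, 0)) for r in permutations(letters))
# Lemma 2.1: the distribution is the same over any fixed base row.
for base in [(2, 5, 1), (9, 9, 9), (4, 8, 6)]:
    assert sorted(inv_w(r, base) for r in permutations(letters)) == plain
```

Each base row gives the same multiset of inv_w values, 0, 1, 1, 2, 2, 3, matching the coefficients of (3)_q! = 1 + 2q + 2q² + q³.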
We are now ready to prove Equation 3.
Proof. We break down each sum according to the contents of the columns of µ and the rows of µ*, respectively. Fix a multiset of contents for the columns, and suppose the entries assigned to the ith column have multiplicities m_{i,1}, . . . , m_{i,k_i}. Since maj is computed column by column, and the q-multinomial coefficient is the generating function for maj on words with fixed content, we have
$$\sum_\sigma q^{\mathrm{maj}(\sigma)} = \prod_i \binom{\mu^*_i}{m_{i,1}, \ldots, m_{i,k_i}}_q,$$
where the sum ranges over all fillings σ with the given column entries. By Lemma 2.2, applied row by row, the corresponding sum over fillings ρ with the given contents in the rows of µ* is the same:
$$\sum_\rho q^{\mathrm{inv}(\rho)} = \prod_i \binom{\mu^*_i}{m_{i,1}, \ldots, m_{i,k_i}}_q.$$
Summing over all possible choices of the entries from α for each column of µ, the result follows.

Hook Shapes
We now demonstrate a bijective proof of the Symmetry Conjecture in the case that µ is a hook shape, that is, µ = (m, 1, 1, 1, . . . , 1) for some m. There is a known combinatorial formula for the q,t-Kostka polynomials in the case of hook shapes µ given by Stembridge [13], but it does not involve the inv and maj statistics. The symmetry of inv and maj was demonstrated for fillings of hook shapes with distinct entries in [2], and our proof for the general case is similar in nature.
We first need to define several new maps: Definition 3.1. Let rev be the mapping on words w that reverses the entries: rev(w 1 w 2 · · · w n ) = w n w n−1 · · · w 1 .
We also define a map flip N for each positive integer N , as the "flip" operation that sends a filling σ of a given diagram to the filling flip N (σ) of the same diagram whose (i, j)th square is filled with N − σ(i, j).
Let f be Foata's bijection on words, as defined in Section 2.
Definition 3.2. For any filling σ of a hook shape µ, let w be the word formed by reading the bottom row from left to right, and let v be the word formed by reading the first column from top to bottom. Let N be the smallest positive integer such that no entry m ≥ N appears in σ; that is, N is one more than the largest entry of σ. Define φ(σ) to be the filling of the conjugate shape µ* whose bottom row is rev(flip_N(f(v))) and whose first column, read from top to bottom, is f⁻¹(rev(flip_N(w))).
Notice that, since the Foata bijection f preserves the last letter of each word, the corner square, which is common to both the bottom row and the first column, receives the same letter from both constructions, and so φ is a well-defined map.
Proposition 3.1. For any filling σ of a hook shape µ, we have inv(φ(σ)) = maj(σ) and maj(φ(σ)) = inv(σ).
Proof. This is clear from the fact that the composition rev ∘ flip_N preserves inv, and that the Foata bijection f has the property that inv(f(w)) = maj(w) and maj(f⁻¹(w)) = inv(w).
Example 3.1. Figure 2 shows an example of a filling σ of a hook shape µ and the corresponding filling φ(σ) of µ*. The column 5273 maps to 5723 under Foata's bijection, and reversing and flipping its digits with N = 9 we obtain 6724, which we write in the bottom row of φ(σ). The row 31486 reverses and flips (with N = 9) to 31586, which is the image under Foata's bijection of the permutation 38156. We write this down the first column of φ(σ). The 6's match, so there is no conflict at the corner square.
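The maps rev and flip_N are easily coded, and one can check both that their composition preserves inv and the specific values in Example 3.1 (Python sketch; encoding words as lists of digits is our own choice):

```python
def rev(w):
    # reverse the positions of the letters
    return w[::-1]

def flip(w, N):
    # flip_N replaces each entry x by N - x
    return [N - x for x in w]

def inv(w):
    return sum(1 for i in range(len(w))
                 for j in range(i + 1, len(w)) if w[i] > w[j])

# rev o flip_N preserves inv: flipping reverses comparisons between
# distinct letters, and reversing the positions restores them.
w = [3, 1, 4, 8, 6]
assert inv(rev(flip(w, 9))) == inv(w)

# The values from Example 3.1 (N = 9): 5723 -> 6724 and 31486 -> 31586.
assert rev(flip([5, 7, 2, 3], 9)) == [6, 7, 2, 4]
assert rev(flip([3, 1, 4, 8, 6], 9)) == [3, 1, 5, 8, 6]
```

The inv-preservation holds even with repeated letters, since equal letters never form inversions before or after the composition.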
As a corollary to Proposition 3.1, we have a proof of the Symmetry Conjecture for hook shapes.
Corollary 3.1. If µ is a hook shape, the map φ above satisfies the conditions of the Symmetry Conjecture, proving combinatorially that in this case.

Hall-Littlewood Specialization at q = 0
We now turn to the specialization in which one of the statistics is zero. In particular, setting q = 0, the symmetry relation becomes
$$\widetilde{H}_\mu(\mathbf{x};0,t) = \widetilde{H}_{\mu^*}(\mathbf{x};t,0),$$
which is a symmetry relation between the transformed Hall-Littlewood polynomials $\widetilde{H}_\mu(\mathbf{x};t) := \widetilde{H}_\mu(\mathbf{x};0,t)$. In this case the combinatorial formula turns the symmetry relation into
$$\sum_{\sigma \in F^\alpha_\mu|_{\mathrm{inv}=0}} t^{\mathrm{maj}(\sigma)} = \sum_{\rho \in F^\alpha_{\mu^*}|_{\mathrm{maj}=0}} t^{\mathrm{inv}(\rho)}.$$
Combinatorially, we are trying to find natural morphisms of weighted sets
$$F^\alpha_\mu|_{\mathrm{inv}=0} \to F^{r(\alpha)}_{\mu^*}|_{\mathrm{maj}=0},$$
where F^α_µ|_{inv=0} is equipped with the maj statistic, and F^{r(α)}_{µ*}|_{maj=0} is equipped with the inv statistic. For the bijection r(α), we will use the following.
In terms of alphabets, let A be a finite multiset of positive integers with maximum element M. The content of A is α if α_i is the multiplicity of i in A. The complement of A, denoted $\overline{A}$, is the multiset consisting of the elements M + 1 − a for all a ∈ A. Notice that if the content of A is α, then the content of $\overline{A}$ is r(α).

Generalized Carlitz Codes
Our approach in the Hall-Littlewood case is partially motivated by Carlitz's bijection (S n ; inv) → (S n ; maj), an alternative to Foata's bijection that also demonstrates the equidistribution of inv and maj on permutations. A full proof of this bijection can be found in Carlitz's original paper [1], or in a somewhat cleaner form in [12]. For the reader's convenience we will define it here.
The bijection makes use of certain codes: a Carlitz code of length n is a word w = w_1 ··· w_n of nonnegative integers such that w_i ≤ n − i for all i. Let C_n denote the set of all Carlitz codes of length n, equipped with the combinatorial statistic Σ taking a word to the sum of its entries.
Notice that the number of Carlitz codes of length n is equal to n!, since there are n − i + 1 choices for each letter w_i. This allows us to make use of the weighted set (C_n; Σ) of Carlitz codes as an intermediate object connecting (S_n; inv) to (S_n; maj). In particular, the Carlitz bijection is the composite of two simple isomorphisms of weighted sets, defined as follows.
Definition 4.3. The inversion code of a permutation π, denoted invcode(π), is the sequence c_1, c_2, . . . , c_n where c_i is the number of inversions of π in which i is the smaller entry, i.e. pairs (j, i) with i < j and i appearing to the right of j.
Example 4.1. We have invcode(4132) = 1210, because the 1 is the smaller entry of one inversion (4, 1), the 2 is the smaller entry of the two inversions (3,2) and (4,2), the 3 is the smaller entry of the inversion (4,3), and the 4 is not the smaller entry of any inversion.
Clearly invcode is a map S_n → C_n, and it is not hard to see that it is bijective: given a Carlitz code c_1, . . . , c_n, we can reconstruct the permutation π it came from as follows. First write down the number n, corresponding to c_n = 0. Then, c_{n−1} is either 0 or 1, and respectively determines whether to write down n − 1 to the left or to the right of the n. The entry c_{n−2} then determines where to insert n − 2 in the sequence, and so on until we have reconstructed π. It is also clear that invcode is an isomorphism of weighted sets, sending the statistic inv to Σ.
Definition 4.4. The map majcode : S_n → C_n is defined as follows. Given π ∈ S_n written as a permutation in word form, remove the n from π and set c_1 to be the amount by which the major index decreases as a result. Then remove the n − 1 and set c_2 to be the amount the major index decreases by, and so on until we have formed a sequence c_1, c_2, . . . , c_n. Then we define majcode(π) = c_1, c_2, . . . , c_n.
Example 4.2. Let π = 3241. Its major index is 1 + 3 = 4. Removing the 4 results in the permutation 321, which has major index 3, so the major index has decreased by 1 and we set c 1 = 1. Removing the 3 results in 21, which decreased the major index by 2. Hence c 2 = 2. Removing the 2 decreases the major index by c 3 = 1, and removing the 1 decreases it by c 4 = 0, so majcode(π) = 1210.
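Definition 4.4 also translates directly into a few lines of Python (a sketch; the helper `maj` is as before):

```python
def maj(w):
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def majcode(p):
    """Remove n, n-1, ... in turn; c_k records how much maj drops
    at the kth removal (Definition 4.4)."""
    w = list(p)
    code = []
    for v in range(len(p), 0, -1):
        shorter = [x for x in w if x != v]
        code.append(maj(w) - maj(shorter))
        w = shorter
    return code
```

On the permutation 3241 of Example 4.2 this returns the code 1, 2, 1, 0, and the code entries always sum to maj(π), since the removals telescope down to the empty word.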
As in the case of invcode above, it is not hard to construct an inverse for majcode, making it an isomorphism of weighted sets (S_n; maj) → (C_n; Σ). In the context of Hall-Littlewood symmetry, we can think of the Carlitz bijection as a solution to the case in which µ = (1^n) is a single-column shape, filled with distinct entries. Thus, we wish to generalize the notion of a Carlitz code to fillings of arbitrary shapes having inv or maj equal to 0, using arbitrary alphabets.
Our generalization is motivated by the monomial basis of the Garsia-Procesi modules in [4], which are closely connected to the cocharge (maj) statistic. We define a generalized Carlitz code as follows.
A word w has content α = (α 1 , . . . , α k ) if exactly α i of the entries of w are equal to i − 1 for each i. We also sometimes say it has content A where A is the multiset of letters of w.
We will see that the µ-sub-Yamanouchi words are the correct analog of Carlitz codes in the case that our Young diagram fillings have distinct entries. However, in general we require the following more precise definition.
We define C_{µ,A} to be the collection of all µ-sub-Yamanouchi codes which are A-weakly increasing. We call such codes generalized Carlitz codes, and we equip this collection with the statistic Σ : C_{µ,A} → Z given by Σ(c) = Σ_i c_i, forming a weighted set (C_{µ,A}; Σ).
We now introduce the concept of the monomial of a code. The next three definitions are compatible with the notation in [4]. The monomial of a code c = c_1, . . . , c_n is $x^c = x_n^{c_1} x_{n-1}^{c_2} \cdots x_1^{c_n}$. Also let C_A(µ) be the set of all monomials x^c of µ-sub-Yamanouchi words c that are A-weakly increasing.
In [4], the authors define a similar set of monomials B(µ), which are the generators of the modules R_µ that arise naturally in the study of the Hall-Littlewood polynomials. We will see that in the case A = {1, 2, . . . , n}, we have C_A(µ) = B(µ), by showing that the sets C_A(µ) satisfy a generalized version of the recursion in [4]. To state this recursion we require two more definitions, which follow the notation in [4].
Definition 4.9. Given a partition µ, define µ^{(i)} to be the partition formed by removing the corner square from the column a_i containing the last square in the ith row of µ.
Definition 4.10. Given a set of monomials C and a monomial m, we write m · C to denote the set of all monomials of the form m · x where x ∈ C.
The following recursion defines the sets B(µ): we set B((1)) = {x_1}, and for a partition µ of n > 1,
$$B(\mu) = \bigcup_{i} x_n^{\,i-1} \cdot B(\mu^{(i)}),$$
where i ranges over the rows of µ. We refer to these sets as the Garsia-Procesi module bases.
We require one new definition in order to state our general recursion in the next proposition. Let A = {a_1, . . . , a_n} with a_1 ≤ a_2 ≤ ··· ≤ a_n be a multiset of positive integers, and let λ be a partition of n − 1. We define C_A(λ) …
Proposition 4.1. For any multiset A = {a_1, . . . , a_n} with a_1 ≤ a_2 ≤ ··· ≤ a_n, we have …
We defer the proof of this recursion to Section 6. Notice that in the case A = [n] = {1, 2, . . . , n}, since there are no repeated entries, Proposition 4.1 reduces to
$$C_{[n]}(\mu) = \bigcup_{i} x_n^{\,i-1} \cdot C_{[n-1]}(\mu^{(i)}).$$
Since this is the same as the recursion given for the sets B(µ) described in the previous section, and C_{\{1\}}((1)) = \{x_1\} = B((1)), we have the following corollary. As noted in [4], we can now also enumerate the sets C_A(µ) in the case A = [n]. For, in this case the simplified recursion gives
$$|C_{[n]}(\mu)| = \sum_{i} |C_{[n-1]}(\mu^{(i)})|,$$
with |C_{\{1\}}((1))| = 1. But the multinomial coefficients $\binom{n}{\mu} = \frac{n!}{\mu_1!\,\mu_2!\cdots}$ satisfy $\binom{1}{1} = 1$ and the same recursion:
$$\binom{n}{\mu} = \sum_{i} \binom{n-1}{\mu^{(i)}}.$$

Inversion Codes
In [7], the authors define the cocharge word cw(σ) of a filling σ ∈ F|_{inv=0}. It is shown that maj(σ) = cc(cw(σ)), where cc is the combinatorial statistic known as cocharge, which will be explored further in Section 5.
Similarly, for fillings ρ having maj(ρ) = 0, we can form an associated inversion word, and describe a statistic on the inversion word that measures inv(ρ) in the case that maj(ρ) = 0.
Definition 4.13. Let ρ be a filling of shape µ having maj = 0. We define the inversion word of ρ as follows. Starting with the smallest value that appears in the filling, write down the column numbers of the entries with that value as they appear in reading order, and then proceed with the second smallest value, and so on. In order to compute inv(ρ) given only its inversion word, we will use a visual representation of the inversion word, which we call a diagram.
Definition 4.14. The diagram of a function f : A → Z₊ is the plot of the function with respect to the ordering on A. We say that the diagram has shape µ if |f⁻¹(i)| = µ_i for each i.
The diagrams we will be using are essentially the plot of the inversion word, considered as a function on a multiset. Definition 4.15. Let ρ be a filling of µ * having maj(ρ) = 0, and let w be the inversion word of ρ. Let A be the multiset consisting of the entries of ρ, ordered from least to greatest and in reading order in the case of a tie. Let f : A → Z + be the function given by f (a i ) = w i . We define InvPlot(ρ) to be the diagram of the function f , whose plot has µ j dots in the jth row.
Notice that the InvPlot of a filling of shape µ* has shape µ, the conjugate shape. For instance, the tableau
1
2 2
2 4
3 4 2
(drawn in French notation, with bottom row 3 4 2) has maj = 0, and its inversion word is 11213122. Its plot is as follows.

(Figure: the diagram of the inversion word 11213122 over the multiset {1, 2, 2, 2, 2, 3, 4, 4}, with four dots in the bottom row, three in the second row, and one in the third.)
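Computing the inversion word of Definition 4.13 can be sketched in Python as follows; the filling is encoded as rows from bottom to top (our encoding), and reading order is taken to be top row down, left to right, which reproduces the word 11213122 for the tableau above.

```python
def inversion_word(rows):
    """Inversion word of a filling with maj = 0: for each value, from
    smallest to largest, list the 1-indexed column numbers of its
    occurrences in reading order (top row first, left to right)."""
    entries = []
    for height, row in enumerate(rows):          # rows: bottom row first
        for col, value in enumerate(row):
            # sort key: value ascending, then higher rows first, then left to right
            entries.append((value, -height, col))
    return [col + 1 for (value, neg_height, col) in sorted(entries)]
```

For the tableau with rows (bottom to top) 3 4 2 / 2 4 / 2 2 / 1, this returns [1, 1, 2, 1, 3, 1, 2, 2].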
To compute the number of inversions, we define the inversion labeling of a diagram to be the result of labeling the µ_i dots in the ith row of the diagram with the numbers 1, 2, . . . , µ_i from right to left. For fillings ρ with maj(ρ) = 0, there are no descents, and so the number of inversions in InvPlot(ρ) is equal to inv(ρ). In particular, type I and type II inversions correspond to attacking pairs in the same row or in adjacent rows, respectively.
Remark 4.1. The type I and type II inversions also correspond to the two types of inversions used to define the dinv statistic on parking functions. Indeed, this was the original motivation for the full definition of the inv statistic [6]. We now classify the types of diagrams that arise as the InvPlot of a filling.
Definition 4.16. A consecutive subsequence is in inversion-friendly order if, when each row is labeled from right to left as above, all dots with label i + 1 in the subsequence occur before the dots with label i for all i, and the dots of any given label appear in increasing order from bottom to top.
An example of an inversion-friendly subsequence is shown below. It is easy to check that, in the plot of any filling ρ having maj(ρ) = 0, every subsequence above a fixed letter of the alphabet A is in inversion-friendly order. We claim that the converse is true as well: every diagram in which all such subsequences are in inversion-friendly order corresponds to a unique Young diagram filling ρ having maj(ρ) = 0. We let ID_{µ,A} denote the set of all diagrams of shape µ of inversion word type over A. We equip ID_{µ,A} with its inv statistic to make it into a weighted set.
Proposition 4.2. Let µ be a partition of n, and let A be a multiset of n positive integers with content α. The map InvPlot is an isomorphism of weighted sets
$$(F^\alpha_{\mu^*}|_{\mathrm{maj}=0}; \mathrm{inv}) \to (ID_{\mu,A}; \mathrm{inv}).$$
Proof. (Sketch.) As noted above, this is a map of sets that preserves the inv statistic, since there are no descents. To show it is bijective, we construct its inverse. Let D be an arbitrary diagram in ID_{µ,A}, and let f : A → Z₊ be the corresponding map. For any a ∈ A, let ℓ(a) be the label on the corresponding dot at height f(a). Then let ρ be the filling of shape µ* in which each a ∈ A is placed in the square in column f(a) from the left and height ℓ(a) from the bottom. By the definition of InvPlot, we have InvPlot(ρ) = D, and furthermore if D = InvPlot(σ) then ρ = σ. Thus the map sending D to ρ is the inverse of InvPlot.
Theorem 4.1. For any partition µ of n and any multiset A of n positive integers, there is an isomorphism of weighted sets (ID_{µ,A}; inv) → (C_{µ,A}; Σ).
Combining this with Proposition 4.2 gives the bijection we are aiming for. The proof of Theorem 4.1 is somewhat technical, and so we defer it to Section 6.

Major Index Codes
To complete the proof of the Hall-Littlewood case, it now suffices to find a weighted set isomorphism majcode. Recall the recursion for the µ-sub-Yamanouchi codes of content A from Proposition 4.1. Using this recursion, one possible strategy for constructing majcode is to show combinatorially that F α µ | inv=0 satisfies a similar recursion. In this section, we present some partial progress towards finding the map majcode. All of our work is based on the following four-step approach to the problem.
Step 1. Consider the content (1 n ) corresponding to fillings with distinct entries, and find an explicit weighted set isomorphism ψ. That is, ψ should send an inversion-free filling T of µ to an inversion-free filling ψ(T ) of µ (d+1) for some d, such that maj(ψ(T )) = maj(T ) − d.
Step 2. Define the majcode of a filling T having content (1 n ) recursively, by recording d and then the majcode of ψ(T ). Step 3. Check the base case of a single square, and conclude that, because the recursion is satisfied, majcode is an isomorphism of weighted sets onto C µ,[n] , the generalized Carlitz codes of shape µ and content [n].
Step 4. Show that there is a standardization map that respects maj, such that the composition majcode • Standardize is a bijection to C µ,A , where A is the alphabet with content α. That is, show that after standardizing, we get a major index code which is A-weakly increasing, and that no such code is hit twice.

Killpatrick's Method for Standard Fillings
For Step 1 in our strategy, in which A = {1, 2, . . . , n} is an alphabet with no repeated letters, such a map can easily be extracted from the work of Killpatrick [10]. In that paper, Killpatrick gives a combinatorial proof of a recursion for a generating function involving charge, written ch, and defined in terms of cocharge as ch(w) = n(µ) − cc(w), where n(µ) = Σ i (i − 1) · µ i . Killpatrick defines W µ to be the set of words of content µ, lets r i,µ = |{j > i : µ j = µ i }|, and proves a recursion for the charge generating function of W µ .
If we substitute q → 1/q and multiply both sides by q n(µ) , this becomes a recursion equivalent to the one we stated in Step 1 above. Killpatrick's map ψ therefore allows us to define a map majcode that satisfies Steps 1-3 above, and we immediately obtain the following result.
However, Killpatrick's map majcode does not satisfy the requirements of Step 4. To illustrate this, we consider the case in which µ = (1 n ) is a single-column shape. In this case, Killpatrick's bijection majcode is defined by the following process: 1. Given a filling w of a single-column shape such as the one with reading word 1432 in the diagram below, check whether the bottommost entry is the largest entry. If not, cyclically increase each entry by 1 modulo the number of boxes n. Each such cyclic increase, or cyclage, can be shown to decrease the major index by exactly one (see Section 5 for details in the language of cocharge). We perform the minimal number of cyclages needed to ensure that the bottommost letter is n, and let c 1 be the number of cyclages used. (The number of cyclages is illustrated in the figure.) It follows that Standardize maps these three tableaux to the three standardized fillings whose majcodes are 040000, 130000, and 220000, respectively. But these three tableaux are: Therefore, the map Standardize cannot preserve the relative ordering of the entries, or even the positions of the descents. This makes it unlikely that a simple rule for such a standardization map exists. However, it is possible that there exists a more complicated combinatorial rule for such a map, and we leave this as an open question for future investigation. In the next section, we return to Carlitz's bijection and attempt to generalize majcode to arbitrary inversion-free fillings.

A Carlitz-like Method for General Fillings
To demonstrate that the issues with repeated entries described in the last section can be avoided, we first consider the case of a single column. Recall that Carlitz's bijection (for permutations, or single columns with distinct entries) involves removing the largest entry n of the column and recording the difference in the major index. With repeated entries, then, we simply need a way of determining which of the possibly multiple n's to remove at each step.
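For a one-column shape, the major index of a filling is just the classical major index of its top-to-bottom reading word, so the removal step can be sketched directly. The following is a minimal illustration for distinct entries only; the function names are ours:

```python
def major_index(w):
    """Classical major index: sum of the positions i (1-indexed)
    at which w[i] > w[i+1]."""
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def carlitz_code(w):
    """Carlitz-style code of a word with distinct entries: repeatedly
    remove the largest remaining entry, recording how much the major
    index drops at each step."""
    w, code = list(w), []
    while w:
        rest = [x for x in w if x != max(w)]
        code.append(major_index(w) - major_index(rest))
        w = rest
    return code
```

For instance, the column with reading word 1432 mentioned above has major_index([1, 4, 3, 2]) = 5 and carlitz_code([1, 4, 3, 2]) = [3, 2, 0, 0]; since the code telescopes, its entries always sum to the major index of the original word.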
Definition 4.19. Let σ be any filling of a column of height n with positive integers. We define the standardization labeling on repeated entries as follows.
1. Let i be a letter that occurs k times in σ. Remove any entries larger than i to form a smaller column σ ′ .

2. Find the bottommost i in σ ′ that is either on the very bottom of σ ′ or has entries a and b above and below it, respectively, with a > b. Assign this i a label of k and remove it. Repeat this process, labeling the next such i by k − 1 and so on, until there are no i's left that satisfy this condition. 3. Finally, remove and label any remaining i's in order from top to bottom, decreasing the label by one each time.
We define Standardize(σ) to be the unique column filling using labels 1, 2, . . . , n that respects the ordering of the entries of σ and breaks ties according to the standardization labeling. Proposition 4.3. For any column filling σ with alphabet A, let ρ = Standardize(σ). Then ρ and σ have the same major index, and majcode(ρ) is A-weakly increasing.
We defer the proof to Section 6. The key step is the following technical lemma. Define a consecutive block of n's in a filling to be a maximal consecutive run of entries in a column which are all filled with the letter n.
Lemma 4.1. Given a filling of a one-column shape µ = (1 r ) and largest entry n, there is a unique way of ordering the n's in the filling, say n 1 , . . . , n αn , such that the following two conditions are satisfied. 1. Any consecutive block of n's in the column appears in the sequence in order from bottom to top, and 2. If we remove n 1 , . . . , n αn in that order, and let d i be the amount that the major index of the column decreases at the ith step, then the sequence d 1 , d 2 , . . . , d αn is weakly increasing.
We can now extend Carlitz's bijection to words having repeated letters (i.e. arbitrary fillings of one-column shapes).
Example 4.6. Let σ be the one-column filling whose reading word is 6434666251664; the standardization labeling on the 6's is shown by the subscripts. Since this one-column shape has size 13, the filling Standardize(σ) will have the 6's relabeled as the numbers from 8 to 13 according to the subscripts above, producing the column with reading word 9, 4, 3, 4, 10, 11, 12, 2, 5, 1, 8, 13, 4. We then remove the 13, 12, . . . , 8 in order. This results in a sequence of difference values 1, 3, 3, 3, 5, 7, which is weakly increasing.
We are left with a column with reading word 4342514, in which there is only one 5, so Standardize changes it to a 7. We remove this to obtain a difference of 1 in the major index. We are left with 434214, in which the 4's are standardized according to the labeling of Definition 4.19. Removing the entries in order from 6 down to 1 decreases the major index by 0, 2, 3, 2, 1, 0, respectively. Therefore, majcode(σ) = 1, 3, 3, 3, 5, 7, 1, 0, 2, 3, 2, 1, 0.
Proposition 4.4. The map majcode is an isomorphism of weighted sets from F α (1 n ) | inv=0 to C (1 n ),A for any alphabet A with content α, and any one-column partition shape (1 n ).
(Note that the restriction inv = 0 has no real significance in this one-column case.) Proof. Carlitz's work shows that majcode is an isomorphism in the case that α = (1, 1, . . . , 1), i.e. A has one of each letter from 1 to n. In the case of repeated entries, we note that majcode is still injective. Indeed, given a code corresponding to a filling, there is a unique place to insert the next number at each step: apply the Standardize map, use Carlitz's bijection, and then un-standardize in the unique way so that the order of entries is preserved and the resulting alphabet is A. Now, notice that by our definition of majcode and Lemma 4.1, the codes we get are A-weakly increasing. We claim that they are also (1 n )-sub-Yamanouchi: at the ith step, there are n − i + 1 letters remaining, and the difference d i is either the position of the letter we're removing plus the number of descents strictly below it, or the number of descents weakly below it. Therefore, the maximum value of d i is n − i + 1, and so d 1 d 2 · · · d n is (1 n )-sub-Yamanouchi and A-weakly increasing. It follows that majcode is an injective morphism of weighted sets. Finally, notice that the two sets have the same cardinality: each has cardinality equal to the multinomial coefficient (n choose α), where α is the content of the alphabet A. It follows that majcode is bijective, as desired.

Reducing Rectangles to Columns
The last section demonstrated that the Carlitz bijection can be generalized to repeated entries for one-column shapes. We now present some progress towards a generalization to all shapes µ. Our primary tool is the following technical result, which generalizes the fact that if we remove the largest entry n from the bottom of a one-column shape, we get a major index code entry d = 0.
Proposition 4.5 (Main Lemma). Suppose σ : µ → Z + is a filling for which inv(σ) = 0 and the largest entry n appears in the bottom row. Let σ ↓ : µ (1) → Z + be the filling obtained by: 1. Removing the rightmost n from the bottom row of σ, which must be in the rightmost column since inv(σ) = 0, 2. Shifting each of the remaining entries in the rightmost column down one row, 3. Rearranging the entries in each row in the unique way so that inv(σ ↓ ) = 0.
Then the major index does not change: maj(σ ↓ ) = maj(σ). We defer the proof to Section 6. It turns out that the construction σ → σ ↓ is reversible, and to see this we require the following lemma. This lemma allows us to recover σ from a tableau σ ↓ whose second-longest row µ k is one square shorter than its longest rows (µ 1 through µ k−1 ). We simply raise the appropriate entry a i from row µ k−1 to row µ k , then do the same from row µ k−2 to µ k−1 , and so on, and finally insert the number n in the bottom row, where n is larger than all of the other entries in σ ↓ .
The map ψ of Theorem 6.1 is defined combinatorially by the following process.
1. Given a filling σ : µ → Z + with distinct entries 1, . . . , n and inv(σ) = 0, let i be the row containing the entry n. Split the filling just beneath row i to get two fillings σ top and σ bot , where σ bot consists of rows 1, . . . , i − 1 of σ and σ top consists of rows i and above. 2. Rearrange the entries of the rows of σ top in the unique way that forms a filling, again denoted σ top , for which inv(σ top ) = 0.
3. Apply the procedure of Proposition 4.5 to σ top , that is, removing the n from the bottom row and bumping each entry in the last column down one row. Let the resulting tableau be called τ . 4. Place τ on top of σ bot and rearrange all rows to form a tableau ρ having inv(ρ) = 0. Then we define ψ(σ) = ρ.
This recovers a recursion of Garsia and Procesi for rectangular shapes µ.
The map ψ of Theorem 6.1 is illustrated by the example below. The proofs of these results are deferred to Sections 6.3 and 6.4. For now, we state some facts pertaining to Theorem 6.1 that will be useful in extending this map to other shapes and alphabets. Proofs of these facts can also be found in Section 6.4.
The next theorem suggests that the standardization map for rectangle shapes can be inherited from the standardization map for single-column shapes described above.

Theorem 4.4 (Reducing rectangles to columns). For σ ∈ F (1 n ) µ | inv=0 with µ a rectangle, the value of d = maj(σ) − maj(ψ(σ)) can be determined as follows. Let σ 1 be the unique element of F (1 n ) µ | inv=0 for which n is in the bottom row and ψ(σ 1 ) = ψ(σ), so that σ 1↓ = ψ(σ 1 ) = ψ(σ). Let a h−1 , . . . , a 1 , n be the entries of the rightmost column of σ 1 from top to bottom. Then d is the difference in the major index obtained from inserting n into the ith position in the one-column shape with reading word a h−1 , . . . , a 1 .
This theorem is so crucial to the proofs of the results in the next section that it is helpful to give the sequence of a i 's its own name. We call it the bumping sequence of σ. We can also say something about the position of these a i 's given the position of the largest entry.
Proposition 4.7. Let µ be a rectangle shape of height h, and let σ ∈ F (1 n ) µ with its largest entry n in row i. If a 1 , . . . , a h−1 is the bumping sequence of σ, then a i+2 , . . . , a h−1 all occur in columns weakly to the right of the n, and each a j is weakly to the right of a j−1 for j ≥ i + 3.

Three Row Shapes
We now provide a complete bijection majcode in the case that µ = (µ 1 , µ 2 , µ 3 ) is a partition with at most three rows.
We start with the definition of majcode for two-row shapes, which we will use as part of the algorithm for three rows. Lemma 4.3. Let µ = (µ 1 , µ 2 ) be any two-row shape of size n. Then there is a weighted set isomorphism ψ defined combinatorially by the following process. Given an element σ of F (1 n ) µ | inv=0 , that is, a filling of the two-row shape µ having no inversions, consider its largest entry n.
1. If the n is in the bottom row, define ψ(σ) = σ ↓ as in Proposition 4.5.
2. If the n is in the second row, remove it and re-order the remaining entries in the top row so that there are no inversions. Let ψ(σ) be the resulting filling.
Proof. We first show that ψ is a morphism of weighted sets. If the n we remove is in the bottom row, then by Proposition 4.5, the new filling σ ↓ has no inversions and has the same major index as σ. This means that σ ↓ is in the d = 0 component of the disjoint union, and the statistic is preserved in this case.
Otherwise, if the n is in the second (top) row, we wish to show that the difference in major index, d = maj(σ) − maj(σ ′ ), is 1. Indeed, notice that the bottom row remains unchanged after removing the n, and so the difference in major index is the same as if we ignore the extra µ 1 − µ 2 entries at the end of the bottom row and consider just the rectangle that includes the second row. By Theorem 6.1, it follows that d = 1. Therefore ψ is a morphism of weighted sets.
To show that ψ is bijective, we construct an inverse map φ. First, let σ ′ be a filling of shape µ (1) . Then we can insert n into the bottom row, and if µ is a rectangle, also bump up one of the entries of the bottom row according to Lemma 4.2. This creates a filling σ of shape µ having the same major index as σ ′ . We define φ(σ ′ ) = σ, which gives an inverse for ψ on the d = 0 component. Now let σ ′ be a filling of shape µ (2) . The shape µ (2) has a longer first row than second row, so we can insert n into the second row and rearrange the row entries to obtain an inversion-free filling σ of shape µ and content α. We define φ(σ ′ ) = σ, and by Theorem 6.1 applied to the two-row rectangle inside µ of width equal to the top row of µ, the major index increases by 1 from σ ′ to σ. Thus φ is an inverse to ψ. We now complete the entire bijection for two rows by defining a standardization map for two-row fillings.
Definition 4.22. For a two-row shape µ = (µ 1 , µ 2 ), we define the map Standardize as follows. Given a filling σ ∈ F α µ | inv=0 , define Standardize(σ) to be the filling of µ with content (1 n ) that respects the ordering of the entries of σ by size, with ties broken by reading order.
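On reading words, this standardization amounts to a stable sort: rank the entries by value, breaking ties by reading order. A minimal sketch (our own encoding, acting on the reading word rather than on the filling itself):

```python
def standardize_reading_word(word):
    """Replace the entries of `word` by 1, 2, ..., n, respecting the
    order of the entries by size and breaking ties by reading order.
    Python's sort is stable, so equal entries keep their left-to-right
    (reading) order."""
    order = sorted(range(len(word)), key=lambda k: word[k])
    std = [0] * len(word)
    for rank, k in enumerate(order, start=1):
        std[k] = rank
    return std
```

For instance, standardize_reading_word([2, 1, 2, 1]) returns [3, 1, 4, 2]: the two 1's become 1 and 2 in reading order, and the two 2's become 3 and 4.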
Example 4.8. The standardization map for two rows is illustrated below. We can now define majcode for two-row shapes.
Proof. Putting together the recursions of Lemma 4.3 and Lemma 4.1, we have that for the content (1 n ) corresponding to the alphabet [n], the map majcode is a weighted set isomorphism. Now, let A be any alphabet with content α, and let σ be a filling of µ with content α. Then we know majcode(σ) = majcode(Standardize(σ)), so majcode(σ) ∈ C µ,[n] ; in other words, majcode(σ) is µ-sub-Yamanouchi. In addition, since Standardize is an injective map (there is clearly only one way to un-standardize a standard filling to obtain a filling with a given alphabet), the map majcode, being a composition of Standardize and the majcode for standard fillings, is injective on fillings with content α.
We now wish to show that majcode(σ) = d 1 , . . . , d n is A-weakly increasing, implying that majcode is an injective morphism of weighted sets to C µ,A . To check this, let σ ′ = Standardize(σ). Then any repeated letter from σ will become a collection of squares that have consecutive entries and are increasing in reading order in σ ′ . Neither of the two operations of the map ψ affects the reading order of such subcollections, since consecutive integers a and a + 1 cannot occur in reverse order in a filling with distinct entries and no inversions. So, it suffices to show that if the largest entry m of σ occurs i times, then d 1 ≤ · · · ≤ d i .
In the standardized filling, the m's of σ become the numbers n − i + 1, n − i + 2, . . . , n, and occur in reading order. Thus we first remove any of these that occur in the bottom row, and for those we have d t = 0. We continue removing these from the bottom row until none are left there. The remaining d t 's up to d i then equal 1. Therefore, d 1 ≤ d 2 ≤ · · · ≤ d i , as required.
Finally, the number of fillings with content α is the same as the number of cocharge-friendly diagrams, which is the same as the number of inversion-friendly dot-diagrams for the reverse alphabet. This in turn is the same as the cardinality of C µ,A by the section on inversion codes above. Thus the injective map majcode is in fact a bijection. The result follows.
Corollary 4.4. For any two-row shape µ and content α, the composition invcode −1 ◦ majcode gives the desired symmetry bijection for two-row shapes.
Example 4.9. In Figure 3, the map majcode is applied to a two-row filling σ. The figure shows that majcode(σ) = 100010. If we apply invcode −1 to this code using the reversed alphabet, we obtain the filling ρ below. Notice that maj(σ) = inv(ρ) = 2.
We now have the tools to extend our map ψ to three-row shapes.
Definition 4.24. Let σ be any filling of a three-row shape µ = (µ 1 , µ 2 , µ 3 ), and let σ ′ be the 3 × µ 3 rectangle contained in σ. Let n be the largest entry in σ. Choosing one of the occurrences of n, say n i , we define ψ ni (σ) by the following process.
1. If n i is to the right of σ ′ , remove the n as in the two-row algorithm to form ψ ni (σ).

2. If n i is in the bottom row and in σ ′ , then µ is a rectangle and we let ψ ni (σ) = σ ↓ . 3. If n i is in the second row and in σ ′ , let a 2 be the top entry of the bumping sequence of σ ′ . Let b be the entry in square (µ 2 + 1, 2) if it exists, and let b = n + 1 otherwise. If b ≥ a 2 , remove n i and bump a 2 down to the second row; if b < a 2 , simply remove n i . Rearrange the modified rows so that there are no inversions, and let ψ ni (σ) be the resulting filling. 4. If n i is in the top row and in σ ′ , let a 1 , a 2 be the bumping sequence of the 3 × µ 3 rectangle in σ. If a 2 > a 1 or µ 2 = µ 3 , remove n i from σ. Otherwise, if a 2 ≤ a 1 , remove n i and bump a 2 up to the top row. Rearrange the modified rows so that there are no inversions, and let ψ ni (σ) be the resulting filling.
Lemma 4.4. Let µ = (µ 1 , µ 2 , µ 3 ) be any three-row shape of size n. Then the map ψ = ψ n defined above is a morphism of weighted sets when restricted to fillings having distinct entries. That is, in the case of distinct entries there is a unique choice of n, and ψ is a morphism of weighted sets.
We defer the proof to Section 6. In that section, we also show: Lemma 4.5. The map ψ of Lemma 4.4 is an isomorphism.
We can now complete the three-row case by defining its standardization map for fillings with repeated entries. This definition is designed to force the majcode sequences to be A-weakly increasing.
Definition 4.25. Given a filling σ of µ, define Standardize(σ) as follows. First, for any letter i that occurs with multiplicity in σ, label the i's with subscripts in reading order to distinguish them. Whenever one of them is to be bumped up or down one row, choose the one in the row in question whose move preserves their reading order.
Let n be the largest entry that occurs in σ. For each occurrence n t , compute d t = maj(σ) − maj(ψ nt (σ)), and let d = min t d t . Let n r be the last n in reading order for which d r = d. Form the filling ψ nr (σ), and repeat the process on the new filling. Once there are no n's left to remove, similarly remove the n − 1's, and so on until the empty tableau is reached. Now, consider the order in which we removed the entries of σ and change the corresponding entries to N, N − 1, . . . , 1 in that order, where N = |µ|. The resulting tableau is Standardize(σ).
We can now define majcode for three-row shapes.
See Section 6 for the proof.
Corollary 4.5. For any three-row shape µ and content α, the composition invcode −1 ◦ majcode gives the desired symmetry bijection for three-row shapes.
Example 4.10. We demonstrate all of the above maps on the filling σ below, with its repeated entries labeled with subscripts in reading order to distinguish them. We will standardize and compute majcode simultaneously. To decide which of the 8's to remove first, we look at which would give the smallest first majcode entry. This is clearly the 8 3 in the bottom row, so we remove it and bump down the 2. To decide which of the remaining 8's to remove next, note that both would decrease maj by 2, and so we remove the one that comes last in reading order, namely 8 2 . Since 1 < 2, we bump down the 1. We can now use the two-row algorithm to complete the process, and we find majcode(σ) = 0220100000. The corresponding inversion diagram for the reverse alphabet {1, 1, 1, 3, 4, 5, 6, 7, 7, 8} is shown below. Note that inv(ρ) = maj(σ) = 5, and inv(σ) = maj(ρ) = 0.
Remark 4.5. The map above essentially uses the fact that a three-row shape is the union of a rectangle and a two-row shape. For any shape that is the union of a rectangle and a two-row shape, a similar map can be used to remove the first n, but the resulting shape is then no longer the union of a rectangle and a two-row shape. However, we believe that this method may generalize to all shapes, as follows.
Conjecture 4.1. One can extend the map ψ for three-row shapes to all shapes inductively, as follows. One would first extend it to shapes which are the union of a three-row shape and a single column, then use this to extend it to shapes which are the union of a three-row shape and a rectangle shape, using Theorem 4.4. One could then iterate this new map on any four-row shape, so majcode can then be defined on four-row shapes, and so on.

Application to Cocharge
Proposition 4.5 reveals an interesting property of the cocharge statistic on words, first defined by Lascoux and Schützenberger. To define it, we first recall the definition of Knuth equivalence.
Definition 5.1. Given a word w = w 1 · · · w n of positive integers, a Knuth move consists of either: • A transposition of the form xyz → xzy where x, y, z are consecutive letters and y < x ≤ z or z < x ≤ y • A transposition of the form xyz → yxz where x, y, z are consecutive letters and x ≤ z < y or y ≤ z < x.
Two words w, w ′ are said to be Knuth equivalent, written w ∼ w ′ , if one can be reached from the other via a sequence of Knuth moves. Knuth equivalence is an equivalence relation on words.
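The two transposition rules can be enumerated mechanically. The sketch below is a direct transcription of the conditions in Definition 5.1; the function name is ours:

```python
def knuth_neighbors(w):
    """All words obtained from the list w by one Knuth move on three
    consecutive letters x, y, z:
      xyz -> xzy  when y < x <= z or z < x <= y   (swap the last two),
      xyz -> yxz  when x <= z < y or y <= z < x   (swap the first two)."""
    out = []
    for k in range(len(w) - 2):
        x, y, z = w[k], w[k + 1], w[k + 2]
        if y < x <= z or z < x <= y:
            out.append(w[:k] + [x, z, y] + w[k + 3:])
        if x <= z < y or y <= z < x:
            out.append(w[:k] + [y, x, z] + w[k + 3:])
    return out
```

For example, [2, 1, 3] has the single Knuth neighbor [2, 3, 1] (here x = 2, y = 1, z = 3 and y < x ≤ z), and the moves are involutive: [2, 1, 3] is in turn a neighbor of [2, 3, 1].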
Cocharge was originally defined as follows.
Definition 5.2. Given a word w = w 1 · · · w n with partition content µ, the cocharge of w, denoted cc(w), is the unique statistic satisfying the following properties: 1. It is constant on Knuth equivalence classes; that is, if w is Knuth equivalent to w ′ then cc(w) = cc(w ′ ).

2. If w = w 1 w 2 · · · w n and w 1 ≠ 1, let cyc(w) = w 2 w 3 · · · w n w 1 be the word formed by moving the first letter to the end. Then cc(cyc(w)) = cc(w) − 1. 3. If the letters of w are weakly increasing then cc(w) = 0.
There is also an algorithmic way of computing cocharge.
Definition 5.3. Let w be a word with partition content µ, so that it has µ 1 1's, µ 2 2's, and so on. Let w (1) be the subword formed by scanning w from right to left until finding the first 1, then continuing to scan until finding a 2, and so on, wrapping around cyclically if need be. Let w (2) be the subword formed by removing w (1) from w and performing the same process on the remaining word, and in general define w (i) similarly for i = 1, . . . , µ 1 . It turns out that cc(w) = Σ i cc(w (i) ) (see, e.g., [7]), and one can compute the cocharge of a word w (i) having distinct entries 1, . . . , k by the following process.
1. Set a counter to be 0, and label the 1 in the word with this counter, i.e. give it a subscript of 0.
2. If the 2 in the word is to the left of the 1, increment the counter by 1, and otherwise do not change the counter. Label the 2 with the new value of the counter. 3. Continue this process on each successive integer up to k, incrementing the counter if it is to the left of the previous letter. 4. When all entries are labeled, the sum of the subscripts is the cocharge.
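The algorithm of Definition 5.3 can be implemented directly. The sketch below is our own code (assuming the input has partition content): it extracts each subword w (i) by the right-to-left cyclic scan and sums the counter labels of each subword.

```python
def cocharge(w):
    """Cocharge of a word with partition content (Definition 5.3):
    repeatedly extract a subword by scanning right to left, wrapping
    cyclically, for a 1, then a 2, and so on; then sum the counter
    labels of each extracted subword."""
    w, n = list(w), len(w)
    used = [False] * n
    total = 0
    while not all(used):
        pos, letter, positions = n, 1, []
        while True:
            found = None
            for step in range(1, n + 1):      # right-to-left, cyclic
                j = (pos - step) % n
                if not used[j] and w[j] == letter:
                    found = j
                    break
            if found is None:
                break
            used[found] = True
            positions.append(found)
            pos, letter = found, letter + 1
        counter = 0                           # label the subword
        for k in range(1, len(positions)):
            if positions[k] < positions[k - 1]:
                counter += 1                  # letter sits to the left
            total += counter
    return total
```

A weakly increasing word gets cocharge 0, consistent with Definition 5.2, and the word 15221432313 of Example 5.1 below gets cocharge 12.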
The link between the major index of inversion free fillings and the cocharge of words lies in the cocharge word construction.
As mentioned in Section 4.2, for any filling σ ∈ F| inv=0 we have maj(σ) = cc(cw(σ)). (See [7] for the proof.) Therefore, we can translate some of our results regarding such fillings to properties of words and their cocharge. We first require the following fact. Lemma 5.1. Let σ ∈ F| inv=0 have cocharge word w = cw(σ). Then for each i, the letters of the subword w (i) correspond to the entries of the ith column of σ. Proof. If w = cw(σ) and σ has alphabet A = {a 1 ≥ . . . ≥ a n }, the letters a i for which the corresponding letter w i equals r are the entries in row r. The smallest, that is, rightmost, a i , say a i0 , for which w i = 1 is the leftmost entry of the bottom row, i.e. the bottom entry of the first column. The second entry of the first column is then the first a i in cyclic order after a i0 for which w i = 2. This corresponds to the 2 in the subword w (1) , and similarly the letters in w (1) correspond to the entries in the first column.
A similar argument shows that the second column corresponds to w (2) , and so on.
In particular, Proposition 4.5 states that if the largest entry of a filling σ ∈ F (1 n ) µ | inv=0 is in the bottom row, then we can remove it, bump down any entries in its (rightmost) column, and rearrange the rows to get a filling with no inversions. By Lemma 5.1, this translates to the following result in terms of words.
Theorem 5.1. Let w = w 1 · · · w n be a word with partition content µ for which w 1 = 1. Let w (1) , . . . , w (µ1) be its decomposition into subwords as in Definition 5.3. Then w 1 ∈ w (µ1) , and if w ′ is the word formed by removing w 1 from w and also decreasing each letter that is in w (µ1) by one, then cc(w) = cc(w ′ ).
This theorem fills a gap in our understanding of cocharge, as it gives a recursive way of dealing with words that start with 1. These are the only words that do not satisfy the relation cc(cyc(w)) = cc(w) − 1 of Definition 5.2.
Example 5.1. Consider the word 15221432313. It has three 1's, three 2's, and three 3's, but only one 4 and one 5, so to find the word w (µ1) = w (3) we can ignore the 4 and the 5. The words w (1) , w (2) , and w (3) , ignoring the 4 and 5, are the subwords listed below, and the word w ′ is formed by removing the leading 1 and decreasing by one the 2 and the 3 that lie in w (3) . Thus w ′ = 5121432213.
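The reduction of Theorem 5.1 can be sketched in code; the decomposition step repeats the right-to-left cyclic scan of Definition 5.3, and the function names are ours. On the word of Example 5.1 the sketch reproduces the stated w ′.

```python
def subword_positions(w):
    """Decompose w (partition content assumed) into the subwords
    w^(1), w^(2), ... of Definition 5.3, returning their position
    lists in extraction order."""
    n, used, subs = len(w), [False] * len(w), []
    while not all(used):
        pos, letter, positions = n, 1, []
        while True:
            found = None
            for step in range(1, n + 1):      # right-to-left, cyclic
                j = (pos - step) % n
                if not used[j] and w[j] == letter:
                    found = j
                    break
            if found is None:
                break
            used[found] = True
            positions.append(found)
            pos, letter = found, letter + 1
        subs.append(positions)
    return subs

def reduce_word(w):
    """Theorem 5.1: if w_1 = 1, remove it and decrease by one every
    letter lying in the last subword w^(mu_1)."""
    assert w[0] == 1
    last = set(subword_positions(w)[-1])      # positions of w^(mu_1)
    return [w[j] - 1 if j in last else w[j] for j in range(1, len(w))]
```

Here reduce_word([1, 5, 2, 2, 1, 4, 3, 2, 3, 1, 3]) returns [5, 1, 2, 1, 4, 3, 2, 2, 1, 3], i.e. the word 5121432213 of Example 5.1.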

Technical proofs
This section contains the proofs of all the results above whose proofs are particularly long or technical. Throughout, we fix an alphabet A = {a 1 , . . . , a n } with a 1 ≤ a 2 ≤ · · · ≤ a n .

Proof of Proposition 4.1: The Recursion
Proof. The sets forming the union on the right hand side are disjoint, because the ith set consists only of monomials having x n i−1 as their power of x n . We now show inclusion both ways. (⊆) Let x c ∈ C A (µ), where c = c 1 , . . . , c n is a µ-sub-Yamanouchi word which is A-weakly increasing. Let i = c 1 + 1, so that c 1 = i − 1, and let c ′ = c 2 , . . . , c n . Notice that if a 1 = a 2 then c 2 ≥ i − 1, and c ′ is (A \ {a 1 })-weakly increasing. Thus, to show x c ∈ x n i−1 C (i−1) A (µ (i) ), we just need to show that c ′ is µ (i) -sub-Yamanouchi.
Since c is µ-sub-Yamanouchi, there exists a Yamanouchi word d having µ i entries equal to i − 1 for each i, for which x c | x d . Let t be the highest index such that µ t+1 = µ i . Then µ (i) = (µ 1 , µ 2 , . . . , µ t − 1, . . . , µ k ). So, we wish to show that we can form a new µ-Yamanouchi word b from d so that we still have x c | x b but b 1 = t. This way c ′ will be µ (i) -sub-Yamanouchi, with respect to b ′ = b 2 , . . . , b n .
We have µ t+2 < µ t+1 by our assumption defining t, so there are strictly more t's than t + 1's in d. Notice that this means we can move the leftmost t in d any number of spots to the left without changing the fact that the word is Yamanouchi.
Also notice that d 1 ≥ c 1 = i − 1. But since there are exactly as many i − 1's as i's, i + 1's, and so on up to t in d, we must in fact have d 1 ≥ t, for otherwise the suffix d 2 , . . . , d n would not satisfy the Yamanouchi property. Now, let d r be the leftmost t in d. We form a subword of d as follows. Let d 1 be the first letter of our subword. Then let d p1 be the leftmost letter between d 1 and d r with t ≤ d p1 ≤ d 1 , if it exists. Then let d p2 be the first letter between d p1 and d r for which t ≤ d p2 ≤ d p1 , and so on until we reach a point at which no such letter exists. We now have a subsequence of letters d 1 , d p1 , d p2 , . . . , d pk , d r = t, where d r is the leftmost t in d. We define b to be the word formed from d by cyclically shifting this subsequence one step to the right: each d pi is replaced with d pi−1 for i > 1, d p1 is replaced with d 1 , d 1 is replaced with d r = t, and d r is replaced with d pk .
For instance, if µ = (4, 3, 3, 2, 2) and i − 1 = 1, then t = 2, and we might have d as displayed, with the chosen subword in boldface. Cyclically shifting the boldface letters to the right in their positions forms b = 240432130021100, which is still µ-Yamanouchi and still dominates c in the sense that x c | x b .
To verify that in general x c | x b , notice that c 1 = i − 1 ≤ t = b 1 , and since the other letters in the subword decrease to the right, we have b i ≥ d i for all i > 1. Thus b i ≥ c i for all i, and so x c | x b . To show that b is still Yamanouchi, notice that to form b from d, we have moved the leftmost t all the way to the left (which, we noted above, preserves the Yamanouchi property) and moved each d pj to the right without crossing over any element having value d pj − 1 (for otherwise our subsequence would have an extra element, a contradiction). Thus we have not changed the property of there being at least as many d pj − 1's as d pj 's in each suffix, and we have not changed the property that there are at least as many d pj 's as d pj + 1's in each suffix, because we moved these elements to the right. The other Yamanouchi conditions remain unchanged, since we are only moving the letters d pj . Thus b is Yamanouchi as well.
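The suffix condition invoked in this argument is easy to check mechanically. The following is a small sketch of our own, under the convention that a Yamanouchi word is one in which every suffix contains at least as many copies of i − 1 as of i, for every i ≥ 1:

```python
from collections import Counter

def is_yamanouchi(word):
    """Scan the word from right to left and verify that, at every point,
    the letter i-1 has been seen at least as often as the letter i,
    for each letter i >= 1."""
    counts = Counter()
    for x in reversed(word):
        counts[x] += 1
        if x > 0 and counts[x] > counts[x - 1]:
            return False
    return True
```

For instance, the word b = 240432130021100 constructed above passes this check, while the word 01 fails (its suffix beginning at the 1 contains no 0).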
(⊇) For the other inclusion, let c = c 1 , . . . , c n be a word with c 1 = i − 1 whose truncation c ′ = c 2 , . . . , c n lies in C (i−1) A\{a 1 } (µ (i) ), witnessed by a Yamanouchi word d ′ = d 2 , . . . , d n of shape µ (i) . Then d = t, d 2 , . . . , d n is Yamanouchi of shape µ by the definition of µ (i) , and since c 1 = i − 1, we have c 1 ≤ t = d 1 . Thus x c | x d . Finally, note that if a 1 = a 2 in A, then c 2 ≥ i − 1 by the definition of C (i) . Thus c is A-weakly increasing. It follows that x c ∈ C A (µ).

Proof of Theorem 4.1: The Isomorphism invcode

We break the proof into several lemmas for clarity.

Lemma 6.1. The map invcode is a well-defined morphism from ID µ,A to C µ,A for all µ and A.
Proof. Let w : A → Z + be a diagram in ID µ,A , and let c = invcode(w) . We first show that c is µ-sub-Yamanouchi. Let i > 0 and consider the subset of dots labeled i in the inversion labeling of w, say w(r 1 ), . . . , w(r t ) from left to right. We claim that w(r t−j ) is the left element of at most j inversions for each j = 0, . . . , t − 1. Indeed, w(r t−j ) is to the left of exactly j dots labeled i; those dots in a lower row form the Type I inversions with w(r t−j ). For Type II, the dots labeled i + 1 in a higher row must have an i to the right of them, so correspond to one of the dots labeled i in a higher row and to the right of w(r t−j ). Thus w(r t−j ) is the left element of at most j inversions, and so c rt−j ≤ j.
It follows that c r1 , . . . , c rt is an ordinary Carlitz code. Therefore, c can be decomposed into several Carlitz codes, one for each label, of lengths µ * 1 , µ * 2 , . . .. Let d i be the resulting upper bound on c i for each i. Then d is a union of the sequences µ * i − 1, . . . , 2, 1, 0, one for each i, arranged so that each of these sequences retains its order. Thus d is a Yamanouchi code, since every entry d i can be matched with a unique entry having value d i − 1 to its right, namely the next entry in the corresponding subsequence. Note also that d is Yamanouchi of shape µ, since there are µ 1 zeroes, µ 2 ones, and so on in d. Since c is bounded above component-wise by d, we have that c is µ-sub-Yamanouchi.
We now show that c is A-weakly increasing. It suffices to show that for any two consecutive dots w(t), w(t + 1) of w that are in inversion-friendly order, we have c t ≤ c t+1 . Suppose the dot w(t) is labeled i in the inversion labeling, and w(t + 1) is labeled j. Then by assumption, since they are in inversion-friendly order, we have either i = j with the j in a higher row than i, or j < i. The i is the left element of c t inversions and the j is the left element of c t+1 inversions.
First suppose i = j and the j is in a higher row than the i, that is, w(t + 1) > w(t). If b is an index to the right of the i such that (w(t), w(b)) is an inversion, then there are three possibilities: First, w(b) could be labeled i and be below w(t), in which case (w(t + 1), w(b)) is also an inversion. Second, w(b) could be labeled i + 1 and be above w(t) but below w(t + 1), in which case there is a dot labeled i in row w(b) to the right of b, forming an inversion with w(t + 1). And third, w(b) could be labeled i + 1 and be above row w(t + 1), in which case (w(t + 1), w(b)) is also an inversion. Thus there is at least one inversion with w(t) as its left element for every inversion with w(t + 1) as the left element, and so c t ≤ c t+1 in this case.
Similarly, if j < i, then any dot labeled i or i + 1 has a dot labeled j and a dot labeled j + 1 to its right, and so c t ≤ c t+1 in this case as well.
It follows that invcode is a well-defined map.

Lemma 6.2. The map invcode is injective.
Proof. We will show that given a code c, we can form an inversion-friendly diagram by placing dots above c 1 , c 2 , . . . , c n from left to right. We claim that there is a unique height that is compatible with c at each step. With the empty word as a trivial base case, we proceed inductively. Suppose we have already placed the first t − 1 dots from the left. There may be several possible dot heights available for the tth dot, depending on the shape µ and which dot heights have already been chosen. We claim that each possible height would result in a different value of the code number c t . To show this, let h 1 < h 2 be two possible heights of the tth dot. Since the first t − 1 dots have been chosen and we know the shape of the diagram, the labels i and j of a dot at height h 1 or h 2 respectively are uniquely determined. We also note that the inversion code number c t is uniquely determined by the choice of the tth dot (given the first t − 1 dots), since any row of length µ r ≥ i that did not have a dot labeled i among the first t values must necessarily have one afterwards, and so the set of label values in each row to the right of the tth entry is determined.
So, let r be the inversion code number c t that would result from the dot at height h 1 labeled i, and s the code number for h 2 labeled j. We wish to show that s ≠ r, and we consider the cases j ≤ i and j > i separately.
If j ≤ i, let k be the number of dots labeled i that would be below and to the right of w(t) if w(t) = h 1 (labeled i). Then r − k would be the number of i + 1's above and to the right of it. Each of the k rows having the i's also have j's weakly to the right of them because j ≤ i, and each of the r − k rows with the i + 1's have both a j + 1 and a j to the right. Thus if w(t) = h 2 (labeled j) instead, the j would have at least r inversions, and so s ≥ r. But if w(t) = h 2 , then this j also forms an inversion with the j in row h 1 , giving an extra inversion. Thus s > r, and so s ≠ r in this case.
If j > i, consider the s dots labeled j or j + 1 that would form an inversion with w(t) if w(t) = h 2 . Then each of these rows would also contain an i or i + 1 that would form an inversion with the i at height h 1 , in addition to the row h 2 itself, showing that r > s. Thus s ≠ r, as desired.
We have that |C µ,A | = |C A (µ)| by our definition of C. Furthermore, when A = {1, 2, . . . , n} we have | ID µ,A | = n µ because we are simply counting the number of unrestricted diagrams having µ 1 dots in the first row, µ 2 in the second row, and so on. We can now conclude bijectivity in this case. We are now ready to prove Theorem 4.1.

Proof. We have already shown (Corollary 4.2) that invcode is a bijective map ID µ,[n] → C µ,[n] . Notice that for any other alphabet A = {a 1 , . . . , a n }, we have ID µ,A ⊂ ID µ,[n] and C µ,A ⊂ C µ,[n] . We also know that the map invcode : ID µ,[n] → C µ,[n] restricts to an injective map invcode : ID µ,A → C µ,A by Lemmas 6.1 and 6.2. It remains to show that it is surjective onto C µ,A .
So, let c ∈ C µ,A . Then c is A-weakly increasing on constant letters of A. Let d = invcode −1 (c) ∈ ID µ,[n] . We wish to show that d is of inversion word type with respect to A, so that d ∈ ID µ,A , that is, if r < s and a r = a s in A then (d(a r ), d(a s )) is not an inversion. Suppose (d(a r ), d(a s )) is an inversion. Then either d(a r ) and d(a s ) are both dots labeled i with d(a s ) < d(a r ), or d(a r ) is labeled i and d(a s ) labeled i + 1 with d(a s ) > d(a r ).
In the first case, if (d(a s ), d(a t )) is another inversion involving a s , then either d(a t ) is lower than d(a s ) (and hence lower than d(a r )) and labeled i, or it is above it and labeled i + 1. If the former then (d(a r ), d(a t )) is an inversion, and if the latter, either there is an i in the same row forming an inversion with d(a r ), or the i + 1 is above d(a r ), forming an inversion with it. Thus d(a r ) is the left element of at least as many inversions as d(a s ), plus one for the inversion (d(a r ), d(a s )). Thus c r > c s .
In the second case, if (d(a s ), d(a t )) is another inversion, then d(a t ) is either lower (but possibly above d(a r )) and labeled i + 1, or higher and labeled i + 2. In the former case either d(a t ) itself forms an inversion with d(a r ) or the i in its row does. In the latter case the i + 1 in its row forms an inversion with d(a r ). Since (d(a r ), d(a s )) is an inversion as well, we again have c r > c s . But this contradicts the fact that c is A-weakly increasing.
Hence invcode is surjective, and thus bijective, from ID µ,A to C µ,A . Clearly the map preserves the statistics: the sum of all the entries of the inversion code of a diagram is the total number of inversions of the diagram, so invcode sends inv to Σ. Therefore, invcode : ID µ,A → C µ,A is an isomorphism of weighted sets.
We first prove the following technical lemma. Define a consecutive block of n's in a filling to be a maximal consecutive run of entries in a column which are all filled with the letter n.
Proof. We first show (1) that there is a unique choice of entry labeled n at each step which minimizes d and is at the bottom of a consecutive block, and then (2) that the resulting sequence d i is weakly increasing. For any entry x, we define ψ x (σ) to be the column formed by removing the entry x from σ.
To prove (1), consider the bottommost entries of each consecutive block of n's. We wish to show that no two of these n's have the same value of d = maj(σ) − maj(ψ n (σ)) upon removal. So, suppose there is an n in the ith square from the top and an n in the jth square from the top, each at the bottom of their blocks, and call them n i and n j to distinguish them. Assume for contradiction that removing either of the n's results in a decrease by d of the major index.
Suppose an entry n has an entry a above it and b below. In ψ n (σ), a and b are adjacent, and they can either form a descent or not. If they do, then d = maj(σ) − maj(ψ n (σ)) is equal to the number of descents below and including that n, and if they do not, then d is equal to the sum of the number of descents strictly below the n plus the position of the n from the top. We consider several cases based on the two possibilities for each of n i and n j .
If either n i or n j is at the very bottom of the filling, then removing that entry results in d = 0, and the other does not, so we may assume neither of n i or n j is in the bottom row.
Case 1: Each of n i and n j forms a new descent upon removal, in ψ ni (σ) and ψ nj (σ). Assume without loss of generality that i < j, and let t be the number of descents weakly below position j (meaning its position from the top is greater than or equal to j) and s the number of descents weakly below position i. Then since the n i is at the bottom of its block, it is a descent, so s > t. Since s and t are the values of d for the removal of the two n's, we have a contradiction.
Case 2: Neither n i nor n j , upon removal, forms a new descent. In this case, assume without loss of generality that i < j and let t be the number of descents strictly below position j. Let r be the number of descents strictly between positions i and j. Since the n's are at the bottom of their blocks, the two n's are descents as well, so the values of d upon removing the n's are i + r + t + 1 and j + t. By our assumption, these are equal, and so we have i + r + t + 1 = j + t, that is, r = j − i − 1. But j − i − 1 counts the number of squares strictly between positions i and j. Since r is the number of squares in this set which are descents, this means that every square between i and j must be a descent. But the square in position j has the highest possible label n, so the square just before it (above it) cannot be a descent. Hence we have a contradiction.
Case 3: One of the two n's, say the one in position i, forms a new descent upon removal, and the other does not. Then in this case defining t as the number of descents strictly below position j and s the number of descents weakly below position i, the two values of d are j + t and s. So j + t = s by our assumption, and so j = s − t, which implies s − t > 0, or s > t. Thus, necessarily i < j. Now, s − t is the number of descents between positions i and j, inclusive. Since i ≥ 1 there are at most j such squares, and the one preceding j cannot be a descent since there is an n in the jth position. Thus this quantity s − t is strictly less than j, but we showed before that j = s − t, a contradiction. This completes the proof of claim (1).
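Claim (1) can be sanity-checked numerically. The following Python sketch is our own illustration, not part of the proof; it assumes the conventions used above: a column is listed bottom to top, an entry is a descent if it exceeds the entry directly below it, and a descent at 0-based index i of a column of height h contributes h − i (that is, leg plus one) to the major index. The helper names are ours.

```python
# Numeric check of claim (1): among the bottom entries of the consecutive
# blocks of n's in a column, the drops d = maj(col) - maj(col minus that n)
# are pairwise distinct.  Conventions assumed (not restated in this excerpt):
# columns listed bottom to top; col[i] is a descent if col[i] > col[i-1];
# a descent at index i contributes len(col) - i (leg + 1) to maj.

def maj(col):
    return sum(len(col) - i for i in range(1, len(col)) if col[i] > col[i - 1])

def block_bottoms(col, n):
    """Indices of the bottom entry of each maximal consecutive block of n's."""
    return [i for i, x in enumerate(col)
            if x == n and (i == 0 or col[i - 1] != n)]

def removal_drops(col):
    """Drop in maj caused by removing each bottom-of-block largest entry."""
    n = max(col)
    return [maj(col) - maj(col[:i] + col[i + 1:]) for i in block_bottoms(col, n)]

col = [2, 5, 5, 1, 5, 3]              # bottom to top; n = 5 sits in two blocks
drops = removal_drops(col)            # -> [1, 2]
assert len(set(drops)) == len(drops)  # the drops are pairwise distinct
```

For this column the two bottom-of-block 5's give drops 1 and 2, so the minimizing choice is unique, as claim (1) asserts.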
For claim (2), consider any two consecutive d values in this process, say d 1 and d 2 for simplicity, that correspond to the largest value n. Let n 1 and n 2 be the corresponding copies of n. We wish to show that d 1 ≤ d 2 .
First, notice that if n 1 and n 2 were in the same consecutive block before removal, we have d 1 = d 2 unless n 2 forms a block of length 1 in ψ n1 (σ), in which case d 2 ≥ d 1 .
So we may assume that n 1 and n 2 were in different consecutive blocks before removal. In this case the removal of n 1 may only change the value of d on removing n 2 by at most one, namely by either shifting it back by one position if n 1 is above n 2 in the column, or by removing one descent from below n 2 , if n 1 is below n 2 . Thus d 2 = maj(ψ n1 (σ)) − maj(ψ n2 (ψ n1 (σ))) is at most one less than maj(σ) − maj(ψ n2 (σ)). Since n 1 was chosen so as to minimize d 1 , and we showed in our proof of (1) that the choice is unique, this implies that d 2 + 1 > d 1 . Thus d 2 ≥ d 1 , as desired.
This completes the proof of (2). Proposition 4.3 now follows from the proof of the above lemma.

Proof of Main Lemma: Proposition 4.5
Proposition 4.5 (Main Lemma). Suppose σ : µ → Z + is a filling for which inv(σ) = 0 and the largest entry n appears in the bottom row. Let σ ↓ : µ (1) → Z + be the filling obtained by:
1. Removing the rightmost n from the bottom row of σ, which must be in the rightmost column since inv(σ) = 0,
2. Shifting each of the remaining entries in the rightmost column down one row,
3. Rearranging the entries in each row in the unique way so that inv(σ ↓ ) = 0.
Then the major index does not change: maj(σ ↓ ) = maj(σ).
To prove Proposition 4.5, we require a new definition and several technical lemmata. We write (i, j) to denote the square in row i and column j of a Young diagram.
Definition 6.1. The cocharge contribution cc (i,j) (σ) of an entry σ(i, j) of a filling σ is the number of descents that occur weakly below the entry (i, j) in its column j.
Proposition 6.1. The cocharge contributions of the entries of a filling add up to its major index.
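The fact that the cocharge contributions sum to the major index can be checked column by column. The following Python sketch is illustrative only (the function names are ours); it assumes the conventions that columns are listed bottom to top, that an entry is a descent if it exceeds the entry directly below it, and that a descent at 0-based index i contributes len(col) − i (leg plus one) to the major index.

```python
# Column-by-column check that cocharge contributions add up to maj.
# Conventions assumed (not restated in this excerpt): columns listed bottom
# to top; col[i] is a descent if col[i] > col[i-1]; a descent at index i
# contributes len(col) - i (leg + 1) to maj; the cocharge contribution of
# the entry at index i counts the descents weakly below it in the column.

def maj(col):
    return sum(len(col) - i for i in range(1, len(col)) if col[i] > col[i - 1])

def cocharge_contributions(col):
    descents = [i for i in range(1, len(col)) if col[i] > col[i - 1]]
    return [sum(1 for d in descents if d <= i) for i in range(len(col))]

for col in ([2, 5, 5, 1, 5, 3], [1], [3, 1, 4, 1, 5, 9, 2, 6]):
    assert sum(cocharge_contributions(col)) == maj(col)
```

The identity holds because a descent at index i is counted once by each of the len(col) − i entries weakly above it, which is exactly its contribution to maj.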
We omit the proof, and refer the reader to the example in Figure 4.
Definition 6.2. Let w be any sequence consisting of k 0's and k 1's, and let a 1 , a 2 , . . . , a k be any ordering of the 0's. We define the crossing number of w with respect to this ordering as follows. Starting with a 1 , let b 1 be the first 1 to the right of a 1 in the sequence, possibly wrapping around cyclically if there are no 1's to the right of a 1 . Then let b 2 be the first 1 cyclically to the right of a 2 other than b 1 , and so on. Then the crossing number is the number of indices i for which b i is to the left of a i .
Example 6.1. If we order the 0's from left to right, the word 10110010 has crossing number 2.
Lemma 6.3. Let w be any sequence consisting of k 0's and k 1's. Then its crossing number is independent of the choice of ordering of the 0's.

Proof.
Say that a word is 0-dominated if every prefix has at least as many 0's as 1's. First, we note that there exists a cyclic shift of w which is 0-dominated. Indeed, consider the partial sums of the (−1) wi 's in the sequence, so that any 0 contributes +1 and any 1 contributes −1. The total sum is 0, and if we shift to start just after the index of the minimal partial sum, the partial sums of the shifted word are all nonnegative. Now, we show by induction that any 0-dominated sequence has crossing number 0. It is clearly true for k = 1, since the only 0-dominated sequence is 01 in this case.
Suppose the claim holds for any 0-dominated sequence of k − 1 0's and k − 1 1's, and let s be a 0-dominated sequence with k 0's. Choose an arbitrary 0 to be a 1 , and denote it 0′. Then since s is 0-dominated, the last term in s is a 1, and so 0′ will be paired with a 1, denoted 1′, to the right of it. Remove both 0′ and 1′ from s to form a sequence s′ having k − 1 0's and k − 1 1's.
We claim that s′ is 0-dominated. Note that all prefixes of s′ that end to the left of 0′ are unchanged, and hence still have at least as many 0's as 1's. Any prefix P′ of s′ that ends between 0′ and 1′ is the result of removing 0′ from a corresponding prefix P of s, which had at least as many 0's as 1's. If there were an equal number of 0's and 1's in P , then its last term would be a 1, for otherwise deleting this last 0 would leave a prefix with more 1's than 0's. This 1 lies strictly between 0′ and 1′, which means that 1′ was not the first 1 to the right of 0′, a contradiction. So P has strictly more 0's than 1's, and so P′ = P \ {0′} has at least as many 0's as 1's. Finally, any prefix which ends to the right of 1′ has one less 0 and one less 1 than the corresponding prefix of s, and so it also has at least as many 0's as 1's. It follows that s′ is 0-dominated.
By the inductive hypothesis, no matter how we order the remaining 0's, there are no crossing pairs. Since the choice of a 1 was arbitrary, the crossing number is 0 for any ordering of the 0's.
Returning to the main proof, let w = w 1 w 2 · · · w 2k and let i be such that the cyclic shift w′ = w i w i+1 · · · w 2k w 1 w 2 · · · w i−1 is 0-dominated. Then every pairing in w′ has the 0 to the left of the 1, and so the crossing number of w is the number of pairings in which the 0 is among w i · · · w 2k and the 1 is among w 1 · · · w i−1 . Hence, the crossing number is equal to the difference between the number of 1's and 0's among w 1 w 2 · · · w i−1 . This is independent of the choice of order of the 0's, and the proof is complete.
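Lemma 6.3 and Example 6.1 are easy to verify by brute force. The following Python sketch is our own illustration (the function names are ours): it pairs each 0 with the first unused 1 cyclically to its right and counts the pairs that wrap around.

```python
from itertools import permutations

# Crossing number of a 0/1 word for a given ordering of its 0's: pair each
# 0 with the first unused 1 cyclically to its right, and count the pairs
# whose 1 ends up to the left of its 0 (i.e. the pairs that wrap around).

def crossing_number(w, order):
    used, crossings, n = set(), 0, len(w)
    for a in order:                  # a = position of the next 0 to pair
        for step in range(1, n + 1):
            b = (a + step) % n       # scan cyclically to the right of a
            if w[b] == '1' and b not in used:
                used.add(b)
                if b < a:            # the 1 wrapped around: a crossing
                    crossings += 1
                break
    return crossings

w = "10110010"
zeros = [i for i, ch in enumerate(w) if ch == '0']
values = {crossing_number(w, order) for order in permutations(zeros)}
assert values == {2}   # Example 6.1: crossing number 2, for every ordering
```

Running the check over all 4! orderings of the 0's of 10110010 always yields 2, as Example 6.1 and Lemma 6.3 predict.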
In the rest of the paper, if a row r is above a row s in a filling, we say that we rearrange r with respect to s if we place the entries of r in the unique ordering for which there are no inversions in row r, given that s is below it.
Lemma 6.4. Let σ be a filling of the two-row shape (k, k) with inv(σ) = 0. Let σ π be formed by rearranging the bottom row via the permutation π, and rearranging the top row with respect to the new bottom row. Then maj(σ) = maj(σ π ).
Proof. Let w be the cocharge word of the diagram. No matter how the rows are rearranged, the cocharge word remains unchanged: it is a sequence of k 1's and k 2's. But the permutation of the bottom row determines a permutation of the 1's, and the subsequent ordering of the top row is determined by the process of selecting the first remaining 2 cyclically to the right of the 1 at each step. It forms a descent if and only if that 2 is to the left of the 1, i.e. if it contributes to the crossing number. So the number of descents is equal to the crossing number of the cocharge word (thinking of the 1's as 0's and the 2's as 1's), and by Lemma 6.3 the proof is complete.
We now have the tools to prove the next technical lemma.
Lemma 6.5. Let a 1 , . . . , a w−1 be any positive integers, and suppose b 1 , . . . , b w are positive integers such that in the partial tableau
Let b w , t 1 , . . . , t 2w−2 be the ordering of these letters that is in cyclic order, with ties broken in such a way that b w , a i , b i occur in that order in the sequence for each i. Then if we replace the a i 's with 0's and the b i 's with 1's, the suffix t 1 , . . . , t 2w−2 has crossing number 0, since each a i is paired with b i to its right.
It follows from Lemma 6.3 that, if we rearrange the a i 's, the crossing number is still 0 and so b w still corresponds to the 1 at the beginning of the sequence. It follows that b w is still in the last position in the new filling. Finally, by considering only the first w − 1 columns, we can apply Lemma 6.4 to see that the total number of descents among b 1 , . . . , b w−1 remains unchanged.
We require one more technical lemma regarding two-row fillings. First, notice that in a two-row shape with the bottom row ordered least to greatest and no inversions in the second row, the descents must be "left-justified": they must occur in columns 1, . . . , k for some k. For, if b r > a r is a descent and b r−1 ≤ a r−1 is not, then b r > a r−1 by transitivity, and we have b r−1 ≤ a r−1 < b r , forming an inversion. Moreover, after the descents the b i 's are weakly increasing: b i ≤ b j for k < i < j; this follows directly from the fact that none of these b i 's are descents. The descents b 1 , . . . , b k are also weakly increasing; otherwise we would have an inversion.
We will use these facts repeatedly throughout.
Lemma 6.6. Let a 1 ≤ · · · ≤ a w−1 and let b 1 , b 2 , . . . , b w be numbers such that the partial tableau has no inversions in the second row. Then if we bump b w down one row so that a 1 ≤ a 2 ≤ · · · ≤ a t ≤ b w < a t+1 ≤ · · · ≤ a w−1 is the bottom row, and leave b 1 , . . . , b w−1 unchanged, then the new tableau still has no inversions, and the descents in the second row remain the same (and left-justified).
Proof. Let k be the number of descents among the b's. If k = 0, there are no descents, and we must have b w ≤ a 1 so as not to have inversions. In this case, b w drops down into the first position in the bottom row, and there are still no descents and no inversions since b 1 ≤ b 2 ≤ . . . ≤ b w−1 in this case. If k ≥ 1, then b k > a k is the last descent. Since b k and b w do not form an inversion in the original tableau, we must either have a k < b k ≤ b w or b w ≤ a k < b k . We consider these cases separately.
Case 1: Suppose a k < b k ≤ b w . Then t > k, i.e. b w drops to a position to the right of the last descent, after which point we have b i ≤ a i for all such i. Thus, for instance, b t+1 < a t+1 , and since b w and b t+1 did not originally form a descent, we must have b t+1 ≤ b w ≤ a t+1 . This means that b t+1 ≤ b w , so b t+1 still does not form a descent in the new tableau. Then, similarly we have b t+2 ≤ b w , and so b t+2 ≤ a t+1 , and so on. Thus the descents have stayed the same in the new tableau.
Furthermore, since b i < b w for all i ≥ t + 1 in this case, we have b i < b w < a i for all i ≥ t + 1, and since the b i 's after position k are weakly increasing, none of these form inversions. Since b 1 , . . . , b t are above the same letters a 1 , . . . , a t as before and are in the same positions relative to the other b i 's, they cannot be the left elements of inversions either.
Case 2: Suppose now that b w ≤ a k < b k . If b w = a k then in fact it drops to the right of a k and it is the same as the previous case. So we can assume that b w < a k < b k .
Then t ≤ k, i.e. b w drops to a position underneath a descent of the original tableau shape. Since b w ≤ a t+1 and a t+1 < b t+1 is a descent, we have b w < b t+1 and so b t+1 is still a descent in the new tableau. Similarly b i is still a descent for all i ≤ k. To check that b k+1 is still not a descent, assume it is: that a k < b k+1 . Then b w ≤ a k < b k+1 , and so b w ≤ a k+1 ≤ b k+1 since the original filling had no inversions. If a k+1 < b k+1 , we get a contradiction, so a k+1 = b k+1 . But then b w = a k+1 , contradicting the fact that b w < a k+1 . Thus there is not an inversion in the (k + 1)st position. Hence the descents stay the same in this case as well.
Furthermore, consider b i and b j with i < j < w: if i is among 1, . . . , t then b i and b j do not form an inversion since b i is still above a i . If i and j are both among t + 1, . . . , k, then they do not form an inversion, since b i and b j are both descents and b i < b j . If i is among t + 1, . . . , k and j > k, note that b j < b w since it is in the run of non-descents of the b's. Hence b j < a i by transitivity, and so b j < a i < b i since b i is a descent. This implies that b i and b j do not form an inversion. Finally, if i > k and j > i, we are once again in the run of non-descents at the end, which is weakly increasing, and hence there are no inversions since none are descents. We conclude that the b i 's have no inversions among them in this case either.
Lemma 6.7. Let a 1 , . . . , a w−1 , b 1 , . . . , b w , and c w be numbers such that the partial filling has no inversions in the second row. Then there exists an ordering t 1 , . . . , t w of a 1 , . . . , a w−1 , b w such that if s 1 , . . . , s w is the unique ordering of b 1 , . . . , b w−1 , c w for which the partial filling with top row s 1 s 2 · · · s w and bottom row t 1 t 2 · · · t w has no inversions in the second row, then the entry c w is directly above b w in the new filling.
Proof. Let T be the two-row filling consisting of the s's and t's as in the statement of the lemma. Let x be the cocharge word of T , with the bottom row indexed by 0 and the top by 1. Then x consists of 0's and 1's, and as in Lemma 6.3, the number of descents in T is the crossing number of this word. So b w is one of the 0's in this word, and c w is one of the 1's, and we wish to show that there is some ordering of the 0's in which b w is paired with c w .
Assume to the contrary that b w cannot be paired with c w no matter how we order the 0's. Choose a cyclic shift x′ of x whose crossing number is 0, as we did in Lemma 6.3. If b w is to the left of c w in x′, then since it cannot be paired with c w , there must be an index k between that of b w and c w at which the prefix of the first k letters has exactly as many 1's as 0's. For, if there were more 0's than 1's at every step up to c w , then we could pair off the other 0's starting from the left until c w is the first 1 to the right of b w . This means we can choose a different cyclic ordering, starting at the (k + 1)st letter, for which the crossing number is also 0. In this cyclic shift, c w is to the left of b w . So we have reduced to the case that c w is to the left of b w .
In this case, c w is one of the 1's, and b w is one of the 0's, e.g. in the 0-dominated sequence 001011, we might have c w be the third entry and b w the fourth. Before we dropped down the b w and c w , we had a tableau whose cocharge word looked like this word except with the 0 of b w replaced by a 1, and the 1 of c w replaced by a 2 (in the example, this would give us the word 002111.) Remove the 2 from this word. In the resulting word of 0's and 1's, since we have bumped up a 0 to a 1 but removed one of the 1's before it, every prefix is 0-dominated except the entire word, which has one more 1 than it has 0's. Thus the very last 1 is the only entry which is not paired. But b w is, by assumption, the entry which is unpaired in the original ordering. This is a contradiction, since b w was a 0 in the bumped-down word and hence could not have been in the last position.
It follows that there must exist an ordering of the 0's in which b w is paired with c w . This completes the proof.
In the next two lemmas, we let σ : µ → Z + be a filling with inv(σ) = 0 whose largest entry appears in the bottom row, and let σ ↓ : µ (1) → Z + be constructed from σ as in the statement of Proposition 4.5.
Lemma 6.8. Suppose inv(σ) = 0. Let i ≥ 1 be an index such that µ i+1 = µ 1 , i.e. the (i + 1)st row of µ is as long as the bottom row. Then we have
cc (i+1,µ 1 ) (σ) + cc (i,1) (σ) + cc (i,2) (σ) + · · · + cc (i,µ 1 −1) (σ) = cc (i,1) (σ ↓ ) + cc (i,2) (σ ↓ ) + · · · + cc (i,µ 1 ) (σ ↓ ).
Proof. We induct on i. For the base case, i = 1, the left hand side is the total cocharge contribution of the entries (1, 1), (1, 2), . . . , (1, µ 1 − 1) and the entry (2, µ 1 ). The square (1, µ 1 ) is filled with the largest number n, by our assumption that n appears in the bottom row and the fact that inv(σ) = 0. Thus the entry in (2, µ 1 ) cannot be a descent, and so the cocharge contributions of all of these entries are 0. Thus the left hand side is 0. The right hand side is also 0, since it is the sum of the cocharge contributions from the bottom row of σ ↓ .
For the induction, let i > 1 and suppose the claim is true for i − 1; say both sides of the equation for i − 1 are equal to s. Then if there are k descents among the entries (i + 1, µ 1 ) and (i, 1), . . . , (i, µ 1 − 1) of σ, their total cocharge contribution is equal to s + k, since they are the entries strictly above those that contribute to the left hand side of the equation above.
So, it suffices to show that the total cocharge contribution of the ith row of σ ↓ is also s + k. By the induction hypothesis it is equivalent to show that there are k descents among the entries in the ith row of σ ↓ . Now, let w = µ 1 be the width of the tableau, and let a 1 , . . . , a w−1 be the first w − 1 entries in row i − 1 of σ. Let b 1 , . . . , b w be the elements of row i, and let c w be the entry in square (i + 1, w), above b w :
c w
b 1 b 2 · · · b w−1 b w
a 1 a 2 · · · a w−1
Consider the 2 × w tableau T with bottom row elements a 1 , . . . , a w−1 , b w and top row elements b 1 , . . . , b w−1 , c w . By Lemma 6.7, there is a way of rearranging the bottom row of T such that if we rearrange the top row correspondingly, then c w lies above b w . This suffices, for now the remaining columns will form a tableau with no inversions in the second row, with a 1 , . . . , a w−1 and b 1 , . . . , b w−1 as the entries of the rows. By Lemma 6.4 this has the same number of descents independent of the ordering of the a i 's, and c w will be a descent or not depending on whether it was a descent before. Thus there are still k descents in the ith row.
Lemma 6.8 shows that the cocharge contribution is conserved for rows i for which µ i+1 = µ 1 . The next lemma will show that the cocharge contribution is unchanged for higher rows as well. Again, here σ is a filling having its largest entry n occurring in the bottom row.
Lemma 6.9. Suppose inv(σ) = 0, and the rightmost (wth) column of µ has height µ * w = h. Then in σ ↓ , row h consists of the first w − 1 letters of row h of σ in the same order, and their cocharge contributions are the same as they were in σ.
It follows from this lemma that all higher rows are unchanged as well, and combining this with Lemma 6.8, it will follow that maj(σ) = maj(σ ↓ ).
Proof. We induct on h, the height of the rightmost column. For h = 1 and h = 2, we are done by previous lemmata (see Lemma 6.6). So, suppose h ≥ 3 and the claim holds for all smaller h.
Performing the operation of Proposition 4.5, suppose we have bumped down all but the topmost entry (in row h) of the rightmost column and rearranged each row with respect to the previous. Let rows h − 2, h − 1, and h have contents:
d 1 d 2 · · · d w−1 d w
c 1 c 2 · · · c w−1
x 1 x 2 · · · x w−1
Notice that, by the induction hypothesis, the entries c 1 , . . . , c w−1 are the same as they were in σ before bumping down c w and have the same cocharge contributions as they did before. Thus the row of d's as shown is currently the same as row h of σ. So, we wish to show that upon bumping d w down and rearranging all rows so that the filling has no inversions, the entries in row h are still d 1 , d 2 , . . . , d w−1 in that order, and that these entries have the same cocharge contributions as they did before.
We first show that the entries d 1 , . . . , d w−1 do not change their positions upon bumping d w down to row h − 1 (and rearranging so that there are still no inversions). We proceed by strong induction on the width w. For the base case, w = 2, we have that d 1 is the only entry left in the top row, and therefore cannot change its position. Now, assume that the claim is true for all widths less than w. If d w bumps down and inserts in position t, above x t , then the numbers c 1 , . . . , c t−1 are still above x 1 , . . . , x t−1 respectively since they are still first in cyclic order after each. Likewise the entries d 1 , . . . , d t−1 remain the same in this case. Thus we may delete the first t − 1 columns and reduce to a smaller case, in which the claim holds by the induction hypothesis. This allows us to assume that when d w bumps down, it is in the first column, above x 1 , and so the tableau looks like:
d ∗ d ∗ · · · d ∗
d w c ∗ c ∗ · · · c ∗
x 1 x 2 · · · x w−1
where the ∗'s are an appropriate permutation of the indices for d 1 , . . . , d w−1 and c 1 , . . . , c w−1 .
We now show that d 1 , . . . , d r remain in their respective positions for all r ≥ 1, by induction on r. (So, we are doing a triple induction on the height, the width of the tableau, and the index of the d's). For the base case, we wish to show that d 1 is the entry above d w in the new tableau. We have, from the fact that inv(σ) = inv(σ ↓ ) = 0, that the following triples are in cyclic order for any k such that 2 < k < w: are in cyclic order as well, and so on. At each step, to add c k to the list we only need consider rows up to that of x k−1 . Hence, the process continues up to k = r.
Finally, notice that since we are only concerned with relative cyclic order of the entries to determine their positions, we may cyclically increase all the entries modulo the highest entry in such a way that d w ≤ c 1 ≤ c 2 ≤ · · · ≤ c r in actual size. Furthermore, since we are currently only concerned with the position of d r , which is determined by its relative ordering with d i for i > r and with c r−1 , we may assume that c r ≤ c r+1 ≤ c r+2 ≤ . . . ≤ c w−1 are increasing as well; it will make no difference as to the value of d ∗ . But then the top two rows behave exactly as in the two-row case of Lemma 6.6. We know that d r occurs in the rth column from this lemma, and the induction is complete.
We have shown that d 1 , . . . , d w−1 retain their ordering, and it remains to show that they retain their cocharge contributions. If any c k lies above x k , and hence d k above it, the column has not changed and so d k does indeed retain its cocharge contribution. So, as before, we may remove such columns and reduce to the case in which the entries are as above. For the first column, we have that (x 1 , d w , c 1 ) are in cyclic order since d w and c 1 do not form an inversion. Moreover, either x 1 ≠ d w or x 1 = d w = c 1 , in which case we may assume that d w is in fact located in the second column instead, and reduce to a smaller case. So we may assume x 1 ≠ d w . In addition, (c 1 , d 1 , d w ) are in cyclic order, with c 1 ≠ d 1 unless c 1 = d 1 = d w , and if d 1 = d w then we must have d 1 = d 2 = · · · = d w so that d 1 does not form an inversion with any element in the new tableau. We now consider three cases based on the actual ordering of x 1 , d w , c 1 (which are in cyclic order).
Case 1: Suppose x 1 < d w ≤ c 1 . Then since (c 1 , d 1 , d w ) are in cyclic order, either d 1 is greater than both c 1 and d w or less than or equal to both. Since both c 1 and d w are descents when over x 1 , the cocharge contribution of d 1 is unchanged in this case.
Case 2: Suppose d w ≤ c 1 ≤ x 1 . Then in this case neither c 1 nor d w is a descent when in the first column, and the same analysis as in Case 1 shows that d 1 has the same cocharge contribution in either case.
Case 3: Finally, suppose c 1 ≤ x 1 < d w . If d 1 is strictly greater than c 1 , it forms a descent with c 1 and not with d w . But note that d w is a descent when in the first column, and c 1 is not, so the total number of descents weakly beneath d 1 balances out and is equal in either case. If d 1 = c 1 , then d 1 = d 2 = · · · = d w , which is impossible since then c 1 = d w . So the cocharge contribution of d 1 is the same in this case as well.
This completes the proof that d 1 retains the same cocharge contribution. We now show the same holds for an arbitrary column i.
In the ith column, we have d i above c i−1 above x i . Note that (c i , d i , d w ) and (d w , c i−1 , c i ) are in cyclic order (the latter by the above argument, which showed that d w , c 1 , c 2 , . . . , c w−1 are in cyclic order given that the c i 's are arranged as above), so (c i , d i , d w , c i−1 ) are in cyclic order. We also have that (x i , c i−1 , c i ) are in cyclic order, and by a similar argument as above we may assume x i ≠ c i−1 , so that x i , c i−1 , c i fall into one of the three relative orderings considered above. The exact same casework as in Cases 1-3 then shows that d i retains its cocharge contribution.
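Since the cyclic-order condition drives all of the casework in this section, it may help to see it made explicit. The following sketch (the function names are my own, not the paper's) encodes the convention that a triple is in cyclic order when some rotation of it is weakly increasing, and verifies by brute force that the three cases considered above are exhaustive and mutually exclusive:

```python
def in_cyclic_order(a, b, c):
    """(a, b, c) are in cyclic order when some rotation of the triple
    is weakly increasing."""
    return a <= b <= c or b <= c <= a or c <= a <= b

def which_case(x, d, c):
    """The trichotomy used in Cases 1-3: given that (x, d, c) are in
    cyclic order and x != d, exactly one of three orderings holds."""
    assert in_cyclic_order(x, d, c) and x != d
    if x < d <= c:
        return 1    # Case 1: x < d <= c
    if d <= c <= x:
        return 2    # Case 2: d <= c <= x
    return 3        # Case 3: c <= x < d

# Brute-force check that the three cases are exhaustive and exclusive.
for x in range(1, 6):
    for d in range(1, 6):
        for c in range(1, 6):
            if x != d and in_cyclic_order(x, d, c):
                k = which_case(x, d, c)
                cases = [x < d <= c, d <= c <= x, c <= x < d]
                assert cases.count(True) == 1 and cases[k - 1]
```

The check confirms that once x ≠ d is assumed, the three cases above cover every cyclically ordered triple exactly once.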
Proposition 4.5 now follows immediately from Lemmas 6.8 and 6.9 and Proposition 6.1.

Proof of Theorem 6.1: Reducing Rectangles
We first recall the statement of the theorem.
We first need to prove Lemma 4.2, which is a sort of inverse to Lemma 6.5. Proof. As usual, let us think of the a i 's as 0's and the b i 's as 1's in a cocharge word, arranged according to the magnitudes of the a i 's and b i 's. Then we have a sequence of w 0's and w − 1 1's, and we wish to show that there is a unique 0 that, when we change it to a 1, is not paired with any 0 when computing the crossing number. By Lemma 6.5, there is a unique such 1 in any word of w − 1 0's and w 1's.
So, by Lemma 6.5, it suffices to find a 0 in the original tableau such that upon removal, the remaining sequence starting with the entry to its right is 0-dominated. For instance, in the sequence 001110100, which has 5 zeros and 4 ones, if we remove the second-to-last zero and cyclically shift the letters so that the new sequence starts with the 0 to its right, we get the sequence 00011101, which is 0-dominated.
To show that there is a unique such 0, consider the up-down walk starting at height 0 in which we move up one step for each 0 in the sequence and down one step for each 1. Then we end at height 1, since there is one more 0 than 1 in the sequence. For instance, the sequence 001110100 corresponds to the up-down walk with successive heights 1, 2, 1, 0, −1, 0, −1, 0, 1. Consider the last visit to the minimum height of this walk, where we count the starting point as a visit to height 0 (so if the walk never goes below 0, we take its last visit to height 0, which may be the start itself). Since the walk ends at height 1 and never returns to the minimum after this visit, it is followed by at least one up-step (a 0). We claim the first such up-step corresponds to our desired entry. Indeed, if we remove this 0, the walk starting at the next step and cycling around the end of the word never dips below its starting height, corresponding to a 0-dominated sequence.
It is easy to see that if we do the same with any of the other 0-steps, the resulting walk will not be positive and so the corresponding sequence will not be 0-dominated. This completes the proof.
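The walk argument above is readily made algorithmic. The sketch below (my own function names, not the paper's) locates the last visit to the minimum height of the walk, counting the starting point as a visit to height 0, takes the first up-step after it, and checks against a brute-force search that this 0 is the unique one whose removal leaves a 0-dominated sequence:

```python
from itertools import permutations

def is_zero_dominated(word):
    """Every prefix has at least as many 0's as 1's."""
    h = 0
    for ch in word:
        h += 1 if ch == '0' else -1
        if h < 0:
            return False
    return True

def find_special_zero(word):
    """Given a 0/1 word with one more 0 than 1, return the index of the
    unique 0 whose removal leaves a 0-dominated sequence when the rest
    is read cyclically, starting with the letter to its right."""
    heights = [0]                       # the walk, starting at height 0
    for ch in word:
        heights.append(heights[-1] + (1 if ch == '0' else -1))
    last_min = max(t for t, h in enumerate(heights) if h == min(heights))
    # the first up-step (a 0) after the last visit to the minimum
    return word.index('0', last_min)

def good_zeros(word):
    """Brute force: all indices of 0's whose removal actually works."""
    return [i for i, ch in enumerate(word)
            if ch == '0' and is_zero_dominated(word[i + 1:] + word[:i])]

# The example from the text: the second-to-last zero is the special one.
w = '001110100'
assert find_special_zero(w) == 7
assert w[8:] + w[:7] == '00011101' and is_zero_dominated('00011101')

# Exhaustive uniqueness check on all words with three 0's and two 1's.
for v in {''.join(p) for p in permutations('00011')}:
    assert good_zeros(v) == [find_special_zero(v)]
```

Running this on the example 001110100 from the proof returns index 7 (0-indexed), the second-to-last zero, matching the text.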
We now have the tools to prove Theorem 6.1.
Proof of Theorem 6.1. It is clear that ψ is a morphism of weighted sets, preserving the statistics, so we only need to show that ψ is a bijection. To do so, we construct an inverse map φ = ψ −1 that takes a pair (ρ, d) and returns an appropriate filling σ : µ → Z + , where ρ : µ (d−1) → Z + is a filling with no inversions using the letters 1, . . . , n − 1, and d is a number with 0 ≤ d ≤ µ * 1 − 1. For simplicity let h = µ * 1 be the height of µ. Let (ρ, d) be such a pair. Consider the fillings σ 1 , σ 2 , . . . , σ h formed as follows. Let σ h be the tableau obtained by inserting the number n into the top row of ρ and rearranging the entries of the top row so that inv(σ h ) = 0. Let σ h−1 be the tableau formed from ρ by first moving the unique element of the (h−1)st row given by Lemma 4.2 to the top row, and then inserting n into the (h − 1)st row and rearranging all rows so that there are no inversions again. Then, let σ h−2 be formed from ρ by first moving the same element, call it a h−1 , up to the top row, then using Lemma 4.2 again to move an element a h−2 from row h − 2 to row h − 1, and finally inserting n in row h − 2 and rearranging the rows again so that there are no inversions. Continuing in this manner, we define each of σ 1 , . . . , σ h likewise, and it is easy to see that ψ(σ i ) = ρ for all i, by using Lemma 6.5 repeatedly. Now, we wish to show that the numbers d i = maj(σ i ) − maj(ρ) for i = 1, . . . , h form a permutation of 0, . . . , h − 1. Let a 1 , . . . , a h−1 be the elements of rows 1, . . . , h − 1 that were moved up by 1 in each of the steps as described above. By Proposition 4.5, the filling σ 1 , whose rightmost column has entries a h−1 , a h−2 , . . . , a 1 , n from top to bottom, has the same major index as ρ. So d 1 = 0, and maj(σ 1 ) = maj(ρ). We will now compare all other σ i 's to σ 1 rather than to ρ.
We claim that the difference in the major index from σ 1 to σ i is the same as the difference obtained when moving n up to row i (and shifting all lower entries down by one) in the one-column filling having reading word a h−1 , a h−2 , . . . , a 1 , n. Then, by Carlitz's original bijection, we will be done, since each possible height gives a distinct difference value d between 0 and h − 1.
To proceed, consider the total number of descents in each row. In σ i , the entry n is in the ith row. Let τ consist of the top h − i rows of this filling, arranged so that inv(τ ) = 0. Then the top h − i − 1 rows (rows 2 to h − i of τ ) are the same as in σ 1 , with the same descents. Thus if we rearrange every row with respect to the one beneath, including rows i − 1 and below to form σ i , each row also has the same number of descents as it does in σ 1 by Lemma 6.4.
We now show the same is true for row i + 1. In τ , we have a i above n, and the remaining entries in that row are above the same set of entries they were in σ 1 . So the number of descents in row i + 1 goes down by 1 from σ 1 to σ i if a i > a i−1 , and otherwise it remains the same.
For rows i and below, we use Lemma 6.7. For any row t from 2 to i, the entries of row t − 1 can be rearranged so that if row t is arranged on top of it with no inversions, the entry a t lies in the space above a t−1 (or n lies above a i−1 in the case t = i). The remaining entries in the top row of this two-row arrangement are then above the same set of entries they were in σ 1 , with no inversions between them, and by Lemma 6.4 they have the same number of descents among them. So, the descents have only changed by what the comparison of each a t with a t−1 (or of n with a i−1 ) contributes.
Therefore, the number of descents in a given row of σ i , relative to σ 1 , can either increase by 1, stay the same, or decrease by 1, according to whether it does in the one-column shape filled by a h−1 , . . . , a 1 , n when we move n up to height i. Now, for rectangular shapes, if p t is the total number of descents in row t, it is easy to see that the total cocharge contribution (major index) of the filling is the sum of the partial sums p 1 + (p 1 + p 2 ) + (p 1 + p 2 + p 3 ) + · · · + (p 1 + · · · + p h ).
Since the values of p t in σ i differ by 0 or ±1 from the corresponding values of σ 1 , it follows that the difference d i is the sum of the partial sums of these differences. But this is the same as the difference in the one-column case we are comparing to. This completes the proof.
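The one-column comparison can be checked directly. The sketch below is my own code; it assumes the convention, consistent with the partial-sum formula above with rows numbered from the bottom, that a descent cell in row t of a column of height h contributes h − t + 1 (its leg plus one). Moving the largest entry n to each of the h possible rows then produces major-index differences forming a permutation of 0, . . . , h − 1, as in Carlitz's bijection:

```python
from itertools import permutations

def column_maj(col):
    """Major index of a one-column filling, listed from bottom (row 1)
    to top (row h): a descent cell in row t (an entry larger than the
    entry directly below it) contributes h - t + 1, its leg plus one."""
    h = len(col)
    return sum(h - t + 1 for t in range(2, h + 1) if col[t - 1] > col[t - 2])

def partial_sum_maj(col):
    """The same statistic via the partial-sum formula
    p_1 + (p_1 + p_2) + ... + (p_1 + ... + p_h)."""
    h = len(col)
    p = [0] + [int(col[t - 1] > col[t - 2]) for t in range(2, h + 1)]
    return sum(sum(p[:t]) for t in range(1, h + 1))

def insert_at_row(rest, n, i):
    """Place n in row i of the column (rest lists a_1, ..., a_{h-1}
    from bottom to top) and shift the lower entries down by one."""
    return rest[:i - 1] + [n] + rest[i - 1:]

h = 4
for rest in permutations(range(1, h)):       # the entries a_1, ..., a_{h-1}
    rest = list(rest)
    base = column_maj(insert_at_row(rest, h, 1))   # n in the bottom row
    diffs = sorted(column_maj(insert_at_row(rest, h, i)) - base
                   for i in range(1, h + 1))
    assert diffs == list(range(h))           # a permutation of 0, ..., h-1
    for i in range(1, h + 1):
        col = insert_at_row(rest, h, i)
        assert column_maj(col) == partial_sum_maj(col)
```

The inner assertion also confirms that the leg-plus-one convention agrees with the partial-sum formula for p 1 , . . . , p h stated above.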
Note that Proposition 4.6 and Theorem 4.4 also follow immediately from the above proof. Finally, we prove Proposition 4.7 of Section 4.3.3.
Proposition 4.7. Let µ be a rectangular shape of height h, and let σ ∈ F (1 n ) µ with its largest entry n in row i. If a 1 , . . . , a h−1 is the bumping sequence of σ, then a i+2 , . . . , a h−1 all occur in columns weakly to the right of the n, and each a j is weakly to the right of a j−1 for j ≥ i + 3.

Proof.
Let c 1 , . . . , c r , n, c r+1 , . . . , c m−1 be the entries in row i from left to right. Consider the reordering of row i given by c 1 , . . . , c m−1 , n and order row i + 1 with respect to this ordering. Let the numbers in the new ordering in row i + 1 be b 1 , . . . , b m−1 , a i . Then a i is the same as the value of a i from Theorem 4.4 by Lemma 6.5; that is, a i would lie above n if we ordered c 1 , . . . , c m−1 by size as well. Now, since c 1 , . . . , c r are the first r entries in both orderings of row i, it follows that b 1 , . . . , b r must be the first r entries in both corresponding orderings of row i + 1. Thus a i , not being equal to any of b 1 , . . . , b r , must be weakly to the right of the column that n is in.
The same argument can be used to show that a i+1 is weakly to the right of a i as well, and so on. This completes the proof.

Proofs for Three-Row Shapes
We first prove Lemma 4.4, restating the definition of ψ as part of the statement.
Lemma 4.4. Let µ = (µ 1 , µ 2 , µ 3 ) be any three-row shape of size n. Then there is a morphism of weighted sets defined combinatorially by the following process. Given an element σ of F (1 n ) µ | inv=0 , consider its largest entry n. Let σ ′ be the 3 × µ 3 rectangle contained in σ.
the major index is the same as if we simply replaced n by b in σ ′ . Consider any arrangement of the second row of σ in which n is at the end, and arrange the top row relative to this ordering. Then a 2 is at the end of this top row by its definition, and so replacing n by b will make a 2 a descent and thereby increase the total cocharge contribution by 1. By Lemma 6.4 this is the same as the increase in the cocharge contribution from σ to ψ(σ). Hence ψ(σ) lies in the d = 1 component of the disjoint union.
If b ≥ a 2 , we claim that if a 1 is the entry in the bottom row of the bumping sequence, then b < a 1 . If a 1 is to the right of the column that n is in then the claim clearly holds. Otherwise, let a 1 , d 1 , d 2 , . . . , d i be the consecutive entries in the bottom row starting from a 1 and ending at the entry d i beneath the n, and let c 1 , . . . , c i be the entries in the second row from the entry above a 1 to the entry just before the n. The c j 's are all descents, and the c j 's and d j 's are both increasing sequences. Since there are no inversions in the second row, we have b < d i . Since removing the n and bumping up a 1 results in a 1 landing at the end of the second row by definition, upon doing this the d j 's all slide to the left one space, and the c j 's must remain in position and remain descents by Proposition 4.5. In particular, this means that d i < c i , and so b < c i as well. But then since there are no inversions it follows that b < d i−1 , which is less than c i−1 , and so on. Continuing, we find that b < a 1 as claimed.
Since b ≥ a 2 by assumption, it follows that a 2 < a 1 and so removing the n and bumping down a 2 in the rectangle results in a difference in major index of 2 by Theorem 4.4. Note also that if we perform this bumping in the entire filling σ, the entry a 2 ends up to the left of column µ 3 + 1 since a 2 ≤ b and hence it is to the left of b in the second row. Thus the entries to the right of the rectangle are preserved, and maj(ψ(σ)) = maj(σ) − 2. It follows that ψ(σ) lies in the d = 2 component of the disjoint union.
Case 2. Suppose n is in the top row. If a 2 > a 1 , then removing n results in the major index decreasing by d = 2, and so ψ(σ) is in the d = 2 component of the disjoint union. Otherwise, a 2 ≤ a 1 . Since µ 2 = µ 3 , we remove the n and bump a 2 up to the top row.
Since a 2 ≤ a 1 , by Theorem 4.4 we find that simply removing the n results in a decrease by 1 in the major index. Since the top row has had a descent removed (by the proof of Theorem 6.1), it follows that the empty space created in the top row was not above a descent, for otherwise the major index would decrease by 2. Thus in particular b is not a descent.
It follows that if σ̄ is the filling formed by bumping up a 2 and inserting n in the second row, then the n, being the last descent in the second row, will appear among the first µ 3 columns of σ̄. In addition, since a 2 ≤ a 1 this results in an increase in major index by 1 from σ to σ̄, by Proposition 4.6.
We now wish to show that b ≥ a 2 ; if so, we claim that removing n from the filling just constructed (by bumping up a 2 and inserting n in the second row) results in a decrease by 2 in the major index, and also results in the tableau ψ(σ), thereby showing that maj(ψ(σ)) = maj(σ) − 1 and so ψ(σ) is in the d = 1 component. To see that the major index decreases by 2 on removing n, note that by Proposition 4.4, the effect of removing the n is the same as replacing n by b in the one-column shape with entries a 2 , n, a 1 . If b ≥ a 2 then we have that b < a 1 by the same argument as in Case 1 above, and so the major index decreases by 2. Thus it suffices to show b ≥ a 2 .
If a 2 is not a descent of σ, this is clear, so suppose a 2 is a descent of σ in the second row. Let c be the entry directly below a 2 , and assume for contradiction that b < a 2 . Then b < c, and furthermore the first non-descent in row 2, say e, is less than c. Note that by our above argument we know that e lies within the rectangle σ ′ . Now, we restrict our attention to σ ′ and bump a 2 and a 1 up one row each, and consider the ordering of the bottom row in which we place c in the column one to the left of the column that e was contained in and shift the remaining entries to the left to fill the row. Rearranging the new second row with respect to the first, we consider the position of a 1 relative to c. If a 1 is to the left of the c we have a contradiction, since a 1 must land in column µ 3 by Lemma 6.5 and the definition of the bumping sequence. Therefore the entries in the second row to the left of c are unchanged. Since a 1 ≥ a 2 ≥ c, and all remaining entries in the second row are either a 1 or are less than c, we have that a 1 must be on top of the c in the second row. This is again a contradiction, since this implies that a 1 does not land in column µ 3 . It follows that b ≥ a 2 , as desired.
This completes the proof that ψ is a well-defined morphism of weighted sets. Proof. We know from the lemma above that ψ is a morphism; it suffices to show that it is bijective. First notice that the cardinality of F (1 n ) µ | inv=0 is the multinomial coefficient ( n µ ), and the same count holds for the codomain. Thus the cardinalities of the two sets are equal, and so it suffices to show that ψ is surjective. To do so, choose an element ζ of the codomain. Then ζ can lie in any one of the three components of the disjoint union over d = 0, 1, 2 of F (1 n−1 ) µ (d+1) | inv=0 , and we consider these three cases separately. Case 2: Now, suppose ζ lies in the d = 1 component. If µ 2 = µ 3 then µ (1) = (µ 1 , µ 2 , µ 3 − 1) and so we can find a filling σ of µ that maps to ζ by Proposition 6.1. Otherwise, the shape of ζ is (µ 1 , µ 2 − 1, µ 3 ) and we wish to find a filling σ of shape µ for which ψ(σ) = ζ. Let ρ be the filling of µ formed by inserting n into the second row and rearranging entries so that there are no inversions. Notice that if the n lies to the right of column µ 3 then ψ(ρ) = ζ and we are done.
So, suppose n lies in the 3 × µ 3 rectangle in ρ. Let a 1 and a 2 be the bumping sequence of this rectangle. Since n is the rightmost descent in the second row of ρ, inserting it did not change the cocharge contribution of the portion to the right of column µ 3 ; there were no descents there in σ and there are none in ρ. Let b be the entry in column µ 3 + 1, row 2 of ρ. If b < a 2 , then ψ(ρ) = ζ and we are done.
Otherwise, if b ≥ a 2 , then by the argument in Lemma 4.4 we know that maj(ρ) − maj(ζ) = 2. We have that τ := ψ(ρ) is the filling formed by removing the n and bumping a 2 down to the second row, and that maj(ρ) − maj(τ ) = 2. Hence maj(τ ) = maj(ζ). Since b ≥ a 2 , a 2 lies to the left of b in τ and hence is weakly to the left of column µ 3 . So, let σ be the tableau formed by inserting n in the top row of τ . Now σ has shape µ, and can be formed directly from ρ by shifting the position of n among a 1 and a 2 as in Theorem 4.4.
Case 3: Suppose ζ is in the d = 2 component. If µ 2 = µ 3 then we simply insert n into ζ in either row 2 or 3 according to Theorem 4.4 to obtain a tableau σ with ψ(σ) = ζ.
Otherwise, if µ 2 ≠ µ 3 , ζ has shape (µ 1 , µ 2 , µ 3 − 1). Let ρ be the tableau of shape µ formed by inserting n in the top row of ζ. Let a 1 and a 2 be the entries in rows 1 and 2 corresponding to this n in the 3 × µ 3 rectangle contained in ρ. Then if a 2 > a 1 , ψ(ρ) = ζ and we are done.
If instead a 2 ≤ a 1 , then removing n from ρ decreases its major index by 1. Since the number of descents in the top row goes down by exactly 1 by Lemma 4.3, we can conclude that the entry in row 2, column µ 3 is a non-descent; otherwise removing n from ρ would decrease the major index by 2. So, let σ be the filling formed by removing n from ρ, bumping a 2 to the top row, and inserting n in the second row. Since there are non-descents in the rectangle we have that n lies in the rectangle in σ as well.
Finally, again by the argument used for Lemma 4.4 we have that a 2 ≤ b where b is the entry in row 2, column µ 3 + 1 in σ. Thus ψ(σ) = ζ as desired.
We can now complete the three-row case by using the standardization map Standardize defined in Section 4.3.4 for fillings with repeated entries. We first state a structure lemma about three-row shapes with no inversions. Lemma 6.10. If the consecutive entries b 1 , . . . , b n in some row of a filling with no inversions are directly above a weakly increasing block of squares c 1 ≤ · · · ≤ c n in the row below, then there exists a k for which b 1 , . . . , b k are descents and b k+1 , . . . , b n are not descents. Moreover, b 1 ≤ · · · ≤ b k and b k+1 ≤ · · · ≤ b n are both increasing blocks of squares.
Proof. This is clear by the definition of inversions.
In particular, the second row has one (possibly empty) block of descents and one (possibly empty) block of non-descents. The third row has up to two blocks of descents, one for each of the blocks in the second row, and so on.
We also need to show that the cardinalities of the sets are equal in the case of repeated entries.

Proof.
Given an alphabet A, the cocharge word of any filling using the letters in A has the property that it is weakly increasing on any run of a repeated letter, where we list the elements of A from largest to smallest. Furthermore, the cocharge word has content µ. It is not hard to see that a word is the cocharge word of a filling in F α µ | inv=0 if and only if it has content µ and is weakly increasing over repeated letters of A, listed from greatest to least.
Recall that the fillings in F r(α) µ * | maj=0 can be represented by their inversion word, and a word is an inversion word for such a filling if and only if it has content µ and every subsequence corresponding to a repeated letter of the reversed alphabet is in inversion-friendly order. By swapping the inversion-friendly order for weakly increasing order above each repeated letter, we have a bijection between inversion words and cocharge words, and hence a bijection (of sets, not of weighted sets) from F α µ | inv=0 to F r(α) µ * | maj=0 . By Theorem 4.1, we have that |F r(α) µ * | maj=0 | = |C µ,A |, and so the cardinality of F α µ | inv=0 is equal to |C µ,A | as well.
Proof. By Lemmas 4.4, 4.5, and 4.1, we have that for the content (1 n ) corresponding to the alphabet [n], the map majcode is a weighted set isomorphism onto C µ,[n] . Now, let A be any alphabet with content α. Let σ be a filling of µ with content α. Then we know majcode(σ) = majcode(Standardize(σ)), so majcode(σ) ∈ C µ,[n] . In other words, majcode(σ) is µ-sub-Yamanouchi. In addition, since Standardize is an injective map (there is clearly a unique way to un-standardize a standard filling to obtain a filling with a given alphabet), the map majcode, being a composition of Standardize and the majcode for standard fillings, is injective as well on fillings with content α.
We now wish to show that majcode(σ) = d 1 , . . . , d n is A-weakly increasing, implying that majcode is an injective morphism of weighted sets to C µ,A . By Lemma 6.11 this will imply that it is an isomorphism of weighted sets. It suffices to show this for the largest letter m of A by the definition of standardization. Suppose m occurs i times. We wish to show that d j ≤ d j+1 for all j ≤ i − 1. So choose j ≤ i − 1.
Suppose d j = 0. Then by the definition of Standardize, we have that the m we removed from ψ j−1 (σ) was in the bottom row. If there are still m's in the bottom row of ψ j (σ) then d j+1 = 0 as well. Otherwise d j+1 > 0, so d j ≤ d j+1 in this case.
Suppose d j = 1. Then the m we removed from ψ j−1 (σ) was in either the first or second row and there were no m's in the bottom row. By the definition of ψ, there are therefore no m's in the bottom row of ψ j (σ) either, and so d j+1 ≥ 1 = d j .
Finally, suppose d j = 2. Let m j be the m we remove from ψ j−1 (σ) to obtain d j = 2. As in the previous case we have d j+1 ≥ 1, and we wish to show d j+1 = 2, that is, to rule out d j+1 = 1. Let m j+1 be the corresponding m. Since d j is minimal for ψ j−1 (σ), there are no m's in ψ j−1 (σ) which we can treat as the largest entry and remove according to ψ to obtain d j = 1. Therefore if we removed m j+1 before m j we would also have a difference of 2 in the major index.
We consider three subcases separately for the locations of m j and m j+1 : they can either both be in the second row, m j can be in the second row with m j+1 in the third (top) row, or they can both be in the top row. No other possibilities exist because they must occur in reverse reading order, and cannot be in the bottom row since d j = 2.
Subcase 1: Suppose both m j and m j+1 are in the second row. Then m j+1 and m j are at the end of the block of descents in that order, and weakly to the left of column µ 3 . Let b be the entry in row 2, column µ 3 + 1. Let a 2 be the entry in the third row in the bumping sequence of m j , and let a 2 ′ be the entry in the bumping sequence of m j+1 in ψ j (σ). Since d j = 2, we have a 2 ≤ b and b < m, and so a 2 ≠ m. Therefore no new m's are dropped down. In other words, m j+1 is indeed the m that will be removed upon applying ψ the second time.
We now need to check that m j+1 remains to the left of column µ 3 after applying ψ. Indeed, by Proposition 4.4, we have that the number of descents in row 2 goes down by one, and the number of descents in the top row remains the same, upon applying ψ to ψ j−1 (σ). Since there are no m's in the bottom row, m j+1 is the rightmost descent in the second row of ψ j (σ), and the descent we lost was m j , so m j+1 remains in its column.
We now just need to show that a 2 ′ ≤ b ′ , where a 2 ′ denotes the entry in the bumping sequence of m j+1 and b ′ is the entry in row 2, column µ 3 after applying ψ. Either b ′ = b, b ′ = a 2 , or b ′ is the entry b 0 that is bumped out from the first µ 3 − 2 columns when we drop down a 2 .
Consider any ordering of the first µ 3 entries of the second row of ψ j−1 (σ) such that the two m's (m j+1 and m j ) are at the end in that order, and also place b 0 in the third-to-last position. Now, rearrange the entries above these so that there are no inversions. We know that a 2 is at the end of the top row, above m j , by its definition. Let a be the entry above m j+1 and let c be the entry to the left of that (if such a column exists). If b ′ = a 2 , then the first µ 3 − 2 entries of the second row are unchanged on applying ψ. In our new ordering above, this means a 2 ′ , the bumping-sequence entry of m j+1 , is equal to a, and since a and a 2 occur above the two m's in our new ordering, we have a ≤ a 2 . It follows that a 2 ′ ≤ b ′ .
If b ′ ≠ a 2 , then b ′ is either b or b 0 . To find the new bumping entry a 2 ′ in our new ordering, note that bumping down the a 2 can be thought of as replacing the b 0 with a 2 and rearranging the top row again so that there are no inversions. The first µ 3 − 3 entries remain in the same positions, and either c or a lies above m j+1 based on which comes later in cyclic order after a 2 . So either a 2 ′ = a or a 2 ′ = c.
We now have to show that whether a 2 ′ (the bumping-sequence entry of m j+1 ) is a or c, it is at most both b and b 0 . Notice that a 2 ≤ b 0 : since m j+1 stays in its place, either a 2 replaces a larger entry among the descents in the second row, which in turn bumps out a larger entry b 0 among the non-descents, or it replaces a non-descent itself and displaces a larger non-descent b 0 to its right. So if a 2 ′ = a, then since a ≤ a 2 we have a 2 ′ ≤ a 2 ≤ b 0 and also a 2 ′ ≤ a 2 ≤ b since a 2 ≤ b.
Finally, if a 2 ′ = c, then (a 2 , a, c) are in cyclic order. If c ≤ a 2 we are done by the above argument. Otherwise a 2 < a ≤ c or a 2 = a = c, in which case a 2 ′ = a and we are done by the previous case. So suppose a 2 < a ≤ c; but we already know a ≤ a 2 , so we have a contradiction. It follows that a 2 ′ ≤ b ′ as desired.
Subcase 2: Suppose m j and m j+1 are in the top row. Then by Lemma 6.10, and since there are no m's in the second row by the definition of Standardize, the m's are either in the first or second block of descents in the third row. If either of them is in the second block, it is clear that removing m j results in d j = 1, not 2, a contradiction. So they are both in the first block, themselves above descents in the second row, with m j+1 and m j adjacent and in that order. Now, removing m j will cause the block of non-descents to its right to slide to the left one space (since they are necessarily less than the entry beneath m j ). If the second block of non-descents in the third row is nonempty, one of these will replace the last entry above the descents in the second row, since all of these are still less than the entry below m j and the least among the entries to the right will replace it. In that case the number of descents to the right of m j is unchanged, and so d j = 1, a contradiction. Thus there are no non-descents in the second block, i.e. above the non-descents in row 2.
Because of this, removing m j simply causes all the entries to its right to slide to the left one space, and the first descent to its right becomes a non-descent. The same then happens when we remove m j+1 , by the same argument. It follows that d j+1 = 2 in this case.
Subcase 3: Suppose m j is in the second row and m j+1 in the top. Then m j+1 is to the left of m j , in the first block of descents in the third row, since otherwise we would have a difference of 1 on removing m j+1 . Moreover, as in the previous case, the top row has no non-descents above the non-descents in row 2.
So, let a 1 , . . . , a r be the entries in row 3 that lie weakly to the right of m j 's column. Then a 1 is not a descent and each of a 2 , . . . , a r are descents. Let m j , b 2 , . . . , b r be the entries below them. If we rearrange these in the second row in the increasing order b 2 , . . . , b r , m j , and then rearrange the a i 's above them as a σ(1) , . . . , a σ(r) so that there are no inversions, there are still r − 1 descents among the a σ(i) 's by Lemma 6.4. These descents must be a σ(1) , . . . , a σ(r−1) by Lemma 6.10, and the last entry a σ(r) above the m j is the entry in m j 's bumping sequence. Now, to form ψ j (σ), we remove m j and drop down a σ(r) . Notice that the entries in the top row to the left of where m j was are unchanged: consider the 3 × µ 3 rectangle and bump the m j down to the bottom row according to Theorem 6.1. Then bump it out according to Proposition 4.5, which leaves us with the same top row as that of ψ j (σ). The entire top row save for the last entry is unchanged upon applying Proposition 4.5, and so having the m j inserted into the second row instead can only change the entries to the right of it in the row above. Thus the entries to its left in the top row are unchanged, and have the same cocharge contribution as well.
Finally, in the columns weakly to the right of the column that m j was in, the entries in the top row are a σ(1) , . . . , a σ(r−1) in some order. We claim that the entries in the second row are formed by replacing at most one of b 2 , . . . , b r by a smaller entry, which is either a σ(r) or something bumped to the right by a σ(r) if a σ(r) lands in a column to the left of the b i 's. Indeed, the only way it would be a larger entry replacing them is if a descent replaced m j , but in this case we would have d j = 1 since the number of descents in the second row would be the same, and the number of descents in the top row would decrease by only 1.
Therefore, the entries a σ(1) , . . . , a σ(r−1) are all descents in the top row, and so removing m j+1 still results in a difference d j+1 = 2. In particular, the descents formed by m j+1 and one of the a i 's are removed, since the a's all slide one position to the left, and did not form new descents upon removing the m j+1 before the m j .
This completes the proof.

Acknowledgments
This work was partially funded by the Hertz Foundation and the NSF Graduate Research Fellowship Program. The author thanks Mark Haiman for his guidance and support.