Deformation of Cayley's hyperdeterminants

We introduce a deformation of Cayley's second hyperdeterminant for even-dimensional hypermatrices. As an application, we formulate a generalization of the Jacobi-Trudi formula for Macdonald functions of rectangular shapes, extending Matsumoto's formula for Jack functions.

In [7] properties of hyperdeterminants and discriminants are studied in general. In particular, it is easy to see that $\det^{[2m]}(A)$ is a relative invariant when $A$ is replaced by $B \bullet_k A$, where the action in, say, the first slot is defined by $(B \bullet_1 A)(i_1, \cdots, i_{2m}) = \sum_{j=1}^n B_{i_1, j} A(j, i_2, \cdots, i_{2m})$ for any $B \in \mathrm{GL}_n$. This means that the hyperdeterminant is a relative invariant under the action of $\mathrm{GL}_n^{\otimes 2m}$. In general, it is a challenging problem to come up with relative invariants under the (non-diagonal) action of $\mathrm{GL}_n \times \cdots \times \mathrm{GL}_n$.
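This relative invariance is easy to check directly in a small case. The following sketch (our own illustration; the defining formula (1) is not reproduced in this excerpt) implements the $n!$-normalized signed-sum form of Cayley's second hyperdeterminant for $m = 2$, i.e. $\det^{[4]}$, and verifies that acting by $B$ in one slot multiplies the value by $\det B$; the names `det4` and `act1` are ours.

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def sign(p):
    """Sign of a permutation given as a tuple of 0-based values."""
    s, seen = 1, set()
    for i in range(len(p)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:          # trace the cycle through i
            seen.add(j)
            j, length = p[j], length + 1
        if length % 2 == 0:           # an even-length cycle flips the sign
            s = -s
    return s

def det4(A):
    """Signed sum over quadruples of permutations, normalized by 1/n!:
    det^[4](A) = (1/n!) sum prod_r sgn(s_r) prod_i A[s1(i)][s2(i)][s3(i)][s4(i)]."""
    n = len(A)
    perms = list(permutations(range(n)))
    total = Fraction(0)
    for s1 in perms:
        for s2 in perms:
            for s3 in perms:
                for s4 in perms:
                    prod = Fraction(sign(s1) * sign(s2) * sign(s3) * sign(s4))
                    for i in range(n):
                        prod *= A[s1[i]][s2[i]][s3[i]][s4[i]]
                    total += prod
    return total / factorial(n)

def act1(B, A):
    """Action in the first slot: (B . A)(i,j,k,l) = sum_p B[i][p] * A[p][j][k][l]."""
    n = len(A)
    return [[[[sum(B[i][p] * A[p][j][k][l] for p in range(n))
               for l in range(n)] for k in range(n)] for j in range(n)] for i in range(n)]
```

For the "identity" hypermatrix $E(i,j,k,l) = \delta_{ijkl}$ one gets $\det^{[4]}(E) = 1$, and replacing $A$ by $B \bullet_1 A$ scales the value by $\det B$.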
A q-analog of the Hankel determinant has been studied in [8] (see also [1]), and a q-analog of the hyperdeterminant for matrices with non-commuting entries has been introduced in the context of quantum groups [9].
In this paper, we introduce a λ-hyperdeterminant (with commuting entries) by replacing permutations with alternating sign matrices in a natural and nontrivial manner. We show that the new hyperdeterminant deforms Cayley's hyperdeterminant. As an application, we give a Jacobi-Trudi type formula for Macdonald polynomials, generalizing Matsumoto's earlier formula for Jack polynomials of rectangular shapes. It would be interesting to uncover the relation with q-Hankel determinants; the relationship with other q-deformations also remains mysterious.

λ-hyperdeterminants
To introduce the λ-hyperdeterminant, we first consider a generalization of the permutation sign appearing in the usual determinant, and then generalize Cayley's hyperdeterminants.
A permutation matrix is a square matrix obtained by permuting the rows (or columns) of the identity matrix; its determinant equals the sign of the corresponding permutation, and it has exactly one nonzero entry in each row and each column. Generalizing this notion, an alternating sign matrix is a square matrix with entries 0, 1 and −1 such that the entries in each row and column sum to 1 and the nonzero entries in each row and column alternate in sign. Every permutation matrix is therefore an alternating sign matrix, but there are more alternating sign matrices than permutation matrices of the same size. For example, the unique 3 × 3 alternating sign matrix which is not a permutation matrix is
$$Q = \begin{pmatrix} 0 & 1 & 0 \\ 1 & -1 & 1 \\ 0 & 1 & 0 \end{pmatrix}. \tag{2}$$
Clearly the nonzero entries in each row or column of an alternating sign matrix must start (and end) with 1. We denote by $\mathrm{Alt}_n$ the set of $n \times n$ alternating sign matrices and by $P_n$ the subset of permutation matrices of size $n$. The set $P_n$, endowed with the usual matrix multiplication, is a group isomorphic to the symmetric group $S_n$. As $n$ increases, $\mathrm{Alt}_n$ contains more and more non-permutation matrices; in fact, it is known [17, 10] that $\mathrm{Alt}_n$ has cardinality $\prod_{i=0}^{n-1} \frac{(3i+1)!}{(n+i)!}$. For $X = (x_{ij}) \in \mathrm{Alt}_n$, we define the inversion number of $X$ by [14]
$$i(X) = \sum_{r>i,\, s<j} x_{ij} x_{rs}.$$
If $P = (\delta_{j,\pi_i})$ is the permutation matrix associated to the permutation $\pi \in S_n$, then $i(P)$ counts the number of pairs $(i, j)$ such that $i < j$ but $\pi_i > \pi_j$; that is, $i(P)$ is the inversion number of $\pi$, which explains the terminology.
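These definitions are easy to check by brute force for small $n$. A minimal sketch (function names are ours) enumerating $\mathrm{Alt}_n$, computing the inversion number $i(X)$, and evaluating the product formula for the cardinality:

```python
from itertools import product
from math import factorial

def is_asm(M):
    """Row and column sums are 1 and nonzero entries alternate 1, -1, 1, ..."""
    lines = [list(row) for row in M] + [list(col) for col in zip(*M)]
    for line in lines:
        nonzeros = [x for x in line if x != 0]
        if sum(line) != 1 or nonzeros != [(-1) ** i for i in range(len(nonzeros))]:
            return False
    return True

def alt(n):
    """Brute-force enumeration of Alt_n; feasible for n <= 3 (3^(n*n) candidates)."""
    found = []
    for entries in product((-1, 0, 1), repeat=n * n):
        M = [list(entries[i * n:(i + 1) * n]) for i in range(n)]
        if is_asm(M):
            found.append(M)
    return found

def inv(M):
    """Inversion number i(X) = sum over r > i, s < j of x_{ij} * x_{rs}."""
    n = len(M)
    return sum(M[i][j] * M[r][s]
               for i in range(n) for j in range(n)
               for r in range(i + 1, n) for s in range(j))

def asm_count(n):
    """Product formula prod_{i=0}^{n-1} (3i+1)!/(n+i)! for |Alt_n|."""
    num, den = 1, 1
    for i in range(n):
        num *= factorial(3 * i + 1)
        den *= factorial(n + i)
    return num // den
```

For $n = 1, 2, 3$ the enumeration gives $1, 2, 7$ matrices, matching the product formula; the matrix $Q$ of (2) has $i(Q) = 2$, while the reversal permutation matrix of size 3 has inversion number 3, in agreement with the count of inversions of the permutation $321$.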
Let $n(X)$ be the number of negative entries in $X$. For any $n \times n$ matrix $A = (a_{ij})$ and $X = (x_{ij}) \in \mathrm{Alt}_n$, set $A^X = \prod_{i,j} a_{ij}^{x_{ij}}$. In general $A^X$ is a quotient of two monomials, the degree of the denominator being $n(X)$.
Let λ be a parameter and $A = (a_{ij})$ an $n \times n$ matrix. Following Robbins and Rumsey [15], we introduce the λ-determinant as follows:
$$\det\nolimits_\lambda(A) = \sum_{X \in \mathrm{Alt}_n} (-\lambda)^{i(X)} \left(1 - \lambda^{-1}\right)^{n(X)} A^X = \sum_{P \in P_n} (-\lambda)^{i(P)} A^P + \sum_{X \in \mathrm{Alt}_n \setminus P_n} (-\lambda)^{i(X)} \left(1 - \lambda^{-1}\right)^{n(X)} A^X.$$
Note that the second equality holds even for λ = 1 due to $0^0 = 1$. We remark that the original λ-determinant of [15] corresponds to ours under a sign change of λ, while our normalization deforms the usual determinant at λ = 1. Indeed, when λ = 1 the second summand vanishes, so $\det_1(A)$ reduces to the usual determinant. In general, $\det_\lambda(A)$ is a rational function, or rather a polynomial in the variables $a_{ij}^{\pm 1}$, as long as $a_{ij} \neq 0$ whenever $x_{ij} = -1$ for some $X \in \mathrm{Alt}_n$. For any 3 × 3 matrix $A = (a_{ij})$,
$$\det\nolimits_\lambda(A) = a_{11}a_{22}a_{33} - \lambda a_{12}a_{21}a_{33} - \lambda^3 a_{13}a_{22}a_{31} - \lambda a_{11}a_{23}a_{32} + \lambda^2 a_{12}a_{23}a_{31} + \lambda^2 a_{13}a_{21}a_{32} - \lambda(1-\lambda)\, \frac{a_{12}a_{21}a_{23}a_{32}}{a_{22}},$$
where the last term is due to the alternating sign matrix $Q$ in (2). Another example is a λ-deformation of the Vandermonde determinant [15]: for commuting variables $x_i$, $1 \le i \le n$, one has
$$\det\nolimits_\lambda\big(x_i^{j-1}\big)_{1 \le i, j \le n} = \prod_{1 \le i < j \le n} (x_j - \lambda x_i),$$
which is a continuous deformation of the Vandermonde determinant at λ = 1. When λ = −1, this formula has an interesting connection with the Pfaffian Pf:
$$\frac{\det_1\big(x_i^{j-1}\big)}{\det_{-1}\big(x_i^{j-1}\big)} = \prod_{1 \le i < j \le n} \frac{x_j - x_i}{x_j + x_i} = \mathrm{Pf}\left(\frac{x_j - x_i}{x_j + x_i}\right)_{1 \le i, j \le n},$$
where the last equality is Schur's formula for the Pfaffian [13]. Here the Pfaffian $\mathrm{Pf}(M)$ of an antisymmetric matrix $M$ of even size satisfies $\mathrm{Pf}(M)^2 = \det(M)$. A permutation matrix $P$ of size $n$ sends the row vector $(1, 2, \ldots, n)$ to the vector $(\pi(1), \pi(2), \ldots, \pi(n))$ of its corresponding permutation $\pi \in S_n$ by right matrix multiplication, i.e. $(1, 2, \ldots, n)P = (\pi(1), \pi(2), \ldots, \pi(n))$. For a general $X \in \mathrm{Alt}_n$ we accordingly write $X(i) = \sum_{j=1}^n j\, X(i,j)$.
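The expansion over $\mathrm{Alt}_3$ can be verified numerically. The following sketch hardcodes $\mathrm{Alt}_3$ and assumes the weight $(-\lambda)^{i(X)}(1-\lambda^{-1})^{n(X)} A^X$, which is our reading of the definition (the Robbins-Rumsey weight under the sign change $\lambda \to -\lambda$); under this assumption the λ-deformed Vandermonde determinant equals $\prod_{i<j}(x_j - \lambda x_i)$. All function names are ours.

```python
from fractions import Fraction
from itertools import permutations

def alt3():
    """The seven elements of Alt_3: six permutation matrices plus the matrix Q."""
    mats = [[[1 if j == p[i] else 0 for j in range(3)] for i in range(3)]
            for p in permutations(range(3))]
    mats.append([[0, 1, 0], [1, -1, 1], [0, 1, 0]])
    return mats

def inv(M):
    """Inversion number i(X) = sum over r > i, s < j of x_{ij} * x_{rs}."""
    n = len(M)
    return sum(M[i][j] * M[r][s]
               for i in range(n) for j in range(n)
               for r in range(i + 1, n) for s in range(j))

def det_lambda3(A, lam):
    """lambda-determinant of a 3x3 matrix A as a weighted sum over Alt_3,
    assuming the weight (-lam)^{i(X)} * (1 - 1/lam)^{n(X)} on the monomial A^X."""
    total = Fraction(0)
    for X in alt3():
        neg = sum(1 for row in X for x in row if x < 0)        # n(X)
        weight = (-lam) ** inv(X) * (1 - 1 / lam) ** neg
        mono = Fraction(1)
        for i in range(3):
            for j in range(3):
                mono *= Fraction(A[i][j]) ** X[i][j]           # A^X
        total += weight * mono
    return total
```

At $\lambda = 1$ the sum collapses to the ordinary Vandermonde determinant (Python evaluates $0^0 = 1$, matching the convention in the text), and at $\lambda = -1$ it produces $\prod_{i<j}(x_i + x_j)$, the denominator in the Pfaffian ratio.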
Note that the λ-hyperdeterminant is different from the q-hyperdeterminant defined for quantum linear groups [9].
We also remark that alternating sign matrices with $n(X_1) > 0$ do not contribute to the sum, so the first summation variable $X_1$ can always be taken to be a permutation matrix. Moreover, one can even fix the first indices to be $1, 2, \ldots, n$ and drop the corresponding quantum factor.
We now explain how the λ-hyperdeterminant deforms the Cayley hyperdeterminant.

Proposition 2. For any 2m-dimensional hypermatrix $A$, the λ-hyperdeterminant $\det^{[2m]}_\lambda(A)$ reduces to Cayley's hyperdeterminant $\det^{[2m]}(A)$ at λ = 1.
Proof. When λ = 1, the summands on the right side of (9) (omitting the factor) vanish if one of the $X_i$'s has at least one negative entry. Thus only permutation matrices contribute to the sum, so we can let $X_r = P_{\sigma_r} = (\delta_{j, \sigma_r(i)})$ with $\sigma_r \in S_n$ for $r = 1, \cdots, 2m$. Subsequently $X_r(i) = \sum_{j=1}^n j X_r(i,j) = \sum_{j=1}^n j \delta_{j, \sigma_r(i)} = \sigma_r(i)$, and we see that the right side of (9) matches exactly that of (1).

Hyperdeterminant formula for Macdonald functions
We now generalize Matsumoto's hyperdeterminant formula for Jack polynomials [13] as an application of the λ-hyperdeterminant.
Recall that the classical Jacobi-Trudi formula expresses the Schur function associated to a partition µ as a determinant of Schur functions of one-row shapes:
$$s_\mu = \det\big( h_{\mu_i - i + j} \big)_{1 \le i, j \le \ell(\mu)},$$
where $h_r = s_{(r)}$ is the complete homogeneous symmetric function and $h_r = 0$ for $r < 0$. In [13], Matsumoto gave a simple formula for Jack functions of rectangular shapes using the hyperdeterminant (see also [1]).
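The classical formula can be checked numerically against the bialternant definition of the Schur function. This sketch (function names ours) works in three variables with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def h(k, x):
    """Complete homogeneous symmetric polynomial h_k at the point x (h_k = 0 for k < 0)."""
    if k < 0:
        return Fraction(0)
    total = Fraction(0)
    for combo in combinations_with_replacement(range(len(x)), k):
        term = Fraction(1)
        for i in combo:
            term *= x[i]
        total += term
    return total

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def schur_bialternant(mu, x):
    """s_mu(x) = det(x_i^{mu_j + n - j}) / det(x_i^{n - j}) in n = 3 variables."""
    mu = list(mu) + [0] * (3 - len(mu))
    num = det3([[x[i] ** (mu[j] + 2 - j) for j in range(3)] for i in range(3)])
    den = det3([[x[i] ** (2 - j) for j in range(3)] for i in range(3)])
    return num / den

def schur_jacobi_trudi(mu, x):
    """s_mu = det(h_{mu_i - i + j}) over a 3x3 matrix, padding mu with zeros."""
    mu = list(mu) + [0] * (3 - len(mu))
    return det3([[h(mu[i] - i + j, x) for j in range(3)] for i in range(3)])
```

For instance, at $x = (2, 3, 5)$ and $\mu = (2,1)$ both routes give $h_2 h_1 - h_3 = 69 \cdot 10 - 410 = 280$; the rectangular shape $(2,2)$ relevant to this section checks out as well.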

Proposition 3 ([13]). Let k, s, m be positive integers. The rectangular Jack function $Q_{(k^s)}(m^{-1})$ can be expressed compactly as follows.
As the Schur function $s_\mu$ is the specialization $Q_\mu(1)$ of the Jack function, one sees immediately that this formula specializes to the Jacobi-Trudi formula for rectangular Schur functions.
Macdonald symmetric functions $Q_\mu(q, t)$ [12] are a family of orthogonal symmetric functions with two parameters q, t, labeled by partitions µ. In this paper we consider the case $t = q^m$ for a positive integer m. As q approaches 1, $Q_\mu(q, q^m)$ specializes to the Jack symmetric function $Q_\mu(m^{-1})$ [12], which has been used in many applications (e.g. [2]). When the partition $\mu = (k^s)$, the symmetric function $Q_\mu(q, t)$ or $Q_\mu(m^{-1})$ is referred to as a rectangular (shaped) Macdonald function.
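The one-row case makes these specializations concrete: the generating function $\prod_i (t z x_i; q)_\infty / (z x_i; q)_\infty = \sum_r g_r z^r$ gives $Q_{(r)}(q,t) = g_r$ (Macdonald [12]). A sketch with exact rational arithmetic (the names `poch` and `one_row_Q` are ours):

```python
from fractions import Fraction
from itertools import product

def poch(a, q, k):
    """q-Pochhammer symbol (a; q)_k = prod_{i=0}^{k-1} (1 - a q^i)."""
    out = Fraction(1)
    for i in range(k):
        out *= 1 - a * q ** i
    return out

def one_row_Q(r, x, q, t):
    """One-row Macdonald function Q_(r)(x; q, t) = g_r, read off from the
    generating function prod_i (t z x_i; q)_inf / (z x_i; q)_inf = sum_r g_r z^r:
    g_r = sum over compositions alpha of r of prod_i (t;q)_{a_i}/(q;q)_{a_i} x_i^{a_i}."""
    total = Fraction(0)
    for alpha in product(range(r + 1), repeat=len(x)):
        if sum(alpha) != r:
            continue
        term = Fraction(1)
        for xi, a in zip(x, alpha):
            term *= poch(t, q, a) / poch(q, q, a) * xi ** a
        total += term
    return total
```

At $t = q$ (the case $m = 1$) the coefficients $(t;q)_a/(q;q)_a$ collapse to 1, so $Q_{(r)}(q, q) = h_r$, the Schur function of a one-row shape, illustrating the specialization described above.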
We now state our main result.
Theorem 4. Let k, s, m be positive integers. Up to a scalar factor, the rectangular Macdonald function $Q_{(k^s)}(q, q^m)$ is a 2m-dimensional λ-hyperdeterminant (14), where $\det^{[2m]}_q$ denotes the λ-hyperdeterminant with λ = q.
Remark 5. When q = 1, one recovers the hyperdeterminant formula (13) for Jack functions of rectangular shapes. The formula also gives another expression of the general formula of Lassalle-Schlosser [11] (see also [16]) for this special shape.
To prepare for its proof, we first recall the following result from [4].
Proposition 6. Let $\rho = (k^s)$ be a rectangular partition with k, s > 0, and let m be a positive integer. Then $\frac{(q;q)_{sm}}{(q;q)_s^m} Q_{(k^s)}(q, q^m)$ is the coefficient of $z_1^k z_2^k \cdots z_s^k$ in the product $FG$ of the q-Dyson Laurent polynomial $F$ with $G = \prod_{i=1}^s \sum_{j=0}^{\infty} Q_j(q, q^m) z_i^j$.

We can now prove Theorem 4.
Proof. Recall that the q-Dyson Laurent polynomial (15) is a generalization of the Vandermonde polynomial. Using (4), we can rewrite the q-Dyson Laurent polynomial in the variables $z_i$ using λ-determinants,
where in the last equality we have used the fact that the entries in each row of an alternating sign matrix sum to 1. By Proposition 6, $\frac{(q;q)_{sm}}{(q;q)_s^m} Q_{(k^s)}(q, q^m)$ is the coefficient of $z_1^k z_2^k \cdots z_s^k$ in
$$FG = \sum_{X_1, \ldots, X_{2m} \in \mathrm{Alt}_s} \left[ \phi_q(X_1, \cdots, X_{2m}) \prod_{i=1}^s z_i^{\sum_{j=1}^s \sum_{r=1}^m j \left( X_r(i,j) - X_{m+r}(i,j) \right)} \prod_{i=1}^s \sum_{j=0}^{\infty} Q_j(q, q^m) z_i^j \right],$$
and this coefficient is seen to be
$$\sum_{X_1, \ldots, X_{2m} \in \mathrm{Alt}_s} \phi_q(X_1, \cdots, X_{2m}) \prod_{i=1}^s Q_{k - \sum_{j=1}^s \sum_{r=1}^m j \left( X_r(i,j) - X_{m+r}(i,j) \right)}(q, q^m).$$
We now show that formula (14) holds. The entries of the matrix M in Definition 1 are now given as above, which then implies formula (14).