A Survey of the Hadamard Maximal Determinant Problem

In a celebrated paper of 1893, Hadamard established the maximal determinant theorem, which establishes an upper bound on the determinant of a matrix with complex entries of norm at most $1$. His paper concludes with the suggestion that mathematicians study the maximum value of the determinant of an $n \times n$ matrix with entries in $\{ \pm 1\}$. This is the Hadamard maximal determinant problem. This survey provides complete proofs of the major results obtained thus far. We focus equally on upper bounds for the determinant (achieved largely via the study of the Gram matrices), and constructive lower bounds (achieved largely via quadratic residues in finite fields and concepts from design theory). To provide an impression of the historical development of the subject, we have attempted to modernise many of the original proofs, while maintaining the underlying ideas. Thus some of the proofs have the flavour of determinant theory, and some appear in print in English for the first time. We survey constructions of matrices in order $n \equiv 3 \mod 4$, giving asymptotic analysis which has not previously appeared in the literature. We prove that there exists an infinite family of matrices achieving at least $0.48$ of the maximal determinant bound. Previously the best known constant for a result of this type was $0.34$.

The story, of course, does not begin with Hadamard. Thomson conjectured in 1885 a bound on the determinant of a matrix in terms of the norms of its rows; this was established shortly afterward by Muir. In his Résolution d'une question relative aux déterminants of 1893, Hadamard gives (i) a proof of the so-called Hadamard determinant bound (which is essentially the Muir-Thomson bound), (ii) an explicit statement of the maximal determinant problem (over $\mathbb{R}$), and (iii) solutions to this problem at orders $2^t$, 12 and 20. In Section 1 we follow Hadamard's exposition (his paper being as readable today as in 1893), before tracing a little of the history of the determinant bound before and after Hadamard. Again following the original exposition, we give the proof of Fischer's inequality, which generalises Hadamard's, using compound matrices. In Section 2 we describe the (real) maximal determinant problem, which is to construct $\{\pm 1\}$ matrices attaining the determinant bound, and describe some results from the theory of Hadamard matrices.
In quick succession in the 1960s, Ehlich and Wojtas produced sharper bounds than Hadamard's for {±1} matrices of orders n ≡ 1, 2, 3 mod 4. Their bounds are presented in Section 3. Each of the three cases has its own peculiarities, discussed in turn in Sections 4, 5 and 6 respectively. In each case, we survey the known constructions which achieve a determinant within a constant factor of the best known bound, and comment on computational and theoretical work at small orders. In Section 6.1 we analyse the known theoretical constructions for matrices with large determinant at orders n ≡ 3 mod 4. We generalise a construction of Neubauer and Radcliffe, allowing us to prove that there exists an infinite family of matrices exceeding 0.48 of the Ehlich bound.
In writing this paper, the authors made the conscious decision to present the main results for the maximal determinant problem with some historical context. Thus our presentation is approximately chronological, and we attempt to follow the techniques of the original authors. These choices result in some heterogeneity of style: Hadamard worked with Hermitian matrices while Fischer worked with real symmetric matrices, for example, and we have not attempted to reconcile these accounts. We perceive two underlying themes which run through many proofs in this area.
1. The Gram matrix of a real-valued matrix is symmetric positive definite. All its eigenvalues are real and positive. Most of the determinant bounds that we present use linearity of the determinant in the rows of a matrix to express the determinant as the sum of a positive and a negative term. The positive term becomes an upper bound on the determinant, and minimising the negative term saturates the corresponding bound. Slightly intricate induction hypotheses appear to be a necessary feature of these proofs. Theorem 1 is the prototype of this result, and Theorems 8 and 17 follow the same pattern, which reaches its most developed form in the results of Section 6.

The Hadamard determinant bound
A curiosity of Hadamard's paper to the eye of the modern reader is the absence of concepts from linear algebra. For Hadamard, a matrix is nothing but an array from which the determinant (considered as a homogeneous polynomial function of degree $n$ in $n^2$ variables) is computed. Our proof follows Hadamard's, with notation modernised and what Hadamard refers to as an identité bien connue presented explicitly.
In this paper, matrices are square unless stated otherwise. We use $I_n$ and $J_n$ to denote the $n \times n$ identity and all-ones matrices respectively, and drop the subscript when the order is clear from context. Recall that a matrix is Hermitian if $G^* = G$, and positive definite if its eigenvalues are positive real numbers. The Gram matrix of $M$ is the matrix $MM^*$, which has as entries the inner products of the rows of $M$. A positive definite Hermitian matrix $G$ is a Gram matrix: via the square root of a positive matrix, it can be shown that there exists a matrix $X$ such that $XX^* = G$. Conversely, the Gram matrix of a set of linearly independent vectors is Hermitian positive definite. There is a well-developed theory of positive definite matrices; see for example the monograph of Horn and Johnson [28]. We follow Hadamard in considering a minor of order $k$ to be the determinant of a $k \times k$ submatrix.
Proof. Define $G = MM^*$, and recall that the $(i,j)$ entry of $G$, which we denote $g_{i,j}$, is the inner product of rows $i$ and $j$ of $M$. Since its diagonal entries are real and $g_{i,j} = g_{j,i}^*$, the matrix $G$ is Hermitian. Furthermore, $\det(G) = \det(M)\det(M^*)$, being the product of a complex number and its conjugate, is real and non-negative.
For a subset $I$ of $\{1, 2, \ldots, n\}$, denote by $G_I$ the principal submatrix of $G$ with rows and columns indexed by $I$. We write $P_I$ for $\det(G_I)$, and $N_I$ for the determinant obtained upon setting the bottom-right entry of $G_I$ to zero. If $I = \{1, 2, \ldots, k\}$ then we write $P_k$ for $P_I$ and $N_k$ for $N_I$. Since the determinant is linear in the rows of the matrix, Equation (1) illustrates the Laplace expansion of the determinant of $G_k$. Gathering all terms containing $g_{k,k}$, we see that $P_k = g_{k,k}P_{k-1} + N_k$. By induction on $|I|$ we will establish that $N_I$ is always non-positive. For this we require a general determinantal identity. Let $U_1$ and $U_4$ be invertible square matrices of size $k \times k$ and $(n-k) \times (n-k)$ respectively. For any $U_2$ and $U_3$ such that the displayed matrix $U$ is invertible, set $V = U^{-1}$, and decompose $V$ into blocks as in $U$. Now, take determinants on both sides of the expression. We return to our inductive proof. Suppose that $I = \{i, j\}$. Recalling that $g_{i,j} = g_{j,i}^*$ because $G$ is Hermitian, we compute $N_I = -g_{i,j}g_{j,i} = -|g_{i,j}|^2 \le 0$, and the result is established for the base case $|I| = 2$. Suppose now that the inductive hypothesis holds for all $I$ with $|I| \le k - 1$. For notational convenience we work with the set $\{1, 2, \ldots, k\}$, but $I$ can be taken to be an arbitrary set of size $k$. Take $V$ to be the rightmost matrix displayed in Equation (1), so that $\det(V) = N_k$. If $N_k = 0$ the induction hypothesis holds, so suppose that $V$ is invertible. Let $V_1$ be $G_{k-2}$, which is the submatrix of $G_k$ containing the first $k-2$ rows and columns. The entries of $U = V^{-1}$ are, up to a factor of $\det(V)^{-1}$, the $(k-1) \times (k-1)$ cofactors of $V$. So $\det(U) = N_k^{-1}$ and $\det(V_1) = P_{k-2}$. We denote by $\gamma$ the (non-principal) minor obtained by deleting row $k-1$ and column $k$ of $V$. Then, up to some $(-1)$ factors which cancel in the determinant, Equation (4) applies, and we obtain a relation in which the terms $P_{k-2}$ and $P_{k-1}$ appear. These are determinants of Gram matrices and hence non-negative. By the inductive hypothesis, $N_{\{1,\ldots,k-2,k\}} \le 0$, so the right-hand side is non-positive.
The signs of $N_k^{-1}$ and $N_k$ agree, and so $N_k$ is non-positive and the result is established by induction.
Since the $g_{k,k}$ are real and positive, and the $N_k$ are non-positive, Equation (2) now shows that $P_k \le \prod_{i=1}^{k} g_{i,i}$. By hypothesis, all entries of $M$ have modulus bounded by 1, so each term in the product satisfies $g_{i,i} \le n$, and $\det(G) \le n^n$. Finally, $|\det(M)| \le n^{n/2}$ and the proof is complete.
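The bound $|\det(M)| \le n^{n/2}$ is easy to test empirically. The following sketch (an illustration, not taken from the paper; it assumes NumPy is available) samples random $\{\pm 1\}$ matrices and records the largest ratio of $|\det(M)|$ to the bound.

```python
import numpy as np

# Empirical check of Hadamard's bound |det(M)| <= n^(n/2) for {±1} matrices.
rng = np.random.default_rng(0)
n = 8
max_ratio = 0.0
for _ in range(200):
    M = rng.choice([-1.0, 1.0], size=(n, n))
    max_ratio = max(max_ratio, abs(np.linalg.det(M)) / n ** (n / 2))
# The ratio never exceeds 1; it equals 1 exactly when M is a Hadamard matrix.
```

Random matrices typically fall well short of the bound, which is one reason explicit constructions are needed.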
Equality holds in the identity $P_k = g_{k,k}P_{k-1} + N_k$ precisely when $N_k$ vanishes. Since the matrix defining $N_k$ contains the positive definite submatrix with determinant $P_{k-1}$, this occurs if and only if its final column is identically zero. Applying this observation repeatedly, equality in Theorem 1 holds if and only if all of the $N_k$ vanish, which forces $MM^*$ to be diagonal.
The most substantial part of Hadamard's proof is devoted to establishing that the determinant of a symmetric positive definite matrix is bounded by the product of its diagonal elements. This more general result was conjectured by William Thomson (later Lord Kelvin) in 1885. As recounted by Maritz in his masterly mathematical biography of Thomas Muir [35], the result was established by Muir shortly afterward. For reasons never elaborated upon, Muir's publication was delayed until 1901. Even then, the result is established only for $4 \times 4$ matrices, with the claim that the proof extends easily to larger dimensions. In 1899, Fredholm [25] also established Thomson's conjecture, but acknowledged in 1900 that this result was a direct consequence of Theorem 1.
In the form of a bound on the determinant of a symmetric positive definite matrix, Hadamard's result gained importance due to connections with Fredholm's theory of integral equations, with a new proof by Wirtinger in 1907 [53] and a generalisation by Fischer in 1908 [24]. In fact, we shall have use for Fischer's inequality in later sections of this paper, and so provide a proof modelled closely on the original. Both results in this section are easily established using techniques of positive definite matrices. Beckenbach and Bellman claim that there are perhaps a hundred proofs of the Hadamard inequality [2]; a one-line proof is given on page 505 of Horn and Johnson [28].
Theorem 2 (Satz III, [24]). Suppose that $G$ is positive definite and symmetric, and that
$$G = \begin{pmatrix} A & B \\ B^\top & D \end{pmatrix},$$
where $A$ and $D$ are square submatrices. Then $\det(G) \le \det(A)\det(D)$, with equality if and only if $B = 0$.
Proof. To fix notation, let $A$ be $k \times k$ and $D$ be $(n-k) \times (n-k)$. We will follow Fischer's proof, which involves the $k$th compound of $G$. This is the matrix with rows and columns indexed by the distinct $k$-subsets of $\{1, \ldots, n\}$, with the entry in row $X$ and column $Y$ the minor $G_{X,Y}$ of $G$ with rows labelled by $X$ and columns labelled by $Y$. We denote the $k$th compound of a matrix $M$ by $M^{(k)}$. The following results on compounds would have been well known to Fischer's contemporaries (for further discussion, see, for example, Section 0.8 of [28]):
1. The Sylvester-Franke theorem: $\det(M^{(k)}) = \det(M)^{\binom{n-1}{k-1}}$.
2. Jacobi's formula for the $k$th adjugate: $\operatorname{Adj}(M^{(k)})_{X,Y}$ is $(-1)^{\sigma(X,Y)}\overline{M}_{X,Y}$, where $\overline{M}_{X,Y}$ is the complementary minor of $M_{X,Y}$ and $\sigma(X,Y) = \sum_{x \in X} x + \sum_{y \in Y} y$. The $k$th adjugate satisfies the relation $\operatorname{Adj}(M^{(k)})M^{(k)} = \det(M)I_{\binom{n}{k}}$.
3. The (generalised) Cauchy-Binet formula: $(MN)^{(k)} = M^{(k)}N^{(k)}$.
Fischer first establishes a Hadamard-type bound for positive definite matrices: in the notation of Theorem 1, $\det(G) \le P_{n-1}\,g_{n,n}$.
This result is immediate from the proof of Theorem 1, which also shows that the bound is attained precisely when $g_{i,n} = 0$ for $1 \le i \le n-1$. Next, Fischer decomposes the $k$th compound into blocks, where $F$ is square of order $\binom{n}{k} - 1$ and $f$ is a column vector. We evaluate $\det(F)$ via the method of Equation (3). Set $U = G^{(k)}$ and $U_1 = F$. By the Sylvester-Franke theorem, $\det(U) = \det(G)^{\binom{n-1}{k-1}}$. By Jacobi's formula for the adjugate, $V_4$ is proportional to the minor of $G$ complementary to $A$, namely $V_4 = \det(G)^{-1}\det(D)$. Hence $\det(F) = \det(G)^{\binom{n-1}{k-1}-1}\det(D)$. By hypothesis, $G$ is symmetric positive definite, so $G = MM^*$ for some matrix $M$. By the Cauchy-Binet formula, $M^{(k)}(M^{(k)})^* = G^{(k)}$, and hence $G^{(k)}$ is positive definite. So we may apply Equation (5), from which Fischer's inequality follows by cancelling the common factor of $\det(G)^{\binom{n-1}{k-1}-1}$. If $\det(A) = 0$ then $\det(G) = 0$ and Fischer's inequality holds trivially, so suppose that $A$ has full rank. The entries of $f$ are minors of $G$ in which the columns of $A$ are held fixed and the rows vary: these are precisely the minors with rows drawn from $A$ and $B^*$. For a fixed row $b_r$ of $B^*$, consider the minors consisting of $k-1$ rows of $A$ together with $b_r$. All of these minors vanish if and only if $b_r = 0$. But equality holds in Equation (5) precisely when all entries of the vector $f$ are zero; hence $B^*$ (and $B$) are zero matrices.
In the original paper, Fischer characterises the cases of equality in Theorem 2 via an argument similar to Hadamard's demonstration that the minors N k are non-positive. We substitute a slightly more direct (if anachronistic) proof. Fischer also provides a direct proof of Equation (5), so his theorem gives an independent proof of Theorem 1. To see this, apply Theorem 2 recursively to the Gram matrix G = M M * until 1 × 1 blocks on the diagonal are obtained. Then the determinant of G is bounded by the product of its diagonal entries, and the last sentence of the proof of Theorem 1 completes the proof.
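Fischer's inequality is easy to illustrate numerically. The sketch below (an illustration of ours, not taken from the paper; it assumes NumPy is available) builds a positive definite matrix and compares $\det(G)$ with $\det(A)\det(D)$ for a $2 \times 2$ leading block.

```python
import numpy as np

# Illustrating Fischer's inequality det(G) <= det(A) det(D) for a
# positive definite G partitioned into diagonal blocks A and D.
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 6))
G = X @ X.T + 6 * np.eye(6)      # symmetric positive definite
k = 2
A, D = G[:k, :k], G[k:, k:]
lhs, rhs = np.linalg.det(G), np.linalg.det(A) * np.linalg.det(D)
# Strict inequality here, since the off-diagonal block B is non-zero.
assert lhs < rhs
```

The equality case $B = 0$ corresponds to a block-diagonal $G$, for which the determinant factors exactly.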

Hadamard matrices and the maximal determinant problem
Let $G$ be a symmetric positive definite matrix. As we have seen, the key step in Hadamard's proof of Theorem 1 is establishing the bound $\det(G) \le \prod_{i=1}^{n} g_{i,i}$. From Hadamard (but more explicitly from Fischer), one sees that this bound is met with equality precisely when $G$ is diagonal. When $G = MM^*$ is a Gram matrix, we see that the maximal determinant is obtained precisely when the rows of $M$ are orthogonal. Geometrically, the volume of a parallelepiped with fixed edge lengths is maximised when the edges are orthogonal. This geometric approach was used by Craigen [16] to establish Hadamard's inequality directly from the theorem of Pythagoras. There is no existence question to consider here: orthogonal matrices are plentiful and rows can be renormalised at will. As noted already by Sylvester [50], the discrete Fourier transform matrices furnish examples which saturate Hadamard's determinant bound in every dimension over the complex field. In contrast, there is a nontrivial existence theory for matrices saturating Hadamard's determinant bound over $\mathbb{R}$, which we consider in this section.
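The claim about Fourier matrices can be checked directly: the unnormalised DFT matrix $F$ with entries $\omega^{jk}$ satisfies $FF^* = nI$, so $|\det(F)| = n^{n/2}$. A minimal NumPy sketch (our illustration, not from the paper):

```python
import numpy as np

# The unnormalised DFT matrix saturates Hadamard's bound over C in every dimension.
n = 5
w = np.exp(2j * np.pi / n)
F = w ** np.outer(np.arange(n), np.arange(n))
assert np.allclose(F @ F.conj().T, n * np.eye(n))   # rows are orthogonal
ratio = abs(np.linalg.det(F)) / n ** (n / 2)        # equals 1 up to rounding
```

All entries of $F$ have modulus 1, so $F/\;$(suitably scaled) is exactly the kind of complex matrix to which Theorem 1 applies.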
Suppose now that $M$ is a real-valued $n \times n$ matrix of maximal determinant with entries of norm at most 1. Since the determinant is a linear function of each matrix entry $M_{i,j}$, it is maximised when every entry takes an extreme value; without loss of generality, the entries can be chosen from $\{\pm 1\}$. The remainder of this survey is devoted to the following problem, originally suggested as a topic for investigation by Hadamard.
Maximal determinant problem. What is the maximal determinant of an n× n matrix with entries in {±1}?
Initial progress on this problem was made by Hadamard, who established the following result.

Proposition 3. If a $\{\pm 1\}$ matrix $H$ of order $n$ attains the determinant bound, then $HH^\top = nI_n$, so the rows of $H$ are pairwise orthogonal. Moreover, $n = 1$, $n = 2$, or $n \equiv 0 \bmod 4$.

Proof. The first two claims follow directly from Hadamard's observation that the bound is saturated if and only if $HH^\top = nI_n$. For the last claim, observe that the matrices $(1)$ and $\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$ saturate the bound in dimensions 1 and 2. Suppose that $H$ has dimension $n \ge 3$. Since the magnitude of the determinant is invariant under permutation and negation of rows and columns, we may assume that the first row of $H$ has all entries positive. Orthogonality then forces an equal number of positive and negative entries in the second row. Hence $n$ is even.
The proof that $n$ is divisible by 4 is only slightly more involved. Permute the columns of $H$ so that the first three rows are in the form
$$\begin{pmatrix} \mathbf{1}_a & \mathbf{1}_b & \mathbf{1}_c & \mathbf{1}_d \\ \mathbf{1}_a & \mathbf{1}_b & -\mathbf{1}_c & -\mathbf{1}_d \\ \mathbf{1}_a & -\mathbf{1}_b & \mathbf{1}_c & -\mathbf{1}_d \end{pmatrix},$$
where $\mathbf{1}_x$ denotes an all-ones vector of length $x$. Orthogonality of rows forces the equations
$$a + b - c - d = a - b + c - d = a - b - c + d = 0.$$
These equations are solved precisely when $a = b = c = d$, and hence the dimension is a multiple of 4.
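For a concrete instance in the smallest nontrivial order, the order-2 matrix from the proof can be doubled to give a Hadamard matrix of order 4. The doubling step below is Sylvester's construction, used here purely for illustration; it is not part of the argument above.

```python
import numpy as np

# Doubling the order-2 matrix from the proof gives an order-4 Hadamard matrix.
H2 = np.array([[1, 1], [1, -1]])
H4 = np.kron(H2, H2)                         # Sylvester doubling (illustrative)
assert np.array_equal(H4 @ H4.T, 4 * np.eye(4))
# |det(H4)| = 4^(4/2) = 16, saturating Hadamard's bound at n = 4.
```

Normalising the first row of $H_4$ to all ones and inspecting the first three rows recovers the column classes $a = b = c = d = 1$ of the proof.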
Matrices meeting the determinant bound with equality have become known as Hadamard matrices. There is a substantial literature devoted to Hadamard matrices; we refer the reader to three monographs which have appeared in the past 15 years for further details [19,27,46]. Existence of Hadamard matrices is well-studied. The following omnibus result provides references to some well-known constructions: Hadamard matrices are known to exist at the following orders.
2. $p^a + 1$ where $p$ is prime and $p^a \equiv 3 \bmod 4$ [44].
3. $2(p^a + 1)$ where $p$ is prime and $p^a \equiv 1 \bmod 4$ [44].
4. $p(p + 2) + 1$ where $p$ and $p + 2$ are twin primes [49].
5. $4p^{4t}$ where $p$ is prime and $t \ge 1$ [55].
6. $4t$ for all values of $t \le 250$ except for $t \in \{167, 179, 223\}$ [31].
7. $n = ab/2$ or $n = abcd/16$ where $a, b, c, d$ are orders of Hadamard matrices [18,47].
8. $2^{\lceil \alpha + \beta \log_2(t) \rceil} t$ for every odd positive integer $t$, where $\alpha$ and $\beta$ are absolute constants; see [17,46].
As these results demonstrate, the literature on the maximal determinant problem for $n \equiv 0 \bmod 4$ is extensive. Paley conjectured in the 1930s that the bound is attained in every dimension divisible by 4. We note that Hadamard matrices have found application in the construction of error-correcting codes and experimental designs, and more recently in the design of quantum algorithms. The reader is referred to the monographs of Horadam [27] and Bengtsson and Zyczkowski [3] for further details.

Finite fields, quadratic residues and the Paley construction
The guiding principle in the assembly of this survey was to produce a self-contained reference on the maximal determinant problem. Upper bounds are only half of this story: to establish that the bounds are optimal, infinite families of matrices achieving them are required. As illustrated in Proposition 4, there are many constructions for Hadamard matrices. We shall see in Section 4 that there are just two known constructions for infinite families of matrices saturating the relevant determinant bounds when $n \equiv 1, 2 \bmod 4$. All of these constructions rely on properties of quadratic residues in finite fields. We will assume the following results about finite fields, proofs of which can be found in a standard textbook on abstract algebra, e.g., [30].
1. For each odd prime power $q$ there exists a finite field with $q$ elements, unique up to isomorphism. We denote this field by $\mathbb{F}_q$.
2. The multiplicative group of $\mathbb{F}_q$ is cyclic of order $q - 1$.
3. An element $x \in \mathbb{F}_q$ is a quadratic residue if there exists $y \in \mathbb{F}_q$ such that $y^2 = x$. Otherwise, $x$ is a quadratic non-residue. The quadratic character $\chi : \mathbb{F}_q \to \mathbb{C}$ is given by $\chi(0) = 0$, $\chi(x) = 1$ if $x$ is a non-zero quadratic residue, and $\chi(x) = -1$ otherwise. Since the squaring map is two-to-one on non-zero elements, the number of non-zero quadratic residues is $\frac{q-1}{2}$.
4. It follows that $\chi(-1) = (-1)^{\frac{q-1}{2}}$, so $-1$ is a quadratic residue if and only if $q \equiv 1 \bmod 4$.
The matrices constructed in Proposition 5 and their variants are frequently useful in the construction of maximal determinant matrices, and also occur in multiple other contexts.
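Items 3 and 4 can be verified computationally for small prime moduli; the following sketch (ours, for illustration) counts the squares in $\mathbb{F}_q^\times$ and tests whether $-1$ is among them.

```python
# For prime q: F_q has (q-1)/2 non-zero quadratic residues, and -1 is a
# residue exactly when q ≡ 1 (mod 4).
results = []
for q in (13, 19, 23, 29):
    residues = {pow(y, 2, q) for y in range(1, q)}      # non-zero squares mod q
    results.append((len(residues) == (q - 1) // 2,
                    ((q - 1) in residues) == (q % 4 == 1)))   # q - 1 ≡ -1 (mod q)
assert all(a and b for a, b in results)
```

For prime powers that are not prime, the same facts hold but a field implementation (rather than integer arithmetic mod $q$) is needed.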
Proposition 5. Suppose that $p$ is an odd prime number and $\chi$ is the quadratic character of $\mathbb{F}_p$. We define $Q$ to be the $p \times p$ matrix with rows and columns indexed by the elements of $\mathbb{F}_p$, whose $(x, y)$ entry is $\chi(x - y)$. Then $Q$ has zeroes on the diagonal and off-diagonal entries in $\{\pm 1\}$. Further, $Q$ is circulant and satisfies $QQ^\top = pI - J$.
Proof. The matrix is circulant since $(x + 1) - (y + 1) = x - y$. The entries are zero on the diagonal and $\pm 1$ off the diagonal, according to whether or not the equation $z^2 = x - y$ has a solution. So it suffices to compute the inner product of two distinct rows. Since the number of non-zero quadratic residues equals the number of non-residues, $\sum_{x \in \mathbb{F}_p} \chi(x) = 0$. We compute the inner product of the rows labelled $a$ and $b$; it will be convenient to sum over the non-zero terms in the inner product. In moving from the second line to the third, we used that $\chi$ is multiplicative. In moving from the third line to the fourth, we used that $\chi(y^2) = 1$. In moving from the fourth line to the fifth, we used that the sum $\sum_x \chi(x)$ is equal to 0. The terms excluded from the sum are $\chi(1) + \chi(0)$, but $\chi(0) = 0$, and the result follows.
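The construction and the identity $QQ^\top = pI - J$ are easily checked in code. The sketch below computes $\chi$ via Euler's criterion (an implementation choice of ours, not part of the text) for $p = 7$.

```python
import numpy as np

# Paley core Q = [chi(x - y)] for p = 7; verify Proposition 5.
p = 7

def chi(x):
    """Quadratic character of F_p via Euler's criterion."""
    x %= p
    if x == 0:
        return 0
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

Q = np.array([[chi(x - y) for y in range(p)] for x in range(p)])
assert np.array_equal(Q @ Q.T, p * np.eye(p) - np.ones((p, p)))   # QQ^T = pI - J
assert all(Q[(x + 1) % p][(y + 1) % p] == Q[x][y]                 # circulant
           for x in range(p) for y in range(p))
```
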
The next result is the Paley type I construction of Hadamard matrices. Following well-established conventions, a Hadamard matrix $H$ is called skew-symmetric if $H - I$ is skew-symmetric in the usual sense.

Proposition 6 (Lemma 2, [44]). Suppose that $p \equiv 3 \bmod 4$ is prime, and let $j_p$ denote the column vector of length $p$ of all ones. Then the matrix
$$M = \begin{pmatrix} Q + I & -j_p \\ j_p^\top & 1 \end{pmatrix}$$
is a skew-symmetric Hadamard matrix of order $p + 1$.
Since all entries of $M$ are in $\{\pm 1\}$, to verify that $MM^\top = (p+1)I_{p+1}$ it suffices to check that distinct rows of $M$ are orthogonal. Each non-terminal row contains $1 + \frac{p-1}{2}$ negative entries, coming from the last column and the non-residues, and so is orthogonal to the last row. The inner product of any two distinct non-terminal rows gains a contribution of $+1$ from the last column and a contribution of $-1$ from the remaining $p$ columns.
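The Paley type I construction can be verified directly for a small prime. In the sketch below (our illustration) the bordering ones are placed in the first row; other sources border on the last column instead, and the two layouts are equivalent up to permutation and negation.

```python
import numpy as np

# Paley type I matrix for p = 7: border the core so that H - I is skew-symmetric.
p = 7

def chi(x):
    x %= p
    if x == 0:
        return 0
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

Q = np.array([[chi(x - y) for y in range(p)] for x in range(p)])
H = np.block([[np.ones((1, 1)), np.ones((1, p))],
              [-np.ones((p, 1)), Q + np.eye(p)]])
assert np.array_equal(H @ H.T, (p + 1) * np.eye(p + 1))      # Hadamard of order p + 1
S = H - np.eye(p + 1)
assert np.array_equal(S, -S.T)                               # H - I is skew-symmetric
```

The skewness relies on $\chi(-1) = -1$ for $p \equiv 3 \bmod 4$, which makes the core $Q$ skew-symmetric.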
Throughout this survey we describe constructions for primes $p \equiv 3 \bmod 4$. In all cases, the constructions generalise (possibly with minor variations) to all odd prime powers. Thus the construction of Paley type I matrices is essentially unchanged for prime powers $q \equiv 3 \bmod 4$, though indices are drawn from the additive group of $\mathbb{F}_q$, and the resulting matrix has a block-circulant submatrix rather than a circulant submatrix. Then for prime powers $q \equiv 1 \bmod 4$, the Paley core is symmetric, and a variant of this construction gives a Hadamard matrix of order $2q + 2$. For analysis of the corresponding matrix of order $p$ for primes $p \equiv 1 \bmod 4$, see Proposition 21.

The Ehlich-Wojtas bound
We have seen that Hadamard's bound is attained infinitely often, conjecturally in every dimension which is a multiple of 4. On the other hand, the proof of Proposition 3 shows that in all other dimensions no three {±1} vectors are pairwise orthogonal. In this section, we follow the treatment of Wojtas [54] to establish tighter bounds on maximal determinants in these dimensions. The next lemma will be a key tool in bounding the determinant of a non-diagonal positive definite matrix.
Lemma 7. Let $B$ be the following positive definite symmetric matrix, and assume further that $b \le |b_i|$ for $1 \le i \le k$.

Proof. For each $i$ in the interval from 1 to $k$, subtract $b_i/b$ times the last row from the $i$th row. Similarly, subtract $b_i^*/b$ times the last column from the $i$th column. The result is a symmetric matrix $B'$ congruent to $B$, which is therefore positive definite. Clearly, $\det(B') = \det(B) = b\Delta$, where $\Delta$ is the determinant of the $k \times k$ matrix in the upper left of $B'$. We apply the Hadamard bound (as interpreted for positive definite matrices) and the bound $|b_i|b^{-1} \ge 1$ to complete the proof.

The next theorem was established independently by Ehlich [22] and Wojtas [54], via essentially the same argument. We have followed Wojtas' proof, which is determinant theoretic, in the style of Hadamard.
Theorem 8. Let $G$ be an $n \times n$ real positive definite symmetric matrix with diagonal entries $m$. Let $b$ be a positive real number such that $b \le |g_{i,j}|$ for all off-diagonal entries of $G$. Then
$$\det(G) \le (m - b)^{n-1}\bigl(m + (n-1)b\bigr).$$

Proof. Since the determinant is linear in the rows of $G$, we rewrite the determinant as the sum of two terms, displayed in Equation (7). Consider the second term on the right-hand side of Equation (7): its principal minors are positive, so the matrix is positive definite if and only if its determinant is positive. This is Sylvester's characterisation of positive definite matrices (see Theorem 7.2.5, [28]), so Lemma 7 applies. We obtain the inequality of Equation (8), where $G_{n-1}$ is the $(n-1) \times (n-1)$ principal submatrix of $G$. In the case that the second term is non-positive, we obtain the same inequality directly, so it holds in either case. Finally, we establish the result by induction. Observe that for the case $n = 2$ the result holds: $\det(G) = m^2 - g_{1,2}^2 \le m^2 - a^2 = (m-a)(m+a)$ for any $a \le |g_{1,2}|$. Now, assume the result holds for $(n-1) \times (n-1)$ matrices, in particular for the matrix $G_{n-1}$ in Equation (8). Applying the inductive bound to $\det(G_{n-1})$ completes the proof.
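The matrices appearing in the equality case (see Corollary 9 below in the source) have the form $(m-b)I + bJ$: the eigenvalues are $m - b$ with multiplicity $n - 1$ and $m + (n-1)b$, so the determinant has a closed form. A quick numerical check (values $n = 6$, $m = 6$, $b = 2$ chosen arbitrarily for illustration):

```python
import numpy as np

# G = (m - b)I + bJ has eigenvalues m - b (multiplicity n - 1) and m + (n - 1)b,
# so det(G) = (m - b)^(n-1) (m + (n - 1)b).
n, m, b = 6, 6.0, 2.0
G = (m - b) * np.eye(n) + b * np.ones((n, n))
closed_form = (m - b) ** (n - 1) * (m + (n - 1) * b)
assert abs(np.linalg.det(G) - closed_form) < 1e-6 * closed_form
```
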
Later a characterisation of certain matrices meeting the bound of Theorem 8 will be required.
Corollary 9. Let $G$ be an $n \times n$ symmetric positive definite matrix with diagonal entries $n$ and $|g_{i,j}| \ge b$ for all $i \ne j$. If $G$ attains the bound of Theorem 8, then up to permutation and negation of rows and columns, $G = (n - b)I + bJ$, where $J$ is the all-ones matrix.
Proof. The bound in Theorem 8 is attained if and only if the bound in Lemma 7 is attained. This relies on the Hadamard bound, which is attained only if the displayed matrix $B'$ of Equation (6) is diagonal. Suppose there is an off-diagonal entry $g_{i,j}$ of magnitude larger than $b$. Without loss of generality, we permute the rows and columns of $G$ so that this entry lies in the last column. Negating rows and columns, we may assume that all entries in the last row and column of $G$ are positive. Then we calculate the determinant in the manner of Equation (7), and evaluate the determinant of the rightmost term as in Lemma 7, observing that $|g_{i,j}| > b$ forces a strict inequality. Hence $|g_{i,j}| = b$ for all off-diagonal entries of a matrix attaining the bound.
Tracing the proof of Theorem 8 with this matrix, we are led again to Lemma 7, in which the bottom-right entry of $G$ is replaced with $b$. Subtracting the final row of this matrix from all others results in subtracting $b$ from all entries in the matrix. This matrix is diagonal precisely when all off-diagonal entries are equal to $b$, completing the proof.

The Barba bound and matrices with n ≡ 1 mod 4

The next result was first established by Barba [1], but follows easily from Theorem 8. For an overview of the history of this result, see Neubauer and Radcliffe [39].

Corollary 10. Let $M$ be a $\{\pm 1\}$ matrix of odd order $n$. Then $\det(M) \le \sqrt{2n-1}\,(n-1)^{(n-1)/2}$.

Proof. The Gram matrix $G = MM^\top$ has diagonal entries $n$, and its off-diagonal entries are inner products of $\{\pm 1\}$ vectors of odd length, hence odd and of magnitude at least 1. Applying Theorem 8 with $m = n$ and $b = 1$ gives $\det(G) \le (n-1)^{n-1}(2n-1)$. Hence $\det(M) \le \sqrt{2n-1}\,(n-1)^{(n-1)/2}$.

We will now work to characterise the Gram matrices which attain the bound of Corollary 10. If $M$ is a $\{\pm 1\}$ matrix of odd order, then no two rows of $M$ are orthogonal. It is possible to say a little more.

Proof. This follows directly from the Schur complement formula (Section 0.8, [28]): for any block matrix in which the block $A$ is invertible, the determinant factors as $\det(A)$ times the determinant of the Schur complement of $A$. Apply this result to $M$, observing that $\mathbf{1}^\top H \mathbf{1} = e(H)$.
It is well known that the maximal excess of a Hadamard matrix of order $n$ is bounded above by $n\sqrt{n}$, and that equality is achieved if and only if $n = 4t^2$ is the square of an even integer and every row has sum $2t$ [4]. A Hadamard matrix with constant row sums is called regular in the literature. If there exists such a Hadamard matrix then Proposition 13 gives a matrix of order $4t^2 + 1$ with determinant $(2t+1)(4t^2)^{2t^2}$. This should be compared to the bound of Corollary 10: upon making the substitution $n = 4t^2 + 1$ we obtain the bound $\det(M) \le \sqrt{8t^2+1}\,(4t^2)^{2t^2}$. Comparing $2t + 1$ to $\sqrt{8t^2+1}$, we see that this determinant exceeds $1/\sqrt{2}$ of the Barba bound (and indeed is somewhat better for small values of $t$). Constructions for infinite families of regular Hadamard matrices are known: there exist regular Hadamard matrices of order $4q^4$ for every odd prime power $q$, and there exists a regular Hadamard matrix of order $16n^2$ whenever there exists a Hadamard matrix of order $4n$ [36,38]. Orrick and Solomon [42] have developed a normalisation technique which suggests that Hadamard matrices with large excess are relatively common.
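The asymptotic comparison in this paragraph amounts to the limit $(2t+1)/\sqrt{8t^2+1} \to 1/\sqrt{2}$ as $t \to \infty$. A quick check (ours) that the ratio stays above $1/\sqrt{2}$ and decreases:

```python
import math

# Ratio of the constructed determinant to the Barba bound at n = 4t^2 + 1.
ratios = [(2 * t + 1) / math.sqrt(8 * t ** 2 + 1) for t in (1, 2, 10, 1000)]
assert all(r > 1 / math.sqrt(2) for r in ratios)     # always above 1/sqrt(2)
assert ratios == sorted(ratios, reverse=True)        # decreasing on this sample
```

At $t = 1$ the ratio is exactly 1, reflecting the fact that the order-5 maximal determinant matrix attains the Barba bound.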

Designs and the Brouwer-Whiteman construction
In this section, we construct a matrix of order $2p^2 + 2p + 1$ satisfying the conditions of Theorem 12, where $p \equiv 3 \bmod 4$ is prime. This result was obtained independently by Brouwer [11] and by Whiteman [52]. The construction extends readily to all odd prime powers; for the general case, we refer the reader to the work of Neubauer and Radcliffe [39]. We begin this section by introducing the matrices $I$, $J$ and $C$ and establishing some of their basic properties. In Propositions 14 and 15 we combine these ingredients to form sets of $p^2$ and $p^2 + 2p$ pairwise orthogonal vectors respectively. Then in Theorem 16, we add a single row and column to these matrices to yield a maximal determinant matrix in dimension $2p^2 + 2p + 1$.
Recall that I and J denote the identity and all-ones matrix respectively, where the dimension is clear from context. Let j m denote the row vector of length m with all entries equal to 1. A useful observation is that for any matrix M , the entries of JM are the column sums of M while the entries of M J are the row sums of M .
Let $Q$ be the $p \times p$ Paley core of Proposition 5, and let $C = Q - I$. The reader should verify that $C$ has all entries in $\{\pm 1\}$ and, since $p \equiv 3 \bmod 4$, that $Q$ is skew-symmetric, so that $CC^\top = QQ^\top + I = (p+1)I - J$. It follows from Proposition 5 that $JC = CJ = -J$.
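These identities for $C$ are easy to confirm computationally; a sketch of ours for $p = 7$, with $\chi$ computed via Euler's criterion:

```python
import numpy as np

# Verify the stated properties of C = Q - I for p = 7 (p ≡ 3 mod 4).
p = 7

def chi(x):
    x %= p
    if x == 0:
        return 0
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

Q = np.array([[chi(x - y) for y in range(p)] for x in range(p)])
C = Q - np.eye(p)
J = np.ones((p, p))
assert set(np.unique(C)) == {-1.0, 1.0}                      # all entries ±1
assert np.array_equal(C @ C.T, (p + 1) * np.eye(p) - J)      # CC^T = (p+1)I - J
assert np.array_equal(J @ C, -J) and np.array_equal(C @ J, -J)   # JC = CJ = -J
```
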
Finally, define the tensor product $A \otimes B = [a_{i,j}B]_{i,j}$. Provided the matrices have compatible dimensions, matrix multiplication distributes over the tensor product: $(A \otimes B)(M \otimes N) = AM \otimes BN$. We will require some well-known results from the theory of combinatorial designs in this section; for further information the reader is directed to the monograph of Beth, Jungnickel and Lenz [5].
The affine designs are an important family of 2-designs obtained from vector spaces over finite fields.
Definition 2. Let $U$ be a vector space of dimension 2 over $\mathbb{F}_p$. Let $V$ be the set of vectors of $U$ and $B$ be the set of 1-dimensional subspaces and their translates. Since any two distinct points lie on a unique line, $(V, B)$ is a 2-$(p^2, p, 1)$ design. The incidence matrix is $p^2 \times (p^2 + p)$, and can be partitioned into $p + 1$ parallel classes: sets of blocks which partition the point set.
Let us be a little more explicit in our description of the affine plane: parallel classes consist of pencils of parallel lines in the plane. One pencil consists of "vertical" lines, which are all of the form {(c, x) : x ∈ F p } for fixed c ∈ F p . The remaining lines consist of point-sets of the form {(x, ax + b) : x ∈ F p } for some a, b ∈ F p . The parallel classes are obtained by fixing a and varying b.
The incidence matrix of the affine plane has $p^2$ rows and $p^2 + p$ columns. We will assume that the columns are grouped into the $p + 1$ parallel classes. Since each parallel class partitions the point set, the $p^2 \times p$ submatrix corresponding to a parallel class contains a unique 1 in each row, and $p$ non-zero entries in each column. Denote this matrix by $M_p$, and observe that $M_p M_p^\top = pI + J$.

Proof. Consider the $p^2 \times p$ submatrix $F$ of $M_p$ corresponding to the $i$th parallel class. The corresponding block of $M$ is just $FC$. Since each row of $F$ contains a single 1, every row of $FC$ is just a row of $C$. Hence the entries of $M$ all belong to $\{\pm 1\}$, and the diagonal entries of $MM^\top$ are all $p^2 + p$.
By the 2-design property, any pair of points is contained in a unique block, so the inner product of two distinct rows of $M_p$ is 1. Hence for any two distinct rows of $M$, there is a unique parallel class in which they contain the same row of $C$; in all other parallel classes they differ. The inner product therefore gains a contribution of $+p$ from the parallel class in which they agree, and a contribution of $-1$ from each of the $p$ parallel classes in which they disagree, so every pair of distinct rows is orthogonal.
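The affine-plane computation can be replicated directly. The sketch below (our illustration) builds the incidence matrix for $p = 3$ from the pencils described above and checks $M_p M_p^\top = pI + J$.

```python
import numpy as np

# Incidence matrix of the affine plane of order p = 3: one vertical pencil
# plus one parallel class per slope a, giving p + 1 classes in all.
p = 3
points = [(x, y) for x in range(p) for y in range(p)]
lines = [[(c, y) for y in range(p)] for c in range(p)]          # vertical lines
lines += [[(x, (a * x + b) % p) for x in range(p)]
          for a in range(p) for b in range(p)]                  # slope-a pencils
Mp = np.array([[1 if pt in ln else 0 for ln in lines] for pt in points])
assert Mp.shape == (p ** 2, p ** 2 + p)
assert np.array_equal(Mp @ Mp.T,
                      p * np.eye(p ** 2) + np.ones((p ** 2, p ** 2)))
```

The diagonal entries $p + 1$ count the lines through a point; the off-diagonal entries 1 reflect the unique line through any two distinct points.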
The next proposition, like the previous one, constructs a large set of orthogonal vectors with rows drawn from J and C.
Proposition 15. Let $C = Q - I$, where $Q$ is the Paley core of order $p$ and $p \equiv 3 \bmod 4$. Let $J$ be the all-ones matrix of order $p$, and let $j_p$ be a vector of ones of length $p$. Then the $(p^2 + 2p) \times (2p^2 + 2p)$ matrix $N$ has pairwise orthogonal rows.

Proof. Essentially, the proof reduces to computing $NN^\top$ and carefully evaluating each of the terms. Let us compute the inner product of the first block of the matrix with itself (equivalently, the inner product of any two rows from the first block). First observe that $N$ is a $\{\pm 1\}$ matrix, so the diagonal of $NN^\top$ is as claimed.
In particular, we conclude that two distinct rows from this block are orthogonal. We now verify the orthogonality of rows from two distinct blocks. To perform this computation by hand, it is convenient to simplify each term in the product individually, using that $j_p \otimes J = J \otimes j_p$, and that $J(j_p \otimes C) = j_p \otimes JC = -J \otimes j_p$. The remaining verifications are similar and are left for the reader.
In Propositions 14 and 15, the assumption that $p \equiv 3 \bmod 4$ is necessary. Using the affine plane, we constructed $p^2$ pairwise orthogonal vectors with entries in $\{\pm 1\}$ in dimension $p^2 + p$. For primes $p \equiv 1 \bmod 4$ this is impossible, by Proposition 3. Using tensor products, we constructed $p^2 + 2p$ orthogonal vectors in dimension $2p^2 + 2p$. To complete our construction of maximal determinant matrices, we assemble $M$ and $N$ into a square matrix of dimension $(p+1)^2 + p^2$.
Theorem 16. Let $W$ be the following matrix, assembled from the matrices of Propositions 14 and 15 with a single row and column appended. Then $WW^{\top} = (2p^2 + 2p)I + J$, and so $W$ is a maximal determinant matrix. Furthermore, $W$ has constant row sums $2p + 1$.

Proof. Recall that the row sum of $u \otimes v$ is the product of the row sums of $u$ and $v$, and that row sums are linear. For example, the inner product of the first row of $W$ with any row from the third block can be evaluated directly using these facts. The remaining verifications are similar, and are left to the reader.
In light of the first column, we need to show that the inner product of a row of $[-M, -M]$ with a row of $N$ is $+2$. Take for example a row from the first block of $N$. Since the rows of $M_0$ all come from $C$, the contributions from the second and fourth displayed columns are $-1$ and $1$ respectively. Since $C$ contains $\frac{p-1}{2}$ entries $+1$ and $\frac{p+1}{2}$ entries $-1$, and the rows of $M_1$ are concatenations of rows of $C$, the contribution from the third block is $+1$. The contribution from the final block is also $+1$, and hence the inner product evaluates to $+2$. Here, too, we leave the remaining verifications to the reader.
We note again that this result extends readily to odd prime powers; such a matrix has order $(q+1)^2 + q^2$. There are nine orders $n = 4t + 1$ with $n \le 200$ for which $2n - 1$ is a perfect square. Of these, $n = 5, 13, 41$ are sufficiently small that maximal determinant matrices may be found by ad hoc means. Orders $n = 25, 61, 113, 181$ are of the form $q^2 + (q+1)^2$, and so Theorem 16 applies. The remaining two cases are open. For $n = 85$, the Barba bound is $13 \cdot 84^{42}$, while Proposition 13 produces a matrix with determinant $10 \cdot 84^{42}$. A matrix with a larger determinant, $\frac{501}{49} \cdot 84^{42}$, was constructed by Orrick and Solomon [40]. For $n = 145$, the Barba bound is $17 \cdot 144^{72}$, while Proposition 13 gives a matrix with determinant $13 \cdot 144^{72}$.
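The count of relevant orders is easily confirmed by enumeration; the second line recomputes the four orders covered by Theorem 16 from $q = 3, 5, 7, 9$.

```python
from math import isqrt

# Orders n = 4t + 1, n <= 200, with 2n - 1 a perfect square.
orders = [n for n in range(5, 201, 4) if isqrt(2 * n - 1) ** 2 == 2 * n - 1]
print(orders)  # -> [5, 13, 25, 41, 61, 85, 113, 145, 181]

# The four orders of the form q^2 + (q + 1)^2 for an odd prime power q:
print([q * q + (q + 1) ** 2 for q in (3, 5, 7, 9)])  # -> [25, 61, 113, 181]
```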
At orders $n \equiv 1 \bmod 4$ where the Barba bound cannot be attained, rather less is known. Chadjipantelis, Kounias and Moyssiadis [12] analysed the Gram matrices of maximal determinant matrices at orders 17 and 21, and found explicit matrices of maximal determinant. Their method was extended by Brent, Osborn, Orrick and Zimmermann [7] to find the Gram matrices of maximal determinant at order 37. To our knowledge, these are the only cases not covered by Theorem 12 for which the maximal determinant is known. To be entirely explicit: we are not aware of work establishing the maximal determinants at orders 29, 33, 45 or 49, and these are the only open cases with $n \equiv 1 \bmod 4$ and $n \le 50$. Computational work by Orrick and Solomon shows that for all orders $n \le 100$, matrices attaining at least $0.7$ of the Barba bound exist, and can be obtained from Hadamard matrices of large excess using Proposition 13.

5 A refined bound and the case n ≡ 2 mod 4
The analysis of the case $n \equiv 2 \bmod 4$ is a continuation of the techniques developed thus far. The results in this section were obtained by Cohn [14], Ehlich [22], Whiteman [52] and Wojtas [54].

Proof. We start with the first statement. Let $G := MM^{\top}$ with entries $g_{i,j}$; then $G$ is symmetric and positive definite. Since $n \equiv 2 \bmod 4$ and $M$ has entries in $\{\pm 1\}$, it follows that $g_{i,i} = n$ and that $g_{i,j}$ is even, for all $1 \le i, j \le n$.
Otherwise, $g_{i,j} \equiv 0 \bmod 4$ for some $i \neq j$. Up to simultaneous permutation of rows and columns of $G$, we may assume that $g_{1,j} \equiv 2 \bmod 4$ for $1 \le j \le k$ and $g_{1,j} \equiv 0 \bmod 4$ for $k + 1 \le j \le n$. Set
$$G = \begin{pmatrix} A & B \\ B^{\top} & D \end{pmatrix},$$
where $A$ is $k \times k$ and $D$ is $(n-k) \times (n-k)$. We claim that all entries of $A$ and $D$ are $2 \bmod 4$ and that all entries of $B$ are $0 \bmod 4$. For any $r, s, t$ in the range $1$ to $n$, we have
$$\sum_{i=1}^{n} (m_{r,i} + m_{s,i})(m_{r,i} + m_{t,i}) = g_{r,r} + g_{r,t} + g_{s,r} + g_{s,t}.$$
Since $m_{i,j} \in \{\pm 1\}$, each of the terms $(m_{r,i} + m_{s,i})$ and $(m_{r,i} + m_{t,i})$ is even, so their product is divisible by $4$. Since $g_{r,r} \equiv 2 \bmod 4$, it follows that $g_{r,s} + g_{s,t} + g_{t,r} \equiv 2 \bmod 4$. Setting $t = 1$ and $r, s \le k$, we see that $g_{s,1} \equiv g_{r,1} \equiv 2 \bmod 4$ and hence $g_{r,s} \equiv 2 \bmod 4$. Hence every entry of $A$ is $2 \bmod 4$. Similarly, the entries of $D$ are $2 \bmod 4$ and, exploiting the symmetry of $G$, the entries of $B$ are $0 \bmod 4$. Next, we apply Theorem 2 to see that
$$\det(G) \le \det(A)\det(D).$$
Since the elements of $A$ and $D$ are all $2 \bmod 4$, we can apply the bound of Theorem 8 with $m = n$ and $b = 2$:
$$\det(A)\det(D) \le (n-2)^{k-1}(n - 2 + 2k) \cdot (n-2)^{n-k-1}\bigl(n - 2 + 2(n-k)\bigr).$$
This bound is maximised when $n - 2k = 0$ or, equivalently, when $k = n/2$. The bound is attained when equality holds both in Fischer's inequality, which requires that $B = 0$, and in the Ehlich-Wojtas bound with $b = 2$, characterised by Corollary 9.
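The congruence at the heart of this proof can be checked empirically: for a random $\{\pm 1\}$ matrix of order $n \equiv 2 \bmod 4$, the sketch below verifies that the Gram matrix has diagonal $n$, even entries, and $g_{r,s} + g_{s,t} + g_{t,r} \equiv 2 \bmod 4$ for every triple $(r, s, t)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10                      # any order n = 2 mod 4
M = rng.choice([-1, 1], size=(n, n))
G = M @ M.T

# Diagonal entries equal n, and all entries are even...
assert all(G[i, i] == n for i in range(n))
assert (G % 2 == 0).all()
# ...and every three-term sum is 2 mod 4.
assert all((G[r, s] + G[s, t] + G[t, r]) % 4 == 2
           for r in range(n) for s in range(n) for t in range(n))
```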
A little further work gives a necessary Diophantine condition for the existence of a matrix meeting the bound of Theorem 17.
Thus $N$ commutes with $N^{\top}$, and it follows that $N$ commutes with $G$. It will be convenient to write
$$N = \begin{pmatrix} A & B \\ C & D \end{pmatrix},$$
where all blocks are $n/2 \times n/2$ and $G = NN^{\top}$ has the block-diagonal form established in the proof of Theorem 17. We then see that $XJ = JX$ for all $X \in \{A, B, C, D\}$. But $XJ$ is constant on rows, while $JX$ is constant on columns. We conclude that $XJ = JX = xJ$, where all row and column sums of $X$ are equal to $x$; write $a, b, c, d$ for the row sums of $A, B, C, D$ respectively. To conclude the proof, consider the matrix product
$$\begin{pmatrix} J & 0 \\ 0 & J \end{pmatrix} N N^{\top} \begin{pmatrix} J & 0 \\ 0 & J \end{pmatrix}.$$
Evaluating the product of the first two and the last two matrices, we obtain
$$\begin{pmatrix} aJ & bJ \\ cJ & dJ \end{pmatrix} \begin{pmatrix} aJ & cJ \\ bJ & dJ \end{pmatrix} = \frac{n}{2} \begin{pmatrix} (a^2 + b^2)J & (ac + bd)J \\ (ac + bd)J & (c^2 + d^2)J \end{pmatrix}.$$
On the other hand, evaluating $NN^{\top}$ first, we obtain
$$\begin{pmatrix} J & 0 \\ 0 & J \end{pmatrix} \left( (n-2)I + 2\begin{pmatrix} J & 0 \\ 0 & J \end{pmatrix} \right) \begin{pmatrix} J & 0 \\ 0 & J \end{pmatrix} = \frac{n(2n-2)}{2} \begin{pmatrix} J & 0 \\ 0 & J \end{pmatrix}.$$
Equating these expressions, we conclude that $a^2 + b^2 = 2n - 2$, as required.
It is possible to continue the argument of Theorem 18 a little further: from $ac = -bd$ and $a^2 + b^2 = c^2 + d^2$, it follows that $a = \pm d$ and $b = \mp c$. So matrices attaining the bound of Theorem 17 are intimately related to sums of two squares. The well-known characterisation of Fermat shows that an integer fails to be a sum of two squares if and only if its square-free part is divisible by a prime $p \equiv 3 \bmod 4$; see, for example, [29]. From Theorem 16 we obtain matrices meeting the bound of Theorem 17.

Proof. Compute the Gram matrix: the diagonal blocks are of the form $2WW^{\top} = (2n-2)I_n + 2J_n$, while the off-diagonal blocks are $0$.
Theorem 20 (Theorem 1, [48]; Theorem 2, [32]). For any odd prime power $q$ there exists a pair of circulant matrices $R$ and $S$ of order $v = q^2 + q + 1$ with entries in $\{\pm 1\}$ such that $RR^{\top} + SS^{\top} = (2v - 2)I + 2J$. The matrix
$$\begin{pmatrix} R & S \\ S^{\top} & -R^{\top} \end{pmatrix}$$
has maximal determinant. The row sums of $R$ are all equal to $2q + 1$, and the row sums of $S$ are $-1$.
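The two-circulant template of Theorem 20 can be illustrated at the smallest order. The pair $R = J_3$, $S = \operatorname{circ}(-1, 1, 1)$ (an ad hoc example, not arising from the theorem itself) satisfies $RR^{\top} + SS^{\top} = (2v-2)I + 2J$ with $v = 3$, and the resulting matrix of order $n = 6$ attains the bound $(2n-2)(n-2)^{(n-2)/2} = 160$.

```python
import numpy as np

def circulant(row):
    n = len(row)
    return np.array([[row[(j - i) % n] for j in range(n)] for i in range(n)])

v = 3
R = circulant([1, 1, 1])         # J_3
S = circulant([-1, 1, 1])
I, J = np.eye(v, dtype=int), np.ones((v, v), dtype=int)
assert np.array_equal(R @ R.T + S @ S.T, (2 * v - 2) * I + 2 * J)

# Assemble the block matrix of Theorem 20 and check its determinant.
W = np.block([[R, S], [S.T, -R.T]])
n = 2 * v
bound = (2 * n - 2) * (n - 2) ** ((n - 2) // 2)   # the bound of Theorem 17: 160
assert round(abs(np.linalg.det(W))) == bound
```

The circulants commute, so the Gram matrix of $W$ is block diagonal with blocks $(n-2)I + 2J$, exactly the equality case of Theorem 17.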
For an odd prime power $q$, Corollary 19 gives matrices of order $4q^2 + 4q + 2$, while Theorem 20 gives matrices of order $2q^2 + 2q + 2$. To our knowledge, these are the only known constructions of infinite families of maximal determinant matrices in dimensions $n \equiv 2 \bmod 4$. The following result, seemingly due to Cohn, provides a denser family of matrices which come within a factor of $2$ of optimality.
Proposition 21 (Theorem 3, [14]). Let $q \equiv 1 \bmod 4$ be a prime power, and let $Q$ be the matrix obtained from the quadratic residue symbol by $Q_{i,j} = (i-j)^{(q-1)/2}$. Then the matrix $M$ displayed below has order $n = q + 1$ and determinant $n(n-2)^{(n-2)/2}$.

Proof. Since $q \equiv 1 \bmod 4$, we have that $-1$ is a quadratic residue in $\mathbb{F}_q$. So $Q$ is symmetric, and by Proposition 5, $QQ^{\top} = qI - J$. In particular, the eigenvalues of $QQ^{\top}$ are $0$ with multiplicity $1$ and $q$ with multiplicity $q - 1$.
Since $\operatorname{Tr}(Q) = 0$, the eigenvalues of $Q$ are $0$ with multiplicity $1$, and $\pm\sqrt{q}$, each with multiplicity $\frac{q-1}{2}$. Computing $MM^{\top}$, we find that its eigenvalues are $q + 1$ with multiplicity $2$, and $q + 1 \pm 2\sqrt{q}$, each with multiplicity $\frac{q-1}{2}$. Hence
$$\det(MM^{\top}) = (q+1)^2 \bigl((q+1)^2 - 4q\bigr)^{\frac{q-1}{2}} = (q+1)^2 (q-1)^{q-1}.$$
Hence $|\det(M)| = (q+1)(q-1)^{\frac{q-1}{2}}$, within a multiplicative factor $\frac{q+1}{2q} \sim \frac{1}{2}$ of the bound of Theorem 17.

There are several other constructions in the literature for matrices of order $n \equiv 2 \bmod 4$ with large determinant. Brent and Osborn [8] consider submatrices of order $n - 2$ of a Hadamard matrix of order $n$. Brent, Osborn and Smith [9] add two rows and columns to a Hadamard matrix. This work is discussed further in Section 6.1.

We conclude this section with an overview of known results for small orders. Computational results of Djoković and Kotsireas [20, 21] show that a pair of circulant matrices $R, S$ of order $n/2$ satisfying the identity $RR^{\top} + SS^{\top} = (n-2)I + 2J$ exists at every order $n \equiv 2 \bmod 4$ with $n \le 198$ for which $2n - 2$ is a sum of two squares. As in Theorem 20, such matrices easily yield maximal determinant matrices of order $n$. In contrast to the Diophantine condition for matrices meeting the Barba bound, the condition that $2n - 2$ be a sum of two squares is relatively easy to satisfy: the only orders $n \equiv 2 \bmod 4$ with $n \le 100$ for which $2n - 2$ is not a sum of two squares are $n \in \{22, 34, 58, 70, 78, 94\}$.
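The final claim is easy to confirm by direct enumeration:

```python
from math import isqrt

def is_sum_two_squares(m):
    # allows zero parts, e.g. 50 = 49 + 1 and 18 = 9 + 9
    return any(isqrt(m - a * a) ** 2 == m - a * a
               for a in range(isqrt(m) + 1))

# Orders n = 2 mod 4, n <= 100, for which 2n - 2 is not a sum of two squares.
bad = [n for n in range(2, 101, 4) if not is_sum_two_squares(2 * n - 2)]
print(bad)  # -> [22, 34, 58, 70, 78, 94]
```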
Recent work of Chasiotis, Kounias and Farmakis [13] addresses the smallest of these cases, $n = 22$. Having identified two matrices with large determinant, they perform an exhaustive search for potential Gram matrices with determinant exceeding those of their examples, finding 25 such matrices. Each of these is excluded from being a Gram matrix, and thus the maximal determinant is established to be $40 \cdot 20^{10}$, with two inequivalent Gram matrices being realisable. This should be compared to the bound $42 \cdot 20^{10}$. To our knowledge, the maximal determinant at any order $n > 22$ satisfying $n \equiv 2 \bmod 4$ for which $2n - 2$ is not a sum of two squares remains open.

6 Ehlich's analysis of the case n ≡ 3 mod 4

Ehlich develops a bound for maximal determinants when $n \equiv 3 \bmod 4$ through a careful analysis of the minors of such a matrix. These results were previously translated into English and the analysis sharpened by Brent, Osborn, Orrick and Zimmermann [7], but we include our analysis (which differs slightly from theirs) for the sake of completeness.
For each integer $1 \le m \le n$, define the following set of $m \times m$ matrices. The $m \times m$ minors of an $n \times n$ matrix with entries in $\{\pm 1\}$ all belong to $\mathcal{C}_m$, though the set does not consist exclusively of Gram matrices. We will study the maximal determinant of an element of $\mathcal{C}_m$ via inductive methods of the type that we have seen previously. In contrast to previous proofs, the bounds typically cannot be met with equality. Denote by $\gamma_m$ the maximal determinant of an element of $\mathcal{C}_m$.
Proof. The proof is by induction. Suppose that $\gamma_m > (n-3)\gamma_{m-1}$, and let $C$ be the following $(m+1) \times (m+1)$ matrix, chosen such that the top-left $m \times m$ minor has determinant $\gamma_m$, and the last row and column are as displayed. We evaluate the determinant directly.

Proposition 23. If $C_1 \in \mathcal{C}_m$ satisfies $\det(C_1) = \gamma_m$, then the off-diagonal entries of $C_1$ all belong to $\{-1, 3\}$.

Proof. Suppose that $C_1$ is a positive definite matrix in $\mathcal{C}_m$ with some entry $\alpha \notin \{-1, 3\}$, and that $\det(C_1) = \gamma_m$. Then, up to conjugation by a permutation matrix, we may assume that $\alpha$ occupies the final off-diagonal positions of $C_1$, where $|\alpha| \ge 3$, and we further assume that the inequality (10) holds. If it does not, we may permute the final two rows and columns of $C_1$, replacing it with a similar matrix with the required property. By the argument of Proposition 22, both matrices of Equation (10) are positive definite. Then define $C_2$ as follows. We will show that $\det(C_2) \ge \det(C_1)$, contradicting the assumption that $C_1$ has maximal determinant. As before, we use that the determinant is linear in the rows. Denote the rightmost term in the expansion by $R$. We have established that the $(m-1) \times (m-1)$ submatrix at the top-left of $R$ is positive definite. So $R$ is positive definite if and only if its determinant is positive. But the bottom-right $2 \times 2$ submatrix of $R$ is degenerate. So by Fischer's inequality, if $R$ were positive definite we would have $\det(R) \le \det(A) \cdot 0 = 0$, a contradiction. Thus $\det(R) \le 0$.
Discarding $\det(R)$, we obtain an upper bound for $\det(C_1)$. Compute, in the same fashion, the determinant of $C_2$: again the third term vanishes, and the first two may be evaluated as before. Comparing this with (11), and recalling the inequality (10), we find that $\det(C_2) \ge \det(C_1)$, and this inequality is strict if $|\alpha| > 3$. We conclude that an element of maximal determinant in $\mathcal{C}_m$ has entries in the set $\{-1, 3\}$.
Definition 3. Let $J_t$ be the $t \times t$ matrix with all entries equal to $1$. Define $B_t = (n-3)I_t + 3J_t$ to be an Ehlich-block of size $t$. An Ehlich-block matrix is an $n \times n$ matrix with Ehlich-blocks along the diagonal, and all entries outside the Ehlich-blocks equal to $-1$. To each Ehlich-block matrix there is an associated partition of $n$, given by the Ehlich-block sizes.
Theorem 24. If $\det(C_m) = \gamma_m$, then, up to similarity, $C_m$ is an Ehlich-block matrix.
Proof. We follow the same proof strategy as in Proposition 23: from an element of $\mathcal{C}_m$ which is not an Ehlich-block matrix, we explicitly produce a matrix with a larger determinant. Up to simultaneous permutation of rows and columns, we may assume that the matrices have the displayed form. Without loss of generality, we assume that the principal minor obtained by deleting the last row and column of $C_1$ is less than or equal to the corresponding principal minor of $C_2$. (If not, we relabel the rows of $C_1$ and redefine $C_2$.) We evaluate the determinant of $C_1$ using linearity in the rows. As before, the rightmost term in this expression violates Fischer's inequality but has a positive definite submatrix of order $m - 1$, so it has non-positive determinant. Expanding the determinant of $C_2$ in the same way gives an expression in which each term dominates the corresponding term of $\det(C_1)$, completing the proof.
Having established that a matrix of maximal determinant in the class $\mathcal{C}_n$ has the structure of Theorem 24, Ehlich evaluates the determinant in terms of the corresponding partition $n = r_1 + r_2 + \cdots + r_s$, obtaining
$$(n-3)^{n-s} \prod_{i=1}^{s} (n - 3 + 4r_i) \left( 1 - \sum_{i=1}^{s} \frac{r_i}{n - 3 + 4r_i} \right).$$
Via a lengthy and intricate analysis, Ehlich obtains the following explicit result.
2. Each part has size $\lfloor n/f(n) \rfloor$ or $\lceil n/f(n) \rceil$, and this partition is uniquely determined.
For $n \ge 63$, this yields an explicit upper bound on the maximal determinant of an $n \times n$ matrix $M$. In fact, no matrices are known which achieve the bound given by Ehlich. Inspecting the approximations made during the proof, this is perhaps unsurprising: already in the $n = 2$ case of Proposition 22, the approximations are not sharp. A detailed but elementary analysis of the proof of Theorem 25 shows that equality in the bound could be achieved if and only if $n = 7m$. Cohn [15] has shown, using number-theoretic techniques, that the Ehlich bound is integral only when $n$ is of the form $112t^2 \pm 28t + 7$, while Tamura [51] has applied the Hasse-Minkowski criteria for equivalence of quadratic forms to show that the smallest order at which the Ehlich bound could be achieved is at least 511. On the other hand, Ehlich's bound is asymptotically optimal up to a constant factor.
Orrick [41] attributes the solution of the maximal determinant problem at orders $n = 3, 7$ to Williamson, and at $n = 11$ to Ehlich. In the same paper, Orrick determines the maximal determinant at order 15; the corresponding Gram matrix has three Ehlich-blocks of size 4 and one of size 3. Later work of Brent, Osborn, Orrick and Zimmermann [7] computed the maximal determinant at order 19. At both orders, the technique used is a careful refinement of the method of Chadjipantelis, Kounias and Moyssiadis [12]: a candidate matrix with large determinant is identified, its Gram matrix is computed, and all symmetric positive definite matrices with larger determinant are ruled out as Gram matrices. Interestingly, at order 19, the matrices with largest determinant are not Ehlich-block matrices, though they contain $18 \times 18$ submatrices which are in Ehlich-block form. Bounds on the maximal determinant for $n \equiv 3 \bmod 4$ are described in Table 1 at the end of the paper.
6.1 Improved lower bounds for n ≡ 3 mod 4

We conclude with an investigation of direct constructions for $\{\pm 1\}$ matrices of order $n \equiv 3 \bmod 4$ having large determinant. First we describe results of Brent, Osborn and Smith using the probabilistic method. Recall that in Proposition 13, a Hadamard matrix was augmented by a row and column of $1$'s to obtain a matrix with $n \equiv 1 \bmod 4$ and large determinant. Even when using the optimal Hadamard matrices for this method (those with maximal excess), the ratio of the determinant obtained to the bound of Corollary 10 tends to zero as $n$ tends to infinity. A remarkable generalisation of this result was obtained by Brent, Osborn and Smith [9], in which multiple rows and columns are added to a Hadamard matrix. Columns are chosen uniformly at random, while the rows added are chosen deterministically. Via careful analysis, the authors show that the ratio of the determinant to the Hadamard bound does not tend to $0$ as $n$ tends to infinity. The reader is referred to the original paper for the proof of the following result.
Theorem 26 (Theorem 3.6, [9]). If $0 \le d \le 3$, and $h$ is the order of a Hadamard matrix, then there exists a matrix $M$ of order $n = h + d$ such that
$$\left(\frac{2}{e\pi}\right)^{d/2} n^{n/2} \le \det(M) \le n^{n/2}.$$
A more general result is possible in which the parameter $d$ is not bounded, but all results obtained by these methods contain a factor $(2/e\pi)^{d/2}$. Thus results obtained by this method decay exponentially in the distance to the nearest Hadamard matrix, but are independent of the order of the matrix. In the case $n \equiv 3 \bmod 4$, we set $d = 3$ in Theorem 26 to obtain a constant $0.1133$. But this comparison is to the Hadamard bound: as $n \to \infty$ the ratio of the Ehlich and Hadamard bounds tends to $0.4284$, so that for sufficiently large $n$, Theorem 26 shows that whenever there exists a Hadamard matrix of order $n$, there exists a matrix of order $n + 3$ achieving at least $0.264$ of the Ehlich bound. As a special case of this result, we highlight the following.

Proof. If
$$\begin{pmatrix} b & c \\ c & d \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \lambda \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad \text{then} \quad \begin{pmatrix} bJ & cJ \\ cJ & dJ \end{pmatrix} \begin{pmatrix} x_1 j \\ x_2 j \end{pmatrix} = k\lambda \begin{pmatrix} x_1 j \\ x_2 j \end{pmatrix},$$
where $j$ is the all-ones vector of length $k$. Since $M - aI_{2k}$ clearly has rank $2$, all other eigenvalues are zero. The eigenvalues of $M$ are of the form $a + \mu$, where $\mu$ is an eigenvalue of $M - aI$, so the result follows. The determinant evaluation follows by identifying the sum and product of the eigenvalues with the trace and determinant of the $2 \times 2$ matrix, respectively.
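The eigenvalue computation in this proof is easy to verify numerically. In the sketch below, $M = aI_{2k} + \begin{pmatrix} bJ & cJ \\ cJ & dJ \end{pmatrix}$ (the block shape suggested by the proof), whose spectrum is $a$ with multiplicity $2k - 2$ together with $a + k\lambda$ for each eigenvalue $\lambda$ of the $2 \times 2$ matrix.

```python
import numpy as np

def block_matrix(a, b, c, d, k):
    J = np.ones((k, k))
    return a * np.eye(2 * k) + np.block([[b * J, c * J], [c * J, d * J]])

a, b, c, d, k = 5, 2, -1, 3, 4
M = block_matrix(a, b, c, d, k)
small = np.array([[b, c], [c, d]])

# Spectrum: a with multiplicity 2k - 2, plus a + k*lambda for each
# eigenvalue lambda of the 2 x 2 matrix.
expected = sorted([a] * (2 * k - 2) + [a + k * lam for lam in np.linalg.eigvalsh(small)])
assert np.allclose(np.linalg.eigvalsh(M), expected)

# det(M) = a^(2k-2) * (a^2 + a*k*(b + d) + k^2*(b*d - c^2))
det_formula = a ** (2 * k - 2) * (a * a + a * k * (b + d) + k * k * (b * d - c * c))
assert np.isclose(np.linalg.det(M), det_formula)
```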
The proof of the next result is identical for both displayed matrices. The matrices of Corollary 19 are of the form of $M_1$, while those of Theorem 20, and those constructed by Djoković and Kotsireas, are of the form of $M_2$.
In Proposition 30 the result appears asymmetric in $r$ and $s$. In fact, from a pair of matrices $R, S$ satisfying $RR^{\top} + SS^{\top} = (2k-2)I + 2J$, four different determinants are obtained, depending on the row sum of the matrix on the principal diagonal, which is drawn from $\{\pm r, \pm s\}$. For sufficiently large values of $k$, the terms $4k^2r^2 - 16k^2r$ dominate, and the determinant is maximised when $r$ is large and negative.