Discrepancy of Matrices of Zeros and Ones

R. A. Brualdi, J. Shen

Abstract


Let $m$ and $n$ be positive integers, and let $R=(r_1,\ldots, r_m)$ and $ S=(s_1,\ldots, s_n)$ be non-negative integral vectors. Let ${\cal A} (R,S)$ be the set of all $m \times n$ $(0,1)$-matrices with row sum vector $R$ and column sum vector $S$, and let $\bar A$ be the $m \times n$ $(0,1)$-matrix in which, for each $i$, $1\le i \le m$, row $i$ consists of $r_i$ $1$'s followed by $n-r_i$ $0$'s. If $S$ is monotone, the discrepancy $d(A)$ of $A$ is the number of positions in which $\bar A$ has a $1$ and $A$ has a $0$. Equivalently, it is the number of $1$'s in $\bar A$ that must be shifted within their rows to obtain $A$. In this paper, we study the minimum and maximum of $d(A)$ over all matrices $A \in {\cal A} (R,S)$. We completely solve the minimum discrepancy problem by giving an explicit formula for it in terms of $R$ and $S$. On the other hand, finding an explicit formula for the maximum discrepancy turns out to be very difficult; instead, we give an algorithm to compute it.
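To make the definition concrete, here is a minimal Python sketch (the function name `discrepancy` and the sample matrix are ours, chosen for illustration): it builds nothing beyond the counting rule above, tallying positions where the canonical matrix $\bar A$ has a $1$ but $A$ has a $0$.

```python
def discrepancy(A, R):
    # d(A): count positions (i, j) where the canonical matrix bar_A
    # (row i = r_i ones followed by n - r_i zeros) has a 1 and A has a 0.
    # The first r_i columns of row i are exactly where bar_A has its 1's.
    return sum(1
               for i, r in enumerate(R)
               for j in range(r)
               if A[i][j] == 0)

# Example: R = (2, 1), so bar_A = [[1, 1, 0], [1, 0, 0]].
# A below has row sums (2, 1) and column sums (1, 1, 1).
A = [[1, 1, 0],
     [0, 0, 1]]
print(discrepancy(A, (2, 1)))  # 1: the single 1 in row 2 was shifted once
```

For $\bar A$ itself the count is zero, matching the intuition that $d(A)$ measures how far $A$'s rows are from being left-justified.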
