1$ and
$q>1$. This does not, however, generalize to arbitrary posets obtained by
removing some ranks from $\operatorname*{Rect}\left( p,q\right) $ (indeed
$\operatorname*{ord} R_{P}$ is infinite for some posets of this type,
cf. Section \ref{sect.negres}).
\end{remark}
\section{Reduced labellings}
The proof that we give for Theorem \ref{thm.rect.ord} and Theorem
\ref{thm.rect.antip.general} is largely inspired by the proof of
Zamolodchikov's conjecture in case $AA$ given by Volkov in \cite{volkov}%
\footnote{``Case $AA$'' refers to the Cartesian product of the Dynkin diagrams
of two type-$A$ root systems. This, of course, is a rectangle, just as in our
Theorem \ref{thm.rect.ord}.}. This is not very surprising because the orbit of
a $\mathbb{K}$-labelling under birational rowmotion appears superficially
similar to a solution of a $Y$-system of type $AA$. Yet we do not see a way to
derive Theorem \ref{thm.rect.ord} from Zamolodchikov's conjecture or vice
versa. (Here the $Y$-system has an obvious
``reducibility property'': it consists of two decoupled subsystems -- a
property not obviously satisfied in the case of birational rowmotion.)
The first step towards our proof of Theorem \ref{thm.rect.ord} is to restrict attention to
so-called \textit{reduced labellings}, which are not much less general than arbitrary
labellings: many results can be proven for all labellings by first proving them for
reduced labellings, and then extending them to general labellings by fairly simple
arguments. We will use this tactic in our proof of Theorem \ref{thm.rect.ord}. A slightly
different way to reduce the case of a general labelling to that of a reduced one is taken in
\cite[\S 4]{einstein-propp}.
\begin{definition}
A labelling $f\in\mathbb{K}^{\widehat{\operatorname*{Rect}%
\left( p,q\right) }}$ is said to be \textit{reduced} if $f\left( 0\right)
=f\left( 1\right) =1$. The set of all reduced labellings $f\in
\mathbb{K}^{\widehat{\operatorname*{Rect}\left( p,q\right) }}$ will be
identified with $\mathbb{K}^{\operatorname*{Rect}\left( p,q\right) }$ in the
obvious way.
Note that fixing the values of $f\left( 0\right) $ and $f\left( 1\right) $
like this makes $f$ ``less generic'', but still the operator
$R_{\operatorname*{Rect}\left( p,q\right) }$ restricts to a rational map
from the variety of all reduced labellings $f\in\mathbb{K}%
^{\widehat{\operatorname*{Rect}\left( p,q\right) }}$ to itself. (This is
because the operator $R_{\operatorname*{Rect}\left( p,q\right) }$ does not
change the values at $0$ and $1$, and does not degenerate from setting
$f\left( 0\right) =f\left( 1\right) =1$.)
\end{definition}
\begin{proposition}
\label{prop.rect.reduce}
Assume that almost every (in the Zariski sense)
reduced labelling $f\in\mathbb{K}^{\widehat{\operatorname*{Rect}\left(
p,q\right) }}$ satisfies $R_{\operatorname*{Rect}\left( p,q\right) }%
^{p+q}f=f$. Then, $\operatorname*{ord}\left( R_{\operatorname*{Rect}\left(
p,q\right) }\right) =p+q$.
\end{proposition}
\begin{proof}
Let $g\in
\mathbb{K}^{\widehat{\operatorname*{Rect}\left( p,q\right) }}$ be any
$\mathbb{K}$-labelling of $\operatorname*{Rect}\left( p,q\right) $ which is
sufficiently generic for $R_{\operatorname*{Rect}\left( p,q\right) }^{p+q}g$
to be well-defined.
We can easily find a $\left( p+q+1\right) $-tuple $\left(
a_{0},a_{1},...,a_{p+q}\right) \in\left( \mathbb{K}^{\times}\right)
^{p+q+1}$ such that \newline
$\left( a_{0},a_{1},...,a_{p+q}\right) \flat g$ is a
reduced $\mathbb{K}$-labelling (in fact, set $a_{0}=\dfrac{1}{g\left(
0\right) }$ and $a_{p+q}=\dfrac{1}{g\left( 1\right) }$, and choose all
other $a_{i}$ arbitrarily). Corollary \ref{cor.Rl.scalmult} then yields%
\begin{equation}
R_{\operatorname*{Rect}\left( p,q\right) }^{p+q}\left( \left( a_{0}%
,a_{1},...,a_{p+q}\right) \flat g\right) =\left( a_{0},a_{1},...,a_{p+q}%
\right) \flat\left( R_{\operatorname*{Rect}\left( p,q\right) }%
^{p+q}g\right) . \label{pf.rect.reduce.short.2}%
\end{equation}
We have assumed that almost every (in the Zariski sense) reduced labelling
$f\in\mathbb{K}^{\widehat{\operatorname*{Rect}\left( p,q\right) }}$
satisfies $R_{\operatorname*{Rect}\left( p,q\right) }^{p+q}f=f$. Thus, every
reduced labelling $f\in\mathbb{K}^{\widehat{\operatorname*{Rect}\left(
p,q\right) }}$ for which $R_{\operatorname*{Rect}\left( p,q\right) }%
^{p+q}f$ is well-defined satisfies $R_{\operatorname*{Rect}\left( p,q\right)
}^{p+q}f=f$ (because $R_{\operatorname*{Rect}\left( p,q\right) }^{p+q}f=f$
can be written as an equality between rational functions in the labels of $f$,
and thus it must hold everywhere if it holds on a Zariski-dense open subset).
Applying this to $f=\left( a_{0},a_{1},...,a_{p+q}\right) \flat g$, we
obtain that $R_{\operatorname*{Rect}\left( p,q\right) }^{p+q}\left( \left(
a_{0},a_{1},...,a_{p+q}\right) \flat g\right) =\left( a_{0},a_{1}%
,...,a_{p+q}\right) \flat g$. Thus,%
\[
\left( a_{0},a_{1},...,a_{p+q}\right) \flat g=R_{\operatorname*{Rect}%
\left( p,q\right) }^{p+q}\left( \left( a_{0},a_{1},...,a_{p+q}\right)
\flat g\right) =\left( a_{0},a_{1},...,a_{p+q}\right) \flat\left(
R_{\operatorname*{Rect}\left( p,q\right) }^{p+q}g\right)
\]
(by (\ref{pf.rect.reduce.short.2})).
We can cancel the \textquotedblleft$\left( a_{0},a_{1},...,a_{p+q}\right)
\flat$\textquotedblright\ from both sides of this equality (because all the
$a_{i}$ are nonzero), and thus obtain $g=R_{\operatorname*{Rect}\left(
p,q\right) }^{p+q}g$.
Now, forget that we fixed $g$. We thus have proven that
$g=R_{\operatorname*{Rect}\left( p,q\right) }^{p+q}g$ holds for every
$\mathbb{K}$-labelling $g\in\mathbb{K}^{\widehat{\operatorname*{Rect}\left(
p,q\right) }}$ of $\operatorname*{Rect}\left( p,q\right) $ which is
sufficiently generic for $R_{\operatorname*{Rect}\left( p,q\right) }^{p+q}g$
to be well-defined. In other words, $R_{\operatorname*{Rect}\left(
p,q\right) }^{p+q}=\operatorname*{id}$ as partial maps. Hence,
$\operatorname*{ord}\left( R_{\operatorname*{Rect}\left( p,q\right)
}\right) \mid p+q$.
On the other hand, Lemma~\ref{lem.ord.poor-mans-projord} yields
that $\operatorname*{ord}\left(
R_{\operatorname*{Rect}\left( p,q\right) }\right) $ is divisible by
$\left( p+q-1\right) +1=p+q$. Combined with $\operatorname*{ord}\left(
R_{\operatorname*{Rect}\left( p,q\right) }\right) \mid p+q$, this yields
$\operatorname*{ord}\left( R_{\operatorname*{Rect}\left( p,q\right)
}\right) =p+q$.
\end{proof}
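The periodicity underlying Proposition \ref{prop.rect.reduce} is easy to observe experimentally. The following minimal Python sketch (all names in it are ours) sweeps the birational toggles over the elements of $\operatorname*{Rect}\left( 2,2\right) $ in a linear-extension order, with the adjoined elements $0$ and $1$ held at the fixed label $1$ as befits a reduced labelling. Since toggling in the reverse order yields the inverse map, which has the same order, the direction of the sweep does not affect this check.

```python
from fractions import Fraction

def rowmotion(f):
    """One sweep of birational toggles over a labelling f of the full
    rectangle Rect(p, q), toggling from top to bottom.  The adjoined
    elements 0 and 1 carry the fixed label 1 (reduced labelling)."""
    f = dict(f)
    for (i, k) in sorted(f, key=lambda v: v[0] + v[1], reverse=True):
        down = [f[w] for w in [(i - 1, k), (i, k - 1)] if w in f]
        up = [f[w] for w in [(i + 1, k), (i, k + 1)] if w in f]
        down = down or [Fraction(1)]  # only (1, 1) covers the element 0
        up = up or [Fraction(1)]      # only (p, q) is covered by the element 1
        f[(i, k)] = sum(down) / (f[(i, k)] * sum(Fraction(1) / x for x in up))
    return f

# A generic reduced labelling of Rect(2, 2) returns to itself after
# exactly p + q = 4 sweeps:
f0 = {(1, 1): Fraction(2), (1, 2): Fraction(3),
      (2, 1): Fraction(5), (2, 2): Fraction(7)}
g = f0
for _ in range(4):
    g = rowmotion(g)
assert g == f0 and rowmotion(f0) != f0
```

Fewer than four sweeps yield a labelling distinct from the original, in accordance with $\operatorname*{ord}\left( R_{\operatorname*{Rect}\left( p,q\right) }\right) =p+q$.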
Let us also formulate the particular case of Theorem
\ref{thm.rect.antip.general} for reduced labellings, which we will use as a stepping stone to
the more general theorem.
\begin{theorem}
\label{thm.rect.antip}
Let $f\in\mathbb{K}^{\widehat{\operatorname*{Rect}\left( p,q\right) }}$ be
reduced, and let $\left( i,k\right) \in\operatorname*{Rect}\left( p,q\right) $.
Assume that $R_{\operatorname*{Rect}\left( p,q\right) }^{\ell}f$ is
well-defined for every $\ell\in\left\{ 0,1,...,i+k-1\right\} $. Then,
\[
f\left( \left( p+1-i,q+1-k\right) \right) =\dfrac{1}{\left(
R_{\operatorname*{Rect}\left( p,q\right) }^{i+k-1}f\right) \left( \left(
i,k\right) \right) }.
\]
\end{theorem}
% We will prove this before we prove the general form (Theorem
% \ref{thm.rect.antip.general}), and in fact we are going to derive Theorem
% \ref{thm.rect.antip.general} from its particular case, Theorem
% \ref{thm.rect.antip}. We are not going to encumber this section with the
% derivation; its details can be found in Section \ref{sect.rect.finish}.
\section{The Grassmannian parametrization: statements}
In this section, we introduce the main actor in our proof of
Theorem \ref{thm.rect.ord}: an assignment of a reduced $\mathbb{K}$-labelling
of $\operatorname*{Rect}\left( p,q\right) $, denoted $\operatorname*{Grasp}%
\nolimits_{j}A$, to any integer $j$ and almost any matrix $A\in\mathbb{K}%
^{p\times\left( p+q\right) }$ (Definition \ref{def.Grasp}). This assignment
will give us a family of $\mathbb{K}$-labellings of $\operatorname*{Rect}%
\left( p,q\right) $ which is large enough to cover almost all reduced
$\mathbb{K}$-labellings of $\operatorname*{Rect}\left( p,q\right) $ (Proposition
\ref{prop.Grasp.generic}), while at the same time
the construction of this assignment makes it easy to track the behavior of the
$\mathbb{K}$-labellings in this family through multiple iterations of
birational rowmotion. Indeed, we will see that birational rowmotion has a very
simple effect on the reduced $\mathbb{K}$-labelling $\operatorname*{Grasp}%
\nolimits_{j}A$ (Proposition \ref{prop.Grasp.GraspR}).
% In this section, we introduce the main actor in our proof of Theorem \ref{thm.rect.ord}: an
% assignment of a reduced $\mathbb{K}$-labelling of $\operatorname*{Rect}\left( p,q\right) $,
% denoted $\operatorname*{Grasp}% \nolimits_{j}A$, to any integer $j$ and almost any matrix
% $A\in\mathbb{K}^{p\times\left( p+q\right) }$ (Definition \ref{def.Grasp}). This assignment
% will give us a family of $\mathbb{K}$-labellings of $\operatorname*{Rect}% \left( p,q\right)
% $ which is large enough to cover almost all reduced $\mathbb{K}$-labellings of
% $\operatorname*{Rect}\left( p,q\right) $ (Proposition \ref{prop.Grasp.generic}), while at
% the same time the construction of this assignment makes it easy to track the behavior of the
% $\mathbb{K}$-labellings in this family through multiple iterations of birational
% rowmotion. Indeed, we will see that birational rowmotion has a very simple effect on the
% reduced $\mathbb{K}$-labelling $\operatorname*{Grasp}% \nolimits_{j}A$ (Proposition
% \ref{prop.Grasp.GraspR}).
\begin{definition}
\label{def.minors} Let $A\in
\mathbb{K}^{u\times v}$ be a $u\times v$-matrix for some nonnegative integers
$u$ and $v$.
%%% (This means, at least in this paper, a matrix with $u$ rows and $v$ columns.)
\textbf{(a)} For every $i\in\left\{ 1,2,...,v\right\} $, let $A_{i}$ denote
the $i$-th column of $A$.
\textbf{(b)} Moreover, we extend this definition to all $i\in\mathbb{Z}$ as
follows: For every $i\in\mathbb{Z}$, let%
\[
A_{i}=\left( -1\right) ^{\left( u-1\right) \left( i-i^{\prime}\right)
/ v}\cdot A_{i^{\prime}},
\]
where $i^{\prime}$ is the element of $\left\{ 1,2,...,v\right\} $ which is
congruent to $i$ modulo $v$. (Thus, $A_{v+i}=\left( -1\right) ^{u-1}A_{i}$
for every $i\in\mathbb{Z}$. Consequently, the sequence $\left( A_{i}\right)
_{i\in\mathbb{Z}}$ is periodic with period dividing $2v$, and if $u$ is odd,
the period also divides $v$.)
\textbf{(c)} For any two integers $a$ and $b$ satisfying $a \leq b$, we let $A\left[
a:b\right] $ be the matrix whose columns (from left to right) are $A_{a}, A_{a+1}, \ldots,A_{b-1} $.
\textbf{(d)} For any four integers $a$, $b$, $c$ and $d$ satisfying $a\leq b$
and $c\leq d$, we let $A\left[ a:b\mid c:d\right] $ be the matrix whose
columns (from left to right) are $A_{a}$, $A_{a+1}$, $...$, $A_{b-1}$, $A_{c}%
$, $A_{c+1}$, $...$, $A_{d-1}$. (This matrix has $b-a+d-c$
columns.\footnote{It is not always a submatrix of $A$. Its columns are columns of
$A$ multiplied with $1$ or $-1$; they can appear several times and need not
appear in the same order as they appear in $A$.})\footnote{We notice that
we allow the case $a=b$. In this case, obviously, the columns of the matrix
$A\left[ a:b\mid c:d\right]$ are $A_c$, $A_{c+1}$, $...$, $A_{d-1}$, so we have
$A\left[a:b\mid c:d\right] = A\left[c:d\right]$. Similarly, the case $c=d$ is allowed.}
When $b-a+d-c=u$,
%(note: not just $b-a+d-c\equiv
%u\operatorname{mod}v$),
this matrix $A\left[ a:b\mid c:d\right] $ is a
square matrix, and thus has a determinant $\det\left( A\left[ a:b\mid
c:d\right] \right) $.
\textbf{(e)} We extend the definition of $\det\left( A\left[ a:b\mid
c:d\right] \right) $ to encompass the case when $b=a-1$ or $d=c-1$, by
setting $\det\left( A\left[ a:b\mid c:d\right] \right) =0$ in this case
(although the matrix $A\left[ a:b\mid c:d\right] $ itself is not defined in
this case).
\end{definition}
\begin{example}
If $A =\left(
\begin{array}
[c]{ccc}%
3 & 5 & 7\\
4 & 1 & 9
\end{array}
\right) $, then $A_{5}=\left( -1\right) ^{\left( 2-1\right) \left( 5-2\right)
/3}\cdot A_{2}=-A_{2}=-\left(
\begin{array}
[c]{c}%
5\\
1
\end{array}
\right) =\left(
\begin{array}
[c]{c}%
-5\\
-1
\end{array}
\right) $ and $A_{-4}=\left( -1\right) ^{\left( 2-1\right) \left(
\left( -4\right) -2\right) /3}\cdot A_{2}=A_{2}=\left(
\begin{array}
[c]{c}%
5\\
1
\end{array}
\right) $.
If $A =\left(
\begin{array}
[c]{cc}%
1 & 2\\
3 & 2\\
-5 & 4
\end{array}
\right) $, then $A_{0}=\left( -1\right) ^{\left( 3-1\right) \left( 0-2\right)
/2}\cdot A_{2}=A_{2}=\left(
\begin{array}
[c]{c}%
2\\
2\\
4
\end{array}
\right) $.
\end{example}
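The sign in Definition \ref{def.minors} \textbf{(b)} is easy to get wrong in hand computations. Here is a short Python sketch (the helper name \texttt{col} is ours) that mechanizes the definition and reproduces the computations of this example.

```python
def col(A, i):
    """Column A_i of the matrix A (a list of rows), extended to all
    integers i by scaling the representative column A_{i'} with
    (-1)^((u-1)(i-i')/v), where i' in {1, ..., v} is congruent to i
    modulo v (Definition (b))."""
    u, v = len(A), len(A[0])
    ip = (i - 1) % v + 1                 # the representative i' in {1, ..., v}
    exponent = (u - 1) * ((i - ip) // v)
    sign = -1 if exponent % 2 else 1
    return [sign * row[ip - 1] for row in A]

A = [[3, 5, 7],
     [4, 1, 9]]
print(col(A, 5))    # -> [-5, -1], that is, -A_2
print(col(A, -4))   # -> [5, 1], that is, A_2
```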
\begin{remark}
Some parts of Definition \ref{def.minors} might look accidental and haphazard;
here are some motivations and aide-memoires:
The choice of sign in Definition \ref{def.minors} \textbf{(b)} is not only the
``right'' one for what we are going to do below, but also naturally appears in
\cite[Remark 3.3]{postnikov}. It guarantees, among other things, that if
$A\in\mathbb{R}^{u\times v}$ is totally nonnegative, then the matrix having
columns $A_{1+i}$, $A_{2+i}$, $...$, $A_{v+i}$ is totally nonnegative for
every $i\in\mathbb{Z}$.
The notation $A\left[ a:b\mid c:d\right] $ in Definition \ref{def.minors}
\textbf{(d)} borrows from Python's notation $\left[ x:y\right] $ for taking
indices from the interval $\left\{ x,x+1,...,y-1\right\} $.
The convention to define $\det\left( A\left[ a:b\mid c:d\right] \right)$
as $0$ in Definition \ref{def.minors} \textbf{(e)} can be motivated using
exterior algebra as follows: If we identify $\wedge^{u}\left( \mathbb{K}%
^{u}\right) $ with $\mathbb{K}$ by equating with $1\in\mathbb{K}$ the wedge
product $e_{1}\wedge e_{2}\wedge...\wedge e_{u}$ of the standard basis
vectors, then $\det\left( A\left[ a:b\mid c:d\right] \right) =A_{a}\wedge
A_{a+1}\wedge...\wedge A_{b-1}\wedge A_{c}\wedge A_{c+1}\wedge...\wedge
A_{d-1}$; this belongs to the product of $\wedge^{b-a}\left( \mathbb{K}%
^{u}\right) $ with $\wedge^{d-c}\left( \mathbb{K}^{u}\right) $ in
$\wedge^{u}\left( \mathbb{K}^{u}\right) $. If $b=a-1$, then this product is
$0$ (since $\wedge^{b-a}\left( \mathbb{K}^{u}\right) =\wedge^{-1}\left(
\mathbb{K}^{u}\right) =0$), so $\det\left( A\left[ a:b\mid c:d\right]
\right) $ has to be $0$ in this case.
\end{remark}
The following four propositions are all straightforward observations.
\begin{proposition}
\label{prop.minors.0}Let $A\in\mathbb{K}^{u\times v}$.
Let $a\leq b$ and $c\leq d$ be four integers satisfying
$b-a+d-c=u$. Assume that some element of the interval $\left\{
a,a+1,...,b-1\right\} $ is congruent to some element of the interval
$\left\{ c,c+1,...,d-1\right\} $ modulo $v$. Then, $\det\left( A\left[
a:b\mid c:d\right] \right) =0$.
\end{proposition}
\begin{proof}
The assumption yields that the matrix $A\left[ a:b\mid c:d\right] $ has two columns which
are proportional to each other by a factor of $\pm1$. Hence, this matrix has determinant
$0$.
\end{proof}
\begin{proposition}
\label{prop.minors.antisymm}Let $A\in \mathbb{K}^{u\times v}$. Let $a\leq b$ and $c\leq d$ be four integers satisfying
$b-a+d-c=u$. Then,%
\[
\det\left( A\left[ a:b\mid c:d\right] \right) =\left( -1\right)
^{\left( b-a\right) \left( d-c\right) }\det\left( A\left[ c:d\mid
a:b\right] \right) .
\]
\end{proposition}
\begin{proof}
This follows from the fact
that permuting the columns of a matrix multiplies its determinant by the sign
of the corresponding permutation.
\end{proof}
\begin{proposition}
\label{prop.minors.complete}Let $A\in
\mathbb{K}^{u\times v}$. Let $a$, $b_{1}$, $b_{2}$ and $c$ be four integers satisfying
$a\leq b_{1}\leq c$ and $a\leq b_{2}\leq c$. Then,%
\[
A\left[ a:b_{1}\mid b_{1}:c\right] =A\left[ a:b_{2}\mid b_{2}:c\right] .
\]
\end{proposition}
\begin{proof}
Both matrices $A\left[
a:b_{1}\mid b_{1}:c\right] $ and $A\left[ a:b_{2}\mid b_{2}:c\right] $ are
simply the matrix with columns $A_{a}$, $A_{a+1}$, $...$, $A_{c-1}$.
\end{proof}
\begin{proposition}
\label{prop.minors.period}Let $A\in
\mathbb{K}^{u\times v}$. Let $a\leq b$ and $c\leq d$ be four integers satisfying
$b-a+d-c=u$. Then
\textbf{(a)}
$\det\left( A\left[ v+a:v+b\mid v+c:v+d\right] \right) =\det\left(
A\left[ a:b\mid c:d\right] \right)$.
\smallskip
\textbf{(b)}
$\det\left( A\left[ a:b\mid v+c:v+d\right] \right) =\left( -1\right)
^{\left( u-1\right) \left( d-c\right) }\det\left( A\left[ a:b\mid
c:d\right] \right)$.
\smallskip
\textbf{(c)}
$\det\left( A\left[ a:b\mid v+c:v+d\right] \right) =\det\left( A\left[
c:d\mid a:b\right] \right)$.
\smallskip
\end{proposition}
\begin{proof}
Straightforward from the definition and basic properties of the determinant.
% Nothing about this
% is anything more than trivial. Part \textbf{(a)} and \textbf{(b)} follow from
% the fact that $A_{v+i}=\left( -1\right) ^{u-1}A_{i}$ for every
% $i\in\mathbb{Z}$ (which is owed to Definition \ref{def.minors} \textbf{(b)})
% and the multilinearity of the determinant. The proof of part \textbf{(c)}
% additionally uses Proposition \ref{prop.minors.antisymm} and a careful sign
% computation (notice that $\left( -1\right) ^{\left( d-c-1\right) \left(
% d-c\right) }=1$ because $\left( d-c-1\right) \left( d-c\right) $ is even,
% no matter what the parities of $c$ and $d$ are). All details can be easily
% filled in by the reader.
\end{proof}
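All of Propositions \ref{prop.minors.0}, \ref{prop.minors.antisymm} and \ref{prop.minors.period} can be verified mechanically. The following Python sketch (the helpers \texttt{col}, \texttt{block} and \texttt{det} are ours, implementing Definition \ref{def.minors}; exact integer arithmetic throughout) checks them on a small example.

```python
def col(A, i):
    """Column A_i of A, extended to all integers i as in Definition (b)."""
    u, v = len(A), len(A[0])
    ip = (i - 1) % v + 1                 # representative of i in {1, ..., v}
    sign = -1 if (u - 1) * ((i - ip) // v) % 2 else 1
    return [sign * row[ip - 1] for row in A]

def block(A, a, b, c, d):
    """The matrix A[a:b | c:d] with columns A_a, ..., A_{b-1}, A_c, ..., A_{d-1}."""
    cols = [col(A, i) for i in list(range(a, b)) + list(range(c, d))]
    return [list(r) for r in zip(*cols)]  # transpose columns into rows

def det(M):
    """Exact determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

A = [[3, 5, 7], [4, 1, 9]]           # u = 2, v = 3
u, v, a, b, c, d = 2, 3, 1, 2, 2, 3  # b - a + d - c = u
# Proposition prop.minors.period, parts (a), (b), (c):
assert det(block(A, v + a, v + b, v + c, v + d)) == det(block(A, a, b, c, d))
assert det(block(A, a, b, v + c, v + d)) \
       == (-1) ** ((u - 1) * (d - c)) * det(block(A, a, b, c, d))
assert det(block(A, a, b, v + c, v + d)) == det(block(A, c, d, a, b))
# Proposition prop.minors.antisymm:
assert det(block(A, a, b, c, d)) \
       == (-1) ** ((b - a) * (d - c)) * det(block(A, c, d, a, b))
# Proposition prop.minors.0: columns 1 and 4 are congruent modulo v = 3.
assert det(block(A, 1, 2, 4, 5)) == 0
```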
\begin{definition}
\label{def.Grasp}Let $p$ and $q$ be two positive
integers. Let $A\in\mathbb{K}^{p\times\left( p+q\right) }$. Let
$j\in\mathbb{Z}$.
\textbf{(a)} We define a map $\operatorname*{Grasp}\nolimits_{j}A\in
\mathbb{K}^{\operatorname*{Rect}\left( p,q\right) }$ by%
\begin{equation}
\left( \operatorname*{Grasp}\nolimits_{j}A\right) \left( \left(
i,k\right) \right) =\dfrac{\det\left( A\left[ j+1:j+i\mid
j+i+k-1:j+p+k\right] \right) }{\det\left( A\left[ j:j+i\mid
j+i+k:j+p+k\right] \right) }. \label{def.Grasp.def}%
\end{equation}
This is well-defined when the matrix $A$ is sufficiently generic (in the
sense of Zariski topology), since the matrix $A\left[ j:j+i\mid
j+i+k:j+p+k\right] $ is obtained by picking $p$ distinct columns out of $A$,
some possibly multiplied with $\left( -1\right) ^{p-1}$. This map
$\operatorname*{Grasp}\nolimits_{j}A$ will be considered as a reduced
$\mathbb{K}$-labelling of $\operatorname*{Rect}\left( p,q\right) $ (since we
are identifying the set of all reduced labellings $f\in\mathbb{K}%
^{\widehat{\operatorname*{Rect}\left( p,q\right) }}$ with $\mathbb{K}%
^{\operatorname*{Rect}\left( p,q\right) }$).
\textbf{(b)} It will be handy to extend the map $\operatorname*{Grasp}%
\nolimits_{j}A$ to a slightly larger domain by blindly following
(\ref{def.Grasp.def}) (and using Definition \ref{def.minors} \textbf{(e)}),
accepting the fact that outside $\left\{ 1,2,...,p\right\} \times\left\{
1,2,...,q\right\} $ its values can be \textquotedblleft
infinity\textquotedblright\ (whatever this means):%
\begin{align*}
\left( \operatorname*{Grasp}\nolimits_{j}A\right) \left( \left(
0,k\right) \right) & =0\ \ \ \ \ \ \ \ \ \ \text{for all }k\in\left\{
1,2,...,q\right\} ;\\
\left( \operatorname*{Grasp}\nolimits_{j}A\right) \left( \left(
p+1,k\right) \right) & =\infty\ \ \ \ \ \ \ \ \ \ \text{for all }%
k\in\left\{ 1,2,...,q\right\} ;\\
\left( \operatorname*{Grasp}\nolimits_{j}A\right) \left( \left(
i,0\right) \right) & =0\ \ \ \ \ \ \ \ \ \ \text{for all }i\in\left\{
1,2,...,p\right\} ;\\
\left( \operatorname*{Grasp}\nolimits_{j}A\right) \left( \left(
i,q+1\right) \right) & =\infty\ \ \ \ \ \ \ \ \ \ \text{for all }%
i\in\left\{ 1,2,...,p\right\} .
\end{align*}
\end{definition}
The term \textquotedblleft$\operatorname*{Grasp}$\textquotedblright%
\ is meant to suggest \textquotedblleft Grassmannian
parametrization\textquotedblright, as we will later parametrize (generic)
reduced labellings on $\operatorname*{Rect}\left( p,q\right) $ by matrices
via this map $\operatorname*{Grasp}\nolimits_{0}$. The reason for the word
\textquotedblleft Grassmannian\textquotedblright\ is that, while we have
defined $\operatorname*{Grasp}\nolimits_{j}$ as a rational map from the matrix
space $\mathbb{K}^{p\times\left( p+q\right) }$, it actually is not defined
outside of the Zariski-dense open subset $\mathbb{K}_{\operatorname*{rk}%
=p}^{p\times\left( p+q\right) }$ of $\mathbb{K}^{p\times\left( p+q\right)
}$ formed by all matrices whose rank is $p$, on which it factors through the
quotient of $\mathbb{K}_{\operatorname*{rk}=p}^{p\times\left( p+q\right) }$
by the left multiplication action of $\operatorname*{GL}\nolimits_{p}%
\mathbb{K}$ (because it is easy to see that $\operatorname*{Grasp}%
\nolimits_{j}A$ is invariant under row transformations of $A$); this quotient
is a well-known avatar of the Grassmannian.
The formula (\ref{def.Grasp.def}) is inspired by the $Y_{ijk}$ of Volkov's
\cite{volkov}; similar expressions (in a different context) also appear in
\cite[Theorem 4.21]{kirillov-intro}.
\begin{example}
If $p=2$, $q=2$ and $A=\left(
\begin{array}
[c]{cccc}%
a_{11} & a_{12} & a_{13} & a_{14}\\
a_{21} & a_{22} & a_{23} & a_{24}%
\end{array}
\right) $, then%
\[
\left( \operatorname*{Grasp}\nolimits_{0}A\right) \left( \left(
1,1\right) \right) =\dfrac{\det\left( A\left[ 1:1\mid1:3\right] \right)
}{\det\left( A\left[ 0:1\mid2:3\right] \right) }=\dfrac{\det\left(
\begin{array}
[c]{cc}%
a_{11} & a_{12}\\
a_{21} & a_{22}%
\end{array}
\right) }{\det\left(
\begin{array}
[c]{cc}%
-a_{14} & a_{12}\\
-a_{24} & a_{22}%
\end{array}
\right) }=\dfrac{a_{11}a_{22}-a_{12}a_{21}}{a_{12}a_{24}-a_{14}a_{22}}%
\]
and%
\[
\left( \operatorname*{Grasp}\nolimits_{1}A\right) \left( \left(
1,2\right) \right) =\dfrac{\det\left( A\left[ 2:2\mid3:5\right] \right)
}{\det\left( A\left[ 1:2\mid4:5\right] \right) }=\dfrac{\det\left(
\begin{array}
[c]{cc}%
a_{13} & a_{14}\\
a_{23} & a_{24}%
\end{array}
\right) }{\det\left(
\begin{array}
[c]{cc}%
a_{11} & a_{14}\\
a_{21} & a_{24}%
\end{array}
\right) }=\dfrac{a_{13}a_{24}-a_{14}a_{23}}{a_{11}a_{24}-a_{14}a_{21}}.
\]
\end{example}
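For concreteness, here is a Python sketch (all helper names are ours, repeated so that the snippet is self-contained) that evaluates (\ref{def.Grasp.def}) on a concrete integer matrix and confirms the two closed formulas of this example.

```python
from fractions import Fraction

def col(A, i):
    """Column A_i of A, extended to all integers i as in Definition (b)."""
    u, v = len(A), len(A[0])
    ip = (i - 1) % v + 1
    sign = -1 if (u - 1) * ((i - ip) // v) % 2 else 1
    return [sign * row[ip - 1] for row in A]

def block(A, a, b, c, d):
    """The matrix A[a:b | c:d] with columns A_a, ..., A_{b-1}, A_c, ..., A_{d-1}."""
    cols = [col(A, i) for i in list(range(a, b)) + list(range(c, d))]
    return [list(r) for r in zip(*cols)]

def det(M):
    """Exact determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def grasp(A, j, i, k):
    """(Grasp_j A)((i, k)), with p = number of rows of A."""
    p = len(A)
    return Fraction(det(block(A, j + 1, j + i, j + i + k - 1, j + p + k)),
                    det(block(A, j, j + i, j + i + k, j + p + k)))

# p = q = 2 with a concrete matrix; compare against the closed formulas above:
(a11, a12, a13, a14), (a21, a22, a23, a24) = A = ((1, 2, 3, 4), (5, 6, 7, 8))
assert grasp(A, 0, 1, 1) == Fraction(a11*a22 - a12*a21, a12*a24 - a14*a22)
assert grasp(A, 1, 1, 2) == Fraction(a13*a24 - a14*a23, a11*a24 - a14*a21)
```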
We will see more examples of values of $\operatorname*{Grasp}\nolimits_{0}A$
in Example \ref{ex.Grasp.generic}.
The next two propositions follow easily from the definition and elementary properties listed above.
\begin{proposition}
\label{prop.Grasp.period}Let $p$ and $q$ be two
positive integers. Let $A\in\mathbb{K}^{p\times\left( p+q\right) }$ be a
matrix. Then, $\operatorname*{Grasp}\nolimits_{j}A=\operatorname*{Grasp}%
\nolimits_{p+q+j}A$ for every $j\in\mathbb{Z}$ (provided that $A$ is
sufficiently generic in the sense of Zariski topology for
$\operatorname*{Grasp}\nolimits_{j} A$ to be well-defined).
\end{proposition}
% \begin{proof}
% We need to show
% that
% \[
% \left( \operatorname*{Grasp}\nolimits_{j}A\right) \left( \left(
% i,k\right) \right) =\left( \operatorname*{Grasp}\nolimits_{p+q+j}A\right)
% \left( \left( i,k\right) \right)
% \]
% for every $\left( i,k\right) \in\left\{ 1,2,...,p\right\} \times\left\{
% 1,2,...,q\right\} $. But we have%
% \begin{align*}
% & A\left[ p+q+j:p+q+j+i\mid p+q+j+i+k:p+q+j+p+k\right] \\
% & =A\left[ j:j+i\mid j+i+k:j+p+k\right]
% \end{align*}
% (by Proposition \ref{prop.minors.period} \textbf{(a)}, applied to $u=p$,
% $v=p+q$, $a=j$, $b=j+i$, $c=j+i+k$ and $d=j+p+k$) and%
% \begin{align*}
% & A\left[ p+q+j+1:p+q+j+i\mid p+q+j+i+k-1:p+q+j+p+k\right] \\
% & =A\left[ j+1:j+i\mid j+i+k-1:j+p+k\right]
% \end{align*}
% (by Proposition \ref{prop.minors.period} \textbf{(a)}, applied to $u=p$,
% $v=p+q$, $a=j+1$, $b=j+i$, $c=j+i+k-1$ and $d=j+p+k$). Using these equalities,
% we immediately obtain $\left( \operatorname*{Grasp}\nolimits_{j}A\right)
% \left( \left( i,k\right) \right) =\left( \operatorname*{Grasp}%
% \nolimits_{p+q+j}A\right) \left( \left( i,k\right) \right) $ from the
% definition of $\operatorname*{Grasp}\nolimits_{j}A$.
% \end{proof}
\begin{proposition}
\label{prop.Grasp.antipode}Let $A\in\mathbb{K}^{p\times\left( p+q\right) }$. Let $\left(
i,k\right) \in\operatorname*{Rect}\left( p,q\right) $
and $j\in\mathbb{Z}$. Then%
\[
\left( \operatorname*{Grasp}\nolimits_{j}A\right) \left( \left(
i,k\right) \right) =\dfrac{1}{\left( \operatorname*{Grasp}%
\nolimits_{j+i+k-1}A\right) \left( \left( p+1-i,q+1-k\right) \right) }%
\]
(provided that $A$ is sufficiently generic in the sense of Zariski topology
for \newline
$\left( \operatorname*{Grasp}\nolimits_{j}A\right) \left( \left(
i,k\right) \right) $ and $\left( \operatorname*{Grasp}\nolimits_{j+i+k-1}%
A\right) \left( \left( p+1-i,q+1-k\right) \right) $ to be well-defined).
\end{proposition}
\begin{proof}
Expand the definitions of $\left(
\operatorname*{Grasp}\nolimits_{j}A\right) \left( \left( i,k\right)
\right) $ and \newline
$\left( \operatorname*{Grasp}\nolimits_{j+i+k-1}A\right)
\left( \left( p+1-i,q+1-k\right) \right) $ and apply Proposition
\ref{prop.minors.period} \textbf{(c)} twice.
\end{proof}
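Both Proposition \ref{prop.Grasp.period} and Proposition \ref{prop.Grasp.antipode} lend themselves to numerical confirmation. The sketch below (helper names are ours; the matrix is chosen so that every occurring minor is nonzero) checks them for $p=q=2$ and a range of values of $j$.

```python
from fractions import Fraction

def col(A, i):
    """Column A_i of A, extended to all integers i as in Definition (b)."""
    u, v = len(A), len(A[0])
    ip = (i - 1) % v + 1
    sign = -1 if (u - 1) * ((i - ip) // v) % 2 else 1
    return [sign * row[ip - 1] for row in A]

def block(A, a, b, c, d):
    """The matrix A[a:b | c:d] with columns A_a, ..., A_{b-1}, A_c, ..., A_{d-1}."""
    cols = [col(A, i) for i in list(range(a, b)) + list(range(c, d))]
    return [list(r) for r in zip(*cols)]

def det(M):
    """Exact determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def grasp(A, j, i, k):
    """(Grasp_j A)((i, k)), with p = number of rows of A."""
    p = len(A)
    return Fraction(det(block(A, j + 1, j + i, j + i + k - 1, j + p + k)),
                    det(block(A, j, j + i, j + i + k, j + p + k)))

p, q = 2, 2
A = ((1, 2, 3, 4), (5, 6, 7, 8))  # every minor needed below is nonzero
for j in range(-2, 6):
    for i in range(1, p + 1):
        for k in range(1, q + 1):
            # Proposition prop.Grasp.antipode:
            assert grasp(A, j, i, k) * grasp(A, j + i + k - 1,
                                             p + 1 - i, q + 1 - k) == 1
            # Proposition prop.Grasp.period:
            assert grasp(A, j + p + q, i, k) == grasp(A, j, i, k)
```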
Each of the next two propositions will be proven in a section of its own.
These are the key lemmas from which our main
Theorems \ref{thm.rect.ord}, \ref{thm.rect.antip} and \ref{thm.rect.antip.general} will
follow with little effort in Section \ref{sect.rect.finish}.
\begin{proposition}
\label{prop.Grasp.GraspR}Let $A\in\mathbb{K}^{p\times\left( p+q\right) }$.
Let $j\in\mathbb{Z}$. Then%
\[
\operatorname*{Grasp}\nolimits_{j}A=R_{\operatorname*{Rect}\left( p,q\right)
}\left( \operatorname*{Grasp}\nolimits_{j+1}A\right)
\]
(provided that $A$ is sufficiently generic in the sense of Zariski topology
for the two sides of this equality to be well-defined).
\end{proposition}
\begin{proposition}
\label{prop.Grasp.generic}For almost every (in the Zariski sense) $f\in
\mathbb{K}^{\operatorname*{Rect}\left( p,q\right) }$, there exists a matrix
$A\in\mathbb{K}^{p\times\left( p+q\right) }$ satisfying
$f=\operatorname*{Grasp}\nolimits_{0}A$.
\end{proposition}
\section{The Pl\"{u}cker-Ptolemy relation}
This section is devoted to proving Proposition \ref{prop.Grasp.GraspR}. Our main tool is a
fundamental determinantal identity, which we call
the \textit{Pl\"{u}cker-Ptolemy relation}:
\begin{theorem}
\label{thm.pluecker.ptolemy}Let $A\in
\mathbb{K}^{u\times v}$ be a $u\times v$-matrix for some nonnegative integers
$u$ and $v$. Let $a$, $b$, $c$ and $d$ be four integers satisfying $a\leq b+1$
and $c\leq d+1$ and $b-a+d-c=u-2$. Then,%
\begin{align*}
& \det\left( A\left[ a-1:b\mid c:d+1\right] \right) \cdot\det\left(
A\left[ a:b+1\mid c-1:d\right] \right) \\
& +\det\left( A\left[ a:b\mid c-1:d+1\right] \right) \cdot\det\left(
A\left[ a-1:b+1\mid c:d\right] \right) \\
& =\det\left( A\left[ a-1:b\mid c-1:d\right] \right) \cdot\det\left(
A\left[ a:b+1\mid c:d+1\right] \right) .
\end{align*}
\end{theorem}
Notice that the special case of this theorem for $v=u+2$, $a=2$, $b=p$,
$c=p+2$ and $d=p+q$ is the following lemma:
\begin{lemma}
\label{lem.pluecker.ptolemy}Let
$u\in\mathbb{N}$. Let $B\in\mathbb{K}^{u\times\left( u+2\right) }$ be a
$u\times\left( u+2\right) $-matrix. Let $p$ and $q$ be two integers $\geq2$
satisfying $p+q=u+2$. Then,%
\begin{align}
& \det\left( B\left[ 1:p\mid p+2:p+q+1\right] \right) \cdot\det\left(
B\left[ 2:p+1\mid p+1:p+q\right] \right) \nonumber\\
& +\det\left( B\left[ 2:p\mid p+1:p+q+1\right] \right) \cdot\det\left(
B\left[ 1:p+1\mid p+2:p+q\right] \right) \nonumber\\
& =\det\left( B\left[ 1:p\mid p+1:p+q\right] \right) \cdot\det\left(
B\left[ 2:p+1\mid p+2:p+q+1\right] \right) .
\label{lem.pluecker.ptolemy.eq}%
\end{align}
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{thm.pluecker.ptolemy}.]
% If $a=b-1$ or
% $c=d-1$, then Theorem \ref{thm.pluecker.ptolemy} degenerates to a triviality
% (namely, $0+0=0$). Hence, for the rest of this proof, we assume WLOG that
% neither $a=b-1$ nor $c=d-1$. Hence, $a\leq b$ and $c\leq d$.
Theorem \ref{thm.pluecker.ptolemy} follows from the well-known Pl\"{u}cker relations
(see, e.g., \cite[(QR)]{kleiman-laksov}) applied to the $u\times\left(
u+2\right) $-matrix $A\left[ a-1:b+1\mid c-1:d+1\right] $. The extended versions
\cite{grinberg-roby-arxiv} of this
paper have a self-contained proof, which we briefly outline here. First we reduce Theorem
\ref{thm.pluecker.ptolemy} to its special case, Lemma \ref{lem.pluecker.ptolemy}, by
shifting columns. The latter can now be derived by
(a) using row-reduction to transform as many columns as possible into standard basis vectors;
(b) permuting columns to bring the matrices in (\ref{lem.pluecker.ptolemy.eq}) into block
triangular form; and (c) using that the determinant of such a matrix is the product of the
determinants of its blocks.
\end{proof}
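The Pl\"{u}cker-Ptolemy relation also lends itself to direct verification. Here is a Python sketch (helper names are ours, implementing Definition \ref{def.minors}) that checks the non-degenerate case $a\leq b$, $c\leq d$ of Theorem \ref{thm.pluecker.ptolemy} on two small matrices, one of which involves an extended column.

```python
def col(A, i):
    """Column A_i of A, extended to all integers i as in Definition (b)."""
    u, v = len(A), len(A[0])
    ip = (i - 1) % v + 1
    sign = -1 if (u - 1) * ((i - ip) // v) % 2 else 1
    return [sign * row[ip - 1] for row in A]

def block(A, a, b, c, d):
    """The matrix A[a:b | c:d] with columns A_a, ..., A_{b-1}, A_c, ..., A_{d-1}."""
    cols = [col(A, i) for i in list(range(a, b)) + list(range(c, d))]
    return [list(r) for r in zip(*cols)]

def det(M):
    """Exact determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def ptolemy_holds(A, a, b, c, d):
    """Check the Pluecker-Ptolemy relation for integers a <= b and c <= d
    with b - a + d - c = u - 2 (the non-degenerate case)."""
    lhs = (det(block(A, a - 1, b, c, d + 1)) * det(block(A, a, b + 1, c - 1, d))
           + det(block(A, a, b, c - 1, d + 1)) * det(block(A, a - 1, b + 1, c, d)))
    rhs = det(block(A, a - 1, b, c - 1, d)) * det(block(A, a, b + 1, c, d + 1))
    return lhs == rhs

assert ptolemy_holds(((2, 5, 7, 3), (4, 1, 9, 8)), 2, 2, 4, 4)  # u = 2: Ptolemy shape
assert ptolemy_holds(((1, 0, 2, 3, 1), (0, 1, 1, 2, 4),
                      (2, 1, 0, 1, 3)), 1, 1, 3, 4)             # u = 3, uses A_0
```

The degenerate boundary cases covered by Definition \ref{def.minors} \textbf{(e)} (where $b=a-1$ or $d=c-1$ and some determinants are set to $0$ by convention) are not handled by this sketch.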
% But let us show
% an alternative proof of Theorem \ref{thm.pluecker.ptolemy} which avoids the
% use of the Pl\"{u}cker relations:
% Let $p=b-a+2$ and $q=d-c+2$. Then, $p\geq2$, $q\geq2$ and $p+q=u+2$.
% Let $B$ be the matrix whose columns (from left to right) are $A_{a-1}$,
% $A_{a}$, $...$, $A_{b}$, $A_{c-1}$, $A_{c}$, $...$, $A_{d}$. Then, $B$ is a
% $u\times\left( u+2\right) $-matrix and satisfies%
% \begin{align*}
% A\left[ a-1:b\mid c:d+1\right] & =B\left[ 1:p-1\mid p+2:p+q+1\right] ;\\
% A\left[ a:b+1\mid c-1:d\right] & =B\left[ 2:p\mid p+1:p+q\right] ;\\
% A\left[ a:b\mid c-1:d+1\right] & =B\left[ 2:p-1\mid p+1:p+q+1\right] ;\\
% A\left[ a-1:b+1\mid c:d\right] & =B\left[ 1:p\mid p+2:p+q\right] ;\\
% A\left[ a-1:b\mid c-1:d\right] & =B\left[ 1:p-1\mid p+1:p+q\right] ;\\
% A\left[ a:b+1\mid c:d+1\right] & =B\left[ 2:p\mid p+2:p+q+1\right] .
% \end{align*}
% Hence, the equality that we have to prove, namely%
% \begin{align*}
% & \det\left( A\left[ a-1:b\mid c:d+1\right] \right) \cdot\det\left(
% A\left[ a:b+1\mid c-1:d\right] \right) \\
% & +\det\left( A\left[ a:b\mid c-1:d+1\right] \right) \cdot\det\left(
% A\left[ a-1:b+1\mid c:d\right] \right) \\
% & =\det\left( A\left[ a-1:b\mid c-1:d\right] \right) \cdot\det\left(
% A\left[ a:b+1\mid c:d+1\right] \right) ,
% \end{align*}
% rewrites precisely as (\ref{lem.pluecker.ptolemy.eq}). Hence, in
% order to complete the proof of Theorem \ref{thm.pluecker.ptolemy}, we only
% need to verify Lemma \ref{lem.pluecker.ptolemy}.
% \end{proof}
% \begin{proof}
% Let $\left(
% e_{1},e_{2},...,e_{u}\right) $ be the standard basis of the $\mathbb{K}%
% $-vector space $\mathbb{K}^{u}$.
% Let $\alpha$ and $\beta$ be the $\left( p-1\right) $-st entries of the
% columns $B_{1}$ and $B_{p+q}$ of $B$. Let $\gamma$ and $\delta$ be the $p$-th
% entries of the columns $B_{1}$ and $B_{p+q}$ of $B$.
% We need to prove (\ref{lem.pluecker.ptolemy.eq}). Since
% (\ref{lem.pluecker.ptolemy.eq}) is a polynomial identity in the entries of
% $B$, let us WLOG assume that the columns $B_{2}$, $B_{3}$, $...$, $B_{p+q-1}$
% of $B$ (these are the middle $u$ among the altogether $u+2=p+q$ columns of
% $B$) are linearly independent (since $u$ vectors in $\mathbb{K}^{u}$ in
% general position are linearly independent). Then, by applying row
% transformations to the matrix $B$, we can transform these columns into the
% basis vectors $e_{1}$, $e_{2}$, $...$, $e_{u}$ of $\mathbb{K}^{u}$. Since the
% equality (\ref{lem.pluecker.ptolemy.eq}) is preserved under row
% transformations of $B$ (indeed, row transformations of $B$ amount to row
% transformations of all six matrices appearing in
% (\ref{lem.pluecker.ptolemy.eq}), and thus their only effect on the equality
% (\ref{lem.pluecker.ptolemy.eq}) is to multiply the six determinants appearing
% in (\ref{lem.pluecker.ptolemy.eq}) by certain scalar factors, but these scalar
% factors are all equal and thus don't affect the validity of the equality), we
% can therefore WLOG assume that the columns $B_{2}$, $B_{3}$, $...$,
% $B_{p+q-1}$ of $B$ \textbf{are} the basis vectors $e_{1}$, $e_{2}$, $...$,
% $e_{u}$ of $\mathbb{K}^{u}$. The matrix $B$ then looks as follows:%
% \[
% \left(
% \begin{array}
% [c]{cccccccccccc}%
% \ast & 1 & 0 & \cdots & 0 & 0 & 0 & 0 & 0 & \cdots & 0 & \ast\\
% \ast & 0 & 1 & \cdots & 0 & 0 & 0 & 0 & 0 & \cdots & 0 & \ast\\
% \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots
% & \ddots & \vdots & \ast\\
% \ast & 0 & 0 & \cdots & 1 & 0 & 0 & 0 & 0 & \cdots & 0 & \ast\\
% \ast & 0 & 0 & \cdots & 0 & 1 & 0 & 0 & 0 & \cdots & 0 & \ast\\
% \alpha & 0 & 0 & \cdots & 0 & 0 & 1 & 0 & 0 & \cdots & 0 & \beta\\
% \gamma & 0 & 0 & \cdots & 0 & 0 & 0 & 1 & 0 & \cdots & 0 & \delta\\
% \ast & 0 & 0 & \cdots & 0 & 0 & 0 & 0 & 1 & \cdots & 0 & \ast\\
% \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots
% & \ddots & \vdots & \vdots\\
% \ast & 0 & 0 & \cdots & 0 & 0 & 0 & 0 & 0 & \cdots & 1 & \ast
% \end{array}
% \right) ,
% \]
% where asterisks ($\ast$) signify entries which we are not concerned with.
% Now, there is a method to simplify the determinant of a matrix if some columns
% of this matrix are known to belong to the standard basis $\left( e_{1}%
% ,e_{2},...,e_{u}\right) $. Indeed, such a matrix can first be brought to a
% block-triangular form by permuting columns (which affects the determinant by
% $\left( -1\right) ^{\sigma}$, with $\sigma$ being the sign of the
% permutation used), and then its determinant can be evaluated using the fact
% that the determinant of a block-triangular matrix is the product of the
% determinants of its diagonal blocks. Applying this method to each of the six
% matrices appearing in (\ref{lem.pluecker.ptolemy.eq}), we obtain%
% \begin{align*}
% \det\left( B\left[ 1:p\mid p+2:p+q+1\right] \right) & =\left(
% -1\right) ^{p+q}\left( \alpha\delta-\beta\gamma\right) ;\\
% \det\left( B\left[ 2:p+1\mid p+1:p+q\right] \right) & =1;\\
% \det\left( B\left[ 2:p\mid p+1:p+q+1\right] \right) & =\left(
% -1\right) ^{q-1}\beta;\\
% \det\left( B\left[ 1:p+1\mid p+2:p+q\right] \right) & =\left(
% -1\right) ^{p-1}\gamma;\\
% \det\left( B\left[ 1:p\mid p+1:p+q\right] \right) & =\left( -1\right)
% ^{p-2}\alpha;\\
% \det\left( B\left[ 2:p+1\mid p+2:p+q+1\right] \right) & =\left(
% -1\right) ^{q-2}\delta.
% \end{align*}
% Hence, (\ref{lem.pluecker.ptolemy.eq}) rewrites as%
% \[
% \left( -1\right) ^{p+q}\left( \alpha\delta-\beta\gamma\right)
% \cdot1+\left( -1\right) ^{q-1}\beta\cdot\left( -1\right) ^{p-1}%
% \gamma=\left( -1\right) ^{p-2}\alpha\cdot\left( -1\right) ^{q-2}\delta.
% \]
% Upon cancelling the signs, this simplifies to $\left( \alpha\delta
% -\beta\gamma\right) +\beta\gamma=\alpha\delta$, which is trivially true. Thus
% we have proven (\ref{lem.pluecker.ptolemy.eq}).
% \end{proof}
% \begin{remark}
% Instead of transforming the middle $p+q$ columns of the matrix $B$ to the
% standard basis vectors $e_{1}$, $e_{2}$, $...$, $e_{u}$ of $\mathbb{K}^{u}$ as
% we did in the proof of Lemma \ref{lem.pluecker.ptolemy}, we could have
% transformed the first and last columns of $B$ into the two last standard basis
% vectors $e_{u-1}$ and $e_{u}$. The resulting identity would have been
% Dodgson's condensation identity (which appears, e.g., in
% \cite[\textit{(Alice)}]{zeilberger-twotime}), applied to the matrix formed by
% the remaining $u$ columns of $B$ and after some interchange of rows and columns.
% \end{remark}
We are now ready to prove the key lemma stating that birational rowmotion acts as a cyclic
shift on Grasp-labellings.
\begin{proof}
[Proof of Proposition \ref{prop.Grasp.GraspR}.]Let $f=\operatorname*{Grasp}%
\nolimits_{j+1}A$ and $g=\operatorname*{Grasp}\nolimits_{j}A$. We want to show that
$g=R_{\operatorname*{Rect}\left( p,q\right) }\left( f\right) $.
By Proposition \ref{prop.R.implicit.converse} this will follow
once we can show that%
\begin{equation}
g\left( v\right) =\dfrac{1}{f\left( v\right) }\cdot\dfrac{\sum
\limits_{\substack{u\in\widehat{\operatorname*{Rect}\left( p,q\right)
};\\u\lessdot v}}f\left( u\right) }{\sum\limits_{\substack{u\in
\widehat{\operatorname*{Rect}\left( p,q\right) };\\u\gtrdot v}}\dfrac
{1}{g\left( u\right) }}\ \ \ \ \ \ \ \ \ \ \text{for every }v\in
\operatorname*{Rect}\left( p,q\right) . \label{pf.Grasp.GraspR.goal}%
\end{equation}
Let $v=\left( i,k\right) \in\operatorname*{Rect}\left( p,q\right) $.
We are clearly in one of the following four cases:
\textit{Case 1:} We have $v\neq\left( 1,1\right) $ and $v\neq\left(
p,q\right) $.
\textit{Case 2:} We have $v=\left( 1,1\right) $ and $v\neq\left(
p,q\right) $.
\textit{Case 3:} We have $v\neq\left( 1,1\right) $ and $v=\left(
p,q\right) $.
\textit{Case 4:} We have $v=\left( 1,1\right) $ and $v=\left( p,q\right) $.
In Case 1, all elements
$u\in\widehat{\operatorname*{Rect}\left( p,q\right) }$ satisfying $u\lessdot
v$ belong to $\operatorname*{Rect}\left( p,q\right) $, and the same holds
for all $u\in\widehat{\operatorname*{Rect}\left( p,q\right) }$ satisfying
$u\gtrdot v$.
Now, there are at most two elements $u$ of $\widehat{\operatorname*{Rect}\left(
p,q\right) }$ satisfying $u\lessdot v$, namely $\left( i,k-1\right) $
%(which exists only if $k\neq1$)
and $\left( i-1,k\right) $.
%(which exists only if $i\neq1$)
Hence, the sum $\sum\limits_{\substack{u\in
\widehat{\operatorname*{Rect}\left( p,q\right) };\\u\lessdot v}}f\left(
u\right) $ takes one of the three forms $f\left( \left( i,k-1\right)
\right) +f\left( \left( i-1,k\right) \right) $, $f\left( \left(
i,k-1\right) \right) $ and $f\left( \left( i-1,k\right) \right) $. By the convention
of Definition \ref{def.Grasp} \textbf{(b)}, all of these three forms can be
rewritten uniformly as $f\left( \left( i,k-1\right) \right) +f\left(
\left( i-1,k\right) \right) $.
% (because if $\left( i,k-1\right) \notin\operatorname*{Rect}\left( p,q\right) $ then Definition
% \ref{def.Grasp} \textbf{(b)} guarantees that $f\left( \left( i,k-1\right)
% \right) =0$, and similarly $f\left( \left( i-1,k\right) \right) =0$ if
% $\left( i-1,k\right) \notin\operatorname*{Rect}\left( p,q\right) $).
So we have%
\begin{equation}
\sum\limits_{\substack{u\in\widehat{\operatorname*{Rect}\left( p,q\right)
};\\u\lessdot v}}f\left( u\right) =f\left( \left( i,k-1\right) \right)
+f\left( \left( i-1,k\right) \right) . \label{pf.Grasp.GraspR.f}%
\end{equation}
Similarly,
\begin{equation}
\sum\limits_{\substack{u\in\widehat{\operatorname*{Rect}\left( p,q\right)
};\\u\gtrdot v}}\dfrac{1}{g\left( u\right) }=\dfrac{1}{g\left( \left(
i,k+1\right) \right) }+\dfrac{1}{g\left( \left( i+1,k\right) \right) },
\label{pf.Grasp.GraspR.g}%
\end{equation}
where we set $\dfrac{1}{\infty}=0$ as usual.
But $f=\operatorname*{Grasp}\nolimits_{j+1}A$. Hence,%
\begin{align*}
& f\left( \left( i,k-1\right) \right) \\
& =\left( \operatorname*{Grasp}\nolimits_{j+1}A\right) \left( \left(
i,k-1\right) \right) \\
% & =\dfrac{\det\left( A\left[ \left( j+1\right) +1:\left( j+1\right)
% +i\mid\left( j+1\right) +i+\left( k-1\right) -1:\left( j+1\right)
% +p+\left( k-1\right) \right] \right) }{\det\left( A\left[ j+1:\left(
% j+1\right) +i\mid\left( j+1\right) +i+\left( k-1\right) :\left(
% j+1\right) +p+\left( k-1\right) \right] \right) }\\
% & \ \ \ \ \ \ \ \ \ \ \left( \text{by the definition of }%
% \operatorname*{Grasp}\nolimits_{j+1}A\right) \\
& =\dfrac{\det\left( A\left[ j+2:j+i+1\mid j+i+k-1:j+p+k\right] \right)
}{\det\left( A\left[ j+1:j+i+1\mid j+i+k:j+p+k\right] \right) }%
\end{align*}
and
\begin{align*}
& f\left( \left( i-1,k\right) \right) \\
& =\left( \operatorname*{Grasp}\nolimits_{j+1}A\right) \left( \left(
i-1,k\right) \right) \\
% & =\dfrac{\det\left( A\left[ \left( j+1\right) +1:\left( j+1\right)
% +\left( i-1\right) \mid\left( j+1\right) +\left( i-1\right) +k-1:\left(
% j+1\right) +p+k\right] \right) }{\det\left( A\left[ j+1:\left(
% j+1\right) +\left( i-1\right) \mid\left( j+1\right) +\left( i-1\right)
% +k:\left( j+1\right) +p+k\right] \right) }\\
% & \ \ \ \ \ \ \ \ \ \ \left( \text{by the definition of }%
% \operatorname*{Grasp}\nolimits_{j+1}A\right) \\
& =\dfrac{\det\left( A\left[ j+2:j+i\mid j+i+k-1:j+p+k+1\right] \right)
}{\det\left( A\left[ j+1:j+i\mid j+i+k:j+p+k+1\right] \right) }.
\end{align*}
Due to these two equalities, (\ref{pf.Grasp.GraspR.f}) becomes%
\begin{align}
\sum\limits_{\substack{u\in\widehat{\operatorname*{Rect}\left( p,q\right)
};\\u \lessdot v}}f\left( u\right)
&=\dfrac{\det\left( A\left[ j+2:j+i+1\mid j+i+k-1:j+p+k\right] \right)
}{\det\left( A\left[ j+1:j+i+1\mid j+i+k:j+p+k\right] \right) }\nonumber\\
& \ \ \ \ \ \ \ \ \ \ +\dfrac{\det\left( A\left[ j+2:j+i\mid
j+i+k-1:j+p+k+1\right] \right) }{\det\left( A\left[ j+1:j+i\mid
j+i+k:j+p+k+1\right] \right) }\nonumber\\
& =\left( \det\left( A\left[ j+1:j+i+1\mid j+i+k:j+p+k\right] \right)
\right) ^{-1}\nonumber\\
& \ \ \ \ \ \ \ \ \ \ \cdot\left( \det\left( A\left[ j+1:j+i\mid
j+i+k:j+p+k+1\right] \right) \right) ^{-1}\nonumber\\
& \ \ \ \ \ \ \ \ \ \ \cdot\left( \det\left( A\left[ j+1:j+i\mid
j+i+k:j+p+k+1\right] \right) \right. \nonumber\\
& \ \ \ \ \ \ \ \ \ \ \left. \ \ \ \ \ \ \ \ \ \ \cdot\det\left( A\left[
j+2:j+i+1\mid j+i+k-1:j+p+k\right] \right) \right. \nonumber\\
& \ \ \ \ \ \ \ \ \ \ \left. +\det\left( A\left[ j+2:j+i\mid
j+i+k-1:j+p+k+1\right] \right) \right. \nonumber\\
& \ \ \ \ \ \ \ \ \ \ \left. \ \ \ \ \ \ \ \ \ \ \cdot\det\left( A\left[
j+1:j+i+1\mid j+i+k:j+p+k\right] \right) \right) \nonumber\\
& =\left( \det\left( A\left[ j+1:j+i+1\mid j+i+k:j+p+k\right] \right)
\right) ^{-1}\nonumber\\
& \ \ \ \ \ \ \ \ \ \ \cdot\left( \det\left( A\left[ j+1:j+i\mid
j+i+k:j+p+k+1\right] \right) \right) ^{-1}\nonumber\\
& \ \ \ \ \ \ \ \ \ \ \cdot\det\left( A\left[ j+1:j+i\mid
j+i+k-1:j+p+k\right] \right) \nonumber\\
& \ \ \ \ \ \ \ \ \ \ \cdot\det\left( A\left[ j+2:j+i+1\mid
j+i+k:j+p+k+1\right] \right) \label{pf.Grasp.GraspR.side1}%
\end{align}
(by Theorem \ref{thm.pluecker.ptolemy}, applied to $a=j+2$, $b=j+i$,
$c=j+i+k$ and $d=j+p+k$).
On the other hand, $g=\operatorname*{Grasp}\nolimits_{j}A$, so a similar series of
computations gives
% \begin{align*}
% & g\left( \left( i,k+1\right) \right) \\
% & =\left( \operatorname*{Grasp}\nolimits_{j}A\right) \left( \left(
% i,k+1\right) \right) =\dfrac{\det\left( A\left[ j+1:j+i\mid j+i+\left(
% k+1\right) -1:j+p+\left( k+1\right) \right] \right) }{\det\left(
% A\left[ j:j+i\mid j+i+\left( k+1\right) :j+p+\left( k+1\right) \right]
% \right) }\\
% & \ \ \ \ \ \ \ \ \ \ \left( \text{by the definition of }%
% \operatorname*{Grasp}\nolimits_{j}A\right) \\
% & =\dfrac{\det\left( A\left[ j+1:j+i\mid j+i+k:j+p+k+1\right] \right)
% }{\det\left( A\left[ j:j+i\mid j+i+k+1:j+p+k+1\right] \right) }%
% \end{align*}
% and therefore%
% \begin{equation}
% \dfrac{1}{g\left( \left( i,k+1\right) \right) }=\dfrac{\det\left(
% A\left[ j:j+i\mid j+i+k+1:j+p+k+1\right] \right) }{\det\left( A\left[
% j+1:j+i\mid j+i+k:j+p+k+1\right] \right) }. \label{pf.Grasp.GraspR.g.1}%
% \end{equation}
% Also, from $g=\operatorname*{Grasp}\nolimits_{j}A$, we obtain%
% \begin{align*}
% & g\left( \left( i+1,k\right) \right) \\
% & =\left( \operatorname*{Grasp}\nolimits_{j}A\right) \left( \left(
% i-1,k\right) \right) =\dfrac{\det\left( A\left[ j+1:j+\left( i+1\right)
% \mid j+\left( i+1\right) +k-1:j+p+k\right] \right) }{\det\left( A\left[
% j:j+\left( i+1\right) \mid j+\left( i+1\right) +k:j+p+k\right] \right)
% }\\
% & \ \ \ \ \ \ \ \ \ \ \left( \text{by the definition of }%
% \operatorname*{Grasp}\nolimits_{j}A\right) \\
% & =\dfrac{\det\left( A\left[ j+1:j+i+1\mid j+i+k:j+p+k\right] \right)
% }{\det\left( A\left[ j:j+i+1\mid j+i+k+1:j+p+k\right] \right) },
% \end{align*}
% so that%
% \begin{equation}
% \dfrac{1}{g\left( \left( i+1,k\right) \right) }=\dfrac{\det\left(
% A\left[ j:j+i+1\mid j+i+k+1:j+p+k\right] \right) }{\det\left( A\left[
% j+1:j+i+1\mid j+i+k:j+p+k\right] \right) }. \label{pf.Grasp.GraspR.g.2}%
% \end{equation}
% Due to (\ref{pf.Grasp.GraspR.g.1}) and (\ref{pf.Grasp.GraspR.g.2}), the
% equality (\ref{pf.Grasp.GraspR.g}) becomes%
\begin{align}
\sum\limits_{\substack{u\in\widehat{\operatorname*{Rect}\left( p,q\right)
};\\u\gtrdot v}}\dfrac{1}{g\left( u\right) }
& =\dfrac{\det\left( A\left[ j:j+i\mid j+i+k+1:j+p+k+1\right] \right)
}{\det\left( A\left[ j+1:j+i\mid j+i+k:j+p+k+1\right] \right) }\nonumber\\
& \ \ \ \ \ \ \ \ \ \ +\dfrac{\det\left( A\left[ j:j+i+1\mid
j+i+k+1:j+p+k\right] \right) }{\det\left( A\left[ j+1:j+i+1\mid
j+i+k:j+p+k\right] \right) }\nonumber\\
& =\left( \det\left( A\left[ j+1:j+i\mid j+i+k:j+p+k+1\right] \right)
\right) ^{-1}\nonumber\\
& \ \ \ \ \ \ \ \ \ \ \cdot\left( \det\left( A\left[ j+1:j+i+1\mid
j+i+k:j+p+k\right] \right) \right) ^{-1}\nonumber\\
& \ \ \ \ \ \ \ \ \ \ \cdot\left( \det\left( A\left[ j:j+i\mid
j+i+k+1:j+p+k+1\right] \right) \right. \nonumber\\
& \ \ \ \ \ \ \ \ \ \ \left. \ \ \ \ \ \ \ \ \ \ \cdot\det\left( A\left[
j+1:j+i+1\mid j+i+k:j+p+k\right] \right) \right. \nonumber\\
& \ \ \ \ \ \ \ \ \ \ \left. +\det\left( A\left[ j+1:j+i\mid
j+i+k:j+p+k+1\right] \right) \right. \nonumber\\
& \ \ \ \ \ \ \ \ \ \ \left. \ \ \ \ \ \ \ \ \ \ \cdot\det\left( A\left[
j:j+i+1\mid j+i+k+1:j+p+k\right] \right) \right) \nonumber\\
& =\left( \det\left( A\left[ j+1:j+i\mid j+i+k:j+p+k+1\right] \right)
\right) ^{-1}\nonumber\\
& \ \ \ \ \ \ \ \ \ \ \cdot\left( \det\left( A\left[ j+1:j+i+1\mid
j+i+k:j+p+k\right] \right) \right) ^{-1}\nonumber\\
& \ \ \ \ \ \ \ \ \ \ \cdot\det\left( A\left[ j:j+i\mid j+i+k:j+p+k\right]
\right) \nonumber\\
& \ \ \ \ \ \ \ \ \ \ \cdot\det\left( A\left[ j+1:j+i+1\mid
j+i+k+1:j+p+k+1\right] \right) \label{pf.Grasp.GraspR.side2}%
\end{align}
(by Theorem \ref{thm.pluecker.ptolemy}, applied to $a=j+1$, $b=j+i$,
$c=j+i+k+1$ and $d=j+p+k$).
% Since $v=\left( i,k\right) $ and $g=\operatorname*{Grasp}\nolimits_{j}A$, we
% have%
Now, by the definition of $\operatorname*{Grasp}\nolimits_{j}A$, we get:
\begin{align}
g\left( v\right)
& =\left( \operatorname*{Grasp}\nolimits_{j}A\right) \left( \left(
i,k\right) \right) =\dfrac{\det\left( A\left[ j+1:j+i\mid
j+i+k-1:j+p+k\right] \right) }{\det\left( A\left[ j:j+i\mid
j+i+k:j+p+k\right] \right) }\label{pf.Grasp.GraspR.side3}
% & \ \ \ \ \ \ \ \ \ \ \left( \text{by the definition of }%
% \operatorname*{Grasp}\nolimits_{j}A\right) .\nonumber
\end{align}
\begin{align}
\text{while } f\left( v\right) & =\left( \operatorname*{Grasp}\nolimits_{j+1}A\right) \left( \left(
i,k\right) \right) \nonumber\\
% & =\dfrac{\det\left( A\left[ \left( j+1\right) +1:\left( j+1\right)
% +i\mid\left( j+1\right) +i+k-1:\left( j+1\right) +p+k\right] \right)
% }{\det\left( A\left[ j+1:\left( j+1\right) +i\mid\left( j+1\right)
% +i+k:\left( j+1\right) +p+k\right] \right) }\nonumber\\
% & \ \ \ \ \ \ \ \ \ \ \left( \text{by the definition of }%
% \operatorname*{Grasp}\nolimits_{j+1}A\right) \nonumber\\
& =\dfrac{\det\left( A\left[ j+2:j+i+1\mid j+i+k:j+p+k+1\right] \right)
}{\det\left( A\left[ j+1:j+i+1\mid j+i+k+1:j+p+k+1\right] \right) }.
\label{pf.Grasp.GraspR.side4}%
\end{align}
So we can rewrite the terms $\sum\limits_{\substack{u\in
\widehat{\operatorname*{Rect}\left( p,q\right) };\\u \lessdot v}}f\left(
u\right) $, $\sum\limits_{\substack{u\in\widehat{\operatorname*{Rect}\left(
p,q\right) };\\u \gtrdot v}}\dfrac{1}{g\left( u\right) }$, $g\left(
v\right) $ and $f\left( v\right) $ in (\ref{pf.Grasp.GraspR.goal}) using
the equalities (\ref{pf.Grasp.GraspR.side1}), (\ref{pf.Grasp.GraspR.side2}),
(\ref{pf.Grasp.GraspR.side3}) and (\ref{pf.Grasp.GraspR.side4}), respectively.
The resulting equation is a tautology because all determinants cancel out.
%(this can be checked by the reader).
This proves (\ref{pf.Grasp.GraspR.goal}) in Case 1.
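The cancellation can also be checked abstractly: treat the six distinct minors appearing in
(\ref{pf.Grasp.GraspR.side1})--(\ref{pf.Grasp.GraspR.side4}) as free symbols. A quick sketch
using sympy (the names \texttt{D1}, ..., \texttt{E4} are ours, chosen for this check only,
not notation from the text):

```python
import sympy as sp

# Free symbols standing for the six distinct minors of A that occur in
# (side1)-(side4); the names D1..E4 are our own shorthand.
D1, D2, D3, D4, E3, E4 = sp.symbols('D1 D2 D3 D4 E3 E4', nonzero=True)

sum_f = D3 * D4 / (D1 * D2)   # (side1): sum of f(u) over all u covered by v
sum_g = E3 * E4 / (D1 * D2)   # (side2): sum of 1/g(u) over all u covering v
g_v   = D3 / E3               # (side3)
f_v   = D4 / E4               # (side4)

# (goal): g(v) = (1/f(v)) * sum_f / sum_g -- every minor cancels:
assert sp.simplify((1 / f_v) * sum_f / sum_g - g_v) == 0
print("cancellation verified")
```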
% Let us now consider Case 3. In this case, we have $v\neq\left( 1,1\right) $
% and $v=\left( p,q\right) $. Hence, (\ref{pf.Grasp.GraspR.side1}),
% (\ref{pf.Grasp.GraspR.side3}) and (\ref{pf.Grasp.GraspR.side4}) are still
% valid, whereas (\ref{pf.Grasp.GraspR.side2}) gets superseded by the simpler
% equality%
% \begin{equation}
% \sum\limits_{\substack{u\in\widehat{\operatorname*{Rect}\left( p,q\right)
% };\\u\gtrdot v}}\dfrac{1}{g\left( u\right) }=\dfrac{1}{g\left( 1\right)
% }=\dfrac{1}{1}=1. \label{pf.Grasp.GraspR.side2a}%
% \end{equation}
% From here, we can proceed as in Case 1 above (using
% (\ref{pf.Grasp.GraspR.side2a}) instead of (\ref{pf.Grasp.GraspR.side2})), with
% the only difference being that instead of Theorem \ref{thm.pluecker.ptolemy}
% we get to apply the equalities
% \begin{align*}
% & \det\left( A\left[ j+1:j+p+1\mid j+p+q:j+p+q\right] \right) \\
% & =\det\left( A\left[ j+1:j+p+1\mid j+p+q+1:j+p+q+1\right] \right)
% \end{align*}
% and
% \begin{align*}
% \det\left( A\left[ j+1:j+p\mid j+p+q:j+p+q+1\right] \right) &= \det\left( A\left[ j:j+p\mid j+p+q:j+p+q\right] \right)
% \end{align*}
% (which can be easily proven\footnote{\textit{Proof.} We have%
% \begin{align*}
% & \det\left( A\left[ j+1:j+p\mid j+p+q:j+p+q+1\right] \right) \\
% & =\det\left( A\left[ j+1:j+p\mid p+q+j:p+q+j+1\right] \right)
% =\det\left( \underbrace{A\left[ j:j+1\mid j+1:j+p\right] }%
% _{\substack{=A\left[ j:j+p\mid j+p+q:j+p+q\right] \\\text{(by Proposition
% \ref{prop.minors.trivial} \textbf{(c)})}}}\right) \\
% & \ \ \ \ \ \ \ \ \ \ \left( \text{by Proposition \ref{prop.minors.period}
% \textbf{(c)}, applied to }u=p\text{, }v=p+q\text{, }a=j+1\text{, }b=j+p\text{,
% }c=j\text{ and }d=j+1\right) \\
% & =\det\left( A\left[ j:j+p\mid j+p+q:j+p+q\right] \right) ,
% \end{align*}
% qed.}). Thus, (\ref{pf.Grasp.GraspR.goal}) is proven in Case 3.
% Similarly, Case 2 (which differs from case 1 in that
% (\ref{pf.Grasp.GraspR.side1}) gets superseded by the simpler
% equality $
% \sum\limits_{\substack{u\in\widehat{\operatorname*{Rect}\left( p,q\right)
% };\\u\lessdot v}}f\left( u\right) =f\left( 0\right) =1.
% $) can be reduced to the two equalities
% \[
% \det\left( A\left[ j+1:j+1\mid j+1:j+p+1\right] \right) =\det\left(
% A\left[ j+1:j+2\mid j+2:j+p+1\right] \right)
% \]
% (this holds because of Proposition \ref{prop.minors.complete}) and
% \[
% \det\left( A\left[ j+2:j+2\mid j+2:j+p+2\right] \right) =\det\left(
% A\left[ j+1:j+1\mid j+2:j+p+2\right] \right)
% \]
% (this is because of Proposition \ref{prop.minors.trivial} \textbf{(a)}).
% We have now proven (\ref{pf.Grasp.GraspR.goal}) in each of the Cases 1, 2 and
% 3. We leave the proof in Case 4 to the reader (this case is completely
% straightforward, since it has $\left( p,q\right) =v=\left( 1,1\right) $).
% Thus, we now know that (\ref{pf.Grasp.GraspR.goal}) holds in each of the four
% Cases 1, 2, 3 and 4. This completes the proof of (\ref{pf.Grasp.GraspR.goal})
% and, with it, the proof of Proposition \ref{prop.Grasp.GraspR}.
The proofs in the other three cases follow the same lines of argument but are simpler.
Note, however, that it is only in Cases 3 and 4 that we use the fact that
the sequence $\left(
A_{n}\right) _{n\in\mathbb{Z}}$ is ``$\left( p+q\right) $-periodic up to sign'' as opposed to
an arbitrary sequence of length-$p$ column vectors.
\end{proof}
% A remark seems in order, about why we paid so much attention to the
% ``degenerate'' Cases 2, 3 and 4. Indeed, only in Cases 3 and 4 have we used
% the fact that the sequence $\left( A_{n}\right) _{n\in\mathbb{Z}}$ is
% ``$\left( p+q\right) $-periodic up to sign'' rather than just an arbitrary
% sequence of length-$p$ column vectors. Had we left out these seemingly
% straightforward cases, it would have seemed that the proof showed a result too
% good to be true (because it is rather clear that the periodicity in the
% definition of $A_{n}$ for general $n\in\mathbb{Z}$ is needed).
\section{\label{sect.dominance}Dominance of the Grassmannian parametrization}
In this section we prove Proposition~\ref{prop.Grasp.generic}, which states that the
$\mathbb{K}$-labellings obtainable in the form $\operatorname*{Grasp}\nolimits_{0}A$ are
sufficiently diverse to cover everything we need. Before plunging into the details of the
general case, we illustrate the approach we take with an example.
\begin{example}
\label{ex.Grasp.generic}Let $p=q=2$ and
$f\in\mathbb{K}^{\widehat{\operatorname*{Rect}\left( 2,2\right) }}$ be a
generic reduced labelling. We want to construct a matrix $A\in\mathbb{K}%
^{2\times\left( 2+2\right) }$ satisfying $f=\operatorname*{Grasp}%
\nolimits_{0}A$.
Clearly, the condition $f=\operatorname*{Grasp}\nolimits_{0}A$ imposes four
equations on the eight entries of $A$;
%(one for every element of $\operatorname*{Rect}\left( 2,2\right) $).
thus, we are trying to solve an
underdetermined system. However, we can get rid of the superfluous freedom if
we additionally try to ensure that our matrix $A$ has the form
$A = \left( I_{p}\mid B\right) =\left(
\begin{array}
[c]{cccc}%
1 & 0 & x & y\\
0 & 1 & z & w
\end{array}
\right) $ for some
$B=\left(
\begin{array}
[c]{cc}%
x & y\\
z & w
\end{array}
\right) \in\mathbb{K}^{2\times2}$.
Now,%
\begin{align*}
\left( \operatorname*{Grasp}\nolimits_{0}\left( I_{p}\mid B\right) \right)
\left( \left( 1,1\right) \right) & =\dfrac{\det\left( \left( I_{p}\mid
B\right) \left[ 1:1\mid1:3\right] \right) }{\det\left( \left( I_{p}\mid
B\right) \left[ 0:1\mid2:3\right] \right) }=\dfrac{\det\left(
\begin{array}
[c]{cc}%
1 & 0\\
0 & 1
\end{array}
\right) }{\det\left(
\begin{array}
[c]{cc}%
-y & 0\\
-w & 1
\end{array}
\right) }=\dfrac{-1}{y};\\
\left( \operatorname*{Grasp}\nolimits_{0}\left( I_{p}\mid B\right) \right)
\left( \left( 1,2\right) \right) & =\dfrac{\det\left( \left( I_{p}\mid
B\right) \left[ 1:1\mid2:4\right] \right) }{\det\left( \left( I_{p}\mid
B\right) \left[ 0:1\mid3:4\right] \right) }=\dfrac{\det\left(
\begin{array}
[c]{cc}%
0 & x\\
1 & z
\end{array}
\right) }{\det\left(
\begin{array}
[c]{cc}%
-y & x\\
-w & z
\end{array}
\right) }=\dfrac{-x}{wx-yz};\\
\left( \operatorname*{Grasp}\nolimits_{0}\left( I_{p}\mid B\right) \right)
\left( \left( 2,1\right) \right) & =\dfrac{\det\left( \left( I_{p}\mid
B\right) \left[ 1:2\mid2:3\right] \right) }{\det\left( \left( I_{p}\mid
B\right) \left[ 0:2\mid3:3\right] \right) }=\dfrac{\det\left(
\begin{array}
[c]{cc}%
1 & 0\\
0 & 1
\end{array}
\right) }{\det\left(
\begin{array}
[c]{cc}%
-y & 1\\
-w & 0
\end{array}
\right) }=\dfrac{1}{w};\\
\left( \operatorname*{Grasp}\nolimits_{0}\left( I_{p}\mid B\right) \right)
\left( \left( 2,2\right) \right) & =\dfrac{\det\left( \left( I_{p}\mid
B\right) \left[ 1:2\mid3:4\right] \right) }{\det\left( \left( I_{p}\mid
B\right) \left[ 0:2\mid4:4\right] \right) }=\dfrac{\det\left(
\begin{array}
[c]{cc}%
1 & x\\
0 & z
\end{array}
\right) }{\det\left(
\begin{array}
[c]{cc}%
-y & 1\\
-w & 0
\end{array}
\right) }=\dfrac{z}{w}.
\end{align*}
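These four determinant ratios can be double-checked symbolically. A sketch using sympy, with
the numerator and denominator matrices copied from the display above:

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w')

# (numerator, denominator) pairs from the four displayed computations:
pairs = [
    (sp.Matrix([[1, 0], [0, 1]]), sp.Matrix([[-y, 0], [-w, 1]])),
    (sp.Matrix([[0, x], [1, z]]), sp.Matrix([[-y, x], [-w, z]])),
    (sp.Matrix([[1, 0], [0, 1]]), sp.Matrix([[-y, 1], [-w, 0]])),
    (sp.Matrix([[1, x], [0, z]]), sp.Matrix([[-y, 1], [-w, 0]])),
]
expected = [-1 / y, -x / (w * x - y * z), 1 / w, z / w]

# Each ratio of determinants agrees with the claimed rational function:
for (num, den), val in zip(pairs, expected):
    assert sp.simplify(num.det() / den.det() - val) == 0
print("all four ratios confirmed")
```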
The requirement $f=\operatorname*{Grasp}\nolimits_{0}\left( I_{p}\mid
B\right) $ therefore translates into the following system, which is solved by elimination
(in order $w,y,z,x$) as shown:
\bigskip
\[
\left\{
\begin{array}
[c]{lcl}%
f\left( \left( 1,1\right) \right) & = & \dfrac{-1}{y};\\
f\left( \left( 1,2\right) \right) & = & \dfrac{-x}{wx-yz};\\
f\left( \left( 2,1\right) \right) & = & \dfrac{1}{w};\\[9pt]
f\left( \left( 2,2\right) \right) & = & \dfrac{z}{w}%
\end{array}
\right. .
\qquad \implies \qquad
%
\left\{
\begin{array}
[c]{lcl}%
w & = & \dfrac{1}{f((2,1))};\\
x & = & \dfrac{-f((1,2))f((2,2))}{[f((1,2))+f((2,1))]f((1,1))};\\[10pt]
y & = & \dfrac{-1}{f((1,1))};\\[12pt]
z & = & \dfrac{f((2,2))}{f((2,1))}%
\end{array}
\right. .
\]
% This system can be solved by elimination: First, compute $w$ using $f\left(
% \left( 2,1\right) \right) =\dfrac{1}{w}$, obtaining $w=\dfrac{1}{f\left(
% \left( 2,1\right) \right) }$; then, compute $y$ using $f\left( \left(
% 1,1\right) \right) =\dfrac{-1}{y}$, obtaining $y=\dfrac{-1}{f\left( \left(
% 1,1\right) \right) }$; then, compute $z$ using $f\left( \left( 2,2\right)
% \right) =\dfrac{z}{w}$ and the already eliminated $w$, obtaining
% $z=\dfrac{f\left( \left( 2,2\right) \right) }{f\left( \left( 2,1\right)
% \right) }$; finally, compute $x$ using $f\left( \left( 1,2\right) \right)
% =\dfrac{-x}{wx-yz}$ and the already eliminated $w,y,z$, obtaining
% $x=\dfrac{-f\left( \left( 1,2\right) \right) f\left( \left( 2,2\right)
% \right) }{\left( f\left( \left( 1,2\right) \right) +f\left( \left(
% 2,1\right) \right) \right) f\left( \left( 1,1\right) \right) }$.
While the denominators in these fractions can vanish, leading to underdetermination or
unsolvability, this will not happen for \textbf{generic} $f$.
% This approach to solving $f=\operatorname*{Grasp}\nolimits_{0}A$ generalizes
% to arbitrary $p$ and $q$, and motivates the following proof.
\end{example}
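The solved values in the example above can be sanity-checked by substituting them back into
the system, using exact rational arithmetic. The numeric values of $f$ below are our own
arbitrary generic choices; this is a sketch, not part of the proof:

```python
from fractions import Fraction as F

# Arbitrary generic values for the reduced labelling f (our choice):
f11, f12, f21, f22 = F(2), F(3), F(5), F(7)

# The solved entries of B, exactly as in the displayed system:
w = 1 / f21
y = -1 / f11
z = f22 / f21
x = -f12 * f22 / ((f12 + f21) * f11)

# Substituting back into the four equations recovers f:
assert -1 / y == f11
assert -x / (w * x - y * z) == f12
assert 1 / w == f21
assert z / w == f22
print("solution checks out")
```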
We apply the same technique to the general proof of Proposition \ref{prop.Grasp.generic}.
For any fixed $f\in \mathbb{K}^{\operatorname*{Rect}\left( p,q\right) }$, solving the
equation $f=\operatorname*{Grasp}\nolimits_{0}A$ for $A\in \mathbb{K}^{p\times\left(
p+q\right) }$ can be regarded as a system of $pq$ equations in $p\left( p+q\right) $
unknowns. While this (nonlinear) system is usually underdetermined, we can restrict
the entries of $A$ by requiring that the leftmost $p$ columns of $A$ form the $p\times p$
identity matrix, leaving us with only $pq$ unknowns; for $f$
sufficiently generic, the resulting system will be uniquely solvable by ``triangular
elimination'' (i.e., there is an equation containing only one unknown; then, when this
unknown is eliminated, the resulting system again contains an equation with only one
unknown, and once this one is eliminated, one gets a further system containing an equation
with only one unknown, and so forth).
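The mechanics of such a ``triangular elimination'' can be illustrated on a toy rational
system of our own devising (not taken from the text): the first equation involves only one
unknown, and each subsequent equation introduces exactly one new unknown.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f1, f2, f3 = sp.symbols('f1 f2 f3', positive=True)

# A toy "triangular" rational system (our own illustration): equation i
# involves only the first i unknowns.
eqs = [sp.Eq(1 / x, f1),
       sp.Eq(x / y, f2),
       sp.Eq((x + y) / z, f3)]

# Eliminate the unknowns one at a time, in the order x, y, z; at each step
# the already-found values are substituted in, leaving one unknown.
sol = {}
for eq, var in zip(eqs, (x, y, z)):
    sol[var] = sp.simplify(sp.solve(eq.subs(sol), var)[0])

assert sp.simplify(sol[z] - (f2 + 1) / (f1 * f2 * f3)) == 0
print("eliminated in order x, y, z")
```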
% -- like a triangular system of linear equations with
% nonzero entries on the diagonal, but without the linearity. Of course, this is not a
% complete proof because the applicability of ``triangular elimination'' has to be proven, not
% merely claimed.
We will sketch the ideas of this proof, leaving all straightforward details to the
reader. We word the argument using algebraic properties of families of rational functions
instead of using the algorithmic nature of ``triangular elimination'' (similarly to how most
applications of linear algebra use the language of bases of vector spaces rather than talk
about the process of solving systems by Gaussian elimination). While this clarity comes at
the cost of a slight disconnect from the motivation of the proof, we hope that the reader
will still see which way the wind blows. We first introduce some notation to capture the essence
of ``triangular elimination'' without having to talk about actually moving around variables
in equations.
\begin{definition}
\label{def.algebraic.triangularity.short}Let $\mathbb{F}$ be a field. Let
$\mathbf{P}$ be a finite set.
\textbf{(a)} Let $x_{\mathbf{p}}$ be a new symbol for every $\mathbf{p}%
\in\mathbf{P}$. We will denote by $\mathbb{F}\left( x_{\mathbf{P}}\right) $
the field of rational functions over $\mathbb{F}$ in the indeterminates
$x_{\mathbf{p}}$ with $\mathbf{p}$ ranging over all elements of $\mathbf{P}$
(hence altogether $\left\vert \mathbf{P}\right\vert $ indeterminates). We also
will denote by $\mathbb{F}\left[ x_{\mathbf{P}}\right] $ the ring of
polynomials over $\mathbb{F}$ in the indeterminates $x_{\mathbf{p}}$ with
$\mathbf{p}$ ranging over all elements of $\mathbf{P}$. (Thus, $\mathbb{F}%
\left( x_{\mathbf{P}}\right) =\mathbb{F}\left( x_{\mathbf{p}_{1}%
},x_{\mathbf{p}_{2}},...,x_{\mathbf{p}_{n}}\right) $ and $\mathbb{F}\left[
x_{\mathbf{P}}\right] =\mathbb{F}\left[ x_{\mathbf{p}_{1}},x_{\mathbf{p}%
_{2}},...,x_{\mathbf{p}_{n}}\right] $ if $\mathbf{P}$ is written in the form
$\mathbf{P}=\left\{ \mathbf{p}_{1},\mathbf{p}_{2},...,\mathbf{p}_{n}\right\}
$.) The symbols $x_{\mathbf{p}}$ are understood to be distinct, and are used
as commuting indeterminates. We regard $\mathbb{F}\left[ x_{\mathbf{P}%
}\right] $ as a subring of $\mathbb{F}\left( x_{\mathbf{P}}\right) $, and
$\mathbb{F}\left( x_{\mathbf{P}}\right) $ as the field of quotients of
$\mathbb{F}\left[ x_{\mathbf{P}}\right] $.
\textbf{(b)} If $\mathbf{Q}$ is a subset of $\mathbf{P}$, then $\mathbb{F}%
\left( x_{\mathbf{Q}}\right) $ can be canonically embedded into
$\mathbb{F}\left( x_{\mathbf{P}}\right) $, and $\mathbb{F}\left[
x_{\mathbf{Q}}\right] $ can be canonically embedded into $\mathbb{F}\left[
x_{\mathbf{P}}\right] $. We regard these embeddings as inclusions.
\textbf{(c)} Let $\mathbb{K}$ be a field extension of $\mathbb{F}$. Let $f$ be
an element of $\mathbb{F}\left( x_{\mathbf{P}}\right) $. If $\left(
a_{\mathbf{p}}\right) _{\mathbf{p}\in\mathbf{P}}\in\mathbb{K}^{\mathbf{P}}$
is a family of elements of $\mathbb{K}$ indexed by elements of $\mathbf{P}$,
then we let $f\left( \left( a_{\mathbf{p}}\right) _{\mathbf{p}\in
\mathbf{P}}\right) $ denote the element of $\mathbb{K}$ obtained by
substituting $a_{\mathbf{p}}$ for $x_{\mathbf{p}}$ for each $\mathbf{p}%
\in\mathbf{P}$ in the rational function $f$. This $f\left( \left(
a_{\mathbf{p}}\right) _{\mathbf{p}\in\mathbf{P}}\right) $ is defined only if
the substitution does not render the denominator equal to $0$. If $\mathbb{K}$
is infinite, this shows that $f\left( \left( a_{\mathbf{p}}\right)
_{\mathbf{p}\in\mathbf{P}}\right) $ is defined for almost all $\left(
a_{\mathbf{p}}\right) _{\mathbf{p}\in\mathbf{P}}\in\mathbb{K}^{\mathbf{P}}$
(with respect to the Zariski topology).
\textbf{(d)} Let $\mathbf{P}$ now be a finite totally ordered set, and let
$\vartriangleleft$ be the smaller-than relation of $\mathbf{P}$. For every
$\mathbf{p}\in\mathbf{P}$, let $\mathbf{p}\Downarrow$ denote the subset
$\left\{ \mathbf{v}\in\mathbf{P}\ \mid\ \mathbf{v}\vartriangleleft
\mathbf{p}\right\} $ of $\mathbf{P}$. For every $\mathbf{p}\in\mathbf{P}$,
let $Q_{\mathbf{p}}$ be an element of $\mathbb{F}\left( x_{\mathbf{P}%
}\right) $.
We say that the family $\left( Q_{\mathbf{p}}\right) _{\mathbf{p}%
\in\mathbf{P}}$ is $\mathbf{P}$\textit{-triangular} if and only if the
following condition holds:
\textit{Algebraic triangularity condition:} For every $\mathbf{p}\in
\mathbf{P}$, there exist elements $\alpha_{\mathbf{p}}$, $\beta_{\mathbf{p}}$,
$\gamma_{\mathbf{p}}$, $\delta_{\mathbf{p}}$ of $\mathbb{F}\left(
x_{\mathbf{p}\Downarrow}\right) $ such that $\alpha_{\mathbf{p}}%
\delta_{\mathbf{p}}-\beta_{\mathbf{p}}\gamma_{\mathbf{p}}\neq0$ and
$Q_{\mathbf{p}}=\dfrac{\alpha_{\mathbf{p}}x_{\mathbf{p}}+\beta_{\mathbf{p}}%
}{\gamma_{\mathbf{p}}x_{\mathbf{p}}+\delta_{\mathbf{p}}}$.\ \ \ \
\footnotetext{Notice that the fraction $\dfrac{\alpha_{\mathbf{p}%
}x_{\mathbf{p}}+\beta_{\mathbf{p}}}{\gamma_{\mathbf{p}}x_{\mathbf{p}}%
+\delta_{\mathbf{p}}}$ is well-defined for any four elements $\alpha
_{\mathbf{p}}$, $\beta_{\mathbf{p}}$, $\gamma_{\mathbf{p}}$, $\delta
_{\mathbf{p}}$ of $\mathbb{F}\left( x_{\mathbf{p}\Downarrow}\right) $ such
that $\alpha_{\mathbf{p}}\delta_{\mathbf{p}}-\beta_{\mathbf{p}}\gamma
_{\mathbf{p}}\neq0$. (Indeed, $\gamma_{\mathbf{p}}x_{\mathbf{p}}%
+\delta_{\mathbf{p}}\neq0$ in this case, as can easily be checked.)}
\end{definition}
We will use $\mathbf{P}$-triangularity via the following fact:
\begin{lemma}
\label{lem.algebraic.triangularity.short}Let $\mathbb{F}$ be a field. Let
$\mathbf{P}$ be a finite totally ordered set. For every $\mathbf{p}%
\in\mathbf{P}$, let $Q_{\mathbf{p}}$ be an element of $\mathbb{F}\left(
x_{\mathbf{P}}\right) $. Assume that $\left( Q_{\mathbf{p}}\right)
_{\mathbf{p}\in\mathbf{P}}$ is a $\mathbf{P}$-triangular family. Then:
\textbf{(a)} The family $\left( Q_{\mathbf{p}}\right) _{\mathbf{p}%
\in\mathbf{P}}\in\left( \mathbb{F}\left( x_{\mathbf{P}}\right) \right)
^{\mathbf{P}}$ is algebraically independent (over $\mathbb{F}$).
\textbf{(b)} There exists a $\mathbf{P}$-triangular family $\left(
R_{\mathbf{p}}\right) _{\mathbf{p}\in\mathbf{P}}\in\left( \mathbb{F}\left(
x_{\mathbf{P}}\right) \right) ^{\mathbf{P}}$ such that every $\mathbf{q}%
\in\mathbf{P}$ satisfies $Q_{\mathbf{q}}\left( \left( R_{\mathbf{p}}\right)
_{\mathbf{p}\in\mathbf{P}}\right) =x_{\mathbf{q}}$.
\end{lemma}
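To make the lemma concrete, here is a minimal numerical sketch (in Python, purely illustrative and not part of the formal development): a toy $\mathbf{P}$-triangular family on the two-element chain $\mathbf{P}=\left\{ 1<2\right\} $ over $\mathbb{Q}$, together with the back-substitution that realizes part \textbf{(b)}. The functions \texttt{Q1} and \texttt{Q2} are hypothetical examples chosen for illustration.

```python
from fractions import Fraction as F

# A toy P-triangular family on the chain P = (1 < 2):
#   Q_1 = (2*x1 + 1) / (x1 + 3)     (coefficients in the ground field)
#   Q_2 = (x1*x2 + 1) / (x2 + x1)   (coefficients involving only x1)
# Each Q_p is a Moebius function of x_p, and in each case
# alpha*delta - beta*gamma != 0 (namely 5 and x1^2 - 1).

def Q1(x):
    x1, x2 = x
    return (2*x1 + 1) / (x1 + 3)

def Q2(x):
    x1, x2 = x
    return (x1*x2 + 1) / (x2 + x1)

def invert_triangular(y1, y2):
    """Solve Q1(r) = y1 and Q2(r) = y2 by back-substitution along the
    total order: first r1 (a Moebius inversion over the ground field),
    then r2 (a Moebius inversion whose coefficients use r1)."""
    # (2 r1 + 1)/(r1 + 3) = y1   =>   r1 = (3 y1 - 1)/(2 - y1)
    r1 = (3*y1 - 1) / (2 - y1)
    # (r1 r2 + 1)/(r2 + r1) = y2   =>   r2 = (r1 y2 - 1)/(r1 - y2)
    r2 = (r1*y2 - 1) / (r1 - y2)
    return r1, r2

y1, y2 = F(1, 2), F(5, 7)
r = invert_triangular(y1, y2)
assert Q1(r) == y1 and Q2(r) == y2
```

The back-substitution succeeds precisely because the coefficients of each $Q_{\mathbf{p}}$ involve only smaller variables, which is the triangularity being exploited in part \textbf{(b)}.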
\begin{proof}
The proof of this lemma -- an exercise in elementary algebra and
induction -- is omitted; it can be found in \cite[Lemma 15.3]{grinberg-roby-arxiv}.
\end{proof}
Armed with this definition and lemma, we are ready to tackle the
proof of Proposition \ref{prop.Grasp.generic}, which states that $\mathbb{K}$-labellings can be generically
parametrized by $\operatorname*{Grasp}\nolimits_{0}A$.
\begin{proof}
[Proof of Proposition \ref{prop.Grasp.generic}.]Let $\mathbb{F}$ be
the prime field of $\mathbb{K}$. (This means either $\mathbb{Q}$ or
$\mathbb{F}_{p}$ depending on the characteristic of $\mathbb{K}$.) In the
following, the word ``algebraically independent'' will always mean
``algebraically independent over $\mathbb{F}$'' (rather than over $\mathbb{K}$
or over $\mathbb{Z}$).
Let $\mathbf{P}$ be a totally ordered set such that
$
\mathbf{P}=\left\{ 1,2,...,p\right\} \times\left\{ 1,2,...,q\right\}
$ as sets,
and such that%
\[
\left( i,k\right) \trianglelefteq\left( i^{\prime},k^{\prime}\right)
\text{ for all }\left( i,k\right) \in\mathbf{P}\text{ and }\left(
i^{\prime},k^{\prime}\right) \in\mathbf{P}\text{ satisfying }\left( i\geq
i^{\prime}\text{ and }k\leq k^{\prime}\right) ,
\]
where $\trianglelefteq$ denotes the smaller-or-equal relation of $\mathbf{P}$.
Such a $\mathbf{P}$ clearly exists (in fact, there usually exist several
such $\mathbf{P}$, and it doesn't matter which of them we choose).
We denote the smaller-than relation of $\mathbf{P}$ by
$\vartriangleleft$. We will later see what this total order is good for
(intuitively, it is an order in which the variables can be eliminated; in
other words, it makes our system behave like a triangular matrix rather than
like a triangular matrix with permuted columns), but for now let us notice
that it is generally not compatible with $\operatorname*{Rect}\left(
p,q\right) $.
Let $Z:\left\{ 1,2,...,q\right\} \rightarrow\left\{ 1,2,...,q\right\} $
denote the map which sends every \newline
$k\in\left\{ 1,2,...,q-1\right\} $ to $k+1$
and sends $q$ to $1$. Thus, $Z$ is a permutation in the symmetric group
$S_{q}$, and can be written in cycle notation as $\left( 1,2,...,q\right) $.
Consider the field $\mathbb{F}\left( x_{\mathbf{P}}\right) $ and the ring
$\mathbb{F}\left[ x_{\mathbf{P}}\right] $ defined as in Definition
\ref{def.algebraic.triangularity.short}.
In order to prove Proposition \ref{prop.Grasp.generic}, it is enough to show that there exists a matrix
$\widetilde{D}\in\left( \mathbb{F}\left( x_{\mathbf{P}}\right) \right)
^{p\times\left( p+q\right) }$ satisfying%
\begin{equation}
x_{\mathbf{p}}=\left( \operatorname*{Grasp}\nolimits_{0}\widetilde{D}\right)
\left( \mathbf{p}\right) \ \ \ \ \ \ \ \ \ \ \text{for every }\mathbf{p}%
\in\mathbf{P}\text{.} \label{pf.Grasp.generic.short.reduce-to-rational}%
\end{equation}
For then we can obtain a matrix $A\in\mathbb{K}^{p\times\left( p+q\right) }$
satisfying $f=\operatorname*{Grasp}\nolimits_{0}A$ for almost every
$f\in\mathbb{K}^{\operatorname*{Rect}\left( p,q\right) }$ simply by
substituting $f\left( \mathbf{p}\right) $ for every $x_{\mathbf{p}}$ in all
entries of the matrix $\widetilde{D}$. (The ``almost'' accounts for the
possibility of some denominators becoming $0$ under this substitution.)
Now define a matrix $C\in\left( \mathbb{F}\left[ x_{\mathbf{P}}\right] \right)
^{p\times q}$ by
\[
C=\left( x_{\left( i,Z\left( k\right) \right) }\right) _{1\leq i\leq
p,\ 1\leq k\leq q}.
\]
This is simply a matrix whose entries are all the indeterminates
$x_{\mathbf{p}}$ of the polynomial ring $\mathbb{F}\left[ x_{\mathbf{P}%
}\right] $, albeit in a strange order (tailored to make
the ``triangularity'' argument work nicely). This matrix $C$ is not
directly related to the $\widetilde{D}$ we will construct, but will be used in
its construction.
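As an illustration of this construction (not needed for the proof), here is a small sketch building the index pattern of $C$ for $p=2$ and $q=3$; an indeterminate $x_{\left( i,j\right) }$ is represented simply by its index pair $\left( i,j\right) $.

```python
p, q = 2, 3

def Z(k):
    """The cyclic shift (1,2,...,q): sends k to k+1 for k < q, and q to 1."""
    return k + 1 if k < q else 1

# Entry (i,k) of C is the indeterminate x_{(i, Z(k))}.
C = [[(i, Z(k)) for k in range(1, q + 1)] for i in range(1, p + 1)]

# Row i of C lists x_{(i,Z(1))}, ..., x_{(i,Z(q))}; for q = 3 the
# second indices read 2, 3, 1:
assert C[0] == [(1, 2), (1, 3), (1, 1)]
# Every indeterminate x_{(i,k)} with (i,k) in {1..p} x {1..q} occurs
# exactly once in C:
assert sorted(e for row in C for e in row) == \
    [(i, k) for i in range(1, p + 1) for k in range(1, q + 1)]
```

This makes visible the ``strange order'' mentioned above: each row of $C$ is a cyclic shift of the natural order of the second indices.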
For every $\left( i,k\right) \in\mathbf{P}$, define elements
$\mathfrak{N}_{\left( i,k\right) },\mathfrak{D}_{\left( i,k\right) }\in\mathbb{F}\left[ x_{\mathbf{P}%
}\right] $ by%
\begin{align}
\mathfrak{N}_{\left( i,k\right) } & = \det\left( \left( I_{p}\mid C\right)
\left[ 1:i\mid i+k-1:p+k\right] \right) ,
\label{lem.Grasp.generic.short.Ndef}\\
\mathfrak{D}_{\left( i,k\right) } & = \det\left( \left( I_{p}\mid C\right)
\left[ 0:i\mid i+k:p+k\right] \right) .
\label{lem.Grasp.generic.short.Ddef}%
\end{align}
Our plan from here is the following:
\textit{Step 1:} We will find alternate expressions for the polynomials
$\mathfrak{N}_{\left( i,k\right) }$ and $\mathfrak{D}_{\left( i,k\right)
}$ which will give us a better idea of what variables occur in these polynomials.
\textit{Step 2:} We will show that $\mathfrak{N}_{\left( i,k\right) }$ and
$\mathfrak{D}_{\left( i,k\right) }$ are nonzero for all $\left( i,k\right)
\in\mathbf{P}$.
\textit{Step 3:} We will define a $Q_{\mathbf{p}}\in\mathbb{F}\left(
x_{\mathbf{P}}\right) $ for every $\mathbf{p}\in\mathbf{P}$ by $Q_{\mathbf{p}%
}=\dfrac{\mathfrak{N}_{\mathbf{p}}}{\mathfrak{D}_{\mathbf{p}}}$, and we will
show that $Q_{\mathbf{p}}=\left( \operatorname*{Grasp}\nolimits_{0}\left(
I_{p}\mid C\right) \right) \left( \mathbf{p}\right) $.
\textit{Step 4:} We will prove that the family $\left( Q_{\mathbf{p}}\right)
_{\mathbf{p}\in\mathbf{P}}\in\left( \mathbb{F}\left( x_{\mathbf{P}}\right)
\right) ^{\mathbf{P}}$ is $\mathbf{P}$-triangular.
\textit{Step 5:} We will use Lemma \ref{lem.algebraic.triangularity.short}
\textbf{(b)} and the result of Step 4 to find a matrix $\widetilde{D}%
\in\left( \mathbb{F}\left( x_{\mathbf{P}}\right) \right) ^{p\times\left(
p+q\right) }$ satisfying (\ref{pf.Grasp.generic.short.reduce-to-rational}).
We now fill in a few details for each step.
\textit{Details of Step 1:} We introduce two more pieces of notation
pertaining to matrices:
\begin{itemize}
\item If $\ell\in\mathbb{N}$, and if $A_{1}$, $A_{2}$, $...$, $A_{k}$ are
several matrices with $\ell$ rows each, then $\left( A_{1}\mid A_{2}%
\mid...\mid A_{k}\right) $ will denote the matrix obtained by starting with
an (empty) $\ell\times0$-matrix, then attaching the matrix $A_{1}$ to it on
the right, then attaching the matrix $A_{2}$ to the result on the right, etc.,
and finally attaching the matrix $A_{k}$ to the result on the right.
For example,
$\left( I_{2}\mid\left(
\begin{array}
[c]{cc}%
1 & -2\\
3 & 0
\end{array}
\right) \right) =\left(
\begin{array}
[c]{cccc}%
1 & 0 & 1 & -2\\
0 & 1 & 3 & 0
\end{array}
\right) $.
\item If $\ell\in\mathbb{N}$, if $B$ is a matrix with $\ell$ rows, and if
$i_{1}$, $i_{2}$, $...$, $i_{k}$ are some elements of $\left\{ 1,2,...,\ell
\right\} $, then $\operatorname*{rows}\nolimits_{i_{1},i_{2},...,i_{k}}B$
will denote the matrix whose rows (from top to bottom) are the rows labelled
$i_{1}$, $i_{2}$, $...$, $i_{k}$ of the matrix $B$.
\end{itemize}
We will use without proof a standard fact about determinants of block matrices:
\begin{itemize}
\item Given a commutative ring $\mathbb{L}$, two nonnegative integers $a$ and
$b$ satisfying $a\geq b$, and a matrix $U\in\mathbb{L}^{a\times b}$, we have%
\begin{equation}
\det\left( \left(
\begin{array}
[c]{c}%
I_{a-b}\\
0_{b\times\left( a-b\right) }%
\end{array}
\right) \mid U\right) =\det\left( \operatorname*{rows}%
\nolimits_{a-b+1,a-b+2,...,a}U\right)
\label{pf.Grasp.generic.short.step1.block1}%
\end{equation}
and%
\begin{equation}
\det\left( \left(
\begin{array}
[c]{c}%
0_{b\times\left( a-b\right) }\\
I_{a-b}%
\end{array}
\right) \mid U\right) =\left( -1\right) ^{b\left( a-b\right) }%
\det\left( \operatorname*{rows}\nolimits_{1,2,...,b}U\right) .
\label{pf.Grasp.generic.short.step1.block2}%
\end{equation}
(Here, $0_{u\times v}$ denotes the $u\times v$ zero matrix for all
$u\in\mathbb{N}$ and $v\in\mathbb{N}$, and $\left(
\begin{array}
[c]{c}%
I_{a-b}\\
0_{b\times\left( a-b\right) }%
\end{array}
\right) $ and $\left(
\begin{array}
[c]{c}%
0_{b\times\left( a-b\right) }\\
I_{a-b}%
\end{array}
\right) $ are to be read as block matrices.)
\end{itemize}
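These two identities are easy to check numerically. The following sketch (illustrative only; the matrix $U$ is arbitrary) verifies them for $a=4$ and $b=2$, using a naive Leibniz-formula determinant.

```python
from itertools import permutations

def det(M):
    """Determinant via the Leibniz formula (fine for tiny matrices)."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        sign = 1
        # sign of sigma = (-1)^(number of inversions)
        for a_ in range(n):
            for b_ in range(a_ + 1, n):
                if sigma[a_] > sigma[b_]:
                    sign = -sign
        prod = 1
        for a_ in range(n):
            prod *= M[a_][sigma[a_]]
        total += sign * prod
    return total

def hconcat(A, B):
    """Attach B to A on the right (the (A | B) notation above)."""
    return [ra + rb for ra, rb in zip(A, B)]

a, b = 4, 2
U = [[1, 5], [2, -3], [7, 4], [0, 6]]  # an arbitrary a x b matrix

# Left block (I_{a-b} stacked over 0_{b x (a-b)}):
L1 = [[1 if r == c else 0 for c in range(a - b)] for r in range(a)]
assert det(hconcat(L1, U)) == det([U[r] for r in range(a - b, a)])

# Left block (0_{b x (a-b)} stacked over I_{a-b}):
L2 = [[1 if r - b == c else 0 for c in range(a - b)] for r in range(a)]
assert det(hconcat(L2, U)) == (-1) ** (b * (a - b)) * det([U[r] for r in range(b)])
```

The first identity is just block triangularity; the second differs from it by the sign of the permutation that moves the identity block to the top.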
Using this, we can rewrite $\mathfrak{N}_{\left( i,k\right) }$ as follows:%
\begin{align}
\mathfrak{N}_{\left( i,k\right) } & =\det\left( \left( I_{p}\mid
C\right) \left[ 1:i\mid i+k-1:p+k\right] \right) \nonumber\\
& =\det\left( \left(
\begin{array}
[c]{c}%
I_{i-1}\\
0_{\left( p-\left( i-1\right) \right) \times\left( i-1\right) }%
\end{array}
\right) \ \mid\ \left( I_{p}\mid C\right) \left[ i+k-1:p+k\right] \right)
\nonumber\\
& =\det\left( \operatorname*{rows}\nolimits_{i,i+1,...,p}\left( \left(
I_{p}\mid C\right) \left[ i+k-1:p+k\right] \right) \right)
\ \ \ \ \ \ \ \ \ \ \left( \text{by
(\ref{pf.Grasp.generic.short.step1.block1})}\right) .
\label{pf.Grasp.generic.short.step1.N}%
\end{align}
Also,%
\begin{align*}
& \left( I_{p}\mid C\right) \left[ 0:i\mid i+k:p+k\right] \\
& =\left( \underbrace{\left( I_{p}\mid C\right) _{0}}_{\substack{=\left(
-1\right) ^{p-1}C_{q}\\\text{(due to Definition \ref{def.minors}
\textbf{(b)})}}}\ \mid\ \left(
\begin{array}
[c]{c}%
I_{i-1}\\
0_{\left( p-\left( i-1\right) \right) \times\left( i-1\right) }%
\end{array}
\right) \ \mid\ \left( I_{p}\mid C\right) \left[ i+k:p+k\right] \right)
\\
& =\left( \left( -1\right) ^{p-1}C_{q}\ \mid\ \left(
\begin{array}
[c]{c}%
I_{i-1}\\
0_{\left( p-\left( i-1\right) \right) \times\left( i-1\right) }%
\end{array}
\right) \ \mid\ \left( I_{p}\mid C\right) \left[ i+k:p+k\right] \right)
,
\end{align*}
whence%
\begin{align}
\mathfrak{D}_{\left( i,k\right) } & =\det\left( \left( I_{p}\mid C\right) \left[ 0:i\mid i+k:p+k\right]
\right) \nonumber\\
& =\det\left( \left( -1\right) ^{p-1}C_{q}\ \mid\ \left(
\begin{array}
[c]{c}%
I_{i-1}\\
0_{\left( p-\left( i-1\right) \right) \times\left( i-1\right) }%
\end{array}
\right) \ \mid\ \left( I_{p}\mid C\right) \left[ i+k:p+k\right] \right)
\nonumber\\
& =\left( -1\right) ^{p-1}\det\left( C_{q}\ \mid\ \left(
\begin{array}
[c]{c}%
I_{i-1}\\
0_{\left( p-\left( i-1\right) \right) \times\left( i-1\right) }%
\end{array}
\right) \ \mid\ \left( I_{p}\mid C\right) \left[ i+k:p+k\right] \right)
\nonumber\\
& =\underbrace{\left( -1\right) ^{p-1}\left( -1\right) ^{i-1}}_{=\left(
-1\right) ^{p-i}}\det\left( \left(
\begin{array}
[c]{c}%
I_{i-1}\\
0_{\left( p-\left( i-1\right) \right) \times\left( i-1\right) }%
\end{array}
\right) \ \mid\ C_{q}\ \mid\ \left( I_{p}\mid C\right) \left[
i+k:p+k\right] \right) \nonumber\\
& \ \ \ \ \ \ \ \ \ \ \left(
\begin{array}
[c]{c}%
\text{since permuting the columns of a matrix multiplies the}\\
\text{determinant by the sign of the permutation}%
\end{array}
\right) \nonumber\\
& =\left( -1\right) ^{p-i}
\det\left( \left(
\begin{array}
[c]{c}%
I_{i-1}\\
0_{\left( p-\left( i-1\right) \right) \times\left( i-1\right) }%
\end{array}
\right) \ \mid\ C_{q}\ \mid\ \left( I_{p}\mid C\right) \left[
i+k:p+k\right] \right) \nonumber\\
& =\left( -1\right) ^{p-i}\det\left( \operatorname*{rows}%
\nolimits_{i,i+1,...,p}\left( C_{q}\ \mid\ \left( I_{p}\mid C\right)
\left[ i+k:p+k\right] \right) \right) \ \ \ \ \ \ \ \ \ \ \left(
\text{by (\ref{pf.Grasp.generic.short.step1.block1})}\right) .
\label{pf.Grasp.generic.short.step1.D}%
\end{align}
Although these alternative formulas (\ref{pf.Grasp.generic.short.step1.N})
and (\ref{pf.Grasp.generic.short.step1.D}) for $\mathfrak{N}_{\left(
i,k\right) }$ and $\mathfrak{D}_{\left( i,k\right) }$ are not shorter
than the definitions, they involve smaller matrices (unless $i=1$)
and are more useful in understanding the monomials appearing in $\mathfrak{N}%
_{\left( i,k\right) }$ and $\mathfrak{D}_{\left( i,k\right) }$.
\textit{Details of Step 2:} We claim that $\mathfrak{N}_{\left( i,k\right)
}$ and $\mathfrak{D}_{\left( i,k\right) }$ are nonzero for all $\left(
i,k\right) \in\mathbf{P}$.
\textit{Proof.} Let $\left( i,k\right) \in\mathbf{P}$. Let us first check
that $\mathfrak{N}_{\left( i,k\right) }$ is nonzero. This follows from
observing that, if $0$'s and $1$'s are substituted for the indeterminates
$x_{\mathbf{q}}$ in an appropriate way, then the columns of the matrix
$\left( I_{p}\mid C\right) \left[ 1:i\mid i+k-1:p+k\right] $ become the
standard basis vectors of $\mathbb{F}^{p}$ (in some order), and so the
determinant $\mathfrak{N}_{\left( i,k\right) }$ of this matrix becomes
$\pm1$, which is nonzero.
Similarly, $\mathfrak{D}_{\left( i,k\right) }$ is nonzero.
\textit{Details of Step 3:} Define $Q_{\mathbf{p}}\in\mathbb{F}\left(
x_{\mathbf{P}}\right) $ for every $\mathbf{p}\in\mathbf{P}$ by $Q_{\mathbf{p}%
}=\dfrac{\mathfrak{N}_{\mathbf{p}}}{\mathfrak{D}_{\mathbf{p}}}$. This is
well-defined because Step 2 has shown that $\mathfrak{D}_{\mathbf{p}}$ is
nonzero. Moreover, it is easy to see that every $\mathbf{p} = \left( i,k\right)
\in\mathbf{P}$ satisfies%
\begin{equation}
Q_{\left( i,k\right) }=\left( \operatorname*{Grasp}\nolimits_{0}\left(
I_{p}\mid C\right) \right) \left( \left( i,k\right) \right)\,, \text{ i.e., }
Q_{\mathbf{p}}=\left( \operatorname*{Grasp}\nolimits_{0}\left( I_{p}\mid
C\right) \right) \left( \mathbf{p}\right) .
\label{pf.Grasp.generic.short.step3}%
\end{equation}
\textit{Details of Step 4:} To prove that the family $\left(
Q_{\mathbf{p}}\right) _{\mathbf{p}\in\mathbf{P}}\in\left( \mathbb{F}\left(
x_{\mathbf{P}}\right) \right) ^{\mathbf{P}}$ is $\mathbf{P}$-triangular, we need to show that
for every $\mathbf{p}\in\mathbf{P}$, there exist elements $\alpha_{\mathbf{p}%
}$, $\beta_{\mathbf{p}}$, $\gamma_{\mathbf{p}}$, $\delta_{\mathbf{p}}$ of
$\mathbb{F}\left( x_{\mathbf{p}\Downarrow}\right) $ such that $\alpha
_{\mathbf{p}}\delta_{\mathbf{p}}-\beta_{\mathbf{p}}\gamma_{\mathbf{p}}\neq0$
and $Q_{\mathbf{p}}=\dfrac{\alpha_{\mathbf{p}}x_{\mathbf{p}}+\beta
_{\mathbf{p}}}{\gamma_{\mathbf{p}}x_{\mathbf{p}}+\delta_{\mathbf{p}}}$ (where
$\mathbf{p}\Downarrow$ is defined as in Definition
\ref{def.algebraic.triangularity.short} \textbf{(d)}). So fix $\mathbf{p}=\left( i,k\right)
\in\mathbf{P}$.
We will actually do something slightly better than we need. We will find
elements $\alpha_{\mathbf{p}}$, $\beta_{\mathbf{p}}$, $\gamma_{\mathbf{p}}$,
$\delta_{\mathbf{p}}$ of $\mathbb{F}\left[ x_{\mathbf{p}\Downarrow}\right] $
(not just of $\mathbb{F}\left( x_{\mathbf{p}\Downarrow}\right) $) such that
$\alpha_{\mathbf{p}}\delta_{\mathbf{p}}-\beta_{\mathbf{p}}\gamma_{\mathbf{p}%
}\neq0$ and $\mathfrak{N}_{\mathbf{p}}=\alpha_{\mathbf{p}}x_{\mathbf{p}}%
+\beta_{\mathbf{p}}$ and $\mathfrak{D}_{\mathbf{p}}=\gamma_{\mathbf{p}%
}x_{\mathbf{p}}+\delta_{\mathbf{p}}$. (Of course, the conditions
$\mathfrak{N}_{\mathbf{p}}=\alpha_{\mathbf{p}}x_{\mathbf{p}}+\beta
_{\mathbf{p}}$ and $\mathfrak{D}_{\mathbf{p}}=\gamma_{\mathbf{p}}%
x_{\mathbf{p}}+\delta_{\mathbf{p}}$ combined imply $Q_{\mathbf{p}}%
=\dfrac{\alpha_{\mathbf{p}}x_{\mathbf{p}}+\beta_{\mathbf{p}}}{\gamma
_{\mathbf{p}}x_{\mathbf{p}}+\delta_{\mathbf{p}}}$, hence the desired
$\mathbf{P}$-triangularity.)
We first handle two ``boundary'' cases: (a) $k=1$, and (b) $k\neq1$ but $i=p$.
The case when $k=1$ is very easy: we get that $\mathfrak{N}_{\mathbf{p}}=1$ (using
(\ref{pf.Grasp.generic.short.step1.N})) and that $\mathfrak{D}_{\mathbf{p}%
}=\left( -1\right) ^{i+p}x_{\mathbf{p}}$ (using
(\ref{pf.Grasp.generic.short.step1.D})). Consequently, we can take
$\alpha_{\mathbf{p}}=0$, $\beta_{\mathbf{p}}=1$, $\gamma_{\mathbf{p}}=\left(
-1\right) ^{i+p}$ and $\delta_{\mathbf{p}}=0$, and it is clear that all three
requirements $\alpha_{\mathbf{p}}\delta_{\mathbf{p}}-\beta_{\mathbf{p}}%
\gamma_{\mathbf{p}}\neq0$ and $\mathfrak{N}_{\mathbf{p}}=\alpha_{\mathbf{p}%
}x_{\mathbf{p}}+\beta_{\mathbf{p}}$ and $\mathfrak{D}_{\mathbf{p}}%
=\gamma_{\mathbf{p}}x_{\mathbf{p}}+\delta_{\mathbf{p}}$ are satisfied.
The case when $k\neq1$ but $i=p$ is not much harder. In this case,
(\ref{pf.Grasp.generic.short.step1.N}) simplifies to $\mathfrak{N}%
_{\mathbf{p}}=x_{\mathbf{p}}$, and (\ref{pf.Grasp.generic.short.step1.D})
simplifies to $\mathfrak{D}_{\mathbf{p}}=x_{\left( p,1\right) }$. Hence, we
can take $\alpha_{\mathbf{p}}=1$, $\beta_{\mathbf{p}}=0$, $\gamma_{\mathbf{p}%
}=0$ and $\delta_{\mathbf{p}}=x_{\left( p,1\right) }$ to achieve
$\alpha_{\mathbf{p}}\delta_{\mathbf{p}}-\beta_{\mathbf{p}}\gamma_{\mathbf{p}%
}\neq0$ and $\mathfrak{N}_{\mathbf{p}}=\alpha_{\mathbf{p}}x_{\mathbf{p}}%
+\beta_{\mathbf{p}}$ and $\mathfrak{D}_{\mathbf{p}}=\gamma_{\mathbf{p}%
}x_{\mathbf{p}}+\delta_{\mathbf{p}}$. Note that this choice of $\delta
_{\mathbf{p}}$ is legitimate because $x_{\left( p,1\right) }$ does lie in
$\mathbb{F}\left[ x_{\mathbf{p}\Downarrow}\right] $ (since $\left(
p,1\right) \in\left. \mathbf{p}\Downarrow\right. $).
The remaining case, where neither $k=1$ nor $i=p$, takes a bit more work.
Consider the matrix $\operatorname*{rows}\nolimits_{i,i+1,...,p}\left(
\left( I_{p}\mid C\right) \left[ i+k-1:p+k\right] \right) $ (this matrix
appears on the right hand side of (\ref{pf.Grasp.generic.short.step1.N})).
Each entry of this matrix comes either from the matrix $I_{p}$ or from the
matrix $C$. If it comes from $I_{p}$, it clearly lies in $\mathbb{F}\left[
x_{\mathbf{p}\Downarrow}\right] $. If it comes from $C$, it has the form
$x_{\mathbf{q}}$ for some $\mathbf{q}\in\mathbf{P}$, and this $\mathbf{q}$
belongs to $\left. \mathbf{p}\Downarrow\right. $ unless the entry is the
$\left( 1,p-i+1\right) $-th entry. Therefore, each entry of the matrix
$\operatorname*{rows}\nolimits_{i,i+1,...,p}\left( \left( I_{p}\mid
C\right) \left[ i+k-1:p+k\right] \right) $ apart from the
$\left( 1,p-i+1\right) $-th entry lies in $\mathbb{F}\left[ x_{\mathbf{p}%
\Downarrow}\right] $, whereas the $\left( 1,p-i+1\right) $-th entry is
$x_{\mathbf{p}}$. Hence, if we use the Laplace expansion with respect to the first
row to compute the determinant of this matrix, we obtain a formula of the form%
\begin{align*}
& \det\left( \operatorname*{rows}\nolimits_{i,i+1,...,p}\left( \left(
I_{p}\mid C\right) \left[ i+k-1:p+k\right] \right) \right) \\
& =x_{\mathbf{p}}\cdot\left( \text{some polynomial in entries lying in
}\mathbb{F}\left[ x_{\mathbf{p}\Downarrow}\right] \right) \\
& \ \ \ \ \ \ \ \ \ \ +\left( \text{more polynomials in entries lying in
}\mathbb{F}\left[ x_{\mathbf{p}\Downarrow}\right] \right) \\
& \in\mathbb{F}\left[ x_{\mathbf{p}\Downarrow}\right] \cdot x_{\mathbf{p}%
}+\mathbb{F}\left[ x_{\mathbf{p}\Downarrow}\right] .
\end{align*}
In other words, there exist elements $\alpha_{\mathbf{p}}$ and $\beta
_{\mathbf{p}}$ of $\mathbb{F}\left[ x_{\mathbf{p}\Downarrow}\right] $ such
that \newline$\det\left( \operatorname*{rows}\nolimits_{i,i+1,...,p}\left(
\left( I_{p}\mid C\right) \left[ i+k-1:p+k\right] \right) \right)
=\alpha_{\mathbf{p}}x_{\mathbf{p}}+\beta_{\mathbf{p}}$. Consider these
$\alpha_{\mathbf{p}}$ and $\beta_{\mathbf{p}}$. We have%
\begin{align}
\mathfrak{N}_{\mathbf{p}} & =\mathfrak{N}_{\left( i,k\right) }=\det\left(
\operatorname*{rows}\nolimits_{i,i+1,...,p}\left( \left( I_{p}\mid C\right)
\left[ i+k-1:p+k\right] \right) \right) \ \ \ \ \ \ \ \ \ \ \left(
\text{by (\ref{pf.Grasp.generic.short.step1.N})}\right)
\label{pf.Grasp.generic.short.step4.N0}\\
& =\alpha_{\mathbf{p}}x_{\mathbf{p}}+\beta_{\mathbf{p}}.
\label{pf.Grasp.generic.short.step4.N}%
\end{align}
We can similarly deal with the matrix $\operatorname*{rows}%
\nolimits_{i,i+1,...,p}\left( C_{q}\ \mid\ \left( I_{p}\mid C\right)
\left[ i+k:p+k\right] \right) $ which appears on the right hand side of
(\ref{pf.Grasp.generic.short.step1.D}). Again, each entry of this matrix apart
from the $\left( 1,p-i+1\right) $-th entry lies in $\mathbb{F}\left[
x_{\mathbf{p}\Downarrow}\right] $, whereas the $\left( 1,p-i+1\right) $-th
entry is $x_{\mathbf{p}}$. Using the Laplace expansion again, we thus see that
\[
\det\left( \operatorname*{rows}\nolimits_{i,i+1,...,p}\left( C_{q}%
\ \mid\ \left( I_{p}\mid C\right) \left[ i+k:p+k\right] \right) \right)
\in\mathbb{F}\left[ x_{\mathbf{p}\Downarrow}\right] \cdot x_{\mathbf{p}%
}+\mathbb{F}\left[ x_{\mathbf{p}\Downarrow}\right] ,
\]
so that%
\[
\left( -1\right) ^{p-i}\det\left( \operatorname*{rows}%
\nolimits_{i,i+1,...,p}\left( C_{q}\ \mid\ \left( I_{p}\mid C\right)
\left[ i+k:p+k\right] \right) \right) \in\mathbb{F}\left[ x_{\mathbf{p}%
\Downarrow}\right] \cdot x_{\mathbf{p}}+\mathbb{F}\left[ x_{\mathbf{p}%
\Downarrow}\right] .
\]
Hence, there exist elements $\gamma_{\mathbf{p}}$ and $\delta_{\mathbf{p}}$ of
$\mathbb{F}\left[ x_{\mathbf{p}\Downarrow}\right] $ such that \newline%
$\left( -1\right) ^{p-i}\det\left( \operatorname*{rows}%
\nolimits_{i,i+1,...,p}\left( C_{q}\ \mid\ \left( I_{p}\mid C\right)
\left[ i+k:p+k\right] \right) \right) =\gamma_{\mathbf{p}}x_{\mathbf{p}%
}+\delta_{\mathbf{p}}$. Consider these $\gamma_{\mathbf{p}}$ and
$\delta_{\mathbf{p}}$. We have%
\begin{align}
\mathfrak{D}_{\mathbf{p}} & =\mathfrak{D}_{\left( i,k\right) }=\left(
-1\right) ^{p-i}\det\left( \operatorname*{rows}\nolimits_{i,i+1,...,p}%
\left( C_{q}\ \mid\ \left( I_{p}\mid C\right) \left[ i+k:p+k\right]
\right) \right) \ \ \ \ \ \ \ \ \ \ \left( \text{by
(\ref{pf.Grasp.generic.short.step1.D})}\right)
\label{pf.Grasp.generic.short.step4.D0}\\
& =\gamma_{\mathbf{p}}x_{\mathbf{p}}+\delta_{\mathbf{p}}.\nonumber
\end{align}
We thus have found elements $\alpha_{\mathbf{p}}$, $\beta_{\mathbf{p}}$,
$\gamma_{\mathbf{p}}$, $\delta_{\mathbf{p}}$ of $\mathbb{F}\left[
x_{\mathbf{p}\Downarrow}\right] $ satisfying $\mathfrak{N}_{\mathbf{p}%
}=\alpha_{\mathbf{p}}x_{\mathbf{p}}+\beta_{\mathbf{p}}$ and $\mathfrak{D}%
_{\mathbf{p}}=\gamma_{\mathbf{p}}x_{\mathbf{p}}+\delta_{\mathbf{p}}$. In order
to finish the proof of $\mathbf{P}$-triangularity, we only need to show that
$\alpha_{\mathbf{p}}\delta_{\mathbf{p}}-\beta_{\mathbf{p}}\gamma_{\mathbf{p}%
}\neq0$.
In order to achieve this goal, we notice that
\[
\alpha_{\mathbf{p}}\underbrace{\mathfrak{D}_{\mathbf{p}}}_{=\gamma
_{\mathbf{p}}x_{\mathbf{p}}+\delta_{\mathbf{p}}}-\underbrace{\mathfrak{N}%
_{\mathbf{p}}}_{=\alpha_{\mathbf{p}}x_{\mathbf{p}}+\beta_{\mathbf{p}}}%
\gamma_{\mathbf{p}}=\alpha_{\mathbf{p}}\left( \gamma_{\mathbf{p}%
}x_{\mathbf{p}}+\delta_{\mathbf{p}}\right) -\left( \alpha_{\mathbf{p}%
}x_{\mathbf{p}}+\beta_{\mathbf{p}}\right) \gamma_{\mathbf{p}}=\alpha
_{\mathbf{p}}\delta_{\mathbf{p}}-\beta_{\mathbf{p}}\gamma_{\mathbf{p}}.
\]
Hence, proving $\alpha_{\mathbf{p}}\delta_{\mathbf{p}}-\beta_{\mathbf{p}%
}\gamma_{\mathbf{p}}\neq0$ is equivalent to proving $\alpha_{\mathbf{p}%
}\mathfrak{D}_{\mathbf{p}}-\mathfrak{N}_{\mathbf{p}}\gamma_{\mathbf{p}}\neq0$.
It is the latter that we are going to do, because $\alpha_{\mathbf{p}}$,
$\mathfrak{D}_{\mathbf{p}}$, $\mathfrak{N}_{\mathbf{p}}$ and $\gamma
_{\mathbf{p}}$ are easier to get our hands on than $\beta_{\mathbf{p}}$ and
$\delta_{\mathbf{p}}$.
Recall that our proof that
\[
\det\left( \operatorname*{rows}\nolimits_{i,i+1,...,p}\left( \left(
I_{p}\mid C\right) \left[ i+k-1:p+k\right] \right) \right) \in
\mathbb{F}\left[ x_{\mathbf{p}\Downarrow}\right] \cdot x_{\mathbf{p}%
}+\mathbb{F}\left[ x_{\mathbf{p}\Downarrow}\right]
\]
proceeded by applying the Laplace expansion with respect to the first
row to the matrix $\operatorname*{rows}\nolimits_{i,i+1,...,p}\left( \left(
I_{p}\mid C\right) \left[ i+k-1:p+k\right] \right) $. The only term
involving $x_{\mathbf{p}}$ was%
\[
x_{\mathbf{p}}\cdot\left( \text{some polynomial in entries lying in
}\mathbb{F}\left[ x_{\mathbf{p}\Downarrow}\right] \right) .
\]
The second factor above is actually the $\left(
1,p-i+1\right) $-th cofactor of the matrix \newline
$\operatorname*{rows}%
\nolimits_{i,i+1,...,p}\left( \left( I_{p}\mid C\right) \left[
i+k-1:p+k\right] \right) $. Hence,%
\begin{align}
\alpha_{\mathbf{p}} & =\left( \text{the }\left( 1,p-i+1\right) \text{-th
cofactor of }\operatorname*{rows}\nolimits_{i,i+1,...,p}\left( \left(
I_{p}\mid C\right) \left[ i+k-1:p+k\right] \right) \right) \nonumber\\
& =\left( -1\right) ^{p-i}\cdot\det\left( \operatorname*{rows}%
\nolimits_{i+1,i+2,...,p}\left( \left( I_{p}\mid C\right) \left[
i+k-1:p+k-1\right] \right) \right) .
\label{pf.Grasp.generic.short.step4.alpha}%
\end{align}
Similarly,%
\begin{equation}
\gamma_{\mathbf{p}}=\det\left( \operatorname*{rows}\nolimits_{i+1,i+2,...,p}%
\left( C_{q}\ \mid\ \left( I_{p}\mid C\right) \left[ i+k:p+k-1\right]
\right) \right) \label{pf.Grasp.generic.short.step4.gamma}%
\end{equation}
(note that we lost the sign $\left( -1\right) ^{p-i}$ from
(\ref{pf.Grasp.generic.short.step4.D0}) since it got cancelled against the
$\left( -1\right) ^{p-\left( i+1\right) }$ arising from the definition of
a cofactor).
Now, since $k\neq1$ and $i\neq p$, the pair $\left(
i+1,k-1\right) $ also belongs to $\mathbf{P}$; hence, we can apply
(\ref{pf.Grasp.generic.short.step1.N}) to $\left( i+1,k-1\right) $ in lieu
of $\left( i,k\right) $, and obtain%
\[
\mathfrak{N}_{\left( i+1,k-1\right) }=\det\left( \operatorname*{rows}%
\nolimits_{i+1,i+2,...,p}\left( \left( I_{p}\mid C\right) \left[
i+k-1:p+k-1\right] \right) \right) .
\]
In light of this, (\ref{pf.Grasp.generic.short.step4.alpha}) becomes%
\[
\alpha_{\mathbf{p}}=\left( -1\right) ^{p-i}\cdot\mathfrak{N}_{\left(
i+1,k-1\right) }.
\]
Similarly, applying (\ref{pf.Grasp.generic.short.step1.D}) to $\left(
i+1,k-1\right) $ in lieu of $\left( i,k\right) $, rewrites
(\ref{pf.Grasp.generic.short.step4.gamma}) as%
\[
\gamma_{\mathbf{p}}=\left( -1\right) ^{p-\left( i+1\right) }%
\cdot\mathfrak{D}_{\left( i+1,k-1\right) }.
\]
Hence,%
\begin{align*}
& \underbrace{\alpha_{\mathbf{p}}}_{=\left( -1\right) ^{p-i}\cdot
\mathfrak{N}_{\left( i+1,k-1\right) }}\mathfrak{D}_{\mathbf{p}}%
-\mathfrak{N}_{\mathbf{p}}\underbrace{\gamma_{\mathbf{p}}}_{=\left(
-1\right) ^{p-\left( i+1\right) }\cdot\mathfrak{D}_{\left( i+1,k-1\right)
}}\\
& =\left( -1\right) ^{p-i}\cdot\mathfrak{N}_{\left( i+1,k-1\right) }%
\cdot\mathfrak{D}_{\mathbf{p}}-\mathfrak{N}_{\mathbf{p}}\cdot
\underbrace{\left( -1\right) ^{p-\left( i+1\right) }}_{=-\left(
-1\right) ^{p-i}}\cdot\mathfrak{D}_{\left( i+1,k-1\right) }\\
& =\left( -1\right) ^{p-i}\cdot\left( \mathfrak{N}_{\left(
i+1,k-1\right) }\mathfrak{D}_{\mathbf{p}}+\mathfrak{N}_{\mathbf{p}%
}\mathfrak{D}_{\left( i+1,k-1\right) }\right) .
\end{align*}
Thus, we can shift our goal from proving $\alpha_{\mathbf{p}}\mathfrak{D}%
_{\mathbf{p}}-\mathfrak{N}_{\mathbf{p}}\gamma_{\mathbf{p}}\neq0$ to proving
$\mathfrak{N}_{\left( i+1,k-1\right) }\mathfrak{D}_{\mathbf{p}}%
+\mathfrak{N}_{\mathbf{p}}\mathfrak{D}_{\left( i+1,k-1\right) }\neq0$.
But this turns out to be surprisingly simple: Since $\mathbf{p}=\left(
i,k\right) $, we have%
\begin{align}
& \mathfrak{N}_{\left( i+1,k-1\right) }\mathfrak{D}_{\mathbf{p}%
}+\mathfrak{N}_{\mathbf{p}}\mathfrak{D}_{\left( i+1,k-1\right) }\nonumber\\
& =\mathfrak{N}_{\left( i+1,k-1\right) }\mathfrak{D}_{\left( i,k\right)
}+\mathfrak{N}_{\left( i,k\right) }\mathfrak{D}_{\left( i+1,k-1\right)
}=\mathfrak{D}_{\left( i,k\right) }\cdot\mathfrak{N}_{\left(
i+1,k-1\right) }+\mathfrak{N}_{\left( i,k\right) }\cdot\mathfrak{D}%
_{\left( i+1,k-1\right) }\nonumber\\
& =\det\left( \left( I_{p}\mid C\right) \left[ 0:i\mid i+k:p+k\right]
\right) \cdot\det\left( \left( I_{p}\mid C\right) \left[ 1:i+1\mid
i+k-1:p+k-1\right] \right) \nonumber\\
& \ \ \ \ \ \ \ \ \ \ +\det\left( \left( I_{p}\mid C\right) \left[
1:i\mid i+k-1:p+k\right] \right) \nonumber \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdot\det\left( \left( I_{p}\mid
C\right) \left[ 0:i+1\mid i+k:p+k-1\right] \right) \nonumber\\
% & \ \ \ \ \ \ \ \ \ \ \left(
% \begin{array}
% [c]{c}%
% \text{here, wejust substituted }\mathfrak{D}_{\left( i,k\right) }\text{,
% }\mathfrak{N}_{\left( i+1,k-1\right) }\text{, }\mathfrak{N}_{\left(
% i,k\right) }\text{ and }\mathfrak{D}_{\left( i+1,k-1\right) }\\
% \text{by their definitions}%
% \end{array}
% \right) \nonumber\\
& =\det\left( \left( I_{p}\mid C\right) \left[ 0:i\mid
i+k-1:p+k-1\right] \right) \cdot\det\left( \left( I_{p}\mid C\right)
\left[ 1:i+1\mid i+k:p+k\right] \right)
\label{pf.Grasp.generic.short.step4.pluck1}%
\end{align}
by definition and Theorem \ref{thm.pluecker.ptolemy}. On the other hand, $\left( i,k-1\right) $ and
$\left( i+1,k\right) $ also belong to $\mathbf{P}$ and satisfy%
\[
\mathfrak{D}_{\left( i,k-1\right) }=\det\left( \left( I_{p}\mid C\right)
\left[ 0:i\mid i+k-1:p+k-1\right] \right)
\]
and
\[
\mathfrak{N}_{\left( i+1,k\right) }=\det\left( \left( I_{p}\mid C\right)
\left[ 1:i+1\mid i+k:p+k\right] \right) .
\]
% (by the respective definitions of $\mathfrak{D}_{\left( i,k-1\right) }$ and
% $\mathfrak{N}_{\left( i+1,k\right) }$).
Hence,
(\ref{pf.Grasp.generic.short.step4.pluck1}) becomes%
\begin{align*}
& \mathfrak{N}_{\left( i+1,k-1\right) }\mathfrak{D}_{\mathbf{p}%
}+\mathfrak{N}_{\mathbf{p}}\mathfrak{D}_{\left( i+1,k-1\right) }\\
& =\underbrace{\det\left( \left( I_{p}\mid C\right) \left[ 0:i\mid
i+k-1:p+k-1\right] \right) }_{=\mathfrak{D}_{\left( i,k-1\right) }}%
\cdot\underbrace{\det\left( \left( I_{p}\mid C\right) \left[ 1:i+1\mid
i+k:p+k\right] \right) }_{=\mathfrak{N}_{\left( i+1,k\right) }}\\
& =\mathfrak{D}_{\left( i,k-1\right) }\cdot\mathfrak{N}_{\left(
i+1,k\right) }\neq0
\end{align*}
by Step 2. This proves that $\mathfrak{N}_{\left( i+1,k-1\right)
}\mathfrak{D}_{\mathbf{p}}+\mathfrak{N}_{\mathbf{p}}\mathfrak{D}_{\left(
i+1,k-1\right) }\neq0$, thus also that $\alpha_{\mathbf{p}}\mathfrak{D}%
_{\mathbf{p}}-\mathfrak{N}_{\mathbf{p}}\gamma_{\mathbf{p}}\neq0$, hence also
that $\alpha_{\mathbf{p}}\delta_{\mathbf{p}}-\beta_{\mathbf{p}}\gamma
_{\mathbf{p}}\neq0$; this establishes the $\mathbf{P}$-triangularity of the
family $\left( Q_{\mathbf{p}}\right) _{\mathbf{p}\in\mathbf{P}}$.
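The identity behind Theorem \ref{thm.pluecker.ptolemy} is the classical
three-term Grassmann--Pl\"ucker (Ptolemy) relation for maximal minors, which is
easy to sanity-check numerically. The following Python sketch (ad-hoc helper
names, exact rational arithmetic; this is an illustration of the classical
relation, not of the specific minor notation used above) verifies
$\Delta_{S\cup\left\{ a,c\right\} }\Delta_{S\cup\left\{ b,d\right\} }
=\Delta_{S\cup\left\{ a,b\right\} }\Delta_{S\cup\left\{ c,d\right\} }
+\Delta_{S\cup\left\{ a,d\right\} }\Delta_{S\cup\left\{ b,c\right\} }$
for a random $3\times5$ matrix:

```python
from fractions import Fraction
import random

def det(M):
    # determinant via Laplace expansion along the first row (fine for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minor(M, cols):
    # maximal minor of M on the given (sorted) column indices
    return det([[row[c] for c in cols] for row in M])

random.seed(1)
p = 3
M = [[Fraction(random.randint(-5, 5)) for _ in range(p + 2)] for _ in range(p)]

# a fixed column set S plus four further columns a < b < c < d
S, (a, b, c, d) = [0], (1, 2, 3, 4)
lhs = minor(M, S + [a, c]) * minor(M, S + [b, d])
rhs = (minor(M, S + [a, b]) * minor(M, S + [c, d])
       + minor(M, S + [a, d]) * minor(M, S + [b, c]))
assert lhs == rhs  # three-term Pluecker (Ptolemy) relation
```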
\textit{Details of Step 5:} Recall that our goal is to prove the existence of
a matrix $\widetilde{D}\in\left( \mathbb{F}\left( x_{\mathbf{P}}\right)
\right) ^{p\times\left( p+q\right) }$ satisfying
(\ref{pf.Grasp.generic.short.reduce-to-rational}).
By Step 4, we know that the family $\left( Q_{\mathbf{p}}\right)
_{\mathbf{p}\in\mathbf{P}}\in\left( \mathbb{F}\left( x_{\mathbf{P}}\right)
\right) ^{\mathbf{P}}$ is $\mathbf{P}$-triangular. Hence, Lemma
\ref{lem.algebraic.triangularity.short} \textbf{(b)} shows that there exists a
$\mathbf{P}$-triangular family $\left( R_{\mathbf{p}}\right) _{\mathbf{p}%
\in\mathbf{P}}\in\left( \mathbb{F}\left( x_{\mathbf{P}}\right) \right)
^{\mathbf{P}}$ such that every $\mathbf{q}\in\mathbf{P}$ satisfies
$Q_{\mathbf{q}}\left( \left( R_{\mathbf{p}}\right) _{\mathbf{p}%
\in\mathbf{P}}\right) =x_{\mathbf{q}}$.
% Consider this $\left( R_{\mathbf{p}%
% }\right) _{\mathbf{p}\in\mathbf{P}}$.
Applying Lemma
\ref{lem.algebraic.triangularity.short} \textbf{(a)} to this family $\left(
R_{\mathbf{p}}\right) _{\mathbf{p}\in\mathbf{P}}$, we conclude that $\left(
R_{\mathbf{p}}\right) _{\mathbf{p}\in\mathbf{P}}$ is algebraically independent.
In Step 3, we have shown that $Q_{\mathbf{p}}=\left( \operatorname*{Grasp}%
\nolimits_{0}\left( I_{p}\mid C\right) \right) \left( \mathbf{p}\right) $
for every $\mathbf{p}\in\mathbf{P}$. Renaming $\mathbf{p}$ as $\mathbf{q}$, we
rewrite this as follows:%
\begin{equation}
Q_{\mathbf{q}}=\left( \operatorname*{Grasp}\nolimits_{0}\left( I_{p}\mid
C\right) \right) \left( \mathbf{q}\right) \ \ \ \ \ \ \ \ \ \ \text{for
every }\mathbf{q}\in\mathbf{P}.
\label{pf.algebraic.triangularity.short.step5.1}%
\end{equation}
Now, let $\widetilde{C}\in\left( \mathbb{F}\left( x_{\mathbf{P}}\right)
\right) ^{p\times\left( p+q\right) }$ denote the matrix obtained from
% the matrix
$C\in\left( \mathbb{F}\left[ x_{\mathbf{P}}\right] \right)
^{p\times\left( p+q\right) }$ by substituting $\left( R_{\mathbf{p}%
}\right) _{\mathbf{p}\in\mathbf{P}}$ for the variables $\left(
x_{\mathbf{p}}\right) _{\mathbf{p}\in\mathbf{P}}$. Since
(\ref{pf.algebraic.triangularity.short.step5.1}) is an identity between
rational functions in the variables $\left( x_{\mathbf{p}}\right)
_{\mathbf{p}\in\mathbf{P}}$, we thus can substitute $\left( R_{\mathbf{p}%
}\right) _{\mathbf{p}\in\mathbf{P}}$ for the variables $\left(
x_{\mathbf{p}}\right) _{\mathbf{p}\in\mathbf{P}}$ in
(\ref{pf.algebraic.triangularity.short.step5.1})\footnote{The substitution
does not suffer from vanishing denominators because $\left( R_{\mathbf{p}%
}\right) _{\mathbf{p}\in\mathbf{P}}$ is algebraically independent.}, and
obtain%
\[
Q_{\mathbf{q}}\left( \left( R_{\mathbf{p}}\right) _{\mathbf{p}\in
\mathbf{P}}\right) =\left( \operatorname*{Grasp}\nolimits_{0}\left(
I_{p}\mid\widetilde{C}\right) \right) \left( \mathbf{q}\right)
\ \ \ \ \ \ \ \ \ \ \text{for every }\mathbf{q}\in\mathbf{P}%
\]
(since this substitution takes the matrix $C$ to $\widetilde{C}$). But since
$Q_{\mathbf{q}}\left( \left( R_{\mathbf{p}}\right) _{\mathbf{p}%
\in\mathbf{P}}\right) =x_{\mathbf{q}}$ for every $\mathbf{q}\in\mathbf{P}$,
this rewrites as%
\[
x_{\mathbf{q}}=\left( \operatorname*{Grasp}\nolimits_{0}\left( I_{p}%
\mid\widetilde{C}\right) \right) \left( \mathbf{q}\right)
\ \ \ \ \ \ \ \ \ \ \text{for every }\mathbf{q}\in\mathbf{P}.
\]
Upon renaming $\mathbf{q}$ as $\mathbf{p}$ again, this becomes%
\[
x_{\mathbf{p}}=\left( \operatorname*{Grasp}\nolimits_{0}\left( I_{p}%
\mid\widetilde{C}\right) \right) \left( \mathbf{p}\right)
\ \ \ \ \ \ \ \ \ \ \text{for every }\mathbf{p}\in\mathbf{P}.
\]
Hence, there exists a matrix $\widetilde{D}\in\left( \mathbb{F}\left(
x_{\mathbf{P}}\right) \right) ^{p\times\left( p+q\right) }$ satisfying
(\ref{pf.Grasp.generic.short.reduce-to-rational}) (namely, $\widetilde{D}%
=\left( I_{p}\mid\widetilde{C}\right) $).
This completes the proof of Proposition~\ref{prop.Grasp.generic}.
\end{proof}
\section{\label{sect.rect.finish}The rectangle: finishing the proofs}
As promised, we now use Propositions \ref{prop.Grasp.GraspR} and
\ref{prop.Grasp.generic} to derive our initially stated results on rectangles.
First, we formulate an easy inductive consequence of Proposition \ref{prop.Grasp.GraspR}:
\begin{corollary}
\label{cor.Grasp.GraspR}Let $A\in\mathbb{K}^{p\times\left( p+q\right) }$ be a
matrix. Then every $i \in\mathbb{N}$ satisfies%
\[
\operatorname*{Grasp}\nolimits_{-i}A=R_{\operatorname*{Rect}\left(
p,q\right) }^{i}\left( \operatorname*{Grasp}\nolimits_{0}A\right)
\]
(provided that $A$ is sufficiently generic, in the sense of the Zariski
topology, for both sides of this equality to be well-defined).
\end{corollary}
% \begin{proof}
% % [Proof of Corollary \ref{cor.Grasp.GraspR}.]
% In order to prove Corollary
% \ref{cor.Grasp.GraspR} by induction over $i$, it is clearly enough to
% show that every $j \in \mathbb{N}$ satisfies
% $
% \operatorname*{Grasp}\nolimits_{-\left( j+1\right) }A
% =
% R_{\operatorname*{Rect}\left( p,q\right) }\left( \operatorname*{Grasp}
% \nolimits_{-j}A\right)
% $.
% But this follows from Proposition \ref{prop.Grasp.GraspR} (applied to
% $-\left( j+1\right) $ instead of $j$).
% \end{proof}
\begin{proof}
[Proof of Theorem \ref{thm.rect.ord}.]We need to show that
$\operatorname*{ord}\left( R_{\operatorname*{Rect}\left( p,q\right)
}\right) =p+q$. According to Proposition \ref{prop.rect.reduce}, it is enough
to prove that almost every (in the Zariski sense) reduced labelling
$f\in\mathbb{K}^{\widehat{\operatorname*{Rect}\left( p,q\right) }}$
satisfies $R_{\operatorname*{Rect}\left( p,q\right) }^{p+q}f=f$. So let
$f\in\mathbb{K}^{\widehat{\operatorname*{Rect}\left( p,q\right) }}$ be a
sufficiently generic reduced labelling. In other words, $f$ is a sufficiently
generic element of $\mathbb{K}^{\operatorname*{Rect}\left( p,q\right) }$
(because the reduced labellings $\mathbb{K}^{\widehat{\operatorname*{Rect}%
\left( p,q\right) }}$ are being identified with the elements of
$\mathbb{K}^{\operatorname*{Rect}\left( p,q\right) }$). By Proposition
\ref{prop.Grasp.generic}, there exists a matrix $A\in\mathbb{K}^{p\times
\left( p+q\right) }$ satisfying $f=\operatorname*{Grasp}\nolimits_{0}A$.
Consider this $A$. By Corollary \ref{cor.Grasp.GraspR} (applied to
$i=p+q$), we have%
\[
\operatorname*{Grasp}\nolimits_{-\left( p+q\right) }%
A=R_{\operatorname*{Rect}\left( p,q\right) }^{p+q}\left(
\underbrace{\operatorname*{Grasp}\nolimits_{0}A}_{=f}\right)
=R_{\operatorname*{Rect}\left( p,q\right) }^{p+q}f.
\]
But Proposition \ref{prop.Grasp.period} (applied to $j=-\left( p+q\right) $)
yields
$$
\operatorname*{Grasp}\nolimits_{-\left( p+q\right) }A
=\operatorname*{Grasp}\nolimits_{p+q+\left( -\left( p+q\right) \right)
}A=\operatorname*{Grasp}\nolimits_{0}A =f.
$$
Hence, $f=\operatorname*{Grasp}\nolimits_{-\left( p+q\right) }%
A=R_{\operatorname*{Rect}\left( p,q\right) }^{p+q}f$, proving the theorem.
% In other words,
% $R_{\operatorname*{Rect}\left( p,q\right) }^{p+q}f=f$. This (as we know)
% proves Theorem \ref{thm.rect.ord}.
\end{proof}
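Theorem \ref{thm.rect.ord} lends itself to a quick numerical check for small
$p$ and $q$. The following Python sketch (an ad-hoc implementation of
birational rowmotion as the top-to-bottom composition of birational toggles,
in exact rational arithmetic; all names are ours, not the paper's) verifies
$R_{\operatorname*{Rect}\left( 2,3\right) }^{5}f=f$ for a random reduced labelling $f$:

```python
from fractions import Fraction
import random

p, q = 2, 3
P = [(i, k) for i in range(1, p + 1) for k in range(1, q + 1)]

def ups(v):
    i, k = v
    out = [u for u in ((i + 1, k), (i, k + 1)) if u in P]
    return out or ["1hat"]  # (p, q) is covered only by the artificial top element

def downs(v):
    i, k = v
    out = [u for u in ((i - 1, k), (i, k - 1)) if u in P]
    return out or ["0hat"]  # (1, 1) covers only the artificial bottom element

def rowmotion(f):
    # birational rowmotion: toggles applied from the top rank down; the toggle
    # at v replaces f(v) by (sum of labels below) / (f(v) * sum of 1/labels above)
    f = dict(f)
    for v in sorted(P, key=lambda v: -(v[0] + v[1])):
        f[v] = sum(f[u] for u in downs(v)) / (f[v] * sum(1 / f[u] for u in ups(v)))
    return f

random.seed(0)
f = {v: Fraction(random.randint(1, 9)) for v in P}
f["0hat"] = f["1hat"] = Fraction(1)  # a reduced labelling

g = dict(f)
for _ in range(p + q):
    g = rowmotion(g)
assert g == f  # ord(R) divides p + q = 5 on Rect(2,3)
```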
\begin{proof}
[Proof of Theorem \ref{thm.rect.antip}.]We regard the reduced
labelling $f\in\mathbb{K}^{\widehat{\operatorname*{Rect}\left( p,q\right) }%
}$ as an element of $\mathbb{K}^{\operatorname*{Rect}\left( p,q\right) }$.
% (because we identify reduced labellings in $\mathbb{K}%
% ^{\widehat{\operatorname*{Rect}\left( p,q\right) }}$ with elements of
% $\mathbb{K}^{\operatorname*{Rect}\left( p,q\right) }$).
We assume WLOG that
this element $f\in\mathbb{K}^{\operatorname*{Rect}\left( p,q\right) }$ is
generic enough (among the reduced labellings) for Proposition
\ref{prop.Grasp.generic} to apply; hence,
there exists a matrix $A\in\mathbb{K}^{p\times\left( p+q\right) }$
satisfying $f=\operatorname*{Grasp}\nolimits_{0}A$. By
Corollary \ref{cor.Grasp.GraspR} (applied to $i+k-1$ instead of $i$), we have%
\[
\operatorname*{Grasp}\nolimits_{-\left( i+k-1\right) }%
A=R_{\operatorname*{Rect}\left( p,q\right) }^{i+k-1}\left(
\underbrace{\operatorname*{Grasp}\nolimits_{0}A}_{=f}\right)
=R_{\operatorname*{Rect}\left( p,q\right) }^{i+k-1}f.
\]
But Proposition \ref{prop.Grasp.antipode} (applied to $j=-\left(
i+k-1\right) $) yields%
\begin{align*}
\left( \operatorname*{Grasp}\nolimits_{-\left( i+k-1\right) }A\right)
\left( \left( i,k\right) \right) & =\dfrac{1}{\left(
\operatorname*{Grasp}\nolimits_{-\left( i+k-1\right) +i+k-1}A\right)
\left( \left( p+1-i,q+1-k\right) \right) }\\
& =\dfrac{1}{f\left( \left( p+1-i,q+1-k\right) \right) } ,
% & \ \ \ \ \ \ \ \ \ \ \left( \text{since }\operatorname*{Grasp}%
% \nolimits_{-\left( i+k-1\right) +i+k-1}A=\operatorname*{Grasp}%
% \nolimits_{0}A=f\right) ,
\end{align*}
so that%
\[
f\left( \left( p+1-i,q+1-k\right) \right) =\dfrac{1}{\left(
\operatorname*{Grasp}\nolimits_{-\left( i+k-1\right) }A\right) \left(
\left( i,k\right) \right) }=\dfrac{1}{\left( R_{\operatorname*{Rect}%
\left( p,q\right) }^{i+k-1}f\right) \left( \left( i,k\right) \right) }, %
\]
% (since $\operatorname*{Grasp}\nolimits_{-\left( i+k-1\right) }%
% A=R_{\operatorname*{Rect}\left( p,q\right) }^{i+k-1}f$).
proving Theorem~\ref{thm.rect.antip}.
\end{proof}
\begin{proof}
[Proof of Theorem \ref{thm.rect.antip.general}.]
Recall the notation $\left( a_{0},a_{1},...,a_{n+1}\right) \flat f$ defined in
Definition \ref{def.bemol}.
Let $f\in\mathbb{K}^{\widehat{\operatorname*{Rect}\left( p,q\right) }}$ be
arbitrary. By genericity, we assume WLOG that $f\left( 0\right) $ and
$f\left( 1\right) $ are nonzero.
Let $n=p+q-1$, so $\operatorname*{Rect}\left( p,q\right) $ is an
$n$-graded poset. For any element $(i,k)\in \operatorname*{Rect}\left( p,q\right) $, we
have $i+k-1\in\left\{ 0,1,...,n\right\} $ and $1\leq n-i-k+2\leq n$.
Define an $\left( n+2\right) $-tuple $\left( a_{0},a_{1},...,a_{n+1}%
\right) \in\mathbb{K}^{n+2}$ by%
\[
a_{r}=\left\{
\begin{array}
[c]{c}%
\dfrac{1}{f\left( 0\right) },\ \ \ \ \ \ \ \ \ \ \text{if }r=0;\\
1,\ \ \ \ \ \ \ \ \ \ \text{if }1\leq r\leq n;\\
\dfrac{1}{f\left( 1\right) },\ \ \ \ \ \ \ \ \ \ \text{if }r=n+1
\end{array}
\right. \ \ \ \ \ \ \ \ \ \ \text{for every }r\in\left\{
0,1,...,n+1\right\} .
\]
Thus, $a_{n-i-k+2}=1$ (since $1\leq n-i-k+2\leq n$) and $a_{0}=\dfrac
{1}{f\left( 0\right) }$ and $a_{n+1}=\dfrac{1}{f\left( 1\right) }$.
Let $f^{\prime}=\left( a_{0},a_{1},...,a_{n+1}\right) \flat f$. Then
% it is
% easy to see from the definition of $\left( a_{0},a_{1},...,a_{n+1}\right)
% \flat f$ that
clearly $f^{\prime}\left( 0\right) =1$ and $f^{\prime}\left(
1\right) =1$, i.e., $f^{\prime}$ is a reduced $\mathbb{K}%
$-labelling. Hence, Theorem \ref{thm.rect.antip} (applied to $f^{\prime}$
instead of $f$) yields%
\begin{equation}
f^{\prime}\left( \left( p+1-i,q+1-k\right) \right) =\dfrac{1}{\left(
R_{\operatorname*{Rect}\left( p,q\right) }^{i+k-1}\left( f^{\prime}\right)
\right) \left( \left( i,k\right) \right) }.
\label{pf.rect.antip.general.1}%
\end{equation}
On the other hand,
% again from the definition of $f^{\prime}=\left(
% a_{0},a_{1},...,a_{n+1}\right) \flat f$,
it is easy to see that $f^{\prime
}\left( v\right) =f\left( v\right) $ for every $v\in\operatorname*{Rect}%
\left( p,q\right) $. This yields, in particular, that $f^{\prime}\left(
\left( p+1-i,q+1-k\right) \right) =f\left( \left( p+1-i,q+1-k\right)
\right) $.
But let us define an element $\widehat{a}_{\kappa}^{\left( \ell\right) }%
\in\mathbb{K}^{\times}$ for every $\ell\in\left\{ 0,1,...,n+1\right\} $ and
$\kappa\in\left\{ 0,1,...,n+1\right\} $ as in Proposition
\ref{prop.Rl.scalmult}. Then, it is easy to see that every
$\kappa\in\left\{ 0,1,...,n+1\right\} $ satisfies
\begin{equation}
\label{pf.rect.antip.general.kappa}
\widehat{a}_{\kappa}^{\left( \kappa\right) }
= a_{n+1} a_0
= \dfrac{1}{f\left(0\right) f\left(1\right)}
\end{equation}
(since $a_{n+1} =\dfrac{1}{f\left(1\right)}$ and
$a_0 =\dfrac{1}{f\left(0\right)}$).
Proposition \ref{prop.Rl.scalmult} (applied to
$\ell=i+k-1$) yields%
\[
R_{\operatorname*{Rect}\left( p,q\right) }^{i+k-1}\left( \left(
a_{0},a_{1},...,a_{n+1}\right) \flat f\right) =\left( \widehat{a}%
_{0}^{\left( i+k-1\right) },\widehat{a}_{1}^{\left( i+k-1\right)
},...,\widehat{a}_{n+1}^{\left( i+k-1\right) }\right) \flat\left(
R_{\operatorname*{Rect}\left( p,q\right) }^{i+k-1}f\right) .
\]
Since $\left( a_{0},a_{1},...,a_{n+1}\right) \flat f=f^{\prime}$, this
rewrites as
\[
R_{\operatorname*{Rect}\left( p,q\right) }^{i+k-1}\left( f^{\prime}\right)
=\left( \widehat{a}_{0}^{\left( i+k-1\right) },\widehat{a}_{1}^{\left(
i+k-1\right) },...,\widehat{a}_{n+1}^{\left( i+k-1\right) }\right)
\flat\left( R_{\operatorname*{Rect}\left( p,q\right) }^{i+k-1}f\right) .
\]
Hence,%
\begin{align*}
& \left( R_{\operatorname*{Rect}\left( p,q\right) }^{i+k-1}\left(
f^{\prime}\right) \right) \left( \left( i,k\right) \right) \\
& =\left( \left( \widehat{a}_{0}^{\left( i+k-1\right) },\widehat{a}%
_{1}^{\left( i+k-1\right) },...,\widehat{a}_{n+1}^{\left( i+k-1\right)
}\right) \flat\left( R_{\operatorname*{Rect}\left( p,q\right) }%
^{i+k-1}f\right) \right) \left( \left( i,k\right) \right) \\
& =\widehat{a}_{\deg\left( \left( i,k\right) \right) }^{\left(
i+k-1\right) }\cdot\left( R_{\operatorname*{Rect}\left( p,q\right)
}^{i+k-1}f\right) \left( \left( i,k\right) \right) \\
% & \ \ \ \ \ \ \ \ \ \ \left( \text{by the definition of }\left(
% \widehat{a}_{0}^{\left( i+k-1\right) },\widehat{a}_{1}^{\left(
% i+k-1\right) },...,\widehat{a}_{n+1}^{\left( i+k-1\right) }\right)
% \flat\left( R_{\operatorname*{Rect}\left( p,q\right) }^{i+k-1}f\right)
% \right) \\
& =\widehat{a}_{i+k-1}^{\left( i+k-1\right) }\cdot\left(
R_{\operatorname*{Rect}\left( p,q\right) }^{i+k-1}f\right) \left( \left(
i,k\right) \right) \ \ \ \ \ \ \ \ \ \ \left( \text{since }\deg\left(
\left( i,k\right) \right) =i+k-1\right) \\
& =\dfrac{1}{f\left( 0\right) f\left( 1\right) }\cdot\left(
R_{\operatorname*{Rect}\left( p,q\right) }^{i+k-1}f\right) \left( \left(
i,k\right) \right) \ \ \ \ \ \ \ \ \ \ \left( \text{by
(\ref{pf.rect.antip.general.kappa})}\right)
\end{align*}
Thus, (\ref{pf.rect.antip.general.1}) rewrites as%
\[
f^{\prime}\left( \left( p+1-i,q+1-k\right) \right) =\dfrac{1}{\dfrac
{1}{f\left( 0\right) f\left( 1\right) }\cdot\left(
R_{\operatorname*{Rect}\left( p,q\right) }^{i+k-1}f\right) \left( \left(
i,k\right) \right) }=\dfrac{f\left( 0\right) f\left( 1\right) }{\left(
R_{\operatorname*{Rect}\left( p,q\right) }^{i+k-1}f\right) \left( \left(
i,k\right) \right) }.
\]
This rewrites as%
\[
f\left( \left( p+1-i,q+1-k\right) \right) =\dfrac{f\left( 0\right)
f\left( 1\right) }{\left( R_{\operatorname*{Rect}\left( p,q\right)
}^{i+k-1}f\right) \left( \left( i,k\right) \right) }
\]
(since we know that $f^{\prime}\left( v \right)
=f\left( v \right) $ on $\operatorname*{Rect}\left( p,q\right)$), proving the
theorem.
\end{proof}
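Theorem \ref{thm.rect.antip.general} can likewise be confirmed numerically on
small examples, boundary labels included. The Python sketch below (an ad-hoc
implementation of birational rowmotion via birational toggles, in exact
rational arithmetic; all names are ours) checks
$f\left( \left( p+1-i,q+1-k\right) \right)
=f\left( 0\right) f\left( 1\right) /\left( R^{i+k-1}f\right) \left( \left(
i,k\right) \right) $ for every $\left( i,k\right) \in\operatorname*{Rect}\left( 2,3\right) $:

```python
from fractions import Fraction
import random

p, q = 2, 3
P = [(i, k) for i in range(1, p + 1) for k in range(1, q + 1)]

def ups(v):
    i, k = v
    out = [u for u in ((i + 1, k), (i, k + 1)) if u in P]
    return out or ["1hat"]

def downs(v):
    i, k = v
    out = [u for u in ((i - 1, k), (i, k - 1)) if u in P]
    return out or ["0hat"]

def rowmotion(f):
    # birational toggles from the top rank down; boundary labels stay fixed
    f = dict(f)
    for v in sorted(P, key=lambda v: -(v[0] + v[1])):
        f[v] = sum(f[u] for u in downs(v)) / (f[v] * sum(1 / f[u] for u in ups(v)))
    return f

random.seed(2)
f = {v: Fraction(random.randint(1, 9)) for v in P}
f["0hat"], f["1hat"] = Fraction(3), Fraction(5)  # boundary labels need not be 1

for (i, k) in P:
    g = dict(f)
    for _ in range(i + k - 1):
        g = rowmotion(g)
    # f((p+1-i, q+1-k)) = f(0) f(1) / (R^(i+k-1) f)((i, k))
    assert f[(p + 1 - i, q + 1 - k)] == f["0hat"] * f["1hat"] / g[(i, k)]
```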
\section{\label{sect.righttri} \texorpdfstring{The $\vartriangleright$ triangle}{The |> triangle}}
\label{sect.tria}
Having proven the main properties of birational rowmotion $R$ on the rectangle
$\operatorname*{Rect}\left( p,q\right) $, we now turn
to other posets. We will spend the next three sections discussing the order of
birational rowmotion on certain triangle-shaped posets obtained as subsets of
the square $\operatorname*{Rect}\left( p,p\right) $. We start with the
easiest case:
\begin{definition}
\label{def.Leftri}Let $p$ be a positive integer. Define a subset
$\operatorname*{Tria}\left( p\right) $ of $\operatorname*{Rect}\left(
p,p\right) $ by%
\[
\operatorname*{Tria}\left( p\right) =\left\{ \left( i,k\right)
\in\left\{ 1,2,...,p\right\} ^{2}\ \mid\ i\leq k\right\} .
\]
This subset $\operatorname*{Tria}\left( p\right) $ inherits the structure of a $\left( 2p-1\right) $-graded poset
from $\operatorname*{Rect}\left( p,p\right) $.
It has the form of a triangle as shown below.
\end{definition}
\begin{example}
Below we show on the \emph{left} the Hasse diagram of the poset $\operatorname*{Rect}\left(
4,4\right) $, with the elements that belong to $\operatorname*{Tria}\left(
4\right) $ marked by underlines; on the \emph{right} is the Hasse diagram of the poset $\operatorname*{Tria}\left(
4\right) $ itself:%
\[
\xymatrixrowsep{0.9pc}\xymatrixcolsep{0.20pc}\xymatrix{
& & & \underline{\left(4,4\right)} \ar@{-}[rd] \ar@{-}[ld] & & & \\
& & \left(4,3\right) \ar@{-}[rd] \ar@{-}[ld] & & \underline{\left(3,4\right)} \ar@{-}[rd] \ar@{-}[ld] & & \\
& \left(4,2\right) \ar@{-}[rd] \ar@{-}[ld] & & \underline{\left(3,3\right)} \ar@{-}[rd] \ar@{-}[ld] & & \underline{\left(2,4\right)} \ar@{-}[rd] \ar@{-}[ld] & \\
\left(4,1\right) \ar@{-}[rd] & & \left(3,2\right) \ar@{-}[rd] \ar@{-}[ld] & & \underline{\left(2,3\right)} \ar@{-}[rd] \ar@{-}[ld] & & \underline{\left(1,4\right)} \ar@{-}[ld] \\
& \left(3,1\right) \ar@{-}[rd] & & \underline{\left(2,2\right)} \ar@{-}[rd] \ar@{-}[ld] & & \underline{\left(1,3\right)} \ar@{-}[ld] & \\
& & \left(2,1\right) \ar@{-}[rd] & & \underline{\left(1,2\right)} \ar@{-}[ld] & & \\
& & & \underline{\left(1,1\right)} & & &
} \qquad
\xymatrixrowsep{0.9pc}\xymatrixcolsep{0.20pc}\xymatrix{
& & & \left(4,4\right) \ar@{-}[rd] & & & \\
& & & & \left(3,4\right) \ar@{-}[rd] \ar@{-}[ld] & & \\
& & & \left(3,3\right) \ar@{-}[rd] & & \left(2,4\right) \ar@{-}[rd] \ar@{-}[ld] & \\
& & & & \left(2,3\right) \ar@{-}[rd] \ar@{-}[ld] & & \left(1,4\right) \ar@{-}[ld] \\
& & & \left(2,2\right) \ar@{-}[rd] & & \left(1,3\right) \ar@{-}[ld] & \\
& & & & \left(1,2\right) \ar@{-}[ld] & & \\
& & & \left(1,1\right) & & &
}.
\]
\end{example}
We could also consider the subset $\left\{ \left( i,k\right) \in\left\{
1,2,...,p\right\} ^{2}\ \mid\ i\geq k\right\} $, but that would yield a
poset isomorphic to $\operatorname*{Tria}\left( p\right) $ and thus would
not be of any further interest.
\begin{theorem}
\label{thm.Leftri.ord}Let $p$ be a positive integer. Let $\mathbb{K}$ be a field.
Then,
$\operatorname*{ord}\left( R_{\operatorname*{Tria}\left( p\right) }\right)
=2p$.
\end{theorem}
%%% TR: Don't think this is worth keeping here, but you could persuade me otherwise.
% For the rational map $\overline{R}$ introduced in \cite[\S 6]{grinberg-roby-part1},
% this theorem yields $\operatorname*{ord}\left( \overline{R}%
% _{\operatorname*{Tria}\left( p\right) }\right) \mid2p$. It can be shown
% that actually $\operatorname*{ord}\left( \overline{R}_{\operatorname*{Tria}%
% \left( p\right) }\right) =2p$ for $p>3$, while $\operatorname*{ord}\left(
% \overline{R}_{\operatorname*{Tria}\left( 1\right) }\right) =1$,
% $\operatorname*{ord}\left( \overline{R}_{\operatorname*{Tria}\left(
% 2\right) }\right) =1$ and $\operatorname*{ord}\left( \overline
% {R}_{\operatorname*{Tria}\left( 3\right) }\right) =2$.
As with rectangles, this is the birational analogue of a known result
for classical rowmotion. The poset $\operatorname*{Tria}\left( p\right) $ appears in
\cite[\S 6.2]{striker-williams} as the poset of order ideals $J\left( \left[ 2\right]
\times\left[ p-1\right] \right) $, where the authors show that $\operatorname*{ord}\left(
\mathbf{r}_{\operatorname*{Tria}\left( p\right) }\right) =2p$. Theorem \ref{thm.Leftri.ord}
thus shows that birational rowmotion and classical rowmotion have the same order for
$\operatorname*{Tria}\left( p\right) $.
In order to prove Theorem \ref{thm.Leftri.ord}, we need a way to turn
labellings of $\operatorname*{Tria}\left( p\right) $ into labellings of
$\operatorname*{Rect}\left( p,p\right) $ in a rowmotion-equivariant way. It
turns out that the obvious \textquotedblleft unfolding\textquotedblright%
\ construction (with some fudge coefficients) works:
\begin{lemma}
\label{lem.Leftri.vrefl}Let $p$ be a positive integer.
Let $\mathbb{K}$ be a field of characteristic $\neq2$.
\textbf{(a)} Let $\operatorname*{vrefl}:\operatorname*{Rect}\left(
p,p\right) \rightarrow\operatorname*{Rect}\left( p,p\right) $ be the map
sending every $\left( i,k\right) \in\operatorname*{Rect}\left( p,p\right)
$ to $\left( k,i\right) $. This map $\operatorname*{vrefl}$ is an involutive
poset automorphism of $\operatorname*{Rect}\left( p,p\right) $. (In
intuitive terms, $\operatorname*{vrefl}$ is simply reflection across the
vertical axis.) We have $\operatorname*{vrefl}\left( v\right) \in
\operatorname*{Tria}\left( p\right) $ for every $v\in\operatorname*{Rect}%
\left( p,p\right) \setminus\operatorname*{Tria}\left( p\right) $.
We extend $\operatorname*{vrefl}$ to an involutive poset automorphism of
$\widehat{\operatorname*{Rect}\left( p,p\right) }$ by setting
$\operatorname*{vrefl}\left( 0\right) =0$ and $\operatorname*{vrefl}\left(
1\right) =1$.
\textbf{(b)} Define a
map $\operatorname*{dble}:\mathbb{K}^{\widehat{\operatorname*{Tria}\left(
p\right) }}\rightarrow\mathbb{K}^{\widehat{\operatorname*{Rect}\left(
p,p\right) }}$ by setting%
\[
\left( \operatorname*{dble}f\right) \left( v\right) =\left\{
\begin{array}
[c]{l}%
\dfrac{1}{2}f\left( 1\right) ,\ \ \ \ \ \ \ \ \ \ \text{if }v=1;\\
2f\left( 0\right) ,\ \ \ \ \ \ \ \ \ \ \text{if }v=0;\\
f\left( v\right) ,\ \ \ \ \ \ \ \ \ \ \text{if }v\in\operatorname*{Tria}%
\left( p\right) ;\\
f\left( \operatorname*{vrefl}\left( v\right) \right)
,\ \ \ \ \ \ \ \ \ \ \text{otherwise}%
\end{array}
\right.
\]
for all $v\in\widehat{\operatorname*{Rect}\left( p,p\right) }$ and all
$f\in\mathbb{K}^{\widehat{\operatorname*{Tria}\left( p\right) }}$. This is
well-defined. We have%
\begin{equation}
\left( \operatorname*{dble}f\right) \left( v\right) =f\left( v\right)
\ \ \ \ \ \ \ \ \ \ \text{for every }v\in\operatorname*{Tria}\left( p\right)
. \label{lem.Leftri.vrefl.b.1}%
\end{equation}
Also,%
\begin{equation}
\left( \operatorname*{dble}f\right) \left( \operatorname*{vrefl}\left(
v\right) \right) =f\left( v\right) \ \ \ \ \ \ \ \ \ \ \text{for every
}v\in\operatorname*{Tria}\left( p\right) . \label{lem.Leftri.vrefl.b.2}%
\end{equation}
\textbf{(c)} We have%
\[
R_{\operatorname*{Rect}\left( p,p\right) }\circ\operatorname*{dble}%
=\operatorname*{dble}\circ R_{\operatorname*{Tria}\left( p\right) }.
\]
\end{lemma}
The coefficients $\dfrac{1}{2}$ and $2$ in the definition of
$\operatorname*{dble}$ ensure that the two maps $R_{\operatorname*{Rect}%
\left( p,p\right) }\circ\operatorname*{dble}$ and $\operatorname*{dble}%
\circ R_{\operatorname*{Tria}\left( p\right) }$ in part \textbf{(c)} of the
Lemma agree at every element of the poset, rather than merely up to
extraneous factors in certain ranks.
\begin{proof}
The proofs of \textbf{(a)} and \textbf{(b)} are easy, following in a few lines from the
definitions. The proof of \textbf{(c)} involves a few pages of rewriting formulas and
case-checking, but there are no surprises. Full details are available in
\cite{grinberg-roby-arxiv}.
\end{proof}
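Part \textbf{(c)} of Lemma \ref{lem.Leftri.vrefl} can also be tested
numerically. The following Python sketch (ad-hoc implementations of birational
rowmotion on both posets and of $\operatorname*{dble}$, in exact rational
arithmetic; all names are ours) checks the intertwining relation for $p=3$ and
a random labelling of $\widehat{\operatorname*{Tria}\left( 3\right) }$; note
how the factors $2$ and $\dfrac{1}{2}$ enter:

```python
from fractions import Fraction
import random

p = 3
Rect = [(i, k) for i in range(1, p + 1) for k in range(1, p + 1)]
Tria = [(i, k) for (i, k) in Rect if i <= k]

def ups(poset, v):
    # upper covers of v inside the given poset, or the artificial top element
    i, k = v
    out = [u for u in ((i + 1, k), (i, k + 1)) if u in poset]
    return out or ["1hat"]

def downs(poset, v):
    # lower covers of v inside the given poset, or the artificial bottom element
    out = [u for u in poset if v in ups(poset, u)]
    return out or ["0hat"]

def rowmotion(poset, f):
    # birational rowmotion: birational toggles applied from the top rank down
    f = dict(f)
    for v in sorted(poset, key=lambda v: -(v[0] + v[1])):
        num = sum(f[u] for u in downs(poset, v))
        den = sum(1 / f[u] for u in ups(poset, v))
        f[v] = num / (f[v] * den)
    return f

def dble(f):
    # the unfolding map of Lemma lem.Leftri.vrefl (b): reflect across i = k
    g = {v: f[v if v in Tria else (v[1], v[0])] for v in Rect}
    g["0hat"] = 2 * f["0hat"]
    g["1hat"] = f["1hat"] / 2
    return g

random.seed(3)
f = {v: Fraction(random.randint(1, 9)) for v in Tria}
f["0hat"] = Fraction(random.randint(1, 9))
f["1hat"] = Fraction(random.randint(1, 9))

# part (c): R_Rect(p,p) composed with dble equals dble composed with R_Tria(p)
assert rowmotion(Rect, dble(f)) == dble(rowmotion(Tria, f))
```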
\begin{proof}
[Proof of Theorem \ref{thm.Leftri.ord}.]Applying Lemma
\ref{lem.ord.poor-mans-projord}
to $2p-1$ and $\operatorname*{Tria}\left( p\right) $
instead of $n$ and $P$, we see that $\operatorname*{ord}\left(
R_{\operatorname*{Tria}\left( p\right) }\right) $ is divisible by
$2p-1+1=2p$. Now, if we can prove that $\operatorname*{ord}\left(
R_{\operatorname*{Tria}\left( p\right) }\right) \mid2p$, then we will
immediately obtain $\operatorname*{ord}\left( R_{\operatorname*{Tria}\left(
p\right) }\right) =2p$, and Theorem \ref{thm.Leftri.ord} will be proven.
So it suffices to show that $R_{\operatorname*{Tria}%
\left( p\right) }^{2p}=\operatorname*{id}$. Since this statement boils down
to a collection of polynomial identities in the labels of an arbitrary
$\mathbb{K}$-labelling of $\operatorname*{Tria}\left( p\right) $, it is
clear that it is enough to prove it in the case when $\mathbb{K}$ is a field
of rational functions in finitely many variables over $\mathbb{Q}$. So let us
WLOG assume we are in this case; then the characteristic of $\mathbb{K}$ is
$0$, which is $\neq2$, so that we can apply Lemma \ref{lem.Leftri.vrefl} \textbf{(c)}
to get
\[
R_{\operatorname*{Rect}\left( p,p\right) }\circ\operatorname*{dble}%
=\operatorname*{dble}\circ R_{\operatorname*{Tria}\left( p\right) }.
\]
From this, it follows (by induction over $k$) that
\[
R_{\operatorname*{Rect}\left( p,p\right) }^{k}\circ\operatorname*{dble}%
=\operatorname*{dble}\circ R_{\operatorname*{Tria}\left( p\right) }^{k}%
\]
for every $k\in\mathbb{N}$. Applied to $k=2p$, this yields%
\begin{equation}
R_{\operatorname*{Rect}\left( p,p\right) }^{2p}\circ\operatorname*{dble}%
=\operatorname*{dble}\circ R_{\operatorname*{Tria}\left( p\right) }^{2p}.
\label{pf.Leftri.ord.1}%
\end{equation}
But Theorem \ref{thm.rect.ord} (applied to $q=p$) yields $\operatorname*{ord}%
\left( R_{\operatorname*{Rect}\left( p,p\right) }\right) =p+p=2p$, so that
$R_{\operatorname*{Rect}\left( p,p\right) }^{2p}=\operatorname*{id}$. Hence,
(\ref{pf.Leftri.ord.1}) simplifies to
\[
\operatorname*{dble}=\operatorname*{dble}\circ R_{\operatorname*{Tria}\left(
p\right) }^{2p}.
\]
Since $\operatorname*{dble}$ is injective (hence left-cancellable), we can
cancel $\operatorname*{dble}$ from this equation, and obtain
$R_{\operatorname*{Tria}\left( p\right) }^{2p}=\operatorname*{id}$. This
proves Theorem \ref{thm.Leftri.ord}.
\end{proof}
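The divisibility $\operatorname*{ord}\left( R_{\operatorname*{Tria}\left(
p\right) }\right) \mid2p$ is again easy to check numerically for small $p$.
The Python sketch below (an ad-hoc toggle implementation in exact rational
arithmetic; all names are ours) verifies $R_{\operatorname*{Tria}\left(
3\right) }^{6}f=f$ for a random reduced labelling $f$:

```python
from fractions import Fraction
import random

p = 3
Tria = [(i, k) for i in range(1, p + 1) for k in range(i, p + 1)]

def ups(v):
    i, k = v
    out = [u for u in ((i + 1, k), (i, k + 1)) if u in Tria]
    return out or ["1hat"]

def downs(v):
    i, k = v
    out = [u for u in ((i - 1, k), (i, k - 1)) if u in Tria]
    return out or ["0hat"]

def rowmotion(f):
    # birational toggles from the top rank of Tria(p) down to the bottom
    f = dict(f)
    for v in sorted(Tria, key=lambda v: -(v[0] + v[1])):
        f[v] = sum(f[u] for u in downs(v)) / (f[v] * sum(1 / f[u] for u in ups(v)))
    return f

random.seed(4)
f = {v: Fraction(random.randint(1, 9)) for v in Tria}
f["0hat"] = f["1hat"] = Fraction(1)  # a reduced labelling

g = dict(f)
for _ in range(2 * p):
    g = rowmotion(g)
assert g == f  # ord(R_Tria(3)) divides 2p = 6
```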
\section{\label{sect.delta} \texorpdfstring{The $\Delta$ and $\nabla$ triangles}{The Delta and Nabla triangles}}%
\label{sect.DeltaNabla}
The next kind of triangle-shaped posets is more interesting.
\begin{definition}
\label{def.DeltaNabla}Let $p$ be a positive integer. Define three subsets
$\Delta\left( p\right) $, $\operatorname*{Eq}\left( p\right) $ and
$\nabla\left( p\right) $ which partition $\operatorname*{Rect}\left( p,p\right) $ by%
\begin{align*}
\Delta\left( p\right) & =\left\{ \left( i,k\right) \in\left\{
1,2,...,p\right\} ^{2}\ \mid\ i+k>p+1\right\} ;\\
\operatorname*{Eq}\left( p\right) & =\left\{ \left( i,k\right)
\in\left\{ 1,2,...,p\right\} ^{2}\ \mid\ i+k=p+1\right\} ;\\
\nabla\left( p\right) & =\left\{ \left( i,k\right) \in\left\{
1,2,...,p\right\} ^{2}\ \mid\ i+k<p+1\right\} .
\end{align*}
\end{definition}
\begin{theorem}
\label{thm.DeltaNabla.halfway}Let $p$ be a positive integer. Let
$\operatorname*{vrefl}$ also denote the restriction to $\Delta\left(
p\right) $ of the map $\operatorname*{vrefl}:\operatorname*{Rect}\left(
p,p\right) \rightarrow\operatorname*{Rect}\left( p,p\right) $ of Lemma
\ref{lem.Leftri.vrefl} (so that $\operatorname*{vrefl}\left( \left(
i,k\right) \right) =\left( k,i\right) $ for all $\left( i,k\right)
\in\Delta\left( p\right) $). Then, almost every (in the sense of Zariski
topology) labelling $f\in\mathbb{K}^{\widehat{\Delta\left( p\right) }}$
satisfies%
\[
R_{\Delta\left( p\right) }^{p}f=f\circ\operatorname*{vrefl}.
\]
The same holds if we replace $\Delta\left( p\right) $ everywhere with
$\nabla\left( p\right) $.
\end{theorem}
% \begin{corollary}
% \label{cor.Nabla.ord}Let $p$ be an integer $>1$. Then:
% \textbf{(a)} We have $\operatorname*{ord}\left( R_{\nabla\left( p\right) }\right)
% \mid2p$.
% \textbf{(b)} If $p>2$, then $\operatorname*{ord}\left( R_{\nabla\left(
% p\right) }\right) =2p$.
% \end{corollary}
% \begin{corollary}
% \label{cor.Delta.ord}Let $p$ be an integer $>1$. Then:
% \textbf{(a)} We have $\operatorname*{ord}\left( R_{\Delta\left( p\right) }\right)
% \mid2p$.
% \textbf{(b)} If $p>2$, then $\operatorname*{ord}\left( R_{\Delta\left(
% p\right) }\right) =2p$.
% \end{corollary}
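\begin{example}
For instance, for $p=3$, the partition of $\operatorname*{Rect}\left(
3,3\right) $ introduced in Definition \ref{def.DeltaNabla} is given by%
\[
\Delta\left( 3\right) =\left\{ \left( 2,3\right) ,\left( 3,2\right)
,\left( 3,3\right) \right\} ,\ \ \ \ \ \operatorname*{Eq}\left( 3\right)
=\left\{ \left( 1,3\right) ,\left( 2,2\right) ,\left( 3,1\right)
\right\} ,\ \ \ \ \ \nabla\left( 3\right) =\left\{ \left( 1,1\right)
,\left( 1,2\right) ,\left( 2,1\right) \right\} .
\]
\end{example}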
\begin{corollary}
\label{cor.DeltaNabla.ord}Let $p$ be an integer $>1$. Then:
\textbf{(a)} We have $\operatorname*{ord}\left( R_{\nabla\left( p\right) }\right)
\mid2p$.
\textbf{(b)} If $p>2$, then $\operatorname*{ord}\left( R_{\nabla\left(
p\right) }\right) =2p$.\\
\noindent The same holds if we replace $\nabla\left( p\right)$ everywhere with $\Delta\left( p\right)$.
\end{corollary}
Corollary \ref{cor.DeltaNabla.ord} (for $\Delta (p)$) is analogous to a known result for classical
rowmotion. In fact, from \cite[Conjecture 3.6]{striker-williams} (originally a
conjecture of Panyushev, then proven by Armstrong, Stump and Thomas) and our
Remark \ref{rmk.Delta.sw}, it can be seen that
every integer $p>2$ satisfies $\operatorname*{ord}\left(
\mathbf{r}_{\Delta\left( p\right) }\right) =2p$
(where
$\mathbf{r}_P$ denotes the classical rowmotion map on the order ideals of
a poset $P$). Also, the equivalence of these results for $\nabla (p)$ and $\Delta (p)$ follows
from Remark \ref{rmk.DeltaNabla} and Proposition
\ref{prop.op.ord}.
The proof of Theorem~\ref{thm.DeltaNabla.halfway} will use a mapping that transforms labellings
of $\Delta\left( p\right) $ into labellings of $\operatorname*{Rect}\left( p,p\right) $ in a
way that is rowmotion-equivariant at least under a rather liberal condition on the
labelling. This mapping is similar in its function to the mapping $\operatorname*{dble}$ of
Lemma \ref{lem.Leftri.vrefl}, but its definition is more intricate. Thanks to a suggestion
by an anonymous referee, we state a more general lemma that will specialize to the one we
need.
\begin{lemma}
\label{lem.Delta.hrefl-general}Let $p$ be a positive integer. Let $P$ be a
$\left( 2p-1\right) $-graded finite poset. Let $\operatorname*{hrefl}%
:P\rightarrow P$ be an involutive poset antiautomorphism of $P$. We extend
$\operatorname*{hrefl}$ to
an involutive poset antiautomorphism of $\widehat{P}$ by setting
$\operatorname*{hrefl}\left( 0\right) =1$ and $\operatorname*{hrefl}\left(
1\right) =0$.
Assume that every $v\in\widehat{P}$ satisfies
$\deg\left( \operatorname*{hrefl}v\right) =2p-\deg v$.
Let $N$ be a positive integer. Assume that, for every $v\in P$ satisfying
$\deg v=p-1$, there exist precisely $N$ elements $u$ of $P$ satisfying
$u\gtrdot v$.
Define three subsets $\Delta$, $\operatorname*{Eq}$ and $\nabla$ of $P$ by%
\begin{align*}
\Delta & =\left\{ v\in P\ \mid\ \deg v>p\right\} ;\\
\operatorname*{Eq} & =\left\{ v\in P\ \mid\ \deg v=p\right\} ;\\
\nabla & =\left\{ v\in P\ \mid\ \deg v<p\right\} .
\end{align*}
\textbf{(a)} Define a rational map $\operatorname*{wing}:\mathbb{K}%
^{\widehat{\Delta}}\dashrightarrow\mathbb{K}^{\widehat{P}}$ by setting%
\[
\left( \operatorname*{wing}f\right) \left( v\right) =\left\{
\begin{array}
[c]{l}%
f\left( v\right) ,\ \ \ \ \ \ \ \ \ \ \text{if }v\in\Delta\cup\left\{
1\right\} ;\\
1,\ \ \ \ \ \ \ \ \ \ \text{if }v\in\operatorname*{Eq};\\
\dfrac{1}{\left( R_{\Delta}^{p-\deg v}f\right) \left( \operatorname*{hrefl}%
v\right) },\ \ \ \ \ \ \ \ \ \ \text{if }v\in\nabla\cup\left\{ 0\right\}
\end{array}
\right.
\]
for all $v\in\widehat{P}$ and all $f\in\mathbb{K}^{\widehat{\Delta}}$. This
is well-defined.
\textbf{(b)} There exists a rational map $\overline{\operatorname*{wing}%
}:\overline{\mathbb{K}^{\widehat{\Delta}}}\dashrightarrow\overline
{\mathbb{K}^{\widehat{P}}}$ such that the diagram%
\[
\xymatrix{ \mathbb{K}^{\widehat{\Delta}} \ar@{-->}[d]_-{\pi} \ar@{-->}[r]^{\operatorname*{wing}} & \mathbb{K}^{\widehat{P}} \ar@{-->}[d]^-{\pi} \\ \overline{\mathbb{K}^{\widehat{\Delta}}} \ar@{-->}[r]_{\overline{\operatorname*{wing}}} & \overline{\mathbb{K}^{\widehat{P}}} }
\]
commutes.
\textbf{(c)} The rational map $\overline{\operatorname*{wing}}$ defined in
Lemma \ref{lem.Delta.hrefl-general} \textbf{(b)} satisfies
\[
\overline{R}_{P}\circ\overline{\operatorname*{wing}}=\overline
{\operatorname*{wing}}\circ\overline{R}_{\Delta}.
\]
\textbf{(d)} Almost every (in the sense of Zariski topology) labelling
$f\in\mathbb{K}^{\widehat{\Delta}}$ satisfying $f\left( 0\right) =N$
satisfies%
\[
R_{P}\left( \operatorname*{wing}f\right) =\operatorname*{wing}\left(
R_{\Delta}f\right) .
\]
\end{lemma}
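By induction over $\ell$, part \textbf{(d)} of this lemma yields the
following consequence, which we will use below: almost every labelling
$f\in\mathbb{K}^{\widehat{\Delta}}$ satisfying $f\left( 0\right) =N$
satisfies%
\[
R_{P}^{\ell}\left( \operatorname*{wing}f\right) =\operatorname*{wing}%
\left( R_{\Delta}^{\ell}f\right) \ \ \ \ \ \ \ \ \ \ \text{for every }%
\ell\in\mathbb{N}
\]
(the induction relies on the fact that $R_{\Delta}$ does not change the label
at $0$, so the hypothesis $f\left( 0\right) =N$ persists under iteration).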
% \begin{lemma}
% \label{lem.Delta.hrefl}Let $p$ be a positive integer. Clearly,
% $\operatorname*{Rect}\left( p,p\right) $ is the disjoint union of the sets
% $\Delta\left( p\right) $, $\nabla\left( p\right) $ and $\operatorname*{Eq}%
% \left( p\right) $.
% Let $\mathbb{K}$ be a field of characteristic $\neq 2$.
% \textbf{(a)} Let $\operatorname*{hrefl}:\operatorname*{Rect}\left(
% p,p\right) \rightarrow\operatorname*{Rect}\left( p,p\right) $ be the map
% sending every $\left( i,k\right) \in\operatorname*{Rect}\left( p,p\right)
% $ to $\left( p+1-k,p+1-i\right) $. This map $\operatorname*{hrefl}$ is an
% involution and a poset antiautomorphism of $\operatorname*{Rect}\left(
% p,p\right) $. (In intuitive terms, $\operatorname*{hrefl}$ is simply
% reflection across the horizontal axis (i.e., the line $\operatorname*{Eq}%
% \left( p\right) $).) We have $\operatorname*{hrefl}\mid_{\operatorname*{Eq}%
% \left( p\right) }=\operatorname*{id}$ and $\operatorname*{hrefl}\left(
% \Delta\left( p\right) \right) =\nabla\left( p\right) $.
% We extend $\operatorname*{hrefl}$ to an involutive poset antiautomorphism of
% $\widehat{\operatorname*{Rect}\left( p,p\right) }$ by setting
% $\operatorname*{hrefl}\left( 0\right) =1$ and $\operatorname*{hrefl}\left(
% 1\right) =0$.
% \textbf{(b)} Define a rational map $\operatorname*{wing}:\mathbb{K}^{\widehat{\Delta\left(
% p\right) }}\dashrightarrow\mathbb{K}^{\widehat{\operatorname*{Rect}\left(
% p,p\right) }}$ by setting%
% \[
% \left( \operatorname*{wing}f\right) \left( v\right) =\left\{
% \begin{array}
% [c]{l}%
% f\left( v\right) ,\ \ \ \ \ \ \ \ \ \ \text{if }v\in\Delta\left( p\right)
% \cup\left\{ 1\right\} ;\\
% 1,\ \ \ \ \ \ \ \ \ \ \text{if }v\in\operatorname*{Eq}\left( p\right) ;\\
% \dfrac{1}{\left( R_{\Delta\left( p\right) }^{p-\deg v}f\right) \left(
% \operatorname*{hrefl}v\right) },\ \ \ \ \ \ \ \ \ \ \text{if }v\in
% \nabla\left( p\right) \cup\left\{ 0\right\}
% \end{array}
% \right.
% \]
% for all $v\in\widehat{\operatorname*{Rect}\left( p,p\right) }$ for all
% $f\in\mathbb{K}^{\widehat{\Delta\left( p\right) }}$. This is well-defined.
% \textbf{(c)} Consider the map $\operatorname*{vrefl}:\operatorname*{Rect}%
% \left( p,p\right) \rightarrow\operatorname*{Rect}\left( p,p\right) $
% defined in Lemma \ref{lem.Leftri.vrefl}. Define a map $\operatorname*{vrefl}%
% \nolimits^{\ast}:\mathbb{K}^{\widehat{\operatorname*{Rect}\left( p,p\right)
% }}\rightarrow\mathbb{K}^{\widehat{\operatorname*{Rect}\left( p,p\right) }}$
% by setting%
% \[
% \left( \operatorname*{vrefl}\nolimits^{\ast}f\right) \left( v\right)
% =f\left( \operatorname*{vrefl}\left( v\right) \right)
% \ \ \ \ \ \ \ \ \ \ \text{for all }v\in\widehat{\operatorname*{Rect}\left(
% p,p\right) }%
% \]
% for all $f\in\mathbb{K}^{\widehat{\operatorname*{Rect}\left( p,p\right) }}$.
% Also, define a map $\operatorname*{vrefl}\nolimits^{\ast}:\mathbb{K}%
% ^{\widehat{\Delta\left( p\right) }}\rightarrow\mathbb{K}^{\widehat{\Delta
% \left( p\right) }}$ by setting%
% \[
% \left( \operatorname*{vrefl}\nolimits^{\ast}f\right) \left( v\right)
% =f\left( \operatorname*{vrefl}\left( v\right) \right)
% \ \ \ \ \ \ \ \ \ \ \text{for all }v\in\widehat{\Delta\left( p\right) }%
% \]
% for all $f\in\mathbb{K}^{\widehat{\Delta\left( p\right) }}$. Then,%
% \begin{equation}
% \operatorname*{vrefl}\nolimits^{\ast}\circ R_{\Delta\left( p\right)
% }=R_{\Delta\left( p\right) }\circ\operatorname*{vrefl}\nolimits^{\ast}
% \label{lem.Delta.hrefl.e.1}%
% \end{equation}
% (as rational maps $\mathbb{K}^{\widehat{\Delta\left( p\right) }%
% }\dashrightarrow\mathbb{K}^{\widehat{\Delta\left( p\right) }}$).
% Furthermore,%
% \begin{equation}
% \operatorname*{vrefl}\nolimits^{\ast}\circ R_{\operatorname*{Rect}\left(
% p,p\right) }=R_{\operatorname*{Rect}\left( p,p\right) }\circ
% \operatorname*{vrefl}\nolimits^{\ast} \label{lem.Delta.hrefl.e.2}%
% \end{equation}
% (as rational maps $\mathbb{K}^{\widehat{\operatorname*{Rect}\left(
% p,p\right) }}\dashrightarrow\mathbb{K}^{\widehat{\operatorname*{Rect}\left(
% p,p\right) }}$). Finally,
% \begin{equation}
% \operatorname*{vrefl}\nolimits^{\ast}\circ\operatorname*{wing}%
% =\operatorname*{wing}\circ\operatorname*{vrefl}\nolimits^{\ast}
% \label{lem.Delta.hrefl.e.3}%
% \end{equation}
% (as rational maps $\mathbb{K}^{\widehat{\Delta\left( p\right) }%
% }\dashrightarrow\mathbb{K}^{\widehat{\operatorname*{Rect}\left( p,p\right)
% }}$).
% \textbf{(d)} Almost every (in the sense of Zariski topology) labelling
% $f\in\mathbb{K}^{\widehat{\Delta\left( p\right) }}$ satisfying $f\left(
% 0\right) =2$ satisfies%
% \[
% R_{\operatorname*{Rect}\left( p,p\right) }\left( \operatorname*{wing}%
% f\right) =\operatorname*{wing}\left( R_{\Delta\left( p\right) }f\right)
% .
% \]
% \textbf{(e)} Let $\ell\in\mathbb{N}$. Then, almost every (in the
% sense of Zariski topology) labelling $f\in
% \mathbb{K}^{\widehat{\Delta\left( p\right) }}$ satisfying $f\left(
% 0\right) =2$ satisfies%
% \[
% R_{\operatorname*{Rect}\left( p,p\right) }^{\ell}\left(
% \operatorname*{wing}f\right) =\operatorname*{wing}\left( R_{\Delta\left(
% p\right) }^{\ell}f\right) .
% \]
% \end{lemma}
The condition $f\left(0\right) = N$ in part \textbf{(d)} of this lemma
is imposed to ensure that we obtain an honest equality between
$R_{P}\left( \operatorname*{wing} f\right)$ and
$\operatorname*{wing}\left( R_{\Delta}f\right)$,
without ``correction factors'' in certain ranks.
\begin{proof}
[Proof of Lemma \ref{lem.Delta.hrefl-general}.]We will not delve into the
details of this tedious and yet straightforward proof. Parts \textbf{(a)}
and \textbf{(b)} are straightforward and quick. Parts \textbf{(c)} and
\textbf{(d)} can be verified
label-by-label using Propositions \ref{prop.R.implicit} and
\ref{prop.R.implicit.converse} and some nasty casework (see, again,
\cite{grinberg-roby-arxiv}).
%Part \textbf{(b)} more or less follows from the fact that the
%definitions of $R_{\Delta}$, $R_{P}$ and $\operatorname*{wing}$ are all \textquotedblleft
%invariant\textquotedblright\ under the vertical reflection
%$\operatorname*{vrefl}$; but proving part \textbf{(b)} in a pedestrian way
%might be even more straightforward than formalizing this invariance
%argument
%% \footnote{Again, Propositions \ref{prop.R.implicit} and
%% \ref{prop.R.implicit.converse} come in handy for proving
%% (\ref{lem.Delta.hrefl.e.1}) and (\ref{lem.Delta.hrefl.e.2}). Then, one can
%% prove (by induction over $\ell$) that $\operatorname*{vrefl}\nolimits^{\ast
%% }\circ R_{\Delta\left( p\right) }^{\ell}=R_{\Delta\left( p\right) }^{\ell
%% }\circ\operatorname*{vrefl}\nolimits^{\ast}$ for all $\ell\in\mathbb{N}$.
%% Using this, (\ref{lem.Delta.hrefl.e.3}) is straightforward to check.}.
%% The proof of part \textbf{(e)} is an easy induction over $\ell$ (details left to the
%% reader), using part \textbf{(d)} and the fact that
%% $R_{\Delta\left( p\right) }$ does not change the label at $1$.
\end{proof}
\begin{example}
Here is an example of a poset $P\neq \operatorname*{Rect}\left(
p,p\right)$ to which Lemma
\ref{lem.Delta.hrefl-general} applies. Namely, the hypotheses of Lemma
\ref{lem.Delta.hrefl-general} are satisfied when $p=5$, $N=3$, $P$ is the
poset with Hasse diagram%
\xymatrixrowsep{0.68pc}
\xymatrixcolsep{0.68pc}
\[
\xymatrix{
& & & \bullet\ar@{-}[dl] \ar@{-}[dr] \\
& & \bullet\ar@{-}[d] & & \bullet\ar@{-}[d] \\
& & \bullet\ar@{-}[dl] \ar@{-}[dr] & & \bullet\ar@{-}[dl] \ar@{-}[dr] \\
& \bullet\ar@{-}[dl] \ar@{-}[d] \ar@{-}[dr] & & \bullet\ar@{-}[dl] \ar@
{-}[d] \ar@{-}[dr] & & \bullet\ar@{-}[dl] \ar@{-}[d] \ar@{-}[dr] \\
\bullet\ar@{-}[dr] & \bullet\ar@{-}[d] & \bullet\ar@{-}[dl] \ar@
{-}[dr] & \bullet\ar@{-}[d] & \bullet\ar@{-}[dl] \ar@{-}[dr] & \bullet\ar@
{-}[d] & \bullet\ar@{-}[dl] \\
& \bullet\ar@{-}[dr] & & \bullet\ar@{-}[dl] \ar@{-}[dr] & & \bullet\ar@
{-}[dl] \\
& & \bullet\ar@{-}[d] & & \bullet\ar@{-}[d] \\
& & \bullet\ar@{-}[dr] & & \bullet\ar@{-}[dl] \\
& & & \bullet}
\]
and $\operatorname*{hrefl}:P\rightarrow P$ is the reflection about
the horizontal axis of symmetry.
\end{example}
\begin{example}\label{ex.Rect.hrefl}
For the case of interest in this section, we henceforth specify the map
$\operatorname*{hrefl}:\operatorname*{Rect}\left(
p,p\right) \rightarrow\operatorname*{Rect}\left( p,p\right) $ to be given by
$\left( i,k\right) \in\operatorname*{Rect}\left( p,p\right)
\mapsto \left( p+1-k,p+1-i\right) $.
This map $\operatorname*{hrefl}$ clearly satisfies the hypotheses of
Lemma~\ref{lem.Delta.hrefl-general}, where we set
$P = \operatorname*{Rect}\left( p,p\right)$ and $N = 2$;
we then have $\Delta = \Delta\left(p\right)$ and
$\nabla = \nabla\left(p\right)$.
In intuitive terms, $\operatorname*{hrefl}$ is simply
reflection across the horizontal axis, i.e., the line $\operatorname*{Eq}%
\left( p\right) $.
\end{example}
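The hypotheses of Lemma \ref{lem.Delta.hrefl-general} are easily checked in
this situation: every $\left( i,k\right) \in\operatorname*{Rect}\left(
p,p\right) $ satisfies $\deg\left( i,k\right) =i+k-1$, whence%
\[
\deg\left( \operatorname*{hrefl}\left( i,k\right) \right) =\left(
p+1-k\right) +\left( p+1-i\right) -1=2p-\left( i+k-1\right) =2p-\deg
\left( i,k\right) ;
\]
moreover, every $\left( i,k\right) $ of degree $p-1$ (that is, with
$i+k=p$) is covered by exactly the $N=2$ elements $\left( i+1,k\right) $ and
$\left( i,k+1\right) $ of $\operatorname*{Rect}\left( p,p\right) $.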
We are ready to prove the main theorem of this section.
\begin{proof}
[Proof of Theorem \ref{thm.DeltaNabla.halfway}.]The result that we are
striving to prove is a collection of identities between rational functions,
hence boils down to a collection of polynomial identities in the labels of an
arbitrary $\mathbb{K}$-labelling of $\Delta\left( p\right) $. Therefore, it
is enough to prove it in the case when $\mathbb{K}$ is a field of rational
functions in finitely many variables over $\mathbb{Q}$. So let us WLOG assume
that we are in this case. Then, $2$ is invertible in $\mathbb{K}$, so that we
can apply Lemma \ref{lem.Delta.hrefl-general}.
Consider the maps $\operatorname*{hrefl}$, $\operatorname*{wing}$, and
$\operatorname*{vrefl}$ defined in
Example~\ref{ex.Rect.hrefl}, Lemma~\ref{lem.Delta.hrefl-general}, and
Lemma~\ref{lem.Leftri.vrefl}. The restrictions of $\operatorname*{vrefl}$
to the subposets $\Delta\left(p\right)$ and $\nabla\left(p\right)$ are
automorphisms of these subposets, and will also be denoted by
$\operatorname*{vrefl}$.
% Clearly, it will be enough to prove that%
% \[
% R_{\Delta\left( p\right) }^{p}=\operatorname*{vrefl}\nolimits^{\ast}%
% \]
% as rational maps $\mathbb{K}^{\widehat{\Delta\left( p\right) }%
% }\dashrightarrow\mathbb{K}^{\widehat{\Delta\left( p\right) }}$. In other
% words, it will be enough to prove that $R_{\Delta\left( p\right) }%
% ^{p}g=\operatorname*{vrefl}\nolimits^{\ast}g$ for almost every $g\in
% \mathbb{K}^{\widehat{\Delta\left( p\right) }}$.
Let $g\in\mathbb{K}^{\widehat{\Delta\left( p\right) }}$ be any
sufficiently generic zero-free labelling of $\Delta\left( p\right) $.
We need to show that $R_{\Delta\left( p\right) }^{p}g
= g \circ \operatorname*{vrefl}$ (indeed, this is merely a restatement
of Theorem \ref{thm.DeltaNabla.halfway} with $f$ renamed as $g$).
%Let us use Definition \ref{def.bemol}. The
Since the poset $\Delta\left( p\right) $ is
$\left( p-1\right) $-graded, using Definition \ref{def.bemol} we can find a $\left( p+1\right) $-tuple
$\left( a_{0},a_{1},...,a_{p}\right) \in\left( \mathbb{K}^{\times}\right)
^{p+1}$ such that $\left( \left( a_{0},a_{1},...,a_{p}\right) \flat
g\right) \left( 0\right) =2$ (by setting $a_{0}=\dfrac{2}{g\left(
0\right) }$, and choosing all other $a_{i}$ arbitrarily). Fix such a $\left(
p+1\right) $-tuple, and set $f=\left( a_{0},a_{1},...,a_{p}\right) \flat
g$. Then, $f\left( 0\right) =2$. We are going to prove that $R_{\Delta
\left( p\right) }^{p}f = f \circ \operatorname*{vrefl}$. Until we
have done this, we can forget about $g$; all we need to know is that $f$ is a
sufficiently generic $\mathbb{K}$-labelling of $\Delta\left( p\right) $
satisfying $f\left( 0\right) =2$.
Let $\left( i,k\right) \in\Delta\left( p\right) $ be arbitrary. Then,
$i+k>p+1$ (since $\left( i,k\right) \in\Delta\left( p\right) $).
Consequently, $2p-\left( i+k-1\right) $ is a well-defined element of
$\left\{ 1,2,...,p-1\right\} $. Denote this element by $h$. Thus,
$h\in\left\{ 1,2,...,p-1\right\} $ and $i+k-1+h=2p$. Moreover, $\left(
k,i\right) =\operatorname*{vrefl}\left( \left( i,k\right) \right)
\in\Delta\left( p\right) $ (since $k+i>p+1$).
Let $v=\left( p+1-k,p+1-i\right) $. Then, $v=\operatorname*{hrefl}\left(
\left( i,k\right) \right) \in\nabla\left( p\right) $ (since $\left(
i,k\right) \in\Delta\left( p\right) $) and $\deg v=\left( p+1-k\right)
+\left( p+1-i\right) -1=2p-\left( i+k-1\right) =h$.
Moreover, $\operatorname*{hrefl}v=\left( i,k\right) $.
Lemma \ref{lem.Delta.hrefl-general} \textbf{(d)} (applied $h$ times) yields
$R_{\operatorname*{Rect}\left( p,p\right) }^{h}\left( \operatorname*{wing}%
f\right) =\operatorname*{wing}\left( R_{\Delta\left( p\right) }%
^{h}f\right) $; hence,%
\begin{align}
\left( R_{\operatorname*{Rect}\left( p,p\right) }^{h}\left(
\operatorname*{wing}f\right) \right) \left( v\right) & =\left( \operatorname*{wing}\left( R_{\Delta\left( p\right) }%
^{h}f\right) \right) \left( v\right) =\dfrac{1}{\left( R_{\Delta\left(
p\right) }^{p-\deg v}\left( R_{\Delta\left( p\right) }^{h}f\right)
\right) \left( \operatorname*{hrefl}v\right) }\nonumber\\
% & \ \ \ \ \ \ \ \ \ \ \left( \text{by the definition of }%
% \operatorname*{wing}\text{, since }v\in\nabla\left( p\right) \subseteq
% \nabla\left( p\right) \cup\left\{ 0\right\} \right) \nonumber\\
& =\dfrac{1}{\left( R_{\Delta\left( p\right) }^{p-h}\left( R_{\Delta
\left( p\right) }^{h}f\right) \right) \left( \operatorname*{hrefl}v \right)
}\ \ \ \ \ \ \ \ \ \ \left( \text{since }
\deg v = h \right) \nonumber\\
& =\dfrac{1}{\left( R_{\Delta\left( p\right) }^{p}f\right) \left(
\left( i,k\right) \right) } .
\label{pf.Delta.halfway.new.short.4}%
\end{align}
% (since $\operatorname*{hrefl}v=\left( i,k\right) $ and
% $R_{\Delta\left( p\right) }^{p-h}\left( R_{\Delta\left( p\right) }%
% ^{h}f\right) = \left( R_{\Delta\left( p\right) }^{p-h}\circ
% R_{\Delta\left( p\right) }^{h}\right) f=R_{\Delta\left( p\right) }^{p}f$).
But Theorem \ref{thm.rect.antip.general}
% (applied to $p$,
% $R_{\operatorname*{Rect}\left( p,p\right) }^{h}\left( \operatorname*{wing}%
% f\right) $ and $\left( k,i\right) $ instead of $q$, $f$ and $\left(
% i,k\right) $)
yields%
\begin{align*}
& \left( R_{\operatorname*{Rect}\left( p,p\right) }^{h}\left(
\operatorname*{wing}f\right) \right) \left( \left( p+1-k,p+1-i\right)
\right) \\
& =\dfrac{\left( R_{\operatorname*{Rect}\left( p,p\right) }^{h}\left(
\operatorname*{wing}f\right) \right) \left( 0\right) \cdot\left(
R_{\operatorname*{Rect}\left( p,p\right) }^{h}\left( \operatorname*{wing}%
f\right) \right) \left( 1\right) }{\left( R_{\operatorname*{Rect}\left(
p,p\right) }^{i+k-1}\left( R_{\operatorname*{Rect}\left( p,p\right) }%
^{h}\left( \operatorname*{wing}f\right) \right) \right) \left( \left(
k,i\right) \right) }.
\end{align*}
Since $\left( p+1-k,p+1-i\right) =v$ and
\begin{align*}
R_{\operatorname*{Rect}\left( p,p\right) }^{i+k-1}\left(
R_{\operatorname*{Rect}\left( p,p\right) }^{h}\left( \operatorname*{wing}%
f\right) \right)
% & =\left( \underbrace{R_{\operatorname*{Rect}\left(
% p,p\right) }^{i+k-1}\circ R_{\operatorname*{Rect}\left( p,p\right) }^{h}%
% }_{\substack{=R_{\operatorname*{Rect}\left( p,p\right) }^{i+k-1+h}%
% =R_{\operatorname*{Rect}\left( p,p\right) }^{2p}\\\text{(since
% }i+k-1+h=2p\text{)}}}\right) \left( \operatorname*{wing}f\right) \\
& =R_{\operatorname*{Rect}\left( p,p\right) }^{2p}%
% _{\substack{=\operatorname*{id}\\\text{(since Theorem \ref{thm.rect.ord}
% (applied to }q=p\text{)}\\\text{yields }\operatorname*{ord}\left(
% R_{\operatorname*{Rect}\left( p,p\right) }\right) =p+p=2p\text{)}}}
\left(
\operatorname*{wing}f\right) =\operatorname*{wing}f,
\end{align*}
(where we used $i+k-1+h=2p$ and the identity $R_{\operatorname*{Rect}\left(
p,p\right) }^{2p}=\operatorname*{id}$, which follows from Theorem
\ref{thm.rect.ord}), this equality rewrites as%
\[
\left( R_{\operatorname*{Rect}\left( p,p\right) }^{h}\left(
\operatorname*{wing}f\right) \right) \left( v\right) =\dfrac{\left(
R_{\operatorname*{Rect}\left( p,p\right) }^{h}\left( \operatorname*{wing}%
f\right) \right) \left( 0\right) \cdot\left( R_{\operatorname*{Rect}%
\left( p,p\right) }^{h}\left( \operatorname*{wing}f\right) \right)
\left( 1\right) }{\left( \operatorname*{wing}f\right) \left( \left(
k,i\right) \right) }.
\]
% Since
% \begin{align*}
% & \underbrace{\left( R_{\operatorname*{Rect}\left( p,p\right) }^{h}\left(
% \operatorname*{wing}f\right) \right) \left( 0\right) }_{\substack{=\left(
% \operatorname*{wing}f\right) \left( 0\right) \\\text{(by Corollary
% \ref{cor.R.implicit.01})}}}\cdot\underbrace{\left( R_{\operatorname*{Rect}%
% \left( p,p\right) }^{h}\left( \operatorname*{wing}f\right) \right)
% \left( 1\right) }_{\substack{=\left( \operatorname*{wing}f\right) \left(
% 1\right) \\\text{(by Corollary \ref{cor.R.implicit.01})}}}\\
% & =\underbrace{\left( \operatorname*{wing}f\right) \left( 0\right)
% }_{\substack{=\dfrac{1}{\left( R_{\Delta\left( p\right) }^{p-\deg
% 0}f\right) \left( \operatorname*{hrefl}0\right) }\\\text{(by the definition
% of }\operatorname*{wing}\text{)}}}\cdot\underbrace{\left(
% \operatorname*{wing}f\right) \left( 1\right) }_{\substack{=f\left(
% 1\right) \\\text{(by the definition of }\operatorname*{wing}\text{)}}}
% =\dfrac{1}{\left( R_{\Delta\left( p\right) }^{p-\deg0}f\right) \left(
% \operatorname*{hrefl}0\right) }\cdot f\left( 1\right) =1
% \end{align*}
% (since Corollary \ref{cor.R.implicit.01} yields $\left( R_{\Delta\left(
% p\right) }^{p-\deg0}f\right) \left( \operatorname*{hrefl}0\right)
% =f\left( \operatorname*{hrefl}0\right) =f\left( 1\right) $),
By Corollary~\ref{cor.R.implicit.01}, birational rowmotion leaves the labels
at $0$ and $1$ unchanged; moreover, the definition of $\operatorname*{wing}$
yields $\left( \operatorname*{wing}f\right) \left( 0\right) \cdot\left(
\operatorname*{wing}f\right) \left( 1\right) =\dfrac{1}{f\left( 1\right)
}\cdot f\left( 1\right) =1$. Hence, this
simplifies to
\[
\left( R_{\operatorname*{Rect}\left( p,p\right) }^{h}\left(
\operatorname*{wing}f\right) \right) \left( v\right) =\dfrac{1}{\left(
\operatorname*{wing}f\right) \left( \left( k,i\right) \right) }.
\]
Compared with (\ref{pf.Delta.halfway.new.short.4}), this yields $\dfrac
{1}{\left( R_{\Delta\left( p\right) }^{p}f\right) \left( \left(
i,k\right) \right) }=\dfrac{1}{\left( \operatorname*{wing}f\right) \left(
\left( k,i\right) \right) }$. Taking inverses in this equality, we get
\begin{align*}
\left( R_{\Delta\left( p\right) }^{p}f\right) \left( \left( i,k\right)
\right) & =\left( \operatorname*{wing}f\right) \left( \left(
k,i\right) \right) =f\left( \underbrace{\left( k,i\right) }%
_{=\operatorname*{vrefl}\left( i,k\right) }\right) \\
% & \ \ \ \ \ \ \ \ \ \ \left( \text{by the definition of }%
% \operatorname*{wing}\text{, since }\left( k,i\right) \in\Delta\left(
% p\right) \subseteq\Delta\left( p\right) \cup\left\{ 1\right\} \right) \\
& =f\left( \operatorname*{vrefl}\left( i,k\right) \right) =\left(
f \circ \operatorname*{vrefl}\right) \left( \left( i,k\right)
\right).% \\
% & \ \ \ \ \ \ \ \ \ \ \left( \text{since }\left( \operatorname*{vrefl}%
% \nolimits^{\ast}f\right) \left( \left( i,k\right) \right) =f\left(
% \operatorname*{vrefl}\left( i,k\right) \right) \text{ by the definition of
% }\operatorname*{vrefl}\nolimits^{\ast}\right) .
\end{align*}
We have now shown this for \textbf{every} $\left( i,k\right) \in
\Delta\left( p\right) $. Since the labellings $R_{\Delta\left( p\right)
}^{p}f$ and $f\circ\operatorname*{vrefl}$ also agree at $0$ and $1$ (by
Corollary \ref{cor.R.implicit.01}, and since $\operatorname*{vrefl}$ fixes
$0$ and $1$), we conclude that $R_{\Delta
\left( p\right) }^{p}f = f \circ \operatorname*{vrefl}$.
Next recall that $f=\left( a_{0},a_{1},...,a_{p}\right) \flat g$. Hence,%
\begin{equation}
R_{\Delta\left( p\right) }^{p}f=R_{\Delta\left( p\right) }^{p}\left(
\left( a_{0},a_{1},...,a_{p}\right) \flat g\right) =\left( a_{0}%
,a_{1},...,a_{p}\right) \flat\left( R_{\Delta\left( p\right) }%
^{p}g\right) \label{pf.Delta.halfway.new.short.10}%
\end{equation}
by Corollary \ref{cor.Rl.scalmult}.
% , applied to $\Delta\left( p\right) $,
% $p-1$ and $g$ instead of $P$, $n$ and $f$).
On the other hand, $f=\left(
a_{0},a_{1},...,a_{p}\right) \flat g$ yields%
\begin{equation}
f \circ \operatorname*{vrefl} =
\left( \left( a_{0},a_{1},...,a_{p}\right) \flat g\right)
\circ \operatorname*{vrefl} =\left(
a_{0},a_{1},...,a_{p}\right) \flat\left(
g \circ \operatorname*{vrefl} \right)
\label{pf.Delta.halfway.new.short.11}
\end{equation}
(this is easy to check directly using the definitions of $\flat$ and
$\operatorname*{vrefl}$, since every
$v\in \widehat{\Delta\left(p\right)}$ satisfies
$\deg\left(\operatorname*{vrefl}v\right) = \deg v$).
In light of (\ref{pf.Delta.halfway.new.short.10}) and
(\ref{pf.Delta.halfway.new.short.11}), the equality $R_{\Delta\left(
p\right) }^{p}f= f \circ \operatorname*{vrefl}$ becomes
$\left( a_{0},a_{1},...,a_{p}\right) \flat\left( R_{\Delta\left(
p\right) }^{p}g\right) =\left( a_{0},a_{1},...,a_{p}\right) \flat\left(
g \circ \operatorname*{vrefl}\right) $. We can cancel the
\textquotedblleft$\left( a_{0},a_{1},...,a_{p}\right) \flat$%
\textquotedblright\ from both sides of this equation (since all $a_{i}$ are
nonzero), and thus obtain $R_{\Delta\left( p\right) }^{p}%
g= g \circ \operatorname*{vrefl}$.
% As we have seen, this is all we
% need to prove
This proves Theorem \ref{thm.DeltaNabla.halfway} for $\Delta (p)$.
It is now straightforward to obtain the results for $\nabla (p)$ using the poset
antiautomorphism $\operatorname*{hrefl}$ of $\operatorname*{Rect}\left(
p,p\right) $ defined in Remark \ref{rmk.DeltaNabla}, which restricts to a poset
antiisomorphism $\operatorname*{hrefl}:\nabla\left( p\right) \rightarrow
\Delta\left( p\right) $, that is, to a poset isomorphism
$\operatorname*{hrefl}:\nabla\left( p\right) \rightarrow\left(
\Delta\left( p\right) \right) ^{\operatorname*{op}}$. Details appear
in \cite{grinberg-roby-arxiv}.
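Briefly: as in the proof of Proposition \ref{prop.op.ord}, one constructs a
birational map $\kappa:\mathbb{K}^{\widehat{\Delta\left( p\right) }%
}\dashrightarrow\mathbb{K}^{\widehat{\nabla\left( p\right) }}$ satisfying%
\[
\kappa\circ R_{\Delta\left( p\right) }=R_{\nabla\left( p\right) }%
^{-1}\circ\kappa ,
\]
and then transports the claim for $\Delta\left( p\right) $ along $\kappa$.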
\end{proof}
% We can now obtain Theorem \ref{thm.Nabla.halfway} from Theorem
% \ref{thm.Delta.halfway} using a construction from the proof of Proposition
% \ref{prop.op.ord}:
% \begin{proof}
% [Proof of Theorem \ref{thm.Nabla.halfway}.]The poset
% antiautomorphism $\operatorname*{hrefl}$ of $\operatorname*{Rect}\left(
% p,p\right) $ defined in Remark \ref{rmk.DeltaNabla} restricts to a poset
% antiisomorphism $\operatorname*{hrefl}:\nabla\left( p\right) \rightarrow
% \Delta\left( p\right) $, that is, to a poset homomorphism
% $\operatorname*{hrefl}:\nabla\left( p\right) \rightarrow\left(
% \Delta\left( p\right) \right) ^{\operatorname*{op}}$. We will use this
% isomorphism to identify the poset $\nabla\left( p\right) $ with the opposite
% poset $\left( \Delta\left( p\right) \right) ^{\operatorname*{op}}$ of
% $\Delta\left( p\right) $.
% Set $P=\Delta\left( p\right) $. Define a rational map $\kappa:\mathbb{K}%
% ^{\widehat{P}}\dashrightarrow\mathbb{K}^{\widehat{P^{\operatorname*{op}}}}$ as
% in the proof of Proposition \ref{prop.op.ord}. Then, as in said proof, it can
% be shown that the map $\kappa$ is a birational map and satisfies $\kappa\circ
% R_{P}=R_{P^{\operatorname*{op}}}^{-1}\circ\kappa$. Since $P=\Delta\left(
% p\right) $ and $P^{\operatorname*{op}}=\left( \Delta\left( p\right)
% \right) ^{\operatorname*{op}}=\nabla\left( p\right) $, this rewrites as
% $\kappa\circ R_{\Delta\left( p\right) }=R_{\nabla\left( p\right) }%
% ^{-1}\circ\kappa$. For the same reason, we know that $\kappa$ is a rational
% map $\mathbb{K}^{\widehat{\Delta\left( p\right) }}\dashrightarrow
% \mathbb{K}^{\widehat{\nabla\left( p\right) }}$.
% From $\kappa\circ R_{\Delta\left(
% p\right) }=R_{\nabla\left( p\right) }^{-1}\circ\kappa$, we can easily
% obtain $\kappa\circ R_{\Delta\left(
% p\right) }^p=R_{\nabla\left( p\right) }^{-p}\circ\kappa$.
% Now, consider the map $\operatorname*{vrefl}\nolimits^{\ast}:\mathbb{K}%
% ^{\widehat{\Delta\left( p\right) }}\rightarrow\mathbb{K}^{\widehat{\Delta
% \left( p\right) }}$ defined in Lemma \ref{lem.Delta.hrefl} \textbf{(c)}, and
% also consider the similarly defined map $\operatorname*{vrefl}\nolimits^{\ast
% }:\mathbb{K}^{\widehat{\nabla\left( p\right) }}\rightarrow\mathbb{K}%
% ^{\widehat{\nabla\left( p\right) }}$. Both squares of the diagram%
% \[
% \xymatrixcolsep{5pc}
% \xymatrix{
% \mathbb K^{\widehat{\Delta\left(p\right)}} \ar@{-->}[r]^{R_{\Delta
% \left(p\right)}^p} \ar@{-->}[d]^{\kappa} & \mathbb K^{\widehat{\Delta
% \left(p\right)}}
% \ar[r]^{\operatorname*{vrefl}^{\ast}} \ar@{-->}[d]^{\kappa} & \mathbb
% K^{\widehat{\Delta\left(p\right)}} \ar@{-->}[d]^{\kappa} \\
% \mathbb K^{\widehat{\nabla\left(p\right)}} \ar@{-->}[r]_{R_{\nabla
% \left(p\right)}^{-p}} & \mathbb K^{\widehat{\nabla\left(p\right)}}
% \ar[r]_{\operatorname*{vrefl}^{\ast}} & \mathbb K^{\widehat{\nabla
% \left(p\right)}}
% }
% \]
% commute (the left square does so because of $\kappa\circ R_{\Delta\left(
% p\right) }^p=R_{\nabla\left( p\right) }^{-p}\circ\kappa$, and the
% commutativity of the right square follows from a simple calculation), and so
% the whole diagram commutes. In other words,
% \begin{equation}
% \kappa\circ\left( \operatorname*{vrefl}\nolimits^{\ast}\circ R_{\Delta\left(
% p\right) }^p\right) =\left( \operatorname*{vrefl}\nolimits^{\ast}\circ
% R_{\nabla\left( p\right) }^{-p}\right) \circ\kappa.
% \label{pf.Nabla.halfway.short.1}
% \end{equation}
% But the statement of Theorem \ref{thm.Delta.halfway} can be rewritten as
% $R_{\Delta\left( p\right) }^p
% =\operatorname*{vrefl}\nolimits^{\ast}$. Since
% $\operatorname*{vrefl}\nolimits^{\ast}$ is an involution (this is clear by
% inspection), we have $\operatorname*{vrefl}\nolimits^{\ast}=\left(
% \operatorname*{vrefl}\nolimits^{\ast}\right) ^{-1}$, so that
% $\underbrace{\operatorname*{vrefl}\nolimits^{\ast}}_{=\left(
% \operatorname*{vrefl}\nolimits^{\ast}\right) ^{-1}}\circ\underbrace{R_{\Delta
% \left( p\right) }^p}_{=\operatorname*{vrefl}\nolimits^{\ast}}=\left(
% \operatorname*{vrefl}\nolimits^{\ast}\right) ^{-1}\circ\operatorname*{vrefl}%
% \nolimits^{\ast}=\operatorname*{id}$. Thus, (\ref{pf.Nabla.halfway.short.1})
% simplifies to $\kappa\circ\operatorname*{id}=\left( \operatorname*{vrefl}%
% \nolimits^{\ast}\circ R_{\nabla\left( p\right) }^{-p}\right) \circ\kappa$.
% In other words, $\kappa=\left( \operatorname*{vrefl}\nolimits^{\ast}\circ
% R_{\nabla\left( p\right) }^{-p}\right) \circ\kappa$. Since $\kappa$ is a
% birational map, we can cancel $\kappa$ from this identity, obtaining
% $\operatorname*{id}=\operatorname*{vrefl}\nolimits^{\ast}\circ R_{\nabla
% \left( p\right) }^{-p}$. In other words, $R_{\nabla\left( p\right)
% }^p=\operatorname*{vrefl}\nolimits^{\ast}$. But this is precisely the statement
% of Theorem \ref{thm.Nabla.halfway}.
% \end{proof}
The proof of Corollary \ref{cor.DeltaNabla.ord} is now a simple exercise (or can be looked
up in \cite{grinberg-roby-arxiv}).
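For part \textbf{(a)}, the argument is short: for almost every $f\in
\mathbb{K}^{\widehat{\Delta\left( p\right) }}$, applying Theorem
\ref{thm.DeltaNabla.halfway} twice gives%
\[
R_{\Delta\left( p\right) }^{2p}f=R_{\Delta\left( p\right) }^{p}\left(
R_{\Delta\left( p\right) }^{p}f\right) =\left( R_{\Delta\left( p\right)
}^{p}f\right) \circ\operatorname*{vrefl}=f\circ\operatorname*{vrefl}%
\circ\operatorname*{vrefl}=f
\]
(since $\operatorname*{vrefl}$ is an involution), so that $R_{\Delta\left(
p\right) }^{2p}=\operatorname*{id}$ and thus $\operatorname*{ord}\left(
R_{\Delta\left( p\right) }\right) \mid2p$.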
% \begin{proof}
% [Proof of Corollary \ref{cor.Delta.ord}.]\textbf{(a)} Let
% $f\in\mathbb{K}^{\widehat{\Delta\left( p\right) }}$ be sufficiently generic.
% Then, every $\left( i,k\right) \in\Delta\left( p\right) $ satisfies%
% \begin{align*}
% & \left( \underbrace{R_{\Delta\left( p\right) }^{2p}}_{=R_{\Delta\left(
% p\right) }^{p}\circ R_{\Delta\left( p\right) }^{p}}f\right) \left(
% \left( i,k\right) \right) \\
% & =\left( \left( R_{\Delta\left( p\right) }^{p}\circ R_{\Delta\left(
% p\right) }^{p}\right) f\right) \left( \left( i,k\right) \right)
% =\left( R_{\Delta\left( p\right) }^{p}\left( R_{\Delta\left( p\right)
% }^{p}f\right) \right) \left( \left( i,k\right) \right) \\
% & =\left( R_{\Delta\left( p\right) }^{p}f\right) \left( \left(
% k,i\right) \right) \ \ \ \ \ \ \ \ \ \ \left( \text{by Theorem
% \ref{thm.Delta.halfway}, applied to }R_{\Delta\left( p\right) }^{p}f\text{
% instead of }f\right) \\
% & =f\left( \left( i,k\right) \right) \ \ \ \ \ \ \ \ \ \ \left( \text{by
% Theorem \ref{thm.Delta.halfway}, applied to }\left( k,i\right) \text{
% instead of }\left( i,k\right) \right) .
% \end{align*}
% Hence, the two labellings $R_{\Delta\left( p\right) }^{2p}f$ and $f$ are
% equal on every element of $\Delta\left( p\right) $. Since these two
% labellings are also equal on $0$ and $1$ (because Corollary
% \ref{cor.R.implicit.01} yields $\left( R_{\Delta\left( p\right) }%
% ^{2p}f\right) \left( 0\right) =f\left( 0\right) $ and $\left(
% R_{\Delta\left( p\right) }^{2p}f\right) \left( 1\right) =f\left(
% 1\right) $), this yields that the two labellings $R_{\Delta\left( p\right)
% }^{2p}f$ and $f$ are equal on every element of $\Delta\left( p\right)
% \cup\left\{ 0,1\right\} =\widehat{\Delta\left( p\right) }$. Hence,
% $R_{\Delta\left( p\right) }^{2p}f=f=\operatorname*{id}f$.
% Now, forget that we fixed $f$. We thus have shown that $R_{\Delta\left(
% p\right) }^{2p}f=\operatorname*{id}f$ for every sufficiently generic
% $f\in\mathbb{K}^{\widehat{\Delta\left( p\right) }}$. Hence, $R_{\Delta
% \left( p\right) }^{2p}=\operatorname*{id}$. In other words,
% $\operatorname*{ord}\left( R_{\Delta\left( p\right) }\right) \mid2p$. This
% proves Corollary \ref{cor.Delta.ord} \textbf{(a)}.
% \textbf{(b)} Proving Corollary \ref{cor.Delta.ord} \textbf{(b)} is left to the reader.
% \end{proof}
% \begin{proof}
% [Proof of Corollary \ref{cor.Nabla.ord}.]Corollary
% \ref{cor.Nabla.ord} can be deduced from Theorem \ref{thm.Nabla.halfway} in the
% same way as Corollary \ref{cor.Delta.ord} is deduced from Theorem
% \ref{thm.Delta.halfway}. We won't dwell on the details.
% \end{proof}
\section{\label{sect.quarter}\label{sect.quartertri}The quarter-triangles}
We have now studied the order of birational rowmotion on all four triangles
(two of which are isomorphic as posets) which are obtained by cutting the
rectangle $\operatorname*{Rect}\left( p,p\right) $ along one of its
diagonals. But we can also cut $\operatorname*{Rect}\left( p,p\right) $
along \textbf{both} diagonals into four smaller triangles. These are
isomorphic in pairs, and we will analyze them now. The following definition is
an analogue of Definition \ref{def.DeltaNabla} but using $\operatorname*{Tria}%
\left( p\right) $ instead of $\operatorname*{Rect}\left( p,p\right) $:
\begin{definition}
\label{def.NEtri}Let $p$ be a positive integer. Define three subsets
$\operatorname*{NEtri}\left( p\right) $, $\operatorname*{Eqtri}\left(
p\right) $ and $\operatorname*{SEtri}\left( p\right) $ of
$\operatorname*{Tria}\left( p\right) $ by%
\begin{align*}
\operatorname*{NEtri}\left( p\right) & =\left\{ \left( i,k\right)
\in\operatorname*{Tria}\left( p\right) \ \mid\ i+k>p+1\right\} ;\\
\operatorname*{Eqtri}\left( p\right) & =\left\{ \left( i,k\right)
\in\operatorname*{Tria}\left( p\right) \ \mid\ i+k=p+1\right\} ;\\
\operatorname*{SEtri}\left( p\right) & =\left\{ \left( i,k\right)
\in\operatorname*{Tria}\left( p\right) \ \mid\ i+k<p+1\right\} .
\end{align*}
\end{definition}
\begin{conjecture}
\label{conj.SEtri.ord}Let $p$ be an integer $>1$. Then, $\operatorname*{ord}%
\left( R_{\operatorname*{SEtri}\left( p\right) }\right) =p$
and $\operatorname*{ord}%
\left( R_{\operatorname*{NEtri}\left( p\right) }\right) =p$.
\end{conjecture}
In the case when $p$ is odd, we can prove this conjecture using
the same approach that was used to prove Theorem \ref{thm.Leftri.ord}
(see \cite{grinberg-roby-arxiv} for details):
\begin{theorem}
\label{thm.SEtri.ord}Let $p$ be an odd integer $>1$. Then, $\operatorname*{ord}%
\left( R_{\operatorname*{SEtri}\left( p\right) }\right) =p$
and $\operatorname*{ord}%
\left( R_{\operatorname*{NEtri}\left( p\right) }\right) =p$.
\end{theorem}
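For the smallest odd case $p=3$, the theorem can be checked directly: whichever half of $\operatorname*{Rect}\left( 3,3\right) $ plays the role of $\operatorname*{Tria}\left( 3\right) $, the poset $\operatorname*{SEtri}\left( 3\right) $ is a two-element chain $a\lessdot b$, and on a labelling with $f\left( 0\right) =f\left( 1\right) =1$, one step of birational rowmotion sends $\left( f\left( a\right) ,f\left( b\right) \right) $ to $\left( 1/f\left( b\right) ,\ f\left( a\right) /f\left( b\right) \right) $. The following numeric sketch iterates this closed form (which is our own computation from the toggle description of $R$, not a formula stated in the text):

```python
from fractions import Fraction

def step(x, y):
    # One step of birational rowmotion on the two-element chain a < b,
    # with boundary labels f(0) = f(1) = 1: toggling the top element b
    # yields x/y, then toggling the bottom element a yields 1/y.
    return (Fraction(1) / y, x / y)

x0, y0 = Fraction(5), Fraction(7)
x, y = x0, y0
for _ in range(3):
    x, y = step(x, y)
assert (x, y) == (x0, y0)  # period 3 = p, as the theorem asserts for p = 3
```

Three iterations return every starting pair to itself, matching $\operatorname*{ord}\left( R_{\operatorname*{SEtri}\left( 3\right) }\right) =3$.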
However, this reasoning breaks down when $p$ is even
(even though the order of \textbf{classical} rowmotion is known to be $p$
in that case as well -- see \cite[Conjecture 3.6]{striker-williams}).
% Here is
% how the proof proceeds in the case of odd $p$:
%
% \begin{proposition}
% \label{prop.SEtri.odd.ord}Let $p$ be an odd integer $>1$.
% Let $\mathbb{K}$ be a field.
% Then,
% $\operatorname*{ord}\left( R_{\operatorname*{SEtri}\left( p\right)
% }\right) =p$.
% \end{proposition}
%
% \begin{proposition}
% \label{prop.NEtri.odd.ord}Let $p$ be an odd integer $>1$.
% Let $\mathbb{K}$ be a field.
% Then,
% $\operatorname*{ord}\left( R_{\operatorname*{NEtri}\left( p\right)
% }\right) =p$.
% \end{proposition}
%
% Our proof of Proposition \ref{prop.NEtri.odd.ord} rests upon the following fact:
%
% \begin{lemma}
% \label{lem.NEtri.vrefl}Let $p$ be a positive integer.
% Let $\mathbb{K}$ be a field of characteristic $\neq2$.
%
% \textbf{(a)} Let $\operatorname*{vrefl}:\Delta\left( p\right) \rightarrow
% \Delta\left( p\right) $ be the map sending every $\left( i,k\right)
% \in\Delta\left( p\right) $ to $\left( k,i\right) $. This map
% $\operatorname*{vrefl}$ is an involutive poset automorphism of $\Delta\left(
% p\right) $. (In intuitive terms, $\operatorname*{vrefl}$ is simply reflection
% across the vertical axis.) We have $\operatorname*{vrefl}\left( v\right)
% \in\operatorname*{NEtri}\left( p\right) $ for every $v\in\Delta\left(
% p\right) \setminus\operatorname*{NEtri}\left( p\right) $.
%
% We extend $\operatorname*{vrefl}$ to an involutive poset automorphism of
% $\widehat{\Delta\left( p\right) }$ by setting $\operatorname*{vrefl}\left(
% 0\right) =0$ and $\operatorname*{vrefl}\left( 1\right) =1$.
%
% \textbf{(b)} Define a
% map $\operatorname*{dble}:\mathbb{K}^{\widehat{\operatorname*{NEtri}\left(
% p\right) }}\rightarrow\mathbb{K}^{\widehat{\Delta\left( p\right) }}$ by
% setting%
% \[
% \left( \operatorname*{dble}f\right) \left( v\right) =\left\{
% \begin{array}
% [c]{l}%
% \dfrac{1}{2}f\left( 1\right) ,\ \ \ \ \ \ \ \ \ \ \text{if }v=1;\\
% f\left( 0\right) ,\ \ \ \ \ \ \ \ \ \ \text{if }v=0;\\
% f\left( v\right) ,\ \ \ \ \ \ \ \ \ \ \text{if }v\in\operatorname*{NEtri}%
% \left( p\right) ;\\
% f\left( \operatorname*{vrefl}\left( v\right) \right)
% ,\ \ \ \ \ \ \ \ \ \ \text{otherwise}%
% \end{array}
% \right.
% \]
% for all $v\in\widehat{\Delta\left( p\right) }$ and all $f\in\mathbb{K}%
% ^{\widehat{\operatorname*{NEtri}\left( p\right) }}$. This is well-defined.
% We have%
% \begin{equation}
% \left( \operatorname*{dble}f\right) \left( v\right) =f\left( v\right)
% \ \ \ \ \ \ \ \ \ \ \text{for every }v\in\operatorname*{NEtri}\left(
% p\right) . \label{lem.NEtri.vrefl.b.doublev}%
% \end{equation}
% Also,%
% \begin{equation}
% \left( \operatorname*{dble}f\right) \left( \operatorname*{vrefl}\left(
% v\right) \right) =f\left( v\right) \ \ \ \ \ \ \ \ \ \ \text{for every
% }v\in\operatorname*{NEtri}\left( p\right) .
% \label{lem.NEtri.vrefl.b.doublevrefl}%
% \end{equation}
%
%
% \textbf{(c)} Assume that $p$ is odd. Then,%
% \[
% R_{\Delta\left( p\right) }\circ\operatorname*{dble}=\operatorname*{dble}%
% \circ R_{\operatorname*{NEtri}\left( p\right) }.
% \]
%
% \end{lemma}
%
%
% \begin{proof}
% [\nopunct]We omit the proofs of Lemma \ref{lem.NEtri.vrefl}, Proposition
% \ref{prop.NEtri.odd.ord} and Proposition \ref{prop.SEtri.odd.ord}, since
% none of them involves any new ideas. The proof of the first is analogous to that of
% Lemma \ref{lem.Leftri.vrefl} (with $\Delta\left( p\right) $ and
% $\operatorname*{NEtri}\left( p\right) $ taking the roles of
% $\operatorname*{Rect}\left( p,p\right) $ and $\operatorname*{Tria}\left(
% p\right) $, respectively)\footnote{The only non-straightforward change that
% must be made to the proof is the following: In Case 2 of the proof of Lemma
% \ref{lem.Leftri.vrefl}, we used the (obvious) observation that $\left(
% i-1,i\right) $ and $\left( i,i-1\right) $ are elements of
% $\operatorname*{Rect}\left( p,p\right) $ for every $\left( i,i\right)
% \in\operatorname*{Rect}\left( p,p\right) $ satisfying $i\neq1$. The
% analogous observation that we need for proving Lemma \ref{lem.NEtri.vrefl} is
% still true in the case of odd $p$,
% but a bit less obvious. In fact, it is the observation that
% $\left( i-1,i\right) $ and $\left( i,i-1\right) $ are elements of
% $\Delta\left( p\right) $ for every $\left( i,i\right) \in\Delta\left(
% p\right) $. This uses the oddness of $p$.}. The proof of Proposition
% \ref{prop.NEtri.odd.ord} combines Lemma \ref{lem.NEtri.vrefl} with Theorem
% \ref{thm.DeltaNabla.halfway}. Proposition \ref{prop.SEtri.odd.ord} is derived from
% Proposition \ref{prop.NEtri.odd.ord} using Proposition \ref{prop.op.ord}.
% \end{proof}
Nathan Williams suggested that the following generalization of Conjecture
\ref{conj.SEtri.ord} might hold:
\begin{conjecture}
\label{conj.NEtriminus.ord}Let $p$ be an integer $>1$. Let $s\in\mathbb{N}$.
Let $\operatorname*{NEtri}\nolimits^{\prime}\left( p\right) $ be the
subposet $\left\{ \left( i,k\right) \in\operatorname*{NEtri}\left(
p\right) \ \mid\ k\geq s\right\} $ of $\operatorname*{NEtri}\left(
p\right) $. Then, $\operatorname*{ord}\left( R_{\operatorname*{NEtri}%
\nolimits^{\prime}\left( p\right) }\right) \mid p$.
\end{conjecture}
This conjecture has been verified using Sage for all $p\leq7$. Williams (based
on a philosophy from his thesis \cite{williams-cataland}) suspects there could
be a birational map between $\mathbb{K}^{\widehat{\operatorname*{NEtri}%
\nolimits^{\prime}\left( p\right) }}$ and $\mathbb{K}%
^{\widehat{\operatorname*{Rect}\left( s-1,p-s+1\right) }}$ which commutes with
the respective birational rowmotion operators for all $s>\dfrac{p}{2}$; this,
if shown, would obviously yield a proof of Conjecture
\ref{conj.NEtriminus.ord}. This is already an interesting question for
classical rowmotion; a bijection between the antichains (and thus between the
order ideals) of $\operatorname*{NEtri}\nolimits^{\prime}\left( p\right) $
and those of $\operatorname*{Rect}\left( s-1,p-s+1\right) $ was found by
Stembridge \cite[Theorem 5.4]{stembridge-trapezoid}, but does not commute with
classical rowmotion.
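Verifications of this kind require only a generic implementation of birational rowmotion over the rationals. The following is a minimal sketch in plain Python rather than Sage (exact arithmetic via `fractions.Fraction`; the encoding of a poset by its cover relations and the explicit top-to-bottom toggling order are our own choices, not notation from the text). Instead of $\operatorname*{NEtri}\nolimits^{\prime}\left( p\right) $, it is illustrated on $\operatorname*{Rect}\left( 2,2\right) $, where Theorem \ref{thm.rect.ord} gives $\operatorname*{ord}\left( R\right) =p+q=4$:

```python
from fractions import Fraction

def rowmotion(elements_top_down, covers_down, covers_up, f):
    """One step of birational rowmotion: apply the birational toggle at each
    element along a linear extension, from top to bottom.  The labelling f
    maps each poset element (plus the artificial bottom '0' and top '1',
    which are never toggled) to a Fraction."""
    g = dict(f)
    for v in elements_top_down:
        num = sum(g[u] for u in covers_down[v])              # elements covered by v
        den = sum(Fraction(1) / g[w] for w in covers_up[v])  # elements covering v
        g[v] = num / (g[v] * den)
    return g

# Rect(2, 2): pairs (i, k) with 1 <= i, k <= 2, ordered componentwise.
covers_down = {(1, 1): ['0'], (1, 2): [(1, 1)], (2, 1): [(1, 1)],
               (2, 2): [(1, 2), (2, 1)]}
covers_up = {(1, 1): [(1, 2), (2, 1)], (1, 2): [(2, 2)],
             (2, 1): [(2, 2)], (2, 2): ['1']}
top_down = [(2, 2), (1, 2), (2, 1), (1, 1)]  # a linear extension, reversed

f = {'0': Fraction(1), '1': Fraction(1),
     (1, 1): Fraction(2), (1, 2): Fraction(3),
     (2, 1): Fraction(5), (2, 2): Fraction(7)}
g = dict(f)
for _ in range(4):  # ord(R) = p + q = 4 on Rect(2, 2)
    g = rowmotion(top_down, covers_down, covers_up, g)
assert g == f
```

Running the same loop on an encoding of $\operatorname*{NEtri}\nolimits^{\prime}\left( p\right) $ with random rational labels is exactly the sort of check reported above for $p\leq7$.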
\section{\label{sect.negres}Negative results}
It is not true in general that $\operatorname*{ord}\left( R_{P}\right) $ is
finite for every $n$-graded poset $P$. When
$\operatorname*{char}\mathbb{K}=0$, the authors have proven the
following\footnote{See the ancillary files of \cite{grinberg-roby-arxiv}
for an outline of the (rather technical) proofs.}:
\begin{itemize}
\item If $P$ is the poset $\left\{ x_{1},x_{2},x_{3},x_{4},x_{5}\right\} $
with relations $x_{1}