% EJC papers *must* begin with the following two lines.
\documentclass[12pt]{article}
\usepackage{e-jc}
\usepackage{amsthm,amsmath,amssymb}
% we recommend the graphicx package for importing figures
\usepackage{graphicx}
\usepackage{tikz}
% use this command to create hyperlinks (optional and recommended)
\usepackage[colorlinks=true,citecolor=black,linkcolor=black,urlcolor=blue]{hyperref}
% use these commands for typesetting doi and arXiv references in the bibliography
\newcommand{\doi}[1]{\href{http://dx.doi.org/#1}{\texttt{doi:#1}}}
\newcommand{\arxiv}[1]{\href{http://arxiv.org/abs/#1}{\texttt{arXiv:#1}}}
% all overfull boxes must be fixed;
% i.e. there must be no text protruding into the margins
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{fact}[theorem]{Fact}
\newtheorem{observation}[theorem]{Observation}
\newtheorem{claim}[theorem]{Claim}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{open}[theorem]{Open Problem}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{question}[theorem]{Question}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remarks}
\newtheorem{note}[theorem]{Note}
\newcommand{\Cal}[1]{{\mathcal #1}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\achar}{\operatorname{char}}
\newcommand{\tr}{\operatorname{tr}}
\DeclareMathOperator{\perm}{perm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% if needed include a line break (\\) at an appropriate place in the title
\title{\bf Leapfrog Constructions: From Continuant \\ Polynomials to Permanents of Matrices}
% input author, affilliation, address and support information as follows;
% the address should include the country, and does not have to include
% the street address
\author{Alberto Facchini\thanks{Partially
supported by Universit\`a di Padova (Progetto ex 60\% ``Anelli e categorie di moduli'') and Fondazione Cassa di Risparmio di Padova e Rovigo (Progetto di Eccellenza ``Algebraic structures and their applications''.)}\\
\small Dipartimento di Matematica\\[-0.8ex]
\small Universit\`a di Padova\\[-0.8ex]
\small Padova, Italy\\
\small\tt facchini@math.unipd.it\\
\and
Andr\'e Leroy\\
\small Facult\'e Jean Perrin\\[-0.8ex]
\small Universit\'e d'Artois\\[-0.8ex]
\small Lens, France\\
\small\tt andre.leroy@univ-artois.fr
}
% \date{\dateline{submission date}{acceptance date}\\
% \small Mathematics Subject Classifications: comma separated list of
% MSC codes available from http://www.ams.org/mathscinet/freeTools.html}
\date{\dateline{Aug 26, 2014}{XX}\\
\small Mathematics Subject Classifications: Primary 05B45, 11B39; Secondary 15A15}
\begin{document}
\maketitle
% E-JC papers must include an abstract. The abstract should consist of a
% succinct statement of background followed by a listing of the
% principal new results that are to be found in the paper. The abstract
% should be informative, clear, and as complete as possible. Phrases
% like "we investigate..." or "we study..." should be kept to a minimum
% in favor of "we prove that..." or "we show that...". Do not
% include equation numbers, unexpanded citations (such as "[23]"), or
% any other references to things in the paper that are not defined in
% the abstract. The abstract will be distributed without the rest of the
% paper so it must be entirely self-contained.
\begin{abstract}
We study noncommutative continuant polynomials via a new leapfrog construction. This requires the
introduction of new indeterminates and leads to generalizations of Fibonacci polynomials, Lucas polynomials and other families of
polynomials. We relate these polynomials to various topics, such as quiver algebras and tilings.
Finally, we use permanents to give a broad perspective on the subject.
%
% keywords are optional
% \bigskip\noindent \textbf{Keywords:} graph reconstruction
% conjecture; Broglington manifold; Pipletti's classification
\end{abstract}
\section{Introduction}
The continuants (or continuant polynomials) $p_n$ were introduced in the noncommutative setting in
\cite{IHES} by P.M. Cohn, who used them to describe some groups of invertible matrices
and, under suitable hypotheses, to analyze comaximal relations in a ring. See \cite{Cohn} for more details.
Continuants also appear in connection with the weak algorithm \cite{Cohn}, and more recently they have been used to characterize Euclidean pairs and quasi-Euclidean rings \cite{AJLL}. Cohn also calls the construction of continuants the {\em leapfrog construction}.

In this paper, we prefer to present Cohn's
construction with a different notation, which we think is more convenient and which highlights the leapfrog
structure of the construction of the polynomials $p_n$ (Section~\ref{sec2}). We reinterpret the
continuants $p_n$ in terms of suitable quiver algebras with two vertices $A$ and $B$, in which the paths leap
alternately from $A$ to $B$. Cohn's polynomials $p_n$ are related to the elementary group $E_2(R)$
(equation~(\ref{big})). Here we find the leapfrog structure again, because the $n$-th powers of the
coset $E_2(R)P(0)$ of the element $P(0)$ of $GL_2(R)$ modulo $E_2(R)$ jump alternately between $E_2(R)$, when $n$ is even,
and $E_2(R)P(0)$, when $n$ is odd.

In a completely different setting, other polynomials $h_n$ were introduced in \cite{AdelAlb} to
compute the inverse of an isomorphism $f$ in a factor category $\Cal A/\Cal I_1\cap\dots\cap\Cal I_n$ from
the inverses of the images of the isomorphism $f$ in the factor categories $\Cal A/\Cal I_t$ of $\Cal
A$ modulo ideals $\Cal I_t$ of $\Cal A$. Since there is a surprising similarity between the structure of Cohn's
polynomials $p_n$ and that of the polynomials $h_n$, we have studied whether both families of
polynomials are specializations of a more general class of polynomials. In this investigation, we
have encountered other classes of noncommutative polynomials (the generalized Fibonacci polynomials
$f_n$, the generalized Lucas polynomials $\ell_n$ and other ``circular'' polynomials $c_n$), which have
evident combinatorial interpretations: they parametrize tilings and circular tilings of
strips of squares with dominoes (Section~\ref{domino}). Commutative generalized Fibonacci polynomials
appear naturally in combinatorics \cite{gfp}, related to the Fibonacci sequence, and in complex
analysis \cite{book}, related to generalized continued fractions.
Our noncommutative generalized Fibonacci polynomials are the noncommutative analogue of the
generalized Fibonacci polynomials studied in \cite{gfp}.

We have thus found a very natural ``hierarchy'' of polynomials (see the diagram in Remark~\ref{Rem}(4)). Our polynomials turn out to be noncommutative permanents of suitable matrices in noncommutative indeterminates (Theorem~\ref{g_n as a permanent} and Corollary~\ref{syre}).

Throughout, the rings and algebras we work with are associative and have an identity element.
\section{Two sets of indeterminates in Cohn's \\ continuants $p_n$}\label{sec2}
We begin this Section by recalling the definition of the continuants \cite{IHES}, which
are noncommutative polynomials with coefficients in the ring $\Z$ of integers. Let
$t_1,t_2,t_3,\dots$ be infinitely many noncommutative indeterminates over the ring $\Z$.
There is a strictly ascending chain $$\Z\langle t_1\rangle\subset \Z\langle t_1,t_2\rangle\subset \Z\langle t_1,t_2,t_3\rangle\subset \dots$$ of noncommutative integral domains, where $\Z\langle t_1,\dots,t_n\rangle$ denotes the ring of polynomials in the noncommutative indeterminates $t_1,\dots,t_n$ with coefficients in $\Z$, and the union of this chain is the ring $\Z\langle t_1,t_2,t_3,\dots\rangle$. The continuants $p_n=p_n(t_1,\dots,t_n)$ are defined by the recursion formulae:
\begin{equation}\begin{array}{l}{p_{-1}=0,\quad p_0=1,\ } {\rm and},\; {\rm for}\; n\ge 1,\;\\ {p_n(t_1,\dots,t_n)=p_{n-1}(t_1,\dots,t_{n-1})t_n+p_{n-2}(t_1,\dots,t_{n-2}).}\end{array}\label{rec}\end{equation}
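The recursion formulae~(\ref{rec}) are easy to experiment with on a computer. The following minimal Python sketch (ours, purely illustrative) represents a noncommutative polynomial as a dictionary sending words, that is, tuples of indeterminate names, to integer coefficients:

```python
def mul(p, q):
    """Product of noncommutative polynomials (words are tuples of letters)."""
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            r[w1 + w2] = r.get(w1 + w2, 0) + c1 * c2
    return {w: c for w, c in r.items() if c}

def add(p, q):
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, 0) + c
    return {w: c for w, c in r.items() if c}

def continuant(n):
    """p_n(t_1, ..., t_n) via p_n = p_{n-1} t_n + p_{n-2}."""
    prev, cur = {}, {(): 1}          # p_{-1} = 0, p_0 = 1
    for k in range(1, n + 1):
        prev, cur = cur, add(mul(cur, {("t%d" % k,): 1}), prev)
    return cur

# p_3 = t1 t2 t3 + t1 + t3
assert continuant(3) == {("t1", "t2", "t3"): 1, ("t1",): 1, ("t3",): 1}
```

The same dictionary-of-words representation is reused in the later sketches.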
For every $f\in \Z\langle t_1,t_2,t_3,\dots\rangle$, we will use P.M.~Cohn's convenient notation, introduced in \cite[p.~147]{Cohn}, denoting the $2\times 2$-matrix
$\left(\begin{array}{cc}f & 1\\ 1 & 0\end{array}\right)$ by $P(f)$. Then
\begin{equation}P(t_1)P(t_2)\dots P(t_n)=\left(\begin{array}{cc}p_n(t_1,\dots,t_n) & p_{n-1}(t_1,
\dots,t_{n-1})\\ p_{n-1}(t_2,\dots,t_{n}) & p_{n-2}(t_2,\dots,t_{n-1})\end{array}\right).
\label{2}\end{equation}
Since the inverse of the matrix $P(f)$ is given by $P(0)P(-f)P(0)$, we easily get that the inverse of the matrix in (\ref{2}) is given by
\begin{equation}(-1)^n\left(\begin{array}{cc}p_{n-2}(t_{n-1},\dots,t_2) & -p_{n-1}(t_{n-1},
\dots,t_{1})\\ -p_{n-1}(t_n,\dots,t_{2}) & p_{n}(t_{n},\dots,t_{1})\end{array}\right).
\label{inverse of equation 2}\end{equation}
This leads to the following well-known relations:
\begin{equation}
\label{first equation from the inverse}
p_n(t_1,\dots ,t_n)p_{n-1}(t_{n-1},\dots,t_2)-p_{n-1}(t_1,\dots,t_{n-1})p_{n-1}(t_n,\dots,t_2)=(-1)^n \end{equation}
\begin{equation}
\label{second equation from the inverse}
p_n(t_1,\dots,t_n)p_{n-1}(t_{n-1},\dots,t_1)=p_{n-1}(t_1,\dots,t_{n-1})p_n(t_n,\dots,t_1) \end{equation}
This last relation is due to Wedderburn \cite{W}.
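The identity~(\ref{2}) and Wedderburn's relation can be checked mechanically for small $n$; a sketch in the same word-dictionary style, with $2\times 2$ matrices over these polynomials:

```python
def mul(p, q):
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            r[w1 + w2] = r.get(w1 + w2, 0) + c1 * c2
    return {w: c for w, c in r.items() if c}

def add(p, q):
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, 0) + c
    return {w: c for w, c in r.items() if c}

def mmul(A, B):
    """Product of 2x2 matrices with noncommutative-polynomial entries."""
    return [[add(mul(A[i][0], B[0][j]), mul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

def cont(seq):
    """Continuant p_k of the listed indeterminate names."""
    prev, cur = {}, {(): 1}
    for s in seq:
        prev, cur = cur, add(mul(cur, {(s,): 1}), prev)
    return cur

def P(name):
    """Cohn's matrix P(t) = (t, 1; 1, 0)."""
    return [[{(name,): 1}, {(): 1}], [{(): 1}, {}]]

ts = ["t1", "t2", "t3", "t4"]
M = P(ts[0])
for t in ts[1:]:
    M = mmul(M, P(t))

assert M[0][0] == cont(ts)          # p_4(t_1, ..., t_4)
assert M[0][1] == cont(ts[:3])      # p_3(t_1, t_2, t_3)
assert M[1][0] == cont(ts[1:])      # p_3(t_2, t_3, t_4)
assert M[1][1] == cont(ts[1:3])     # p_2(t_2, t_3)

# Wedderburn's relation for n = 4
assert mul(cont(ts), cont(["t3", "t2", "t1"])) == \
       mul(cont(ts[:3]), cont(["t4", "t3", "t2", "t1"]))
```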
Using the associativity of the product of matrices in (\ref{2}), we also get, for any
$1\le k \le n$, the formula:
\begin{equation}
\label{equation for continuants from associativity}
\begin{array}{r} p_n(t_1,\dots,t_n)=p_k(t_1,\dots,t_k)p_{n-k}(t_{n-k+1},\dots, t_n) +\qquad\\
+ p_{k-1}(t_1,\dots ,t_{k-1})p_{n-k-1}(t_{n-k},\dots,t_n)
\end{array}\end{equation}
For $k=1$, this gives back the equation given in
\cite[(14), page 148]{Cohn}: $$ p_{n}(t_1,\dots,t_n)=t_1p_{n-1}(t_2,\dots,t_n)+p_{n-2}(t_3,\dots,t_n).$$
The continuant polynomial $p_n(t_1,\dots,t_n)$ can be presented via a leapfrog construction in the following sense. The first term of $p_n$ is $t_1t_2\cdots t_n$. The next terms are obtained by erasing two
consecutive indeterminates (the frog leaps over them) from $t_1t_2\cdots t_n$ to get the sum:
$t_3t_4\cdots t_n + t_1t_4t_5\cdots t_n + t_1t_2t_5\cdots t_n+ \dots$.
As far as the following terms are concerned, we erase
$2$ pairs of consecutive indeterminates ($2$ leaps) and get the terms
$$\sum t_1\cdots \widehat{t_{i_1}}\widehat{t_{i_1 + 1}}\cdots \widehat{t_{i_2}}\widehat{t_{i_2 + 1}}\cdots t_n,$$
where the sum is over all pairs of indices with $1\le i_1$, $i_1+1<i_2$ and $i_2+1\le n$.
We then continue adding terms corresponding to $3$ leaps, $4$ leaps, and so on.
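This enumeration of leap patterns can be compared with the recursion~(\ref{rec}) by brute force for small $n$; a quick sketch (ours, for illustration):

```python
from itertools import combinations

def mul(p, q):
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            r[w1 + w2] = r.get(w1 + w2, 0) + c1 * c2
    return {w: c for w, c in r.items() if c}

def add(p, q):
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, 0) + c
    return {w: c for w, c in r.items() if c}

def by_recursion(n):
    prev, cur = {}, {(): 1}          # p_{-1} = 0, p_0 = 1
    for k in range(1, n + 1):
        prev, cur = cur, add(mul(cur, {("t%d" % k,): 1}), prev)
    return cur

def by_leapfrog(n):
    """Sum over all ways to erase disjoint pairs (t_i, t_{i+1}) from t_1...t_n."""
    poly = {}
    for j in range(n // 2 + 1):
        for starts in combinations(range(1, n), j):      # i with i + 1 <= n
            if any(b - a < 2 for a, b in zip(starts, starts[1:])):
                continue                                  # pairs must be disjoint
            erased = {k for i in starts for k in (i, i + 1)}
            word = tuple("t%d" % k for k in range(1, n + 1) if k not in erased)
            poly[word] = poly.get(word, 0) + 1
    return poly

for n in range(8):
    assert by_leapfrog(n) == by_recursion(n)
```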
Finally, we can write
\begin{equation}
p_n(t_1,\dots,t_n)= \sum_{i_1,i_2,\dots,i_j} t_1\cdots \widehat{t_{i_1}}\widehat{t_{i_1 + 1}} \cdots \widehat{t_{i_2}}\widehat{t_{i_2 + 1}}\cdots \widehat{t_{i_j}}\widehat{t_{i_j+1}}\cdots t_n,
\end{equation}
where $0\le j \le \lfloor{n/2}\rfloor$ (the summand for $j=0$ being $t_1t_2\cdots t_n$) and the indices satisfy $1\le i_1$, $i_k+1<i_{k+1}$ for every $1\le k<j$, and $i_j+1\le n$.

Consider the directed graph (quiver) $\Gamma_n$ with two vertices $A$ and $B$:
\begin{center}
\begin{tikzpicture}
\node (r) at (0,0) {$A$};
\node (s) at (4,0) {$B$};
\draw [->] (r) to [out=270, in=270] node[auto, swap] {\scriptsize $x_1$} (s);
\draw [->] (r) to [out=335, in=205] node[auto] {\scriptsize $x_2$} (s);
\draw [->] (r) to [out=65, in=115] node[auto, swap] {\scriptsize $x_n$} (s);
\draw [->] (s) to [out=240, in=300] node[auto, swap] {\scriptsize $y_1$} (r);
\draw [->] (s) to [out=170, in=10] node[auto, swap]{\scriptsize $y_2$} (r);
\draw [->] (s) to [out=90, in=90] node[auto, swap]{\scriptsize $y_n$} (r);
\end{tikzpicture}
\end{center}
For each $i=1,2,\dots, n$, the directed graph $\Gamma_n$ has one arrow $x_i$ from $A$ to $B$, and one arrow $y_i$ from $B$ to $A$. Thus $\Gamma_n$ has $2n$ arrows.
Let $k$ be a field. Consider the quiver algebra $k\Gamma_n$ and the ideal $I$ of $k\Gamma_n$ generated by all paths $x_iy_j\colon A\stackrel{x_i}{\longrightarrow} B\stackrel{y_j}{\longrightarrow}A$ with $i>j$ and all paths $y_ix_j\colon B\stackrel{y_i}{\longrightarrow} A\stackrel{x_j}{\longrightarrow}B$ with $i\ge j$. The quotient algebra $k\Gamma_n/I$ is a finite dimensional $k$-algebra, because the longest possible path not in $I$ is the path $x_1y_1x_2y_2\dots x_ny_n$ of length $2n$. In particular, $R:=k\Gamma_n/I$ is an artinian ring, so that the Jacobson radical $J(R)$ is a nilpotent ideal that contains all nilpotent elements of $R$. Moreover, $R/J(R)\cong k\times k$. The algebra $R=R_0\oplus R_1$ is 2-graded, where $R_0$ corresponds to the paths of even length and $R_1$ to the paths of odd length. In particular, the images of the polynomials $G_n$ in $R$ are in $R_0$ and the images of the polynomials $H_n$ are in $R_1$. Notice that $G_n$ is a linear combination of paths from $A$
to $A$ (i.~e., cycles), and $H_n$ is a linear combination of paths from $A$ to $B$. The elements $x_iy_j$ with $i\le j$ are nilpotent of index $2$, hence they belong to $J(R)$, so $\sum_{1\le i\le j\le n}x_iy_j$ is nilpotent, and $1-\sum_{1\le i\le j\le n}x_iy_j$ is invertible in $R$. Here, and in the next Theorem, the same symbol denotes both an element of $k\Gamma_n$ and its image in $R$.
\begin{theorem} In the ring $R=k\Gamma_n/I$, we have that $$H_n=\left(1-\sum_{1\le i\le j\le n}x_iy_j\right)^{-1}\left(\sum_{i=1}^{n}x_i\right)\quad\mbox{and}\quad G_n=\left(1-\sum_{1\le i\le j\le n}x_iy_j\right)^{-1}$$ for every $n\ge 0$.
\end{theorem}
\begin{proof} We claim that $H_n=G_n\left(\sum_{i=1}^{n}x_i\right)$ and
$G_n\left(1-\sum_{1\le i\le j\le n}x_iy_j\right)=1$ in $R$ for every $n\ge1$.
The proof of the claim is by induction on $n$. For $n=1$, we have $G_1x_1= (x_1y_1+1)x_1=x_1=H_1$ and $G_1(1-x_1y_1)=(1+x_1y_1)(1-x_1y_1)=1$, because $y_1x_1\in I$.
Now assume that the claim is true for $n$ in $R:=k\Gamma_n/I$. We will show that the claim is also true for $n+1$ in $R':=k\Gamma_{n+1}/I'$, where $I'$ is the corresponding ideal of $k\Gamma_{n+1}$.
Using the relations (\ref{relations between qs and Ps}), we have $$\begin{array}{l} H_{n+1}-G_{n+1}\left(\sum_{i=1}^{n+1}x_i\right)= \\ \qquad =(G_nx_{n+1} + H_n)-(H_{n+1}y_{n+1}+G_n)\left(\sum_{i=1}^{n+1}x_i\right)= \\ \qquad =G_nx_{n+1} + H_n-G_n\left(\sum_{i=1}^{n+1}x_i\right)=H_n-G_n\left(\sum_{i=1}^{n}x_i\right)=0.\end{array}$$ This proves the first formula in the claim for $n+1$, i.~e., that $$H_{n+1}=G_{n+1}\left(\sum_{i=1}^{n+1}x_i\right)$$ in $R'$. But $(G_{n+1}-G_n)\left(\sum_{i=1}^{n+1}x_i\right)=H_{n+1}y_{n+1}\left(\sum_{i=1}^{n+1}x_i\right)=0$, so that we also have $H_{n+1}=G_n\left(\sum_{i=1}^{n+1}x_i\right)$ in $R'$.
Finally, in $R':=k\Gamma_{n+1}/I'$, we have that $$\begin{array}{l}G_{n+1}\left(1-\sum_{1\le i\le j\le n+1}x_iy_j\right)= \\ \qquad\qquad=(H_{n+1}y_{n+1}+G_n)\left(1-\sum_{1\le i\le j\le n}x_iy_j-\sum_{i=1}^{n+1}x_iy_{n+1}\right)= \\ \qquad\qquad=H_{n+1}y_{n+1}+G_n\left(1-\sum_{1\le i\le j\le n}x_iy_j\right)-G_n\left(\sum_{i=1}^{n+1}x_iy_{n+1}\right)= \\ \qquad\qquad=1+\left(H_{n+1}-G_n\left(\sum_{i=1}^{n+1}x_i\right)\right)y_{n+1}=1.\end{array}$$ This concludes the proof of the claim. The statement of the Theorem for $n\ge 1$ follows immediately from the claim. The case $n=0$ is trivial.
\end{proof}
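The two formulae can also be verified by direct computation in $R$ for small $n$. The sketch below (ours, an illustration) uses the recursions $H_k=G_{k-1}x_k+H_{k-1}$ and $G_k=H_ky_k+G_{k-1}$ that appear in the proof, with $G_0=1$ and $H_0=0$, and implements multiplication in $k\Gamma_n/I$ by killing every word that contains a non-composable pair of arrows or a relation from $I$:

```python
def ok(w):
    """A word survives in k Gamma_n / I iff consecutive arrows compose and it
    contains no factor x_i y_j with i > j and no factor y_i x_j with i >= j."""
    for (a, i), (b, j) in zip(w, w[1:]):
        if a == b:
            return False        # x then x, or y then y: arrows do not compose
        if a == "x" and i > j:
            return False        # x_i y_j with i > j lies in I
        if a == "y" and i >= j:
            return False        # y_i x_j with i >= j lies in I
    return True

def mul(p, q):
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            w = w1 + w2
            if ok(w):
                r[w] = r.get(w, 0) + c1 * c2
    return {w: c for w, c in r.items() if c}

def add(p, q, sign=1):
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, 0) + sign * c
    return {w: c for w, c in r.items() if c}

n = 3
one = {(): 1}
G, H = one, {}                                    # G_0 = 1, H_0 = 0
for k in range(1, n + 1):
    H = add(mul(G, {(("x", k),): 1}), H)          # H_k = G_{k-1} x_k + H_{k-1}
    G = add(mul(H, {(("y", k),): 1}), G)          # G_k = H_k y_k + G_{k-1}

N = {(("x", i), ("y", j)): 1 for i in range(1, n + 1) for j in range(i, n + 1)}
S = {(("x", i),): 1 for i in range(1, n + 1)}

assert mul(G, add(one, N, sign=-1)) == one        # G_n (1 - sum x_i y_j) = 1
assert mul(G, S) == H                             # H_n = G_n (sum x_i)
```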
This theorem shows once more that the structure of the $H_n$'s is different from the structure of the $G_n$'s, so that the choice of using different notations for the two families of polynomials is appropriate.

Notice that, in our graph $\Gamma_n$ with two vertices, the paths leap alternately between $A$ and $B$.
\section{Continuants and groups of matrices}
For any fixed ring $R$, let $GL_2(R)$ denote the group of all invertible $2\times 2$-matrices with entries in $R$, that is, $GL_2(R)=U(M_2(R))$, the group of units of the ring $M_2(R)$ of all $2\times 2$-matrices with entries in $R$. Let $E_2(R)$ be the {\em elementary group}, that is, the subgroup of $GL_2(R)$ generated by all triangular matrices $\left(\begin{array}{cc}1 & x\\ 0 & 1\end{array}\right)$ and all triangular matrices $\left(\begin{array}{cc}1 & 0\\ y & 1\end{array}\right),$ where $x$ and $y$ range in $R$. Notice that the triangular matrices $\left(\begin{array}{cc}1 & x\\ 0 & 1\end{array}\right)$ and $\left(\begin{array}{cc}1 & 0\\ y & 1\end{array}\right)$ in $GL_2(R)$ generate $E_2(R)$ as a semigroup, because the inverse of the triangular matrix $\left(\begin{array}{cc}1 & x\\ 0 & 1\end{array}\right)$ is $\left(\begin{array}{cc}1 & -x\\ 0 & 1\end{array}\right)$, and similarly for $\left(\begin{array}{cc}1 & 0\\ y & 1\end{array}\right)$. Since the matrices of the type $\left(\begin{array}{cc}1 & x\\ 0 & 1\end{array}\right)$ form an abelian group isomorphic to the additive group of $R$, and similarly for the matrices $\left(\begin{array}{cc}1 & 0\\ y & 1\end{array}\right)$, an arbitrary element of $E_2(R)$ is a product of finitely many elements of the form
$$\left(\begin{array}{cc}1 & x\\ 0 & 1\end{array}\right)\left(\begin{array}{cc}1 & 0\\ y & 1\end{array}\right)=\left(\begin{array}{cc}xy+1 & x\\ y & 1\end{array}\right).$$ These are exactly the factors on the left in the equation~(\ref{big}). Thus an arbitrary element of $E_2(R)$ is a matrix of the form $$\left(\begin{array}{cc}G_n(x_1,y_1,\dots,x_n,y_n) & H_n(x_1,y_1\dots,y_{n-1},x_n)\\ H_n(y_1, x_2,\dots,x_n,y_n) & G_{n-1}(y_1, x_2,\dots,y_{n-1},x_n)\end{array}\right),$$ with $x_1,y_1,\dots,x_n,y_n\in R$.
Let $G$ be the subsemigroup of the multiplicative semigroup $M_2(R)$ generated by all matrices of type $$P(x):=\left(\begin{array}{cc}x & 1\\ 1 & 0\end{array}\right),$$ where $x$ ranges over $R$. As Cohn proved in \cite{IHES}, the semigroup $G$, that is, the set of all products $P(x_1)\cdots P(x_n)$ with $n\ge 1$ and $x_1,\dots,x_n\in R$, is a group, because $P(0)^2$ is the identity of $GL_2(R)$ and $P(x)^{-1}=P(0)P(-x)P(0)$.
\begin{theorem}\label{4.1} For any ring $R$, exactly one of the following two conditions holds: \begin{itemize}
\item[{\rm (a)}] $G=E_2(R)$;
\item[{\rm (b)}] the group $G$ is the semidirect product $E_2(R)\rtimes C$ of the group $E_2(R)$ and the cyclic group $C$ of order $2$ generated by the involution $P(0)=\left(\begin{array}{cc}0 & 1\\ 1 & 0\end{array}\right).$ \end{itemize} In case {\rm (b)}, the action of $P(0)$ on $E_2(R)$ is given by $$\left(\begin{array}{cc}1 & x\\ 0 & 1\end{array}\right)\mapsto P(0)\left(\begin{array}{cc}1 & x\\ 0 & 1\end{array}\right)P(0)=\left(\begin{array}{cc}1 & 0\\ x & 1\end{array}\right)$$ and $$\left(\begin{array}{cc}1 & 0\\ y & 1\end{array}\right)\mapsto P(0)\left(\begin{array}{cc}1 & 0\\ y & 1\end{array}\right)P(0)=\left(\begin{array}{cc}1 & y\\ 0 & 1\end{array}\right).$$
\end{theorem}
\begin{proof} Suppose $G\ne E_2(R)$. Then $\left(\begin{array}{cc}1 & x\\ 0 &1\end{array}\right)=P(x)P(0)$ and $\left(\begin{array}{cc}1 & 0\\ y & 1\end{array}\right)=P(0)P(y)$, so that $E_2(R)$ is contained in $G$. It is easily verified that the action of $P(0)$ on $E_2(R)$, given by conjugation by the involution $P(0)$, is as in the last part of the statement of the Theorem. Since every generator $P(x)$ of $G$ can be written as $P(x)=\left(\begin{array}{cc}1 & x\\ 0 &1\end{array}\right)P(0)$, where $\left(\begin{array}{cc}1 & x\\ 0 &1\end{array}\right)\in E_2(R)$, it follows that $E_2(R)$ is a normal subgroup of $G$ and $G=E_2(R) C$. In order to conclude the proof that $G$ is the semidirect product of $E_2(R)$ and $C$, it remains to notice that $P(0)\notin E_2(R)$, because $G\ne E_2(R)$ and $G=E_2(R) C$.\end{proof}
\begin{proposition}\label{4.2} If $R$ is any ring of characteristic $2$, then $G=E_2(R)$.\end{proposition}
\begin{proof} If $R$ has characteristic $2$, then, for $x_1=0$ and $y_1=x_2=y_2=1$, we have that $$\begin{array}{l}{P(0)
=\left(\begin{array}{cc}1&0\\ 1 &1\end{array}\right)\left(\begin{array}{cc}0 & 1\\ 1 &1\end{array}\right)=} \\ {\qquad\qquad \left(\begin{array}{cc}x_1y_1+1 & x_1\\ y_1 & 1\end{array}\right)\left(\begin{array}{cc}x_2y_2+1 & x_2\\ y_2 & 1\end{array}\right)\in E_2(R).}\end{array}$$ Thus $G=E_2(R)$ by Theorem~\ref{4.1}.
\end{proof}
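The displayed factorization is a two-line check over $\Z/2\Z$ (our illustration):

```python
def mmul2(A, B):
    """Product of 2x2 matrices with entries in Z/2Z."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % 2 for j in range(2)]
            for i in range(2)]

# x1 = 0, y1 = 1 gives (x1 y1 + 1, x1; y1, 1) = (1, 0; 1, 1)
# x2 = y2 = 1 gives (x2 y2 + 1, x2; y2, 1) = (0, 1; 1, 1) in characteristic 2
assert mmul2([[1, 0], [1, 1]], [[0, 1], [1, 1]]) == [[0, 1], [1, 0]]  # = P(0)
```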
In the next proposition, $\achar(R)$ denotes the characteristic of the ring $R$ and $\det(A)$ denotes the determinant of the matrix $A$.
\begin{proposition}\label{comm} Let $R$ be a commutative ring. Then:\begin{itemize}
\item[{\rm (a)}] $E_2(R)=\{\,A\in G\mid\det(A)=1\,\}$.
\item[{\rm (b)}] $G=E_2(R)$ if and only if $\achar(R)=2$.\end{itemize}\end{proposition}
\begin{proof} As $R$ is a commutative ring, the determinant $\det\colon G\to U(R)$ is a group morphism, whose kernel contains $E_2(R)$. Thus we have three groups $G\supseteq \ker\det\supseteq E_2(R)$. From Theorem~\ref{4.1}, we know that $[G:E_2(R)]\le 2$. Thus we have three possible cases:
\smallskip
{\em First case: $G= \ker\det\supset E_2(R)$.} In this case, $P(0)\in G= \ker\det$, so that $\det(P(0))=1$, that is, $-1=1$, hence $\achar(R)=2$. By Proposition~\ref{4.2}, it follows that $G=E_2(R)$, which contradicts $E_2(R)\subset G$. Thus this first case can never take place.
\smallskip
{\em Second case: $G\supset \ker\det= E_2(R)$.} In this case, $\achar(R)\ne2$ by Proposition~\ref{4.2}. Thus (a) and (b) both hold in this second case.
\smallskip
{\em Third case: $G=\ker\det= E_2(R)$.} Then (a) is trivially true. Moreover, as in the first case, we have that $P(0)\in \ker\det$, so that $\det(P(0))=1$, hence $\achar(R)=2$. Thus (b) also holds.\end{proof}
Let us go back to the case of a ring $R$ that is not necessarily commutative. When $G=E_2(R)\rtimes C$, we find the leapfrog structure again, because $\left(\begin{array}{cc}t_i & 1\\ 1 & 0\end{array}\right)=\left(\begin{array}{cc}1&t_i \\ 0 & 1\end{array}\right)P(0)$ is an element of the coset $E_2(R)P(0)$ of $G$, so that the products on the left of~(\ref{2}) jump alternately from elements of $E_2(R)$, when $n$ is even, to elements of the coset $E_2(R)P(0)$, when $n$ is odd.
\section{A second sequence of noncommutative \\ polynomials $h_n$}
In this Section, we study another sequence of noncommutative polynomials similar to the sequence of continuants $p_n$ considered in the previous two sections. In Section~\ref{sec2}, we have preferred to present continuants as polynomials in two infinite sets of indeterminates $x_1,x_2,x_3,\dots$ and $y_1,y_2,y_3,\dots$. The polynomials $h_n$ we will construct now are noncommutative polynomials in the infinite set of indeterminates $x_1,x_2,x_3,\dots$ plus one more indeterminate $y$. The polynomials $h_n$ have been introduced in \cite{AdelAlb}, in the study of semilocal categories, in order to present a sort of Chinese Remainder Theorem that holds in preadditive categories. Let us recall the definition of those polynomials $h_n$, adapting the notation to the context of this paper.
In the paper \cite{AdelAlb}, the authors essentially introduce noncommutative polynomials $$h_n=h_n(x_1,x_2,\dots, x_n,y)$$ with coefficients in $\Z$, $n\ge 1$, defined as follows. Let $x_1,x_2,x_3,\dots,y$ be infinitely many noncommutative indeterminates over the ring $\Z$. Let $\Z\langle x_1,x_2,x_3,\dots, x_n,y\rangle$ denote the ring of polynomials in the $n+1$ noncommutative indeterminates $x_1$, $x_2$, $x_3$, \dots, $x_n$, $y$ with coefficients in $\Z$. For each $n\ge 1$, there is a unique polynomial $h_n=h_n( x_1,x_2,\dots,x_n,y)$ in $\Z\langle x_1,x_2,\dots,x_n,y\rangle$ such that \begin{equation}1+h_ny=(1+x_1y)(1+x_2y)\dots(1+x_ny).\label{333}\end{equation} In fact, such a polynomial $h_n$ exists because the product on the right in the equation~(\ref{333}) is of the form ``$1+$ monomials that terminate with $y$''. Moreover, $h_n$ is the unique polynomial that satisfies the equation~(\ref{333}), because $\Z\langle x_1,x_2,\dots,x_n,y\rangle$ is an integral domain.
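The expansion in~(\ref{333}) is easy to reproduce by machine; a sketch (ours) in the same dictionary-of-words style as before, which also checks the recursion $h_n=x_n+h_{n-1}(1+yx_n)$ of Proposition~\ref{5.1}(b) below:

```python
def mul(p, q):
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            r[w1 + w2] = r.get(w1 + w2, 0) + c1 * c2
    return {w: c for w, c in r.items() if c}

def add(p, q):
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, 0) + c
    return {w: c for w, c in r.items() if c}

def h(n):
    """h_n from 1 + h_n y = (1 + x1 y)(1 + x2 y)...(1 + xn y)."""
    prod = {(): 1}
    for i in range(1, n + 1):
        prod = add(prod, mul(prod, {("x%d" % i, "y"): 1}))
    # every monomial except 1 ends in y; h_n is obtained by dropping that y
    return {w[:-1]: c for w, c in prod.items() if w}

assert h(2) == {("x1",): 1, ("x2",): 1, ("x1", "y", "x2"): 1}
# recursion of Proposition 5.1(b): h_3 = x_3 + h_2 (1 + y x_3)
assert h(3) == add(add({("x3",): 1}, h(2)), mul(h(2), {("y", "x3"): 1}))
```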
\begin{proposition}\label{5.1} The polynomials $h_n$, $n\ge 1$, have the following properties:
\begin{itemize}
\item[{\rm (a)}] $1+yh_n=(1+yx_1)(1+yx_2)\dots(1+yx_n)$ for every $n\ge 1$.
\item[{\rm (b)}] $h_1=x_1$, and $h_{n}=x_{n}+h_{n-1}(1+yx_{n})$ for every $n\ge 2$.
\item[{\rm (c)}] $$h_n=\sum_{1\le i\le n}x_i+\sum_{1\le i_1<i_2\le n}x_{i_1}yx_{i_2}+\sum_{1\le i_1<i_2<i_3\le n}x_{i_1}yx_{i_2}yx_{i_3}+\dots+x_1yx_2y\cdots yx_n.$$
\end{itemize}
\end{proposition}

Consider the directed graph (quiver) $\Delta_n$ with two vertices $A$ and $B$:
\begin{center}
\begin{tikzpicture}
\node (r) at (0,0) {$A$};
\node (s) at (4,0) {$B$};
\draw [->] (r) to [out=270, in=270] node[auto, swap] {\scriptsize $x_1$} (s);
\draw [->] (r) to [out=300, in=240] node[auto] {\scriptsize $x_2$} (s);
\draw [->] (r) to [out=335, in=205] node[auto] {\scriptsize $x_3$} (s);
\draw [->] (r) to [out=65, in=115] node[auto, swap] {\scriptsize $x_n$} (s);
\draw [->] (s) to [out=90, in=90] node[auto, swap]{\scriptsize $y$} (r);
\end{tikzpicture}
\end{center}
Thus the directed graph $\Delta_n$ has $n$ arrows $x_1,\dots,x_n$ from $A$ to $B$ and one arrow $y$ from $B$ to $A$, so that $\Delta_n$ has $n+1$ arrows. Let $k$ be a field and consider the quiver algebra $k\Delta_n$ and the ideal $I$ of $k\Delta_n$ generated by all paths $x_iyx_j$ with $i\ge j$. The quotient algebra $k\Delta_n/I$ is a finite dimensional $k$-algebra, because the longest possible path not in $I$ is the path $yx_1yx_2y\dots x_ny$ of length $2n+1$. In particular, $R:=k\Delta_n/I$ is an artinian ring with $R/J(R)\cong k\times k$. The algebra $R=R_0\oplus R_1$ is 2-graded, where $R_0$ corresponds to the paths of even length and $R_1$ to the paths of odd length. In particular, the images of the polynomials $h_n$ in $R$ are all in $R_1$. The elements $x_iy$ are nilpotent of index $2$, hence they belong to $J(R)$, so $\sum_{i=1}^nx_iy$ is nilpotent, and $1-\sum_{i=1}^nx_iy$ is invertible.
\begin{theorem} In the ring $R=k\Delta_n/I$, we have, for every $n\ge 1$:
\begin{itemize}
\item[{\rm (a)}] $h_n=\left(1-\sum_{i=1}^nx_iy\right)^{-1}\left(\sum_{i=1}^{n}x_i\right)$;
\item[{\rm (b)}] $q_n=\left(1-\sum_{i=1}^nx_iy\right)^{-1}$;
\item[{\rm (c)}] $h_n=\left(\sum_{i=1}^{n}x_i\right)\left(1-\sum_{i=1}^nyx_i\right)^{-1}$;
\item[{\rm (d)}] $q'_n=\left(1-\sum_{i=1}^nyx_i\right)^{-1}$;
\item[{\rm (e)}] $h_n\left(\sum_{i=1}^nyx_i\right)=\left(\sum_{i=1}^nx_iy \right)h_n$.
\end{itemize}\end{theorem}
\begin{proof} (a) It suffices to show that $$\left(1-\sum_{i=1}^nx_iy\right)h_n=\left(\sum_{i=1}^{n}x_i\right).$$ We do it by induction on $n\ge1$. The case $n=1$ is easy, because $$(1-x_1y)h_1=(1-x_1y)x_1=x_1.$$ Suppose our formula true for $n$. Then $$\begin{array}{l}\left(1-\sum_{i=1}^{n+1}x_iy\right)h_{n+1}= \\ \qquad\qquad=\left(1-\sum_{i=1}^{n}x_iy-x_{n+1}y\right)(x_{n+1}+h_n(1+yx_{n+1}))= \\ \qquad\qquad=\left(1-\sum_{i=1}^{n}x_iy\right)x_{n+1}+\left(1-\sum_{i=1}^{n}x_iy\right)h_n(1+yx_{n+1})- \\ \qquad\qquad\qquad\qquad -x_{n+1}yh_n(1+yx_{n+1})= \\ \qquad\qquad=x_{n+1}-\sum_{i=1}^{n}x_iyx_{n+1}+\left(\sum_{i=1}^{n}x_i\right)(1+yx_{n+1})=\sum_{i=1}^{n+1}x_i,\end{array}$$ which concludes the proof of (a).
(b) We will prove that $q_n\left(1-\sum_{i=1}^nx_iy\right)=1$ in $R$ by induction on $n\ge 1$. The case $n=1$ is trivial. Suppose the formula true for $n-1$. Then $$\begin{array}{l}q_n\left(1-\sum_{i=1}^nx_iy\right)=q_{n-1}(1+x_ny)\left(1-\sum_{i=1}^nx_iy\right)= \\ \qquad\qquad = q_{n-1}\left(1+x_ny-\sum_{i=1}^nx_iy\right)=
q_{n-1}\left(1-\sum_{i=1}^{n-1}x_iy\right)=1\end{array}$$ by the inductive hypothesis.
(e) is easy and left to the reader.
(d) As for (b), it suffices to show by induction on $n$ that $$q'_n\left(1-\sum_{i=1}^nyx_i\right)=1.$$
(c) From (e) and (a), we have that $$h_n\left(1-\sum_{i=1}^nyx_i\right)=\left(1-\sum_{i=1}^nx_iy\right)h_n=\sum_{i=1}^{n}x_i.$$ Thus (c) also holds.\end{proof}
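As with $\Gamma_n$, statement (a) can be verified by brute force in $k\Delta_n/I$ for small $n$; a sketch (ours, for illustration), where a word dies if consecutive arrows do not compose or if it contains a factor $x_iyx_j$ with $i\ge j$:

```python
def ok(w):
    """A word survives in k Delta_n / I iff consecutive arrows compose
    and it contains no factor x_i y x_j with i >= j."""
    for a, b in zip(w, w[1:]):
        if (a == "y") == (b == "y"):
            return False              # xx or yy: arrows do not compose
    for a, b, c in zip(w, w[1:], w[2:]):
        if b == "y" and int(a[1:]) >= int(c[1:]):
            return False              # x_i y x_j with i >= j lies in I
    return True

def mul(p, q):
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            w = w1 + w2
            if ok(w):
                r[w] = r.get(w, 0) + c1 * c2
    return {w: c for w, c in r.items() if c}

def add(p, q, sign=1):
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, 0) + sign * c
    return {w: c for w, c in r.items() if c}

n = 3
one = {(): 1}
h = {}                                                   # h_0 = 0
for k in range(1, n + 1):
    xk = {("x%d" % k,): 1}
    h = add(add(xk, h), mul(h, {("y", "x%d" % k): 1}))   # h_k = x_k + h_{k-1}(1 + y x_k)

N = {("x%d" % k, "y"): 1 for k in range(1, n + 1)}       # sum x_i y
S = {("x%d" % k,): 1 for k in range(1, n + 1)}           # sum x_i

assert mul(add(one, N, sign=-1), h) == S                 # (1 - sum x_i y) h_n = sum x_i
```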
Again, in our graph $\Delta_n$ with two vertices $A$ and $B$, the paths leap alternately between $A$ and $B$.
\begin{proposition} The matrices $\left(\begin{array}{cc}x_i & 1\\ 1 & -y\end{array}\right)$ $(i=1,2,\dots,n)$ and $
\left(\begin{array}{cc}y & 1\\ 1 & 0\end{array}\right)$ are invertible in the ring $R=k\Delta_n/I$ $($that is, they are invertible elements of the ring $M_2(R)).$ Their inverses are $$\left(\begin{array}{cc}y-yx_iy & 1-yx_i\\ 1-x_iy & -x_i\end{array}\right)\quad{\mbox{and}}\quad P(0)P(-y)P(0)$$ respectively.\end{proposition} % (From this Proposition, the analogues of Cohn \cite[p.~148, (18)-Lemma 2.7.2]{Cohn} should follow.)
\section{The generalized Fibonacci polynomials $f_n$}\label{GCF}
We now generalize the continuants $p_n(t_1,t_2,\dots,t_n)$ of Section~\ref{sec2} from the
case of one sequence of indeterminates $t_n$ to the case of two sequences of indeterminates.
The new polynomials we obtain appear in the study of generalized continued fractions, hence
we call them {\em generalized Fibonacci polynomials}.
Our new polynomials $f_n$ are again polynomials with coefficients in $\Z$ and in the noncommutative indeterminates $x_1,x_2,x_3,\dots$ and $y_1,y_2,y_3,\dots$ They are defined by the recursion formulae: \begin{equation}\begin{array}{l}f_{-1}=0,\quad f_0=1,\\ f_n(x_1,\dots,x_n,y_1,\dots,y_n)=f_{n-1}(x_1,\dots,x_{n-1},y_1,\dots,y_{n-1})x_n+\\ \qquad\qquad\qquad\qquad\qquad\qquad\quad +f_{n-2}(x_1,\dots,x_{n-2}, y_1,\dots,y_{n-2})y_n.\end{array}\label{rec''}\end{equation}
Thus, when we specialize all the indeterminates $y_i$ to $1$, these polynomials turn out to be the continuants $p_n$ of Section~\ref{sec2} in one countable set of indeterminates, that is, $p_n(t_1,\dots,t_n)=f_n(t_1,\dots,t_n,1,1,\dots,1)$. Also,
$$f_n(x,\dots,x,1,1,\dots,1)=
F_{n+1}(x),$$ the commutative Fibonacci polynomials (indexed so that $F_1(x)=1$ and $F_2(x)=x$), which have received much attention
\cite[Introduction]{gfp}. Moreover, $f_n(x,\dots,x,y,\dots,y)$ turns out to be the noncommutative analogue of the generalized Fibonacci polynomial $\{n\}_{x,y}$ studied in \cite{gfp}.
The first few of these polynomials $f_n$ are $$\begin{array}{l}f_0=1, \qquad f_1=x_1, \qquad f_2=x_1x_2+y_2, \\ f_3=x_1x_2x_3+x_1y_3+y_2x_3, \\ f_4=x_1x_2x_3x_4+x_1x_2y_4+x_1y_3x_4+y_2x_3x_4+ y_2 y_4,
\\ f_5=x_1x_2x_3x_4x_5+x_1x_2x_3y_5+x_1x_2y_4x_5+x_1y_3x_4x_5+ \\ \qquad\qquad\qquad\qquad +x_1y_3y_5+y_2x_3x_4x_5+ y_2x_3y_5+y_2 y_4x_5,\dots\end{array}$$ The number of monomials in each $f_n$ is the $(n+1)$-th Fibonacci number $F_{n+1}$.
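The list above, and the Fibonacci count of monomials, can be reproduced mechanically; a short sketch in the same word-dictionary style as before:

```python
def mul(p, q):
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            r[w1 + w2] = r.get(w1 + w2, 0) + c1 * c2
    return {w: c for w, c in r.items() if c}

def add(p, q):
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, 0) + c
    return {w: c for w, c in r.items() if c}

def f(n):
    """f_n via f_n = f_{n-1} x_n + f_{n-2} y_n, with f_{-1} = 0, f_0 = 1."""
    prev, cur = {}, {(): 1}
    for k in range(1, n + 1):
        prev, cur = cur, add(mul(cur, {("x%d" % k,): 1}),
                             mul(prev, {("y%d" % k,): 1}))
    return cur

assert f(3) == {("x1", "x2", "x3"): 1, ("x1", "y3"): 1, ("y2", "x3"): 1}
# the number of monomials of f_n is the Fibonacci number F_{n+1}
fib = [1, 1]                              # F_1, F_2
while len(fib) < 12:
    fib.append(fib[-1] + fib[-2])
assert all(len(f(n)) == fib[n] for n in range(10))
```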
These polynomials can be built using a leapfrog construction similar to the one in Section~\ref{sec2} for continuant polynomials. For $f_n$, start by writing the product $x_1\cdots x_n$ and add all the monomials obtained by replacing sets of disjoint consecutive
pairs $x_ix_{i+1}$ by $y_{i+1}$.
Notice that the indeterminate $y_1$ does not appear in any polynomial $$f_n(x_1,\dots,x_n,y_1,\dots,y_n),$$ that is, every $f_n(x_1,\dots,x_n,y_1,\dots,y_n)$ has degree $0$ in
the indeterminate $y_1$. Let us prove by induction on $n$ that \begin{equation}f_n(x_1,\ \dots,\ x_n,\ y_1,\ x_1x_2,\ x_2x_3,\ x_3x_4,\ \dots,\ x_{n-1}x_n)=F_{n+1}x_1x_2\dots x_n.\end{equation} For $n=-1$, we have that $f_{-1}=0=F_0$. For $n=0$, we have that $f_{0}=1=F_1$. Assume the result true for the indices smaller than $n$. Then
$$\begin{array}{l} f_n(x_1,\dots,\ x_n,\ y_1,\ x_1x_2,\ x_2x_3,\ x_3x_4,\dots,\ x_{n-1}x_n)= \\ \quad =f_{n-1}(x_1,\dots,x_{n-1},\ y_1,\ x_1x_2,\ x_2x_3,\ x_3x_4,\dots,\ x_{n-2}x_{n-1})x_n+ \\ \qquad +f_{n-2}(x_1,\dots,x_{n-2},\ y_1,\ x_1x_2,\ x_2x_3,\ x_3x_4,\dots,\ x_{n-3}x_{n-2})x_{n-1}x_n = \\ \quad=F_nx_1x_2\dots x_{n-1}\cdot x_n+F_{n-1}x_1x_2\dots x_{n-2}\cdot x_{n-1}x_n= \\ \quad =F_{n+1}x_1x_2\dots x_n.\end{array}$$
Let us mention another interesting specialization: if one puts $y_i=x_i$ for every $i=1,\dots, n$, we obtain new polynomials in the variables $x_1,\dots,x_n$:
$$d_n(x_1,\dots,x_n)=f_n(x_1,\dots,x_n,x_1,\dots,x_n).$$ In particular, we have
$d_0=1,\, d_1(x_1)=x_1,\, d_2(x_1,x_2)=x_1x_2+x_2,\, d_3(x_1,x_2,x_3)=x_1x_2x_3 + x_1x_3+x_2x_3$. Specializing further, we get the
sequence of natural numbers $D_n=d_n(0,1,2,\dots,n-1)$ that gives the number of derangements in the symmetric group $S_n$. We will see that the polynomials $f_n$ admit a matrix presentation (cf.~(\ref{2'})). This easily leads to a presentation of the elements of the sequence $D_n$ (cf.~also \cite{R}).
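The specialization $D_n=d_n(0,1,\dots,n-1)$ turns the recursion for $f_n$ into the classical derangement recursion $D_n=(n-1)(D_{n-1}+D_{n-2})$; a two-line numerical check (ours):

```python
def D(n):
    """D_n = d_n(0, 1, ..., n-1): specialize x_k = y_k = k - 1 in f_n."""
    prev, cur = 0, 1                      # f_{-1} = 0, f_0 = 1
    for k in range(1, n + 1):
        prev, cur = cur, (cur + prev) * (k - 1)
    return cur

# numbers of derangements in S_1, ..., S_7
assert [D(n) for n in range(1, 8)] == [0, 1, 2, 9, 44, 265, 1854]
```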
The polynomials $f_n$ are also homogeneous polynomials of degree $n$ if we give all the indeterminates $x_i$ degree $1$ and all the indeterminates $y_i$ degree $2$. Thus, if we view $f_n(x_1,\dots,x_n,y_1,\dots,y_n)$ as an element of $k\langle x_1,\dots,x_n,y_1,$ $\dots,y_n\rangle$, where $k$ is any commutative ring and $k\langle x_1,\dots,x_n,y_1,\dots,y_n\rangle$ is the free $k$-algebra in the noncommutative indeterminates $x_1,\dots,x_n,y_1,\dots,y_n$, then $$f_n(\lambda x_1,\dots,\lambda x_n,\lambda^2y_1,\dots,\lambda^2y_n)=\lambda^nf_n(x_1,\dots,x_n,y_1,\dots,y_n)$$ for every $\lambda\in k$.
Finally, it is not difficult to see that $f_n(x,\dots,x,y,\dots,y)$ is the sum of all monic monomials of degree $n$ in the free algebra $\Z\langle x,y\rangle$ in the two indeterminates $x,y$ when the indeterminate $x$ is given degree $1$ and $y$ is given degree $2$. Two other formulae, which essentially appear in \cite{gfp}, are $$f_n(2,2,\dots,2,-1,-1,\dots,-1)=n+1$$ and, more generally, $$f_n(x+1,x+1,\dots,x+1,-x,-x,\dots,-x)=1+x+x^2+\dots+x^{n}.$$
\bigskip
These new polynomials $f_n$ are also entries of $2\times 2$-matrices, as in the identity~(\ref{2}). Now we have that
\begin{equation}
\begin{array}{l}{\left(\begin{array}{cc}x_1 & 1\\ y_1 & 0\end{array}\right)\cdots\left(\begin{array}{cc}x_n & 1\\ y_n & 0\end{array}\right)=} \\ {=\left(\begin{array}{cc}f_n(x_1,\dots,x_n,y_1,\dots,y_n) & f_{n-1}(x_1,\dots,x_{n-1},y_1,\dots,y_{n-1})\\ y_1f_{n-1}(x_2,\dots,x_{n},y_2,\dots,y_{n}) & y_1f_{n-2}(x_2,\dots,x_{n-1},y_2,\dots,y_{n-1})\end{array}\right).}\end{array}\label{2'}\end{equation} This is clear for $n=1$. The general case can be proved by induction, since on writing $f_i=f_i(x_1,\dots,x_i,y_1,\dots,y_i)$, $f'_i=f_i(x_2,\dots,x_{i+1},y_2,\dots,y_{i+1})$, we have that \begin{equation}\left(\begin{array}{cc}f_{n-1} & f_{n-2}\\ y_1f'_{n-2} & y_1f'_{n-3}\end{array}\right)\left(\begin{array}{cc}x_n & 1\\ y_n & 0\end{array}\right)=\left(\begin{array}{cc}f_n & f_{n-1}\\ y_1f'_{n-1} & y_1f'_{n-2}\end{array}\right).\label{2''}\end{equation}
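The matrix identity~(\ref{2'}) can also be tested numerically; the identity holds verbatim for commuting scalars, so random integer specializations give a meaningful check. The sketch below (helper names are ours) multiplies the $2\times2$ matrices and compares each entry of the product with the corresponding polynomial value.

```python
# Sketch: numerical check of the 2x2 matrix identity (2') at random integers.
import random

def f(xs, ys):
    vals = [1]
    for m in range(1, len(xs) + 1):
        prev2 = vals[m - 2] if m >= 2 else 0
        vals.append(vals[m - 1] * xs[m - 1] + prev2 * ys[m - 1])
    return vals[-1]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

random.seed(1)
n = 7
xs = [random.randint(-4, 4) for _ in range(n)]
ys = [random.randint(-4, 4) for _ in range(n)]

M = [[1, 0], [0, 1]]
for x, y in zip(xs, ys):
    M = mat_mul(M, [[x, 1], [y, 0]])

assert M[0][0] == f(xs, ys)                      # f_n(x_1..x_n, y_1..y_n)
assert M[0][1] == f(xs[:-1], ys[:-1])            # f_{n-1}(x_1..x_{n-1})
assert M[1][0] == ys[0] * f(xs[1:], ys[1:])      # y_1 f_{n-1}(x_2..x_n)
assert M[1][1] == ys[0] * f(xs[1:-1], ys[1:-1])  # y_1 f_{n-2}(x_2..x_{n-1})
print("identity (2') verified for n =", n)
```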
Regrouping the first $k$ matrices, for $1\le k \le n$, in~(\ref{2'}), we get that the matrix on the right hand side of this equation is equal to the product
\begin{equation}
\begin{array}{l}
\begin{pmatrix}
f_k(x_1,\dots, y_k) & f_{k-1}(x_1,\dots,y_{k-1}) \\
y_1f_{k-1}(x_2,\dots,y_k) & y_1f_{k-2}(x_2,\dots,y_{k-1})
\end{pmatrix}\cdot \\ \qquad\qquad \cdot \begin{pmatrix}
f_{n-k}(x_{k+1},\dots,y_n) & f_{n-k-1}(x_{k+1},\dots,y_{n-1}) \\
y_{k+1}f_{n-k-1}(x_{k+2},\dots,y_n) & y_{k+1}f_{n-k-2}(x_{k+2},\dots,y_{n-1})
\end{pmatrix} \end{array}
\end{equation}
Comparing the ($1,1$) entry of this product with the corresponding entry in (\ref{2'}), we
obtain:
\begin{equation}\begin{array}{l}
f_n(x_1,\dots,y_n)=f_k(x_1,\dots,y_k)f_{n-k}(x_{k+1},\dots,y_n) + \\ \qquad\qquad
\qquad +f_{k-1}(x_1,\dots,y_{k-1})y_{k+1}f_{n-k-1}(x_{k+2},\dots, y_n).
\end{array}\label{decomposition k,n-k for fn.}\end{equation}
Let us now mention a few consequences of the equation (\ref{decomposition k,n-k for fn.}). First we remark that for $k=1$, we have \begin{equation}\begin{array}{l}{f_n(x_1,\dots,x_n,y_1,\dots,y_n)=} \\ \qquad {=x_1f_{n-1}(x_2,\dots,x_n,y_2,\dots,y_n)+y_2f_{n-2}(x_3,\dots,x_n,y_3,\dots,y_n).}\end{array}\label{xgk}\end{equation}
\medskip
The formula obtained in (\ref{decomposition k,n-k for fn.}) can also be considered as a generalization of an analogous classical relation for the usual Fibonacci numbers $F_n$: since $f_m(1,\dots,1)=F_{m+1}$, specializing $x_1=\dots =x_n=y_1=\dots=y_n=1$ gives $F_{n+1}=F_{k+1}F_{n-k+1}+F_{k}F_{n-k}$.
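Both the decomposition~(\ref{decomposition k,n-k for fn.}) and its Fibonacci specialization can be tested mechanically. In the sketch below (helper name `f` is ours; commuting integer arguments, $F_0=0$, $F_1=1$), the splitting is checked for random values and every $1\le k<n$.

```python
# Sketch: check f_n = f_k * f_{n-k}(shifted args) + f_{k-1} * y_{k+1} * f_{n-k-1}(shifted)
# at random integer arguments, and the all-ones Fibonacci specialization.
import random

def f(xs, ys):
    vals = [1]
    for m in range(1, len(xs) + 1):
        prev2 = vals[m - 2] if m >= 2 else 0
        vals.append(vals[m - 1] * xs[m - 1] + prev2 * ys[m - 1])
    return vals[-1]

random.seed(2)
n = 8
xs = [random.randint(-3, 3) for _ in range(n)]
ys = [random.randint(-3, 3) for _ in range(n)]
for k in range(1, n):
    rhs = (f(xs[:k], ys[:k]) * f(xs[k:], ys[k:])
           + f(xs[:k - 1], ys[:k - 1]) * ys[k] * f(xs[k + 1:], ys[k + 1:]))
    assert f(xs, ys) == rhs  # ys[k] plays the role of y_{k+1}

# With all arguments 1, f_m(1,...,1) = F_{m+1}, so the identity reads
# F_{m+1} = F_{k+1} F_{m-k+1} + F_k F_{m-k}.
F = [0, 1]
for _ in range(20):
    F.append(F[-1] + F[-2])
for m in range(2, 11):
    for k in range(1, m):
        assert F[m + 1] == F[k + 1] * F[m - k + 1] + F[k] * F[m - k]
```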
On the free algebra $\Z\langle x_1,\dots,x_n,y_1,\dots,y_n\rangle$ we can define the standard partial derivations $\frac{\partial}{\partial x_k}$ and $\frac{\partial}{\partial y_k}$ for $k\ge 1$.
Using the recursive relation in the definition of $f_n$ and the equation (\ref{decomposition k,n-k for fn.}), we easily obtain the following useful formulae:
\begin{equation}
\begin{array}{l}
\frac{\partial f_{n}(x_1,\dots,y_n)}{\partial x_k}=
f_{k-1}(x_1,\dots,y_{k-1})f_{n-k}(x_{k+1},\dots,y_n),\quad {\rm for} \;1\le k\le n.\\
\frac{\partial f_n(x_1,\dots,y_n)}{\partial y_k}=f_{k-2}(x_1,\dots,y_{k-2})f_{n-k}(x_{k+1},\dots,y_n),\quad {\rm for} \; 2\le k\le n.
\end{array}
\label{partial derivatives of f_n}
\end{equation}
\medskip
As mentioned above, the polynomials $f_n$, specialized in $y_1=\dots=y_n=1$, give the continuant polynomials $p_n$. In particular, it is easy to obtain formulas for the continuant polynomials analogous to the equations (\ref{decomposition k,n-k for fn.}) and (\ref{partial derivatives of f_n}). On the other hand, we may as well specialize the polynomials $f_n$ in $x_1=\dots=x_n=1$; we then get a family of polynomials $r_n(y_1,\dots,y_n)$. The first values of these polynomials are $r_1=1,\ r_2=1+y_2,\ r_3=1+y_2+y_3, r_4=1+y_2+y_3+y_4+ y_2 y_4,\ r_5=1+y_2+y_3+y_4+ y_5+y_2 y_4+ y_2y_5+y_3y_5$. They satisfy
the recurrence relation
$$
r_{n+1}(y_1,\dots, y_{n+1}) =r_n(y_1,\dots,y_n)+ r_{n-1}(y_1,\dots,y_{n-1})y_{n+1}.
$$
\noindent In particular, if we specialize further, we obtain the following sequence of natural numbers
$I_n=r_n(0,1,\dots,n-1)$ which gives the number of involutions in the symmetric group
$S_n$. As for earlier sequences, these can be presented using a specialization of the matrices
in Equation (\ref{2'}) (cf. also \cite{R}).
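The specialization $I_n=r_n(0,1,\dots,n-1)$ can be confirmed with a few lines (a sketch; helper names are ours): the recurrence $r_{n+1}=r_n+r_{n-1}y_{n+1}$ reproduces the classical involution recursion $I_n=I_{n-1}+(n-1)I_{n-2}$.

```python
# Sketch: I_n = r_n(0, 1, ..., n-1) via r_{n+1} = r_n + r_{n-1} y_{n+1}, r_0 = r_1 = 1.

def r(ys):
    """r_n at scalar arguments ys = (y_1, ..., y_n); y_1 never contributes."""
    vals = [1]  # r_0 = 1
    for m in range(1, len(ys) + 1):
        prev2 = vals[m - 2] if m >= 2 else 0
        vals.append(vals[m - 1] + prev2 * ys[m - 1])
    return vals[-1]

def I(n):
    return r(list(range(n)))  # y_i = i - 1

print([I(n) for n in range(1, 8)])  # [1, 2, 4, 10, 26, 76, 232]
```

These are the numbers of involutions in $S_1,\dots,S_7$.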
Using the polynomials $r_n$ and the equation~(\ref{decomposition k,n-k for fn.}), we easily obtain, for $1\le k \le n$, that \begin{equation}\begin{array}{l}{f_n(1,\dots,1,x_{k+1},\dots,x_n,y_1,\dots,y_n)=}
\\ \qquad {=r_k(y_1,\dots,y_k)f_{n-k}(x_{k+1},
\dots,y_n) +} \\ \qquad\qquad {+r_{k-1}(y_1,\dots,y_{k-1})y_{k+1}f_{n-k-1}(x_{k+2},\dots,y_n).}\end{array}\label{17}\end{equation}
From~(\ref{xgk}), we get the formulae analogous to those given by P.~M. Cohn for the continuant
polynomials in~\cite[formulae~(16), p.~148]{Cohn}:
$$
f_n(0,x_2,\dots,x_n,y_1,\dots ,y_n) = y_2f_{n-2}(x_3,\dots,x_n,y_3,\dots,y_n)$$
and
$$f_n(1,x_2,\dots,x_n,y_1,\dots,y_n)=f_{n-1}(x_2+y_2,x_3,\dots,x_n,y_2,y_3,\dots,y_n).
$$
\bigskip
Conjugating all the matrices in the equation~(\ref{2'}) by the invertible matrix $P(0)$, we find that \begin{equation}
\begin{array}{l}{\left(\begin{array}{cc}0 & y_1\\ 1 & x_1\end{array}\right)\cdots\left(\begin{array}{cc}0 & y_n\\ 1 & x_n\end{array}\right)=} \\ {=\left(\begin{array}{cc}y_1f_{n-2}(x_2,\dots,x_{n-1},y_2,\dots,y_{n-1}) &y_1f_{n-1}(x_2,\dots,x_{n},y_2,\dots,y_{n}) \\ f_{n-1}(x_1,\dots,x_{n-1},y_1,\dots,y_{n-1})& f_n(x_1,\dots,x_n,y_1,\dots,y_n)\end{array}\right).}\end{array}\label{Fibonacci as product of matrices}\end{equation}
\bigskip
We may also look at our noncommutative indeterminates $x_1,\dots,x_n,y_1,$ $\dots,y_n$ in the polynomial $f_n(x_1,\dots,x_n,y_1,\dots,y_n)$ as arrows in a quiver $E_n$ with two vertices $A$ and $B$, where $x_i$ is an arrow from $A$ to $B$ for $i$ odd, $x_i$ is an arrow from $B$ to $A$ for $i$ even, $y_i$ is an arrow from $A$ to $A$ for $i$ odd, and $y_i$ is an arrow from $B$ to $B$ for $i$ even. The quiver $E_n$ is the following:
\begin{center}
\begin{tikzpicture}
\node (r) at (0,0) {A};
\node (s) at (2,0) {B};
\draw [->] (s) to [out=205, in=335] node[auto] {\scriptsize $x_{2i}$} (r);
\draw [->] (r) to [out=25, in=155] node[auto]{\scriptsize $x_{2i-1}$} (s);
\path (r) edge[ out=140, in=50
, looseness=0.8, loop
, distance=1cm, ->]
node[above=0.1pt] {\scriptsize $y_{2i}$} (r);
\path (s) edge[ out=140, in=50
, looseness=0.8, loop
, distance=1cm, ->]
node[above=0.1pt] {\scriptsize $y_{2i-1}$} (s);
\end{tikzpicture}
\end{center}
The quiver $E_n$ has $2n$ arrows. The polynomials $f_i$ turn out to be linear combinations of paths with all the coefficients equal to one, and these paths in $f_i$ are from $A$ to $A$ (hence they are cycles) when $i$ is even, and are paths from $A$ to $B$ when $i$ is odd.
Notice that we have \begin{equation}\left(\begin{array}{cc}x_{2i-1} & 1\\ y_{2i-1} & 0\end{array}\right)\left(\begin{array}{c} A \\ B \end{array}\right)=\left(\begin{array}{c} B \\ A \end{array}\right)\ \mbox{and}\ \left(\begin{array}{cc}x_{2i} & 1\\ y_{2i} & 0\end{array}\right)
\left(\begin{array}{c} B \\ A\end{array}\right)=\left(\begin{array}{c} A \\ B\end{array}\right),\end{equation} so that again, in the product (\ref{2'}), the matrices alternately leap from $A$ to~$B$.
\section{Circular tilings and the polynomials $c_n$}\label{domino}
The classes of polynomials studied in this paper have a clear combinatorial interpretation. For instance:
(1) The monomials in the polynomial $h_n$ parametrize the nonempty subsets of a set of $n$ elements. It suffices to associate to the monomial $x_{i_1}yx_{i_2}y\dots yx_{i_t}$ of $h_n$ the subset $\{i_1,i_2,\dots,i_t\}$ of $\{1,2,\dots,n\}$.
(2) The monomials in the polynomial $$f_n=f_n(x_1,\dots,x_n,y_1,\dots,y_n)$$ studied in Section~\ref{GCF} parametrize the possible ways one can tile a
strip of $1\times n$ square cells with $1\times 1$ squares
and $1\times 2$ dominos. Essentially, this is the standard interpretation of the Fibonacci numbers $F_n$ via linear tilings. A {\em linear tiling} of a row of squares (a $1\times n$
strip of square cells) is a covering of the strip of squares with squares and dominos (which cover two squares). For instance, the polynomial $f_3=x_1x_2x_3+x_1y_3+y_2x_3$ parametrizes the set of the three linear tilings
\bigskip
\begin{center}
\begin{tikzpicture}
\draw (0,0) -- (3,0) -- (3,1) -- (0,1) -- (0,0) ;
\draw (4,0) -- (7,0) -- (7,1) -- (4,1) -- (4,0) ;
\draw (8,0) -- (11,0) -- (11,1) -- (8,1) -- (8,0) ;
\draw (1,0) -- (1,1) ;
\draw (2,0) -- (2,1) ;
\draw (5,0) -- (5,1) ;
\draw (6,0) -- (6,1) ;
\draw (9,0) -- (9,1) ;
\draw (10,0) -- (10,1) ;
\filldraw [black] (0.5,0.5) circle (2pt)
(1.5,0.5) circle (2pt)
(2.5,0.5) circle (2pt)
(4.5,0.5) circle (2pt)
(5.5,0.5) circle (2pt)
(6.5,0.5) circle (2pt)
(8.5,0.5) circle (2pt)
(9.5,0.5) circle (2pt)
(10.5,0.5) circle (2pt) ;
\draw (5.5,0.5) -- (6.5,0.5) ;
\draw (8.5,0.5) -- (9.5,0.5) ;
\end{tikzpicture}
\end{center}
\bigskip
\noindent of a row of three squares. Here $x_i$ denotes the $i$-th square and $y_i$ denotes the domino that covers the $(i-1)$-th and the $i$-th square ($y_i$ is the domino that ``ends on the $i$-th square''). The Fibonacci number $F_{n+1}$ represents the number of tilings of a strip of length $n$ using length $1$ squares and length $2$ dominos.
\bigskip
It is also possible to consider {\em circular tilings}, where the deformed squares are arranged in a circle \cite{gfp}. For instance, the four possible circular tilings of a circle of three squares are
\bigskip
\begin{center}
\begin{tikzpicture}
\draw (0,0) circle (1cm) ;
\draw (0,0) circle (0.4cm) ;
\filldraw [black] (0,0.7) circle (2pt)
(0.606,-0.35) circle (2pt)
(-0.606,-0.35) circle (2pt) ;
\draw (0.3464,0.2) -- (0.866,0.5) ;
\draw (-0.3464,0.2) -- (-0.866,0.5) ;
\draw (0,-0.4) -- (0,-1) ;
\draw (3,0) circle (1cm) ;
\draw (3,0) circle (0.4cm) ;
\filldraw [black] (3,0.7) circle (2pt)
(3.606,-0.35) circle (2pt)
(2.394,-0.35) circle (2pt) ;
\draw (3.3464,0.2) -- (3.866,0.5) ;
\draw (2.6536,0.2) -- (2.134,0.5) ;
\draw (3,-0.4) -- (3,-1) ;
\draw (6,0) circle (1cm) ;
\draw (6,0) circle (0.4cm) ;
\filldraw [black] (6,0.7) circle (2pt)
(6.606,-0.35) circle (2pt)
(5.394,-0.35) circle (2pt) ;
\draw (6.3464,0.2) -- (6.866,0.5) ;
\draw (5.6536,0.2) -- (5.134,0.5) ;
\draw (6,-0.4) -- (6,-1) ;
\draw (9,0) circle (1cm) ;
\draw (9,0) circle (0.4cm) ;
\filldraw [black] (9,0.7) circle (2pt)
(9.606,-0.35) circle (2pt)
(8.394,-0.35) circle (2pt) ;
\draw (9.3464,0.2) -- (9.866,0.5) ;
\draw (8.6536,0.2) -- (8.134,0.5) ;
\draw (9,-0.4) -- (9,-1) ;
\draw (2.394,-0.35) arc (210:330:0.7);
\draw (6,0.7) arc (90:210:0.7);
\draw (9.606,-0.35) arc (-30:90:0.7);
\end{tikzpicture}
\end{center}
But for us it will be more convenient to represent the same four possible circular tilings of a circle of three squares as follows:
\begin{center}
\begin{tikzpicture}
\draw (0,0) -- (3,0) -- (3,1) -- (0,1) -- (0,0) ;
\draw (3.2,0) -- (6.2,0) -- (6.2,1) -- (3.2,1) -- (3.2,0) ;
\draw (6.4,0) -- (9.4,0) -- (9.4,1) -- (6.4,1) -- (6.4,0) ;
\draw (9.6,0) -- (12.6,0) -- (12.6,1) -- (9.6,1) -- (9.6,0) ;
\draw (1,0) -- (1,1) ;
\draw (2,0) -- (2,1) ;
\draw (4.2,0) -- (4.2,1) ;
\draw (5.2,0) -- (5.2,1) ;
\draw (8.4,0) -- (8.4,1) ;
\draw (7.4,0) -- (7.4,1) ;
\draw (10.6,0) -- (10.6,1) ;
\draw (11.6,0) -- (11.6,1) ;
\filldraw [black] (0.5,0.5) circle (2pt)
(1.5,0.5) circle (2pt)
(2.5,0.5) circle (2pt)
(3.7,0.5) circle (2pt)
(4.7,0.5) circle (2pt)
(5.7,0.5) circle (2pt)
(6.9,0.5) circle (2pt)
(7.9,0.5) circle (2pt)
(8.9,0.5) circle (2pt)
(10.1,0.5) circle (2pt)
(11.1,0.5) circle (2pt)
(12.1,0.5) circle (2pt);
\draw (4.7,0.5) -- (5.7,0.5) ;
\draw (6.4,0.5) -- (6.9,0.5) ;
\draw (8.9,0.5) -- (9.4,0.5) ;
\draw (10.1,0.5) -- (11.1,0.5) ;
\end{tikzpicture}
\end{center}
\noindent In this representation, we use an ``out-of-phase domino'' which spans the first and last cells of the tiling.
\bigskip
This suggests that there must also exist noncommutative polynomials $ {c}_n$ which par\-am\-etrize the set of circular tilings of a circle of $n$ squares. The idea is the following. Any circular tiling of a circle of $n$ squares $s_1,\dots,s_n$ either does not contain the out-of-phase domino which spans $s_n$ and $s_1$ or contains the out-of-phase domino. The circular tilings of the circle that do not contain the out-of-phase domino are in one-to-one correspondence with the linear tilings
of a row of squares $s_1,\dots,s_n$, hence they are parametrized by the monomials of the polynomial $f_n(x_1,\dots,x_n,y_1,\dots,y_n)$.
The circular tilings of the circle that do contain the out-of-phase domino are in one-to-one correspondence with the linear tilings
of the row of squares $s_2,\dots,s_{n-1}$, hence they are parametrized by the monomials of the polynomial $$f_{n-1}(x_2,\dots,x_{n-1},y_2,\dots,y_{n-1}).$$ Now, as we have already said, the indeterminate $x_i$ denotes a length $1$ square in the $i$-th position, and the indeterminate $y_i$ denotes a length $2$ domino that ends in the $i$-th position. Thus we will denote the
out-of-phase domino, which starts from the $n$-th position and ends in the first one, by $y_1$. Hence we find that the circular tilings of a circle of $n$ squares ($n\ge1$) are parametrized by the noncommutative polynomials $ {c}_n$ defined by \begin{equation}\begin{array}{l} {c}_n(x_1,\dots,x_n,y_1,\dots,y_n)= \\ \qquad\qquad =f_n(x_1,\dots,x_n,y_1,\dots,y_n)+y_1f_{n-2}(x_2,\dots,x_{n-1},y_2,\dots,y_{n-1}).\end{array}\label{o;no}\end{equation}
Notice that:
(1) The indeterminate $y_1$ does not appear in the polynomials $f_n$; it appears in these new polynomials $ {c}_n$ for the first time.
(2) In the polynomials $f_n$ $(n\ge1)$, all the monomials begin with $x_1$ or $y_2$ and end with $x_n$ or $y_n$. In the polynomials $ {c}_n$, all the monomials begin with $x_1,y_1$ or $y_2$ and end with $x_{n-1},y_{n-1}, x_n$ or $y_n$.
(3) The first polynomials $ {c}_n$ are $$\begin{array}{l} {c}_1(x_1,y_1)=x_1, \qquad {c}_2(x_1,x_2,y_1,y_2)=x_1x_2+y_1+y_2,\\ {c}_3(x_1,x_2,x_3,y_1,y_2,y_3)=x_1x_2x_3+x_1y_3+y_1x_2+y_2x_3,\\ {c}_4(x_1,x_2,x_3,x_4,y_1,y_2,y_3,y_4)= x_1x_2x_3x_4+x_1x_2y_4+x_1y_3x_4+ \\ \qquad\qquad +y_1x_2x_3+y_1y_3+y_2x_3x_4+y_2y_4.\end{array}$$
(4) Once again these polynomials can be obtained by a leapfrog construction, as follows: to
obtain $c_n$, write $x_1\dots x_n$ and replace every possible set of disjoint pairs $x_ix_{i+1}$
by $y_{i+1}$, indexing modulo $n$, so that for a word ending in $x_n$ and beginning with
$x_1$ the letter $x_n$ is erased and the letter $x_1$ is replaced by $y_1$.
(5) The polynomial $ {c}_2(x_1,x_2,y_1,y_2)=x_1x_2+y_1+y_2$ parametrizes the three circular tilings
\bigskip
\begin{center}
\begin{tikzpicture}
\draw (0,0) -- (2,0) -- (2,1) -- (0,1) -- (0,0) ;
\draw (3.5,0) -- (5.5,0) -- (5.5,1) -- (3.5,1) -- (3.5,0) ;
\draw (7,0) -- (9,0) -- (9,1) -- (7,1) -- (7,0) ;
\draw (1,0) -- (1,1) ;
\draw (4.5,0) -- (4.5,1) ;
\draw (8,0) -- (8,1) ;
\filldraw [black] (0.5,0.5) circle (2pt)
(1.5,0.5) circle (2pt)
(4,0.5) circle (2pt)
(5,0.5) circle (2pt)
(7.5,0.5) circle (2pt)
(8.5,0.5) circle (2pt);
\draw (3.5,0.5) -- (4,0.5) ;
\draw (5.5,0.5) -- (5,0.5) ;
\draw (7.5,0.5) -- (8.5,0.5) ;
\end{tikzpicture}
\end{center}
\bigskip
\noindent of a circle of two squares, which we consider to be distinct. With these conventions, we have that $ {c}_n(1,1,\dots,1)=L_n$ for every $n\ge1$, where $L_n$ denotes the $n$-th Lucas number, as desired. Here the Lucas number $L_n$, defined by $L_0=2,\ L_1=1$ and $L_n=L_{n-1}+L_{n-2}$ for $n\ge 2$, represents, for $n\ge 1$, the number of circular tilings of a circle of $n$ squares using length $1$ squares and length $2$ dominos.
\bigskip
To relate the polynomials $c_n$ to a suitable product of matrices we need the trace. For any ring $R$, the abelian group $M_n(R)$ of all $n\times n$-matrices over $R$ can be viewed as an $R$-$R$-bimodule $_RM_n(R)_R$. The {\em trace} $\tr\colon {}_RM_n(R)_R\to{}_RR_R$, defined by $\tr(a_{ij})_{i,j}=\sum_{i=1}^na_{ii},$ is an $R$-$R$-bimodule morphism with the further property that
$$\tr\left((a_{ij})_{i,j}(b_{ij})_{i,j}\right)=\sum_{i=1}^n\sum_{j=1}^na_{ij}b_{ji}.$$
From~(\ref{2'}) and~(\ref{o;no}), we get that
\begin{theorem} $$ {c}_n(x_1,\dots,x_n,y_1,\dots,y_n)= \tr\left(\left(\begin{array}{cc}x_1 & 1\\ y_1 & 0\end{array}\right)\cdots\left(\begin{array}{cc}x_n & 1\\ y_n & 0\end{array}\right)\right).$$\end{theorem}
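As a sanity check (a sketch; helper names are ours), one can compare the trace of the matrix product with $c_n=f_n+y_1f_{n-2}(x_2,\dots,x_{n-1},y_2,\dots,y_{n-1})$ at random commuting integers, and recover the Lucas numbers from the all-ones specialization.

```python
# Sketch: trace of the product of [[x_i, 1], [y_i, 0]] equals
# c_n = f_n + y_1 * f_{n-2}(x_2..x_{n-1}, y_2..y_{n-1})  (checked for n >= 2).
import random

def f(xs, ys):
    vals = [1]
    for m in range(1, len(xs) + 1):
        prev2 = vals[m - 2] if m >= 2 else 0
        vals.append(vals[m - 1] * xs[m - 1] + prev2 * ys[m - 1])
    return vals[-1]

def trace_product(xs, ys):
    M = [[1, 0], [0, 1]]
    for x, y in zip(xs, ys):
        A = [[x, 1], [y, 0]]
        M = [[sum(M[i][k] * A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    return M[0][0] + M[1][1]

random.seed(3)
for n in range(2, 9):
    xs = [random.randint(-3, 3) for _ in range(n)]
    ys = [random.randint(-3, 3) for _ in range(n)]
    c_n = f(xs, ys) + ys[0] * f(xs[1:-1], ys[1:-1])
    assert trace_product(xs, ys) == c_n

# All-ones specialization: c_n(1, ..., 1) = L_n, the Lucas numbers.
lucas = [trace_product([1] * n, [1] * n) for n in range(1, 8)]
print(lucas)  # [1, 3, 4, 7, 11, 18, 29]
```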
\section{The generalized Lucas polynomials $\ell_n$ \\ and the negative indices}
Now we can define the {\em generalized Lucas polynomials} by the recursion formulae: \begin{equation}\begin{array}{l}{\ell_{0}=2,\quad \ell_1=x_1,}\\ {\ell_n(x_1,\dots,x_n,y_1,\dots,y_n)=\ell_{n-1}(x_1,\dots,x_{n-1},y_1,\dots,y_{n-1})x_n+} \\ \qquad\qquad\qquad {+\ell_{n-2}(x_1,\dots,x_{n-2}, y_1,\dots,y_{n-2})y_n,}\end{array}\label{rec'''}\end{equation} generalizing \cite[p.~142]{BQ}.
The first of these polynomials are $$\begin{array}{l}\ell_0=2, \qquad \ell_1=x_1, \qquad \ell_2=x_1x_2+2y_2, \\ \ell_3=x_1x_2x_3+x_1y_3+2y_2x_3, \\ \ell_4=x_1x_2x_3x_4+x_1x_2y_4+x_1y_3x_4+2y_2x_3x_4+ 2y_2 y_4,
\\ \ell_5=x_1x_2x_3x_4x_5+x_1x_2x_3y_5+x_1x_2y_4x_5+x_1y_3x_4x_5+ \\ \qquad\qquad\qquad\qquad +x_1y_3y_5+2y_2x_3x_4x_5+ 2y_2x_3y_5+2y_2 y_4x_5,\dots\end{array}$$ The number of monomials in each $\ell_n$ is the $(n+1)$-th Fibonacci number and, for $n\ge1$, one has that $\ell_n(1,1,\dots,1)=L_n$, the $n$-th Lucas number. The polynomial $$\ell_n(x_1,\dots,x_n,y_1,\dots,y_n)$$ belongs to $\Z\langle x_1,\dots,x_n,y_1,\dots,y_n\rangle$, though in this case, too, the indeterminate $y_1$ does not appear. But the polynomials $\ell_n$ are just a specialization of the polynomials $f_n$, as the following result shows.
\begin{theorem}\label{8.1} $$\begin{array}{l} \ell_n(x_1,\dots,x_n,y_1,\dots,y_n)= \\ \qquad\qquad =f_n(x_1,x_2,x_3\dots,x_n,y_1,2y_2,y_3,y_4,\dots,y_n)\end{array}$$ for every $n\ge 1$.\end{theorem}
{{\sc Proof.}\ } Induction on $n$. The cases $n=1$ and $n=2$ are easily checked directly. For $n\ge 3$, assume that the theorem is true for $n-1$ and $n-2$. Then $$\begin{array}{l}\ell_n(x_1,\dots,x_n,y_1,\dots,y_n)= \\ \qquad =\ell_{n-1}(x_1,\dots,x_{n-1},y_1,\dots,y_{n-1})x_n+ \\ \qquad\qquad +\ell_{n-2}(x_1,\dots,x_{n-2}, y_1,\dots,y_{n-2})y_n= \\ \qquad =
f_{n-1}(x_1,\dots,x_{n-1},y_1,2y_2,\dots,y_{n-1})x_n+ \\ \qquad\qquad +f_{n-2}(x_1,\dots,x_{n-2}, y_1,2y_2\dots,y_{n-2})y_n= \\ \qquad =f_n(x_1,x_2,x_3\dots,x_n,y_1,2y_2,y_3,y_4,\dots,y_n).\qquad ~\rule{1ex}{1ex}\end{array}$$
\vspace{0.4truecm}
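Theorem \ref{8.1} is also easy to verify by machine at commuting arguments (a sketch; helper names are ours): doubling the argument $y_2$ in $f_n$ reproduces $\ell_n$.

```python
# Sketch: l_0 = 2, l_1 = x_1, l_n = l_{n-1} x_n + l_{n-2} y_n, compared with
# f_n(x_1,...,x_n, y_1, 2*y_2, y_3,...,y_n) at random integer arguments.
import random

def f(xs, ys):
    vals = [1]
    for m in range(1, len(xs) + 1):
        prev2 = vals[m - 2] if m >= 2 else 0
        vals.append(vals[m - 1] * xs[m - 1] + prev2 * ys[m - 1])
    return vals[-1]

def ell(xs, ys):
    vals = [2, xs[0]]  # l_0 = 2, l_1 = x_1
    for m in range(2, len(xs) + 1):
        vals.append(vals[m - 1] * xs[m - 1] + vals[m - 2] * ys[m - 1])
    return vals[len(xs)]

random.seed(4)
for n in range(1, 9):
    xs = [random.randint(-3, 3) for _ in range(n)]
    ys = [random.randint(-3, 3) for _ in range(n)]
    doubled = list(ys)
    if n >= 2:
        doubled[1] *= 2  # replace y_2 by 2*y_2
    assert ell(xs, ys) == f(xs, doubled)
print("Theorem 8.1 verified numerically")
```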
Let us consider negative indices $n$. The sequence of Fibonacci numbers $F_n$ can be extended to any negative index $n$ using the recurrence formula $F_{n-2} = F_n - F_{n-1}$, and one finds that $F_{-n} = (-1)^{n+1} F_n$ for every $n\ge 0$. It is clear that our sequences of polynomials can also be extended to negative indices $n$.
Let us begin with the continuants $p_n(t_1,\dots,t_n)$, for which we have that $$p_n(1,1,\dots,1)=F_{n+1}.$$ The difference of $1$ in the indices in this formula is due to Cohn's original choice of the initial conditions $p_{-1}=0,p_0=1$. Though the usual modern definition of Fibonacci numbers with $F_0=0,F_1=1$ is more appropriate, we prefer to continue using Cohn's original notation with $p_{-1}=0$ and $p_0=1$.
The recursion formula must now be re-written as $p_{n-2}=p_n-p_{n-1}t_n$. Substituting $-m$ for $n-2$, we get that $p_{-m}=p_{-(m-2)}-p_{-(m-1)}t_{-(m-2)}$. We thus find that $p_0=1,\ p_{-1}=0,\ p_{-2}=1,\ p_{-3}=-t_{-1},\ p_{-4}=t_{-1}t_{-2}+1,$ and so on. The polynomial $p_{-n}$, for $n\ge0$, turns out to be an element of the free algebra $\Z\langle t_{-1},t_{-2},\dots,t_{-(n-2)}\rangle$ in the noncommutative indeterminates $t_{-1},t_{-2},\dots,t_{-(n-2)}$. The general formula is given in the next proposition. Its proof is left to the reader.
\begin{proposition} $$\begin{array}{r} p_{-n}(t_{-1},t_{-2},\dots,t_{-(n-2)})=(-1)^np_{n-2}(t_{-1},t_{-2},\dots,t_{-(n-2)})=\qquad\qquad \\ =p_{n-2}(-t_{-1},-t_{-2},\dots,-t_{-(n-2)})\end{array}$$ for every integer $n\ge0$.\end{proposition}
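The proposition can be tested numerically (a sketch; helper names are ours): extend $p$ to negative indices by $p_{-m}=p_{-(m-2)}-p_{-(m-1)}t_{-(m-2)}$ and compare with $(-1)^np_{n-2}$ at commuting integer arguments.

```python
# Sketch: continuants p_0 = 1, p_1 = t_1, p_m = p_{m-1} t_m + p_{m-2},
# extended to negative indices by p_{-m} = p_{-(m-2)} - p_{-(m-1)} t_{-(m-2)}.
import random

def p(ts):
    vals = [1]  # p_0 = 1 (and p_{-1} = 0)
    for m in range(1, len(ts) + 1):
        prev2 = vals[m - 2] if m >= 2 else 0
        vals.append(vals[m - 1] * ts[m - 1] + prev2)
    return vals[-1]

def p_neg(n, ts):
    """p_{-n}; ts[j] plays the role of t_{-(j+1)}."""
    vals = {0: 1, 1: 0, 2: 1}  # p_0, p_{-1}, p_{-2}
    for m in range(3, n + 1):
        vals[m] = vals[m - 2] - vals[m - 1] * ts[m - 3]  # ts[m-3] is t_{-(m-2)}
    return vals[n]

random.seed(5)
ts = [random.randint(-4, 4) for _ in range(10)]
for n in range(2, 12):
    assert p_neg(n, ts) == (-1) ** n * p(ts[:n - 2])
    assert p_neg(n, ts) == p([-t for t in ts[:n - 2]])
print("negative continuant formula verified")
```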
Notice that relation~(\ref{2}) now becomes
\begin{equation}\begin{array}{l}\left(\begin{array}{cc}-t_{-1} & 1\\ 1 & 0\end{array}\right)\cdots\left(\begin{array}{cc}-t_{-n} & 1\\ 1 & 0\end{array}\right)= \\ \qquad\qquad\left(\begin{array}{cc}p_{-(n+2)}(t_{-1},\dots,t_{-n}) & p_{-(n+1)}(t_{-1},\dots,t_{-(n-1)})\\ p_{-(n+1)}(t_{-2},\dots,t_{-n}) & p_{-n}(t_{-2},\dots,t_{-(n-1)})\end{array}\right).\end{array} \label{2lll}\end{equation}
\bigskip
For the generalized Fibonacci polynomials $f_n(x_1,x_2,\dots,x_n,y_1,\dots,y_n)$, the recursion formula $$\begin{array}{r}f_n(x_1,\dots,x_n,y_1,\dots,y_n)= f_{n-1}(x_1,\dots,x_{n-1},y_1,\dots,y_{n-1})x_n+\qquad
\qquad \\ +f_{n-2}(x_1,\dots,x_{n-2}, y_1,\dots,y_{n-2})y_n\end{array}$$ now becomes \begin{equation}\begin{array}{r}f_{n-2}(x_1,\dots,x_{n-2}, y_1,\dots,y_{n-2})= f_n(x_1,\dots,x_n,y_1,\dots,y_n)y_n^{-1}-\ \ \\ -f_{n-1}
(x_1,\dots,x_{n-1},y_1,\dots,y_{n-1})x_ny_n^{-1}.\label{f_}\end{array}\end{equation} Thus we get that, for any integer $n\ge 0$, the polynomial $f_{-n-2}$ must belong to the $\Z$-algebra $\Z\langle x_{-1},x_{-2},\dots,x_{-n},y_0^{\pm1}, y_{-1}^{\pm1},\dots, y_{-n}^{\pm1}\rangle$, obtained from $\Z$ by adjoining $2n+1$ algebraically independent elements $x_{-1},x_{-2},\dots,x_{-n},y_0,y_{-1},$ $y_{-2},\dots,y_{-n}$ with $y_0$, $y_{-1}$, \dots, $y_{-n}$ invertible. The proof of the following proposition, by induction, is left to the reader.
\begin{proposition} $$\begin{array}{l}f_{-n-2}(x_{-1},x_{-2},\dots,x_{-n},y_0, y_{-1},\dots, y_{-n})= \\ =f_{n}(-y_0^{-1}x_{-1},-y_{-1}^{-1}x_{-2},\dots,-y_{-(n-1)}^{-1}x_{-n},1,y_0^{-1}, y_{-1}^{-1},\dots, y_{-(n-2)}^{-1})y_{-n}^{-1}\end{array}$$ for every integer $n\ge0$.\end{proposition}
In extending the generalized Lucas polynomials $\ell_n$ to negative indices $n$, we have, as for the $f_n$ in (\ref{f_}), that $\ell_{n-2}=\ell_ny_n^{-1}-\ell_{n-1}x_ny_n^{-1}$. It is easily seen that
\begin{proposition} $$\begin{array}{l}\ell_{-n}(x_1,x_0,x_{-1},\dots,x_{-(n-2)},y_1,y_0,y_{-1},\dots,y_{-(n-2)})= \\ =(-1)^n\ell_n(x_1y_1^{-1},x_0y_0^{-1},x_{-1}y_{-1}^{-1},\dots,x_{-(n-2)}y_{-(n-2)}^{-1}; 1,y_0^{-1},y_{-1}^{-1},\dots,y_{-(n-2)}^{-1})\end{array}$$ for every integer $n\ge 0$.\end{proposition}
\section{The general pattern: the polynomials $g_n$}
Now consider the following family of polynomials $g_n$, with $n\ge0$. To define them, we need countably many noncommutative indeterminates $x_{ij}$, where $1\le i \le j $. Set $g_0=1$ and
\begin{equation}
\label{recurrence relation for generalized polynomials}
g_n=\sum_{i=1}^{n}g_{i-1}x_{in}, \quad {\rm for }\quad n\ge 1.
\end{equation} For instance, the first polynomials $g_n$ are $$\begin{array}{l} g_1=x_{11},\quad g_2=x_{12}+x_{11}x_{22},\quad
g_3=x_{13}+x_{11}x_{23}+x_{12}x_{33}+x_{11}x_{22}x_{33}, \\
g_4=x_{14}+ x_{11}x_{24} + x_{12}x_{34}+x_{11}x_{22}x_{34}+ x_{13}x_{44}+x_{11}x_{23}x_{44}+ \\ \qquad\qquad +x_{12}x_{33}x_{44}+x_{11}x_{22}x_{33}x_{44}.\end{array}$$
For every $n\ge 1$, the polynomial $g_n$ turns out to be a polynomial with integer coefficients in the $n(n+1)/2$ indeterminates $x_{ij}$ with $1\le i\le j\le n$, as is easily seen.
The polynomial $g_n$ is a sum of monic monomials that parametrize all linear tilings of a strip of
$n$ square cells, that is, all coverings of the strip of squares with rectangles of any length $1,2,\dots,n$. The
indeterminate $x_{ij}$ indicates the rectangle of length $j-i+1$ that starts covering the $i$-th square and ends covering the $j$-th square.
For instance, $g_3=x_{13}+x_{11}x_{23}+x_{12}x_{33}+x_{11}x_{22}x_{33}$ and, correspondingly, the tilings of a strip of three squares are
\bigskip
\begin{center}
\begin{tikzpicture}
\draw (0,0) -- (3,0) -- (3,1) -- (0,1) -- (0,0) ;
\draw (3.2,0) -- (6.2,0) -- (6.2,1) -- (3.2,1) -- (3.2,0) ;
\draw (6.4,0) -- (9.4,0) -- (9.4,1) -- (6.4,1) -- (6.4,0) ;
\draw (9.6,0) -- (12.6,0) -- (12.6,1) -- (9.6,1) -- (9.6,0) ;
\draw (1,0) -- (1,1) ;
\draw (2,0) -- (2,1) ;
\draw (4.2,0) -- (4.2,1) ;
\draw (5.2,0) -- (5.2,1) ;
\draw (8.4,0) -- (8.4,1) ;
\draw (7.4,0) -- (7.4,1) ;
\draw (10.6,0) -- (10.6,1) ;
\draw (11.6,0) -- (11.6,1) ;
\filldraw [black] (0.5,0.5) circle (2pt)
(1.5,0.5) circle (2pt)
(2.5,0.5) circle (2pt)
(3.7,0.5) circle (2pt)
(4.7,0.5) circle (2pt)
(5.7,0.5) circle (2pt)
(6.9,0.5) circle (2pt)
(7.9,0.5) circle (2pt)
(8.9,0.5) circle (2pt)
(10.1,0.5) circle (2pt)
(11.1,0.5) circle (2pt)
(12.1,0.5) circle (2pt);
\draw (4.7,0.5) -- (5.7,0.5) ;
\draw (7.9,0.5) -- (6.9,0.5) ;
\draw (0.5,0.5) -- (2.5,0.5) ;
% \draw (10.1,0.5) -- (11.1,0.5) ;
\end{tikzpicture}
\end{center}
\bigskip
\noindent The first tiling consists of a unique rectangle of length $3$. The second and the third consist of one rectangle of length $1$ and one of length $2$, in the two possible orders. The fourth tiling consists of three squares.
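These counts are easy to reproduce by machine (a sketch; the dictionary `X` of tile values and the helper names are our own device): the recurrence $g_n=\sum_{i=1}^n g_{i-1}x_{in}$ is compared with a brute-force sum over all tilings, one per composition of $n$, and the all-ones specialization counts $2^{n-1}$ tilings.

```python
# Sketch: evaluate g_n from the recurrence g_n = sum_{i=1}^n g_{i-1} x_{in}
# and compare with a direct sum over all tilings of the strip; the tile
# covering cells i..j contributes the value X[(i, j)].
import random

def g_rec(X, n):
    vals = [1]  # g_0 = 1
    for m in range(1, n + 1):
        vals.append(sum(vals[i - 1] * X[(i, m)] for i in range(1, m + 1)))
    return vals[n]

def g_brute(X, n):
    total = 0
    for mask in range(2 ** max(n - 1, 0)):  # choose which of the n-1 borders are cut
        cuts = [0] + [b for b in range(1, n) if mask >> (b - 1) & 1] + [n]
        term = 1
        for a, b in zip(cuts, cuts[1:]):
            term *= X[(a + 1, b)]  # tile covering cells a+1, ..., b
        total += term
    return total

random.seed(6)
for n in range(1, 8):
    X = {(i, j): random.randint(-3, 3) for i in range(1, n + 1) for j in range(i, n + 1)}
    assert g_rec(X, n) == g_brute(X, n)
    ones = {k: 1 for k in X}
    assert g_rec(ones, n) == 2 ** (n - 1)  # number of tilings / monomials
print("g_n recurrence matches tiling enumeration")
```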
\begin{remark}\label{Rem}{\rm (1) Let us show how we can recover the previous families of polynomials using the polynomials $g_n$.
As we have seen, the family $f_n$ parametrizes tilings of a strip of length $n$ with tiles of length $1$ (represented by the indeterminates $x_i$ in the definition of $f_n$) and of length $2$ (represented by the indeterminates $y_i$ in the definition of $f_n$). It is clear from the equation (\ref{recurrence relation for generalized polynomials}) that
$f_n$ can be obtained by equating, in this expression of $g_n$, all the indeterminates $x_{ij}$ to zero
for $j>i+2$. In other words, the polynomial $f_n$ can be obtained from $g_n$ by specializing in the polynomial $g_n$ the indeterminates $x_{ij}$ to zero whenever $j>i+2$, the indeterminates $x_{ii}$ to $x_i$ and the indeterminates $x_{i,i+1}$ to $y_{i+1}$.
(2) Since the polynomials $p_n$ are obtained from $f_n$ by specializing the
indeterminates $y_i$ (in the definition of $f_n$) to $1$ and the indeterminates $x_i$ to $t_i$, we
can also obtain the polynomials $p_n$ by specializing the indeterminates of the polynomial
$g_n$. To be more precise this specialization is obtained by sending $x_{ij}$ to zero whenever $j>i+2$, $x_{ii}$ to $t_i$ and $x_{i,i+1}$ to $1$.
(3) The polynomial $g_n$ is the sum of $2^{n-1}$ monomials, which parametrize the subsets of
a set of $n-1$
elements. Hence there is a clear immediate connection with the monomials of the
polynomial $h_n$, which
parametrize the nonempty subsets of a set of $n$ elements. To this end, it suffices to send every monomial $x_{i_1j_1}x_{i_2j_2}\dots x_{i_{t-1}j_{t-1}}x_{i_tn}$ of degree $t$ in $g_n$ to the subset $\{j_1,j_2,\dots,j_{t-1}\}$ of cardinality $t-1$ of the set $\{1,2,\dots,n-1\}$.
In order to get the polynomial $h_n$ from $g_{n+1}$, it
suffices to specialize, in the polynomial $g_{n+1}$, the indeterminate $x_{ij}$ to $x_jy$ for every
$i$ and $j$, then multiply by $(x_{n+1}y)^{-1}=y^{-1}x_{n+1}^{-1}$, subtract $1$, and finally multiply by $y^{-1}$ on the right. That is, after the specialization, we have that $h_n=(g_{n+1}y^{-1}x_{n+1}^{-1}-1)y^{-1}$.
(4) The ``hierarchy'' of the polynomials studied in this paper therefore follows the pattern below. Each family of polynomials is a ``specialization'' of the families above it, as the remarks (1), (2), (3) above, Theorem~\ref{8.1} and (\ref{o;no}) show.
\begin{center}
\begin{tikzpicture}
\node [above] (g) at (1,1) {$g_n$};
\node [left] (h) at (-2,0) {$h_n$};
\node [above right] (f) at (4,0) {$f_n$};
\node [below] (p) at (1,-1) {$p_n$};
\node [below] (c) at (4,-1) {$c_n$};
\node [below] (l) at (7,-1) {$\ell_n$};
\draw (-2,0) -- node[above] {(3)} (1,1) ;
\draw (1,1) -- node[above] {(1)} (4,0) ;
\draw (4,0) -- node[above] {(2)} (1,-1) ;
\draw (4,0) -- node[right] {(\ref{o;no})} (4,-1) ;
\draw (4,0) -- node[above] {(\ref{8.1})} (7,-1) ;
\filldraw [black] (-2,0) circle (2pt)
(1,1) circle (2pt)
(4,0) circle (2pt)
(1,-1) circle (2pt)
(4,-1) circle (2pt)
(7,-1) circle (2pt);
\end{tikzpicture}
\end{center}
Notice that, in this diagram, $c_n$ is not really obtained via some specialization from $f_n$, because (\ref{o;no}) is simply the definition of the $c_n$'s, in terms of the polynomials $f_n$.
}\end{remark}
\bigskip
Equation (\ref{recurrence relation for generalized polynomials}) leads to
$$
(g_{1},\dots,g_{n})=(g_0,\dots,g_{n-1})\begin{pmatrix}
x_{11} & x_{12} &\dots &x_{1n}\\
0 & x_{22} &\dots &x_{2n}\\
\vdots &\ddots & \ddots & \vdots \\
0 &\dots & 0& x_{nn}
\end{pmatrix}
$$
Since, for $1\le l \le n$, a tiling of a strip of length $n$ is obtained as a tile of length $l$ followed by a tiling of a strip of length $n-l$, the following formula, where we have specified explicitly the indeterminates (``the tiles'') for each polynomial, is easy to obtain:
\begin{equation}
\label{gn as sum with variables on the left}
g_n(x_{ij}; 1\le i \le j \le n)=\sum_{l=1}^n x_{1l}g_{n-l}(x_{l+i,l+j}; 1\le i \le j \le n-l)
\end{equation}
The row $(g_n,\dots,g_1)$ is also given by the first row of the following matrix product. This can be seen as a generalization of the equality~(\ref{2'}).
$$
\begin{pmatrix}
x_{11} & 1 & 0 & \dots & 0 \\
* & 0 & 1 & 0 & \vdots \\
\vdots & \vdots & \ddots & & 0 \\
* & 0 & 0 & \dots & 1 \\
* & 0 & 0 & \dots & 0
\end{pmatrix}
\begin{pmatrix}
x_{22} & 1 & 0 & \dots & 0 \\
x_{12} & 0 & 1 & 0 & \vdots \\
* & 0 & \ddots & \ddots & 0 \\
\vdots & 0 & 0 & \dots & 1 \\
* & 0 & 0 & \dots & 0
\end{pmatrix} \cdots
\begin{pmatrix}
x_{nn} & 1 & 0 & \dots & 0 \\
x_{n-1,n} & 0 & 1 & 0 & \vdots \\
\vdots & \vdots & \ddots & & 0 \\
x_{2n} & 0 & 0 & \dots & 1 \\
x_{1n} & 0 & 0 & \dots & 0
\end{pmatrix}
$$
\section{Permanents}
Let $R$ be a not necessarily commutative ring, $n\ge1$ an integer, $M_n(R)$ the $n\times n$-matrix ring, and $S_n$ the symmetric group. Define a mapping $\perm\colon M_n(R)\to R$ by setting, for every matrix $A=(a_{i,j})_{i,j}\in M_n(R)$, $$\perm(A):=\sum_{\sigma\in S_n}a_{1,\sigma(1)}\dots a_{n,\sigma(n)}\label{sum}.$$ If $A_{i,j}$ denotes the $(n-1)\times(n-1)$-matrix that results from $A$ by removing the $i$-th row and the $j$-th column, then $\perm(A)=\sum_{j=1}^na_{1,j}\perm(A_{1,j})=\sum_{j=1}^n\perm(A_{n,j})a_{n,j}$ (since $R$ need not be commutative, this permanent can be easily expanded only along the first row or the last row).
\medskip
\begin{theorem}\label{g_n as a permanent}
For every $n\ge 1,$
\begin{equation}
g_n(x_{ij})=\perm(A_n)=\perm(A_n^t),
\end{equation}
where
$$A_n=\left(\begin{array}{ccccc}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\
1 & x_{22} & x_{23} & \dots & x_{2n}\\
0 & 1 & x_{33} &\ddots & x_{3n} \\
\vdots & \ddots& \ddots & \ddots & \vdots \\
0 & \dots & 0 &1& x_{nn}\end{array}\right)
$$
\end{theorem}
\begin{proof}
Let us show that $g_n(x_{ij})=\perm(A_n)$ by induction on $n$.
The case $n=1$ is trivial.
Let us assume that the formula holds for $n$ and expand
$\perm(A_{n+1})$ along the last row: $\perm(A_{n+1})=\perm(B_n)+g_nx_{n+1,n+1}$ where
$B_n$ is the $n\times n$ matrix given by $B_n=\begin{pmatrix}
A_{n-1} & C \\
U & x_{n,n+1}
\end{pmatrix} $, where $C$ is the column $(x_{1,n+1},\dots,x_{n-1,n+1})^t$ and $U$ is the row
$(0,\dots,0,1)$. In particular, the matrix $B_n$ is obtained from the matrix $A_n$ changing
the last column. Notice that in the expression $g_n=\sum_{i=1}^n g_{i-1}x_{in}$, the polynomials $g_0,
\dots,g_{n-1}$ do not depend on the indeterminates $x_{in}$. Since the expression of $\perm (A_n)$
given by the inductive hypothesis is $\perm(A_n)=g_n$, we get that
$\perm(B_n)=g_0x_{1,n+1}+g_1x_{2,n+1}+\cdots+g_{n-1}x_{n,n+1}$. Thus we have
$\perm(A_{n+1,n+1})=\sum_{i=1}^{n+1}g_{i-1}x_{i,n+1}=g_{n+1}$, as desired.
The fact that $\perm(A_{n}^t)=g_n$ as well is proved similarly, using the equality~(\ref{gn as sum with variables on the left}).
\end{proof}
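Theorem \ref{g_n as a permanent} can be checked directly for small $n$ (a sketch; helper names are ours, and entries are commuting integers, so $\perm(A)=\perm(A^t)$ holds automatically and only the first equality is a real test).

```python
# Sketch: compare g_n with the permanent of the Hessenberg matrix A_n
# (diagonal and above: x_{ij}; subdiagonal: 1; elsewhere: 0).
import random
from itertools import permutations

def g_rec(X, n):
    vals = [1]
    for m in range(1, n + 1):
        vals.append(sum(vals[i - 1] * X[(i, m)] for i in range(1, m + 1)))
    return vals[n]

def perm(A):
    n = len(A)
    total = 0
    for s in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][s[i]]  # a_{1,s(1)} ... a_{n,s(n)}
        total += prod
    return total

random.seed(7)
for n in range(1, 7):
    X = {(i, j): random.randint(-3, 3) for i in range(1, n + 1) for j in range(i, n + 1)}
    A = [[X[(i, j)] if j >= i else (1 if j == i - 1 else 0)
          for j in range(1, n + 1)] for i in range(1, n + 1)]
    assert perm(A) == g_rec(X, n)
print("g_n = perm(A_n) verified for n = 1..6")
```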
Let us remark that, contrary to the case of permanents defined over commutative rings, we do not have in general that $\perm(A)=\perm(A^t)$.
\medskip
From Theorems~\ref{g_n as a permanent} and \ref{8.1}, and Remarks~\ref{Rem}(1) and (2), we immediately get that:
\begin{corollary}\label{syre} The polynomials $f_n(x_1,\dots,x_n,y_1,\dots,y_n)$ and $p_n(t_1,\dots,t_n)$ are the permanents of the $n\times n$ tridiagonal matrices $$\left(\begin{array}{ccccc}
x_{1} & y_{2} & 0 & \dots & 0 \\
1 & x_{2} & y_{3} & \dots & 0\\
0 & 1 & x_{3} &\ddots & 0 \\
\vdots & \ddots& \ddots & \ddots & \vdots \\
0 & \dots & 0 &1& x_{n}\end{array}\right)\quad\text{and}\quad\left(\begin{array}{ccccc}
t_{1} & 1 & 0 & \dots & 0 \\
1 & t_{2} & 1 & \dots & 0\\
0 & 1 & t_{3} &\ddots & 0 \\
\vdots & \ddots& \ddots & \ddots & \vdots \\
0 & \dots & 0 &1& t_{n}\end{array}\right),$$ and their transposes, respectively.\end{corollary}
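For example, for $n=3$ a direct computation of these two permanents gives $$f_3(x_1,x_2,x_3,y_1,y_2,y_3)=x_1x_2x_3+y_2x_3+x_1y_3\quad\text{and}\quad p_3(t_1,t_2,t_3)=t_1t_2t_3+t_1+t_3,$$ the latter being the classical continuant polynomial in three variables.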
The analogue of formula (\ref{xgk}) is the following:
\begin{proposition}\label{Andre} $$\begin{array}{r}g_n=x_{11}g_{n-1}(x_{i+1\ j+1})+x_{12}g_{n-2}(x_{i+2\ j+2})+x_{13}g_{n-3}(x_{i+3\ j+3})+\qquad \\ +\dots+x_{1\ n-1}g_{1}(x_{i+n-1\ j+n-1})+x_{1n}g_0.\end{array}$$ \end{proposition}
\begin{proof} It suffices to apply Theorem~\ref{g_n as a permanent} expanding the permanent along the first row. The $t$-th term in this expansion is $$x_{1t}\perm\left(
\begin{array}{cccccccc}1&&&&&&&\\
&1&*&&&&&\\
&0&\ddots&&&*&&\\
&&&1&&&&\\
&&&& x_{t+1\ t+1}& x_{t+1\ t+2}&\dots& x_{t+1\ n}\\
&&&& 1 &x_{t+2\ t+2}& & x_{t+2\ n}\\
&0&&& &\ddots &\ddots & \vdots\\
&&&&0& &1 & x_{nn}\end{array}\right)=$$
$$=x_{1t}\perm\left(
\begin{array}{cccc} x_{t+1\ t+1}& x_{t+1\ t+2}&\dots& x_{t+1\ n}\\
1 &x_{t+2\ t+2}& & x_{t+2\ n}\\
&\ddots &\ddots & \vdots\\
0& &1 & x_{nn}\end{array}\right)=x_{1t}g_{n-t}(x_{i+t\ j+t}).$$\end{proof}
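For instance, for $n=3$ Proposition~\ref{Andre} yields $$g_3=x_{11}g_{2}(x_{i+1\ j+1})+x_{12}g_{1}(x_{i+2\ j+2})+x_{13}g_0=x_{11}(x_{22}x_{33}+x_{23})+x_{12}x_{33}+x_{13},$$ in accordance with the expansion $g_3=g_0x_{13}+g_1x_{23}+g_2x_{33}=x_{13}+x_{11}x_{23}+(x_{12}+x_{11}x_{22})x_{33}$ along the last column.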
\begin{thebibliography}{10}
\bibitem{AJLL} A. Alahmadi, S.~K. Jain, A. Leroy, and T.~Y. Lam. \newblock Euclidean pairs and quasi Euclidean rings. \newblock {\em J. Algebra}, {406}:154--170, 2014.
\bibitem{AdelAlb} A. Alahmadi and A. Facchini. \newblock Some remarks on categories of modules modulo morphisms with essential kernel or superfluous image. \newblock {\em J. Korean Math. Soc.}, {50}:557--578, 2013.
\bibitem{gfp} T. Amdeberhan, X. Chen, V.~H. Moll, and B.~E. Sagan. \newblock Generalized Fibonacci polynomials and Fibonomial coefficients.
\newblock {\em Ann. Comb.}, {18}:541--562, 2014.
\bibitem{BQ} A.~T. Benjamin and J.~J. Quinn. \newblock Proofs that Really Count. \newblock {\em Math. Assoc. Amer.}, 2003.
\bibitem{Cohn} P.~M. Cohn. \newblock Free Ideal Rings and Localization in
General Rings. \newblock {\em Cambridge Univ. Press, Cambridge}, 2006.
\bibitem{IHES} P.~M. Cohn. \newblock On the structure of the $GL_2$ of a ring. \newblock {\em Inst. Hautes \'Etudes Sci. Publ. Math.},
{30}:5--53, 1966.
\bibitem{book} T.~W. Cusick and M.~E. Flahive. \newblock The Markoff and Lagrange spectra. \newblock {\em Math. Surveys and Monographs}, no.~{30}, Amer. Math. Soc., Providence, RI, 1989.
\bibitem{R} C. Reutenauer. \newblock On a matrix representation for polynomially recursive sequences. \newblock {\em The Electronic Journal of Combinatorics}, 19(3), 2012.
\bibitem{W} J.~H.~M. Wedderburn. \newblock Noncommutative domains of integrity. \newblock {\em J. reine angew.
Math.}, 167:129--141, 1932.
\end{thebibliography}
\end{document}