\documentclass[12pt]{article}
\usepackage{e-jc}
\specs{P2.24}{25(2)}{2018}

% \usepackage[draft,notref,notcite]{showkeys} % for showing all labels, ref, cite, etc.
\usepackage{amssymb, amsmath}
\usepackage{xspace}
\usepackage{enumitem}
\usepackage[colorlinks=true,citecolor=black,linkcolor=black,urlcolor=blue]{hyperref}

\usepackage{soul}
\usepackage{tikz}
\usetikzlibrary{arrows,matrix,decorations,shapes}

% Use to hide tikzpicture environments
%\renewenvironment{tikzpicture}{\comment}{\endcomment}
% HACK for mk4ht
%\let\columnlines\empty

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%% TiKz settings %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\tikzstyle{vertex}=[circle, line width=0.2mm, draw, fill=black, inner sep=0mm, minimum width=1mm]%
\tikzstyle{edge} = [draw]

\usepackage[ruled,noline,titlenumbered,linesnumbered,noend,norelsize]{algorithm2e}
\DontPrintSemicolon
\SetNlSty{footnotesize}{}{}
\IncMargin{0.5em}
\SetNlSkip{1em}
\SetKwIF{If}{ElseIf}{Else}{if}{then}{else if}{else}{endif}
\SetKw{Return}{return}
\SetKw{Let}{let}
%\SetNlSkip{2em}

\renewcommand{\phi}{\varphi}
\renewcommand{\leq}{\leqslant}
\renewcommand{\geq}{\geqslant}


% \newcommand{\set}[1]{\left\{#1\right\}}
\newcommand{\abs}[1]{\left\vert#1\right\vert}
% \newcommand{\Incomp}{\parallel}

\newcommand{\Alg}[0]{\text{BALANCED}\xspace}
\newcommand{\Br}{Builder\xspace}
\newcommand{\Sr}{Scheduler\xspace}

\DeclareMathOperator{\Val}{val}
\DeclareMathOperator{\bal}{bal}

\newcommand{\eqA}{\Phi}
\newcommand{\eqB}{\Psi}

\newcommand{\vr}[1]{#1}
\newcommand{\mtop}[2]{\genfrac{}{}{0pt}{1}{#1}{#2}}

% \allowdisplaybreaks[1]

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%METADATA%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Give the submission and acceptance dates in the format shown.
% The editors will insert the publication date in the third argument.
\dateline{Dec 3, 2015}{Apr 24, 2018}{May 11, 2018}

\MSC{68W27, 05C70, 05C85}

\Copyright{  The authors. Released under the CC BY-ND license (International 4.0).}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\title{Deferred on-line bipartite matching}

\author{Jakub Kozik\thanks{Partially supported by the Polish National Science Centre UMO-2011/03/D/ST6/01370.}
\qquad Grzegorz Matecki\thanks{Partially supported by the Polish National Science Centre 2011/03/B/ST6/01367.}\\
\small Theoretical Computer Science\\[-0.8ex]
\small Faculty of Mathematics and Computer Science\\[-0.8ex] 
\small Jagiellonian University in Krak\'{o}w, Poland\\
\small\tt Jakub.Kozik@tcs.uj.edu.pl \qquad Grzegorz.Matecki@tcs.uj.edu.pl\\
}

\begin{document}

% \keywords{on-line, bipartite matching, adaptive algorithm}%

\maketitle

\begin{abstract}

We present a new model for the problem of on-line matching on bipartite graphs. 
Suppose that one part of a graph is given, but the vertices of the other part are presented in an on-line fashion.
In the classical version, each incoming vertex is either irrevocably matched to a vertex from the other part or stays unmatched forever.
In our version, an algorithm is allowed to match the new vertex to a group of elements (possibly empty).
Later on, the algorithm can decide to remove some vertices from the group and assign them to another (just presented) vertex,
with the restriction that each element belongs to at most one group.
We present an optimal (deterministic) algorithm for this problem and prove that its competitive ratio equals $1-\pi/\cosh(\frac{\sqrt{3}}{2}\pi)\approx 0.588$.
\end{abstract}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
\label{section:introduction}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
A number of task-server assignment problems can be modelled as finding a matching in a bipartite graph $G=(U,D,E)$. 
The vertices of one part (set $D$) correspond to servers, and the vertices of the other part (set $U$) correspond to tasks. 
An edge between a task and a server indicates that the server is capable of performing the task. 
In a simple setting where a single server can perform at most one task, the problem of maximising the number of performed tasks reduces to finding a maximum matching.
However, in real-life applications, it is very common that not all tasks are known a priori, and some decisions about the assignments have to be made with no knowledge about the future tasks. 
A simple model for this situation is on-line bipartite matching. 
In this setting, the servers are known from the beginning, and the tasks are revealed one by one. 
The decision about the assignment of each task has to be made just after its arrival and cannot be changed in the future.   
Suppose that there are $k$ servers, and that $k$ tasks are going to be revealed. 
It is easy to show that it is possible to present the tasks in such a way that the constructed assignment matches at most $\lceil k/2 \rceil$ tasks, 
whereas it would be possible to match all the tasks if they were revealed all at once.
On the other hand, any greedy strategy guarantees that at least half of the tasks will be assigned. 
These observations imply that the competitive ratio of the on-line bipartite matching problem equals $1/2$.
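The adversarial presentation behind the upper bound is already visible for $k=2$: whichever server the algorithm picks for the first task, the \Br makes the second task adjacent only to that server. A toy sketch (the encoding and all names are ours):

```python
def adversary_round(first_choice):
    """Toy Builder strategy for k = 2 servers {'a', 'b'}.
    Task t1 is adjacent to both servers; the algorithm irrevocably
    matches it to first_choice.  The Builder then reveals t2 adjacent
    only to that same server, which is already taken."""
    matched = {'t1': first_choice}
    n_t2 = {first_choice}                    # N(t2), revealed on-line
    free = n_t2 - set(matched.values())
    if free:                                 # empty here: t2 stays unmatched
        matched['t2'] = free.pop()
    return len(matched)
```

Either deterministic choice matches only $1=\lceil 2/2 \rceil$ of the two tasks, although the revealed graph has a perfect matching.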

In their classical contribution, Karp, Vazirani and Vazirani~\cite{KVV90} took an approach in which the graph to be presented is fixed before the first task is revealed. 
In particular, the graph does not depend on the decisions of the assigning algorithm. 
While the approach does not make any difference for the worst-case analysis of the algorithm, it provides a framework for analysing randomised algorithms. 
The authors presented a randomised algorithm, which constructs a matching with the expected competitive ratio of at least $1-1/e$
(the original paper~\cite{KVV90} contained a mistake, which was corrected in~\cite{GM2008}; see also a simplified exposition~\cite{BM2008}). 
The result is the best possible among all randomised algorithms.
The approach of~\cite{KVV90} has been applied to many variants of the original problem, with numerous practical applications (the switch routing problem~\cite{AC2006, AR2003}, on-line auctions~\cite{MGS2011}, the Adwords problem~\cite{DH2009, GM2008, MSV2007}, etc.).
Recently, the problem of on-line stochastic matching~\cite{BK2010, FMMM2009, KMT2011, MGS2011, MP2012}, where the competitive ratio can be greater than $1-\frac{1}{e}$, has attracted considerable interest.
%
A different approach (called $b$-matching) is presented in~\cite{KalKir2000}, where the authors allowed a server to perform up to $b$ tasks at the same time. 
They showed an optimal deterministic algorithm with a competitive ratio of $1-\frac{1}{(1+\frac{1}{b})^b} $ (which tends to $1-\frac{1}{e}$ with $b \rightarrow \infty$).


It is not uncommon for the cost of a running server to be roughly the same as the cost of an idle one. 
Consequently, it may be profitable to perform some task on many servers simultaneously.
In order to capture this situation, we allow an algorithm to assign more than one server to an incoming task and, later on, to abandon some of these computations and switch the freed servers over to new tasks.
One server is assumed to be enough to complete a task.
In this model, called \emph{deferred} (on-line) matching, an algorithm can improve performance by delaying some decisions concerning which server should accomplish which task. 
We present an algorithm that always matches at least $(1-\pi/\cosh(\frac{\sqrt{3}}{2}\pi))n+o(n)\approx 0.588n+o(n)$ vertices, where $n$ is the size of the maximum matching in the presented graph.


This type of approach was first introduced by Felsner in~\cite{Fel97} 
as an \emph{adaptive} generalisation of the on-line chain partitioning problem. 
In this problem, an algorithm has to partition into chains a partial order whose vertices are presented in an on-line manner.
Felsner proposed a variant in which each incoming vertex can initially be assigned to several chains
and later withdrawn from some of them, as long as at least one chain remains.
This modification gives more freedom to a partitioning algorithm, but
the classical behaviour, whereby just one chain is used for each new vertex, is still allowed.
Interestingly, as reported in~\cite{BFKKMM2012}, no  adaptive algorithm has been proved to essentially outperform the best non-adaptive algorithm 
and, in both cases, the best lower bounds for the number of used chains are roughly twice the width of the presented partial order.
It seems challenging to verify whether the adaptive (deferred) approach to chain partitioning enables more efficient on-line algorithms.

\subsection{Further related work}
The problem of $b$-matching~\cite{KalKir2000} resembles deferred matching with the roles of servers and tasks switched.
However, in deferred matching, an algorithm always ends up with each server performing at most one task, while $b$-matching allows a server to perform many tasks.

Another similar approach is proposed by Feldman et al.~\cite{FMMMa2009} as \emph{free disposal}.
They consider the weighted matching problem, in which each incoming vertex (server) $u\in U$ can be assigned to one of its neighbours (in $D$) or left alone.
All vertices of $D$ (tasks) are given in advance (the roles of servers and tasks are switched). 
Tasks are allowed to be assigned to servers multiple times.
In the end, each task $d\in D$ chooses at most $n(d)$ servers from all the vertices of $U$ assigned to it -- the ones connected by the highest-weighted edges.
The numbers $n(d)$ (for $d\in D$) are given in advance and can be interpreted as the maximum number of times that performing the task brings a profit.
The main difference from the deferred matching is that once a connection between a server and a task is established, it cannot be changed until the very end.
In deferred matching, a server may drop its task and take on a new one during the on-line process.

The idea of dropping an edge from an already-constructed matching
is investigated in the \emph{pre-emptive model}. 
Here, weighted edges arrive on-line, and an algorithm is allowed to remove previously accepted edges in order to add a new one.
A collection of results from pre-emptive matching can be found in~\cite{CTV2015,ELSW2013}.

An approach where a task (with a given weight) can be assigned to multiple servers and, therefore, each server can run many tasks was already studied in~\cite{ANR1995}.
The goal here is to minimise the maximum load (the sum of the weights of all tasks assigned to a server) of all servers.
Furthermore, assignment decisions may not be changed in the future.
The authors prove that the best competitive ratio is $O(\log n)$, where $n$ is the number of servers.
In deferred matching (assuming that all weights are equal to $1$), we are not interested in the load of a server, which may change (it can only decrease) during the process.
However, we explicitly forbid the load of a server to be greater than $\alpha$, and our results depend~on~$\alpha$.

In most papers about on-line bipartite matching, decisions made by an algorithm are irrevocable.
Some authors allow a limited number of reassignments (see~\cite{CDKL2009, GMS2014}), and analyse the strategies that minimise the reassignments.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Problem definition}\label{sec:definition}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

For a positive integer $\alpha$, the \emph{deferred $\alpha$-matching game} is played in rounds between the \Sr and the \Br.
They play on a bipartite graph, say $G=(U,D,E)$.
The set of vertices $D$ is known in advance, and the set $U$ (with its neighbouring edges) is revealed step-by-step by the \Br
within a finite number of rounds. Each round consists of three steps:
\begin{enumerate}[label=(R\arabic*)]
% \def\labelenumi{(\theenumi)}
% \def\theenumi{R\arabic{enumi}}
	\item \Br presents a vertex $u\in U$ and reveals all its neighbours $N(u)=\{d \in D: (u,d)\in E\}\subseteq D$.\label{game:1}
	\item \Sr assigns to $u$ a set $m(u)\subseteq N(u)$ of size at most $\alpha$. \label{game:2}
	\item \Sr updates $m(x):=m(x)\setminus m(u)$ for every vertex $x$ presented before.\label{game:3}
\end{enumerate}
The \emph{size} of the game, denoted by $n$, is the maximum size of a matching in $G$.
The final function $m$ is called \emph{the multi-match} constructed by the \Sr.
We also allow $\alpha$ to be infinite, in which case the game is called \emph{deferred $\infty$-matching} or simply \emph{deferred matching}.
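The three round steps translate directly into code. The following minimal sketch of one round (all names are ours) makes explicit that step (R3) preserves the invariant that every $d\in D$ belongs to at most one set $m(x)$:

```python
def play_round(m, neighbours, alpha, u, scheduler):
    """One round of the deferred alpha-matching game.
    m: dict mapping every vertex presented so far to its current set m(x);
    neighbours: the set N(u) revealed by the Builder in step (R1);
    scheduler: the Scheduler's choice rule for step (R2)."""
    chosen = scheduler(m, neighbours, alpha, u)        # step (R2)
    assert chosen <= neighbours and len(chosen) <= alpha
    for x in m:                                        # step (R3)
        m[x] = m[x] - chosen
    m[u] = chosen
    return m
```

Any concrete \Sr is then a function returning, for the presented $u$, a subset of $N(u)$ of size at most $\alpha$.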

The goal of the \Sr is to maximise the number $k$ of non-empty sets $m(u)$ over all vertices $u\in U$. 
Intuitively, every such vertex $u$ can be successfully matched with an arbitrarily chosen $d\in m(u)$. 
The number $k$ is called \emph{the size of the multi-match constructed by the \Sr}.
The goal of the \Br is to make $k$ as small as possible.

The interpretation of the deferred $\alpha$-matching game in terms of server--task assignment is clear: $D$ is the set of servers, and $U$ is the set of incoming tasks.
An algorithm assigns the servers $m(u)$ to an incoming $u$ (possibly cancelling the previous computations of the servers in $m(u)$).
The performance of the Scheduler is measured by the number (or the fraction) of the tasks performed at the end of the game. 
Note that for $\alpha=1$, the game is reduced to the classical on-line bipartite matching.

Let $\mathcal{A}$ be the assigning algorithm.
We denote by $\Val_{\mathcal{A}}(n)$ the worst-case value of the size of the multi-match constructed by $\mathcal{A}$ in all possible games of size $n$. 
The value of the deferred $\alpha$-matching problem $\Val_\alpha(n)$ is the maximum value of $\Val_{\mathcal{A}}(n)$ among all $\alpha$-assigning algorithms $\mathcal A$. 
Since no algorithm produces a multi-match larger than $n$, we additionally use the \emph{competitive ratio} defined as
$\liminf_{n\to\infty}{\Val_\alpha(n)}/{n}$.
Similarly, \emph{the competitive ratio of the algorithm $\mathcal{A}$} is $\liminf_{n\to\infty}{\Val_{\mathcal{A}}(n)}/{n}$.
 

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Main results}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
To solve the problem of on-line deferred $\alpha$-matching we consider a deterministic algorithm called $\alpha$-\Alg{}.
In one round, for the presented vertex $u$, the algorithm successively enlarges the new matching set $m(u)$, possibly diminishing previously assigned sets, as long as $\abs{m(u)}$ does not exceed the size of any diminished set.
It is greedy in the sense that no task is rejected as long as it can be performed.
The main idea of the algorithm is to locally balance the sizes of the assigned sets.
The details of the algorithm are described in Section~\ref{sect:best_alg}. 
The main result of our work is the proof that $\alpha$-\Alg{} is the best possible algorithm.
\begin{theorem}\label{thm:optimal-bal}
  $\alpha$-\Alg{} is an optimal strategy for the \Sr in the deferred \mbox{$\alpha$-matching} game.
\end{theorem}

The proof is split into two parts, described in the two subsequent sections.
The following schema of a system of inequalities is crucial for both arguments:
\begin{equation}\label{ineq:main-res}
 \left\{
 \begin{array}{l}
 (1+\alpha)\vr{x_0} \leq n,\\
  (\vr{x_0}+\cdots+\vr{x_i})(1+\vr{x_i})\leq n-i,\quad i=1,\ldots,k,\\
  \vr{x_1} \geq \vr{x_2}\geq \cdots \geq \vr{x_k} \geq 0,\\
  \vr{x_0}+\cdots + \vr{x_k} \geq 0.
 \end{array}
 \right. 
\end{equation}


We will operate on fixed $n$ and $\alpha$. 
Thus, the schema is parametrised by a positive integer $k$.
We say that a pair $(k,x)$ satisfies system~\eqref{ineq:main-res} if $x=(x_0,x_1, \ldots, x_k)$ is an integer vector satisfying the instance of the schema for this particular $k$.
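Whether a concrete pair $(k,x)$ satisfies system~\eqref{ineq:main-res} can be checked mechanically; a small sketch (the function name is ours):

```python
def satisfies(n, alpha, k, x):
    """Return True iff the pair (k, x), with x = (x_0, ..., x_k) an
    integer vector, satisfies the schema of inequalities above for
    the fixed n and alpha."""
    if len(x) != k + 1:
        return False
    if (1 + alpha) * x[0] > n:
        return False
    if any(sum(x[:i + 1]) * (1 + x[i]) > n - i for i in range(1, k + 1)):
        return False
    chain = list(x[1:]) + [0]            # encodes x_1 >= ... >= x_k >= 0
    if any(chain[i] < chain[i + 1] for i in range(k)):
        return False
    return sum(x) >= 0
```

For example, `satisfies(18, 2, 2, (6, 1, 1))` returns `True` for the instance $n=18$, $\alpha=2$, $x=(6,1,1)$ used in Figure~\ref{fig:attack}.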

In Section~\ref{sect:worst_adapive}, we prove (Proposition~\ref{prop:spoiler_bound}) that for every solution $(k,x)$ of~\eqref{ineq:main-res}, there is a \Br's strategy in a game of size $n$ that forces every deferred $\alpha$-matching algorithm $\mathcal A$ to match at most $n-(x_0+\cdots+x_k)$ vertices.
On the other hand, in Section~\ref{sect:best_alg}, we show (Proposition~\ref{prop:balanced_bound}) that when $\alpha$-\Alg{} constructs a multi-match of size $k$ in a game of size $n$, then $k=n-(x_0+\cdots+x_k)$ for some $(k,x)$ satisfying~\eqref{ineq:main-res}.
These two facts ensure that $\alpha$-\Alg{} is an optimal strategy for \Sr.

In order to determine the competitive ratio of $\alpha$-\Alg{}, we need to maximise the sum $x_0+\cdots+x_k$ over all feasible solutions $(k,x)$ of~\eqref{ineq:main-res}.
In Section~\ref{sect:comp} we present and solve a linear programming formulation of~\eqref{ineq:main-res}.
Finally, we prove
\begin{theorem}\label{thm:ratio}
 The competitive ratio of the deferred on-line $\alpha$-matching problem on bipartite graphs 
 %(and the competitive ratio of the $\alpha$-\Alg{} algorithm)  
 equals
$1 - \frac{\alpha}{1+\alpha}\prod_{i=1}^{\alpha-1}\frac{i+i^2}{1+i+i^2}$. For $\alpha\to\infty$, it converges to $1 - {\pi}/{\cosh \frac{\sqrt 3 \pi}{2}}\approx 0.588$.
\end{theorem}
The ratio converges fast: for $\alpha=2$ and $\alpha=3$, we already obtain $5/9\approx 0.556$ and $4/7\approx 0.571$, respectively.
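The closed-form ratio in Theorem~\ref{thm:ratio} is straightforward to evaluate numerically (a sketch; the function name is ours):

```python
from fractions import Fraction
from math import cosh, pi, sqrt

def ratio(alpha):
    """Competitive ratio 1 - alpha/(1+alpha) * prod_{i=1}^{alpha-1} (i+i^2)/(1+i+i^2),
    computed exactly with rational arithmetic."""
    p = Fraction(alpha, 1 + alpha)
    for i in range(1, alpha):
        p *= Fraction(i + i * i, 1 + i + i * i)
    return 1 - p

# Value for alpha -> infinity.
limit = 1 - pi / cosh(sqrt(3) * pi / 2)
```

Exact arithmetic gives $1/2$, $5/9$ and $4/7$ for $\alpha=1,2,3$, matching the classical case and the values above.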

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Worst case scenario}
\label{sect:worst_adapive}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Inequalities~\eqref{ineq:main-res} allow $x_0$ to be negative. 
We start with an observation
that in order to maximise the sum $x_0+\cdots+x_k$, it suffices 
to consider the solutions of~\eqref{ineq:main-res} with $x_0=\left\lfloor\frac{n}{1+\alpha}\right\rfloor$.

\begin{proposition}\label{prop:positiv:x0}
For any pair $(k,x)$ 
satisfying~\eqref{ineq:main-res}, there exists a pair $(k,x')$ 
satisfying~\eqref{ineq:main-res}, such that $x'_0 = \left\lfloor\frac{n}{1+\alpha}\right\rfloor$ and $x'_0+\cdots+x'_{k} \geq x_0+\cdots +x_{k}$.
\end{proposition}
\begin{proof}
For the case in which $x_1=0$, take $x'_0 = \left\lfloor\frac{n}{1+\alpha}\right\rfloor$, $x'_1=\cdots=x'_k=0$ to observe that
$x_0+\cdots+x_k = x_0 \leq \left\lfloor\frac{n}{1+\alpha}\right\rfloor = x'_0 + \cdots +x'_k$.

From now on, we shall assume that $x_1>0$. 
The proposition is proved by induction
on the number $m=\left\lfloor\frac{n}{1+\alpha}\right\rfloor - x_0$.
The base case of $m=0$ (where $x_0= \left\lfloor\frac{n}{1+\alpha}\right\rfloor$) is immediate, as we can simply take $x'=x$.
For the induction step, let $m>0$.
This is equivalent to $x_0<\left\lfloor\frac{n}{1+\alpha}\right\rfloor$.
Let $j$ be the highest index for which $x_j=x_1$.
Then, consider the following sequence $(\bar x_0,\ldots,\bar x_k)$: $\bar x_0 = x_0+1$, $\bar x_j = x_j-1$ and $\bar x_i = x_i$ for all $i\notin\{0,j\}$.
First, we need to test whether the pair $(k,\bar x)$ satisfies~\eqref{ineq:main-res}.
The condition $m>0$ implies that
\begin{equation}\label{prep_cond_I}
\bar x_0=x_0+1\leq \left\lfloor\frac{n}{1+\alpha}\right\rfloor. 
\end{equation}
The definition of $j$ guarantees that $x_j > x_{j+1}$ as long as $j<k$.
Therefore, $\bar x_j \geq \bar x_{j+1}$, and indeed, 
\begin{equation}\label{prep_cond_II}
\bar x_1\geq \ldots\geq \bar x_k. 
\end{equation}
For all $i\geq j$, we have $\bar x_0+\cdots+\bar x_i = x_0+\cdots+x_i$, and thus
\begin{equation}\label{prep_cond_III}
  (\bar x_0+\cdots+\bar x_i)(1+\bar x_i)\leq (x_0+\cdots+x_i)(1+x_i) \leq n-i,
\end{equation}
and
\begin{equation}\label{prep_cond_IV}
\bar x_0+\cdots+\bar x_k = x_0+\cdots+x_k\geq0.
\end{equation}
For  $0<i<j$, we arrive at
\begin{align}\label{prep_cond_V}
 (\bar x_0+\cdots+\bar x_i)(1+\bar x_i) &=  (x_0+\cdots+x_{i}+1)(1+x_j) \leq\nonumber\\
			     &\leq (x_0+\cdots+x_{j-1}+x_j)(1+x_j)\leq \\
			     &\leq n-j < n-i.\nonumber
\end{align}
Inequalities~\eqref{prep_cond_I}--\eqref{prep_cond_V} imply that $(k,\bar x)$ does, indeed, satisfy~\eqref{ineq:main-res}.
Next, observe that for $(k,\bar x)$ there is a pair $(k,x')$ satisfying~\eqref{ineq:main-res} such that 
$x'_0 = \left\lfloor\frac{n}{1+\alpha}\right\rfloor$ and $x'_0+\cdots+x'_{k} \geq \bar x_0+\cdots +\bar x_{k}$.
It has already been proved for  $\bar x_1 = 0$,
and the case $\bar x_1>0$ is given through the induction hypothesis, since $0\leq\left\lfloor\frac{n}{1+\alpha}\right\rfloor - \bar x_0 <  m$. 
Finally, with~\eqref{prep_cond_IV}, we obtain $x'_0+\cdots+x'_{k} \geq  x_0+\cdots + x_{k}$,
which completes the proof.
\end{proof}
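The transformation used in the induction step admits a direct computational restatement (a sketch; the function name is ours, and $x_1>0$ is assumed as in the proof):

```python
def bar_step(x):
    """One induction step from the proof: with j the largest index such
    that x_j = x_1, move one unit from x_j to x_0.  The sum of the
    entries is preserved, so the bound n - (x_0 + ... + x_k) is
    unchanged while x_0 increases towards floor(n / (1 + alpha))."""
    j = max(i for i in range(1, len(x)) if x[i] == x[1])
    y = list(x)
    y[0] += 1
    y[j] -= 1
    return y
```

For instance, with $n=18$ and $\alpha=2$, one step maps the feasible vector $(5,1,1)$ to $(6,1,0)$, which already attains $x_0=\left\lfloor\frac{n}{1+\alpha}\right\rfloor=6$.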



\begin{proposition}\label{prop:spoiler_bound}
For any pair $(k,x)$ 
satisfying~\eqref{ineq:main-res}, there exists a strategy for the \Br in the deferred $\alpha$-matching game of size $n$ such that any \Sr constructs a multi-match with a size of at most $n-(x_0+\cdots+x_k)$.
\end{proposition}
\begin{proof}
Assume for a moment that the proposition is true whenever $x_0\geq0$.
For the case $x_0<0$, by Proposition~\ref{prop:positiv:x0}, there is a pair $(k,x')$, such that $x'_0\geq0$ and 
\begin{equation}\label{prop_case0}
n-(x'_0+\cdots+x'_k)\leq n-(x_0+\cdots+x_k). 
\end{equation}
Thus, for $(k,x')$, we obtain a \Br's strategy $S$ that forces the \Sr to use at most $n-(x'_0+\cdots+x'_k)$ vertices for its multi-match.
By~\eqref{prop_case0}, we can use the strategy $S$ for the pair $(k,x)$ with $x_0<0$.


The above argument shows that it is enough to consider pairs with $x_0\geq0$.
Without loss of generality, we assume that $x_k>0$ and describe a strategy for the \Br that does not allow the \Sr to construct a multi-match larger than $n-(x_0+\cdots+x_k)$.
During the game, the \Br presents a bipartite graph $G=(U, D,E)$ with $\abs{U}=\abs{D}=n$ and maintains an auxiliary structure:
a partition of $U=  U_0\cup U_1\cup\ldots\cup U_k\cup R$ and a partition of $D =  D_0\cup D_1\cup\ldots\cup D_k\cup S$, such that
\begin{align}
    \abs{U_0} &= \abs{D_0} = x_0,\nonumber\\
    \abs{U_i} &=  \abs{D_i}=1+x_i,\qquad\textrm{for }i=1,\ldots,k,\nonumber\\
    N(u_i)    &=  D-(D_0\cup\ldots\cup D_{i-1}),\qquad\textrm{for each }u_i\in U_i,\label{atack:neighbourU}\\
    N(r)      &=  S,\qquad\textrm{for each }r\in R.\label{atack:neighbourR}
\end{align}
Note that~\eqref{ineq:main-res} guarantees that $x_0+\cdots+x_k\leq n-k$ and thus 
$\sum_{i=0}^k\abs{U_i} = \sum_{i=0}^k\abs{D_i}\leq x_0 + \sum_{i=1}^k(1+x_i) \leq n$.
Therefore, $\abs{R}=\abs{S}\geq0$.
It is clear that any bipartite graph that can be partitioned in such a manner contains a perfect matching.
  
 
The strategy of the \Br is divided into $k+2$ phases enumerated from $0$ to $k+1$.
{Figure~\ref{fig:attack} depicts the evolution of the strategy described below for $\alpha=2$, $n=18$ and $k=2$.}
In the beginning of the $i$-th phase ($0\leq i\leq k$), the sets $U_j$ and $D_j$, for $j<i$, are already fixed. 
Next, during the $i$-th phase, the \Br presents $1+x_i$ vertices  (or $x_0$ vertices when $i=0$), which form the set $U_i$ with their neighbourhoods defined by~\eqref{atack:neighbourU}. 
Then, the \Br chooses, in a special manner, the set $D_i\subseteq D-(D_0\cup\ldots\cup D_{i-1})$ of size $1+x_i$ (or of size $x_0$ if $i=0$).
This concludes phase $i$.
In the last phase of the game (i.e.\ phase $k+1$), the \Br presents the set $R$ of size $n-k-(x_0+\cdots+x_k)$, whose vertices are adjacent to all vertices in $S=D-\bigcup_{i=0}^kD_i$.
  
We now only need to define the \Br's choice of $D_i$.
For that, after each phase $i$ ($0\leq i\leq k$), the \Br maintains the following:
\begin{itemize}
    \item[($\star$)] 
    there are $i$ distinct vertices $y_1,\ldots,y_i\in U_0\cup\ldots\cup U_i$ such that
    $m(u)\cap \bigcup_{j=0}^i D_j = \emptyset$ for all $u\in (U_0\cup\ldots\cup U_i)\setminus \{ y_1,\ldots,y_i\}$.
\end{itemize}
Observe that, for a fixed $X\subseteq D$ and a fixed $u\in U$,
once the condition $X\cap m(u) = \emptyset$ is satisfied, it remains so until the end of the game,
as $m(u)$ may only shrink later on in the game.
We prove by induction that choosing such $D_i$ is possible in every phase by successively finding the correct $y_i$'s. 

For $i=0$, the \Br chooses\footnote{Let $m(X) = \bigcup_{x\in X}m(x)$ for a set $X$.} any $D_0\subseteq D-m(U_0)$ with a size of $x_0$, which is possible since
$\abs{D-m(U_0)}\geq n-\alpha x_0\geq x_0$ (because $\abs{m(u)}\leq\alpha$ for all $u\in U_0$, and through~\eqref{ineq:main-res}).
Thus, $D_0\cap m(u)=\emptyset$ for all $u\in U_0$, as required.

\begin{figure}[tbh]
\begin{center}
\begin{tikzpicture}%[scale=.9]
 \foreach \p in {1,...,18} {
  \coordinate (d\p) at (\p*0.7,-1);
  \fill (d\p) circle (1pt) node[below] {$\mtop{\p}{}$};
  }
  
 \foreach \q in {1,...,18} {
  \coordinate (u\q) at (\q*0.7,1);
 }
 \fill (u1) circle(1pt) node[above] {$\genfrac{}{}{0pt}{1}{7}{8}$};
 \fill (u2) circle(1pt) node[above] {$\mtop{9}{10}$};
 \fill (u3) circle(1pt) node[above] {$\mtop{\textrm{\st{$11$}}}{12}$};
 \fill (u4) circle(1pt) node[above] {$\mtop{\textrm{\st{$13$}}}{14}$};
 \fill (u5) circle(1pt) node[above] {$\mtop{\textrm{\st{$15$}}}{16}$};
 \fill (u6) circle(1pt) node[above] {$\mtop{\textrm{\st{$17$}}}{18}$};
 \draw[rounded corners] (u1) ++ (-0.3,-0.2) rectangle  (6*0.7+0.3,1.8) ;
 \draw[rounded corners] (d1) ++ (-0.3,-0.6) rectangle  (6*0.7+0.3,0.2-1) ;
 
 \fill (u7) circle(1pt) node[above] {$\mtop{}{17}$};
 \fill (u8) circle(1pt) node[above] {$\mtop{}{15}$};
 \draw[rounded corners] (u7) ++ (-0.3,-0.2) rectangle  (8*0.7+0.3,1.5) ;
 \draw[rounded corners] (d7) ++ (-0.3,-0.6) rectangle  (8*0.7+0.3,0.2-1) ;
 
 \fill (u9) circle(1pt) node[above] {$\mtop{}{13}$};
 \fill (u10) circle(1pt) node[above] {$\mtop{}{11}$};
 \draw[rounded corners] (u9) ++ (-0.3,-0.2) rectangle  (10*0.7+0.3,1.5) ;
 \draw[rounded corners] (d9) ++ (-0.3,-0.6) rectangle  (10*0.7+0.3,0.2-1) ;
 
 \foreach \q in {11,...,18} {
   \fill (u\q) circle(1pt);
 }
 \draw[rounded corners] (u11) ++ (-0.3,-0.2) rectangle  (18*0.7+0.3,1.2) ;
 \draw[rounded corners] (d11) ++ (-0.3,-0.6) rectangle  (18*0.7+0.3,0.2-1) ;
 
 \path (u1) ++ (2,1.1) node {$U_0$} ;
 \path (d1) ++ (2,-1) node {$D_0$} ;
 
 \path (u7) ++ (0.4,0.8) node {$U_1$} ;
 \path (d7) ++ (0.4,-1) node {$D_1$} ;
 
 \path (u9) ++ (0.4,0.8) node {$U_2$} ;
 \path (d9) ++ (0.4,-1) node {$D_2$} ;
 
 \path (u11) ++ (2.4,0.5) node {$R$} ;
 \path (d11) ++ (2.4,-1) node {$S$} ;
 
 \draw (u1) ++ (-0.2,-0.2) rectangle (1*0.7+0.2,2) ;
 \path (0.8,2.2) node {$y_1$};
 
 \draw (u2) ++ (-0.2,-0.2) rectangle (2*0.7+0.2,2) ;
 \path (2*0.7+0.1,2.2) node {$y_2$};

 \foreach \q in {u1,u7,u9,u11}  {\path (\q) ++ (-.2,-.21) coordinate (p\q);}
 \foreach \q in {u6,u8,u10,u18} {\path (\q) ++ (.2,-.21) coordinate (p\q);}
 \foreach \q in {d1,d7,d9,d11}  {\path (\q) ++ (-.2,.21) coordinate (p\q);}
 \foreach \q in {d6,d8,d10,d18} {\path (\q) ++ (.2,.21) coordinate (p\q);}

 \filldraw[black!50!white] (pu11) -- (pd11) -- (pd18) -- (pu18) ;
 \filldraw[black!40!white] (pu9) -- (pu10) .. controls (10*.7+.2,.1) .. (pd18) -- (pd9);
 \filldraw[black!30!white] (pu7) -- (pu8) .. controls (8*.7+.2,.1) .. (pd18) -- (pd7);
 \filldraw[black!20!white] (pu1) -- (pu6) .. controls (6*.7+.2,.1) .. (pd18) -- (pd1);
 

\end{tikzpicture}
\caption{Sample construction of the solution $x=(6,1,1)$ of~\eqref{ineq:main-res} with $n=18$ and $\alpha=2$. 
Note that $N(U_0)=D$, $N(U_1) = D-D_0$, $N(U_2)=D_2\cup S$ and $N(R)=S$. 
The numbers under the vertices of the bottom part represent their names.
The numbers in the upper part correspond to sets $m(u)$ (e.g.\ $m(y_1) = \{7,8\}$).
}\label{fig:attack}
\end{center}
\end{figure}

For $1\leq i\leq k$, we assume that ($\star$) holds for the vertices $Y=\{y_1,\ldots,y_{i-1}\}$ after the $(i-1)$-th phase.
Let $U'=U_0\cup\ldots\cup U_{i}$ and $D'=D-(D_0\cup\ldots\cup D_{i-1})$.
First, note that $m(U'-Y)\subseteq D'$ through the ($\star$)-property after the $(i-1)$-th phase.
We split the vertices in $D'$ into two (disjoint) blocks:
$D' = m(U'-Y)\cup X$, where $X$ is simply the set of all unmatched vertices in $D'$.
Next, with $\abs{U'} = x_0+\cdots+x_i+i$, $\abs{Y}=i-1$ and $\abs{D'}=n-(x_0+\cdots+x_{i-1}+i-1)$,
we set a lower bound on the average size of $m(u)$ for $u\in U'-Y$:
\begin{align*}
\frac{\abs{m(U'-Y)}}{\abs{U'-Y}} &=
    \frac{\abs{D'}-\abs{X}}{\abs{U'}-\abs{Y}}  
    =\frac{n-i-\abs{X}-(x_0+\cdots+x_{i-1}-1)}{x_0+\cdots+x_i+1}\\
    &\buildrel \eqref{ineq:main-res}\over \geq % ^{\textrm{(    \ref{ineq:main-res})}} 
    \frac{(x_0+\cdots+x_i)(1+x_i)-\abs{X}-(x_0+\cdots+x_{i-1}-1)}{x_0+\cdots+x_i+1}\\
    &= x_i + \frac{1-\abs{X}}{x_0+\cdots+x_i+1} > x_i - \abs{X}.
  \end{align*}
Clearly, there must exist a vertex $y_i\in U'-Y$ with $\abs{m(y_i)}\geq \frac{\abs{m(U'-Y)}}{\abs{U'-Y}} > x_i-\abs{X}$.
Since all of the values are integers, we have $\abs{X}+\abs{m(y_i)}\geq 1 + x_i$.
The \Br picks for $D_i$ any subset of $X\cup m(y_i)$ with a size of $1+x_i$,  and hence keeps the property ($\star$) satisfied after the $i$-th phase.

Based on the condition ($\star$), after the $k$-th phase, there is $Y=\{y_1,\ldots, y_k\}$ such that  $D-S$ and $m(u)$ are disjoint for all $u\in U-Y$.
Therefore, whenever $m(u)\neq\emptyset$,  for $u\in U$, then either $u\in Y$ or $m(u)\subseteq S$. 
This means that the number of such $u$'s is at most $\abs{Y}+\abs{S} = k + n - (\abs{D_0}+\cdots+\abs{D_k}) = n-(x_0+\cdots+x_k)$. 
Consequently, the size of the multi-match produced by  the \Sr is at most $n-(x_0+\cdots+x_k)$.
\end{proof}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The best matching algorithm}
\label{sect:best_alg}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
To describe the main algorithm, we first introduce some useful definitions.
Consider a deferred $\alpha$-matching algorithm.
Its (partial) multi-match produced at the end of round $t$ is denoted by $m^t:U\to\mathcal{P}(D)$.
In particular, for the $u \in U$ that is presented in round $t_0$, we have $m^t(u)=\emptyset$ for $t< t_0$, and subsequently $(m^{t_0}(u), m^{t_0+1}(u), \ldots,m^{T}(u))$ is a weakly decreasing (see rule~\eqref{game:3}) sequence of sets, i.e. $m^{t_0}(u)\supseteq m^{t_0+1}(u)\supseteq\ldots\supseteq m^{T}(u)$.
After the last round (i.e. round $T$) the function $m^T$ is equal to the function $m$.


At a round $t$ in which a vertex $u \in U$ is presented, we say that $d\in D$ is \emph{available} for $u$ if $d\in N(u)$ and $m^{t-1}(x)\neq\{d\}$ for every vertex $x$ presented earlier.
Furthermore, $d$ is \emph{strongly available} for $u$ if it is available for $u$ and $d$ does not belong to any $m^{t-1}(x)$. 
Vertex $e\in U$ is \emph{ready} for $u$ if $m^{t-1}(e)$ contains an  element that is available for $u$. See Figure~\ref{fig:available_example} for an example.
\begin{figure}[tbh]
\begin{center}
\begin{tikzpicture}%[scale=.9]

\coordinate (d1) at (0, 0);
\coordinate (d2) at (2, 0);
\coordinate (d3) at (4, 0);
\coordinate (d4) at (6, 0);

\coordinate (u1) at (0, 2);
\coordinate (u2) at (6, 2);
\coordinate (u)  at (3, 2);

\fill (d1) circle (2pt) node[below] {\footnotesize $1$};
\fill (d2) circle (2pt) node[below] {\footnotesize $2$};
\fill (d3) circle (2pt) node[below] {\footnotesize $3$};
\fill (d4) circle (2pt) node[below] {\footnotesize $4$};

\fill (u1) circle (2pt) node[above] {\footnotesize $u_1$}; 
\node[above] at (0, 2.3) {\footnotesize $m(u_1)=\{2,3\}$};
\fill (u2) circle (2pt) node[above] {\footnotesize $u_2$};
\node[above] at (6, 2.3) {\footnotesize $m(u_2)=\{4\}$};
\fill (u) circle (2pt) node[above] {\footnotesize $u$};

\path[draw,dashed] (d1) -- (u1) -- (d4);
\path[draw,dashed] (d1) -- (u) -- (d4);
\path[draw,dashed] (d4) -- (u2);

\node[right] at (-4, 2) {\footnotesize $N(u_1)=\{1, 2, 3, 4\}$};
\node[right] at (-4, 1.5) {\footnotesize $N(u_2)=\{4\}$};
\node[right] at (-4, 1) {\footnotesize $\ N(u)=\{1, 2, 3, 4\}$};

\end{tikzpicture}
\caption{
A sample graph at the moment a vertex $u$ is presented.
Element $1$ is strongly available, and both $2$ and $3$ are (just) available for $u$.
Element $4$ is not available for $u$. Vertex $u_1$ is ready for $u$.
}\label{fig:available_example}
\end{center}
\end{figure}
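The three notions above can be stated compactly in code. The following Python sketch uses hypothetical names (`nbrs` maps a vertex to its neighbourhood, `match` maps every earlier vertex $x$ to $m^{t-1}(x)$) and mirrors the definitions; on the instance of Figure~\ref{fig:available_example}, element $1$ comes out strongly available, $2$ and $3$ merely available, $4$ unavailable, and $u_1$ ready for $u$.

```python
# A sketch of the definitions, under hypothetical names: `nbrs` maps a
# vertex u to N(u); `match` maps every earlier vertex x to m^{t-1}(x).

def available(d, u, nbrs, match):
    # d is a neighbour of u, and d is not the sole element assigned to
    # any earlier vertex
    return d in nbrs[u] and all(m != {d} for m in match.values())

def strongly_available(d, u, nbrs, match):
    # available for u, and not assigned to any earlier vertex at all
    return available(d, u, nbrs, match) and all(d not in m for m in match.values())

def ready(e, u, nbrs, match):
    # e holds at least one element that is available for u
    return any(available(d, u, nbrs, match) for d in match[e])
```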


We present an algorithm for the \Sr called $\alpha$-\Alg{}. 
Suppose that a vertex $u$ is presented in round $t$, and let $U_t$ be the set of vertices presented so far (including $u$). 
The function  $m^{t-1}$ defined on $U_{t-1}$ is already known, and the algorithm has to construct the function $m^t$ defined on $U_t$. 
The algorithm initially assigns $m^t(x) := m^{t-1}(x)$ for all $x\in U_{t-1}$ and sets $m^t(u) := \emptyset$. 
Then, the set $m^t(u)$ is increased, one element at a time, until a certain condition is satisfied.
During the process some other sets $m^t(x)$ may be decreased.
The precise description is listed in Algorithm~\ref{alg}.

\begin{algorithm}
\label{alg}
\caption{$\alpha$-\Alg{}$(u)$ -- round $t$}
\Let $m^t:= m^{t-1}$ \;
\Let $m^t(u):=\emptyset$ \;
pick at most $\alpha$ strongly available elements for $u$ and put them into $m^t(u)$ \; \label{line:3}
\While {there exists a vertex $e\in U$ that is ready for $u$ and satisfies 
   $$
    \abs{m^t(u)}  +2 \leq \abs{m^t(e)}\label{cond:balanced}
   $$ \label{line:4}}
 { from the set of all such vertices, pick $e$ with the maximum size of $m^t(e)$ \;
   move one element available for $u$ from $m^t(e)$ to $m^t(u)$}
\end{algorithm}
%
The condition in line~\ref{line:3} guarantees that the size of $m^t(u)$ is never greater than $\alpha$.
Note also that $\alpha$-\Alg{} never leaves $m^t(u)$ empty if there exists an element available for $u$; thus, in this respect, $\alpha$-\Alg{} can be considered as greedy. 
For $\alpha=1$, the above algorithm is just a simple greedy construction of a bipartite matching.
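A single round of the algorithm can be sketched in Python. This is a minimal sketch, not the pseudocode verbatim: the data layout (`nbrs[u]` for $N(u)$, `match[x]` for the set currently assigned to $x$) is a hypothetical choice, availability is evaluated against the state at the start of the round (as in the definitions of Section~\ref{sect:best_alg}), and sorting in line~3 is only there to make the sketch deterministic.

```python
def balanced_round(u, nbrs, match, alpha):
    """One round of alpha-BALANCED (a sketch): `nbrs[u]` is N(u),
    `match[x]` is the set assigned to x; `match` is mutated in place."""
    held = set().union(*match.values())                 # elements in some m(x)
    single = {next(iter(m)) for m in match.values() if len(m) == 1}

    def avail(d):                                       # "available for u"
        return d in nbrs[u] and d not in single

    # line 3: pick up to alpha strongly available elements
    strong = sorted(d for d in nbrs[u] if d not in held and d not in single)
    match[u] = set(strong[:alpha])
    # lines 4-6: while some ready vertex is at least 2 elements ahead,
    # move one available element from the largest such set to m(u)
    while True:
        ready = [e for e in match if e != u
                 and any(avail(d) for d in match[e])
                 and len(match[u]) + 2 <= len(match[e])]
        if not ready:
            break
        e = max(ready, key=lambda v: len(match[v]))
        d = next(d for d in match[e] if avail(d))
        match[e].remove(d)
        match[u].add(d)
```

For example, with $\alpha=2$ and a third vertex whose whole neighbourhood is already held by an earlier vertex, the rebalancing loop splits the larger set between the two.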

The following proposition describes the performance of $\alpha$-\Alg{}. 
\begin{proposition}\label{prop:balanced_bound}
 The size $k$ of the multi-match produced by $\alpha$-\Alg{} in a deferred $\alpha$-matching game of size $n$ equals  $n-(x_0+x_1+\cdots+x_k)$
 for a certain pair $(k,(x_0,\ldots,x_k))$ satisfying~\eqref{ineq:main-res}.
\end{proposition}
\begin{proof}
Consider an instance of the matching game of size $n$ in which the \Br produced the graph $G=(U,D,E)$, and the $\alpha$-\Alg{} algorithm constructed the multi-match $m:U\to\mathcal{P}(D)$. 
Suppose that $T$ rounds have been played in the game.
\emph{The presentation time} of an element $u\in U$ is the index of the round in which $u$ has been revealed. 

Let $X$ be the set of all elements of $D$ that are assigned to no vertex, i.e. $X = D - \bigcup_{u\in U} m(u)$. 
The size of the multi-match produced by $\alpha$-\Alg{} is equal to the size of $\mathbf Y = \{u \in U : m(u)\neq \emptyset\}$.
The proof of the proposition requires the following claims. 
\begin{claim}
\label{claim:d1}
  If  $x\neq y$, $m^{t_1}(x) \cap m^{t_2}(y)\neq\emptyset$ and $t_1 < t_2$, then $\abs{m^{t_1+1}(x)} \geq\abs{m^{t_2}(y)}$. 
\end{claim}
\begin{proof}
We prove the claim by induction on $t_2-t_1$.
For the induction base suppose that $t_2 = t_1 + 1$.
This means that during round $t_2$, the $\alpha$-\Alg{} algorithm removed (at least) one element from $m^{t_2}(x)$ and inserted it into $m^{t_2}(y)$. 
Let $d\in m^{t_1}(x)\cap m^{t_2}(y)$ be the last such element (see Figure~\ref{fig:common-element}).
This only happens when the condition from line~\ref{line:4} of the algorithm  is satisfied, and  $x$ is a vertex with the maximum size of the assigned set among the vertices ready for $y$. 
Let $s$ be the size of the set assigned to $x$ at that moment  (in terms of Algorithm~\ref{alg}, this is $\abs{m^{t_2}(x)}$). 
Clearly, $\abs{m^{t_1}(x)} \geq s$, since $\abs{m^{t_1}(x)}$ is the size of the set assigned to $x$ in the beginning of round $t_2$, and since that set can only become smaller during the round. 
Furthermore, $s-1=\abs{m^{t_1+1}(x)}$, since element $d$ was the last one removed from $m^{t_2}(x)$.

The condition from line~\ref{line:4} of $\alpha$-\Alg{} guarantees that the set that has just been increased has no more elements than the set that has been decreased. 
That property, combined with the fact that for any vertex that was ready for $y$, the assigned set was not greater than $s$, gives $s > \abs{m^{t_2}(y)}$.
Thus, indeed, $\abs{m^{t_1+1}(x)}\geq \abs{m^{t_2}(y)}$.

For the general case, take any $d\in m^{t_1}(x)\cap m^{t_2}(y)$ and consider the last round $t \geq t_1$ in which $d\in m^t(x)$.
Note that $t < t_2$, since by assumption $d\in m^{t_2}(y)$ with $y\neq x$ and $t_2 > t_1$.
Therefore, there is $z\in U$ different from $x$ such that $d\in m^{t+1}(z)$.
Based on the induction base and the fact that the sizes of the matching sets of fixed $u\in U$ never increase, we obtain
$\abs{m^{t_1}(x)} \geq \abs{m^{t_1+1}(x)} \geq \abs{m^{t+1}(x)} \geq \abs{m^{t+1}(z)}$.
If $z=y$, then simply $\abs{m^{t+1}(z)} \geq \abs{m^{t_2}(y)}$, and the claim follows.
The case $z\neq y$ implies that $t+1 < t_2$; however we also have $t_2 - (t+1) < t_2 - t_1$.
Therefore, by the induction hypothesis, we obtain $\abs{m^{t+1}(z)} \geq \abs{m^{t+2}(z)} \geq \abs{m^{t_2}(y)}$,
which concludes the proof.
\end{proof}

This claim has the following interesting consequence: if we focus on a single element $d\in D$ and track the sizes of the matching sets that contain this element during the game, then these sizes form a non-increasing sequence.

\usetikzlibrary{arrows,calc,shapes,decorations.pathreplacing}
\begin{figure}[tbh]
\begin{center}
\begin{tikzpicture}%[scale=.9]

\coordinate (d) at (5,0);
\coordinate (x) at (4,2);
\coordinate (y) at (6,2);

\fill (d) circle (2pt) node[below] {$d$};
\fill (x) circle (2pt) node[below] {$x$};
\fill (y) circle (2pt) node[below] {$y$};

\path[draw] (1,0) -- (9,0);
\path[draw,dashed] (2,0) -- (x) -- (6,0);
\path[draw,dashed] (4,0) -- (y) -- (8,0);

\draw (3.8,2.3) rectangle (4.2,4);
\draw[dashed] (3.8,4) rectangle (4.2,4.5);
\path (4,3.8) node {\st{$d$}};
\path[draw] (3.8,3.6) -- (4.2,3.6);

\draw (5.8,2.3) rectangle (6.2,3);
\draw[dashed] (5.8,3) rectangle (6.2,3.6);
\path[draw] (5.8,2.6) -- (6.2,2.6);
\path (6,2.8) node {$d$};

\draw[thick,->] (4.3,3.8) .. controls (5,3.8) and (4.5,2.8) .. (5.7,2.8);

\draw[decorate,decoration={brace,mirror,raise=6pt,amplitude=10pt}]    (3,4.5)--(3,2.3) ;
\draw[decorate,decoration={brace,mirror,raise=6pt,amplitude=10pt}]    (3.9,4)--(3.9,2.3);
\node at (3.2,3.2) {$s$};
\node[left] at (2.5,3.5) {$\abs{m^{t_1}(x)}$};

\draw[decorate,decoration={brace,raise=6pt,amplitude=10pt}]    (6.1,3.6)--(6.1,2.3);
\node[right] at (6.6,3) {$s-1\geq\abs{m^{t_2}(y)}$};

\path[draw,dashed] (4.2,3.6) -- (5.8,3.6);

\end{tikzpicture}
\caption{Moving element $d$ from $m(x)$ into $m(y)$.}
\label{fig:common-element}
\end{center}
\end{figure}


\begin{claim}\label{claim:els-above-X}
 If $t$ is the presentation time of $u$ and $N(u)\cap X \neq \emptyset$, then $\abs{m^t(u)}=\alpha$.
\end{claim}
\begin{proof}
 Let $d\in N(u)\cap X$. By the definition of $X$, element $d$ is strongly available for $u$ when $u$ is presented. 
But $d$ is not chosen in line~\ref{line:3} of $\alpha$-\Alg{}. 
This means that at least $\alpha$ other strongly available elements for $u$ had already been added to $m^t(u)$. 
Thus, indeed $\abs{m^t(u)}=\alpha$.
\end{proof}

The next claim will allow us to prove that inequalities~\eqref{ineq:main-res} are satisfied for carefully chosen $(k,(x_0,\ldots,x_k))$.
We will apply it for quite specific choices of $Y\subseteq \mathbf Y$ (we sort $\mathbf Y$ so that the sizes of $m(y)$ do not increase along the list, and then consider prefixes of the resulting list).
However, as this does not seem to simplify the proof, the claim is formulated for arbitrary $Y\subseteq \mathbf Y$.
Once it is fixed (see Figure~\ref{fig:partition}), we denote by $\mu$ the minimum size of $m(y)$ for $y\in Y$.
Set $M \subset D$ consists of the elements assigned to the elements of $Y$ at the end of the game (i.e. $M= m(Y)$).
Finally, set $Q\subset U$ consists of all vertices that have at least one element of $M$ or $X$ in their neighbourhood (in particular, $Y\subseteq Q$).
Intuitively, these are the vertices that directly influence the final shape of $M$.
\begin{figure}[tbh]
\begin{center}
\begin{tikzpicture}%[scale=.9]

\coordinate (y1) at (1.3, 3.15);
\coordinate (y2) at (1.8, 3.15);
\coordinate (y3) at (2.3, 3.15);
\coordinate (y4) at (2.8, 3.15);

\fill (y1) circle (1pt) node[below=2pt] {\footnotesize $y_1$};
\fill (y2) circle (1pt) node[below=2pt] {\footnotesize $y_2$};
\fill (y3) circle (0pt) node {$\dots$};
\fill (y4) circle (1pt) node[below=2pt] {\footnotesize $y_i$};

\path[draw] (0, 0) rectangle (2, .3);
% \path[draw] (2, 0) rectangle (6, .3);
\path[draw] (6, 0) rectangle (10, .3);

\draw[decorate,decoration={brace,mirror,raise=1pt,amplitude=5pt}]    (2, 0) -- (6, 0) ;

\node[below] at (1, 0) {$X$};
\node[below] at (4, -.1) {$M$};
\node[below] at (8, 0) {$D-X-M$};

\path[draw] (1, 3) rectangle (6, 3.3);
\node[above] at (3.5, 3.3) {$Q$};

\path[draw] (7, 3) rectangle (10, 3.3);
\node[above] at (8.5, 3.3) {$U-Q$};

% \path[draw] (1, 2.9) rectangle (3, 3.6);
\draw[decorate,decoration={brace,mirror,raise=1pt,amplitude=5pt}]    (3, 3.3) -- (1, 3.3) ;
\node[above] at (2, 3.5) {$Y$};


\draw[rounded corners] (2, 0) rectangle node {\footnotesize $m(y_1)$} (3, .5) ;
\draw[rounded corners] (3, 0) rectangle node {\footnotesize $m(y_2)$} (4.4, .5) ;
% \draw[ rounded corners] (4.5, 0) rectangle node {$\cdots$} (5, .5) ;
\node at (4.75, .2) {$\cdots$};
\draw[rounded corners] (5, 0) rectangle node {\footnotesize $m(y_i)$} (6, .5) ;

\path[draw,dashed] (0, .3) -- (1, 3);
\path[draw,dashed] (10, .3) -- (6, 3);
\path[draw,dashed] (6, .3) -- (7, 3);
\path[draw,dashed] (10, .3) -- (10, 3);

\draw[thick,->] (2.45, .55) -- (1.4, 2.6) ;
\draw[thick,->] (3.75, .55) -- (1.95, 2.6) ;
% \draw[thick,->] (5, .55) -- (2.4, 3) ;
\draw[thick,->] (5.5, .55) -- (3, 2.6) ;

\end{tikzpicture}
\caption{A partition of $U$ and $D$ with a chosen subset $Y = \{y_1, y_2, \ldots, y_i\}\subseteq U$. 
Set $M$ is equal to $\bigcup_{y\in Y}m(y)$.
Each element $q\in Q$ has a neighbour in $M$ or in $X$, i.e. $N(q) \cap (M\cup X) \not= \emptyset$. 
For element $u\in U-Q$, we have $N(u) \subseteq D-M-X$.\newline
Recall that $x\in X$ iff $x\notin m(u)$ for all $u\in U$.}
\label{fig:partition}
\end{center}
\end{figure}

\begin{claim}\label{claim:main-ineq}
For any subset $Y\subseteq \mathbf Y$  we have
\[
	(\abs{Q}-\abs{Y})(\mu - 1) + \abs{M}  \leq \abs{D-X},
\]
where $\mu= \min\{\abs{m(y)} : y \in Y\}, M=\bigcup_{y\in Y} m(y)$ and $Q$ is the set of all vertices $q\in U$ for which $N(q) \cap (M\cup X) \neq \emptyset$. 
\end{claim}

\begin{proof}
For every element $q\in Q$, we will assign a witnessing set $Z_q\subseteq D - X$, such that
\begin{enumerate}
\def\labelenumi{(\theenumi)}
\def\theenumi{\alph{enumi}}
 \item all sets $Z_q$ are mutually disjoint,
 \item $\abs{Z_q} \geq \abs{m(q)}$ for $q\in Y$,\label{witnessing_set:b}
 \item $\abs{Z_q} \geq \mu - 1$ for $q\in Q-Y$.\label{witnessing_set:c}
\end{enumerate}
Consequently, the left-hand side of the claimed inequality will be a lower bound for the total count of witnessing elements, which is $\sum_{q\in Q} \abs{Z_q}$.
This number cannot exceed the total number of assigned elements, which is the value of the right-hand side.

% For every element of $Q$ we are going to assign a witnessing set of elements of $D - X$ in such a way that these sets are mutually disjoint.
% The size of a witnessing set will be at least $\mu$ for the elements of $Y$ and at least $\mu-1$ for the remaining elements of $Q$.
% Then, the left hand side of the claimed inequality will be a lower bound for the total count of witnessing elements.
% That number must be smaller that the total number of assigned elements, which is the value of the right hand side.

The claim is obvious for $\mu=1$, since $M\cap X = \emptyset$.
We assume that $\mu>1$.
Let $i=\abs{Y}, s=\abs{Q}$, and let $(q_1, \ldots, q_s)$ be the enumeration of $Q$ for which the sequence of the corresponding presentation times $(t_1, \ldots, t_s)$ is strictly decreasing.
This means that $q_1,\ldots,q_s$ are in the reverse of the arrival order.
For each $q_j$, we recursively define its witnessing set as
\[
	Z_j:=m^{t_j}(q_j)-(Z_1\cup\ldots\cup Z_{j-1})\subseteq D-X.
\]
Clearly, these sets are disjoint and consist of elements of $D-X$; therefore, $\abs{Z_1}+\cdots+\abs{Z_s}\leq \abs{D-X}$.
All that remains to complete the proof is to show that $\abs{Z_j}\geq \mu - 1$ for all $j$ ($1\leq j\leq s$) and that $Z_j \supseteq m(q_j)$ for $q_j\in Y$;
see conditions~\eqref{witnessing_set:b} and~\eqref{witnessing_set:c}.
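The recursive construction of the witnessing sets can be sketched directly. In the sketch below the names are hypothetical: `order` lists $Q$ in arrival order, and `snapshots[q]` stands for $m^{t_q}(q)$, the set assigned to $q$ right after its own round.

```python
def witnessing_sets(order, snapshots):
    """Build Z_j := m^{t_j}(q_j) - (Z_1 | ... | Z_{j-1}) in reverse
    arrival order, so presentation times are strictly decreasing."""
    used, zs = set(), {}
    for q in reversed(order):
        zs[q] = snapshots[q] - used   # remove elements already witnessed
        used |= zs[q]
    return zs
```

By construction the sets $Z_j$ are mutually disjoint, which is exactly property (a).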

Suppose first that $q_j \in Y$.
The set $m^t(q_j)$ of the elements assigned to $q_j$ can only become smaller after vertex $q_j$ has been presented; therefore, we have $m(q_j)\subseteq m^t(q_j)$ for all $t\geq t_j$. 
In particular, for every $j'\leq j-1$, we have $t_{j'} \geq t_j$, hence $m(q_j) \cap Z_{j'} = \emptyset$. 
This gives $Z_j \supseteq m(q_j)$.


Suppose now that $q_j\notin Y$.
Let $R_j := (Z_1\cup\ldots\cup Z_{j-1} )\cap m^{t_j}(q_j)$ (so that $m^{t_j}(q_j)=Z_j\cup R_j$).
Assume also that $\abs{Z_j}<\alpha$, since otherwise $\abs{Z_j}=\alpha>\mu-1$.
Consider two cases:

\begin{figure}[tbh]
\begin{center}
\begin{tikzpicture}%[scale=.9]

\path[draw] (0, 0) -- (10, 0);

\coordinate (q) at (5, 3);
\coordinate (d) at (2, 0);
\coordinate (u) at (1, 3);

\fill (q) circle (2pt) node [above] {\footnotesize $q_j$};
\fill (d) circle (2pt) node [below] {\footnotesize $d\in M$};
\fill (u) circle (2pt) node [above] {\footnotesize $u$};

\path[draw,dashed] (1,0) -- (q) -- (9,0);

\filldraw[rounded corners, fill=white, draw=black] (3, -.15) rectangle node[below=4pt] {\footnotesize $Z_j$} node {\footnotesize $m^{t_j}(q_j)$} (5, .3);
\filldraw[rounded corners, fill=white, draw=black] (6, -.15) rectangle node[below=2pt] {\footnotesize $Z_{j-1}$} (7, .15);
\filldraw[rounded corners, fill=white, draw=black] (8, -.15) rectangle node[below=2pt] {\footnotesize $Z_1$} (9.5, .15);

\draw[thick, ->] (d) ++ (-.05, .2) --  (1.05, 2.8);
\draw[thick, ->] (4, .35)  --  (4.95, 2.8);

\end{tikzpicture}
\caption{Case $R_j = \emptyset$ at round $t_j$.
The element $d\in M$ is available for $q_j$, but $d\in m^{t_j}(u)$ for some $u\in U$.
}
\label{fig:case1}
\end{center}
\end{figure}
\textbf{Case 1: $R_j=\emptyset$.}
Based on Claim~\ref{claim:els-above-X} and the definition of $q_j\in Q$, inequality $\abs{Z_j}<\alpha$ implies that $N(q_j)\cap X = \emptyset$ and thus $N(q_j)\cap M\neq\emptyset$.
Let $d\in N(q_j)\cap M$. 
Note that element $d$ is not strongly available for $q_j$ (at time $t_j$);
otherwise, it would have been inserted into $m^{t_j}(q_j)$ (as $\abs{Z_j}< \alpha$).
However, since $d\in m(y)$ for some $y \in Y$ at the end of the game, this would also mean that $d\in R_j$, which contradicts our assumption that $R_j=\emptyset$.
Therefore, at the time $t_j$, element $d$ must already belong to $m^{t_j}(u)$ for some $u\in U$ presented earlier (see Figure~\ref{fig:case1}).

We show that $d$ is available for $q_j$ (at time $t_j$).
At the end of the game, $m(y)$ has size at least $\mu$; therefore, by Claim~\ref{claim:d1}, every set containing $d$ during the game had size at least $\mu$.
In particular, $|m^{t_j}(u)|\geq \mu\geq 2$ (since $\mu>1$).
This means that $d$ is indeed available for $q_j$ at time $t_j$.
%Actually $u\in Q$ since it has element $d\in M$ in its neighbourhood.
%Thus, there are two elements (possibly the same) $u\in Q$ and $y\in Y$ such that $d\in m^{t_j}(u) \cap m(y)$.
%Moreover $y$ is not presented before $u$ (if $u\neq y$ then $u$ is presented before $q_j$ and $y$ after $q_j$).
%Regardless of whether or not $u$ and $y$ are the same, by Claim~\ref{claim:d1}, we deduce $\abs{m^{t_j}(u)} \geq \abs{m(y)} \geq \mu $. 

The algorithm in round $t_j$  did not select the element $d$ to be assigned to $q_j$; consequently, at the end of the round, the inequality in line~\ref{line:4} of the algorithm was not satisfied. 
This means that $\abs{Z_j}=\abs{m^{t_j}(q_j)} \geq \abs{m^{t_j}(u)}-1 \geq \mu -1$.

\textbf{Case 2: $R_j \neq \emptyset$.} 
Let $t>t_j$ be the smallest index (the first moment) for which $R_j \cap m^t(q_j) = \emptyset$; such a $t$ exists as a straightforward consequence of the definitions. 
This means that $t$ is the last round in which an element of $R_j$ was removed from the set assigned to $q_j$, in particular,  $\abs{Z_j} \geq  \abs{m^{t}(q_j)}$. 
Consider any element $d\in R_j\cap m^{t-1}(q_j)$.
Based on the definition of $Z_j$, there exists a number $l<j$ such that $d\in Z_l\subseteq m^{t_l}(q_l)$ with $t-1 < t_l$. 
Based on Claim~\ref{claim:d1}, this means that $\abs{m^{t}(q_j)}\geq \abs{m^{t_l}(q_l)}$, thus $\abs{Z_j} \geq \abs{Z_l}$. 
Straightforward induction (with Case 1 as basis) gives $\abs{Z_j}\geq \mu-1$.
\end{proof}

We are ready to prove the proposition. 
Fix any optimal (maximum) matching in graph $G$ and let $F\subseteq D$ be the set of all elements in $D$ outside the  matching. 
Consider an enumeration $(y_1,\ldots, y_k)$ of  $\mathbf Y$ such that, for  $x_i=\abs{m(y_i)}-1$, we have $ x_1\geq x_2\geq \ldots\geq x_k\geq 0$. 
Let $x'=\abs{X-F}$, $f'=\abs{F-X}$ and $x_0 = x'-f'$.
Observe that $\abs{F} = f'+\abs{X}-x'$. 
This implies that
\begin{align}\label{eq:size:D-X}
 \abs{D-X}=\abs{D}-\abs{X}= n + \abs{F}-\abs{X} = n-x_0.
\end{align}

To show that $(k, (x_0,x_1, \ldots,x_k))$ satisfies~\eqref{ineq:main-res}, fix $1\leq i\leq k$ and apply Claim~\ref{claim:main-ineq} for $Y=\{y_1,\ldots,y_i\}$.  
Thus, $\abs{M}=x_1+\cdots+x_i+i$ and  $\mu = x_i+1$. 
Recall that in the selected optimal matching each vertex in $D-F$ has a unique match in $U$.
Therefore, $\abs{Q} \geq \abs{M-F}+\abs{X-F} = \abs{M-(F-X)}+\abs{X-F}  \geq \abs{M}-f'+x'$ 
since $M\cap X = \emptyset$,
and thus,
$$(\abs{M}-i+x_0)(\mu-1)+\abs{M}\leq \abs{D-X} \buildrel \eqref{eq:size:D-X}\over = n-x_0,$$ 
which  can be  rewritten as
\[ (x_0+x_1+\cdots+x_i)\cdot(1+x_i)\leq n-i.\]

To show $ (1+\alpha)x_0 \leq n$, we will consider 
the set $Q'$ of all vertices $q\in U$ for which $N(q)\cap X \neq\emptyset$. 
For each $q\in Q'$
we define $s(q)$ as the set of all strongly available elements inserted into $m^{t}(q)$, 
at the time $t$ when $q$ was presented, 
in line~\ref{line:3} of the algorithm. 
Firstly, note that $s(q_1)$ and $s(q_2)$ are disjoint for distinct $q_1,q_2\in Q'$. 
This is because $s(q)$ gathers only strongly available elements, and once an element has been assigned, it never becomes strongly available again.
Secondly, based on Claim~\ref{claim:els-above-X}, we know that $\abs{s(q)}=\alpha$ for each $q\in Q'$. 

The above two facts, together with $\bigcup_{q\in Q'}s(q) \subseteq D-X$ and~\eqref{eq:size:D-X}, imply that $\alpha\abs{Q'}\leq n-x_0$.
Furthermore, since each element in $D-F$ has a unique match in $U$, we have $\abs{Q'}\geq \abs{X-F}=x'\geq x_0$.  
Therefore, $ (1+\alpha)x_0 \leq n$.


To complete the proof, recall that $\bigcup_{y\in \mathbf Y} m(y) = D-X$.
Thus, $x_1+\cdots+x_k+k=n-x_0$, and consequently, the size of the multi-match constructed by the algorithm equals $k = n-(x_0+x_1+\cdots+x_k)$.
Furthermore, since $k$ cannot be larger than $n$, we have $x_0+\cdots+ x_k \geq0$.
\end{proof}

Combining Proposition~\ref{prop:spoiler_bound} and Proposition~\ref{prop:balanced_bound} we finally arrive at Theorem~\ref{thm:optimal-bal}.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Competitiveness of the algorithm}
\label{sect:comp}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Recall that the worst (minimum) size of the multi-match constructed by $\alpha$-\Alg{} in any deferred $\alpha$-matching game of size $n$ is denoted by $\Val_{\alpha-\Alg{}}(n)$.
To simplify this notation we use $\bal(\alpha,n)$ instead of $\Val_{\alpha-\Alg{}}(n)$.
Similarly, for the competitive ratio of $\alpha$-\Alg{}, i.e. $\liminf_{n\to\infty} \bal(\alpha,n)/n$, we use the notation $\bal(\alpha)$.

Propositions~\ref{prop:spoiler_bound} and~\ref{prop:balanced_bound} imply that in order to determine $\bal(\alpha,n)$, it is sufficient to find a pair $(k,(x_0,\ldots, x_k))$ that satisfies~\eqref{ineq:main-res} and maximises $\sum_{i=0}^k x_i$.
Moreover, based on Proposition~\ref{prop:positiv:x0}, we can assume that in the maximising solution, we have $x_0= \lfloor \frac{n}{1+\alpha} \rfloor$.
From now on, we will consider $x_0$ in system~\eqref{ineq:main-res} as fixed together with $n$ and $\alpha$.

Consider a pair $(k,x)$ such that $x_i= x_{i+1}$ for some $i\geq1$.
If $(k,x)$ satisfies the $(i+1)$-th inequality from~\eqref{ineq:main-res}, i.e. $(x_0+\cdots+x_{i+1})(1+x_{i+1})\leq n-(i+1)$,
then it also satisfies the $i$-th inequality from~\eqref{ineq:main-res}:
\[
 (x_0+\cdots+x_{i})(1+x_{i})\leq (x_0+\cdots+x_i+x_{i+1})(1+x_{i+1})\leq n-(i+1) < n-i.
\]
This suggests another representation of the solutions.
Let $(k,x)$ be a pair satisfying~\eqref{ineq:main-res} with $x=(x_1, \ldots, x_k)$.
Note that $x$ is a non-increasing sequence of non-negative integers.
We will assume, without loss of generality, that $x_k>0$.
Let $Y(x)=(y_1, \ldots, y_m)$, where $y_j= |\{i>0: 1+x_i= j\}|$ for $j=1, \ldots,m$, and $m$ is the largest index with $y_m \neq 0$.
Thus, $y_j$ is simply the number of the (consecutive) entries of $x$ with the value $j-1$.
Clearly, $m-1$ equals the maximum value of the sequence $x$.
This value is realised in $x_1$, which gives $m=x_1+1$.
Consequently, for every $i$ for which $x_{i}\neq x_{i+1}$, the inequality 
\[
   (x_0+x_1+\cdots+x_i)(1+x_i) \leq n-i
\]
can be rewritten as
\[
	(x_0 + (m-1) \cdot y_m +\cdots +(t-1)\cdot y_t) \cdot t \leq n- (y_m +\cdots + y_t),
\]
where $t=x_i+1$.
Thus, the sequence $(y_1, \ldots, y_m)$ belongs to the image of $Y$ whenever it satisfies the following system of inequalities 
\begin{equation}\label{cond:prob:B}
   \eqB_{n,m}(x_0):\qquad t\cdot x_0 + \sum_{i=t}^m (1+(i-1)t)\vr{y_i} \leq n \quad \text{ for $t=1, \ldots,m$}.
\end{equation}
Moreover, since $y_i\geq0$, the $m$-th inequality in~\eqref{cond:prob:B} implies that
\begin{equation}\label{cond:m}
 n-mx_0\geq0.
\end{equation}
On the other hand, having any $m>0$ satisfying~\eqref{cond:m} and any solution $(y_1,\ldots,y_m)$ of~\eqref{cond:prob:B}, we can easily
find (using the definition of the $y_j$'s) a solution $(k,x)$ of~\eqref{ineq:main-res} such that $x_0+x_1+\cdots + x_k = x_0 + \sum_{i=1}^m(i-1)y_i$.
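The change of representation is easy to code; the sketch below (with hypothetical function names) maps a non-increasing sequence $x$ to $Y(x)$ and tests the system $\eqB_{n,m}(x_0)$ as written in~\eqref{cond:prob:B}.

```python
def to_y(x):
    """Y(x) for a non-increasing integer sequence x with x[-1] > 0:
    y_j counts the entries of x equal to j - 1; here m = x_1 + 1."""
    m = x[0] + 1
    return [sum(1 for xi in x if xi + 1 == j) for j in range(1, m + 1)]

def satisfies_psi(y, n, x0):
    """Check Psi_{n,m}(x0): t*x0 + sum_{i=t}^m (1+(i-1)t) y_i <= n
    for every t = 1..m."""
    m = len(y)
    return all(
        t * x0 + sum((1 + (i - 1) * t) * y[i - 1] for i in range(t, m + 1)) <= n
        for t in range(1, m + 1))
```

For instance, $x=(2,2,1)$ gives $Y(x)=(0,1,2)$, and the objective $x_0+\sum_i (i-1)y_i$ equals $x_0+\sum_i x_i$, as expected.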


The above considerations are summarised in the following proposition.
\begin{proposition}
 The minimal size of the multi-match constructed by $\alpha$-\Alg{} over all deferred $\alpha$-matching games of size $n$ is equal to
 $$\bal(\alpha,n)= n -x_0- \sup\left\{\sum_{i=1}^m (i-1) y_i \right\},$$
 where $x_0=\lfloor n/(1+\alpha)\rfloor$ and the supremum is taken over all integers $m>0$ such that $n-mx_0\geq0$ and over all vectors $(y_1,\ldots,y_m)$ of the non-negative integers that satisfy $\eqB_{n,m}(x_0)$.
\end{proposition}

\subsection{LP formulation}
In order to maximise $\sum_{i=1}^m (i-1) y_i$, we will consider the following linear program. We present the primal and the dual formulation.

\noindent{\bf Primal}
 \begin{eqnarray*}
  \text{maximise:}    &\text{ }& \sum\nolimits_{j=1}^m (j-1)y_j\\
  \text{subject to:}  &\text{ }& \sum\nolimits_{j=i}^m (1+(j-1)i)y_j \leq n- ix_0,\text{ and } y_i\geq0\quad \text { for }i=1,\ldots,m
 \end{eqnarray*}
 
\noindent{\bf Dual}
 \begin{eqnarray*}
  \text{minimise:}    &\text{ }& \sum\nolimits_{i=1}^m (n-ix_0)z_i \\
  \text{subject to:}  &\text{ }&  \sum\nolimits_{i=1}^j (1+(j-1)i)z_i \geq j-1,\text{ and } z_j\geq0 \text{ for } j=1,\ldots,m
 \end{eqnarray*}
 
Let $P_i(y) = \sum_{j=i}^m (1+(j-1)i)y_j$ and $D_j(z) = \sum_{i=1}^j (1+(j-1)i)z_i$. 
Note that both systems $(P_i(y) = n-ix_0)_{i=1, \ldots, m}$ and $(D_j(z) = j-1)_{j=1, \ldots, m}$ are square and triangular, and hence have unique solutions. 
Based on the Complementary Slackness Theorem, these solutions will be optimal as long as both are non-negative vectors.

To solve the first system, note that $P_{i+2}(y)+P_{i}(y) - 2P_{i+1}(y) = (1-i+i^2)y_{i}-(1+i)^2y_{i+1}$. 
This, together with the initial values for $y_m$ and $y_{m-1}$, leads to
\begin{align}
 & y_i= y_{i+1} \frac{(i+1)^2}{1-i+i^2} \quad \text{ for $i=1, \ldots,m-2$},\label{solution:yi} \\
 & y_m  =  \frac{n-m \cdot x_0}{1+m(m-1)},\quad \quad y_{m-1}= \frac{x_0+(m-1)y_m}{1+(m-1)(m-2)}.\label{solution:ym}
\end{align}
Through~\eqref{cond:m}, $y_m\geq0$, and thus all $y_i$ are non-negative.

For the second system observe that $D_{j+1}(z)+D_{j-1}(z)-2D_j(z) = (1+j+j^2)z_{j+1}-(1-j)^2z_j$. 
Again, with the initial values for $z_1$ and $z_2$, we arrive at
\begin{align*}
 & z_{j+1} = z_j \frac{(j-1)^2}{1+j+j^2}\quad\text{ for $j=2,\ldots,m-1$},\\
 & z_1 = 0, \quad\quad z_2 = 1/3.
\end{align*}
Therefore, $z_i\geq0$, and, indeed, the above vectors $y$ and $z$ are the optimal solutions for the primal and the dual LP.
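The two recurrence solutions can be verified numerically in exact rational arithmetic. This is a sanity-check sketch (the function names are hypothetical): it builds $y$ and $z$ from the recurrences above and confirms that they solve $(P_i(y)=n-ix_0)_i$ and $(D_j(z)=j-1)_j$, and that the primal and dual objective values coincide.

```python
from fractions import Fraction as Fr

def primal_y(n, m, x0):
    # y_m and y_{m-1} directly, then y_i = y_{i+1} (i+1)^2 / (1-i+i^2)
    y = [Fr(0)] * (m + 1)
    y[m] = Fr(n - m * x0, 1 + m * (m - 1))
    if m >= 2:
        y[m - 1] = (x0 + (m - 1) * y[m]) / (1 + (m - 1) * (m - 2))
    for i in range(m - 2, 0, -1):
        y[i] = y[i + 1] * (i + 1) ** 2 / (1 - i + i * i)
    return y

def dual_z(m):
    # z_1 = 0, z_2 = 1/3, then z_{j+1} = z_j (j-1)^2 / (1+j+j^2)
    z = [Fr(0)] * (m + 1)
    if m >= 2:
        z[2] = Fr(1, 3)
    for j in range(2, m):
        z[j + 1] = z[j] * (j - 1) ** 2 / (1 + j + j * j)
    return z

def P(i, y, m):
    return sum((1 + (j - 1) * i) * y[j] for j in range(i, m + 1))

def D(j, z):
    return sum((1 + (j - 1) * i) * z[i] for i in range(1, j + 1))
```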

To calculate the target function $\sum_{j=1}^m (j-1)y_j$, consider a new variable $y_0$ defined by the additional condition: $P_0(y) = n$.
Equation~\eqref{solution:yi} remains valid for $i=0$ (giving $y_0=y_1$). 
Furthermore, after combining $P_0(y)$ with $P_1(y)$, we obtain $\sum_{j=1}^m (j-1)y_j = y_0-x_0$.
Finally, using~\eqref{solution:yi} and~\eqref{solution:ym} and after rearrangement, we arrive at
\begin{equation}\label{solution:y0}
 x_0 + \sum_{j=1}^m (j-1)y_j = y_0 = \frac{(m-1) n+x_0}{ m} \prod_{i=1}^{m-1} \frac{i+i^2}{1+i+i^2}.
\end{equation}
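The closed form~\eqref{solution:y0} can be cross-checked against the recurrences in exact arithmetic; the sketch below (hypothetical function names) computes $y_0$ both ways and also verifies $P_0(y)=\sum_j y_j=n$ and the identity $\sum_{j\geq1}(j-1)y_j=y_0-x_0$.

```python
from fractions import Fraction as Fr

def closed_form_y0(n, m, x0):
    # y_0 = ((m-1)n + x0)/m * prod_{i=1}^{m-1} (i+i^2)/(1+i+i^2)
    y0 = Fr((m - 1) * n + x0, m)
    for i in range(1, m):
        y0 *= Fr(i + i * i, 1 + i + i * i)
    return y0

def extended_y(n, m, x0):
    # y_1..y_m from the recurrences, extended by y_0 = y_1 (the i = 0 case)
    y = [Fr(0)] * (m + 1)
    y[m] = Fr(n - m * x0, 1 + m * (m - 1))
    if m >= 2:
        y[m - 1] = (x0 + (m - 1) * y[m]) / (1 + (m - 1) * (m - 2))
    for i in range(m - 2, 0, -1):
        y[i] = y[i + 1] * (i + 1) ** 2 / (1 - i + i * i)
    return [y[1]] + y[1:]            # index j holds y_j, with y_0 first
```

For instance, with $n=100$, $m=4$ and $x_0=10$, both routes give $y_0=3720/91$.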

 


\subsection{Lower bound}
The solution of the LP relaxation of~\eqref{cond:prob:B} need not be integral; therefore, solution~\eqref{solution:y0} yields only the following bound:
\[
 \bal(\alpha,n) \geq n-\sup\{y_0 : n-mx_0\geq0\}.
\]


Let $F(z,m)=\frac{(m-1)+z}{m} \prod_{i=1}^{m-1} \frac{i+i^2}{1+i+i^2}$. 
Thus, for fixed $m$, we obtain $y_0= n \cdot F(x_0/n,m)$. 
Note that, for fixed $z>0$, the function $F(z,m)$ increases with $m$ for $m\leq 1/z$, while $F(0,m)$ increases with $m$ indefinitely.
Therefore for $x_0 = \lfloor n / (1+\alpha)\rfloor > 0$, we have
\begin{align}%\label{eq:maxLB}
	\bal(\alpha,n)/n &\geq 1-\max_{m\leq n/x_0}F(x_0/n,m)\nonumber\\
		        &= 1-F(x_0/n,\lfloor n/x_0 \rfloor ) \buildrel n\to\infty\over\longrightarrow 1 - F(1/(1+\alpha),1+\alpha),\label{eq:balLB}
\end{align}
and for $x_0 = 0$ (this occurs when $\alpha\geq n$) the bound is
\begin{align}\label{eq:inftyLB}
  \bal(\alpha,n)/n &\geq 1-\lim_{m\to\infty}F(0,m) =1-\prod_{i=1}^{\infty}\frac{i+i^2}{1+i+i^2} =
  1-\frac{\pi}{\cosh\tfrac{\sqrt{3}}{2}\pi}.
\end{align}
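The numeric value of the limit in~\eqref{eq:inftyLB} can be confirmed directly; the following sketch truncates the infinite product at an arbitrary point and compares with the closed form.

```python
import math

# Partial products of prod_{i>=1} (i+i^2)/(1+i+i^2) approach
# pi / cosh(sqrt(3)*pi/2), roughly 0.4118, so the bound tends to about 0.5882.
prod = 1.0
for i in range(1, 10**6):
    prod *= (i + i * i) / (1 + i + i * i)

limit = math.pi / math.cosh(math.sqrt(3) * math.pi / 2)
```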

\subsection{Upper bound}
Consider $m=\min(\lfloor n/x_0 \rfloor,\ln n )$, with the convention that $n/x_0$ is greater than $\ln n$ when $x_0=0$,
and let $(y_1, \ldots, y_m)$ be the optimal rational solution of $\Psi_{n,m}(x_0)$.
Let $v=(v_1, \ldots, v_m)$ be such that $v_i= \lfloor y_i \rfloor$.
The vector $v$ has only non-negative integer entries, and $v\leq y$. 
The shape of the system $\Psi_{n,m}(x_0)$ guarantees that $v$ also satisfies $\Psi_{n,m}(x_0)$.
Finally, we have
\[
	x_0 + \sum_{i=1}^m (i-1) v_i >  x_0 + \sum_{i=1}^m (i-1) y_i - \sum_{i=1}^{m} (i-1).
\]
The definition of $m$ indicates that $m = o(n)$.
Therefore, through Proposition~\ref{prop:spoiler_bound}, we arrive at
\begin{align*}
	\bal(\alpha,n) &< n \cdot (1-F(x_0/n,m )) + m(m-1)/2
	                =n \cdot (1-F(x_0/n,m )) + o(n).
\end{align*}
Hence, for $x_0=\lfloor n / (1+\alpha) \rfloor > 0$ the bound is
\begin{align}\label{eq:balUB}
 \bal(\alpha,n)/n  < 1-F(x_0/n,\lfloor n/x_0 \rfloor ) + o(1) \buildrel n\to\infty\over\longrightarrow 1 - F(1/(1+\alpha),1+\alpha),
\end{align}
and for $x_0 = 0$ we have
\begin{align}\label{eq:inftyUB}
 \bal(\alpha,n)/n < 1 - F(0,\ln n) + o(1) \buildrel n\to\infty\over\longrightarrow 1-\prod_{i=1}^{\infty}\frac{i+i^2}{1+i+i^2} =
  1-\frac{\pi}{\cosh\tfrac{\sqrt{3}}{2}\pi}.
\end{align}


\subsection{Competitive ratio}
If $\alpha$ is finite, then based on~\eqref{eq:balLB} and~\eqref{eq:balUB}, the ratio of $\alpha$-\Alg{} is equal to  $\bal(\alpha)= 1 - F(\frac{1}{\alpha+1},\alpha + 1)$.
Note that for the case $\alpha\to\infty$, $F(1/(1+\alpha),1+\alpha)\to {\pi}/{\cosh\tfrac{\sqrt{3}}{2}\pi}$.
Thus, based on~(\ref{eq:balLB}-\ref{eq:inftyUB}), we arrive at $\bal(\alpha) = 1-{\pi}/{\cosh\tfrac{\sqrt{3}}{2}\pi}$.
This finally proves Theorem~\ref{thm:ratio}.
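The closed-form ratio is easy to tabulate; in the sketch below, `F_ratio` and `bal_ratio` are hypothetical names for the function $F$ and the ratio $\bal(\alpha)$.

```python
def F_ratio(z, m):
    # F(z, m) = ((m-1)+z)/m * prod_{i=1}^{m-1} (i+i^2)/(1+i+i^2)
    p = 1.0
    for i in range(1, m):
        p *= (i + i * i) / (1 + i + i * i)
    return ((m - 1) + z) / m * p

def bal_ratio(alpha):
    # competitive ratio of alpha-BALANCED for finite alpha
    return 1 - F_ratio(1 / (1 + alpha), 1 + alpha)
```

For $\alpha=1$ this gives $\bal(1)=1/2$, consistent with the greedy matching bound, and $\bal(2)=5/9$; the values grow towards $1-\pi/\cosh\tfrac{\sqrt{3}}{2}\pi\approx0.588$.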


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusions and remarks}\label{sec:remarks}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

In the classical on-line matching problem, the randomised approach has a considerable advantage over the deterministic approach. 
This paper shows that in the model with the deferred decisions, the  deterministic bounds can be pushed further.  
It would be interesting to know what can be achieved with randomised strategies.
\begin{problem}
 What is the competitive ratio of the randomised version of the deferred matching problem?
\end{problem}

The authors of~\cite{KalKir2000} consider a variant (called $b$-matching), in which each server can perform up to $b$ tasks. 
The competitive ratio of their optimal algorithm approaches $1-\frac{1}{e}$ with $b\rightarrow \infty$, which is a barrier for any randomised matching algorithm (see~\cite{KVV90}). 
We expect that in our model, it will be possible to break the $1-\frac{1}{e}$ barrier in the case of $b$-matching.
\begin{problem}
 What is the competitive ratio of the deferred $(\alpha,b)$-matching problem?
\end{problem}

Note that the underlying graph $G$ of a partial order $P$ of height at most $2$ is bipartite.
Claim~\ref{claim:width} below implies that the problems of finding a maximum matching of $G$ and finding a minimum chain partition of $P$ are dual.
\begin{claim}\label{claim:width}
Let $G$ be a bipartite graph with $N$ vertices. 
If $n$ is the size of a maximum matching in $G$, and $w$ is the size of a maximum independent set in $G$, then $n + w = N$.
\end{claim}
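The claim can be checked exhaustively on small instances; the brute-force sketch below uses hypothetical helper names and exponential-time search, so it is suitable for testing only.

```python
from itertools import combinations

def max_matching(U, D, E):
    # largest k such that some k edges of E have pairwise distinct endpoints
    for k in range(min(len(U), len(D)), 0, -1):
        for es in combinations(E, k):
            if len({u for u, _ in es}) == k and len({d for _, d in es}) == k:
                return k
    return 0

def max_independent_set(U, D, E):
    # largest set of vertices spanning no edge of E; tag vertices by side
    verts = [('u', u) for u in U] + [('d', d) for d in D]
    for k in range(len(verts), 0, -1):
        for vs in combinations(verts, k):
            s = set(vs)
            if all(not (('u', u) in s and ('d', d) in s) for u, d in E):
                return k
    return 0
```

On every spanning subgraph of $K_{2,2}$, for example, the two quantities sum to $N=4$.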

This allows the results of this paper to be adapted to the deferred (a.k.a. adaptive) version of the on-line chain partition problem (mentioned in the introduction).
We state the result without proof, which is only a technical modification of the proofs in this paper.
\begin{theorem}
There is an algorithm for the $\alpha$-adaptive chain partitioning of up-growing orders of height at most $2$
with competitive ratio $1 + \frac{\alpha}{1+\alpha}\prod_{i=1}^{\alpha-1}\frac{i+i^2}{1+i+i^2}$.\\
For unlimited $\alpha$, i.e.\ $\alpha\to\infty$, the competitive ratio equals $1+\pi/\cosh\bigl(\tfrac{\sqrt{3}}{2}\pi\bigr)\approx 1.412$.
\end{theorem}
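The formula and its limit can be checked numerically (the computation below is our illustration, evaluating the stated expression directly):

```python
import math

def adaptive_ratio(alpha):
    """Evaluate 1 + (alpha/(1+alpha)) * prod_{i=1}^{alpha-1} (i+i^2)/(1+i+i^2),
    the competitive ratio stated in the theorem."""
    prod = 1.0
    for i in range(1, alpha):
        prod *= (i + i * i) / (1 + i + i * i)
    return 1.0 + alpha / (1.0 + alpha) * prod

# The claimed limit 1 + pi / cosh(sqrt(3)/2 * pi) ~ 1.412.
limit = 1.0 + math.pi / math.cosh(math.sqrt(3) / 2 * math.pi)

print(adaptive_ratio(1))          # 1.5 (empty product, factor 1/2)
print(adaptive_ratio(2))          # 1 + 4/9
print(adaptive_ratio(2000), limit)
```

Both the factor $\alpha/(1+\alpha)$ and the partial products converge, and for large $\alpha$ the ratio agrees with the closed-form limit to several decimal places.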


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bibliographystyle{plain} %\bibliography{bib_\jobname}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{thebibliography}{99}

\bibitem{AC2006}
Yossi Azar and Yoel Chaiutin.
\newblock Optimal node routing.
\newblock In Bruno Durand and Wolfgang Thomas, editors, {\em STACS 2006},
  volume 3884 of {\em Lecture Notes in Computer Science}, pages 596--607.
  Springer Berlin Heidelberg, 2006.

\bibitem{ANR1995}
Yossi Azar, Joseph Naor, and Raphael Rom.
\newblock The competitiveness of on-line assignments.
\newblock {\em Journal of Algorithms}, 18(2):221--237, 1995.

\bibitem{AR2003}
Yossi Azar and Yossi Richter.
\newblock Management of multi-queue switches in {QoS} networks.
\newblock {\em Algorithmica}, 43(1):81--96, Sep 2005.

\bibitem{BK2010}
Bahman Bahmani and Michael Kapralov.
\newblock Improved bounds for online stochastic matching.
\newblock In {\em Proceedings of the 18th annual European conference on
  Algorithms: Part I}, ESA'10, pages 170--181, Berlin, Heidelberg, 2010.
  Springer-Verlag.

\bibitem{BM2008}
Benjamin Birnbaum and Claire Mathieu.
\newblock On-line bipartite matching made simple.
\newblock {\em SIGACT News}, 39(1):80--87, March 2008.

\bibitem{BFKKMM2012}
Bart{\l}omiej Bosek, Stefan Felsner, Kamil Kloch, Tomasz Krawczyk, Grzegorz
  Matecki, and Piotr Micek.
\newblock On-line chain partitions of orders: A survey.
\newblock {\em Order}, 29(1):49--73, 2012.

\bibitem{CDKL2009}
Kamalika Chaudhuri, Constantinos Daskalakis, Robert~D. Kleinberg, and Henry
  Lin.
\newblock Online bipartite perfect matching with augmentations.
\newblock In {\em INFOCOM 2009, IEEE}, pages 1044--1052, April 2009.

\bibitem{CTV2015}
Ashish Chiplunkar, Sumedh Tirodkar, and Sundar Vishwanathan.
\newblock On randomized algorithms for matching in the online preemptive model.
\newblock In Nikhil Bansal and Irene Finocchi, editors, {\em Algorithms -- ESA
  2015}, volume 9294 of {\em Lecture Notes in Computer Science}, pages
  325--336. Springer Berlin Heidelberg, 2015.

\bibitem{DH2009}
Nikhil~R. Devenur and Thomas~P. Hayes.
\newblock The adwords problem: online keyword matching with budgeted bidders
  under random permutations.
\newblock In {\em Proceedings of the 10th ACM conference on Electronic
  commerce}, EC '09, pages 71--78, New York, NY, USA, 2009. ACM.

\bibitem{ELSW2013}
Leah Epstein, Asaf Levin, Danny Segev, and Oren Weimann.
\newblock {Improved Bounds for Online Preemptive Matching}.
\newblock In Natacha Portier and Thomas Wilke, editors, {\em 30th International
  Symposium on Theoretical Aspects of Computer Science (STACS 2013)}, volume~20
  of {\em Leibniz International Proceedings in Informatics (LIPIcs)}, pages
  389--399, Dagstuhl, Germany, 2013. Schloss Dagstuhl--Leibniz-Zentrum fuer
  Informatik.

\bibitem{FMMMa2009}
Jon Feldman, Nitish Korula, Vahab Mirrokni, S.~Muthukrishnan, and Martin P\'al.
\newblock Online ad assignment with free disposal.
\newblock In Stefano Leonardi, editor, {\em Internet and Network Economics},
  volume 5929 of {\em Lecture Notes in Computer Science}, pages 374--385.
  Springer Berlin Heidelberg, 2009.

\bibitem{FMMM2009}
Jon Feldman, Aranyak Mehta, Vahab Mirrokni, and S.~Muthukrishnan.
\newblock Online stochastic matching: Beating $1-1/e$.
\newblock In {\em Proceedings of the 2009 50th Annual IEEE Symposium on
  Foundations of Computer Science}, FOCS '09, pages 117--126, Washington, DC,
  USA, 2009. IEEE Computer Society.

\bibitem{Fel97}
Stefan Felsner.
\newblock On-line chain partitions of orders.
\newblock {\em Theoret. Comput. Sci.}, 175(2):283--292, 1997.

\bibitem{GM2008}
Gagan Goel and Aranyak Mehta.
\newblock Online budgeted matching in random input models with applications to
  adwords.
\newblock In {\em Proceedings of the nineteenth annual ACM-SIAM symposium on
  Discrete algorithms}, SODA '08, pages 982--991, Philadelphia, PA, USA, 2008.
  Society for Industrial and Applied Mathematics.

\bibitem{GMS2014}
Anupam Gupta, Amit Kumar, and Cliff Stein.
\newblock Maintaining assignments online: Matching, scheduling, and flows.
\newblock In Chandra Chekuri, editor, {\em Proceedings of the Twenty-Fifth
  Annual {ACM-SIAM} Symposium on Discrete Algorithms, {SODA} 2014, Portland,
  Oregon, USA, January 5-7, 2014}, pages 468--479. {SIAM}, 2014.

\bibitem{KalKir2000}
Bala Kalyanasundaram and Kirk~R. Pruhs.
\newblock An optimal deterministic algorithm for online $b$-matching.
\newblock {\em Theoretical Computer Science}, 233(1-2):319--325, 2000.

\bibitem{KMT2011}
Chinmay Karande, Aranyak Mehta, and Pushkar Tripathi.
\newblock Online bipartite matching with unknown distributions.
\newblock In {\em Proceedings of the 43rd annual ACM symposium on Theory of
  computing}, STOC '11, pages 587--596, New York, NY, USA, 2011. ACM.

\bibitem{KVV90}
R.~M. Karp, U.~V. Vazirani, and V.~V. Vazirani.
\newblock An optimal algorithm for on-line bipartite matching.
\newblock In {\em Proceedings of the twenty-second annual ACM symposium on
  Theory of computing}, STOC '90, pages 352--358, New York, NY, USA, 1990. ACM.

\bibitem{MGS2011}
Vahideh~H. Manshadi, Shayan~Oveis Gharan, and Amin Saberi.
\newblock Online stochastic matching: Online actions based on offline
  statistics.
\newblock {\em Mathematics of Operations Research}, 37(4):559--573, 2012.

\bibitem{MP2012}
Aranyak Mehta and Debmalya Panigrahi.
\newblock Online matching with stochastic rewards.
\newblock In {\em Proceedings of the 2012 IEEE 53rd Annual Symposium on
  Foundations of Computer Science}, FOCS '12, pages 728--737, Washington, DC,
  USA, 2012. IEEE Computer Society.

\bibitem{MSV2007}
Aranyak Mehta, Amin Saberi, Umesh Vazirani, and Vijay Vazirani.
\newblock Adwords and generalized online matching.
\newblock {\em J. ACM}, 54(5), October 2007.

\end{thebibliography}

\end{document}


