On Eventually Periodic Sets as Minimal Additive Complements

We say a subset $C$ of an abelian group $G$ \textit{arises as a minimal additive complement} if there is some other subset $W$ of $G$ such that $C+W=\{c+w:c\in C,\ w\in W\}=G$ and such that there is no proper subset $C'\subset C$ with $C'+W=G$. In their recent paper, Burcroff and Luntzlara studied, among many other things, the conditions under which "eventually periodic sets", which are finite unions of infinite (in the positive direction) arithmetic progressions and singletons, arise as minimal additive complements in $\mathbb Z$. In the present paper we study this question further. We give, in the form of bounds on the period $m$, some sufficient conditions for an eventually periodic set to be a minimal additive complement; in particular, we show that "all eventually periodic sets are eventually minimal additive complements". Moreover, we generalize this to a framework in which "patterns" of points are projected down to $\mathbb Z$, and we show that all sets which arise this way are eventually minimal additive complements. We also introduce a formalism of formal power series, which serves purely as a bookkeeping device in writing down proofs. Through our work we are able to answer a question of Burcroff and Luntzlara in a large class of cases.


Introduction
The setting of the question is as follows. For C and W subsets of an abelian group G, we say that C is an additive complement to W if the Minkowski sum C + W is equal to G, i.e. if every element of G can be written as c + w with c ∈ C and w ∈ W. We say that C is a minimal additive complement (or MAC) to W if there is no proper subset of C which is an additive complement to W. We say C arises as a MAC (or is a MAC) if there exists a W to which C is a MAC.
In particular, we will be interested in sets which are called "eventually periodic sets", which are defined as follows: For S ⊆ Z, let S/m denote the image in Z/mZ under the standard projection. An eventually periodic set of period m is a set of integers of the form (mN + A) ∪ B ∪ F, where A, B, and F are finite, A is nonempty, B/m ⊆ A/m, and F/m ∩ A/m = ∅.
In this paper, we shall study the conditions under which eventually periodic sets arise as MACs. We will show two main theorems, namely Theorems 4 and 7, the latter of which is stated in a new general framework of "patterns" which we introduce, and use them to deduce several other results, for example that

Result (Proposition 1). Any eventually periodic set C = mN ∪ B ∪ {f} (where B is a finite subset of Z such that b ≡ 0 mod m for all b ∈ B and f ≢ 0 mod m) arises as a MAC in Z.

and that
Result (Proposition 2). Any C = mN ∪ F (for F in a single congruence class mod m) arises as a MAC in Z.
The two main theorems are of roughly the following form: if there exists some set cover of mN ∪ B (or, more generally, of a congruence class mod m in which C has infinitely many points) satisfying certain conditions, and if m is greater than a certain bound, then C arises as a MAC. In particular, roughly speaking, "all eventually periodic sets are eventually MACs". We also introduce a formalism of "formal power series" to help reduce the process of checking the proofs of such statements to routine calculations. We should however issue a disclaimer that there is no new content in these formal power series, and that no tools or clever tricks from the broader theory of generating functions/formal power series will be utilized; the formal power series here serve purely as bookkeepers.
1.1. Background and Motivation. Minimal additive complements were introduced in 2011 as an arithmetic analogue to the metric concept of h-nets in groups by Nathanson [N], who showed that every nonempty finite subset of Z has a MAC, and moreover that every additive complement of such a set contains a MAC. The question of which subsets of Z have MACs has since been studied by many; for example, Chen and Yang [CY] showed in 2012 that all subsets of Z which are unbounded both above and below have MACs. Kiss, Sándor, and Yang [KSY] in 2019 introduced the concept of "eventually periodic sets" and studied the question of when these sets have MACs.
The natural "inverse problem" is then to study which subsets of Z arise as MACs. This study was initiated by Kwon [K] in 2019, who showed that every nonempty finite set in Z arises as a MAC. Alon, Kravitz, and Larson [AKL] extended this further in 2020 and showed that, in any finite abelian group G, any nonempty subset C with size bounded above by some constant depending on |G| will arise as a MAC. Moreover, they showed that any nonempty finite subset of an infinite abelian group arises as a MAC. Later in 2020, Biswas and Saha [BSb] generalized this even further and showed that, for any group G (abelian or not), any nonempty finite subset C with |G| > |C|^5 − |C|^4 will arise as a MAC; in particular, any nonempty finite subset of any infinite group will arise as a MAC. Also in 2020, Biswas and Saha [BSa] derived some conditions for subsets to not arise as MACs in an arbitrary group; for example, their results show that ({3, 5, 7, 9, 11} + 12Z) ∪ {p prime : p ≡ 1 mod 12} is not a MAC in Z.
Our motivation is as follows. In 2020, Burcroff and Luntzlara [BL] studied (among many other things) the eventually periodic sets of Kiss, Sándor, and Yang and the question of when they arise as MACs. They showed that there are three certain necessary conditions for an eventually periodic set to arise as a MAC, whose statements are rather technical and which we therefore skip here. In the case that m is prime, they gave a fourth necessary condition. They also showed that, in certain circumstances, namely for m a prime congruent to 2 modulo 3 and for A/m and F/m of certain prescribed sizes ((m+1)/3 and 1, respectively), these necessary conditions are also sufficient. Using these facts in conjunction, they showed that any set of the form 2N ∪ B ∪ F arises as a MAC if and only if 2Z \ (2N ∪ B) = F + W for some W ⊆ Z. In conclusion, BL have given necessary and sufficient conditions for eventually periodic sets of prime period p congruent to 2 mod 3 and with |F/m| = 1 and |A/m| = (p+1)/3 to arise as MACs. They also simplified these conditions to something very concise and concrete in the case of m = 2. A natural question to ask, then, is whether there are other circumstances in which an eventually periodic set C arises as a MAC. We will attempt to treat this direction in this paper.
By venturing in this direction, we are able to answer an open question of Burcroff and Luntzlara for a large number of cases. At the end of their paper [BL], they raised the following question:

Question. Which sets of the form C_1 ∪ (−C_2), where C_1 and C_2 are eventually periodic sets of integers, arise as minimal additive complements?
Our results (in particular, Theorem 7) will show that a large class of such sets do indeed arise as minimal additive complements.
Let us briefly explain why we study the objects we do in this paper. The existence of a nonempty F is crucial, as it allows us to set up so-called "dependent elements" in our constructions later (to be explained later; roughly, this is just to ensure that all elements of the eventually periodic set are necessary). Without this F, it is for example easy to see that N is not a MAC in Z. Similarly, our results here are true roughly because if m is sufficiently large, then there is "enough space to maneuver" in setting up the "dependent elements".

Prelude
2.1. Preliminary Definitions. First, some brief notes on convention: we will take the "natural numbers" N = {n ∈ Z : n ≥ 0} to include zero. Negative integers are denoted Z⁻. We will write mod_n(k) to denote the remainder of k when divided by n. Some further notes on hats: throughout this paper we will decorate many of our symbols with hats, namely the widehat, the overline, and the widetilde; our philosophy is that the widehat denotes lifts, the overline is any generic marker (preferably to do with some "natural" modification), and the widetilde denotes quotients.
Recall from the introduction that the general setting of the question is as follows:

Definition 1. For C and W subsets of an abelian group G, we say C is an additive complement to W if C + W = G, and a minimal additive complement (or MAC) to W if, moreover, no proper subset of C is an additive complement to W. We say C arises as a MAC (or is a MAC) if there exists a W to which C is a MAC.
For W = {w} a singleton, we will denote C + {w} simply by C + w. If A ⊆ S ⊆ G are subsets such that there exists B ⊆ G with A + B = S, then we say S is coverable by A.
For this paper we will be concerned mostly with the case of G = Z. There, many results are known already; for example, in this paper we will utilize the theorem of Kwon [K] mentioned in the introduction:

Theorem (K). All nonempty finite subsets of Z arise as MACs.
Closely related to this is a lemma by Burcroff-Luntzlara [BL] which we shall also employ later; we shall call it the "BL lemma".
Lemma (BL). For a fixed finite set F ⊂ Z and W ⊆ Z such that F + W ⊇ N, there exists a set W' ⊆ Z such that F + W' = F + W and F' + W' ≠ F + W' for any proper subset F' ⊂ F.
Recall that the results of this paper are concerned with "eventually periodic sets", defined by

Definition 2. For S ⊆ Z, let S/m denote the image in Z/mZ under projection. An eventually periodic set of period m is a set of integers of the form (mN + A) ∪ B ∪ F, where A is nonempty, B and F are finite, B/m ⊆ A/m, and F/m ∩ A/m = ∅.

As noted by B-L, WLOG we may take A to have at most one element in each congruence class mod m.
In other words, these are sets which have infinite (in the positive direction) arithmetic progressions of period m starting from all elements of A, and various finite "exceptions" B and F, where elements of B lie in the same congruence classes as elements of A and elements of F lie in different congruence classes than those of A.
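To make the definition concrete, the following Python sketch (our own illustration, not code from the paper; all names are of our choosing) enumerates a truncated eventually periodic set and checks the defining congruence conditions on A, B, and F:

```python
def eventually_periodic(m, A, B, F, y_max=50):
    """Elements of (mN + A) ∪ B ∪ F, with the mN + A part truncated at m*y_max."""
    A_m = {a % m for a in A}
    # defining conditions: A nonempty, B/m ⊆ A/m, F/m ∩ A/m = ∅
    assert A_m, "A must be nonempty"
    assert {b % m for b in B} <= A_m, "B must lie in the congruence classes of A"
    assert not ({g % m for g in F} & A_m), "F must avoid the congruence classes of A"
    progression = {m * k + a for k in range(y_max) for a in A}
    return progression | set(B) | set(F)

# the set 4N ∪ {−8, −12} ∪ {3, 6} (period m = 4, A = {0})
C = eventually_periodic(4, A={0}, B={-8, -12}, F={3, 6})
```

Here the exceptions B sit in the congruence class of A (namely 0 mod 4) but below the progression, while F occupies other classes, exactly as described above.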
Our motivating point is the result of BL described in the introduction.

2.2. The Setting. In this section we will describe the "setup" we will be operating under, in the hopes of providing a clearer and more visual picture of the problem at hand.
In dealing with eventually periodic sets of period m, we shall think of them as follows: take the infinite strip in the lattice Z² given by {(x, y) ∈ Z² : 0 ≤ x ≤ m − 1} and consider it as a copy of Z by taking n = x + my.
(For the sake of visual appeal, rather than place these "dots" (x, y) on the grid lines, we shall shift the grid lines to the west and to the south by 0.5, so that the dots lie in the middle of "boxes"; that is, rather than placing the dots "Go-style", we will be placing them "English chess-style".) For example, the set 4N ∪ {−8, −12} ∪ {3, 6} would be drawn as a configuration of such dots in the strip. In order to preserve structure, perhaps it is better to think of this strip as Z² with each (mk, y) identified with (m(k − 1), y + 1), or more concisely as the quotient Z_m := Z²/Z(m, −1), where the congruence is that of abelian groups. In fact, we can take this to be the definition. We will denote the projection map by π_m : Z² → Z_m; the identification Z_m ≅ Z we call the strip construction.
We may refer to {(x, y) ∈ Z² : x = i} as the i-th column of Z², denoted Col(i), and similarly for columns of Z_m. We will write Col⁺(i) to denote the subset of Col(i) with y ≥ 0, i.e. nonnegative y-coordinate; similarly, Col⁻(i) refers to y < 0. For a subset S of Z², we also denote the set of columns in which S has elements by Col(S), with Col⁺(S) defined by Col⁺(S) := ⋃_{s∈S} Col⁺(s) and similarly for Col⁻(S).
Abuse of notation: for any subset S of Z, we will use the same symbol S to denote its isomorphic image in Z_m ≅ Z.
A more pictorial/"topological" way to think of this is to take Z² and wrap it around horizontally to create a cylinder in a slanted manner, such that each (mk, y) gets glued to (m(k − 1), y + 1).
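The identification n = x + my and the slanted gluing can be sketched in a few lines of Python (an illustration under our own naming, not the paper's code):

```python
def to_strip(n, m):
    """Send n ∈ Z to its representative (x, y) in the strip 0 ≤ x ≤ m − 1, n = x + m*y."""
    return (n % m, n // m)  # Python's floor division gives the correct y for negative n

def from_plane(x, y, m):
    """Project an arbitrary lattice point (x, y) ∈ Z² down to Z_m ≅ Z (the map π_m)."""
    return x + m * y

m = 4
# the gluing (mk, y) ~ (m(k − 1), y + 1): both points project to the same integer
assert from_plane(m * 3, 2, m) == from_plane(m * 2, 3, m)
# every integer has a unique representative inside the strip
assert all(from_plane(*to_strip(n, m), m) == n for n in range(-50, 50))
```

Note that `n % m` and `n // m` are chosen precisely so that negative integers land in the strip with negative y-coordinate, matching the picture of columns extending both upward and downward.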
In this setup, the question of whether or not C is an additive complement can be rephrased as whether or not there exists a set of "translations" (more precisely, translations inside Z_m = Z²/Z(m, −1)) by W such that the union of all such translations of C covers all of Z_m. For example, W = {1} is a simple shift to the right by one unit. Whether or not C is a minimal additive complement can be rephrased as whether or not such a W exists such that every element c ∈ C has a dependent element, defined as follows:

Definition 4. For a fixed additive complement W of C, an element c ∈ C is said to have a dependent element if there is some ∆(c) ∈ Z such that ∆(c) ∉ (C \ {c}) + W, i.e. if c is removed then ∆(c) fails to be covered.
An element of C is said to be a guardian if it has a dependent element.
Observation. As noted by B-L, it is clear that minimality is equivalent to every element of C having a dependent element, i.e. in the union of all translations, ⋃_{w∈W}(C + {w}) = C + W, every element c ∈ C has a translate which is covered exactly once.
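On a finite window, the definitions of additive complement, dependent element, and guardian can be checked by brute force. The sketch below is our own finite truncation (a heuristic check only, since the real definitions quantify over all of Z):

```python
from itertools import product

def covers(C, W, window):
    """Finite-window proxy for C + W = Z: does C + W contain every point of the window?"""
    sums = {c + w for c, w in product(C, W)}
    return set(window) <= sums

def dependents(C, W, window):
    """Candidate dependent elements Δ(c): window points NOT covered by (C \\ {c}) + W."""
    out = {}
    for c in C:
        rest = {cc + w for cc, w in product(C - {c}, W)}
        out[c] = {n for n in window if n not in rest}
    return out

window = range(-10, 11)
C = {0, 1}                             # together with W = 2Z this covers Z
W = {2 * k for k in range(-20, 21)}    # truncated copy of 2Z
assert covers(C, W, window)
deps = dependents(C, W, window)
assert all(deps[c] for c in C)         # every element of C is a guardian here
```

In this toy example the evens are covered only through 0 ∈ C and the odds only through 1 ∈ C, so each element has (many) dependent elements.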

Results and Discussions Thereof
B-L described when sets of the form 2N ∪ B ∪ F arise as MACs. In attempting to generalize this to general m, we can restrict our attention to either F a singleton or B an empty set. In the former case,

Proposition 1. For m ≥ 2, any eventually periodic set C = mN ∪ B ∪ {f} arises as a MAC. This holds even if B is infinite, unless both m = 2 and mN ∪ B = mZ hold, in which case mZ ∪ {f} does not arise as a MAC.
In the latter case,

Proposition 2. For |F/m| = 1, any C = mN ∪ F arises as a MAC.
Note that the hypothesis |F/m| = 1 in particular implies that F ≠ ∅ is nonempty. The above two propositions can both be seen as specific instances of the following:

Proposition 3. If there exists W ⊆ Z such that F + W = mZ⁻ \ B, then C = mN ∪ B ∪ F arises as a MAC.

In words, the existence of a subset W ⊆ Z such that mZ⁻ \ B = F + W implies that mN ∪ B ∪ F is a MAC.

Proposition 3 implies Proposition 1. Let B be finite. Indeed, in Proposition 1, F = {f} is a singleton, which can cover any subset of the integers; therefore, in particular, any B has mZ⁻ \ B = {f} + W for some W ⊆ Z. For the case where B is infinite, see the Appendix.
Proposition 3 implies Proposition 2. Similarly, in Proposition 2 we have B = ∅, so that mZ⁻ \ B = mZ⁻, which is coverable by any F; indeed, just take W = {−km − f_1 : k ∈ Z⁺}, where f_1 is the maximal element of F. (Indeed, it is easy to see that, generally, N is coverable by any finite set F ⊆ Z; then the above is a specific instance, since mZ⁻ is isomorphic to N as monoids.) Hence an empty B satisfies the conditions of Proposition 3, which gives Proposition 2.
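The covering W = {−km − f_1 : k ∈ Z⁺} used in this argument can be verified numerically. The sketch below (a truncated illustration of ours, not part of the proof) checks that F + W produces exactly the negative multiples of m:

```python
def f_plus_w(F, m, k_max=100):
    """Compute F + W for W = {−k*m − f1 : 1 ≤ k ≤ k_max}, where f1 = max F."""
    f1 = max(F)
    W = {-k * m - f1 for k in range(1, k_max + 1)}
    return {g + w for g in F for w in W}

# F lies in a single congruence class mod m, as in Proposition 2
m, F = 4, {3, 7, 11}
sums = f_plus_w(F, m)
assert all(s < 0 and s % m == 0 for s in sums)   # F + W ⊆ mZ⁻
assert {-m * k for k in range(1, 50)} <= sums    # a truncated copy of mZ⁻ is covered
```

Since every element of F differs from f_1 = max F by a multiple of m, each translate F + (−km − f_1) contributes only negative multiples of m, and the translates for k = 1, 2, … sweep out all of them.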
In fact, Proposition 3 is also a specific case of a more general statement, Theorem 4, which is stated in terms of a set cover {S_i} of mN ∪ B, where without loss of generality we let A = {0}.

Theorem 4 implies Proposition 3. First let f̄ > 1. Indeed, Proposition 3 is the case when mN ∪ B has a cover which consists of a single set, namely mN ∪ B itself; in this case n = 1, and the relevant bound is m ≥ f̄ + 1, which of course always holds. This recovers Proposition 3.
If f̄ = 1, the argument in the previous paragraph gives that we are done if m ≥ 3. The m = 1 case is of course impossible, since then F would collide with the column containing the arithmetic progression.
Hence it remains to prove the case f̄ = 1 and m = 2. We claim that C + (W ∪ {0}) = Z realizes C as a MAC. Firstly, since mZ⁻ \ B is infinite in the negative direction, it contains an isomorphic copy of N (as monoids), and so by the BL Lemma we can assume that F is minimal with respect to the condition F + W = mZ⁻ \ B. Secondly, since F + W = mZ⁻ \ B ⊆ 2Z lies in the 0-th column while F is odd, we have that 0 ∉ W. Thirdly, since F + W = mZ⁻ \ B, where mZ⁻ \ B is unbounded in the negative direction, the sum C + (W ∪ {0}) = (C + W) ∪ (C + {0}) contains infinitely many translates of the arithmetic progression 2N in the sum C + W. These translates are of the form 2N + k, where k ≡ 1 (mod 2), and since there are infinitely many of them (extending arbitrarily far in the negative direction), they cover all of 2Z + 1. Furthermore, C + W contains F + W and therefore mZ⁻ \ B. On the other hand, C + {0} contains mN ∪ B. Hence, together, C + (W ∪ {0}) contains mN ∪ B, mZ⁻ \ B, and 2Z + 1; hence C + (W ∪ {0}) gives all of Z. Moreover, C is a MAC with respect to W ∪ {0}, since the removal of any element in 2N ∪ B would lead to C + {0} not containing that element, and the removal of any element in F would lead to F + W not giving all of mZ⁻ \ B.
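The m = 2 construction above can be checked numerically on a finite window. The snippet below is our own truncated sanity check of the argument (with a sample B = {−4} and F = {1}), not part of the proof:

```python
from itertools import product

m, f = 2, 1
B = {-4}                                      # finite, in the even class, below 0
F = {f}                                       # a single odd class mod 2
C = {2 * k for k in range(60)} | B | F        # 2N (truncated) ∪ B ∪ F
target = {-2 * k for k in range(1, 60)} - B   # 2Z⁻ \ B (truncated)
W = {t - f for t in target}                   # F + W = 2Z⁻ \ B, since F is a singleton
V = W | {0}

window = set(range(-20, 21))
sums = {c + v for c, v in product(C, V)}
assert window <= sums                         # C + (W ∪ {0}) covers the window

# minimality on the window: removing any c ∈ C ∩ window un-covers some point
for c in C & window:
    rest = {cc + v for cc, v in product(C - {c}, V)}
    assert not window <= rest, f"{c} seems removable"
```

Removing an even element loses that element itself (it was covered only by C + {0}), and removing f = 1 loses points of 2Z⁻ \ B, exactly as the proof predicts.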
Let us remark that, in the worst-case scenario, one can always take the set cover in the above to be {mN} ∪ {{b} : b ∈ B}, so that n = |B| + 1. This set cover satisfies the required conditions since, as noted earlier, mZ \ mN is always coverable by F, and mZ \ {b} is similarly coverable, since it consists of two isomorphic (as monoids) copies of N, which we have established are coverable by finite sets. In particular, this means that the hypotheses of Theorem 4 can always be met; in some sense, this is saying that any C = mN ∪ B ∪ F (with sufficiently large period m) arises as a MAC (Proposition 5).

Theorem 4 implies Proposition 5. See the paragraph preceding Proposition 5.

This idea of "eventually being a MAC" is one we will explore more and make more precise presently.
Definition 5. Given any set of points Ĉ in the lattice Z², we may consider its image under the projection π_m : Z² → Z_m and then the isomorphism Z_m ≅ Z; this composition we will call π_m by abuse of notation. Let this image be denoted C = π_m(Ĉ); then we may ask whether or not C is a MAC inside Z. In fact, we can ask this question for varying m.
In general, we will call such a Ĉ (respectively Â, B̂, F̂) a pattern for/of C (respectively A, B, F), and Ĉ_/ is defined as the set of x-coordinates of Ĉ. This construction allows us to turn subsets of Z² into subsets of Z. When such a Ĉ has columns which consist either of finitely many points (giving F) or of finitely many points union an "infinite ray" of points going to the north (giving B ∪ (mN + A)), this construction makes C an eventually periodic set.
Just as a Ĉ gives rise to a C, an eventually periodic set C of period m gives rise to a pattern Ĉ = π_m⁻¹(C). The next theorem will tell us that, in some sense, all patterns for eventually periodic sets are eventually MACs. That is, we take our eventually periodic set C of period m, consider its preimage π_m⁻¹(C) = Ĉ under the construction above, and consider, for growing M, the sets π_M(π_m⁻¹(C)); the statement is that these sets are MACs in Z for all sufficiently large M. But before stating this theorem, let us define a quantity we will use: for Ĉ the pattern for C, consider Ĉ_/ = {x : (x, y) ∈ Ĉ for some y}, the set of x-coordinates; this Ĉ_/ will be a set of separated maximal contiguous "blocks" (where each block is an arithmetic progression of common difference one, and i < j implies that the i-th block lies entirely to the left of the j-th). Let ℓ(Ĉ_/) be the smallest possible length of a consecutive (i.e. an arithmetic progression of common difference one) set of integers formed by horizontal translates of Ĉ_/, i.e. the minimal possible length of an interval of integers [a, b] such that there exists W with Ĉ_/ + W = [a, b].

Theorem 6. For any fixed patterns of Â, B̂, and nonempty F̂, the set π_M(Ĉ) arises as a MAC in Z for all sufficiently large M. In other words, "all eventually periodic sets are eventually MACs". The constructed bound for this is in terms of ℓ(Ĉ_/), the minimal length possible of a consecutive block formed by horizontal translates of Ĉ_/.

Theorem 6 is also a corollary of a more general theorem, Theorem 7, whose setting is as follows. Let F̂_1, …, F̂_k denote the (nonempty) columns of Ĉ with finitely many points (where each F̂_i lies in a distinct column Col(F̂_i)), and let K̂_1, …, K̂_l denote the columns with infinitely many points (where again each K̂_i lies in a distinct column Col(K̂_i)). For Ĉ_/ the set of x-coordinates of Ĉ, let ℓ(Ĉ_/) denote the minimal length possible of a consecutive block formed by horizontal translates of Ĉ_/. Suppose that for each K̂_i there exists a set cover S_i such that, for each i, j, there is some collection of (possibly empty) W_{i,j;µ}, U_{i,j;ν} ⊆ Z² witnessing the required covering conditions; then, for m beyond the stated bound, C arises as a MAC.

Theorem 7 implies Theorem 6. To recover Theorem 6 from Theorem 7, take the infinite columns K̂_i, each of which can be written as Col⁺(â_i) ∪ B̂_i, where â_i is an element of Â and B̂_i are the elements of B̂ lying in the same column as â_i; the quantity ℓ(Ĉ_/) is the same as before.
Now take the set cover of the infinite columns consisting of Col⁺(â_i) together with the singletons {b̂} for b̂ ∈ B̂_i. This set cover satisfies the hypotheses since, firstly, Col(K̂_i) \ Col⁺(â_i) is coverable by any F̂_j; to see this, recall from earlier that any half column (which is isomorphic as a monoid to N if we take addition to only affect the y-coordinate) is coverable by any finite set. Similarly, secondly, Col(K̂_i) \ {b̂} is coverable by any finite column F̂_j, which is true since Col(K̂_i) \ {b̂} consists of two infinite rays, one pointing up and one pointing down, and as established earlier these rays, each isomorphic as monoids to N, are coverable by finite sets. Then Theorem 7 states that for m beyond the corresponding bound, C is a MAC, which is precisely the statement of Theorem 6.
The reader might note that Theorems 4 and 7 say very similar things, namely that for large enough m the projection under π_m of some pattern will be a MAC in Z. However, Theorem 4 is not a corollary of Theorem 7, due to the bounds. Indeed, applying Theorem 7 to the setting of Theorem 4 will yield only m ≥ (2f̄ + 1)(n + 1), where n is the size of the set cover S, which is much worse than the bound given in Theorem 4. As the reader will see in the proofs in the following section, the construction for Theorem 4 feels "tight" or "efficient" in some sense, while in Theorem 7 we are much sloppier. This is perhaps to be expected; Theorem 4 deals with a very specific type of set (namely |A/m| = |B/m| = |F/m| = 1), while Theorem 7 deals with a much broader class, so one might expect that it is easier to derive better bounds in the former case than the latter.
We should also note that, by choosing appropriate K̂, Theorem 7 answers the question of Burcroff and Luntzlara mentioned at the end of the Introduction in a large class of cases. Indeed, writing C_1 ∪ (−C_2) in pattern form, Theorem 7 tells us that whenever we can partition the B_{i,j} by sets S_{i,j,k} such that Col⁻(min A_{i,j}) \ S_{i,j,k} can be set-covered by appropriate translates of the different columns in C_1, C_2, then, provided m is larger than some bound depending on the size of our cover, the number of congruence classes mod m represented by F_1 ∪ (−F_2), and the horizontal distribution (when we draw it in Z_m form) of C_1 and C_2, the set C_1 ∪ (−C_2) is a MAC. As an example, in the case that B_1 = B_2 = ∅ and at least one of F_1, F_2 is nonempty, since translations of finite sets can cover N, we obtain a concrete bound of this form. As another example of Theorem 7, we could consider the case K̂ = mN ∪ B where B is infinite and F = {f} is a singleton. Then we can take the set cover to have one set, namely K̂ itself, for mZ \ (mN ∪ B) = mZ⁻ \ B is coverable by (translates of) the singleton {f}. Furthermore, in this case the minimal consecutive-block length is 2f̄, and the bound from Theorem 7 then gives that C is a MAC for all m beyond it. It turns out this is true for smaller m as well, as long as not both m = 2 and mN ∪ B = mZ hold.
Having discussed at length the results, it remains to prove Theorems 4 and 7 (as we have noted in the discussions above, all other results are actually corollaries of these two).

A Formalism of Formal Power Series
Before giving the proofs of our main theorems, namely Theorems 4 and 7, we will develop a language of "formal power series" in which the proofs are much easier to relate. We should however give a disclaimer beforehand that these formal power series do not possess the soul of the technique of generating functions, which is namely the idea of collapsing long expressions into short ones or vice versa (e.g. the identity ∑_n xⁿ = 1/(1 − x)) in order to achieve clever manipulations. The formal power series we introduce here will not engage in such acrobatics and will instead serve solely as bookkeepers.
The point of this is to make it easier to show that C is a MAC, given a claimed complement W . Roughly, this formalism will turn a set S ⊆ Z into a formal power series.
In our strip construction Z_m, each column Col(i) = {(x, y) ∈ Z_m : x = i} can be thought of as a copy of Z with the obvious addition structure (add the y-coordinates). We will denote this isomorphism by Col(i) ≅ Z, (i, y) ↦ y, where we take addition in Col(i) to be addition of the y-coordinates. For a set S ⊆ Col(i) ⊂ Z_m which lies entirely in a single column, we will let S̄ ⊆ Z denote its image in Z under the isomorphism Z_m ⊃ Col(i) ≅ Z.
In the backwards direction, given a set S ⊆ Z, we will let Ŝ(i) ⊆ Col(i) ⊂ Z_m denote its image under the inverse isomorphism.
The above is nothing more than saying that the set of all integers in a single congruence class mod m forms a copy of Z.
But before we can describe how our formalism will turn the data of a set S ⊆ Z into an object, we must first describe in what world this object will live. In the following definition, the symbol ⊔ refers to the disjoint union, which keeps track of multiplicities, and the symbol ⊕ refers to the Minkowski sum of sets + except with multiplicities taken into account, i.e. a "disjoint" Minkowski sum.
Definition 7. Consider the Z-algebra generated by symbols of the form q^A for A ∈ N^Z (here N^Z refers to subsets of Z with multiplicity allowed, i.e. "weighted subsets" with weights, which encode multiplicity, in N), modded out by the relations q^∅ = 0, q^{0} = 1, q^A q^B − q^{A⊕B} = 0, and q^A + q^B − q^{A⊔B} = 0. We can then consider the following polynomial ring over this algebra: Ξ_m := (this algebra)[x]/(x^m − q^1). When q^B = q^A + q^C for some set C ∈ N^Z, we will say q^B ≥ q^A. Abuse of notation: for a singleton S = {n}, instead of writing the cumbersome q^S = q^{{n}}, we will write q^n; for example, we will write q^1 in place of q^{{1}}, and similarly q^0 = 1 instead of q^{{0}}. We will also later drop the notation ⊕ and only use +, relying on context for whether we consider multiplicity or not. Generally speaking, whenever we are in the context of these exponential symbols, or later in the context of formal power series, the symbol + will be taken to mean with multiplicity.
Our choice of notation N^Z here is in line with the notation {0, 1}^Z for the power set P(Z). Hence regular subsets of Z are those members of N^Z whose weights (i.e. multiplicities) are either 0 or 1, so that every member of P(Z) is a member of N^Z.
The reason why we quotient by this ideal in the definition of Ξ_m will become clear later. It is in Ξ_m that our formal power series shall live.
These symbols, appropriately, behave like exponentials: the multiplication rule q^A q^B = q^{A⊕B} corresponds to the (disjoint) Minkowski sum, and the distributive property of multiplication corresponds to the identity (A ⊔ B) ⊕ C = (A ⊕ C) ⊔ (B ⊕ C). Defining B ⊖ A to be the set C such that A ⊕ C = B (if it exists), we also have q^B/q^A = q^{B⊖A}. Note that by extending the notion of setminus B \ A to include cases when A is not necessarily a subset of B, we can make sense of expressions such as −q^A. Indeed, treating N^Z as a semiring with addition corresponding to ⊔ and multiplication corresponding to the disjoint Minkowski sum ⊕, we can complete this to a ring by introducing formal differences B − A, corresponding to q^B − q^A, appropriately quotiented so that these behave as expected.

The idea of this formalism is to do the following: decompose S = S_1 ⊔ ⋯ ⊔ S_k, where each S_i lies in a single column labeled by a distinct s_i ∈ Z/mZ. Then the information in S is the same as the data {(S̄_i, s_i)}_i, where the first entry in each pair indicates the "shape" of the elements of S in the column labeled by the second entry of the pair. We call this the shape list form. Our formalism will now take this data and put it into a power series in the following way: for S = S_1 ⊔ ⋯ ⊔ S_k with each S_i in a single column, define S(x) := ∑_i q^{S_i} x^{s_i}, where in an abuse of notation we have written x^{s_i} and q^{S_i} instead of decorated versions for the sake of simplicity. We write [x^i]S(x) for the coefficient in front of the x^i term (after reducing mod x^m − q^1 until all powers are at most m − 1).
Then it is obvious that, for A, B ⊆ Z_m, we have (A + B)(x) = A(x)B(x) and (A ⊔ B)(x) = A(x) + B(x). Note well that such formal power series are in bijection with weighted (i.e. we allow multiplicities for each element) subsets of Z_m.
For a power series S(x) ∈ Ξ_m presented in a form such that the exponents appearing in S(x) are in the range [0, m − 1], let us denote Range(S(x)) := {exponents appearing in S(x)} = S/m. Now we should also explain why, in defining Ξ_m, we are quotienting out by the ideal (x^m − q^1). This is simply because our sets live in Z_m: the point (m, 0) is glued to (0, 1), so a shape placed in column m is the same as that shape shifted up by one and placed in column 0. For example, for the single point A = {(0, 1)}, we have A(x) = q^1 x^0, which is the same as A(x) = q^0 x^m. In claiming that (A + B)(x) = A(x)B(x) and (A ⊔ B)(x) = A(x) + B(x), we have omitted some minor checks; these are covered in the Appendix at the end of the paper.
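The formalism is mechanical enough to implement. The sketch below is our own bookkeeping code (names are ours): a series is stored as a dict mapping a column index i to a Counter recording the multiset shape q^{S_i}, and multiplication adds exponents mod m, with each wrap past x^m shifting the column shape up by one (implementing x^m = q^1). We then check (A + B)(x) = A(x)B(x) against brute-force counting of representations:

```python
from collections import Counter
from itertools import product

def series(S, m):
    """Formal power series of a (multi)set S ⊆ Z: column i ↦ multiset shape of y-values."""
    P = {}
    for n in S:
        x, y = n % m, n // m
        P.setdefault(x, Counter())[y] += 1
    return P

def multiply(P, Q, m):
    """Multiply in Ξ_m: exponents add mod m; wrapping past x^m shifts shapes up by 1."""
    R = {}
    for (i, S), (j, T) in product(P.items(), Q.items()):
        k, carry = (i + j) % m, (i + j) // m
        col = R.setdefault(k, Counter())
        for (s, cs), (t, ct) in product(S.items(), T.items()):
            col[s + t + carry] += cs * ct
    return R

# (A + B)(x) = A(x)·B(x): the product counts representations with multiplicity
m, A, B = 4, [0, 3, 5], [1, 2, -6]
lhs = series(Counter(a + b for a, b in product(A, B)).elements(), m)
assert multiply(series(A, m), series(B, m), m) == lhs
```

The carry term is exactly the relation x^m = q^1: if a = x_a + m·y_a and b = x_b + m·y_b with x_a + x_b ≥ m, then a + b sits in column x_a + x_b − m with y-coordinate y_a + y_b + 1.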
The key in this definition is that now, when claiming that C is a MAC to W, rather than compute C + W and show it is Z, meanwhile proving that all elements of C have dependents, we can instead take their formal power series, multiply, and check the coefficient in front of each x^i, which is equivalent to checking column-by-column that C + W = Z minimally. As expected, there is no new content in this formalism (it is just notation), but it will make writing down certain proofs much more concise.
More precisely, the condition C + W = Z is the same as every i-th coefficient [x^i](C + W)(x) being of the form q^S with S ⊇ Z (here i ∈ [0, m − 1]), and the condition of C being minimal will be checked within each term q^S x^s; there, we will check whether or not S contains elements dependent on elements of C, and whether or not the union over all such S accounts for all elements of C (so that every element of C has a dependent element). That is, checking that C is minimal will be equivalent to giving a suitable partition of C into pieces C_i. In words, this is saying that for any C_i in this partition, there is some element w ∈ W and column number n ∈ Z/mZ such that the coefficient [x^n](C + W)(x), which counts the elements of C + W in the n-th column with multiplicity, contains the elements of C_i + w exactly once.

Proofs of Main Results
In this section we shall prove the main results, namely Theorems 4 and 7. The language of the formal power series will make this process easier to communicate, but for the sake of transparency we should say that this is not how we came up with these theorems. Generally, perhaps a reasonable strategy to come up with such statements might be to stare at and play with pictures like the strip construction Z_m (this is what we did), and to write down a proof one would use these formal power series. In writing down the proofs in this section, we have tried our best to be very explicit and write down all the computational details, even if they are completely straightforward; as a result the proofs are rather long, but we hope that the trade-off is that readers will be able to read along and confirm that the proofs are correct without having to separately compute/check things on paper themselves.
Perhaps it should be noted that these theorems are much easier to see pictorially than symbolically; as unfortunately is the case at times with mathematics at large, symbols, whilst affording more precision, obscure intuition and the flow of logic.

5.1. Proving Theorem 4. We first prove Theorem 4. The idea is to give a construction for C + W = Z in which all elements of C have a dependent element; namely, each S_i ⊆ mN ∪ B will have its dependent elements concentrated in a single column. We shall spread these columns containing dependents out amongst "cars" of length 2f̄. Here the word "car" refers to the vehicle, in which "passengers" (i.e. the columns containing dependents of C) will fit; we will fit all these cars into a giant garage (i.e. Z_m), and the idea is that if the garage is long enough (i.e. if m is big enough) then all these cars will fit.
In the proof below, the rough outline will be as follows: we will give a construction of a set V and claim that C is a MAC to V ; we will calculate the formal power series of these sets; we will multiply the formal power series together; and lastly we will check term-by-term in the power series that C + V = Z and that C is minimal with respect to this condition.Since the power series determines the set, we could have just given V (x), but we go the extra step of writing down what V is for this first proof for the sake of transparency.
Proof of Theorem 4. For ease of reading we will separate the proof into sections which are italicized.
(1) Observations. Let {S_i} be the set cover in the theorem assumptions, i.e. S_1 ∪ ⋯ ∪ S_n = mN ∪ B such that each member S_i has mZ \ S_i = F + W_i for some W_i ⊆ Z. Note that the finiteness of B implies mN ∪ B is bounded below, which implies each S_i must also be bounded below. As such, mZ \ S_i will contain a shifted copy of Col⁻(0); that is, an infinite ray of integers (more precisely, this ray consists of multiples of m) starting at min(S_i) − m and pointing in the negative direction; we can think of this ray as an isomorphic (as monoids) copy of N sitting below min(S_i).
By assumption there is some $W_i$ such that $F + W_i = m\mathbb Z \setminus S_i = \mathrm{Col}(0) \setminus S_i$. Consider a minimal $W_i' \subseteq W_i$ such that $F + W_i' = (\mathrm{Col}^+(0) + y_i) \setminus S_i$ for some $y_i \in \mathrm{Col}^-(0) + \min B + 1$. (This $W_i'$ exists because $F$ is finite, so that if the sum $F + W_i = \mathrm{Col}(0) \setminus S_i$ is some set continuing infinitely in the negative direction, then necessarily, eventually (in the negative direction), this sum is just consecutive translations of $F$; i.e. there is some $W_i' \subseteq W_i$ such that $F + W_i' = \mathrm{Col}^-(0) + z_i$ for some negative number $z_i$.) Then, noting the relevant isomorphism of monoids, we may apply the BL Lemma to obtain a modification/replacement $U_i$ of $(\mathrm{Col}^-(0) + y_i - \max F)$ with $F$ minimal, i.e. wherein no proper subset $F' \subset F$ satisfies the same equation. Then, with respect to $F + U_i = \mathrm{Col}^-(0) + y_i$, every element of $F$ has a dependent element in $\mathrm{Col}^-(0) + y_i$. Then consider $W_i$ with $W_i'$ replaced by $U_i$. Hence, for the rest of this proof, by redefining the symbol $W_i$ to be this modified set, we can assume that $F + W_i = m\mathbb Z \setminus S_i$ with $F$ minimal with respect to this equation; moreover, by construction we can find dependent elements of $F$ of form $\delta$ with $\delta < y_i \le \min B$. Note that, passing to $\mathrm{Col}(i) \cong \mathbb Z$, this equation reads the same, with dependent elements of $F$ of form $\delta$ with $\delta < y_i \le \min B$.
Let us also note that, since $S_i \subseteq m\mathbb N \cup B$, and therefore $m\mathbb Z \setminus S_i$, are concentrated in a single column (i.e. a single congruence class mod $m$), and since $F$ is also concentrated in a single column, $F + W_i = m\mathbb Z \setminus S_i$ implies that $W_i$ is also concentrated in a single column, i.e. all elements of $W_i$ are equivalent mod $m$. In fact we know what this congruence class is: projecting $F + W_i = m\mathbb Z \setminus S_i$ mod $m$ gives $\bar f + \bar w_i = \bar 0$, i.e. $\bar w_i = -\bar f$.
Similarly, as $F$ is finite and $m\mathbb Z \setminus S_i$ (as noted earlier) is unbounded below, in order for $F + W_i = m\mathbb Z \setminus S_i$ to hold, it must be the case that $W_i$ contains infinitely many negative elements; the redefined $W_i$ must therefore also contain infinitely many negative elements.
For the sake of brevity, let us introduce the following shorthand.

(2) Construction. To be upfront, we immediately give the construction of $V$, to which $C$ shall be a MAC. As can be seen, it is quite a mess, and for that reason we will not work with all of $V$ at once; instead we will cut it up into little pieces and consider them one at a time.
More concisely and precisely (i.e. collapsing appropriate terms into unions; here, whenever the upper limit is smaller than the lower limit, e.g. $\bigcup_{j=1}^{0}$, the union is taken to be the empty set by convention, which makes our claims true when $r = 0$): note that the translation factors $(2i-1)f + j - 1$ in the expression $W_{(i-1)f+j} + (2i-1)f + j - 1$ are precisely the members of the intervals of integers in $V$.
Let us denote the shorthand above (here $1 \le j \le f$). Furthermore, define "car blocs" $V_i$ of $V$ for $1 \le i \le k$; extending this notation, let us also define, for $i = k+1$, the "remainder bloc". For short let us write $K := m\mathbb N \cup B$. We can then directly compute each of $C + V_i$, $C + V_{k+1}$, and $C + V_{\mathbb Z}$ by computing the power series via multiplication. Note that, by shifting the index up by one, the first $f$ terms of $(C + V_{i+1})(x)$ will combine with the last $f$ terms of $(C + V_i)(x)$. This is the "cars fitting together" we were talking about. Similarly, we may compute $(C + V_{k+1})(x)$ and $(C + V_{\mathbb Z})(x)$. Having calculated the sums of $C$ with each of the blocs, we may now calculate $(C+V)(x)$ by adding up the above results. It should be well noted that the purpose of the bound $m \ge (2k+1)f + r$ is to ensure that this expression has all like terms combined; in particular, it ensures that all terms of form $q_{K \sqcup (F+W_{i,j})}\, x^{(2i-1)f+j-1}$ have exponents lying in the range $[0, m-1]$.
(4) Deciphering. Having computed $(C+V)(x)$, to check that $C$ is an additive complement to $V$, we must check that the coefficient of each power of $x$ is at least $q_{\mathbb Z}$. We can verify this by casework depending on the range of the exponent. In the first ranges, coefficients are either of form $q_{K \sqcup (F+W_i)}$ or $q_{F \sqcup (K+W_i)}$. In the latter case, since $\mathbb N \subseteq K$ and since $W_i$ contains infinitely many negative elements, we have $\mathbb Z \subseteq K + W_i$, and therefore $q_{\mathbb Z} \le q_{F \sqcup (K+W_i)}$. In the former case, recall $F + W_i = \mathbb Z \setminus S_i$, and since $S_i \subseteq K$, we have $\mathbb Z \subseteq K \sqcup (F + W_i)$, i.e. $q_{\mathbb Z} \le q_{K \sqcup (F+W_i)}$
as well. (iii) $x^{(2k-1)f}$ to $x^{2kf+r-1}$. Coefficients in this range are of the same form as in (ii), so we are done here for the same reasons as in (ii). (iv) $x^{2kf+r}$ to $x^{(2k+1)f-1}$. Coefficients in this range are of form $q_{F \sqcup (K+\mathbb Z)}$. Since clearly $\mathbb Z \subseteq K + \mathbb Z$, we have $q_{\mathbb Z} \le q_{F \sqcup (K+\mathbb Z)}$.
(vi) $x^{(2k+1)f+r}$ to $x^{m-1}$. Coefficients in this range are of form $q_{F+\mathbb Z}$, $q_{K+\mathbb Z}$, or similar, and in each case the coefficient is at least $q_{\mathbb Z}$. This concludes the check that $C + V = \mathbb Z$.
Next let us see that all elements of $C$ have dependent elements in $\mathbb Z$ with respect to $V$. In fact, we claim that every column labeled by $(2i-1)f + j - 1$ (where $1 \le j \le f$ and $(i-1)f + j \le n$) contains dependent elements of every element of $S_{i,j}$ and every element of $F$. Indeed, consider the coefficients: by construction we have $F + W_{i,j} = \mathbb Z \setminus S_{i,j}$ in such a way that there are dependent elements of every element of $F$ of form $\delta$ with $\delta < y_{i,j} \le \min B$, so that in particular $\delta \in K$. Then, in $K \sqcup (F + W_{i,j})$, each such $\delta$ is counted/covered exactly once, so that these are still dependent elements of $F$ in the equation, which gives dependents of all of $F$. Similarly, $S_{i,j}$ is avoided by $F + W_{i,j}$ (which equals $\mathbb Z \setminus S_{i,j}$, i.e. everything but $S_{i,j}$), and is covered exactly once in $K$, so that we have the dependent elements of $S_{i,j}$ also. Taking the union over all appropriate $i, j$, this gives the dependent elements of all of $\bigcup_{i,j} S_{i,j} = m\mathbb N \cup B$, so that all elements of $C$ have dependents. This concludes the check that $C$ is minimal with respect to $C + V = \mathbb Z$.
Lastly, we should remark on the use of the bound $m \ge (2k+1)f + r$. Indeed, if this were to fail, i.e. if $m < (2k+1)f + r$, then the corresponding term in $(C+V)(x)$ would be reduced, in the quotient which defines $\Xi_m$, to $q_{(K \sqcup (F+W_{k+1,r}))+1}\, x^{(2k+1)f+r-1-m}$, which would then "collide" (i.e. combine like terms) with a term from earlier; in this case, we can no longer guarantee that $S_{i,j} \subseteq \mathbb Z$ is covered only once in the coefficient $[x^{(2k+1)f+r-1-m}](C+V)(x)$.
5.2. Proving Theorem 7. We next prove Theorem 7. The idea is similar to that of Theorem 4: we put "passengers" (i.e. the columns with dependents of $C$) into "cars", which we put into a large "garage" (i.e. $\mathbb Z_m$), and if the garage is long enough then all the cars will fit and none of the passengers will overlap.
In Theorem 4, because the problem conditions are specific (e.g. only one column of $F$ and of $B$), each car can fit many passengers. However, because the distribution of points in $C$ is not known in the setting of Theorem 7, this time we will only be able to put one passenger in each car.
Proof of Theorem 7. This theorem is stated in the setting of $\mathbb Z^2$ projecting to $\mathbb Z_m$. However, in proving the theorem we will work directly in $\mathbb Z_m$, taking as assumptions that there is some set cover $\{S_i\}$ of the infinite columns. Each $F_i$ or $K_j$ will reside in the column labeled by $f_i$ or $k_j$ respectively, and we may sometimes write $c_i$ for the label of the column containing $C_i$, the $i$-th column of $C$ counting from the left, for $1 \le i \le t + r$.
As before, the proof will be separated into italicized sections for ease of reading.
(1) Observations. As per the definition of $\ell$, let $Q_/$ be a set with the required property. The discussion immediately following the statement of Theorem 7 shows that such a set exists with the given upper bound $\ell \le \mathrm{outerrange}(C_/) + \mathrm{innerrange}(C_/)$. Then, define $Q$ by its power series. Note well that the set $Q$ corresponding to this power series satisfies the analogous equation, where each $P_i \supseteq \mathbb Z$.
Since the columns $F_1, \dots, F_t$ are finite sets, by the theorem of Kwon we know that for each $i$ there exists $W_i \subseteq \mathbb Z$ such that $F_i + W_i = \mathbb Z$ and $F_i$ is minimal with respect to this equation.
Consider the ordered list of symbols (each standing for one of our sets); for a symbol $S$ in this list, let $\alpha(S)$ be the index at which $S$ appears. For example, $\alpha(F_1) = 1$, $\alpha(F_t) = t$, and $\alpha(S_{r,n_r}) = t + \sum_i n_i$. We will use the shorthand $\alpha_{i,j} := \alpha(i,j) := \alpha(S_{i,j})$, the index where $S_{i,j}$ appears.
In line with this notation, we define the $S_i$ accordingly.

(2) Construction. First we give the construction of the set $V$ to which $C$ shall be a MAC. Rather than give the explicit set construction, we will give the power series $V(x)$, which, as remarked earlier, determines the set $V$. Since $V$ shall be quite unwieldy, we will again break it up into "blocs".
The "car blocs" $V_i$ ($1 \le i \le t + \sum_j n_j = N$) are defined by the formal power series which we give here, for $1 \le i \le t$. Note well that some of the terms in $\sum_{\mu=1}^{t} q_{W_{i,j;\mu}} x^{-f_\mu} + \sum_{\nu=1}^{r} q_{U_{i,j;\nu}} x^{-k_\nu}$ could be zero; for example, if $W_{i,j;\mu} = \emptyset$ then $q_{W_{i,j;\mu}} = 0$.
Similarly to last time, the "filler blocs" will be defined by their power series. In the case that $m = N(\ell+1)$, this power series is defined to be zero.
Then we shall define $V$ to be the union of the blocs, whose power series shall thus be the sum of those written above. The claim then is that $C + V = \mathbb Z$, and that $C$ is minimal with respect to this condition.
(3) Generatingfunctionology. We will compute the power series $(C+V)(x)$ by computing the $(C+V_i)(x)$ and $(C+V_{\mathbb Z})(x)$ and then adding them together.
Note the form taken by the power series in this setup. Let us then compute the sums $C + V_i$ and $C + V_{\mathbb Z}$ by computing the formal power series thereof, where we have defined the summands $\mathrm{Fil}$ (Fil standing for Filler), $\mathrm{Dep}$ (Dep standing for Dependents), and $\mathrm{Err}$ (Err standing for Error). Note well the exponents of $x$ appearing in $\mathrm{Fil}_i(x)$. Recall that $P_i \supseteq \mathbb Z$, and note well that all terms in the last summand $\mathrm{Err}_i(x)$ are constant multiples of the indicated monomials; also note well the largest power of $x$ appearing in $\mathrm{Err}_i(x)$. Again note well the exponents appearing in $\mathrm{Fil}_{\alpha(i,j)}(x)$. As before, we remark that the coefficients in $\mathrm{Fil}_{\alpha(i,j)}(x)$ have $P_\mu \supseteq \mathbb Z$, and we must note well that all the terms in the last summand $\mathrm{Err}_{\alpha(i,j)}(x)$ are multiples of monomials whose exponents are shifted by $-c_\mu + c_\nu$, where for example $c_\mu = k_i$ for the first summand in $\mathrm{Err}_{\alpha(i,j)}(x)$ and $c_\nu \neq c_\mu$. In particular this means $[x^{\ell\alpha_{i,j}+\alpha_{i,j}-1}]\,\mathrm{Err}_{\alpha(i,j)}(x) = 0$. Also note well that the largest power of $x$ appearing in $\mathrm{Err}_{\alpha(i,j)}(x)$ is at most (in the sense of comparing the exponents) $x^{\ell\alpha_{i,j}+\alpha_{i,j}-1-\min C_/ + \max C_/}$, which satisfies the required bound. Moreover, the smallest power of $x$ appearing in $\mathrm{Err}_{\alpha(i,j)}(x)$ is at least $\ell\alpha_{i,j} + \alpha_{i,j} - 1 - \max C_/ + \min C_/ \ge \ell(\alpha_{i,j}-1) + \alpha_{i,j} - 1$, for the same reason. These two inequalities ensure that when we consider the sum $(C+V_{\alpha(i,j)})(x) + (C+V_{\alpha(i,j)+1})(x)$, the summand $\mathrm{Err}_{\alpha(i,j)}(x)$ from $(C+V_{\alpha(i,j)})(x)$ will combine with the terms $\mathrm{Fil}_{\alpha(i,j)}(x)$ and $\mathrm{Fil}_{\alpha(i,j)+1}(x)$, and will not collide (i.e., after using the relation $x^m = q_1$ so that all exponents are in the range $[0, m-1]$, the set of exponents appearing in the former is disjoint from the set of exponents appearing in the latter) with the term $\mathrm{Dep}_{\alpha(i,j)+1}(x)$.
Lastly, let us compute $(C + V_{\mathbb Z})(x)$, where the sets $P_{N(\ell+1)}, \dots, P_{m+\ell-2}$ are defined so that the last equality holds; i.e., these new $P_{N(\ell+1)}, \dots, P_{m+\ell-2}$ are obtained by distributing terms appropriately. Also note well that, since $P_i \supseteq \mathbb Z$ for $i \in [0, \ell-1]$, the analogous containment holds for the new sets. Then, adding everything together, we obtain $(C+V)(x)$. Let us decipher what this means now.
(4) Deciphering. Note that, given the bound on $m$, the term $\mathrm{Dep}_N(x)$ (and therefore all $\mathrm{Dep}_\alpha(x)$ for $\alpha < N$) will not collide, in the quotient defining $\Xi_m$, with the earlier terms in the sum; i.e., the only place $x^{\ell\alpha+\alpha-1}$ appears with nonzero coefficient is in $\mathrm{Dep}_\alpha(x)$.
Let us first check that $C + V = \mathbb Z$. Because of the presence of the $\mathrm{Fil}_\alpha(x)$, the coefficients over the filler ranges are at least $q_{\mathbb Z}$. Note that the intervals $[\ell(\alpha-1)+\alpha-1,\ \ell\alpha+\alpha-2]$ and $[\ell\alpha+\alpha,\ \ell(\alpha+1)+\alpha-1]$ are separated by $\ell\alpha+\alpha-1$, and this coefficient we can readily see is $q_{K_i \sqcup \bigsqcup_\mu (F_\mu + W_{i,j;\mu}) \sqcup \bigsqcup_\nu (K_\nu + U_{i,j;\nu})} \ge q_{\mathbb Z}$ in the second case, where $\alpha = \alpha(i,j) > t$, and at least $q_{\mathbb Z}$ in the first case as well. Hence the coefficient in front of $x$ raised to any power in the range $[0, N\ell + N - 1]$ will be greater than or equal to $q_{\mathbb Z}$. Lastly, note that for $N\ell + N \le \mu \le m-1$ the same bound holds. This concludes the check that $C + V = \mathbb Z$. Next let us check that $C$ is minimal with respect to $C + V = \mathbb Z$. To do this we will see that all elements of $C$ have dependents in $\mathbb Z$. In fact, we claim that there are dependents of all of $F_i$ in the $(\ell i + i - 1)$-th column, and that there are dependents of all of $S_{i,j}$ in the $(\ell\alpha_{i,j} + \alpha_{i,j} - 1)$-th column.
But this is evident from the calculations we did earlier. Indeed, for $1 \le i \le t$ the relevant coefficient contains $F_i + W_i$, and by construction $F_i$ is minimal with respect to $F_i + W_i = \mathbb Z$, which implies that every element of $F_i$ has a dependent in the $(\ell i + i - 1)$-th column. Similarly, for $t + 1 \le \alpha(i,j) \le N$, $[x^{\ell\alpha_{i,j}+\alpha_{i,j}-1}](C+V)(x) = q_{K_i \sqcup \bigsqcup_{\mu=1}^{t} (F_\mu + W_{i,j;\mu}) \sqcup \bigsqcup_\nu (K_\nu + U_{i,j;\nu})}$, and $S_{i,j} \subseteq K_i$, so that $S_{i,j}$ is covered exactly once in the expression $K_i \sqcup \bigsqcup_\mu (F_\mu + W_{i,j;\mu}) \sqcup \bigsqcup_\nu (K_\nu + U_{i,j;\nu})$, so that there are dependent elements of $S_{i,j}$ in column $\ell\alpha_{i,j} + \alpha_{i,j} - 1$.
Unioned over all $i, j$, this gives dependent elements for all of $\bigcup_{i,j} S_{i,j} = K$ and $\bigcup_i F_i = F$, so that there are dependent elements for all of $C$, which concludes our check that $C$ is minimal.
6. Comments, Questions, and Further Directions

6.1. Variations of Main Results. In our proof of Theorem 4, note that the place where we crucially used that $B$ is finite is in saying that $m\mathbb N \cup B$ is bounded below, and therefore $S_i$ is bounded below, so that if $F + W_i = m\mathbb Z \setminus S_i$, then necessarily $F + W_i$ contains an infinite set extending in the negative direction, so that the BL lemma applies. If we drop the assumption that $B$ is finite, this is no longer guaranteed, and we can no longer ensure that $F + W_i = m\mathbb Z \setminus S_i$ in a way that gives the dependents of $F$. However, we can sidestep this issue by dedicating a separate column to the dependents of $F$: since $F$ is finite, by the theorem of Kwon it arises as a MAC in $\mathbb Z$, and we can instead use $F + W = \mathbb Z$ to give the dependent elements of $F$. Modifying the proof appropriately so that there are now $n+1$ columns of dependents rather than $n$, we obtain that such a set arises as a MAC.
Note that this bound is the same as the one in Theorem 4, except that $n$ is replaced with $n+1$, where the $+1$ accounts for the extra column dedicated to $F$.
One might then ask what happens if the $|B| < \infty$ condition is weakened in the general case. We do not see an immediate solution: Theorem 6 was derived from Theorem 7 by taking the worst-case scenario, in which we dedicate a column to every member of $B$, but if $B$ is infinite then this approach no longer works. We hence pose the following question:

Question. Is some variant of Theorem 6 true for infinite $B$? That is, even if $B$ is infinite, is it still true that every eventually periodic set is eventually a MAC?

6.2. Regarding the Formal Power Series. We remark that, in the ring $Q$, the only units are $\pm q_0$.
Since no two nonempty sets $A, B$ satisfy $A \oplus B = \emptyset$, no two nonzero $q_A, q_B$ satisfy $q_A q_B = q_\emptyset = 0$, so that $Q$ is an integral domain. We can therefore consider the fraction field $\operatorname{Frac} Q$, as well as generating series $(\operatorname{Frac} Q)[[x]]$ with coefficients in this fraction field. The classical result on elementary formal power series tells us:

Observation. A power series $S(x) \in (\operatorname{Frac} Q)[[x]]$ has a uniquely determined multiplicative inverse in this ring if and only if $[x^0]S(x) \neq 0$.
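The classical inversion underlying this observation can be illustrated by a short Python sketch over the rationals (standing in for the field $\operatorname{Frac} Q$; the function name is our own): if the constant term is nonzero, the coefficients of the inverse are uniquely determined by the standard recurrence.

```python
from fractions import Fraction

def series_inverse(a, n):
    """First n coefficients of the inverse of a power series with coefficients a,
    assuming a[0] != 0: solve (a*b)[0] = 1 and (a*b)[k] = 0 for k >= 1, which
    forces b[k] = -(1/a[0]) * sum_{j>=1} a[j] * b[k-j]."""
    assert a[0] != 0
    b = [Fraction(1) / a[0]]
    for k in range(1, n):
        s = sum((Fraction(a[j]) * b[k - j] for j in range(1, min(k, len(a) - 1) + 1)),
                Fraction(0))
        b.append(-s / a[0])
    return b

# The inverse of 1 - x is the geometric series 1 + x + x^2 + ...
inv = series_inverse([1, -1], 6)
```

Each coefficient of the inverse is forced by the previous ones, which is exactly why the inverse is unique in the classical setting.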
For the same reason, $[x^0]S(x) \neq 0$ implies that $S(x) \in (\operatorname{Frac} Q)[[x]]/\langle x^m - q_1 \rangle$ has a multiplicative inverse. In fact, this is true even if $[x^0]S(x) = 0$, as long as $S(x)$ is not identically zero. Indeed, in the ring $(\operatorname{Frac} Q)[[x]]/\langle x^m - q_1 \rangle$, note that $x^i$ has the multiplicative inverse $q_1^{-1} x^{m-i}$; then, given an $S(x)$ with zero constant term, we can factor out the smallest power of $x$. Hence:

Observation. Any nonzero power series $0 \neq S(x) \in (\operatorname{Frac} Q)[[x]]/\langle x^m - q_1 \rangle$ has a multiplicative inverse in this ring.
One might wonder whether this multiplicative inverse is unique, as it is in the classical setting of formal series over fields. Expanding the equation $S(x)T(x) = q_0 x^0$ coefficient by coefficient yields a linear system.
(The presence of the extra $q_1$'s on the upper triangle comes from the fact that $x^m = q_1$.) Then a multiplicative inverse in $(\operatorname{Frac} Q)[[x]]/\langle x^m - q_1 \rangle$ is unique if and only if this Toeplitz matrix on the left (which has entries in the strange field $\operatorname{Frac} Q$) is invertible. The author wonders if some type of complexification is possible, so that the Gershgorin circle theorem (or some appropriate variant) becomes applicable. We suppose it is possible that this matrix is always invertible. For example, in the case $m = 2$, the determinant of this matrix is $\det T = q_{A_0 \oplus A_0} - q_{A_1 \oplus A_1 \oplus 1}$, which we see cannot be zero, since $A_0 \oplus A_0 = A_1 \oplus A_1 \oplus 1$ is impossible: the smallest number in $A_0 \oplus A_0$ (which must be the sum of the smallest number in $A_0$ with itself) must be even, while the smallest number in $A_1 \oplus A_1 \oplus 1$ must be odd. Hence:

Observation. In the case $m = 2$, all multiplicative inverses in $(\operatorname{Frac} Q)[[x]]/\langle x^m - q_1 \rangle$ are unique.
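The parity argument for $m = 2$ is easy to probe numerically; the following sketch (our own illustration, not part of the text) draws random finite integer sets and confirms that $\min(A_0 \oplus A_0)$ is always even while $\min(A_1 \oplus A_1 \oplus 1)$ is always odd, so the two Minkowski sums never coincide.

```python
import random

def mink(*sets):
    """Minkowski sum of finitely many finite integer sets."""
    out = {0}
    for S in sets:
        out = {a + b for a in out for b in S}
    return out

random.seed(0)
for _ in range(500):
    A0 = set(random.sample(range(-50, 50), random.randint(1, 8)))
    A1 = set(random.sample(range(-50, 50), random.randint(1, 8)))
    lhs = min(mink(A0, A0))        # equals 2*min(A0): always even
    rhs = min(mink(A1, A1, {1}))   # equals 2*min(A1) + 1: always odd
    assert lhs % 2 == 0 and rhs % 2 == 1
    assert mink(A0, A0) != mink(A1, A1, {1})
```

Of course this is only a spot check of the (complete) parity proof given above.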
But even in the case $m = 3$ this becomes more unwieldy. Indeed, for $m = 3$ we can treat the determinant in the same way as before: letting the minimal element of $A_i$ be $a_i$ with multiplicity $m_i$, we see that the minimum element of the left-hand side is $\min\left(A_0^{\oplus 3} \sqcup (A_1^{\oplus 3} \oplus 1) \sqcup (A_2^{\oplus 3} \oplus 2)\right) = \min\{3a_0,\ 3a_1+1,\ 3a_2+2\}$, with multiplicity $m_i^3$; on the other hand, the minimal element of the right-hand side is $a_0 + a_1 + a_2 + 1$. For $\det T = 0$ to be true, the right-hand-side minimum must agree with the left-hand-side minimum, which is one of the stated three values. If $a_0 + a_1 + a_2 + 1 = 3a_0$ is the common minimum of both sides, then $a_1 + a_2 + 1 = 2a_0$, i.e. $1 = (a_0 - a_1) + (a_0 - a_2)$, so that $a_0$ must be greater than one of $a_1$ and $a_2$, contradicting the minimality of $3a_0$. If $a_0 + a_1 + a_2 + 1 = 3a_1 + 1$ is the common minimum of both sides, then $a_0 + a_2 = 2a_1$, i.e. $0 = (a_1 - a_0) + (a_1 - a_2)$, which also contradicts the minimality of $3a_1 + 1$. Lastly, if $a_0 + a_1 + a_2 + 1 = 3a_2 + 2$ is the common minimum, then $a_0 + a_1 = 2a_2 + 1$, i.e. $1 = (a_0 - a_2) + (a_1 - a_2)$, also contradicting the minimality of $3a_2 + 2$. Hence the two minima cannot possibly be equal, and so we conclude $\det T \neq 0$:

Proposition 9. For $m \le 3$, all multiplicative inverses for nonzero elements in $(\operatorname{Frac} Q)[[x]]/\langle x^m - q_1 \rangle$ exist and are unique.
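The three-way minimum comparison in the $m = 3$ argument can likewise be checked exhaustively over a small window; here $a_i = \min A_i$ as in the text, and the check (our own illustration) is that the two candidate minima never agree.

```python
# min of A_0^{+3} ⊔ (A_1^{+3} ⊕ 1) ⊔ (A_2^{+3} ⊕ 2) versus min of the right-hand side,
# written purely in terms of the minima a_i = min A_i: the two can never agree.
for a0 in range(-6, 7):
    for a1 in range(-6, 7):
        for a2 in range(-6, 7):
            lhs = min(3 * a0, 3 * a1 + 1, 3 * a2 + 2)
            rhs = a0 + a1 + a2 + 1
            assert lhs != rhs
```

This is only a finite window, of course; the case analysis above proves the inequality for all integers.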
As a remark, it is not enough to check the sizes of both sides (with multiplicity taken into account, of course), since the comparison of sizes is precisely AM-GM, and equality is possible. However, one could try to use this idea of comparing sizes to derive sufficient conditions for unique inverses. Taking the sizes of the entries in the Toeplitz matrix, we obtain that if the determinant of the resulting symmetric Toeplitz matrix of sizes is nonzero, then $A(x)$ has a unique multiplicative inverse in $(\operatorname{Frac} Q)[[x]]/\langle x^m - q_1 \rangle$. This matrix has integer entries, so we may now apply familiar results; for example, the Gershgorin circle theorem [G] applies. The case $m = 3$ might have given us hope that there is some pattern to be had, but $\det T$ for $m = 4$ is
$\det T = q_{A_0}^4 - 4 q_{A_0}^2 q_{A_1} q_{A_3} q_1 - 2 q_{A_0}^2 q_{A_2}^2 q_1 + 4 q_{A_0} q_{A_1}^2 q_{A_2} q_1 + 4 q_{A_0} q_{A_2} q_{A_3}^2 q_2 - q_{A_1}^4 q_1 + 2 q_{A_1}^2 q_{A_3}^2 q_2 - 4 q_{A_1} q_{A_2}^2 q_{A_3} q_2 + q_{A_2}^4 q_2 - q_{A_3}^4 q_3,$
which is far messier. We leave open the following question:

Question. Under what general circumstances (perhaps always) are multiplicative inverses unique in $(\operatorname{Frac} Q)[[x]]/\langle x^m - q_1 \rangle$?

As a fun example, we can consider the geometric series. When taken to $\Xi_m$ (with $m = 2$), the left-hand side is $1 + x + x^2 + \cdots = x^0 + x^1 + q_1 x^0 + q_1 x^1 + q_2 x^0 + q_2 x^1 + \cdots = (q_0 + q_1 + q_2 + \cdots)x^0 + (q_0 + q_1 + q_2 + \cdots)x^1 = q_{\mathbb N} x^0 + q_{\mathbb N} x^1 = (\mathbb N)(x)$.
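The reduction in this example is mechanical enough to script; the following sketch (an illustration, with a function name of our own) reduces a truncation of $1 + x + x^2 + \cdots$ in $\Xi_m$ using $x^{qm+j} = q_{\{q\}} x^j$, and each column indeed accumulates the singletons $\{0\}, \{1\}, \{2\}, \dots$, i.e. a truncation of $q_{\mathbb N}$.

```python
from collections import Counter

def reduce_geometric(n_terms, m):
    """Reduce 1 + x + ... + x^(n_terms-1) modulo x^m = q_1: the term x^(q*m+j)
    becomes the singleton coefficient q_{{q}} sitting in column j."""
    cols = [Counter() for _ in range(m)]
    for n in range(n_terms):
        q, j = divmod(n, m)
        cols[j][q] += 1   # one copy of the singleton {q} lands in column j
    return cols

cols = reduce_geometric(10, 2)
```

Both columns carry the same truncated copy of $q_{\mathbb N}$, matching the identity $1 + x + x^2 + \cdots = q_{\mathbb N} x^0 + q_{\mathbb N} x^1$ computed above for $m = 2$.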
19 This is because $|A \oplus 1| = |A|$. 20 As a reminder, this theorem states that, for a complex matrix $A = (a_{ij})$, every eigenvalue of $A$ lies in at least one of the discs $B_{\sum_{j \neq i} |a_{ij}|}(a_{ii}) \subset \mathbb C$.
On the other hand, $1 - x = q_0 x^0 - q_0 x^1 = (\{0\})(x) - (\{1\})(x) = (\{0\} - \{1\})(x)$, so that the classical identity $(1-x)(1 + x + x^2 + \cdots) = 1$ reads $(\{0\} - \{1\})(x) \cdot (\mathbb N)(x) = q_0 x^0$. Since we have seen that multiplication of these formal power series corresponds to the (disjoint) Minkowski sum and addition corresponds to the disjoint union, one may ask what functional composition corresponds to. The answer is not clear to us, and we leave this open:

Question. What does functional composition of these formal power series correspond to set-theoretically?

6.3. Vaguer Questions and Directions. In the construction of these formal power series, our idea was that the information of a set in $\mathbb Z$, cut into its $m$ columns, is the same as that of an element of $\mathcal P(\mathbb Z)^m$, that is, the information of $m$ elements of the power set of $\mathbb Z$. The information of $\mathcal P(\mathbb Z)$ is then encoded in the coefficients of our formal power series, and the exponent of $x$ indicates which position this set occupies among our $m$ elements of $\mathcal P(\mathbb Z)$. One might ask how to generalize this idea: in general, one might roughly have different exponential symbols $\exp_i(S_i) \in Q_i$ which do not combine, coefficients in a power series might look like products $\prod_i \exp_i(S_i)$ of these symbols, and each term might have products of powers of variables attached, of form $\prod_i x_i^{n_i}$. We suspect these formal power series will only make sense for finitely generated abelian groups.
Direction. To further investigate and make precise these generalized formal power series, as well as their combinatorial set-theoretic interpretations.
In a similar vein, looking at our setup for the statement of Theorem 7, our idea was to take a collection of points in $\mathbb Z^2$ satisfying some conditions, quotient out by some sequence of "increasing" submodules (in our case "increasing" meant $\mathbb Z \cdot (m, -1)$ with $m$ increasing), and ask at what point the image becomes a MAC. But why this particular sequence of submodules? We suspect that, for some appropriately defined family of "increasing" submodules, Theorem 7 is still true.
Question. How should one define these families of "increasing" submodules so that some appropriate variant of Theorem 7 is still true? And how ought we to modify the formal power series in this framework?
One can take these questions even further: why should $\mathbb Z^2$ be special?
Question. What if instead we considered a collection of points in some other (finitely generated) abelian group, quotiented out by some sequence of "increasing" submodules (we would guess of the same rank), and asked at what point the image becomes a MAC? And how ought we to define the formal power series in this framework?

Theorem 8. Let $|B_{/m}| = |F_{/m}| = |A| = 1$ with $F_{/m} = \{\bar f\}$, without loss of generality let $A = \{0\}$, and let $B$ be infinite. Then the existence of a suitable set cover $\{S_i\}$ implies that $C$ arises as a MAC.

Suppose first that $m\mathbb N \cup B \neq m\mathbb Z$. Then the displayed construction realizes $C$ as a MAC. Indeed, in this expression, $C + \{0\}$ ensures that $m\mathbb N \cup B$ has dependent elements, and $C + W$ ensures that $\{f\}$ has dependent elements. It is easy to see that this covers all of $\mathbb Z$, as well as that the $0$-th column contains dependent elements for all of $C$.

Even if $m\mathbb N \cup B = m\mathbb Z$, as long as $m \ge 3$ we can still consider a similar construction. The dependent elements of $m\mathbb N \cup B$ are given by the translates by $i + im$, and the dependent elements of $\{f\}$ are given by the translates by $m\mathbb Z + m - 1 - f$. However, if both $m\mathbb N \cup B = m\mathbb Z$ and $m = 2$, it is easy to see by inspection that this set cannot be a MAC; we cannot cover all of $\mathbb Z$ while maintaining a dependent element of $\{f\}$, for instance.