Configurations of points and lines for perpendicular bisectors of convex cyclic polygons

We characterize the topological configurations of points and lines that may arise when placing n points on a circle and drawing the n perpendicular bisectors of the sides of the corresponding convex cyclic n-gon. We also provide exact and asymptotic formulas describing a random realizable configuration, obtained either by sampling the points uniformly at random on the circle or by sampling a realizable configuration uniformly at random.


Introduction
Let n ≥ 3 and let P_1, . . . , P_n be n distinct points on the unit circle, arranged in positive cyclic order. For all i between 1 and n, denote by L_i the perpendicular bisector of the segment [P_i, P_{i+1}], with indices taken modulo n. These n lines all go through the center of the circle. We assume that the points are in generic position, which implies in particular that these lines are all distinct and that no point lies on a line. Hence, the n lines divide the plane into 2n regions. Which circular arrangements of points in the 2n regions can be realized?
Our interest in this question comes from our work [6] on the flip property for s-embeddings, a geometric integrable system. We were led to consider configurations of three hyperbola branches B_1, B_2, B_3 and three points P_1, P_2, P_3 such that the foci of B_i are P_{i−1} and P_{i+1}, with 1 ≤ i ≤ 3 and indices taken modulo 3. In the limit when the hyperbola branches degenerate to the perpendicular bisectors of their foci, we recover the problem described in the previous paragraph with n = 3.
The rest of this introduction consists of a presentation of our results and is organized as follows. In Subsection 1.1 we answer the question from the first paragraph by characterizing the topological configurations of points and lines arising from the n perpendicular bisectors of a convex cyclic n-gon.
We also enumerate such configurations. A natural question to ask is what a typical realizable configuration looks like. We tackle this question under two angles: configurations coming from n uniform random points on the circle (Subsection 1.2) or configurations chosen uniformly at random among all realizable configurations (Subsection 1.3). In Subsection 1.4 we compare these two approaches and we state some open questions.

Deterministic results
We first characterize the realizable configurations. We number in counterclockwise order the 2n regions defined by the perpendicular bisectors, the first one being the region that contains the point 1 of the circle. For every 1 ≤ i ≤ 2n, we set v_i to be the number of points inside the ith region. It is not hard to see that each region contains at most one point, since two consecutive points are separated by the perpendicular bisector of the segment connecting them. The word v = (v_1, . . . , v_{2n}) ∈ {0, 1}^{2n} is called the occupancy word of the collection of points P_1, . . . , P_n. In order to characterize the occupancy words that may arise as one lets the positions of the points vary, we introduce the notion of signature of a word in {0, 1}^{2n}. If v = (v_1, . . . , v_{2n}) ∈ {0, 1}^{2n} is an arbitrary word, its signature σ = (σ_1, . . . , σ_n) ∈ {0, 1, 2}^n is defined by σ_i = v_i + v_{i+n} for every 1 ≤ i ≤ n. See an example in Figure 1.
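For concreteness, the occupancy word and signature can be computed numerically. The sketch below (in Python; the function names are ours, not taken from our supplementary SageMath notebook) works on a circle of unit circumference, where the perpendicular bisector of the chord [P_i, P_{i+1}] meets the circle at the arc midpoint of p_i and p_{i+1} and at the antipode of that midpoint; the resulting 2n cut points delimit the 2n regions. We start the numbering at the region containing the first point, which amounts to reading the word up to a cyclic shift.

```python
import bisect

def occupancy_word(points):
    """Occupancy word of n generic points on a circle of unit circumference."""
    n = len(points)
    pts = sorted(p % 1.0 for p in points)
    # arc midpoints of consecutive points; the bisector of [P_i, P_{i+1}] cuts
    # the circle at this midpoint and at its antipode
    mids = [(pts[i] + pts[(i + 1) % n] + (1.0 if i == n - 1 else 0.0)) / 2
            for i in range(n)]
    cuts = sorted({m % 1.0 for m in mids} | {(m + 0.5) % 1.0 for m in mids})
    def npts(a, b):  # number of points strictly inside the arc from a to b
        return sum(1 for p in pts if a < p < b or a < p + 1.0 < b)
    word = [npts(cuts[i], cuts[i + 1] if i + 1 < 2 * n else cuts[0] + 1.0)
            for i in range(2 * n)]
    # start the numbering at the region containing the first point
    k = (bisect.bisect_left(cuts, pts[0]) - 1) % (2 * n)
    return word[k:] + word[:k]

def signature(word):
    n = len(word) // 2
    return [word[i] + word[i + n] for i in range(n)]

w = occupancy_word([0.1, 0.35, 0.7])
print(w, signature(w))   # [1, 1, 0, 1, 0, 0] [2, 1, 0]
```

Rotating the configuration or changing the starting region only shifts the word cyclically, which leaves the signature unchanged up to a cyclic shift.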
We now introduce a notion of discrete circular interval. Let N ≥ 1 be an integer and let 1 ≤ i, j ≤ N be two integers. We define I_N(i, j) to be the set of indices lying strictly between i and j when going cyclically from i to j, that is, the indices i + 1, i + 2, . . . , j − 1 taken modulo N. In particular, when i = j we have I_N(i, i) = {1, . . . , N} \ {i}.
A word s = (s_1, . . . , s_n) ∈ {0, 1, 2}^n is called interlacing if it satisfies the following two properties:
1. there exist two integers 1 ≤ i, j ≤ n such that s_i = 0 and s_j = 2;
2. for every pair of integers (i, j) with 1 ≤ i, j ≤ n such that s_i = s_j = 0 and s_k ≠ 0 for all k ∈ I_n(i, j), there exists a unique k_0 ∈ I_n(i, j) such that s_{k_0} = 2.
Figure 1: An example with n = 9. The black dots correspond to the points P_1, . . . , P_9 and the white dots correspond to the antipodes of the black dots.
Note that an interlacing word takes the values 0 and 2 an equal number of times. We can now characterize the words that may arise as the occupancy word of some collection of points P_1, . . . , P_n. We call such words realizable.
Theorem 1. A word u = (u_1, . . . , u_{2n}) ∈ {0, 1}^{2n} is realizable if and only if its signature is interlacing.
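The interlacing condition can be tested mechanically; below is a direct transcription in Python (a sketch; the helper name is ours). Zeros are listed in cyclic order, and each cyclic gap between consecutive zeros (including the full-circle gap when there is a single zero, matching the convention I_n(i, i) = {1, . . . , n} \ {i}) must contain exactly one 2.

```python
def is_interlacing(s):
    """Test whether a word s in {0,1,2}^n is interlacing."""
    n = len(s)
    if 0 not in s or 2 not in s:          # condition 1: a 0 and a 2 both occur
        return False
    zeros = [i for i, x in enumerate(s) if x == 0]
    # condition 2: each cyclic gap between consecutive zeros holds exactly one 2
    for a, b in zip(zeros, zeros[1:] + [zeros[0] + n]):
        if [s[k % n] for k in range(a + 1, b)].count(2) != 1:
            return False
    return True

tests = [(2, 1, 0), (0, 1, 2, 1), (0, 2, 0, 2), (0, 0, 2, 2), (0, 2, 2), (1, 1, 1)]
print([is_interlacing(s) for s in tests])  # [True, True, True, False, False, False]
```

In particular, two adjacent zeros (as in (0, 0, 2, 2)) are rejected, since the gap between them contains no 2.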
In order to get rid of the arbitrary choice of the position on the circle at which we start reading the word, as well as the choice of an orientation of the circle, we will consider bracelets, which are equivalence classes of words considered up to cyclic shifts and reversal (see e.g. [3]). We denote by B n (resp. W n ) the set of realizable bracelets (resp. words) of length 2n.
Theorem 1 seems to be new even in the case n = 3, where we can state a finer version of this result. If A and B are two points in the plane, we denote by |AB| the Euclidean distance between A and B.
The topological configuration on the right is impossible to achieve with the lines being the perpendicular bisectors of the segments.

Proposition 2.
Let A, B, C be three points in the plane with |AB| < |BC| < |CA|. Then the three perpendicular bisectors of the sides of the triangle ABC divide the plane into six regions satisfying the following properties:
• A and B lie in two consecutive regions;
• the regions containing B and C are separated by one empty region;
• the regions containing C and A are separated by two empty regions.
We now provide an enumerative result for realizable words and bracelets. It follows from Theorem 1 and is to be compared with the total number of words of length 2n containing n ones and n zeros, which is known to be (2n choose n) = 4^{n(1+o(1))} by Stirling's formula.
Corollary 3. The number of realizable words is #W_n = 3^n − 2^{n+1} + 1. As a consequence, the exponential growth rate of the number of realizable bracelets is equal to 3, that is, #B_n = 3^{n(1+o(1))}.
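This count is small enough to be checked by brute force for the first few values of n: enumerate {0, 1}^{2n}, keep the words whose signature is interlacing, and compare with the closed form 3^n − 2^{n+1} + 1 that can be read off from the proof in Subsection 2.3 (a sketch; it runs in seconds for n ≤ 6):

```python
from itertools import product

def is_interlacing(s):
    n = len(s)
    if 0 not in s or 2 not in s:
        return False
    zeros = [i for i, x in enumerate(s) if x == 0]
    return all([s[k % n] for k in range(a + 1, b)].count(2) == 1
               for a, b in zip(zeros, zeros[1:] + [zeros[0] + n]))

def count_realizable_words(n):
    """Number of words in {0,1}^{2n} whose signature is interlacing."""
    return sum(1 for v in product((0, 1), repeat=2 * n)
               if is_interlacing([v[i] + v[i + n] for i in range(n)]))

print([count_realizable_words(n) for n in range(3, 7)])   # [12, 50, 180, 602]
for n in range(3, 7):
    assert count_realizable_words(n) == 3**n - 2**(n + 1) + 1
```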
The sequence (#W_n)_{n≥1} is listed in the Online Encyclopedia of Integer Sequences [9] (OEIS) under the number A028243 and corresponds to twice the Stirling numbers of the second kind counting partitions into three blocks.
Table 1: First terms of the sequence (#B_n)_{n≥3} counting the number of realizable bracelets.
The sequence (#B_n)_{n≥3} was absent from the OEIS before our work, so we added it under the number A350280, computing the first few terms using a brute-force algorithm, see Table 1. After this addition, OEIS editor A. Howroyd was able to compute many more terms by finding an explicit formula for #B_n, which follows from Theorem 1, Corollary 3 and Burnside's lemma. For completeness, we state this formula here without proof. Let φ denote Euler's totient function.
The formula takes different forms depending on whether n is odd or even. Theorem 1 and Corollary 3 are proved in Section 2.
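The first terms of #B_n can be recomputed with a short brute-force search, reducing each realizable word to a canonical representative of its orbit under cyclic shifts and reversal (a sketch, independent of the SageMath notebook; for n = 3 it finds a single bracelet, consistently with Proposition 2, which pins down the unique configuration of a scalene triangle):

```python
from itertools import product

def is_interlacing(s):
    n = len(s)
    if 0 not in s or 2 not in s:
        return False
    zeros = [i for i, x in enumerate(s) if x == 0]
    return all([s[k % n] for k in range(a + 1, b)].count(2) == 1
               for a, b in zip(zeros, zeros[1:] + [zeros[0] + n]))

def canonical(word):
    """Lexicographically smallest word in the orbit under cyclic shifts and reversal."""
    orbit = [word[i:] + word[:i] for i in range(len(word))]
    rev = word[::-1]
    orbit += [rev[i:] + rev[:i] for i in range(len(word))]
    return min(orbit)

def realizable_bracelets(n):
    reps = set()
    for v in product((0, 1), repeat=2 * n):
        if is_interlacing(tuple(v[i] + v[i + n] for i in range(n))):
            reps.add(canonical(v))
    return reps

print([len(realizable_bracelets(n)) for n in range(3, 6)])
```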

Uniformly random points on the circle
We now turn to the first of two natural models for sampling random realizable words or bracelets. Let n ≥ 3. Select n i.i.d. points uniformly at random on the circle and consider the realizable word u obtained from them. For any bracelet b ∈ B_n, we denote by P(b) the probability that u belongs to the equivalence class b.
Proposition 5. For every n ≥ 3 and b ∈ B n , P(b) is a rational number.
The exact computation of P(b) for a fixed b is possible, but the technique we use requires computing a number of integrals which is exponential in the number of occurrences of the letter 2 in the signature of b. We provide such a computation in a special case. For every n ≥ 3, define the bracelet b_n ∈ B_n to be the equivalence class of (1, 0, 1, . . . , 1, 0, . . . , 0), the word composed of a 1, followed by a 0, then n − 1 1's and finally n − 1 0's. Proposition 6 provides an exact formula for P(b_n) for every n ≥ 3.
Beyond the probability of individual bracelets, we provide simple exact formulas for the expectations of some statistics. For every k ∈ {0, 1, 2}, a region in a configuration is said to be of type k if the total number of points contained in the union of the region with its antipodal region is k. In other words, a region of type k corresponds to a value of k in the signature of the word describing the configuration.
For every k ∈ {0, 1, 2} and n ≥ 3, denote by H_{k,n} the random variable defined as the number of regions of type k in a configuration associated with n i.i.d. uniform random points. It follows from the interlacing condition of Theorem 1 that H_{0,n} = H_{2,n}. Since there are 2n regions in total, we also have H_{1,n} = 2n − 2H_{2,n}. Theorem 7 provides an exact formula for the expectation of H_{k,n} for each k ∈ {0, 1, 2}. We also provide a formula for the expected total length of the arcs corresponding to regions of type k. Here we rescale the distances on the circle so that the total length of the circle is 1.
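Both identities can be checked on simulated configurations. In the sketch below (helper name ours), H_{k,n} is computed as twice the number of occurrences of the letter k in the signature, since the two regions of an antipodal pair have the same type:

```python
import random

def signature_of_points(points):
    """Signature of the occupancy word of generic points on a unit-circumference circle."""
    n = len(points)
    pts = sorted(p % 1.0 for p in points)
    mids = [(pts[i] + pts[(i + 1) % n] + (1.0 if i == n - 1 else 0.0)) / 2
            for i in range(n)]
    cuts = sorted({m % 1.0 for m in mids} | {(m + 0.5) % 1.0 for m in mids})
    def npts(a, b):
        return sum(1 for p in pts if a < p < b or a < p + 1.0 < b)
    word = [npts(cuts[i], cuts[i + 1] if i + 1 < 2 * n else cuts[0] + 1.0)
            for i in range(2 * n)]
    return [word[i] + word[i + n] for i in range(n)]

random.seed(42)
n = 25
for _ in range(200):
    sig = signature_of_points([random.random() for _ in range(n)])
    H = [2 * sig.count(k) for k in range(3)]  # H_k = twice the number of letters k
    assert H[0] == H[2]            # as many regions of type 0 as of type 2
    assert H[1] == 2 * n - 2 * H[2]
print("identities verified on 200 samples")
```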
For every k ∈ {0, 1, 2} and n ≥ 3, denote by L k,n the random variable defined as the sum of the lengths of all the regions of type k.
Each region either contains a single point or is empty. For n points on the circle, there are exactly n occupied regions and n empty regions. Denote by L_{e,n} the sum of the lengths of the empty regions. Since antipodal regions have the same length and exactly one region of each type-1 pair is occupied, we have L_{e,n} = L_{0,n} + L_{1,n}/2, and we deduce Corollary 9 from Theorem 8. The quantity E[L_{e,n}] can be interpreted as the probability of landing in an empty region when selecting a location uniformly at random on a circle with n uniformly random points.
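The identity L_{e,n} = L_{0,n} + L_{1,n}/2 can also be verified numerically: antipodal regions have the same length, and exactly one region of each type-1 pair is empty. A sketch (helper name ours):

```python
import random

def region_lengths_by_type(points):
    """Total arc length of the regions of each type, and of the empty regions."""
    n = len(points)
    pts = sorted(p % 1.0 for p in points)
    mids = [(pts[i] + pts[(i + 1) % n] + (1.0 if i == n - 1 else 0.0)) / 2
            for i in range(n)]
    cuts = sorted({m % 1.0 for m in mids} | {(m + 0.5) % 1.0 for m in mids})
    arcs = [(cuts[i], cuts[i + 1] if i + 1 < 2 * n else cuts[0] + 1.0)
            for i in range(2 * n)]
    occ = [sum(1 for p in pts if a < p < b or a < p + 1.0 < b) for a, b in arcs]
    lengths = {0: 0.0, 1: 0.0, 2: 0.0}
    empty = 0.0
    for i, (a, b) in enumerate(arcs):
        t = occ[i] + occ[(i + n) % (2 * n)]  # type of region i
        lengths[t] += b - a
        if occ[i] == 0:
            empty += b - a
    return lengths, empty

random.seed(1)
lengths, empty = region_lengths_by_type([random.random() for _ in range(10)])
print(abs(empty - (lengths[0] + lengths[1] / 2)) < 1e-9)  # True
```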
From the formulas for fixed n, we immediately deduce the first order asymptotic behavior as the number of points n goes to infinity.

Corollary 10.
We have the following asymptotic results for n uniform i.i.d. points on the circle in the limit when n tends to infinity:
1. the asymptotic fraction of the number of regions of type 0, 1 and 2 is respectively 1/4, 1/2 and 1/4;
2. the asymptotic fraction of the length covered by regions of type 0, 1 and 2 is respectively 1/8, 1/2 and 3/8;
3. the asymptotic fraction of the length covered by empty regions is 3/8.
For every k ∈ {0, 1, 2} we show in addition that, for the model of uniform random points, the regions of type k are asymptotically equidistributed around the circle, when we consider either their cardinality or their total length. Roughly speaking, the number and total length of the regions of each type in a given portion of the circle are asymptotically proportional to the size of this portion. Specifically, for every k ∈ {0, 1, 2} and t ∈ [0, 1], we define h_{k,n}(t) and ℓ_{k,n}(t) to be respectively the number of regions of type k and the sum of the lengths of regions of type k which are entirely contained in the arc from 1 to e^{2iπt}. Note that h_{k,n}(1) = H_{k,n} and ℓ_{k,n}(1) = L_{k,n}.
Then, the following holds in the space D([0, 1], R 3 ) of càdlàg functions from [0, 1] to R 3 , endowed with the J 1 topology (we refer to [4] for more background on that topology).

Uniformly random realizable configurations
Another way to study what a typical realizable configuration looks like is to sample a word or bracelet uniformly at random among all realizable words or bracelets of a given size.
We prove some asymptotic results for the shape of a realizable word or bracelet sampled uniformly at random in W_n or B_n. Let w^{(n)} be a random word taken uniformly in the set of realizable words of length 2n. For x ∈ [0, n] a real number and k ∈ {0, 1, 2}, denote by F^k_x the random variable corresponding to the number of occurrences of the letter k in the signature of w^{(n)} between positions 0 and x. Then the following holds:
Theorem 12. (i) The following holds in probability: (ii) We have the following convergence in distribution: where W is a Brownian motion of variance 2/9.
The first item of Theorem 12 is a law of large numbers, while the second item is a functional central limit theorem. Theorem 12 also holds if we replace a uniformly random realizable word by a uniformly random realizable bracelet (see Remark 35). Theorem 12 is proved in Section 4. As stated, Theorem 12 describes the shape of the signature of a uniformly random realizable word. In Section 4 we actually prove a more refined result, Theorem 31, which describes the shape of the word itself.
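In particular, the expected fraction of letters 2 in the signature of a uniformly random realizable word can be computed in closed form: grouping realizable words by the number ℓ of letters 2 in their signature, there are 2·C(n, 2ℓ)·2^{n−2ℓ} words in each group (this grouping appears in the proof of Corollary 3 in Subsection 2.3). One can then check that this fraction converges to 1/6, the value mentioned in Subsection 1.4 for uniformly random bracelets. A sketch:

```python
from math import comb

def expected_fraction_of_twos(n):
    """E[#2s in the signature] / n for a uniform realizable word of length 2n.

    There are 2 * C(n, 2l) * 2**(n - 2l) realizable words whose signature
    contains exactly l letters 2 (and l letters 0).
    """
    num = den = 0
    for l in range(1, n // 2 + 1):
        group = 2 * comb(n, 2 * l) * 2 ** (n - 2 * l)
        num += l * group
        den += group
    return num / (n * den)

print(expected_fraction_of_twos(3))    # exactly 1/3 for n = 3
print(expected_fraction_of_twos(200))  # close to 1/6
```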

Discussion of the results and open questions
To conclude this introduction we state a few remarks and open questions.
• We have provided computational evidence for most of the results of this introduction. It is contained in two supplementary data files that can be found in the sources. The first file is a code file, written in SageMath (version 9.2) as a Jupyter notebook. The whole notebook takes about one hour to execute on a standard laptop. The second file is an HTML page that cannot be executed but that allows one to directly visualize the output.
• Comparing Corollary 3 with Proposition 6, it is clear that the probability distribution on realizable words or bracelets obtained by sampling n i.i.d. uniform points on the circle differs from the uniform distribution whenever n is large enough. Using the exact initial values of #B n computed in [9, Sequence A350280] and some simple inequalities, it is actually possible to show that these two probability distributions differ for every n ≥ 4.
• Note that for a large bracelet chosen uniformly at random, the asymptotic fraction of regions of type 2 is 1/6, while for a large bracelet constructed from uniform random points on the circle, that fraction is 1/4.
• In the case of uniform random points on the circle, we did not manage to prove a central limit theorem in the vein of Theorem 12 (ii), although we conjecture that such a statement should hold.
• All the formulas of probabilities and expectations in the model of n uniform random points for fixed n (namely those of Proposition 6, Theorems 7 and 8 and Corollary 9) are very compact, yet their proofs involve quite lengthy computations. It would be interesting to find shorter and more conceptual proofs of these formulas.
• As a generalization of Theorem 1, it would be interesting to characterize the occupancy words arising when we drop the requirement that the points P_i be arranged in cyclic order. As shown in Figure 3, in that case the occupancy word may have letters greater than 1: there, the occupancy word is (2, 0, 1, 0, 0, 0, 1, 0).

Organization of the paper
In Section 2 we prove Theorem 1 about the characterization of realizable words and Corollary 3 about their enumeration. In Section 3 we study the model of uniform random points on the circle and prove the results about this model presented in the introduction. Finally in Section 4 we prove Theorem 12 describing a large realizable word chosen uniformly at random.

Characterization and enumeration of realizable words
In Subsection 2.1, we prove one direction of Theorem 1: the interlacing condition is necessary for a word to be realizable. The converse is proved in Subsection 2.2, using an explicit procedure to construct points from a given word with interlacing signature. Finally, in Subsection 2.3 we prove Corollary 3 about exact and asymptotic enumeration results.

Necessary condition for a realizable word
The unit circle may be identified with the half-open interval (0, 1] via the inverse of the map x → e^{2iπx}. Denote by p_1, . . . , p_n the n elements of (0, 1] corresponding to the n points P_1, . . . , P_n. For every 1 ≤ i ≤ n define the midpoints l_i := (p_i + p_{i+1})/2, where the representative modulo 1 is taken to be in (0, 1] and the indices are considered modulo n. Up to applying a rotation of the circle, one may assume that l_n = 1. Then we have 0 < p_1 < l_1 < p_2 < l_2 < · · · < l_{n−1} < p_n < l_n = 1. The inequalities are strict because of the genericity assumption. Define also for every 1 ≤ i ≤ n, p'_i = p_i + 1/2 mod 1 and l'_i = l_i + 1/2 mod 1, the representatives being taken in (0, 1]. Write P = {p_1, . . . , p_n}, P' = {p'_1, . . . , p'_n}, L = {l_1, . . . , l_n} and L' = {l'_1, . . . , l'_n}. Let (m_i)_{1≤i≤2n} be the increasing reordering of the l_i and l'_i, that is, {m_1, . . . , m_{2n}} = L ∪ L' with m_1 < · · · < m_{2n}. Here again the inequalities are strict by the genericity assumption. We also set m_0 = 0. Similarly, let (q_i)_{1≤i≤2n} be the increasing reordering of P ∪ P'. For any 1 ≤ i ≤ 2n, the ith region cuts the circle along the arc corresponding to (m_{i−1}, m_i). Thus for any 1 ≤ i ≤ n, the signature σ of the occupancy word associated to P satisfies σ_i = #((m_{i−1}, m_i) ∩ (P ∪ P')). Note that for every 1 ≤ i ≤ n, m_{i+n} = m_i + 1/2 and q_{i+n} = q_i + 1/2. For any (a, b) ∈ (0, 1]^2 define the circular distance d(a, b) to be the distance between a and b measured on the circle obtained by identifying the two endpoints of the interval [0, 1]. We also introduce a notion of circular interval defined as follows. Let a and b be two elements of (0, 1] and define the circular interval I(a, b) to be the open arc traversed when moving from a to b in the positive direction, endpoints excluded. We similarly define I[a, b) to be the circular counterpart of the half-open interval [a, b).
In the remainder of this section, the indices of q (resp. σ) will be considered modulo 2n (resp. n), and the real numbers of the form q − 1/2 and q + 1/2 should be understood as the representative in (0, 1] of an equivalence class modulo 1. Let p ∈ P. We define C(p) to be the element x' ∈ P' which minimizes d(p, x'). By the genericity assumption, C(p) is uniquely defined. Similarly, for any p' ∈ P', we define C(p') to be the element x ∈ P which minimizes d(p', x). For any q ∈ P ∪ P', when C(q) belongs to I(q, q + 1/2) (resp. I(q − 1/2, q)), we say that q looks to its right (resp. left) and we write D(q) = R (resp. D(q) = L).
Our aim in this subsection is to prove the following.
Proposition 13. Let u be the occupancy word associated to a collection of points in cyclic order. Then the signature σ of u is interlacing.
A key observation in this article is that the occurrences of 2 (resp. 0) in σ exactly correspond to the occurrences of the pattern RL (resp. LR) in the successive values of (D(q_i))_{i=1,...,2n}, an observation which is made explicit in Lemmas 14 and 15 (resp. in Lemma 16).
Figure 4: A configuration on a portion of (0, 1]. The black (resp. white) dots represent elements of P (resp. P'), and the vertical solid (resp. dashed) lines represent elements of L (resp. L'). From each dot q, the arrow is directed towards C(q), and above it, the value of D(q) is written. Below each region, the corresponding letter in σ is indicated.
Lemma 14. Let 1 ≤ i ≤ 2n and 1 ≤ j ≤ 2n be such that q_i and q_{i+1} both belong to [m_{j−1}, m_j]. Then exactly one of q_i and q_{i+1} belongs to P, and we have C(q_i) = q_{i+1} and C(q_{i+1}) = q_i.
Proof. Up to performing a rotation of the circle, one may assume that q_i < q_{i+1} (this is needed to take into account the case i = 2n). Reason by contradiction and assume that both q_i and q_{i+1} are in P. Since q_{i+1} ∈ P, we cannot have C(q_i) = q_{i+1}; moreover, q_i and q_{i+1} are then consecutive elements of P, so their midpoint belongs to L and lies in I(q_i, q_{i+1}), which leads to a contradiction since I(q_i, q_{i+1}) contains no element of L ∪ L'. Similarly, q_i and q_{i+1} cannot both be in P'. The last two statements of the lemma follow from the fact that I(q_i, q_{i+1}) ∩ (P ∪ P') = ∅.
Lemma 15. Let p ∈ P and x' ∈ P' be such that x' = C(p) and p = C(x'). Let also 1 ≤ i ≤ 2n be such that p ∈ [m_{i−1}, m_i]. Then σ_i = 2. Conversely, let 1 ≤ i ≤ 2n be such that σ_i = 2 and denote by p ∈ P and x' ∈ P' the two elements of [m_{i−1}, m_i] ∩ (P ∪ P'). Then x' = C(p) and p = C(x'). Indeed, if we had C(x') ≠ p, then the perpendicular bisector of C(x') and p would separate x' from p, which is not the case. So C(x') = p and similarly C(p) = x'.
Lemma 16. Let 1 ≤ i ≤ 2n be such that D(q_i) = L and D(q_{i+1}) = R. Then there exists a unique 1 ≤ j ≤ 2n such that m_j and m_{j+1} are both in I(q_i, q_{i+1}), and this j satisfies σ_{j+1} = 0. Conversely, assume that 1 ≤ j ≤ 2n is such that σ_{j+1} = 0. Denote by q_i the largest element of P ∪ P' smaller than m_j. Then D(q_i) = L and D(q_{i+1}) = R.
Proof. The first case is when q_i and q_{i+1} are of different types, that is, one belongs to P and the other to P'. By symmetry we may assume that q_i ∈ P' and q_{i+1} ∈ P. An example can be seen around the leftmost empty region in Figure 4. Since D(q_i) = L, we have that C(q_i) and q_{i+1} are two consecutive elements of P.
It follows that I(q_i, q_{i+1}) contains exactly two elements of L ∪ L'. Denoting them by m_j and m_{j+1}, we conclude that σ_{j+1} = 0.
The second case is when q_i and q_{i+1} both belong to P (see for example the configuration around the second 0 in Figure 4). Then (q_i + q_{i+1})/2 is the only element of I(q_i, q_{i+1}) ∩ L. Furthermore, C(q_i) and C(q_{i+1}) are consecutive elements of P', so the midpoint of C(q_i) and C(q_{i+1}) belongs to L' and also lies in I(q_i, q_{i+1}). The conclusion follows as in the first case.
The third case, when q i and q i+1 both belong to P , is treated like the second case.
Conversely, assume 1 ≤ j ≤ 2n is such that σ_{j+1} = 0. Since two consecutive elements of L (resp. of L') must be separated by an element of P (resp. of P'), we deduce that among m_j and m_{j+1}, one belongs to L and the other to L'. Denote by q_i the largest element of P ∪ P' smaller than m_j. Then q_{i+1} is bigger than m_{j+1}. If q_i ∈ P, consider the unique element of L ∩ {m_j, m_{j+1}}. It is the bisector of two points of P, and these points cannot be in I(q_i, q_{i+1}). Moreover, q_i is to the left of the bisector. This implies that D(q_i) = L. The same argument also works in the case q_i ∈ P'. Similarly, one shows that D(q_{i+1}) = R.
Remark 17. In particular, Lemmas 14, 15 and 16 imply that the signature of a realizable word uniquely determines the sequence (D(q_i))_{1≤i≤2n}.
We now prove that at least one region of a realizable word is of type 2.
Lemma 18. The signature of the occupancy word associated to a collection of points in cyclic order contains at least one letter 2.
Proof. Consider a pair (p, x') achieving the minimum min_{q ∈ P, y' ∈ P'} d(q, y'). Then x' = C(p) and p = C(x'), and the conclusion follows from Lemma 15.
The next lemma finally shows that, between two regions of type 0, there is always a region of type 2.
Lemma 19. Assume that σ_1 = 0 and that there exists 2 ≤ i ≤ n such that σ_i = 0 and σ_k ≠ 0 for all 1 < k < i. Then there exists 1 < k < i such that σ_k = 2.
Proof. Since σ_1 = 0, we have q_1 > m_1 and by Lemma 16 we have that D(q_1) = R. Denote by q_r the largest element of P ∪ P' smaller than m_{i−1}. By Lemma 16 again, D(q_r) = L, so the pattern RL occurs among the values D(q_1), . . . , D(q_r), and Lemmas 14 and 15 provide the desired index k.
We now have all the tools to prove Proposition 13.
Proof of Proposition 13. Let P_1, . . . , P_n be n points in cyclic order on the circle and let σ := (σ_1, . . . , σ_n) ∈ {0, 1, 2}^n be the signature of their occupancy word. Define s_0, s_1 and s_2 to be respectively the number of occurrences of the values 0, 1 and 2 in the signature. Then n = s_0 + s_1 + s_2 and, since there are n points, we also have s_1 + 2s_2 = n. Combining these two equations we obtain that s_0 = s_2. From Lemma 18 we get that s_2 ≥ 1. Therefore s_0 ≥ 1. Assume that 1 ≤ i < j ≤ n are such that σ_i = σ_j = 0 and σ_k > 0 for all i < k < j. Up to applying a cyclic shift of the indices, one may assume that i = 1. By Lemma 19 we deduce the existence of some k such that 1 < k < j and σ_k = 2. Given that s_0 = s_2, such a k is necessarily unique. Hence we conclude that σ is interlacing.

Realizing a word with interlacing signature
In this subsection we construct an explicit configuration of points from a word whose signature is interlacing.
Proposition 20. Let v ∈ {0, 1}^{2n} be a word whose signature is interlacing. Then there exists a configuration of points on the circle having v as an occupancy word.
Proof. We fix n ≥ 3 and such a word v. Up to applying a rotation one may assume that σ 1 = 0.
Denote by T (resp. Z) the subset of all 1 ≤ i ≤ 2n such that σ i = 2 (resp. σ i = 0). Here again the indices of σ are considered modulo n. T and Z are respectively the locations of twos and zeros. The set {1, . . . , 2n} \ (T ∪ Z) is composed of several connected components, which are the intervals of integers between two consecutive elements of T ∪ Z (note that some of these intervals may be empty). We call such a connected component an ascending component (resp. a descending component) if it is of the form I 2n (i, j) with i ∈ Z and j ∈ T (resp. i ∈ T and j ∈ Z). In particular, for all k ∈ I 2n (i, j) we have σ k = 1. Defining s := #T = #Z, we let i 1 < · · · < i s be the ordering of T , and j 1 < · · · < j s be the ordering of Z. In particular, j 1 = 1.
To each 1 ≤ i ≤ 2n, we associate a position r i in (0, 1], which in the end of the process will be the position of a point (resp. the antipode of a point) if v i = 1 (resp. if v i = 0). The idea is to guarantee that for every h in a descending component I 2n (i k , j k+1 ) (resp. in an ascending component I 2n (j k , i k )), the position r h is closer to r i k than to its closest neighbor on the right (resp. left). Using the terminology of Subsection 2.1, we make sure that every point or antipode of a point in a descending (resp. an ascending) component looks to the left (resp. right). We will then check explicitly that the configuration of points thus constructed has occupancy word v.
First, for all 1 ≤ k ≤ s, we set the positions r_{i_k} and r_{j_k}; in the definition of r_{j_1} we use the notational convention r_{i_0} = 0. Since s is even, the 2s positions constructed so far arise in antipodal pairs, but this absence of genericity will not be an issue. In the last paragraph of the proof we will explain how one can perturb the positions to make them generic without changing the occupancy word. Let η > 0 be small enough (η < 1/(s2^{n+2}) will suffice for our purposes). For 1 ≤ k ≤ s, consider the kth descending component, that is, I_2n(i_k, j_{k+1}), and set the position r_h for every h ∈ I_2n(i_k, j_{k+1}). Similarly, for 1 ≤ k ≤ s, consider the kth ascending component, that is, I_2n(j_k, i_k). By the symmetry of the word, there cannot be more than n points in a component. Therefore, in both of those cases, the constructed positions of ascending and descending components lie in disjoint intervals. Now let P be the set of positions r_h for which v_h = 1. We claim that this configuration of points has v as its occupancy word. As in the previous subsection, we set P' = {p + 1/2, p ∈ P}, and L (resp. L') the positions of the bisectors of P (resp. P'). We also set M to be the collection L ∪ L', possibly with repetitions, since the configuration constructed may not yet satisfy the genericity assumptions.
We now characterize the positions of these bisectors. Let h be in the descending component I_2n(i_k, j_{k+1}). We distinguish two cases, depending on the value of v_h.
If v_h = 1, then r_h ∈ P. Moreover, v_{i_k} = 1 since i_k ∈ T, hence r_{i_k} ∈ P. Therefore, the rightmost element of P smaller than r_h belongs to the set {r_{i_k}, r_{i_k+1}, . . . , r_{h−1}}, and the position m_h of the bisector of this point and r_h satisfies the required bound. If v_h = 0, then as σ_h = 1, we have v_{h+n} = 1. Hence there is an element of P at position r_{h+n}, and by the invariance of the word under translation by n, we have r_{h+n} = r_h + 1/2. Therefore, r_h ∈ P'. Similarly, as σ_{i_k} = 2, we have v_{i_k+n} = 1, so that r_{i_k+n} ∈ P and r_{i_k} ∈ P'. From there, we conclude as in the previous case.
For h in an ascending component, the proof is identical.
For the second point of the lemma, consider the index j_k ∈ Z. As both r_{i_{k−1}} and r_{i_k} belong to P, the elements of P directly to the left and right of r_{j_k} belong, respectively, to {r_{i_{k−1}}, . . . , r_{j_k−1}} and to {r_{j_k+1}, . . . , r_{i_k}}. Hence the position of their bisector m_{j_k} ∈ L can be bounded as follows. As j_k − 1 belongs to the descending component I_2n(i_{k−1}, j_k), by (1) we have r_{j_k−1} < r_{i_{k−1}} + 1/(4s), hence the right-hand side is smaller than the expected bound. The left-hand side is treated similarly. Then, an identical proof shows that there is an element of L' in the same interval.
Clearly the elements of M coming from ascending and descending components are disjoint. Those coming from the second point are also disjoint among themselves, as even if two may share the same position, only one of them will belong to L, and the other to L . The fact that these two families are disjoint is an easy consequence of (1). Hence we have constructed two elements of M for each element of Z, and one for each element of (T ∪ Z) c , which is in total 2(#Z) + 2n − (#T + #Z) = 2n.
Let w be the occupancy word of P. We now have all the tools to prove that w = v. For any 1 ≤ k ≤ s, consider the interval I[r_{i_{k−1}}, r_{i_k}). On that interval, the positions and order of the elements of M are given by Lemma 21: for every h in the descending component I_2n(i_{k−1}, j_k) there is one m_h ∈ I(r_{h−1}, r_h); then there are two distinct elements m_{j_k}, m'_{j_k} ∈ I(r_{j_k−1}, r_{j_k+1}) ∩ M; then for every h in the ascending component I_2n(j_k, i_k) there is one m_h ∈ I(r_h, r_{h+1}). Thus the part of w corresponding to this interval can be described as: first a 1 (for the region containing r_{i_{k−1}}), then for every h ∈ I_2n(i_{k−1}, j_k), either a 1 or a 0 according to the value of v_h (as by definition these are the positions where a point of P has been put), then a 0 (for the region delimited by m_{j_k} and m'_{j_k}), then for every h ∈ I_2n(j_k, i_k), either a 0 or a 1 according to the value of v_h. This is clearly the same as v at those indices. This being true for any k, by concatenation we get that w = v.
One issue that may arise is that the configuration constructed above is not generic, in the sense that two lines may coincide, which occurs for example when a descending component and the following ascending component are empty. In order to avoid such issues, we slightly perturb the configuration by fixing ε > 0 and defining, for every 1 ≤ k ≤ 2n, the perturbed position r'_k := r_k + kε. For ε small enough (ε < η/(2n) suffices), the relative position of the perturbed points and lines is the same as the unperturbed one, while two lines can no longer coincide.
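Both directions of Theorem 1 can be probed empirically for n = 3: every sampled configuration must produce a word with interlacing signature, and conversely every word with interlacing signature should eventually appear. The sketch below (numbering the regions from the one containing the point 1 of the circle, as in the introduction) samples random triples and compares the set of occupancy words observed with the set predicted by Theorem 1:

```python
import random
from itertools import product

def occupancy_word(points):
    """Occupancy word, numbering the regions from the one containing the point 1."""
    n = len(points)
    pts = sorted(p % 1.0 for p in points)
    mids = [(pts[i] + pts[(i + 1) % n] + (1.0 if i == n - 1 else 0.0)) / 2
            for i in range(n)]
    cuts = sorted({m % 1.0 for m in mids} | {(m + 0.5) % 1.0 for m in mids})
    arcs = [(cuts[i], cuts[i + 1] if i + 1 < 2 * n else cuts[0] + 1.0)
            for i in range(2 * n)]
    word = [sum(1 for p in pts if a < p < b or a < p + 1.0 < b) for a, b in arcs]
    return tuple(word[-1:] + word[:-1])  # the last arc is the one containing 1

def is_interlacing(s):
    n = len(s)
    if 0 not in s or 2 not in s:
        return False
    zeros = [i for i, x in enumerate(s) if x == 0]
    return all([s[k % n] for k in range(a + 1, b)].count(2) == 1
               for a, b in zip(zeros, zeros[1:] + [zeros[0] + n]))

n = 3
target = {v for v in product((0, 1), repeat=2 * n)
          if is_interlacing(tuple(v[i] + v[i + n] for i in range(n)))}
random.seed(0)
seen = set()
for _ in range(5000):
    seen.add(occupancy_word([random.random() for _ in range(n)]))
print(len(target), seen == target)
```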

Enumerating realizable words and bracelets
We end this section by computing the cardinality #W_n of the set of realizable words.

Proof of Corollary 3.
To choose a realizable word v, one may first choose its interlacing signature σ. This amounts to choosing an integer 1 ≤ ℓ ≤ ⌊n/2⌋ such that σ has 2ℓ letters equal to 0 or 2, then the positions of these letters, and finally whether the first one is a 0 or a 2; by the interlacing condition the letters 0 and 2 alternate in cyclic order, so this data determines σ. We have 2 (n choose 2ℓ) choices for the positions of these letters and the value of the first one. Then for every 1 ≤ i ≤ n such that σ_i = 1, one has to choose whether v_i is 1 or 0, which forces v_{i+n} = 1 − v_i. This gives 2^{n−2ℓ} choices. Hence the number of realizable words of size 2n is #W_n = Σ_{ℓ=1}^{⌊n/2⌋} 2 (n choose 2ℓ) 2^{n−2ℓ}. Setting X := Σ_{ℓ≥0} 2 (n choose 2ℓ+1) 2^{n−2ℓ−1} and applying the binomial theorem to (2+1)^n and (2−1)^n, one easily gets 2^{n+1} + #W_n + X = 2 × 3^n and 2^{n+1} + #W_n − X = 2, and the result follows.
Regarding realizable bracelets, each bracelet corresponds to at least one and at most 4n realizable words, so that #W_n/(4n) ≤ #B_n ≤ #W_n, which implies the announced result.

Bracelets for uniformly random points
In this section, we study the following model: fix n ≥ 3 and draw at random n points independently and uniformly distributed on the unit circle. Most of our proofs here rely on a model of black and white dots with exponential spacings, which can be coupled to our original model of uniform points on the circle. This new model is presented in Subsection 3.1. We prove Proposition 5 and Proposition 6, concerning the probabilities of individual bracelets, in Subsection 3.

Black and white dots with exponential spacing
In this section we identify the unit circle with the half-open interval [0, 1), by the inverse of the map t → e^{2iπt}. Note that this differs from the convention of Section 2, where the circle was identified with (0, 1]; this discrepancy is due to convenience of notation in both cases. Let p_1, . . . , p_n be n points in general position in [0, 1). We apply a global rotation to the n points so that p_1 = 0. Now the region with label 1 is defined to be the region containing e^{iε} for all ε > 0 small enough. Recall that p'_i = p_i + 1/2 mod 1 for every 1 ≤ i ≤ n. Denote by P (resp. P') the set of all p_i (resp. of all p'_i) and note that the interval [0, 1/2) contains exactly n elements of P ∪ P'. For every 0 ≤ i ≤ n − 1, we define the variables X_i and Γ_i such that the following two conditions are satisfied:
• 0 = X_0 < X_1 < · · · < X_{n−1} < 1/2 is an ordering of the intersection of [0, 1/2) with the set P ∪ P';
• for every 0 ≤ i ≤ n − 1, Γ_i = 1 if X_i ∈ P and Γ_i = 0 if X_i ∈ P'.
Each X_i is called a black dot (resp. white dot) if Γ_i = 1 (resp. Γ_i = 0). It is clear that from the position of the black and white dots we recover the set P up to a global rotation. Furthermore, taking the p_i to be i.i.d. uniform on [0, 1) induces the probability distribution on dots described as follows:
• X_0 = 0 is a black dot;
• (X_1, . . . , X_{n−1}) are the order statistics of n − 1 i.i.d. uniform random variables in [0, 1/2);
• (Γ_1, . . . , Γ_{n−1}) are i.i.d. Bernoulli variables of parameter 1/2, independent of the X_i.
We also adopt the convention that X n = 1 2 and Γ n = 0. The Γ i 's are called colors.
For every 1 ≤ i ≤ n we define S_i = X_i − X_{i−1} to be the spacing between two consecutive dots. The main tool to understand the joint behaviour of the S_i's is the following lemma (see e.g. [7, Section 4.1] for a proof), which allows us to get rid of the condition that the sum of the variables S_i should be equal to 1/2.
Lemma 22 ([7]). Fix n ≥ 1. If T_1, . . . , T_n are i.i.d. exponential variables of parameter 1 and Y_n := T_1 + · · · + T_n, then (T_1/(2Y_n), . . . , T_n/(2Y_n)) is independent of Y_n and distributed as (S_1, . . . , S_n).
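Lemma 22 can be illustrated numerically by comparing a statistic of the two spacing vectors, for instance the mean of the largest spacing (a sketch; the helper names are ours):

```python
import random

def uniform_spacings(n, rng):
    """Spacings S_1, ..., S_n induced by n - 1 i.i.d. uniform points in [0, 1/2)."""
    xs = [0.0] + sorted(rng.uniform(0.0, 0.5) for _ in range(n - 1)) + [0.5]
    return [b - a for a, b in zip(xs, xs[1:])]

def exponential_spacings(n, rng):
    """(T_1/(2Y_n), ..., T_n/(2Y_n)) with T_i i.i.d. Exp(1) and Y_n = T_1 + ... + T_n."""
    ts = [rng.expovariate(1.0) for _ in range(n)]
    y = sum(ts)
    return [t / (2.0 * y) for t in ts]

rng = random.Random(123)
n, trials = 5, 20000
mean_max_u = sum(max(uniform_spacings(n, rng)) for _ in range(trials)) / trials
mean_max_e = sum(max(exponential_spacings(n, rng)) for _ in range(trials)) / trials
print(round(mean_max_u, 2), round(mean_max_e, 2))  # the two estimates are close
```

Both vectors sum to 1/2 by construction; the lemma asserts that their joint laws coincide, so any functional of the spacings should have matching statistics.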
In the remainder of this section we will frequently use the exponential spacing model, defined as follows. Let $T_1, \dots, T_n$ be $n$ i.i.d. Exp(1) variables and, for every $0 \le k \le n$, define the dot $Y_k = \sum_{i=1}^k T_i$. Define also $\Gamma_1, \dots, \Gamma_{n-1}$ to be $n-1$ i.i.d. Bernoulli variables of parameter $\frac{1}{2}$ independent of the $T_i$, and set in addition $\Gamma_0 = 1$ and $\Gamma_n = 0$. It will also be useful to extend the definition of the $Y_i$ and $\Gamma_i$ to all $-n \le i \le 2n-1$ by setting, for every $0 \le i \le n-1$, $Y_{i \pm n} = Y_i \pm Y_n$ and $\Gamma_{i \pm n} = 1 - \Gamma_i$. With this extension, every dot $Y_i$ with $0 \le i \le n-1$ has at least one dot of the opposite color to its left and to its right.
By Lemma 22, up to global scale, the variables $Y_i$ are distributed like the variables $X_i$. Hence, replacing the $X_i$ by the $Y_i$ does not change the probability of each bracelet occurring. Thus, we can use the exponential spacing model to prove results about probabilities of bracelets (Proposition 6 and Theorem 7). In order to prove Theorem 8 about expected lengths of intervals, we need to overcome the problem that the global scale relating the $X_i$ and the $Y_i$ is random: it is the total length $Y_n$ of the interval. This is done via Lemma 23. If $t = (t_1, \dots, t_n) \in \mathbb{R}_+^n$ and $\gamma = (\gamma_0, \dots, \gamma_{n-1}) \in \{0,1\}^n$ is such that $\gamma_0 = 1$, we define, for every $k \in \{0, 1, 2\}$, $L_k(t, \gamma)$ to be the sum of the lengths of the regions of type $k$ in a bracelet constructed from $n$ points whose spacings (resp. colors) are given by $t$ (resp. $\gamma$).
Lemma 23 (Transfer lemma for lengths). Let $n \ge 3$ and let $T$ and $\Gamma$ be the $n$-tuples of spacings and colors in the exponential spacing model. Set also $Y_n = T_1 + \cdots + T_n$. For every $k \in \{0, 1, 2\}$, we have
$$\mathbb{E}[L_k(T, \Gamma)] = 2\,\mathbb{E}[Y_n]\,\mathbb{E}\!\left[L_k\!\left(\tfrac{1}{2Y_n}T, \Gamma\right)\right]. \quad (2)$$
In particular,
$$\mathbb{E}\!\left[L_k\!\left(\tfrac{1}{2Y_n}T, \Gamma\right)\right] = \frac{1}{2n}\,\mathbb{E}[L_k(T, \Gamma)]. \quad (3)$$
Note that in Equation (3), the expectation on the left-hand side refers to a model on a circle of unit length while the expectation on the right-hand side refers to the exponential spacing model.
Proof of Lemma 23. The idea behind the proof is simply that the quantity $L_k(T, \Gamma)$ behaves linearly in $Y_n$. More precisely, observe that
$$\mathbb{E}[L_k(T, \Gamma) \mid Y_n] = 2Y_n\,\mathbb{E}\!\left[L_k\!\left(\tfrac{1}{2Y_n}T, \Gamma\right) \Big|\, Y_n\right].$$
By Lemma 22, $\mathbb{E}[L_k(\tfrac{1}{2Y_n}T, \Gamma) \mid Y_n]$ is a random variable independent of $Y_n$. Taking the expectation on both sides yields Equation (2). The second statement is a consequence of Lemma 22 and the fact that $\mathbb{E}[Y_n] = n$.
In order to reconstruct the bracelet from the colored dots, we partition the configuration space according to the oriented colored dot configuration realized by the colored dots, which we define below.
Definition 24. Let $0 = x_0 < x_1 < \cdots < x_{n-1} < x_n$ and let $(\gamma_0, \dots, \gamma_{n-1}) \in \{0,1\}^n$ with $\gamma_0 = 1$. Extend the $x_i$ and $\gamma_i$ to all $-n \le i \le 2n-1$ by setting $x_{i \pm n} = x_i \pm x_n$ and $\gamma_{i \pm n} = 1 - \gamma_i$. For every $0 \le i \le n-1$, we set $d_i = L$ (resp. $d_i = R$) if the dot of color $1 - \gamma_i$ closest to $x_i$ lies to the left (resp. right) of $x_i$; in Section 2 this was termed as $x_i$ looks to the left (resp. right). The oriented colored dot configuration (OCDC) associated with the $x_i$ and $\gamma_i$ is the $n$-tuple $(o_0, \dots, o_{n-1})$, where $o_i = B_{d_i}$ if $\gamma_i = 1$ and $o_i = W_{d_i}$ if $\gamma_i = 0$. It follows from Lemmas 14, 15 and 16 that the OCDC determines the bracelet. However, each bracelet may be realized by several OCDCs.
A useful tool to shorten computations in the remainder of this section is the notion of Erlang random variables (see e.g. [2, Chapter 1]).
Definition 25. Let $\lambda > 0$ be a real number and $k \ge 1$ be an integer. The real random variable $U$ is said to follow the Erlang distribution of parameters $(k, \lambda)$ if its density with respect to the Lebesgue measure on $\mathbb{R}$ is given by
$$f(x, k, \lambda) = \frac{\lambda^k x^{k-1} e^{-\lambda x}}{(k-1)!}\,\mathbf{1}_{x > 0}.$$
We write this as $U \sim \mathrm{Erlang}(k, \lambda)$.
An $\mathrm{Erlang}(k, \lambda)$ variable is distributed like the sum of $k$ i.i.d. $\mathrm{Exp}(\lambda)$ variables.

Remark 26. The rest of this section is very computational. We explain here our method to perform the exact computations of probabilities and expectations. We first express them using independent Erlang variables that must satisfy certain inequalities. This allows us to write them as multiple integrals of the product of the densities of these Erlang variables over the domain defined by the inequalities. The computation of the multiple integrals is then straightforward using that a primitive of $f(x, k, \lambda)$ is given for $x > 0$ by the cumulative distribution function
$$F(x, k, \lambda) = 1 - e^{-\lambda x} \sum_{j=0}^{k-1} \frac{(\lambda x)^j}{j!}.$$
We computed all these integrals by hand. Each computation is elementary, but may take a page or two when there is a sum of two or three terms in the integrand, as is the case for example in Subsection 3.4.
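These ingredients can be sketched in code, assuming the standard Erlang density and its cumulative distribution function as primitive (helper names are ours):

```python
import math, random

def erlang_pdf(x, k, lam):
    """Density of the Erlang(k, lam) distribution for x > 0."""
    return lam**k * x**(k - 1) * math.exp(-lam * x) / math.factorial(k - 1)

def erlang_cdf(x, k, lam):
    """A primitive of the density: the cumulative distribution function
    1 - exp(-lam x) * sum_{j<k} (lam x)^j / j!."""
    return 1 - math.exp(-lam * x) * sum((lam * x)**j / math.factorial(j)
                                        for j in range(k))

# F' = f: numerical check that the CDF is a primitive of the density
x, h = 1.3, 1e-6
deriv = (erlang_cdf(x + h, 3, 1.0) - erlang_cdf(x - h, 3, 1.0)) / (2 * h)
assert abs(deriv - erlang_pdf(x, 3, 1.0)) < 1e-6

# Erlang(k, 1) is the law of a sum of k i.i.d. Exp(1) variables
random.seed(2)
hits = sum(sum(random.expovariate(1.0) for _ in range(3)) <= x
           for _ in range(100_000)) / 100_000
assert abs(hits - erlang_cdf(x, 3, 1.0)) < 0.01
```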

Probabilities of individual bracelets
We first prove that the probability of each bracelet is a rational number. This proof does not use the exponential spacing model.

Proof of Proposition 5.
We denote by $0 = p_1 < p_2 < \cdots < p_n < 1$ the positions of the $n$ points. For every $1 \le i \le n$, recall the definitions $l_i = \frac{p_i + p_{i+1}}{2}$ and $\overline{l}_i = l_i + \frac{1}{2}$, where we take the representative modulo 1 in $[0, 1)$. Let $b \in B_n$ be a bracelet. The statement that $b$ is achieved is a logical statement that can be written as a disjunction (using the operator "or") of disjoint clauses, where each clause corresponds to an ordering of the $3n$ elements $p_1, \dots, p_n, l_1, \dots, l_n, \overline{l}_1, \dots, \overline{l}_n$. Such an ordering can itself be expressed as a disjunction of disjoint literals, where each literal is a conjunction (using the operator "and") of inequalities of the following form: some linear combination with rational coefficients of the $p_i$'s is greater than some rational number. Hence each literal defines a convex polytope contained in $[0, 1]^n$ whose faces are hyperplanes defined by Cartesian equations involving only rational coefficients. Such a polytope has vertices with rational coordinates, hence its volume is rational. Finally, the probability of $b$ can be written as the sum of the volumes of such convex polytopes, hence is rational.
We now turn to the computation of the probability of the bracelet $b_n$, which is the equivalence class of $(1, 0, 1, \dots, 1, 0, \dots, 0)$. The computation is much easier for this bracelet than for the others, since it can only be realized by a small number of OCDCs.
Here the number of missing 0's and 1's represented by the dots is entirely determined by the fact that each word has $n$ letters of each type. By Remark 17, the sequence $(d_i)_{1 \le i \le n}$ of R's and L's associated to each of these words is entirely determined by their signatures. For general realizable words, the only remaining ambiguity in determining the OCDC entirely is the color of the left dot in each region of type 2. Here this indeterminacy is lifted by the condition we imposed, namely that there is a black dot at position 0 which is the left dot in a region of type 2. The four OCDCs thus obtained from the four $w_i$ can be computed explicitly; a case-by-case analysis shows that they all satisfy $o_0 = B_R$, $o_1 = W_L$ and $o_2 = \cdots = o_{n-1}$. We denote these four OCDCs by $\Omega_{B_L}$, $\Omega_{B_R}$, $\Omega_{W_L}$ and $\Omega_{W_R}$, where the index of $\Omega$ corresponds to the value of $o_2$. They correspond respectively to $w_1$, $w_2$, $w_3$ and $w_4$.
We compute the probabilities of these OCDCs using the exponential spacing model with spacings $T_1, \dots, T_n$ and colors $\Gamma_0, \dots, \Gamma_{n-1}$. There is a probability $1/2^{n-1}$ for the variables $\Gamma_1, \dots, \Gamma_{n-1}$ to achieve the colors imposed by a given OCDC. We recall that in the exponential spacing model $\Gamma_0$ is always fixed to be black.
We first explain the computation of $\mathbb{P}(\Omega_{B_L})$. The condition $o_1 = W_L$ translates to $T_1 < T_2$. The condition $o_{n-1} = B_L$ translates to $T_2 + \cdots + T_{n-1} < T_n$. The condition $o_0 = B_R$ translates to $T_1 < T_2 + \cdots + T_n$, which is implied by the earlier condition $T_1 < T_2$. Finally, each condition $o_i = B_L$ for $2 \le i \le n-2$ translates to $T_2 + \cdots + T_i < T_n$, which is implied by the earlier condition $T_2 + \cdots + T_{n-1} < T_n$. Hence the two conditions $T_1 < T_2$ and $T_2 + \cdots + T_{n-1} < T_n$, together with the conditions on the colors $\Gamma_1, \dots, \Gamma_{n-1}$, are equivalent to the realization of $\Omega_{B_L}$. Since the $T_i$'s are independent of the $\Gamma_i$'s, we obtain the following product of probabilities:
$$\mathbb{P}(\Omega_{B_L}) = \frac{1}{2^{n-1}}\,\mathbb{P}(T_1 < T_2 \text{ and } T_2 + \cdots + T_{n-1} < T_n).$$
A similar case-by-case reasoning yields
$$\mathbb{P}(\Omega_{B_R}) = \frac{1}{2^{n-1}}\,\mathbb{P}(T_1 < T_n \text{ and } T_3 + \cdots + T_n < T_2).$$
Since the $T_i$ are i.i.d., we have $\mathbb{P}(\Omega_{B_L}) = \mathbb{P}(\Omega_{B_R})$ and $\mathbb{P}(\Omega_{W_L}) = \mathbb{P}(\Omega_{W_R})$.
Set $U = T_3 + \cdots + T_{n-1}$, so that $U \sim \mathrm{Erlang}(n-3, 1)$. Write also $X = T_1$, $Y = T_2$ and $Z = T_n$. By the independence of $X$, $Y$, $Z$ and $U$ and using the method outlined in Remark 26, we have
$$2^{n-1}\,\mathbb{P}(\Omega_{B_L}) = \mathbb{P}(X < Y \text{ and } Y + U < Z) = \mathbb{E}\!\left[(1 - e^{-Y})\,e^{-Y}\right]\mathbb{E}\!\left[e^{-U}\right] = \frac{1}{6} \cdot \frac{1}{2^{n-3}},$$
thus $\mathbb{P}(\Omega_{B_L}) = \frac{1}{3 \cdot 2^{2n-3}}$. Set $V = T_2 + \cdots + T_{n-1}$, so that $V \sim \mathrm{Erlang}(n-2, 1)$. Write also $Z = T_n$. Then we have
$$2^{n-1}\,\mathbb{P}(\Omega_{W_L}) = \mathbb{P}(V < Z) = \mathbb{E}\!\left[e^{-V}\right] = \frac{1}{2^{n-2}},$$
thus $\mathbb{P}(\Omega_{W_L}) = \frac{1}{2^{2n-3}}$. Since there are $n$ possible choices for the point to place at 0, we have
$$\mathbb{P}(b_n) = n\left(\mathbb{P}(\Omega_{B_L}) + \mathbb{P}(\Omega_{B_R}) + \mathbb{P}(\Omega_{W_L}) + \mathbb{P}(\Omega_{W_R})\right),$$
which yields the desired quantity.
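The two probability computations above can be cross-checked by Monte Carlo simulation. The sketch below tests the reduced inequality conditions (as we read them off the proof) against the closed forms $\frac{1}{6 \cdot 2^{n-3}}$ and $\frac{1}{2^{n-2}}$, i.e. the values before multiplying by the color factor $1/2^{n-1}$:

```python
import random

random.seed(3)
n, trials = 5, 400_000

hits_bl = hits_wl = 0
for _ in range(trials):
    t = [random.expovariate(1.0) for _ in range(n)]
    # Omega_{B_L}: T_1 < T_2 and T_2 + ... + T_{n-1} < T_n
    if t[0] < t[1] and sum(t[1:n - 1]) < t[n - 1]:
        hits_bl += 1
    # Omega_{W_L}: T_2 + ... + T_{n-1} < T_n
    if sum(t[1:n - 1]) < t[n - 1]:
        hits_wl += 1

# closed forms derived in the proof (color factor excluded)
assert abs(hits_bl / trials - 1 / (6 * 2**(n - 3))) < 0.003
assert abs(hits_wl / trials - 1 / 2**(n - 2)) < 0.003
```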

Expected number of regions of each type
In this subsection we prove Theorem 7 about the expected number of regions of each type.
Proof of Theorem 7. As described in the introduction, it suffices to study $H_{2,n}$. Fix a point $p \in P$. Denote by $f_n$ the probability that $p$ lies in a region of type 2 and lies to the left of the element of $\overline{P}$ which is also in this region. These events are illustrated in Figure 6.
Set $X = T_1$, $U = T_2 + \cdots + T_{k+1}$ and $V = T_{n+1-l} + \cdots + T_n$. Then $U \sim \mathrm{Erlang}(k, 1)$ and $V \sim \mathrm{Erlang}(l, 1)$, and $X$, $U$ and $V$ are independent, thus
$$p_{k,l} = \mathbb{P}(X < U \text{ and } X < V).$$
Then $f_n$ equals $\frac{1}{2^{n-1}}$ plus a sum involving the $p_{k,l}$, and Lemma 27 (stated and proved below) implies the desired formula.

Proof of Lemma 27. By summing first on $(i, j)$ in formula (5), the inner sum can be computed by changing one variable to $s = k + l$. For any $s$ satisfying $i + j + 2 \le s \le n - 1$, there are $s - (i + j + 1)$ terms $(k, l)$ contributing (treating the case $s = n - 1$ apart). This computation is standard. Summing instead on the variables $u = i + j$ and $j$, the desired formula quickly follows.
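The probabilities $p_{k,l}$ admit a closed form obtained by integrating $e^{-x}\,\mathbb{P}(U > x)\,\mathbb{P}(V > x)$ term by term, namely $p_{k,l} = \sum_{j=0}^{k-1} \sum_{m=0}^{l-1} \binom{j+m}{j} 3^{-(j+m+1)}$. This expansion is our own computation, not quoted from the paper, and can be cross-checked by simulation:

```python
import math, random

def p_exact(k, l):
    """p_{k,l} = P(X < U and X < V) with X ~ Exp(1), U ~ Erlang(k, 1),
    V ~ Erlang(l, 1) independent, via termwise integration of
    e^{-x} P(U > x) P(V > x)  (an assumed closed form)."""
    return sum(math.comb(j + m, j) / 3**(j + m + 1)
               for j in range(k) for m in range(l))

random.seed(4)
k, l, trials = 2, 3, 300_000
hits = 0
for _ in range(trials):
    x = random.expovariate(1.0)
    u = sum(random.expovariate(1.0) for _ in range(k))
    v = sum(random.expovariate(1.0) for _ in range(l))
    hits += (x < u and x < v)
assert abs(hits / trials - p_exact(k, l)) < 0.005
```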

Expected total length of regions of type k
We now compute the expected total length of the regions of type 2.
Proof. We use the exponential spacing model again. Regions of type 2 come in antipodal pairs of equal lengths, and exactly one of the regions in each pair has the element of $P$ to the left of the element of $\overline{P}$. Recall from the proof of Theorem 7 that $Q$ denotes the event that $O_0 = B_R$ and $O_1 = W_L$. Conditionally on $Q$, the region of type 2 containing the origin can be divided into three portions: from the left boundary to the black dot, from the black dot to the white dot, and from the white dot to the right boundary. By symmetry, conditionally on $Q$, the expected lengths of the first and third portions are equal. The position of the white dot is $T_1$ and we denote by $\rho$ the position of the right boundary. See Figure 6.
Since $U \sim \mathrm{Erlang}(k-1, 1)$, $V \sim \mathrm{Erlang}(l, 1)$ and $X$, $Y$, $U$ and $V$ are independent, we deduce an expression for the conditional expected length as a sum of two terms, the first corresponding to the case $X < U$ and the second to the case $U < X$. A straightforward computation then yields formula (7). If $k = 1$, then conditionally on $A_{1,l}$ we have $2\rho = T_1 + T_2$, and similar computations as above show that formula (7) is also valid for $k = 1$.

Expected total length of regions of type 1
Proposition 29. For every $n \ge 3$, we have the following expression for the expected total length of the regions of type 1.

Proof. Denote by $S$ the event that $O_0 = B_L$ and $O_{n-1} = B_R$. By Lemma 15, this is equivalent to requiring that the dot at position 0 is a black dot looking to the left and lying in a region of type 1. Denote by $\lambda$ (resp. $\rho$) the position of the left (resp. right) boundary of that region. Since regions of type 1 come in antipodal pairs of equal length, exactly one of which is occupied, and the point is equally likely to look to the left or to the right, Lemma 23 and a reasoning similar to the one for $L_{2,n}$ yield an identity in which the expectation on the left-hand side refers to a model on a circle of unit length while the expectation on the right-hand side refers to the exponential spacing model. Recall the definitions of $\alpha_+$ and $\alpha_-$ from the proof of Theorem 7, and define $\beta_+$ and $\beta_-$ similarly for white dots: for any OCDC $o = (o_0, o_1, \dots, o_{n-1})$, we define $\beta(o)$ analogously to $\alpha(o)$, and if $\beta(o) \neq \emptyset$, set $\beta_-(o) = \min(\beta(o))$ and $\beta_+(o) = \max(\beta(o))$. We then define events $S^1_l$ for every $1 \le l \le n-2$, $S^2_l$ for every $1 \le l \le n-3$, $S^3_{k,l}$ for every $k, l \ge 1$ such that $k + l \le n-2$, $S^4_{k,l}$ for every $k, l \ge 1$ such that $k + l \le n-3$, and $S^5_{l'}$. Observing that $O_0 = B_L$ implies that $\alpha(O) \neq \emptyset$ and that $O_{n-1} = B_R$ implies that $\beta(O) \neq \emptyset$, we deduce that all the events defined above form a partition of $S$. We first treat the cases of $S^m$ with $1 \le m \le 4$. We write $X = T_1$, $U = T_2 + \cdots + T_{k+1}$, $V = T_{n-l} + \cdots + T_{n-1}$ and $Y = T_n$. Then $U \sim \mathrm{Erlang}(k, 1)$, $V \sim \mathrm{Erlang}(l, 1)$ and $X$, $Y$, $U$ and $V$ are independent.
The event $S^1_l$ corresponds to imposing the colors of $\min(l+2, n-1)$ dots and requiring that $V < Y < X$. Conditionally on $S^1_l$ we have $2\lambda = -V - Y$ and $2\rho = X - Y$. The event $S^4_{k,l}$ corresponds to imposing the colors of $\min(k+l+2, n-1)$ dots and requiring that $V + Y < X + U$. Conditionally on $S^4_{k,l}$ we have $2\lambda = -Y$ and $2\rho = \min(X, X + U - Y - V)$, where the first term in the minimum corresponds to the case $Y + V < U$. The cases $S^1_l$ (resp. $S^2_l$) correspond to $S^3_{0,l}$ (resp. $S^4_{0,l}$) provided we extend the definitions of $S^3_{k,l}$ and $S^4_{k,l}$ to $k = 0$. For the case of $S^5_{l'}$, we define $X = T_1$, $U = T_2 + \cdots + T_{n-1-l'}$, $Z = T_{n-l'}$, $V = T_{n-l'+1} + \cdots + T_{n-1}$ and $Y = T_n$. Then $U \sim \mathrm{Erlang}(n-2-l', 1)$, $V \sim \mathrm{Erlang}(l'-1, 1)$ and $X$, $Y$, $U$, $V$ and $Z$ are independent. The event $S^5_{l'}$ corresponds to requiring that $V + Y < X + U$ and imposing the colors of $n-1$ dots. Conditionally on $S^5_{l'}$ we have $2\lambda = -Y$ and $2\rho = \min(X, X + U - Y - V)$. Setting $k = n - 2 - l'$ and $l = l' - 1$ and comparing with the computation for $S^4_{k,l}$, we deduce the corresponding contribution when $2 \le l' \le n - 3$. The remaining computations being standard, we omit them here; combining all the contributions yields the desired formula.

Equidistribution of regions of each type
This subsection is devoted to the proof of Theorem 11, which is done by making use of Theorems 7 and 8.
Proof of Theorem 11. Recall that we identify the circle with the interval $[0, 1)$, so that, for $0 \le t \le 1$, the circular arc from 1 to $e^{2i\pi t}$ gets identified with the interval $[0, t]$. We can restrict ourselves to $0 \le t \le \frac{1}{2}$: by antipodal symmetry of the configuration, for $\frac{1}{2} < t \le 1$ we have, for every $k \in \{0, 1, 2\}$, $h_{k,n}(t) = h_{k,n}(\tfrac{1}{2}) + h_{k,n}(t - \tfrac{1}{2})$. Let us first focus on the proof of Theorem 11 (i), as (ii) can be shown in the same way. Firstly, remark that the variables $\left(\frac{h_{0,n}(t)}{2n}, \frac{h_{1,n}(t)}{2n}, \frac{h_{2,n}(t)}{2n}\right)$ take values in the compact set $[0,1]^3$, so the sequence that we consider is tight. Thus, we only have to check the convergence of its finite-dimensional marginals. Fix $m \ge 1$ and let $0 < a_1 < a_2 < \cdots < a_m < \frac{1}{2}$ be $m$ real numbers. We also set $a_0 = 0$ and $a_{m+1} = \frac{1}{2}$. We first prove that the proportions of regions of each type in $[a_0, a_1], \dots, [a_m, a_{m+1}]$ are "almost independent", and then use Theorem 7 on each of these intervals to get the result. For this, let $N_1, \dots, N_{m+1}$ be the numbers of dots in each of these intervals. Recall that black (resp. white) dots correspond to elements of $P$ (resp. $\overline{P}$). It is clear that $(N_1, \dots, N_{m+1})$ is distributed as a multinomial of parameters $(n; 2(a_1 - a_0), 2(a_2 - a_1), \dots, 2(a_{m+1} - a_m))$. Thus, for any $1 \le i \le m+1$, a Chernoff bound shows that $N_i$ concentrates around its mean $2(a_i - a_{i-1})n$. This implies that, with probability going to 1 as $n \to \infty$, every $N_i$ deviates from its mean by $o(n)$; call this event (9). Let us assume from now on that this holds. We now need to control the interactions between two different intervals of the form $[a_i, a_{i+1}]$. The key remark is the following: among all the lines arising as the boundary of some region entirely contained inside $[a_i, a_{i+1}]$, at most 4 are not midpoints of two dots that are both inside $[a_i, a_{i+1}]$. These four lines involve the leftmost and rightmost dots of each color in the interval. Hence, overall, at most $4(m+1)$ regions are created by interactions between two intervals. Consider now the affine map sending $a_i$ to 0 and $a_{i+1}$ to $\frac{1}{2}$.
Conditionally on the number of points in the interval $[a_i, a_{i+1}]$, their images by this map are i.i.d. uniform on $[0, \frac{1}{2}]$. By (9), using Theorem 7 and the previous key remark, the proportions of regions of each type converge jointly for all $i$. Thus, the finite-dimensional marginals of the process converge, and one gets (i).
Let us now check that the same method may be applied to prove (ii). The sequence involved is tight for the same reason. To prove the convergence of the finite-dimensional marginals of the process, we again use the fact that only a bounded number of regions arise from the interaction between two different intervals. The only additional ingredient that we need is the following result, whose proof can be found in [5], which tells us that all the regions are small enough that omitting a bounded number of them does not change the sum of the lengths of the regions of a given type by much.
Lemma 30 ([5]). Let $M_n$ be the maximal distance between two consecutive points for $n$ i.i.d. uniform points on $[0, \frac{1}{2}]$. Then, as $n \to \infty$, in probability:
$$\frac{2n M_n}{\log n} \longrightarrow 1.$$
Considering the first dot to the left and the first dot to the right of a region of type $k$ with $k \in \{0, 1, 2\}$, we find that an upper bound for the length of that region is $(k+1) M_n$. By Lemma 30, with probability going to 1 as $n \to \infty$, every region length is less than $n^{-1/2}$, and thus one can use the same key argument as in the proof of (i). By Theorem 8, we get, jointly for all $i$, the convergence of
$$\left(\ell_{0,n}(a_{i+1}) - \ell_{0,n}(a_i),\ \ell_{1,n}(a_{i+1}) - \ell_{1,n}(a_i),\ \ell_{2,n}(a_{i+1}) - \ell_{2,n}(a_i)\right).$$
Hence, the finite-dimensional marginals of the process converge, and one gets (ii).
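The classical maximal-spacing asymptotics $M_n \approx \frac{\log n}{2n}$ on an interval of length $\frac{1}{2}$ can be observed numerically (a Monte Carlo sketch with our own normalization check and loose bounds):

```python
import math, random

random.seed(5)

def max_spacing(n):
    """Maximal gap between consecutive order statistics of n
    i.i.d. uniform points on [0, 1/2]."""
    xs = sorted(random.uniform(0, 0.5) for _ in range(n))
    return max(xs[i + 1] - xs[i] for i in range(n - 1))

n, runs = 50_000, 20
ratios = [2 * n * max_spacing(n) / math.log(n) for _ in range(runs)]
mean_ratio = sum(ratios) / runs
# the normalized maximal spacing 2 n M_n / log n should be close to 1
assert 0.85 < mean_ratio < 1.25
```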

Uniform realizable words and bracelets
The aim of this section is to prove Theorem 31 stated below, which immediately implies Theorem 12 about the asymptotic shape of a uniformly random realizable word. Let $w^{(n)}$ be a random word taken uniformly in the set of realizable words of length $2n$. We define the folded word obtained from $w^{(n)}$ to be the word $\widehat{w}^{(n)}$ of length $n$ on the alphabet $\{00, 10, 01, 11\}$, whose letter in position $i$ is the concatenation $w_i w_{i+n}$. For $x \in [0, n]$ and $a \in \{00, 11, 10, 01\}$, denote by $S^a_x$ the number of letters $a$ in $\widehat{w}^{(n)}$ between positions 0 and $\lfloor x \rfloor$. To any folded realizable word $\widehat{w} := \widehat{w}_1 \cdots \widehat{w}_n$, we associate the walk $S$ satisfying $S_0 = 0$ and, for all $i \ge 1$, $S_i - S_{i-1} = f(\widehat{w}_i)$.
Remark that occurrences of 01 (resp. 10) in $\widehat{w}$ correspond to jumps by $-1$ (resp. $+1$) in $S$. Jumps by 0 in $S$ may correspond to either 00 or 11, but since these two letters alternate in a folded realizable word, if one knows whether the first jump by 0 corresponds to 00 or 11, then it is possible to recover $\widehat{w}$ from $S$. By symmetry, we assume from now on that the first jump by 0 corresponds to 11, so that the map $\widehat{w} \mapsto S$ is a bijection from $\widehat{W}^+_n$ to $\mathrm{Walks}^+(n)$, where $\widehat{W}^+_n$ is the set of folded realizable words whose first 11 appears before the first 00, and $\mathrm{Walks}^+(n)$ is the set of walks of length $n$, starting from 0, with steps in $\{0, +1, -1\}$ and with an even nonzero number of steps 0. We want to study a uniform element of the set $\mathrm{Walks}^+(n)$. To this end, we first study the set $\mathrm{Walks}(n)$ of walks of length $n$, starting from 0 and with jumps in $\{0, +1, -1\}$. We define a walk $(T_i)_{0 \le i \le n} := (S_i, K_i)_{0 \le i \le n}$ on $\mathbb{Z}^2$ as follows: its first coordinate is a uniform element of $\mathrm{Walks}(n)$, $K_0 = 0$ and, for any $0 \le i \le n-1$, $K_{i+1} - K_i = \mathbf{1}_{S_{i+1} - S_i = 0}$. In other words, the second coordinate of $T$ counts the steps 0 in the walk $S$. It is clear by definition that $(T_i)_{0 \le i \le n}$ is a random walk on $\mathbb{Z}^2$ starting from $(0, 0)$, with i.i.d. jumps $Y_1, \dots, Y_n$ whose distribution is given by $\mathbb{P}(Y_1 = (1, 0)) = \mathbb{P}(Y_1 = (-1, 0)) = \mathbb{P}(Y_1 = (0, 1)) = \frac{1}{3}$.
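The bijection between folded words and walks can be sketched as follows (hypothetical helper names; the jump values $f(10) = +1$, $f(01) = -1$, $f(00) = f(11) = 0$ and the 00/11 alternation are as described above, with the convention that the first 0-jump encodes 11):

```python
# Letters of a folded word map to jumps; the walk is the cumulative sum.
STEP = {"10": +1, "01": -1, "00": 0, "11": 0}

def word_to_walk(w):
    s, walk = 0, [0]
    for letter in w:
        s += STEP[letter]
        walk.append(s)
    return walk

def walk_to_word(walk, first_zero="11"):
    """Recover the folded word from the walk, knowing whether the first
    0-jump encodes 11 (the convention for W-hat^+_n) or 00; subsequent
    0-jumps alternate between 11 and 00."""
    word, nxt = [], first_zero
    for a, b in zip(walk, walk[1:]):
        d = b - a
        if d == 1:
            word.append("10")
        elif d == -1:
            word.append("01")
        else:
            word.append(nxt)
            nxt = "00" if nxt == "11" else "11"
    return word

w = ["11", "10", "01", "00", "10", "11"]
assert word_to_walk(w) == [0, 0, 1, 0, 0, 1, 1]
assert walk_to_word(word_to_walk(w)) == w   # round trip recovers the word
```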
In particular, $Y_1$ has mean and covariance matrix
$$M = \begin{pmatrix} 0 \\ 1/3 \end{pmatrix} \quad \text{and} \quad \Sigma = \begin{pmatrix} 2/3 & 0 \\ 0 & 2/9 \end{pmatrix}.$$
We want to prove the functional convergence of the walk $S$, along with the process $(K_i)_{0 \le i \le n}$ counting the number of 0 jumps in the walk, conditionally on $K_n$ being even and nonzero. Since, clearly, $\mathbb{P}(K_n = 0) = o(\mathbb{P}(K_n = 0 \bmod 2))$, we only need to condition $K_n$ to be even.
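The mean and covariance of the jump distribution can be verified by direct simulation (a sketch; the tolerances are loose Monte Carlo bounds):

```python
import random

random.seed(6)
trials = 300_000
jumps = [(1, 0), (-1, 0), (0, 1)]        # each with probability 1/3
samples = [random.choice(jumps) for _ in range(trials)]

m1 = sum(s for s, _ in samples) / trials
m2 = sum(k for _, k in samples) / trials
var1 = sum(s * s for s, _ in samples) / trials - m1**2
var2 = sum(k * k for _, k in samples) / trials - m2**2
cov = sum(s * k for s, k in samples) / trials - m1 * m2

# M = (0, 1/3),  Sigma = diag(2/3, 2/9), zero off-diagonal covariance
assert abs(m1) < 0.01 and abs(m2 - 1/3) < 0.01
assert abs(var1 - 2/3) < 0.01 and abs(var2 - 2/9) < 0.01
assert abs(cov) < 0.01
```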
The whole proof of this proposition is highly inspired by that of [10, Lemma 4.1]. Let us start with a result on the corresponding unconditioned random walk. This result is a consequence of Theorem 32: indeed, by [4, Theorem 16.14], it is enough to check that the one-dimensional convergence holds for $c = 1$, and one gets this from Theorem 32, which provides a local limit estimate holding uniformly for $a, b$ in a compact subset of $\mathbb{R}$. This implies (see e.g. [1, Theorem 7.8]) that $(S_n/\sqrt{n}, (K_n - n/3)/\sqrt{n})$ converges in distribution to $W_1$, and the convergence (10) follows. We now want a conditioned version of (10), taking into account the fact that $K_n$ has to be even. To this end, take $0 < u < 1$ and take $F : C([0, u], \mathbb{R}^2) \to \mathbb{R}$ a bounded continuous functional. Setting $\varphi_n(i) = \mathbb{P}(K_n = i \bmod 2)$ and observing that the (unconditioned) walk until time $nu$ is independent of the walk between $nu$ and $n$, one can write a decomposition (11) of the conditioned expectation. In order to estimate this quantity, simply remark that $K_n$ is distributed as a binomial $\mathrm{Bin}_n$ of parameters $(n, 1/3)$. Now, a simple computation shows that $\mathbb{P}(\mathrm{Bin}_n = 0 \bmod 2) + \mathbb{P}(\mathrm{Bin}_n = 1 \bmod 2) = 1$ and $\mathbb{P}(\mathrm{Bin}_n = 0 \bmod 2) - \mathbb{P}(\mathrm{Bin}_n = 1 \bmod 2) = 3^{-n}$, which implies that $\varphi_n(0)$ and $\varphi_n(1)$ both converge to $\frac{1}{2}$ as $n \to \infty$.
Thus, (11) can be rewritten using the convergence of $\varphi_n(0)$ and $\varphi_n(1)$ to $\frac{1}{2}$, and the remaining count of exceptional words is $o(\#W_n)$. Furthermore, a word of length $n$ which is equal to its reversal is determined by its first $n/2$ letters, hence there are at most $n \cdot 3^{n/2}$ words whose reversal may be equal to one of their shifts. The result follows.