A note on log-concave random graphs

We establish a threshold for the connectivity of certain random graphs whose (dependent) edges are determined by the uniform distributions on generalized Orlicz balls, crucially using their negative correlation properties. We also show the existence of a unique giant component for such random graphs.


Introduction
Probabilistic combinatorics is today a thriving field bridging the classical area of probability with modern developments in combinatorics. The theory of random graphs, pioneered by Erdős and Rényi [2], [3], has given us numerous insights, surprises and techniques, and has been used to count, to establish structural properties and to analyze algorithms. There are by now several texts [1], [6], [4] that deal exclusively with the subject. The most heavily studied models are $G_{n,m}$ and $G_{n,p}$. Both have vertex set $[n]$; in the first we choose $m$ random edges and in the second we include each possible edge independently with probability $p$.

Let $X$ be a random vector in $[0,\infty)^{\binom{n}{2}}$ with a log-concave down-monotone density $f$, that is, (i) $\log f$ is concave and (ii) $f(x) \ge f(y)$ if $x \le y$ (coordinate-wise). For $0 < p < 1$, let $G_{X,p}$ be a random graph with vertices $1, \ldots, n$ and edges determined by $X$: for $1 \le i < j \le n$, $\{i,j\}$ is an edge if and only if $X_{\{i,j\}} \le p$. Such log-concave random graphs were introduced by Frieze, Vempala and Vera in [5]. For instance, when $X$ is uniform on $[0,1]^{\binom{n}{2}}$, $G_{X,p}$ is the random graph $G_{n,p}$. The paper [5] introduced a surprising connection between random graphs and convex geometry. It studied, among other things, the connectivity of $G_{X,p}$ and found a logarithmic gap for the threshold. There is no gap when $G_{X,p}$ is defined by uniform sampling from a "well-behaved" regular simplex, and we extend this case to generalized Orlicz balls (GOBs), that is, sets of the form $\{x \in \mathbb{R}^d : \sum_{i=1}^d f_i(|x_i|) \le 1\}$ for some nondecreasing lower semicontinuous convex functions $f_1, \ldots, f_d : [0,\infty) \to [0,\infty]$ with $f_i(0) = 0$, which are not identically $0$ or $+\infty$ on $(0,\infty)$.
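As a concrete illustration of the model (not part of the original analysis), here is a minimal Python sketch of $G_{X,p}$. With the default choice of $X$ uniform on $[0,1]^{\binom{n}{2}}$ it reduces to $G_{n,p}$; the `sample_X` hook is a hypothetical entry point for a dependent vector, e.g. one sampled uniformly from a GOB.

```python
import itertools
import random

def sample_graph(n, p, sample_X=None, seed=None):
    """Sample the random graph G_{X,p} on vertex set {0, ..., n-1}:
    edge {i, j} is present iff X_{i,j} <= p.  By default X is uniform
    on [0,1]^C(n,2), which recovers the Erdos-Renyi graph G_{n,p};
    a sampler for a dependent X (e.g. uniform on a generalized Orlicz
    ball) can be plugged in via sample_X.
    """
    rng = random.Random(seed)
    pairs = list(itertools.combinations(range(n), 2))
    X = sample_X(len(pairs)) if sample_X is not None else [rng.random() for _ in pairs]
    return {e for e, x in zip(pairs, X) if x <= p}

edges = sample_graph(5, 1.0)  # p = 1 keeps every edge: all C(5,2) = 10 pairs
assert len(edges) == 10
```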
The key property of Orlicz balls is negative correlation. We say that a random vector $X$ in $\mathbb{R}^d$ has negatively correlated coordinates if for any disjoint subsets $I, J$ of $\{1, \ldots, d\}$ and nonnegative numbers $s_i, t_j$, we have
$$\mathbb{P}\left(\forall i \in I\ X_i > s_i,\ \forall j \in J\ X_j > t_j\right) \le \mathbb{P}\left(\forall i \in I\ X_i > s_i\right)\mathbb{P}\left(\forall j \in J\ X_j > t_j\right).$$
It was shown in [7] that this property holds for random vectors uniformly distributed on GOBs (see also [8] for a first such result treating two coordinates and [9] for a simpler proof of the general result).

Notation: throughout the paper we let $\sigma_{\min}$ and $\sigma_{\max}$ be defined by
$$\sigma_{\min} = \min_{1 \le i < j \le n} \mathbb{E}X_{i,j}, \qquad \sigma_{\max} = \max_{1 \le i < j \le n} \mathbb{E}X_{i,j}.$$
Our result concerning connectivity is the following theorem.
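To make the negative correlation property concrete, consider the simplex $\{x \ge 0 : x_1 + \cdots + x_d \le 1\}$, a GOB with $f_i(t) = t$. For $X$ uniform on it the joint upper tails are explicit, $\mathbb{P}(\forall i \in I\ X_i > s_i) = \big(1 - \sum_{i \in I} s_i\big)_+^d$, so negative correlation for two coordinates reduces to $1 - s - t \le (1-s)(1-t)$. The following numeric check is our illustration (the helper `tail` is not from the paper):

```python
def tail(s, d):
    """P(X_i > s_i for all i) for X uniform on the simplex
    {x >= 0 : x_1 + ... + x_d <= 1}: equals max(1 - sum(s), 0)**d,
    since the event is a translated, rescaled copy of the simplex."""
    return max(1.0 - sum(s), 0.0) ** d

d = 10
for s in (0.05, 0.2):
    for t in (0.1, 0.3):
        # negative correlation: joint upper tail <= product of marginals
        assert tail([s, t], d) <= tail([s], d) * tail([t], d)
```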
Theorem 1. Let $X = (X_{i,j})_{1 \le i < j \le n}$ be a log-concave random vector in $[0,\infty)^{\binom{n}{2}}$ with a down-monotone density and negatively correlated coordinates.
(a) For every $\delta \in (0,1)$, there are constants $c_1$ and $c_2$ depending only on $\delta$ such that for $p < c_1 \sigma_{\min} \frac{\log n}{n}$, we have $\mathbb{P}(G_{X,p}\text{ has isolated vertices}) > 1 - c_2 n^{-\delta}$.
(b) For every $\delta \in (0,1)$, there are constants $C_1$ and $C_2$ depending only on $\delta$ such that for $p > C_1 \sigma_{\max} \frac{\log n}{n}$, we have $\mathbb{P}(G_{X,p}\text{ is connected}) > 1 - C_2 n^{-\delta}$.

We will also discuss the existence of a giant component for smaller values of $p$.
where the first maximum is over all nonempty subsets $T$ of the index set $\{(i,j),\ 1 \le i < j \le n\}$. For our theorem on the existence of a giant component we need $M = O(1)$. Now, our assumptions on the $f_{i,j}$ imply that the $a_{i,j}$ are finite. Furthermore, $M \le \max_{i,j} a_{i,j}^2$, and so our assumption here is that $\max_{i,j} a_{i,j}$ is bounded by an absolute constant.
Theorem 2. Let $X = (X_{i,j})_{1 \le i < j \le n}$ be a log-concave random vector in $[0,\infty)^{\binom{n}{2}}$ with a down-monotone density. Assume that $M = O(1)$. There are constants $c_1$ and $c_2$ such that for every $\beta > 1$, we have

(i) If $p < c_1 \frac{\sigma_{\min}}{n}$, then $\mathbb{P}\left(G_{X,p}\text{ has a component of order} \ge \beta \log n\right) < \frac{12}{n^{\beta-1}}$.

(ii) If $p > c_2 \frac{\sigma_{\max}}{n}$, then with high probability every component of $G_{X,p}$ has order less than $\beta \log n$, except for a unique giant component of order greater than $n/2$.
Note that we have dropped the assumption of negative correlation.
Connectivity: Proof of Theorem 1

Proof. Part (b) is part of Theorem 2.1 of [5]. For (a), we adapt the standard second moment argument used for the Erdős–Rényi model.
For $k \in \{1, \ldots, n\}$, let $Y_k$ be the indicator of the event that vertex $k$ is isolated in $G_{X,p}$, that is, $Y_k = 1$ if and only if $X_{ks} > p$ for all $s \ne k$, and let $Y = Y_1 + \cdots + Y_n$ be the number of isolated vertices; our goal is to show that $\mathbb{P}(Y = 0) \le \varepsilon$ for some $\varepsilon \le c_2 n^{-\delta}$. From the negative correlation of coordinates of $X$ as well as an elementary inequality $\mathbb{P}(A) \le \mathbb{P}(A \cap B) + 1 - \mathbb{P}(B)$, we get, for $k \ne l$,
$$\mathbb{E}[Y_k Y_l] \le \mathbb{P}(Y_l = 1)\,\mathbb{P}\left(\forall s \ne k, l\ X_{ks} > p\right) \le \mathbb{P}(Y_l = 1)\big(\mathbb{P}(Y_k = 1) + \mathbb{P}(X_{kl} \le p)\big).$$
By Lemma 3.1 from [5], $\mathbb{P}(Y_k = 1) \ge e^{-apn/\sigma_{\min}}$ for some universal constant $a$ (the assumption $p < \frac{1}{4}\sigma_{\min}$ of that lemma is clearly satisfied if $p < c_1 \sigma_{\min} \frac{\log n}{n}$), so $\mathbb{E}Y \ge n e^{-ac_1 \log n} = n^{1 - ac_1}$, while by Lemma 3.2 from [5], $\mathbb{P}(X_{kl} \le p) \le \frac{bp}{\sigma_{\min}} \le bc_1 \frac{\log n}{n}$ for a universal constant $b$. By Chebyshev's inequality and the bound on $\mathbb{E}[Y_kY_l]$ above,
$$\mathbb{P}(Y = 0) \le \frac{\mathrm{Var}(Y)}{(\mathbb{E}Y)^2} \le \frac{1}{\mathbb{E}Y} + \frac{\max_{k < l}\mathbb{P}(X_{kl} \le p)}{\min_k \mathbb{P}(Y_k = 1)}.$$
Thus, $\varepsilon = c_2 n^{ac_1 - 1}\log n$ will suffice (which is at most $c_2 n^{-\delta}$ once $c_1$ is small enough in terms of $\delta$).
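For comparison, in the independent case $G_{n,p}$ every quantity in this second moment argument is explicit and the Chebyshev bound can be evaluated numerically. The sketch below is our illustration (not from [5]); it uses $\mathbb{E}Y = n(1-p)^{n-1}$ and $\mathbb{E}[Y_kY_l] = (1-p)^{2n-3}$ for $k \ne l$.

```python
import math

def chebyshev_bound(n, p):
    """Second moment bound P(Y = 0) <= Var(Y)/(EY)^2 for the number Y
    of isolated vertices of G_{n,p}, using the exact independent-case
    formulas EY = n(1-p)^(n-1) and E[Y_k Y_l] = (1-p)^(2n-3), k != l."""
    q = 1.0 - p
    EY = n * q ** (n - 1)
    EY2 = EY + n * (n - 1) * q ** (2 * n - 3)  # E[Y^2]
    return (EY2 - EY ** 2) / EY ** 2

n = 10_000
below = chebyshev_bound(n, 0.5 * math.log(n) / n)  # below the threshold log n / n
above = chebyshev_bound(n, 2.0 * math.log(n) / n)  # above the threshold
assert below < 0.02  # isolated vertices exist with probability >= 0.98
assert above > 1.0   # the bound becomes vacuous past the threshold
```

The contrast across the threshold $\frac{\log n}{n}$ mirrors part (a) versus part (b) of Theorem 1 in the uniform case.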

Giant Component: Proof of Theorem 2
Lemma 3. Let $X = (X_{i,j})_{1 \le i < j \le n}$ be a log-concave random vector in $[0,\infty)^{\binom{n}{2}}$ with a down-monotone density. There are universal constants $a$ and $b$ such that for $S, T \subset \{(i,j),\ 1 \le i < j \le n\}$ and $p > 0$, we have
$$\mathbb{P}\left(\forall s \in S\ X_s > p,\ \forall t \in T\ X_t \le p\right) \le e^{-ap|S|/M}\left(\frac{bp}{\sigma_{\min}}\right)^{|T|}.$$

Proof. Fix disjoint sets $S, T \subset \{(i,j),\ 1 \le i < j \le n\}$ (if they are not disjoint, the probability in question is $0$) and $y \in [0,\infty)^{|T|}$. Let $f$ be the density of $(X_S, X_T)$. The conditional density of the vector $X_S$ given $X_T = y$ is down-monotone and log-concave. Therefore, by Lemma 3.1 from [5], $\mathbb{P}\left(\forall s \in S\ X_s > p \mid X_T = y\right) \le e^{-ap|S|/M}$.
We denote the density of $X_T$ by $f_{X_T}$ and get
$$\mathbb{P}\left(\forall s \in S\ X_s > p,\ \forall t \in T\ X_t \le p\right) = \int_{[0,p]^{|T|}} \mathbb{P}\left(\forall s \in S\ X_s > p \mid X_T = y\right) f_{X_T}(y)\,\mathrm{d}y \le e^{-ap|S|/M}\,\mathbb{P}\left(\forall t \in T\ X_t \le p\right) \le e^{-ap|S|/M}\left(\frac{bp}{\sigma_{\min}}\right)^{|T|},$$
where the final inequality follows directly from Lemma 3.2 of [5].
With this lemma in hand, we can prove Theorem 2.
Proof. Let $Z_k$ be the number of components of order $k$ (that is, on $k$ vertices) in $G_{X,p}$. As for the Erdős–Rényi model, looking at a spanning tree for each component and bounding the corresponding in-out edge probabilities using Lemma 3 yields
$$\mathbb{E}Z_k \le \binom{n}{k} k^{k-2}\left(\frac{bp}{\sigma_{\min}}\right)^{k-1} e^{-apk(n-k)/M}.$$
Thus, for $p < c_1\frac{\sigma_{\min}}{n}$ with $c_1$ small enough,
$$\sum_{k \ge \beta \log n} \mathbb{E}Z_k < \frac{12}{n^{\beta - 1}}.$$
By the first moment method, this gives (i).
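The same counting step (choose the $k$ vertices, one of Cayley's $k^{k-2}$ spanning trees, its $k-1$ edges present, all $k(n-k)$ in-out edges absent) can be evaluated numerically in the independent case $G_{n,p}$, where the factor $\frac{bp}{\sigma_{\min}}$ is replaced by $p$ itself. This is our sanity check, not the paper's computation; logarithms are used so the binomial coefficients do not overflow.

```python
import math

def log_EZk_bound(n, k, p):
    """log of the first-moment bound on E Z_k, the expected number of
    components of order k in G_{n,p}: choose the k vertices, one of the
    k^(k-2) spanning trees (Cayley's formula), require its k-1 edges to
    be present and all k(n-k) edges leaving the vertex set to be absent."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + (k - 2) * math.log(k)
            + (k - 1) * math.log(p)
            + k * (n - k) * math.log1p(-p))

n = 10_000
p = 0.2 / n                       # p = c/n with c < 1: subcritical regime
k0 = math.ceil(2 * math.log(n))   # beta = 2
total = sum(math.exp(log_EZk_bound(n, k, p)) for k in range(k0, n // 2 + 1))
assert total < 1e-3               # whp no component of order >= 2 log n
```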

Case 2.
Let $c$ be a large constant, say such that $Ace^{-c/2} \le \frac{1}{e}$ and $Ac \ge e^2$, which holds when, say, $c \ge 4\log A$, provided that $A$ is large enough; this leads to the assumption on $p$ in (ii). Then for $k \le n/2$, we have
$$\mathbb{E}Z_k \le n\left(Ace^{-c/2}\right)^k \le ne^{-k}.$$
Thus,
$$\sum_{\beta \log n \le k \le n/2} \mathbb{E}Z_k \le \sum_{k \ge \beta \log n} ne^{-k} \le \frac{2}{n^{\beta - 1}}.$$
By the first moment method, this gives the first part of (ii). To go about the second part and show that there is a giant component, we shall simply count the number of vertices on the small components and show that with high probability, there are strictly fewer than $n$ such vertices. The uniqueness of the giant component plainly follows from the fact that it has more than $n/2$ vertices, so there cannot be more than one such component.

Fix $1 \le k \le \beta \log n$ and set $t = ne^{-k-1}$. For any positive integer $l \le et + 1$, we have
$$\mathbb{P}(Z_k \ge l) \le \frac{\mathbb{E}\left[Z_k(Z_k - 1)\cdots(Z_k - l + 1)\right]}{l!}.$$
As for the upper bound for $\mathbb{E}Z_k$, looking at spanning trees for each $l$-tuple of distinct components of order $k$ and bounding the corresponding in-out edge probabilities using Lemma 3 yields
$$\mathbb{E}\left[Z_k(Z_k - 1)\cdots(Z_k - l + 1)\right] \le \left(\binom{n}{k} k^{k-2}\left(\frac{bp}{\sigma_{\min}}\right)^{k-1} e^{-apk(n - kl)/M}\right)^{l}.$$
Provided that $kl \le n/2$, under our assumption $c \ge 4\log A$, this is further upper bounded by $(t/k^2)^l$, which gives
$$\mathbb{P}(Z_k \ge l) \le \frac{1}{l!}\left(\frac{t}{k^2}\right)^{l}.$$
For $k \ge \frac{1}{2}\log n$, we choose $l = 1$ and get
$$\mathbb{P}(Z_k \ge 1) \le \frac{t}{k^2} = \frac{ne^{-k-1}}{k^2}.$$
For $k < \frac{1}{2}\log n$, we have $t = ne^{-k-1} > e^{-1}\sqrt{n}$, so choosing, say, $l - 1 = \lfloor e^{-1}\sqrt{n}\rfloor$ yields
$$\mathbb{P}(Z_k \ge l) \le \frac{1}{l!}\left(\frac{t}{k^2}\right)^{l} \le \left(\frac{et}{k^2 l}\right)^{l}.$$
Combining the last two estimates, the union bound gives that with high probability $Z_k < l$ for every $1 \le k \le \beta \log n$ with the above choices of $l$, in which case the number of vertices on components of order at most $\beta \log n$ is at most $\sum_{k \le \beta \log n} k\,l$, which we check is strictly less than $n$.
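The arithmetic behind the choice of the constant $c$ at the start of the proof can be checked numerically. In the sketch below, `A` is merely a stand-in for the unspecified absolute constant of the proof; the check confirms that $c = 4\log A$ satisfies both inequalities once $A$ is moderately large.

```python
import math

# The choice c = 4*log(A) satisfies both requirements once A is
# moderately large (A = 10 is in fact still too small for the first
# inequality, consistent with "provided that A is large enough").
for A in (100.0, 1000.0, 10_000.0):
    c = 4 * math.log(A)
    assert A * c * math.exp(-c / 2) <= 1 / math.e
    assert A * c >= math.e ** 2
```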

Conclusion and Open Questions
We have successfully generalised the results on the regular simplex in [5] to GOBs. The following questions seem most apposite.
Q1 What we prove in Theorem 2 does not rule out the possibility that in some range of p there is more than one giant component. Can the proof be tightened to rule this out?
Q2 What is the connectivity or giant component threshold for the intersection of two well-behaved regular simplices?
Q3 What is the connectivity or giant component threshold for the intersection of a few regular simplices with independent randomly chosen coefficients?