
A lower bound on the mean value of the Erdős-Hooley delta function


Kevin Ford, Dimitris Koukoulopoulos, and I have just uploaded to the arXiv our paper "A lower bound on the mean value of the Erdős-Hooley delta function". This paper complements a recent paper of Dimitris and myself obtaining the upper bound

$\displaystyle \frac{1}{x} \sum_{n \leq x} \Delta(n) \ll (\log\log x)^{11/4}$

on the mean value of the Erdős-Hooley delta function

$\displaystyle \Delta(n) := \sup_u \# \{ d|n: e^u < d \leq e^{u+1} \}.$

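To make the definition concrete, here is a quick brute-force sketch (an illustration only, not code from the paper, and only practical for small ${n}$; the helper names divisors and delta are ad hoc) that computes ${\Delta(n)}$ by sliding a window of multiplicative length ${e}$ across the list of divisors.

```python
import math

def divisors(n):
    """Sorted list of divisors of n by trial division (fine for small n)."""
    divs = []
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            divs.append(d)
            if d != n // d:
                divs.append(n // d)
    return sorted(divs)

def delta(n):
    """Brute-force Delta(n) = sup_u #{d | n : e^u < d <= e^{u+1}}.
    The supremum is attained with e^u just below a divisor d, so it suffices to
    count, for each divisor d, the divisors lying in [d, e*d)."""
    divs = divisors(n)
    return max(sum(1 for dp in divs[i:] if dp < math.e * d) for i, d in enumerate(divs))

print(delta(12))  # 3: the divisors 2, 3, 4 of 12 fit in a single window (e^u, e^{u+1}]
```
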
In this paper we obtain a lower bound

$\displaystyle \frac{1}{x} \sum_{n \leq x} \Delta(n) \gg (\log\log x)^{1+\eta-o(1)}$

where ${\eta = 0.3533227\dots}$ is an exponent that arose in previous work of Ford, Green, and Koukoulopoulos, who showed that

$\displaystyle \Delta(n) \gg (\log\log n)^{\eta-o(1)} \ \ \ \ \ (1)$

for all ${n}$ outside of a set of density zero. The previously best known lower bound for the mean value was

$\displaystyle \frac{1}{x} \sum_{n \leq x} \Delta(n) \gg \log\log x,$

due to Hall and Tenenbaum.

The point is that the main contributions to the mean value of ${\Delta(n)}$ come not from "typical" numbers ${n}$ of size about ${x}$, but rather from numbers that admit a splitting

$\displaystyle n = n' n''$

where ${n''}$ is the product of the primes of ${n}$ lying between some intermediate threshold ${1 \leq y \leq x}$ and ${x}$, and behaves "typically" (in particular, it has about ${\log\log x - \log\log y + O(\sqrt{\log\log x})}$ prime factors, as per the Hardy-Ramanujan law and the Erdős-Kac law), while ${n'}$ is the product of the primes up to ${y}$ and has double the typical number of prime factors (${2 \log\log y + O(\sqrt{\log\log x})}$ rather than ${\log\log y + O(\sqrt{\log\log x})}$); thus ${n'}$ is the type of number that would give a significant contribution to the mean value of the divisor function ${\tau(n')}$. Here ${y}$ is chosen so that ${\log\log y}$ is an integer in the range

$\displaystyle \varepsilon \log\log x \leq \log\log y \leq (1-\varepsilon) \log\log x$

for some small constant ${\varepsilon>0}$; there are basically ${\log\log x}$ different values of ${y}$, which give essentially disjoint contributions. From the easy inequalities

$\displaystyle \Delta(n) \gg \Delta(n') \Delta(n'') \geq \frac{\tau(n')}{\log n'} \Delta(n'') \ \ \ \ \ (2)$

(the latter coming from the pigeonhole principle: the ${\tau(n')}$ divisors of ${n'}$ all lie in ${[1,n']}$, which is covered by about ${\log n'}$ intervals of the form ${(e^u, e^{u+1}]}$, so at least one such interval contains ${\gg \tau(n')/\log n'}$ of them) and the fact that ${\frac{\tau(n')}{\log n'}}$ has mean value about one, one would expect to obtain the above result provided that one could get a lower bound of the form

$\displaystyle \Delta(n'') \gg (\log\log n'')^{\eta-o(1)} \ \ \ \ \ (3)$

for most typical ${n''}$ with prime factors between ${y}$ and ${x}$. Unfortunately, due to the lack of small prime factors in ${n''}$, the arguments of Ford, Green, and Koukoulopoulos that give (1) for typical ${n}$ do not quite work for the rougher numbers ${n''}$. However, it turns out that one can get around this problem by replacing (2) with the more efficient inequality

$\displaystyle \Delta(n) \gg \frac{\tau(n')}{\log n'} \Delta^{(\log n')}(n'')$

where

$\displaystyle \Delta^{(v)}(n) := \sup_u \# \{ d|n: e^u < d \leq e^{u+v} \}$

is an enlarged version of ${\Delta(n)}$ when ${v \geq 1}$. This inequality is easily proven by applying the pigeonhole principle to the divisors of ${n}$ of the form ${d' d''}$, where ${d'}$ is one of the ${\tau(n')}$ divisors of ${n'}$, and ${d''}$ is one of the ${\Delta^{(\log n')}(n'')}$ divisors of ${n''}$ in the optimal interval ${[e^u, e^{u+\log n'}]}$. The extra room provided by enlarging the range ${[e^u, e^{u+1}]}$ to ${[e^u, e^{u+\log n'}]}$ turns out to be sufficient to adapt the Ford-Green-Koukoulopoulos argument to the rough setting. In fact we are able to use the main technical estimate from that paper as a "black box", namely that if one considers a random subset ${A}$ of ${[D^c, D]}$ for some small ${c>0}$ and sufficiently large ${D}$, with each ${n \in [D^c, D]}$ lying in ${A}$ with independent probability ${1/n}$, then with high probability there should be ${\gg c^{-1/\eta+o(1)}}$ subset sums of ${A}$ that attain the same value. (Initially, what "high probability" means is just "close to ${1}$", but one can reduce the failure probability significantly as ${c \rightarrow 0}$ by a "tensor power trick" taking advantage of Bennett's inequality.)
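
As a crude numerical sanity check of this inequality (again only an illustrative sketch, not from the paper; the toy numbers and helper names below are arbitrary), one can compute ${\Delta^{(v)}}$ by the same brute force as ${\Delta}$ and compare both sides for a small splitting ${n = n' n''}$, with ${n'}$ built from repeated small primes and ${n''}$ from a few larger primes.

```python
import math

def divisors(n):
    """Sorted list of divisors of n by trial division (fine for the toy numbers below)."""
    divs = []
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            divs.append(d)
            if d != n // d:
                divs.append(n // d)
    return sorted(divs)

def delta_v(n, v):
    """Brute-force Delta^{(v)}(n) = sup_u #{d | n : e^u < d <= e^{u+v}};
    Delta(n) itself is the special case v = 1."""
    divs = divisors(n)
    return max(sum(1 for dp in divs[i:] if dp < math.exp(v) * d) for i, d in enumerate(divs))

# Toy splitting n = n' n'': n1 (playing the role of n') has repeated small prime factors,
# so tau(n1) is unusually large, while n2 (playing the role of n'') uses a few larger primes.
n1 = 2 * 2 * 3 * 3 * 5
n2 = 7 * 11 * 13
n = n1 * n2

tau_n1 = len(divisors(n1))   # tau(n')
v = math.log(n1)             # the enlarged window length log n'

lhs = delta_v(n, 1)                            # Delta(n)
rhs = tau_n1 / math.log(n1) * delta_v(n2, v)   # (tau(n')/log n') * Delta^{(log n')}(n'')
print(lhs, rhs)  # the pigeonhole argument gives lhs >= rhs up to an absolute constant factor
```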
