
The convergence of an alternating series of Erdős, assuming the Hardy–Littlewood prime tuples conjecture

I’ve just uploaded to the arXiv my paper “The convergence of an alternating series of Erdős, assuming the Hardy–Littlewood prime tuples conjecture“. This paper addresses an old problem of Erdős concerning whether the alternating series ${\sum_{n=1}^\infty \frac{(-1)^n n}{p_n}}$ converges, where ${p_n}$ denotes the ${n^{th}}$ prime. The main result of the paper is that the answer to this question is affirmative, assuming a sufficiently strong version of the Hardy–Littlewood prime tuples conjecture.
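As a quick numerical illustration (my own sketch, not taken from the paper; the helper name `primes_up_to` and the cutoff ${10^6}$ are arbitrary choices), one can tabulate partial sums of the series and watch them oscillate slowly:

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            # mark multiples of i starting at i^2 as composite
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return [n for n in range(limit + 1) if sieve[n]]

# Partial sums of the alternating series sum_{n>=1} (-1)^n n / p_n.
primes = primes_up_to(1_000_000)
partial_sums = []
s = 0.0
for n, p in enumerate(primes, start=1):
    s += (-1) ** n * n / p
    partial_sums.append(s)
```

The terms ${(-1)^n n/p_n}$ have magnitude comparable to ${1/\log n}$, so any convergence has to come from cancellation rather than absolute decay.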
The alternating series test does not apply here because the ratios ${\frac{n}{p_n}}$ are not monotonically decreasing. The deviations from monotonicity arise from fluctuations in the prime gaps ${p_{n+1}-p_n}$, so the enemy here is a bias in the prime gaps between odd and even ${n}$. By changing variables from ${n}$ to ${p_n}$ (or more precisely, to integers in the range between ${p_n}$ and ${p_{n+1}}$), this is basically equivalent to a bias in the parity ${(-1)^{\pi(n)}}$ of the prime counting function. Indeed, it is an unpublished observation of Said that the convergence of ${\sum_{n=1}^\infty \frac{(-1)^n n}{p_n}}$ is equivalent to the convergence of ${\sum_{n=10}^\infty \frac{(-1)^{\pi(n)}}{n \log n}}$. So this question is really about obtaining a sufficiently strong amount of equidistribution for the parity of ${\pi(n)}$.
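The reformulated series can be explored numerically with a sketch like the following (the function name `prime_pi_table` and the cutoff ${N = 10^5}$ are my own illustrative choices), which tabulates ${\pi(n)}$ by a sieve and accumulates partial sums of ${\sum_{n \geq 10} \frac{(-1)^{\pi(n)}}{n \log n}}$:

```python
import math

def prime_pi_table(limit):
    """pi(n) for all n = 0..limit, via a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    pi = [0] * (limit + 1)
    count = 0
    for n in range(limit + 1):
        count += sieve[n]  # running count of primes <= n
        pi[n] = count
    return pi

# Partial sum of the equivalent series sum_{n>=10} (-1)^{pi(n)} / (n log n).
N = 100_000
pi = prime_pi_table(N)
partial = 0.0
for n in range(10, N + 1):
    partial += (-1) ** pi[n] / (n * math.log(n))
```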
The prime tuples conjecture does not directly say much about the value of ${\pi(n)}$; however, it can be used to control differences ${\pi(n+\lambda \log x) - \pi(n)}$ for ${n \sim x}$ and ${\lambda>0}$ not too large. Indeed, it is a well-known calculation of Gallagher that for fixed ${\lambda}$, and ${n}$ chosen randomly from ${1}$ to ${x}$, the quantity ${\pi(n+\lambda \log x) - \pi(n)}$ is distributed according to the Poisson distribution of mean ${\lambda}$, assuming the prime tuples conjecture holds. In particular, the parity ${(-1)^{\pi(n+\lambda \log x)-\pi(n)}}$ of this quantity should have mean asymptotic to ${e^{-2\lambda}}$. An application of the van der Corput ${A}$-process then gives some decay for the mean of ${(-1)^{\pi(n)}}$ as well. Unfortunately, this decay is a bit too weak for this problem; even if one uses the most quantitative version of Gallagher’s calculation, worked out in a recent paper of Kuperberg, the best bound on the mean ${|\frac{1}{x} \sum_{n \leq x} (-1)^{\pi(n)}|}$ is something like ${(\log\log x)^{-1/4+o(1)}}$, which is not quite strong enough to overcome the doubly logarithmic divergence of ${\sum_{n=1}^\infty \frac{1}{n \log n}}$.
To get around this obstacle, we use the random sifted model ${{\mathcal S}_z}$ of the primes that was introduced in a paper of Banks, Ford, and myself. To model the primes in an interval such as ${[n, n+\lambda \log x]}$ with ${n}$ drawn randomly from, say, ${[x,2x]}$, we remove one random residue class ${a_p \hbox{ mod } p}$ from this interval for each prime ${p}$ up to Pólya’s “magic cutoff” ${z \approx x^{1/e^\gamma}}$. The prime tuples conjecture can then be interpreted as the assertion that the random set ${{\mathcal S}_z}$ produced by this sieving process is statistically a good model for the primes in ${[n, n+\lambda \log x]}$. After some standard manipulations (using a version of the Bonferroni inequalities, as well as some upper bounds of Kuperberg), the problem then boils down to obtaining sufficiently strong estimates for the expected parity ${{\bf E} (-1)^{|{\mathcal S}_z|}}$ of the random sifted set ${{\mathcal S}_z}$.
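The sieving process itself is easy to simulate. The following sketch is purely illustrative (all names are my own, and the tiny interval length and cutoff are nowhere near the actual regime ${z \approx x^{1/e^\gamma}}$ of the paper): it samples the sifted model on a toy interval and estimates the expected parity by Monte Carlo:

```python
import random

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return [n for n in range(limit + 1) if sieve[n]]

def sift_once(s, p, a):
    """Remove the residue class a mod p from the set s."""
    return {n for n in s if n % p != a}

def random_sifted_set(H, z, rng):
    """One sample of the random sifted model on {0,...,H-1}: for each
    prime p <= z, delete a uniformly random residue class a_p mod p."""
    s = set(range(H))
    for p in primes_up_to(z):
        s = sift_once(s, p, rng.randrange(p))
    return s

# Monte Carlo estimate of the expected parity E[(-1)^{|S_z|}] on a toy interval.
rng = random.Random(0)
samples = [random_sifted_set(60, 30, rng) for _ in range(2000)]
parity_mean = sum((-1) ** len(s) for s in samples) / len(samples)
```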
For this problem, the main advantage of working with the random sifted model, rather than with the primes or the singular series arising from the prime tuples conjecture, is that the sifted model can be studied iteratively via the partially sifted sets ${{\mathcal S}_w}$ arising from sifting by primes ${p}$ up to some intermediate threshold ${w < z}$, and that the expected parity of the ${{\mathcal S}_w}$ experiences some decay in ${w}$. Indeed, once ${w}$ exceeds the length ${\lambda \log x}$ of the interval ${[n,n+\lambda \log x]}$, sifting ${{\mathcal S}_w}$ by an additional prime ${p}$ will cause ${{\mathcal S}_w}$ to lose one element with probability ${|{\mathcal S}_w|/p}$, and remain unchanged with probability ${1 - |{\mathcal S}_w|/p}$. If ${|{\mathcal S}_w|}$ concentrates around some value ${\overline{S}_w}$, this suggests that the expected parity ${{\bf E} (-1)^{|{\mathcal S}_w|}}$ will decay by a factor of about ${|1 - 2 \overline{S}_w/p|}$ as one increases ${w}$ to ${p}$, and iterating this should give good bounds on the final expected parity ${{\bf E} (-1)^{|{\mathcal S}_z|}}$. It turns out that recent second moment calculations of Montgomery and Soundararajan suffice to obtain enough concentration to make this strategy work.
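The one-step parity decay can be checked exactly by averaging over all ${p}$ residue classes: if the ${m}$ surviving elements lie in distinct classes mod ${p}$ (as happens once ${p}$ exceeds the interval length), a random class hits an element with probability ${m/p}$, so the average of ${(-1)^{|S|}}$ after one sift is ${(-1)^m (1 - 2m/p)}$. A small sketch of this computation (my own, for illustration):

```python
def parity_after_one_sift(occupied, p):
    """Average of (-1)^{|S|} after removing each of the p residue classes
    mod p from the set `occupied`, each class chosen with probability 1/p.
    When the elements lie in distinct classes mod p, this equals
    (-1)^m * (1 - 2*m/p) with m = |occupied|."""
    total = 0
    for a in range(p):
        new_size = sum(1 for n in occupied if n % p != a)
        total += (-1) ** new_size
    return total / p
```

Iterating over successive sifting primes multiplies these ${|1 - 2m/p|}$ factors together, which is the source of the decay exploited in the paper.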