Tim Gowers, Ben Green, Freddie Manners, and I have just uploaded to the arXiv our paper “On a conjecture of Marton”. This paper establishes a version of the notorious Polynomial Freiman–Ruzsa conjecture (first proposed by Katalin Marton):

Theorem 1 (Polynomial Freiman–Ruzsa conjecture) Let $A \subset {\bf F}_2^n$ be such that $|A+A| \leq K|A|$. Then $A$ can be covered by at most $2K^{12}$ translates of a subspace $H$ of ${\bf F}_2^n$ of cardinality at most $|A|$.

The previous best known result towards this conjecture was by Konyagin (as communicated in this paper of Sanders), who obtained a similar result but with $2K^{12}$ replaced by $2^{O_\epsilon(\log^{3+\epsilon} K)}$ for any $\epsilon > 0$ (assuming say that $K \geq 2$ to avoid some degeneracies as $K$ approaches $1$, which is not the difficult case of the conjecture). The conjecture (with $2K^{12}$ replaced by an unspecified constant $K^{O(1)}$) has a number of equivalent forms; see this survey of Green, and these papers of Lovett and of Green and myself for some examples; in particular, as discussed in the latter two references, the constants in the inverse $U^3$ theorem are now polynomial in nature (although we did not attempt to optimize the constant).

The exponent $12$ here was the product of a large number of optimizations to the argument (our original exponent here was closer to $1000$), but can be improved even further with additional effort (our current argument, for instance, allows one to replace it with $7 + \sqrt{17} = 11.12\dots$, but we decided to state our result using integer exponents instead).

In this paper we will focus exclusively on the characteristic $2$ case (so we will be cavalier in identifying addition and subtraction), but in a followup paper we will establish similar results in other finite characteristics.

Much of the prior progress on this type of result has proceeded via Fourier analysis. Perhaps surprisingly, our approach uses no Fourier analysis whatsoever, being conducted instead entirely in “physical space”. Broadly speaking, it follows a natural strategy, which is to induct on the doubling constant $|A+A|/|A|$. Indeed, suppose for instance that one could show that every set $A$ of doubling constant $K$ was “commensurate” in some sense to a set $A'$ of doubling constant at most $K^{0.99}$. One measure of commensurability, for instance, might be the *Ruzsa distance* $d(A,A') := \log \frac{|A+A'|}{|A|^{1/2} |A'|^{1/2}}$, which one might hope to control by $O(\log K)$. Then one could iterate this procedure until the doubling constant dropped below, say, $3/2$, at which point the conjecture is known to hold (there is an elementary argument that if $A$ has doubling constant less than $3/2$, then $A+A$ is in fact a subspace of ${\bf F}_2^n$). One can then use several applications of the *Ruzsa triangle inequality*

$\displaystyle d(A,C) \leq d(A,B) + d(B,C)$

to conclude (the fact that we reduce $K$ to $K^{0.99}$ means that the various Ruzsa distances that need to be summed are controlled by a convergent geometric series).
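The elementary fact invoked here (in ${\bf F}_2^n$, doubling constant below $3/2$ forces $A+A$ to be a subspace) is small enough to confirm by exhaustive search; the following check is an illustration of my own, not taken from the paper, brute-forcing every nonempty subset of ${\bf F}_2^3$.

```python
from itertools import combinations

# Elements of F_2^3 encoded as bitmasks 0..7; group addition is XOR.
n = 3
G = range(2 ** n)

def sumset(A, B):
    return {a ^ b for a in A for b in B}

def is_subspace(S):
    # contains 0 and is closed under addition (XOR)
    return 0 in S and all(a ^ b in S for a in S for b in S)

# Exhaustively confirm: |A+A| < (3/2)|A|  implies  A+A is a subspace.
for r in range(1, 2 ** n + 1):
    for A in combinations(G, r):
        S = sumset(A, A)
        if len(S) < 1.5 * len(A):
            assert is_subspace(S), f"counterexample: {A}"
print("checked all 255 nonempty subsets of F_2^3")
```

Note that $A+A$ always contains $0$ in characteristic $2$ (since $a+a=0$), which is why the conclusion is a genuine subspace rather than merely a coset.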

There are a number of possible ways to try to “improve” a set $A$ of not too large doubling by replacing it with a commensurate set of better doubling. We note two particular potential improvements:

(i) Replace $A$ with the intersection $A \cap (A+x)$ of $A$ with one of its translates, for a suitable shift $x$.

(ii) Replace $A$ with the sumset $A+A$.

Unfortunately, there are sets where neither of the above two operations (i), (ii) significantly improves the doubling constant. For instance, if $A$ is a random density $1/2$ subset of $K$ random translates of a medium-sized subspace $H$, one can check that the doubling constant stays close to $K$ if one applies either operation (i) or operation (ii). But in this case these operations do not actually worsen the doubling constant much either, and by applying some combination of (i) and (ii) (either intersecting $A$ with a translate, or taking a sumset of $A$ with itself) one can start lowering the doubling constant again.
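This obstruction is easy to observe experimentally. The simulation below is an illustrative sketch (the parameters and the code are mine, not from the paper): it builds a union of random translates of a subspace, thinned to density $1/2$, and measures the resulting doubling constant.

```python
import random

random.seed(0)

# Illustrative parameters: ambient space F_2^12, a subspace H of
# dimension 5 (the span of the first five coordinates), K = 8 translates.
n, h_dim, K = 12, 5, 8
H = range(2 ** h_dim)
offsets = random.sample(range(2 ** n), K)

# A: union of the K random translates of H, thinned to density 1/2.
A = {o ^ h for o in offsets for h in H if random.random() < 0.5}

def doubling(S):
    return len({a ^ b for a in S for b in S}) / len(S)

# |A| ~ K|H|/2 while A+A fills roughly K^2/2 full cosets of H,
# so the doubling constant comes out near K rather than improving.
print(f"|A| = {len(A)}, doubling ~ {doubling(A):.1f}, K = {K}")
```

Intersecting this $A$ with a generic translate, or summing it with itself, produces a set of the same general shape, which is why neither single operation makes progress here.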

This begins to suggest a potential strategy: show that at least one of the operations (i) or (ii) will improve the doubling constant, or at least not worsen it too much; and in the latter case, perform some more complicated operation to locate the desired doubling constant improvement.

An indication that this strategy might have a chance of working is provided by the following heuristic argument. If $A$ has doubling constant $K$, then the Cartesian product $A \times A$ has doubling constant $K^2$. On the other hand, by using the projection map $s: {\bf F}_2^n \times {\bf F}_2^n \to {\bf F}_2^n$ defined by $s(x,y) := x+y$, we see that $A \times A$ projects to $A+A$, with fibres being essentially a copy of $A \cap (A+s)$. So, morally, $A \times A$ also behaves like a “skew product” of $A+A$ and the fibres $A \cap (A+s)$, which suggests (non-rigorously) that the doubling constant $K^2$ of $A \times A$ is also something like the doubling constant of $A+A$, times the doubling constant of a typical fibre $A \cap (A+s)$. This would imply that at least one of $A+A$ and $A \cap (A+s)$ would have doubling constant at most $K$, and thus that at least one of operations (i), (ii) would not worsen the doubling constant.
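The exact identities underlying this heuristic, namely $(A \times A)+(A \times A) = (A+A) \times (A+A)$ (so the doubling constant squares exactly) and the identification of the fibre of the addition map over $s$ with $A \cap (A+s)$, can be checked directly; the example set below is my own choice for illustration.

```python
# Work in F_2^3; elements are ints, addition is XOR.
A = {0b000, 0b001, 0b010, 0b011, 0b101}

sumA = {a ^ b for a in A for b in A}
AxA = {(a, b) for a in A for b in A}
sum_AxA = {(a ^ c, b ^ d) for (a, b) in AxA for (c, d) in AxA}

K = len(sumA) / len(A)
# (A x A) + (A x A) = (A+A) x (A+A): the doubling constant squares exactly.
assert len(sum_AxA) == len(sumA) ** 2

# The fibre of s(x, y) = x + y over s is {(x, x+s) : x in A, x+s in A},
# a copy of A ∩ (A+s); the fibre sizes sum back to |A x A|.
fibre_sizes = {s: len({x for x in A if x ^ s in A}) for s in sumA}
assert sum(fibre_sizes.values()) == len(AxA)

print(f"doubling of A = {K:.2f}, doubling of A x A = {len(sum_AxA) / len(AxA):.2f}")
```

The non-rigorous step in the heuristic is only the claim that the skew product multiplies doubling constants the way the genuine product does; the bookkeeping above is exact.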

Unfortunately, this argument does not seem to be easily made rigorous using the traditional doubling constant; even just showing the significantly weaker statement that $A+A$ has doubling constant noticeably smaller than $K^2$ is unclear (and perhaps even false). Nevertheless, it turns out (as discussed in this recent paper of myself with Green and Manners) that things are much better in the entropy setting. Here, the analogue of a subset $A$ of ${\bf F}_2^n$ is a random variable $X$ taking values in ${\bf F}_2^n$, and the analogue of the (logarithmic) doubling constant is the entropic doubling constant $\sigma[X] := {\bf H}[X_1+X_2] - {\bf H}[X]$, where $X_1, X_2$ are independent copies of $X$; similarly, the analogue of the Ruzsa distance is the entropic Ruzsa distance $d[X;Y] := {\bf H}[X'+Y'] - \frac{1}{2}{\bf H}[X] - \frac{1}{2}{\bf H}[Y]$, where $X', Y'$ are independent copies of $X, Y$. If $X, Y$ are random variables in some additive group $G$ and $\pi: G \to H$ is a homomorphism, one then has what we call the *fibring inequality*

$\displaystyle d[X;Y] \geq d[\pi(X);\pi(Y)] + d[X|\pi(X); Y|\pi(Y)],$

where the conditional doubling constant $d[X|\pi(X); Y|\pi(Y)]$ is defined as

$\displaystyle d[X|\pi(X); Y|\pi(Y)] := {\bf H}[X'+Y'|\pi(X'),\pi(Y')] - \frac{1}{2}{\bf H}[X'|\pi(X')] - \frac{1}{2}{\bf H}[Y'|\pi(Y')].$
Indeed, from the chain rule for Shannon entropy one has

$\displaystyle {\bf H}[X'+Y'] = {\bf H}[\pi(X')+\pi(Y')] + {\bf H}[X'+Y'|\pi(X')+\pi(Y')]$

and

$\displaystyle {\bf H}[X'] = {\bf H}[\pi(X')] + {\bf H}[X'|\pi(X')]$

(and similarly for $Y'$), while from the non-negativity of conditional mutual information one has

$\displaystyle {\bf H}[X'+Y'|\pi(X')+\pi(Y')] \geq {\bf H}[X'+Y'|\pi(X'),\pi(Y')],$
and it’s a simple matter to mix all these identities and inequalities to acquire the declare.

Applying this inequality with $X$ replaced by two independent copies $(X_1, X_2)$ of itself, and using the addition map $(x,y) \mapsto x+y$ for $\pi$, we obtain in particular that

$\displaystyle d[X_1+X_2; X_1'+X_2'] + d[(X_1,X_2)|X_1+X_2; (X_1',X_2')|X_1'+X_2'] \leq 2 d[X;X]$

(where $X_1', X_2'$ are further independent copies of $X$), or (since $X_2$ is determined by $X_1$ once one fixes $X_1+X_2$)

$\displaystyle d[X_1+X_2; X_1'+X_2'] + d[X_1|X_1+X_2; X_1'|X_1'+X_2'] \leq 2 d[X;X].$
So if $d[X;X] \leq \log K$, then at least one of $d[X_1+X_2; X_1'+X_2']$ or $d[X_1|X_1+X_2; X_1'|X_1'+X_2']$ will be less than or equal to $\log K$. This is the entropy analogue of at least one of (i) or (ii) improving, or at least not degrading, the doubling constant, although there are some minor technicalities involving how one deals with the conditioning to $X_1+X_2$ in the second term that we will gloss over here (one can pigeonhole the instances of $X_1$ to different events $X_1+X_2 = s$, and “depolarise” the induction hypothesis to deal with distances between pairs of random variables that do not necessarily have the same distribution). Furthermore, we can even calculate the defect in the above inequality: a careful inspection of the above argument eventually reveals that

$\displaystyle d[X_1+X_2; X_3+X_4] + d[X_1|X_1+X_2; X_3|X_3+X_4] = 2 d[X;X] - {\bf I}[X_1+X_2 : X_1+X_3 \,|\, X_1+X_2+X_3+X_4],$

where we now take four independent copies $X_1, X_2, X_3, X_4$ of $X$ (relabeling $X_1', X_2'$ as $X_3, X_4$). This leads (modulo some technicalities) to the following interesting conclusion: if neither (i) nor (ii) leads to an improvement in the entropic doubling constant, then $X_1+X_2$ and $X_1+X_3$ are conditionally independent relative to $X_1+X_2+X_3+X_4$. This situation (or an approximation to this situation) is what we refer to in the paper as the “endgame”.
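The identity expressing the defect $2d[X;X] - d[X_1+X_2;X_3+X_4] - d[X_1|X_1+X_2;X_3|X_3+X_4]$ as the conditional mutual information ${\bf I}[X_1+X_2 : X_1+X_3 \,|\, X_1+X_2+X_3+X_4]$ can be verified numerically by brute force over four independent copies; the check below (my own code, for illustration) computes both sides for a random distribution on ${\bf F}_2^3$.

```python
import math
import random
from collections import Counter

random.seed(2)

def H(c):
    """Shannon entropy (bits) of a Counter of (possibly unnormalized) weights."""
    tot = sum(c.values())
    return -sum((v / tot) * math.log2(v / tot) for v in c.values() if v > 0)

def marg(c, idx):
    """Marginal of a joint Counter onto the coordinates in idx."""
    out = Counter()
    for k, p in c.items():
        out[tuple(k[i] for i in idx)] += p
    return out

# Random distribution X on F_2^3 (elements 0..7, addition = XOR).
w = [random.random() for _ in range(8)]
X = {x: w[x] / sum(w) for x in range(8)}

# Joint law of (U, V, S) = (X1+X2, X1+X3, X1+X2+X3+X4), and of
# (X1+X3, X1+X2, X3+X4), over four independent copies X1..X4 of X.
J, T = Counter(), Counter()
for x1 in X:
    for x2 in X:
        for x3 in X:
            for x4 in X:
                p = X[x1] * X[x2] * X[x3] * X[x4]
                u, v = x1 ^ x2, x1 ^ x3
                J[(u, v, u ^ x3 ^ x4)] += p
                T[(v, u, x3 ^ x4)] += p

HX = H(Counter(X))
H2 = H(marg(J, (0,)))                              # H[X1+X2]
d1 = H(marg(J, (2,))) - H2                         # d[X1+X2; X3+X4]
d2 = (H(T) - H(marg(T, (1, 2)))) - (2 * HX - H2)   # d[X1|X1+X2; X3|X3+X4]
# conditional mutual information I[U : V | S]
I = H(marg(J, (0, 2))) + H(marg(J, (1, 2))) - H(marg(J, (2,))) - H(J)

assert abs((d1 + d2) - (2 * (H2 - HX) - I)) < 1e-9
print(f"defect = I[X1+X2 : X1+X3 | S] = {I:.4f}")
```

Here `d2` uses the chain-rule identities ${\bf H}[X_1|X_1+X_2] = 2{\bf H}[X] - {\bf H}[X_1+X_2]$ and ${\bf H}[X_1+X_3\,|\,X_1+X_2, X_3+X_4] = {\bf H}(T) - {\bf H}[X_1+X_2, X_3+X_4]$.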

A version of this endgame conclusion is in fact valid in any characteristic. But in characteristic $2$, we can take advantage of the identity

$\displaystyle (X_1+X_2) + (X_1+X_3) = X_2+X_3.$

Conditioning on $X_1+X_2+X_3+X_4$, and using symmetry, we now conclude that if we are in the endgame exactly (so that the mutual information is zero), then the independent sum of two copies of $(X_1+X_2|X_1+X_2+X_3+X_4)$ has exactly the same distribution as $(X_1+X_2|X_1+X_2+X_3+X_4)$ itself; in particular, the entropic doubling constant here is zero, which is certainly a reduction in the doubling constant.
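A random variable whose independent self-sum has the same distribution as itself is the entropy analogue of a (translate of a) subspace, and indeed a uniform variable on a subspace has entropic doubling exactly zero. A minimal illustration of the notion (code mine, not from the paper):

```python
import math
from collections import Counter

def H(dist):
    """Shannon entropy (bits) of a dict mapping values to probabilities."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def entropic_doubling(dist):
    """sigma[X] = H[X1 + X2] - H[X] for independent copies X1, X2 (+ = XOR)."""
    conv = Counter()
    for x, p in dist.items():
        for y, q in dist.items():
            conv[x ^ y] += p * q
    return H(conv) - H(dist)

# Uniform on the subspace {00, 01, 10, 11} = F_2^2: the sum of two
# independent copies is again uniform on the same subspace.
uniform_subspace = {x: 1 / 4 for x in range(4)}
print(entropic_doubling(uniform_subspace))   # → 0.0

# By contrast, a non-flat distribution has strictly positive doubling.
skewed = {0: 0.7, 1: 0.1, 2: 0.1, 3: 0.1}
print(entropic_doubling(skewed) > 0)         # → True
```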

To deal with the situation where the conditional mutual information is small but not completely zero, we have to use an entropic version of the Balog–Szemerédi–Gowers lemma, but fortunately this was already worked out in an old paper of mine (although in order to optimize the final constant, we ended up using a slight variant of that lemma).

I am planning to formalize this paper in the proof assistant language Lean4; more details to follow.