Let $S$ be a non-empty finite set. If $X$ is a random variable taking values in $S$, the Shannon entropy $\mathbf{H}[X]$ of $X$ is defined as
$$ \mathbf{H}[X] := \sum_{s \in S} \mathbf{P}[X = s] \log \frac{1}{\mathbf{P}[X = s]}. $$
There is a nice variational formula that lets one compute logarithms of sums of exponentials in terms of this entropy:
Lemma 1 (Gibbs variational formula) Let $f: S \to \mathbb{R}$ be a function. Then
$$ \log \sum_{s \in S} \exp(f(s)) = \sup_X \left( \mathbf{E} f(X) + \mathbf{H}[X] \right), \qquad (1) $$
where the supremum ranges over all random variables $X$ taking values in $S$.
Proof: Note that shifting $f$ by a constant affects both sides of (1) in the same way, so we may normalize $\sum_{s \in S} \exp(f(s)) = 1$. Then $s \mapsto \exp(f(s))$ is now the probability distribution of some random variable $Y$ taking values in $S$, and the inequality can be rewritten as
$$ 0 \geq \sup_X \left( \sum_{s \in S} \mathbf{P}[X = s] \log \mathbf{P}[Y = s] - \sum_{s \in S} \mathbf{P}[X = s] \log \mathbf{P}[X = s] \right). $$
But this is precisely the Gibbs inequality, with equality when $X$ has the same distribution as $Y$, so the supremum is attained. (The expression inside the supremum can also be written as $-D_{\mathrm{KL}}(X \| Y)$, where $D_{\mathrm{KL}}$ denotes the Kullback–Leibler divergence. One can also interpret this inequality as a special case of the Fenchel–Young inequality relating the conjugate convex functions $x \mapsto e^x$ and $y \mapsto y \log y - y$.)
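As a quick numerical sanity check of (1) (not part of the argument above; all names in the snippet are ours), the following Python sketch compares the left-hand side of (1) with $\mathbf{E} f(X) + \mathbf{H}[X]$ for a few randomly chosen laws of $X$, and verifies that the Gibbs distribution $\mathbf{P}[X = s] \propto \exp(f(s))$ attains the supremum:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(p):
    """Shannon entropy H[X] (natural log) of a probability vector p."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

f = rng.normal(size=6)            # a function f on a 6-element set S
lhs = np.log(np.sum(np.exp(f)))   # log sum_s exp(f(s))

# E f(X) + H[X] never exceeds the left-hand side of (1) ...
for _ in range(5):
    q = rng.dirichlet(np.ones(6))  # a random law of X on S
    assert q @ f + entropy(q) <= lhs + 1e-12

# ... and the Gibbs law P[X = s] proportional to exp(f(s)) attains it.
gibbs = np.exp(f) / np.sum(np.exp(f))
print(lhs, gibbs @ f + entropy(gibbs))  # the two numbers agree
```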
In this note I would like to use this variational formula (which is also known as the Donsker–Varadhan variational formula) to give another proof of the following inequality of Carbery.
Theorem 2 (Generalized Cauchy-Schwarz inequality) Let $n \geq 1$, let $S, T_1, \dots, T_n$ be finite non-empty sets, and let $\pi_i: S \to T_i$ be functions for each $1 \leq i \leq n$. Let $K: S \to \mathbb{R}^+$ and $f_i: T_i \to \mathbb{R}^+$ be positive functions for each $1 \leq i \leq n$. Then
$$ \sum_{s \in S} K(s) \prod_{i=1}^n f_i(\pi_i(s)) \leq Q \prod_{i=1}^n \left( \sum_{t_i \in T_i} f_i(t_i)^n \right)^{1/n}, $$
where $Q$ is the quantity
$$ Q := \left( \sum_{(s_1,\dots,s_n) \in \Omega_n} K(s_1) \cdots K(s_n) \right)^{1/n}, $$
where $\Omega_n$ is the set of all tuples $(s_1,\dots,s_n) \in S^n$ such that $\pi_i(s_i) = \pi_i(s_{i+1})$ for $1 \leq i \leq n-1$.
Thus for instance, the inequality is trivial for $n = 1$. When $n = 2$, the inequality reads
$$ \sum_{s \in S} K(s) f_1(\pi_1(s)) f_2(\pi_2(s)) \leq \left( \sum_{s_1, s_2 \in S: \pi_1(s_1) = \pi_1(s_2)} K(s_1) K(s_2) \right)^{1/2} \left( \sum_{t_1 \in T_1} f_1(t_1)^2 \right)^{1/2} \left( \sum_{t_2 \in T_2} f_2(t_2)^2 \right)^{1/2}, $$
which is easily proven by Cauchy-Schwarz, while for $n = 3$ the inequality reads
$$ \sum_{s \in S} K(s) f_1(\pi_1(s)) f_2(\pi_2(s)) f_3(\pi_3(s)) \leq \left( \sum_{\substack{s_1, s_2, s_3 \in S \\ \pi_1(s_1) = \pi_1(s_2),\ \pi_2(s_2) = \pi_2(s_3)}} K(s_1) K(s_2) K(s_3) \right)^{1/3} \prod_{i=1}^3 \left( \sum_{t_i \in T_i} f_i(t_i)^3 \right)^{1/3}, $$
which can also be proven by elementary means. However, even for $n = 3$, the existing proofs require the “tensor power trick” in order to reduce to the case when the $f_i$ are step functions (in which case the inequality can be proven elementarily, as discussed in the above paper of Carbery).
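For readers who wish to experiment with Theorem 2, here is a small brute-force check in Python (a sketch of ours, not taken from Carbery's paper): it draws random positive $K$, $f_i$ and random maps $\pi_i$ on small sets, computes both sides of the inequality as stated above, and confirms that the left-hand side does not exceed the right-hand side for $n = 1, 2, 3$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def carbery_sides(n, S_size=5, T_size=3):
    """Return (lhs, rhs) of the generalized Cauchy-Schwarz inequality for
    random positive K, f_i and random maps pi_i: S -> T_i (0-indexed)."""
    S = range(S_size)
    pi = rng.integers(0, T_size, size=(n, S_size))  # pi[i][s] = pi_{i+1}(s)
    K = rng.random(S_size) + 0.1                    # positive K: S -> R^+
    f = rng.random((n, T_size)) + 0.1               # positive f_i: T_i -> R^+

    # Left-hand side: sum_s K(s) * prod_i f_i(pi_i(s)).
    lhs = sum(K[s] * np.prod([f[i, pi[i, s]] for i in range(n)]) for s in S)

    # Q^n: sum of K(s_1)...K(s_n) over the tuples in Omega_n, i.e. those
    # with pi_i(s_i) = pi_i(s_{i+1}) for i = 1, ..., n-1.
    Qn = sum(
        np.prod([K[s] for s in tup])
        for tup in itertools.product(S, repeat=n)
        if all(pi[i, tup[i]] == pi[i, tup[i + 1]] for i in range(n - 1))
    )

    # Right-hand side: Q * prod_i (sum_{t_i} f_i(t_i)^n)^(1/n).
    rhs = Qn ** (1.0 / n) * np.prod([np.sum(f[i] ** n) ** (1.0 / n) for i in range(n)])
    return lhs, rhs

for n in (1, 2, 3):
    lhs, rhs = carbery_sides(n)
    print(n, lhs <= rhs + 1e-9, lhs, rhs)
```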
We now prove this inequality. We write $K(s) = \exp(k(s))$ and $f_i(t_i) = \exp(g_i(t_i))$ for some functions $k: S \to \mathbb{R}$ and $g_i: T_i \to \mathbb{R}$. If we take logarithms in the inequality to be proven and apply Lemma 1, the inequality becomes
$$ \sup_X \left( \mathbf{E} k(X) + \sum_{i=1}^n \mathbf{E} g_i(\pi_i(X)) + \mathbf{H}[X] \right) \leq \frac{1}{n} \sup_{(X_1,\dots,X_n)} \left( \sum_{j=1}^n \mathbf{E} k(X_j) + \mathbf{H}[X_1,\dots,X_n] \right) + \frac{1}{n} \sum_{i=1}^n \sup_{Y_i} \left( n \mathbf{E} g_i(Y_i) + \mathbf{H}[Y_i] \right), $$
where $X$ ranges over random variables taking values in $S$, $(X_1,\dots,X_n)$ ranges over tuples of random variables taking values in $S$ with $(X_1,\dots,X_n) \in \Omega_n$, and $Y_1,\dots,Y_n$ range over random variables taking values in $T_1,\dots,T_n$ respectively. Comparing the suprema (for a given $X$ on the left-hand side one takes $Y_i := \pi_i(X)$ and the tuple supplied by the lemma below, discarding the non-negative term $\frac{1}{n} \mathbf{H}[\pi_n(X)]$), the claim now reduces to
Lemma 3 (Conditional expectation computation) Let $X$ be an $S$-valued random variable. Then there exists an $\Omega_n$-valued random variable $(X_1,\dots,X_n)$, where each $X_j$ has the same distribution as $X$, and
$$ \mathbf{H}[X_1,\dots,X_n] = n \mathbf{H}[X] - \sum_{i=1}^{n-1} \mathbf{H}[\pi_i(X)]. $$
Proof: We induct on $n$. When $n = 1$ we just take $X_1 := X$. Now suppose that $n \geq 2$, and the claim has already been proven for $n - 1$, thus one has already obtained an $\Omega_{n-1}$-valued tuple $(X_1,\dots,X_{n-1})$ with each $X_j$ having the same distribution as $X$, and
$$ \mathbf{H}[X_1,\dots,X_{n-1}] = (n-1) \mathbf{H}[X] - \sum_{i=1}^{n-2} \mathbf{H}[\pi_i(X)]. $$
By hypothesis, $X_{n-1}$ has the same distribution as $X$. For each value $t$ attained by $\pi_{n-1}(X_{n-1})$, we can take conditionally independent copies of $(X_1,\dots,X_{n-1})$ and $X$ conditioned to the events $\pi_{n-1}(X_{n-1}) = t$ and $\pi_{n-1}(X) = t$ respectively, and then concatenate them to form a tuple $(X_1,\dots,X_{n-1},X_n)$ in $\Omega_n$, with the extra copy $X_n$ of $X$ being conditionally independent of $(X_1,\dots,X_{n-1})$ relative to $\pi_{n-1}(X_{n-1}) = \pi_{n-1}(X_n)$. One can then use the entropy chain rule to compute
$$ \mathbf{H}[X_1,\dots,X_n] = \mathbf{H}[X_1,\dots,X_{n-1}] + \mathbf{H}[X_n \mid X_1,\dots,X_{n-1}] = \mathbf{H}[X_1,\dots,X_{n-1}] + \mathbf{H}[X_n \mid \pi_{n-1}(X_{n-1})] = \mathbf{H}[X_1,\dots,X_{n-1}] + \mathbf{H}[X] - \mathbf{H}[\pi_{n-1}(X)], $$
and the claim now follows from the induction hypothesis.
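To make the construction in Lemma 3 concrete, the following Python sketch (our illustration; the helper names are hypothetical) builds the joint law of $(X_1,\dots,X_n)$ by repeatedly appending, at step $m$, a copy of $X$ that is conditionally independent of the previous coordinates given the value of $\pi_{m-1}$ on the last coordinate, and then checks the entropy identity of Lemma 3 numerically:

```python
import numpy as np

rng = np.random.default_rng(2)

def entropy(probs):
    """Shannon entropy (natural log) of an iterable of probabilities."""
    p = np.array([q for q in probs if q > 0])
    return float(-np.sum(p * np.log(p)))

n, S_size, T_size = 3, 5, 3
S = range(S_size)
pi = rng.integers(0, T_size, size=(n, S_size))  # pi[i][s] = pi_{i+1}(s)
p = rng.dirichlet(np.ones(S_size))              # law of X on S

# Build the joint law of (X_1, ..., X_n): start with X_1 = X, then at each
# step append a copy X_m of X that is conditionally independent of the
# previous coordinates given pi_{m-1}(X_{m-1}) = pi_{m-1}(X_m).
joint = {(s,): p[s] for s in S}
for m in range(1, n):
    mass = {t: sum(p[s] for s in S if pi[m - 1, s] == t) for t in range(T_size)}
    joint = {
        tup + (s,): q * p[s] / mass[pi[m - 1, tup[-1]]]
        for tup, q in joint.items()
        for s in S
        if pi[m - 1, s] == pi[m - 1, tup[-1]]
    }

# Entropy identity of Lemma 3: H[X_1,...,X_n] = n H[X] - sum_i H[pi_i(X)].
lhs = entropy(joint.values())
rhs = n * entropy(p) - sum(
    entropy(sum(p[s] for s in S if pi[i, s] == t) for t in range(T_size))
    for i in range(n - 1)
)
print(lhs, rhs)  # the two values agree up to rounding
```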
With a little more effort, one can replace the finite set $S$ by a more general measure space (and use differential entropy in place of Shannon entropy) to recover Carbery's inequality in full generality; we leave the details to the interested reader.