The entropy power inequality (EPI) dates back to Shannon's seminal paper and has a long history. The link with the Rényi entropy was first made by Dembo, Cover and Thomas in connection with Young's convolutional inequality with sharp constants, where Shannon's EPI is obtained by letting the Rényi entropy orders tend to one ( Theorem 17.8.3).

The Rényi entropy was first defined as a generalization of Shannon's entropy for discrete variables, in the search for the most general definition of an information measure that preserves additivity for independent events. It has found many applications such as source coding, hypothesis testing, channel coding and guessing. The (differential) Rényi entropy considered in this paper (Definition 2) generalizes the (differential) Shannon entropy for continuous variables. It was first considered in ( ) to make the transition between the entropy-power and the Brunn–Minkowski inequalities. It has also been applied to deconvolution problems. A definition of the Rényi entropy power itself appears in ( ); it is essentially Definition 5 below.
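For concreteness, we briefly recall the standard conventions; the paper's own Definitions 2 and 5 are the authoritative ones and may differ in normalization. For a random vector $X$ in $\mathbb{R}^n$ with density $f$, the Rényi entropy of order $\alpha>0$, $\alpha\neq 1$, is

$$h_\alpha(X)=\frac{1}{1-\alpha}\,\log\int_{\mathbb{R}^n} f^\alpha(x)\,\mathrm{d}x,$$

which tends to Shannon's differential entropy $h(X)=-\int f\log f$ as $\alpha\to 1$. The classical entropy power is

$$N(X)=\frac{1}{2\pi e}\,e^{\frac{2}{n}h(X)},$$

and the Rényi entropy power $N_\alpha(X)$ is obtained by substituting $h_\alpha$ for $h$. In this notation, Shannon's EPI states that for independent $X$ and $Y$,

$$N(X+Y)\;\ge\;N(X)+N(Y),$$

with equality if and only if $X$ and $Y$ are normal with proportional covariances.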
Recently, there has been significant interest in Rényi entropy power inequalities for several independent variables (the survey ( ) is recommended to the reader for recent developments on forward and reverse EPIs). Bobkov and Chistyakov extended the classical Shannon EPI to the Rényi entropy by incorporating a multiplicative constant that depends on the order of the Rényi entropy. Ram and Sason improved the value of this constant by making it depend also on the number of variables. Bobkov and Marsiglietti proved another modification of the EPI for the Rényi entropy for two independent variables, with a power exponent parameter α whose value was further improved by Li. All these EPIs were established for Rényi entropies of orders >1. The α-modification of the Rényi EPI was extended to orders <1, for two independent variables having log-concave densities, by Marsiglietti and Melbourne.
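Schematically, the two families of inequalities just described take the following forms, where $c>0$ and the exponent $t>0$ are placeholders for the order- and dimension-dependent values obtained in the cited works, not the sharp constants:

$$N_\alpha\Big(\sum_{i=1}^{k}X_i\Big)\;\ge\;c\,\sum_{i=1}^{k}N_\alpha(X_i)\qquad\text{(constant modification)},$$

$$N_\alpha^{\,t}(X_1+X_2)\;\ge\;N_\alpha^{\,t}(X_1)+N_\alpha^{\,t}(X_2)\qquad\text{($\alpha$-modification, with $t=t(\alpha)$)}.$$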
The starting point of all the above works was Young's strengthened convolutional inequality. Recently, Shannon's original EPI was given a simple proof using a transport argument from normal variables and a change of variables by rotation. In this paper, we exploit these ingredients, described in the following lemmas, to establish all the above-mentioned Rényi EPIs and derive new ones. The proof of Lemma 2 is trivial considering covariance matrices; a sketch is given below. A deeper result states that this property of remaining i.i.d. under rotation characterizes the normal distribution; this is known as Bernstein's lemma (see, e.g., ( ) and ( )). This explains why one obtains equality in the EPI only for normal variables (see ( ) for more details).
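Here is a minimal sketch of the covariance argument, assuming Lemma 2 is the normal rotation lemma (two i.i.d. centered normal vectors remain i.i.d. normal under any rotation of the pair). If $X_1^*$ and $X_2^*$ are i.i.d. $\mathcal{N}(0,\Sigma)$ in $\mathbb{R}^n$ and $\rho$ is any $2\times 2$ rotation matrix, the rotated pair $(\rho\otimes\mathrm{I}_n)\,(X_1^*,X_2^*)$ is again normal, with covariance

$$(\rho\otimes\mathrm{I}_n)\,(\mathrm{I}_2\otimes\Sigma)\,(\rho\otimes\mathrm{I}_n)^\top\;=\;(\rho\,\rho^\top)\otimes\Sigma\;=\;\mathrm{I}_2\otimes\Sigma.$$

The covariance is block-diagonal with identical blocks, so the two rotated $n$-dimensional components are again independent and $\mathcal{N}(0,\Sigma)$ distributed.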
This article is a revised, full version of what was presented in part in a previous conference communication. Preliminary definitions and known properties are presented in Section 2.