On the Fragile Rates of Linear Feedback Coding Schemes of Gaussian Channels with Memory
In <cit.> the linear coding scheme X_t = g_t(Θ - E{Θ|Y^{t-1}, V_0=v_0}), t=2,…,n, X_1 = g_1 Θ, where Θ: Ω→ℝ is a Gaussian random variable, is applied to derive a lower bound on the feedback rate of additive Gaussian noise (AGN) channels Y_t = X_t + V_t, t=1,…,n, where V_t is Gaussian autoregressive (AR) noise and κ ∈ [0,∞) is the total transmitter power. For unit-memory AR noise with parameters (c, K_W), where c ∈ [-1,1] is the pole and K_W is the variance of the Gaussian driving noise, the lower bound is C^{L,B} = (1/2) log χ², where χ = lim_{n⟶∞} χ_n is the positive root of χ² = 1 + (1 + |c|/χ)² κ/K_W, the sequence χ_n ≜ |g_n/g_{n-1}|, n=2,3,…, satisfies a certain recursion, and it is conjectured that C^{L,B} is the feedback capacity. In this correspondence, it is observed that a nontrivial lower bound C^{L,B} = (1/2) log χ² with χ > 1 necessarily implies that the scaling coefficients of the feedback code, g_n, n=1,2,…, grow unbounded, in the sense that lim_{n⟶∞} |g_n| = +∞. The unbounded behaviour of g_n follows from the ratio limit theorem for sequences of real numbers, and it is verified by simulations. It is then concluded that such linear codes are not practical, and are fragile with respect to a mismatch between the statistics of the mathematical model of the channel and the true statistics of the channel. In particular, if the error signal is perturbed by ϵ_n > 0, no matter how small, then X_n = g_n(Θ - E{Θ|Y^{n-1}, V_0=v_0}) + g_n ϵ_n, and |g_n| ϵ_n ⟶ ∞ as n ⟶ ∞.
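The central observation can be illustrated numerically: solving χ² = 1 + (1 + |c|/χ)² κ/K_W for its positive root and noting that |g_n/g_{n-1}| ⟶ χ > 1 forces |g_n| to grow geometrically. The following sketch, with hypothetical parameter values (c, κ, K_W are assumptions, not values from the paper), solves the fixed-point equation by simple iteration and shows the implied growth |g_n| ≈ |g_1| χ^{n-1}:

```python
import math

def chi_fixed_point(c, kappa, K_W, tol=1e-12):
    """Positive root of chi^2 = 1 + (1 + |c|/chi)^2 * kappa / K_W,
    found by fixed-point iteration (illustrative sketch only)."""
    chi = 1.0
    for _ in range(10_000):
        new = math.sqrt(1.0 + (1.0 + abs(c) / chi) ** 2 * kappa / K_W)
        if abs(new - chi) < tol:
            return new
        chi = new
    return chi

# Hypothetical channel parameters, chosen only for illustration.
c, kappa, K_W = 0.5, 1.0, 1.0
chi = chi_fixed_point(c, kappa, K_W)

# Any kappa > 0 gives chi^2 >= 1 + kappa/K_W > 1, i.e. chi > 1.
assert chi > 1.0

# Since |g_n / g_{n-1}| -> chi > 1, the ratio limit theorem implies
# |g_n| grows like |g_1| * chi**(n-1), which diverges with n.
g1 = 1.0
g_100 = g1 * chi ** 99
print(f"chi = {chi:.6f}, |g_100| ≈ {g_100:.3e}")
```

Under these (assumed) parameters χ exceeds 1 and the implied |g_100| is already astronomically large, which is the sense in which a fixed perturbation g_n ϵ_n of the error signal is amplified without bound.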