Contents
Iterative methods are motivated by considering two classical examples: Newton's method for finding the roots of nonlinear functions and the Jacobi and Gauss-Seidel methods for solving large systems of linear equations. Based on these examples, the convergence and convergence rates of iterative methods are discussed. The concept of the fixed-point iteration is used to provide a graphical interpretation of iterative processes.
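A minimal sketch of the two classical examples may make the fixed-point view concrete (the test function and the diagonally dominant system below are illustrative assumptions, not taken from the text): Newton's method iterates x <- x - f(x)/f'(x), and the Jacobi method iterates x <- D^(-1) (b - (A - D) x); both have the same fixed-point shape x <- Phi(x).

```python
import numpy as np

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Fixed-point iteration x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration x <- D^(-1) (b - (A - D) x)."""
    D = np.diag(A)                 # diagonal entries of A
    R = A - np.diag(D)             # off-diagonal part of A
    x = np.zeros_like(b) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Root of f(x) = x^2 - 2, i.e. sqrt(2), via Newton's method.
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0))

# A diagonally dominant system, for which the Jacobi iteration converges.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 3.0])
print(jacobi(A, b))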
In chapter two the concept of vector-valued transmission is introduced. Based on this, we derive the optimum receiver structure for general linear modulation methods. Besides the optimum vector equalizer, various suboptimum methods (block linear equalizer, block decision feedback equalizer, multistage detector) are discussed. Furthermore, iterative equalizers are introduced and their relation to recurrent neural networks is described.
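As a flavour of the block linear equalizer, the following NumPy sketch compares the zero-forcing and MMSE solutions for a vector-valued transmission r = H a + n (the channel matrix, BPSK alphabet, and noise level are assumptions chosen for the example, not the text's notation):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8                                        # symbols per block
H = rng.normal(size=(N, N)) / np.sqrt(N)     # known channel matrix
a = rng.choice([-1.0, 1.0], size=N)          # BPSK symbol vector
sigma2 = 0.1                                 # noise variance
r = H @ a + rng.normal(scale=np.sqrt(sigma2), size=N)

# Block zero-forcing equalizer: a_hat = (H^T H)^(-1) H^T r
a_zf = np.linalg.solve(H.T @ H, H.T @ r)

# Block MMSE equalizer: a_hat = (H^T H + sigma2 I)^(-1) H^T r,
# trading residual interference against noise enhancement.
a_mmse = np.linalg.solve(H.T @ H + sigma2 * np.eye(N), H.T @ r)

print("true     :", a)
print("ZF  dec. :", np.sign(a_zf))
print("MMSE dec.:", np.sign(a_mmse))
```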
Chapter three first introduces the basic concepts for iterative decoding: maximum a posteriori decoding, probability theory for iterative decoding, and Tanner graphs as a means to graphically represent iterative decoding. As applications we consider low-density parity-check codes and convolutional self-orthogonal codes.
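To illustrate iterative decoding on a parity-check matrix in its simplest form, the sketch below performs hard-decision bit-flipping decoding (the (7,4) Hamming matrix stands in for a sparse LDPC matrix, and the flipping rule is the simplest textbook variant, not the message-passing algorithms developed in the chapter):

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, used here as a small
# stand-in for a sparse LDPC parity-check matrix.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(y, H, max_iter=20):
    y = y.copy()
    for _ in range(max_iter):
        syndrome = H @ y % 2
        if not syndrome.any():
            break                      # all parity checks satisfied
        # Per bit: number of unsatisfied checks it participates in.
        votes = H.T @ syndrome
        # Flip the bit(s) involved in the most unsatisfied checks.
        y[votes == votes.max()] ^= 1
    return y

codeword = np.zeros(7, dtype=int)      # all-zero codeword
received = codeword.copy()
received[2] ^= 1                       # single bit error
print(bit_flip_decode(received, H))    # recovers the all-zero codeword
```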
In chapter four, iterative methods for concatenated systems are considered. This includes a discussion of classical turbo codes as well as receiver concepts based on joint demapping, equalization, and decoding (turbo equalization). As a further example we consider the basic principle of interleave-division multiplexing. The iterative methods are analysed using EXIT charts.
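A minimal sketch of the quantity plotted on each axis of an EXIT chart, the mutual information I(X; L) between a transmitted bit and its LLR (the consistent Gaussian LLR model and the ergodic estimate below are standard modeling assumptions in EXIT-chart analysis, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_information(sigma, n=200_000):
    """Estimate I(X; L) for L ~ N(x * sigma^2 / 2, sigma^2), x in {-1, +1}."""
    x = rng.choice([-1.0, 1.0], size=n)
    L = x * sigma**2 / 2 + rng.normal(scale=sigma, size=n)
    # Ergodic estimate: I = 1 - E[log2(1 + exp(-x * L))],
    # computed via logaddexp for numerical stability.
    return 1.0 - np.mean(np.logaddexp(0.0, -x * L)) / np.log(2)

# I grows from 0 (no a priori knowledge) towards 1 (perfect knowledge)
# as the LLR reliability sigma increases.
for sigma in (0.5, 1.0, 2.0, 4.0):
    print(f"sigma = {sigma:>4}: I = {mutual_information(sigma):.3f}")
```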