University of Notre Dame

Analysis and Design of Asymptotically Good Concatenated Coding Schemes

thesis
posted on 2012-12-04, authored by Christian Wolfgang Koller

Turbo codes and multiple parallel concatenated codes perform very close to the Shannon limit under suboptimal iterative decoding, but the corresponding code ensembles are asymptotically bad in the sense that their minimum Hamming distance does not grow linearly with block length. As a result, the minimum distance may not be sufficient to yield very low error rates at moderate-to-high signal-to-noise ratios, and an error floor can occur. Multiple serially concatenated code ensembles with three or more component encoders, on the other hand, can be asymptotically good. We present a method to obtain an implicit description of the spectral shape and the asymptotic minimum distance growth rate of repeat multiple-accumulate (RMA) code ensembles, with and without random puncturing.
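The repeat multiple-accumulate structure itself is easy to state: a rate-1/q repetition code followed by two or more interleaver-accumulator stages. Below is a minimal sketch in Python; the repetition factor, stage count, and randomly drawn interleavers are illustrative only, and the thesis analyzes the ensemble average over all interleavers, which a single encoder like this does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

def accumulate(bits):
    # Rate-1 accumulator 1/(1+D): running mod-2 sum of the input sequence.
    return np.cumsum(bits) % 2

def rma_encode(info_bits, q=3, num_acc=2, rng=rng):
    # Repeat each information bit q times, then apply num_acc stages of
    # (uniform random interleaver -> accumulator). With num_acc >= 2 the
    # resulting ensemble can be asymptotically good.
    x = np.repeat(np.asarray(info_bits), q)
    for _ in range(num_acc):
        x = x[rng.permutation(x.size)]  # random interleaver for this stage
        x = accumulate(x)
    return x

info = rng.integers(0, 2, size=16)
cw = rma_encode(info)
print(f"{info.size} info bits -> {cw.size} coded bits (rate 1/3)")
```

Random puncturing would simply delete a randomly chosen subset of the output bits, raising the rate above 1/q.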

Under suboptimal iterative decoding, performance in the moderate-to-high signal-to-noise-ratio region is strongly influenced, but not solely determined, by the minimum Hamming distance of the code; pseudo-codewords and trapping sets also play a role. We present a closed-form, finite-length ensemble trapping set enumerator for RMA codes by creating a trellis representation of trapping sets. We also present a closed-form enumerator for absorbing sets, a subset of trapping sets whose elements are fixed points of bit-flipping decoders.
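To make the fixed-point characterization concrete, here is a generic Gallager-style bit-flipping decoder; this is a standard textbook flipping rule, not the specific decoder analyzed in the thesis. When the error pattern is supported on an absorbing set, each bit sees more satisfied than unsatisfied neighboring checks, so the rule has nothing to flip and the decoder halts at a non-codeword.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    # Gallager-style bit flipping on parity-check matrix H: in each round,
    # flip every bit that participates in more unsatisfied than satisfied
    # checks. Returns the final word and whether all checks are satisfied.
    x = y.copy()
    for _ in range(max_iters):
        syndrome = (H @ x) % 2       # 1 marks an unsatisfied check
        unsat = H.T @ syndrome       # unsatisfied checks touching each bit
        sat = H.sum(axis=0) - unsat  # satisfied checks touching each bit
        flip = unsat > sat
        if not flip.any():           # fixed point: absorbing sets end here
            break
        x = (x + flip.astype(int)) % 2
    return x, not ((H @ x) % 2).any()

# Toy sanity check: a length-3 repetition code corrects a single flip.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
y = np.array([1, 0, 0])
print(bit_flip_decode(H, y))  # (array([0, 0, 0]), True)
```

On an absorbing-set error pattern the loop exits with a nonzero syndrome, so the second return value is False: the decoder has converged, but not to a codeword.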

Multiple serially concatenated codes generally exhibit large minimum distance growth rates, but they have the drawback of converging at a signal-to-noise ratio further from capacity than parallel concatenated codes. We introduce the concept of tuned turbo codes, a family of asymptotically good hybrid concatenated code ensembles in which asymptotic minimum distance growth rates, convergence thresholds, and code rates can be traded off using two tuning parameters, lambda and mu. Decreasing lambda reduces the asymptotic minimum distance growth rate in exchange for improved iterative decoding convergence, while increasing lambda raises the growth rate at the expense of worse convergence. Decreasing mu achieves a similar tuning behavior for higher-rate code ensembles, allowing a system designer to trade off code rate, iterative decoding convergence behavior, and error floor performance without changing the encoder structure.

We then investigate the joint design of channel coding and random linear network coding (RLNC) for star networks. Channel coding alone is not sufficient to reliably transmit a message of finite length from a source to one or more destinations, as in, e.g., file transfer. To ensure that no data is lost, channel coding must be paired with a rateless erasure correcting scheme, such as RLNC, or with a time division multiple access (TDMA) system using automatic repeat request (ARQ). Given a message of finite length, which can be subdivided into smaller blocks prior to channel coding, and RLNC over a finite Galois field, we obtain the number of blocks and the channel coding rate that maximize the expected throughput of both RLNC and TDMA. Comparing the throughput of the two schemes, we show that for small total message lengths and RLNC over small Galois fields, TDMA is more throughput efficient, while RLNC becomes more efficient as the message size grows.
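The field-size effect in that comparison is driven by a standard fact about random vectors over GF(q): small fields make random combinations likely to be linearly dependent, so short RLNC sessions pay a visible overhead. The sketch below computes these standard rank probabilities; it is not the thesis's throughput optimization, which also accounts for the channel coding rate and packet erasures.

```python
from math import prod

def p_full_rank(N, q):
    # Probability that N uniform random coded packets over GF(q) are
    # linearly independent (i.e., decodable): prod_{i=1..N} (1 - q**-i).
    return prod(1 - q**-i for i in range(1, N + 1))

def expected_packets(N, q):
    # Expected packets until N innovative (linearly independent) ones
    # arrive on an erasure-free link: sum_{j=1..N} 1 / (1 - q**-j).
    return sum(1 / (1 - q**-j) for j in range(1, N + 1))

for q in (2, 16, 256):
    print(f"GF({q:>3}): P(8 packets decode 8 blocks) = {p_full_rank(8, q):.4f}, "
          f"E[packets] = {expected_packets(8, q):.3f}")
```

Over GF(2) roughly 1.6 extra packets are needed on average, a fixed cost that weighs heavily on short messages; over GF(256) the overhead is negligible, consistent with RLNC overtaking TDMA as the message size grows.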

History

Date Modified

2017-06-05

Defense Date

2012-08-15

Research Director(s)

Daniel J. Costello, Jr.

Committee Members

Martin Haenggi, Joerg Kliewer, Thomas Fuja

Degree

  • Doctor of Philosophy

Degree Level

  • Doctoral Dissertation

Language

  • English

Alternate Identifier

etd-12042012-004451

Publisher

University of Notre Dame

Program Name

  • Electrical Engineering
