Serial concatenated convolutional codes
Serial concatenated convolutional codes (SCCCs) are a class of forward error correction codes suitable for turbo (iterative) decoding. Data to be transmitted over a noisy channel may first be encoded using an SCCC. Upon reception, the coding may be used to remove errors introduced during transmission; decoding is performed iteratively, with soft information passed between the constituent decoders through the linking interleaver. SCCCs comprise an inner code, an outer code, and a linking interleaver. A distinguishing feature of SCCCs is the use of a recursive convolutional code as the inner code; the recursive inner code provides the 'interleaver gain' of the SCCC, which is the source of the excellent performance of these codes. The analysis of SCCCs was spawned in part by the earlier discovery of turbo codes in 1993; this analysis of SCCCs took place in the 1990s in a series of publications from NASA's Jet Propulsion Laboratory. The research offered SCCCs as a form of turbo-like serial concatenated code that (1) is iteratively decodable with reasonable complexity, and (2) gives error-correction performance comparable to that of turbo codes.
Prior forms of serial concatenated codes did not use recursive inner codes. Additionally, the constituent codes used in prior forms of serial concatenated codes were too complex for reasonable soft-in-soft-out (SISO) decoding, and SISO decoding is considered essential for turbo decoding. Serial concatenated convolutional codes have not found widespread commercial use, although they were proposed for communications standards such as DVB-S2. Nonetheless, the analysis of SCCCs has provided insight into the performance and bounds of all types of iteratively decodable codes, including turbo codes and LDPC codes. US patent 6,023,783 covers some forms of SCCCs; the patent expired on May 15, 2016. Serial concatenated convolutional codes were first analyzed with a view toward turbo decoding in "Serial Concatenation of Interleaved Codes: Performance Analysis and Iterative Decoding" by S. Benedetto, D. Divsalar, G. Montorsi and F. Pollara; this analysis yielded a set of observations for designing high-performance, turbo-decodable serial concatenated codes that resembled turbo codes.
One of these observations was that "the use of a recursive convolutional inner encoder always yields an interleaver gain." This is in contrast to the use of block codes or non-recursive convolutional codes, which do not provide comparable interleaver gain. Additional analysis of SCCCs was done in "Coding Theorems for 'Turbo-Like' Codes" by D. Divsalar, Hui Jin and Robert J. McEliece; this paper analyzed repeat-accumulate (RA) codes, which are the serial concatenation of an inner two-state recursive convolutional code with a simple repeat code as the outer code, the two linked by an interleaver. The performance of RA codes is quite good considering the simplicity of the constituent codes themselves. SCCCs were further analyzed in "Serial Turbo Trellis Coded Modulation with Rate-1 Inner Code", in which SCCCs were designed for use with higher-order modulation schemes. High-performing codes with inner and outer constituent convolutional codes of only two or four states were presented.
Fig. 1 shows an example of an SCCC. The example encoder is composed of a 16-state outer convolutional code and a 2-state inner convolutional code linked by an interleaver. The natural code rate of the configuration shown is 1/4; the inner and/or outer codes may be punctured to achieve higher code rates as needed. For example, an overall code rate of 1/2 may be achieved by puncturing the outer convolutional code to rate 3/4 and the inner convolutional code to rate 2/3. A recursive inner convolutional code is preferable for turbo decoding of the SCCC; the inner code may be punctured to a rate as high as 1/1 with reasonable performance. Fig. 2 shows an example of an iterative SCCC decoder; the SCCC decoder includes an interleaver and deinterleaver between the two constituent SISO decoders. While shown as separate units, the two SISO decoders may share all or part of their circuitry, and the SISO decoding may be done serially, in parallel, or in some combination thereof. The SISO decoding is typically done using maximum a posteriori (MAP) decoders based on the BCJR algorithm. SCCCs provide performance comparable to that of other iteratively decodable codes, including turbo codes and LDPC codes.
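As a rough sketch of this serial structure (not the exact codes of Fig. 1: here the outer code is assumed to be a 4-state rate-1/2 convolutional code with octal generators (7, 5), and the inner code the 2-state rate-1 accumulator 1/(1+D), both illustrative choices), encoding proceeds as outer encode, interleave, then recursive inner encode:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def outer_encode(bits):
    # Rate-1/2 non-recursive convolutional code, octal generators (7, 5),
    # terminated with two flush zeros. An illustrative stand-in for the
    # 16-state outer code of Fig. 1.
    b = np.concatenate([bits, [0, 0]])
    d1 = np.concatenate([[0], b[:-1]])     # input delayed one symbol
    d2 = np.concatenate([[0, 0], b[:-2]])  # input delayed two symbols
    out = np.empty(2 * len(b), dtype=int)
    out[0::2] = b ^ d1 ^ d2                # generator 111 (octal 7)
    out[1::2] = b ^ d2                     # generator 101 (octal 5)
    return out

def inner_encode(bits):
    # Rate-1, 2-state *recursive* inner code: an accumulator,
    # y[n] = x[n] XOR y[n-1], i.e. 1/(1+D). The recursion is what
    # yields the interleaver gain.
    return np.bitwise_xor.accumulate(bits)

data = rng.integers(0, 2, size=16)       # information bits
outer = outer_encode(data)               # outer codeword
perm = rng.permutation(len(outer))       # random linking interleaver
codeword = inner_encode(outer[perm])     # SCCC codeword, natural rate ~1/2
```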
They are noted for having worse performance than turbo codes in lower-SNR environments, but better performance (a lower error floor) in higher-SNR environments.

See also: Convolutional code, Viterbi algorithm, Soft-decision decoding, Interleaver, BCJR algorithm, Low-density parity-check code, Repeat-accumulate code, Turbo equalizer.
Linear time-invariant system
Linear time-invariant system theory, commonly known as LTI system theory, investigates the response of a linear and time-invariant system to an arbitrary input signal. Trajectories of these systems are commonly measured and tracked as they move through time, but in applications like image processing and field theory the LTI systems have trajectories in spatial dimensions. Thus, these systems are also called linear translation-invariant, to give the theory its most general reach. In the case of generic discrete-time systems, linear shift-invariant is the corresponding term. A good example of LTI systems are electrical circuits made up of resistors, capacitors, and inductors. LTI system theory has been used in applied mathematics and has direct applications in NMR spectroscopy, circuits, signal processing, control theory, and other technical areas. The defining properties of any LTI system are linearity and time invariance. Linearity means that the relationship between the input and the output of the system is a linear map: if input $x_1(t)$ produces response $y_1(t)$, and input $x_2(t)$ produces response $y_2(t)$, then the scaled and summed input $a_1 x_1(t) + a_2 x_2(t)$ produces the scaled and summed response $a_1 y_1(t) + a_2 y_2(t)$, where $a_1$ and $a_2$ are real scalars.
It follows that this can be extended to an arbitrary number of terms, so for real numbers $c_1, c_2, \ldots, c_k$, input $\sum_k c_k x_k(t)$ produces output $\sum_k c_k y_k(t)$. In particular, input $\int_{-\infty}^{\infty} c_\omega x_\omega(t)\, d\omega$ produces output $\int_{-\infty}^{\infty} c_\omega y_\omega(t)\, d\omega$, where $c_\omega$ and $x_\omega$ are scalars and inputs that vary over a continuum indexed by $\omega$. Thus, if an input function can be represented by a continuum of input functions combined "linearly", as shown, then the corresponding output function can be represented by the corresponding continuum of output functions combined in the same way. Time invariance means that whether we apply an input to the system now or T seconds from now, the output will be identical except for a time delay of T seconds; that is, if the output due to input $x(t)$ is $y(t)$, then the output due to input $x(t - T)$ is $y(t - T)$. Hence, the system is time invariant because the output does not depend on the particular time at which the input is applied. The fundamental result in LTI system theory is that any LTI system can be characterized entirely by a single function called the system's impulse response. The output of the system is simply the convolution of the input to the system with the system's impulse response.
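As a concrete discrete-time illustration (using a hypothetical 3-tap moving-average system, chosen here only as an example), both defining properties and the impulse-response result can be checked numerically:

```python
import numpy as np

def system(x):
    # A simple discrete-time LTI system: y[n] = (x[n] + x[n-1] + x[n-2]) / 3,
    # a 3-tap moving average with zero initial conditions.
    x = np.asarray(x, dtype=float)
    return (x + np.concatenate([[0], x[:-1]]) + np.concatenate([[0, 0], x[:-2]])) / 3

x1 = np.array([1.0, 2.0, -1.0, 0.5, 3.0, 0.0])
x2 = np.array([0.0, 1.0, 1.0, -2.0, 0.5, 1.0])

# Linearity: the response to a weighted sum is the weighted sum of responses.
assert np.allclose(system(2 * x1 + 3 * x2), 2 * system(x1) + 3 * system(x2))

# The impulse response characterizes the system: output = input * impulse response.
impulse = np.zeros(6); impulse[0] = 1.0
h = system(impulse)                             # [1/3, 1/3, 1/3, 0, 0, 0]
assert np.allclose(system(x1), np.convolve(x1, h)[:len(x1)])
```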
This method of analysis is called the time-domain point of view. The same result is true of discrete-time linear shift-invariant systems, in which signals are discrete-time samples and convolution is defined on sequences. Equivalently, any LTI system can be characterized in the frequency domain by the system's transfer function, which is the Laplace transform of the system's impulse response. As a result of the properties of these transforms, the output of the system in the frequency domain is the product of the transfer function and the transform of the input. In other words, convolution in the time domain is equivalent to multiplication in the frequency domain. For all LTI systems, the eigenfunctions, and the basis functions of the transforms, are complex exponentials; that is, if the input to a system is the complex waveform $A_s e^{st}$ for some complex amplitude $A_s$ and complex frequency $s$, the output will be some complex constant times the input, say $B_s e^{st}$ for some new complex amplitude $B_s$. The ratio $B_s / A_s$ is the transfer function of the system at the frequency $s$.
Since sinusoids are a sum of complex exponentials with complex-conjugate frequencies, if the input to the system is a sinusoid, then the output of the system will also be a sinusoid, perhaps with a different amplitude and a different phase, but always with the same frequency upon reaching steady state.
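A short numerical check of this eigenfunction property, reusing the illustrative moving-average system from above: a complex exponential input emerges scaled by the single complex constant $H(\omega) = (1 + e^{-j\omega} + e^{-j2\omega})/3$, the transfer function evaluated at that frequency.

```python
import numpy as np

h = np.array([1.0, 1.0, 1.0]) / 3           # impulse response of the example system
w = 0.7                                     # an arbitrary frequency in rad/sample
n = np.arange(64)
x = np.exp(1j * w * n)                      # complex exponential input
y = np.convolve(x, h)[:len(n)]              # system output

H = np.sum(h * np.exp(-1j * w * np.arange(3)))   # transfer function at frequency w
# After the 2-sample start-up transient, the output is just H times the input:
assert np.allclose(y[2:], H * x[2:])
```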
Maximum likelihood estimation
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of a statistical model so that the observed data are most probable. This is done by finding the value of the parameter $\theta$ that maximizes the likelihood function $L$, the joint probability of the observed data $y$, over a parameter space $\Theta$; the maximizing point $\hat\theta \in \Theta$ is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of inference within much of the quantitative research of the social and medical sciences. As an example, suppose that we are interested in the heights of adult female penguins, but are unable to measure the height of every penguin in a population. Assuming that the heights are normally distributed with some unknown mean and variance, the mean and variance can be estimated with MLE while only knowing the heights of some sample of the overall population. MLE would accomplish that by taking the mean and variance as parameters and finding the particular parametric values that make the observed results the most probable given the normal model.
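For the normal model this maximization has a closed form: the MLE of the mean is the sample mean, and the MLE of the variance is the biased sample variance (dividing by n rather than n − 1). A minimal sketch with made-up height data:

```python
import numpy as np
from scipy.stats import norm

heights = np.array([61.0, 58.5, 63.2, 60.1, 59.4, 62.8])   # hypothetical sample

mu_hat = heights.mean()                          # MLE of the mean
var_hat = np.mean((heights - mu_hat) ** 2)       # MLE of the variance (divides by n)

# Sanity check: perturbing either parameter can only lower the log-likelihood.
def log_lik(mu, var):
    return norm.logpdf(heights, loc=mu, scale=np.sqrt(var)).sum()

assert log_lik(mu_hat, var_hat) >= log_lik(mu_hat + 0.5, var_hat)
assert log_lik(mu_hat, var_hat) >= log_lik(mu_hat, var_hat * 1.5)
```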
If the likelihood function is differentiable with respect to $\theta$, the derivative test for determining maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved explicitly; under most circumstances, however, numerical methods will be necessary to find the maximum of the likelihood function. From the point of view of Bayesian inference, MLE is a special case of maximum a posteriori (MAP) estimation that assumes a uniform prior distribution of the parameters. In frequentist inference, MLE is one of several methods to get estimates of parameters without using prior distributions. Priors are avoided by not making probability statements about the parameters themselves, but only about their estimates, whose properties are defined by the observations and the statistical model. The method of maximum likelihood is based on the likelihood function, $L$. We are given a statistical model, i.e. a family of distributions, where $\theta$ denotes the parameter for the model. The method of maximum likelihood finds the values of the model parameter $\theta$ that maximize the likelihood function $L$. Intuitively, this selects the parameter values that make the data most probable.
The method defines a maximum likelihood estimate: $\hat\theta \in \operatorname{arg\,max}_{\theta \in \Theta} L(\theta)$. In practice, it is often convenient to work with the natural logarithm of the likelihood function, called the log-likelihood, $\ell = \ln L$, or the average log-likelihood, $\hat\ell = \frac{1}{n} \ln L$; the hat over $\ell$ indicates that it is akin to an estimator. Indeed, $\hat\ell$ estimates the expected log-likelihood of a single observation in the model. An MLE is the same regardless of whether we maximize the likelihood or the log-likelihood, because the logarithm is an increasing function. For many models, a maximum likelihood estimator can be found as an explicit function of the observed data $x$. For many other models, however, no closed-form solution to the maximization problem is known or available, and an MLE can only be found via numerical global optimization. For some problems, there may be multiple values that maximize the likelihood. For other problems, no maximum likelihood estimate exists: either the log-likelihood function increases without ever reaching a supremum value, or the supremum does exist but lies outside the bounds of $\Theta$, the set of acceptable parameter values.
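When no closed form exists, the negative log-likelihood is handed to a numerical optimizer. A sketch using scipy, applied here to the normal model so the result can be checked against the known closed-form answer (parameterizing by log σ keeps the scale positive):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

x = np.array([61.0, 58.5, 63.2, 60.1, 59.4, 62.8])    # same hypothetical data

def neg_log_lik(params):
    mu, log_sigma = params
    return -norm.logpdf(x, loc=mu, scale=np.exp(log_sigma)).sum()

res = minimize(neg_log_lik, x0=[50.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

assert np.isclose(mu_hat, x.mean(), atol=1e-3)        # matches the sample mean
assert np.isclose(sigma_hat, x.std(), atol=1e-3)      # np.std divides by n: the MLE
```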
A maximum likelihood estimator is an extremum estimator obtained by maximizing, as a function of $\theta$, the objective function $\hat\ell(\theta; x)$.
Reed–Solomon error correction
Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S. Reed and Gustave Solomon in 1960. They have many applications, the most prominent of which include consumer technologies such as CDs, DVDs, Blu-ray discs and QR codes, data transmission technologies such as DSL and WiMAX, broadcast systems such as satellite communications, DVB and ATSC, and storage systems such as RAID 6. Reed–Solomon codes operate on a block of data treated as a set of finite field elements called symbols. For example, a block of 4096 bytes could be treated as a set of 2731 12-bit symbols, where each symbol is a finite field element of GF(2^12), with the last symbol padded with four 0 bits. Reed–Solomon codes are able to detect and correct multiple symbol errors. By adding t check symbols to the data, a Reed–Solomon code can detect any combination of up to and including t erroneous symbols, or correct up to and including ⌊t/2⌋ erroneous symbols; as an erasure code, it can correct up to and including t known erasures, or it can detect and correct combinations of errors and erasures.
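The arithmetic behind the example above, and the resulting error/erasure budgets, can be spelled out in a few lines (the value of t here is an arbitrary illustration):

```python
import math

# 4096 bytes regrouped as 12-bit symbols; the last symbol is padded with 4 zero bits.
total_bits = 4096 * 8                      # 32768 bits
n_symbols = math.ceil(total_bits / 12)
assert n_symbols == 2731                   # 2730 full symbols + 1 padded symbol

t = 16                                     # illustrative number of check symbols
max_detectable_errors = t                  # detect up to t erroneous symbols,
max_correctable_errors = t // 2            # or correct up to floor(t/2) of them,
max_correctable_erasures = t               # or correct up to t known erasures
```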
Reed–Solomon codes are suitable as multiple-burst bit-error-correcting codes, since a sequence of b + 1 consecutive bit errors can affect at most two symbols of size b. The choice of t is up to the designer of the code and may be selected within wide limits. There are two basic types of Reed–Solomon codes, original view and BCH view, with BCH view being the most common because BCH-view decoders are faster and require less working storage than original-view decoders. Reed–Solomon codes were developed in 1960 by Irving S. Reed and Gustave Solomon, who were then staff members of MIT Lincoln Laboratory; their seminal article was titled "Polynomial Codes over Certain Finite Fields". The original encoding scheme described in the Reed & Solomon article used a variable polynomial based on the message to be encoded, where only a fixed set of values to be encoded is known to the encoder and decoder. The original theoretical decoder generated potential polynomials based on subsets of k out of n values of a received message, choosing the most popular polynomial as the correct one, which is impractical for all but the simplest of cases.
This was resolved by changing the original scheme to a BCH-code-like scheme based on a fixed polynomial known to both encoder and decoder; later, practical decoders based on the original scheme were also developed, although they are slower than the BCH schemes. The result of this is that there are two main types of Reed–Solomon codes: ones that use the original encoding scheme, and ones that use the BCH encoding scheme. Also in 1960, a practical fixed-polynomial decoder for BCH codes developed by Daniel Gorenstein and Neal Zierler was described in an MIT Lincoln Laboratory report by Zierler in January 1960, and later in a paper in June 1961. The Gorenstein–Zierler decoder and the related work on BCH codes are described in the book Error Correcting Codes by W. Wesley Peterson. By 1963, J. J. Stone recognized that Reed–Solomon codes could use the BCH scheme of using a fixed generator polynomial, making such codes a special class of BCH codes; however, Reed–Solomon codes based on the original encoding scheme are not a class of BCH codes and, depending on the set of evaluation points, they are not even cyclic codes.
In 1969, an improved BCH-scheme decoder was developed by Elwyn Berlekamp and James Massey; it has since been known as the Berlekamp–Massey decoding algorithm. In 1975, another improved BCH-scheme decoder was developed by Yasuo Sugiyama, based on the extended Euclidean algorithm. In 1977, Reed–Solomon codes were implemented in the Voyager program in the form of concatenated error correction codes. The first commercial application in mass-produced consumer products appeared in 1982 with the compact disc, where two interleaved Reed–Solomon codes are used. Today, Reed–Solomon codes are implemented in digital storage devices and digital communication standards, though they are being replaced by more modern low-density parity-check (LDPC) codes or turbo codes. For example, Reed–Solomon codes are used in the Digital Video Broadcasting standard DVB-S, but LDPC codes are used in its successor, DVB-S2. In 1986, an original-scheme decoder known as the Berlekamp–Welch algorithm was developed. In 1996, variations of original-scheme decoders called list decoders or soft decoders were developed by Madhu Sudan and others, and work continues on these types of decoders; see the Guruswami–Sudan list decoding algorithm.
In 2002, another original-scheme decoder was developed by Shuhong Gao, based on the extended Euclidean algorithm. Reed–Solomon coding is widely used in mass storage systems to correct the burst errors associated with media defects. Reed–Solomon coding is a key component of the compact disc; it was the first use of strong error correction coding in a mass-produced consumer product, and DAT and DVD use similar schemes. In the CD, two layers of Reed–Solomon coding separated by a 28-way convolutional interleaver yield a scheme called Cross-Interleaved Reed–Solomon Coding (CIRC). The first element of a CIRC decoder is a relatively weak inner (32,28) Reed–Solomon code, shortened from a (255,251) code with 8-bit symbols. This code can correct up to 2 byte errors per 32-byte block; more importantly, it flags as erasures any uncorrectable blocks, i.e. blocks with more than 2 byte errors. The decoded 28-byte blocks, with erasure indications, are then spread by the deinterleaver to different blocks of the (28,24) outer code. Thanks to the deinterleaving, an erased 28-byte block from the inner code becomes a single erased byte in each of 28 outer code blocks.
The outer code easily corrects this, since it can handle up to 4 such erasures per block.
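The spreading effect is easy to see with a toy model. The CD uses a convolutional (cross) interleaver, but a simple 28 × 28 rectangular block interleaver, assumed here purely for illustration, shows the same mechanism: one fully erased inner block turns into exactly one erased byte in each of the 28 outer blocks.

```python
import numpy as np

# Rows model 28 outer-code blocks of 28 bytes each (a rectangular toy model,
# not the CD's actual convolutional interleaver).
outer_blocks = np.arange(28 * 28).reshape(28, 28)

inner_blocks = outer_blocks.T.copy()    # interleave: columns become inner blocks
inner_blocks[3, :] = -1                 # inner decoder flags block 3 as all-erased
recovered = inner_blocks.T              # deinterleave back to outer-block order

# Every outer block now holds exactly one erasure, well inside its budget of 4.
assert all((row == -1).sum() == 1 for row in recovered)
```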
Convolution
In mathematics, convolution is an operation on two functions that produces a third function expressing how the shape of one is modified by the other. The term convolution refers both to the resulting function and to the process of computing it. Some features of convolution are similar to cross-correlation: for real-valued functions of a continuous or discrete variable, convolution differs from cross-correlation only in that either f or g is reflected about the y-axis. For continuous functions, the cross-correlation operator is the adjoint of the convolution operator. Convolution has applications that include probability, computer vision, natural language processing, signal processing, and differential equations. The convolution can be defined for functions on Euclidean space and other groups. For example, periodic functions, such as the discrete-time Fourier transform, can be defined on a circle and convolved by periodic convolution. A discrete convolution can be defined for functions on the set of integers. Generalizations of convolution have applications in the field of numerical analysis and numerical linear algebra, and in the design and implementation of finite impulse response filters in signal processing.
Computing the inverse of the convolution operation is known as deconvolution. The convolution of f and g is written f ∗ g, using an asterisk; it is defined as the integral of the product of the two functions after one is reflected and shifted. As such, it is a particular kind of integral transform:

$$(f * g)(t) \triangleq \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau.$$

An equivalent definition is:

$$(f * g)(t) \triangleq \int_{-\infty}^{\infty} f(t - \tau)\, g(\tau)\, d\tau.$$

While the symbol t is used above, it need not represent the time domain, but in that context the convolution formula can be described as a weighted average of the function f(τ) at the moment t, where the weighting is given by g(−τ) shifted by amount t. As t changes, the weighting function emphasizes different parts of the input function. For functions f, g supported on only [0, ∞), the integration limits can be truncated, resulting in:

$$(f * g)(t) = \int_{0}^{t} f(\tau)\, g(t - \tau)\, d\tau \quad \text{for } f, g : [0, \infty) \to \mathbb{R}.$$

For the multi-dimensional formulation of convolution, see domain of definition. A common engineering convention is

$$f(t) * g(t) \triangleq \underbrace{\int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau}_{(f\,*\,g)(t)},$$

which has to be interpreted carefully to avoid confusion. For instance, $f(t) * g(t - t_0)$ is equivalent to $(f * g)(t - t_0)$, while $f(t - t_0) * g(t - t_0)$ is in fact equivalent to $(f * g)(t - 2 t_0)$.
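The discrete analogue of the defining integral is $(f * g)[n] = \sum_m f[m]\, g[n - m]$. A direct evaluation, checked against numpy's built-in implementation:

```python
import numpy as np

def conv(f, g):
    # Direct evaluation of (f * g)[n] = sum over m of f[m] * g[n - m].
    n_out = len(f) + len(g) - 1
    out = np.zeros(n_out)
    for n in range(n_out):
        for m in range(len(f)):
            if 0 <= n - m < len(g):
                out[n] += f[m] * g[n - m]
    return out

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])
assert np.allclose(conv(f, g), np.convolve(f, g))
```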
Convolution describes the output of an important class of operations known as linear time-invariant (LTI) systems. See LTI system theory for a derivation of convolution as the result of LTI constraints. In terms of the Fourier transforms of the input and output of an LTI operation, no new frequency components are created; the existing ones are only modified. In other words, the output transform is the pointwise product of the input transform with a third transform. See the convolution theorem for a derivation of that property of convolution; a numerical check for discrete sequences is sketched at the end of this article. Conversely, convolution can be derived as the inverse Fourier transform of the pointwise product of two Fourier transforms. One of the earliest uses of the convolution integral appeared in D'Alembert's derivation of Taylor's theorem in Recherches sur différents points importants du système du monde, published in 1754. An expression of the type

$$\int f(u)\, g(x - u)\, du$$

is used by Sylvestre François Lacroix on page 505 of his book Treatise on Differences and Series, the last of three volumes of the encyclopedic series Traité du calcul différentiel et du calcul intégral, Chez Courcier, Paris, 1797–1800.
Soon thereafter, convolution operations appear in the works of Pierre Simon Laplace, Jean-Baptiste Joseph Fourier, Siméon Denis Poisson, and others. The term itself did not come into wide use until the 1960s; prior to that it was sometimes known as Faltung (which means folding in German), composition product, superposition integral, and Carson's integral, yet it appears as early as 1903.
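Returning to the convolution theorem mentioned above: for discrete sequences, the correspondence between time-domain convolution and pointwise multiplication of transforms is easy to verify numerically. Zero-padding both sequences to the full output length makes circular convolution coincide with linear convolution:

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 0.5])
g = np.array([0.5, -1.0, 0.25])

N = len(f) + len(g) - 1                  # pad so circular == linear convolution
time_domain = np.convolve(f, g)          # convolution in the time domain
freq_domain = np.fft.ifft(np.fft.fft(f, N) * np.fft.fft(g, N)).real
assert np.allclose(time_domain, freq_domain)
```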
Telecommunication
Telecommunication is the transmission of signs, signals, messages, words, writings, images and sounds, or information of any nature, by wire, radio, optical or other electromagnetic systems. Telecommunication occurs when the exchange of information between communication participants includes the use of technology; the information is transmitted either electrically over physical media, such as cables, or via electromagnetic radiation. Such transmission paths are often divided into communication channels, which afford the advantages of multiplexing. Since the Latin term communicatio is considered the social process of information exchange, the term telecommunications is often used in its plural form because it involves many different technologies. Early means of communicating over a distance included visual signals, such as beacons, smoke signals, semaphore telegraphs, signal flags and optical heliographs. Other examples of pre-modern long-distance communication included audio messages such as coded drumbeats, lung-blown horns and loud whistles. 20th- and 21st-century technologies for long-distance communication involve electrical and electromagnetic technologies, such as telegraph, telephone and teleprinter, radio, microwave transmission, fiber optics and communications satellites.
A revolution in wireless communication began in the first decade of the 20th century with the pioneering developments in radio communications by Guglielmo Marconi, who won the Nobel Prize in Physics in 1909, and by other notable pioneering inventors and developers in the field of electrical and electronic telecommunications. These included Charles Wheatstone and Samuel Morse (telegraph), Alexander Graham Bell (telephone), Edwin Armstrong and Lee de Forest (radio), as well as Vladimir K. Zworykin, John Logie Baird and Philo Farnsworth (television). The word telecommunication is a compound of the Greek prefix tele, meaning distant, far off, or afar, and the Latin communicare, meaning to share. Its modern use is adapted from the French, because its written use was recorded in 1904 by the French engineer and novelist Édouard Estaunié. Communication was first used as an English word in the late 14th century; it comes from Old French comunicacion, from Latin communicationem, a noun of action from the past participle stem of communicare, "to share, divide out".
Homing pigeons have been used throughout history by different cultures. Pigeon post had Persian roots and was later used by the Romans to aid their military. Frontinus said that Julius Caesar used pigeons as messengers in his conquest of Gaul. The Greeks also conveyed the names of the victors at the Olympic Games to various cities using homing pigeons. In the early 19th century, the Dutch government used the system in Sumatra, and in 1849, Paul Julius Reuter started a pigeon service to fly stock prices between Aachen and Brussels, a service that operated for a year until the gap in the telegraph link was closed. In the Middle Ages, chains of beacons were used on hilltops as a means of relaying a signal. Beacon chains suffered the drawback that they could only pass a single bit of information, so the meaning of the message, such as "the enemy has been sighted", had to be agreed upon in advance. One notable instance of their use was during the Spanish Armada, when a beacon chain relayed a signal from Plymouth to London. In 1792, Claude Chappe, a French engineer, built the first fixed visual telegraphy system, or semaphore line, between Lille and Paris.
However, semaphore suffered from the need for skilled operators and expensive towers at intervals of ten to thirty kilometres. As a result of competition from the electrical telegraph, the last commercial semaphore line was abandoned in 1880. On 25 July 1837, the first commercial electrical telegraph was demonstrated by English inventor Sir William Fothergill Cooke and English scientist Sir Charles Wheatstone. Both inventors viewed their device as "an improvement to the electromagnetic telegraph", not as a new device. Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on 2 September 1837; his code was an important advance over Wheatstone's signaling method. The first transatlantic telegraph cable was completed on 27 July 1866, allowing transatlantic telecommunication for the first time. The conventional telephone was invented independently by Alexander Bell and Elisha Gray in 1876. Antonio Meucci had invented the first device that allowed the electrical transmission of voice over a line in 1849.
However, Meucci's device was of little practical value because it relied upon the electrophonic effect and thus required users to place the receiver in their mouth to "hear" what was being said. The first commercial telephone services were set up in 1878 and 1879 on both sides of the Atlantic, in the cities of New Haven and London. Starting in 1894, Italian inventor Guglielmo Marconi began developing wireless communication using the newly discovered phenomenon of radio waves, showing by 1901 that they could be transmitted across the Atlantic Ocean; this was the start of wireless telegraphy by radio. Voice and music transmission had little early success. World War I accelerated the development of radio for military communications. After the war, commercial radio AM broadcasting began in the 1920s and became an important mass medium for entertainment and news. World War II again accelerated development of radio for the wartime purposes of aircraft and land communication, radio navigation and radar. Development of stereo FM broadcasting of radio took place from the 1930s onward in the United States and displaced AM as the dominant commercial standard by the 1960s.
Puncturing
In coding theory, puncturing is the process of removing some of the parity bits after encoding with an error-correction code. This has the same effect as encoding with an error-correction code of a higher rate, or less redundancy. However, with puncturing the same decoder can be used regardless of how many bits have been punctured, so puncturing increases the flexibility of the system without increasing its complexity. In some cases, a pre-defined pattern of puncturing is used in the encoder; the inverse operation, known as depuncturing, is implemented by the decoder. Puncturing is used in UMTS during the rate-matching process; it is also used in Wi-Fi, GPRS and EDGE, as well as in the DVB-T and DRM standards. Puncturing is often used with the Viterbi algorithm in coding systems. During the Radio Resource Control connection setup procedure, when the NBAP radio link setup message is sent, the uplink puncturing limit is sent to the Node B along with the uplink spreading factor and uplink scrambling code.

See also: Singleton bound.
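A minimal sketch of the mechanics under assumed parameters (the pattern and rates here are illustrative, not taken from any particular standard): a periodic pattern deletes selected bits of a rate-1/2 encoder's output to raise the rate to 3/4, and the depuncturer reinserts neutral placeholders (erasures) so the unmodified rate-1/2 decoder can still be used.

```python
import numpy as np

# Keep 4 of every 6 coded bits: rate 1/2 -> (1/2) * (6/4) = 3/4.
PATTERN = np.array([1, 1, 1, 0, 0, 1], dtype=bool)

def puncture(coded):
    mask = np.resize(PATTERN, len(coded))
    return coded[mask]                       # transmit only the kept positions

def depuncture(received, n_coded):
    mask = np.resize(PATTERN, n_coded)
    out = np.zeros(n_coded)                  # 0.0 = neutral soft value (erasure)
    out[mask] = received
    return out                               # the length the decoder expects

coded = np.array([1, -1, 1, 1, -1, -1, 1, 1, -1, 1, -1, -1], dtype=float)
tx = puncture(coded)                         # 8 of 12 soft bits transmitted
rx = depuncture(tx, len(coded))              # erasures restored at punctured slots
assert len(tx) == 8 and np.allclose(rx[np.resize(PATTERN, 12)], tx)
```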