Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines, such as information theory, electrical engineering, mathematics and computer science, for the purpose of designing efficient and reliable data transmission methods; this involves the removal of redundancy and the correction or detection of errors in the transmitted data. There are four types of coding: data compression (source coding), error control (channel coding), cryptographic coding, and line coding. Data compression attempts to remove redundancy from the data from a source in order to transmit it more efficiently. For example, Zip data compression makes data files smaller, for purposes such as reducing Internet traffic. Data compression and error correction may be studied in combination. Error correction adds extra data bits to make the transmission of data more robust to disturbances present on the transmission channel.
The ordinary user may not be aware of many applications using error correction. A typical music CD uses the Reed–Solomon code to correct for scratches and dust. In this application the transmission channel is the CD itself. Cell phones use coding techniques to correct for the fading and noise of high-frequency radio transmission. Data modems, telephone transmissions, and the NASA Deep Space Network all employ channel coding techniques to get the bits through, for example the turbo code and LDPC codes. In 1948, Claude Shannon published "A Mathematical Theory of Communication", an article in two parts in the July and October issues of the Bell System Technical Journal; this work focuses on the problem of how best to encode the information a sender wants to transmit. In this fundamental work he used tools in probability theory, developed by Norbert Wiener, which were in their nascent stages of being applied to communication theory at that time. Shannon developed information entropy as a measure for the uncertainty in a message while essentially inventing the field of information theory.
The binary Golay code was developed in 1949. It is an error-correcting code capable of correcting up to three errors in each 24-bit word and detecting a fourth. Richard Hamming won the Turing Award in 1968 for his work at Bell Labs on numerical methods, automatic coding systems, and error-detecting and error-correcting codes; he invented the concepts known as Hamming codes, Hamming windows, Hamming numbers and Hamming distance. The aim of source coding is to take the source data and make it smaller. Data can be seen as a random variable X: Ω → 𝒳, where x ∈ 𝒳 appears with probability P(x). Data are encoded by strings over an alphabet Σ. A code is a function C: 𝒳 → Σ*; C(x) is the code word associated with x, and the length of the code word is written as l(x). The expected length of a code is l(C) = ∑_{x ∈ 𝒳} l(x) P(x). Code words are concatenated: C(x₁ x₂ … x_k) = C(x₁) C(x₂) … C(x_k); the code word of the empty string is the empty string itself: C(ε) = ε. C: 𝒳 → Σ* is non-singular if it is injective; C: 𝒳* → Σ* is uniquely decodable if its extension to strings is injective; and C: 𝒳 → Σ* is instantaneous (prefix-free) if no code word C(x) is a prefix of another code word C(x′).
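The instantaneous (prefix-free) property above can be checked mechanically. A minimal sketch in Python (the function name and dictionary representation are illustrative, not from the source):

```python
def is_instantaneous(code):
    """Check whether a code, given as a dict mapping source symbols to
    codeword strings, is prefix-free: no codeword is a prefix of another."""
    words = list(code.values())
    for i, w in enumerate(words):
        for j, v in enumerate(words):
            if i != j and v.startswith(w):
                return False
    return True

print(is_instantaneous({"a": "0", "b": "10", "c": "110"}))  # True
print(is_instantaneous({"a": "0", "b": "01", "c": "11"}))   # False: "0" is a prefix of "01"
```

A prefix-free code can be decoded symbol by symbol as soon as each codeword ends, which is exactly why such codes are called instantaneous.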
The entropy of a source is a measure of its information. Source codes try to reduce the redundancy present in the source and represent the source with fewer bits that carry more information. Data compression which explicitly tries to minimize the average length of messages according to a particular assumed probability model is called entropy encoding. Various techniques used by source coding schemes try to achieve the limit set by the entropy of the source: C ≥ H, where H is the entropy of the source and C is the bit rate after compression.
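The bound C ≥ H can be illustrated numerically. The sketch below (helper names are illustrative, assuming a known symbol distribution) computes the Shannon entropy of a source and the expected length of a prefix code matched to it; for this dyadic distribution the code meets the entropy bound exactly:

```python
import math

def entropy(probs):
    """Shannon entropy H in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

def expected_length(code, probs):
    """Expected codeword length l(C) = sum over x of l(x) * P(x)."""
    return sum(len(code[x]) * probs[x] for x in probs)

probs = {"a": 0.5, "b": 0.25, "c": 0.25}
code = {"a": "0", "b": "10", "c": "11"}   # prefix code matched to the distribution

print(entropy(probs))             # 1.5
print(expected_length(code, probs))  # 1.5 -- meets the entropy bound exactly
```

For distributions whose probabilities are not powers of two, any symbol-by-symbol code has expected length strictly above H, which is why practical entropy coders only approach the limit.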
A magazine is a periodical publication, printed or electronically published. Magazines are published on a regular schedule and contain a variety of content; they are financed by advertising, by a purchase price, by prepaid subscriptions, or a combination of the three. At its root, the word "magazine" refers to a storage location. In the case of written publication, it is a collection of written articles; this explains why magazine publications share the word root with gunpowder magazines, artillery magazines, firearms magazines, and, in French, retail stores such as department stores. By definition, a magazine paginates with each issue starting at page three, with the standard sizing being 8 3⁄8 in × 10 7⁄8 in. In the technical sense, however, a journal has continuous pagination throughout a volume. Thus Business Week, which starts each issue anew with page one, is a magazine, but the Journal of Business Communication, which starts each volume with the winter issue and continues the same sequence of pagination throughout the coterminous year, is a journal.
Some professional or trade publications are peer-reviewed, an example being the Journal of Accountancy. Academic or professional publications that are not peer-reviewed are professional magazines; that a publication calls itself a journal does not make it a journal in the technical sense. Magazines can be distributed through the mail, through sales by newsstands, bookstores, or other vendors, or through free distribution at selected pick-up locations. The subscription business models for distribution fall into three main categories. In the paid circulation model, the magazine is sold to readers for a price, either on a per-issue basis or by subscription, where an annual fee or monthly price is paid and issues are sent by post to readers. Paid circulation allows for defined readership statistics. In the free circulation model, there is no cover price and issues are given away, for example in street dispensers, on airlines, or included with other products or publications. Because this model involves giving issues away to unspecified populations, the statistics only entail the number of issues distributed, not who reads them.
The third, controlled circulation, is the model used by many trade magazines, which are distributed only to qualifying readers for free, as determined by some form of survey. Because of costs associated with the medium of print, publishers may not distribute free copies to everyone who requests one; this allows a high level of certainty that advertisements will be received by the advertiser's target audience, and it avoids wasted printing and distribution expenses. This latter model was widely used before the rise of the World Wide Web and is still employed by some titles. For example, in the United Kingdom, a number of computer-industry magazines use this model, including Computer Weekly and Computing, and in finance, Waters Magazine. For the global media industry, an example would be VideoAge International. The earliest example of magazines was Erbauliche Monaths Unterredungen, a literary and philosophy magazine, launched in 1663 in Germany. The Gentleman's Magazine, first published in 1731 in London, was the first general-interest magazine. Edward Cave, who edited The Gentleman's Magazine under the pen name "Sylvanus Urban", was the first to use the term "magazine", on the analogy of a military storehouse.
Founded by Herbert Ingram in 1842, The Illustrated London News was the first illustrated magazine. The oldest consumer magazine still in print is The Scots Magazine, first published in 1739, though multiple changes in ownership and gaps in publication totalling over 90 years weaken that claim. Lloyd's List was founded in Edward Lloyd's coffee shop in England in 1734. Under the ancien régime, the most prominent magazines were Mercure de France, Journal des sçavans, founded in 1665 for scientists, and Gazette de France, founded in 1631. Jean Loret was one of France's first journalists; he disseminated the weekly news of music and Parisian society from 1650 until 1665 in verse, in what he called a gazette burlesque, assembled in three volumes of La Muse historique. The French press lagged a generation behind the British, for it catered to the needs of the aristocracy, while the newer British counterparts were oriented toward the middle and working classes. Periodicals were censored by the central government in Paris.
They were not quiescent politically—often they criticized Church abuses and bureaucratic ineptitude. They supported the monarchy, and they played at most a small role in stimulating the revolution. During the Revolution, new periodicals played central roles as propaganda organs for various factions. Jean-Paul Marat was the most prominent editor; his L'Ami du peuple advocated vigorously for the rights of the lower classes against the enemies of the people Marat hated. After 1800 Napoleon reimposed strict censorship. Magazines flourished after Napoleon left in 1815. Most were based in Paris and most emphasized literature and stories; they served religious and political communities. In times of political crisis they expressed and helped shape the views of their readership and thereby were major
The Mathematical Intelligencer
The Mathematical Intelligencer is a mathematical journal published by Springer Verlag that aims at a conversational and scholarly tone, rather than the technical and specialist tone more common among academic journals. It was started by mathematicians Bruce Chandler and Harold Edwards Jr. and first appeared in 1979. Marjorie Senechal is the editor-in-chief.
Neil James Alexander Sloane is a British-American mathematician. His major contributions are in the fields of combinatorics, error-correcting codes and sphere packing. Sloane is best known as the creator and maintainer of the On-Line Encyclopedia of Integer Sequences. Sloane was brought up in Australia. He studied at Cornell University under Nick DeClaris, Frank Rosenblatt, Frederick Jelinek and Wolfgang Heinrich Johannes Fuchs, receiving his Ph.D. in 1967. His doctoral dissertation was titled Lengths of Cycle Times in Random Neural Networks. Sloane joined AT&T Bell Labs in 1968 and retired from AT&T Labs in 2012; he became an AT&T Fellow in 1998. He is a Fellow of the Learned Society of Wales, an IEEE Fellow, a Fellow of the American Mathematical Society and a member of the National Academy of Engineering. He won a Lester R. Ford Award in 1978 and the Chauvenet Prize in 1979. In 2005 Sloane received the IEEE Richard W. Hamming Medal. In 2008 he received the Mathematical Association of America David P. Robbins Award, and in 2013 the George Pólya Award.
In 2014, to celebrate his 75th birthday, Sloane shared some of his favorite integer sequences. Besides mathematics, he has authored two rock-climbing guides to New Jersey. His books include: N. J. A. Sloane, A Handbook of Integer Sequences, Academic Press, NY, 1973; F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, Elsevier/North-Holland, Amsterdam, 1977; M. Harwit and N. J. A. Sloane, Hadamard Transform Optics, Academic Press, San Diego, CA, 1979; N. J. A. Sloane and A. D. Wyner, Claude Elwood Shannon: Collected Papers, IEEE Press, NY, 1993; N. J. A. Sloane and S. Plouffe, The Encyclopedia of Integer Sequences, Academic Press, San Diego, 1995; J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices and Groups, Springer-Verlag, NY, 1st edn. 1988; A. S. Hedayat, N. J. A. Sloane and J. Stufken, Orthogonal Arrays: Theory and Applications, Springer-Verlag, NY, 1999; G. Nebe, E. M. Rains and N. J. A. Sloane, Self-Dual Codes and Invariant Theory, Springer-Verlag, 2006.
Error correction code
In computing, telecommunication, information theory and coding theory, an error correction code, sometimes error correcting code (ECC), is used for controlling errors in data over unreliable or noisy communication channels. The central idea is that the sender encodes the message with redundant information in the form of an ECC. The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming code. The redundancy allows the receiver to detect a limited number of errors that may occur anywhere in the message, and to correct these errors without retransmission. ECC gives the receiver the ability to correct errors without needing a reverse channel to request retransmission of data, but at the cost of a fixed, higher forward channel bandwidth. ECC is therefore applied in situations where retransmissions are costly or impossible, such as one-way communication links and when transmitting to multiple receivers in multicast. For example, in the case of a satellite orbiting around Uranus, a retransmission because of decoding errors can create a delay of five hours.
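The Hamming code mentioned above can be sketched concretely. The following is an illustrative Python implementation of the classic Hamming(7,4) code (function names and bit ordering are this sketch's choices, not from the source): four data bits gain three parity bits, and any single-bit error can be located and corrected from the parity-check syndrome.

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct at most one bit error, then return the 4 data bits."""
    # Each syndrome bit checks the positions whose (1-based) index has that bit set.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3   # 0 means no error; otherwise 1-based error position
    c = c.copy()
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
sent = hamming74_encode(word)
sent[4] ^= 1                     # corrupt one bit in transit
print(hamming74_decode(sent))    # [1, 0, 1, 1] -- the single error is corrected
```

The syndrome directly encodes the error position in binary, which is the elegant property Hamming designed the code around.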
ECC information is added to mass storage devices to enable recovery of corrupted data; it is used in modems and on systems where the primary memory is ECC memory. ECC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, ECC is an integral part of the initial analog-to-digital conversion in the receiver; the Viterbi decoder implements a soft-decision algorithm to demodulate digital data from an analog signal corrupted by noise. Many ECC encoders/decoders can generate a bit-error rate signal which can be used as feedback to fine-tune the analog receiving electronics. The maximum fractions of errors or of missing bits that can be corrected are determined by the design of the ECC code, so different error correcting codes are suitable for different conditions. In general, a stronger code induces more redundancy that needs to be transmitted using the available bandwidth, which reduces the effective bit-rate while improving the received effective signal-to-noise ratio.
The noisy-channel coding theorem of Claude Shannon answers the question of how much bandwidth is left for data communication while using the most efficient code that drives the decoding error probability to zero. This establishes bounds on the theoretical maximum information transfer rate of a channel with some given base noise level. However, the proof is not constructive, and hence gives no insight into how to build a capacity-achieving code. After years of research, some advanced ECC systems nowadays come close to the theoretical maximum. ECC is accomplished by adding redundancy to the transmitted information using an algorithm. A redundant bit may be a complex function of many original information bits; the original information may or may not appear literally in the encoded output. A simplistic example of ECC is to transmit each data bit three times, known as a repetition code. Through a noisy channel, a receiver might see any of the eight possible three-bit triplets for each transmitted bit; this allows an error in any one of the three samples to be corrected by "majority vote" or "democratic voting".
The correcting ability of this ECC is: up to 1 bit of a triplet in error, or up to 2 bits of a triplet omitted. Though simple to implement and widely used, this triple modular redundancy is an inefficient ECC. Better ECC codes typically examine the last several dozen, or even the last several hundred, received bits to determine how to decode the current small handful of bits. ECC could be said to work by "averaging noise"; because of this "risk-pooling" effect, digital communication systems that use ECC tend to work well above a certain minimum signal-to-noise ratio and not at all below it. This all-or-nothing tendency, the cliff effect, becomes more pronounced as stronger codes are used that more closely approach the theoretical Shannon limit. Interleaving ECC coded data can reduce the all-or-nothing properties of transmitted ECC codes when the channel errors tend to occur in bursts. However, this method has limits. Most telecommunication systems use a fixed channel code designed to tolerate the expected worst-case bit error rate, and fail to work at all if the bit error rate is ever worse.
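The triple repetition code with majority-vote decoding can be sketched in a few lines of Python (the function names are illustrative, not from the source):

```python
def encode_repetition(bits):
    """Transmit each data bit three times."""
    return [b for b in bits for _ in range(3)]

def decode_repetition(received):
    """Decode each triplet by majority vote; corrects one flipped bit per triplet."""
    out = []
    for i in range(0, len(received), 3):
        triplet = received[i:i + 3]
        out.append(1 if sum(triplet) >= 2 else 0)
    return out

msg = [1, 0, 1]
sent = encode_repetition(msg)      # [1,1,1, 0,0,0, 1,1,1]
received = sent.copy()
received[1] = 0                    # flip one bit in the first triplet
print(decode_repetition(received)) # [1, 0, 1] -- the single error is corrected
```

The inefficiency the text describes is visible here: the code triples the bandwidth (rate 1/3) yet still fails whenever two bits of the same triplet flip.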
However, some systems adapt to the given channel error conditions: some instances of hybrid automatic repeat-request use a fixed ECC method as long as the ECC can handle the error rate, then switch to ARQ when the error rate gets too high. The two main categories of ECC codes are block codes and convolutional codes. Block codes work on fixed-size blocks of symbols of predetermined size. Practical block codes can generally be hard-decoded in polynomial time in their block length. Convolutional codes work on symbol streams of arbitrary length; they are most often soft-decoded with the Viterbi algorithm, though other algorithms are sometimes used. Viterbi decoding allows asymptotically optimal decoding efficiency with increasing constraint length of the convolutional code, but at the expense of exponentially increasing complexity. A convolu
In mathematics, a group is a set equipped with a binary operation which combines any two elements to form a third element in such a way that four conditions called group axioms are satisfied, namely closure, associativity, identity and invertibility. One of the most familiar examples of a group is the set of integers together with the addition operation, but groups are encountered in numerous areas within and outside mathematics, and help focus attention on essential structural aspects by detaching them from the concrete nature of the subject of study. Groups share a fundamental kinship with the notion of symmetry. For example, a symmetry group encodes symmetry features of a geometrical object: the group consists of the set of transformations that leave the object unchanged and the operation of combining two such transformations by performing one after the other. Lie groups are the symmetry groups used in the Standard Model of particle physics; the concept of a group arose from the study of polynomial equations, starting with Évariste Galois in the 1830s.
After contributions from other fields such as number theory and geometry, the group notion was generalized and established around 1870. Modern group theory—an active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. In addition to their abstract properties, group theorists study the different ways in which a group can be expressed concretely, both from a point of view of representation theory and of computational group theory. A theory has been developed for finite groups, which culminated with the classification of finite simple groups, completed in 2004. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become an active area in group theory; the modern concept of an abstract group developed out of several fields of mathematics. The original motivation for group theory was the quest for solutions of polynomial equations of degree higher than 4.
The 19th-century French mathematician Évariste Galois, extending prior work of Paolo Ruffini and Joseph-Louis Lagrange, gave a criterion for the solvability of a particular polynomial equation in terms of the symmetry group of its roots. The elements of such a Galois group correspond to certain permutations of the roots. At first, Galois' ideas were rejected by his contemporaries and published only posthumously. More general permutation groups were investigated in particular by Augustin Louis Cauchy. Arthur Cayley's On the theory of groups, as depending on the symbolic equation θⁿ = 1 gives the first abstract definition of a finite group. Geometry was a second field in which groups were used systematically, especially symmetry groups as part of Felix Klein's 1872 Erlangen program. After novel geometries such as hyperbolic and projective geometry had emerged, Klein used group theory to organize them in a more coherent way. Further advancing these ideas, Sophus Lie founded the study of Lie groups in 1884; the third field contributing to group theory was number theory.
Certain abelian group structures had been used implicitly in Carl Friedrich Gauss' number-theoretical work Disquisitiones Arithmeticae, and more explicitly by Leopold Kronecker. In 1847, Ernst Kummer made early attempts to prove Fermat's Last Theorem by developing groups describing factorization into prime numbers. The convergence of these various sources into a uniform theory of groups started with Camille Jordan's Traité des substitutions et des équations algébriques. Walther von Dyck introduced the idea of specifying a group by means of generators and relations, and was the first to give an axiomatic definition of an "abstract group", in the terminology of the time. In the early 20th century, groups gained wide recognition through the pioneering work of Ferdinand Georg Frobenius and William Burnside, who worked on representation theory of finite groups, Richard Brauer's modular representation theory and Issai Schur's papers. The theory of Lie groups, and more generally locally compact groups, was studied by Hermann Weyl, Élie Cartan and many others.
Its algebraic counterpart, the theory of algebraic groups, was first shaped by Claude Chevalley and by the work of Armand Borel and Jacques Tits. The University of Chicago's 1960–61 Group Theory Year brought together group theorists such as Daniel Gorenstein, John G. Thompson and Walter Feit, laying the foundation of a collaboration that, with input from numerous other mathematicians, led to the classification of finite simple groups, with the final step taken by Aschbacher and Smith in 2004. This project exceeded previous mathematical endeavours by its sheer size, in both length of proof and number of researchers. Research is ongoing to simplify the proof of this classification. These days, group theory is still an active mathematical branch, impacting many other fields. One of the most familiar groups is the set of integers Z, which consists of the numbers …, −4, −3, −2, −1, 0, 1, 2, 3, 4, …, together with addition. The following properties of integer addition serve as a model for the group axioms given in the definition below.
For any two integers a and b, the sum a + b is an integer. That is, addition of integers always yields an integer; this property is known as closure under addition. For all integers a, b and c, (a + b) + c = a + (b + c). Expressed in words, adding a to b first, and then adding c to the result, gives the same final result as adding a to the sum of b and c; this property is known as associativity.
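The four group axioms can be verified by brute force for any small finite structure. A minimal Python sketch (function name and interface are this sketch's choices), applied here to the integers modulo 5 under addition, a finite cousin of the integer example above:

```python
def check_group_axioms(elements, op, identity):
    """Verify closure, associativity, identity and inverses by exhaustive check."""
    elems = list(elements)
    closure = all(op(a, b) in elements for a in elems for b in elems)
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a in elems for b in elems for c in elems)
    ident = all(op(identity, a) == a and op(a, identity) == a for a in elems)
    inverses = all(any(op(a, b) == identity for b in elems) for a in elems)
    return closure and assoc and ident and inverses

# The integers modulo 5 under addition form a group with identity 0.
Z5 = set(range(5))
print(check_group_axioms(Z5, lambda a, b: (a + b) % 5, 0))  # True
```

Dropping any element from Z5 breaks closure, which illustrates why all four axioms must hold simultaneously.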
International Standard Serial Number
An International Standard Serial Number (ISSN) is an eight-digit serial number used to uniquely identify a serial publication, such as a magazine. The ISSN is helpful in distinguishing between serials with the same title. ISSNs are used in ordering, interlibrary loans, and other practices in connection with serial literature. The ISSN system was first drafted as an International Organization for Standardization international standard in 1971 and published as ISO 3297 in 1975. ISO subcommittee TC 46/SC 9 is responsible for maintaining the standard. When a serial with the same content is published in more than one media type, a different ISSN is assigned to each media type. For example, many serials are published both in print and in electronic media; the ISSN system refers to these types as print ISSN and electronic ISSN, respectively. Additionally, as defined in ISO 3297:2007, every serial in the ISSN system is assigned a linking ISSN (ISSN-L), typically the same as the ISSN assigned to the serial in its first published medium, which links together all ISSNs assigned to the serial in every medium.
The format of the ISSN is an eight-digit code, divided by a hyphen into two four-digit numbers. As an integer number, it can be represented by the first seven digits; the last code digit, which may be 0–9 or an X, is a check digit. Formally, the general form of the ISSN code can be expressed as follows: NNNN-NNNC, where N is in the set {0, 1, …, 9}, a digit character, and C is in {0, 1, …, 9, X}. The ISSN of the journal Hearing Research, for example, is 0378-5955, where the final 5 is the check digit, C = 5. To calculate the check digit, the following algorithm may be used: Calculate the sum of the first seven digits of the ISSN, each multiplied by its position in the number, counting from the right—that is, 8, 7, 6, 5, 4, 3, 2, respectively: 0⋅8 + 3⋅7 + 7⋅6 + 8⋅5 + 5⋅4 + 9⋅3 + 5⋅2 = 0 + 21 + 42 + 40 + 20 + 27 + 10 = 160. The modulus 11 of this sum is then calculated; if the remainder is 0, the check digit is 0, otherwise the remainder is subtracted from 11 to give the check digit. Here 160 mod 11 = 6, so the check digit is 11 − 6 = 5. For calculations, an upper case X in the check digit position indicates a check digit of 10. To confirm the check digit, calculate the sum of all eight digits of the ISSN multiplied by their position in the number, counting from the right.
The modulus 11 of the sum must be 0. There is an online ISSN checker. ISSN codes are assigned by a network of ISSN National Centres located at national libraries and coordinated by the ISSN International Centre based in Paris. The International Centre is an intergovernmental organization created in 1974 through an agreement between UNESCO and the French government. The International Centre maintains a database of all ISSNs assigned worldwide, the ISDS Register, otherwise known as the ISSN Register. At the end of 2016, the ISSN Register contained records for 1,943,572 items. ISSN and ISBN codes are similar in concept. An ISBN might be assigned for particular issues of a serial, in addition to the ISSN code for the serial as a whole. An ISSN, unlike the ISBN code, is an anonymous identifier associated with a serial title, containing no information as to the publisher or its location. For this reason a new ISSN is assigned to a serial each time it undergoes a major title change. Since the ISSN applies to an entire serial, a new identifier, the Serial Item and Contribution Identifier, was built on top of it to allow references to specific volumes, articles, or other identifiable components.
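The check-digit algorithm described above translates directly into code. A short Python sketch (function names are illustrative, not from the source), using the Hearing Research ISSN 0378-5955 from the text as the worked example:

```python
def issn_check_digit(first7):
    """Compute the ISSN check digit from the first seven digits (a string).

    Each digit is weighted by its position counting from the right of the
    full eight-digit ISSN: weights 8, 7, 6, 5, 4, 3, 2."""
    total = sum(int(d) * w for d, w in zip(first7, range(8, 1, -1)))
    r = total % 11
    if r == 0:
        return "0"
    return "X" if 11 - r == 10 else str(11 - r)

def is_valid_issn(issn):
    """Validate a full ISSN given as 'NNNN-NNNC'."""
    digits = issn.replace("-", "")
    return issn_check_digit(digits[:7]) == digits[7]

print(issn_check_digit("0378595"))  # '5' -- matches Hearing Research, ISSN 0378-5955
print(is_valid_issn("0378-5955"))   # True
```

The weighted sum for "0378595" is 160, 160 mod 11 = 6, and 11 − 6 = 5, reproducing the hand calculation in the text.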
Separate ISSNs are needed for serials in different media. Thus, the print and electronic media versions of a serial need separate ISSNs, and a CD-ROM version and a web version of a serial require different ISSNs, since two different media are involved. However, the same ISSN can be used for different file formats of the same online serial. This "media-oriented identification" of serials made sense in the 1970s. In the 1990s and onward, with personal computers, better screens and the Web, it makes sense to consider only content, independent of media. This "content-oriented identification" of serials remained an unmet demand for a decade, but no ISSN update or initiative occurred. A natural extension of the ISSN, the unique identification of the articles in serials, was the main demanded application. An alternative model for serials' contents arrived with the indecs Content Model and its application, the digital object identifier, an ISSN-independent initiative consolidated in the 2000s. Only in 2007 was the ISSN-L defined, in the revised standard ISO 3297:2007.