Backward compatibility

In telecommunications and computing, backward compatibility is a property of a system, product, or technology that allows interoperability with an older legacy system, or with input designed for such a system. Backward compatibility is sometimes called downward compatibility. Modifying a system in a way that does not allow backward compatibility is sometimes called "breaking" backward compatibility. A complementary concept is forward compatibility: a forward-compatible design has a roadmap for compatibility with future standards and products. The benefits of backward compatibility are the appeal to an existing user base through an inexpensive upgrade path, as well as the network effect, which is important because it increases the value of goods and services in proportion to the size of the user base. One example is the Sony PlayStation 2, which is backward compatible with games for its predecessor, the original PlayStation. While the selection of PS2 games available at launch was small, sales of the console were nonetheless strong in 2000-2001 thanks to the large library of games for the preceding PS1.
This bought time for the PS2 to grow a large installed base and for developers to release more quality PS2 games for the crucial 2001 holiday season. The associated costs of backward compatibility are a higher bill of materials if hardware is required to support the legacy system. A notable example is the Sony PlayStation 3: the first PS3 iteration was expensive to manufacture in part because it included the Emotion Engine from the preceding PS2 in order to run PS2 games, since the PS3 architecture was different from the PS2's. Subsequent PS3 hardware revisions eliminated the Emotion Engine, reducing production costs at the expense of the ability to run PS2 titles, after Sony found that backward compatibility was not a major selling point for the PS3, in contrast to the PS2. The PS3's chief competitor, the Microsoft Xbox 360, took a different approach to backward compatibility, using software emulation to run games from the original Xbox rather than including legacy hardware; however, Microsoft stopped releasing emulation profiles after 2007.
A simple example of both backward and forward compatibility is the introduction of FM radio in stereo. FM radio was originally mono, with only one audio channel represented by one signal. With the introduction of two-channel stereo FM radio, a large number of listeners had only mono FM receivers. Forward compatibility for mono receivers with stereo signals was achieved by sending the sum of the left and right audio channels in one signal and their difference in another signal; a mono FM receiver receives and decodes the sum signal while ignoring the difference signal, which is needed only to separate the audio channels. A stereo FM receiver can receive a mono signal and decode it without the need for a second signal, and it can separate the sum signal into left and right channels when both sum and difference signals are received. Without the requirement for backward compatibility, a simpler method could have been chosen.
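A minimal sketch of the sum/difference idea follows; it is not an actual FM multiplexing implementation, and the function names and plain float samples are assumptions made for illustration.

```python
# A toy sketch of the sum/difference scheme, assuming plain float samples.

def encode_stereo(left, right):
    """Broadcast side: produce the sum (L + R) and difference (L - R) signals."""
    sum_signal = [l + r for l, r in zip(left, right)]
    diff_signal = [l - r for l, r in zip(left, right)]
    return sum_signal, diff_signal

def decode_mono(sum_signal, diff_signal=None):
    """A mono receiver decodes only the sum signal and ignores the difference."""
    return sum_signal

def decode_stereo(sum_signal, diff_signal):
    """A stereo receiver recovers L and R from the sum and difference."""
    left = [(s + d) / 2 for s, d in zip(sum_signal, diff_signal)]
    right = [(s - d) / 2 for s, d in zip(sum_signal, diff_signal)]
    return left, right

left, right = [1.0, 0.5, -0.25], [0.25, 0.75, 0.25]
s, d = encode_stereo(left, right)
assert decode_stereo(s, d) == (left, right)
print("mono receiver hears:", decode_mono(s, d))
```

The mono decoder never needs to know that the difference signal exists, which is exactly the compatibility property described above.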
Full backward compatibility is particularly important in computer instruction set architectures, one of the most successful being the x86 family of microprocessors. Their full backward compatibility spans back to the 16-bit Intel 8086/8088 processors introduced in 1978. Backward compatible processors can execute the same binary software instructions as their predecessors, allowing the use of a newer processor without having to acquire new applications or operating systems. Similarly, the success of the Wi-Fi digital communication standard is attributed to its broad forward and backward compatibility. Compiler backward compatibility may refer to the ability of a compiler for a newer version of a language to accept programs or data that worked under the previous version. A data format is said to be backward compatible with its predecessor if every message or file that is valid under the old format is still valid, and retains its meaning, under the new format.
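As a hypothetical illustration of data-format backward compatibility, the sketch below assumes a JSON-based config format whose version 2 adds an optional field with a default, so every valid version 1 document remains valid and keeps its meaning; the field names are invented for this example.

```python
# Version 2 of a hypothetical config format adds an optional "retries"
# field with a default, so every valid v1 document is still valid, and
# keeps its meaning, under v2.
import json

V2_DEFAULTS = {"retries": 0}  # new in v2; defaulted for old v1 documents

def parse_config_v2(text):
    """Parse a v2 config; v1 documents (which lack "retries") remain valid."""
    return {**V2_DEFAULTS, **json.loads(text)}

old_doc = '{"host": "example.org", "port": 443}'                  # valid v1
new_doc = '{"host": "example.org", "port": 443, "retries": 3}'    # valid v2
assert parse_config_v2(old_doc)["retries"] == 0
assert parse_config_v2(new_doc)["retries"] == 3
```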
Block cipher

In cryptography, a block cipher is a deterministic algorithm operating on fixed-length groups of bits, called blocks, with an unvarying transformation specified by a symmetric key. Block ciphers are important elementary components in the design of many cryptographic protocols and are widely used to implement encryption of bulk data. The modern design of block ciphers is based on the concept of an iterated product cipher. In his seminal 1949 publication, Communication Theory of Secrecy Systems, Claude Shannon analyzed product ciphers and suggested them as a means of improving security by combining simple operations such as substitutions and permutations. Iterated product ciphers carry out encryption in multiple rounds, each of which uses a different subkey derived from the original key. One widespread implementation of such ciphers, named a Feistel network after Horst Feistel, is notably used in the DES cipher. Many other realizations of block ciphers, such as the AES, are classified as substitution–permutation networks.
The publication of the DES cipher by the United States National Bureau of Standards in 1977 was fundamental in the public understanding of modern block cipher design, and it influenced the academic development of cryptanalytic attacks. Both differential and linear cryptanalysis arose out of studies of the DES design; as of 2016 there is a palette of attack techniques against which a block cipher must be secure, in addition to being robust against brute-force attacks. Even a secure block cipher is suitable only for the encryption of a single block under a fixed key; a multitude of modes of operation have been designed to allow repeated use in a secure way, to achieve the security goals of confidentiality and authenticity. Block ciphers may also feature as building blocks in other cryptographic protocols, such as universal hash functions and pseudo-random number generators. A block cipher consists of two paired algorithms, one for encryption, E, and the other for decryption, D. Both algorithms accept a key of size k bits.
The decryption algorithm D is defined to be the inverse function of encryption, i.e. D = E^-1. More formally, a block cipher is specified by an encryption function E_K(P) := E(K, P): {0,1}^k × {0,1}^n → {0,1}^n, which takes as input a key K of bit length k (called the key size) and a bit string P of length n (called the block size), and returns a string C of n bits. P is called the plaintext and C is termed the ciphertext. For each K, the function E_K is required to be an invertible mapping on {0,1}^n. The inverse of E is defined as a function E_K^-1(C) := D_K(C) = D(K, C): {0,1}^k × {0,1}^n → {0,1}^n, taking a key K and a ciphertext C and returning a plaintext value P, such that ∀K: D_K(E_K(P)) = P. For example, a block cipher encryption algorithm might take a 128-bit block of plaintext as input and output a corresponding 128-bit block of ciphertext; the exact transformation is controlled using a second input, the secret key. Decryption is similar: the decryption algorithm takes, in this example, a 128-bit block of ciphertext together with the secret key and yields the original 128-bit block of plaintext.
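A minimal sketch of this E/D relationship for a 128-bit block, assuming the third-party Python "cryptography" package is available; raw ECB mode is used here only so that exactly one block is transformed, not as a recommendation for practice.

```python
# Demonstrates D_K(E_K(P)) = P for a single 128-bit AES block.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)        # K: a 128-bit key (k = 128)
plaintext = os.urandom(16)  # P: one 128-bit block (n = 128)

cipher = Cipher(algorithms.AES(key), modes.ECB())
ciphertext = cipher.encryptor().update(plaintext)  # C = E_K(P)
recovered = cipher.decryptor().update(ciphertext)  # D_K(C)

assert recovered == plaintext  # D_K(E_K(P)) = P
```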
For each key K, E_K is a permutation over the set of input blocks; each key selects one permutation from the set of (2^n)! possible permutations. Most block cipher algorithms are classified as iterated block ciphers, which means that they transform fixed-size blocks of plaintext into identically sized blocks of ciphertext via the repeated application of an invertible transformation known as the round function, with each iteration referred to as a round. The round function R takes a different round key K_i as its second input, derived from the original key: M_i = R_{K_i}(M_{i-1}), where M_0 is the plaintext and M_r is the ciphertext, with r being the number of rounds. Frequently, key whitening is used in addition to this: at the beginning and the end, the data is modified with key material, for example M_0 = M ⊕ K_0 and C = M_r ⊕ K_{r+1}.
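The round structure and whitening can be seen in the toy sketch below; the 16-bit block size, the XOR-and-rotate round, and all constants are assumptions chosen to keep the code short, and this is of course not a secure cipher.

```python
# A toy iterated block cipher illustrating M_i = R_{K_i}(M_{i-1}) with
# key whitening at both ends. NOT a secure cipher.

BLOCK_BITS = 16
MASK = (1 << BLOCK_BITS) - 1

def round_function(state, round_key):
    """One invertible round: mix in the round key, then rotate left by 3."""
    x = (state ^ round_key) & MASK
    return ((x << 3) | (x >> (BLOCK_BITS - 3))) & MASK

def inverse_round(state, round_key):
    x = ((state >> 3) | (state << (BLOCK_BITS - 3))) & MASK
    return x ^ round_key

def encrypt(block, round_keys, k_first, k_last):
    state = block ^ k_first              # whitening: M_0 = M XOR K_0
    for rk in round_keys:                # rounds: M_i = R_{K_i}(M_{i-1})
        state = round_function(state, rk)
    return state ^ k_last                # whitening: C = M_r XOR K_{r+1}

def decrypt(block, round_keys, k_first, k_last):
    state = block ^ k_last
    for rk in reversed(round_keys):
        state = inverse_round(state, rk)
    return state ^ k_first

round_keys = [0x1A2B, 0x3C4D, 0x5E6F]    # toy subkeys "derived" from a key
encrypted = encrypt(0xBEEF, round_keys, 0x1111, 0x2222)
assert decrypt(encrypted, round_keys, 0x1111, 0x2222) == 0xBEEF
```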
International Standard Serial Number
An International Standard Serial Number (ISSN) is an eight-digit serial number used to uniquely identify a serial publication, such as a magazine. The ISSN is helpful in distinguishing between serials with the same title. ISSNs are used in ordering, interlibrary loans, and other practices in connection with serial literature. The ISSN system was first drafted as an International Organization for Standardization international standard in 1971 and published as ISO 3297 in 1975. ISO subcommittee TC 46/SC 9 is responsible for maintaining the standard. When a serial with the same content is published in more than one media type, a different ISSN is assigned to each media type. For example, many serials are published both in print and in electronic media; the ISSN system refers to these types as the print ISSN and the electronic ISSN, respectively. In addition, as defined in ISO 3297:2007, every serial in the ISSN system is assigned a linking ISSN (ISSN-L), typically the same as the ISSN assigned to the serial in its first published medium, which links together all ISSNs assigned to the serial in every medium.
The format of the ISSN is an eight-digit code, divided by a hyphen into two four-digit numbers. As an integer number, it can be represented by the first seven digits; the last code digit, which may be 0-9 or an X, is a check digit. Formally, the general form of the ISSN code can be expressed as NNNN-NNNC, where N is a digit character and C, the check character, is a digit or an upper case X. The ISSN of the journal Hearing Research, for example, is 0378-5955, where the final 5 is the check digit, i.e. C = 5. To calculate the check digit, the following algorithm may be used: calculate the sum of the first seven digits of the ISSN, each multiplied by its position in the number, counting from the right, that is, by 8, 7, 6, 5, 4, 3, 2, respectively: 0·8 + 3·7 + 7·6 + 8·5 + 5·4 + 9·3 + 5·2 = 0 + 21 + 42 + 40 + 20 + 27 + 10 = 160. The remainder of this sum modulo 11 is then calculated: 160 mod 11 = 6. If the remainder is 0, the check digit is 0; otherwise the check digit is 11 minus the remainder, here 11 - 6 = 5. An upper case X in the check digit position indicates a check digit of 10. To confirm the check digit, calculate the sum of all eight digits of the ISSN, each multiplied by its position in the number, counting from the right; the remainder of this sum modulo 11 must be 0. There is an online ISSN checker.
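A small sketch of the check-digit procedure described above, in Python; the function names are assumptions, and the worked Hearing Research example (ISSN 0378-5955) serves as the test case.

```python
# Compute and verify the ISSN check digit using weights 8..2 (and 8..1
# for full validation) and the mod-11 rule described above.

def issn_check_digit(first_seven: str) -> str:
    """Compute the check character from the first seven ISSN digits."""
    total = sum(int(d) * w for d, w in zip(first_seven, range(8, 1, -1)))
    remainder = total % 11
    if remainder == 0:
        return "0"
    check = 11 - remainder
    return "X" if check == 10 else str(check)

def issn_is_valid(issn: str) -> bool:
    """Validate a full ISSN such as '0378-5955' (weighted sum mod 11 == 0)."""
    digits = issn.replace("-", "").upper()
    if len(digits) != 8:
        return False
    values = [10 if c == "X" else int(c) for c in digits]
    return sum(v * w for v, w in zip(values, range(8, 0, -1))) % 11 == 0

assert issn_check_digit("0378595") == "5"
assert issn_is_valid("0378-5955")
assert not issn_is_valid("0378-5954")
```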
ISSN codes are assigned by a network of ISSN National Centres located at national libraries and coordinated by the ISSN International Centre based in Paris. The International Centre is an intergovernmental organization created in 1974 through an agreement between UNESCO and the French government. The International Centre maintains a database of all ISSNs assigned worldwide, the ISDS Register, otherwise known as the ISSN Register. At the end of 2016, the ISSN Register contained records for 1,943,572 items. ISSN and ISBN codes are similar in concept. An ISBN might be assigned for particular issues of a serial, in addition to the ISSN code for the serial as a whole. An ISSN, unlike the ISBN code, is an anonymous identifier associated with a serial title, containing no information as to the publisher or its location. For this reason a new ISSN is assigned to a serial each time it undergoes a major title change. Since the ISSN applies to an entire serial, a new identifier, the Serial Item and Contribution Identifier, was built on top of it to allow references to specific volumes, articles, or other identifiable components.
Separate ISSNs are needed for serials in different media. Thus, the print and electronic media versions of a serial need separate ISSNs, and a CD-ROM version and a web version of a serial require different ISSNs, since two different media are involved. However, the same ISSN can be used for different file formats of the same online serial. This "media-oriented identification" of serials made sense in the 1970s. From the 1990s onward, with personal computers, better screens, and the Web, it made more sense to consider content independent of the medium. This "content-oriented identification" of serials was a repressed demand for a decade, but no ISSN update or initiative occurred. A natural extension of the ISSN, the unique identification of articles within serials, was the main application demanded. An alternative model for serial contents arrived with the indecs Content Model and its application, the digital object identifier (DOI), an ISSN-independent initiative consolidated in the 2000s. Only in 2007 was the ISSN-L defined, in ISO 3297:2007.
Meet-in-the-middle attack

The meet-in-the-middle (MITM) attack is a generic space-time tradeoff cryptographic attack against encryption schemes that rely on performing multiple encryption operations in sequence. The MITM attack is the primary reason why Double DES is not used and why a Triple DES key can be brute-forced by an attacker with 2^56 space and 2^112 operations. When trying to improve the security of a block cipher, a tempting idea is to encrypt the data several times using multiple keys. One might think this doubles or even n-tuples the security of the multiple-encryption scheme, depending on the number of times the data is encrypted, because an exhaustive search on all possible combinations of keys would take 2^(n·k) attempts if the data is encrypted with k-bit keys n times. The MITM is a generic attack that weakens the security benefits of using multiple encryptions by storing intermediate values from the encryptions or decryptions and using those to reduce the time required to brute-force the decryption keys; this is what makes it a space-time tradeoff attack.
The MITM attack attempts to find the keys by using both the range and the domain of the composition of several functions, such that the forward mapping through the first functions is the same as the backward mapping through the last functions, quite literally meeting in the middle of the composed function. For example, although Double DES encrypts the data with two different 56-bit keys, Double DES can be broken with 2^57 encryption and decryption operations. The multidimensional MITM uses a combination of several simultaneous MITM attacks as described above, where the meeting happens in multiple positions in the composed function. Diffie and Hellman first proposed the meet-in-the-middle attack on a hypothetical expansion of a block cipher in 1977. Their attack used a space-time tradeoff to break the double-encryption scheme in only twice the time needed to break the single-encryption scheme. In 2011, Bo Zhu and Guang Gong investigated the multidimensional meet-in-the-middle attack and presented new attacks on the block ciphers GOST, KTANTAN and Hummingbird-2.
Assume someone wants to attack an encryption scheme with the following characteristics for a given plaintext P and ciphertext C: C = ENC_k2(ENC_k1(P)) and P = DEC_k1(DEC_k2(C)), where ENC is the encryption function, DEC is the decryption function defined as ENC^-1, and k1 and k2 are two keys. The naive approach to brute-forcing this encryption scheme is to decrypt the ciphertext with every possible k2 and then decrypt each of the intermediate outputs with every possible k1, for a total of 2^|k1| · 2^|k2| operations, where |k| denotes the bit length of key k. The meet-in-the-middle attack uses a more efficient approach. By decrypting C with k2, one obtains the following equivalence: DEC_k2(C) = DEC_k2(ENC_k2(ENC_k1(P))) = ENC_k1(P). The attacker can compute ENC_k1(P) for all values of k1 and DEC_k2(C) for all possible values of k2, for a total of 2^|k1| + 2^|k2| operations. If the result of any of the ENC_k1 operations matches a result from the DEC_k2 operations, the corresponding pair of k1 and k2 is possibly the correct key; this potentially correct key is called a candidate key. The attacker can determine which candidate key is correct by testing it with a second test set of plaintext and ciphertext.
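A toy sketch of this attack on double encryption follows; the 8-bit S-box cipher and all names are assumptions chosen so that the entire key space can be enumerated, and the example illustrates the 2^|k1| + 2^|k2| workload rather than a real attack on Double DES.

```python
# Meet-in-the-middle attack against double encryption with a toy 8-bit cipher.
import random
from collections import defaultdict

rng = random.Random(0)
SBOX = list(range(256))
rng.shuffle(SBOX)
INV_SBOX = [0] * 256
for i, v in enumerate(SBOX):
    INV_SBOX[v] = i

def toy_encrypt(p, k):      # ENC_k
    return SBOX[p ^ k] ^ k

def toy_decrypt(c, k):      # DEC_k = ENC_k^-1
    return INV_SBOX[c ^ k] ^ k

def double_encrypt(p, k1, k2):
    return toy_encrypt(toy_encrypt(p, k1), k2)

def mitm(pairs):
    """Recover candidate (k1, k2) pairs from known (plaintext, ciphertext) pairs."""
    p0, c0 = pairs[0]
    # Forward table: ENC_k1(p0) -> every k1 producing that middle value.
    forward = defaultdict(list)
    for k1 in range(256):
        forward[toy_encrypt(p0, k1)].append(k1)
    # Meet in the middle: DEC_k2(c0) must equal some ENC_k1(p0).
    candidates = [(k1, k2)
                  for k2 in range(256)
                  for k1 in forward.get(toy_decrypt(c0, k2), [])]
    # Confirm candidate keys against the remaining known pairs.
    return [kp for kp in candidates
            if all(double_encrypt(p, *kp) == c for p, c in pairs[1:])]

true_k1, true_k2 = 0xA5, 0x3C
pairs = [(p, double_encrypt(p, true_k1, true_k2)) for p in (0x00, 0x5A, 0xF0)]
assert (true_k1, true_k2) in mitm(pairs)
```

With 8-bit keys, building the table and sweeping k2 cost roughly 2^8 + 2^8 cipher evaluations instead of the 2^16 of the naive search; the extra known pairs play the role of the second test set mentioned above.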
United States Department of Commerce
The United States Department of Commerce is the Cabinet department of the United States government concerned with promoting economic growth. Among its tasks are gathering economic and demographic data for business and government decision-making and helping to set industrial standards. The organization's main purpose is to create jobs, promote economic growth, encourage sustainable development and improve standards of living for all Americans. The Department of Commerce headquarters is the Herbert C. Hoover Building in Washington, D.C. Wilbur Ross is the current Secretary of Commerce. The department was created as the United States Department of Commerce and Labor on February 14, 1903. It was subsequently renamed the Department of Commerce on March 4, 1913, as the bureaus and agencies specializing in labor were transferred to the new Department of Labor. The United States Patent and Trademark Office was transferred from the Interior Department into Commerce, and the Federal Employment Stabilization Office existed within the department from 1931 to 1939.
In 1940, the Weather Bureau was transferred from the Agriculture Department, and the Civil Aeronautics Authority was merged into the department. In 1949, the Public Roads Administration was added to the department due to the dissolution of the Federal Works Agency. In 1958, the independent Federal Aviation Agency was created and the Civil Aeronautics Authority was abolished. The United States Travel Service was established by the United States Secretary of Commerce on July 1, 1961, pursuant to the International Travel Act of 1961. The Economic Development Administration was created in 1965. In 1966, the Bureau of Public Roads was transferred to the newly created Department of Transportation. The Minority Business Development Agency was created on March 5, 1969, established by President Richard M. Nixon as the Office of Minority Business Enterprise. The National Oceanic and Atmospheric Administration was created on October 3, 1970. The Department of Commerce was authorized a budget for Fiscal Year 2015 of $14.6 billion.
Proposals to reorganize the Department go back many decades. The Department of Commerce was one of three departments that Texas governor Rick Perry advocated eliminating during his 2012 presidential campaign, along with the Department of Education and the Department of Energy. Perry's campaign cited the frequency with which agencies had been moved into and out of the department and its lack of a coherent focus, and advocated moving its vital programs into other departments such as the Department of the Interior, the Department of Labor, and the Department of the Treasury; the Economic Development Administration would have been eliminated. On January 13, 2012, President Obama announced his intention to ask the United States Congress for the power to close the department and replace it with a new cabinet-level agency focused on trade and exports. The new agency would include the Office of the United States Trade Representative, currently part of the Executive Office of the President, as well as the Export-Import Bank of the United States, the Overseas Private Investment Corporation, the United States Trade and Development Agency and the Small Business Administration, all of which are independent agencies.
The Obama administration projected that the reorganization would save $3 billion and would help the administration's goal of doubling U.S. exports in five years. The new agency would be organized around four "pillars", among them a technology and innovation office including the United States Patent and Trademark Office and the National Institute of Standards and Technology; the National Oceanic and Atmospheric Administration would be transferred from the Department of Commerce into the Department of the Interior. That year, shortly before the 2012 presidential election, Obama invoked the idea of a "secretary of business" in reference to the plan. The reorganization was part of a larger proposal that would grant the President the authority to propose mergers of federal agencies, subject to an up-or-down Congressional vote. This authority had existed from the Great Depression until the Reagan presidency, when Congress rescinded it. The Obama administration plan faced criticism for some of its elements.
Some members of Congress expressed concern that the Office of the United States Trade Representative would lose focus if it were included in a larger bureaucracy, given its status as an "honest broker" between other agencies, which tend to advocate for specific points of view. The overall plan was criticized as an attempt to create an agency similar to Japan's powerful Ministry of International Trade and Industry, which was abolished in 2001 after some of its initiatives failed and it came to be seen as a hindrance to growth. NOAA's climate and terrestrial operations and its fisheries and endangered species programs would be expected to integrate well with agencies in the Interior Department, such as the United States Geological Survey and the United States Fish and Wildlife Service. However, environmental groups such as the Natural Resources Defense Council feared that the reorganization could distract the agency from its mission of protecting the nation's oceans and ecosystems. The plan was reiterated in the Obama administration's FY2016 budget proposal, released in February 2015.
ArXiv

ArXiv is a repository of electronic preprints approved for posting after moderation, but not full peer review. It consists of scientific papers in the fields of mathematics, physics, astronomy, electrical engineering, computer science, quantitative biology, mathematical finance and economics, which can be accessed online. In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008, and had hit a million by the end of 2014. By October 2016 the submission rate had grown to more than 10,000 per month. ArXiv was made possible by the compact TeX file format, which allowed scientific papers to be transmitted over the Internet and rendered client-side. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Paul Ginsparg recognized the need for central storage, and in August 1991 he created a central repository mailbox stored at the Los Alamos National Laboratory which could be accessed from any computer.
Additional modes of access were soon added: FTP in 1991, Gopher in 1992, and the World Wide Web in 1993. The term e-print was adopted to describe the articles. The repository began as a physics archive, called the LANL preprint archive, but soon expanded to include astronomy, computer science, quantitative biology and, most recently, statistics. Its original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the expanding technology, in 2001 Ginsparg changed institutions to Cornell University and changed the name of the repository to arXiv.org. It is now hosted principally by Cornell, with eight mirrors around the world. Its existence was one of the precipitating factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists upload their papers to arXiv.org for worldwide access and sometimes for review before they are published in peer-reviewed journals. Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. The annual budget for arXiv is $826,000 for 2013 to 2017, funded jointly by Cornell University Library, the Simons Foundation and annual fee income from member institutions.
This model arose in 2010, when Cornell sought to broaden the financial funding of the project by asking institutions to make annual voluntary contributions based on the amount of download usage by each institution. Each member institution pledges a five-year funding commitment to support arXiv. Based on institutional usage ranking, the annual fees are set in four tiers from $1,000 to $4,400. Cornell's goal is to raise at least $504,000 per year through membership fees generated by 220 institutions. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it "was supposed to be a three-hour tour, not a life sentence". However, Ginsparg remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee. Although arXiv is not peer reviewed, a collection of moderators for each area review the submissions; the lists of moderators for many sections of arXiv are publicly available, but moderators for most of the physics sections remain unlisted.
Additionally, an "endorsement" system was introduced in 2004 as part of an effort to ensure content is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors, but to check whether the paper is appropriate for the intended subject area. New authors from recognized academic institutions receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for restricting scientific inquiry. A majority of the e-prints are submitted to journals for publication, but some work, including some influential papers, remain purely as e-prints and are never published in a peer-reviewed journal. A well-known example of the latter is an outline of a proof of Thurston's geometrization conjecture, including the Poincaré conjecture as a particular case, uploaded by Grigori Perelman in November 2002.
Perelman appears content to forgo the traditional peer-reviewed journal process, stating: "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it". Despite this non-traditional method of publication, other mathematicians recognized this work by offering the Fields Medal and the Clay Mathematics Millennium Prize to Perelman, both of which he refused. Papers can be submitted in any of several formats, including LaTeX and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the arXiv software if generating the final PDF file fails, if any image file is too large, or if the total size of the submission is too large. ArXiv now allows one to store and modify an incomplete submission and finalize it only when ready; the time stamp on the article is set when the submission is finalized. The standard access route is through one of several mirrors.
Cryptanalysis

Cryptanalysis is the study of analyzing information systems in order to discover the hidden aspects of the systems. Cryptanalysis is used to breach cryptographic security systems and gain access to the contents of encrypted messages, even if the cryptographic key is unknown. In addition to mathematical analysis of cryptographic algorithms, cryptanalysis includes the study of side-channel attacks that do not target weaknesses in the cryptographic algorithms themselves, but instead exploit weaknesses in their implementation. Though the goal has been the same, the methods and techniques of cryptanalysis have changed drastically through the history of cryptography, adapting to increasing cryptographic complexity, ranging from the pen-and-paper methods of the past, through machines like the British Bombes and Colossus computers at Bletchley Park in World War II, to the mathematically advanced computerized schemes of the present. Methods for breaking modern cryptosystems involve solving constructed problems in pure mathematics, the best-known being integer factorization.
Given some encrypted data, the goal of the cryptanalyst is to gain as much information as possible about the original, unencrypted data. It is useful to consider two aspects of achieving this. The first is breaking the system, that is, discovering how the encipherment process works. The second is solving the key that is unique to a particular encrypted message or group of messages. Attacks can be classified based on what type of information the attacker has available. As a basic starting point it is assumed that, for the purposes of analysis, the general algorithm is known. This is a reasonable assumption in practice: throughout history, there are countless examples of secret algorithms falling into wider knowledge, variously through espionage and reverse engineering. The main attack models are as follows. Ciphertext-only: the cryptanalyst has access only to a collection of ciphertexts or codetexts. Known-plaintext: the attacker has a set of ciphertexts to which he knows the corresponding plaintext. Chosen-plaintext: the attacker can obtain the ciphertexts corresponding to an arbitrary set of plaintexts of his own choosing.
Adaptive chosen-plaintext: like a chosen-plaintext attack, except the attacker can choose subsequent plaintexts based on information learned from previous encryptions. Adaptive chosen-ciphertext: the analogous attack in which the attacker chooses subsequent ciphertexts based on information from previous decryptions. Related-key attack: like a chosen-plaintext attack, except the attacker can obtain ciphertexts encrypted under two different keys; the keys are unknown, but the relationship between them is known. Attacks can also be characterised by the resources they require. Those resources include: Time: the number of computation steps which must be performed. Memory: the amount of storage required to perform the attack. Data: the quantity and type of plaintexts and ciphertexts required for a particular approach. It is sometimes difficult to predict these quantities, especially when the attack is not practical to implement for testing, but academic cryptanalysts tend to provide at least the estimated order of magnitude of their attacks' difficulty, for example, "SHA-1 collisions now 2^52". Bruce Schneier notes that even computationally impractical attacks can be considered breaks: "Breaking a cipher means finding a weakness in the cipher that can be exploited with a complexity less than brute force.
Never mind that brute-force might require 2^128 encryptions." The results of cryptanalysis can vary in usefulness. For example, cryptographer Lars Knudsen classified various types of attack on block ciphers according to the amount and quality of secret information that is discovered: Total break: the attacker deduces the secret key. Global deduction: the attacker discovers a functionally equivalent algorithm for encryption and decryption, but without learning the key. Instance deduction: the attacker discovers additional plaintexts not previously known. Information deduction: the attacker gains some Shannon information about plaintexts not previously known. Distinguishing algorithm: the attacker can distinguish the cipher from a random permutation. Academic attacks are often against weakened versions of a cryptosystem, such as a block cipher or hash function with some rounds removed. Many, but not all, attacks become exponentially more difficult to execute as rounds are added to a cryptosystem, so it is possible for the full cryptosystem to be strong even though reduced-round variants are weak.
Nonetheless, partial breaks that come close to breaking the original cryptosystem may mean that a full break will follow. In academic cryptography, a weakness or a break in a scheme is defined quite conservatively: it might require impractical amounts of time, memory, or known plaintexts, or it might require the attacker to be able to do things many real-world attackers cannot; for example, the attacker may need to choose particular plaintexts to be encrypted, or to ask for plaintexts to be encrypted using several keys related to the secret key.