1.
Cryptographic hash function
–
A cryptographic hash function is a special class of hash function that has certain properties which make it suitable for use in cryptography. It is an algorithm that maps data of arbitrary size to a bit string of a fixed size and is designed to be a one-way function, that is, a function which is infeasible to invert. Bruce Schneier has called one-way hash functions "the workhorses of modern cryptography". The input data is often called the message, and the output is often called the message digest or simply the digest. Most cryptographic hash functions are designed to take a string of any length as input, and a cryptographic hash function must be able to withstand all known types of cryptanalytic attack. In particular, it should satisfy the following properties. Pre-image resistance: given a hash value h, it should be difficult to find any message m such that hash(m) = h. This concept is related to that of a one-way function; functions that lack this property are vulnerable to preimage attacks. Second pre-image resistance: given an input m1, it should be difficult to find a different input m2 such that hash(m1) = hash(m2); functions that lack this property are vulnerable to second-preimage attacks. Collision resistance: it should be difficult to find two different messages m1 and m2 such that hash(m1) = hash(m2); such a pair is called a cryptographic hash collision. This property is sometimes referred to as strong collision resistance, and it requires a hash value at least twice as long as that required for pre-image resistance, since otherwise collisions can be found by a birthday attack. Collision resistance implies second pre-image resistance but does not imply pre-image resistance. Informally, these properties mean that a malicious adversary cannot replace or modify the input data without changing its digest; thus, if two strings have the same digest, one can be confident that they are identical. A function meeting these criteria may still have undesirable properties, such as susceptibility to length-extension, and this can be used to break naive authentication schemes based on hash functions. The HMAC construction works around these problems. In practice, mere collision resistance is insufficient for many practical uses.
In particular, a cryptographic hash function should behave as much as possible like a random function while still being deterministic and efficiently computable. Checksum algorithms, such as CRC32 and other cyclic redundancy checks, are designed to meet much weaker requirements and are generally unsuitable as cryptographic hash functions. For example, a CRC was used for message integrity in the WEP encryption standard, where an attack exploiting the linearity of the checksum was readily discovered. The meaning of the term "difficult" is therefore somewhat dependent on the application, since the effort that a malicious agent may put into the task is usually proportional to his expected gain. However, the needed effort usually grows very quickly with the digest length, so even a modest advantage for the attacker can be neutralized by lengthening the digest. For messages selected from a limited set, for example passwords or other short messages, it can be feasible to invert a hash by trying all possible messages in the set. In some theoretical analyses "difficult" has a specific meaning, such as "not solvable in asymptotic polynomial time".
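The properties above can be illustrated with a minimal sketch using Python's standard hashlib module; the candidate-password list and messages are hypothetical examples, chosen only to show determinism, the avalanche effect, and why hashes of short, guessable messages (like passwords) are invertible by exhaustive search:

```python
import hashlib

# Deterministic: the same message always yields the same digest.
msg = b"The quick brown fox jumps over the lazy dog"
d1 = hashlib.sha256(msg).hexdigest()
d2 = hashlib.sha256(msg).hexdigest()
assert d1 == d2

# Random-function behavior: a one-character change gives an unrelated digest.
d3 = hashlib.sha256(b"The quick brown fox jumps over the lazy cog").hexdigest()
assert d1 != d3

# For messages drawn from a small known set (e.g. weak passwords),
# a dictionary search inverts the hash despite pre-image resistance.
candidates = [b"letmein", b"123456", b"hunter2", b"password"]
target = hashlib.sha256(b"hunter2").hexdigest()
recovered = next(c for c in candidates if hashlib.sha256(c).hexdigest() == target)
assert recovered == b"hunter2"
```

This is why password storage uses salted, deliberately slow hashes rather than a bare cryptographic hash.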
2.
SHA-1
–
SHA-1 produces a 160-bit hash value known as a message digest, typically rendered as a hexadecimal number 40 digits long. SHA-1 is no longer considered secure against well-funded opponents; Microsoft, Google, Apple and Mozilla have all announced that their respective browsers will stop accepting SHA-1 SSL certificates by 2017. On February 23, 2017, CWI Amsterdam and Google announced they had performed a collision attack against SHA-1, publishing two dissimilar PDF files which produce the same SHA-1 hash as proof of concept. SHA-1 produces a message digest based on principles similar to those used by Ronald L. Rivest of MIT in the design of the MD4 and MD5 message digest algorithms. SHA-1 was developed as part of the U.S. government's Capstone project. The original specification of the algorithm was published in 1993 under the title Secure Hash Standard, FIPS PUB 180, by the U.S. government standards agency NIST. This version is now often named SHA-0; it was withdrawn by the NSA shortly after publication and was superseded by the revised version, published in 1995 in FIPS PUB 180-1 and commonly designated SHA-1. SHA-1 differs from SHA-0 only by a single bitwise rotation in the message schedule of its compression function. According to the NSA, this was done to correct a flaw in the original algorithm which reduced its cryptographic security. Publicly available techniques did indeed compromise SHA-0 before SHA-1. SHA-1 forms part of several widely used security applications and protocols, including TLS and SSL, PGP, SSH, S/MIME, and IPsec; those applications can also use MD5, and both MD5 and SHA-1 are descended from MD4. SHA-1 hashing is also used in distributed revision control systems like Git, Mercurial, and Monotone to identify revisions and to detect data corruption or tampering. FIPS PUB 180-1 also encouraged adoption and use of SHA-1 by private and commercial organizations. A prime motivation for the publication of the Secure Hash Algorithm was the Digital Signature Standard, in which it is incorporated.
The SHA hash functions have been used as the basis for the SHACAL block ciphers. Revision control systems such as Git and Mercurial use SHA-1 not for security but for ensuring that the data has not changed due to accidental corruption. Linus Torvalds said about Git: "If you have disk corruption, if you have DRAM corruption, if you have any kind of problems at all, Git will notice them. It's not a question of if, it's a guarantee. You can have people who try to be malicious. They won't succeed. Nobody has been able to break SHA-1, but the point is the SHA-1, as far as Git is concerned, isn't even a security feature. It's purely a consistency check. The security parts are elsewhere, so a lot of people assume that since Git uses SHA-1 and SHA-1 is used for cryptographically secure stuff, they think that, OK, it's a huge security feature. It has nothing at all to do with security, it's just the best hash you can get. One of the reasons I care is for the kernel; we had a break-in on one of the BitKeeper sites where people tried to corrupt the kernel source code repositories." For a hash function for which L is the number of bits in the message digest, finding a message that corresponds to a given digest can always be done using a brute-force search in approximately 2^L evaluations. This is called a preimage attack and may or may not be practical depending on L and the particular computing environment.
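The Git usage described above can be sketched with Python's standard hashlib: Git derives an object ID by SHA-1-hashing a small "blob" header followed by the file contents, so the ID works as a consistency check rather than a security feature. The helper name git_blob_id is our own; the header format and the test vector for "abc" are the documented ones:

```python
import hashlib

# A SHA-1 digest is 160 bits, conventionally rendered as 40 hex digits.
digest = hashlib.sha1(b"abc").hexdigest()
assert len(digest) == 40
assert digest == "a9993e364706816aba3e25717850c26c9cd0d89d"  # FIPS 180-1 test vector

# Git hashes "blob <size>\0" plus the file contents to identify a file;
# any accidental corruption changes the ID.
def git_blob_id(content: bytes) -> str:
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches `git hash-object` on a file containing "hello\n".
assert git_blob_id(b"hello\n") == "ce013625030ba8dba906f756967f9e9ca394464a"
```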
3.
SHA-2
–
SHA-2 is a set of cryptographic hash functions designed by the United States National Security Agency. Cryptographic hash functions are mathematical operations run on digital data; by comparing the computed hash to a known and expected hash value, a person can determine the data's integrity. For example, computing the hash of a downloaded file and comparing the result to a previously published hash result can show whether the download has been modified or tampered with. A key aspect of cryptographic hash functions is their collision resistance. SHA-2 includes significant changes from its predecessor, SHA-1. The SHA-2 family consists of six hash functions with digests that are 224, 256, 384 or 512 bits: SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256. SHA-256 and SHA-512 are novel hash functions computed with 32-bit and 64-bit words, respectively; they use different shift amounts and additive constants, but their structures are otherwise virtually identical. SHA-224 and SHA-384 are simply truncated versions of the first two, computed with different initial values. SHA-512/224 and SHA-512/256 are also truncated versions of SHA-512, but the initial values are generated using the method described in Federal Information Processing Standards PUB 180-4. SHA-2 was published in 2001 by the National Institute of Standards and Technology. The SHA-2 family of algorithms is covered by US patent 6829355, which the United States has released under a royalty-free license. In 2005, an algorithm emerged for finding SHA-1 collisions in about 2,000 times fewer steps than was previously thought possible, and in 2017 an example of a SHA-1 collision was published. The security margin left by SHA-1 is weaker than intended, and its use is therefore no longer recommended for applications that depend on collision resistance, such as digital signatures. Although SHA-2 bears some similarity to the SHA-1 algorithm, these attacks have not been extended to SHA-2.
With the publication of FIPS PUB 180-2, NIST added three additional functions to the SHA family. The algorithms are collectively known as SHA-2, named after their digest lengths: SHA-256, SHA-384, and SHA-512. The algorithms were first published in 2001 in the draft FIPS PUB 180-2, at which time public review and comments were accepted. In August 2002, FIPS PUB 180-2 became the new Secure Hash Standard, replacing FIPS PUB 180-1, which was released in April 1995. The updated standard included the original SHA-1 algorithm, with updated technical notation consistent with that describing the inner workings of the SHA-2 family. In February 2004, a change notice was published for FIPS PUB 180-2, specifying an additional variant, SHA-224. In October 2008, the standard was updated in FIPS PUB 180-3, including SHA-224 from the change notice. The primary motivation for updating the standard was relocating security information about the hash algorithms and recommendations for their use to Special Publications 800-107 and 800-57. Detailed test data and example message digests were also removed from the standard; padding of the final data block must still occur prior to hash output.
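The family structure described above can be checked directly with Python's standard hashlib, which implements the four original SHA-2 variants; the snippet below is a small sketch verifying the digest lengths and the point that SHA-384, despite being a truncated SHA-512-style computation, starts from different initial values and so is not simply a prefix of the SHA-512 digest:

```python
import hashlib

# Digest lengths of the SHA-2 family members available in hashlib.
msg = b"abc"
sizes = {name: hashlib.new(name, msg).digest_size * 8
         for name in ("sha224", "sha256", "sha384", "sha512")}
assert sizes == {"sha224": 224, "sha256": 256, "sha384": 384, "sha512": 512}

# SHA-384 uses the SHA-512 structure with different initial values,
# so its digest is NOT the first 384 bits (96 hex digits) of SHA-512's.
assert hashlib.sha384(msg).hexdigest() != hashlib.sha512(msg).hexdigest()[:96]
```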
4.
SHA-3
–
SHA-3, a subset of the cryptographic primitive family Keccak, is a cryptographic hash function designed by Guido Bertoni, Joan Daemen, Michaël Peeters, and Gilles Van Assche, building upon RadioGatún. SHA-3 is a member of the Secure Hash Algorithm family; the SHA-3 standard was released by NIST on August 5, 2015, and the reference implementation source code was dedicated to the public domain via a CC0 waiver. The Keccak algorithm is based on the earlier hash function designs PANAMA and RadioGatún. PANAMA was designed by Daemen and Craig Clapp in 1998; RadioGatún, a successor of PANAMA, was designed by Daemen, Peeters, and Van Assche. In 2006, NIST started to organize the NIST hash function competition to create a new hash standard, SHA-3. SHA-3 is not meant to replace SHA-2, as no significant attack on SHA-2 has been demonstrated, but because of the successful attacks on MD5, SHA-0 and SHA-1, NIST perceived a need for an alternative, dissimilar cryptographic hash, which became SHA-3. After a setup period, submissions were to be made by the end of 2008, and Keccak was accepted as one of the 51 candidates. In July 2009, 14 algorithms were selected for the second round, and Keccak advanced to the last round in December 2010. During the competition, entrants were permitted to tweak their algorithms to address issues that were discovered. The changes made to Keccak were: the number of rounds was increased from 12 + ℓ to 12 + 2ℓ to be more conservative about security; the message padding was changed from a complex scheme to the simple 10*1 pattern described below; and the rate r was increased to the security limit, rather than rounding down to the nearest power of 2. On October 2, 2012, Keccak was selected as the winner of the competition. In 2014, NIST published a draft of FIPS 202, the SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions; FIPS 202 was approved on August 5, 2015.
On August 5, 2015, NIST announced that SHA-3 had become a hashing standard. SHA-3 uses the sponge construction, in which data is "absorbed" into the sponge and the result is then "squeezed" out. In the absorbing phase, message blocks are XORed into a subset of the state; in the squeezing phase, output blocks are read from the same subset of the state, alternated with state transformations. The size of the part of the state that is written and read is called the rate, and the size of the remaining part is the capacity, which determines the security of the scheme: the maximum security level is half the capacity. In SHA-3, the state consists of a 5 × 5 array of 64-bit words, 1600 bits total. Keccak is also defined for smaller power-of-2 word sizes w down to 1 bit; small state sizes can be used to test cryptanalytic attacks, and intermediate state sizes can be used in practical, lightweight applications. The block transformation is a permutation that uses only XOR, AND and NOT operations; the authors claim 12.5 cycles per byte on an Intel Core 2 CPU.
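The rate/capacity split and the sponge's ability to be squeezed for arbitrary-length output can be observed through Python's standard hashlib; note that exposing the rate as block_size is a CPython implementation detail, assumed here for illustration:

```python
import hashlib

# SHA3-256: 1600-bit state = rate (1088 bits) + capacity (512 bits);
# the security level is capacity / 2 = 256 bits.
h = hashlib.sha3_256(b"message")
assert h.digest_size == 32   # 256-bit digest
assert h.block_size == 136   # rate of 1088 bits, in bytes (CPython detail)

# The sponge can be "squeezed" for arbitrary-length output via the
# SHAKE extendable-output functions standardized alongside SHA-3.
xof = hashlib.shake_128(b"message")
assert len(xof.digest(64)) == 64  # request any number of output bytes
```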
5.
National Institute of Standards and Technology
–
The National Institute of Standards and Technology (NIST) is a measurement standards laboratory and a non-regulatory agency of the United States Department of Commerce. Its mission is to promote innovation and industrial competitiveness. In 1821, John Quincy Adams had declared that "weights and measures may be ranked among the necessities of life to every individual of human society." From 1830 until 1901, the role of overseeing weights and measures was carried out by the Office of Standard Weights and Measures. President Theodore Roosevelt appointed Samuel W. Stratton as the first director. The budget for the first year of operation was $40,000; a laboratory site was constructed in Washington, DC, and instruments were acquired from the national physical laboratories of Europe. In addition to weights and measures, the Bureau developed instruments for electrical units. In 1905 a meeting was called that would become the first National Conference on Weights and Measures. Quality standards were developed for products including some types of clothing, automobile brake systems and headlamps, and antifreeze. During World War I, the Bureau worked on multiple problems related to war production, even operating its own facility to produce optical glass when European supplies were cut off. Between the wars, Harry Diamond of the Bureau developed a blind-approach radio aircraft landing system. In 1948, financed by the Air Force, the Bureau began design and construction of SEAC, the Standards Eastern Automatic Computer; the computer went into operation in May 1950 using a combination of vacuum tubes and solid-state diode logic. About the same time the Standards Western Automatic Computer was built at the Los Angeles office of the NBS and used for research there, and a mobile version, DYSEAC, was built for the Signal Corps in 1954. Due to a changing mission, the National Bureau of Standards became the National Institute of Standards and Technology in 1988.
Following 9/11, NIST conducted the official investigation into the collapse of the World Trade Center buildings. NIST had a budget for fiscal year 2007 of about $843.3 million; its 2009 budget was $992 million, and it also received $610 million as part of the American Recovery and Reinvestment Act. NIST employs about 2,900 scientists, engineers, technicians, and support and administrative personnel, complemented by about 1,800 NIST associates. In addition, NIST partners with 1,400 manufacturing specialists and staff at nearly 350 affiliated centers around the country. NIST publishes Handbook 44, which provides the specifications, tolerances, and other technical requirements for weighing and measuring devices. The Congress of 1866 made use of the metric system in commerce a legally protected activity through the passage of the Metric Act of 1866. NIST is headquartered in Gaithersburg, Maryland, and operates a facility in Boulder, Colorado. NIST's activities are organized into laboratory programs and extramural programs. Effective October 1, 2010, NIST was realigned by reducing the number of NIST laboratory units from ten to six. NIST's Boulder laboratories are best known for NIST-F1, an atomic clock that serves as the source of the nation's official time. NIST also operates a neutron science user facility, the NIST Center for Neutron Research (NCNR); the NCNR provides scientists access to a variety of neutron scattering instruments, which they use in many research fields.
6.
National Security Agency
–
NSA is concurrently charged with protection of U.S. government communications and information systems against penetration and network warfare. Moreover, the NSA maintains a physical presence in a number of countries across the globe; SCS collection tactics allegedly encompass close surveillance, burglary, wiretapping, and breaking and entering. Additionally, the NSA Director simultaneously serves as the Commander of the United States Cyber Command and as Chief of the Central Security Service. Originating as a unit to decipher coded communications in World War II, the agency has seen its surveillance become a matter of political controversy on several occasions, such as its spying on anti-Vietnam War leaders or its economic espionage. In 2013, the extent of some of the NSA's secret surveillance programs was revealed to the public by Edward Snowden. Internationally, research has pointed to the NSA's ability to surveil the domestic Internet traffic of foreign countries through "boomerang routing". The origins of the National Security Agency can be traced back to April 28, 1917, when a code and cipher decryption unit was established as the Cable and Telegraph Section, also known as the Cipher Bureau. It was headquartered in Washington, D.C. and was part of the war effort under the executive branch without direct Congressional authorization; during the course of the war it was relocated in the army's organizational chart several times. On July 5, 1917, Herbert O. Yardley was assigned to head the unit, which at that point consisted of Yardley and two civilian clerks. It absorbed the navy's cryptanalysis functions in July 1918. World War I ended on November 11, 1918, and MI-8 moved to New York City on May 20, 1919, where it continued intelligence activities as the Code Compilation Company under the direction of Yardley. MI-8 also operated the so-called Black Chamber, located on East 37th Street in Manhattan, whose purpose was to crack the codes of foreign governments.
Other Black Chambers were also found in Europe. During World War II, the Signal Security Agency was created to intercept and decipher the communications of the Axis powers. When the war ended, the SSA was reorganized as the Army Security Agency. On May 20, 1949, all cryptologic activities were centralized under a national organization called the Armed Forces Security Agency, established within the U.S. Department of Defense under the command of the Joint Chiefs of Staff. The AFSA was tasked to direct Department of Defense communications and electronic intelligence activities. In December 1951, President Harry S. Truman ordered a panel to investigate how AFSA had failed to achieve its goals; the results of the investigation led to improvements and its redesignation as the National Security Agency. The agency was established by Truman in a memorandum of October 24, 1952. Since President Truman's memo was a classified document, the existence of the NSA was not known to the public at that time.
7.
Claus P. Schnorr
–
Claus-Peter Schnorr is a German mathematician and cryptographer. He received his Ph.D. from the University of Saarbrücken in 1966. Schnorr's contributions to cryptography include his study of Schnorr groups, which are used in the digital signature algorithm bearing his name. Schnorr was a professor of mathematics and computer science at the Johann Wolfgang Goethe University in Frankfurt; he retired in 2011 after working there for 40 years. He is also a Distinguished Associate of RSA Laboratories and a joint recipient of the Gottfried Wilhelm Leibniz Prize in 1993, and he received, with Jean-Jacques Quisquater, the RSA Award for Excellence in Mathematics in 2013. Schnorr held a patent on Schnorr signatures until 2008.
8.
Fermat's little theorem
–
Fermat's little theorem states that if p is a prime number, then for any integer a, the number a^p − a is an integer multiple of p. In the notation of modular arithmetic, this is expressed as a^p ≡ a (mod p). For example, if a = 2 and p = 7, then 2^7 = 128, and 128 − 2 = 126 = 7 × 18 is an integer multiple of 7. If a is not divisible by p, Fermat's little theorem is equivalent to the statement that a^(p−1) − 1 is a multiple of p. For example, if a = 2 and p = 7, then 2^6 = 64 and 64 − 1 = 63 = 7 × 9 is thus a multiple of 7. Fermat's little theorem is the basis for the Fermat primality test and is one of the fundamental results of elementary number theory. The theorem is named after Pierre de Fermat, who stated it in 1640; it is called the "little theorem" to distinguish it from Fermat's last theorem. Pierre de Fermat first stated the theorem in a letter dated October 18, 1640, to his friend and confidant Frénicle de Bessy. An early use in English occurs in A. A. Albert's Modern Higher Algebra, which refers to "the so-called little Fermat theorem" on page 206. Some mathematicians independently made the related hypothesis that 2^p ≡ 2 (mod p) if and only if p is a prime. Indeed, the "if" part is true, and is a special case of Fermat's little theorem. However, the "only if" part of this hypothesis is false: for example, 2^341 ≡ 2 (mod 341), but 341 = 11 × 31 is composite. Several proofs of Fermat's little theorem are known; it is frequently proved as a corollary of Euler's theorem. Fermat's little theorem is a special case of Euler's theorem: for any modulus n and any integer a coprime to n, we have a^φ(n) ≡ 1 (mod n), where φ is Euler's totient function. Euler's theorem is indeed a generalization, because if n = p is a prime number, then φ(p) = p − 1. A slight generalization of Euler's theorem, which follows from it, is: if a, n, x, y are integers with n positive and a coprime to n, then x ≡ y (mod φ(n)) implies a^x ≡ a^y (mod n). This follows as x is of the form y + φ(n)k, so a^x = a^y (a^φ(n))^k ≡ a^y (mod n). In this form, the theorem finds many uses in cryptography and, in particular, underlies the computations used in the RSA public key encryption method.
The special case with n a prime may be considered a consequence of Fermat's little theorem. Fermat's little theorem is also related to the Carmichael function and Carmichael's theorem, as well as to Lagrange's theorem in group theory. The algebraic setting of Fermat's little theorem can be generalized to finite fields. The converse of Fermat's little theorem is not generally true, as it fails for Carmichael numbers. However, a stronger form of the converse is true: if there exists an integer a such that a^(p−1) ≡ 1 (mod p) and a^((p−1)/q) ≢ 1 (mod p) for all primes q dividing p − 1, then p is prime. This theorem forms the basis for the Lucas–Lehmer test, an important primality test.
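The theorem and the Fermat primality test built on it can be checked numerically; the sketch below uses Python's built-in three-argument pow for modular exponentiation, with the helper names fermat_holds and fermat_test being our own:

```python
# Fermat's little theorem: a^p ≡ a (mod p) for prime p.
def fermat_holds(a: int, p: int) -> bool:
    return pow(a, p, p) == a % p

assert fermat_holds(2, 7)                            # 2^7 = 128 ≡ 2 (mod 7)
assert all(fermat_holds(a, 13) for a in range(1, 13))

# Fermat primality test: if pow(a, n-1, n) != 1 for some a coprime to n,
# then n is certainly composite; passing is only probable evidence.
def fermat_test(n: int, a: int = 2) -> bool:
    return pow(a, n - 1, n) == 1

assert fermat_test(97)        # 97 is prime
assert not fermat_test(91)    # 91 = 7 * 13 is composite and caught by base 2
assert fermat_test(341)       # 341 = 11 * 31 passes base 2: a pseudoprime
```

The last assertion shows exactly why the naive converse fails: 341 satisfies the congruence for base 2 despite being composite.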
9.
Sony
–
Sony Corporation is a Japanese multinational conglomerate corporation headquartered in Kōnan, Minato, Tokyo. Its diversified business includes consumer and professional electronics, gaming, entertainment, and financial services, and the company is one of the leading manufacturers of electronic products for the consumer and professional markets; these holdings make Sony one of the most comprehensive entertainment companies in the world. Sony was ranked 116th on the 2015 list of Fortune Global 500. The group consists of Sony Corporation, Sony Pictures Entertainment, Sony Interactive Entertainment, Sony Music Entertainment, Sony Financial Holdings and others. Sony is among the semiconductor sales leaders by year, and as of 2013 the company's slogan is "BE MOVED"; its former slogans were "make.believe" and "like.no.other". Sony has a weak tie to the SMFG keiretsu, the successor to the Mitsui keiretsu. Sony began in the wake of World War II: in 1946, Masaru Ibuka started an electronics shop in a department store building in Tokyo. The company had $530 in capital and a total of eight employees. In the following year he was joined by his colleague Akio Morita, and they founded a company called Tokyo Tsushin Kogyo (東京通信工業). The company built Japan's first tape recorder, called the Type-G, and in 1958 it changed its name to Sony. When Tokyo Tsushin Kogyo was looking for a name to use to market itself, it strongly considered using its initials. The primary reason it did not is that the railway company Tokyo Kyuko was known as TTK. The company occasionally used the acronym "Totsuko" in Japan, but during his visit to the United States, Morita discovered that Americans had trouble pronouncing that name. Another early name that was tried out for a while was "Tokyo Teletech", until Akio Morita discovered that an American company was already using Teletech as a brand name. The name "Sony" was chosen for the brand as a mix of two words.
One was the Latin word sonus, which is the root of "sonic" and "sound", and the other was "sonny", a common slang term used in 1950s America to call a boy. In 1950s Japan, "sonny boys" was a loan word into Japanese which connoted smart and presentable young men. The first Sony-branded product, the TR-55 transistor radio, appeared in 1955. At the time of the change, it was extremely unusual for a Japanese company to use Roman letters to spell its name instead of writing it in kanji, and the move was not without opposition: TTK's principal bank at the time, Mitsui, pushed for a name such as Sony Electronic Industries or Sony Teletech. Akio Morita was firm, however, as he did not want the company name tied to any particular industry. Eventually, both Ibuka and Mitsui Bank's chairman gave their approval. According to Schiffer, Sony's TR-63 radio "cracked open the U.S. market and launched the new industry of consumer microelectronics".
10.
PlayStation 3
–
The PlayStation 3 is a home video game console developed by Sony Computer Entertainment. It is the successor to the PlayStation 2 and is part of the PlayStation brand of consoles. It was first released on November 11, 2006, in Japan; November 17, 2006, in North America; and March 23, 2007, in Europe and Australia. The PlayStation 3 mainly competes against consoles such as Microsoft's Xbox 360. The console was first officially announced at E3 2005 and was released at the end of 2006. It was the first console to use Blu-ray Disc as its storage medium. In September 2009, the Slim model of the PlayStation 3 was released; it no longer provided the hardware ability to run PS2 games, was lighter and thinner than the original version, and featured a redesigned logo and marketing design. A Super Slim variation was released in late 2012, further refining and redesigning the console. The system had a slow start in the market but managed to recover. As of March 2016, the PlayStation 3 has sold 85 million units worldwide, putting it about on par with the Xbox 360; its successor, the PlayStation 4, was released later, in November 2013. On September 29, 2015, Sony confirmed that sales of the PlayStation 3 were to be discontinued in New Zealand, and in March 2017 the official site for the PlayStation 3 in Japan was updated to state that it would be discontinued soon. Sony officially unveiled the PlayStation 3 to the public on May 16, 2005, at E3 2005, where video footage based on the predicted PlayStation 3 specifications was also shown. Two hardware configurations were announced for the console: a 20 GB model and a 60 GB model. The 60 GB model was to be the only configuration to feature an HDMI port, Wi-Fi internet, flash card readers and a chrome trim with the logo in silver. Both models were announced for a worldwide release: November 11, 2006, for Japan and November 17, 2006, for North America.
On September 6, 2006, Sony announced that the PAL region PlayStation 3 launch would be delayed until March 2007 because of a shortage of materials used in the Blu-ray drive. Also, the price of the Japanese 20 GB model was reduced by over 20%. During the event, Sony showed 27 playable PS3 games running on final hardware. The PlayStation 3 was first released in Japan on November 11, 2006, at 07:00. According to Media Create, 81,639 PS3 systems were sold within 24 hours of its introduction in Japan. Soon after its release in Japan, the PS3 was released in North America on November 17, 2006. Reports of violence surrounded the release of the PS3: a customer was shot, campers were robbed at gunpoint, customers were shot in a drive-by shooting with BB guns, and 60 campers fought over 10 systems.
11.
Modular arithmetic
–
In mathematics, modular arithmetic is a system of arithmetic for integers, where numbers "wrap around" upon reaching a certain value, the modulus. The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae. A familiar use of modular arithmetic is in the 12-hour clock, in which the day is divided into two 12-hour periods. If the time is 7:00 now, then 8 hours later it will be 3:00. Usual addition would suggest that the later time should be 7 + 8 = 15, but clocks "wrap around" every 12 hours. Likewise, if the clock starts at 12:00 and 21 hours elapse, then the time will be 9:00 the next day. Because the hour number starts over after it reaches 12, this is arithmetic modulo 12. According to the definition below, 12 is congruent not only to 12 itself, but also to 0. Modular arithmetic can be handled mathematically by introducing a congruence relation on the integers that is compatible with the operations on integers: addition, subtraction, and multiplication. For a positive integer n, two integers a and b are said to be congruent modulo n, written a ≡ b (mod n), if their difference a − b is an integer multiple of n. The number n is called the modulus of the congruence. For example, 38 ≡ 14 (mod 12) because 38 − 14 = 24, which is a multiple of 12. The same rule holds for negative values: −8 ≡ 7 (mod 5), 2 ≡ −3 (mod 5), and −3 ≡ −8 (mod 5). Equivalently, a ≡ b (mod n) can also be thought of as asserting that the remainders of the division of both a and b by n are the same; for instance, 38 ≡ 14 (mod 12) because both 38 and 14 have the same remainder 2 when divided by 12. It is also the case that 38 − 14 = 24 is a multiple of 12. A remark on the notation: because it is common to consider several congruence relations for different moduli at the same time, the modulus is incorporated in the notation. In spite of the ternary notation, the congruence relation for a given modulus is binary. This would have been clearer if the notation a ≡n b had been used. The properties that make this relation a congruence relation are the following: if a1 ≡ b1 (mod n) and a2 ≡ b2 (mod n), then a1 + a2 ≡ b1 + b2 (mod n) and a1 − a2 ≡ b1 − b2 (mod n).
The above two properties would still hold if the theory were expanded to include all real numbers, that is, if a1, a2, b1, b2, n were not necessarily all integers. The next property, however, would fail if these variables were not all integers: a1 a2 ≡ b1 b2 (mod n). The notion of modular arithmetic is related to that of the remainder in Euclidean division. The operation of finding the remainder is referred to as the modulo operation. For example, the remainder of the division of 14 by 12 is denoted by 14 mod 12; as this remainder is 2, we have 14 mod 12 = 2.
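The congruence relation and the modulo operation above can be sketched in Python, whose % operator returns a remainder in [0, n) for positive n (so it handles the negative-value examples correctly); the helper name congruent is our own:

```python
# a ≡ b (mod n) iff n divides a - b; equivalently, a and b leave the
# same remainder on division by n.
def congruent(a: int, b: int, n: int) -> bool:
    return (a - b) % n == 0

assert congruent(38, 14, 12)     # 38 - 14 = 24, a multiple of 12
assert 38 % 12 == 14 % 12 == 2   # same remainder 2 on division by 12
assert congruent(-8, 7, 5)       # the rule also holds for negative values
assert congruent(2, -3, 5)

# 12-hour clock: 7:00 plus 8 hours is 3:00, i.e. (7 + 8) mod 12 = 3.
assert (7 + 8) % 12 == 3
```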
12.
Wayback Machine
–
The Internet Archive launched the Wayback Machine in October 2001. It was set up by Brewster Kahle and Bruce Gilliat, and is maintained with content from Alexa Internet. The service enables users to see archived versions of web pages across time, which the archive calls a "three dimensional index". Since 1996, the Wayback Machine has been archiving cached pages of websites onto its large cluster of Linux nodes; it revisits sites every few weeks or months and archives a new version. Sites can also be captured on the fly by visitors who enter a site's URL into a search box. The intent is to capture and archive content that otherwise would be lost whenever a site is changed or closed down; the overall vision of the machine's creators is to archive the entire Internet. The name Wayback Machine was chosen as a reference to the "WABAC machine", a time-traveling device used by the characters Mr. Peabody and Sherman in The Rocky and Bullwinkle Show, an animated cartoon. The archive's crawlers also respect the robots exclusion standard for websites whose owners opt for them not to appear in search results or be cached. Archive-It was developed to overcome inconsistencies in partially cached websites. Information had been kept on digital tape for five years, with Kahle occasionally allowing researchers access; when the archive reached its fifth anniversary, it was unveiled and opened to the public in a ceremony at the University of California, Berkeley. Snapshots usually become available more than six months after they are archived or, in some cases, even later. The frequency of snapshots is variable, so not all tracked website updates are recorded, and sometimes there are intervals of several weeks or years between snapshots. After August 2008, sites had to be listed on the Open Directory in order to be included.
As of 2009, the Wayback Machine contained approximately three petabytes of data and was growing at a rate of 100 terabytes each month; the growth rate reported in 2003 was 12 terabytes per month. The data is stored on PetaBox rack systems manufactured by Capricorn Technologies. In 2009, the Internet Archive migrated its customized storage architecture to Sun Open Storage. In 2011, a new, improved version of the Wayback Machine, with an updated interface and a fresher index of archived content, was made available for public testing; the index driving the classic Wayback Machine has only a little material past 2008. In January 2013, the company announced a ground-breaking milestone of 240 billion URLs. In October 2013, the company announced the "Save a Page" feature, which allows any Internet user to archive the contents of a URL; this feature soon became a target of abuse as a means of hosting malicious binaries. As of December 2014, the Wayback Machine contained almost nine petabytes of data and was growing at a rate of about 20 terabytes each week. Between October 2013 and March 2015, the website's global Alexa rank changed from 162 to 208. In a 2009 case, Netbula, LLC v. Chordiant Software Inc., defendant Chordiant filed a motion to compel Netbula to disable the robots.txt file on its website; Netbula objected to the motion on the ground that defendants were asking to alter Netbula's website. In an October 2004 case, Telewizja Polska USA, Inc. v. Echostar Satellite, No. 02 C 3293, 65 Fed. 673, a litigant attempted to use the Wayback Machine archives as a source of admissible evidence; Telewizja Polska is the provider of TVP Polonia and EchoStar operates the Dish Network.
13.
ArXiv
–
In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008; by 2014 the submission rate had grown to more than 8,000 per month. The arXiv was made possible by the low-bandwidth TeX file format. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Additional modes of access were added later: FTP in 1991 and Gopher in 1992. The term e-print was quickly adopted to describe the articles, and the repository's original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the rapidly expanding technology, Ginsparg changed institutions to Cornell University in 1999, and the service is now hosted principally by Cornell, with eight mirrors around the world. Its existence was one of the factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists regularly upload their papers to arXiv.org for worldwide access; Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. The annual budget for arXiv is approximately $826,000 for 2013 to 2017, funded jointly by Cornell University Library and member institutions; annual donations were envisaged to vary in size between $2,300 and $4,000, based on each institution's usage. As of 14 January 2014, 174 institutions had pledged support for the period 2013–2017 on this basis. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it "was supposed to be a three-hour tour"; however, he remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee.
The lists of moderators for many sections of the arXiv are publicly available. Additionally, an endorsement system was introduced in 2004 as part of an effort to ensure that content is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors. New authors from recognized academic institutions generally receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for allegedly restricting scientific inquiry. Some authors are content to publish on arXiv alone: Grigori Perelman, for example, appears content to forgo the traditional peer-reviewed journal process, stating, "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it." The arXiv generally re-classifies borderline works rather than rejecting them, e.g. into General Mathematics. Papers can be submitted in any of several formats, including LaTeX, and PDF printed from a word processor other than TeX or LaTeX. A submission is rejected by the software if generating the final PDF file fails or if any image file is too large. arXiv now allows one to store and modify an incomplete submission; the time stamp on the article is set when the submission is finalized.
14.
Public-key cryptography
–
In a public key encryption system, any person can encrypt a message using the public key of the receiver, but such a message can be decrypted only with the receiver's private key. For this to work it must be easy for a user to generate a public and private key pair. The strength of a public key cryptography system relies on the degree of difficulty of determining a properly generated private key from its public key. Security then depends only on keeping the private key private. Public key algorithms, unlike symmetric key algorithms, do not require a secure channel for the initial exchange of one or more secret keys between the parties. Because of the computational complexity of asymmetric encryption, it is usually used only for small blocks of data, typically the transfer of a symmetric encryption key. This symmetric key is then used to encrypt the rest of the potentially long message sequence; symmetric encryption and decryption are based on simpler algorithms and are much faster. In a public key signature system, a person can combine a message with a private key to create a short digital signature on the message; the authenticity of the message can then be demonstrated by checking the signature against the corresponding public key. Public key algorithms are fundamental security ingredients in cryptosystems, applications, and protocols. They underpin various Internet standards, such as Transport Layer Security, S/MIME, and PGP. Some public key algorithms provide key distribution and secrecy, some provide digital signatures, and some provide both. Public key cryptography finds application in, among others, the discipline of information security, which is concerned with all aspects of protecting electronic information assets against security threats. Public key cryptography is used as a method of assuring the confidentiality, authenticity, and non-repudiability of electronic communications. Two of the best-known uses of public key cryptography are: public key encryption, in which a message is encrypted with a recipient's public key.
The message cannot be decrypted by anyone who does not possess the matching private key; the holder of that key is thus presumed to be its legitimate owner. This is used in an attempt to ensure confidentiality. The second use is digital signatures, in which a message is signed with the sender's private key and can be verified by anyone who has access to the sender's public key; this verification proves that the sender had access to the private key. An analogy to public key encryption is that of a locked mail box with a mail slot. The mail slot is exposed and accessible to the public; its location is, in essence, the public key. Anyone knowing the street address can go to the door and drop a written message through the slot; however, only the person who possesses the key can open the mailbox and read the message. An analogy for digital signatures is the sealing of an envelope with a personal wax seal.
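The hybrid pattern described above, wrapping a random symmetric session key with the recipient's public key and encrypting the bulk data symmetrically, can be sketched in a few lines. This is a toy illustration only: the key pair is a textbook-RSA example with tiny numbers, and the "symmetric cipher" is a single-byte XOR standing in for a real algorithm such as AES.

```python
import secrets

# Toy textbook-RSA key pair (illustrative only; real systems use
# keys of 2048+ bits and randomized padding such as OAEP).
n, e, d = 3233, 17, 413  # n = 61 * 53; e*d = 7021 = 1 (mod lcm(60, 52))

def xor_cipher(key: int, data: bytes) -> bytes:
    """Stand-in symmetric cipher: repeating single-byte XOR.
    A real hybrid scheme would use AES-GCM or similar here."""
    return bytes(b ^ key for b in data)

# Sender: pick a random session key, encrypt the bulk data with it,
# and wrap the session key using the recipient's public key (n, e).
session_key = secrets.randbelow(256)
ciphertext = xor_cipher(session_key, b"a potentially long message")
wrapped_key = pow(session_key, e, n)

# Recipient: unwrap the session key with the private exponent d,
# then decrypt the bulk data symmetrically.
recovered_key = pow(wrapped_key, d, n)
plaintext = xor_cipher(recovered_key, ciphertext)
assert plaintext == b"a potentially long message"
```

Only the short session key ever passes through the slow asymmetric operation; the long message is handled entirely by the fast symmetric cipher, which is the point of the hybrid construction.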
15.
Integer factorization
–
In number theory, integer factorization is the decomposition of a composite number into a product of smaller integers. If these integers are further restricted to prime numbers, the process is called prime factorization. When the numbers are sufficiently large, no efficient, non-quantum integer factorization algorithm is known; however, it has not been proven that no efficient algorithm exists. The presumed difficulty of this problem is at the heart of widely used algorithms in cryptography such as RSA. Many areas of mathematics and computer science have been brought to bear on the problem, including elliptic curves and algebraic number theory. Not all numbers of a given length are equally hard to factor: the hardest instances are semiprimes, the product of two prime numbers. Many cryptographic protocols are based on the difficulty of factoring large composite integers or a related problem, for example the RSA problem. An algorithm that efficiently factored an arbitrary integer would render RSA-based public-key cryptography insecure. By the fundamental theorem of arithmetic, every positive integer has a unique prime factorization. If the integer is prime, it can be recognized as such in polynomial time; if it is composite, however, the theorem gives no insight into how to obtain the factors. Given a general algorithm for integer factorization, any integer can be factored down to its constituent prime factors simply by repeated application of this algorithm. The situation is more complicated with special-purpose factorization algorithms, whose benefits may not be realized as well, or even at all, with the factors produced later in the decomposition. For example, if N = 10 × p × q where p < q are very large primes, trial division will quickly produce the factors 2 and 5 but will take on the order of p divisions to find the next factor.
Among b-bit numbers, the most difficult to factor in practice using existing algorithms are those that are products of two primes of similar size; for this reason, these are the integers used in cryptographic applications. The largest such semiprime yet factored was RSA-768, a 768-bit number with 232 decimal digits. This factorization was a collaboration of several research institutions, spanning two years and taking the equivalent of almost 2000 years of computing on a single-core 2.2 GHz AMD Opteron. Like all recent factorization records, it was completed with an optimized implementation of the general number field sieve run on hundreds of machines. No algorithm has been published that can factor all integers in polynomial time. Neither the existence nor the non-existence of such an algorithm has been proved, but it is generally suspected that none exists and hence that the problem is not in class P. The problem is clearly in class NP, but it has not been proved to be, or not to be, NP-complete; it is generally suspected not to be NP-complete. There are published algorithms that are faster than O((1 + ε)^b) for all positive ε, i.e., sub-exponential. The best published asymptotic running time is that of the general number field sieve (GNFS) algorithm, which, for a b-bit number n, is O(exp(((64/9) b)^(1/3) (log b)^(2/3))). For current computers, GNFS is the best published algorithm for large n. For a quantum computer, however, Peter Shor discovered an algorithm in 1994 that solves the problem in polynomial time.
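The N = 10 × p × q example above can be made concrete with a short trial-division routine. The primes used here are small illustrative stand-ins for the "very large primes" of the text.

```python
def trial_division(n: int) -> list[int]:
    """Factor n by trial division: repeatedly divide out the
    smallest remaining factor. Small factors fall out quickly,
    but reaching a large prime factor p takes on the order of
    p trial divisions."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

# Mirrors the example in the text: for N = 10 * p * q the small
# factors 2 and 5 appear immediately, while finding p dominates
# the running time.
p, q = 104729, 1299709
assert trial_division(10 * p * q) == [2, 5, p, q]
```

This is exactly the behavior the text describes for special-purpose methods: their cost is governed by the size of the factors, not the size of N, which is why cryptographic moduli are chosen as products of two primes of similar size.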
16.
RSA (cryptosystem)
–
RSA is one of the first practical public-key cryptosystems and is widely used for secure data transmission. In such a cryptosystem, the encryption key is public and differs from the decryption key, which is kept secret. In RSA, this asymmetry is based on the practical difficulty of factoring the product of two large prime numbers, the factoring problem. The name RSA is made up of the initial letters of the surnames of Ron Rivest, Adi Shamir, and Leonard Adleman. Clifford Cocks, an English mathematician working for the UK intelligence agency GCHQ, had developed an equivalent system in 1973. A user of RSA creates and then publishes a public key based on two large prime numbers, along with an auxiliary value; the prime numbers must be kept secret. Breaking RSA encryption is known as the RSA problem; whether it is as hard as the factoring problem remains an open question. RSA is a relatively slow algorithm, and because of this it is less commonly used to directly encrypt user data. More often, RSA passes encrypted shared keys for symmetric key cryptography, which in turn can perform bulk encryption and decryption operations at much higher speed. The idea of an asymmetric public-private key cryptosystem is attributed to Whitfield Diffie and Martin Hellman, who also introduced digital signatures and attempted to apply number theory; their formulation used a shared secret key created from exponentiation of some number, modulo a prime number. However, they left open the problem of realizing a one-way function. Ron Rivest, Adi Shamir, and Leonard Adleman at MIT made several attempts over the course of a year to create a function that is hard to invert. Rivest and Shamir, as computer scientists, proposed many potential functions, while Adleman, as a mathematician, was responsible for finding their weaknesses. They tried many approaches, including knapsack-based schemes and permutation polynomials; for a time they thought that what they wanted to achieve was impossible due to contradictory requirements.
In April 1977, they spent Passover at the house of a student. Rivest, unable to sleep, lay on the couch with a math textbook and started thinking about their one-way function. He spent the rest of the night formalizing his idea and had much of the paper ready by daybreak; the algorithm is now known as RSA, the initials of their surnames in the same order as on their paper. Clifford Cocks, an English mathematician working for the UK intelligence agency GCHQ, had described an equivalent system in 1973. However, given the relatively expensive computers needed to implement it at the time, it was mostly considered a curiosity and, as far as is publicly known, was never deployed; his discovery was not revealed until 1997 due to its secret classification. Kid-RSA is a simplified public-key cipher published in 1997, designed for educational purposes; some people feel that learning Kid-RSA gives insight into RSA and other public-key ciphers. MIT was granted U.S. Patent 4,405,829 for a "Cryptographic communications system and method" that used the algorithm, on September 20, 1983.
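The scheme the three arrived at can be sketched with textbook-RSA arithmetic. The primes below (61 and 53) are the classic tiny teaching example; real deployments use primes of roughly 1024 bits or more, together with padding, so this is an illustration of the mathematics, not a usable implementation.

```python
from math import gcd

def toy_rsa_keypair(p: int, q: int, e: int = 17):
    """Generate a textbook-RSA key pair from two primes p and q.
    Illustrative only: no padding, tiny parameters, no validation
    beyond checking that e is invertible mod phi(n)."""
    n = p * q
    phi = (p - 1) * (q - 1)
    assert gcd(e, phi) == 1, "e must be coprime to phi(n)"
    d = pow(e, -1, phi)          # modular inverse of e (Python 3.8+)
    return (n, e), (n, d)        # public key, private key

public, private = toy_rsa_keypair(61, 53)
n, e = public
_, d = private

m = 65                            # message, must satisfy 0 <= m < n
c = pow(m, e, n)                  # encrypt with the public key
assert pow(c, d, n) == m          # decrypt with the private key
```

The security rests on the fact that recovering d from (n, e) requires knowing phi(n), which in turn requires factoring n into p and q, the factoring problem the text describes.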