1.
RSA (cryptosystem)
–
RSA is one of the first practical public-key cryptosystems and is widely used for secure data transmission. In such a cryptosystem, the encryption key is public and differs from the decryption key, which is kept secret. In RSA, this asymmetry is based on the practical difficulty of factoring the product of two large prime numbers, the factoring problem. The acronym RSA is made of the initial letters of the surnames of Ron Rivest, Adi Shamir, and Leonard Adleman, who first publicly described the algorithm. Clifford Cocks, an English mathematician working for the UK intelligence agency GCHQ, had developed an equivalent system in 1973, but it was not declassified until 1997. A user of RSA creates and then publishes a public key based on two large prime numbers, along with an auxiliary value. The prime numbers must be kept secret. Breaking RSA encryption is known as the RSA problem; whether it is as hard as the factoring problem remains an open question. RSA is a relatively slow algorithm, and because of this it is less commonly used to directly encrypt user data. More often, RSA passes encrypted shared keys for symmetric key cryptography, which in turn can perform bulk encryption-decryption operations at much higher speed. The idea of an asymmetric public-private key cryptosystem is attributed to Whitfield Diffie and Martin Hellman, who also introduced digital signatures and attempted to apply number theory. Their formulation used a shared secret key created from exponentiation of some number, modulo a prime number. However, they left open the problem of realizing a one-way function. Ron Rivest, Adi Shamir, and Leonard Adleman at MIT made several attempts over the course of a year to create a one-way function that is hard to invert. Rivest and Shamir, as computer scientists, proposed many potential functions, while Adleman, as a mathematician, was responsible for finding their weaknesses. They tried many approaches, including knapsack-based systems and permutation polynomials. For a time they thought what they wanted to achieve was impossible, due to contradictory requirements.
In April 1977, they spent Passover at the house of a student. Rivest, unable to sleep, lay on the couch with a math textbook and started thinking about their one-way function. He spent the rest of the night formalizing his idea and had much of the paper ready by daybreak. The algorithm is now known as RSA, the initials of their surnames in the same order as on their paper. Clifford Cocks, an English mathematician working for the UK intelligence agency GCHQ, had described an equivalent system in an internal document in 1973. However, given the relatively expensive computers needed to implement it at the time, it was mostly considered a curiosity and, as far as is publicly known, was never deployed; his discovery was not revealed until 1997 due to its secret classification. Kid-RSA is a simplified public-key cipher published in 1997, designed for educational purposes; some people feel that learning Kid-RSA gives insight into RSA and other public-key ciphers. MIT was granted U.S. Patent 4,405,829 for a "Cryptographic communications system and method" that used the algorithm, on September 20, 1983.
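The key setup described above (two secret primes, a public key with an auxiliary exponent, and a private decryption key) can be sketched with textbook-sized numbers. This is a toy illustration only: the primes, exponent, and message below are arbitrary assumptions, and real RSA uses enormous primes together with padding such as OAEP.

```python
# Toy "textbook" RSA sketch -- never use for real security.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = egcd(a, m)
    assert g == 1, "a must be coprime to m"
    return x % m

# Key generation from two (secret) primes.
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)      # Euler's totient; must stay secret
e = 17                       # public exponent, coprime to phi
d = modinv(e, phi)           # private exponent

# Encrypt with the public key (n, e); decrypt with the private d.
m = 42
c = pow(m, e, n)
assert pow(c, d, n) == m     # decryption recovers the message
```

Recovering d from (n, e) alone requires knowing phi, which in turn requires factoring n; with 61 and 53 that is trivial, which is why real moduli are hundreds of digits long.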
2.
Daniel J. Bernstein
–
In the mid-1990s, Internet software was not designed for security, and cryptography was export-controlled. Bernstein addressed cryptography by suing the United States government in 1995 (Bernstein v. United States) and addressed software by writing his own packages for email, the web, and DNS. The software came with a security guarantee that achieved significant status during the eight years in which no bugs were found. Bernstein was merciless in his criticism of the then-leading email and DNS software packages, of the large teams which supported them, and of the people who distributed them: Sendmail and BIND were both significantly less efficient, more difficult to configure, and bug-prone by design, resulting in a steady flow of significant bugs. His programs qmail, publicfile, and djbdns were originally released as license-free software, an issue that was resolved when Bernstein released their source code into the public domain in 2007. He attended Bellport High School, a high school on Long Island; in 1987 he ranked fifth in the Westinghouse Science Talent Search and achieved a Top 10 ranking in the William Lowell Putnam Mathematical Competition. Bernstein earned his bachelor's degree in mathematics from New York University and his PhD in mathematics from the University of California, Berkeley. The ruling in Bernstein v. United States declared software to be protected speech under the First Amendment. Bernstein was originally represented by the Electronic Frontier Foundation, but he later represented himself despite having no formal training as a lawyer. In the autumn of 2004, Bernstein taught a course on computer software security in which the sixteen members of the class discovered 91 new UNIX security holes. Bernstein explained in 2005 that he is pursuing a strategy to produce invulnerable computer systems, concluding, "I won't be satisfied until I've put the entire security industry out of work."
In spring 2005, Bernstein taught a course on high-speed cryptography, and in the same period he demonstrated new results against implementations of AES. As of April 2008, Bernstein's stream cipher Salsa20 was selected as a member of the final portfolio of the eSTREAM project. In 2011, Bernstein published RFSB, a variant of the Fast Syndrome Based Hash function. A claimed exploit of qmail was later published, but Bernstein maintains that it does not fall within the parameters of the qmail security guarantee. In March 2009, Bernstein awarded $1000 to Matthew Dempsky for finding a security hole in djbdns, and in August 2008 Bernstein announced DNSCurve, a proposal to secure the Domain Name System. Additionally, the signature system used in OpenBSD for signing releases and packages is based entirely on algorithms by Bernstein; both the signed releases and the extra cryptography in OpenSSH first appeared in OpenBSD 5.5. Bernstein has published a number of papers on mathematics and computation. Many of his papers deal with algorithms or implementations, and he also wrote a survey titled "Multidigit multiplication for mathematicians".
3.
ArXiv
–
In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008, and by 2014 the submission rate had grown to more than 8,000 per month. The arXiv was made possible by the compact, low-bandwidth TeX file format. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Additional modes of access were added: FTP in 1991 and Gopher in 1992. The term e-print was quickly adopted to describe the articles, and arXiv's original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the rapidly expanding technology, Ginsparg changed institutions in 1999 to Cornell University, and arXiv is now hosted principally by Cornell, with 8 mirrors around the world. Its existence was one of the factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists regularly upload their papers to arXiv.org for worldwide access. Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. The annual budget for arXiv is approximately $826,000 for 2013 to 2017, funded jointly by Cornell University Library and annual donations from member institutions; annual donations were envisaged to vary between $2,300 and $4,000, based on each institution's usage. As of 14 January 2014, 174 institutions had pledged support for the period 2013–2017 on this basis. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying "it was supposed to be a three-hour tour"; however, he remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee.
The lists of moderators for many sections of the arXiv are publicly available. Additionally, an endorsement system was introduced in 2004 as part of an effort to ensure content that is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors, and new authors from recognized academic institutions generally receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for allegedly restricting scientific inquiry. Grigori Perelman, who posted his work only on arXiv, appears content to forgo the traditional peer-reviewed journal process, stating, "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it." While the arXiv does contain some dubious e-prints, it generally re-classifies these works, e.g. into General mathematics, rather than deleting them. Papers can be submitted in any of several formats, including LaTeX and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the software if generating the final PDF file fails or if any image file is too large. ArXiv now allows one to store and modify an incomplete submission; the time stamp on the article is set when the submission is finalized.
4.
Optimal asymmetric encryption padding
–
In cryptography, Optimal Asymmetric Encryption Padding (OAEP) is a padding scheme often used together with RSA encryption. OAEP was introduced by Bellare and Rogaway and subsequently standardized in PKCS#1 v2. The OAEP algorithm is a form of Feistel network which uses a pair of random oracles G and H to process the plaintext prior to asymmetric encryption. When implemented with certain trapdoor permutations, OAEP is also proved secure against chosen ciphertext attack, and OAEP can be used to build an all-or-nothing transform. OAEP satisfies the following two goals: it adds an element of randomness which can be used to convert a deterministic encryption scheme into a probabilistic scheme, and it prevents partial decryption of ciphertexts by ensuring that an adversary cannot recover any portion of the plaintext without being able to invert the trapdoor one-way permutation f. The original version of OAEP showed a form of plaintext awareness in the random oracle model when OAEP is used with any trapdoor permutation. Subsequent results contradicted this claim, showing that OAEP was only IND-CCA1 secure; however, the original scheme was proved in the random oracle model to be IND-CCA2 secure when OAEP is used with the RSA permutation using standard encryption exponents, as in the case of RSA-OAEP. An improved scheme that works with any trapdoor one-way permutation was offered by Victor Shoup to solve this problem. More recent work has shown that in the standard model it is impossible to prove the IND-CCA2 security of RSA-OAEP under the assumed hardness of the RSA problem. In the diagram, n is the number of bits in the RSA modulus, and k0 and k1 are integers fixed by the protocol. M is the message, an (n − k0 − k1)-bit string. G and H are typically cryptographic hash functions fixed by the protocol, and ⊕ is the XOR operation. To encode, the message is padded with k1 zeros to be n − k0 bits in length, r is a randomly generated k0-bit string, and G expands the k0 bits of r to n − k0 bits:
X = m00…0 ⊕ G(r). H then reduces the n − k0 bits of X to k0 bits: Y = r ⊕ H(X). The output is X || Y, where X is shown in the diagram as the leftmost block and Y as the rightmost block. Since any changed bit of a cryptographic hash completely changes the result, the entire X and the entire Y must both be completely recovered.
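The encoding above, together with the matching decoding (recover r = Y ⊕ H(X), then m || 00…0 = X ⊕ G(r)), can be sketched as follows. This is a toy rendition: parameters are measured in bytes rather than bits for simplicity, SHA-256 stands in for the oracles G and H, and the lengths are arbitrary assumptions rather than the PKCS#1 parameters.

```python
import hashlib
import os

# Toy OAEP-style padding sketch (byte-oriented; illustrative only).
K0 = 16          # length of the random string r, in bytes
K1 = 8           # number of zero bytes appended to the message
N = 48           # total pad length, so the message is N - K0 - K1 bytes

def G(r):
    # Expands the K0 bytes of r to N - K0 bytes (counter-mode hashing).
    out = b""
    counter = 0
    while len(out) < N - K0:
        out += hashlib.sha256(r + bytes([counter])).digest()
        counter += 1
    return out[:N - K0]

def H(x):
    # Reduces the N - K0 bytes of X to K0 bytes.
    return hashlib.sha256(x).digest()[:K0]

def xor(a, b):
    return bytes(i ^ j for i, j in zip(a, b))

def oaep_encode(m, r):
    padded = m + b"\x00" * K1            # m || 00...0
    X = xor(padded, G(r))                # X = (m || 00...0) xor G(r)
    Y = xor(r, H(X))                     # Y = r xor H(X)
    return X + Y                         # X || Y, fed to the RSA permutation

def oaep_decode(XY):
    X, Y = XY[:N - K0], XY[N - K0:]
    r = xor(Y, H(X))                     # recover the random string
    padded = xor(X, G(r))                # recover m || 00...0
    return padded[:-K1]

msg = b"attack at dawn!!@@##$$%%"        # exactly N - K0 - K1 = 24 bytes
r = os.urandom(K0)
assert oaep_decode(oaep_encode(msg, r)) == msg
```

Note how decoding needs the whole X to recompute H(X) and the whole Y to recover r, mirroring the all-or-nothing property described above.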
5.
Web of trust
–
In cryptography, a web of trust is a concept used in PGP, GnuPG, and other OpenPGP-compatible systems to establish the authenticity of the binding between a public key and its owner. Its decentralized trust model is an alternative to the centralized trust model of a public key infrastructure (PKI). As with computer networks, there are many independent webs of trust, and any user can be a part of, and a link between, multiple webs. OpenPGP identity certificates can be signed by other users who, by that act, attest to the association of the public key with the person or entity named in the certificate. This is commonly done at key signing parties. OpenPGP-compliant implementations also include a vote-counting scheme which can be used to determine which public key–owner association a user will trust while using PGP. The parameters are user-adjustable and can be bypassed if desired. The scheme is flexible, unlike most public key infrastructure designs, but it is not perfect and requires both caution and intelligent supervision by users. Essentially all PKI designs are less flexible and require users to follow the trust endorsements of the PKI-generated, certificate-authority-signed certificates. There are two keys pertaining to a person: a public key which is shared openly, and a private key that is withheld by the owner. The owner's private key will decrypt any information encrypted with its public key. In the web of trust, each user has a keyring holding other people's public keys. Users encrypt their information with the recipient's public key, and only the recipient's private key will decrypt it. Each user then digitally signs the information with their private key; doing this ensures that the information came from the specific user and has not been tampered with, and only the intended recipient can read the information. In contrast, a typical X.509 PKI permits each certificate to be signed only by a single party: a certificate authority. The CA's certificate may itself be signed by a different CA, all the way up to a self-signed root certificate.
Root certificates must be available to those who use a lower-level CA certificate, and so they are distributed with such applications as browsers and email clients. In this way SSL/TLS-protected Web pages, email messages, etc. can be authenticated without requiring users to manually install root certificates. Applications commonly include one hundred or more root certificates from dozens of PKIs. The OpenPGP web of trust is essentially unaffected by such things as company failures; however, a related problem does occur: users, whether individuals or organizations, who lose track of a private key can no longer decrypt messages sent to them that were produced using the matching public key found in an OpenPGP certificate. Early PGP certificates did not include expiry dates, and those certificates had unlimited lives.
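The signatures described above form a directed graph: an edge from A to B means A has signed B's key. One simple (and simplified) trust policy is to accept a key if it is reachable from one's own key within a bounded number of signature hops; this is a sketch of that idea only, with hypothetical names, and it deliberately ignores PGP's real marginal/full trust counting.

```python
from collections import deque

# Hypothetical web of trust: signatures["alice"] lists the keys
# that alice has personally signed.
signatures = {
    "alice": ["bob", "carol"],
    "bob":   ["dave"],
    "carol": ["dave", "erin"],
    "dave":  [],
    "erin":  ["frank"],
    "frank": [],
}

def is_trusted(me, target, max_hops=2):
    # Breadth-first search over signature edges, at most max_hops deep.
    queue = deque([(me, 0)])
    seen = {me}
    while queue:
        user, hops = queue.popleft()
        if user == target:
            return True
        if hops == max_hops:
            continue          # policy boundary: do not follow further
        for signed in signatures.get(user, []):
            if signed not in seen:
                seen.add(signed)
                queue.append((signed, hops + 1))
    return False

print(is_trusted("alice", "dave"))   # True: alice -> bob -> dave
print(is_trusted("alice", "frank"))  # False: three hops, outside policy
```

Tightening or loosening `max_hops` plays the role of the user-adjustable parameters mentioned above: the same signature graph yields different trusted sets under different policies.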
6.
Public-key cryptography
–
In a public key encryption system, any person can encrypt a message using the public key of the receiver, but such a message can be decrypted only with the receiver's private key. For this to work, it must be easy for a user to generate a public and private key pair. The strength of a public key cryptography system relies on the degree of difficulty of determining a properly generated private key from its public key. Security then depends only on keeping the private key private. Public key algorithms, unlike symmetric key algorithms, do not require a secure channel for the exchange of one or more secret keys between the parties. Because of the computational complexity of asymmetric encryption, it is usually used only for small blocks of data, typically the transfer of a symmetric encryption key. This symmetric key is then used to encrypt the rest of the potentially long message sequence; symmetric encryption/decryption is based on simpler algorithms and is much faster. In a public key signature system, a person can combine a message with a private key to create a short digital signature on the message; the authenticity of a message can thus be demonstrated by the signature. Public key algorithms are fundamental security ingredients in cryptosystems, applications, and protocols. They underpin various Internet standards, such as Transport Layer Security, S/MIME, and PGP. Some public key algorithms provide key distribution and secrecy, some provide digital signatures, and some provide both. Public key cryptography finds application in, among others, the information technology security discipline of information security, which is concerned with all aspects of protecting electronic information assets against security threats. Public key cryptography is used as a method of assuring the confidentiality, authenticity, and non-repudiability of electronic communications. Two of the best-known uses of public key cryptography are: public key encryption, in which a message is encrypted with a recipient's public key.
The message cannot be decrypted by anyone who does not possess the matching private key, and its holder is presumed to be the owner of that key; this is used in an attempt to ensure confidentiality. And digital signatures, in which a message is signed with the sender's private key and can be verified by anyone who has access to the sender's public key; this verification proves that the sender had access to the private key. An analogy for public key encryption is that of a locked mailbox with a mail slot. The mail slot is exposed and accessible to the public, and its location (the street address) is, in essence, the public key: anyone knowing the street address can go to the door and drop a written message through the slot. However, only the person who possesses the key can open the mailbox and read the messages. An analogy for digital signatures is the sealing of an envelope with a personal wax seal.
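The hybrid pattern described above, where a public key algorithm transports a short symmetric key and a symmetric cipher encrypts the bulk data, can be sketched with toy components. Everything here is an illustrative assumption: the RSA primes are textbook-sized, and a SHA-256 counter-mode keystream stands in for a real symmetric cipher such as AES.

```python
import hashlib
import os

# Toy RSA key pair (small primes; utterly insecure, for illustration).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)

def stream_xor(key, data):
    # Toy stream cipher: XOR data with a SHA-256 counter-mode keystream.
    out = bytearray()
    for i, byte in enumerate(data):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.append(byte ^ block[0])
    return bytes(out)

# Sender: pick a random symmetric key small enough for the toy modulus,
# wrap it with the receiver's RSA public key, encrypt the bulk data.
sym_int = int.from_bytes(os.urandom(2), "big") % n
sym_key = sym_int.to_bytes(2, "big")
wrapped = pow(sym_int, e, n)
ciphertext = stream_xor(sym_key, b"a long message encrypted in bulk")

# Receiver: unwrap the symmetric key with d, then decrypt the bulk data.
recovered = pow(wrapped, d, n).to_bytes(2, "big")
assert recovered == sym_key
assert stream_xor(recovered, ciphertext) == b"a long message encrypted in bulk"
```

Only the short key passes through the slow asymmetric operation; the message body, however long, only touches the fast symmetric layer.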
7.
Integer factorization
–
In number theory, integer factorization is the decomposition of a composite number into a product of smaller integers. If these integers are further restricted to prime numbers, the process is called prime factorization. When the numbers are large, no efficient, non-quantum integer factorization algorithm is known; however, it has not been proven that no efficient algorithm exists. The presumed difficulty of this problem is at the heart of widely used algorithms in cryptography such as RSA. Many areas of mathematics and computer science have been brought to bear on the problem, including elliptic curves and algebraic number theory. Not all numbers of a given length are equally hard to factor: the hardest instances of these problems are semiprimes, the product of two prime numbers. Many cryptographic protocols are based on the difficulty of factoring large composite integers or a related problem, for example the RSA problem. An algorithm that efficiently factors an arbitrary integer would render RSA-based public-key cryptography insecure. By the fundamental theorem of arithmetic, every positive integer has a unique prime factorization. If the integer is prime, then it can be recognized as such in polynomial time. If it is composite, however, the theorem gives no insight into how to obtain the factors. Given a general algorithm for integer factorization, any integer can be factored down to its constituent prime factors simply by repeated application of this algorithm. The situation is more complicated with special-purpose factorization algorithms, whose benefits may not be realized as well, or even at all, with the factors produced during decomposition. For example, if N = 10 × p × q where p < q are very large primes, trial division will quickly produce the factors 2 and 5 but will take p divisions to find the next factor.
Among the b-bit numbers, the most difficult to factor in practice using existing algorithms are those that are products of two primes of similar size; for this reason, these are the integers used in cryptographic applications. The largest such semiprime yet factored was RSA-768, a 768-bit number with 232 decimal digits. This factorization was a collaboration of several research institutions, spanning two years and taking the equivalent of almost 2000 years of computing on a single-core 2.2 GHz AMD Opteron. Like all recent factorization records, this factorization was completed with a highly optimized implementation of the general number field sieve run on hundreds of machines. No algorithm has been published that can factor all integers in polynomial time. Neither the existence nor the non-existence of such algorithms has been proved, but it is generally suspected that they do not exist and hence that the problem is not in class P. The problem is clearly in class NP, but it has not been proved to be, or not to be, NP-complete; it is generally suspected not to be NP-complete. There are published algorithms that are faster than O((1 + ε)^b) for all positive ε, i.e., sub-exponential. The best published asymptotic running time is for the general number field sieve (GNFS) algorithm, which, for a b-bit number n, is O(exp(((64/9) · b · (log b)²)^(1/3))). For current computers, GNFS is the best published algorithm for large n; for a quantum computer, however, Peter Shor discovered an algorithm in 1994 that solves the problem in polynomial time.
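The "repeated application" idea above can be made concrete with the simplest special-purpose method, trial division: strip out each small factor completely and continue on the quotient. This is a minimal sketch; it is fine for small inputs but hopeless at cryptographic sizes, which is precisely the asymmetry RSA relies on.

```python
def factorize(n):
    # Trial division: return the prime factors of n in ascending order.
    factors = []
    divisor = 2
    while divisor * divisor <= n:
        while n % divisor == 0:      # divide out this factor completely
            factors.append(divisor)
            n //= divisor
        divisor += 1
    if n > 1:                        # whatever remains is prime
        factors.append(n)
    return factors

print(factorize(3233))               # -> [53, 61]
print(factorize(10 * 89 * 97))       # -> [2, 5, 89, 97]
```

The second call illustrates the N = 10 × p × q example from the text: the factors 2 and 5 fall out almost immediately, while finding 89 requires stepping through every smaller candidate divisor first.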
8.
Public key infrastructure
–
A public key infrastructure (PKI) is a set of roles, policies, and procedures needed to create, manage, distribute, use, store, and revoke digital certificates and manage public-key encryption. The purpose of a PKI is to facilitate the secure transfer of information for a range of network activities such as e-commerce and internet banking. In cryptography, a PKI is an arrangement that binds public keys with the identities of entities. The binding is established through a process of registration and issuance of certificates; depending on the assurance level of the binding, this may be carried out by an automated process or under human supervision. The PKI role that assures valid and correct registration is called a registration authority (RA); an RA is responsible for accepting requests for digital certificates and authenticating the entity making the request. In a Microsoft PKI, a registration authority is usually called a subordinate CA. An entity must be uniquely identifiable within each CA domain on the basis of information about that entity; a third-party validation authority can provide this entity information on behalf of the CA. Public key cryptography is a technique that enables entities to securely communicate on an insecure public network. A public key infrastructure is a system for the creation, storage, and distribution of digital certificates: the PKI creates digital certificates which map public keys to entities, securely stores these certificates in a central repository, and revokes them if needed. A PKI also typically includes a secure location in which to store and index keys, a certificate management system managing things like access to stored certificates or the delivery of the certificates to be issued, and a certificate policy. Broadly speaking, there have traditionally been three approaches to establishing this trust: certificate authorities, the web of trust, and simple public key infrastructure. The primary role of the CA is to digitally sign and publish the public key bound to a given user.
This is done using the CA's own private key, so that trust in the user key relies on one's trust in the validity of the CA's key. When the CA is a third party separate from the user and the system, it is called the Registration Authority; the key-to-user binding is established, depending on the level of assurance the binding has, by software or under human supervision. The term trusted third party may also be used for a certificate authority; moreover, PKI is itself often used as a synonym for a CA implementation. In this model of trust relationships, a CA is a trusted third party, trusted both by the subject of the certificate and by the party relying upon the certificate. In surveys of certificate authority market share, the top spot has been held by Symantec ever since the survey began. Another approach involves a server that acts as an offline certificate authority within a single sign-on system: a single sign-on server will issue digital certificates into the client system, and users can execute programs, etc. with the temporary certificate. It is common to find solutions of this variety based on X.509 certificates. The singular term "web of trust" does not imply the existence of a single web of trust, or a common point of trust, but rather any number of potentially disjoint webs of trust.
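The chain of trust described above (each certificate signed by its issuer, terminating at a self-signed root) can be sketched with toy certificates. This is an illustration only: real PKIs sign X.509 structures with asymmetric keys, whereas here a keyed hash stands in for a signature, and all names and keys are hypothetical.

```python
import hashlib
import json

def sign(issuer_key, payload):
    # Toy "signature": a keyed hash over the canonicalized payload.
    blob = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(issuer_key.encode() + blob).hexdigest()

def make_cert(subject, subject_key, issuer, issuer_key):
    payload = {"subject": subject, "key": subject_key, "issuer": issuer}
    return {**payload, "sig": sign(issuer_key, payload)}

keys = {"root": "k-root", "intermediate": "k-int", "server": "k-srv"}

root = make_cert("root", keys["root"], "root", keys["root"])  # self-signed
inter = make_cert("intermediate", keys["intermediate"], "root", keys["root"])
leaf = make_cert("server", keys["server"], "intermediate", keys["intermediate"])

def verify_chain(chain, trusted_root):
    # Walk leaf -> root: each cert must name the next cert as issuer,
    # and its signature must verify under that issuer's key.
    for cert, issuer in zip(chain, chain[1:] + [trusted_root]):
        payload = {k: cert[k] for k in ("subject", "key", "issuer")}
        if cert["issuer"] != issuer["subject"]:
            return False
        if cert["sig"] != sign(issuer["key"], payload):
            return False
    return True

print(verify_chain([leaf, inter, root], root))    # True
forged = dict(leaf, key="evil-key")
print(verify_chain([forged, inter, root], root))  # False: signature breaks
```

Trust in the leaf reduces entirely to trust in the root: tampering with any field of any certificate in the chain invalidates a signature somewhere along the walk.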