1.
Smart card
–
A smart card, chip card, or integrated circuit card is any pocket-sized card that has embedded integrated circuits. Smart cards are made of plastic, generally polyvinyl chloride, but sometimes polyethylene terephthalate-based polyesters; since April 2009, a Japanese company has manufactured reusable financial smart cards made from paper. Smart cards can be either contact or contactless. They can provide personal identification, authentication, data storage, and application processing, and may provide strong security authentication for single sign-on within large organizations. In 1968 and 1969 Helmut Gröttrup and Jürgen Dethloff jointly filed patents for the automated chip card. Roland Moreno patented the memory card concept in 1974. An important patent for smart cards with a microprocessor and memory as used today was filed by Jürgen Dethloff in 1976 and granted as USP 4105156 in 1978. Three years later, Motorola used this patent in its CP8; at that time, Bull had 1,200 patents related to smart cards. In 2001, Bull sold its CP8 division together with its patents to Schlumberger. In 2006, Axalto and Gemplus, at the time the world's top two smart card manufacturers, merged and became Gemalto. The first mass use of the cards was as the Télécarte, a card for payment in French pay phones. After the Télécarte, microchips were integrated into all French Carte Bleue debit cards in 1992. Customers inserted the card into the merchant's point-of-sale terminal, then typed the personal identification number before the transaction was accepted; only very limited transactions are processed without a PIN. Smart-card-based electronic purse systems store funds on the card so that readers do not need network connectivity. They entered European service in the mid-1990s and have been common in Germany, Austria, Belgium, France, the Netherlands, Switzerland, Norway, Sweden, Finland, the UK, Denmark and Portugal.
Since the 1990s, smart cards have been the Subscriber Identity Modules (SIMs) used in European GSM mobile phone equipment. Mobile phones are widely used in Europe, so smart cards have become very common there. Europay MasterCard Visa (EMV)-compliant cards and equipment are widespread; the United States started using the EMV technology in 2014. Historically, in 1993 several international payment companies agreed to develop specifications for debit and credit cards. The original brands were MasterCard, Visa, and Europay, and the first version of the EMV system was released in 1994. In 1998 the specifications became stable; EMVCo's purpose is to assure the various financial institutions and retailers that the specifications retain backward compatibility with the 1998 version. EMVCo upgraded the specifications in 2000 and 2004. EMV-compliant cards were first accepted in Malaysia in 2005 and later in the United States in 2014. MasterCard was the first company that was allowed to use the technology in the United States; the United States has felt pushed to use the technology because of the increase in identity theft.
2.
Brute-force attack
–
In cryptography, a brute-force attack consists of an attacker trying many passwords or passphrases with the hope of eventually guessing correctly. The attacker systematically checks all possible passwords and passphrases until the correct one is found. Alternatively, the attacker can attempt to guess the key, which is typically created from the password using a key derivation function; this is known as an exhaustive key search. A brute-force attack is an attack that can, in theory, be used against any encrypted data. Such an attack might be used when it is not possible to take advantage of weaknesses in an encryption system that would make the task easier. Longer passwords, passphrases and keys have more possible values, making them more difficult to crack than shorter ones. One of the measures of the strength of a system is how long it would theoretically take an attacker to mount a successful brute-force attack against it. Brute-force attacks are an application of brute-force search, the general problem-solving technique of enumerating all candidates. Brute-force attacks work by calculating every possible combination that could make up a password; as the password's length increases, the average time needed to find the correct password increases exponentially. This means short passwords can usually be discovered quite quickly; the resources required for a brute-force attack grow exponentially with increasing key size, not linearly. There is a physical argument that a 128-bit symmetric key is computationally secure against brute-force attack: the Landauer limit implied by the laws of physics sets a lower bound of kT·ln 2 per bit changed (ln 2 ≈ 0.693), and no irreversible computing device can use less energy than this, even in principle. Thus, in order to simply flip through the possible values for a 128-bit symmetric key, a conventional processor would theoretically require 2^128 − 1 bit flips, corresponding to roughly 30 gigawatts of power for one year: 30×10^9 W × 365×24×3600 s = 9.46×10^17 J, or 262.7 TWh. The full actual computation, checking each key to see if a solution has been found, would consume many times this amount.
Furthermore, this is simply the energy requirement for cycling through the key space; the time it takes to flip each bit is not considered. However, this argument assumes that the register values are changed using conventional set and clear operations, which inevitably generate entropy, and it has been shown that computational hardware can be designed not to encounter this theoretical obstruction. Two technologies have proved their capability for accelerating brute-force attacks in practice: one is modern graphics processing unit (GPU) technology, the other is field-programmable gate array (FPGA) technology. GPUs benefit from their wide availability and price-performance advantage, FPGAs from their energy efficiency per cryptographic operation. Both technologies try to transport the benefits of parallel processing to brute-force attacks.
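As a concrete, deliberately toy-scale illustration of the exponential search described above, the following Python sketch brute-forces a short lowercase password by hashing every candidate; the target hash, alphabet, and length bound are illustrative choices, not part of the article:

```python
import hashlib
import itertools
import string

def brute_force(target_hash, alphabet, max_len):
    """Systematically try every candidate password up to max_len characters.

    The search space grows as len(alphabet) ** length, i.e. exponentially
    in the password length, which is why even this tiny example is limited
    to 3 lowercase letters (at most 26 + 26**2 + 26**3 = 18,278 tries).
    """
    for length in range(1, max_len + 1):
        for candidate in itertools.product(alphabet, repeat=length):
            word = "".join(candidate)
            if hashlib.sha256(word.encode()).hexdigest() == target_hash:
                return word
    return None  # exhausted the key space without a match

# Toy demonstration: recover a deliberately short, weak password.
target = hashlib.sha256(b"abc").hexdigest()
print(brute_force(target, string.ascii_lowercase, 3))  # prints "abc"
```

Adding one character to the alphabet or one position to the length multiplies the work, which is the exponential growth the text describes.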
3.
Euclidean algorithm
–
It is named after the ancient Greek mathematician Euclid, who first described it in Euclid's Elements. It is an example of an algorithm, a procedure for performing a calculation according to well-defined rules, and it can be used to reduce fractions to their simplest form. The Euclidean algorithm is based on the principle that the greatest common divisor of two numbers does not change if the larger number is replaced by its difference with the smaller number. For example, 21 is the GCD of 252 and 105. Since this replacement reduces the larger of the two numbers, repeating this process gives successively smaller pairs of numbers until the two numbers become equal; when that occurs, that value is the GCD of the original two numbers. By reversing the steps, the GCD can be expressed as a sum of the two numbers each multiplied by a positive or negative integer, e.g. 21 = 5×105 + (−2)×252. The fact that the GCD can always be expressed in this way is known as Bézout's identity. The version of the Euclidean algorithm described above can take many subtraction steps to find the GCD when one of the given numbers is much bigger than the other. A more efficient version of the algorithm shortcuts these steps, instead replacing the larger of the two numbers by its remainder when divided by the smaller of the two. With this improvement, the algorithm never requires more steps than five times the number of digits of the smaller integer; this was proven by Gabriel Lamé in 1844, and marks the beginning of computational complexity theory. Additional methods for improving the algorithm's efficiency were developed in the 20th century. The Euclidean algorithm has many theoretical and practical applications. It is used for reducing fractions to their simplest form and for performing division in modular arithmetic. Finally, it can be used as a basic tool for proving theorems in number theory such as Lagrange's four-square theorem and the uniqueness of prime factorizations.
This led to abstract algebraic notions such as Euclidean domains. The Euclidean algorithm calculates the greatest common divisor of two numbers a and b. The greatest common divisor g is the largest natural number that divides both a and b without leaving a remainder. Synonyms for the GCD include the greatest common factor, the highest common factor, the highest common divisor, and the greatest common measure. The greatest common divisor is often written as gcd(a, b) or, more simply, as (a, b), although the latter notation is also used for other mathematical concepts. If gcd(a, b) = 1, then a and b are said to be coprime; this property does not imply that a or b are themselves prime numbers. For example, neither 6 nor 35 is a prime number; nevertheless, 6 and 35 are coprime, since no natural number other than 1 divides both 6 and 35, as they have no prime factors in common.
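The remainder-based version of the algorithm described above, together with the back-substitution that yields Bézout's identity, can be sketched in Python (the function names are our own):

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace the larger number
    by its remainder modulo the smaller until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

def extended_gcd(a, b):
    """Reverse the steps to recover Bezout coefficients x, y
    with a*x + b*y == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

print(gcd(252, 105))           # prints 21
print(extended_gcd(105, 252))  # prints (21, 5, -2): 21 = 5*105 + (-2)*252
```

The second result reproduces the example from the text: 21 = 5×105 + (−2)×252.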
4.
Quantum computing
–
Quantum computing studies theoretical computation systems that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from binary digital electronic computers based on transistors. A quantum Turing machine is a theoretical model of such a computer, and is also known as the universal quantum computer. The field of quantum computing was initiated by the work of Paul Benioff and Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968. There exist quantum algorithms, such as Simon's algorithm, that run faster than any possible probabilistic classical algorithm. A classical computer could in principle simulate a quantum algorithm, as quantum computation does not violate the Church–Turing thesis; on the other hand, quantum computers may be able to efficiently solve problems which are not practically feasible on classical computers. A classical computer has a memory made up of bits, where each bit is represented by either a one or a zero. A quantum computer instead maintains a sequence of qubits; in general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously. A quantum computer operates by setting the qubits in a controlled initial state that represents the problem at hand and by applying quantum logic gates; the sequence of gates to be applied is called a quantum algorithm. The calculation ends with a measurement, collapsing the system of qubits into one of the 2^n pure states, where each qubit is zero or one, decomposing into a classical state. The outcome can therefore be at most n classical bits of information. Quantum algorithms are often probabilistic, in that they provide the correct solution only with a certain known probability. Note that the term non-deterministic computing must not be used in this case to mean probabilistic.
An example of an implementation of qubits for a quantum computer could start with the use of particles with two spin states: down and up. This works because any such system can be mapped onto an effective spin-1/2 system. A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits. This means that when the state of the qubits is measured, only a single classical configuration is observed. To better understand this point, consider a classical computer that operates on a three-bit register. If there is no uncertainty over its state, then it is in exactly one of the eight possible states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states, described by eight non-negative probabilities that sum to 1. The state of a quantum computer is similarly described by an eight-dimensional vector. Here, however, the coefficients a_k are complex numbers, and it is the sum of the squares of their absolute values, ∑_i |a_i|^2, that must equal 1.
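The eight-dimensional state vector of a three-qubit register, its normalization constraint ∑_i |a_i|^2 = 1, and the collapse to at most three classical bits on measurement can be illustrated with a small pure-Python sketch; this is of course a classical simulation of the bookkeeping, not a quantum computer:

```python
import random

# A 3-qubit register is described by 8 complex amplitudes a_0 .. a_7,
# one per basis state |000>, |001>, ..., |111>.
amplitudes = [complex(1 / 8 ** 0.5, 0)] * 8  # uniform superposition

# The squared absolute values are probabilities and must sum to 1.
norm = sum(abs(a) ** 2 for a in amplitudes)
print(round(norm, 10))  # prints 1.0

def measure(amps):
    """Measurement collapses the register to one classical 3-bit string,
    with basis state k observed with probability |a_k|^2."""
    probs = [abs(a) ** 2 for a in amps]
    k = random.choices(range(len(amps)), weights=probs)[0]
    return format(k, "03b")

print(measure(amplitudes))  # e.g. '101' -- at most 3 classical bits come out
```

Note the asymmetry the text describes: eight complex numbers are needed to track the state, but a single measurement yields only three classical bits.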
5.
Factorization
–
In mathematics, factorization or factoring is the decomposition of an object into a product of other objects, or factors, which when multiplied together give the original. For example, the number 15 factors into primes as 3 × 5. In all cases, a product of simpler objects is obtained. The aim of factoring is usually to reduce something to “basic building blocks”, such as numbers to prime numbers. Factoring integers is covered by the fundamental theorem of arithmetic and factoring polynomials by the fundamental theorem of algebra. Viète's formulas relate the coefficients of a polynomial to its roots. The opposite of polynomial factorization is expansion, the multiplying together of polynomial factors to an “expanded” polynomial, written as just a sum of terms. Integer factorization for large integers appears to be a difficult problem; there is no known method to carry it out quickly. Its complexity is the basis of the security of some public-key cryptography algorithms. A matrix can also be factorized into a product of matrices of special types; one major example of this uses an orthogonal or unitary matrix together with a triangular matrix, and there are different types: QR decomposition, LQ, QL, RQ. This situation is generalized by factorization systems. By the fundamental theorem of arithmetic, every integer greater than 1 has a unique prime factorization. Given an algorithm for integer factorization, one can factor any integer down to its constituent primes by repeated application of this algorithm; for very large numbers, no efficient classical algorithm is known. Modern techniques for factoring polynomials are fast and efficient, but use sophisticated mathematical ideas; these techniques are used in the construction of computer routines for carrying out polynomial factorization in computer algebra systems. This article is concerned with classical techniques.
While the general notion of factoring just means writing an expression as a product of simpler expressions, when factoring polynomials this means that the factors are to be polynomials of smaller degree. Thus, x^2 − y^2 = (x + y)(x − y) is a factorization of the expression x^2 − y^2. Another issue concerns the coefficients of the factors. It is not always possible to factor with coefficients of a given type, and a polynomial that cannot be factored in this way is said to be irreducible over this type of coefficient. Thus, x^2 − 2 is irreducible over the integers and x^2 + 4 is irreducible over the reals. In the first example, the integer coefficients 1 and −2 can also be thought of as real numbers, and if they are, then x^2 − 2 = (x − √2)(x + √2) shows that this polynomial factors over the reals. Similarly, since the integers 1 and 4 can be thought of as real and hence complex numbers, x^2 + 4 splits over the complex numbers, i.e. x^2 + 4 = (x − 2i)(x + 2i). The fundamental theorem of algebra can be stated as: every polynomial of degree n with complex number coefficients splits completely into n linear factors.
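A minimal sketch of the classical trial-division approach to integer factorization mentioned above; this is fine for small numbers but hopeless for the large integers used in cryptography:

```python
def prime_factors(n):
    """Factor n > 1 into primes by repeated trial division.

    Each prime found is divided out as many times as it occurs,
    so the returned list multiplies back to the original n."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:           # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(15))   # prints [3, 5]
print(prime_factors(252))  # prints [2, 2, 3, 3, 7]
```

The first call reproduces the article's example 15 = 3 × 5; by the fundamental theorem of arithmetic, the multiset of primes returned is unique.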
6.
RSA (cryptosystem)
–
RSA is one of the first practical public-key cryptosystems and is widely used for secure data transmission. In such a cryptosystem, the encryption key is public and differs from the decryption key, which is kept secret. In RSA, this asymmetry is based on the practical difficulty of factoring the product of two large prime numbers, the factoring problem. The acronym RSA is made up of the initial letters of the surnames of Ron Rivest, Adi Shamir, and Leonard Adleman. Clifford Cocks, an English mathematician working for the UK intelligence agency GCHQ, had developed an equivalent system in 1973. A user of RSA creates and then publishes a public key based on two large prime numbers, along with an auxiliary value. The prime numbers must be kept secret. Breaking RSA encryption is known as the RSA problem; whether it is as hard as the factoring problem remains an open question. RSA is a relatively slow algorithm, and because of this it is less commonly used to directly encrypt user data. More often, RSA passes encrypted shared keys for symmetric-key cryptography, which in turn can perform bulk encryption-decryption operations at much higher speed. The idea of an asymmetric public-private key cryptosystem is attributed to Whitfield Diffie and Martin Hellman, who also introduced digital signatures and attempted to apply number theory; their formulation used a shared secret key created from exponentiation of some number, modulo a prime number. However, they left open the problem of realizing a one-way function. Ron Rivest, Adi Shamir, and Leonard Adleman at MIT made several attempts over the course of a year to create a function that is hard to invert. Rivest and Shamir, as computer scientists, proposed many potential functions, while Adleman, as a mathematician, was responsible for finding their weaknesses. They tried many approaches, including knapsack-based and permutation polynomials; for a time they thought what they wanted to achieve was impossible due to contradictory requirements.
In April 1977, they spent Passover at the house of a student. Rivest, unable to sleep, lay on the couch with a math textbook and started thinking about their one-way function. He spent the rest of the night formalizing his idea and had much of the paper ready by daybreak. The algorithm is now known as RSA, the initials of their surnames in the same order as their paper. Clifford Cocks, an English mathematician working for the UK intelligence agency GCHQ, had developed an equivalent system in 1973; however, given the relatively expensive computers needed to implement it at the time, it was mostly considered a curiosity and, as far as is publicly known, was never deployed. His discovery was not revealed until 1997 due to its secret classification. Kid-RSA is a simplified public-key cipher published in 1997, designed for educational purposes; some people feel that learning Kid-RSA gives insight into RSA and other public-key ciphers. MIT was granted U.S. Patent 4,405,829 for a “Cryptographic communications system and method” that used the algorithm, on September 20, 1983.
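The arithmetic behind RSA can be illustrated with a textbook-sized example; the primes here are far too small to be secure and are chosen only so the numbers stay readable:

```python
# Toy RSA key generation with deliberately tiny primes -- illustration only.
p, q = 61, 53
n = p * q                 # 3233: the public modulus (whose factorization is the secret)
phi = (p - 1) * (q - 1)   # 3120: Euler's totient of n
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent: e*d == 1 (mod phi)

message = 65
ciphertext = pow(message, e, n)    # encryption: c = m^e mod n
recovered = pow(ciphertext, d, n)  # decryption: m = c^d mod n
print(ciphertext, recovered)       # prints 2790 65
```

Security rests on the fact that recovering d requires phi, and hence the factors p and q of n; with real 2048-bit moduli that factorization is the hard problem the article describes. (The three-argument pow with a negative exponent, used here for the modular inverse, needs Python 3.8+.)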
7.
Prime number
–
A prime number is a natural number greater than 1 that has no positive divisors other than 1 and itself. A natural number greater than 1 that is not a prime number is called a composite number. For example, 5 is prime because 1 and 5 are its only positive integer factors. The property of being prime is called primality. A simple but slow method of verifying the primality of a number n is known as trial division; it consists of testing whether n is a multiple of any integer between 2 and √n. Algorithms much more efficient than trial division have been devised to test the primality of large numbers. Particularly fast methods are available for numbers of special forms, such as Mersenne numbers. As of January 2016, the largest known prime number has 22,338,618 decimal digits. There are infinitely many primes, as demonstrated by Euclid around 300 BC. There is no known simple formula that separates prime numbers from composite numbers. However, the distribution of primes, that is to say, the statistical behaviour of primes in the large, can be modelled. Many questions regarding prime numbers remain open, such as Goldbach's conjecture and the twin prime conjecture. Such questions spurred the development of various branches of number theory. Prime numbers give rise to various generalizations in other domains, mainly algebra, such as prime elements. A natural number is called a prime number if it has exactly two positive divisors, 1 and the number itself. Natural numbers greater than 1 that are not prime are called composite. Among the numbers 1 to 6, the numbers 2, 3, and 5 are the prime numbers, while 1, 4, and 6 are not prime. 1 is excluded as a prime number, for reasons explained below. 2 is a prime number, since the only natural numbers dividing it are 1 and 2. Next, 3 is prime, too: 1 and 3 do divide 3 without remainder. However, 4 is composite, since 2 is another number dividing 4 without remainder: 4 = 2 · 2. 5 is again prime: none of the numbers 2, 3, or 4 divides 5 without remainder. Next, 6 is divisible by 2 or 3, since 6 = 2 · 3.
The image at the right illustrates that 12 is not prime: 12 = 3 · 4. No even number greater than 2 is prime because, by definition, any such number n has at least three distinct divisors, namely 1, 2, and n.
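Trial division as described above, checking divisors d with d · d ≤ n (i.e. d up to √n), can be sketched in a few lines of Python:

```python
def is_prime(n):
    """Trial division: n > 1 is prime iff no d with 2 <= d <= sqrt(n) divides it.

    Testing only up to sqrt(n) suffices: if n = a*b with a <= b,
    then a*a <= a*b = n, so the smaller factor is at most sqrt(n)."""
    if n < 2:               # 1 (and 0) are excluded by definition
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:      # found a divisor other than 1 and n
            return False
        d += 1
    return True

print([k for k in range(1, 13) if is_prime(k)])  # prints [2, 3, 5, 7, 11]
```

The output reproduces the article's classification of 1 through 6: only 2, 3, and 5 are prime, while 1, 4, and 6 are not.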
8.
Meet-in-the-middle attack
–
The meet-in-the-middle (MITM) attack is a generic space–time tradeoff cryptographic attack against encryption schemes which rely on performing multiple encryption operations in sequence. The MITM attack is the reason why Double DES is not used. When trying to improve the security of a cipher, a tempting idea is to encrypt the data several times using multiple keys; the meet-in-the-middle attack, being a generic space–time tradeoff, shows that this gains far less security than expected. For example, although Double DES encrypts the data with two different 56-bit keys, Double DES can be broken with about 2^57 encryption and decryption operations. The multidimensional MITM (MD-MITM) uses a combination of several simultaneous MITM attacks as described above. Diffie and Hellman first proposed the meet-in-the-middle attack on a hypothetical expansion of a block cipher in 1977; their attack used a space–time tradeoff to break the double-encryption scheme in only twice the time needed to break the single-encryption scheme. In 2011, Bo Zhu and Guang Gong investigated the multidimensional meet-in-the-middle attack and presented new attacks on the block ciphers GOST, KTANTAN and Hummingbird-2. The meet-in-the-middle attack uses an efficient approach: the attacker computes ENC_k1(P) forwards for every possible first key k1 and DEC_k2(C) backwards for every possible second key k2. If the result from any of the ENC_k1 operations matches a result from the DEC_k2 operations, the pair (k1, k2) may be correct; this potentially-correct key pair is called a candidate key. The attacker can determine which candidate key is correct by testing it with a second test-set of plaintext and ciphertext. The MITM attack is one of the reasons why DES was replaced with Triple DES and not Double DES: Double DES was not used because it is vulnerable to a MITM attack. Triple DES uses a triple-length key and is also vulnerable to a meet-in-the-middle attack in 2^56 space and 2^112 operations, but is considered secure due to the size of its keyspace. Candidate key pairs from the table are tested on a new pair of plaintext and ciphertext to confirm validity; if a key pair does not work on this new pair, MITM is done again on a further pair.
If the key size is k, this attack uses only 2^(k+1) encryptions (plus storage for 2^k intermediate values), in contrast to the naive attack, which needs 2^(2k) encryptions. While 1D-MITM can be efficient, a more sophisticated attack has been developed: the multidimensional meet-in-the-middle (MD-MITM) attack. This is preferred when the data has been encrypted using more than 2 encryptions with different keys. Instead of meeting in one middle point, the MD-MITM attack attempts to reach several specific intermediate states using the forward and backward computations at several positions in the cipher. For each possible guess on an intermediate state s_n, the corresponding forward and backward values are computed and compared. If they match, the found combination of sub-keys is used on another pair of plaintext/ciphertext to verify the correctness of the key. Note the nested element in the algorithm: the guess on every possible value of s_j is done for each guess on the previous s_{j−1}. This nesting adds a factor to the overall time complexity of the MD-MITM attack.
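A toy illustration of the one-dimensional attack described above: the “cipher” below is an invented 8-bit construction (XOR with the key, then multiplication by 17 mod 256), chosen only so that double encryption and the meet-in-the-middle table fit in a few lines; it is not DES, and a real attack would also use a second plaintext/ciphertext pair to eliminate false candidates.

```python
# Deliberately weak toy block "cipher" on single bytes with an 8-bit key.
def enc(k, m):
    return ((m ^ k) * 17) % 256

def dec(k, c):
    return ((c * 241) % 256) ^ k   # 241 is the inverse of 17 mod 256

def mitm(plain, cipher):
    """Recover candidate (k1, k2) pairs for cipher = enc(k2, enc(k1, plain)).

    Cost: 2 * 2^8 cipher operations plus a 2^8-entry table,
    instead of 2^16 operations for the naive double-key search."""
    forward = {}
    for k1 in range(256):                  # forward pass: middle value -> k1
        forward.setdefault(enc(k1, plain), []).append(k1)
    candidates = []
    for k2 in range(256):                  # backward pass: meet in the middle
        mid = dec(k2, cipher)
        for k1 in forward.get(mid, []):
            candidates.append((k1, k2))
    return candidates

k1, k2 = 42, 99
c = enc(k2, enc(k1, 7))                    # double encryption of plaintext 7
print((k1, k2) in mitm(7, c))              # prints True
```

The forward table is the "space" in the space–time tradeoff: by storing all 2^k middle values, the two key searches are decoupled and run in sequence rather than as a nested 2^2k loop.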
9.
Integer
–
An integer is a number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75 and 5 1⁄2 are not. The set of integers consists of zero, the positive natural numbers, also called whole numbers or counting numbers, and their additive inverses. This set is often denoted by a boldface Z or blackboard bold ℤ, standing for the German word Zahlen. ℤ is a subset of the sets of rational and real numbers and, like the natural numbers, is countably infinite. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes called rational integers to distinguish them from the more general algebraic integers; in fact, the rational integers are the algebraic integers that are also rational numbers. Like the natural numbers, Z is closed under the operations of addition and multiplication: the sum and product of any two integers is an integer. However, with the inclusion of the negative natural numbers, and, importantly, 0, Z is also closed under subtraction. The integers form a ring which is the most basic one, in the following sense: for any unital ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely to be an initial object in the category of rings, characterizes the ring Z. Z is not closed under division, since the quotient of two integers need not be an integer, and although the natural numbers are closed under exponentiation, the integers are not. The following lists some of the properties of addition and multiplication for any integers a, b and c. In the language of abstract algebra, the first five properties listed above for addition say that Z under addition is an abelian group. As a group under addition, Z is a cyclic group; in fact, Z under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to Z. The first four properties listed above for multiplication say that Z under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse; e.g. there is no integer x such that 2x = 1, because the left-hand side is even.
This means that Z under multiplication is not a group. All the rules from the above property table, except for the last, taken together say that Z together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. Only those equalities of expressions that are true in any unital commutative ring are true in Z for all values of variables; note that certain non-zero integers map to zero in certain rings. The lack of zero-divisors in the integers means that the commutative ring Z is an integral domain.
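The closure properties above can be checked directly in Python, whose arbitrary-precision int type models Z:

```python
# Z is closed under addition, multiplication, and subtraction,
# but not under division or (negative) exponentiation.
a, b = 7, -3
print(a + b, a * b, a - b)   # 4 -21 10 : all three results stay in Z

quotient = a / b             # -2.333... : the quotient of two integers leaves Z
print(isinstance(quotient, int))        # prints False

print(isinstance(a ** 2, int))          # prints True  -- natural exponent stays in Z
print(isinstance(2 ** -1, int))         # prints False -- 2^-1 = 0.5 is not an integer
```

The last line is also the example from the text: there is no integer x with 2x = 1, so 2 has no multiplicative inverse in Z.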
10.
Google Patents
–
Google Patents is a search engine from Google that indexes patents and patent applications. These documents include the entire collection of granted patents and published patent applications from each indexed database. US patent documents date back to 1790, EPO and WIPO ones to 1978. Google Patents also indexes documents from Google Scholar and Google Books, and has machine-classified them with Cooperative Patent Classification codes for searching. The service was launched on December 14, 2006. Google says it uses the same technology as that underlying Google Books, allowing scrolling through pages and zooming in on areas; the images are saveable as PNG files. Google Patents was updated in 2012 with coverage of the European Patent Office and the Prior Art Finder tool. In 2013, it was expanded to cover the World Intellectual Property Organization, the Deutsches Patent- und Markenamt, and the Canadian Intellectual Property Office; all foreign patents were also translated to English and made searchable. In 2016, coverage of 11 additional patent offices was announced.
11.
Lattice (group)
–
In geometry and group theory, a lattice in R^n is a subgroup of R^n which is isomorphic to Z^n, and which spans the real vector space R^n. In other words, for any basis of R^n, the subgroup of all linear combinations with integer coefficients of the basis vectors forms a lattice. A lattice may be viewed as a regular tiling of a space by a primitive cell. Lattices have many significant applications in mathematics, particularly in connection to Lie algebras and number theory. More generally, lattice models are studied in physics, often by the techniques of computational physics. A lattice is the symmetry group of discrete translational symmetry in n directions. A pattern with this lattice of translational symmetry cannot have more, but may have less, symmetry than the lattice itself. As a group, a lattice is a finitely-generated free abelian group, and thus isomorphic to Z^n. A lattice in the sense of a 3-dimensional array of regularly spaced points, coinciding with, e.g., the atom or molecule positions in a crystal, is a related notion. A simple example of a lattice in R^n is the subgroup Z^n. More complicated examples include the E8 lattice, which is a lattice in R^8, and the period lattice in R^2, which is central to the study of elliptic functions, developed in nineteenth-century mathematics; it generalises to higher dimensions in the theory of abelian functions. Lattices called root lattices are important in the theory of simple Lie algebras. A typical lattice Λ in R^n thus has the form Λ = {a_1 v_1 + … + a_n v_n : a_i ∈ Z}, where {v_1, …, v_n} is a basis for R^n. Different bases can generate the same lattice, but the absolute value d of the determinant of the vectors v_i is uniquely determined by Λ. If one thinks of a lattice as dividing the whole of R^n into equal polyhedra (copies of a fundamental domain), then d equals the n-dimensional volume of such a polyhedron; this is why d is sometimes called the covolume of the lattice. If this equals 1, the lattice is called unimodular. Minkowski's theorem relates the number d and the volume of a symmetric convex set S to the number of lattice points contained in S.
The number of lattice points contained in a polytope all of whose vertices are elements of the lattice is described by the polytope's Ehrhart polynomial; formulas for some of the coefficients of this polynomial involve d as well. Lattice basis reduction is the problem of finding a short and nearly orthogonal lattice basis. The Lenstra–Lenstra–Lovász (LLL) lattice basis reduction algorithm approximates such a basis in polynomial time; it has found numerous applications. There are five 2D lattice types, as given by the crystallographic restriction theorem. Below, the wallpaper group of the lattice is given in IUC notation, orbifold notation, and Coxeter notation, along with a wallpaper diagram showing the symmetry domains. Note that a pattern with this lattice of translational symmetry cannot have more, but may have less, symmetry than the lattice itself. A full list of subgroups is available. For example, below, the hexagonal/triangular lattice is given twice, with full 6-fold and with half 3-fold reflectional symmetry. If the symmetry group of a pattern contains an n-fold rotation, then the lattice has n-fold symmetry for even n and 2n-fold for odd n. For the classification of a lattice, start with one point.
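The basis-independence of the covolume d can be checked numerically for a 2D example; the small helper below is illustrative:

```python
def det2(v1, v2):
    """Absolute determinant of a 2x2 integer basis,
    i.e. the covolume d of the lattice it generates."""
    return abs(v1[0] * v2[1] - v1[1] * v2[0])

b1 = ((1, 0), (0, 1))        # standard basis of Z^2
b2 = ((2, 1), (1, 1))        # another basis of the same lattice Z^2
print(det2(*b1), det2(*b2))  # prints 1 1 -- the covolume does not depend on the basis
```

Both bases generate Z^2 (the change-of-basis matrix between them has integer entries and determinant ±1), and both give d = 1, so this lattice is unimodular in the sense defined above.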