Introduction to Algorithms
Introduction to Algorithms is a book by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. The book has been used as the textbook for algorithms courses at many universities and is cited as a reference for algorithms in published papers, with over 10,000 citations documented on CiteSeerX. The book sold half a million copies during its first 20 years, and its fame has led to the common use of the abbreviation "CLRS", or, for the first edition, "CLR". In the preface, the authors write that the book was written to be comprehensive and useful in both teaching and professional environments; each chapter focuses on an algorithm and discusses its design techniques and areas of application. Instead of using a specific programming language, the algorithms are written in pseudocode; the descriptions focus on the algorithm itself and its mathematical properties, and emphasize efficiency. The first edition of the textbook did not include Stein as an author, and thus the book became known by the initialism CLR.
It included two chapters that were dropped in the second edition. After the addition of the fourth author in the second edition, many began to refer to the book as "CLRS". The first edition of the book was known as "The Big White Book"; with the second edition, the predominant color of the cover changed to green, causing the nickname to be shortened to just "The Big Book". A third edition was published in August 2009. Plans for the next edition started in 2014, but the fourth edition will not be published earlier than 2020. The mobile depicted on the cover, Big Red by Alexander Calder, can be found at the Whitney Museum of American Art in New York City.
Contents:
I Foundations: 1 The Role of Algorithms in Computing; 2 Getting Started; 3 Growth of Functions; 4 Divide-and-Conquer; 5 Probabilistic Analysis and Randomized Algorithms
II Sorting and Order Statistics: 6 Heapsort; 7 Quicksort; 8 Sorting in Linear Time; 9 Medians and Order Statistics
III Data Structures: 10 Elementary Data Structures; 11 Hash Tables; 12 Binary Search Trees; 13 Red-Black Trees; 14 Augmenting Data Structures
IV Advanced Design and Analysis Techniques: 15 Dynamic Programming; 16 Greedy Algorithms; 17 Amortized Analysis
V Advanced Data Structures: 18 B-Trees; 19 Fibonacci Heaps; 20 van Emde Boas Trees; 21 Data Structures for Disjoint Sets
VI Graph Algorithms: 22 Elementary Graph Algorithms; 23 Minimum Spanning Trees; 24 Single-Source Shortest Paths; 25 All-Pairs Shortest Paths; 26 Maximum Flow
VII Selected Topics: 27 Multithreaded Algorithms; 28 Matrix Operations; 29 Linear Programming; 30 Polynomials and the FFT; 31 Number-Theoretic Algorithms; 32 String Matching; 33 Computational Geometry; 34 NP-Completeness; 35 Approximation Algorithms
VIII Appendix: Mathematical Background: A Summations; B Sets, Etc.; C Counting and Probability; D Matrices
The book has appeared in three editions:
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. Introduction to Algorithms (first edition, 1990). MIT Press and McGraw-Hill. ISBN 0-262-03141-8.
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford. Introduction to Algorithms (second edition, 2001). MIT Press and McGraw-Hill. ISBN 0-262-03293-7. 12 printings up to 2009.
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford. Introduction to Algorithms (third edition, 2009). MIT Press and McGraw-Hill. ISBN 0-262-03384-4. 5 printings up to 2016.
Video recordings and transcripts of lectures from the MIT course 6.046J / 18.410J, Introduction to Algorithms (Fall 2005), taught in part by coauthor Charles Leiserson, are available through MIT OpenCourseWare.
In number theory and computer science, the partition problem, or number partitioning, is the task of deciding whether a given multiset S of positive integers can be partitioned into two subsets S1 and S2 such that the sum of the numbers in S1 equals the sum of the numbers in S2. Although the partition problem is NP-complete, there is a pseudo-polynomial time dynamic programming solution, and there are heuristics that solve the problem in many instances, either optimally or approximately. For this reason, it has been called "the easiest NP-hard problem". There is also an optimization version of the partition problem, which is to partition the multiset S into two subsets S1, S2 such that the difference between the sum of elements in S1 and the sum of elements in S2 is minimized; the optimization version can be solved efficiently in practice. Given S = {3, 1, 1, 2, 2, 1}, a valid solution to the partition problem is the two sets S1 = {1, 1, 1, 2} and S2 = {2, 3}. Both sets sum to 5, and they partition S. Note that this solution is not unique: S1 = {3, 1, 1} and S2 = {2, 2, 1} is another solution.
Not every multiset of positive integers has a partition into two halves with equal sum. An example of such a set is S = {2, 5}. The problem can be solved using dynamic programming when the size of the set and the size of the sum of the integers in the set are not too big to render the storage requirements infeasible. Suppose the input to the algorithm is a multiset S of cardinality N: S = {x1, ..., xN}. Let K be the sum of all elements in S; that is: K = x1 + ... + xN. We will build an algorithm that determines whether there is a subset of S that sums to ⌊ K / 2 ⌋. If there is such a subset, then: if K is even, the rest of S also sums to ⌊ K / 2 ⌋; if K is odd, the rest of S sums to ⌈ K / 2 ⌉. This is as good a solution as possible. We wish to determine if there is a subset of S that sums to ⌊ K / 2 ⌋. Let p(i, j) be True if some subset of the first j numbers sums to i, and False otherwise. Then p(⌊ K / 2 ⌋, N) is True if and only if there is a subset of S that sums to ⌊ K / 2 ⌋; the goal of our algorithm is to compute p(⌊ K / 2 ⌋, N). In aid of this, we have the following recurrence relation: p(i, j) is True if either p(i, j − 1) is True or p(i − xj, j − 1) is True; p(i, j) is False otherwise. The reasoning for this is as follows: there is some subset of S that sums to i using the numbers x1, ..., xj if and only if either of the following is true: there is a subset of the first j − 1 numbers that sums to i; or there is a subset of the first j − 1 numbers that sums to i − xj, to which xj can then be added.
The algorithm consists of building up a table of size ⌊ K / 2 ⌋ by N containing the values of the recurrence. Remember that K is the sum of all N elements in S. Once the entire table is filled in, we return p(⌊ K / 2 ⌋, N). In a depiction of the table P, an arrow would run from one cell to another if the value of the target cell might depend on the value of the source cell; this dependence is a property of the recurrence relation.
INPUT: A list of integers S
OUTPUT: True if S can be partitioned into two subsets that have equal sum
function find_partition(S):
    n ← |S|
    K ← sum(S)
    P ← empty boolean table of size (⌊ K / 2 ⌋ + 1) by (n + 1)
    initialize top row of P (P(0, j)) to True
    initialize leftmost column of P (P(i, 0)), except for P(0, 0), to False
    for i from 1 to ⌊ K / 2 ⌋
        for j from 1 to n
            if i − S[j] >= 0
                P(i, j) ← P(i, j − 1) or P(i − S[j], j − 1)
            else
                P(i, j) ← P(i, j − 1)
    return P(⌊ K / 2 ⌋, n)
For the example set S = {3, 1, 1, 2, 2, 1} used above, the completed table returns True. This algorithm runs in time O(N · K), where N is the number of elements in the input set and K is the sum of elements in the input set. The algorithm can be extended to the k-way multi-partitioning problem, but the table then becomes (k − 1)-dimensional and the memory requirement grows exponentially with k (as a power of the largest input number m), making it impractical for k = 3 unless the inputs are small numbers.
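As a concrete illustration, here is a minimal Python sketch of the same table-filling algorithm (the function name find_partition and the list-of-lists representation are illustrative choices, not part of the original pseudocode; unlike the pseudocode, it answers False outright when K is odd, since an odd total can never be split into two equal halves):

    def find_partition(S):
        # P[i][j] is True iff some subset of the first j elements of S sums to i.
        n = len(S)
        K = sum(S)
        if K % 2 != 0:
            return False  # an odd total can never be split evenly
        half = K // 2
        P = [[False] * (n + 1) for _ in range(half + 1)]
        for j in range(n + 1):
            P[0][j] = True  # the empty subset sums to 0
        for i in range(1, half + 1):
            for j in range(1, n + 1):
                if i - S[j - 1] >= 0:
                    P[i][j] = P[i][j - 1] or P[i - S[j - 1]][j - 1]
                else:
                    P[i][j] = P[i][j - 1]
        return P[half][n]

    print(find_partition([3, 1, 1, 2, 2, 1]))  # True: {1, 1, 1, 2} and {2, 3} both sum to 5
    print(find_partition([2, 5]))              # False

In practice one would keep only the previous column of the table, reducing the memory to O(K); the full table is kept here to mirror the recurrence.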
The partition problem can be viewed as a special case of the subset sum problem, and the pseudo-polynomial time dynamic programming solution given above generalizes to a solution for the subset sum problem. Several heuristic algorithms exist to produce approximations to the partition optimization problem; these can be extended to linear-space exact algorithms. One approach to the problem, imitating the way children choose teams for a game, is the greedy algorithm, which iterates through the numbers in descending order, assigning each of them to whichever subset has the smaller sum, as sketched below. This approach has a running time of O(n log n), dominated by the initial sort.
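A minimal Python sketch of this greedy heuristic (the name greedy_partition is an illustrative choice; it returns an approximate, not necessarily optimal, partition):

    def greedy_partition(S):
        # Assign each number, largest first, to the subset with the smaller sum.
        s1, s2 = [], []
        sum1 = sum2 = 0
        for x in sorted(S, reverse=True):
            if sum1 <= sum2:
                s1.append(x)
                sum1 += x
            else:
                s2.append(x)
                sum2 += x
        return s1, s2

    print(greedy_partition([3, 1, 1, 2, 2, 1]))  # ([3, 1, 1], [2, 2, 1]): sums 5 and 5

For this example the greedy heuristic happens to find an optimal partition, but in general it only guarantees an approximation.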
Ronald Linn Rivest is a cryptographer and an Institute Professor at MIT. He is a member of MIT's Department of Electrical Engineering and Computer Science and a member of MIT's Computer Science and Artificial Intelligence Laboratory. He was a member of the Election Assistance Commission's Technical Guidelines Development Committee, tasked with assisting the EAC in drafting the Voluntary Voting System Guidelines. Rivest is one of the inventors of the RSA algorithm. He is the inventor of the symmetric key encryption algorithms RC2, RC4, and RC5, and co-inventor of RC6. The "RC" stands for "Rivest Cipher", or alternatively, "Ron's Code". He authored the MD2, MD4, MD5 and MD6 cryptographic hash functions. In 2006, he published his invention of the ThreeBallot voting system, a voting system that incorporates the ability for the voter to discern that their vote was counted while still protecting their voter privacy. Most importantly, this system does not rely on cryptography at all. Stating "Our democracy is too important", he placed ThreeBallot in the public domain.
Rivest collaborates with other researchers in combinatorics, for example working with David A. Klarner to find an upper bound on the number of polyominoes of a given order and working with Jean Vuillemin to prove the deterministic form of the Aanderaa–Rosenberg conjecture. Rivest earned a Bachelor's degree in Mathematics from Yale University in 1969 and a Ph.D. in Computer Science from Stanford University in 1974 for research supervised by Robert W. Floyd. Rivest is a co-author of Introduction to Algorithms, a standard textbook on algorithms, with Thomas H. Cormen, Charles E. Leiserson and Clifford Stein. He is a member of the MIT Computer Science and Artificial Intelligence Laboratory in the Theory of Computation Group, and a founder of its Cryptography and Information Security Group. He was a founder of RSA Data Security and of Peppercoin. Rivest has research interests in algorithms and voting; his former doctoral students include Avrim Blum, Burt Kaliski, Ron Pinter, Robert Schapire, Alan Sherman, and Mona Singh.
His publications include:
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. Introduction to Algorithms (first edition). MIT Press and McGraw-Hill. ISBN 978-0-262-03141-7.
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford. Introduction to Algorithms (second edition). MIT Press and McGraw-Hill. ISBN 978-0-262-53196-2.
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford. Introduction to Algorithms (third edition). MIT Press. ISBN 978-0-262-03384-8.
Rivest is a member of the National Academy of Engineering and the National Academy of Sciences, and is a Fellow of the Association for Computing Machinery, the International Association for Cryptologic Research, and the American Academy of Arts and Sciences. Together with Adi Shamir and Len Adleman, he has been awarded the 2000 IEEE Koji Kobayashi Computers and Communications Award and the Secure Computing Lifetime Achievement Award; he also shared the Turing Award with them. Rivest has received an honorary degree from the Sapienza University of Rome. In 2005, he received the MITX Lifetime Achievement Award. Rivest was named a Marconi Fellow in 2007, and on May 29, 2008 he gave the Chesley lecture at Carleton College. He was named an Institute Professor at MIT in June 2015.
Computational complexity theory
Computational complexity theory focuses on classifying computational problems according to their inherent difficulty, and relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e. the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication, the number of gates in a circuit, and the number of processors. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do; the P versus NP problem, one of the seven Millennium Prize Problems, is dedicated to the field of computational complexity.
Related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance; the input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself.
In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing; the instance is a number and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the traveling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
When considering computational problems, a problem instance is a string over an alphabet. The alphabet is usually taken to be the binary alphabet {0, 1}, and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary. Though some proofs of complexity-theoretic theorems assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding; this can be achieved by ensuring that different representations can be transformed into each other efficiently. Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no, or alternately either 1 or 0. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no.
The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. An example of a decision problem is the following: the input is an arbitrary graph, and the problem consists in deciding whether the graph is connected. The formal language associated with this decision problem is the set of all connected graphs; to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings. A function problem is a computational problem where a single output is expected for every input, but the output is more complex than that of a decision problem; that is, the output isn't just yes or no. Notable examples include the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not the case, since function problems can be recast as decision problems.
For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
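A minimal Python sketch of this recasting (the function name is_multiplication_triple is an illustrative choice, not from the original text):

    def is_multiplication_triple(a, b, c):
        # Decision version of multiplication: is (a, b, c) in the language a * b = c?
        return a * b == c

    print(is_multiplication_triple(6, 7, 42))  # True
    print(is_multiplication_triple(6, 7, 41))  # False

Being able to answer such membership queries for arbitrary triples is as powerful as computing the product itself, which is the sense in which the function problem reduces to decision problems.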
ArXiv is a repository of electronic preprints approved for posting after moderation, but not full peer review. It consists of scientific papers in the fields of mathematics, physics, astronomy, electrical engineering, computer science, quantitative biology, statistics, mathematical finance and economics, which can be accessed online. In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008, and had hit a million by the end of 2014. By October 2016 the submission rate had grown to more than 10,000 per month. ArXiv was made possible by the compact TeX file format, which allowed scientific papers to be transmitted over the Internet and rendered client-side. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Paul Ginsparg recognized the need for central storage, and in August 1991 he created a central repository mailbox stored at the Los Alamos National Laboratory which could be accessed from any computer.
Additional modes of access were soon added: FTP in 1991, Gopher in 1992, and the World Wide Web in 1993. The term e-print was adopted to describe the articles. It began as a physics archive, called the LANL preprint archive, but soon expanded to include astronomy, computer science, quantitative biology and, most recently, statistics. Its original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the expanding technology, in 2001 Ginsparg changed institutions to Cornell University and changed the name of the repository to arXiv.org. It is now hosted principally by Cornell, with eight mirrors around the world. Its existence was one of the precipitating factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists upload their papers to arXiv.org for worldwide access and sometimes for reviews before they are published in peer-reviewed journals. Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. The annual budget for arXiv is $826,000 for 2013 to 2017, funded jointly by Cornell University Library, the Simons Foundation and annual fee income from member institutions.
This model arose in 2010, when Cornell sought to broaden the financial funding of the project by asking institutions to make annual voluntary contributions based on the amount of download usage by each institution. Each member institution pledges a five-year funding commitment to support arXiv. Based on institutional usage ranking, the annual fees are set in four tiers from $1,000 to $4,400. Cornell's goal is to raise at least $504,000 per year through membership fees generated by 220 institutions. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it "was supposed to be a three-hour tour, not a life sentence". However, Ginsparg remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee. Although arXiv is not peer reviewed, a collection of moderators for each area review the submissions; the lists of moderators for many sections of arXiv are publicly available, but moderators for most of the physics sections remain unlisted.
Additionally, an "endorsement" system was introduced in 2004 as part of an effort to ensure content is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors, but to check whether the paper is appropriate for the intended subject area. New authors from recognized academic institutions receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for restricting scientific inquiry. A majority of the e-prints are submitted to journals for publication, but some works, including some influential papers, remain purely as e-prints and are never published in a peer-reviewed journal. A well-known example of the latter is an outline of a proof of Thurston's geometrization conjecture, including the Poincaré conjecture as a particular case, uploaded by Grigori Perelman in November 2002.
Perelman appears content to forgo the traditional peer-reviewed journal process, stating: "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it". Despite this non-traditional method of publication, other mathematicians recognized this work by offering the Fields Medal and Clay Mathematics Millennium Prizes to Perelman, both of which he refused. Papers can be submitted in any of several formats, including LaTeX and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the arXiv software if generating the final PDF file fails, if any image file is too large, or if the total size of the submission is too large. ArXiv now allows one to store and modify an incomplete submission, and only finalize the submission when ready; the time stamp on the article is set when the submission is finalized. The standard access route is through the arXiv.org website or one of several mirrors.
Computer science is the study of processes that interact with data and that can be represented as data in the form of programs. It enables the use of algorithms to manipulate and communicate digital information. A computer scientist studies the theory of computation and the practice of designing software systems. Its fields can be divided into theoretical and practical disciplines: computational complexity theory is abstract, while computer graphics emphasizes real-world applications. Programming language theory considers approaches to the description of computational processes, while computer programming itself involves the use of programming languages and complex systems. Human–computer interaction considers the challenges in making computers useful and accessible. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division.
Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. He may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he released his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer".
"A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, considered to be the first computer program. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, making all kinds of punched card equipment and was in the calculator business to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit; when the machine was finished, some hailed it as "Babbage's dream come true". During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.
As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City; the renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world; the close relationship between IBM and the university was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s; the world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.
Since practical computers became available, many applications of computing have become distinct areas of study in their own rights. Although many believed it was impossible that computers themselves could be a scientific field of study, in the late fifties it became accepted among the greater academic population. It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and the IBM 709 computers, which were used during the exploration period of such devices. "Still, working with the IBM [computer] was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again". During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace. Time has seen significant improvements in the effectiveness of computing technology. Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals, to a near-ubiquitous user base.
Computers were quite costly, and some degree of human aid was needed for efficient use, in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage. Despite its short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society; in fact, along with electronics, it is a founding science of the current epoch of human history, the Information Age.
Cryptography or cryptology is the practice and study of techniques for secure communication in the presence of third parties called adversaries. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, electrical engineering, communication science, and physics. Applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications. Cryptography prior to the modern age was effectively synonymous with encryption, the conversion of information from a readable state to apparent nonsense. The originator of an encrypted message shares the decoding technique only with intended recipients to preclude access from adversaries. The cryptography literature uses the names Alice for the sender, Bob for the intended recipient, and Eve for the adversary. Since the development of rotor cipher machines in World War I and the advent of computers in World War II, the methods used to carry out cryptology have become increasingly complex and its application more widespread.
Modern cryptography is heavily based on mathematical theory and computer science practice. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means; these schemes are therefore termed computationally secure. There exist information-theoretically secure schemes that provably cannot be broken even with unlimited computing power (an example is the one-time pad, sketched below), but these schemes are more difficult to use in practice than the best theoretically breakable but computationally secure mechanisms. The growth of cryptographic technology has raised a number of legal issues in the information age. Cryptography's potential for use as a tool for espionage and sedition has led many governments to classify it as a weapon and to limit or prohibit its use and export. In some jurisdictions where the use of cryptography is legal, laws permit investigators to compel the disclosure of encryption keys for documents relevant to an investigation. Cryptography also plays a major role in digital rights management and copyright infringement of digital media.
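As a minimal Python sketch of the one-time pad mentioned above (the function names otp_encrypt and otp_decrypt are illustrative; real use requires a truly random key that is as long as the message and never reused):

    import os

    def otp_encrypt(message):
        # XOR each message byte with a fresh random key byte.
        key = os.urandom(len(message))  # key must be as long as the message
        ciphertext = bytes(m ^ k for m, k in zip(message, key))
        return ciphertext, key

    def otp_decrypt(ciphertext, key):
        # Decryption is the same XOR applied with the same key.
        return bytes(c ^ k for c, k in zip(ciphertext, key))

    ciphertext, key = otp_encrypt(b"attack at dawn")
    print(otp_decrypt(ciphertext, key))  # b'attack at dawn'

Because every key is equally likely, the ciphertext alone reveals nothing about the message, which is what makes the scheme information-theoretically secure.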
The first use of the term cryptograph dates back to the 19th century, originating from The Gold-Bug, a story by Edgar Allan Poe. Until modern times, cryptography referred exclusively to encryption, the process of converting ordinary information into unintelligible form. Decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. A cipher is a pair of algorithms that create the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a "key". The key is a secret, usually a short string of characters, which is needed to decrypt the ciphertext. Formally, a "cryptosystem" is the ordered list of elements of finite possible plaintexts, finite possible ciphertexts, finite possible keys, and the encryption and decryption algorithms which correspond to each key. Keys are important both formally and in actual practice, as ciphers without variable keys can be trivially broken with only the knowledge of the cipher used and are therefore useless for most purposes.
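To illustrate the role of the key, here is a toy shift cipher in Python (a deliberately weak, purely illustrative example, not a real cryptosystem; the names shift_encrypt and shift_decrypt are assumptions of this sketch). The algorithm is public and only the key varies:

    def shift_encrypt(plaintext, key):
        # Shift each lowercase letter forward by `key` positions (a Caesar-style cipher).
        return "".join(
            chr((ord(c) - ord("a") + key) % 26 + ord("a")) if c.islower() else c
            for c in plaintext
        )

    def shift_decrypt(ciphertext, key):
        # The reversing algorithm shifts backward by the same key.
        return shift_encrypt(ciphertext, -key)

    ct = shift_encrypt("meet me at dawn", 3)
    print(ct)                    # phhw ph dw gdzq
    print(shift_decrypt(ct, 3))  # meet me at dawn

With only 26 possible keys, exhaustive search breaks this cipher immediately; this is the sense in which a cipher with too few (or no) variable keys is useless, and why modern ciphers use key spaces far too large to enumerate.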
Historically, ciphers were often used directly for encryption or decryption without additional procedures such as authentication or integrity checks. There are two kinds of cryptosystems: symmetric and asymmetric. In symmetric systems, the same key is used to encrypt and decrypt a message. Data manipulation in symmetric systems is faster than in asymmetric systems, as they use shorter key lengths. Asymmetric systems use a public key to encrypt a message and a private key to decrypt it. Use of asymmetric systems enhances the security of communication. Examples of asymmetric systems include RSA and ECC. Symmetric ciphers include the commonly used AES, which replaced the older DES. In colloquial use, the term "code" is used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a more specific meaning: it means the replacement of a unit of plaintext with a code word. Cryptanalysis is the term used for the study of methods for obtaining the meaning of encrypted information without access to the key required to do so. Some use the terms cryptography and cryptology interchangeably in English, while others use cryptography to refer to the use and practice of cryptographic techniques and cryptology to refer to the combined study of cryptography and cryptanalysis.
English is more flexible than several other languages, in which cryptology is always used in the second sense above.