1.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the field can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on the challenges of implementing computation; human–computer interaction, for example, considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity, and algorithms for performing computations have likewise existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623; in 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist for, among other reasons, documenting the binary number system. Charles Babbage started developing his Analytical Engine, a programmable mechanical computer, in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched-card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the machine was finished, some hailed it as Babbage's dream come true.
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953; the first computer science program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with an IBM computer was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
2.
Textbook
–
A textbook or coursebook is a manual of instruction in any branch of study. Textbooks are produced according to the demands of educational institutions; schoolbooks are textbooks and other books used in schools. Although most textbooks are only published in printed format, many are now also available as online electronic books. The ancient Greeks wrote texts intended for education, but the modern textbook has its roots in the standardization made possible by the printing press; Johannes Gutenberg himself may have printed editions of Ars Minor, a schoolbook on Latin grammar by Aelius Donatus. Early textbooks were used by tutors and teachers, who used the books as instructional aids. The Greek philosopher Plato lamented the loss of knowledge because the media of transmission were changing: before the invention of the Greek alphabet 2,500 years ago, knowledge and stories were recited aloud, and the new technology of writing meant stories no longer needed to be memorized, a development Socrates feared would weaken the Greeks' mental capacities for memorizing and retelling. The next revolution for books came with the 15th-century invention of printing with changeable type. The invention is attributed to German metalsmith Johannes Gutenberg, who cast type in molds using a melted metal alloy and constructed a wooden-screw printing press to transfer the image onto paper. Gutenberg's invention made mass production of texts possible for the first time, and compulsory education and the subsequent growth of schooling in Europe led to the printing of many standardized texts for children. Textbooks have been the primary teaching instrument for most children since the 19th century; two textbooks of historical significance in United States schooling were the 18th-century New England Primer and the 19th-century McGuffey Readers.
Technological advances change the way people interact with textbooks: online and digital materials are making it increasingly easy for students to access materials other than the traditional print textbook. Students now have access to electronic and PDF books and online tutoring systems; an example of an electronically published book, or e-book, is Principles of Biology from Nature Publishing. Most notably, a number of authors are foregoing commercial publishers. The textbook market does not operate in the same manner as most consumer markets. First, the end consumers (students) do not select the product; therefore, price is removed from the purchasing decision, giving the producer disproportionate market power to set prices high. This fundamental difference in the market is often cited as the reason that prices are out of control. The term "broken market" first appeared in the economist James Koch's analysis of the textbook market commissioned by the Advisory Committee on Student Financial Assistance. This situation is exacerbated by the lack of competition in the textbook market: consolidation in the past few decades has reduced the number of major textbook companies from around 30 to just a handful, so there is less competition than there used to be. Students seek relief from rising prices through the purchase of used copies of textbooks, which tend to be less expensive.
3.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN scheme was generated in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966; the 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN scheme was devised in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay; the United Kingdom continued to use the 9-digit SBN code until 1974, and the ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340013818: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces; separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture; in the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations and ISBNs are issued for a fee. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
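The check-digit arithmetic described above is simple enough to sketch in code. The following Python functions (the names are illustrative, not from any standard library) reproduce the Mr. J. G. Reeder Returns example: an ISBN-10 check digit makes the weighted sum of all ten digits divisible by 11, which is also why prefixing an SBN with 0 needs no recalculation, since the new leading digit contributes nothing to the sum.

```python
def isbn10_check_digit(first9: str) -> str:
    """ISBN-10 check digit: weight the first nine digits 10 down to 2 and
    choose the digit that makes the total divisible by 11 ('X' stands for 10)."""
    total = sum(w * int(d) for w, d in zip(range(10, 1, -1), first9))
    r = (11 - total % 11) % 11
    return "X" if r == 10 else str(r)

def sbn_to_isbn10(sbn: str) -> str:
    """A 9-digit SBN becomes an ISBN-10 by prefixing the digit 0; the check
    digit is unchanged because the leading 0 adds nothing to the weighted sum."""
    return "0" + sbn

def isbn10_to_isbn13(isbn10: str) -> str:
    """Prefix 978 (the 'Bookland' EAN prefix) and recompute the check digit
    with the EAN-13 weights 1, 3, 1, 3, ..."""
    first12 = "978" + isbn10[:9]
    total = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(first12))
    return first12 + str((10 - total % 10) % 10)

print(sbn_to_isbn10("340013818"))       # 0340013818
print(isbn10_check_digit("034001381"))  # 8, matching ISBN 0-340-01381-8
print(isbn10_to_isbn13("0340013818"))   # 9780340013816
```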
4.
OCLC
–
The Online Computer Library Center (OCLC) is a US-based nonprofit cooperative organization dedicated to the public purposes of furthering access to the world's information and reducing information costs. It was founded in 1967 as the Ohio College Library Center. OCLC and its member libraries cooperatively produce and maintain WorldCat, the largest online public access catalog in the world. OCLC is funded mainly by the fees that libraries pay for its services. The group first met on July 5, 1967 on the campus of the Ohio State University to sign the articles of incorporation for the nonprofit organization, and hired Frederick G. Kilgour, a former Yale University medical school librarian. Kilgour wished to merge the latest information storage and retrieval system of the time, the computer, with the oldest, the library. The goal of the network and database was to bring libraries together to cooperatively keep track of the world's information in order to best serve researchers and scholars. The first library to do online cataloging through OCLC was the Alden Library at Ohio University, on August 26, 1971; this was the first occurrence of online cataloging by any library worldwide. Membership in OCLC is based on use of services and contribution of data. Between 1967 and 1977, OCLC membership was limited to institutions in Ohio, but in 1978 a new governance structure was established that allowed institutions from other states to join, and in 2002 the structure was again modified to accommodate participation from outside the United States. As OCLC expanded services in the United States outside of Ohio, it relied on establishing strategic partnerships with networks, organizations that provided training and support; by 2008, there were 15 independent United States regional service providers.
OCLC networks played a key role in OCLC governance, with networks electing delegates to serve on OCLC Members Council; in early 2009, OCLC negotiated new contracts with the former networks and opened a centralized support center. OCLC provides bibliographic, abstract, and full-text information to anyone. OCLC and its member libraries cooperatively produce and maintain WorldCat, the OCLC Online Union Catalog, the largest online public access catalog in the world; WorldCat has holding records from public and private libraries worldwide. In October 2005, the OCLC technical staff began a wiki project, WikiD, allowing readers to add commentary and structured-field information associated with any WorldCat record. The Online Computer Library Center acquired the trademark and copyrights associated with the Dewey Decimal Classification System when it bought Forest Press in 1988. A browser for books with their Dewey Decimal Classifications was available until July 2013, when it was replaced by the Classify Service. The reference service QuestionPoint provides libraries with tools to communicate with users; this around-the-clock reference service is provided by a cooperative of participating global libraries. OCLC has produced catalog cards for members since 1971 with its shared online catalog. OCLC commercially sells software, e.g. CONTENTdm for managing digital collections. OCLC has been conducting research for the library community for more than 30 years and, in accordance with its mission, makes its research outcomes known through various publications; these publications, including journal articles, reports, newsletters, and presentations, are available through the organization's website. The most recent publications are displayed first, and archived resources remain available. Membership Reports – a number of significant reports on topics ranging from virtual reference in libraries to perceptions about library funding.
5.
Dewey Decimal Classification
–
The Dewey Decimal Classification (DDC), or Dewey Decimal System, is a proprietary library classification system first published in the United States by Melvil Dewey in 1876. It has been revised and expanded through 23 major editions, the latest issued in 2011, and is also available in an abridged version suitable for smaller libraries. It is currently maintained by the Online Computer Library Center (OCLC), a cooperative that serves libraries; OCLC licenses access to a version for catalogers called WebDewey. The Decimal Classification introduced the concepts of relative location and relative index, which allow new books to be added to a library in their appropriate location based on subject. Libraries previously had given books permanent shelf locations that were related to the order of acquisition rather than topic. The classification's notation makes use of three-digit Arabic numerals for main classes, with fractional decimals allowing expansion for further detail. Using Arabic numerals for symbols, it is flexible to the degree that numbers can be expanded in linear fashion to cover aspects of general subjects. A library assigns a classification number that unambiguously locates a particular volume in a position relative to other books in the library; the number makes it possible to find any book and to return it to its proper place on the library shelves. The classification system is used in 200,000 libraries in at least 135 countries. The major competing classification system to the Dewey Decimal system is the Library of Congress Classification system, created by the U.S. Library of Congress. Melvil Dewey was an American librarian and self-declared reformer; he was a founding member of the American Library Association and can be credited with the promotion of card systems in libraries and business. He developed the ideas for his classification system in 1873 while working at Amherst College library, and applied the classification to the books in that library until, in 1876, he had a first version of the classification.
In 1876, he published the classification in pamphlet form with the title A Classification and Subject Index for Cataloguing and Arranging the Books and Pamphlets of a Library. He used the pamphlet, published in more than one version during the year, to solicit comments from other librarians; it is not known who received copies or how many commented, as only one copy with comments has survived. In March 1876, he applied for, and received, copyright on the first edition of the index. The first edition was 44 pages in length, with 2,000 index entries; the second edition comprised 314 pages, with 10,000 index entries. Editions 3–14, published between 1888 and 1942, used a variant of this same title. Dewey modified and expanded his system considerably for the second edition; in an introduction to that edition, Dewey states that nearly 100 persons "hav contributed criticisms" (in Dewey's reformed spelling). One of the innovations of the Dewey Decimal system was that of positioning books on the shelves in relation to other books on similar topics.
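The practical effect of relative location can be sketched with a toy example. Because main classes are three-digit numbers extended by fractional decimals, call numbers compare like decimal fractions, so for plain three-digit class numbers a simple string sort reproduces shelf order; the call numbers below are invented for illustration.

```python
# Hypothetical Dewey call numbers (three-digit class plus optional decimals).
shelf = ["512.5", "516.3", "510", "513.2"]
shelf.sort()  # decimal notation means lexicographic order matches shelf order
print(shelf)  # ['510', '512.5', '513.2', '516.3']

# Relative location: a new, more specific number slots between existing books
# without renumbering anything already on the shelves.
shelf.append("512.52")
shelf.sort()
print(shelf)  # ['510', '512.5', '512.52', '513.2', '516.3']
```

This relies on every main class having exactly three digits; real call numbers with Cutter numbers or prefixes would need a more careful sort key.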
6.
Library of Congress Classification
–
The Library of Congress Classification (LCC) is a system of library classification developed by the Library of Congress. It is used by most research and academic libraries in the U.S. The Classification is also distinct from Library of Congress Subject Headings, the system of labels such as Boarding schools and Boarding schools—Fiction that describe contents systematically. The classification was invented by Herbert Putnam in 1897, just before he assumed the librarianship of Congress, with advice from Charles Ammi Cutter; it was influenced by Cutter's Expansive Classification, the Dewey Decimal System, and the Putnam Classification System. It was designed specifically for the purposes and collection of the Library of Congress, to replace the fixed-location system developed by Thomas Jefferson. By the time Putnam departed from his post in 1939, all the classes except K and parts of B were well developed. LCC has been criticized for lacking a theoretical basis: many of the classification decisions were driven by the practical needs of that library rather than by epistemological considerations. Although it divides subjects into broad categories, it is essentially enumerative in nature; that is, it provides a guide to the books actually in one library's collection, not a classification of the world. In 2007 the Wall Street Journal reported that, in the countries it surveyed, most public libraries and small academic libraries still used the older Dewey Decimal Classification. The National Library of Medicine classification system (NLM) uses the initial letters W and QS–QZ, which are not used by LCC. Some libraries use NLM in conjunction with LCC, eschewing LCC's R for Medicine; others use LCC's QP–QR schedules and include Medicine R. Subclasses include:

Subclass AC – Collections. Collected works
Subclass AE – Encyclopedias
Subclass AG – Dictionaries and other reference works
Subclass AI – Indexes
Subclass AM – Museums. Collectors and collecting
Subclass AN – Newspapers
Subclass AP – Periodicals
Subclass AS – Academies and learned societies; directories
Subclass AZ – History of scholarship and learning. The humanities
Subclass B – Philosophy
Subclass BC – Logic
Subclass BD – Speculative philosophy
Subclass BF – Psychology
Subclass BH – Aesthetics
Subclass BJ – Ethics
Subclass BL – Religions. Mythology. Rationalism
Subclass BM – Judaism
Subclass BP – Islam
Subclass CD – Diplomatics. Archives. Seals
Subclass CE – Technical chronology. Calendar
Subclass CJ – Numismatics
Subclass CN – Inscriptions. Epigraphy
Subclass DK – Russia. Soviet Union. Former Soviet Republics – Poland
Subclass DL – Northern Europe. Scandinavia
Subclass G – Geography (General). Atlases. Maps
Subclass GA – Mathematical geography. Cartography
Subclass GB – Physical geography
Subclass GC – Oceanography
Subclass GE – Environmental sciences
Subclass GF – Human ecology. Anthropogeography
Subclass GN – Anthropology
Subclass GR – Folklore
Subclass GT – Manners and customs
Subclass GV – Recreation. Leisure
Subclass H – Social sciences
Subclass HA – Statistics
Subclass HB – Economic theory. Demography
Subclass HC – Economic history and conditions
Subclass HD – Industries. Land use. Labor
Subclass HE – Transportation and communications
Subclass HF – Commerce
Subclass HG – Finance
Subclass HJ – Public finance
Subclass HM – Sociology
Subclass HN – Social history and conditions. Social problems. Social reform
Subclass HQ – The family. Marriage. Women and sexuality
Subclass HS – Societies: secret, benevolent, etc.
Subclass HT – Communities. Classes. Races
Subclass HV – Social pathology. Social and public welfare. Criminology
Subclass JS – Local government. Municipal government
Subclass JV – Colonies and colonization. Emigration and immigration. International migration
Subclass JX – International law, see JZ and KZ
Subclass JZ – International relations
Subclass K – Law in general
7.
Computational complexity theory
–
Computational complexity theory focuses on classifying computational problems according to their inherent difficulty. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication or the number of gates in a circuit. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. Closely related fields in computer science are the analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance, and the input string for a problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved; in contrast, an instance of this problem is a rather concrete utterance. For example, consider the problem of primality testing: the instance is a number and the solution is "yes" if the number is prime. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances. When considering computational problems, a problem instance is a string over an alphabet; usually, the alphabet is taken to be the binary alphabet, as in a real-world computer. Mathematical objects other than bitstrings must be suitably encoded: for example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. To keep results independent of the choice of encoding, one ensures that different representations can be transformed into each other efficiently.
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either yes or no, and it can be viewed as a formal language, where the members of the language are exactly the instances whose output is yes. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language; if the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject it. Primality testing, described above, is an example of such a decision problem.
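The primality-testing problem mentioned above fits this framing directly: an instance is a string over the binary alphabet encoding a number, and the decision algorithm accepts or rejects it. A minimal sketch (trial division is used only for brevity, not efficiency):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check; adequate for tiny instances."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def decide_primality(instance: str) -> str:
    """Decode a problem instance (a binary string) and answer yes or no."""
    n = int(instance, 2)
    return "yes" if is_prime(n) else "no"

print(decide_primality("111"))   # 7 is prime, so the algorithm accepts: yes
print(decide_primality("1000"))  # 8 is composite, so it rejects: no
```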
8.
NP-completeness
–
In computational complexity theory, a decision problem is NP-complete when it is both in NP and NP-hard. The set of NP-complete problems is often denoted by NP-C or NPC; the abbreviation NP refers to nondeterministic polynomial time. Although any given solution to an NP-complete problem can be verified quickly, there is no known efficient way to find a solution in the first place: the time required to solve the problem using any currently known algorithm increases very quickly as the size of the problem grows. As a consequence, determining whether it is possible to solve these problems quickly remains one of the principal unsolved problems in computer science; in the meantime, NP-complete problems are often addressed by using heuristic methods and approximation algorithms. A problem p in NP is NP-complete if every problem in NP can be transformed into p in polynomial time. NP-complete problems are studied because the ability to quickly verify solutions to a problem seems to correlate with the ability to quickly solve that problem. It is not known whether every problem in NP can be quickly solved; this is called the P versus NP problem. Because of this, it is often said that NP-complete problems are harder or more difficult than NP problems in general. A decision problem C is NP-complete if (1) C is in NP, and (2) every problem in NP is reducible to C in polynomial time. C can be shown to be in NP by demonstrating that a candidate solution to C can be verified in polynomial time. Note that a problem satisfying condition (2) alone is said to be NP-hard. A consequence of this definition is that if we had a polynomial-time algorithm for C, we could solve all problems in NP in polynomial time. The concept of NP-completeness was introduced in 1971, though the term NP-complete was introduced later. At the 1971 STOC conference, there was a fierce debate among computer scientists about whether NP-complete problems could be solved in polynomial time on a deterministic Turing machine; this is known as the question of whether P = NP. Nobody has yet been able to determine conclusively whether NP-complete problems are in fact solvable in polynomial time, making this one of the great unsolved problems of mathematics.
The Clay Mathematics Institute is offering a US $1 million reward to anyone who has a formal proof that P = NP or that P ≠ NP. The Cook–Levin theorem states that the Boolean satisfiability problem is NP-complete; in 1972, Richard Karp proved that several other problems were also NP-complete, so there is a whole class of NP-complete problems (for more details, refer to Introduction to the Design and Analysis of Algorithms by Anany Levitin). An interesting example is the graph isomorphism problem, the graph theory problem of determining whether a graph isomorphism exists between two graphs. Two graphs are isomorphic if one can be transformed into the other simply by renaming vertices. Consider these two problems. Graph Isomorphism: is graph G1 isomorphic to graph G2? Subgraph Isomorphism: is graph G1 isomorphic to a subgraph of graph G2? The Subgraph Isomorphism problem is NP-complete, whereas the graph isomorphism problem is suspected to be neither in P nor NP-complete; this is an example of a problem that is thought to be hard, but is not thought to be NP-complete.
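The contrast between verifying and solving can be made concrete for graph isomorphism: a claimed isomorphism (the vertex renaming itself) serves as a certificate that can be checked in polynomial time, even though finding one is not known to be easy. A sketch, with invented graph data:

```python
def verify_isomorphism(edges1, edges2, mapping):
    """Polynomial-time certificate check: does renaming the vertices of the
    first graph via `mapping` (a dict) yield exactly the second graph's edges?"""
    injective = len(mapping) == len(set(mapping.values()))
    renamed = {frozenset(mapping[u] for u in e) for e in edges1}
    return injective and renamed == set(edges2)

# A triangle on letters, a triangle on numbers, and a 2-edge path on letters.
g1 = [frozenset(e) for e in [("a", "b"), ("b", "c"), ("c", "a")]]
g2 = [frozenset(e) for e in [(1, 2), (2, 3), (3, 1)]]
g3 = [frozenset(e) for e in [("a", "b"), ("b", "c")]]

print(verify_isomorphism(g1, g2, {"a": 1, "b": 2, "c": 3}))  # True
print(verify_isomorphism(g3, g2, {"a": 1, "b": 2, "c": 3}))  # False: a path is not a triangle
```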
9.
Edge coloring
–
In graph theory, an edge coloring of a graph is an assignment of colors to the edges of the graph so that no two adjacent edges have the same color. For example, the figure to the right shows an edge coloring of a graph by the colors red, blue, and green. Edge colorings are one of several different types of graph coloring. The edge-coloring problem asks whether it is possible to color the edges of a given graph using at most k different colors, for a given value of k. The minimum required number of colors for the edges of a given graph is called the chromatic index of the graph. For example, the edges of the graph in the illustration can be colored by three colors but cannot be colored by two colors, so the graph shown has chromatic index three. By Vizing's theorem, the number of colors needed to edge color a simple graph is either its maximum degree Δ or Δ + 1. For some graphs, such as bipartite graphs and high-degree planar graphs, the number of colors is always Δ, and for multigraphs the number of colors may be as large as 3Δ/2. Many variations of the edge-coloring problem, in which an assignment of colors to edges must satisfy other conditions than non-adjacency, have been studied. Edge colorings have applications in scheduling problems and in frequency assignment for fiber optic networks. A cycle graph may have its edges colored with two colors if the length of the cycle is even: simply alternate the two colors around the cycle. However, if the length is odd, three colors are needed. A complete graph Kn with n vertices is edge-colorable with n − 1 colors when n is an even number; this is a special case of Baranyai's theorem. Soifer provides the following geometric construction of a coloring in this case: place n points at the vertices and center of a regular (n − 1)-sided polygon, and for each color class, include one edge from the center to one of the polygon vertices, together with all the edges perpendicular to it connecting pairs of polygon vertices. However, when n is odd, n colors are needed: each color can only be used for (n − 1)/2 edges, a 1/n fraction of the total.
The odd graphs On provide another example: the case n = 3 gives the well-known Petersen graph. When n is 3, 4, or 8, an edge coloring of On requires n + 1 colors, but when it is 5, 6, or 7, only n colors are needed. Here, two edges are considered to be adjacent when they share a common vertex. A proper edge coloring with k different colors is called a k-edge-coloring, and a graph that can be assigned a k-edge-coloring is said to be k-edge-colorable. The smallest number of colors needed in a proper edge coloring of a graph G is the chromatic index, or edge chromatic number, χ′(G). The chromatic index is sometimes written using the notation χ1(G); in this notation, the subscript 1 indicates that edges are one-dimensional objects. A graph is k-edge-chromatic if its chromatic index is exactly k. The chromatic index should not be confused with the chromatic number χ(G) or χ0(G), the minimum number of colors needed in a proper vertex coloring of G.
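The cycle-coloring argument above can be written out directly. In this sketch, edge i joins vertices i and i + 1 (mod n), so consecutive edges are adjacent:

```python
def color_cycle_edges(n):
    """Proper edge coloring of the cycle C_n: alternate two colors around the
    cycle when n is even; an odd cycle needs a third color for its last edge,
    which is adjacent to edges of both other colors."""
    colors = [i % 2 for i in range(n)]
    if n % 2 == 1:
        colors[-1] = 2
    return colors

print(color_cycle_edges(6))  # [0, 1, 0, 1, 0, 1] -> chromatic index 2
print(color_cycle_edges(5))  # [0, 1, 0, 1, 2]    -> chromatic index 3
```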
10.
Order dimension
–
In mathematics, the dimension of a partially ordered set (poset) is the smallest number of total orders the intersection of which gives rise to the partial order. This concept is sometimes called the order dimension or the Dushnik–Miller dimension of the partial order; Dushnik & Miller first studied order dimension. A family R = {<1, …, <t} of linear extensions of P is called a realizer of P if P is exactly their intersection, that is, P = ⋂R = <1 ∩ ⋯ ∩ <t. Thus, an equivalent definition of the dimension of a poset P is the least cardinality of a realizer of P. It can be shown that any nonempty family R of linear extensions is a realizer of a partially ordered set P if and only if, for every critical pair (x, y) of P, y <i x for some order <i in R. Let n be an integer, and let P be the partial order on the elements ai and bi (1 ≤ i ≤ n) in which ai ≤ bj whenever i ≠ j; in particular, ai and bi are incomparable in P. P can be viewed as a form of a crown graph; the illustration shows an ordering of this type for n = 4. Then, for each i, any realizer must contain a linear order that begins with all the aj except ai, then includes bi, then ai, and ends with all the remaining bj. Conversely, any family of linear orders that includes one order of this type for each i has P as its intersection. Thus, P has dimension exactly n; in fact, P is known as the standard example of a poset of dimension n, and is usually denoted by Sn. The partial orders with order dimension two may be characterized as the partial orders whose comparability graph is the complement of the comparability graph of a different partial order: if P is realized by two linear extensions, then the partial order Q complementary to P may be realized by reversing one of the two linear extensions. The partial orders of order dimension two include the series-parallel partial orders; they are exactly the partial orders whose Hasse diagrams have dominance drawings. However, for any k ≥ 3, it is NP-complete to test whether the order dimension is at most k. For a complete graph on n vertices, the dimension of the incidence poset is Θ(log log n).
It follows that all simple n-vertex graphs have incidence posets with order dimension O(log log n). A generalization of dimension is the notion of k-dimension, which is the minimal number of chains of length at most k in whose product the partial order can be embedded; in particular, the 2-dimension of an order can be seen as the size of the smallest set such that the order embeds in the containment order on this set.

See also: interval dimension.

References:
Baker, K. A.; Fishburn, P.; Roberts, F. S. (1972), "Partial orders of dimension 2", Networks, 2: 11–28, doi:10.1002/net.3230020103.
Dushnik, Ben; Miller, E. W. (1941), "Partially ordered sets", American Journal of Mathematics, 63: 600–610, doi:10.2307/2371374, JSTOR 2371374.
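The realizer construction for the standard example Sn described above can be checked mechanically for small n. The sketch below builds, for each i, the linear order that lists all aj except ai, then bi, then ai, then the remaining bj, and confirms that the intersection of these orders is exactly the relation ai < bj for i ≠ j:

```python
def pairs(order):
    """All (x, y) with x strictly before y in a linear order given as a list."""
    return {(order[i], order[j])
            for i in range(len(order)) for j in range(i + 1, len(order))}

n = 3
realizer = []
for i in range(1, n + 1):
    order = [f"a{j}" for j in range(1, n + 1) if j != i]   # all a_j except a_i
    order += [f"b{i}", f"a{i}"]                            # then b_i, then a_i
    order += [f"b{j}" for j in range(1, n + 1) if j != i]  # then remaining b_j
    realizer.append(order)

common = set.intersection(*(pairs(o) for o in realizer))
# The standard example S_n: a_i < b_j exactly when i != j.
expected = {(f"a{i}", f"b{j}")
            for i in range(1, n + 1) for j in range(1, n + 1) if i != j}
print(common == expected)  # True: the three linear orders realize S_3
```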
11.
Linear programming
–
Linear programming (LP) is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming (mathematical optimization). More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half spaces. Its objective function is an affine function defined on this polyhedron. A linear programming algorithm finds a point in the polyhedron where this function has the smallest (or largest) value, if such a point exists. The expression to be maximized or minimized is called the objective function; the inequalities Ax ≤ b and x ≥ 0 are the constraints which specify a convex polytope over which the objective function is to be optimized. In this context, two vectors are comparable when they have the same dimensions: if every entry in the first is less than or equal to the corresponding entry in the second, then we can say the first vector is less than or equal to the second. Linear programming can be applied to various fields of study. It is widely used in business and economics, and is also utilized for some engineering problems. Industries that use linear programming models include transportation, energy, and telecommunications, and it has proved useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design. The first linear programming formulation of a problem that is equivalent to the general linear programming problem was given by Leonid Kantorovich in 1939, who developed it during World War II as a way to plan expenditures and returns so as to reduce costs to the army. About the same time as Kantorovich, the Dutch-American economist T. C. Koopmans formulated classical economic problems as linear programs; Kantorovich and Koopmans later shared the 1975 Nobel prize in economics.
Dantzig independently developed a general linear programming formulation to use for planning problems in the US Air Force. In 1947, Dantzig also invented the simplex method, which for the first time efficiently tackled the linear programming problem in most cases. Dantzig provided a formal proof in an unpublished report, A Theorem on Linear Inequalities, on January 5, 1948. Postwar, many industries found its use in their daily planning. Dantzig's original example was to find the best assignment of 70 people to 70 jobs. The computing power required to test all the permutations to select the best assignment is vast; the number of possible configurations exceeds the number of particles in the observable universe. However, it takes only a moment to find the optimum solution by posing the problem as a linear program
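The scale of Dantzig's 70-by-70 example is easy to verify; a quick check of the claim, assuming the common estimate of roughly 10^80 particles in the observable universe:

```python
import math

# Assigning 70 people to 70 jobs: one configuration per permutation,
# so 70! configurations in total.
n_assignments = math.factorial(70)

# Compare against ~10**80, a common estimate of the number of particles
# in the observable universe.
print(len(str(n_assignments)))   # 70! has 101 decimal digits
print(n_assignments > 10**80)    # True: hopeless for brute force
```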
12.
Integer factorization
–
In number theory, integer factorization is the decomposition of a composite number into a product of smaller integers. If these integers are further restricted to prime numbers, the process is called prime factorization. When the numbers are large, no efficient, non-quantum integer factorization algorithm is known. However, it has not been proven that no efficient algorithm exists; the presumed difficulty of this problem is at the heart of widely used algorithms in cryptography such as RSA. Many areas of mathematics and computer science have been brought to bear on the problem, including elliptic curves and algebraic number theory. Not all numbers of a given length are equally hard to factor. The hardest instances of these problems are semiprimes, the product of two prime numbers. Many cryptographic protocols are based on the difficulty of factoring large composite integers or a related problem, for example the RSA problem. An algorithm that efficiently factors an arbitrary integer would render RSA-based public-key cryptography insecure. By the fundamental theorem of arithmetic, every positive integer has a unique prime factorization. If the integer is prime, it can be recognized as such in polynomial time. If it is composite, however, the theorem gives no insight into how to obtain the factors. Given a general algorithm for integer factorization, any integer can be factored down to its constituent prime factors simply by repeated application of this algorithm. The situation is more complicated with special-purpose factorization algorithms, whose benefits may not be realized as well, or even at all, with the factors produced during decomposition. For example, if N = 10 × p × q where p < q are very large primes, trial division will quickly produce the factors 2 and 5 but will take p divisions to find the next factor. 
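The trial-division behavior described above is easy to demonstrate; a minimal sketch, using small illustrative primes in place of the "very large" p and q:

```python
def trial_division(n):
    """Factor n by trial division: repeatedly divide out the smallest
    remaining divisor, which is necessarily prime. Illustrative only;
    the running time grows with the second-largest prime factor."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

print(trial_division(10 * 101 * 103))  # [2, 5, 101, 103]
```

The small factors 2 and 5 fall out immediately, but the loop must count up to 101 before finding the next one, which is exactly the asymmetry the example in the text describes.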
Among the b-bit numbers, the most difficult to factor in practice using existing algorithms are those that are products of two primes of similar size; for this reason, these are the integers used in cryptographic applications. The largest such semiprime yet factored was RSA-768, a 768-bit number with 232 decimal digits. This factorization was a collaboration of several research institutions, spanning two years and taking the equivalent of almost 2000 years of computing on a single-core 2.2 GHz AMD Opteron. Like all recent factorization records, this factorization was completed with an optimized implementation of the general number field sieve run on hundreds of machines. No algorithm has been published that can factor all integers in polynomial time. Neither the existence nor non-existence of such algorithms has been proved, but it is generally suspected that they do not exist and hence that the problem is not in class P. The problem is clearly in class NP, but it has not been proved to be, or not to be, NP-complete; it is generally suspected not to be NP-complete. There are published algorithms that are faster than O((1 + ε)^b) for all positive ε, i.e., sub-exponential. The best published asymptotic running time is for the general number field sieve (GNFS) algorithm, which, for a b-bit number n, is O(exp(((64/9) b)^(1/3) (log b)^(2/3))). For current computers, GNFS is the best published algorithm for large n; for a quantum computer, however, Peter Shor discovered an algorithm in 1994 that solves it in polynomial time
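A hedged aside on "two primes of similar size": Fermat's difference-of-squares method is not mentioned in the article, but it illustrates why the two primes must not be too close together. When p and q are close, n = pq factors almost immediately, since n = a^2 - b^2 with a = (p + q)/2 near the square root of n:

```python
import math

def fermat_factor(n):
    """Fermat's method: search upward from ceil(sqrt(n)) for a such that
    a*a - n is a perfect square b*b; then n = (a - b) * (a + b).
    Fast only when the two factors are close together."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

print(fermat_factor(101 * 103))  # (101, 103), found on the first step
```

For the toy semiprime 101 × 103 the very first candidate a already works; for well-chosen cryptographic primes the gap between p and q is large enough that this search is hopeless, and sieve methods such as GNFS are required.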
13.
Scientific journal
–
In academic publishing, a scientific journal is a periodical publication intended to further the progress of science, usually by reporting new research. There are thousands of scientific journals in publication, and many more have been published at various points in the past. Most journals are highly specialized, although some of the oldest journals, such as Nature, publish articles across a wide range of scientific fields. Scientific journals contain articles that have been peer reviewed, in an attempt to ensure that articles meet the journal's standards of quality and scientific validity. If the journal's editor considers the paper appropriate, at least two researchers, preferably from the same field, check the paper for soundness of its scientific argument. Although scientific journals are superficially similar to professional magazines, they are actually quite different. Issues of a scientific journal are rarely read casually, as one would read a magazine. The publication of the results of research is an essential part of the scientific method. If authors are describing experiments or calculations, they must supply enough details that an independent researcher could repeat the experiment or calculation to verify the results. Each such journal article becomes part of the permanent scientific record. Over a thousand journals, mostly ephemeral, were founded in the 18th century. Articles in scientific journals can be used in research and higher education. Scientific articles allow researchers to keep up to date with the developments of their field; an essential part of a scientific article is citation of earlier work. The impact of articles and journals is often assessed by counting citations. Some classes are partially devoted to the explication of classic articles, and seminar classes can consist of the presentation by each student of a classic or current paper. Schoolbooks and textbooks have been written only on established topics, while the latest research is accessible mainly through scientific articles. 
In a scientific research group or academic department, it is usual for the content of current scientific journals to be discussed in journal clubs. The standards that a journal uses to determine publication can vary widely; some journals, such as Nature, Science, and PNAS, have a reputation for publishing articles that mark a fundamental breakthrough in their respective fields. It is also common for journals to have a regional focus, specializing in publishing papers from a particular country or other geographic region, like African Invertebrates. Articles tend to be technical, representing the latest theoretical research. They are often incomprehensible to anyone except for researchers in the field; in some subjects this is inevitable given the nature of the content. Usually, rigorous rules of scientific writing are enforced by the editors. Articles are usually either original articles reporting new results or reviews of current literature. There are also publications that bridge the gap between articles and books by publishing thematic volumes of chapters from different authors. Research notes are short descriptions of current research findings that are considered less urgent or important than Letters. Supplemental articles contain a large volume of tabular data that is the result of current research and may comprise dozens or hundreds of pages of mostly numerical data
14.
Victor Klee
–
Victor L. Klee, Jr. was a mathematician specialising in convex sets, functional analysis, analysis of algorithms, optimization, and combinatorics. He spent almost his entire career at the University of Washington in Seattle. Born in San Francisco, Vic Klee earned his B.A. degree in 1945 with high honors from Pomona College, majoring in mathematics and chemistry. He did his graduate studies, including a thesis on Convex Sets in Linear Spaces, at the University of Virginia. After teaching for several years at the University of Virginia, he moved in 1953 to the University of Washington in Seattle, Washington. Klee wrote more than 240 research papers. He proposed Klee's measure problem and the art gallery theorem. Kleetopes are also named after him, as is the Klee–Minty cube. Klee served as president of the Mathematical Association of America from 1971 to 1973, and in 1972 he won a Lester R. Ford Award.
Grünbaum, Branko; Phelps, Robert R.; Renz, Peter L., Washington, DC: Mathematical Association of America. Short biography and reminiscences of colleagues.
Applied Geometry and Discrete Mathematics, a volume dedicated to Klee on his 65th birthday
15.
Juris Hartmanis
–
He was a son of Mārtiņš Hartmanis, a general in the Latvian Army. After the Soviet Union occupied Latvia in 1940, Mārtiņš Hartmanis was arrested by the Soviets. At the end of World War II, the wife and children of Mārtiņš Hartmanis left Latvia as refugees, fearing for their safety if the Soviet Union took over Latvia again. They first moved to Germany, where Juris Hartmanis received the equivalent of a Master's degree in physics from the University of Marburg. The University of Missouri–Kansas City honored him with an Honorary Doctor of Humane Letters in May 1999. After teaching at Cornell University and Ohio State University, Hartmanis joined the General Electric Research Laboratory in 1958. While at General Electric, he developed many principles of computational complexity theory. In 1965, he became a professor at Cornell University; at Cornell, he was one of the founders and the first chairman of its computer science department. Hartmanis is a Fellow of the Association for Computing Machinery and of the American Mathematical Society. He is best known for his Turing Award-winning paper with Richard Stearns, in which he introduced time complexity classes and proved the time hierarchy theorem. Another paper by Hartmanis from 1977, with Leonard Berman, introduced the still-unsolved Berman–Hartmanis conjecture that all NP-complete languages are polynomial-time isomorphic.
On isomorphisms and density of NP and other sets, SIAM Journal on Computing, 6, 305–322, doi:10.1137/0206023
On the computational complexity of algorithms, Transactions of the American Mathematical Society, 117, 285–306, doi:10.2307/1994208, JSTOR 1994208
Hartmanis biography at Cornell
Juris Hartmanis at the Mathematics Genealogy Project
16.
JSTOR
–
JSTOR is a digital library founded in 1995. Originally containing digitized back issues of academic journals, it now also includes books and primary sources. It provides full-text searches of almost 2,000 journals. More than 8,000 institutions in more than 160 countries have access to JSTOR; most access is by subscription, but some older public-domain content is freely available to anyone. William G. Bowen, president of Princeton University from 1972 to 1988, founded JSTOR. JSTOR originally was conceived as a solution to one of the problems faced by libraries, especially research and university libraries, due to the increasing number of academic journals in existence. Most libraries found it prohibitively expensive in terms of cost and space to maintain a comprehensive collection of journals. By digitizing many journal titles, JSTOR allowed libraries to outsource the storage of journals with the confidence that they would remain available long-term. Online access and full-text search ability improved access dramatically. Bowen initially considered using CD-ROMs for distribution. JSTOR was initiated in 1995 at seven different library sites, and originally encompassed ten economics and history journals. JSTOR access improved based on feedback from its initial sites. Special software was put in place to make pictures and graphs clear. With the success of this limited project, Bowen and Kevin Guthrie, then-president of JSTOR, wanted to expand the number of participating journals. They met with representatives of the Royal Society of London, and an agreement was made to digitize the Philosophical Transactions of the Royal Society dating from its beginning in 1665; the work of adding these volumes to JSTOR was completed by December 2000. The Andrew W. Mellon Foundation funded JSTOR initially. Until January 2009, JSTOR operated as an independent, self-sustaining nonprofit organization with offices in New York City and in Ann Arbor, Michigan. 
JSTOR content is provided by more than 900 publishers; the database contains more than 1,900 journal titles in more than 50 disciplines. Each object is identified by an integer value, starting at 1. In addition to the main site, the JSTOR Labs group operates an open service that allows access to the contents of the archives for the purposes of corpus analysis at its Data for Research service. This site offers a search facility with graphical indication of the article coverage. Users may create focused sets of articles and then request a dataset containing word and n-gram frequencies; they are notified when the dataset is ready and may download it in either XML or CSV formats. The service does not offer full-text, although academics may request that from JSTOR. JSTOR Plant Science is available in addition to the main site. The materials on JSTOR Plant Science are contributed through the Global Plants Initiative and are available only through JSTOR
17.
Lecture Notes in Computer Science
–
Springer Lecture Notes in Computer Science is a series of computer science books published by Springer Science+Business Media since 1973. LNCS reports research results in computer science, especially in the form of proceedings and post-proceedings. In addition, tutorials, state-of-the-art surveys, and hot topics are increasingly being included. As of 2013, more than 8,000 LNCS volumes had appeared. An online subscription to the series costs nearly 23,000 euros per year. LNCS is among the largest series of computer science conference proceedings, along with those of ACM and IEEE. As an example, the post-proceedings of the bioinformatics CIBB conferences are edited and published in the associated Springer LNBI series
18.
Springer Science+Business Media
–
Springer also hosts a number of scientific databases, including SpringerLink, Springer Protocols, and SpringerImages. Book publications include major reference works, textbooks, monographs, and book series. Springer has major offices in Berlin, Heidelberg, and Dordrecht. On 15 January 2015, Holtzbrinck Publishing Group / Nature Publishing Group and Springer Science+Business Media announced a merger. In 1964, Springer expanded its business internationally, opening an office in New York City; offices in Tokyo, Paris, Milan, Hong Kong, and Delhi soon followed. The academic publishing company BertelsmannSpringer was formed after Bertelsmann bought a majority stake in Springer-Verlag in 1999. The British investment groups Cinven and Candover bought BertelsmannSpringer from Bertelsmann in 2003. They merged the company in 2004 with the Dutch publisher Kluwer Academic Publishers, which they had bought from Wolters Kluwer in 2002. Springer acquired the open-access publisher BioMed Central in October 2008 for an undisclosed amount. In 2009, Cinven and Candover sold Springer to two private equity firms, EQT Partners and Government of Singapore Investment Corporation; the closing of the sale was confirmed in February 2010 after the competition authorities in the USA and in Europe approved the transfer. In 2011, Springer acquired Pharma Marketing and Publishing Services from Wolters Kluwer. In 2013, the London-based private equity firm BC Partners acquired a majority stake in Springer from EQT and GIC for $4.4 billion. In 2014, it was revealed that Springer had published 16 fake papers in its journals that had been computer-generated using SCIgen; Springer subsequently removed all the papers from these journals. IEEE had done the same thing, removing more than 100 fake papers from its conference proceedings. 
In 2015, Springer retracted 64 of the papers it had published after it was found that they had gone through a fraudulent peer review process. Springer provides its electronic book and journal content on its SpringerLink site, which launched in 1996. SpringerProtocols is home to a collection of protocols, or recipes, which provide step-by-step instructions for conducting experiments in research labs. SpringerImages was launched in 2008 and offers a collection of currently 1.8 million images spanning science, technology, and medicine. SpringerMaterials was launched in 2009 and is a platform for accessing the Landolt-Börnstein database of research and information on materials. AuthorMapper is a free online tool for visualizing scientific research that enables document discovery based on author locations and geographic maps. The tool helps users explore patterns in scientific research, identify trends, and discover collaborative relationships. While open-access publishing typically requires the author to pay a fee for copyright retention, a national institution in Poland, for example, allows authors to publish in open-access journals without incurring any personal cost, using public funds instead. Springer is a member of the Open Access Scholarly Publishers Association.
The Academic Publishing Industry: A Story of Merger and Acquisition – via Northern Illinois University
19.
Journal of the ACM
–
The Journal of the ACM is a peer-reviewed scientific journal covering computer science in general, especially theoretical aspects. It is a journal of the Association for Computing Machinery. Its current editor-in-chief is Victor Vianu. The journal was established in 1954, and computer scientists universally hold the Journal of the ACM in high esteem.
Communications of the ACM
Official website
20.
ArXiv
–
In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008; by 2014 the submission rate had grown to more than 8,000 per month. The arXiv was made possible by the low-bandwidth TeX file format. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Additional modes of access were added: FTP in 1991 and Gopher in 1992. The term e-print was quickly adopted to describe the articles, and the repository's original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the rapidly expanding technology, in 1999 Ginsparg changed institutions to Cornell University, and it is now hosted principally by Cornell, with 8 mirrors around the world. Its existence was one of the factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists regularly upload their papers to arXiv.org for worldwide access. Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. The annual budget for arXiv is approximately $826,000 for 2013 to 2017, funded jointly by Cornell University Library and annual donations from member institutions; those donations were envisaged to vary in size between $2,300 and $4,000, based on each institution's usage. As of 14 January 2014, 174 institutions have pledged support for the period 2013–2017 on this basis. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it "was supposed to be a three-hour tour"; however, Ginsparg remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee. 
The lists of moderators for many sections of the arXiv are publicly available. Additionally, an endorsement system was introduced in 2004 as part of an effort to ensure content that is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors. New authors from recognized academic institutions generally receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for allegedly restricting scientific inquiry. Grigori Perelman, who posted his proof of the Poincaré conjecture solely on arXiv, appears content to forgo the traditional peer-reviewed journal process, stating, "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it." The arXiv generally re-classifies such works, e.g. in General Mathematics. Papers can be submitted in any of several formats, including LaTeX, and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the arXiv software if generating the final PDF file fails, or if any image file is too large. ArXiv now allows one to store and modify an incomplete submission; the time stamp on the article is set when the submission is finalized