Operations research
Operations research, or operational research in British usage, is a discipline that deals with the application of advanced analytical methods to help make better decisions. It is often considered a sub-field of applied mathematics, and the terms management science and decision science are sometimes used as synonyms. In the British military, the term operational analysis denotes an intrinsic part of capability development and assurance; in particular, operational analysis forms part of the Combined Operational Effectiveness and Investment Appraisals, which support British defense capability acquisition decision-making. Employing techniques from other mathematical sciences, such as mathematical modeling, statistical analysis, and mathematical optimization, operations research arrives at optimal or near-optimal solutions to complex decision-making problems. Because of its emphasis on human-technology interaction and its focus on practical applications, operations research overlaps with other disciplines, notably industrial engineering and operations management, and draws on psychology and organization science.
Operations research is concerned with determining the extreme values of some real-world objective: the maximum or minimum. Originating in military efforts before World War II, its techniques have grown to address problems in a variety of industries. Operational research encompasses a wide range of problem-solving techniques and methods applied in the pursuit of improved decision-making and efficiency, such as simulation, mathematical optimization, queueing theory and other stochastic-process models, Markov decision processes, econometric methods, data envelopment analysis, neural networks, expert systems, decision analysis, and the analytic hierarchy process. Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, OR has strong ties to computer science and analytics. Operational researchers faced with a new problem must determine which of these techniques are most appropriate given the nature of the system, the goals for improvement, and the constraints on time and computing power.
The major sub-disciplines in modern operational research, as identified by the journal Operations Research, are: computing and information technologies; financial engineering; manufacturing, service sciences, and supply chain management; policy modeling and public sector work; revenue management; simulation; stochastic models; and transportation. In the decades after the two world wars, the tools of operations research were more widely applied to problems in business and society. Since that time, operational research has expanded into a field used in industries ranging from petrochemicals to airlines, finance and government. Moving to a focus on the development of mathematical models that can be used to analyse and optimize complex systems, it has become an area of active academic and industrial research. In the 17th century, mathematicians like Christiaan Huygens and Blaise Pascal tried to solve problems involving complex decisions with probability. Others in the 18th and 19th centuries solved these types of problems with combinatorics.
Charles Babbage's research into the cost of transportation and sorting of mail led to England's universal "Penny Post" in 1840, and to studies into the dynamical behaviour of railway vehicles in defence of the GWR's broad gauge. Beginning in the 20th century, the study of inventory management could be considered the origin of modern operations research, with the economic order quantity developed by Ford W. Harris in 1913. Operational research may have originated in the efforts of military planners during World War I. Percy Bridgman brought operational research to bear on problems in physics in the 1920s and would later attempt to extend it to the social sciences. Modern operational research originated at the Bawdsey Research Station in the UK in 1937 as the result of an initiative of the station's superintendent, A. P. Rowe. Rowe conceived the idea as a means to analyse and improve the working of the UK's early warning radar system, Chain Home: he analysed the operation of the radar equipment and its communication networks, later expanding the analysis to include the operating personnel's behaviour.
This allowed remedial action to be taken. Scientists in the United Kingdom (including Patrick Blackett, Cecil Gordon, Solly Zuckerman, C. H. Waddington, Owen Wansbrough-Jones, Frank Yates, Jacob Bronowski and Freeman Dyson) and in the United States (George Dantzig) looked for ways to make better decisions in such areas as logistics and training schedules. The modern field of operational research arose during World War II. In the World War II era, operational research was defined as "a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control". Other names for it included quantitative management. During the Second World War close to 1,000 men and women in Britain were engaged in operational research, and about 200 operational research scientists worked for the British Army. Patrick Blackett worked for several different organizations during the war. Early in the war, while working for the Royal Aircraft Establishment, he set up a team known as the "Circus" which helped to reduce the number of anti-aircraft artillery rounds needed to shoot down an enemy aircraft.
Parity game
A parity game is played on a colored directed graph, where each node has been colored by a priority – one of finitely many natural numbers. Two players, 0 and 1, move a token along the edges of the graph; the owner of the node that the token falls on selects the successor node, resulting in a path called the play. The winner of a finite play is the player whose opponent is unable to move; the winner of an infinite play is determined by the priorities appearing in the play. Player 0 wins an infinite play if the largest priority that occurs infinitely often in the play is even, and player 1 wins otherwise; this explains the word "parity" in the name. Parity games lie in the third level of the Borel hierarchy and are consequently determined. Games related to parity games were implicitly used in Rabin's proof of decidability of the second-order theory of n successors, where determinacy of such games was proven; the Knaster–Tarski theorem leads to a simple proof of determinacy of parity games. Moreover, parity games are history-free determined: if a player has a winning strategy, that player has a winning strategy that depends only on the current board position, not on the history of the play.
Solving a parity game played on a finite graph means deciding, for a given starting position, which of the two players has a winning strategy. It has been shown that this problem is in NP and co-NP (more precisely, in UP and co-UP), as well as in QP (quasi-polynomial time); whether it is in P remains an open question. Given that parity games are history-free determined, solving a given parity game is equivalent to solving the following simple-looking graph-theoretic problem: given a finite colored directed bipartite graph with n vertices V = V_0 ∪ V_1, colored with colors from 1 to m, is there a choice function selecting a single out-going edge from each vertex of V_0, such that in the resulting subgraph the largest color occurring in each cycle is even? Zielonka outlined a recursive algorithm. Let G = (V, V_0, V_1, E, Ω) be a parity game, where V_0 and V_1 are the sets of nodes belonging to players 0 and 1 respectively, V = V_0 ∪ V_1 is the set of all nodes, E ⊆ V × V is the total set of edges, and Ω: V → ℕ is the priority assignment function. Zielonka's algorithm is based on the notion of attractors.
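On very small games, the graph-theoretic reformulation above can be checked by brute force: enumerate every choice function for player 0's vertices and test whether the resulting subgraph contains a cycle whose largest color is odd. The sketch below is illustrative only (all names are hypothetical) and is not Zielonka's algorithm; it relies on the fact that a cycle with odd maximum color c must pass through a vertex of color c while staying within vertices of color at most c.

```python
from itertools import product

def has_odd_max_cycle(succ, color):
    """Return True if some cycle's largest color is odd.

    succ: dict mapping each vertex to its set of successors.
    color: dict mapping each vertex to its color.
    """
    for v in succ:
        c = color[v]
        if c % 2 == 0:
            continue
        # Any cycle with odd maximum color c goes through a vertex of
        # color c and stays within vertices of color <= c.
        allowed = {u for u in succ if color[u] <= c}
        stack, seen = [v], set()
        while stack:  # can v reach itself inside `allowed`?
            u = stack.pop()
            for w in succ[u]:
                if w == v:
                    return True
                if w in allowed and w not in seen:
                    seen.add(w)
                    stack.append(w)
    return False

def exists_even_choice(v0, succ, color):
    """Brute force: is there a choice function for the V0 vertices such that
    every cycle in the resulting subgraph has an even maximum color?"""
    v0 = list(v0)
    for pick in product(*(sorted(succ[v]) for v in v0)):
        sub = {v: set(ws) for v, ws in succ.items()}
        sub.update({v: {w} for v, w in zip(v0, pick)})  # fix one edge per V0 vertex
        if not has_odd_max_cycle(sub, color):
            return True
    return False
```

For example, with succ = {'a': {'b', 'c'}, 'b': {'a'}, 'c': {'a'}}, V_0 = {'a'} and colors a=1, b=2, c=3, choosing the edge a→b yields only the cycle a-b-a with maximum color 2, so a suitable choice function exists; recoloring b to 3 makes every choice produce an odd-maximum cycle.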
Let U ⊆ V be a set of nodes and i ∈ {0, 1} be a player. The i-attractor of U is the least set of nodes Attr_i(U) containing U such that player i can force a visit to U from every node in Attr_i(U). It can be defined by a fix-point computation:

Attr_i^0(U) := U
Attr_i^{j+1}(U) := Attr_i^j(U) ∪ { v ∈ V_i : ∃(v, w) ∈ E with w ∈ Attr_i^j(U) } ∪ { v ∈ V_{1−i} : ∀(v, w) ∈ E, w ∈ Attr_i^j(U) }
Attr_i(U) := ⋃_{j=0}^{∞} Attr_i^j(U)

In words, each iteration adds every node of player i with some edge into the current set, and every opponent node all of whose edges lead into the current set.
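The fix-point computation above translates directly into code. The following is a minimal sketch (hypothetical names); like the definition, it assumes the edge relation is total, i.e. every node has at least one outgoing edge:

```python
def attractor(nodes, edges, owner, target, i):
    """Compute the i-attractor of `target`: the least superset of `target`
    from which player i can force the token into `target`.

    nodes: iterable of nodes; edges: iterable of (u, v) pairs;
    owner: dict mapping node -> 0 or 1; i: the attracting player.
    Assumes every node has at least one outgoing edge.
    """
    succ = {v: set() for v in nodes}
    for u, v in edges:
        succ[u].add(v)
    attr = set(target)
    changed = True
    while changed:  # iterate the fix-point until no node is added
        changed = False
        for v in succ:
            if v in attr:
                continue
            if owner[v] == i:
                forced = bool(succ[v] & attr)  # i can pick an edge into the set
            else:
                forced = succ[v] <= attr       # the opponent cannot avoid the set
            if forced:
                attr.add(v)
                changed = True
    return attr
```

For instance, with nodes {1, 2, 3}, edges {(1,2), (2,3), (1,3), (3,3)}, owners {1: 0, 2: 1, 3: 0} and target {3}, every node is attracted for player 0: node 1 can move into {3} directly, and node 2 (owned by player 1) has no edge avoiding it.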
Artificial intelligence
In computer science, artificial intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals. Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of achieving its goals. Colloquially, the term "artificial intelligence" is used to describe machines that mimic "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". As machines become increasingly capable, tasks considered to require "intelligence" are removed from the definition of AI, a phenomenon known as the AI effect. A quip in Tesler's Theorem says "AI is whatever hasn't been done yet." For instance, optical character recognition is excluded from things considered to be AI, having become a routine technology. Modern machine capabilities classified as AI include understanding human speech, competing at the highest level in strategic game systems, autonomously operating cars, intelligent routing in content delivery networks and military simulations.
Artificial intelligence can be classified into three different types of systems: analytical, human-inspired, and humanized artificial intelligence. Analytical AI has only characteristics consistent with cognitive intelligence. Human-inspired AI has elements from emotional intelligence. Humanized AI shows characteristics of all types of competencies and is self-conscious and self-aware in interactions with others. Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding, followed by new approaches and renewed funding. For most of its history, AI research has been divided into subfields that fail to communicate with each other; these sub-fields are based on technical considerations, such as particular goals, the use of particular tools, or deep philosophical differences. Subfields have also been based on social factors. The traditional problems of AI research include reasoning, knowledge representation, learning, natural language processing and the ability to move and manipulate objects.
General intelligence is among the field's long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics and economics; the AI field draws upon computer science, information engineering, psychology, linguistics and many other fields. The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it"; this raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues that have been explored by myth and philosophy since antiquity. Some people consider AI to be a danger to humanity if it progresses unabated. Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment. In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding.
Thought-capable artificial beings appeared as storytelling devices in antiquity and have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R. U. R. These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence. The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction; this insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis. Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed that if a human could not distinguish between responses from a machine and a human, the machine could be considered "intelligent".
The first work now recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons". The field of AI research was born at a workshop at Dartmouth College in 1956. Attendees Allen Newell, Herbert Simon, John McCarthy, Marvin Minsky and Arthur Samuel became the founders and leaders of AI research; they and their students produced programs that the press described as "astonishing": computers were learning checkers strategies (and by 1959 were playing better than the average human).
Michael L. Littman
Michael Lederman Littman is a computer scientist. He works mainly in reinforcement learning, but has also done work in machine learning, game theory, computer networking, partially observable Markov decision process solving, computer solving of analogy problems and other areas. He is a professor of computer science at Brown University. Before graduate school, Littman worked with Thomas Landauer at Bellcore and was granted a patent for one of the earliest systems for cross-language information retrieval. Littman received his Ph.D. in computer science from Brown University in 1996. From 1996 to 1999, he was a professor at Duke University. During his time at Duke, he worked on an automated crossword solver, PROVERB, which won an Outstanding Paper Award in 1999 from AAAI and competed in the American Crossword Puzzle Tournament. From 2000 to 2002, he worked at AT&T. From 2002 to 2012, he was a professor at Rutgers University. In Summer 2012 he returned to Brown University as a full professor. He has appeared in a TurboTax commercial.
Littman was elected an ACM Fellow in 2018 for "contributions to the design and analysis of sequential decision making algorithms in artificial intelligence". His other honors include the IFAAMAS Influential Paper Award; the AAAI "Shakey" Awards for the machine-learning music video "Overfitting" and for a short video on Aibo ingenuity; the Warren I. Susman Award for Excellence in Teaching at Rutgers; the Robert B. Cox Award at Duke; and an AAAI Outstanding Paper Award.
Computational complexity theory
Computational complexity theory focuses on classifying computational problems according to their inherent difficulty, and on relating these classes to each other. A computational problem is a task solved by a computer, solvable by the mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used; the theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e. the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication, the number of gates in a circuit and the number of processors. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do; the P versus NP problem, one of the seven Millennium Prize Problems, belongs to the field of computational complexity.
Related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kind of problems can, in principle, be solved algorithmically. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance; the input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself.
In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing; the instance is a number and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, the solution is the output corresponding to the given input. To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the traveling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
When considering computational problems, a problem instance is a string over an alphabet. The alphabet is taken to be the binary alphabet, thus the strings are bitstrings; as in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary. Though some proofs of complexity-theoretic theorems assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding; this can be achieved by ensuring that different representations can be transformed into each other efficiently. Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no, or alternately either 1 or 0. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, the non-members are those instances whose output is no.
The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input. An example of a decision problem is the following: the input is an arbitrary graph, and the problem consists in deciding whether the graph is connected. The formal language associated with this decision problem is the set of all connected graphs — to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings. A function problem is a computational problem where a single output is expected for every input, but the output is more complex than that of a decision problem — that is, the output isn't just yes or no. Notable examples include the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not the case, since function problems can be recast as decision problems.
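Returning to the connectivity example above, a decider for the language of connected graphs can be sketched as a breadth-first search. This is a minimal illustration with hypothetical names, taking an undirected graph as an adjacency-list dictionary rather than a binary string:

```python
from collections import deque

def accepts_connected(adj):
    """Decide membership in the language of connected graphs: accept (True)
    iff every vertex is reachable from an arbitrary starting vertex.

    adj: dict mapping each vertex to a list of its neighbours (undirected).
    """
    if not adj:
        return True  # convention: the empty graph counts as connected
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(adj)
```

Accepting corresponds to returning True and rejecting to returning False; a full formalization would additionally fix an encoding of the adjacency structure as a binary string, as discussed above.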
For example, the multiplication of two integers can be expressed as the set of triples such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
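As a sketch of that recasting (names are illustrative), deciding membership in the set of triples is a single comparison, whereas the original function problem asks to produce c from a and b:

```python
def in_multiplication_language(a, b, c):
    """Decide whether the triple (a, b, c) belongs to
    {(a, b, c) : a * b = c}."""
    return a * b == c
```

Nothing essential is lost in the recasting: roughly speaking, by repeatedly querying a related decision procedure (for example, on a language of triples recording individual bits of the product) one can reconstruct the function's output with polynomially many queries.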
ArXiv
ArXiv is a repository of electronic preprints approved for posting after moderation, but not full peer review. It consists of scientific papers in the fields of physics, mathematics, astronomy, electrical engineering, computer science, quantitative biology, statistics, mathematical finance and economics, which can be accessed online. In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008, and had hit a million by the end of 2014. By October 2016 the submission rate had grown to more than 10,000 per month. ArXiv was made possible by the compact TeX file format, which allowed scientific papers to be transmitted over the Internet and rendered client-side. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Paul Ginsparg recognized the need for central storage, and in August 1991 he created a central repository mailbox stored at the Los Alamos National Laboratory which could be accessed from any computer.
Additional modes of access were soon added: FTP in 1991, Gopher in 1992, and the World Wide Web in 1993. The term e-print was adopted to describe the articles. It began as a physics archive, called the LANL preprint archive, but soon expanded to include astronomy, mathematics, computer science, quantitative biology and, most recently, statistics. Its original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the expanding technology, in 2001 Ginsparg changed institutions to Cornell University and changed the name of the repository to arXiv.org. It is now hosted principally by Cornell, with eight mirrors around the world. Its existence was one of the precipitating factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists upload their papers to arXiv.org for worldwide access and sometimes for reviews before they are published in peer-reviewed journals. Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. The annual budget for arXiv is $826,000 for 2013 to 2017, funded jointly by Cornell University Library, the Simons Foundation and annual fee income from member institutions.
This model arose in 2010, when Cornell sought to broaden the financial funding of the project by asking institutions to make annual voluntary contributions based on the amount of download usage by each institution. Each member institution pledges a five-year funding commitment to support arXiv. Based on institutional usage ranking, the annual fees are set in four tiers from $1,000 to $4,400. Cornell's goal is to raise at least $504,000 per year through membership fees generated by 220 institutions. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it "was supposed to be a three-hour tour, not a life sentence". However, Ginsparg remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee. Although arXiv is not peer reviewed, a collection of moderators for each area review the submissions; the lists of moderators for many sections of arXiv are publicly available, but moderators for most of the physics sections remain unlisted.
Additionally, an "endorsement" system was introduced in 2004 as part of an effort to ensure content is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors, but to check whether the paper is appropriate for the intended subject area. New authors from recognized academic institutions receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for restricting scientific inquiry. A majority of the e-prints are submitted to journals for publication, but some work, including some influential papers, remain purely as e-prints and are never published in a peer-reviewed journal. A well-known example of the latter is an outline of a proof of Thurston's geometrization conjecture, including the Poincaré conjecture as a particular case, uploaded by Grigori Perelman in November 2002.
Perelman appears content to forgo the traditional peer-reviewed journal process, stating: "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it". Despite this non-traditional method of publication, other mathematicians recognized this work by offering the Fields Medal and Clay Mathematics Millennium Prizes to Perelman, both of which he refused. Papers can be submitted in any of several formats, including LaTeX and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the arXiv software if generating the final PDF file fails, if any image file is too large, or if the total size of the submission is too large. ArXiv now allows one to store and modify an incomplete submission and finalize it only when ready; the time stamp on the article is set when the submission is finalized. The standard access route is through one of several mirrors.
International Standard Serial Number
An International Standard Serial Number is an eight-digit serial number used to uniquely identify a serial publication, such as a magazine. The ISSN is helpful in distinguishing between serials with the same title. ISSNs are used in ordering, interlibrary loans, and other practices in connection with serial literature. The ISSN system was first drafted as an International Organization for Standardization international standard in 1971 and published as ISO 3297 in 1975. ISO subcommittee TC 46/SC 9 is responsible for maintaining the standard. When a serial with the same content is published in more than one media type, a different ISSN is assigned to each media type. For example, many serials are published in both print and electronic media; the ISSN system refers to these types as the print ISSN and the electronic ISSN, respectively. Additionally, as defined in ISO 3297:2007, every serial in the ISSN system is assigned a linking ISSN (ISSN-L), the same as the ISSN assigned to the serial in its first published medium, which links together all ISSNs assigned to the serial in every medium.
The format of the ISSN is an eight-digit code, divided by a hyphen into two four-digit numbers. As an integer, it can be represented by the first seven digits; the last digit, which may be 0–9 or an X, is a check digit. Formally, the general form of the ISSN code can be expressed as NNNN-NNNC, where N is a digit character and C is a digit character or an upper-case X. The ISSN of the journal Hearing Research, for example, is 0378-5955, where the final 5 is the check digit, C = 5. To calculate the check digit, first calculate the sum of the first seven digits of the ISSN, each multiplied by its position in the number counting from the right — that is, by 8, 7, 6, 5, 4, 3 and 2, respectively:

0·8 + 3·7 + 7·6 + 8·5 + 5·4 + 9·3 + 5·2 = 0 + 21 + 42 + 40 + 20 + 27 + 10 = 160

The remainder of this sum modulo 11 is then calculated; the check digit is 0 if the remainder is 0, and otherwise 11 minus the remainder (here 160 mod 11 = 6, so the check digit is 11 − 6 = 5). An upper-case X in the check digit position indicates a check digit of 10. To confirm the check digit, calculate the sum of all eight digits of the ISSN, each multiplied by its position in the number counting from the right.
This sum modulo 11 must be 0. There is an online ISSN checker. ISSN codes are assigned by a network of ISSN National Centres, located at national libraries and coordinated by the ISSN International Centre based in Paris. The International Centre is an intergovernmental organization created in 1974 through an agreement between UNESCO and the French government. The International Centre maintains a database of all ISSNs assigned worldwide, the ISDS Register, otherwise known as the ISSN Register. At the end of 2016, the ISSN Register contained records for 1,943,572 items. ISSN and ISBN codes are similar in concept. An ISBN might be assigned for particular issues of a serial, in addition to the ISSN code for the serial as a whole. An ISSN, unlike the ISBN code, is an anonymous identifier associated with a serial title, containing no information as to the publisher or its location. For this reason a new ISSN is assigned to a serial each time it undergoes a major title change. Since the ISSN applies to an entire serial, a new identifier, the Serial Item and Contribution Identifier, was built on top of it to allow references to specific volumes, articles, or other identifiable components.
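The check-digit computation and its confirmation can be sketched in a few lines (a minimal illustration with hypothetical function names):

```python
def issn_check_digit(first7):
    """Compute the ISSN check digit from the first seven digits,
    given as a string such as "0378595"."""
    # weights 8, 7, 6, 5, 4, 3, 2 from the leftmost digit
    total = sum(int(d) * w for d, w in zip(first7, range(8, 1, -1)))
    remainder = total % 11
    if remainder == 0:
        return "0"
    # a check value of 10 is written as an upper-case X
    return "X" if remainder == 1 else str(11 - remainder)

def issn_is_valid(issn):
    """Confirm an ISSN such as "0378-5955": the weighted sum of all
    eight digits (weights 8 down to 1) must be 0 modulo 11."""
    digits = issn.replace("-", "")
    total = sum((10 if ch == "X" else int(ch)) * w
                for ch, w in zip(digits, range(8, 0, -1)))
    return total % 11 == 0
```

For the Hearing Research example above, issn_check_digit("0378595") yields "5", and issn_is_valid("0378-5955") confirms the full code.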
Separate ISSNs are needed for serials in different media. Thus, the print and electronic media versions of a serial need separate ISSNs, and a CD-ROM version and a web version of a serial require different ISSNs since two different media are involved. However, the same ISSN can be used for different file formats of the same online serial. This "media-oriented identification" of serials made sense in the 1970s. In the 1990s and onward, with personal computers, better screens and the Web, it makes sense to consider only content, independent of media. This "content-oriented identification" of serials remained an unmet demand for a decade, but no ISSN update or initiative occurred. A natural extension of the ISSN, the unique identification of the articles in serials, was the main demanded application. An alternative model for serials' contents arrived with the indecs Content Model and its application, the digital object identifier, an ISSN-independent initiative consolidated in the 2000s. Only in 2007 was the linking ISSN (ISSN-L) defined, in the revised standard ISO 3297:2007.