Operations research, or operational research in British usage, is a discipline that deals with the application of advanced analytical methods to help make better decisions. It is considered to be a sub-field of applied mathematics, and the terms management science and decision science are sometimes used as synonyms. In the British military, the term operational analysis is used for an intrinsic part of capability development and assurance; in particular, operational analysis forms part of the Combined Operational Effectiveness and Investment Appraisals, which support British defense capability acquisition decision-making. Employing techniques from other mathematical sciences, such as mathematical modeling, statistical analysis and mathematical optimization, operations research arrives at optimal or near-optimal solutions to complex decision-making problems. Because of its emphasis on human-technology interaction and its focus on practical applications, operations research overlaps with other disciplines, notably industrial engineering and operations management, and draws on psychology and organization science.
Operations research is concerned with determining the extreme values of some real-world objective: the maximum or minimum. Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries. Operational research encompasses a wide range of problem-solving techniques and methods applied in the pursuit of improved decision-making and efficiency, such as simulation, mathematical optimization, queueing theory and other stochastic-process models, Markov decision processes, econometric methods, data envelopment analysis, neural networks, expert systems, decision analysis and the analytic hierarchy process. Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system; because of the computational and statistical nature of most of these fields, OR has strong ties to computer science and analytics. Operational researchers faced with a new problem must determine which of these techniques are most appropriate given the nature of the system, the goals for improvement, and constraints on time and computing power.
The major sub-disciplines in modern operational research, as identified by the journal Operations Research, are: computing and information technologies; financial engineering; manufacturing, service sciences and supply chain management; policy modeling and public sector work; revenue management; simulation; stochastic models; and transportation. In the decades after the two world wars, the tools of operations research were more widely applied to problems in business and society. Since that time, operational research has expanded into a field used in industries ranging from petrochemicals to airlines, finance and government; it has moved to a focus on the development of mathematical models that can be used to analyse and optimize complex systems, and has become an area of active academic and industrial research. In the 17th century, mathematicians like Christiaan Huygens and Blaise Pascal tried to solve problems involving complex decisions with probability. Others in the 18th and 19th centuries solved these types of problems with combinatorics.
Charles Babbage's research into the cost of transportation and sorting of mail led to England's universal "Penny Post" in 1840, and his studies into the dynamical behaviour of railway vehicles supported his defence of the GWR's broad gauge. Beginning in the 20th century, the study of inventory management could be considered the origin of modern operations research, with the economic order quantity developed by Ford W. Harris in 1913. Operational research may have originated in the efforts of military planners during World War I. Percy Bridgman brought operational research to bear on problems in physics in the 1920s and would attempt to extend these methods to the social sciences. Modern operational research originated at the Bawdsey Research Station in the UK in 1937 and was the result of an initiative of the station's superintendent, A. P. Rowe. Rowe conceived the idea as a means to analyse and improve the working of the UK's early warning radar system, Chain Home: he analysed the operation of the radar equipment and its communication networks, later expanding the analysis to include the operating personnel's behaviour.
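Harris's 1913 result gives the order quantity that balances fixed ordering cost against inventory holding cost. A minimal sketch of the formula, with purely illustrative demand and cost figures:

```python
import math

def economic_order_quantity(annual_demand, order_cost, holding_cost):
    """Harris's EOQ formula: Q* = sqrt(2 * D * K / h), where D is annual
    demand, K is the fixed cost per order, and h is the annual holding
    cost per unit."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

# Hypothetical figures: 1,200 units/year demand, $50 per order placed,
# $6 per unit held per year.
print(round(economic_order_quantity(1200, 50, 6)))  # 141
```

Ordering more than Q* at a time inflates holding costs; ordering less inflates ordering costs, which is why the optimum sits at the square-root balance point.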
This allowed remedial action to be taken. Scientists in the United Kingdom (including Patrick Blackett, Cecil Gordon, Solly Zuckerman, C. H. Waddington, Owen Wansbrough-Jones, Frank Yates, Jacob Bronowski and Freeman Dyson) and in the United States (George Dantzig) looked for ways to make better decisions in such areas as logistics and training schedules. The modern field of operational research arose during World War II. In the World War II era, operational research was defined as "a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control". Other names for it included quantitative management. During the Second World War close to 1,000 men and women in Britain were engaged in operational research, and about 200 operational research scientists worked for the British Army. Patrick Blackett worked for several different organizations during the war. Early in the war, while working for the Royal Aircraft Establishment, he set up a team known as the "Circus" which helped to reduce the number of anti-aircraft artillery rounds needed to shoot down an enemy aircraft.
Float (project management)
In project management, float or slack is the amount of time that a task in a project network can be delayed without causing a delay to subsequent tasks or to the project completion date. Total float is associated with a path: if a project network chart/diagram has four non-critical paths, that project has four total float values, and the total float of a path is the combined free float values of all activities on that path. Total float represents the schedule flexibility and can be measured by subtracting early start dates from late start dates along the path. Float is core to the critical path method, with the total floats of noncritical activities key to computing the critical path drag of an activity, i.e. the amount of time it is adding to the project's duration. Consider the process of replacing a broken pane of glass in the window of your home. There are various component activities involved in the project as a whole; some of these activities can run concurrently (e.g. obtaining the glass, obtaining the putty, choosing the paint) while others are consecutive (e.g. the paint cannot be bought until it has been chosen, and the new window cannot be painted until it is installed and the new putty has set).
Delaying the acquisition of the glass would delay the entire project: this activity is on the critical path, has no float of any sort attached to it, and is hence a 'critical activity'. A short delay in the purchase of the paint may not automatically hold up the entire project, as there is still some waiting time for the new putty to dry before painting can start anyway; there is some 'free float' attached to the activity of purchasing the paint, and hence it is not a critical activity. However, a delay in choosing the paint in turn delays buying the paint. Although this may not subsequently mean any delay to the entire project, it does mean that choosing the paint has no 'free float' attached to it; despite having no free float of its own, the choosing of the paint lies on a path through the network which does have 'total float'.
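The window example can be made concrete with a small forward-pass/backward-pass calculation. The activity durations and dependencies below are illustrative assumptions, not part of the original example:

```python
# A minimal critical-path sketch for the window-repair example.
activities = {               # name: (duration in days, predecessors)
    "measure":      (1, []),
    "get_glass":    (3, ["measure"]),
    "get_putty":    (1, ["measure"]),
    "choose_paint": (1, ["measure"]),
    "buy_paint":    (1, ["choose_paint"]),
    "install":      (2, ["get_glass", "get_putty"]),
    "paint":        (1, ["install", "buy_paint"]),
}

# Forward pass: earliest start (es) and finish (ef).  The dict is listed
# in topological order, so predecessors are always processed first.
es, ef = {}, {}
for a, (dur, preds) in activities.items():
    es[a] = max((ef[p] for p in preds), default=0)
    ef[a] = es[a] + dur

project_end = max(ef.values())

# Backward pass: latest start (ls) and finish (lf).
ls, lf = {}, {}
for a in reversed(list(activities)):
    dur, _ = activities[a]
    succs = [s for s in activities if a in activities[s][1]]
    lf[a] = min((ls[s] for s in succs), default=project_end)
    ls[a] = lf[a] - dur

# Total float = late start - early start; zero float marks the critical path.
for a in activities:
    print(a, "float =", ls[a] - es[a])
```

With these numbers the glass purchase has zero float (critical), while choosing the paint carries three days of float, matching the narrative above.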
Design structure matrix
The Design Structure Matrix (DSM) is a simple and visual representation of a system or project in the form of a square matrix. It is the equivalent of an adjacency matrix in graph theory, and is used in systems engineering and project management to model the structure of complex systems or processes, in order to perform system analysis, project planning and organization design. Don Steward coined the term "design structure matrix" in the 1960s, using the matrices to solve mathematical systems of equations. A design structure matrix lists all constituent subsystems/activities and the corresponding information exchange and dependency patterns. For example, where the matrix elements represent activities, the matrix details what pieces of information are needed to start a particular activity and shows where the information generated by that activity leads. In this way, one can recognize which other activities are reliant upon information outputs generated by each activity. The use of DSMs in both research and industrial practice increased in the 1990s.
DSMs have been applied in the building construction, real estate development, automotive, aerospace, small-scale manufacturing, factory equipment and electronics industries, to name a few, as well as in many government agencies. The matrix representation has several strengths: it can represent a large number of system elements and their relationships in a compact way that highlights important patterns in the data, and the presentation is amenable to matrix-based analysis techniques, which can be used to improve the structure of the system. In modeling activity precedence, it allows representing feedback linkages that cannot be modeled by Gantt chart/PERT modeling techniques. DSM analysis provides insights into how to manage complex systems or projects, highlighting information flows, task/activity sequences and iteration; it can help teams streamline their processes based on the optimal flow of information between different interdependent activities. DSM analysis can also be used to manage the effects of a change.
For example, if the specification for a component had to be changed, it would be possible to identify all processes or activities dependent on that specification, reducing the risk that work continues based on out-of-date information. A DSM is a square matrix; the system elements are labeled in the rows to the left of the matrix and/or in the columns above the matrix. These elements can represent, for example, product components, organization teams, or project activities. The off-diagonal cells are used to indicate relationships between the elements: a marking of a cell indicates a directed link between two elements and can represent design relations or constraints between product components, communication between teams, information flow, or precedence relations between activities. In one convention, reading across a row reveals the outputs that the element in that row provides to other elements, and scanning a column reveals the inputs that the element in that column receives from other elements.
For example, in such a DSM, a marking in row A and column C indicates a link from A to C. Alternatively, the rows and columns may be switched; both conventions may be found in the literature. The cells along the diagonal are used to represent the system elements themselves. However, the diagonal cells can also be used for representing self-iterations, which are required when a matrix element represents a block of activities/subsystems that may be further detailed, allowing a hierarchical DSM structure. Two main categories of DSMs have been proposed: static and time-based. Static DSMs represent systems where all of the elements exist simultaneously, such as components of a machine or groups in an organization. A static DSM is equivalent to an adjacency matrix, and the marking in the off-diagonal cells is largely symmetric about the diagonal. Static DSMs are analyzed with clustering algorithms. A time-based DSM is akin to the matrix representation of a directed graph. In time-based DSMs, the ordering of the rows and columns indicates a flow through time: earlier activities in a process appear in the upper-left of the DSM and later activities appear in the lower-right.
Terms like "feedforward" and "feedback" then become meaningful: a feedback mark is an above-diagonal mark. Time-based DSMs are analyzed using sequencing algorithms that reorder the matrix elements to minimize the number of feedback marks and bring them as close as possible to the diagonal. DSM matrices have been categorized as component-based (architecture) DSMs, activity-based (schedule) DSMs and parameter-based DSMs; the latter two are time-based, as their ordering implies flow. Where the off-diagonal cell markings indicate only the existence or non-existence of an interaction between elements, using a single symbol, the matrix is termed a binary DSM; markings have since been developed to indicate quantitative relationships as well.
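A binary, time-based DSM and its feedback count can be sketched in a few lines. The activities and dependencies are invented for illustration, and the convention used (a mark in row i, column j means activity i receives information from activity j) is one of the two conventions found in the literature:

```python
# Binary time-based DSM: rows/columns list activities in their planned
# order; dsm[i][j] == 1 means activity i depends on activity j.
labels = ["A", "B", "C", "D"]
dsm = [
    [0, 0, 1, 0],  # A depends on C: an above-diagonal (feedback) mark
    [1, 0, 0, 0],  # B depends on A: feedforward
    [0, 1, 0, 0],  # C depends on B: feedforward
    [0, 0, 1, 0],  # D depends on C: feedforward
]

n = len(labels)
# Feedback marks sit above the diagonal: information flowing backwards
# in time, forcing iteration.  Sequencing algorithms reorder rows and
# columns to minimize this count.
feedback = sum(dsm[i][j] for i in range(n) for j in range(i + 1, n))
print("feedback marks:", feedback)  # feedback marks: 1
```

Here no reordering can remove the A-depends-on-C mark without creating another one, since A, B, C form a cycle; a sequencing algorithm would flag that block for iteration.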
PRINCE2 is a structured project management method and practitioner certification programme. PRINCE2 emphasises dividing projects into controllable stages. It is adopted in many countries worldwide, including the UK, Western European countries and Australia, and PRINCE2 training is available in many languages. PRINCE2 was developed as a UK government standard for information systems projects. In July 2013, ownership of the rights to PRINCE2 was transferred from HM Cabinet Office to AXELOS Ltd, a joint venture by the Cabinet Office and Capita, with 49% and 51% stakes respectively. PRINCE was derived from an earlier method called PROMPT II. In 1989 the Central Computer and Telecommunications Agency (CCTA) adopted a version of PROMPT II as a UK Government standard for information systems project management and gave it the name 'PRINCE', which stood for "PROMPT II IN the CCTA Environment". PRINCE was later renamed, in a Civil Service competition, as an acronym for "PRojects IN Controlled Environments"; it soon became applied outside the purely IT environment, both in UK government and in the private sector around the world.
PRINCE2 was released in 1996 as a generic project management method. PRINCE2 has become popular and is now a de facto standard for project management in many UK government departments and across the United Nations system. In the 2009 revision, the acronym was changed to mean 'PRojects IN a Controlled Environment'. There have been two major revisions of PRINCE2 since its launch in 1996: "PRINCE2:2009 Refresh" in 2009 and "PRINCE2 2017 Update" in 2017; the justification for the 2017 update was the evolution of practical business practices and feedback from PRINCE2 practitioners in actual project environments. A PRINCE2 project sets tolerances against six aspects of project performance (time, cost, quality, scope, benefits and risk); these tolerances, or performance goals, are considered during decision-making processes, and in some organizations they can be KPIs. Project-level tolerances can be summarized as follows: benefits can have as a target the cost of the benefit, but the cost tolerance relates to the cost of the project, not the cost of the benefit. Each management level is checked against these tolerances.
PRINCE2 is based on seven principles, and these cannot be tailored. The PRINCE2 principles can be described as a mindset that keeps the project aligned with the PRINCE2 methodology; if a project does not adhere to these principles, it is not being managed using PRINCE2. Continued Business Justification: the business case is the most important document and is updated at every stage of the project to ensure that the project is still viable; early termination can occur. Learn From Experience: each project maintains a lessons log, and projects should continually refer to their own and to previous and concurrent projects' lessons logs to avoid reinventing wheels. Unless lessons provoke change, they are only lessons identified. Defined Roles and Responsibilities: roles are separated from individuals, who may take on multiple roles or share a role. Roles in PRINCE2 are structured in four levels; the project management team contains the last three, in which all primary stakeholders need to be represented. Manage by Stages: the project is planned and controlled on a stage-by-stage basis.
Moving between stages includes updating the business case, the overall plan and the detailed next-stage plan in the light of new evidence. Manage by Exception: a PRINCE2 project has defined tolerances for each project objective, to establish limits of delegated authority. If a management level forecasts that these tolerances will be exceeded, the issue is escalated to the next management level for a decision. Focus on Products: a PRINCE2 project focuses on the definition and delivery of its products, in particular their quality requirements. Tailor to Suit Project Environment: PRINCE2 is tailored to suit the project's environment, complexity, importance, time, capability and risk. Tailoring is the first activity in the process and is reviewed for each stage. Not every aspect of PRINCE2 will be applicable to every project, thus every process has a note on scalability; this provides guidance to the project manager as to how much of the method to apply. The positive aspect of this is flexibility; the negative aspect is that many of the essential elements of PRINCE2 can be omitted, sometimes resulting in a PINO project (PRINCE In Name Only). PRINCE2 defines seven processes: Starting Up A Project, in which the project team is appointed, including an executive and a project manager, and a project brief is produced; Initiating A Project, in which the business case is refined and the Project Initiation Documentation assembled; Directing A Project, which dictates the ways in which the Project Board oversees the project; Controlling A Stage, which dictates how each individual stage should be controlled, including the way in which work packages are authorised and distributed; and Managing Product Delivery, which has the purpose of controlling the link between the Project Manager and the Team Manager by placing formal requirements on accepting and delivering project work.
The remaining processes are Managing Stage Boundaries, which dictates how to transition from one stage to the next, and Closing A Project, which covers the formal decommissioning of the project, follow-on actions and evaluation of the benefits. The PRINCE2 manual contains 26 suggested templates.
Estimation is the process of finding an estimate, or approximation, which is a value usable for some purpose even if input data may be incomplete, uncertain, or unstable; the value is nonetheless usable. Estimation involves "using the value of a statistic derived from a sample to estimate the value of a corresponding population parameter"; the sample provides information that can be projected, through various formal or informal processes, to determine a range most likely to describe the missing information. An estimate that turns out to be incorrect will be an overestimate if the estimate exceeds the actual result, or an underestimate if the estimate falls short of the actual result. Estimation is often done by sampling: counting a small number of examples of something and projecting that number onto a larger population. An example of estimation would be determining how many candies are in a jar: because the distribution of candies inside the jar may vary, the observer can count the number of candies visible through the glass, consider the size of the jar, and presume that a similar distribution can be found in the parts that cannot be seen, thereby making an estimate of the total number of candies that could be in the jar if that presumption were true.
Estimates can be generated by projecting results from polls or surveys onto the entire population. In making an estimate, the goal is often to generate a range of possible outcomes that is precise enough to be useful but not so precise that it is likely to be inaccurate. For example, in trying to guess the number of candies in the jar, if fifty were visible and the total volume of the jar seemed to be about twenty times as large as the volume containing the visible candies, one might project that there were a thousand candies in the jar. Such a projection, intended to pick the single value believed to be closest to the actual value, is called a point estimate. However, a point estimate is likely to be incorrect, because the sample size (in this case, the number of candies that are visible) is too small to be sure that it does not contain anomalies that differ from the population as a whole. A corresponding concept is an interval estimate, which captures a much larger range of possibilities but may be too broad to be useful.
For example, if one were asked to estimate the percentage of people who like candy, it would be correct to say that the number falls between zero and one hundred percent. Such an estimate would provide no guidance, however, to somebody trying to determine how many candies to buy for a party to be attended by a hundred people. In mathematics, approximation describes the process of finding estimates in the form of upper or lower bounds for a quantity that cannot be evaluated precisely, and approximation theory deals with finding simpler functions that are close to some complicated function and that can provide useful estimates. In statistics, an estimator is the formal name for the rule by which an estimate is calculated from data, and estimation theory deals with finding estimates with good properties; this process is used, for example, in signal processing for approximating an unobserved signal on the basis of an observed signal containing noise. For estimation of yet-to-be-observed quantities, forecasting and prediction are applied. A Fermi problem, in physics, is one concerning estimation in problems which involve making justified guesses about quantities that seem impossible to compute given limited available information.
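The candy-jar numbers above translate directly into a point estimate and a (deliberately crude, made-up) interval estimate:

```python
visible = 50        # candies counted through the glass
volume_ratio = 20   # jar volume relative to the visible region (a guess)

# Point estimate: a single value believed closest to the truth.
point_estimate = visible * volume_ratio
print(point_estimate)  # 1000

# Interval estimate: concede that the ratio itself is uncertain,
# say anywhere between 15x and 25x (an assumed range).
interval_estimate = (visible * 15, visible * 25)
print(interval_estimate)  # (750, 1250)
```

The interval is more likely to contain the true count than the single number 1000, but the wider the assumed range, the less useful the interval becomes, which is exactly the trade-off described above.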
Estimation is important in business and economics, because too many variables exist to figure out how large-scale activities will develop. Estimation in project planning can be significant, because plans for the distribution of labor and for purchases of raw materials must be made despite the inability to know every possible problem that may come up. A certain amount of resources will be available for carrying out a particular project, making it important to obtain or generate a cost estimate as one of the vital elements of entering into the project. The U.S. Government Accountability Office defines a cost estimate as "the summation of individual cost elements, using established methods and valid data, to estimate the future costs of a program, based on what is known today", and reports that "realistic cost estimating was imperative when making wise decisions in acquiring new systems". Furthermore, project plans must not underestimate the needs of the project, which can result in delays while unmet needs are fulfilled, nor overestimate the needs of the project, or else the unneeded resources may go to waste.
An informal estimate when little information is available is called a guesstimate, because the inquiry becomes closer to purely guessing the answer. The "estimated" sign, ℮, is used to designate that package contents are close to the nominal contents. See also: Abundance estimation; Ansatz; Ballpark estimate; Back-of-the-envelope calculation; Conjecture; Cost estimation; Estimation statistics; Estimation theory; Fermi problem; German tank problem; Kalman filter; Mark and recapture; Sales quote; Upper and lower bounds.
Project management is the practice of initiating, executing and closing the work of a team to achieve specific goals and meet specific success criteria at the specified time. The primary challenge of project management is to achieve all of the project goals within the given constraints; this information is described in project documentation, created at the beginning of the development process. The primary constraints are scope, time and budget; the secondary – and more ambitious – challenge is to optimize the allocation of necessary inputs and apply them to meet pre-defined objectives. The object of project management is to produce a complete project which complies with the client's objectives. In many cases the object of project management is to shape or reform the client's brief in order to feasibly be able to address the client's objectives. Once the client's objectives are established they should influence all decisions made by other people involved in the project – for example project managers, designers and sub-contractors.
Ill-defined or overly prescriptive project management objectives are detrimental to decision making. A project is a temporary endeavor designed to produce a unique product, service or result, with a defined beginning and end, undertaken to meet unique goals and objectives, typically to bring about beneficial change or added value. The temporary nature of projects stands in contrast with business as usual, which consists of repetitive, permanent, or semi-permanent functional activities to produce products or services. In practice, the management of such distinct production approaches requires the development of distinct technical skills and management strategies. Until 1900, civil engineering projects were managed by creative architects and master builders themselves, for example Christopher Wren, Thomas Telford and Isambard Kingdom Brunel. In the 1950s organizations started to systematically apply project-management tools and techniques to complex engineering projects; as a discipline, project management developed from several fields of application including civil construction and heavy defense activity.
Two forefathers of project management are Henry Gantt, called the father of planning and control techniques and famous for his use of the Gantt chart as a project management tool, and Henri Fayol, known for his five functions of management. Both Gantt and Fayol were students of Frederick Winslow Taylor's theories of scientific management; Taylor's work is the forerunner to modern project management tools including the work breakdown structure and resource allocation. The 1950s marked the beginning of the modern project management era, when core engineering fields came together to work as one. Project management became recognized as a distinct discipline arising from the management discipline with an engineering model. In the United States, prior to the 1950s, projects were managed on an ad-hoc basis, using Gantt charts and informal techniques and tools. At that time, two mathematical project-scheduling models were developed. The "critical path method" (CPM) was developed as a joint venture between DuPont Corporation and Remington Rand Corporation for managing plant maintenance projects.
The "program evaluation and review technique" (PERT) was developed by the U.S. Navy Special Projects Office in conjunction with the Lockheed Corporation and Booz Allen Hamilton as part of the Polaris missile submarine program. PERT and CPM are similar in their approach but still present some differences: CPM assumes deterministic activity times, while PERT allows for stochastic activity times; because of this core difference, CPM and PERT are used in different contexts. These mathematical techniques spread into many private enterprises. At the same time, as project-scheduling models were being developed, technology for project cost estimating, cost management and engineering economics was evolving, with pioneering work by Hans Lang and others. In 1956, the American Association of Cost Engineers (now AACE International) was formed by early practitioners of project management and the associated specialties of planning and scheduling, cost estimating, and cost/schedule control. AACE continued its pioneering work and in 2006 released the first integrated process for portfolio and project management.
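PERT's stochastic activity times are conventionally summarized with a three-point (beta-distribution) estimate. A minimal sketch with hypothetical durations:

```python
def pert_expected(optimistic, most_likely, pessimistic):
    """Classic PERT expected duration: (o + 4m + p) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_std_dev(optimistic, pessimistic):
    """Conventional PERT approximation of the spread: (p - o) / 6."""
    return (pessimistic - optimistic) / 6

# Hypothetical activity: 4 days best case, 6 most likely, 14 worst case.
print(pert_expected(4, 6, 14))        # 7.0
print(round(pert_std_dev(4, 14), 2))  # 1.67
```

This is the core of the PERT/CPM contrast: CPM would schedule the activity at a single fixed duration, while PERT carries an expected value and a spread through the network.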
In 1969, the Project Management Institute was formed in the USA. PMI publishes A Guide to the Project Management Body of Knowledge, which describes project management practices that are common to "most projects, most of the time." PMI offers a range of certifications. Project management can apply to any project, but it is often tailored to accommodate the specific needs of different and specialized industries. For example, the construction industry, which focuses on the delivery of things like buildings and bridges, has developed its own specialized form of project management that it refers to as construction project management and in which project managers can become trained and certified. The information technology industry has likewise evolved its own form of project management, referred to as IT project management.
A megaproject is a large-scale investment project. According to the Oxford Handbook of Megaproject Management, "Megaprojects are large-scale, complex ventures that cost $1 billion or more, take many years to develop and build, involve multiple public and private stakeholders, are transformational, impact millions of people". However, $1 billion is not a strict constraint in defining megaprojects; a more general definition is "Megaprojects are temporary endeavours characterized by: large investment commitment, vast complexity, long-lasting impact on the economy, the environment, society". According to the European Cooperation in Science and Technology, megaprojects are characterized both by "extreme complexity and by a long record of poor delivery". Megaprojects attract a lot of public attention because of substantial impacts on communities and budgets, and the high costs involved. Megaprojects can also be defined as "initiatives that are physical, very expensive, and public". Bent Flyvbjerg, a professor at the Saïd Business School of the University of Oxford, says that globally, megaprojects make up 8 percent of total global GDP.
Care in the project development process is required to reduce any possible optimism bias and strategic misrepresentation, as a curious paradox exists in which more and more megaprojects are being proposed despite their poor performance against initial forecasts of budget and benefits. Megaprojects are also affected by corruption, leading to higher cost and lower benefit. Megaprojects include bridges, highways, airports, power plants, wastewater projects, Special Economic Zones and natural gas extraction projects, public buildings, information technology systems, aerospace projects, weapons systems, large-scale sporting events and, more recently, mixed-use waterfront redevelopments. Megaprojects can include large-scale high-cost initiatives in scientific research and infrastructure, such as the sequencing of the human genome, a significant global advance in genetics and biotechnology. According to Bent Flyvbjerg, "As a general rule of thumb, 'megaprojects' are measured in billions of dollars, 'major projects' in hundreds of millions, and 'projects' in millions and tens of millions."
The logic on which many of the typical megaprojects are built is that of collective benefits: they may serve as the means for opening frontiers. Megaprojects have attracted wide criticism for their top-down planning processes and for their ill effects on certain communities. Large-scale projects often advantage one group of people while disadvantaging another; for instance, the Three Gorges Dam in China is the largest hydroelectric project in the world, but required the displacement of 1.2 million farmers. In the 1970s, the highway revolts involved urban activists opposing government plans to demolish buildings in freeway routes, which would disadvantage the urban working class to benefit commuters. Anti-nuclear protests against proposed nuclear power plants in the United States and Germany prevented developments due to environmental and social concerns. Newer types of megaprojects have been identified that no longer follow the old models of being singular and monolithic in their purposes, but have become quite flexible and diverse, such as waterfront redevelopment schemes that seem to offer something to everybody.
However, just like the old megaprojects, the new ones foreclose "upon a wide variety of social practices, reproducing rather than resolving urban inequality and disenfranchisement". Because of their plethora of land uses "these mega-projects inhibit the growth of oppositional and contestational practices"; the collective benefits that are the underlying logic of a mega-project, are here reduced to an individualized form of public benefit. Bent Flyvbjerg argues that policymakers are attracted to megaprojects for four reasons: Technological sublime: the rapture that engineers and technologists get from building large and innovative projects, pushing the boundaries for what technology can do. Political sublime: the rapture politicians get from building monuments to themselves and for their causes. Economic sublime: the delight business people and trade unions get from making lots of money and jobs from megaprojects. Aesthetic sublime: the pleasure designers and people who appreciate good design get from building and looking at something large, iconically beautiful.
Proponents of infrastructure-based development advocate funding large-scale projects to create long-term economic benefits. Investing in megaprojects in order to stimulate the general economy has been a popular policy measure since the economic crisis of the 1930s; recent examples are the 2008–2009 Chinese economic stimulus program, the 2008 European Union stimulus plan, and the American Recovery and Reinvestment Act of 2009. Megaprojects raise capital based on expected returns, yet projects often run over budget and over time, and market conditions such as commodity prices can change. Concern at cost overruns is expressed by critics of megaprojects during the planning phase. Bent Flyvbjerg has noted the existence of incentives to overstate income, underestimate costs, and exaggerate future social and economic benefits.