1.
Queueing theory
–
Queueing theory is the mathematical study of waiting lines, or queues. In queueing theory, a model is constructed so that queue lengths and waiting times can be predicted. Queueing theory is generally considered a branch of operations research because its results are often used when making business decisions about the resources needed to provide a service. Queueing theory has its origins in research by Agner Krarup Erlang, who created models to describe the Copenhagen telephone exchange. The ideas have since seen applications in telecommunication, traffic engineering and computing. The spelling "queueing" rather than "queuing" is typically encountered in the research field; in fact, one of the journals of the profession is named Queueing Systems. Many theorems in queueing theory can be proved by reducing queues to mathematical systems known as Markov chains.

Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange, published the first paper on what would now be called queueing theory in 1909. He modeled the number of telephone calls arriving at an exchange by a Poisson process and solved the M/D/1 queue in 1917. In an M/G/1 queue, the G stands for "general" and indicates an arbitrary probability distribution. The M/G/1 model was solved by Felix Pollaczek in 1930, a solution later recast in probabilistic terms by Aleksandr Khinchin. After the 1940s, queueing theory became an area of research interest to mathematicians. In 1953, David George Kendall solved the GI/M/k queue and introduced the modern notation for queues. In 1957, Pollaczek studied the GI/G/1 queue using an integral equation, and John Kingman later gave a formula for the mean waiting time in a G/G/1 queue, now known as Kingman's formula. The matrix geometric method and matrix analytic methods have since allowed the analysis of queues with phase-type distributed inter-arrival and service times, but problems such as exact performance metrics for the M/G/k queue remain open.
Several service disciplines are common. First in, first out serves customers one at a time in order of arrival. Last in, first out also serves customers one at a time, but the most recent arrival is served first. Under processor sharing, service capacity is shared equally between customers. Under priority service, customers with high priority are served first; priority queues can be of two types, non-preemptive (a job in service cannot be interrupted) and preemptive (a job in service can be interrupted by a higher-priority job), and no work is lost in either model.

In networks of queues, when a customer is serviced at one node it can join another node and queue for service, or leave the network. For a network of m nodes, the state of the system can be described by an m-dimensional vector (x1, x2, ..., xm), where xi represents the number of customers at node i. If the network forms a Jackson network, this system has a product-form stationary distribution. This result was extended to the BCMP network, where a network with very general service times, regimes and customer routing is shown to also exhibit a product-form stationary distribution; the normalizing constant can be calculated with Buzen's algorithm, proposed in 1973. Networks of customers have also been investigated, such as Kelly networks, where customers of different classes experience different priority levels at different service nodes.
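As an illustrative sketch of the kind of model queueing theory studies (the rates λ = 0.5 and μ = 1.0 and the use of Lindley's recursion are this example's assumptions, not taken from the text), the mean waiting time at a single exponential server fed by Poisson arrivals can be simulated and compared against the known closed-form value:

```python
import random

def simulate_mm1_waits(lam, mu, n_customers, seed=1):
    """Estimate the mean waiting time in queue of an M/M/1 queue
    via Lindley's recursion: W_{n+1} = max(0, W_n + S_n - A_{n+1})."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n_customers):
        total += w
        s = rng.expovariate(mu)    # service time of this customer
        a = rng.expovariate(lam)   # inter-arrival time to the next customer
        w = max(0.0, w + s - a)
    return total / n_customers

lam, mu = 0.5, 1.0                       # example arrival and service rates
wq_sim = simulate_mm1_waits(lam, mu, 200_000)
wq_theory = lam / (mu * (mu - lam))      # closed-form M/M/1 mean wait in queue
print(wq_sim, wq_theory)                 # simulated value is close to 1.0
```

With these rates the utilization is ρ = 0.5 and the theoretical mean wait in queue is 1.0; the simulation should land near it.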
2.
Equilibrium distribution
–
In probability theory and related fields, a Markov process, named after the Russian mathematician Andrey Markov, is a stochastic process that satisfies the Markov property: conditional on the present state of the system, its future and past states are independent. A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set, but the precise definition of a Markov chain varies. Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906, although random walks on the integers and the gambler's ruin problem, which are examples of Markov processes, were studied hundreds of years earlier. The Wiener process and the Poisson process are examples of Markov processes in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time. The algorithm known as PageRank, which was proposed for the internet search engine Google, is based on a Markov process. The adjective Markovian is used to describe something that is related to a Markov process. A Markov chain is a process with the Markov property; the term Markov chain refers to the sequence of random variables such a process moves through. It can thus be used for describing systems that follow a chain of linked events. To define a Markov chain, the system's state space and time parameter index need to be specified. In addition, there are extensions of Markov processes that are referred to as such. Moreover, the time index need not necessarily be real-valued; as with the state space, conceivable processes move through index sets with other mathematical constructs. Notice that the general-state-space continuous-time Markov chain is general to such a degree that it has no designated term. While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed-on restrictions; however, many applications of Markov chains employ finite or countably infinite state spaces. Besides the time index and state space, there are many other variations, extensions and generalizations.
For simplicity, most of this article concentrates on the discrete-time, discrete state-space case. The changes of state of the system are called transitions, and the probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate.
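As a toy illustration of these definitions (the three-state chain and its probabilities are invented for this example, not taken from the text), repeatedly applying the transition matrix to an initial distribution drives it toward the chain's equilibrium distribution:

```python
# A minimal three-state discrete-time Markov chain; the states and
# probabilities here are illustrative.
P = [
    [0.9, 0.1, 0.0],   # transition probabilities out of state 0
    [0.2, 0.7, 0.1],   # out of state 1
    [0.0, 0.3, 0.7],   # out of state 2
]

def step(dist, P):
    """One transition: multiply the row vector `dist` by the matrix P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]     # start in state 0 with certainty
for _ in range(500):       # iterate until the distribution stops changing
    dist = step(dist, P)
print(dist)                # ≈ [0.6, 0.3, 0.1], the equilibrium distribution
```

The limit [0.6, 0.3, 0.1] is exactly the solution of πP = π with the entries of π summing to one, which is what "equilibrium distribution" means for this chain.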
3.
Operations research
–
Operations research, or operational research in British usage, is a discipline that deals with the application of advanced analytical methods to help make better decisions. The term operational analysis is used in the British military as an intrinsic part of capability development, management and assurance; in particular, operational analysis forms part of the Combined Operational Effectiveness and Investment Appraisals. Operations research is often considered to be a sub-field of applied mathematics, and the terms management science and decision science are sometimes used as synonyms. Operations research is concerned with determining the maximum or minimum of some real-world objective. Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries. Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system, and because of the computational and statistical nature of most of these fields, OR also has ties to computer science. In the decades after the two world wars, the techniques were more widely applied to problems in business and industry. Early work in operational research was carried out by individuals such as Charles Babbage, and Percy Bridgman brought operational research to bear on problems in physics in the 1920s. Modern operational research originated at the Bawdsey Research Station in the UK in 1937 and was the result of an initiative of the station's superintendent, A. P. Rowe, who conceived the idea as a means to analyse and improve the working of the UK's early warning radar system, Chain Home (CH). Initially, he analysed the operating of the radar equipment and its communication networks, expanding later to include the operating personnel's behaviour. This revealed unappreciated limitations of the CH network and allowed remedial action to be taken. Scientists in the United Kingdom, including Patrick Blackett, Cecil Gordon and Solly Zuckerman, applied these methods during the Second World War; other names for the field included operational analysis and quantitative management.
During the Second World War close to 1,000 men and women in Britain were engaged in operational research; about 200 operational research scientists worked for the British Army. Patrick Blackett worked for several different organizations during the war. After first working with RAF Coastal Command, Blackett moved from the RAE to the Navy in 1941. Blackett's team at Coastal Command's Operational Research Section included two future Nobel prize winners and many other people who went on to be pre-eminent in their fields, and they undertook a number of analyses that aided the war effort. One argument concerned convoy size: convoys travel at the speed of the slowest member, so small convoys can travel faster, and it was also argued that small convoys would be harder for German U-boats to detect.
4.
Agner Krarup Erlang
–
Agner Krarup Erlang was a Danish mathematician, statistician and engineer who invented the fields of traffic engineering and queueing theory. By the time of his early death at the age of 51, he had created the field of telephone network analysis. Erlang was born at Lønborg, near Tarm, in Jutland; he was the son of a schoolmaster, and a descendant of Thomas Fincke on his mother's side. At age 14, he passed the Preliminary Examination of the University of Copenhagen with distinction, and for the next two years he taught alongside his father. A distant relative provided free board and lodging, and Erlang prepared for and took the University of Copenhagen entrance examination in 1896. He won a scholarship to the University, majored in mathematics, and also studied astronomy, physics and chemistry. He graduated in 1901 with an MA and over the next 7 years taught at several schools. He maintained his interest in mathematics, and received an award for a paper that he submitted to the University of Copenhagen. Erlang worked for the Copenhagen Telephone Company (CTC) from 1908 for almost 20 years, and he was an associate of the British Institution of Electrical Engineers. While working for the CTC, Erlang was presented with the problem of determining how many circuits were needed to provide an acceptable telephone service. His thinking went further by finding how many telephone operators were needed to handle a given volume of calls; most telephone exchanges then used human operators and cord boards to switch telephone calls by means of jack plugs. Out of necessity, Erlang was a hands-on researcher: he would conduct measurements and was prepared to climb into street manholes to do so. He was also an expert in the history and calculation of the numerical tables of mathematical functions, and he devised new calculation methods for certain forms of tables. He developed his theory of telephone traffic over several years.
His significant publications include "The Theory of Probabilities and Telephone Conversations" (1909) and "Solution of some Problems in the Theory of Probabilities of Significance in Automatic Telephone Exchanges" (1917), which contains his classic formulae for call loss and waiting time. These and other papers were translated into English, French and German. His papers were prepared in a very brief style and can be difficult to understand without a background in the field; one researcher from Bell Telephone Laboratories is said to have learned Danish to study them. The British Post Office accepted his formula as the basis for calculating circuit facilities. A unit of measurement, a statistical distribution and a programming language have been named in his honour. See also An Introduction to Erlang B and Erlang C, and Brockmeyer, E., Halstrøm, H. L. and Jensen, A., The Life and Works of A. K. Erlang.
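The call-loss formula referred to above is now known as the Erlang B formula. As an illustrative sketch (the recursive evaluation and the example traffic figures are this editor's choices, not from the text), the blocking probability for a given offered traffic and number of circuits can be computed as follows:

```python
def erlang_b(traffic_a, n_circuits):
    """Blocking probability for n_circuits serving an offered traffic of
    traffic_a erlangs, via the standard numerically stable recurrence
    B(0) = 1,  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, n_circuits + 1):
        b = traffic_a * b / (n + traffic_a * b)
    return b

# e.g. 2 erlangs of offered traffic on 4 circuits
print(round(erlang_b(2.0, 4), 4))   # → 0.0952
```

The recurrence avoids the large factorials of the direct closed-form expression, which is why it is the usual way the formula is evaluated in practice.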
5.
Telecommunication
–
Telecommunication is the transmission of signs, signals, messages, writings, images and sounds, or intelligence of any nature, by wire, radio, optical or other electromagnetic systems. Telecommunication occurs when the exchange of information between communication participants includes the use of technology; information is transmitted either electrically over physical media, such as cables, or via electromagnetic radiation. Such transmission paths are often divided into communication channels, which afford the advantages of multiplexing. The term is often used in its plural form, telecommunications. Early means of communicating over a distance included visual signals, such as beacons, smoke signals, semaphore telegraphs and signal flags; other examples of pre-modern long-distance communication included audio messages such as coded drumbeats, lung-blown horns, and loud whistles. Pioneers of television included Vladimir K. Zworykin, John Logie Baird and Philo Farnsworth. The word telecommunication is a compound of the Greek prefix tele-, meaning distant, far off, or afar, and the Latin communicare, meaning to share; its modern use is adapted from the French, because its use was recorded in 1904 by the French engineer and novelist Édouard Estaunié. Communication was first used as an English word in the late 14th century. In the Middle Ages, chains of beacons were commonly used on hilltops as a means of relaying a signal. Beacon chains suffered the drawback that they could pass only a single bit of information. One notable instance of their use was during the Spanish Armada. In 1792, Claude Chappe, a French engineer, built the first fixed visual telegraphy system between Lille and Paris. However, semaphore suffered from the need for skilled operators and expensive towers at intervals of ten to thirty kilometres; as a result of competition from the electrical telegraph, the last commercial semaphore line was abandoned in 1880. Homing pigeons have occasionally been used throughout history by different cultures.
Pigeon post is thought to have Persian roots and was used by the Romans to aid their military; Frontinus said that Julius Caesar used pigeons as messengers in his conquest of Gaul. The Greeks also conveyed the names of the victors at the Olympic Games to various cities using homing pigeons. In the early 19th century, the Dutch government used the system in Java and Sumatra, and in 1849 Paul Julius Reuter started a pigeon service to fly stock prices between Aachen and Brussels, a service that operated for a year until the gap in the telegraph link was closed. Sir Charles Wheatstone and Sir William Fothergill Cooke invented the electrical telegraph in 1837, and the first commercial electrical telegraph is purported to have been constructed by Wheatstone and Cooke. Both inventors viewed their device as an improvement to the electromagnetic telegraph, not as a new device. Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on 2 September 1837.
6.
Traffic engineering (transportation)
–
Traffic engineering is a branch of civil engineering that uses engineering techniques to achieve the safe and efficient movement of people and goods on roadways. It focuses mainly on research for safe and efficient traffic flow, covering elements such as road geometry, sidewalks and crosswalks, cycling infrastructure, traffic signs, road surface markings and traffic lights. Traffic engineering deals with the functional part of the transportation system, except for the infrastructures provided. Typical traffic engineering projects involve designing traffic control device installations and modifications, including signals, signs and pavement markings. However, traffic engineers also consider traffic safety by investigating locations with high crash rates. Traffic flow management can be short-term or long-term. Traditionally, road improvements have consisted mainly of building additional infrastructure; however, dynamic elements are now being introduced into road traffic management. Dynamic elements have long been used in rail transport. These include sensors to measure traffic flows and automatic, interconnected guidance systems to manage traffic. Also, traffic flow and speed sensors are used to detect problems and alert operators, so that the cause of the congestion can be determined; these systems are collectively called intelligent transportation systems. Up to a certain point, increased traffic density increases flow; however, above a threshold, increased density reduces speed, and beyond a further threshold, increased density reduces flow as well. Therefore, speeds and lane flows at bottlenecks can be kept high during peak periods by managing traffic density using devices that limit the rate at which vehicles can enter the highway, such as ramp meters: signals on entrance ramps that control the rate at which vehicles are allowed to enter the mainline facility. Highway safety engineering is a branch of traffic engineering that deals with reducing the frequency and severity of crashes. It uses physics and vehicle dynamics, as well as road user psychology and human factors engineering. A typical traffic safety investigation follows these steps:
1. Locations are selected by looking for sites with higher than average crash rates.
2. Data are collected; this includes obtaining police reports of crashes, observing road user behavior, and collecting information on traffic signs, road surface markings, traffic lights and road geometry.
3. Look for collision patterns or road conditions that may be contributing to the problem.
4. Identify possible countermeasures to reduce the severity or frequency of crashes.
5. Evaluate cost/benefit ratios of the alternatives.
6. Consider whether a proposed improvement will solve the problem; for example, preventing left turns at one intersection may eliminate left turn crashes at that location, only to increase them a block away.
7. Consider whether any disadvantages of the proposed improvements are likely to be worse than the problem being solved.
8. Evaluate the results after implementation; usually, this occurs some time after the implementation.
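The density threshold behaviour described above is often illustrated with the classic Greenshields model; the model choice and the numbers below are this editor's illustration, not something stated in the text. Speed falls linearly with density, so flow rises with density up to a critical point and then falls:

```python
def greenshields_flow(density, free_speed=100.0, jam_density=120.0):
    """Flow (vehicles/hour) under the Greenshields model: speed falls
    linearly with density, v = vf * (1 - k/kj), and flow is q = k * v.
    The free-flow speed and jam density here are example values."""
    speed = free_speed * (1.0 - density / jam_density)
    return density * speed

# Flow peaks at half the jam density (60 veh/km here), matching the
# threshold behaviour described in the text.
low, critical, high = 30.0, 60.0, 90.0
print(greenshields_flow(low), greenshields_flow(critical), greenshields_flow(high))
# → 2250.0 3000.0 2250.0
```

This is why ramp metering works in principle: holding density near the critical value keeps the bottleneck's flow near its maximum.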
7.
Computing
–
Computing is any goal-oriented activity requiring, benefiting from, or creating a mathematical sequence of steps known as an algorithm, e.g. through computers. The field of computing includes computer engineering, software engineering, computer science, information systems and information technology. The ACM Computing Curricula 2005 defined computing as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers." For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult. Because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The fundamental question underlying all of computing is: what can be automated? The term computing is also synonymous with counting and calculating; in earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. Computing is intimately tied to the representation of numbers, but long before abstractions like the number arose, there were mathematical concepts to serve the purposes of civilization. These concepts include one-to-one correspondence (the basis of counting) and comparison to a standard (used for measurement). The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC. Its original style of usage was by lines drawn in sand with pebbles; abaci of a more modern design are still used as calculation tools today. This was the first known computational aid and the most advanced system of calculation known to date, preceding Greek methods by 2,000 years. The first recorded idea of using electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams.
Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program. The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form enables a programmer to study and develop the algorithm. Because the instructions can be carried out in different types of computers, a single set of source instructions converts to machine instructions according to the CPU type. The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer; they trigger sequences of simple actions on the executing machine, and those actions produce effects according to the semantics of the instructions. Computer software, or just software, is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more computer programs and data held in the storage of the computer for some purposes. In other words, software is a set of programs, procedures, algorithms and their documentation. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software.
8.
Industrial engineering
–
Industrial engineering is a branch of engineering which deals with the optimization of complex processes, systems or organizations. Industrial engineers work to eliminate waste of time, money, materials, man-hours, machine time, energy and other resources; according to the Institute of Industrial and Systems Engineers, they figure out how to do things better, engineering processes and systems that improve quality and productivity. Some engineering universities and educational agencies around the world have changed the term industrial to broader terms such as production or systems. The various topics of concern to industrial engineers include:
• Process engineering: the design, operation, control, and optimization of chemical, physical, and biological processes.
• Systems engineering: a field of engineering that focuses on how to design and manage complex systems.
• Safety engineering: a discipline which assures that engineered systems provide acceptable levels of safety.
• Value engineering: a method to improve the value of goods or products.
• Quality engineering: a way of preventing mistakes or defects in manufactured products.
• Project management: the process and activity of planning, organizing, motivating, and controlling resources, procedures and protocols to achieve specific goals in scientific or daily problems.
• Supply chain management: the movement and storage of raw materials and work-in-process inventory.
• Ergonomics: the practice of designing products, systems or processes to take proper account of the interaction between them and the people that use them.
• Logistics: the management of the flow of goods between the point of origin and the point of consumption in order to meet some requirements.
Many of the tools and principles of industrial engineering can be applied to the configuration of work activities within a project. The application of industrial engineering and operations management concepts and techniques to the execution of projects has thus been referred to as Project Production Management.
Traditionally, a major aspect of industrial engineering was planning the layouts of factories and designing assembly lines. Now, in lean manufacturing systems, industrial engineers work to eliminate wastes of time, money, materials, energy and other resources. There is a general consensus among historians that the roots of the industrial engineering profession date back to the Industrial Revolution. The concept of the production system had its genesis in the factories created by these innovations. Eli Whitney and Simeon North proved the feasibility of the notion of interchangeable parts in the manufacture of muskets; under this system, individual parts were mass-produced to tolerances to enable their use in any finished product. The result was a significant reduction in the need for skill from specialized workers. Frederick Taylor is generally credited as being the father of the industrial engineering discipline. He earned a degree in mechanical engineering from Stevens Institute of Technology.
9.
Kendall's notation
–
In queueing theory, a discipline within the mathematical theory of probability, Kendall's notation is the standard system used to describe and classify a queueing node. D. G. Kendall proposed describing queueing models using three factors written A/S/c, where A denotes the arrival process, S the service time distribution and c the number of service channels open at the node. The notation has since been extended to A/S/c/K/N/D, where K is the capacity of the queue, N is the size of the calling population and D is the queueing discipline. When the final three parameters are not specified, it is assumed that K = ∞, N = ∞ and D = FIFO. A is a code describing the arrival process, and S gives the distribution of the service time of a customer. c is the number of service channels: the M/M/1 queue has a single server and the M/M/c queue has c servers. K is the capacity of the system, or the maximum number of customers allowed in the system, including those in service; when the number is at maximum, further arrivals are turned away. If this number is omitted, the capacity is assumed to be unlimited. Note that this is sometimes denoted c + k, where k is the buffer size, the number of places in the queue above the number of servers c. N is the size of the population from which the customers come; a small population will significantly affect the effective arrival rate, because, as more jobs queue up, there are fewer left available to arrive into the system. If this number is omitted, the population is assumed to be unlimited; this does not normally cause confusion because the notation differs.
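A small helper makes the defaulting rule concrete; the function and its field names are invented for illustration, and only the K = ∞, N = ∞, D = FIFO convention comes from the text:

```python
def parse_kendall(spec):
    """Split an A/S/c/K/N/D description into its six factors, filling in
    the conventional defaults K = inf, N = inf, D = FIFO when the final
    three parameters are omitted."""
    parts = spec.split("/")
    if len(parts) < 3:
        raise ValueError("need at least the three factors A/S/c")
    defaults = [None, None, None, "inf", "inf", "FIFO"]
    full = parts + defaults[len(parts):]
    keys = ["arrival", "service", "servers", "capacity", "population", "discipline"]
    return dict(zip(keys, full))

print(parse_kendall("M/M/1"))      # capacity, population and discipline defaulted
print(parse_kendall("M/G/2/10"))   # explicit capacity 10, rest defaulted
```

So "M/M/1" expands to M/M/1/∞/∞/FIFO, matching the convention stated above.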
10.
Markov chain
–
In probability theory and related fields, a Markov process, named after the Russian mathematician Andrey Markov, is a stochastic process that satisfies the Markov property: conditional on the present state of the system, its future and past states are independent. A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set, but the precise definition of a Markov chain varies. Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906, although random walks on the integers and the gambler's ruin problem, which are examples of Markov processes, were studied hundreds of years earlier. The Wiener process and the Poisson process are examples of Markov processes in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time. The algorithm known as PageRank, which was proposed for the internet search engine Google, is based on a Markov process. The adjective Markovian is used to describe something that is related to a Markov process. A Markov chain is a process with the Markov property; the term Markov chain refers to the sequence of random variables such a process moves through. It can thus be used for describing systems that follow a chain of linked events. To define a Markov chain, the system's state space and time parameter index need to be specified. In addition, there are extensions of Markov processes that are referred to as such. Moreover, the time index need not necessarily be real-valued; as with the state space, conceivable processes move through index sets with other mathematical constructs. Notice that the general-state-space continuous-time Markov chain is general to such a degree that it has no designated term. While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed-on restrictions; however, many applications of Markov chains employ finite or countably infinite state spaces. Besides the time index and state space, there are many other variations, extensions and generalizations.
For simplicity, most of this article concentrates on the discrete-time, discrete state-space case. The changes of state of the system are called transitions, and the probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate.
11.
Andrey Markov
–
Andrey Andreyevich Markov was a Russian mathematician, best known for his work on stochastic processes; a primary subject of his research later became known as Markov chains and Markov processes. Markov and his younger brother Vladimir Andreevich Markov proved the Markov brothers' inequality, and his son, another Andrey Andreyevich Markov, was also a notable mathematician, making contributions to constructive mathematics and recursive function theory. Andrey Markov was born on 14 June 1856 in Russia. He attended the Petersburg Grammar school, where he was seen as a rebellious student by a select few teachers; in his academics he performed poorly in most subjects other than mathematics. Later in life he attended Petersburg University and was lectured by Pafnuty Chebyshev; among his teachers were Yulian Sokhotski, Konstantin Posse, Yegor Zolotarev, Pafnuty Chebyshev, Aleksandr Korkin, Mikhail Okatov and Osip Somov. He completed his studies at the university and was later asked if he would like to stay and have a career as a mathematician. He later taught at schools and continued his own mathematical studies. In this time he found a practical use for his mathematical skills: he figured out that he could use chains to model the alliteration of vowels, and he also contributed to many other mathematical aspects in his time. He died at age 66 on 20 July 1922. He passed the candidate's examinations and remained at the university to prepare for a lecturer's position. In April 1880, Markov defended his master's thesis "About Binary Quadratic Forms with Positive Determinant"; five years later, in January 1885, there followed his doctoral thesis "About Some Applications of Algebraic Continuous Fractions". His pedagogical work began after the defense of his master's thesis in autumn 1880; as a privatdozent he lectured on differential and integral calculus.
Later he lectured alternately on introduction to analysis, probability theory and the calculus of differences; from 1895 through 1905 he also lectured in differential calculus. One year after the defense of his doctoral thesis, Markov was appointed extraordinary professor. In 1890, after the death of Viktor Bunyakovsky, Markov became a member of the academy. His promotion to ordinary professor of St. Petersburg University followed in the fall of 1894. In 1896, Markov was elected an ordinary member of the academy as the successor of Chebyshev. In 1905, he was appointed merited professor and was granted the right to retire; until 1910, however, he continued to lecture in the calculus of differences.
12.
Poisson process
–
In probability, statistics and related fields, a Poisson point process or Poisson process is a type of random mathematical object that consists of points randomly located on a mathematical space. The Poisson point process is often defined on the real line. In this setting, it is used, for example, in queueing theory to model random events distributed in time, such as the arrival of customers at a store or phone calls at an exchange. In the plane, the process is used in mathematical models and in the related fields of spatial point processes, stochastic geometry and spatial statistics. On more abstract spaces, the Poisson point process serves as an object of mathematical study in its own right. Its tractability has inspired the proposal of other point processes, some of which are constructed with the Poisson point process. The process is named after the French mathematician Siméon Denis Poisson, despite Poisson never having studied it. The process was discovered independently and repeatedly in different settings, including experiments on radioactive decay, telephone call arrivals and insurance mathematics. The point process depends on a single mathematical object, which, depending on the context, may be a constant, a locally integrable function or, in more general settings, a Radon measure. In the first case, the constant, known as the rate or intensity, is the average density of the points in the Poisson process located in some region of space; the resulting point process is called a homogeneous or stationary Poisson point process. Depending on the setting, the process has several equivalent definitions as well as definitions of varying generality owing to its many applications and characterizations; consequently, the notation, terminology and level of mathematical rigour used to define and study the Poisson point process all vary with the context. Despite its different forms and varying generality, the Poisson point process has two key properties.
The Poisson point process is related to the Poisson distribution, which implies that the probability of a Poisson random variable N being equal to n is given by

P(N = n) = (Λ^n / n!) e^(−Λ)

where n! denotes n factorial and Λ is the single Poisson parameter that is used to define the Poisson distribution. If a Poisson point process is defined on some underlying space, then the numbers of points in disjoint bounded regions are independent random variables. This property is known under several names such as complete randomness, complete independence, or independent scattering, and is common to all Poisson point processes. In other words, there is a lack of interaction between different regions and the points in general, which motivates the Poisson process being called a purely or completely random process. For all instances of the Poisson point process, the two key properties of the Poisson distribution and complete independence play an important role. If a Poisson point process has a constant parameter, say λ, then it is called a homogeneous or stationary Poisson point process. The parameter, called the rate or intensity, is related to the expected number of Poisson points existing in some bounded region. The homogeneous Poisson point process, when considered on the positive half-line, can be defined as a counting process, a type of stochastic process.
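On the positive half-line, the homogeneous process can be sampled by summing exponential inter-arrival times, as in this illustrative sketch (the rate λ = 2 and the interval [0, 10] are arbitrary example values, not from the text):

```python
import random

def poisson_points(rate, t_end, seed=0):
    """Sample a homogeneous Poisson process on [0, t_end] by summing
    independent exponential inter-arrival times with parameter `rate`."""
    rng = random.Random(seed)
    points, t = [], rng.expovariate(rate)
    while t < t_end:
        points.append(t)
        t += rng.expovariate(rate)
    return points

# With rate 2 on [0, 10], the number of points is Poisson with mean
# Λ = 2 * 10 = 20, illustrating the distribution above.
counts = [len(poisson_points(2.0, 10.0, seed=s)) for s in range(2000)]
print(sum(counts) / len(counts))   # close to 20
```

Averaging the counts over many independent samples recovers the mean Λ of the Poisson distribution, tying the point process back to the formula above.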
13.
M/M/1 queue
–
The model name is written in Kendall's notation. The model is the most elementary of queueing models and an object of study because closed-form expressions can be obtained for many metrics of interest. An extension of this model with more than one server is the M/M/c queue. An M/M/1 queue is a stochastic process whose state space is the set {0, 1, 2, 3, ...}, where the value corresponds to the number of customers in the system. Arrivals occur at rate λ according to a Poisson process and move the process from state i to i + 1. Service times have an exponential distribution with rate parameter μ, where 1/μ is the mean service time. A single server serves customers one at a time from the front of the queue, according to a first-come, first-served discipline. When the service is complete the customer leaves the queue and the number of customers in the system reduces by one. The buffer is of infinite size, so there is no limit on the number of customers it can contain. The model can be described as a continuous time Markov chain on the state space {0, 1, 2, ...} with transition rate matrix

Q = ( −λ     λ
       μ   −(μ+λ)    λ
              μ    −(μ+λ)    λ
                      ⋱       ⋱      ⋱ )

and this is the same continuous time Markov chain as in a birth–death process. We can write a probability mass function dependent on t to describe the probability that the M/M/1 queue is in a particular state at a given time. We assume that the queue is initially in state i and write p_k(t) for the probability of being in state k at time t. Then

p_k(t) = e^(−(λ+μ)t) ( ρ^((k−i)/2) I_{k−i}(at) + ρ^((k−i−1)/2) I_{k+i+1}(at) + (1−ρ) ρ^k Σ_{j=k+i+2}^{∞} ρ^(−j/2) I_j(at) ),

where ρ = λ/μ, a = 2√(λμ) and I_k is the modified Bessel function of the first kind. Moments for the transient solution can be expressed as the sum of two monotone functions. The model is considered stable only if λ < μ. If, on average, arrivals happen faster than service completions, the queue will grow indefinitely long and the system will not have a stationary distribution; the stationary distribution is the limiting distribution for large values of t. Various performance measures can be computed explicitly for the M/M/1 queue. We write ρ = λ/μ for the utilization of the buffer and require ρ < 1 for the queue to be stable.
ρ represents the proportion of time for which the server is occupied. The probability that the stationary process is in state i (containing i customers) is π_i = (1 − ρ)ρ^i. We see that the number of customers in the system is geometrically distributed with parameter 1 − ρ. Thus the average number of customers in the system is ρ/(1 − ρ), and this result holds for any work conserving service regime, such as processor sharing.
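The closed-form stationary measures above are easy to compute directly. The following sketch (illustrative code, not from the original article; the function name is an assumption) evaluates ρ, the mean number in system L = ρ/(1 − ρ), the mean time in system W = 1/(μ − λ), and the geometric stationary probabilities:

```python
def mm1_metrics(lam, mu):
    """Closed-form stationary metrics for a stable M/M/1 queue (lam < mu)."""
    if lam >= mu:
        raise ValueError("queue is unstable unless lambda < mu")
    rho = lam / mu                        # server utilization
    L = rho / (1 - rho)                   # mean number of customers in system
    W = 1 / (mu - lam)                    # mean time in system (L = lam * W)
    pi = lambda i: (1 - rho) * rho ** i   # stationary probability of state i
    return rho, L, W, pi

rho, L, W, pi = mm1_metrics(lam=1.0, mu=2.0)
# rho = 0.5, L = 1.0, W = 1.0, pi(0) = 0.5
```

Note that L = λW, an instance of Little's law.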
14.
Exponentially distributed
–
It is a particular case of the gamma distribution. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless. In addition to being used for the analysis of Poisson processes, it is found in various other contexts. The probability density function of an exponential distribution is f(x; λ) = λe^(−λx) for x ≥ 0, and 0 for x < 0. Alternatively, this can be defined using the right-continuous Heaviside step function H, where H(0) = 1: f(x; λ) = λe^(−λx)H(x). Here λ > 0 is the rate parameter of the distribution, and the distribution is supported on the interval [0, ∞). If a random variable X has this distribution, we write X ~ Exp(λ). The exponential distribution exhibits infinite divisibility. The cumulative distribution function is given by F(x; λ) = 1 − e^(−λx) for x ≥ 0, and 0 for x < 0. The distribution is alternatively parametrized by the scale parameter β = 1/λ, where β > 0 is the mean, standard deviation and scale parameter of the distribution; that is to say, the expected duration of survival of the system is β units of time. The parametrization involving the rate parameter arises in the context of events arriving at a rate λ. The alternative specification is sometimes more convenient than the one given above, and some authors will use it as a standard definition. This alternative specification is not used here. Unfortunately this gives rise to a notational ambiguity; in an example of this switch, a reference may use λ for β. The mean or expected value of an exponentially distributed random variable X with rate parameter λ is given by E[X] = 1/λ = β; see above. In light of the examples given above, this makes sense: if you receive phone calls at an average rate of 2 per hour, you can expect to wait half an hour for every call. The variance of X is given by Var[X] = 1/λ² = β², and the moments of X, for n = 1, 2, ..., are given by E[Xⁿ] = n!/λⁿ. The median of X is given by m = ln(2)/λ < E[X], where ln refers to the natural logarithm. Thus the absolute difference between the mean and median is |E[X] − m| = (1 − ln 2)/λ < 1/λ = standard deviation. A memoryless exponentially distributed random variable T obeys the relation Pr(T > s + t | T > s) = Pr(T > t), ∀ s, t ≥ 0.
The exponential distribution and the geometric distribution are the only memoryless probability distributions. The exponential distribution is consequently also necessarily the only continuous probability distribution that has a constant failure rate. The quantile function for Exp(λ) is F^(−1)(p; λ) = −ln(1 − p)/λ, for 0 ≤ p < 1. The quartiles are therefore: first quartile ln(4/3)/λ, median ln(2)/λ, third quartile ln(4)/λ, and as a consequence the interquartile range is ln(3)/λ.
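The quantile formulas and the memoryless relation above can be checked numerically. This is an illustrative sketch (function names are assumptions, not part of the original text):

```python
import math

def exp_cdf(x, lam):
    """Cumulative distribution function of Exp(lam)."""
    return 1 - math.exp(-lam * x) if x >= 0 else 0.0

def exp_quantile(p, lam):
    """Quantile function F^-1(p) = -ln(1 - p)/lam, for 0 <= p < 1."""
    return -math.log(1 - p) / lam

lam = 2.0
median = exp_quantile(0.5, lam)                           # ln(2)/lam
iqr = exp_quantile(0.75, lam) - exp_quantile(0.25, lam)   # ln(3)/lam

# Memorylessness: P(T > s + t | T > s) = P(T > t)
s, t = 0.7, 1.3
lhs = (1 - exp_cdf(s + t, lam)) / (1 - exp_cdf(s, lam))
rhs = 1 - exp_cdf(t, lam)
```

The conditional survival probability lhs coincides with rhs for every choice of s, t ≥ 0, which is exactly the memoryless property.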
15.
Probability distribution
–
For instance, if the random variable X is used to denote the outcome of a coin toss, then the probability distribution of X would take the value 0.5 for X = heads, and 0.5 for X = tails. In more technical terms, the probability distribution is a description of a random phenomenon in terms of the probabilities of events. Examples of random phenomena can include the results of an experiment or survey. A probability distribution is defined in terms of an underlying sample space, which is the set of all possible outcomes of the random phenomenon being observed. The sample space may be the set of real numbers or a higher-dimensional vector space, or it may be a list of non-numerical values, for example {heads, tails} for a coin toss. Probability distributions are divided into two classes. A discrete probability distribution can be encoded by a discrete list of the probabilities of the outcomes; on the other hand, a continuous probability distribution is typically described by probability density functions. The normal distribution represents a commonly encountered continuous probability distribution. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures. A probability distribution whose sample space is the set of real numbers is called univariate. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution and the normal distribution. The multivariate normal distribution is a commonly encountered multivariate distribution. To define probability distributions for the simplest cases, one needs to distinguish between discrete and continuous random variables. A continuous random variable takes any particular value with probability zero; for example, the probability that an object weighs exactly 500 g is zero. Continuous probability distributions can be described in several ways; the cumulative distribution function is the antiderivative of the probability density function, provided that the latter function exists.
As probability theory is used in diverse applications, terminology is not uniform. The following terms are used for probability distribution functions. Probability distribution: a table that displays the probabilities of outcomes in a sample; it could be called a frequency distribution table, where all occurrences of outcomes sum to 1. Distribution function: a form of frequency distribution table. Probability distribution function: a form of probability distribution table.
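The discrete case described above can be sketched concretely: a probability mass function (pmf) whose values sum to 1, and a cumulative distribution function obtained as its running sum. This is an illustrative example, not from the original text:

```python
from fractions import Fraction

# A discrete probability distribution as a probability mass function (pmf):
# outcomes of a fair coin toss, as in the example above.
coin_pmf = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}
assert sum(coin_pmf.values()) == 1   # all probabilities sum to 1

# For a numeric discrete variable (a fair die), the cumulative distribution
# function F(k) = P(X <= k) is the running sum of the pmf.
die_pmf = {k: Fraction(1, 6) for k in range(1, 7)}

def cdf(k):
    """Cumulative distribution function of the fair die."""
    return sum(p for x, p in die_pmf.items() if x <= k)
```

Exact rational arithmetic (`Fraction`) makes the sum-to-one check exact rather than approximate.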
16.
David George Kendall
–
David George Kendall FRS was an English statistician and mathematician, known for his work on probability, statistical shape analysis, ley lines and queueing theory. He spent most of his academic life in the University of Oxford and the University of Cambridge. He worked with M. S. Bartlett during World War II. David George Kendall was born on 15 January 1918 in Ripon, West Riding of Yorkshire, and attended Ripon Grammar School before attending Queen's College, Oxford, graduating in 1939. He worked on rocketry during World War II, before moving to Magdalen College, Oxford. In 1962 he was appointed the first Professor of Mathematical Statistics in the Statistical Laboratory, University of Cambridge, in which post he remained until his retirement in 1985; he was also elected to a fellowship at Churchill College. In 1986, he was awarded an Honorary Degree by the University of Bath. Kendall was an expert in probability and data analysis, and pioneered statistical shape analysis, including the study of ley lines. He defined Kendall's notation for queueing theory. The Royal Statistical Society awarded him the Guy Medal in Silver in 1955, followed in 1981 by the Guy Medal in Gold. In 1980 the London Mathematical Society awarded Kendall their Senior Whitehead Prize, and he was elected a fellow of the Royal Society in 1964. He was married to Diana Fletcher from 1952 until his death and they had two sons and four daughters, including Wilfrid Kendall, professor in the Department of Statistics at the University of Warwick, and journalist Bridget Kendall MBE.
17.
Mean sojourn time
–
The mean sojourn time for an object in a system is a mathematical term for the amount of time an object is expected to spend in a system before leaving the system for good. Imagine you are standing in line to buy a ticket at a counter. If, after one minute, you observe the number of customers that have arrived behind you, this might be looked upon as an estimate of the number of customers entering the system per unit time. To formalize this somewhat, let us consider the line as a system S into which there is a flow of particles. Similar theorems have been discovered in other fields; in physiology one of them was earlier known as one of the Stewart–Hamilton equations. Thus, let us consider a system S in the form of a domain of finite volume in Euclidean space. A particle may pass in and out of a given subsystem s several times before leaving S; the sum of these times is the sojourn time of s for that particular particle. If the motions of the particles are looked upon as realizations of one and the same stochastic process, the mean sojourn time of a subsystem s is the total time a particle is expected to spend in the subsystem s before leaving the system S for good. It can then be demonstrated that the steady-state number of particles in the subsystem s equals the stream of particles into the system S times the mean sojourn time of the subsystem. This is thus a more general form of what above was referred to as Little's theorem. This mass-time equivalence has found applications in, say, medicine for the study of metabolism of individual organs. Only when the whole system is considered is it true that the sojourn time always equals the transit time.
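The mass-time equivalence (Little's theorem, L = λW) can be checked on a toy trace. The following sketch uses hypothetical arrival and departure times (chosen for illustration; none of these numbers come from the original text) and compares the time-averaged number of objects in the system with the arrival rate times the mean sojourn time:

```python
# Little's law L = lambda * W: time-average number in system equals
# arrival rate times mean sojourn time. Toy check with a made-up trace.
arrivals   = [0.0, 1.0, 2.5, 4.0]
departures = [2.0, 3.5, 5.0, 6.0]   # each object leaves after its sojourn

T = max(departures)                  # observation window [0, T]
lam = len(arrivals) / T              # arrival rate
W = sum(d - a for a, d in zip(arrivals, departures)) / len(arrivals)

# Time-average number in system: integrate the step function N(t).
events = sorted([(a, +1) for a in arrivals] + [(d, -1) for d in departures])
area, n, last = 0.0, 0, 0.0
for t, delta in events:
    area += n * (t - last)
    n += delta
    last = t
L = area / T
```

For any finite trace that starts and ends empty, the two sides agree exactly, since the area under N(t) equals the total sojourn time.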
18.
FIFO (computing and electronics)
–
FIFO is an acronym for first in, first out, a method for organizing and manipulating a data buffer, where the oldest entry, or head of the queue, is processed first. It is analogous to processing a queue with first-come, first-served (FCFS) behaviour; FCFS is also the jargon term for the FIFO operating system scheduling algorithm, which gives every process central processing unit time in the order in which it is demanded. FIFO's opposite is LIFO, last-in-first-out, where the youngest entry, or top of the stack, is processed first. A priority queue is neither FIFO nor LIFO but may adopt similar behaviour temporarily or by default. Queueing theory encompasses these methods for processing data structures, as well as interactions between strict-FIFO queues. Depending on the application, a FIFO could be implemented as a hardware shift register, or using different memory structures, typically a circular buffer or a kind of list. For information on the abstract data structure, see Queue. A FIFO is commonly implemented as a linked list. The ends of a FIFO queue are often referred to as head and tail. Unfortunately, a controversy exists regarding those terms. To many people, items should enter a queue at the tail, and remain in the queue until they reach the head and leave the queue from there. This point of view is justified by analogy with queues of people waiting for some kind of service, and parallels the use of front and back. Other people believe that items enter a queue at the head and leave at the tail, in the manner of food passing through a snake; queues written in that way appear in places that could be considered authoritative. In computing environments that support the pipes-and-filters model for interprocess communication, a FIFO is another name for a named pipe. Disk controllers can use the FIFO as a disk scheduling algorithm to determine the order in which to service disk I/O requests.
Communication network bridges, switches and routers used in computer networks use FIFOs to hold data packets en route to their next destination; typically at least one FIFO structure is used per network connection. Some devices feature multiple FIFOs for simultaneously and independently queuing different types of information. FIFOs are commonly used in electronic circuits for buffering and flow control between hardware and software. In its hardware form, a FIFO primarily consists of a set of read and write pointers, storage and control logic. Storage may be static random access memory (SRAM), flip-flops, latches or any other suitable form of storage. For FIFOs of non-trivial size, a dual-port SRAM is usually used, where one port is dedicated to writing and the other to reading. A synchronous FIFO is a FIFO where the same clock is used for both reading and writing; an asynchronous FIFO uses different clocks for reading and writing. A common implementation of an asynchronous FIFO uses a Gray code for the read and write pointers to ensure reliable flag generation. One further note concerning flag generation is that one must necessarily use pointer arithmetic to generate flags for asynchronous FIFO implementations; conversely, one may use either a leaky bucket approach or pointer arithmetic to generate flags in synchronous FIFO implementations. Examples of FIFO status flags include: full, empty, almost full and almost empty. The first known FIFO implemented in electronics was by Peter Alfke in 1969 at Fairchild Semiconductor; Peter Alfke was later a Director at Xilinx. A hardware FIFO is used for synchronization purposes.
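The linked-list software FIFO described earlier in this entry can be sketched as follows. The original C++ listing is not reproduced in this text, so this is a minimal, illustrative sketch in Python (class and method names are assumptions): items are enqueued at the tail and dequeued from the head, so the oldest entry is processed first.

```python
class _Node:
    """Singly linked list node holding one queued value."""
    __slots__ = ("value", "next")
    def __init__(self, value):
        self.value, self.next = value, None

class Fifo:
    """Minimal linked-list FIFO: enqueue at the tail, dequeue from the head."""
    def __init__(self):
        self.head = self.tail = None
    def enqueue(self, value):
        node = _Node(value)
        if self.tail is None:           # queue was empty
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node
    def dequeue(self):
        if self.head is None:
            raise IndexError("dequeue from empty FIFO")
        value = self.head.value
        self.head = self.head.next
        if self.head is None:           # queue became empty
            self.tail = None
        return value

q = Fifo()
for x in (1, 2, 3):
    q.enqueue(x)
# the oldest entry (head of the queue) will be dequeued first
```

Both operations run in constant time, which is the main reason a linked list (or circular buffer) is preferred over a plain array for a FIFO.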
19.
LIFO (computing)
–
The order in which elements come off a stack gives rise to its alternative name, LIFO (last in, first out). Additionally, a peek operation may give access to the top without modifying the stack. Considered as a linear data structure, or more abstractly a sequential collection, the push and pop operations occur only at one end of the structure, referred to as the top of the stack. This makes it possible to implement a stack as a singly linked list with a pointer to the top element. A stack may be implemented to have a bounded capacity; if the stack is full and does not contain enough space to accept an entity to be pushed, the stack is then considered to be in an overflow state. The pop operation removes an item from the top of the stack. Stacks entered the computer science literature in 1946, in the computer design of Alan M. Turing, as a means of calling and returning from subroutines. Subroutines had already been implemented in Konrad Zuse's Z4 in 1945. Klaus Samelson and Friedrich L. Bauer of Technical University Munich proposed the idea in 1955; the same concept was developed, independently, by the Australian Charles Leonard Hamblin in the first half of 1957. Stacks are often described by analogy to a stack of plates in a cafeteria: clean plates are placed on top of the stack, pushing down any already there, and when a plate is removed from the stack, the one below it pops up to become the new top. In many implementations, a stack has more operations than push and pop. An example is top of stack, or peek, which observes the top-most element without removing it from the stack. Since this can be done with a pop and a push with the same data, it is not considered an essential operation. An underflow condition can occur in the stack top operation if the stack is empty, the same as pop. Also, implementations often have a function which just returns whether the stack is empty. A stack can be easily implemented either through an array or a linked list. The following will demonstrate both implementations, using pseudocode. An array can be used to implement a stack, as follows.
The first element is the bottom, resulting in array[0] being the first element pushed onto the stack and the last element popped off. Some languages, notably those in the Forth family, are designed around language-defined stacks that are directly visible to and manipulated by the programmer. Java's library contains a Stack class that is a specialization of Vector; following is an example program in the Java language, using that class. A common use of stacks at the architecture level is as a means of allocating and accessing memory. A typical stack is an area of computer memory with a fixed origin.
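The array implementation described above, including the bounded capacity and the overflow and underflow states, can be sketched as follows (in Python rather than the pseudocode or Java of the original listings, which are not reproduced here; the class name is an assumption):

```python
class BoundedStack:
    """Array-backed LIFO stack with a bounded capacity."""
    def __init__(self, capacity):
        self.items = []          # items[0] is the bottom of the stack
        self.capacity = capacity
    def push(self, x):
        if len(self.items) >= self.capacity:
            raise OverflowError("stack overflow")    # overflow state
        self.items.append(x)
    def pop(self):
        if not self.items:
            raise IndexError("stack underflow")      # underflow condition
        return self.items.pop()
    def peek(self):
        """Top of stack without removing it (not an essential operation)."""
        if not self.items:
            raise IndexError("stack underflow")
        return self.items[-1]
    def is_empty(self):
        return not self.items

s = BoundedStack(capacity=3)
for plate in ("red", "green", "blue"):
    s.push(plate)
# the youngest entry ("blue", the top of the stack) comes off first
```

This mirrors the cafeteria-plates analogy: the last plate pushed is the first one popped.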
20.
Stack (data structure)
–
The order in which elements come off a stack gives rise to its alternative name, LIFO (last in, first out). Additionally, a peek operation may give access to the top without modifying the stack. Considered as a linear data structure, or more abstractly a sequential collection, the push and pop operations occur only at one end of the structure, referred to as the top of the stack. This makes it possible to implement a stack as a singly linked list with a pointer to the top element. A stack may be implemented to have a bounded capacity; if the stack is full and does not contain enough space to accept an entity to be pushed, the stack is then considered to be in an overflow state. The pop operation removes an item from the top of the stack. Stacks entered the computer science literature in 1946, in the computer design of Alan M. Turing, as a means of calling and returning from subroutines. Subroutines had already been implemented in Konrad Zuse's Z4 in 1945. Klaus Samelson and Friedrich L. Bauer of Technical University Munich proposed the idea in 1955; the same concept was developed, independently, by the Australian Charles Leonard Hamblin in the first half of 1957. Stacks are often described by analogy to a stack of plates in a cafeteria: clean plates are placed on top of the stack, pushing down any already there, and when a plate is removed from the stack, the one below it pops up to become the new top. In many implementations, a stack has more operations than push and pop. An example is top of stack, or peek, which observes the top-most element without removing it from the stack. Since this can be done with a pop and a push with the same data, it is not considered an essential operation. An underflow condition can occur in the stack top operation if the stack is empty, the same as pop. Also, implementations often have a function which just returns whether the stack is empty. A stack can be easily implemented either through an array or a linked list. The following will demonstrate both implementations, using pseudocode. An array can be used to implement a stack, as follows.
The first element is the bottom, resulting in array[0] being the first element pushed onto the stack and the last element popped off. Some languages, notably those in the Forth family, are designed around language-defined stacks that are directly visible to and manipulated by the programmer. Java's library contains a Stack class that is a specialization of Vector; following is an example program in the Java language, using that class. A common use of stacks at the architecture level is as a means of allocating and accessing memory. A typical stack is an area of computer memory with a fixed origin.
21.
Shortest job first
–
Shortest job next (SJN), also known as shortest job first (SJF) or shortest process next, is a scheduling policy that selects for execution the waiting process with the smallest execution time. Shortest remaining time is a preemptive variant of SJN. Shortest job next is advantageous because of its simplicity and because it minimizes the average amount of time each process has to wait until its execution is complete. However, it has the potential for process starvation for processes which require a long time to complete if short processes are continually added; highest response ratio next is similar but provides a solution to this problem using a technique called aging. Another disadvantage of using shortest job next is that the total execution time of a job must be known before execution. While it is impossible to predict execution time perfectly, several methods can be used to estimate it. Shortest job next can be effectively used with interactive processes, which generally follow a pattern of alternating between waiting for a command and executing it. If the execution burst of a process is regarded as a job, past behaviour can indicate which process to run next. Shortest job next is used in specialized environments where accurate estimates of running time are available. Weighted shortest job first (WSJF) is a modification of the concept used in agile development, where jobs get weighted with the cost of delay so that the most costly jobs get done sooner. Value-flow rate is an alternate, more intuitive name given to WSJF which expresses cost of delay.
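The policy above can be sketched in a few lines: whenever the processor is free, pick the arrived job with the smallest burst time. This is an illustrative, non-preemptive simulation (function name and the example job set are assumptions, not from the original text):

```python
def shortest_job_next(jobs):
    """Non-preemptive SJN over (arrival, burst) jobs; returns mean waiting
    time. At each decision point, run the waiting job with the smallest
    execution time."""
    jobs = sorted(jobs)                  # order by arrival time
    time, waiting_total, ready, i, done = 0, 0, [], 0, 0
    while done < len(jobs):
        while i < len(jobs) and jobs[i][0] <= time:
            ready.append(jobs[i])        # collect jobs that have arrived
            i += 1
        if not ready:                    # idle until the next arrival
            time = jobs[i][0]
            continue
        ready.sort(key=lambda j: j[1])   # smallest execution time first
        arrival, burst = ready.pop(0)
        waiting_total += time - arrival
        time += burst
        done += 1
    return waiting_total / len(jobs)

mean_wait = shortest_job_next([(0, 8), (1, 4), (2, 9), (3, 5)])
```

For the sample job set, SJN runs the bursts in order 8, 4, 5, 9, which yields a smaller mean wait than first-come first-served would.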
22.
Shortest remaining processing time
–
Shortest remaining time, also known as shortest remaining time first, is a scheduling method that is a preemptive version of shortest job next scheduling. In this scheduling algorithm, the process with the smallest amount of time remaining until completion is selected to execute. Shortest remaining time is advantageous because short processes are handled very quickly. Like shortest job next, it has the potential for process starvation: long processes may be held off indefinitely if short processes are continually added. This threat can be minimal when process times follow a heavy-tailed distribution. A similar algorithm which avoids starvation at the cost of higher tracking overhead is highest response ratio next. Like shortest job next scheduling, shortest remaining time scheduling is rarely used outside of specialized environments because it requires accurate estimates of the runtime of each process.
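The preemptive behaviour can be illustrated with a unit-time simulation (an illustrative sketch; the function name and job set are assumptions): at every time unit, the arrived job with the least remaining work runs, so a newly arrived short job preempts a longer one.

```python
def shortest_remaining_time(jobs):
    """Preemptive SRT over (arrival, burst) jobs, in unit time steps.
    Returns {job index: completion time}."""
    remaining = {j: burst for j, (arrival, burst) in enumerate(jobs)}
    finish, time = {}, 0
    while remaining:
        ready = [j for j in remaining if jobs[j][0] <= time]
        if not ready:                                # idle slot
            time += 1
            continue
        j = min(ready, key=lambda k: remaining[k])   # least remaining work
        remaining[j] -= 1                            # run it for one unit
        time += 1
        if remaining[j] == 0:
            finish[j] = time
            del remaining[j]
    return finish

finish = shortest_remaining_time([(0, 4), (1, 2)])
```

Here the short job arriving at time 1 preempts the longer job and completes first, after which the longer job resumes.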
23.
Jackson network
–
The networks were first identified by James R. Jackson, and his paper was reprinted in the journal Management Science's 'Ten Most Influential Titles of Management Science's First Fifty Years'. An earlier product-form solution was found by R. R. P. Jackson for tandem queues. A Jackson network consists of a number of nodes, where each node represents a queue in which the service rate can be both node-dependent and state-dependent. Jobs travel among the nodes following a fixed routing matrix. All jobs at each node belong to a single class and jobs follow the same service-time distribution; consequently, there is no notion of priority in serving the jobs, and all jobs at each node are served on a first-come, first-served basis. Jackson networks where a finite population of jobs travel around a closed network also have a product-form solution, described by the Gordon–Newell theorem. The product-form result π(x) = ∏_{i=1}^{m} π_i(x_i) also holds for M/M/c model stations with c_i servers at the i-th station. In an open network, jobs arrive from outside following a Poisson process with rate α > 0. Each arrival is independently routed to node j with probability p_{0j} ≥ 0, where ∑_{j=1}^{J} p_{0j} = 1. Upon service completion at node i, a job may go to node j with probability p_{ij} or leave the network with probability p_{i0} = 1 − ∑_{j=1}^{J} p_{ij}. Jobs leave each node also following a Poisson process, and μ_i(x_i) is defined as the service rate of node i when there are x_i jobs at node i. Let X_i(t) denote the number of jobs at node i at time t, and X = (X_i)_{i=1}^{J}. Suppose Y = (Y_1, ..., Y_J) is a vector of independent random variables with each Y_i having probability mass function P(Y_i = n) = P(Y_i = 0) · λ_i^n / ∏_{j=1}^{n} μ_i(j); Jackson's theorem relates the stationary distribution of X to the distribution of the vector of independent variables Y. A generalized Jackson network allows renewal arrival processes that need not be Poisson processes; in general, such a network does not have a product-form stationary distribution, so approximations are sought.
A second-order approximation can be obtained by relating the general Jackson network to a homogeneous fluid network and a reflected Brownian motion.
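For the open network described above, the effective arrival rates solve the traffic equations λ_j = α p_{0j} + ∑_i λ_i p_{ij}, and (for single-server nodes with constant rates) the product form gives the joint stationary distribution. The following is an illustrative two-node sketch (all numbers are made-up assumptions) solving the traffic equations by fixed-point iteration:

```python
# Traffic equations for a small open Jackson network:
#   lam_j = alpha * p0[j] + sum_i lam_i * P[i][j]
alpha = 1.0
p0 = [1.0, 0.0]          # all outside arrivals enter node 0
P = [[0.0, 0.5],         # from node 0: to node 1 w.p. 0.5, leave w.p. 0.5
     [0.2, 0.0]]         # from node 1: back to node 0 w.p. 0.2, leave w.p. 0.8
mu = [3.0, 2.0]          # constant service rates (single-server nodes)

lam = [0.0, 0.0]
for _ in range(200):     # fixed-point iteration converges for this routing
    lam = [alpha * p0[j] + sum(lam[i] * P[i][j] for i in range(2))
           for j in range(2)]

rho = [lam[i] / mu[i] for i in range(2)]   # each must be < 1 for stability

def stationary_probability(x):
    """Product form: pi(x) = prod_i (1 - rho_i) * rho_i ** x_i."""
    out = 1.0
    for i, xi in enumerate(x):
        out *= (1 - rho[i]) * rho[i] ** xi
    return out
```

For this routing matrix the traffic equations have the exact solution λ₀ = 10/9 and λ₁ = 5/9, and each node behaves marginally like an independent M/M/1 queue.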
24.
Erol Gelenbe
–
Sami Erol Gelenbe is a French and Turkish computer scientist, electronic engineer and applied mathematician who is Professor in Computer-Communications at Imperial College. Known for pioneering the field of modelling and performance evaluation of computer systems and networks throughout Europe, he invented the random neural network. His many awards include the ACM SIGMETRICS Life-Time Achievement Award. After graduation he joined the University of Michigan as an assistant professor. In 1971 he was elected to the chair in Computer Science at the University of Liège. In 1973, he was awarded a Doctorat d'État ès Sciences Mathématiques from the Paris VI University with a thesis on 'Modélisation des systèmes informatiques'. He formed, led and trained the team that designed the commercial QNAP Computer and Network Performance Modeling Tool. He introduced the Flexsim Object Oriented approach for simulation in manufacturing systems, and he developed new product-form queueing networks with negative customers and triggers, known as G-networks. From 1984 to 1986 he served as the Science and Technology Advisor to the French Secretary of State for Universities, and he founded the ISCIS series of conferences that since 1986 have been held annually in Turkey, the USA and Europe to bring together Turkish computer scientists with their international counterparts. According to the Mathematics Genealogy Project, Gelenbe has graduated over 72 PhD students. Gelenbe currently works on energy-efficient computer systems and self-aware networks, on network security and on networked auctions. His recent collaborations with biologists include gene regulatory networks and protein sequence alignment. Selected publications: E. Gelenbe, On languages defined by linear probabilistic automata, Information and Control, 16, 487–501, July 1970. E. Gelenbe, A realizable model for stochastic sequential machines, IEEE Trans. E. Gelenbe, On approximate computer system models, Journal of the ACM, 22, 261–269.
E. Gelenbe and I. Mitrani, Analysis and Synthesis of Computer Systems, Academic Press, 239 pp., ISBN 0-12-279350-1. E. Gelenbe, On the optimum checkpoint interval, Journal of the ACM, 26, 259–270. E. Gelenbe, Product-form queueing networks with negative and positive customers, Journal of Applied Probability, Vol. 28, 656–663. Function approximation with spiked random networks, IEEE Trans. on Neural Networks, 10. E. Gelenbe and G. Pujolle, Introduction to Queueing Networks, John Wiley & Sons, Inc., New York City, 1987 and 2000. E. Gelenbe, R. Lent and Z. Xu, Design and performance of a cognitive packet network, Performance Evaluation, 46, 155–176, October 2001. Learning in the multiple class random neural network, IEEE Transactions on Neural Networks, 13, 1257–1267. E. Gelenbe, Gellman, R. Lent, P. Liu and Pu Su, Autonomous smart routing for network QoS, Proc. International Conference on Autonomic Computing, 232–239, ISBN 0-7695-2114-2, 17–18 May 2004. Fourneau and E. Gelenbe, Flow equivalence and stochastic equivalence in G-networks, Computational Management Science, 1, 179–192, July 2004, doi:10.1007/s10287-003-0008-z. E. Gelenbe, Steady-state solution of probabilistic gene regulatory networks, Physical Review E, 76, 031903. E. Gelenbe, A diffusion model for packet travel time in a random multi-hop medium, ACM Trans. on Sensor Networks, 3, Article 10, June 2007. E. Gelenbe, Dealing with software viruses: a paradigm, Information Security Technical Reports, 12, 242–250.
25.
Backpressure routing
–
Backpressure routing considers the situation where each job can visit multiple service nodes in the network; it is an extension of max-weight scheduling, in which each job visits only a single service node. Backpressure routing is an algorithm for dynamically routing traffic over a multi-hop network by using congestion gradients. The algorithm can be applied to communication networks, including sensor networks and mobile ad hoc networks. Backpressure principles can also be applied to other areas, such as the study of product assembly systems. This article focuses on communication networks, where packets from multiple data streams arrive and must be delivered to appropriate destinations. The backpressure algorithm operates in slotted time. In every time slot it seeks to route data in directions that maximize the differential backlog between neighboring nodes; this is similar to how water flows through a network of pipes via pressure gradients. Unlike water, however, the algorithm can be applied to multi-commodity networks. The algorithm may introduce large delays, and may be difficult to implement exactly in networks with interference; modifications of backpressure that reduce delay and simplify implementation are described below under Improving delay and Distributed backpressure. Backpressure routing has mainly been studied in a theoretical context. The original backpressure algorithm was developed by Tassiulas and Ephremides, who considered a multi-hop packet radio network with random packet arrivals and a fixed set of link selection options. Their algorithm consisted of a max-weight link selection stage and a differential backlog routing stage. An algorithm related to backpressure, designed for computing multi-commodity network flows, was developed in Awerbuch and Leighton. The backpressure algorithm was later extended by Neely and Modiano.
Backpressure is mathematically analyzed via the theory of Lyapunov drift; backpressure routing is designed to make decisions that minimize the sum of squares of queue backlogs in the network from one timeslot to the next. The precise mathematical development of this technique is described in later sections. This section describes the network model and the operation of backpressure routing with respect to this model. Consider a multi-hop network with N nodes; the network operates in slotted time t ∈ {0, 1, 2, ...}. On each slot, new data can arrive to the network. Let data that is destined for node c ∈ {1, ..., N} be labeled as commodity c data. Data in each node is stored according to its commodity. For n ∈ {1, ..., N} and c ∈ {1, ..., N}, let Q_n^c(t) be the current amount of commodity c data in node n, also called the queue backlog.
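The per-slot decision over a single link can be sketched as follows: for each commodity c, compute the differential backlog Q_a^c − Q_b^c over link (a, b), and serve the commodity that maximizes it, provided the maximum is positive. This is an illustrative single-link sketch (the function name and queue values are assumptions, not from the original text):

```python
def backpressure_choice(Q_a, Q_b):
    """One slot of backpressure on link (a, b): Q_a and Q_b map commodity
    names to backlogs. Returns the commodity with the largest positive
    differential backlog, and that weight (None, 0 if no backlog is positive)."""
    best_c, best_w = None, 0
    for c in Q_a:
        w = Q_a[c] - Q_b.get(c, 0)    # differential backlog for commodity c
        if w > best_w:
            best_c, best_w = c, w
    return best_c, best_w

c, w = backpressure_choice({"c1": 5, "c2": 3}, {"c1": 4, "c2": 0})
# c2 has differential backlog 3 while c1 has only 1, so c2 is served
```

When no commodity has a positive differential backlog, the link stays idle rather than pushing data "uphill" against the congestion gradient.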
26.
Network scheduler
–
A network scheduler, also called packet scheduler, is an arbiter program on a node in a packet switching communication network. It manages the sequence of network packets in the transmit and receive queues of the network interface controller. There are several network schedulers available for the different operating system kernels. The network scheduler logic decides, in a way similar to statistical multiplexers, which network packet to forward next from the buffer. The buffer works as a queuing system, storing the network packets temporarily until they are transmitted. Network scheduling algorithms and their associated settings determine how the network scheduler manages the buffer; network schedulers also enable active queue management and traffic shaping. Over time, several network scheduling algorithms have been developed. Each of the algorithms used internally for these queuing disciplines provides specific reordering or dropping of network packets inside various transmit or receive buffers. Several such algorithms have been implemented as Linux kernel modules and are freely available. The CoDel algorithm attempts to reduce bufferbloat by improving upon the RED algorithm, and is less prone to its effects than the common tail drop discipline. As the default queuing discipline, the Linux packet scheduler uses a FIFO implementation called pfifo_fast, although systemd since its version 217 changes the default queuing discipline to fq_codel. The scheduler manages the transmit and receive buffers of all NICs installed in a computer; the Linux kernel's network stack contains several other buffers, which are not managed by the network scheduler. The overall size of all buffers has been the point of critique by the Bufferbloat project. Another network scheduler is being developed as part of Netfilter and nftables.
eBPF was merged into the Linux kernel mainline in kernel version 3.18, with further eBPF functionality brought by version 4. ALTQ is the implementation of a network scheduler for BSDs; since OpenBSD version 5.5, ALTQ has been totally replaced by the HFSC scheduler.