In queueing theory, a discipline within the mathematical theory of probability, a Jackson network is a class of queueing network whose equilibrium distribution is simple to compute because the network has a product-form solution. It was the first significant development in the theory of networks of queues; generalising and applying the ideas of the theorem to search for similar product-form solutions in other networks has been the subject of much research, including ideas used in the development of the Internet. The networks were first identified by James R. Jackson, whose paper was reprinted in the journal Management Science's "Ten Most Influential Titles of Management Sciences First Fifty Years". Jackson was inspired by the work of Burke and Reich, though Jean Walrand notes that "product-form results … [are] a much less immediate result of the output theorem than Jackson himself appeared to believe in his fundamental paper". An earlier product-form solution was found by R. R. P. Jackson for tandem queues and cyclic networks.
A Jackson network consists of a number of nodes, where each node represents a queue whose service rate can be both node-dependent and state-dependent. Jobs travel among the nodes following a fixed routing matrix. All jobs at each node belong to a single "class": jobs follow the same service-time distribution and the same routing mechanism. There is no notion of priority in serving the jobs; all jobs at each node are served on a first-come, first-served basis. Jackson networks in which a finite population of jobs travels around a closed network also have a product-form solution, described by the Gordon–Newell theorem. A network of m interconnected queues is known as a Jackson network or Jacksonian network if it meets the following conditions: if the network is open, any external arrivals to node i form a Poisson process; all service times are exponentially distributed and the service discipline at all queues is first-come, first-served; a customer completing service at queue i will either move to some new queue j with probability P_ij or leave the system with probability 1 − ∑_{j=1}^m P_ij, which, for an open network, is non-zero for some subset of the queues; and the utilization of all of the queues is less than one.
In an open Jackson network of m M/M/1 queues where the utilization ρ_i is less than 1 at every queue, the equilibrium state probability distribution exists, and for state (k_1, k_2, …, k_m) it is given by the product of the individual queue equilibrium distributions:

π(k_1, …, k_m) = ∏_{i=1}^m π_i(k_i) = ∏_{i=1}^m (1 − ρ_i) ρ_i^{k_i}.

The result π = ∏_{i=1}^m π_i also holds for M/M/c model stations with c_i servers at the i-th station, with utilization requirement ρ_i < c_i. In an open network, jobs arrive from outside following a Poisson process with rate α > 0. Each arrival is independently routed to node j with probability p_{0j} ≥ 0, where ∑_{j=1}^J p_{0j} = 1. Upon service completion at node i, a job may go to another node j with probability p_{ij} or leave the network with probability p_{i0} = 1 − ∑_{j=1}^J p_{ij}. Hence the overall arrival rate to node i, λ_i, including both external arrivals and internal transitions, satisfies the traffic equations:

λ_i = α p_{0i} + ∑_{j=1}^J λ_j p_{ji}.
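As a concrete illustration, the traffic equations can be solved numerically and the product-form distribution evaluated. The following Python sketch uses a hypothetical two-node open network; the arrival rate, routing matrix and service rates are illustrative assumptions, not values from the text.

```python
# Sketch: solve the traffic equations of a hypothetical open Jackson
# network by fixed-point iteration, then evaluate the product-form
# equilibrium distribution. All rates and routing values are assumed.

def traffic_rates(alpha, p0, P, iters=200):
    """Iterate lambda_i = alpha * p0[i] + sum_j lambda_j * P[j][i]."""
    m = len(p0)
    lam = [alpha * p0[i] for i in range(m)]
    for _ in range(iters):
        lam = [alpha * p0[i] + sum(lam[j] * P[j][i] for j in range(m))
               for i in range(m)]
    return lam

alpha = 1.0                  # external arrival rate
p0 = [1.0, 0.0]              # all external arrivals enter node 0
P = [[0.0, 0.5],             # after node 0: to node 1 w.p. 0.5, else leave
     [0.0, 0.0]]             # after node 1: always leave
lam = traffic_rates(alpha, p0, P)          # -> [1.0, 0.5]

mu = [2.0, 2.0]                            # service rates (assumed)
rho = [l / s for l, s in zip(lam, mu)]     # utilizations, all < 1

def pi(state):
    """Product-form probability: prod_i (1 - rho_i) * rho_i ** k_i."""
    p = 1.0
    for r, k in zip(rho, state):
        p *= (1 - r) * r ** k
    return p
```

With these assumed values, node 0 sees rate 1.0 and node 1 sees rate 0.5, so ρ = [0.5, 0.25] and, for example, the probability of an empty network is (1 − 0.5)(1 − 0.25) = 0.375.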
Queueing theory is the mathematical study of waiting lines, or queues. A queueing model is constructed so that waiting times can be predicted. Queueing theory is considered a branch of operations research because its results are used when making business decisions about the resources needed to provide a service. Queueing theory has its origins in research by Agner Krarup Erlang, who created models to describe the Copenhagen telephone exchange; the ideas have since seen applications in telecommunication, traffic engineering, computing and industrial engineering, in the design of factories, shops and hospitals, as well as in project management. The spelling "queueing" over "queuing" is typically encountered in the academic research field; in fact, one of the flagship journals of the profession is named Queueing Systems.

Simple description and analogy

A queue, or "queueing node", can be thought of as nearly a black box. Jobs (or "customers") arrive at the queue, possibly wait some time, take some time being processed, and then depart from the queue.
The queueing node is not quite a pure black box, since some information must be specified about the inside of the queueing node. The queue has one or more "servers", each of which can be paired with an arriving job until it departs, after which that server will be free to be paired with another arriving job. An analogy often used is that of the cashier at a supermarket. There are other models, but this is one commonly encountered in the literature. Customers arrive, are processed by the cashier, and depart; each cashier processes one customer at a time, hence this is a queueing node with only one server. A setting where a customer who arrives and finds the cashier busy leaves immediately is called a queue with no buffer. A setting with a waiting zone for up to n customers is called a queue with a buffer of size n.

Birth-death process

The behaviour or state of a single queue can be described by a birth-death process, which describes the arrivals and departures from the queue, along with the number of jobs in the system.
An arrival increases the number of jobs by 1, and a departure decreases it by 1.

Kendall's notation

Single queueing nodes are described using Kendall's notation in the form A/S/c, where A describes the distribution of durations between each arrival to the queue, S the distribution of service times for jobs and c the number of servers at the node. For an example of the notation, the M/M/1 queue is a simple model where a single server serves jobs that arrive according to a Poisson process and have exponentially distributed service times. In an M/G/1 queue, the G stands for "general" and indicates an arbitrary probability distribution for service times. In 1909, Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange, published the first paper on what would now be called queueing theory; he modeled the number of telephone calls arriving at an exchange by a Poisson process and solved the M/D/1 queue in 1917 and the M/D/k queueing model in 1920. In Kendall's notation, M stands for "Markov" or "memoryless" and means arrivals occur according to a Poisson process.
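For the M/M/1 queue just described, the standard steady-state formulas (utilization ρ = λ/μ, mean number in system L = ρ/(1 − ρ), mean time in system W = 1/(μ − λ)) can be sketched as a small Python helper; the example rates are illustrative.

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 formulas; valid only when lam < mu."""
    if lam >= mu:
        raise ValueError("queue is unstable unless lambda < mu")
    rho = lam / mu                     # server utilization
    return {
        "rho": rho,
        "L": rho / (1 - rho),          # mean number of jobs in the system
        "Lq": rho ** 2 / (1 - rho),    # mean number waiting in the queue
        "W": 1 / (mu - lam),           # mean time in the system
        "Wq": rho / (mu - lam),        # mean waiting time before service
    }

# Illustrative rates: 3 arrivals per unit time, service rate 4 per unit time.
m = mm1_metrics(3.0, 4.0)              # rho = 0.75, L = 3, W = 1
```

Note that the values satisfy Little's law, L = λW, which is a useful sanity check on any such calculation.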
If there are more jobs at the node than there are servers, jobs will queue and wait for service. The M/G/1 queue was solved by Felix Pollaczek in 1930, a solution later recast in probabilistic terms by Aleksandr Khinchin and now known as the Pollaczek–Khinchine formula. After the 1940s queueing theory became an area of research interest to mathematicians. In 1953 David George Kendall solved the GI/M/k queue and introduced the modern notation for queues, now known as Kendall's notation. In 1957 Pollaczek studied the GI/G/1 queue using an integral equation. John Kingman gave a formula for the mean waiting time in a G/G/1 queue, now known as Kingman's formula. Leonard Kleinrock worked on the application of queueing theory to message switching and packet switching; his initial contribution to this field was his doctoral thesis at the Massachusetts Institute of Technology in 1962, published in book form in 1964. His theoretical work published after 1967 underpinned the use of packet switching in the ARPANET, a forerunner to the Internet.
The matrix geometric method and matrix analytic methods have allowed queues with phase-type distributed inter-arrival and service time distributions to be considered. Problems such as performance metrics for the M/G/k queue remain open. Various scheduling policies can be used at queueing nodes:

First in, first out: also called first-come, first-served, this principle states that customers are served one at a time and that the customer that has been waiting the longest is served first.
Last in, first out: this principle also serves customers one at a time, but the customer with the shortest waiting time is served first; also known as a stack.
Processor sharing: service capacity is shared equally between customers.
Priority: customers with high priority are served first. Priority queues can be of two types: preemptive and non-preemptive. No work is lost in either model.
Shortest job first: the next job to be served is the one with the smallest size.
Preemptive shortest job first: the next job to be served is the one with the smallest original size.
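To illustrate how the choice of discipline affects waiting times, the following hypothetical Python sketch compares first-come, first-served with shortest job first for a batch of jobs all present at time 0 (job sizes are made up for illustration):

```python
# Hypothetical comparison of two disciplines, FIFO vs shortest job
# first, for jobs all available at time 0 served by a single server.

def mean_wait(jobs, discipline):
    """Mean time each job waits before its service starts."""
    order = sorted(jobs) if discipline == "sjf" else list(jobs)
    t, total_wait = 0.0, 0.0
    for size in order:
        total_wait += t      # this job waited until time t
        t += size            # server is busy for `size` time units
    return total_wait / len(jobs)

jobs = [5.0, 1.0, 2.0]
fifo = mean_wait(jobs, "fifo")   # waits 0, 5, 6 -> mean 11/3
sjf = mean_wait(jobs, "sjf")     # waits 0, 1, 3 -> mean 4/3
```

Shortest job first minimizes the mean wait for a fixed batch, at the cost of potentially starving large jobs, which is why the preemptive and priority variants above exist.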
In queueing theory, a discipline within the mathematical theory of probability, Burke's theorem is a theorem asserting that, for the M/M/1 queue, M/M/c queue or M/M/∞ queue in the steady state with arrivals forming a Poisson process with rate parameter λ: the departure process is also a Poisson process with rate parameter λ, and at time t the number of customers in the queue is independent of the departure process prior to time t. Burke first published this theorem along with a proof in 1956; the theorem had earlier been anticipated but not proved by O'Brien and Morse. A second proof of the theorem follows from a more general result published by Reich. The proof offered by Burke shows that the time intervals between successive departures are independently and exponentially distributed with parameter equal to the arrival rate parameter, from which the result follows. An alternative proof is possible by considering the reversed process and noting that the M/M/1 queue is a reversible stochastic process. Consider the figure. By Kolmogorov's criterion for reversibility, any birth-death process is a reversible Markov chain.
Note that the arrival instants in the forward Markov chain are the departure instants of the reversed Markov chain. Thus the departure process is a Poisson process of rate λ. Moreover, in the forward process the arrival at time t is independent of the number of customers after t; thus in the reversed process, the number of customers in the queue is independent of the departure process prior to time t. This result may seem counter-intuitive, in the sense that the departure process of a birth-death process is independent of the service offered. The theorem can be generalised to "only a few cases," but it remains valid for M/M/c queues and Geom/Geom/1 queues. It is thought that Burke's theorem does not extend to queues fed by Markovian arrival processes, and it is conjectured that the output process of a MAP/M/1 queue is a MAP only if the queue is an M/M/1 queue. An analogous theorem for the Brownian queue was proven by J. Michael Harrison.
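Burke's theorem can also be checked empirically: in a simulated stable M/M/1 queue, the mean spacing between departures should approach 1/λ, matching a Poisson departure process of rate λ. The simulation below is a sketch with illustrative parameters and a fixed random seed.

```python
import random

# Sketch: simulate a stable M/M/1 queue (illustrative rates lam=0.5,
# mu=1.0) and check that interdeparture times average about 1/lam,
# consistent with Burke's theorem for the departure process.

def mm1_departures(lam, mu, n_jobs, seed=1):
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_jobs):
        t += rng.expovariate(lam)        # Poisson arrival stream
        arrivals.append(t)
    departures, server_free_at = [], 0.0
    for a in arrivals:                   # single FIFO server
        start = max(a, server_free_at)
        server_free_at = start + rng.expovariate(mu)
        departures.append(server_free_at)
    return departures

deps = mm1_departures(lam=0.5, mu=1.0, n_jobs=20000)
gaps = [b - a for a, b in zip(deps, deps[1:])]
mean_gap = sum(gaps) / len(gaps)         # expect roughly 1/0.5 = 2
```

A mean interdeparture time alone does not prove the departures are Poisson, of course; it is only the simplest consistency check suggested by the theorem.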
A Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. In probability theory and related fields, a Markov process, named after the Russian mathematician Andrey Markov, is a stochastic process that satisfies the Markov property. Loosely speaking, a process satisfies the Markov property if one can make predictions for the future of the process based on its present state just as well as one could knowing the process's full history, hence independently of such history. A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set, but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space, but it is also common to define a Markov chain as having discrete time in either a countable or continuous state space. Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906.
Random walks based on integers and the gambler's ruin problem are examples of Markov processes. Some variations of these processes were studied hundreds of years earlier in the context of independent variables. Two important examples of Markov processes are the Wiener process, also known as the Brownian motion process, and the Poisson process, which are considered the most important and central stochastic processes in the theory of stochastic processes; both were discovered repeatedly and independently, before and after 1906, in various settings. These two processes are Markov processes in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time. Markov chains have many applications as statistical models of real-world processes, such as studying cruise control systems in motor vehicles, queues or lines of customers arriving at an airport, exchange rates of currencies, storage systems such as dams, and population growth of certain animal species.
The algorithm known as PageRank, proposed for the internet search engine Google, is based on a Markov process. Markov processes are the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distributions and have found extensive application in Bayesian statistics; the adjective Markovian is used to describe something related to a Markov process. A Markov chain is a stochastic process with the Markov property; the term "Markov chain" refers to the sequence of random variables such a process moves through, with the Markov property defining serial dependence only between adjacent periods. It can thus be used for describing systems that follow a chain of linked events, where what happens next depends only on the current state of the system; the system's state space and time parameter index need to be specified. The following table gives an overview of the different instances of Markov processes for different levels of state space generality and for discrete versus continuous time. Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes.
The term "Markov chain" is reserved for a process with a discrete set of times, i.e. a discrete-time Markov chain, but a few authors use the term "Markov process" to refer to a continuous-time Markov chain without explicit mention. In addition, there are other extensions of Markov processes that are referred to as such but do not fall within any of these four categories. Moreover, the time index need not be real-valued. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term. While the time parameter is discrete, the state space of a Markov chain does not have any agreed-on restrictions: the term may refer to a process on an arbitrary state space. However, many applications of Markov chains employ finite or countably infinite state spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other variations and generalizations. For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise.
The changes of state of the system are called transitions, and the probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state and the process does not terminate. A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step depends only on the current state of the system, and not on the state of the system at previous steps.
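The transition-matrix description above can be made concrete with a short Python sketch that approximates the stationary distribution of a small, hypothetical two-state chain by repeatedly applying the transition matrix:

```python
# Sketch: a hypothetical two-state discrete-time Markov chain; the
# stationary distribution is approximated by repeatedly applying the
# transition matrix to an initial distribution.

def stationary(P, iters=1000):
    n = len(P)
    dist = [1.0 / n] * n                 # start from the uniform distribution
    for _ in range(iters):
        dist = [sum(dist[i] * P[i][j] for i in range(n))
                for j in range(n)]       # one step: dist <- dist * P
    return dist

# Transition probabilities (rows sum to 1); values are illustrative.
P = [[0.9, 0.1],
     [0.5, 0.5]]
stat = stationary(P)                     # -> approximately [5/6, 1/6]
```

Repeated application of the matrix converges here because the chain is irreducible and aperiodic; the limiting vector satisfies the balance condition stat = stat · P.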
Stack (abstract data type)
In computer science, a stack is an abstract data type that serves as a collection of elements, with two principal operations: push, which adds an element to the collection, and pop, which removes the most recently added element that has not yet been removed. The order in which elements come off a stack gives rise to its alternative name, LIFO (last in, first out). Additionally, a peek operation may give access to the top without modifying the stack. The name "stack" for this type of structure comes from the analogy to a set of physical items stacked on top of each other, which makes it easy to take an item off the top of the stack, while getting to an item deeper in the stack may require taking off multiple other items first. Considered as a linear data structure, or more abstractly a sequential collection, the push and pop operations occur only at one end of the structure, referred to as the top of the stack; this makes it possible to implement a stack as a singly linked list with a pointer to the top element. A stack may be implemented to have a bounded capacity.
If the stack is full and does not contain enough space to accept another pushed entity, the stack is considered to be in an overflow state. The pop operation removes an item from the top of the stack. Stacks entered the computer science literature in 1946, when Alan M. Turing used the terms "bury" and "unbury" as a means of calling and returning from subroutines. Subroutines had already been implemented in Konrad Zuse's Z4 in 1945. Klaus Samelson and Friedrich L. Bauer of Technical University Munich proposed the idea in 1955 and filed a patent in 1957, and in March 1988 Bauer received the Computer Pioneer Award for the invention of the stack principle; the same concept was developed independently by the Australian Charles Leonard Hamblin in the first half of 1954. Stacks are often described by analogy to a spring-loaded stack of plates in a cafeteria. Clean plates are placed on top of the stack, pushing down any already there; when a plate is removed from the stack, the one below it pops up to become the new top. In many implementations, a stack has more operations than "push" and "pop".
An example is "top of stack", or "peek", which observes the top-most element without removing it from the stack. Since this can be done with a "pop" followed by a "push" of the same data, it is not essential. An underflow condition can occur in the "stack top" operation if the stack is empty, the same as with "pop". Implementations often have a function which just returns whether the stack is empty. A stack can be implemented either through an array or a linked list. What identifies the data structure as a stack in either case is not the implementation but the interface: the user is only allowed to pop or push items onto the array or linked list, with few other helper operations. The following will demonstrate both implementations. An array can be used to implement a stack. The first element is the bottom, resulting in array[0] being the first element pushed onto the stack and the last element popped off. The program must keep track of the size of the stack, using a variable top that records the number of items pushed so far, therefore pointing to the place in the array where the next element is to be inserted.
Thus, the stack itself can be implemented as a three-element structure:

structure stack:
    maxsize : integer
    top : integer
    items : array of item

procedure initialize(stk : stack, size : integer):
    stk.items ← new array of size items, initially empty
    stk.maxsize ← size
    stk.top ← 0

The push operation adds an element and increments the top index, after checking for overflow:

procedure push(stk : stack, x : item):
    if stk.top = stk.maxsize:
        report overflow error
    else:
        stk.items[stk.top] ← x
        stk.top ← stk.top + 1

Similarly, pop decrements the top index after checking for underflow, and returns the item that was previously the top one:

procedure pop(stk : stack):
    if stk.top = 0:
        report underflow error
    else:
        stk.top ← stk.top − 1
        r ← stk.items[stk.top]
        return r

Using a dynamic array, it is possible to implement a stack that can grow or shrink as much as needed. The size of the stack is simply the size of the dynamic array, an efficient implementation of a stack since adding items to or removing items from the end of a dynamic array requires amortized O(1) time. Another option for implementing stacks is to use a singly linked list.
A stack is then a pointer to the "head" of the list, possibly with a counter to keep track of the size of the list:

structure frame:
    data : item
    next : frame or nil

structure stack:
    head : frame or nil
    size : integer

procedure initialize(stk : stack):
    stk.head ← nil
    stk.size ← 0

Pushing and popping items happens at the head of the list. Some languages, notably those in the Forth family, are designed around language-defined stacks that are directly visible to and manipulated by the programmer.
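For comparison with the array-based pseudocode, the same structure can be rendered as a small Python class; this is a sketch, and the names are illustrative rather than canonical.

```python
class ArrayStack:
    """Bounded stack backed by a fixed-size array, mirroring the
    array-based pseudocode; names here are illustrative."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.items = [None] * maxsize
        self.top = 0                  # number of items currently stored

    def push(self, x):
        if self.top == self.maxsize:
            raise OverflowError("stack overflow")
        self.items[self.top] = x
        self.top += 1

    def pop(self):
        if self.top == 0:
            raise IndexError("stack underflow")
        self.top -= 1
        return self.items[self.top]

    def peek(self):
        if self.top == 0:
            raise IndexError("stack underflow")
        return self.items[self.top - 1]

s = ArrayStack(3)
s.push("a")
s.push("b")
s.pop()        # returns "b": last in, first out
```

In everyday Python a plain list with `append` and `pop` plays the same role; the explicit class is shown only to match the bounded-array pseudocode, overflow check included.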
Round-robin is one of the algorithms employed by process and network schedulers in computing. As the term is generally used, time slices are assigned to each process in equal portions and in circular order, handling all processes without priority. Round-robin scheduling is simple, easy to implement and starvation-free. Round-robin scheduling can also be applied to other scheduling problems, such as data packet scheduling in computer networks; it is an operating system concept. The name of the algorithm comes from the round-robin principle known from other fields, where each person takes an equal share of something in turn. To schedule processes, a round-robin scheduler employs time-sharing, giving each job a time slot or quantum and interrupting the job if it is not completed by then; the job is resumed the next time a time slot is assigned to that process. If the process terminates or changes its state to waiting during its attributed time quantum, the scheduler selects the first process in the ready queue to execute.
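The time-slicing behaviour described above can be sketched in Python; this hypothetical simulator assumes all jobs arrive at time 0 and reports each job's completion time.

```python
from collections import deque

# Hypothetical round-robin simulator: all jobs arrive at time 0, and
# the burst times and quantum are illustrative. Returns each job's
# completion time.

def round_robin(burst_times, quantum):
    remaining = list(burst_times)
    ready = deque(range(len(burst_times)))   # circular ready queue
    t = 0
    finish = [0] * len(burst_times)
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])     # run for at most one quantum
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)                  # back to the end of the queue
        else:
            finish[i] = t
    return finish

# A 250 ms job alongside two 100 ms jobs, with a 100 ms quantum.
finish = round_robin([250, 100, 100], 100)   # -> [450, 200, 300]
```

In this run the long job is suspended twice, receiving 100 ms, 100 ms and finally 50 ms of CPU time, so it completes last at 450 ms.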
In the absence of time-sharing, or if the quanta were large relative to the sizes of the jobs, a process that produced large jobs would be favoured over other processes. The round-robin algorithm is a pre-emptive algorithm, as the scheduler forces the process out of the CPU once the time quota expires. For example, if the time slot is 100 milliseconds and job1 takes a total of 250 ms to complete, the round-robin scheduler will suspend the job after 100 ms and give other jobs their time on the CPU. Once the other jobs have had their equal share (100 ms each), job1 will get another allocation of CPU time and the cycle will repeat. This process continues until the job finishes and needs no more time on the CPU.

Job1 = total time to complete: 250 ms.
First allocation = 100 ms.
Second allocation = 100 ms.
Third allocation = 50 ms.
Total CPU time of job1 = 250 ms.

Consider the following table with the arrival time and execute time of processes with a quantum time of 100 ms to understand round-robin scheduling. Another approach is to divide all processes into an equal number of timing quanta such that the quantum size is proportional to the size of the process.
Hence, all processes end at the same time. In best-effort packet switching and other statistical multiplexing, round-robin scheduling can be used as an alternative to first-come, first-served queuing. A multiplexer, switch, or router that provides round-robin scheduling has a separate queue for every data flow, where a data flow may be identified by its source and destination address. The algorithm lets every active data flow that has data packets in the queue take turns in transferring packets on a shared channel in a periodically repeated order. The scheduling is work-conserving, meaning that if one flow is out of packets, the next data flow will take its place; hence, the scheduling tries to prevent link resources from going unused. Round-robin scheduling results in max-min fairness if the data packets are equally sized, since the data flow that has waited the longest time is given scheduling priority. It may not be desirable if the size of the data packets varies from one job to another: a user that produces large packets would be favored over other users.
In that case fair queuing would be preferable. If guaranteed or differentiated quality of service is offered, rather than only best-effort communication, deficit round-robin scheduling, weighted round-robin scheduling or weighted fair queuing may be considered. In multiple-access networks, where several terminals are connected to a shared physical medium, round-robin scheduling may be provided by token-passing channel access schemes such as token ring, or by polling or resource reservation from a central control station. In a centralized wireless packet radio network, where many stations share one frequency channel, a scheduling algorithm in a central base station may reserve time slots for the mobile stations in a round-robin fashion and provide fairness. However, if link adaptation is used, it will take much longer to transmit a certain amount of data to "expensive" users than to others, since the channel conditions differ; it would be more efficient to wait with the transmission until the channel conditions are improved, or at least to give scheduling priority to less expensive users.
Round-robin scheduling does not exploit this. Higher throughput and system spectrum efficiency may be achieved by channel-dependent scheduling, for example a proportionally fair algorithm or maximum throughput scheduling, although the latter is characterized by undesirable scheduling starvation. Round-robin is one of the basic scheduling algorithms for operating systems and can be implemented using a circular queue data structure. See also: weighted round robin, deficit round robin, multilevel queue.
Poisson point process
In probability and related fields, a Poisson point process is a type of random mathematical object that consists of points randomly located on a mathematical space. The Poisson point process is often called simply the Poisson process, but it is also called a Poisson random measure, Poisson random point field or Poisson point field. This point process has convenient mathematical properties, which have led to its being frequently defined in Euclidean space and used as a mathematical model for random processes in numerous disciplines such as astronomy, ecology, seismology, economics, image processing and telecommunications. The process is named after the French mathematician Siméon Denis Poisson, despite Poisson never having studied the process; its name derives from the fact that if a collection of random points in some space forms a Poisson process, the number of points in a region of finite size is a random variable with a Poisson distribution. The process was discovered independently and repeatedly in several settings, including experiments on radioactive decay, telephone call arrivals and insurance mathematics.
The Poisson point process is often defined on the real line, where it can be considered as a stochastic process. In this setting, it is used, for example, in queueing theory to model random events distributed in time, such as the arrival of customers at a store, phone calls at an exchange or the occurrence of earthquakes. In the plane, the point process, then known as a spatial Poisson process, can represent the locations of scattered objects such as transmitters in a wireless network, particles colliding into a detector, or trees in a forest. In this setting, the process is often used in mathematical models and in the related fields of spatial point processes, stochastic geometry, spatial statistics and continuum percolation theory. The Poisson point process can also be defined on more abstract spaces. Beyond applications, the Poisson point process is an object of mathematical study in its own right. In all settings, the Poisson point process has the property that each point is stochastically independent of all the other points in the process, which is why it is sometimes called a purely or completely random process.
Despite its wide use as a stochastic model of phenomena representable as points, the inherent nature of the process implies that it does not adequately describe phenomena where there is sufficiently strong interaction between the points. This has inspired the proposal of other point processes, some of which are constructed with the Poisson point process, that seek to capture such interaction. The point process depends on a single mathematical object which, depending on the context, may be a constant, a locally integrable function or, in more general settings, a Radon measure. In the first case, the constant, known as the rate or intensity, is the average density of the points in the Poisson process located in some region of space; the resulting point process is called a homogeneous or stationary Poisson point process. In the second case, the point process is called an inhomogeneous or nonhomogeneous Poisson point process, and the average density of points depends on the location of the underlying space of the Poisson point process.
The word point is often omitted, but there are other Poisson processes of objects which, instead of points, consist of more complicated mathematical objects such as lines and polygons; such processes can be based on the Poisson point process. The Poisson point process is one of the most studied and used point processes, in both the field of probability and in more applied disciplines concerning random phenomena, due to its convenient properties as a mathematical model as well as being mathematically interesting. Depending on the setting, the process has several equivalent definitions as well as definitions of varying generality owing to its many applications and characterizations. The Poisson point process is defined on some underlying mathematical space, called a carrier space or state space, though the latter term has a different meaning in the context of stochastic processes. The Poisson point process can be defined and used in one dimension, for example, on the real line, where it can be interpreted as a counting process or part of a queueing model.
The notation and level of mathematical rigour used to define and study the Poisson point process and point processes in general vary according to the context. Despite all this, the Poisson point process has two key properties. A Poisson point process is characterized via the Poisson distribution. The Poisson distribution is the probability distribution of a random variable N such that the probability that N equals n is given by:

P{N = n} = (Λ^n / n!) e^{−Λ},

where n! denotes n factorial and the parameter Λ determines the shape of the distribution. By definition, a Poisson point process has the property that the number of points in a bounded region of its carrier space is a Poisson-distributed random variable.
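These two key properties suggest a direct way to simulate a homogeneous Poisson point process on a rectangle: draw a Poisson number of points with mean Λ = intensity × area, then place each point uniformly at random. The Python sketch below (illustrative parameters, using Knuth's method for the Poisson draw) does exactly this.

```python
import math
import random

# Sketch: sample a homogeneous Poisson point process on a rectangle.
# The intensity and window size are illustrative assumptions.

def sample_poisson(rng, mean):
    """Knuth's method for a Poisson(mean) random variate."""
    threshold = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def poisson_points(intensity, width, height, seed=0):
    """Poisson number of points (mean = intensity * area), each placed
    uniformly at random in the width x height window."""
    rng = random.Random(seed)
    n = sample_poisson(rng, intensity * width * height)
    return [(rng.uniform(0, width), rng.uniform(0, height))
            for _ in range(n)]

pts = poisson_points(intensity=50, width=1.0, height=1.0)
```

Averaged over many windows, the number of points per unit area approaches the intensity, and by complete randomness the counts in disjoint regions are independent; Knuth's method is simple but runs in O(Λ) time, so large means are usually sampled differently in practice.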