Queueing theory is the mathematical study of waiting lines, or queues. A queueing model is constructed so that waiting times can be predicted. Queueing theory is considered a branch of operations research because the results are used when making business decisions about the resources needed to provide a service. Queueing theory has its origins in research by Agner Krarup Erlang, who created models to describe the Copenhagen telephone exchange; the ideas have since seen applications in telecommunications, traffic engineering, computing and industrial engineering, in the design of factories, shops and hospitals, as well as in project management. The spelling "queueing" over "queuing" is typically encountered in the academic research field; indeed, one of the flagship journals of the profession is named Queueing Systems.

Simple description and analogy

A queue, or "queueing node", can be thought of as nearly a black box. Jobs (or "customers") arrive at the queue, possibly wait some time, take some time being processed, and then depart from the queue.
The queueing node is not quite a pure black box, since some information about the inside of the queueing node must be specified. The queue has one or more "servers", each of which can be paired with an arriving job until it departs, after which that server is free to be paired with another arriving job. A commonly used analogy is that of a cashier at a supermarket. (There are other models, but this is one commonly encountered in the literature.) Customers arrive, are processed by the cashier, and depart; each cashier processes one customer at a time, hence this is a queueing node with only one server. A setting where a customer leaves immediately if, on arriving, they find the cashier busy is called a queue with no buffer. A setting with a waiting zone for up to n customers is called a queue with a buffer of size n.

Birth-death process

The behaviour (or state) of a single queue can be described by a birth-death process, which describes the arrivals and departures from the queue, along with the number of jobs currently in the system.
An arrival increases the number of jobs in the system by 1, and a departure decreases it by 1.

Kendall's notation

Single queueing nodes are described using Kendall's notation in the form A/S/c, where A describes the distribution of durations between each arrival to the queue, S the distribution of service times for jobs, and c the number of servers at the node. For an example of the notation, the M/M/1 queue is a simple model where a single server serves jobs that arrive according to a Poisson process and have exponentially distributed service times. In an M/G/1 queue, the G stands for "general" and indicates an arbitrary probability distribution for service times. In 1909, Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange, published the first paper on what would now be called queueing theory. He modeled the number of telephone calls arriving at an exchange by a Poisson process and solved the M/D/1 queue in 1917 and the M/D/k queueing model in 1920. In Kendall's notation, M stands for Markov or memoryless and means arrivals occur according to a Poisson process. If there are more jobs at the node than there are servers, jobs will queue and wait for service.
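As a concrete illustration of the M/M/1 model, the following minimal C++ sketch simulates such a queue by drawing exponentially distributed inter-arrival and service times; the rates lambda and mu and the job count are illustrative values, not taken from any particular system.

```cpp
#include <algorithm>
#include <iostream>
#include <random>

// Minimal M/M/1 simulation: Poisson arrivals (exponential inter-arrival
// times, rate lambda), exponential service times (rate mu), one server.
int main() {
    const double lambda = 0.8;   // arrival rate (illustrative)
    const double mu = 1.0;       // service rate (illustrative)
    const int jobs = 100000;

    std::mt19937 rng(42);
    std::exponential_distribution<double> interarrival(lambda);
    std::exponential_distribution<double> service(mu);

    double arrival = 0.0;        // arrival time of the current job
    double free_at = 0.0;        // time at which the server becomes free
    double total_wait = 0.0;

    for (int i = 0; i < jobs; ++i) {
        arrival += interarrival(rng);
        double start = std::max(arrival, free_at); // wait if the server is busy
        total_wait += start - arrival;
        free_at = start + service(rng);
    }
    // Theory predicts a mean wait of lambda / (mu * (mu - lambda)) = 4.0 here.
    std::cout << "mean wait: " << total_wait / jobs << "\n";
}
```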
The M/G/1 queue was solved by Felix Pollaczek in 1930, a solution later recast in probabilistic terms by Aleksandr Khinchin and now known as the Pollaczek–Khinchine formula. After the 1940s, queueing theory became an area of research interest to mathematicians. In 1953, David George Kendall solved the GI/M/k queue and introduced the modern notation for queues, now known as Kendall's notation. In 1957, Pollaczek studied the GI/G/1 queue using an integral equation. John Kingman gave a formula for the mean waiting time in a G/G/1 queue, now known as Kingman's formula (reproduced at the end of this paragraph). Leonard Kleinrock worked on the application of queueing theory to message switching and packet switching. His initial contribution to this field was his doctoral thesis at the Massachusetts Institute of Technology in 1962, published in book form in 1964 in the field of digital message switching. His theoretical work published after 1967 underpinned the use of packet switching in the ARPANET, a forerunner to the Internet.
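For reference, Kingman's approximation is usually stated as follows, where tau is the mean service time, rho the utilization, and c_a, c_s the coefficients of variation of inter-arrival and service times:

```latex
\mathbb{E}[W_q] \;\approx\; \left(\frac{\rho}{1-\rho}\right)\left(\frac{c_a^2 + c_s^2}{2}\right)\tau
```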
The matrix geometric method and matrix analytic methods have allowed queues with phase-type distributed inter-arrival and service time distributions to be considered. Some problems, such as performance metrics for the M/G/k queue, remain open. Various scheduling policies can be used at queueing nodes:

First in first out: Also called first-come, first-served, this principle states that customers are served one at a time and that the customer that has been waiting the longest is served first.
Last in first out: This principle also serves customers one at a time, but the customer with the shortest waiting time is served first. Also known as a stack.
Processor sharing: Service capacity is shared equally between customers.
Priority: Customers with high priority are served first. Priority queues can be of two types: non-preemptive (where a job in service cannot be interrupted) and preemptive (where a job in service can be interrupted by a higher-priority job). No work is lost in either model.
Shortest job first: The next job to be served is the one with the smallest size.
Preemptive shortest job first: The next job to be served is the one with the smallest original size.
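To make the contrast between these disciplines concrete, here is a minimal C++ sketch (with an invented three-job workload) showing how FIFO, LIFO and shortest job first each pick a different "next job" from the same buffer:

```cpp
#include <algorithm>
#include <deque>
#include <iostream>

struct Job { int id; double size; };

int main() {
    // Invented workload, in arrival order.
    std::deque<Job> q = { {1, 5.0}, {2, 1.0}, {3, 3.0} };

    // FIFO: serve the job that has waited longest (the front).
    std::cout << "FIFO serves job " << q.front().id << "\n";

    // LIFO: serve the most recent arrival (the back), like a stack.
    std::cout << "LIFO serves job " << q.back().id << "\n";

    // Shortest job first: serve the job with the smallest size.
    auto sjf = std::min_element(q.begin(), q.end(),
        [](const Job& a, const Job& b) { return a.size < b.size; });
    std::cout << "SJF serves job " << sjf->id << "\n";
}
```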
FIFO (computing and electronics)
FIFO is an acronym for "first in, first out", a method for organising and manipulating a data buffer, where the oldest entry, or "head" of the queue, is processed first. It is analogous to processing a queue with first-come, first-served behaviour: the people leave the queue in the order in which they arrive. FCFS is the jargon term for the FIFO operating system scheduling algorithm, which gives every process central processing unit time in the order in which it is demanded. FIFO's opposite is LIFO, last-in-first-out, where the youngest entry, or "top of the stack", is processed first. A priority queue may adopt similar behaviour temporarily or by default. Queueing theory encompasses these methods for processing data structures, as well as interactions between strict-FIFO queues. Depending on the application, a FIFO could be implemented as a hardware shift register, or using different memory structures, typically a circular buffer or a kind of list. For information on the abstract data structure, see Queue.
Most software implementations of a FIFO queue are not thread safe and require a locking mechanism to ensure the data structure is manipulated by only one thread at a time. The following code sketches a linked-list FIFO implementation in C++ (see the listing at the end of this paragraph). In practice, a number of list implementations exist, including the popular Unix systems' C sys/queue.h macros and the C++ standard library's std::list template, avoiding the need to implement the data structure from scratch. The ends of a FIFO queue are referred to as head and tail. A controversy exists regarding those terms. To many people, items should enter a queue at the tail, remain in the queue until they reach the head, and leave the queue from there; this point of view is justified by analogy with queues of people waiting for some kind of service, and parallels the use of "front" and "back" in the above example. Other people believe that items enter a queue at the head and leave at the tail, in the manner of food passing through a snake. Queues written in that way appear in places that could be considered authoritative, such as the operating system Linux.
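A minimal sketch of such a linked-list FIFO, assuming a single-threaded context as noted above (a real implementation would add locking):

```cpp
#include <iostream>
#include <stdexcept>

// Minimal singly linked list FIFO queue (not thread safe).
template <typename T>
class Fifo {
    struct Node { T value; Node* next; };
    Node* head = nullptr;  // oldest entry: dequeued first
    Node* tail = nullptr;  // newest entry: enqueued here
public:
    ~Fifo() { while (head) dequeue(); }
    bool empty() const { return head == nullptr; }

    void enqueue(const T& v) {
        Node* n = new Node{v, nullptr};
        if (tail) tail->next = n; else head = n;
        tail = n;
    }

    T dequeue() {
        if (!head) throw std::runtime_error("FIFO underflow");
        Node* n = head;
        head = n->next;
        if (!head) tail = nullptr;
        T v = n->value;
        delete n;
        return v;
    }
};

int main() {
    Fifo<int> q;
    q.enqueue(1); q.enqueue(2); q.enqueue(3);
    while (!q.empty()) std::cout << q.dequeue() << " ";  // prints 1 2 3
}
```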
In computing environments that support the pipes-and-filters model for interprocess communication, a FIFO is another name for a named pipe. Disk controllers can use FIFO as a disk scheduling algorithm to determine the order in which to service disk I/O requests. Communication network bridges and routers used in computer networks use FIFOs to hold data packets en route to their next destination. Typically, at least one FIFO structure is used per network connection; some devices feature multiple FIFOs for simultaneously and independently queuing different types of information. FIFOs are also used in electronic circuits for buffering and flow control between hardware and software. In its hardware form, a FIFO consists of a set of read and write pointers, storage and control logic. Storage may be static random access memory (SRAM), flip-flops, latches or any other suitable form of storage. For FIFOs of non-trivial size, a dual-port SRAM is usually used, where one port is dedicated to writing and the other to reading. A synchronous FIFO is a FIFO where the same clock is used for both reading and writing.
An asynchronous FIFO uses different clocks for reading and writing; such FIFOs introduce metastability issues. A common implementation of an asynchronous FIFO uses a Gray code for the read and write pointers to ensure reliable flag generation. One further note concerning flag generation is that one must necessarily use pointer arithmetic to generate flags for asynchronous FIFO implementations; conversely, one may use either a leaky-bucket approach or pointer arithmetic to generate flags in synchronous FIFO implementations. Examples of FIFO status flags include: full, empty, almost full, and almost empty. The first known FIFO implemented in electronics was by Peter Alfke in 1969 at Fairchild Semiconductor; Alfke was later a director at Xilinx. A hardware FIFO is used for synchronization purposes. It is often implemented as a circular queue, and thus has two pointers: a read pointer / read address register, and a write pointer / write address register. Initially, the read and write addresses are both at the first memory location and the FIFO queue is empty.
FIFO empty: When the read address register reaches the write address register, the FIFO triggers the empty signal. FIFO full: When the write address register reaches the read address register, the FIFO triggers the full signal. In both cases, the read and write addresses end up being equal. To distinguish between the two situations, a simple and robust solution is to add one extra bit to each read and write address, inverted each time the address wraps. With this set-up, the disambiguation conditions are: when the read address register equals the write address register (including the extra bits), the FIFO is empty; when the read address LSBs equal the write address LSBs and the extra MSBs are different, the FIFO is full.
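A software model of this wrap-bit scheme might look like the following C++ sketch; the depth is illustrative, and a real asynchronous design would also re-encode the pointers in Gray code (g = b ^ (b >> 1)) before crossing clock domains, as mentioned above.

```cpp
#include <cstdint>
#include <iostream>

// Model of a circular FIFO using one extra address bit to tell
// "full" from "empty" when the read and write addresses are equal.
constexpr unsigned DEPTH = 8;            // power of two (illustrative)
constexpr unsigned MASK  = DEPTH - 1;    // low bits: memory address
constexpr unsigned WRAP  = DEPTH;        // extra MSB: wrap indicator

struct HwFifo {
    uint32_t mem[DEPTH];
    unsigned rd = 0, wr = 0;             // pointers include the wrap bit

    bool empty() const { return rd == wr; }          // same address, same wrap bit
    bool full()  const { return (rd ^ wr) == WRAP; } // same address, wrap bits differ

    bool push(uint32_t v) {
        if (full()) return false;
        mem[wr & MASK] = v;
        wr = (wr + 1) & (2 * DEPTH - 1); // wrap bit toggles on address wrap
        return true;
    }
    bool pop(uint32_t& v) {
        if (empty()) return false;
        v = mem[rd & MASK];
        rd = (rd + 1) & (2 * DEPTH - 1);
        return true;
    }
};

int main() {
    HwFifo f;
    for (uint32_t i = 0; f.push(i); ++i) {}     // fill until the full flag trips
    std::cout << "full: " << f.full() << "\n";  // prints full: 1
}
```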
Markov chain

A Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. In probability theory and related fields, a Markov process, named after the Russian mathematician Andrey Markov, is a stochastic process that satisfies the Markov property. Loosely speaking, a process satisfies the Markov property if one can make predictions for the future of the process based on its present state just as well as one could knowing the process's full history, hence independently of such history. A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set, but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space, but it is also common to define a Markov chain as having discrete time with either a countable or a continuous state space. Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906.
Random walks based on integers and the gambler's ruin problem are examples of Markov processes. Some variations of these processes were studied hundreds of years earlier in the context of independent variables. Two important examples of Markov processes are the Wiener process, also known as the Brownian motion process, and the Poisson process, which are considered the most important and central stochastic processes in the theory of stochastic processes. These two processes were discovered repeatedly and independently, both before and after 1906, in various settings; they are Markov processes in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time. Markov chains have many applications as statistical models of real-world processes, such as studying cruise control systems in motor vehicles, queues or lines of customers arriving at an airport, exchange rates of currencies, storage systems such as dams, and population growth of certain animal species.
The algorithm known as PageRank, originally proposed for the internet search engine Google, is based on a Markov process. Markov processes are the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distributions and have found extensive application in Bayesian statistics. The adjective Markovian is used to describe something that is related to a Markov process. A Markov chain is a stochastic process with the Markov property; the term "Markov chain" refers to the sequence of random variables such a process moves through, with the Markov property defining serial dependence only between adjacent periods. It can thus be used for describing systems that follow a chain of linked events, where what happens next depends only on the current state of the system. The system's state space and time parameter index need to be specified. The different instances of Markov processes for different levels of state space generality and for discrete versus continuous time can be summarized as follows:

Countable state space, discrete time: (discrete-time) Markov chain on a countable or finite state space.
Countable state space, continuous time: continuous-time Markov process or Markov jump process.
Continuous or general state space, discrete time: Markov chain on a measurable state space (for example, a Harris chain).
Continuous or general state space, continuous time: any continuous stochastic process with the Markov property (for example, the Wiener process).

Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes.
The term "Markov chain" is reserved for a process with a discrete set of times, i.e. a discrete-time Markov chain, but a few authors use the term "Markov process" to refer to a continuous-time Markov chain without explicit mention. In addition, there are other extensions of Markov processes that are referred to as such but do not fall within any of these four categories. Moreover, the time index need not be real-valued. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term. While the time parameter is discrete, the state space of a Markov chain does not have any agreed-on restrictions: the term may refer to a process on an arbitrary state space. However, many applications of Markov chains employ finite or countably infinite state spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other variations and generalizations. For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise.
The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state and the process does not terminate. A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step depends only on the current state of the system, and not additionally on the state of the system at previous steps: Pr(X_{n+1} = x | X_1 = x_1, ..., X_n = x_n) = Pr(X_{n+1} = x | X_n = x_n).
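As a sketch of how a discrete-time, finite-state Markov chain can be simulated from its transition matrix, consider the following C++ example; the two-state matrix is invented for illustration.

```cpp
#include <iostream>
#include <random>
#include <vector>

int main() {
    // Invented 2-state transition matrix P: P[i][j] = Pr(next = j | current = i).
    // Each row sums to 1.
    std::vector<std::vector<double>> P = { {0.9, 0.1},
                                           {0.5, 0.5} };
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    int state = 0;
    std::vector<long> visits(P.size(), 0);
    for (long step = 0; step < 1000000; ++step) {
        ++visits[state];
        // Sample the next state from row `state` by inverse transform.
        double r = u(rng), acc = 0.0;
        for (size_t j = 0; j < P[state].size(); ++j) {
            acc += P[state][j];
            if (r < acc) { state = static_cast<int>(j); break; }
        }
    }
    // Long-run visit fractions should approach the stationary
    // distribution, which is (5/6, 1/6) for this particular matrix.
    for (size_t i = 0; i < visits.size(); ++i)
        std::cout << "state " << i << ": " << visits[i] / 1e6 << "\n";
}
```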
Agile software development
Agile software development is an approach to software development under which requirements and solutions evolve through the collaborative effort of self-organizing and cross-functional teams and their customer/end user. It advocates adaptive planning, evolutionary development, early delivery and continual improvement, and it encourages rapid and flexible response to change. The term agile was popularized by the Manifesto for Agile Software Development. The values and principles espoused in this manifesto were derived from, and underpin, a broad range of software development frameworks, including Scrum and Kanban. There is significant anecdotal evidence that adopting agile practices and values improves the agility of software professionals and organizations. Iterative and incremental development methods can be traced back as early as 1957, with evolutionary project management and adaptive software development emerging in the early 1970s. During the 1990s, a number of lightweight software development methods evolved in reaction to the prevailing heavyweight methods that critics described as overly regulated and micro-managed.
These included rapid application development, from 1991; Scrum, from 1995; and extreme programming, from 1996. Although these all originated before the publication of the Agile Manifesto, they are now collectively referred to as agile software development methods. At the same time, similar changes were underway in aerospace. In 2001, seventeen software developers met at a resort in Snowbird, Utah to discuss these lightweight development methods, including among others Kent Beck, Ward Cunningham, Dave Thomas, Jeff Sutherland, Ken Schwaber, Jim Highsmith, Alistair Cockburn, and Robert C. Martin. Together they published the Manifesto for Agile Software Development. In 2005, a group headed by Cockburn and Highsmith wrote an addendum of project management principles, the PM Declaration of Interdependence, to guide software project management according to agile software development methods. In 2009, a group working with Martin wrote an extension of software development principles, the Software Craftsmanship Manifesto, to guide agile software development according to professional conduct and mastery.
In 2011, the Agile Alliance created the Guide to Agile Practices, an evolving open-source compendium of the working definitions of agile practices and elements, along with interpretations and experience guidelines from the worldwide community of agile practitioners. Based on their combined experience of developing software and helping others do that, the seventeen signatories to the manifesto proclaimed that they value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is to say, the items on the left are valued more than the items on the right. As Scott Ambler elucidated: tools and processes are important, but it is more important to have competent people working together effectively. Good documentation is useful in helping people to understand how the software is built and how to use it, but the main point of development is to create software, not documentation.
A contract is important, but it is no substitute for working with customers to discover what they need. A project plan is important, but it must not be too rigid to accommodate changes in technology or the environment, stakeholders' priorities, and people's understanding of the problem and its solution. Some of the authors formed the Agile Alliance, a non-profit organization that promotes software development according to the manifesto's values and principles. Introducing the manifesto on behalf of the Agile Alliance, Jim Highsmith said: The Agile movement is not anti-methodology; in fact many of us want to restore credibility to the word methodology. We want to restore a balance. We embrace modeling, but not in order to file some diagram in a dusty corporate repository. We embrace documentation, but not hundreds of pages of rarely-used tomes. We plan, but recognize the limits of planning in a turbulent environment. Those who would brand proponents of XP or SCRUM or any of the other Agile Methodologies as "hackers" are ignorant of both the methodologies and the original definition of the term hacker.
The Manifesto for Agile Software Development is based on twelve principles:

Customer satisfaction by early and continuous delivery of valuable software
Welcome changing requirements, even in late development
Deliver working software frequently (weeks rather than months)
Close, daily cooperation between business people and developers
Projects are built around motivated individuals, who should be trusted
Face-to-face conversation is the best form of communication
Working software is the primary measure of progress
Sustainable development, able to maintain a constant pace
Continuous attention to technical excellence and good design
Simplicity—the art of maximizing the amount of work not done—is essential
Best architectures and designs emerge from self-organizing teams
Regularly, the team reflects on how to become more effective, and adjusts accordingly

Most agile development methods break product development work into small increments that minimize the amount of up-front planning and design. Iterations, or sprints, are short time frames that typically last from one to four weeks.
Each iteration involves a cross-functional team working in all functions: planning, analysis, design, coding, unit testing, and acceptance testing.
Shortest job next
Shortest job next, also known as shortest job first or shortest process next, is a scheduling policy that selects for execution the waiting process with the smallest execution time. SJN is a non-preemptive algorithm; shortest remaining time is a preemptive variant of SJN. Shortest job next is advantageous because of its simplicity and because it minimizes the average amount of time each process has to wait until its execution is complete. However, it has the potential for process starvation for processes which will require a long time to complete if short processes are continually added. Highest response ratio next is similar but provides a solution to this problem using a technique called aging. Another disadvantage of using shortest job next is that the total execution time of a job must be known before execution. While it is impossible to predict execution time exactly, several methods can be used to estimate it, such as a weighted average of previous execution times. Shortest job next can be used with interactive processes, which generally follow a pattern of alternating between waiting for a command and executing it.
If the execution burst of a process is regarded as a separate "job", past behaviour can indicate which process to run next, based on an estimate of its running time. Shortest job next is used in specialized environments where accurate estimates of running time are available. Weighted shortest job first (WSJF) is a modification of the concept used in agile development, where jobs get weighted by the cost of delay so that the highest-valued jobs get done sooner. Value-flow rate is an alternate, more intuitive name given to WSJF, which expresses cost of delay and duration using unitless relative "points" rather than actual units of time or money.
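One common estimator of the kind mentioned above is an exponentially weighted average of past bursts. The following C++ sketch (with an invented ready queue and a smoothing factor alpha chosen for illustration) picks the waiting process with the smallest estimate and then refines that estimate from the observed burst:

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

struct Proc {
    int id;
    double estimate;  // predicted length of the next CPU burst
};

// Exponentially weighted average:
// new_estimate = alpha * observed + (1 - alpha) * old_estimate.
double update_estimate(double old_est, double observed, double alpha = 0.5) {
    return alpha * observed + (1.0 - alpha) * old_est;
}

int main() {
    // Invented ready queue with initial burst estimates.
    std::vector<Proc> ready = { {1, 8.0}, {2, 4.0}, {3, 6.0} };

    // Shortest job next: run the process with the smallest estimate.
    auto next = std::min_element(ready.begin(), ready.end(),
        [](const Proc& a, const Proc& b) { return a.estimate < b.estimate; });
    std::cout << "run process " << next->id << "\n";           // process 2

    // After it runs, refine its estimate from the observed burst length.
    next->estimate = update_estimate(next->estimate, /*observed=*/5.0);
    std::cout << "new estimate: " << next->estimate << "\n";   // 4.5
}
```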
Poisson point process
In probability and related fields, a Poisson point process is a type of random mathematical object that consists of points randomly located on a mathematical space. The Poisson point process is often simply called the Poisson process, but it is also called a Poisson random measure, Poisson random point field or Poisson point field. This point process has convenient mathematical properties, which has led to it being frequently defined in Euclidean space and used as a mathematical model for random processes in numerous disciplines such as astronomy, ecology, seismology, economics, image processing and telecommunications. The process is named after the French mathematician Siméon Denis Poisson, despite Poisson never having studied the process. Its name derives from the fact that if a collection of random points in some space forms a Poisson process, then the number of points in a region of finite size is a random variable with a Poisson distribution. The process was discovered independently and in several settings, including experiments on radioactive decay, telephone call arrivals and insurance mathematics.
The Poisson point process is often defined on the real line, where it can be considered as a stochastic process. In this setting, it is used, for example, in queueing theory to model random events distributed in time, such as the arrival of customers at a store, phone calls at an exchange or the occurrence of earthquakes. In the plane, the point process, also known as a spatial Poisson process, can represent the locations of scattered objects such as transmitters in a wireless network, particles colliding into a detector, or trees in a forest. In this setting, the process is often used in mathematical models and in the related fields of spatial point processes, stochastic geometry, spatial statistics and continuum percolation theory. The Poisson point process can be defined on more abstract spaces. Beyond applications, the Poisson point process is an object of mathematical study in its own right. In all settings, the Poisson point process has the property that each point is stochastically independent of all the other points in the process, which is why it is sometimes called a purely or completely random process.
Despite its wide use as a stochastic model of phenomena representable as points, the inherent nature of the process implies that it does not adequately describe phenomena where there is sufficiently strong interaction between the points. This has inspired the proposal of other point processes, some of which are constructed with the Poisson point process, that seek to capture such interaction. The point process depends on a single mathematical object, which, depending on the context, may be a constant, a locally integrable function or, in more general settings, a Radon measure. In the first case, the constant, known as the rate or intensity, is the average density of the points in the Poisson process located in some region of space; the resulting point process is called a homogeneous or stationary Poisson point process. In the second case, the point process is called an inhomogeneous or nonhomogeneous Poisson point process, and the average density of points depends on the location of the underlying space of the Poisson point process.
The word point is often omitted, but there are other Poisson processes of objects which, instead of points, consist of more complicated mathematical objects such as lines and polygons; such processes can be based on the Poisson point process. The Poisson point process is one of the most studied and used point processes, both in the field of probability and in more applied disciplines concerning random phenomena, due to its convenient properties as a mathematical model as well as being mathematically interesting. Depending on the setting, the process has several equivalent definitions as well as definitions of varying generality, owing to its many applications and characterizations. The Poisson point process is defined on some underlying mathematical space, called a carrier space or state space, though the latter term has a different meaning in the context of stochastic processes. The Poisson point process can be defined and used in one dimension, for example on the real line, where it can be interpreted as a counting process or part of a queueing model.
The notation and level of mathematical rigour used to define and study the Poisson point process and point processes in general vary according to the context. Despite all this, the Poisson point process has two key properties. First, a Poisson point process is characterized via the Poisson distribution. The Poisson distribution is the probability distribution of a random variable N such that the probability that N equals n is given by

Pr(N = n) = (Λ^n / n!) e^{-Λ},

where n! denotes n factorial and the parameter Λ determines the shape of the distribution. By definition, a Poisson point process has the property that the number of points in a bounded region of its carrier space is a Poisson-distributed random variable.
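A homogeneous Poisson point process on an interval can be sampled directly from this definition: draw the number of points from a Poisson distribution with mean Λ = λT, then place the points uniformly on the interval (conditional on the count, the points of a homogeneous Poisson process are independent and uniform). A minimal C++ sketch, with an illustrative rate and interval length:

```cpp
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

int main() {
    const double lambda = 2.0;  // intensity: points per unit length (illustrative)
    const double T = 10.0;      // length of the observation interval

    std::mt19937 rng(123);
    // Number of points in [0, T] is Poisson with mean lambda * T.
    std::poisson_distribution<int> count(lambda * T);
    std::uniform_real_distribution<double> position(0.0, T);

    int n = count(rng);
    std::vector<double> points(n);
    for (double& p : points) p = position(rng);  // given N = n, points are i.i.d. uniform
    std::sort(points.begin(), points.end());

    std::cout << n << " points; first few: ";
    for (int i = 0; i < std::min(n, 5); ++i) std::cout << points[i] << " ";
    std::cout << "\n";
}
```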
Round-robin scheduling

Round-robin is one of the algorithms employed by process and network schedulers in computing. As the term is generally used, time slices are assigned to each process in equal portions and in circular order, handling all processes without priority. Round-robin scheduling is simple, easy to implement, and starvation-free. Round-robin scheduling can also be applied to other scheduling problems, such as data packet scheduling in computer networks; it is an operating system concept. The name of the algorithm comes from the round-robin principle known from other fields, where each person takes an equal share of something in turn. To schedule processes, a round-robin scheduler employs time-sharing, giving each job a time slot or quantum and interrupting the job if it is not completed by then. The job is resumed the next time a time slot is assigned to that process. If the process terminates or changes its state to waiting during its attributed time quantum, the scheduler selects the first process in the ready queue to execute.
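A minimal C++ sketch of this quantum-based rotation (the 100 ms quantum and the job lengths are illustrative, echoing the example below):

```cpp
#include <algorithm>
#include <iostream>
#include <queue>

struct Job { int id; int remaining_ms; };

int main() {
    const int quantum = 100;  // time slice in ms (illustrative)
    std::queue<Job> ready;
    ready.push({1, 250});
    ready.push({2, 150});

    int clock = 0;
    while (!ready.empty()) {
        Job j = ready.front(); ready.pop();
        int run = std::min(quantum, j.remaining_ms);  // run for at most one quantum
        clock += run;
        j.remaining_ms -= run;
        if (j.remaining_ms > 0)
            ready.push(j);  // not finished: go to the back of the queue
        else
            std::cout << "job " << j.id << " done at " << clock << " ms\n";
    }
}
```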
In the absence of time-sharing, or if the quanta were large relative to the sizes of the jobs, a process that produced large jobs would be favoured over other processes. The round-robin algorithm is a pre-emptive algorithm, as the scheduler forces the process out of the CPU once the time quantum expires. For example, if the time slot is 100 milliseconds and job1 takes a total time of 250 ms to complete, the round-robin scheduler will suspend the job after 100 ms and give other jobs their time on the CPU. Once the other jobs have had their equal share (100 ms each), job1 will get another allocation of CPU time and the cycle will repeat. This process continues until the job finishes and needs no more time on the CPU.

Job1 = total time to complete: 250 ms (quantum 100 ms)
First allocation = 100 ms
Second allocation = 100 ms
Third allocation = 100 ms, but job1 self-terminates after 50 ms
Total CPU time of job1 = 250 ms

Another approach is to divide all processes into an equal number of timing quanta such that the quantum size is proportional to the size of the process.
Hence, all processes end at the same time. In best-effort packet switching and other statistical multiplexing, round-robin scheduling can be used as an alternative to first-come first-served queuing. A multiplexer, switch, or router that provides round-robin scheduling has a separate queue for every data flow, where a data flow may be identified by its source and destination address. The algorithm lets every active data flow that has data packets in the queue take turns in transferring packets on a shared channel in a periodically repeated order (see the sketch at the end of this section). The scheduling is work-conserving, meaning that if one flow is out of packets, the next data flow will take its place. Hence, the scheduling tries to prevent link resources from going unused. Round-robin scheduling results in max-min fairness if the data packets are equally sized, since the data flow that has waited the longest time is given scheduling priority. It may not be desirable if the size of the data packets varies from one job to another: a user that produces large packets would be favored over other users.
In that case fair queuing would be preferable. If guaranteed or differentiated quality of service is offered, and not only best-effort communication, deficit round-robin scheduling, weighted round-robin scheduling, or weighted fair queuing may be considered. In multiple-access networks, where several terminals are connected to a shared physical medium, round-robin scheduling may be provided by token passing channel access schemes such as Token Ring, or by polling or resource reservation from a central control station. In a centralized wireless packet radio network, where many stations share one frequency channel, a scheduling algorithm in a central base station may reserve time slots for the mobile stations in a round-robin fashion and provide fairness. However, if link adaptation is used, it will take much longer to transmit a certain amount of data to "expensive" users than to others, since the channel conditions differ. It would be more efficient to wait with the transmission until the channel conditions are improved, or at least to give scheduling priority to less expensive users.
Round-robin scheduling does not exploit this. Higher throughput and system spectrum efficiency may be achieved by channel-dependent scheduling, for example a proportionally fair algorithm or maximum throughput scheduling. Note that the latter is characterized by undesirable scheduling starvation. Round-robin scheduling is one of the basic algorithms for operating systems in computers, and can be implemented through a circular queue data structure.
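As referenced earlier in this section, a minimal sketch of work-conserving round-robin scheduling over per-flow packet queues might look like the following; the flows and packet labels are invented for illustration.

```cpp
#include <iostream>
#include <queue>
#include <string>
#include <vector>

// Work-conserving round-robin over per-flow queues: each active flow
// sends one packet per cycle; flows with empty queues are skipped.
int main() {
    // Invented flows and packets (identified here only by a label).
    std::vector<std::queue<std::string>> flows(3);
    flows[0].push("A1"); flows[0].push("A2"); flows[0].push("A3");
    flows[1].push("B1");
    flows[2].push("C1"); flows[2].push("C2");

    bool pending = true;
    while (pending) {
        pending = false;
        for (auto& f : flows) {
            if (f.empty()) continue;        // skip idle flows (work-conserving)
            std::cout << "transmit " << f.front() << "\n";
            f.pop();
            pending = pending || !f.empty();
        }
    }
    // Transmission order: A1 B1 C1 A2 C2 A3 (one packet per active flow per cycle).
}
```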