In mathematics and statistics, a stationary process is a stochastic process whose unconditional joint probability distribution does not change when shifted in time. Consequently, parameters such as mean and variance do not change over time. Since stationarity is an assumption underlying many statistical procedures used in time series analysis, non-stationary data are often transformed to become stationary. The most common cause of violation of stationarity is a trend in the mean, which can be due either to the presence of a unit root or of a deterministic trend. In the former case of a unit root, stochastic shocks have permanent effects, and the process is not mean-reverting. In the latter case of a deterministic trend, the process is called a trend stationary process; stochastic shocks have only transitory effects, after which the variable tends toward a deterministically evolving mean. A trend stationary process is not stationary, but can be transformed into a stationary process by removing the underlying trend, which is a function of time.
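The unit-root case can be illustrated with a short simulation; this is a minimal NumPy sketch (the seed and series length are arbitrary illustrative choices), showing that a random walk is non-stationary while its first differences recover the stationary shocks:

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed for reproducibility

# A random walk X_t = X_{t-1} + e_t has a unit root: the shocks e_t
# accumulate permanently, so the process is not mean-reverting and
# its variance grows with t (non-stationary).
e = rng.normal(size=1000)   # i.i.d. N(0, 1) shocks
x = np.cumsum(e)            # random walk levels

# First differencing removes the unit root: the differenced series
# diff(X)_t = X_t - X_{t-1} = e_t is just the stationary white noise.
dx = np.diff(x)

print(np.allclose(dx, e[1:]))  # True: differencing recovers the shocks
```

A trend stationary series, by contrast, would be detrended (the deterministic trend subtracted) rather than differenced.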
Processes with one or more unit roots can be made stationary through differencing. An important type of non-stationary process that does not exhibit trend-like behavior is a cyclostationary process, a stochastic process that varies cyclically with time.

For many applications strict-sense stationarity is too restrictive, so other forms of stationarity such as wide-sense stationarity or N-th order stationarity are employed; the definitions for different kinds of stationarity are not consistent among different authors.

Formally, let {X_t} be a stochastic process and let F_X(x_{t_1+τ}, …, x_{t_n+τ}) represent the cumulative distribution function of the unconditional joint distribution of {X_t} at times t_1 + τ, …, t_n + τ. Then {X_t} is said to be strictly stationary (or strict-sense stationary) if

F_X(x_{t_1+τ}, …, x_{t_n+τ}) = F_X(x_{t_1}, …, x_{t_n})   for all τ, all t_1, …, t_n, and all n.

Since τ does not affect F_X, F_X is not a function of time.

White noise is the simplest example of a stationary process. An example of a discrete-time stationary process where the sample space is discrete is a Bernoulli scheme. Other examples of discrete-time stationary processes with continuous sample space include some autoregressive and moving-average processes, which are both subsets of the autoregressive moving average model.
Models with a non-trivial autoregressive component may be either stationary or non-stationary, depending on the parameter values; important non-stationary special cases arise where unit roots exist in the model.

Let Y be any scalar random variable and define a time series {X_t} by X_t = Y for all t. Then {X_t} is a stationary time series, for which realisations consist of a series of constant values, with a different constant value for each realisation. A law of large numbers does not apply in this case, as the limiting value of an average from a single realisation takes the random value determined by Y, rather than taking the expected value of Y. As a further example of a stationary process for which any single realisation has an apparently noise-free structure, let Y have a uniform distribution on (0, 2π] and define the time series {X_t} by X_t = cos(t + Y) for t ∈ R. Then {X_t} is strictly stationary.

In the definition of strict stationarity, the distribution of n samples of the stochastic process must be equal to the distribution of the samples shifted in time, for all n. N-th order stationarity is a weaker form of stationarity where this is only required for all n up to a certain order N.
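Both examples can be simulated directly; a minimal sketch, reading the cosine example as X_t = cos(t + Y) and using an arbitrary seed and time grid:

```python
import numpy as np

rng = np.random.default_rng(1)  # arbitrary seed
t = np.linspace(0, 10, 200)     # arbitrary time grid

# Example 1: X_t = Y for all t. Each realisation is a constant path,
# so the time average of a single realisation equals that realisation's
# draw of Y, not E[Y] -- the law of large numbers does not apply here.
y = rng.normal()                # one draw of Y
constant_path = np.full_like(t, y)
print(np.isclose(constant_path.mean(), y))  # True

# Example 2: Y ~ Uniform(0, 2*pi], X_t = cos(t + Y). A single
# realisation is a deterministic cosine with a random phase; the
# ensemble is stationary because a time shift only relabels the phase.
phase = rng.uniform(0, 2 * np.pi)
cosine_path = np.cos(t + phase)
```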
A random process is said to be N-th order stationary if

F_X(x_{t_1+τ}, …, x_{t_n+τ}) = F_X(x_{t_1}, …, x_{t_n})   for all τ, all t_1, …, t_n, and all n ∈ {1, …, N}.

A weaker form of stationarity commonly employed in signal processing is known as weak-sense stationarity, also called wide-sense stationarity (WSS).
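Weak-sense stationarity can be checked empirically by comparing first and second moments across time; a hypothetical sketch using a stationary AR(1) process (the coefficient 0.5 and the sample size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)  # arbitrary seed

# Simulate a stationary AR(1): X_t = phi * X_{t-1} + e_t with |phi| < 1.
phi, n = 0.5, 200_000
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

def autocov(z, k):
    """Sample autocovariance of the series z at lag k."""
    zc = z - z.mean()
    return np.mean(zc[:-k] * zc[k:])

# Wide-sense stationarity requires a constant mean and an autocovariance
# that depends only on the lag k, not on when it is measured. Estimates
# from the two halves of the sample should therefore roughly agree,
# near the theoretical value phi / (1 - phi**2) for unit-variance shocks.
first, second = x[: n // 2], x[n // 2 :]
print(first.mean(), second.mean())            # both near 0
print(autocov(first, 1), autocov(second, 1))  # both near 2/3
```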
In probability theory and related fields, a stochastic or random process is a mathematical object defined as a collection of random variables. The random variables are usually associated with or indexed by a set of numbers viewed as points in time, giving the interpretation of a stochastic process representing numerical values of some system randomly changing over time, such as the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule. Stochastic processes are used as mathematical models of systems and phenomena that appear to vary in a random manner. They have applications in many disciplines, including sciences such as biology, ecology and physics, as well as technology and engineering fields such as image processing, signal processing, information theory, computer science and telecommunications. Furthermore, random changes in financial markets have motivated the extensive use of stochastic processes in finance. Applications and the study of phenomena have in turn inspired the proposal of new stochastic processes.
Examples of such stochastic processes include the Wiener process or Brownian motion process, used by Louis Bachelier to study price changes on the Paris Bourse, and the Poisson process, used by A. K. Erlang to study the number of phone calls occurring in a certain period of time. These two stochastic processes are considered the most important and central in the theory of stochastic processes, and were discovered repeatedly and independently, both before and after Bachelier and Erlang, in different settings and countries. The term random function is also used to refer to a stochastic or random process, because a stochastic process can be interpreted as a random element in a function space. The terms stochastic process and random process are used interchangeably, often with no specific mathematical space for the set that indexes the random variables, but these two terms are most often used when the random variables are indexed by the integers or an interval of the real line. If the random variables are indexed by the Cartesian plane or some higher-dimensional Euclidean space, the collection of random variables is called a random field instead.
The values of a stochastic process are not always numbers and can be vectors or other mathematical objects. Based on their mathematical properties, stochastic processes can be divided into various categories, which include random walks, Markov processes, Lévy processes, Gaussian processes, random fields, renewal processes, and branching processes. The study of stochastic processes uses mathematical knowledge and techniques from probability, linear algebra, set theory and topology, as well as branches of mathematical analysis such as real analysis, measure theory, Fourier analysis and functional analysis. The theory of stochastic processes is considered an important contribution to mathematics, and it continues to be an active topic of research for both theoretical reasons and applications. A stochastic or random process can be defined as a collection of random variables indexed by some mathematical set, meaning that each random variable of the stochastic process is uniquely associated with an element in the set.
The set used to index the random variables is called the index set. Historically, the index set was some subset of the real line, such as the natural numbers, giving the index set the interpretation of time. Each random variable in the collection takes values from the same mathematical space, known as the state space. This state space can be, for example, the integers, the real line, or n-dimensional Euclidean space. An increment is the amount that a stochastic process changes between two index values, often interpreted as two points in time. A stochastic process can have many outcomes, due to its randomness; a single outcome of a stochastic process is called, among other names, a sample function or realization. A stochastic process can be classified in different ways, for example, by its state space, its index set, or the dependence among the random variables. One common way of classification is by the cardinality of the index set and the state space. When the index set is interpreted as time, if it has a finite or countable number of elements, such as a finite set of numbers, the set of integers, or the natural numbers, the stochastic process is said to be in discrete time.
If the index set is some interval of the real line, then time is said to be continuous. The two types of stochastic processes are respectively referred to as discrete-time and continuous-time stochastic processes. Discrete-time stochastic processes are considered easier to study because continuous-time processes require more advanced mathematical techniques and knowledge, due to the index set being uncountable. If the index set is the integers, or some subset of them, the stochastic process can also be called a random sequence. If the state space is the integers or natural numbers, the stochastic process is called a discrete or integer-valued stochastic process. If the state space is the real line, the stochastic process is referred to as a real-valued stochastic process or a process with continuous state space. If the state space is n-dimensional Euclidean space, the stochastic process is called an n-dimensional vector process or n-vector process. The word stochastic in English was originally used as an adjective with the definition "pertaining to conjecturing", stemming from a Greek word meaning "to aim at a mark, guess"; the Oxford English Dictionary gives the year 1662 as its earliest occurrence.
Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. These axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of these outcomes is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes, which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion. Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability theory describing such behaviour are the law of large numbers and the central limit theorem.
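Both results can be demonstrated numerically; a minimal sketch with fair-die rolls (the seed and sample sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)  # arbitrary seed

# Law of large numbers: the sample mean of many fair-die rolls
# approaches the expected value (1 + 2 + ... + 6) / 6 = 3.5.
rolls = rng.integers(1, 7, size=100_000)
print(rolls.mean())  # close to 3.5

# Central limit theorem: means of independent batches of 100 rolls are
# approximately normal around 3.5 with standard deviation sigma / 10,
# where sigma**2 = 35 / 12 is the variance of a single roll.
batch_means = rng.integers(1, 7, size=(10_000, 100)).mean(axis=1)
print(batch_means.std())  # close to sqrt(35 / 12) / 10, about 0.171
```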
As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. The mathematical theory of probability has its roots in attempts to analyze games of chance by Gerolamo Cardano in the sixteenth century, and by Pierre de Fermat and Blaise Pascal in the seventeenth century. Christiaan Huygens published a book on the subject in 1657, and in the 19th century Pierre-Simon Laplace completed what is today considered the classic interpretation. Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of continuous variables into the theory; this culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov.
Kolmogorov combined the notion of sample space, introduced by Richard von Mises, with measure theory and presented his axiom system for probability theory in 1933. This became the undisputed axiomatic basis for modern probability theory. Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately; the measure theory-based treatment of probability covers the discrete, the continuous, a mix of the two, and more. Consider an experiment that can produce a number of outcomes; the set of all outcomes is called the sample space of the experiment. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling a fair die produces one of six possible results. One collection of possible results corresponds to getting an odd number. Thus, the subset {1, 3, 5} is an element of the power set of the sample space of die rolls. These collections are called events. In this case, {1, 3, 5} is the event that the die falls on some odd number.
If the results that occur fall in a given event, that event is said to have occurred. Probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results be assigned a value of one. To qualify as a probability distribution, the assignment of values must satisfy the requirement that for any collection of mutually exclusive events, the probability that any of these events occurs is given by the sum of the probabilities of the individual events. For example, the probability that any one of the mutually exclusive events {1, 6}, {3}, or {2, 4} will occur is 5/6. This is the same as saying that the probability of the event {1, 2, 3, 4, 6} is 5/6; this event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1, 2, 3, 4, 5, 6} has a probability of 1, that is, absolute certainty. When doing calculations using the outcomes of an experiment, it is necessary that all those elementary events have a number assigned to them. This is done using a random variable.
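The die example can be written out explicitly; a minimal sketch in which the probability measure is simply the counting measure over the six equally likely faces:

```python
from fractions import Fraction

# Sample space of a fair die; each of the six outcomes is equally likely.
sample_space = {1, 2, 3, 4, 5, 6}

def P(event):
    """Probability of an event under the equiprobable measure."""
    return Fraction(len(event), len(sample_space))

# Additivity over mutually exclusive events: {1, 6}, {3} and {2, 4}
# partition {1, 2, 3, 4, 6} ("any number except five"), so their
# probabilities sum to the probability of the union.
assert P({1, 6}) + P({3}) + P({2, 4}) == P({1, 2, 3, 4, 6}) == Fraction(5, 6)
assert P({5}) == Fraction(1, 6)   # the complementary single-outcome event
assert P(sample_space) == 1       # absolute certainty
```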
A random variable is a function that assigns to each elementary event in the sample space a real number. This function is usually denoted by a capital letter. In the case of a die, the assignment of a number to certain elementary events can be done using the identity function, but this does not always work. For example, when flipping a coin the two possible outcomes are "heads" and "tails". In this example, the random variable X could assign to the outcome "heads" the number "0" and to the outcome "tails" the number "1".

Discrete probability theory deals with events that occur in countable sample spaces. Examples: throwing dice, experiments with decks of cards, random walk, and tossing coins. Classical definition: initially the probability of an event was defined as the number of cases favorable for the event, over the number of total outcomes possible in an equiprobable sample space: see Classical definition of probability. For example, if the event is "occurrence of an even number when a die is rolled", the probability is given by 3/6 = 1/2, since three of the six equally likely faces show an even number.
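Both ideas, a random variable as a function on the sample space and the classical definition of probability, can be sketched in a few lines (the encodings and names below are illustrative choices):

```python
from fractions import Fraction

# A random variable is a function from elementary events to real
# numbers; for a coin we can encode "heads" as 0 and "tails" as 1,
# here represented as a simple lookup table.
X = {"heads": 0, "tails": 1}
print(X["tails"])  # 1

# Classical definition: favourable cases over total cases in an
# equiprobable sample space, e.g. an even number on a die roll.
die = [1, 2, 3, 4, 5, 6]
even = [k for k in die if k % 2 == 0]
prob_even = Fraction(len(even), len(die))
print(prob_even)  # 1/2
```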